

Key Properties and Applications of the Jacobian Method
Systems of linear equations arise throughout engineering and science, as well as in quantitative problems from industry, statistics, and economics. Since the advent of computers, it has become practical to solve very large sets of linear algebraic equations numerically. The time required to solve such a system is an important factor in whether the results can be applied in practice: if the equations can be solved quickly, productivity increases significantly.
There are two main families of numerical methods for solving linear systems: direct methods and iterative methods. For large systems, particularly those with sparse or structured coefficient matrices, iterative methods are preferable because they are largely unaffected by round-off errors. The best-known classical iterative methods are the Jacobi and Gauss-Seidel methods. Here, we discuss the Jacobi (or Gauss-Jacobi) method.
Jacobi Iteration Method
A large linear system can be written compactly in the form "AX = B", where "A" is a square matrix containing the ordered coefficients of the system of linear equations, "X" is the column of unknown variables, and "B" is the column of constants from the right-hand side of each equation. It is these unknown x-values that we wish to solve for, and the Jacobi method lets us do so iteratively. In this notation, the "a" entries are the elements of the coefficient matrix "A", the "x" entries are the unknowns we are solving for, and the constants of each equation are represented by "b".
Now, AX=B is a system of linear equations, where
A = \[\begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n}\\ a_{21} & a_{22} & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}\], X = \[\begin{bmatrix} x_{1}\\ x_{2} \\ \vdots \\ x_{n} \end{bmatrix}\], B = \[\begin{bmatrix} b_{1}\\ b_{2} \\ \vdots \\ b_{n} \end{bmatrix}\]
Assume, without loss of generality, that none of the diagonal entries of A is zero (otherwise, rearrange the rows so that this holds). Then the matrix A can be split as
A = D + L + U,
where D is the Diagonal matrix of A, U denotes the elements above the diagonal of matrix A, and L denotes the elements below the diagonal of matrix A.
Where D = \[\begin{bmatrix} a_{11} & 0 & \cdots & 0\\0 & a_{22} & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}\] and L + U is \[\begin{bmatrix} 0 & a_{12} & \cdots & a_{1n}\\ a_{21} & 0 & \cdots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}\]
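To make this splitting concrete, here is a minimal NumPy sketch; the function name and the sample matrix are illustrative choices, not part of the method itself:

```python
import numpy as np

def split_matrix(A):
    """Split a square matrix A into D (diagonal), L (strictly lower),
    and U (strictly upper) parts so that A = D + L + U."""
    D = np.diag(np.diag(A))   # keep only the diagonal entries
    L = np.tril(A, k=-1)      # entries strictly below the diagonal
    U = np.triu(A, k=1)       # entries strictly above the diagonal
    return D, L, U

A = np.array([[4.0, -1.0, 1.0],
              [2.0,  6.0, 1.0],
              [1.0,  1.0, 5.0]])
D, L, U = split_matrix(A)
assert np.allclose(A, D + L + U)   # the three parts reassemble A
```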
The solution can then be obtained iteratively using the following relation:
X\[^{(k+1)}\] = D\[^{-1}\] (B - (L + U)X\[^{(k)}\])
where X\[^{(k)}\] and X\[^{(k+1)}\] are the kth and (k+1)th iterates of X. The elements of X\[^{(k+1)}\] can be calculated using the element-based formula given below:
X\[_{i}^{(k+1)}\] = \[\frac{1}{a_{ii}}\left ( b_{i} - \sum_{j\neq i} a_{ij}x_{j}^{(k)} \right )\], i = 1, 2, 3, …, n
Therefore, by substituting the previous iterate of X into the equation above, a new value of X is computed, and this process is repeated until the required precision is achieved. The vital point is that the method must converge in order to produce a solution. A sufficient (but not necessary) condition for convergence is that the matrix A be strictly diagonally dominant. The Jacobi method is also known as the simultaneous displacement method.
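The element-based formula above maps directly to code. Below is a minimal Python/NumPy sketch; the function name, the tolerance, and the iteration cap are illustrative choices rather than part of the method itself:

```python
import numpy as np

def jacobi(A, b, x0, tol=1e-8, max_iter=100):
    """Solve Ax = b with the Jacobi iteration, starting from the guess x0.
    Stops once successive iterates differ by less than tol."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_new = np.empty(n)
        for i in range(n):
            # sum of a_ij * x_j over all j except the diagonal term,
            # using only values from the previous iteration
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Example: the strictly diagonally dominant system used later in this article
A = [[2.0, 1.0], [5.0, 7.0]]
b = [13.0, 11.0]
print(jacobi(A, b, x0=[1.0, 1.0]))   # approaches roughly [8.889, -4.778]
```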
Gauss-Seidel and Jacobi Methods
The difference between the Gauss-Seidel and Jacobi methods is that the Jacobi method uses only the values obtained in the previous iteration, while the Gauss-Seidel method always uses the newest available values during the iterative procedure. The Gauss-Seidel method is commonly referred to as the successive displacement method because the second unknown is computed using the first unknown of the current iteration, the third unknown is computed from the first and second unknowns of the current iteration, and so on.
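To make the contrast concrete, here is a matching Gauss-Seidel sketch (again an illustrative implementation, not a library routine). Unlike the Jacobi sketch above, it overwrites x in place, so components updated earlier in the sweep are reused immediately:

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=100):
    """Gauss-Seidel iteration: each newly computed component is used
    immediately for the remaining components of the same sweep."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.asarray(x0, dtype=float)
    n = len(b)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # x[j] for j < i already holds the new values of this sweep
            s = sum(A[i, j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x
```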
Jacobi Method Example
Question: Using the Jacobi method, solve the following system of linear equations, given in the form AX = B.
A = \[\begin{bmatrix} 2 & 1\\ 5 & 7 \end{bmatrix}\], b = \[\begin{bmatrix} 13 \\ 11 \end{bmatrix}\], x\[^{0}\] = \[\begin{bmatrix} 1 \\ 1 \end{bmatrix}\]
Ans:
We know that X\[^{(k+1)}\] = D\[^{-1}\](B - RX\[^{(k)}\]) is the formula used to estimate X.
For convenience, we rewrite this formula as D\[^{-1}\](B - RX\[^{(k)}\]) = TX\[^{(k)}\] + C,
where T = -D\[^{-1}\]R, C = D\[^{-1}\]B, and R = L + U.
We'll now split the matrix A into a diagonal matrix and a remainder.
D = \[\begin{bmatrix} 2 & 0\\ 0 & 7 \end{bmatrix}\] ⇒ D\[^{-1}\] = \[\begin{bmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{7} \end{bmatrix}\]
The remainder of A, along with its lower and upper parts, is as follows:
R = \[\begin{bmatrix} 0 & 1\\ 5 & 0 \end{bmatrix}\], L = \[\begin{bmatrix} 0 & 0\\ 5 & 0 \end{bmatrix}\], U = \[\begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}\]
Here,
R = Remainder matrix
L = Lower part of R
U = Upper part of R
T = -D\[^{-1}\](L + U) = D\[^{-1}\][(-L) + (-U)]
T = \[\begin{bmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{7} \end{bmatrix}\] {\[\begin{bmatrix} 0 & 0\\ -5 & 0 \end{bmatrix}\] + \[\begin{bmatrix} 0 & -1\\ 0 & 0 \end{bmatrix}\]} = \[\begin{bmatrix} 0 & - \frac{1}{2}\\ - \frac{5}{7} & 0 \end{bmatrix}\]
C = \[\begin{bmatrix} \frac{1}{2} & 0\\ 0 & \frac{1}{7} \end{bmatrix}\] \[\begin{bmatrix} 13 \\ 11 \end{bmatrix}\] = \[\begin{bmatrix} \frac{13}{2} \\ \frac{11}{7} \end{bmatrix}\]
x\[^{1}\] = \[\begin{bmatrix} 0 & - \frac{1}{2}\\ - \frac{5}{7} & 0 \end{bmatrix}\] \[\begin{bmatrix} 1 \\ 1 \end{bmatrix}\] + \[\begin{bmatrix} \frac{13}{2} \\ \frac{11}{7} \end{bmatrix}\] = \[\begin{bmatrix} \frac{12}{2} \\ \frac{6}{7} \end{bmatrix}\] ≈ \[\begin{bmatrix} 6 \\ 0.857 \end{bmatrix}\]
x\[^{2}\] = \[\begin{bmatrix} 0 & - \frac{1}{2}\\ - \frac{5}{7} & 0 \end{bmatrix}\] \[\begin{bmatrix} 6 \\ \frac{6}{7} \end{bmatrix}\] + \[\begin{bmatrix} \frac{13}{2} \\ \frac{11}{7} \end{bmatrix}\] = \[\begin{bmatrix} \frac{85}{14} \\ -\frac{19}{7} \end{bmatrix}\] ≈ \[\begin{bmatrix} 6.071 \\ -2.714 \end{bmatrix}\]
We’ll repeat the process until it converges.
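For readers who want to verify the arithmetic, the two iterations above can be reproduced with a few lines of NumPy. This is only a checking script; the variable names simply mirror the notation of the example:

```python
import numpy as np

D_inv = np.array([[1/2, 0.0], [0.0, 1/7]])
L_plus_U = np.array([[0.0, 1.0], [5.0, 0.0]])
B = np.array([13.0, 11.0])

T = -D_inv @ L_plus_U          # iteration matrix
C = D_inv @ B                  # constant vector

x = np.array([1.0, 1.0])       # initial guess x^(0)
for k in range(1, 3):
    x = T @ x + C
    print(f"x^({k}) =", x)     # roughly [6.0, 0.857], then [6.071, -2.714]
```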
What is the “T” Matrix?
In the Jacobi method example above, we used the "T" matrix; let's now look at what it represents. While applying the Jacobi iteration is very easy, the method does not always converge to the solution. As a result, a convergence test should be carried out before the Jacobi iteration is applied. This convergence test depends entirely on a special matrix, our "T" matrix. We rewrite the system of equations so that the whole system takes the form X\[^{(k+1)}\] = TX\[^{(k)}\] + C. In simple words, the expression on the RHS of the equation is split into a matrix of coefficients (T) and a matrix of constants (C).
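A standard way to state this test precisely is through the spectral radius of T: the iteration X\[^{(k+1)}\] = TX\[^{(k)}\] + C converges for every starting guess exactly when the largest absolute eigenvalue of T is less than 1. The sketch below assumes T has already been formed as -D\[^{-1}\](L + U) and uses the T matrix from the worked example:

```python
import numpy as np

def converges(T):
    """Return True if the iteration x <- T x + C is guaranteed to converge
    for every starting vector, i.e. the spectral radius of T is below 1."""
    spectral_radius = max(abs(np.linalg.eigvals(T)))
    return spectral_radius < 1

T = np.array([[0.0, -0.5], [-5/7, 0.0]])   # T matrix from the example above
print(converges(T))                        # True: spectral radius is about 0.598
```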
Conclusion
The Jacobi method, one of the most basic methods for finding solutions of linear systems of equations, has been studied here. The method has significant disadvantages, such as low numerical stability and inaccurate solutions in many instances, particularly when the diagonal entries are small relative to the off-diagonal entries. Conversely, across a variety of problems the method yields very accurate results when the diagonal entries dominate. The method also has applications in engineering, since it is one of the efficient approaches for solving systems of linear equations when an approximate solution is already known; starting from such an approximation significantly reduces the number of computations required.
FAQs on Jacobian Method Explained: A Student-Friendly Approach
1. What is the Jacobi method in the context of linear algebra?
The Jacobi method, also known as the Gauss-Jacobi method, is an iterative algorithm used for finding the numerical solution of a system of linear equations. It is particularly effective for systems where the coefficient matrix is diagonally dominant, meaning the absolute value of the diagonal element in each row is larger than the sum of the absolute values of all other elements in that row.
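For reference, the row condition described above can be checked numerically with a short sketch (the helper name is illustrative):

```python
import numpy as np

def is_strictly_diagonally_dominant(A):
    """Check whether each diagonal entry exceeds, in absolute value,
    the sum of the absolute values of the other entries in its row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sums = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sums))

print(is_strictly_diagonally_dominant([[2, 1], [5, 7]]))   # True
print(is_strictly_diagonally_dominant([[1, 3], [4, 2]]))   # False
```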
2. What is the primary application or use of the Jacobi method?
The primary application of the Jacobi method is to solve large systems of linear equations that often arise in numerical analysis, such as in the numerical solution of partial differential equations. Because it is an iterative method, it can be simpler to implement on computers for very large, sparse matrices compared to direct methods like Gaussian elimination.
3. What is the basic principle behind the Jacobi iterative formula?
The principle of the Jacobi method is to start with an initial guess for each variable. In each subsequent iteration, it calculates a new, more refined value for each variable using the values from the previous iteration. For a system Ax = b, each variable x_i is solved for, assuming all other x_j values are from the last iteration. This process is repeated until the values converge to a stable solution.
4. When is a solution guaranteed using the Jacobi method?
A solution is guaranteed to be found (i.e., the iterations will converge) if the system of linear equations is strictly diagonally dominant. While the method can sometimes converge for non-diagonally dominant systems, diagonal dominance is the most common condition that ensures convergence to the unique solution, regardless of the initial guess.
5. How does the Jacobi method differ from the Gauss-Seidel method?
The key difference lies in how updated values are used.
- In the Jacobi method, the new values for all variables in an iteration are calculated based solely on the values from the previous complete iteration.
- In the Gauss-Seidel method, as soon as a new value for a variable is computed within an iteration, it is immediately used to calculate the subsequent variables in that same iteration. This often leads to faster convergence for the Gauss-Seidel method.
6. Is the iterative Jacobi method related to the Jacobian matrix from multivariable calculus?
No, they are two distinct concepts in mathematics that share a name. The Jacobi iterative method is a numerical technique for solving systems of linear equations. In contrast, the Jacobian matrix (and its determinant) is used in multivariable calculus for coordinate transformations and represents the matrix of all first-order partial derivatives of a vector-valued function.
7. Why is the Jacobi method classified as an 'iterative' rather than a 'direct' method?
It is classified as an iterative method because it finds an approximate solution by starting with a guess and repeating a process to get progressively closer to the exact solution. This contrasts with direct methods, like Gaussian elimination, which aim to find the exact solution in a finite, predetermined number of steps without needing an initial guess or convergence checks.
8. What is a significant limitation of the Jacobi method?
A significant limitation of the Jacobi method is its rate of convergence, which can be very slow compared to other iterative methods like the Gauss-Seidel method or the Successive Over-Relaxation (SOR) method. Furthermore, its convergence is not guaranteed for all systems of linear equations, being most reliable only for those that are diagonally dominant.