This project aims to create a console application that implements various numerical methods to solve linear equations, non-linear equations, and differential equations, along with matrix inversion.
The application includes the following numerical methods:
- Jacobi iterative method
- Gauss-Seidel iterative method
- Gauss elimination
- Gauss-Jordan elimination
- LU factorization
- Bisection method
- False position method
- Secant method
- Newton-Raphson method
- Runge-Kutta method
For the solution of linear equations, the application is designed to handle systems of at least 5 equations.
- Clone this repository to your local machine using:
  `git clone https://github.com/jim2107054/-Numerical-Methods.git`
- Navigate to the project directory.
- Compile and run the application using your preferred programming environment.
The Jacobi Iterative Method is an algorithm for determining the solutions of a diagonally dominant system of linear equations.
- Start with an initial guess for the solution vector \( x^{(0)} \).
- Set the tolerance level for convergence.
- For each equation, calculate the new value for each variable:
\[ x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right) \]
where \( k \) is the iteration index.
- Repeat the above step for each variable until the changes between iterations are less than the specified tolerance.
The method converges if the matrix \( A \) is strictly diagonally dominant or symmetric positive definite.
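The iteration above can be sketched in C++ as follows. This is a minimal illustration, not the project's actual implementation: it assumes a strictly diagonally dominant matrix with a nonzero diagonal, and the tolerance and iteration cap are placeholder defaults.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Jacobi iteration: each new estimate uses only values from the previous iterate.
vector<double> jacobi(const vector<vector<double>>& A, const vector<double>& b,
                      double tol = 1e-10, int maxIter = 1000) {
    size_t n = A.size();
    vector<double> x(n, 0.0), xNew(n, 0.0);   // start from the zero vector
    for (int iter = 0; iter < maxIter; ++iter) {
        for (size_t i = 0; i < n; ++i) {
            double s = b[i];
            for (size_t j = 0; j < n; ++j)
                if (j != i) s -= A[i][j] * x[j];   // exclude the diagonal term
            xNew[i] = s / A[i][i];
        }
        double change = 0.0;
        for (size_t i = 0; i < n; ++i)
            change = max(change, fabs(xNew[i] - x[i]));
        x = xNew;
        if (change < tol) break;   // changes between iterations below tolerance
    }
    return x;
}
```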
The Gauss-Seidel Method is an improvement over the Jacobi method that uses the most recently updated values.
- Start with an initial guess for the solution vector \( x^{(0)} \).
- Set the tolerance level for convergence.
- For each equation, update the variable immediately after calculating its new value:
\[ x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right) \]
- Continue iterating until the solution converges.
Similar to the Jacobi method, convergence is guaranteed under certain conditions, such as when the matrix is strictly diagonally dominant.
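A minimal Gauss-Seidel sketch, differing from Jacobi only in that each variable is overwritten in place so later updates in the same sweep see it immediately. The tolerance and iteration cap are illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>
using namespace std;

// Gauss-Seidel: x[j] for j < i already holds this sweep's updated value.
vector<double> gaussSeidel(const vector<vector<double>>& A, const vector<double>& b,
                           double tol = 1e-10, int maxIter = 1000) {
    size_t n = A.size();
    vector<double> x(n, 0.0);
    for (int iter = 0; iter < maxIter; ++iter) {
        double change = 0.0;
        for (size_t i = 0; i < n; ++i) {
            double s = b[i];
            for (size_t j = 0; j < n; ++j)
                if (j != i) s -= A[i][j] * x[j];   // mixes new (j<i) and old (j>i) values
            double xi = s / A[i][i];
            change = max(change, fabs(xi - x[i]));
            x[i] = xi;                             // update immediately
        }
        if (change < tol) break;
    }
    return x;
}
```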
Gauss Elimination transforms the system of equations into an upper triangular matrix form.
- Form the augmented matrix \( [A \mid b] \).
- Apply row operations to eliminate variables from the equations below.
- Transform the matrix into upper triangular form.
Once in upper triangular form, use back substitution to find the solution:
\[ x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij} x_j}{a_{aii}} \]
Compute \( x_n \) first (its sum is empty), then repeat for \( x_{n-1}, x_{n-2}, \ldots, x_1 \).
Gauss-Jordan Elimination further reduces the upper triangular matrix to reduced row echelon form (RREF).
- Start with the augmented matrix \( [A \mid b] \).
- Use row operations to achieve RREF:
- Normalize the leading coefficient to 1.
- Eliminate all other entries in the leading coefficient's column.
The resulting matrix directly gives the solutions for each variable, as every leading variable corresponds to a column in the matrix.
LU Factorization decomposes the matrix \( A \) into a lower triangular matrix \( L \) and an upper triangular matrix \( U \).
- Decompose \( A \) such that \( A = LU \) using Gaussian elimination without row swaps.
- Solve \( Ly = b \) using forward substitution to find \( y \).
- Solve \( Ux = y \) using back substitution to find the solution vector \( x \).
- Initialize all variable estimates in the array x to zero or some initial guess.
- Iterate through the equations, updating each variable in the array x_new based on the formula:
  `x_new[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i];`
  where b is the constants vector and A is the coefficient matrix.
- Check for convergence by comparing the current estimates in x_new with the previous ones in x. If the change in estimates is less than a predefined tolerance, the method is considered to have converged.
- Start with initial guesses for the variables stored in the array x.
- For each variable, update its value in x using the latest available estimates. For example, the update for the ith variable is performed using:
  `x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i)) - sum(A[i][j] * x_old[j] for j in range(i + 1, n))) / A[i][i];`
- Monitor convergence based on the changes in variable estimates by comparing the previous and current values of x.
- Perform row operations on the augmented matrix A to eliminate variables systematically. This is done by modifying rows to create zeros below the diagonal, leading to an upper triangular form:
- Once in upper triangular form, use back substitution to solve for the variables. Starting from the last equation, the solution for each variable x[i] can be calculated using the formula:
x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i];
- Similar to Gauss elimination, use row operations on the matrix A to eliminate variables.
- Continue manipulating the matrix to ensure that each leading coefficient in the rows is 1 and that all other entries in the column are zero:
- This results in a direct solution for each variable stored in x without the need for back substitution. The final equations will have the form x[i] = b[i] for all i.
The luFactorization function solves a system of linear equations using LU factorization. It takes three parameters:
- `vector<vector<double>> &A`: A square matrix representing the coefficients of the linear equations.
- `vector<double> &b`: A vector representing the right-hand side constants of the equations.
- `vector<double> &x`: A vector where the solution will be stored.
- Start with a square matrix A.
- Initialize matrices L and U:
- L starts as an identity matrix:
  ```
  L = | 1  0  0 |
      | 0  1  0 |
      | 0  0  1 |
  ```
- U initially equals A.
- For each pivot element in column j, eliminate all entries below it:
For each row i > j:
U[i][k] = U[i][k] - (U[i][j] / U[j][j]) * U[j][k] (for k = j to n)
L[i][j] = U[i][j] / U[j][j]
- Use forward substitution to solve Ly = b.
- Use back substitution to solve Ux = y.
The Bisection Method is a root-finding technique that systematically narrows down the interval within which a root exists. It works as follows:
- Initialization: Choose two points, a and b, such that f(a) and f(b) have opposite signs. This indicates that there is at least one root in the interval [a, b] due to the Intermediate Value Theorem.
- Iteration:
- Calculate the midpoint c = (a + b) / 2.
- Evaluate the function at this midpoint, f(c).
- Determine the next interval:
- If f(c) is close to zero, then c is the root.
- If f(a) and f(c) have opposite signs, set b = c.
- Otherwise, set a = c.
- Convergence: The method guarantees convergence, but it can be slow, especially if the root is located near one end of the interval.
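The bisection loop above can be sketched as follows; `std::function` and the default tolerance are illustrative choices, and the caller is responsible for supplying a bracketing interval.

```cpp
#include <cmath>
#include <functional>
using namespace std;

// Bisection: halve the bracketing interval [a, b] until it is within tol.
// Precondition (unchecked here): f(a) and f(b) have opposite signs.
double bisection(function<double(double)> f, double a, double b,
                 double tol = 1e-10) {
    while (b - a > tol) {
        double c = (a + b) / 2.0;                 // midpoint
        if (fabs(f(c)) < tol) return c;           // f(c) close enough to zero
        if (f(a) * f(c) < 0) b = c;               // root lies in [a, c]
        else a = c;                               // root lies in [c, b]
    }
    return (a + b) / 2.0;
}
```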
The False Position Method (or Regula Falsi) is an improvement over the Bisection Method that uses a linear interpolation approach:
- Initialization: Similar to the Bisection Method, start with two points a and b with f(a) and f(b) of opposite signs.
- Iteration:
- Instead of taking the midpoint, compute the intersection of the line connecting (a, f(a)) and (b, f(b)) with the x-axis to find a new point c.
- Evaluate f(c) and determine the next interval:
- If f(c) is close to zero, then c is the root.
- If f(a) and f(c) have opposite signs, set b = c.
- Otherwise, set a = c.
- Convergence: This method often converges faster than the Bisection Method, especially when the function is approximately linear in the vicinity of the root.
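A minimal false-position sketch under the same assumptions as the bisection example (bracketing interval supplied by the caller; tolerance and iteration cap are placeholders).

```cpp
#include <cmath>
#include <functional>
using namespace std;

// Regula falsi: replace the midpoint with the x-intercept of the chord
// through (a, f(a)) and (b, f(b)).
double falsePosition(function<double(double)> f, double a, double b,
                     double tol = 1e-10, int maxIter = 1000) {
    double c = a;
    for (int i = 0; i < maxIter; ++i) {
        c = b - f(b) * (a - b) / (f(a) - f(b));   // secant-line x-intercept
        if (fabs(f(c)) < tol) break;              // c is close enough to a root
        if (f(a) * f(c) < 0) b = c;               // keep the sign change in [a, c]
        else a = c;
    }
    return c;
}
```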
The Secant Method provides a way to find roots without requiring a bracketing interval:
- Initialization: Choose two initial approximations x0 and x1 for the root.
- Iteration:
- Compute the next approximation using the secant line formed by the points (x0, f(x0)) and (x1, f(x1)).
- Update the approximations: `x_{n+1} = x_n - f(x_n) * (x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))`.
- Convergence: The method can converge rapidly, but it may fail if the function behaves poorly or if the initial guesses are not close enough to the actual root.
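The secant update can be sketched as below; note there is no bracketing guarantee, so a divisor of zero (`f(x1) == f(x0)`) or bad initial guesses can make it fail, as the text warns.

```cpp
#include <cmath>
#include <functional>
using namespace std;

// Secant method: each step uses the line through the last two iterates.
double secant(function<double(double)> f, double x0, double x1,
              double tol = 1e-10, int maxIter = 100) {
    for (int i = 0; i < maxIter; ++i) {
        double x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));
        x0 = x1;                                  // shift the pair forward
        x1 = x2;
        if (fabs(f(x1)) < tol) break;
    }
    return x1;
}
```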
The Newton-Raphson Method is a powerful technique based on the derivative of the function:
- Initialization: Start with an initial guess x0 for the root.
- Iteration:
- Compute the next approximation using: `x_{n+1} = x_n - f(x_n) / f'(x_n)`, where `f'(x_n)` is the derivative of f at `x_n`.
- Convergence: The method generally exhibits quadratic convergence, meaning it can quickly approach the root if the initial guess is close enough and the function is well-behaved.
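The Newton-Raphson step can be sketched as follows, with the derivative passed in as a second callable; tolerance and iteration cap are illustrative.

```cpp
#include <cmath>
#include <functional>
using namespace std;

// Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n).
// Assumes fprime(x) stays away from zero near the root.
double newtonRaphson(function<double(double)> f, function<double(double)> fprime,
                     double x0, double tol = 1e-10, int maxIter = 100) {
    for (int i = 0; i < maxIter; ++i) {
        double x1 = x0 - f(x0) / fprime(x0);      // tangent-line step
        if (fabs(f(x1)) < tol) return x1;
        x0 = x1;
    }
    return x0;
}
```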
- Initialization:
  - Requires two initial guesses, `a` and `b`, such that `f(a)` and `f(b)` have opposite signs.
  - Checks this condition at the start using an `if` statement to confirm a root exists between them.
- Iteration:
  - Calculate the midpoint `mid` of the interval [a, b] using `mid = (a + b) / 2`.
  - Evaluate the function value at the midpoint with `f(mid)`.
  - Update `a` or `b` based on the sign of `f(mid)`:
    - If `f(mid)` is close to zero (within tolerance), return `mid`.
    - If `f(a)` and `f(mid)` have opposite signs, update `b` to `mid`.
    - Otherwise, update `a` to `mid`.
- Termination:
  - Continue until the absolute difference between `a` and `b` is less than the specified tolerance level, ensuring convergence.
- Initialization:
  - Begins with two initial points `a` and `b` that bracket a root, ensuring `f(a)` and `f(b)` have opposite signs.
- Iteration:
  - Computes the x-intercept `c` of the line connecting (a, f(a)) and (b, f(b)): `c = b - f(b) * (a - b) / (f(a) - f(b));`
  - Evaluate `f(c)`.
- Interval Update:
  - Depending on the sign of `f(c)`, update `a` or `b`:
    - If `f(c)` is close to zero, consider `c` as the root.
    - If `f(a)` and `f(c)` have opposite signs, update `b` to `c`.
    - Otherwise, update `a` to `c`.
- Termination:
  - Repeat until the difference between `a` and `b` is less than the specified tolerance level.
- Initialization:
  - Starts with two initial guesses, `x0` and `x1`, expected to bracket the root.
- Iteration:
  - Calculate the next approximation `x2`: `x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));`
- Updating Values:
  - Update `x0` and `x1` to `x1` and `x2`, respectively.
- Termination:
  - Continue until the absolute value of `f(x2)` is less than the specified tolerance or the maximum number of iterations is reached.
- Initialization:
  - Begins with a single initial guess `x0`.
- Iteration:
  - Calculate the next approximation: `x1 = x0 - f(x0) / f_prime(x0);`
- Updating Values:
  - Update `x0` to `x1`.
- Termination:
  - Repeat until `f(x1)` is sufficiently close to zero or the maximum number of iterations is reached.
- Scanning Range:
- Scans a specified range to find two consecutive points where function values have opposite signs.
- Interval Return:
  - Returns the interval `[x_i, x_{i+1}]` for use in root-finding methods.
- Error Handling:
- If no interval is found, throw an exception indicating no root exists.
- Purpose: Evaluates a polynomial at a given `x` based on its coefficients.
- Parameters:
  - `const vector<double>& coeffs`: Coefficients of the polynomial.
  - `double x`: Value at which the polynomial is evaluated.
- Implementation:
  - Initialize `result` to zero.
  - Iterate over coefficients to calculate terms:
    `for (size_t i = 0; i < coeffs.size(); ++i) { result += coeffs[i] * pow(x, coeffs.size() - i - 1); }`
- Return Value: Returns the accumulated `result`.
- Purpose: Computes the derivative of a polynomial at a given `x`.
- Parameters:
  - `const vector<double>& coeffs`: Coefficients of the polynomial.
  - `double x`: Point at which the derivative is evaluated.
- Implementation:
  - Initialize `result` to zero.
  - Iterate through coefficients (excluding the constant term):
    `for (size_t i = 0; i < coeffs.size() - 1; ++i) { result += coeffs[i] * (coeffs.size() - i - 1) * pow(x, coeffs.size() - i - 2); }`
- Return Value: Returns the accumulated `result`.
The Runge-Kutta Method is a numerical technique used to solve ordinary differential equations (ODEs). In this implementation, the method is specifically designed to solve first-order ODEs of the form dy/dx = f(x, y). The steps involved are as follows:
- Derivative Function: The function `dydx` defines the derivative f(x, y), which in this case is `2*x + 1`.
- Initialization: The method starts with initial conditions `(x0, y0)` and a target `x` value, along with a step size `h` to determine how far to progress in each iteration.
- Iteration: The method calculates the solution by performing several iterations:
  - Calculate four slopes (`k1`, `k2`, `k3`, `k4`) based on the derivative function at various points in the interval.
  - Update the value of `y` using a weighted average of these slopes, where the weights are derived from the Runge-Kutta formulation.
  - Increment `x0` by the step size `h` for the next iteration.
- Output: After completing the iterations, the method outputs the estimated value of `y` at the target `x`.
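The classical fourth-order scheme described above can be sketched as follows; the function name and parameters are illustrative, and the step count is assumed to divide the interval evenly.

```cpp
#include <cmath>
#include <functional>
using namespace std;

// Classical RK4 for dy/dx = f(x, y): advance from (x0, y0) to xTarget in steps of h.
double rungeKutta4(function<double(double, double)> f,
                   double x0, double y0, double xTarget, double h) {
    int steps = (int)round((xTarget - x0) / h);
    double x = x0, y = y0;
    for (int i = 0; i < steps; ++i) {
        double k1 = h * f(x, y);                       // slope at the start
        double k2 = h * f(x + h / 2, y + k1 / 2);      // slope at the midpoint
        double k3 = h * f(x + h / 2, y + k2 / 2);      // refined midpoint slope
        double k4 = h * f(x + h, y + k3);              // slope at the end
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6.0;        // weighted average of the slopes
        x += h;
    }
    return y;
}
```

With the document's example `f(x, y) = 2*x + 1` and `y(0) = 0`, the exact solution is `y = x^2 + x`, which RK4 reproduces to machine precision.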
The Matrix Inversion method computes the inverse of a square matrix using Gaussian elimination. The key steps are as follows:
- Initialization: Create an identity matrix of the same dimensions as the input matrix `A`, which will be transformed into the inverse.
- Pivoting: For each row, identify the pivot element (the diagonal element). If the pivot is zero, the matrix is singular and inversion is not possible.
- Normalization: Normalize the current row by dividing all its elements by the pivot to make the pivot element equal to 1.
- Elimination: Use the normalized row to eliminate corresponding elements in all other rows, effectively transforming the input matrix into the identity matrix while simultaneously applying the same operations to the identity matrix, thereby obtaining the inverse.
- Completion: After processing all rows, the identity matrix will represent the inverse of the original matrix if the process is successful.
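The inversion steps can be sketched as a Gauss-Jordan reduction of `[A | I]` to `[I | A^{-1}]`. This simplified version uses only the diagonal as the pivot (no row swaps) and signals a singular matrix by returning an empty result, a choice made here for illustration.

```cpp
#include <cmath>
#include <vector>
using namespace std;

// Matrix inversion via Gauss-Jordan: mirror every row operation on I.
vector<vector<double>> invert(vector<vector<double>> A) {
    size_t n = A.size();
    vector<vector<double>> I(n, vector<double>(n, 0.0));
    for (size_t i = 0; i < n; ++i) I[i][i] = 1.0;       // identity to transform
    for (size_t k = 0; k < n; ++k) {
        double pivot = A[k][k];
        if (fabs(pivot) < 1e-12) return {};             // singular: no inverse
        for (size_t j = 0; j < n; ++j) {                // normalize pivot row
            A[k][j] /= pivot;
            I[k][j] /= pivot;
        }
        for (size_t i = 0; i < n; ++i) {                // eliminate other rows
            if (i == k) continue;
            double factor = A[i][k];
            for (size_t j = 0; j < n; ++j) {
                A[i][j] -= factor * A[k][j];
                I[i][j] -= factor * I[k][j];
            }
        }
    }
    return I;                                           // A reduced to identity
}
```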
A video demonstrating the application can be viewed here.