
Console Application Development Using Numerical Methods

Objective

This project aims to create a console application that implements various numerical methods to solve linear equations, non-linear equations, and differential equations, along with matrix inversion.

Application Structure

The application includes the following numerical methods:

1. Solution of Linear Equations

  • Jacobi iterative method
  • Gauss-Seidel iterative method
  • Gauss elimination
  • Gauss-Jordan elimination
  • LU factorization

2. Solution of Non-linear Equations

  • Bisection method
  • False position method
  • Secant method
  • Newton-Raphson method

3. Solution of Differential Equations

  • Runge-Kutta method

4. Matrix Inversion

For the linear-equation solvers, the application is designed to handle systems of at least 5 equations.

How to Run the Application

  1. Clone this repository to your local machine using:
    git clone https://github.com/jim2107054/-Numerical-Methods.git
  2. Navigate to the project directory.
  3. Compile and run the application using your preferred programming environment.

Solution of Linear Equations

1. Jacobi Iterative Method

The Jacobi Iterative Method is an algorithm for determining the solutions of a diagonally dominant system of linear equations.

Initialization

  • Start with an initial guess for the solution vector \( x^{(0)} \).
  • Set the tolerance level for convergence.

Iteration

  1. For each equation, calculate the new value for each variable:

    [ x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j \neq i} a_{ij} x_j^{(k)} \right) ] where ( k ) is the iteration index.

  2. Repeat the above step for each variable until the changes between iterations are less than the specified tolerance.

Convergence

The method converges if the matrix ( A ) is strictly diagonally dominant or symmetric positive definite.


2. Gauss-Seidel Iterative Method

The Gauss-Seidel Method is an improvement over the Jacobi method that uses the most recently updated values.

Initialization

  • Start with an initial guess for the solution vector ( x^{(0)} ).
  • Set the tolerance level for convergence.

Iteration

  1. For each equation, update the variable immediately after calculating its new value:

    [ x_i^{(k+1)} = \frac{1}{a_{ii}} \left( b_i - \sum_{j < i} a_{ij} x_j^{(k+1)} - \sum_{j > i} a_{ij} x_j^{(k)} \right) ]

  2. Continue iterating until the solution converges.

Convergence

Similar to the Jacobi method, convergence is guaranteed under certain conditions, such as when the matrix is strictly diagonally dominant.


3. Gauss Elimination

Gauss Elimination transforms the system of equations into an upper triangular matrix form.

Steps

  1. Form the augmented matrix ([A | b]).
  2. Apply row operations to eliminate variables from the equations below.
  3. Transform the matrix into upper triangular form.

Back Substitution

Once in upper triangular form, use back substitution to find the solution:

[ x_i = \frac{b_i - \sum_{j=i+1}^{n} a_{ij} x_j}{a_{ii}} ] starting from ( x_n = b_n / a_{nn} ) and working upward for ( i = n-1, n-2, \ldots, 1 ).


4. Gauss-Jordan Elimination

Gauss-Jordan Elimination further reduces the upper triangular matrix to reduced row echelon form (RREF).

Steps

  1. Start with the augmented matrix ([A | b]).
  2. Use row operations to achieve RREF:
    • Normalize the leading coefficient to 1.
    • Eliminate all other entries in the leading coefficient's column.

Solution

The resulting matrix directly gives the solution: each row reduces to an equation of the form ( x_i = ) value, so no back substitution is needed.


5. LU Factorization

LU Factorization decomposes the matrix ( A ) into a lower triangular matrix ( L ) and an upper triangular matrix ( U ).

Steps

  1. Decompose ( A ) such that ( A = LU ) using Gaussian elimination without row swaps.
  2. Solve ( Ly = b ) using forward substitution to find ( y ).
  3. Solve ( Ux = y ) using back substitution to find the solution vector ( x ).

Implementation Methods for Solving Systems of Linear Equations

1. Jacobi Method

  • Initialize all variable estimates in the array x to zero or some initial guess.
  • Iterate through the equations, updating each variable in the array x_new based on the formula:
    x_new[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]; where b is the constants vector and A is the coefficient matrix (note the diagonal term j == i is excluded from the sum).
  • Check for convergence by comparing the current estimates in x_new with the previous ones in x. If the change in estimates is less than a predefined tolerance, the method is considered to have converged.
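The three bullets above can be sketched in C++ as follows. This is an illustrative sketch, not the repository's code; the function name, tolerance, and iteration cap are assumptions, and the matrix is assumed strictly diagonally dominant so the iteration converges.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Jacobi iteration for Ax = b. Every update in a sweep uses only the
// values from the previous sweep (x), writing results into x_new.
std::vector<double> jacobi(const std::vector<std::vector<double>>& A,
                           const std::vector<double>& b,
                           double tol = 1e-9, int maxIter = 1000) {
    const int n = static_cast<int>(A.size());
    std::vector<double> x(n, 0.0), x_new(n, 0.0);
    for (int iter = 0; iter < maxIter; ++iter) {
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int j = 0; j < n; ++j)
                if (j != i) s += A[i][j] * x[j];   // previous-sweep values only
            x_new[i] = (b[i] - s) / A[i][i];
        }
        double diff = 0.0;                          // largest change this sweep
        for (int i = 0; i < n; ++i)
            diff = std::max(diff, std::fabs(x_new[i] - x[i]));
        x = x_new;
        if (diff < tol) break;                      // converged
    }
    return x;
}
```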

2. Gauss-Seidel Method

  • Start with initial guesses for the variables stored in the array x.
  • For each variable, update its value in x using the latest available estimates. For example, the update for the ith variable is performed using:
    x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i)) - sum(A[i][j] * x_old[j] for j in range(i + 1, n))) / A[i][i];
  • Monitor convergence based on the changes in variable estimates by comparing the previous and current values of x.
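A corresponding sketch of the Gauss-Seidel sweep (again illustrative, with assumed names and defaults): the only change from Jacobi is that each `x[i]` is overwritten in place, so later rows in the same sweep already see the updated values.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Gauss-Seidel iteration for Ax = b; assumes strict diagonal dominance.
std::vector<double> gaussSeidel(const std::vector<std::vector<double>>& A,
                                const std::vector<double>& b,
                                double tol = 1e-9, int maxIter = 1000) {
    const int n = static_cast<int>(A.size());
    std::vector<double> x(n, 0.0);
    for (int iter = 0; iter < maxIter; ++iter) {
        double diff = 0.0;
        for (int i = 0; i < n; ++i) {
            double s = 0.0;
            for (int j = 0; j < n; ++j)
                if (j != i) s += A[i][j] * x[j];  // new values for j < i, old for j > i
            double xi = (b[i] - s) / A[i][i];
            diff = std::max(diff, std::fabs(xi - x[i]));
            x[i] = xi;                             // update immediately
        }
        if (diff < tol) break;                     // converged
    }
    return x;
}
```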

3. Gauss Elimination

  • Perform row operations on the augmented matrix A to eliminate variables systematically. This is done by modifying rows to create zeros below the diagonal, leading to an upper triangular form:
  • Once in upper triangular form, use back substitution to solve for the variables. Starting from the last equation, the solution for each variable x[i] can be calculated using the formula:
    x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i];

4. Gauss-Jordan Elimination

  • Similar to Gauss elimination, use row operations on the matrix A to eliminate variables.
  • Continue manipulating the matrix to ensure that each leading coefficient in the rows is 1 and that all other entries in the column are zero:
  • This results in a direct solution for each variable stored in x without the need for back substitution. The final equations will have the form x[i] = b[i] for all i.

5. LU Factorization

The luFactorization function solves a system of linear equations using LU factorization. It takes three parameters:

  • vector<vector<double>> &A: A square matrix representing the coefficients of the linear equations.
  • vector<double> &b: A vector representing the right-hand side constants of the equations.
  • vector<double> &x: A vector where the solution will be stored.

Function Breakdown

  • Start with a square matrix A.
  • Initialize matrices L and U:
    • L starts as an identity matrix, shown here for a 3×3 system:

                      1 0 0
                      0 1 0
                      0 0 1

    • U initially equals A.
  • Perform Gaussian elimination to transform U into an upper triangular form:
    • For each pivot element in column j, record the multiplier in L first (elimination zeroes U[i][j], so it must be saved before the row is updated), then eliminate the entry below the pivot:
    •                 For each row i > j:
                      L[i][j] = U[i][j] / U[j][j]
                      U[i][k] = U[i][k] - L[i][j] * U[j][k]  (for k = j to n)
  • The resulting matrices L and U can be used to solve Ax = b efficiently:
    • Use forward substitution to solve Ly = b.
    • Use back substitution to solve Ux = y.

Numerical Methods for Root Finding of Non-Linear Equations

1. Bisection Method

The Bisection Method is a root-finding technique that systematically narrows down the interval within which a root exists. It works as follows:

  1. Initialization: Choose two points, a and b, such that f(a) and f(b) have opposite signs. This indicates that there is at least one root in the interval [a, b] due to the Intermediate Value Theorem.
  2. Iteration:
    • Calculate the midpoint c = (a + b) / 2.
    • Evaluate the function at this midpoint, f(c).
    • Determine the next interval:
      • If f(c) is close to zero, then c is the root.
      • If f(a) and f(c) have opposite signs, set b = c.
      • Otherwise, set a = c.
  3. Convergence: The method guarantees convergence, but it can be slow, especially if the root is located near one end of the interval.

2. False Position Method

The False Position Method (or Regula Falsi) is an improvement over the Bisection Method that uses a linear interpolation approach:

  1. Initialization: Similar to the Bisection Method, start with two points a and b with f(a) and f(b) of opposite signs.
  2. Iteration:
    • Instead of taking the midpoint, compute the intersection of the line connecting (a, f(a)) and (b, f(b)) with the x-axis to find a new point c.
    • Evaluate f(c) and determine the next interval:
      • If f(c) is close to zero, then c is the root.
      • If f(a) and f(c) have opposite signs, set b = c.
      • Otherwise, set a = c.
  3. Convergence: This method often converges faster than the Bisection Method, especially when the function is approximately linear in the vicinity of the root.

3. Secant Method

The Secant Method provides a way to find roots without requiring a bracketing interval:

  1. Initialization: Choose two initial approximations x0 and x1 for the root.
  2. Iteration:
    • Compute the next approximation using the secant line formed by the points (x0, f(x0)) and (x1, f(x1)).
    • Update the approximations: xn+1 = xn - f(xn) * (xn - xn-1) / (f(xn) - f(xn-1)).
  3. Convergence: The method can converge rapidly, but it may fail if the function behaves poorly or if the initial guesses are not close enough to the actual root.

4. Newton-Raphson Method

The Newton-Raphson Method is a powerful technique based on the derivative of the function:

  1. Initialization: Start with an initial guess x0 for the root.
  2. Iteration:
    • Compute the next approximation using: xn+1 = xn - f(xn) / f'(xn), where f'(xn) is the derivative of f at xn.
  3. Convergence: The method generally exhibits quadratic convergence, meaning it can quickly approach the root if the initial guess is close enough and the function is well-behaved.

Implementation Details for Solving Non-Linear Equations

1. Bisection Method

  • Initialization:
    • Requires two initial guesses, a and b, such that f(a) and f(b) have opposite signs.
    • Checks this condition at the start using an if statement to confirm a root exists between them.
  • Iteration:
    • Calculate the midpoint mid of the interval [a, b] using mid = (a + b) / 2.
    • Evaluate the function value at the midpoint with f(mid).
    • Update a or b based on the sign of f(mid):
      • If f(mid) is close to zero (within tolerance), return mid.
      • If f(a) and f(mid) have opposite signs, update b to mid.
      • Otherwise, update a to mid.
  • Termination:
    • Continue until the absolute difference between a and b is less than the specified tolerance level, ensuring convergence.
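A minimal sketch of these steps, assuming f is passed in as a callable (the name and tolerance default are illustrative):

```cpp
#include <cmath>
#include <functional>

// Bisection: repeatedly halve a bracketing interval [a, b].
// Precondition: f(a) and f(b) have opposite signs.
double bisection(const std::function<double(double)>& f,
                 double a, double b, double tol = 1e-9) {
    while (std::fabs(b - a) > tol) {
        double mid = (a + b) / 2.0;
        if (std::fabs(f(mid)) < tol) return mid;   // close enough to a root
        if (f(a) * f(mid) < 0) b = mid;            // root lies in [a, mid]
        else a = mid;                              // root lies in [mid, b]
    }
    return (a + b) / 2.0;
}
```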

2. False Position Method

  • Initialization:
    • Begins with two initial points a and b that bracket a root, ensuring f(a) and f(b) have opposite signs.
  • Iteration:
    • Computes the x-intercept c of the line connecting (a, f(a)) and (b, f(b)): c = b - f(b) * (a - b) / (f(a) - f(b));
    • Evaluate f(c).
  • Interval Update:
    • Depending on the sign of f(c), update a or b:
      • If f(c) is close to zero, consider c as the root.
      • If f(a) and f(c) have opposite signs, update b to c.
      • Otherwise, update a to c.
  • Termination:
    • Repeat until the difference between a and b is less than the specified tolerance level.
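A sketch of regula falsi using the chord formula above. One deliberate deviation from the bullet list: it terminates on |f(c)| with an iteration cap rather than on |a - b|, since in regula falsi one endpoint can stay fixed and the bracket width need not shrink to zero.

```cpp
#include <cmath>
#include <functional>

// False position: intersect the chord through (a, f(a)) and (b, f(b))
// with the x-axis. Precondition: f(a) and f(b) have opposite signs.
double falsePosition(const std::function<double(double)>& f,
                     double a, double b,
                     double tol = 1e-9, int maxIter = 200) {
    double c = a;
    for (int i = 0; i < maxIter; ++i) {
        c = b - f(b) * (a - b) / (f(a) - f(b));    // chord's x-intercept
        if (std::fabs(f(c)) < tol) break;          // close enough to a root
        if (f(a) * f(c) < 0) b = c;                // keep the sign change
        else a = c;
    }
    return c;
}
```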

3. Secant Method

  • Initialization:
    • Starts with two initial guesses, x0 and x1, ideally close to the root; unlike bisection, they need not bracket it.
  • Iteration:
    • Calculate the next approximation x2: x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));
  • Updating Values:
    • Update x0 and x1 to x1 and x2, respectively.
  • Termination:
    • Continue until the absolute value of f(x2) is less than the specified tolerance or maximum iterations reached.
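These steps can be sketched as follows (illustrative names; a guard is added for the case f(x1) == f(x0), where the secant is horizontal and the formula breaks down):

```cpp
#include <cmath>
#include <functional>

// Secant method: follow the secant line through the last two iterates.
double secant(const std::function<double(double)>& f,
              double x0, double x1,
              double tol = 1e-9, int maxIter = 100) {
    for (int i = 0; i < maxIter; ++i) {
        double denom = f(x1) - f(x0);
        if (denom == 0.0) break;                   // horizontal secant: give up
        double x2 = x1 - f(x1) * (x1 - x0) / denom;
        if (std::fabs(f(x2)) < tol) return x2;
        x0 = x1;                                   // shift the iterate pair
        x1 = x2;
    }
    return x1;
}
```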

4. Newton-Raphson Method

  • Initialization:
    • Begins with a single initial guess x0.
  • Iteration:
    • Calculate the next approximation: x1 = x0 - f(x0) / f_prime(x0);
  • Updating Values:
    • Update x0 to x1.
  • Termination:
    • Repeat until f(x1) is sufficiently close to zero or maximum iterations reached.
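A sketch of the update loop, taking the derivative as a second callable named f_prime to match the formula above (names and defaults are illustrative):

```cpp
#include <cmath>
#include <functional>

// Newton-Raphson: x1 = x0 - f(x0) / f'(x0), repeated until |f| is small.
double newtonRaphson(const std::function<double(double)>& f,
                     const std::function<double(double)>& f_prime,
                     double x0, double tol = 1e-9, int maxIter = 100) {
    for (int i = 0; i < maxIter; ++i) {
        double x1 = x0 - f(x0) / f_prime(x0);      // assumes f'(x0) != 0
        if (std::fabs(f(x1)) < tol) return x1;
        x0 = x1;
    }
    return x0;
}
```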

5. Finding Interval for Root Detection

  • Scanning Range:
    • Scans a specified range to find two consecutive points where function values have opposite signs.
  • Interval Return:
    • Returns the interval [x_i, x_{i+1}] for use in root-finding methods.
  • Error Handling:
    • If no interval is found, throw an exception indicating no root exists.
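The scan can be sketched as below (function name and step size are assumptions):

```cpp
#include <cmath>
#include <functional>
#include <stdexcept>
#include <utility>

// Walk [lo, hi] in fixed steps and return the first subinterval whose
// endpoint function values differ in sign; throw if none is found.
std::pair<double, double> findInterval(const std::function<double(double)>& f,
                                       double lo, double hi,
                                       double step = 0.5) {
    for (double x = lo; x + step <= hi; x += step)
        if (f(x) * f(x + step) <= 0)               // sign change: root inside
            return {x, x + step};
    throw std::runtime_error("no root found in range");
}
```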

6. Polynomial Evaluation Functions

Function: evaluatePoly

  • Purpose: Evaluates a polynomial at a given x based on its coefficients.
  • Parameters:
    • const vector<double>& coeffs: Coefficients of the polynomial.
    • double x: Value at which the polynomial is evaluated.
  • Implementation:
    • Initialize result to zero.
    • Iterate over coefficients to calculate terms:
      for (size_t i = 0; i < coeffs.size(); ++i) {
          result += coeffs[i] * pow(x, coeffs.size() - i - 1);
      }
  • Return Value: Returns the accumulated result.

Function: evaluatePolyDerivative

  • Purpose: Computes the derivative of a polynomial at a given x.
  • Parameters:
    • const vector<double>& coeffs: Coefficients of the polynomial.
    • double x: Point at which the derivative is evaluated.
  • Implementation:
    • Initialize result to zero.
    • Iterate through coefficients (excluding constant term):
      for (size_t i = 0; i < coeffs.size() - 1; ++i) {
          result += coeffs[i] * (coeffs.size() - i - 1) * pow(x, coeffs.size() - i - 2);
      }
  • Return Value: Returns the accumulated result.
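The two fragments above combine into a self-contained pair of helpers. As the exponent arithmetic implies, coefficients are stored highest power first (so {1, 0, -2} represents x² - 2); the derivative loop is written with i + 1 < size() to stay safe on an empty vector.

```cpp
#include <cmath>
#include <vector>

// Evaluate a polynomial at x; coeffs[0] is the highest-degree coefficient.
double evaluatePoly(const std::vector<double>& coeffs, double x) {
    double result = 0.0;
    for (size_t i = 0; i < coeffs.size(); ++i)
        result += coeffs[i] * std::pow(x, coeffs.size() - i - 1);
    return result;
}

// Evaluate the polynomial's derivative at x (the constant term drops out).
double evaluatePolyDerivative(const std::vector<double>& coeffs, double x) {
    double result = 0.0;
    for (size_t i = 0; i + 1 < coeffs.size(); ++i)
        result += coeffs[i] * (coeffs.size() - i - 1)
                  * std::pow(x, coeffs.size() - i - 2);
    return result;
}
```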

Solution of Differential Equations

1. Runge-Kutta Method

The Runge-Kutta Method is a numerical technique used to solve ordinary differential equations (ODEs). In this implementation, the method is specifically designed to solve first-order ODEs of the form dy/dx = f(x, y). The steps involved are as follows:

  • Derivative Function: The function dydx defines the derivative f(x, y), which in this case is 2*x + 1.
  • Initialization: The method starts with initial conditions (x0, y0) and a target x value, along with a step size h to determine how far to progress in each iteration.
  • Iteration: The method calculates the solution by performing several iterations:
    • Calculate four slopes (k1, k2, k3, k4) based on the derivative function at various points in the interval.
    • Update the value of y using a weighted average of these slopes, where the weights are derived from the Runge-Kutta method formulation.
    • Increment x0 by the step size h for the next iteration.
  • Output: After completing the iterations, the method outputs the estimated value of y at the target x.

Matrix Inversion

The Matrix Inversion method computes the inverse of a square matrix using Gaussian elimination. The key steps are as follows:

  • Initialization: Create an identity matrix of the same dimensions as the input matrix A, which will be transformed into the inverse.
  • Pivoting: For each row, identify the pivot element (the diagonal element). If the pivot is zero, this implementation (which does not swap rows) treats the matrix as singular and reports that inversion is not possible.
  • Normalization: Normalize the current row by dividing all its elements by the pivot to make the pivot element equal to 1.
  • Elimination: Use the normalized row to eliminate corresponding elements in all other rows, effectively transforming the input matrix into the identity matrix while simultaneously applying the same operations to the identity matrix, thereby obtaining the inverse.
  • Completion: After processing all rows, the identity matrix will represent the inverse of the original matrix if the process is successful.
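These steps can be sketched as a Gauss-Jordan inversion, [A | I] → [I | A⁻¹] (illustrative sketch; names and the zero-pivot threshold are assumptions):

```cpp
#include <cmath>
#include <stdexcept>
#include <vector>

// Invert a square matrix in place alongside an identity matrix.
std::vector<std::vector<double>> invertMatrix(std::vector<std::vector<double>> A) {
    const int n = static_cast<int>(A.size());
    std::vector<std::vector<double>> inv(n, std::vector<double>(n, 0.0));
    for (int i = 0; i < n; ++i) inv[i][i] = 1.0;   // start from the identity
    for (int i = 0; i < n; ++i) {
        double pivot = A[i][i];
        if (std::fabs(pivot) < 1e-12)
            throw std::runtime_error("zero pivot: matrix treated as singular");
        for (int j = 0; j < n; ++j) {              // normalize the pivot row
            A[i][j] /= pivot;
            inv[i][j] /= pivot;
        }
        for (int r = 0; r < n; ++r) {              // clear column i elsewhere
            if (r == i) continue;
            double factor = A[r][i];
            for (int j = 0; j < n; ++j) {
                A[r][j] -= factor * A[i][j];
                inv[r][j] -= factor * inv[i][j];
            }
        }
    }
    return inv;                                    // A has become the identity
}
```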

Video Presentation

A video demonstrating the application can be viewed here.

About

This project, called Numerical Methods, is a simple console application that solves different types of math problems, like linear and non-linear equations, differential equations, and matrix inversion.
