Solving polynomial equations by iteration

Introduction
Lessons
  1. Introduction to solving polynomial equations by iteration
  2. Direct/Fixed point iteration
  3. Iteration by bisection
  4. Newton-Raphson method
Examples
Lessons
  1. Solving Equations Using Direct Iteration
    1. Show that x² - 5x - 8 = 0 can be written in the form x = √(8 + 5x).
    2. Use the iteration formula xn+1 = √(8 + 5xn) to find x3 to 2 decimal places. Start with x0 = 2.
  2. Solving Equations Using Direct Iteration
    1. Show that x³ - x - 8 = 0 can be written in the form x = ∛(x + 8).
    2. Use the iteration formula xn+1 = ∛(xn + 8) to find x4 to 2 decimal places. Start with x1 = 0.
  3. Evaluating Equations Using Iteration by Bisection
    1. The equation x³ + 5x - 7 = 91 has a solution between 4 and 5. Use bisection iteration to find the solution and give the answer to 1 decimal place.
    2. Use bisection iteration to solve x³ - x² = 39. Give your answer to 1 decimal place.
  4. Analyzing Equations Using Newton-Raphson Method
    Given x² - 6x + 5 = 0.
    1. Find the iteration formula.
    2. Use the iteration formula found in (a) to approximate the solution. Start with x1 = 2.
      Topic Notes

      Introduction: Solving Polynomial Equations by Iteration

      Solving polynomial equations by iteration is a powerful mathematical technique that allows us to find roots of complex equations. Our introduction video provides a comprehensive overview of this topic, serving as a crucial foundation for understanding the iterative methods. This video highlights the importance of iteration in solving equations that cannot be easily solved through algebraic means. We will explore three main methods for solving polynomial equations: direct iteration, bisection iteration, and the Newton-Raphson method. Each of these techniques offers unique advantages and applications in various mathematical and real-world scenarios. Direct iteration involves repeatedly applying a function to an initial guess, while bisection iteration narrows down the root's location by halving intervals. The Newton-Raphson method, known for its rapid convergence, uses tangent lines to approximate roots. By mastering these iterative techniques, you'll be equipped to tackle a wide range of polynomial equations efficiently and accurately.

      Understanding Iteration in Mathematics

      Iteration, in a mathematical context, refers to the process of repeating a set of operations or calculations, typically using the result of one iteration as the starting point for the next. This powerful technique is fundamental in solving complex mathematical problems, particularly higher-degree polynomial equations, where direct solutions may be challenging or impossible to obtain.

      The primary reason iteration is used to solve higher-degree polynomial equations is its ability to provide increasingly accurate answers through repeated calculations. When dealing with equations that cannot be solved analytically, iteration offers a systematic approach to converge on a solution. By applying a specific formula or algorithm repeatedly, mathematicians can refine their approximations, gradually approaching the true solution with each step.

      The concept of using previous results to inform subsequent calculations is at the heart of iteration. This process can be likened to a feedback loop, where each output becomes the input for the next round. This recursive nature allows for the continuous improvement of the solution, making iteration particularly valuable in situations where precision is crucial.

      To illustrate the iteration process, consider a simple example: finding the square root of a number, say 2, by iteration. We can use the following iterative formula: xn+1 = (xn + 2/xn)/2, where xn is our current guess and xn+1 is our improved guess. Starting with an initial guess of 1.5:

      • Iteration 1: (1.5 + 2/1.5)/2 ≈ 1.4167
      • Iteration 2: (1.4167 + 2/1.4167)/2 ≈ 1.4142
      • Iteration 3: (1.4142 + 2/1.4142)/2 ≈ 1.4142

      As we can see, each iteration brings us closer to the actual square root of 2 (approximately 1.4142). This example demonstrates how iteration can quickly converge on a solution, even with a relatively simple starting point.
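      This loop is easy to automate. Below is a minimal Python sketch of the same iteration; the function name, tolerance, and iteration cap are illustrative choices, not part of the original example:

```python
def sqrt_by_iteration(a, x=1.5, tol=1e-6, max_iter=50):
    """Approximate sqrt(a) using the iteration x_{n+1} = (x_n + a/x_n) / 2."""
    for _ in range(max_iter):
        x_next = (x + a / x) / 2
        if abs(x_next - x) < tol:  # successive guesses agree: stop
            return x_next
        x = x_next
    return x  # best estimate if the tolerance was never met

print(sqrt_by_iteration(2))  # converges to about 1.414214
```

      The stopping rule mirrors the table above: once two successive guesses agree to within the tolerance, further iterations change nothing meaningful.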

      In more complex scenarios, such as solving higher-degree polynomial equations or optimizing functions, iteration becomes an indispensable tool. It allows mathematicians and scientists to tackle problems that would otherwise be intractable, providing a practical means to approximate solutions with a high degree of accuracy.

      The power of iteration lies in its simplicity and effectiveness. By breaking down complex problems into a series of repeated, manageable steps, it offers a systematic approach to problem-solving that can be applied across various mathematical and scientific disciplines. As computational power continues to increase, the applications of iterative methods in mathematics and related fields are likely to expand, further cementing iteration's role as a cornerstone of mathematical problem-solving.

      Direct Iteration Method

      The direct iteration method, also known as fixed point iteration, is a numerical technique used to solve equations, particularly polynomial equations. This method involves repeatedly applying a function to an initial guess until the sequence converges to a fixed point. It's an essential tool in computational mathematics and engineering for finding roots of equations that may be difficult or impossible to solve analytically.

      Step-by-Step Guide to Direct Iteration

      1. Rearrange the equation into the form x = g(x), where g(x) is a function of x.
      2. Choose an initial guess, x0.
      3. Apply the function g to the initial guess: x1 = g(x0).
      4. Repeat the process: xn+1 = g(xn) for n = 1, 2, 3, ...
      5. Continue until the difference between successive iterations is smaller than a predetermined tolerance, or until a maximum number of iterations is reached.
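      The steps above can be sketched as a short, general-purpose routine; `g`, the starting guess, and the tolerance are all supplied by the caller. As a check, we apply it to the rearrangement x = √(8 + 5x) from the exercises, whose positive root is (5 + √57)/2 ≈ 6.2749:

```python
import math

def fixed_point(g, x0, tol=1e-8, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive values differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x  # best estimate if the tolerance was never met

# Root of x^2 - 5x - 8 = 0 via the rearrangement x = sqrt(8 + 5x)
root = fixed_point(lambda x: math.sqrt(8 + 5 * x), x0=2)
print(root)  # about 6.2749
```

      Whether the loop converges depends on the rearrangement chosen: roughly speaking, it converges when |g'(x)| < 1 near the root, which is why different rearrangements of the same equation can behave very differently.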

      Solving a Quadratic Equation Using Direct Iteration

      Let's consider the quadratic equation x² - 2x - 3 = 0. We'll use direct iteration to find one of its roots.

      1. Rearrange the equation: x = (x² - 3) / 2
      2. Our iteration function is g(x) = (x² - 3) / 2
      3. Choose an initial guess, say x0 = 2
      4. Apply the iteration:
        • x1 = g(2) = (2² - 3) / 2 = 0.5
        • x2 = g(0.5) = (0.5² - 3) / 2 = -1.375
        • x3 = g(-1.375) = ((-1.375)² - 3) / 2 ≈ -0.5547
        • x4 = g(-0.5547) = ((-0.5547)² - 3) / 2 ≈ -1.3462
        • x5 = g(-1.3462) = ((-1.3462)² - 3) / 2 ≈ -0.5939
      5. Continue this process until the desired accuracy is achieved.

      In this case, the iterates oscillate on either side of the root and converge, slowly, to x ≈ -1, which is one of the solutions to the equation. (The other root, x = 3, is a repelling fixed point of this rearrangement, so the iteration cannot reach it.)

      Applying Direct Iteration to Cubic Functions

      The direct iteration method can also be applied to cubic functions. Let's consider the cubic equation x³ - x - 2 = 0.

      1. Rearrange the equation: x = ∛(x + 2)
      2. Our iteration function is g(x) = ∛(x + 2)
      3. Choose an initial guess, say x0 = 1
      4. Apply the iteration:
        • x1 = g(1) = ∛(1 + 2) ≈ 1.4422
        • x2 = g(1.4422) = ∛(1.4422 + 2) ≈ 1.5099
        • x3 = g(1.5099) = ∛(1.5099 + 2) ≈ 1.5196
        • x4 = g(1.5196) = ∛(1.5196 + 2) ≈ 1.5211

      In this case, the method converges to x ≈ 1.5214, the only real root of the equation.
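      The same loop, written out for this cubic (a minimal sketch; `**(1/3)` is a valid cube root here because x + 2 stays positive throughout the iteration):

```python
x = 1.0  # initial guess x0
for n in range(20):
    x = (x + 2) ** (1 / 3)  # g(x) = cube root of (x + 2)
print(round(x, 4))  # approaches the real root of x^3 - x - 2 = 0, about 1.5214
```

      Because |g'(x)| ≈ 0.14 near the root, each iteration shrinks the error by roughly a factor of seven, so convergence here is much faster than in the quadratic example above.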

      Bisection Iteration Method

      The bisection iteration method, also known as the interval halving method or binary search method, is a powerful root-finding algorithm used in numerical analysis. The term 'bisection' comes from the Latin prefix 'bi-' meaning two, which perfectly describes the core principle of this method: repeatedly dividing an interval into two parts.

      At its heart, the bisection method is an elegant approach to finding the root of a continuous function within a given interval. The method operates on the principle of repeatedly bisecting the interval and selecting the subinterval where the function changes sign, indicating the presence of a root.

      The steps involved in the bisection method are as follows:

      1. Choose an initial interval [a, b] where f(a) and f(b) have opposite signs.
      2. Calculate the midpoint c = (a + b) / 2.
      3. Evaluate f(c).
      4. If f(c) = 0 or is sufficiently close to zero, c is the root.
      5. If f(c) has the same sign as f(a), update a = c; otherwise, update b = c.
      6. Repeat steps 2-5 until the desired accuracy is achieved or a maximum number of iterations is reached.
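      The six steps translate almost line-for-line into code. This sketch (function name and tolerance are illustrative) returns the midpoint once the bracketing interval is narrower than the requested tolerance:

```python
def bisect(f, a, b, tol=1e-6):
    """Find a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        c = (a + b) / 2          # step 2: midpoint
        if f(a) * f(c) <= 0:     # sign change in [a, c]: root lies there
            b = c
        else:                    # otherwise root lies in [c, b]
            a = c
    return (a + b) / 2

print(bisect(lambda x: x**3 - x - 2, 1, 2))  # about 1.5214
```

      Each pass halves the interval, so after n iterations the root is pinned down to within (b - a) / 2ⁿ of its true value, a predictable if unhurried rate.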

      Let's explore two examples to illustrate the versatility of the bisection method:

      Example 1 (Standard Problem): Find the root of f(x) = x^3 - x - 2 in the interval [1, 2].

      Initial interval: [1, 2]
      f(1) = -2 (negative), f(2) = 4 (positive)

      Iteration 1: c = (1 + 2) / 2 = 1.5
      f(1.5) = -0.125 (negative), new interval: [1.5, 2]

      Iteration 2: c = (1.5 + 2) / 2 = 1.75
      f(1.75) = 1.609375 (positive), new interval: [1.5, 1.75]

      Continuing this process, we converge to the root x ≈ 1.5214.

      Example 2 (Different Scenario): Find the intersection point of two functions g(x) = x^2 and h(x) = 2 - x in the interval [0, 2].

      We can reframe this as finding the root of f(x) = g(x) - h(x) = x^2 + x - 2.

      Initial interval: [0, 2]
      f(0) = -2 (negative), f(2) = 4 (positive)

      Iteration 1: c = (0 + 2) / 2 = 1
      f(1) = 0 (root found!)

      In this case, we found the exact solution x = 1 in just one iteration.

      The bisection method's strength lies in its simplicity and guaranteed convergence for continuous functions. It's particularly useful when dealing with complex functions where derivative information is unavailable or difficult to compute. However, it may converge more slowly compared to methods like Newton-Raphson, especially for functions with shallow slopes near the root.

      While the bisection method is reliable, it does have limitations. It requires that the function changes sign over the initial interval, which means it may miss roots if there's an even number of them in the interval. Additionally, it may struggle with functions that have discontinuities or singularities.

      Despite these limitations, the bisection method remains a fundamental tool in numerical analysis. Its robustness and straightforward implementation make it an excellent choice for many root-finding problems, particularly when guaranteed, albeit potentially slower, convergence is acceptable.

      Newton-Raphson Method

      The Newton-Raphson method is a powerful numerical technique used to find the roots of polynomial equations. This iterative approach is widely applied in various fields of mathematics, engineering, and physics due to its efficiency and rapid convergence. While the method's derivation involves complex mathematical concepts, we'll focus on understanding its application and practical use.

      At its core, the Newton-Raphson method employs the following iteration formula:

      xn+1 = xn - f(xn) / f'(xn)

      Where:

      • xn is the current approximation of the root
      • xn+1 is the next, more accurate approximation
      • f(x) is the function whose root we're seeking
      • f'(x) is the derivative of f(x)

      For those familiar with calculus, the derivative represents the rate of change of a function at a given point. It's crucial in the Newton-Raphson method as it helps guide the iterations towards the root.

      Let's walk through a detailed example to illustrate the method's application. Suppose we want to find the square root of 5 using the Newton-Raphson method. We can rephrase this as finding the root of the equation:

      f(x) = x² - 5 = 0

      The derivative of this function is f'(x) = 2x. Now, let's apply the iteration formula:

      1. Choose an initial guess. Let's start with x0 = 2.
      2. Calculate the next approximation:
        x1 = 2 - (2² - 5) / (2 × 2) = 2.25
      3. Repeat the process:
        x2 = 2.25 - (2.25² - 5) / (2 × 2.25) ≈ 2.2361111
      4. Continue iterating:
        x3 ≈ 2.2360679775

      At this point, we've reached a very close approximation of √5, which is approximately 2.236067977499790. For most practical purposes, this level of accuracy is sufficient.
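      The whole calculation can be sketched in a few lines; `f`, its derivative `fprime`, and the stopping tolerance are all caller-supplied choices:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # last correction was negligible: stop
            break
    return x

# Square root of 5 as the root of f(x) = x^2 - 5, with f'(x) = 2x
print(newton(lambda x: x**2 - 5, lambda x: 2 * x, x0=2))  # about 2.2360679775
```

      Note that the iteration cap guards against the non-convergent cases mentioned below, where a poor initial guess can send the iterates astray.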

      When dealing with more complex equations or requiring higher precision, using a calculator or computer program becomes essential. Modern scientific calculators often have built-in functions for solving equations using methods like Newton-Raphson. For instance, on a graphing calculator, you might use a "solve" or "root-finding" function, inputting the equation and an initial guess.

      It's important to note that while the Newton-Raphson method is powerful, it has limitations. The method may not converge for all functions or initial guesses, and it can struggle with functions that have multiple roots or complex behavior near the root. In practice, it's often combined with other techniques to ensure reliability across a wide range of problems.

      As you explore the Newton-Raphson method further, you'll find it's an invaluable tool in numerical analysis, offering a balance of simplicity and effectiveness for many root-finding problems. Its principles extend beyond polynomial equations, finding applications in optimization, machine learning, and various scientific computations.

      Comparing Iteration Methods

      When solving polynomial equations, three primary iteration methods come into play: direct iteration, bisection, and Newton-Raphson. Each method has its unique advantages and limitations, making them suitable for different scenarios. Understanding these differences is crucial for selecting the most appropriate method based on the equation type and desired accuracy.

      Direct iteration is the simplest of the three methods. It involves repeatedly applying a function to an initial guess until convergence is achieved. The main advantage of this method is its straightforward implementation and low computational cost per iteration. However, it can be slow to converge and may not work for all types of equations. Direct iteration is best suited for well-behaved functions with a clear fixed point and when a rough approximation is sufficient.

      The bisection method, also known as the interval halving method, is a robust technique that always converges for continuous functions. It works by repeatedly dividing an interval in half and selecting the subinterval where the root lies. The primary advantage of bisection is its guaranteed convergence, making it reliable for a wide range of equations. However, it can be slower than other methods, especially when high precision is required. Bisection is ideal for equations where the root is known to lie within a specific interval and when stability is more important than speed.

      Newton-Raphson, often considered the most powerful of the three, uses the function's derivative to find successively better approximations of the root. Its main advantage is its rapid convergence, typically achieving high accuracy in fewer iterations than other methods. However, it requires the function to be differentiable and can be sensitive to the initial guess. Newton-Raphson excels when dealing with smooth, well-behaved functions and when quick convergence to a highly accurate solution is needed.

      When selecting a method, consider the nature of the polynomial equation. For simple, well-behaved functions where a rough estimate suffices, direct iteration may be adequate. If the root is known to lie within a specific interval and stability is crucial, bisection is a safe choice. For complex equations requiring high accuracy and quick convergence, Newton-Raphson is often the best option, provided the function is differentiable and a good initial guess is available.

      In practice, a combination of methods may be used. For instance, bisection could be employed to narrow down the root's location, followed by Newton-Raphson for rapid convergence to the final solution. This hybrid approach leverages the strengths of multiple methods to achieve optimal results. Ultimately, the choice of method depends on the specific problem, desired accuracy, and computational resources available.
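      One way this hybrid might look in code (the choice of five bisection steps before switching to Newton-Raphson is an arbitrary illustrative split, not a standard prescription):

```python
def hybrid_root(f, fprime, a, b, bisect_steps=5, tol=1e-12):
    """Bisection to localize the root, then Newton-Raphson to refine it."""
    for _ in range(bisect_steps):      # coarse phase: halve the bracket
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c
        else:
            a = c
    x = (a + b) / 2                    # fine phase: Newton from the midpoint
    for _ in range(50):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of x^3 - x - 2 in [1, 2], with f'(x) = 3x^2 - 1
print(hybrid_root(lambda x: x**3 - x - 2, lambda x: 3 * x**2 - 1, 1, 2))
```

      The bisection phase supplies the "good initial guess" Newton-Raphson needs, while Newton's quadratic convergence supplies the precision bisection would take many more halvings to reach.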

      Practical Applications of Iteration in Polynomial Solving

      Iteration methods for solving polynomial equations have numerous real-world applications across various fields, including physics, engineering, and computer science. These computational techniques are essential for tackling complex problems that cannot be solved analytically or require rapid, efficient solutions. In physics, iterative methods are frequently employed to model and analyze phenomena such as fluid dynamics, quantum mechanics, and celestial mechanics. For instance, in fluid dynamics, the Navier-Stokes equations, which describe fluid motion, often involve high-degree polynomials that require iterative solutions to predict flow patterns and turbulence in aircraft design or weather forecasting.

      In engineering, iterative polynomial solving finds extensive use in structural analysis, control systems, and signal processing. Civil engineers utilize these methods to optimize bridge designs, calculating load distributions and stress factors through iterative simulations. Electrical engineers apply iteration to solve complex circuit equations, enabling the design of sophisticated electronic systems and power grids. In the realm of control systems, iterative techniques are crucial for developing stable and efficient feedback mechanisms in robotics and automation.

      Computer science heavily relies on iterative polynomial solving for various applications, including computer graphics, machine learning, and cryptography. In computer graphics, rendering realistic 3D scenes often involves solving polynomial equations to determine light interactions and surface properties. Machine learning algorithms frequently employ iterative methods to optimize model parameters, particularly in neural networks where gradient descent techniques iteratively refine weights to minimize error functions.

      The implementation of these methods in software and calculators has revolutionized problem-solving capabilities across industries. Modern scientific computing platforms like MATLAB, Python with NumPy, and Wolfram Mathematica incorporate sophisticated iterative solvers that can handle a wide range of polynomial equations. These software tools often use a combination of methods such as Newton-Raphson, Secant, and Bisection algorithms, automatically selecting the most appropriate technique based on the equation's characteristics.

      For instance, MATLAB's fzero function employs a combination of bisection, secant, and inverse quadratic interpolation methods to find function roots efficiently. Python's SciPy library offers optimize.root_scalar for one-dimensional root-finding, utilizing various iterative methods. In the realm of calculators, advanced graphing calculators like the TI-84 Plus and HP Prime incorporate iterative solvers, allowing students and professionals to tackle complex equations on portable devices.

      The automotive industry benefits significantly from iterative polynomial solving in engine design and performance optimization. Engineers use these methods to model combustion processes, optimize fuel efficiency, and reduce emissions. In aerospace engineering, iterative techniques are crucial for trajectory calculations, satellite orbit determination, and spacecraft navigation. The financial sector employs these methods for option pricing models and risk assessment, where complex polynomial equations often arise in derivative valuations.

      Environmental scientists and climatologists use iterative polynomial solving in climate models to predict long-term weather patterns and assess the impact of various factors on global climate systems. In the field of materials science, these methods are essential for simulating molecular structures and predicting material properties, facilitating the development of new materials with specific characteristics.

      As computational power continues to increase, the applications of iterative polynomial solving expand into new frontiers. Quantum computing research, for example, relies heavily on these methods to simulate quantum systems and develop quantum algorithms. In biotechnology and pharmaceutical research, iterative techniques are employed in molecular dynamics simulations and drug design, helping researchers model complex biological systems and predict drug interactions.

      The versatility and efficiency of iterative methods for solving polynomial equations make them indispensable tools across a wide spectrum of scientific and engineering disciplines. As problems become more complex and data-intensive, the importance of these computational techniques in driving innovation and solving real-world challenges continues to grow, underscoring their critical role in advancing technology and scientific understanding.

      Conclusion: Mastering Polynomial Equation Solving

      In summary, we've explored essential techniques for solving polynomial equations by iteration: direct (fixed point) iteration, iteration by bisection, and the Newton-Raphson method. The introduction video provided a crucial foundation for understanding these concepts. To truly master these methods, regular practice is key. We encourage you to work through various problems, applying the strategies discussed. For further learning, explore online resources, textbooks, and educational websites dedicated to algebra. As you gain confidence, consider delving into more advanced topics such as polynomial long division, synthetic division techniques, and the rational root theorem. These skills will enhance your problem-solving abilities and prepare you for higher-level mathematics. Remember, each problem you solve strengthens your understanding. We invite you to engage with more complex polynomial equations, pushing your boundaries and expanding your mathematical horizons. Keep practicing, stay curious, and don't hesitate to seek help when needed. Your journey in mastering polynomial equations is just beginning!

      Introduction to Solving Polynomial Equations by Iteration

      Direct/Fixed Point Iteration

      Step 1: Understanding Iteration

      Iteration is a mathematical process where a sequence of operations is repeated to get closer to a desired result. In the context of solving polynomial equations, iteration involves using the result from a previous calculation to generate a new result. This process is repeated until the results converge to a more accurate solution. The main advantage of using iteration is that it can provide more precise answers when the initial data is only an approximation.

      Step 2: Introduction to Direct/Fixed Point Iteration

      Direct or fixed point iteration is one of the methods used to solve polynomial equations iteratively. The basic idea is to rearrange the polynomial equation so that the variable with the highest exponent is isolated on one side of the equation. This isolated variable is then expressed in terms of the other variables and constants. The process involves repeatedly substituting the result back into the equation to get closer to the solution.

      Step 3: Rearranging the Polynomial Equation

      The first step in direct iteration is to rearrange the original polynomial equation such that the term with the highest exponent is isolated. For example, consider the polynomial equation:
      x² + 2x + 1 = 0
      In this equation, the term with the highest exponent is x². To isolate this term, we need to move the other terms to the right side of the equation. This can be done by subtracting 2x and 1 from both sides:
      x² = -2x - 1

      Step 4: Performing Inverse Operations

      The next step is to isolate the variable on its own by performing inverse operations. In our example, the variable x is squared. The inverse operation of squaring is taking the square root. Therefore, we take the square root of both sides of the equation:
      x = √(-2x - 1)
      This step ensures that the variable is isolated on one side of the equation, making it easier to perform iterative calculations.

      Step 5: Defining the Iterative Process

      In the iterative process, the left-hand side (LHS) of the equation becomes xn+1, and the right-hand side (RHS) becomes xn. This notation indicates that the value of x in the next iteration is determined by the current value of x. For our example, the iterative equation becomes:
      xn+1 = √(-2xn - 1)
      This equation will be used to generate successive approximations of the solution.

      Step 6: Iterative Calculation

      To perform the iterative calculation, we start with an initial guess for x, denoted as x0. This initial guess is substituted into the iterative equation to calculate the next value, x1. The process is repeated using the new value to calculate the subsequent value, and so on. The iteration continues until the values converge to a stable solution.
      For example, if we start with an initial guess of x0 = 1, we substitute it into the iterative equation:
      x1 = √(-2(1) - 1) = √(-3)
      Since the square root of a negative number is not real, we need to choose a different initial guess or modify the equation to ensure real solutions.

      Step 7: Convergence and Accuracy

      The iterative process is repeated until the values of xn converge to a stable solution. Convergence is achieved when the difference between successive values is smaller than a predefined tolerance level. The accuracy of the solution depends on the number of iterations and the initial guess. In practice, a good initial guess and a sufficient number of iterations are essential for obtaining an accurate solution.

      Conclusion

      Direct or fixed point iteration is a powerful method for solving polynomial equations iteratively. By rearranging the equation, performing inverse operations, and defining the iterative process, we can generate successive approximations of the solution. The iterative calculation continues until the values converge to a stable and accurate solution. This method is particularly useful when the initial data is only an approximation, and a more precise answer is required.

      FAQs

      1. How do you calculate by iteration?
      Iteration involves repeatedly applying a formula or process to refine an initial guess. To calculate by iteration: 1. Start with an initial guess. 2. Apply the iteration formula to get a new value. 3. Use the new value as the input for the next iteration. 4. Repeat until the desired accuracy is achieved or a maximum number of iterations is reached.

      2. What is the formula for simple iteration?
      The general formula for simple iteration is xn+1 = g(xn), where xn is the current value, xn+1 is the next value, and g(x) is the iteration function. The specific form of g(x) depends on the equation being solved.

      3. How do you do iteration?
      To perform iteration: 1. Rearrange the equation into the form x = g(x). 2. Choose an initial value x0. 3. Calculate x1 = g(x0). 4. Repeat step 3 using the previous result as input. 5. Continue until the results converge or a stopping criterion is met.

      4. What is an example of iteration?
      An example of iteration is finding the square root of 2: 1. Use the formula xn+1 = (xn + 2/xn)/2. 2. Start with x0 = 1.5. 3. x1 = (1.5 + 2/1.5)/2 ≈ 1.4167. 4. x2 = (1.4167 + 2/1.4167)/2 ≈ 1.4142. 5. Continue until desired accuracy is reached.

      5. What are the advantages of the Newton-Raphson method?
      The Newton-Raphson method offers several advantages: 1. Rapid convergence, often quadratic. 2. High accuracy in fewer iterations compared to other methods. 3. Effective for a wide range of functions, including polynomials. 4. Can be easily adapted for solving systems of nonlinear equations. 5. Widely used in various fields due to its efficiency and versatility.

      Prerequisite Topics for Solving Polynomial Equations by Iteration

      Understanding the process of solving polynomial equations by iteration requires a solid foundation in several key mathematical concepts. One of the most fundamental prerequisites is solving polynomial equations in general. This skill is crucial as iteration methods build upon basic equation-solving techniques to find solutions for higher-degree polynomial equations.

      Another important concept to grasp is the square root of a function, which is often utilized in iterative methods like the square root iterative method. This technique is particularly useful when dealing with equations that involve radicals or when simplifying complex expressions during the iteration process.

      Familiarity with solving quadratic equations using the quadratic formula is also essential. While iteration methods are often used for higher-degree polynomials, understanding quadratic solutions provides a strong basis for more complex problem-solving strategies.

      Determining the equation of a polynomial function and graphing polynomial functions are vital skills that help visualize the behavior of equations and predict potential solutions. This graphical understanding can guide the iteration process and help verify results.

      Proficiency in polynomial long division and synthetic division techniques is crucial for simplifying polynomials and finding potential roots. These methods often serve as preliminary steps in iterative approaches, helping to identify initial guesses for solutions.

      The rational root theorem is another key concept that aids in finding potential rational solutions to polynomial equations. This theorem can significantly narrow down the search space for iterative methods, making the process more efficient.

      Lastly, an understanding of continuous functions is important when working with iterative methods. Many iteration techniques rely on the continuity of polynomial functions to guarantee convergence to a solution.

      By mastering these prerequisite topics, students will be well-equipped to tackle the challenges of solving polynomial equations by iteration. Each concept builds upon the others, creating a comprehensive toolkit for approaching complex polynomial problems. The iterative methods used in solving these equations often combine aspects of graphical analysis, algebraic manipulation, and numerical approximation, making a strong foundation in these prerequisites essential for success.

      In this lesson, we will learn:

      • Solving Equations Using Direct Iteration
      • Evaluating Equations Using Iteration by Bisection
      • Analyzing Equations Using Newton-Raphson Method
      • Iteration means repeatedly solving an equation to obtain a result, using the result from the previous calculation.
      • Direct iteration:
      1. Rearrange the original equation so that the term containing the variable with the highest exponent is isolated.
      2. Leave the variable on its own on the LHS by performing inverse operations.
      3. The LHS becomes xn+1.
      4. The RHS becomes xn.
      • Iteration by bisection:
      1. Split the interval in which the root lies into 2 equal parts.
      2. Decide in which part the solution resides.
      3. Repeat the steps until a consistent answer is achieved.
      • Newton-Raphson method:
      xn+1 = xn - f(xn) / f'(xn)