Solving a linear system with matrices using Gaussian elimination


Intros
Lessons
  1. Gaussian elimination overview
Examples
Lessons
  1. Gaussian Elimination
    Solve the following linear systems:
    1. x + 2y = 3
      2x + 3y = 1
    2. x + 4y + 3z = 1
      x + 2y + 9z = 1
      x + 6y + 6z = 1
    3. x + 3y + 3z = 2
      3x + 9y + 3z = 3
      3x + 6y + 6z = 4
    4. 4x - 5y = -6
      2x - 2y = 1
    5. x + 3y + 4z = 4
      -x + 3y + 2z = 2
      3x + 9y + 6z = -6
Topic Notes
Now that we have learned how to represent a linear system as a matrix, we can solve that matrix to solve the linear system! We use a method called "Gaussian elimination". This method involves a lot of matrix row operations. Our goal is to make all entries in the bottom left of the matrix 0. Once that is done, we take a look at the last row, convert it back to an equation, and solve for its variable. Then we look at the second last row, convert it to an equation, and solve for the next variable. Rinse and repeat, and you will find all the variables which solve the linear system!

Introduction to Solving Linear Systems with Matrices

Solving linear systems using matrices and Gaussian elimination is a fundamental technique in linear algebra. This method provides a powerful approach to tackling complex equations efficiently. Our introduction video serves as an essential starting point, offering a clear and concise explanation of the concept. By watching this video, students gain a solid foundation for understanding the process. Gaussian elimination is widely applicable across various fields, including engineering, physics, and economics. It allows for the systematic reduction of a matrix to row echelon form, simplifying the solution process. This method's versatility makes it invaluable in real-world problem-solving, from optimizing resource allocation to analyzing electrical circuits. As we delve deeper into this topic, you'll discover how matrices and Gaussian elimination form the backbone of many advanced mathematical and scientific applications, highlighting their significance in modern problem-solving techniques.

Understanding Linear Systems and Matrices

Linear systems are fundamental concepts in mathematics and engineering, representing a collection of linear equations that are solved simultaneously. These systems can be elegantly represented and efficiently solved using matrices, providing a powerful tool for tackling complex problems across various fields. In this section, we'll explore the concept of linear systems, their matrix representation, and the advantages of using matrices to solve these systems.

A linear system consists of one or more linear equations involving the same set of variables. For example, consider the following system of two equations with two unknowns:

2x + 3y = 8
4x - y = 5

This system can be solved using traditional algebraic methods, but as the number of equations and variables increases, these methods become increasingly cumbersome. This is where matrices come into play, offering a more efficient and systematic approach to solving linear systems.

To represent a linear system using matrices, we organize the coefficients of the variables and the constants into a structured format. The coefficient matrix contains the coefficients of the variables, while the constant vector holds the right-hand side values. For our example, the matrix representation would be:

[ 2  3 ] [x]   [8]
[ 4 -1 ] [y] = [5]

This compact form, known as Ax = b, where A is the coefficient matrix, x is the variable vector, and b is the constant vector, encapsulates the entire system in a concise manner.
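This Ax = b form maps directly onto array-based tools. As a quick sketch (using NumPy is an assumption — any linear-algebra library would do), we can build A and b for the 2x2 example above and confirm that the computed x satisfies the system:

```python
import numpy as np

# The system  2x + 3y = 8,  4x - y = 5  in Ax = b form
A = np.array([[2.0, 3.0],
              [4.0, -1.0]])
b = np.array([8.0, 5.0])

x = np.linalg.solve(A, b)      # solve Ax = b
print(np.allclose(A @ x, b))   # True: Ax reproduces b
```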

The advantages of using matrices to solve linear systems are numerous:

  1. Efficiency: Matrix operations can be performed quickly, especially with computer algorithms, making it possible to solve large systems rapidly.
  2. Consistency: The matrix form provides a standardized approach to representing and solving linear systems, regardless of their complexity.
  3. Scalability: Matrices can easily handle systems with many equations and variables, where traditional methods would be impractical.
  4. Analytical power: Matrix algebra offers powerful tools for analyzing system properties, such as determinants and eigenvalues, providing insights beyond just finding solutions.

Let's consider another example to illustrate the process of converting a linear system into matrix form. Take the following system of three equations with three unknowns:

x + 2y - z = 3
3x - y + 2z = 7
2x + y + z = 4

To convert this into matrix form, we arrange the coefficients and constants as follows:

[ 1  2 -1 ] [x]   [3]
[ 3 -1  2 ] [y] = [7]
[ 2  1  1 ] [z]   [4]

This matrix representation allows us to apply various matrix operations and solution techniques, such as Gaussian elimination or matrix inversion, to find the values of x, y, and z efficiently.
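To illustrate the matrix-inversion route mentioned above, here is a small sketch (NumPy is an assumption) that solves the 3x3 system via x = A⁻¹b and checks the result; in practice, solvers based on Gaussian elimination are preferred over explicit inversion:

```python
import numpy as np

A = np.array([[1.0, 2.0, -1.0],
              [3.0, -1.0, 2.0],
              [2.0, 1.0, 1.0]])
b = np.array([3.0, 7.0, 4.0])

# One of the solution techniques mentioned above: x = A^(-1) b
x = np.linalg.inv(A) @ b
print(np.allclose(A @ x, b))   # True
```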

In conclusion, the use of matrices to represent and solve linear systems offers a powerful and versatile approach to handling complex mathematical problems. By converting linear equations into matrix form, we gain access to a wide array of computational tools and analytical techniques. This matrix-based approach not only simplifies the process of solving linear systems but also provides a foundation for more advanced mathematical concepts and applications in fields such as physics, engineering, and computer science.

Gaussian Elimination Method: An Overview

Gaussian elimination is a fundamental method in linear algebra, widely used for solving systems of linear equations. Named after the renowned German mathematician Carl Friedrich Gauss, this elimination method has become a cornerstone in mathematical computations and has applications across various fields of science and engineering.

The historical roots of Gaussian elimination can be traced back to ancient China, where similar techniques were used in the text "The Nine Chapters on the Mathematical Art" around 200 BC. However, it was Gauss who formalized and popularized the method in the early 19th century, leading to its widespread adoption in mathematical circles.

The importance of Gaussian elimination in linear algebra cannot be overstated. It serves as a powerful tool for solving systems of linear equations, finding the rank of a matrix, calculating determinants, and inverting matrices. Its efficiency and reliability have made it an essential component in numerous computational algorithms and software packages.

The Gaussian elimination method consists of two main phases: forward elimination and back-substitution. Let's explore each step in detail:

1. Forward Elimination: This phase aims to transform the augmented matrix of the system into row echelon form. The process involves the following steps:

  • Select the leftmost nonzero column as the pivot column.
  • Choose the element with the largest absolute value in the pivot column as the pivot element.
  • Use elementary row operations to eliminate all entries below the pivot element.
  • Repeat the process for the next column, working from left to right.
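The forward-elimination steps above can be sketched in code. This is a minimal illustration (function name and structure are my own, not from the source) of column-by-column elimination with partial pivoting on the augmented matrix:

```python
import numpy as np

def forward_eliminate(A, b):
    """Reduce the augmented matrix [A | b] to row echelon form.

    For each column: pick the entry of largest absolute value as the
    pivot (partial pivoting), swap it into place, then use row
    operations to zero out every entry below it.
    """
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for col in range(n):
        # Choose the row with the largest absolute value in this column
        pivot = col + np.argmax(np.abs(M[col:, col]))
        M[[col, pivot]] = M[[pivot, col]]          # swap rows
        for row in range(col + 1, n):
            factor = M[row, col] / M[col, col]
            M[row] -= factor * M[col]              # eliminate below pivot
    return M

A = np.array([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]])
b = np.array([8, -11, -3])
print(forward_eliminate(A, b))
```

Because of the pivot swaps, the resulting echelon form differs from a hand computation without pivoting, but it represents the same system.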

For example, consider the system of equations:

2x + y - z = 8
-3x - y + 2z = -11
-2x + y + 2z = -3

The augmented matrix would be:

[ 2 1 -1 | 8]
[-3 -1 2 | -11]
[-2 1 2 | -3]

After forward elimination (scaling rows so each pivot is 1), we get:

[ 2 1 -1 | 8]
[ 0 1  1 | 2]
[ 0 0  1 | -1]

2. Back-Substitution: Once the matrix is in row echelon form, we can solve for the variables using back-substitution. Starting from the bottom row and working upwards, we substitute known values to find the solutions. In our example:

z = -1
y + z = 2, so y = 3
2x + y - z = 8, so x = 2
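The back-substitution pass can also be expressed in code. This is a sketch (the helper name is my own) that takes an upper-triangular matrix U and right-hand side c for the example system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3, and solves from the bottom row up:

```python
import numpy as np

def back_substitute(U, c):
    """Solve Ux = c for an upper-triangular U, bottom row up."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # Subtract the already-known terms, then divide by the pivot
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# One row echelon form of the example system (pivots scaled to 1
# where convenient):
U = np.array([[2.0, 1.0, -1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
c = np.array([8.0, 2.0, -1.0])
print(back_substitute(U, c))   # [ 2.  3. -1.]  i.e. x = 2, y = 3, z = -1
```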

The Gaussian elimination method is not without its limitations. It can be computationally intensive for large systems and may suffer from round-off errors in floating-point arithmetic. However, various techniques such as partial pivoting and scaled partial pivoting have been developed to enhance its stability and accuracy.

In practice, Gaussian elimination finds applications in diverse areas such as computer graphics, circuit analysis, and economic modeling. Its versatility and reliability make it an indispensable tool in the mathematician's and engineer's toolkit.

As we continue to advance in the digital age, variations and optimizations of the Gaussian elimination method continue to emerge. Parallel computing techniques and specialized hardware implementations have further enhanced its efficiency, allowing for the solution of increasingly complex systems of equations.

In conclusion, the Gaussian elimination method stands as a testament to the power of mathematical algorithms in solving real-world problems. Its historical significance, coupled with its ongoing relevance in modern computational methods, ensures that it will remain a crucial component of linear algebra and numerical analysis for generations to come.

Row Operations in Gaussian Elimination

Gaussian elimination is a fundamental method in linear algebra for solving systems of linear equations and finding the inverse of matrices. At the heart of this process are three elementary row operations that transform a matrix into row echelon form. These operations are scaling, addition, and swapping. Understanding these row operations is crucial for mastering Gaussian elimination and maintaining equation equivalence throughout the process.

The first elementary row operation is scaling. Scaling involves multiplying an entire row of a matrix by a non-zero constant. This operation is used to simplify equations by creating leading ones or eliminating coefficients. For example, if we have a row [2, 4, 6], we can scale it by 1/2 to get [1, 2, 3]. This operation is particularly useful when we want to create a pivot element of 1 in the leading position of a row.

The second operation is addition, also known as row replacement. This involves adding a multiple of one row to another row. The primary purpose of this operation is to eliminate variables and create zero entries below pivot elements. For instance, if we have two rows [1, 2, 3] and [4, 5, 6], we can add -4 times the first row to the second row to get [1, 2, 3] and [0, -3, -6]. This operation is crucial for creating the characteristic "staircase" pattern of zeros in row echelon form.

The third elementary row operation is swapping, which involves exchanging the positions of two rows in the matrix. This operation is typically used when we need to reposition rows to ensure non-zero entries in pivot positions. For example, if we have a matrix with rows [0, 2, 3] and [1, 4, 5], we would swap these rows to get [1, 4, 5] and [0, 2, 3], placing the non-zero entry in the leading position.

Applying these three operations systematically allows us to transform a matrix into row echelon form. Row echelon form is characterized by several key features: all rows consisting of only zeros are at the bottom of the matrix, the leading coefficient (pivot) of a non-zero row is always strictly to the right of the leading coefficient of the row above it, and all entries in a column below a leading coefficient are zeros.

To illustrate the process, let's consider transforming the following matrix into row echelon form:

[2, 4, 6]
[1, 3, 5]
[3, 5, 7]

First, we scale the first row by 1/2 to get a leading 1:

[1, 2, 3]
[1, 3, 5]
[3, 5, 7]

Next, we use addition to eliminate the entries below our first pivot:

[1, 2, 3]
[0, 1, 2]
[0, -1, -2]

Finally, we add the second row to the third row:

[1, 2, 3]
[0, 1, 2]
[0, 0, 0]

The resulting matrix is in row echelon form.
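The same sequence of row operations can be performed directly on an array. A minimal sketch (NumPy is an assumption) of the worked example, with each elementary operation commented:

```python
import numpy as np

# The worked example: transform to row echelon form
M = np.array([[2.0, 4.0, 6.0],
              [1.0, 3.0, 5.0],
              [3.0, 5.0, 7.0]])

M[0] = M[0] / 2          # scaling:  R1 -> (1/2) R1
M[1] = M[1] - M[0]       # addition: R2 -> R2 - R1
M[2] = M[2] - 3 * M[0]   # addition: R3 -> R3 - 3 R1
M[2] = M[2] + M[1]       # addition: R3 -> R3 + R2
# (swapping, if needed, would be: M[[0, 1]] = M[[1, 0]])
print(M)
```

The final array matches the row echelon form derived by hand: rows [1, 2, 3], [0, 1, 2], [0, 0, 0].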

Throughout this process, it's crucial to maintain equation equivalence. This means that the system of equations represented by the matrix remains unchanged in terms of its solutions. Each row operation we perform must be applied consistently across the entire augmented matrix (including the constants on the right-hand side of the equations). By adhering to this principle, we ensure that the solutions to our original system of equations are preserved in the transformed system.

In conclusion, the three elementary row operations - scaling, addition, and swapping - are powerful tools in Gaussian elimination. They allow us to systematically transform matrices into row echelon form, simplifying complex systems of equations and facilitating their solution. By understanding and applying these operations while maintaining equation equivalence, we can confidently navigate the process of Gaussian elimination and solve a wide range of linear algebra problems.

Solving Linear Systems Using Gaussian Elimination

Gaussian elimination is a powerful method for solving systems of linear equations. In this walkthrough, we'll demonstrate how to use this technique to solve a 3x3 system, highlighting key steps and potential pitfalls along the way.

Step 1: Set Up the System

Let's start with the following system of equations:

    2x + y - z = 8
    -3x - y + 2z = -11
    -2x + y + 2z = -3
    

Step 2: Create the Augmented Matrix

Convert the system into an augmented matrix:

    [  2   1  -1 |  8  ]
    [ -3  -1   2 | -11 ]
    [ -2   1   2 | -3  ]
    

Step 3: Perform Row Operations

Our goal is to achieve row echelon form through a series of elementary row operations:

3.1 Make the entries below the first pivot zero:

R2 = R2 + (3/2)R1

R3 = R3 + R1

    [  2   1  -1 |  8  ]
    [  0  1/2  1/2 | 1  ]
    [  0   2   1  |  5  ]
    

3.2 Make the entry below the second pivot zero:

R3 = R3 - 4R2

    [  2   1  -1 |  8  ]
    [  0  1/2  1/2 | 1  ]
    [  0   0   -1 |  1  ]
    

Step 4: Back-Substitution

Now that we have row echelon form, we can solve for our variables:

4.1 Solve for z:

-z = 1, so z = -1

4.2 Solve for y:

(1/2)y + (1/2)(-1) = 1

y = 3

4.3 Solve for x:

2x + 3 - (-1) = 8

x = 2

Final Solution:

x = 2, y = 3, z = -1
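It is always worth verifying a solution by substituting it back into the original system. A quick check (NumPy is an assumption):

```python
import numpy as np

A = np.array([[2, 1, -1],
              [-3, -1, 2],
              [-2, 1, 2]], dtype=float)
b = np.array([8, -11, -3], dtype=float)
x = np.array([2, 3, -1], dtype=float)   # the solution found above

# Substituting back should reproduce b exactly
print(A @ x)   # [  8. -11.  -3.]
```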

Common Challenges and Mistakes to Avoid:

  1. Arithmetic errors: Be careful when performing calculations, especially with fractions.
  2. Inconsistent systems: Not all systems have solutions. Watch for rows that lead to contradictions (e.g., 0 = 1).
  3. Dependent systems: Some systems may have infinite solutions. Recognize when you have free variables.
  4. Pivoting: If you encounter a zero in the pivot position, you may need to swap rows.
  5. Rounding errors: When working with decimals, be aware that rounding can affect accuracy.
  6. Forgetting to carry operations through to the augmented column: Always apply elementary row operations to the entire row, including the constant term.

Gauss-Jordan Elimination: Extended Method

Gauss-Jordan elimination is an advanced mathematical technique that builds upon the foundation of Gaussian elimination. This powerful method is widely used in linear algebra to solve systems of linear equations and find matrix inverses. While both Gaussian and Gauss-Jordan elimination share similarities, the latter takes the process a step further, offering distinct advantages in certain scenarios.

The primary difference between Gaussian elimination and the Gauss-Jordan method lies in the final form of the augmented matrix. Gaussian elimination transforms the matrix into row echelon form, whereas Gauss-Jordan elimination goes beyond to achieve reduced row echelon form (RREF). This additional step simplifies the solution process and provides a more straightforward interpretation of results.

One of the key advantages of the Gauss-Jordan method is its ability to simultaneously solve multiple systems of equations with the same coefficient matrix. This feature makes it particularly useful in applications such as finding matrix inverses or solving complex linear programming problems. Additionally, the reduced row echelon form obtained through Gauss-Jordan elimination offers a clearer representation of the solution space, making it easier to identify and analyze special cases like inconsistent or dependent systems.

To illustrate the Gauss-Jordan elimination process, let's walk through a step-by-step example of solving a linear system:

Consider the following system of equations:
2x + y - z = 8
-3x - y + 2z = -11
-2x + y + 2z = -3

Step 1: Set up the augmented matrix
[2 1 -1 | 8]
[-3 -1 2 | -11]
[-2 1 2 | -3]

Step 2: Use row operations to transform the left side of the matrix into the identity matrix
R1 → R1 / 2
[1 1/2 -1/2 | 4]
[-3 -1 2 | -11]
[-2 1 2 | -3]

R2 → R2 + 3R1
[1 1/2 -1/2 | 4]
[0 1/2 1/2 | 1]
[-2 1 2 | -3]

R3 → R3 + 2R1
[1 1/2 -1/2 | 4]
[0 1/2 1/2 | 1]
[0 2 1 | 5]

R2 → 2R2
[1 1/2 -1/2 | 4]
[0 1 1 | 2]
[0 2 1 | 5]

R3 → R3 - 2R2
[1 1/2 -1/2 | 4]
[0 1 1 | 2]
[0 0 -1 | 1]

R3 → -R3
[1 1/2 -1/2 | 4]
[0 1 1 | 2]
[0 0 1 | -1]

R2 → R2 - R3
[1 1/2 -1/2 | 4]
[0 1 0 | 3]
[0 0 1 | -1]

R1 → R1 + (1/2)R3
[1 1/2 0 | 7/2]
[0 1 0 | 3]
[0 0 1 | -1]

R1 → R1 - (1/2)R2
[1 0 0 | 2]
[0 1 0 | 3]
[0 0 1 | -1]

The left side is now the identity matrix, so the solution can be read directly from the last column: x = 2, y = 3, z = -1.
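The full Gauss-Jordan reduction can be automated. Here is a simplified sketch (function name my own; no pivot-magnitude selection, so it is not numerically robust for ill-conditioned systems) that reduces an augmented matrix to reduced row echelon form:

```python
import numpy as np

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form."""
    M = M.astype(float)
    rows, cols = M.shape
    r = 0
    for c in range(cols - 1):
        # Find a nonzero pivot in or below row r
        pivot = next((i for i in range(r, rows) if abs(M[i, c]) > 1e-12), None)
        if pivot is None:
            continue                     # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]    # swap pivot row up
        M[r] /= M[r, c]                  # scale pivot to 1
        for i in range(rows):            # clear the whole column
            if i != r:
                M[i] -= M[i, c] * M[r]
        r += 1
    return M

M = np.array([[2, 1, -1, 8],
              [-3, -1, 2, -11],
              [-2, 1, 2, -3]])
print(rref(M))   # identity on the left; last column gives x = 2, y = 3, z = -1
```

Unlike plain Gaussian elimination, the column-clearing loop eliminates entries both above and below each pivot, which is exactly what distinguishes Gauss-Jordan from the forward-elimination-only approach.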

Applications and Practical Uses

Gaussian elimination and solving linear systems have numerous real-world applications across various fields, making them essential tools in modern science, engineering, and industry. In engineering, these mathematical techniques are crucial for structural analysis and design. Civil engineers use linear systems to calculate load distributions in complex structures like bridges and skyscrapers, ensuring their stability and safety. Electrical engineers apply Gaussian elimination to analyze circuit networks, determining currents and voltages in complex electrical systems.

In physics, linear systems are fundamental to many areas of study. Quantum mechanics relies heavily on matrix operations and linear algebra, with Gaussian elimination playing a role in solving Schrödinger's equation for multi-particle systems. In classical mechanics, these methods are used to solve equations of motion for complex systems with multiple interacting bodies, such as in celestial mechanics for predicting planetary orbits.

The field of economics extensively uses linear systems and Gaussian elimination for various applications. Input-output models, which describe the interdependencies between different economic sectors, are represented as large systems of linear equations. Economists use these techniques to analyze how changes in one sector affect others, helping in economic planning and policy-making. Financial analysts apply linear algebra to portfolio optimization, using Gaussian elimination to solve systems that balance risk and return across multiple assets.

Computer graphics is another area where linear systems and Gaussian elimination find significant use. 3D rendering algorithms often involve solving large systems of equations to determine lighting, shading, and object transformations. In computer vision and image processing, these methods are used for tasks such as image reconstruction, feature extraction, and camera calibration.

In the realm of data science and machine learning, linear systems are fundamental to many algorithms. Least squares regression, a cornerstone of statistical analysis, uses Gaussian elimination to find the best-fit line or plane for a set of data points. This technique is widely used in predictive modeling across industries, from weather forecasting to market trend analysis.

The oil and gas industry employs linear systems and Gaussian elimination in reservoir simulation models, which are crucial for optimizing extraction strategies and predicting reservoir behavior. In environmental science, these methods are used to model complex ecosystems, helping researchers understand and predict the impacts of climate change and human activities on natural systems.

As these examples demonstrate, the applications of Gaussian elimination and linear systems are vast and diverse, touching nearly every aspect of modern scientific and industrial endeavors. Their ability to efficiently solve complex problems involving multiple variables makes them indispensable tools in our increasingly data-driven world.

Conclusion

In conclusion, Gaussian elimination stands as a powerful method for solving linear systems, offering a systematic approach to manipulating matrices. This article has explored the key steps involved in the process, from creating an augmented matrix to performing row operations and back-substitution. The introduction video provided a visual and practical demonstration of these concepts, enhancing understanding. Gaussian elimination's importance in mathematics and various scientific fields cannot be overstated, as it forms the foundation for more advanced linear algebra techniques. Readers are encouraged to practice solving linear systems using this method, gradually increasing complexity to build proficiency. Exploring further resources on linear algebra and matrix operations will deepen understanding and reveal the broader applications of these techniques. Mastering Gaussian elimination opens doors to solving complex problems in engineering, physics, and data science, making it an invaluable skill for students and professionals alike.

Example:

Gaussian Elimination
Solve the following linear systems:
x + 2y = 3
2x + 3y = 1

Step 1: Convert the Linear System to a Matrix

To solve the given linear system using Gaussian elimination, we first need to convert the system into a matrix. The given system is:
x + 2y = 3
2x + 3y = 1
We extract the coefficients of the variables and the constants on the right-hand side of the equations to form the augmented matrix:
\[ \begin{pmatrix} 1 & 2 & | & 3 \\ 2 & 3 & | & 1 \end{pmatrix} \]

Step 2: Apply Gaussian Elimination

Gaussian elimination involves transforming the matrix to row echelon form. This means we need to make all the elements below the main diagonal zero.
The main diagonal elements are the elements at positions (1,1) and (2,2). We need to make the element at position (2,1) zero.
To do this, we can perform the following row operations:
- Multiply the first row by 2: \[ \begin{pmatrix} 2 & 4 & | & 6 \\ 2 & 3 & | & 1 \end{pmatrix} \]
- Subtract the first row from the second row: \[ \begin{pmatrix} 2 & 4 & | & 6 \\ 0 & -1 & | & -5 \end{pmatrix} \]
Now, the element at position (2,1) is zero.

Step 3: Solve for the Variables

With the matrix in row echelon form, we can now solve for the variables. The second row of the matrix represents the equation:
0x - 1y = -5
Simplifying this, we get:
y = 5
Now, substitute y = 5 back into the first row equation:
2x + 4(5) = 6
Simplifying this, we get:
2x + 20 = 6
2x = 6 - 20
2x = -14
x = -7

Step 4: Verify the Solution

Finally, we verify the solution by substituting x = -7 and y = 5 back into the original equations:
For the first equation:
-7 + 2(5) = 3
-7 + 10 = 3
3 = 3 (True)
For the second equation:
2(-7) + 3(5) = 1
-14 + 15 = 1
1 = 1 (True)
Since both equations are satisfied, the solution x = -7 and y = 5 is correct.

FAQs

Q1: What is the Gaussian elimination method?
A1: Gaussian elimination is a systematic method for solving systems of linear equations. It involves transforming the augmented matrix of the system into row echelon form through a series of elementary row operations. The process consists of two main steps: forward elimination to create an upper triangular matrix, and back-substitution to solve for the variables.

Q2: What are the rules for Gaussian elimination?
A2: The key rules for Gaussian elimination are: 1. Use elementary row operations: scaling, addition, and swapping. 2. Create zeros below the pivot elements in each column. 3. Work from left to right, top to bottom. 4. Ensure that the pivot element is non-zero; if it is zero, swap rows. 5. Maintain equation equivalence throughout the process.

Q3: What is the difference between Gauss elimination and Gauss-Jordan method?
A3: The main difference is in the final form of the matrix. Gaussian elimination transforms the matrix into row echelon form, while the Gauss-Jordan method goes a step further to produce reduced row echelon form. Gauss-Jordan elimination creates an identity matrix on the left side of the augmented matrix, making it easier to read off solutions directly.

Q4: What are the tips and tricks for Gaussian elimination?
A4: Some helpful tips include: 1. Always choose the largest pivot element to minimize rounding errors. 2. Use fractions instead of decimals to maintain precision. 3. Simplify fractions at each step to keep numbers manageable. 4. Check for special cases like inconsistent or dependent systems. 5. Practice with smaller systems before tackling larger ones.

Q5: What is Gaussian elimination used for in real life?
A5: Gaussian elimination has numerous real-world applications, including: 1. Solving complex engineering problems, such as structural analysis. 2. Balancing chemical equations in chemistry. 3. Optimizing resource allocation in economics and operations research. 4. Analyzing electrical circuits in physics and engineering. 5. Image processing and computer graphics algorithms. 6. Data analysis and machine learning, particularly in linear regression.

Prerequisite Topics

Understanding the foundation of solving linear systems with matrices using Gaussian elimination is crucial for mastering this advanced mathematical technique. To excel in this area, it's essential to grasp several key prerequisite topics that form the building blocks of this method.

First and foremost, a solid understanding of linear equations and their applications is vital. These equations form the basis of linear systems, and knowing how to interpret and manipulate them is crucial. Additionally, familiarity with solving systems of linear equations in various contexts, such as distance and time problems, provides practical insight into the importance of these mathematical tools.

As we delve deeper into matrix-based solutions, it's important to grasp the concept of matrix representation of linear systems. This knowledge bridges the gap between traditional algebraic methods and the more advanced matrix-based approaches, setting the stage for Gaussian elimination.

A critical component of the Gaussian elimination process is understanding elementary row operations. These operations are the fundamental tools used to manipulate matrices during the elimination process. Mastery of these operations is essential for efficiently solving linear systems using matrices.

Furthermore, familiarity with row echelon form is crucial. This concept is at the heart of Gaussian elimination, as the process aims to transform a matrix into row echelon form to simplify the system and find its solution.

By thoroughly understanding these prerequisite topics, students can approach Gaussian elimination with confidence. Each concept builds upon the others, creating a strong foundation for tackling more complex problems. For instance, the ability to represent linear systems as matrices combines with the knowledge of row operations to execute the Gaussian elimination algorithm effectively.

Moreover, recognizing the practical applications of linear equations helps students appreciate the real-world relevance of Gaussian elimination. This method is not just a theoretical concept but a powerful tool used in various fields, from engineering to economics.

In conclusion, mastering these prerequisite topics is not just about memorizing formulas or procedures. It's about developing a comprehensive understanding of the interconnected concepts that make Gaussian elimination a powerful and efficient method for solving linear systems. By investing time in these foundational areas, students will find themselves well-equipped to tackle more advanced problems and applications in linear algebra and beyond.

Note
Gaussian elimination (or row reduction) is a method used for solving linear systems. For example,

x + y + z = 3
x + 2y + 3z = 0
x + 3y + 2z = 3

Can be represented as the augmented matrix:

[ 1  1  1 |  3 ]
[ 1  2  3 |  0 ]
[ 1  3  2 |  3 ]

Using Gaussian elimination, we can turn this matrix into

[ 1  1  1 |  3 ]
[ 0  2  4 | -6 ]
[ 0  0 -3 |  6 ]

(watch the intro video to learn how to do this!)

Now we can start solving for x, y, and z.

So in the third row, we see that -3z = 6. So z = -2.

In the second row, we see that 2y + 4z = -6. Since we know that z = -2, we can substitute it into the second row and solve for y. So,

2y + 4z = -6
2y + 4(-2) = -6
2y - 8 = -6
2y = 2
y = 1

So now we know that z = -2 and y = 1. Now let us take a look at the first row and solve for x.

x + y + z = 3
x + 1 - 2 = 3
x - 1 = 3
x = 4

Since we have solved for x, y, and z, we have just solved the linear system.
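As a final check, the solution x = 4, y = 1, z = -2 can be substituted back into the original system (NumPy is an assumption here):

```python
import numpy as np

# Checking the worked solution against the original system
A = np.array([[1, 1, 1],
              [1, 2, 3],
              [1, 3, 2]], dtype=float)
b = np.array([3, 0, 3], dtype=float)
x = np.array([4, 1, -2], dtype=float)   # x = 4, y = 1, z = -2

print(A @ x)   # [3. 0. 3.] -- matches b, so the solution checks out
```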