Properties of linear transformation

Intro Lessons
  1. Properties of Linear Transformation Overview:
  2. The 3 properties of Linear Transformation
    T(u+v) = T(u) + T(v)
    T(cu) = cT(u)
    T(0) = 0
  3. How to see if a transformation is linear
    • Show that: T(cu+dv) = cT(u) + dT(v)
    • General formula: T(c_1 v_1 + c_2 v_2 + \cdots + c_p v_p) = c_1 T(v_1) + c_2 T(v_2) + \cdots + c_p T(v_p)
Example Lessons
  1. Understanding and Using the Properties
    1. Show that the transformation T defined by the first formula given in the lesson is not linear.
    2. Show that the transformation T defined by the second formula given in the lesson is not linear.
  2. Proving Questions using the Properties
    1. An affine transformation T: \Bbb{R}^n \to \Bbb{R}^m has the form T(x) = Ax + b, where A is an m \times n matrix and b is a vector in \Bbb{R}^m. Show that the transformation T is not a linear transformation when b \neq 0.
    2. Define T: \Bbb{R}^n \to \Bbb{R}^m to be a linear transformation, and let the set of vectors \{v_1, v_2, v_3\} be linearly dependent. Show that the set of vectors \{T(v_1), T(v_2), T(v_3)\} is also linearly dependent.
    3. Define T: \Bbb{R}^n \to \Bbb{R}^m to be a linear transformation, and let the vectors v_1, ..., v_p be in \Bbb{R}^n. In addition, let T(v_i) = 0 for i = 1, 2, ..., p. If x is any vector in \Bbb{R}^n, show that T(x) = 0. In other words, show that T is the zero transformation.
            Topic Notes

            Introduction to Properties of Linear Transformations

            Welcome to our exploration of linear transformations! These powerful mathematical tools are essential in various fields, from computer graphics to physics. A linear transformation is a function that preserves vector addition and scalar multiplication. It's like a special rule that changes vectors in a consistent way. The introduction video we'll watch shortly is crucial for grasping this concept visually. Key properties of linear transformations include linearity, which means they maintain proportions, and the fact that they can be represented by matrices. This makes calculations much easier! We'll dive into how these transformations affect vector spaces, their kernels, and ranges. Understanding these properties is fundamental for advanced math and many practical applications. So, let's get ready to transform our understanding of linear algebra together! Remember, mastering linear transformations opens doors to solving complex problems in elegant ways.

            Basic Properties of Linear Transformations

            Linear transformations are fundamental concepts in linear algebra, characterized by three essential properties: additivity, homogeneity, and preservation of the zero vector. These properties define the linearity of a transformation and are crucial in various mathematical and practical applications. Let's explore each of these properties in detail and compare them with matrix multiplication.

            1. Additivity: The first fundamental property of linear transformations is additivity. This property states that for any two vectors u and v in the domain of a linear transformation T, the following equation holds: T(u + v) = T(u) + T(v). In other words, the transformation of the sum of two vectors is equal to the sum of their individual transformations. For example, consider a transformation T that rotates vectors by 90 degrees counterclockwise. If we have two vectors u = (1, 2) and v = (3, 4), then T(u + v) = T((4, 6)) = (-6, 4), which is the same as T(u) + T(v) = (-2, 1) + (-4, 3) = (-6, 4).

            2. Homogeneity: The second property is homogeneity, also known as scalar multiplication compatibility. For any scalar c and vector v, a linear transformation T satisfies: T(cv) = cT(v). This property ensures that scaling a vector before applying the transformation is equivalent to scaling the transformed vector. Using the same rotation example, if we scale vector u = (1, 2) by a factor of 3, we get T(3u) = T((3, 6)) = (-6, 3), which is the same as 3T(u) = 3(-2, 1) = (-6, 3).

            3. Preservation of Zero Vector: The third property states that a linear transformation always maps the zero vector to the zero vector in the codomain. Mathematically, this is expressed as T(0) = 0, where 0 represents the zero vector in both the domain and codomain. This property is a consequence of the homogeneity property, as T(0v) = 0T(v) = 0 for any vector v. In our rotation example, the zero vector (0, 0) remains unchanged after the transformation.
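As a quick sanity check, all three properties can be verified numerically for the 90-degree rotation used in the examples above. This is an illustrative Python sketch; the helper names T, add, and scale are our own, not from any library.

```python
# Sketch: verify the three linearity properties for the 90-degree
# counterclockwise rotation T(x, y) = (-y, x) from the examples above.

def T(v):
    """Rotate a 2D vector 90 degrees counterclockwise."""
    x, y = v
    return (-y, x)

def add(u, v):
    """Componentwise vector addition."""
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    """Scalar multiplication of a vector."""
    return (c * v[0], c * v[1])

u, v, c = (1, 2), (3, 4), 3

assert T(add(u, v)) == add(T(u), T(v))   # additivity: T(u+v) = T(u) + T(v)
assert T(scale(c, u)) == scale(c, T(u))  # homogeneity: T(cu) = cT(u)
assert T((0, 0)) == (0, 0)               # zero vector: T(0) = 0
```

The same harness works for any candidate transformation: a single failing assertion is a counterexample proving non-linearity.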

            These three properties are intimately connected and collectively define the concept of linearity. They ensure that linear transformations preserve the vector space structure, maintaining proportions and relationships between vectors. It's important to note that any transformation satisfying these properties is considered linear, regardless of its specific effect on vectors.

            Comparing these properties with matrix representation of linear transformations reveals striking similarities. In fact, every linear transformation can be represented by a matrix, and matrix multiplication embodies these same properties. For a matrix A and vectors x and y:

            1. Additivity in matrix multiplication: A(x + y) = Ax + Ay
            2. Homogeneity in matrix multiplication: A(cx) = c(Ax)
            3. Zero vector preservation: A0 = 0

            These parallels highlight the deep connection between linear transformations and matrices, explaining why matrices are such powerful tools for representing and manipulating linear transformations.
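These parallel matrix properties are easy to check numerically. The sketch below uses a hand-rolled matvec helper (our own name, not a library function) and an arbitrary 2x2 example matrix:

```python
# Check that matrix-vector multiplication satisfies the same three
# properties; the matrix A here is an arbitrary example.

def matvec(A, x):
    """Multiply an m x n matrix (given as a list of rows) by a vector."""
    return tuple(sum(a * xi for a, xi in zip(row, x)) for row in A)

def vec_add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def vec_scale(c, v):
    return tuple(c * a for a in v)

A = [[1, 2], [3, 4]]
x, y, c = (1, -1), (2, 5), 7

assert matvec(A, vec_add(x, y)) == vec_add(matvec(A, x), matvec(A, y))  # A(x+y) = Ax + Ay
assert matvec(A, vec_scale(c, x)) == vec_scale(c, matvec(A, x))         # A(cx) = c(Ax)
assert matvec(A, (0, 0)) == (0, 0)                                      # A0 = 0
```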

            The importance of these properties in defining linearity cannot be overstated. They provide a clear, concise framework for understanding how linear transformations behave and interact with vectors. This understanding is crucial in various fields, including physics, engineering, computer graphics, and data analysis. For instance, in computer graphics, linear transformations are used to scale, rotate, and project objects in 3D space. The additivity property allows for the composition of multiple transformations, while homogeneity ensures consistent scaling across all dimensions.

            Moreover, these properties enable the development of efficient algorithms and computational methods for solving linear systems and analyzing vector spaces. They form the foundation for more advanced concepts in linear algebra, such as eigenvalues, eigenvectors, and diagonalization, which have wide-ranging applications in fields like quantum mechanics, machine learning, and signal processing.

            In conclusion, the three fundamental properties of linear transformations, namely additivity, homogeneity, and preservation of the zero vector, are essential in defining and understanding linearity. These properties, mirrored in the matrix representation of linear transformations, provide a clear framework for understanding how linear transformations behave and interact with vectors across fields such as physics, engineering, computer graphics, and data analysis.

            Proving a Transformation is Not Linear

            In linear algebra, proving that a transformation is not linear is a crucial skill that relies on understanding the fundamental properties of linear transformations and the power of counterexamples. This section will demonstrate how to prove that a transformation is not linear by showing that at least one of the required properties doesn't hold, using the example from the video where T(0) ≠ 0.

            To begin, let's recall the two essential properties that define a linear transformation:

            1. Additivity: T(u + v) = T(u) + T(v) for all vectors u and v
            2. Homogeneity: T(cu) = cT(u) for all vectors u and scalars c

            Additionally, a consequence of these properties is that for any linear transformation T, T(0) must equal 0, where 0 represents the zero vector. This is known as the zero vector property.

            Now, let's focus on the example where T(0) ≠ 0. This scenario provides an excellent opportunity to demonstrate how a single counterexample can disprove the linearity of a transformation. Here's a step-by-step process to prove that such a transformation is not linear:

            1. State the given information: We are told that for the transformation T, T(0) ≠ 0.
            2. Identify the property being violated: This directly contradicts the zero vector property of linear transformations.
            3. Explain why this violates linearity: If T were linear, the zero vector property would force T(0) = 0.
            4. Provide a logical argument: Since T(0) ≠ 0, at least one of the defining properties of linear transformations must not hold.
            5. Conclude: Therefore, T is not a linear transformation.

            This example highlights the power of counterexamples in mathematical proofs. By finding just one instance where a required property fails, we can definitively prove that a transformation is not linear, regardless of how it behaves for other inputs.

            To further illustrate this concept, let's consider a specific non-linear transformation:

            T(x, y) = (x² + y, y)

            We first try to prove this is not linear by checking whether T(0, 0) ≠ (0, 0):

            1. Calculate T(0, 0): T(0, 0) = (0² + 0, 0) = (0, 0)
            2. Observe: In this case, T(0, 0) = (0, 0), so we can't use the zero vector property to prove non-linearity.
            3. Try another approach: Let's check the additivity property instead.
            4. Choose vectors: Let u = (1, 0) and v = (2, 0)
            5. Calculate T(u + v): T((1, 0) + (2, 0)) = T(3, 0) = (9, 0)
            6. Calculate T(u) + T(v): T(1, 0) + T(2, 0) = (1, 0) + (4, 0) = (5, 0)
            7. Compare: T(u + v) ≠ T(u) + T(v), since (9, 0) ≠ (5, 0)
            8. Conclude: The transformation is not linear because it violates the additivity property.
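The eight steps above can be condensed into a few lines of Python; a single failing check is enough to settle the question (the function name T is our own, for illustration):

```python
def T(v):
    """The non-linear example T(x, y) = (x^2 + y, y)."""
    x, y = v
    return (x * x + y, y)

u, v = (1, 0), (2, 0)
s = (u[0] + v[0], u[1] + v[1])                   # u + v = (3, 0)
lhs = T(s)                                       # T(u + v) = (9, 0)
rhs = (T(u)[0] + T(v)[0], T(u)[1] + T(v)[1])     # T(u) + T(v) = (5, 0)

assert T((0, 0)) == (0, 0)  # the zero-vector check passes, so it proves nothing
assert lhs != rhs           # but additivity fails: (9, 0) != (5, 0), so T is not linear
```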

            This example demonstrates that even when one property (like the zero vector property) holds, we may need to check other properties to prove non-linearity. It underscores the importance of thoroughly examining all aspects of a transformation before drawing conclusions about its linearity.

            In conclusion, proving that a transformation is not linear comes down to finding a suitable counterexample, which relies on a solid understanding of the fundamental properties of linear transformations.

            Proving a Transformation is Linear

            Understanding how to prove that a transformation is linear is a crucial skill in linear algebra. This process involves demonstrating that the transformation satisfies specific properties, and it often requires careful algebraic manipulation. Let's explore this concept using the example transformation T(x1, x2) = (x1, -x2) from our video lesson.

            To prove that a transformation is linear, we need to show that it satisfies two key properties:

            1. Additivity: T(u + v) = T(u) + T(v) for all vectors u and v
            2. Homogeneity: T(cu) = cT(u) for all vectors u and scalars c

            However, instead of proving these properties separately, we can use a more general property of linear transformation that combines both:

            T(cu + dv) = cT(u) + dT(v) for all vectors u and v, and all scalars c and d

            This general property of linear transformation encompasses both additivity and homogeneity. If we can prove this property holds for our transformation, we've effectively proven that the transformation is linear.

            Let's apply this to our example transformation T(x1, x2) = (x1, -x2):

            1. Start with the left side of the equation: T(cu + dv)
            2. Let u = (u1, u2) and v = (v1, v2)
            3. Then, cu + dv = (cu1 + dv1, cu2 + dv2)
            4. Apply the transformation: T(cu + dv) = T(cu1 + dv1, cu2 + dv2) = (cu1 + dv1, -(cu2 + dv2))
            5. Simplify: (cu1 + dv1, -cu2 - dv2)

            Now, let's look at the right side of the equation: cT(u) + dT(v)

            1. T(u) = (u1, -u2) and T(v) = (v1, -v2)
            2. cT(u) = (cu1, -cu2) and dT(v) = (dv1, -dv2)
            3. Adding these: cT(u) + dT(v) = (cu1 + dv1, -cu2 - dv2)

            We can see that the left side and the right side of the equation are identical, proving that T(cu + dv) = cT(u) + dT(v) holds for our transformation.
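The algebraic proof can be backed up by a brute-force numerical spot check of the combined property over many random vectors and scalars. This is purely illustrative: a passing check is supporting evidence, while only the algebra above constitutes a proof.

```python
import random

def T(v):
    """The example transformation T(x1, x2) = (x1, -x2)."""
    x1, x2 = v
    return (x1, -x2)

random.seed(0)  # deterministic sampling for reproducibility
for _ in range(1000):
    u = (random.uniform(-5, 5), random.uniform(-5, 5))
    v = (random.uniform(-5, 5), random.uniform(-5, 5))
    c, d = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = T((c * u[0] + d * v[0], c * u[1] + d * v[1]))           # T(cu + dv)
    rhs = (c * T(u)[0] + d * T(v)[0], c * T(u)[1] + d * T(v)[1])  # cT(u) + dT(v)
    assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
```

Note the asymmetry: a numerical search can only find counterexamples, never certify linearity for all inputs.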

            This proof demonstrates the importance of algebraic manipulation in linear algebra. By carefully breaking down the transformation and applying it to general vectors and scalars, we can show that the required properties hold. This process typically involves expanding the transformation's definition, distributing scalars, and regrouping terms until both sides of the equation match.

            It's crucial to approach these proofs systematically, clearly stating each step and ensuring that all manipulations are mathematically valid. This not only helps in proving the linearity of the transformation but also develops critical thinking and problem-solving skills essential in advanced mathematics.

            In conclusion, proving that a transformation is linear involves showing that it satisfies the general property of linear transformation. This process requires careful algebraic manipulation and a clear understanding of vector operations in linear algebra. By mastering this technique, you'll be well-equipped to analyze and work with linear transformations in various mathematical and real-world applications.

            Generalized Formula for Linear Transformations

            The generalized formula for linear transformations involving multiple vectors and scalars is a powerful mathematical tool that extends the basic principles of linear algebra to more complex scenarios. This formula is essential for tackling advanced problems in various fields, including physics, engineering, and computer science. To understand its significance, let's first explore why we need to move beyond the simpler two-vector, two-scalar formulations.

            In elementary linear algebra, we often work with transformations of the form T(av + bw), where T is a linear transformation, v and w are vectors, and a and b are scalars. While this form is sufficient for many basic applications, it becomes limiting when dealing with more intricate systems or higher-dimensional spaces. The generalized formula for linear transformations addresses this limitation by allowing for an arbitrary number of vectors and scalars.

            The generalized formula for linear transformations can be expressed as:

            T(a_1v_1 + a_2v_2 + ... + a_nv_n) = a_1T(v_1) + a_2T(v_2) + ... + a_nT(v_n)

            Where T is the linear transformation, a_1, a_2, ..., a_n are scalars, and v_1, v_2, ..., v_n are vectors. This formula encapsulates the fundamental property of linearity: the transformation of a sum is equal to the sum of the transformations.
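A sketch of the generalized formula in action, using an arbitrary linear map on R^2 and four terms (the map, scalars, and vectors below are all illustrative choices):

```python
def T(v):
    """An example linear map on R^2, given by the matrix [[2, 1], [1, -3]]."""
    x, y = v
    return (2 * x + y, x - 3 * y)

scalars = [1.5, -2.0, 0.5, 4.0]               # a_1 .. a_n
vectors = [(1, 0), (0, 1), (2, -1), (-3, 5)]  # v_1 .. v_n

# Left side: T(a_1 v_1 + ... + a_n v_n)
combo = (sum(a * v[0] for a, v in zip(scalars, vectors)),
         sum(a * v[1] for a, v in zip(scalars, vectors)))
lhs = T(combo)

# Right side: a_1 T(v_1) + ... + a_n T(v_n)
images = [T(v) for v in vectors]
rhs = (sum(a * w[0] for a, w in zip(scalars, images)),
       sum(a * w[1] for a, w in zip(scalars, images)))

assert abs(lhs[0] - rhs[0]) < 1e-9 and abs(lhs[1] - rhs[1]) < 1e-9
```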

            The necessity of this generalized formula becomes apparent when we consider complex proofs and applications in advanced mathematics. Many theorems and proofs in linear algebra, functional analysis, and abstract algebra rely on the ability to work with multiple vectors and scalars simultaneously. Without this generalized form, mathematicians and scientists would be constrained in their ability to describe and analyze complex systems.

            The limitations of using only two vectors and two scalars become evident in several scenarios:

            1. Higher-dimensional spaces: In many real-world applications, we deal with spaces that have more than three dimensions. The generalized formula allows us to work efficiently in these higher-dimensional spaces without being restricted to pairwise combinations of vectors.

            2. Complex systems modeling: When modeling intricate systems, such as those in quantum mechanics or advanced economic models, we often need to consider the interaction of multiple variables simultaneously. The generalized formula provides the flexibility to handle these multi-variable scenarios.

            3. Basis transformations: In linear algebra, changing the basis of a vector space often involves more than two vectors. The generalized formula allows for smooth transitions between different bases, regardless of the dimension of the space.

            4. Matrix operations: When dealing with large matrices, especially in computer science and data analysis, the ability to express transformations involving multiple vectors is crucial for efficient algorithms and computations.

            Examples where the generalized formula proves particularly useful include:

            1. Quantum mechanics: In describing quantum states, physicists often need to work with superpositions of multiple basis states. The generalized formula allows for the manipulation of these complex quantum systems.

            2. Signal processing: When analyzing complex signals composed of multiple frequencies or components, the generalized formula enables engineers to apply transformations to the entire signal efficiently.

            3. Machine learning: In advanced machine learning algorithms, especially those involving neural networks, the ability to transform high-dimensional data using multiple vectors and scalars is essential for feature extraction and model training.

            4. Economic modeling: Complex economic systems often involve numerous variables and factors. The generalized formula allows economists to create and analyze models that account for multiple interacting elements simultaneously.

            5. Computer graphics: In 3D modeling and animation, transformations involving multiple vectors are common when manipulating complex objects or scenes. The generalized formula provides a robust framework for these operations.

            In conclusion, the generalized formula for linear transformations involving multiple vectors and scalars is a fundamental concept that bridges the gap between basic linear algebra and more advanced mathematical applications. Its importance lies in its ability to handle complex systems, work in higher dimensions, and provide a unified approach to linear transformations across various fields of study. By mastering this generalized formula, mathematicians, scientists, and engineers gain a powerful tool for tackling sophisticated problems.

            Applications and Examples of Linear Transformations

            Linear transformations are fundamental mathematical operations with a wide range of real-world applications. Because they preserve vector addition and scalar multiplication, they are incredibly useful in many fields. Let's explore some practical applications of linear transformations in geometric transformations, signal processing, and data analysis.

            Geometric Transformations

            One of the most intuitive applications of linear transformations is in geometric transformations. These include rotations, reflections, and scaling, which are essential in computer graphics, animation, and image processing.

            Rotations

            Rotations are linear transformations that change the orientation of an object without altering its shape or size. In 2D space, a rotation by an angle θ can be represented by the matrix:

            [cos θ  -sin θ]
            [sin θ   cos θ]

            This transformation is widely used in computer graphics for rotating objects on screen, in robotics for controlling the movement of robotic arms, and in aerospace engineering for calculating the orientation of aircraft and spacecraft.
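In code, applying this rotation matrix to a vector looks like the following sketch (the function name rotate is our own):

```python
import math

def rotate(v, theta):
    """Apply the 2D rotation matrix [[cos, -sin], [sin, cos]] to vector v."""
    x, y = v
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

# Rotating (1, 0) by 90 degrees should land on (0, 1), up to float error.
x, y = rotate((1, 0), math.pi / 2)
assert abs(x) < 1e-12 and abs(y - 1) < 1e-12
```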

            Reflections

            Reflections are linear transformations that flip an object across a line or plane. For example, a reflection across the y-axis can be represented by the matrix:

            [-1  0]
            [ 0  1]

            Reflections are used in computer-aided design (CAD) software for creating symmetrical objects, in image processing for creating mirror effects, and in physics for modeling light reflection.

            Scaling

            Scaling transformations change the size of an object without altering its shape. A scaling transformation can be represented by the matrix:

            [sx  0 ]
            [ 0  sy]

            Where sx and sy are the scaling factors along the x and y axes, respectively. Scaling is used in digital zoom features of cameras, in responsive web design for adjusting content to different screen sizes, and in 3D modeling for resizing objects.
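Both the reflection and scaling matrices above can be exercised with the same generic matrix-vector product; matvec is an illustrative helper, not a library call, and the scale factors are example values:

```python
def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return tuple(sum(a * x for a, x in zip(row, v)) for row in A)

reflect_y = [[-1, 0], [0, 1]]  # reflection across the y-axis
scaling = [[2, 0], [0, 3]]     # sx = 2, sy = 3

assert matvec(reflect_y, (4, 5)) == (-4, 5)  # x flips sign, y unchanged
assert matvec(scaling, (4, 5)) == (8, 15)    # each axis scaled independently
```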

            Signal Processing

            Linear transformations play a crucial role in signal processing, which is essential in telecommunications, audio engineering, and image processing.

            Fourier Transform

            The Fourier transform is a linear transformation that decomposes a signal into its constituent frequencies. It's represented as:

            F(ω) = ∫_{-∞}^{∞} f(t) e^(-iωt) dt

            This transformation is used in audio compression (e.g., MP3 format), noise reduction in telecommunications, and medical imaging techniques like MRI and CT scans.
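The discrete Fourier transform, the finite analogue of the integral above, is itself a linear transformation, and a naive version fits in a few lines. This is an O(N^2) teaching sketch, not the FFT used in practice:

```python
import cmath
import math

def dft(signal):
    """Naive DFT: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N)."""
    N = len(signal)
    return [sum(x * cmath.exp(-2j * math.pi * k * n / N)
                for n, x in enumerate(signal))
            for k in range(N)]

# A pure cosine at frequency 1 concentrates its energy in bins 1 and N-1.
N = 8
signal = [math.cos(2 * math.pi * n / N) for n in range(N)]
spectrum = [abs(X) for X in dft(signal)]

assert abs(spectrum[1] - N / 2) < 1e-9  # peak of magnitude N/2 at bin 1
assert spectrum[2] < 1e-9               # (almost) no energy at other frequencies
```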

            Wavelet Transform

            The wavelet transform is another linear transformation used in signal processing. It provides time-frequency representation of a signal, making it useful for analyzing non-stationary signals. Applications include image compression (JPEG2000 format), detecting discontinuities in signals, and removing noise from ECG signals in medical diagnostics.

            Data Analysis

            Linear transformations are fundamental in various data analysis techniques, helping to extract meaningful information from complex datasets.

            Principal Component Analysis (PCA)

            PCA is a linear transformation technique used to reduce the dimensionality of data while preserving as much variance as possible. It's represented by the transformation:

            Y = XW

            Where X is the original data, W is the transformation matrix, and Y is the transformed data. PCA is widely used in face recognition algorithms, gene expression analysis in bioinformatics, and financial modeling for risk assessment.

            Linear Regression

            Linear regression is a statistical method that uses linear transformations to model the relationship between variables. The transformation is represented as:

            y = Xβ + ε

            Where y is the dependent variable, X is the matrix of independent variables, β is the vector of coefficients, and ε is the error term.
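A minimal ordinary-least-squares fit for the one-variable case can be written directly from the normal equations; fit_line is an illustrative name, not a library function:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b1 = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b0 = (sy - b1 * sx) / n                         # intercept
    return b0, b1

# Data lying exactly on y = 1 + 2x should be recovered exactly.
b0, b1 = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
assert abs(b0 - 1) < 1e-9 and abs(b1 - 2) < 1e-9
```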

            Conclusion and Further Study

            In summary, linear transformations possess crucial properties that form the foundation of linear algebra. The introduction video provides an essential overview of these concepts, emphasizing linearity, vector space preservation, and the relationship between transformations and matrices. Understanding these properties is vital for grasping more advanced topics in linear algebra. To deepen your knowledge, we encourage further study of related areas such as linear operators and matrix representations of linear transformations. These topics will enhance your understanding of how linear transformations can be applied in various fields, including computer graphics, data analysis, and quantum mechanics. Exploring the connections between linear transformations and their matrix representations will provide valuable insights into the practical applications of these mathematical concepts. As you continue your journey in linear algebra, remember that mastering these fundamental properties will serve as a solid foundation for more complex mathematical theories and real-world applications.

            Properties of Linear Transformation Overview:

            The 3 properties of Linear Transformation:
            T(u+v) = T(u) + T(v)
            T(cu) = cT(u)
            T(0) = 0

            Step 1: Introduction to Linear Transformations

            Linear transformations are functions that map vectors to vectors in a way that preserves vector addition and scalar multiplication. These transformations are fundamental in linear algebra and have several key properties that define their behavior. Understanding these properties is crucial for determining whether a given transformation is linear.

            Step 2: Property 1 - Additivity

            The first property of a linear transformation is additivity. This property states that the transformation of the sum of two vectors is equal to the sum of the transformations of the individual vectors. Mathematically, this is expressed as T(u+v) = T(u) + T(v). This means that if you have two vectors u and v, and you apply the transformation T to their sum, it should be the same as applying T to u and v separately and then adding the results.

            Step 3: Property 2 - Homogeneity

            The second property is homogeneity, which states that the transformation of a scalar multiple of a vector is equal to the scalar multiple of the transformation of the vector. This is written as T(cu) = cT(u), where c is a scalar and u is a vector. This property ensures that scaling a vector before applying the transformation is the same as scaling the result of the transformation.

            Step 4: Property 3 - Zero Vector

            The third property is that the transformation of the zero vector must be the zero vector. In other words, T(0) = 0. This property follows from the first two and ensures that the zero vector is always mapped to itself, maintaining the structure of the vector space.

            Step 5: Verifying Linearity

            To verify if a given transformation is linear, you need to check if it satisfies all three properties mentioned above. If any one of these properties does not hold, the transformation is not linear. For example, if you have a transformation T and you find that T(u+v) ≠ T(u) + T(v) for some vectors u and v, then T is not linear.

            Step 6: Example Problem

            Let's consider an example problem where we need to determine if a given transformation is linear. Suppose we have a transformation defined by T(x_1, x_2) = (x_1 + x_2, x_1 + 3). To check if this transformation is linear, we will verify each of the three properties.

            Step 7: Checking Property 1 - Additivity

            First, we check if T satisfies additivity. We need to see if T((x_1, x_2) + (y_1, y_2)) = T(x_1, x_2) + T(y_1, y_2). Calculate both sides and compare the results. If they are equal, the first property holds.

            Step 8: Checking Property 2 - Homogeneity

            Next, we check if T satisfies homogeneity. We need to see if T(c(x_1, x_2)) = cT(x_1, x_2). Again, calculate both sides and compare the results. If they are equal, the second property holds.

            Step 9: Checking Property 3 - Zero Vector

            Finally, we check if T maps the zero vector to itself. We need to see if T(0, 0) = (0, 0). Calculate the transformation of the zero vector and see if it results in the zero vector. If it does, the third property holds.

            Step 10: Conclusion

            If all three properties are satisfied, then the transformation T is linear. If any one of the properties fails, then T is not linear. In our example, if we find that T(0, 0) ≠ (0, 0), then we can conclude that the transformation is not linear.
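For the worked example T(x1, x2) = (x1 + x2, x1 + 3), the zero-vector check settles the question immediately (an illustrative Python sketch):

```python
def T(v):
    """The example transformation T(x1, x2) = (x1 + x2, x1 + 3)."""
    x1, x2 = v
    return (x1 + x2, x1 + 3)

# Property 3 fails: the zero vector is not mapped to the zero vector.
assert T((0, 0)) == (0, 3)
assert T((0, 0)) != (0, 0)   # therefore T is not linear
```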

            FAQs

            Here are some frequently asked questions about linear transformations:

            1. What are the properties of linear transformations?

              Linear transformations have two key properties: additivity (T(u + v) = T(u) + T(v)) and homogeneity (T(cu) = cT(u)). These properties ensure that the transformation preserves vector addition and scalar multiplication.

            2. What are the 3 types of linear transformations?

              The three main types of linear transformations are: 1) Rotations, which change the orientation of vectors, 2) Reflections, which flip vectors across a line or plane, and 3) Scaling, which changes the magnitude of vectors while preserving their direction.

            3. How do you prove a transformation is linear?

              To prove a transformation is linear, you need to show that it satisfies both the additivity and homogeneity properties for all vectors in the domain and all scalars. This often involves algebraic manipulation and careful analysis of the transformation's behavior.

            4. What is the formula for linear transformation?

              The general formula for a linear transformation T from R^n to R^m is: T(x) = Ax, where A is an m × n matrix and x is a vector in R^n. This matrix A completely defines the linear transformation.

            5. What are some applications of linear transformations?

              Linear transformations have numerous applications, including: computer graphics (for rotations, scaling, and projections), signal processing (Fourier and wavelet transforms), data analysis (Principal Component Analysis), and quantum mechanics (describing state transformations).

            Prerequisite Topics for Understanding Properties of Linear Transformation

            To fully grasp the properties of linear transformation, it's crucial to have a solid foundation in several key mathematical concepts. Understanding these prerequisite topics will significantly enhance your ability to comprehend and apply linear transformations effectively.

            One of the fundamental concepts you should master is the properties of matrix scalar multiplication. This knowledge forms the basis for understanding how linear transformations scale vectors and matrices. Similarly, being familiar with representing a linear system as a matrix is essential, as it provides insight into how linear transformations can be represented and manipulated mathematically.

            Another important prerequisite is understanding rotational symmetry and transformations. This geometric perspective helps visualize how linear transformations affect shapes and spaces. Additionally, knowing the three types of matrix row operations is crucial for manipulating matrices, which is often necessary when working with linear transformations.

            A strong grasp of solving two-step linear equations using addition and subtraction provides the algebraic foundation needed to work with linear transformations. This skill is particularly useful when dealing with the equations that arise from applying linear transformations to vectors and matrices.

            Understanding vector components is also crucial, as linear transformations often involve manipulating individual components of vectors. This knowledge ties directly into the concept of linear combination and vector equations in R^n, which is fundamental to understanding how linear transformations operate on vector spaces.

            By mastering these prerequisite topics, you'll be well-prepared to delve into the properties of linear transformation. Each concept builds upon the others, creating a comprehensive understanding of how linear transformations work, their applications, and their significance in various fields of mathematics and science. Remember, a strong foundation in these areas will not only make learning about linear transformations easier but also more intuitive and applicable to real-world problems.

            Recall from last chapter the 2 properties of Ax:
            1. A(u+v) = Au + Av
            2. A(cu) = c(Au)

            where u and v are vectors in \Bbb{R}^n and c is a scalar.

            Now the properties of linear transformation are very similar. A linear transformation preserves the operations of vector addition/subtraction and scalar multiplication. In other words, if T is linear, then:
            1. T(u+v) = T(u) + T(v)
            2. T(cu) = cT(u)
            3. T(\vec{0}) = \vec{0}

            We can even combine properties 1 and 2 to show that:

            T(cu+dv) = cT(u) + dT(v)

            where u, v are vectors and c, d are scalars. Conversely, if this equation holds for all such vectors and scalars, then T must be linear.

            What if you have more than 2 vectors and 2 scalars, say p vectors and p scalars? Then we can generalize this equation and say that:

            T(c_1 v_1 + c_2 v_2 + \cdots + c_p v_p) = c_1 T(v_1) + c_2 T(v_2) + \cdots + c_p T(v_p)

            Again, if this equation holds, then T must be linear.