Orthogonal projections

Intro Lessons
  1. Orthogonal Projections Overview:
  2. The Orthogonal Decomposition Theorem
    • Write $y$ as the sum of two vectors $\hat{y}$ and $z$
    • Orthogonal basis → $\hat{y}=\frac{y \cdot v_1}{v_1 \cdot v_1}v_1 + \cdots + \frac{y \cdot v_p}{v_p \cdot v_p}v_p$
    • Orthonormal basis → $\hat{y}=(y\cdot v_1)v_1+\cdots +(y\cdot v_p)v_p$
    • $z=y-\hat{y}$
  3. Property of Orthogonal Projections
    • $\text{proj}_S\,y=y$
    • Only works if $y$ is in $S$
  4. The Best Approximation Theorem
    • What is the point closest to $y$ in $S$? $\hat{y}$!
    • Reason why: $\lVert y-\hat{y} \rVert < \lVert y-u \rVert$
    • The distance between $y$ and $\hat{y}$
Example Lessons
  1. The Orthogonal Decomposition Theorem
     Assume that {$v_1,v_2,v_3$} is an orthogonal basis for $\Bbb{R}^n$. Write $y$ as the sum of two vectors, one in Span{$v_1$} and one in Span{$v_2,v_3$}, for the given vectors $y, v_1, v_2, v_3$.
  2. Verify that {$v_1,v_2$} is an orthonormal set, and then find the orthogonal projection of $y$ onto Span{$v_1,v_2$}.
  3. Best Approximation
     Find the best approximation of $y$ by vectors of the form $c_1 v_1+c_2 v_2$, for the given vectors $y, v_1, v_2$.
  4. Finding the Closest Point and Distance
    • Find the closest point to $y$ in the subspace $S$ spanned by $v_1$ and $v_2$.
    • Find the closest distance from $y$ to $S=$ Span{$v_1,v_2$} for the given vectors.

            Introduction to Orthogonal Projections

            Welcome to our exploration of orthogonal projections, a fundamental concept in linear algebra applications! Orthogonal projections are crucial tools that allow us to find the closest point in a subspace to a given vector. This concept is essential in various applications, from data analysis to computer graphics. Our introduction video provides a visual and intuitive understanding of orthogonal projections, making it easier to grasp this abstract concept. As we delve deeper, you'll see how orthogonal projections help us decompose vectors, solve least squares problems, and understand the geometry of vector spaces. They're like mathematical spotlights, illuminating the most important parts of our data in high-dimensional spaces. By mastering orthogonal projections, you'll gain a powerful technique for simplifying complex problems in linear algebra applications. So, let's dive in and discover how these projections can transform your understanding of vector spaces and subspaces!

            Orthogonal Projection onto a Subspace

            Understanding Orthogonal Projection

            Orthogonal projection is a fundamental concept in linear algebra that extends the idea of projecting a vector onto another vector to projecting onto an entire subspace. This powerful technique has numerous applications in mathematics, physics, and engineering.

            Projection onto a Vector vs. Subspace

            When we project a vector Y onto another vector u, we find the component of Y that lies along u. This results in a scalar multiple of u. However, when we project Y onto a subspace S, we find the vector in S that is closest to Y. This projection, often denoted as Y hat, is itself a vector within the subspace S.

            The Orthogonal Decomposition Theorem

            The orthogonal decomposition theorem states that any vector Y can be uniquely expressed as the sum of its projection onto a subspace S and a vector orthogonal to S. Mathematically, we can write this as:

            Y = Y hat + Y perp

            Where Y hat is the projection of Y onto S, and Y perp is orthogonal to every vector in S. This theorem has profound implications for understanding vector decomposition and solving various problems in linear algebra.

            Calculating the Orthogonal Projection

            To find the orthogonal projection of Y onto a subspace S, we need an orthogonal basis for S. Let's say we have an orthogonal basis {u1, u2, ..., uk} for S. The projection Y hat can be calculated as:

            Y hat = (Y · u1 / ||u1||^2) u1 + (Y · u2 / ||u2||^2) u2 + ... + (Y · uk / ||uk||^2) uk

            This formula essentially breaks down Y into its components along each basis vector of S and then reconstructs these components within S.
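To make the calculation concrete, here is a minimal NumPy sketch (the helper name project_onto_subspace and the sample vectors are illustrative choices, not from the lesson) that assembles Y hat term by term from an orthogonal basis and checks that Y perp = Y - Y hat is orthogonal to every basis vector:

```python
import numpy as np

def project_onto_subspace(y, orthogonal_basis):
    """Orthogonal projection of y onto the span of an *orthogonal* basis."""
    y = np.asarray(y, dtype=float)
    y_hat = np.zeros_like(y)
    for u in orthogonal_basis:
        u = np.asarray(u, dtype=float)
        y_hat += (y @ u) / (u @ u) * u      # (Y · u / ||u||^2) u, one term per basis vector
    return y_hat

Y = np.array([3.0, 4.0, 5.0])
basis = [np.array([1.0, 1.0, 0.0]), np.array([1.0, -1.0, 0.0])]   # orthogonal basis of the xy-plane

Y_hat = project_onto_subspace(Y, basis)
Y_perp = Y - Y_hat
print(Y_hat, Y_perp)                         # [3. 4. 0.] [0. 0. 5.]
print([float(Y_perp @ u) for u in basis])    # [0.0, 0.0] -> Y_perp is orthogonal to S
```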

            Visual Representation

            Imagine a three-dimensional space with a two-dimensional plane S. The orthogonal projection of a vector Y onto S would be the shadow cast by Y if light were shining perpendicular to S. The difference between Y and its projection (Y perp) would be perpendicular to the plane.

            Applications and Implications

            The orthogonal decomposition theorem has numerous practical applications:

            • Least Squares Approximation: In data fitting, we often project data points onto a subspace to find the best-fit line or plane.
            • Signal Processing: Decomposing signals into orthogonal components helps in filtering and analysis.
            • Quantum Mechanics: The projection of quantum states onto eigenspaces is fundamental to measurement theory.

            Example: Projecting onto a Plane

            Consider a vector Y = (3, 4, 5) in R³ and a plane S defined by the equation x + y + z = 0. To find the projection of Y onto S:

1. Find an orthogonal basis for S, e.g., u1 = (1, -1, 0) and u2 = (1, 1, -2). Both vectors satisfy x + y + z = 0, and u1 · u2 = 0.
2. Calculate the projection of Y onto each basis vector.
3. Sum these projections to get Y hat.

The result is Y hat = (-1, 0, 1), the closest point in the plane to Y, and Y - Y hat = (4, 4, 4), which is perpendicular to the plane.
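If you want to verify this example numerically, a short NumPy snippet (using the orthogonal basis chosen above) might look like this:

```python
import numpy as np

Y = np.array([3.0, 4.0, 5.0])
u1 = np.array([1.0, -1.0, 0.0])   # orthogonal basis for the plane x + y + z = 0
u2 = np.array([1.0, 1.0, -2.0])

# Step 2: project onto each basis vector; Step 3: sum the pieces
Y_hat = (Y @ u1) / (u1 @ u1) * u1 + (Y @ u2) / (u2 @ u2) * u2
Y_perp = Y - Y_hat

print(Y_hat)                      # [-1.  0.  1.] -- the closest point in the plane
print(Y_perp @ u1, Y_perp @ u2)   # both 0: Y - Y_hat is perpendicular to the plane
```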

            Conclusion

Orthogonal projection onto a subspace is a powerful tool in linear algebra. It allows us to decompose vectors into components that lie within a subspace and those orthogonal to it. This concept, formalized in the orthogonal decomposition theorem, provides a framework for understanding vector decomposition and solving complex problems in various fields of mathematics and science. By visualizing projections as shadows cast onto a subspace, you can build the geometric intuition needed to apply them with confidence.

            Formulas for Orthogonal Projection

            Orthogonal projection is a fundamental concept in linear algebra, playing a crucial role in various applications such as data analysis, signal processing, and computer graphics. In this section, we'll explore the formulas for orthogonal projection onto a subspace, focusing on cases involving orthogonal and orthonormal bases.

            General Formula for Orthogonal Projection

Let's start with the general formula for orthogonal projection onto a subspace W. Given a vector v and any basis {u1, u2, ..., uk} for W, collect the basis vectors as the columns of a matrix A. The orthogonal projection of v onto W is:

projW(v) = A(A^T A)^(-1)A^T v

Here A^T denotes the transpose of A, and · below denotes the inner product (the dot product in Euclidean space). This matrix formula works for any basis, but it reduces to much simpler sums when the basis is orthogonal or orthonormal.

            Orthogonal Basis Case

When the basis {u1, u2, ..., uk} is orthogonal (but not necessarily orthonormal), the formula simplifies to a sum of one-dimensional projections:

projW(v) = ((v · u1) / ||u1||^2) u1 + ((v · u2) / ||u2||^2) u2 + ... + ((v · uk) / ||uk||^2) uk

Here, ||ui|| represents the magnitude (length) of the basis vector ui, so ||ui||^2 = ui · ui.

            Orthonormal Basis Case

            For an orthonormal basis (orthogonal and unit length vectors), the formula further simplifies to:

projW(v) = (v · u1) u1 + (v · u2) u2 + ... + (v · uk) uk

            Step-by-Step Guide for Using Projection Formulas

            1. Identify the subspace W and its basis vectors.
            2. Determine if the basis is orthogonal, orthonormal, or neither.
            3. Choose the appropriate formula based on the basis type.
            4. Calculate the necessary inner products and vector magnitudes.
            5. Apply the formula to compute the projection.
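This guide can be mirrored in code. The sketch below is a hedged illustration (the helper names classify_basis and project are made up for this example, not a standard API): it inspects the Gram matrix to decide which kind of basis it was given and then applies the matching formula, falling back to the projection-matrix form P = A(A^T A)^(-1)A^T for a general basis.

```python
import numpy as np

def classify_basis(basis, tol=1e-10):
    """Return 'orthonormal', 'orthogonal', or 'general' for a list of basis vectors."""
    B = np.array(basis, dtype=float)
    G = B @ B.T                                           # Gram matrix of pairwise inner products
    off_diag_zero = np.allclose(G - np.diag(np.diag(G)), 0.0, atol=tol)
    unit_length = np.allclose(np.diag(G), 1.0, atol=tol)
    if off_diag_zero and unit_length:
        return "orthonormal"
    if off_diag_zero:
        return "orthogonal"
    return "general"

def project(v, basis):
    """Orthogonal projection of v onto span(basis), using the simplest valid formula."""
    v = np.asarray(v, dtype=float)
    B = [np.asarray(u, dtype=float) for u in basis]
    kind = classify_basis(B)
    if kind == "orthonormal":
        return sum((v @ u) * u for u in B)                # (v · ui) ui
    if kind == "orthogonal":
        return sum((v @ u) / (u @ u) * u for u in B)      # ((v · ui) / ||ui||^2) ui
    A = np.column_stack(B)                                # general basis: A (A^T A)^-1 A^T v
    return A @ np.linalg.solve(A.T @ A, A.T @ v)

# Matches the orthogonal-basis example worked out below:
print(project([2, 3, 1], [[1.0, 0.0, 1.0], [0.0, 2.0, 0.0]]))   # [1.5 3.  1.5]
```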

            Example: Orthogonal Basis

Let's project v = (2, 3, 1) onto W spanned by u1 = (1, 0, 1) and u2 = (0, 2, 0).

1. Calculate inner products: v · u1 = 3, v · u2 = 6
2. Calculate magnitudes: ||u1||^2 = 2, ||u2||^2 = 4
3. Apply the formula: projW(v) = (3/2)(1, 0, 1) + (6/4)(0, 2, 0)
4. Simplify: projW(v) = (1.5, 3, 1.5)

            Example: Orthonormal Basis

Now, let's project v = (1, 2, 3) onto W spanned by u1 = (1/√2, 1/√2, 0) and u2 = (0, 0, 1).

1. Verify orthonormality: u1 · u2 = 0 and ||u1|| = ||u2|| = 1
2. Calculate inner products: v · u1 = 3/√2, v · u2 = 3
3. Apply the formula: projW(v) = (3/√2)(1/√2, 1/√2, 0) + 3(0, 0, 1)
4. Simplify: projW(v) = (3/2, 3/2, 3)

              Properties of Orthogonal Projections

              Orthogonal projections are fundamental concepts in linear algebra with numerous applications in mathematics, physics, and engineering. Understanding their properties is crucial for solving various problems involving vector spaces. This section will explore the key characteristics of orthogonal projections, including the special case when a vector is already in the subspace, and delve into the best approximation theorem and its significance.

              One of the most important properties of orthogonal projections is that they map vectors onto a subspace in a way that minimizes the distance between the original vector and its projection. This property makes orthogonal projections particularly useful in approximation problems and data analysis. When we project a vector v onto a subspace W, we obtain a unique vector w in W that is closest to v.

              Consider the case when a vector is already in the subspace. In this scenario, the orthogonal projection of the vector onto that subspace is simply the vector itself. This property highlights the idempotent nature of orthogonal projections: applying the projection multiple times yields the same result as applying it once. Mathematically, if P is the projection matrix onto a subspace W, and v is a vector in W, then Pv = v.
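Here is a small numeric illustration of this idempotent behavior; the subspace, spanned by (1, 0, 1) and (0, 1, 0), is an arbitrary choice for the demo:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 0.0]])               # columns span a plane W in R^3
P = A @ np.linalg.inv(A.T @ A) @ A.T     # projection matrix onto W

v = 2 * A[:, 0] + 1 * A[:, 1]            # a vector that already lies in W
print(np.allclose(P @ v, v))             # True: Pv = v when v is in W
print(np.allclose(P @ P, P))             # True: projecting twice changes nothing
```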

The best approximation theorem, also known as the orthogonal decomposition theorem, is a cornerstone result in the study of orthogonal projections. It states that for any vector v in a vector space V and any subspace W of V, the vector v can be uniquely decomposed into the sum of two vectors: its orthogonal projection onto W and a vector orthogonal to W. This decomposition can be expressed as v = w + w_perp, where w is the projection of v onto W, and w_perp is orthogonal to W.

              The significance of the best approximation theorem lies in its wide-ranging applications. In data analysis, it provides a theoretical foundation for least squares regression, allowing us to find the best linear approximation to a set of data points. In signal processing, it enables the separation of signals into different frequency components. In computer graphics, it facilitates the projection of 3D objects onto 2D planes for rendering.

              To visualize these concepts, imagine a vector in three-dimensional space being projected onto a two-dimensional plane. The projection creates a right triangle, where the original vector is the hypotenuse, the projected vector lies in the plane, and the difference vector is perpendicular to the plane. This geometric interpretation helps illustrate why the projected vector is the closest point in the subspace to the original vector.

              Another important property of orthogonal projections is their linearity. If P is an orthogonal projection, then for any vectors u and v and scalar c, we have P(u + v) = Pu + Pv and P(cu) = cPu. This property allows us to break down complex projections into simpler components, making calculations more manageable in many applications.

The effect of orthogonal projections on length is also worth noting. A projection never increases the length of a vector, and it generally shortens it, but it preserves the length of any vector already in the subspace. This behavior is useful in many physical applications, such as tracking how the energy of a signal splits across orthogonal components.

              In conclusion, orthogonal projections possess a rich set of properties that make them indispensable tools in various fields. The best approximation theorem, in particular, provides a powerful framework for understanding how vectors can be decomposed and approximated within subspaces. By leveraging these properties, researchers and practitioners can develop more efficient algorithms, improve data analysis techniques, and gain deeper insights into the structure of vector spaces.

              Solving Orthogonal Projection Problems

              Orthogonal projection problems are fundamental in linear algebra and have numerous applications in various fields. In this section, we'll explore detailed examples of solving these problems, with a focus on finding the closest point in a subspace to a given vector. We'll walk through the problem-solving process step-by-step, explaining the reasoning behind each step and providing tips for identifying the type of basis and choosing the appropriate formula.

              Let's start with a common problem: finding the closest point in a subspace to a given vector. This type of problem is crucial in data analysis, signal processing, and optimization.

              Example 1: Finding the closest point in a subspace

              Problem: Given the subspace S spanned by vectors u1 = (1, 1, 0) and u2 = (0, 1, 1), find the closest point in S to the vector v = (2, 3, 4).

Step 1: Determine if the basis is orthogonal or orthonormal
First, we need to check if the given basis vectors are orthogonal or orthonormal. In this case, u1 · u2 = 1, so they are not orthogonal. We'll need to use the general projection formula.

Step 2: Set up the projection matrix
The projection matrix P is given by P = A(A^T A)^(-1)A^T, where A is the matrix with columns u1 and u2.

A = [1 0; 1 1; 0 1]

Step 3: Calculate A^T A and its inverse
A^T A = [2 1; 1 2]
(A^T A)^(-1) = (1/3)[2 -1; -1 2]

Step 4: Calculate the projection matrix P
P = A(A^T A)^(-1)A^T = (1/3)[2 1 -1; 1 2 1; -1 1 2]

Step 5: Apply the projection matrix to v
proj_S(v) = Pv = (1, 4, 3)

Therefore, the closest point in S to v is (1, 4, 3).
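The same five steps can be carried out in a few lines of NumPy (a sketch using only basic linear-algebra calls):

```python
import numpy as np

u1, u2 = np.array([1.0, 1.0, 0.0]), np.array([0.0, 1.0, 1.0])
v = np.array([2.0, 3.0, 4.0])

A = np.column_stack([u1, u2])             # Step 2: basis vectors as columns of A
P = A @ np.linalg.inv(A.T @ A) @ A.T      # Steps 3-4: P = A (A^T A)^-1 A^T
proj = P @ v                              # Step 5: apply P to v

print(proj)                               # [1. 4. 3.] -- the closest point in S
print((v - proj) @ u1, (v - proj) @ u2)   # both ~0: v - proj is orthogonal to S
```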

              Example 2: Using an orthonormal basis

Problem: Find the closest point in the subspace S spanned by u1 = (1/√2, 1/√2, 0) and u2 = (1/√2, -1/√2, 0) to v = (1, 1, 1).

Step 1: Verify orthonormality
u1 · u2 = 1/2 - 1/2 = 0 and ||u1|| = ||u2|| = 1, so the basis is orthonormal.

Step 2: Use the simplified projection formula
For orthonormal bases, we can use proj_S(v) = (v · u1)u1 + (v · u2)u2

Step 3: Calculate dot products
v · u1 = 1/√2 + 1/√2 = √2
v · u2 = 1/√2 - 1/√2 = 0

Step 4: Compute the projection
proj_S(v) = √2(1/√2, 1/√2, 0) + 0(1/√2, -1/√2, 0) = (1, 1, 0)

The closest point in S to v is (1, 1, 0).
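A short NumPy check of this example, with the 1/√2 entries written explicitly:

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
v = np.array([1.0, 1.0, 1.0])

# Orthonormal basis: proj_S(v) = (v · u1) u1 + (v · u2) u2
proj = (v @ u1) * u1 + (v @ u2) * u2
print(np.round(proj, 10))    # [1. 1. 0.]
```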

              Tips for problem-solving:

1. Identify the basis type: Always check if the given basis is orthogonal or orthonormal. This determines which projection formula to use: the simple dot-product sum for an orthonormal basis, the weighted sum for an orthogonal basis, or the projection matrix P = A(A^T A)^(-1)A^T for a general basis.
2. Verify your answer: the difference v - proj_S(v) should be orthogonal to every basis vector of S.

              Applications of Orthogonal Projections

              Orthogonal projections, a fundamental concept in linear algebra, have numerous real-world applications across various fields. In computer graphics, these projections play a crucial role in rendering 3D objects on 2D screens. When you play a video game or watch a 3D animated movie, orthogonal projections are working behind the scenes to accurately represent depth and perspective. For instance, in architectural visualization, orthogonal projections help create precise floor plans and elevations from 3D building models.

              In signal processing, orthogonal projections are essential for filtering and noise reduction. Imagine you're on a phone call in a noisy environment. The voice signal you want to hear is mixed with background noise. Signal processing algorithms use orthogonal projections to separate the desired voice signal from the unwanted noise, improving call quality. This technique is also used in audio production to clean up recordings and enhance sound quality.

              Data analysis is another field where orthogonal projections shine. In machine learning and statistics, techniques like Principal Component Analysis (PCA) rely heavily on orthogonal projections. PCA helps reduce the dimensionality of complex datasets while preserving important information. For example, in facial recognition systems, PCA can be used to identify the most distinctive features of a face, making the recognition process more efficient and accurate.

              In image compression, orthogonal projections help reduce file sizes without significant loss of quality. When you send a photo via messaging apps, orthogonal projection-based algorithms compress the image, making it easier to transmit while maintaining visual fidelity. Similarly, in medical imaging, these projections are used to reconstruct 3D images from 2D X-ray or CT scan slices, aiding in accurate diagnosis and treatment planning.

              The concepts learned about orthogonal projections directly translate to solving practical problems. For instance, in robotics, orthogonal projections help in path planning and obstacle avoidance. A robot navigating a warehouse uses these projections to understand its environment and plan the most efficient route. In weather forecasting, orthogonal projections assist in analyzing complex atmospheric data, leading to more accurate predictions.

              Even in everyday scenarios, we encounter applications of orthogonal projections. When you use your smartphone's camera to scan a QR code, the app uses orthogonal projections to correct for perspective and accurately read the code, regardless of the angle at which you hold your phone. These examples demonstrate how the seemingly abstract concept of orthogonal projections underpins many technologies we use daily, making our lives easier and more efficient.

              Conclusion

              In this lesson, we explored the crucial concept of orthogonal projections in linear algebra. We learned that orthogonal projections allow us to find the closest point in a subspace to a given vector, a fundamental operation in many applications. Key points covered include the definition of orthogonal projections, their properties, and how to calculate them using the projection formula. Understanding orthogonal projections is essential for grasping more advanced topics in linear algebra and their real-world applications. To solidify your knowledge, it's crucial to practice solving a variety of problems involving orthogonal projections. We encourage you to explore additional resources, such as online tutorials and textbook exercises, to deepen your understanding. Remember, mastering these concepts will provide a strong foundation for future studies in mathematics and related fields. Don't hesitate to engage with supplementary materials or seek help if you encounter difficulties. Your efforts in practicing and exploring orthogonal projections will undoubtedly pay off in your linear algebra journey.

Orthogonal Projections Overview: The Orthogonal Decomposition Theorem

• Write $y$ as the sum of two vectors $\hat{y}$ and $z$
• Orthogonal basis → $\hat{y}=\frac{y \cdot v_1}{v_1 \cdot v_1}v_1 + \cdots + \frac{y \cdot v_p}{v_p \cdot v_p}v_p$
• Orthonormal basis → $\hat{y}=(y\cdot v_1)v_1+\cdots +(y\cdot v_p)v_p$
• $z=y-\hat{y}$

              Step 1: Understanding the Orthogonal Decomposition Theorem

The Orthogonal Decomposition Theorem states that any vector $y$ in $\Bbb{R}^n$ can be decomposed into the sum of two vectors: $\hat{y}$ and $z$. Here, $\hat{y}$ is the orthogonal projection of $y$ onto a subspace $S$, and $z$ is the component of $y$ that is orthogonal to $S$. Mathematically, this is expressed as:
$y = \hat{y} + z$
Where:

• $\hat{y}$ is in the subspace $S$
• $z$ is orthogonal to $S$

Step 2: Identifying the Subspace S

In this context, we are not projecting $y$ onto a single vector $v$, but rather onto a subspace $S$. This subspace is spanned by a set of vectors: for example, if $S$ is spanned by vectors $v_1, v_2, \ldots, v_p$, then any vector in $S$ can be written as a linear combination of these vectors.

              Step 3: Orthogonal Basis

If the set of vectors spanning $S$ forms an orthogonal basis, the orthogonal projection $\hat{y}$ of $y$ onto $S$ can be calculated using the formula:
$\hat{y} = \frac{y \cdot v_1}{v_1 \cdot v_1}v_1 + \frac{y \cdot v_2}{v_2 \cdot v_2}v_2 + \cdots + \frac{y \cdot v_p}{v_p \cdot v_p}v_p$
              This formula extends the projection formula for a single vector to multiple vectors in the orthogonal basis.

              Step 4: Orthonormal Basis

If the set of vectors spanning $S$ forms an orthonormal basis, the orthogonal projection $\hat{y}$ of $y$ onto $S$ can be calculated using a simpler formula:
$\hat{y} = (y \cdot v_1)v_1 + (y \cdot v_2)v_2 + \cdots + (y \cdot v_p)v_p$
              This formula is simpler because the vectors in the orthonormal basis have unit length, eliminating the need to divide by the dot product of each vector with itself.

              Step 5: Calculating the Orthogonal Projection

To find the orthogonal projection of $y$ onto the subspace $S$, follow these steps:

1. Determine whether the basis vectors of $S$ form an orthogonal or orthonormal basis.
2. Use the appropriate formula based on the type of basis (orthogonal or orthonormal).
3. Calculate the dot products and perform the necessary multiplications and additions to find $\hat{y}$.

              Step 6: Example Calculation

Let's consider an example where $v_1$ and $v_2$ form an orthogonal basis for the subspace $S$. We want to find the orthogonal projection of $y$ onto the span of $v_1$ and $v_2$.
Given:

• $y = [2, 1]$
• $v_1 = [1, 3]$
• $v_2 = [-6, 2]$

The orthogonal projection $\hat{y}$ is calculated as follows:
$\hat{y} = \frac{y \cdot v_1}{v_1 \cdot v_1}v_1 + \frac{y \cdot v_2}{v_2 \cdot v_2}v_2$
Calculate the dot products:
• $y \cdot v_1 = 2 \cdot 1 + 1 \cdot 3 = 5$
• $v_1 \cdot v_1 = 1 \cdot 1 + 3 \cdot 3 = 10$
• $y \cdot v_2 = 2 \cdot (-6) + 1 \cdot 2 = -10$
• $v_2 \cdot v_2 = (-6) \cdot (-6) + 2 \cdot 2 = 40$
Substitute these values into the formula:
$\hat{y} = \frac{5}{10}v_1 + \frac{-10}{40}v_2$
Simplify the fractions:
$\hat{y} = \frac{1}{2}v_1 - \frac{1}{4}v_2$
Multiply the vectors by the scalars:
• $\frac{1}{2}v_1 = \frac{1}{2}[1, 3] = [\frac{1}{2}, \frac{3}{2}]$
• $-\frac{1}{4}v_2 = -\frac{1}{4}[-6, 2] = [\frac{3}{2}, -\frac{1}{2}]$
Add the resulting vectors:
$\hat{y} = [\frac{1}{2}, \frac{3}{2}] + [\frac{3}{2}, -\frac{1}{2}] = [2, 1]$
Therefore, the orthogonal projection of $y$ onto the subspace $S$ is $\hat{y} = [2, 1]$. (Here $\hat{y}$ equals $y$ itself because $v_1$ and $v_2$ span all of $\Bbb{R}^2$, so $y$ already lies in $S$.)
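The same arithmetic can be confirmed in a few lines of NumPy (a sketch using the vectors above):

```python
import numpy as np

y  = np.array([2.0, 1.0])
v1 = np.array([1.0, 3.0])
v2 = np.array([-6.0, 2.0])

# Orthogonal-basis formula: y_hat = (y·v1)/(v1·v1) v1 + (y·v2)/(v2·v2) v2
y_hat = (y @ v1) / (v1 @ v1) * v1 + (y @ v2) / (v2 @ v2) * v2
z = y - y_hat

print(y_hat)   # [2. 1.] -- equals y, because v1 and v2 span all of R^2
print(z)       # [0. 0.]
```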

              FAQs

              Here are some frequently asked questions about orthogonal projections:

              1. What is an orthogonal projection?

              An orthogonal projection is a linear transformation that projects a vector onto a subspace along a direction perpendicular to that subspace. It finds the closest point in the subspace to the given vector, minimizing the distance between the original vector and its projection.

              2. How do you calculate an orthogonal projection?

To calculate an orthogonal projection of a vector v onto a subspace W with an orthonormal basis {u1, u2, ..., uk}, use the formula: projW(v) = (v · u1)u1 + (v · u2)u2 + ... + (v · uk)uk. For non-orthonormal bases, you'll need to use a more general formula or create a projection matrix.

              3. What is the difference between orthogonal and orthonormal bases?

              An orthogonal basis consists of vectors that are perpendicular to each other, while an orthonormal basis is an orthogonal basis where all vectors also have unit length (magnitude of 1). Orthonormal bases simplify projection calculations.

              4. What is the best approximation theorem?

              The best approximation theorem, also known as the orthogonal decomposition theorem, states that any vector v can be uniquely decomposed into the sum of its orthogonal projection onto a subspace W and a vector orthogonal to W. This decomposition minimizes the distance between v and the subspace W.

              5. What are some real-world applications of orthogonal projections?

Orthogonal projections have numerous applications, including:

• Data analysis and dimensionality reduction (e.g., Principal Component Analysis)
• Signal processing and noise reduction
• Computer graphics and image rendering
• Least squares approximation in statistics
• Quantum mechanics, for projecting quantum states

              Prerequisite Topics

              Understanding orthogonal projections requires a solid foundation in several mathematical concepts, with one crucial prerequisite being applications of linear equations. This fundamental topic serves as a cornerstone for grasping the intricacies of orthogonal projections and their significance in various fields.

Orthogonal projections, a concept deeply rooted in linear algebra and geometry, rely heavily on the principles and applications of linear equations. By mastering applications of linear equations, students gain the necessary tools to comprehend how vectors can be projected onto subspaces, which is the essence of orthogonal projections.

              The connection between linear equations and orthogonal projections becomes evident when we consider that projections often involve solving systems of linear equations. For instance, when finding the orthogonal projection of a vector onto a subspace, we typically use the formula that requires manipulating linear equations. This process becomes much more intuitive and manageable for those who have a strong grasp of linear algebra applications.

              Moreover, understanding how to apply linear equations in various contexts prepares students for the practical aspects of orthogonal projections. In fields such as computer graphics, signal processing, and data analysis, orthogonal projections play a crucial role. The ability to translate real-world problems into linear equations and then use those equations to perform projections is a skill that stems directly from a solid foundation in linear equation applications.

              Students who have mastered the applications of linear equations will find it easier to visualize and compute orthogonal projections. They will be better equipped to understand concepts like the dot product, vector spaces, and basis vectors, all of which are integral to working with orthogonal projections. Furthermore, this prerequisite knowledge helps in grasping more advanced topics within linear algebra, such as least squares approximations and the Gram-Schmidt process, which are extensions of orthogonal projection principles.

              In conclusion, the importance of understanding applications of linear equations as a prerequisite to orthogonal projections cannot be overstated. It provides the necessary mathematical framework and problem-solving skills that allow students to approach orthogonal projections with confidence and clarity. By investing time in mastering this fundamental topic, students set themselves up for success in understanding not only orthogonal projections but also a wide range of advanced mathematical concepts that build upon these foundational principles.

            The Orthogonal Decomposition Theorem
Let $S$ be a subspace of $\Bbb{R}^n$. Then each vector $y$ in $\Bbb{R}^n$ can be written as:

$y=\hat{y}+z$

where $\hat{y}$ is in $S$ and $z$ is in $S^{\perp}$. Note that $\hat{y}$ is the orthogonal projection of $y$ onto $S$.

If {$v_1,\cdots ,v_p$} is an orthogonal basis of $S$, then

$\text{proj}_{S}\,y=\hat{y}=\frac{y \cdot v_1}{v_1 \cdot v_1}v_1 + \frac{y \cdot v_2}{v_2 \cdot v_2}v_2 + \cdots + \frac{y \cdot v_p}{v_p \cdot v_p}v_p$

However, if {$v_1,\cdots ,v_p$} is an orthonormal basis of $S$, then

$\text{proj}_{S}\,y=\hat{y}=(y \cdot v_1)v_1+(y \cdot v_2)v_2 + \cdots + (y \cdot v_p)v_p$

            Property of Orthogonal Projection
If {$v_1,\cdots ,v_p$} is an orthogonal basis for $S$ and $y$ happens to be in $S$, then
$\text{proj}_{S}\,y=y$

In other words, if $y$ is in $S=$ Span{$v_1,\cdots ,v_p$}, then $\text{proj}_{S}\,y=y$.

            The Best Approximation Theorem
Let $S$ be a subspace of $\Bbb{R}^n$. Also, let $y$ be a vector in $\Bbb{R}^n$, and $\hat{y}$ be the orthogonal projection of $y$ onto $S$. Then $\hat{y}$ is the closest point in $S$ to $y$, because

$\lVert y- \hat{y} \rVert < \lVert y-u \rVert$

for all vectors $u$ in $S$ that are distinct from $\hat{y}$.
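To see the inequality numerically, one can compare the distance from $y$ to $\hat{y}$ with the distance from $y$ to a few other points of $S$. The subspace and vectors in the sketch below are an arbitrary illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
v1 = np.array([1.0, 1.0, 0.0])      # orthogonal basis for a plane S in R^3
v2 = np.array([1.0, -1.0, 2.0])
y = np.array([3.0, 0.0, 1.0])

# Orthogonal projection of y onto S, then the best (smallest) distance
y_hat = (y @ v1) / (v1 @ v1) * v1 + (y @ v2) / (v2 @ v2) * v2
best = np.linalg.norm(y - y_hat)

# Any other point u of S (a random combination of v1 and v2) is farther from y
for _ in range(3):
    c1, c2 = rng.normal(size=2)
    u = c1 * v1 + c2 * v2
    print(best < np.linalg.norm(y - u))   # True whenever u is distinct from y_hat
```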