SVD in action

This page is meant to build intuition about a beautiful and useful topic from linear algebra, the Singular Value Decomposition (SVD). Sadly, SVD can seem mysterious: it has a weird name, uses Greek-letter notation, and is often left out of introductory classes. Yet the basic idea behind SVD is simple and elegant: it's just a human-friendly coordinate system for matrices.

To illustrate this, we'll visualize how SVD works for \( 2 \times 2 \) matrices. Of course this is just a special case, but it contains all the basic ideas you need to understand SVD in higher dimensions.

1. Visualizing 2D Space

A lot has been written about visualizing three dimensions and beyond. But let's talk for a moment about visualizing 2D space!

The image at left shows two things: (1) a grid representing a plane, and (2) a coin, representing the unit disk.

We can use this simple visualization as a guide for understanding how matrices transform space.

2. Matrices and Transformations of Space


Matrices encode linear transformations of space. The first column of this matrix tells you where the vector (1, 0) goes. The second column tells you where the vector (0, 1) goes.
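If you'd like to check this numerically, here's a minimal numpy sketch (the matrix below is an arbitrary example, not one from the demo):

```python
import numpy as np

# An arbitrary example matrix.
a = np.array([[2.0, 1.0],
              [0.5, 3.0]])

e1 = np.array([1.0, 0.0])  # the vector (1, 0)
e2 = np.array([0.0, 1.0])  # the vector (0, 1)

print(a @ e1)  # [2.  0.5] -- the first column of a
print(a @ e2)  # [1.  3. ] -- the second column of a
```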

Try changing the numbers in the matrix to see the effect on the plane. Or drag the red and orange vectors in the plane to create a new transformation, and see what matrix it defines.

As you play with the linear transformation above, you might notice that the geometry follows certain patterns. The unit disk may be stretched or squeezed, rotated or flipped, but it always ends up mapped to an ellipse—never anything weirder, like a starfish or an inkblot.

The singular value decomposition is a way of formalizing this intuition: every matrix can be described as a combination of a few simple operations.
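The ellipse claim itself can be checked numerically. Here's a sketch, assuming the matrix is invertible: if \( p \) lies on the unit circle and \( q = Ap \), then \( q \) satisfies \( q^t (A A^t)^{-1} q = 1 \), which is the equation of an ellipse.

```python
import numpy as np

# An arbitrary invertible example matrix.
a = np.array([[2.0, 1.0],
              [0.5, 3.0]])

# Sample points on the unit circle and map them through a.
t = np.linspace(0.0, 2.0 * np.pi, 100)
circle = np.vstack([np.cos(t), np.sin(t)])  # shape (2, 100)
image = a @ circle

# Every image point q should satisfy q^t (a a^t)^{-1} q = 1.
m = np.linalg.inv(a @ a.T)
values = np.einsum('ij,ik,kj->j', image, m, image)
print(np.allclose(values, 1.0))  # True: the image lies on an ellipse
```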

3. Diagonal Matrices

A very simple kind of stretching and squeezing can be described by a so-called diagonal matrix, that is, a matrix where the off-diagonal entries are zero. A diagonal matrix transforms the \(x\)- and \(y\)-coordinates separately. You can think of it as doing "axis-aligned" squeezing and stretching. Use the sliders below, or the 2D image, to see different diagonal matrices in action.
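In code, a diagonal matrix acts on each coordinate independently. A small sketch (the entries 2 and 0.5 are arbitrary):

```python
import numpy as np

# Stretch x by a factor of 2, squeeze y to half.
d = np.diag([2.0, 0.5])

p = np.array([3.0, 4.0])
print(d @ p)  # [6.  2.] -- each coordinate is scaled separately
```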

4. Orthogonal Matrices

A second simple kind of matrix does no stretching or squeezing at all. An orthogonal matrix preserves the length of any vector it transforms. Orthogonal matrices come in two flavors: they can be rotations, or a rotation combined with a mirror-image reflection.
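Both flavors are easy to write down. Here's a sketch with an arbitrary angle; the determinant distinguishes them (+1 for a rotation, -1 for a reflection):

```python
import numpy as np

theta = 0.7  # an arbitrary angle, in radians

# A rotation by theta...
r = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# ...and a rotation combined with a mirror reflection.
f = np.array([[np.cos(theta),  np.sin(theta)],
              [np.sin(theta), -np.cos(theta)]])

v = np.array([3.0, 4.0])
print(np.linalg.norm(v))      # 5.0
print(np.linalg.norm(r @ v))  # 5.0 -- lengths are preserved
print(np.linalg.norm(f @ v))  # 5.0
print(np.linalg.det(r))       # 1.0  -- rotation
print(np.linalg.det(f))       # -1.0 -- reflection
```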



5. Finally, the singular value decomposition!

It turns out that any \( 2 \times 2 \) matrix can be understood in terms of just one diagonal matrix and two orthogonal ones. In fact, this is exactly what the singular value decomposition theorem says:

Theorem. Let \( A \) be any real matrix. Then there are orthogonal matrices \( U \) and \( V \), and a diagonal matrix \( \Sigma \), such that \( A = U \Sigma V^t \).


In some ways, the brevity of this theorem just makes it more confusing. So let's expand on it a bit.

1. An informal way to read this is: every linear transformation can be represented by an orthogonal transformation, followed by an axis-aligned squeezing or stretching, followed by another orthogonal transformation.

2. Even though we've been visualizing \( 2 \times 2 \) matrices, the theorem applies to real matrices of any size, even matrices that aren't square. (In fact it's still true if you add complex numbers to the mix, with just a tiny variation: \( U \) and \( V \) become unitary matrices, and the transpose becomes a conjugate transpose.)

3. You might wonder why we write \( V^t \) (the transpose of \( V \)) instead of just \( V \). Since the transpose of an orthogonal matrix is also orthogonal, the transpose isn't strictly necessary. But when you use the SVD in practice, it turns out to simplify a few things.

4. The entries of the diagonal matrix are the "singular values" of the matrix, and are often written with a lower-case sigma: \( \sigma_1, \sigma_2, \ldots \). They tell you how much the matrix stretches or squeezes space. By convention, each \( \sigma_i \geq 0 \), and they are written in descending order, so that \( \sigma_i \geq \sigma_{i+1} \). (The sketch below checks all of these properties numerically.)
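Here's that sketch, using numpy's built-in `np.linalg.svd` (the example matrix is arbitrary):

```python
import numpy as np

a = np.array([[2.0, 1.0],
              [0.5, 3.0]])  # an arbitrary example matrix

# numpy returns the singular values as a vector s,
# and V already transposed (vt).
u, s, vt = np.linalg.svd(a)

print(s)  # descending order, all entries >= 0
print(np.allclose(a, u @ np.diag(s) @ vt))  # True: A = U Sigma V^t
print(np.allclose(u.T @ u, np.eye(2)))      # True: U is orthogonal
print(np.allclose(vt @ vt.T, np.eye(2)))    # True: V is orthogonal
```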

6. The SVD in Action

The best way to understand the SVD is to see it in action. Here's an interactive version of the equation \( A = U \Sigma V^t\). If you adjust \(U, \Sigma, V \) separately, you can start to see the role they play. Try different transforms for \( A \) to see how the SVD breaks them down. (For instance, can you find a pattern in \( U \) and \(V \) when \( A \) is a symmetric matrix?)



[Interactive demo: four linked panels showing \( U \Sigma V^t \), \( U \), \( \Sigma \), and \( V \). The diagonal entries of \( \Sigma \) are the singular values.]
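If you'd rather explore the symmetric-matrix question in code than with the sliders, here's a sketch (the symmetric matrix below is an arbitrary example):

```python
import numpy as np

# An arbitrary symmetric example matrix.
a = np.array([[2.0, 1.0],
              [1.0, 3.0]])

u, s, vt = np.linalg.svd(a)

# For a symmetric matrix, each column of U matches the corresponding
# column of V up to sign (flipped exactly when the matching eigenvalue
# of a is negative). Here both eigenvalues are positive, so the two
# matrices agree exactly.
print(u)
print(vt.T)
```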