[Linear Algebra] 3. Matrices as linear transformations

2022. 3. 12. 12:19 · Mathematics/Linear Algebra

 

 

In this chapter, we focus on 1) what a linear transformation is and 2) the relation between linear transformations and matrix-vector multiplication.


A transformation is essentially a fancy word for a function: it takes in one vector and spits out another vector.

 

So why use the word transformation instead of function if they mean the same thing? The word suggests viewing the input-output relation as a kind of movement. In other words, it invites us to visualize that relation.

 

If a transformation sends one input vector to one output vector, we can imagine that input vector moving over to the output vector. So one way to understand the transformation as a whole is to imagine watching every possible input vector move over to its corresponding output vector.

Linear transformation

Linear algebra deals only with a special type of transformation, ones that are easier to understand, called "linear" transformations.

 

Visually speaking, a linear transformation has two properties:

  1. All lines must remain lines, without getting curved.
  2. The origin must remain fixed in place.

In general, think of linear transformations as keeping grid lines 1) parallel and 2) evenly spaced.
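These visual properties correspond to the algebraic conditions $L(\vec{v} + \vec{w}) = L(\vec{v}) + L(\vec{w})$ and $L(c\vec{v}) = cL(\vec{v})$. As a small sketch (the shear map and helper functions here are my own illustration, not from the text), we can check both conditions numerically:

```python
# A concrete linear transformation: a shear that keeps y fixed
# and slides x by the amount of y (chosen purely for illustration).
def shear(v):
    x, y = v
    return (x + y, y)

def add(v, w):
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

v, w, c = (1.0, 2.0), (3.0, -1.0), 2.5

# Additivity: L(v + w) == L(v) + L(w)
assert shear(add(v, w)) == add(shear(v), shear(w))

# Homogeneity: L(c * v) == c * L(v)
assert shear(scale(c, v)) == scale(c, shear(v))
```

Any map satisfying these two conditions keeps grid lines parallel and evenly spaced, and sends the origin to itself.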

Relation between linear transformation and matrix

Now we can easily understand the input-output relation by visualizing the transformation.

How can we describe these transformations numerically? A linear transformation is completely determined by where the basis vectors ($\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$) land.

 

Why geometrical and numerical views?


Why do we view linear algebra from two perspectives (geometrical and numerical)?

The geometrical perspective helps us intuitively understand the concepts of linear algebra through visualization.

The numerical perspective lets us describe those concepts concretely with matrices and vectors.

 

For example, consider the vector $\mathbf{\vec{x}}$ with coordinates $\begin{bmatrix} x \\ y \end{bmatrix}$, meaning that it equals $x \mathbf{\hat{i}} + y \mathbf{\hat{j}}$. If we apply some transformation and follow where all three of these vectors, $\mathbf{\hat{i}}$, $\mathbf{\hat{j}}$ and $\mathbf{\vec{x}}$, go, the property that grid lines remain parallel and evenly spaced has a really important consequence: the place where $\mathbf{\vec{x}}$ lands will be $x\mathbf{\hat{i}_{land}} + y\mathbf{\hat{j}_{land}}$. In other words, the linear combination of $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ that forms $\mathbf{\vec{x}}$ is preserved: after the transformation, $\mathbf{\vec{x}}$ is the same linear combination of $\mathbf{\hat{i}_{land}}$ and $\mathbf{\hat{j}_{land}}$.

 

This means that we know where $\mathbf{\vec{x}}$ moves based only on where $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ each land. More generally, we can determine where any vector lands, as long as we have a record of where $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ each land. What all of this is saying is that a 2-D linear transformation is completely described by just 4 numbers: the two coordinates of where $\mathbf{\hat{i}}$ lands and the two coordinates of where $\mathbf{\hat{j}}$ lands. Package these coordinates into a $2 \times 2$ matrix whose columns are where $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ each land. If $\mathbf{\hat{i}}$ lands at $\begin{bmatrix} v_1 \\ v_2 \end{bmatrix}$ and $\mathbf{\hat{j}}$ lands at $\begin{bmatrix} w_1 \\ w_2 \end{bmatrix}$, the transformation is described by:

$$
\begin{bmatrix}
v_1 & w_1 \\
v_2 & w_2 \\
\end{bmatrix}
$$
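As a concrete example (my own, not in the original text): a $90°$ counterclockwise rotation sends $\mathbf{\hat{i}}$ to $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$ and $\mathbf{\hat{j}}$ to $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$, so the entire rotation is packaged as:

$$
\begin{bmatrix}
0 & -1 \\
1 & 0 \\
\end{bmatrix}
$$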

 

The way to transform $\begin{bmatrix} x \\ y \end{bmatrix}$ by $\begin{bmatrix} v_1 & w_1 \\ v_2 & w_2 \end{bmatrix}$ is:

$$
x \cdot \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} + y \cdot \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} =
\begin{bmatrix} x \cdot v_1 + y \cdot w_1 \\ x \cdot v_2 + y \cdot w_2 \end{bmatrix}
$$
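This formula translates directly into code. A minimal sketch (the function name `transform` and the rotation matrix below are my own illustration), computing the result as $x$ times the first column plus $y$ times the second column:

```python
# Multiply a 2x2 matrix [[v1, w1], [v2, w2]] by a vector (x, y),
# computed as the linear combination x * (v1, v2) + y * (w1, w2).
def transform(matrix, vector):
    (v1, w1), (v2, w2) = matrix  # unpack the two rows
    x, y = vector
    return (x * v1 + y * w1, x * v2 + y * w2)

# 90-degree counterclockwise rotation: i-hat lands at (0, 1) and
# j-hat lands at (-1, 0), so those are the columns of the matrix.
rotation = ((0, -1),
            (1, 0))

print(transform(rotation, (1, 0)))  # → (0, 1): i-hat lands where its column says
print(transform(rotation, (3, 2)))  # → (-2, 3)
```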

 

It's natural to define this computation as matrix-vector multiplication.

 

It's nice to think of these columns as the transformed versions of the basis vectors and of the result as a linear combination of those transformed basis vectors.

 

In summary, linear transformations are a way to move around space such that 1) grid lines remain parallel, 2) grid lines remain evenly spaced, and 3) the origin remains fixed. Delightfully, these transformations can be described using only a handful of numbers: the coordinates of where each basis vector lands. Matrices give us a language to describe these transformations, where the columns represent the coordinates of where each basis vector lands, and matrix-vector multiplication is just a way to compute what that transformation does to a given vector. The important takeaway here is that every time you see a matrix, you can interpret it as a certain transformation of space.
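As a final sanity check (using NumPy, which is my addition and not mentioned in the post), the column-combination view agrees with the built-in matrix-vector product:

```python
import numpy as np

# Columns of A are where i-hat and j-hat land
# (90-degree counterclockwise rotation, as an example).
A = np.array([[0, -1],
              [1,  0]])
x = np.array([3, 2])

# "x times first column plus y times second column" ...
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]

# ... matches NumPy's matrix-vector product A @ x.
assert np.array_equal(by_columns, A @ x)
print(A @ x)  # → [-2  3]
```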