Nonsquare matrices as transformations between dimensions | Chapter 8, Essence of linear algebra

Below is a short summary and detailed review of this video, written by FutureFactual:

Exploring Linear Transformations Across Dimensions with Non-Square Matrices | 3Blue1Brown

Overview

In this note between chapters, the familiar view of linear transformations is extended from square matrices to non-square cases. The core idea remains that a linear map keeps grid lines parallel and evenly spaced and sends the origin to the origin. When inputs live in one dimension and outputs live in another, a matrix of the appropriate size encodes the transformation by recording where each input basis vector lands in the output space. This short note demonstrates how to interpret a 3 by 2 matrix as a map from a 2D input space to a 3D output space, and similarly how a 2 by 3 matrix describes a 3D input space mapped into 2D outputs.

  • Non-square matrices encode dimension-changing maps
  • Columns are the images of the input basis vectors
  • The column space forms a plane through the origin in the output space
  • A 1 by 2 matrix maps 2D inputs to a scalar, tying into dot products

Understanding Transformations Between Different Dimensions

In this footnote to the linear transformations series we explore what non-square matrices mean geometrically. So far the focus has been on linear maps from 2D to 2D or 3D to 3D, encoded by 2x2 or 3x3 matrices. Yet, as commenters noted, non-square matrices can encode transformations between spaces of different dimensions, and thinking about these helps build intuition across dimensions. The essential property of linear maps remains the same: grid lines stay parallel and evenly spaced, and the origin of the input space maps to the origin of the output space. The matrix encoding a transformation is constructed by placing the coordinates of where each input basis vector lands into the columns of the matrix. This is the bridge from geometric action to algebraic representation, regardless of whether the input and output spaces have the same dimension.
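To make the columns-as-images rule concrete before moving to non-square cases, here is a minimal NumPy sketch using a 2x2 shear (an illustrative matrix, not one from the video): multiplying each basis vector by the matrix returns exactly the corresponding column.

```python
import numpy as np

# A 2x2 shear: i-hat stays at (1, 0), j-hat lands on (1, 1).
# The columns of S are exactly those landing spots.
S = np.array([[1, 1],
              [0, 1]])

i_hat = np.array([1, 0])
j_hat = np.array([0, 1])

print(S @ i_hat)  # [1 0] -- the first column
print(S @ j_hat)  # [1 1] -- the second column
```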

Consider a concrete example discussed in the video: a transformation that takes the 2D input space to a 3D output space. The encoded matrix is 3x2, because there are three coordinates in the output and two basis vectors in the input. The first column records the image of the input basis vector i-hat, and the second column records the image of j-hat. In the picture, the first column gives the coordinates of i-hat after the transformation, for instance (2, -1, -2), and the second column gives the image of j-hat, which could be something like (0, 1, 0) in the three coordinate directions. From these two columns we read off that the transformation maps any 2D input vector [a, b] to the 3D output vector a(2, -1, -2) + b(0, 1, 0), that is, a times the first column plus b times the second. This is precisely how a 3x2 matrix encodes a map from two dimensions into three, and it reflects the geometric interpretation: the input space has two basis vectors, and their landing spots in 3D are described by three coordinates each.
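As a quick sanity check of this reading, the sketch below builds the 3x2 matrix from the example columns above and confirms that multiplying it by [a, b] agrees with the column combination a(2, -1, -2) + b(0, 1, 0) (the input values a and b are arbitrary choices for illustration):

```python
import numpy as np

# The 3x2 matrix from the example: columns are the images of i-hat and j-hat.
A = np.array([[ 2, 0],
              [-1, 1],
              [-2, 0]])

a, b = 3.0, 2.0          # an arbitrary 2D input vector [a, b]
v = np.array([a, b])

out = A @ v                        # matrix-vector product: lands in 3D
combo = a * A[:, 0] + b * A[:, 1]  # same thing as a linear combination of columns
assert np.allclose(out, combo)
print(out)  # [ 6. -1. -6.]
```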

Conversely, a 2x3 matrix encodes a map from a 3D input space to a 2D output space. Here the three columns record the landing spots of the three input basis vectors, and the two rows tell you that each landing spot is described with only two coordinates, so the outputs live in a two-dimensional space. Such a map feels a bit unusual, as you imagine collapsing three dimensions down to two, yet it is a perfectly valid linear transformation. Intuitively, you can picture projecting 3D vectors onto some 2D plane, with the two coordinates in that plane given by the two output rows.
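Here is a minimal sketch of one such dimension-collapsing map, assuming projection onto the xy-plane (an illustrative choice, not the specific matrix from the video):

```python
import numpy as np

# A 2x3 matrix: three columns (one per 3D input basis vector), two rows
# (two output coordinates). This one projects onto the xy-plane.
P = np.array([[1, 0, 0],
              [0, 1, 0]])

v3 = np.array([4.0, -2.0, 7.0])
print(P @ v3)  # [ 4. -2.] -- three dimensions collapsed to two
```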

There is also the simple one-dimensional case. A transformation from 2D to 1D is encoded by a 1x2 matrix. The two columns each contain a single entry, describing the scalar landing point of each input basis vector. In this scenario the grid lines are compressed onto a line, and the image of each input vector is a single number. The transcript notes that this squashing has geometric ties to the dot product, foreshadowing a deeper connection to inner products explored in the next video.
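The tie to the dot product can be checked directly: multiplying by a 1x2 matrix gives the same number as dotting with the 2D vector obtained by tipping that matrix on its side. A small sketch with entries chosen purely for illustration:

```python
import numpy as np

# A 1x2 matrix: each column holds the single number where a 2D basis
# vector lands. Multiplying by it matches a dot product with [3, 4].
M = np.array([[3, 4]])
v = np.array([2.0, -1.0])

print(M @ v)              # [2.] -- the 1D output
print(np.dot([3, 4], v))  # 2.0 -- the same number as a dot product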

Across these examples the recurring theme is that linearity is preserved even when the input and output dimensions differ. The columns of the matrix always tell you where the input basis vectors land in the output space, and the row count indicates how many coordinates are used to describe those landing spots. With this view you can analyze matrix multiplication, linear systems, and more using the same ideas, now extended to transformations between different dimensions. The video encourages you to experiment with these ideas and explore how the algebraic structure of a matrix captures the geometric action of a linear transformation, even when the transformation migrates between spaces of different sizes.
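In code, the shape rule reads off directly: for an m-by-n matrix, n is the input dimension (one column per input basis vector) and m is the output dimension (one coordinate per landing spot). A quick illustration with an arbitrary random matrix:

```python
import numpy as np

# Shape rule: an (m x n) matrix maps n-dimensional inputs to
# m-dimensional outputs. Random entries, purely for illustration.
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, n))

x = rng.standard_normal(n)     # input lives in R^n
y = A @ x                      # output lives in R^m
print(x.shape, "->", y.shape)  # (2,) -> (3,)
```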

As you practice, try to visualize how the input basis vectors map to landing spots, how the column space looks in the output space, and how changing the matrix changes the geometry of the transformation. By playing with basis vectors and their images you develop a more flexible intuition for linear maps in higher dimensions, an essential tool for understanding the broader landscape of linear algebra and its connections to concepts like the dot product and projections. Have fun.

Related posts

  • Three-dimensional linear transformations | Chapter 5, Essence of linear algebra (3Blue1Brown, 09/08/2016)
  • Linear transformations and matrices | Chapter 3, Essence of linear algebra (3Blue1Brown, 07/08/2016)
  • Inverse matrices, column space and null space | Chapter 7, Essence of linear algebra (3Blue1Brown, 15/08/2016)
  • Matrix multiplication as composition | Chapter 4, Essence of linear algebra (3Blue1Brown, 08/08/2016)