Abstract vector spaces | Chapter 16, Essence of linear algebra

Below is a short summary and detailed review of this video, written by FutureFactual:

What is a Vector? A Deep Dive into Vector Spaces and Function Spaces

Overview

In this episode the presenter revisits the fundamental question: what are vectors? The talk moves beyond arrows on a plane or lists of numbers to argue for a more general, spatial intuition. It then shows how the same linear-algebra toolkit applies to functions by treating them as vector-like objects, with addition and scalar multiplication defined pointwise, leading to the idea of vector spaces as an abstract framework.

Key insights

  • Functions can be added and scaled just like vectors
  • Linear transformations preserve vector addition and scalar multiplication
  • Polynomials form an infinite-dimensional vector space with a natural basis
  • Axioms define vector spaces, enabling abstract results to apply to many objects

Introduction and core question

The video begins by reexamining a simple question: what is a vector? It surveys several intuitive pictures, from arrows in space to lists of numbers, and argues that a mature mathematics of vectors treats them as elements of an abstract space with structure, rather than as inherently tied to any one representation. The aim is to motivate a viewpoint in which the specific instantiation of a vector matters less than the operations that define it: addition and scalar multiplication.

Vectors as a general concept and basis independence

The speaker emphasizes that the same linear-algebraic ideas (determinants, eigenvectors, area scaling) are meaningful regardless of the chosen coordinate system. By moving to the notion of a vector space, one can reason about a wide variety of objects (arrows, lists of numbers, functions, even the channel's hypothetical pi creatures) as long as there is a well-defined notion of adding and scaling them. The eight axioms provide a minimal, universal interface that ensures linear algebra results hold across all of these contexts.

From arrows to functions

A major theme is the parallel between vectors and functions. Just as vectors can be added coordinatewise and scaled, functions can be added and scaled pointwise. The space of functions becomes a vector space under these operations, and linear transformations on vectors generalize to linear transformations on function spaces, sometimes called operators (an example is the derivative in calculus).
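The pointwise operations described above can be sketched directly. This is a minimal illustration, not code from the video; the helper names `add` and `scale` are made up for the example.

```python
# Treating functions as vectors: addition and scalar multiplication
# are defined pointwise, just as vectors add coordinatewise.

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(c, f):
    """Pointwise scaling: (c * f)(x) = c * f(x)."""
    return lambda x: c * f(x)

f = lambda x: x ** 2
g = lambda x: 3 * x

# h(x) = 2 * x^2 + 3 * x, built only from the two vector-space operations
h = add(scale(2.0, f), g)
print(h(2.0))  # 2*4 + 6 = 14.0
```

Because these two operations are all the vector-space structure requires, everything built on them (linear combinations, spans, bases) transfers to functions unchanged.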

Derivative as a linear operator

The derivative is presented as a primary example of a linear transformation on function spaces. It preserves additivity and scaling, so differentiating a sum yields the sum of derivatives, and differentiating a scaled function matches the scaled derivative. This aligns with the broader claim that many familiar calculus operators fit naturally into the framework of linear transformations.
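A quick numerical sketch of these two properties, using a central-difference approximation to stand in for the exact derivative (the operator name `D`, the sample functions, and the tolerance are all illustrative assumptions, not part of the video):

```python
import math

def D(f, h=1e-6):
    """Central-difference approximation to the derivative operator."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

f = math.sin
g = lambda x: x ** 3
c = 2.5
x = 1.2

sum_fg = lambda x: f(x) + g(x)
scaled_f = lambda x: c * f(x)

# Additivity: D(f + g) = D(f) + D(g), up to floating-point rounding
print(abs(D(sum_fg)(x) - (D(f)(x) + D(g)(x))) < 1e-6)
# Homogeneity: D(c * f) = c * D(f), up to floating-point rounding
print(abs(D(scaled_f)(x) - c * D(f)(x)) < 1e-6)
```

Both checks hold because the difference quotient is itself built from addition and scaling, so it inherits the linearity of those operations.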

Polynomial space as a concrete infinite-dimensional example

The discussion then focuses on polynomials, choosing the natural basis {1, x, x^2, x^3, ...}. Because each polynomial has finite degree, its coordinate vector has finitely many nonzero entries followed by an infinite tail of zeros. The derivative with respect to x can then be represented by an infinite matrix acting on these coordinate vectors, illustrating what a linear transformation looks like in a function space once a basis is chosen. This matrix is mostly zeros, with the counting numbers 1, 2, 3, ... running along the diagonal just above the main one, reinforcing the idea that differentiation is linear and that matrix representations remain a powerful tool even in infinite-dimensional spaces.
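A truncated version of this infinite matrix can be built in a few lines. The sketch below (truncation size `N` and helper names are assumptions for illustration) encodes column k as the coefficients of d/dx (x^k) = k * x^(k-1), which places 1, 2, 3, ... just above the main diagonal:

```python
N = 5  # truncation size: represent polynomials up to degree N - 1

# D[i][j] = coefficient of x^i in d/dx (x^j), so D[j-1][j] = j
D = [[(j if i == j - 1 else 0) for j in range(N)] for i in range(N)]

def apply(M, v):
    """Ordinary matrix-vector multiplication on coefficient vectors."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

p = [1, 3, 0, 2, 0]   # coefficients of 1 + 3x + 2x^3
print(apply(D, p))    # [3, 0, 6, 0, 0], i.e. 3 + 6x^2
```

Multiplying by this matrix reproduces the power rule term by term, which is exactly the point: in a chosen basis, differentiation is just matrix arithmetic.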

Basis vectors and the matrix picture

The talk explains how a linear transformation is completely described by its action on the basis vectors. Once you know how the basis is transformed, you can determine the transformation on any polynomial by combining the transformed basis elements. This is the bridge between the abstract notion of a linear operator and the concrete arithmetic of matrices, and it generalizes to function spaces where derivatives and other operators can be captured in matrix form relative to a chosen basis.
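This "basis determines everything" idea can be made concrete for quadratics. The sketch below is an illustrative assumption, not the video's code: it records only where d/dx sends each basis element of {1, x, x^2}, then recovers the derivative of any quadratic by recombining those images with the polynomial's own coefficients.

```python
# Images of the basis {1, x, x^2} under d/dx, as coefficient lists
# in the same basis: d/dx(1) = 0, d/dx(x) = 1, d/dx(x^2) = 2x.
basis_images = {
    0: [0, 0, 0],
    1: [1, 0, 0],
    2: [0, 2, 0],
}

def transform(coeffs):
    """Apply the operator by combining transformed basis elements."""
    out = [0, 0, 0]
    for k, c in enumerate(coeffs):
        for i, b in enumerate(basis_images[k]):
            out[i] += c * b
    return out

# d/dx (5 + 4x + 3x^2) = 4 + 6x
print(transform([5, 4, 3]))  # [4, 6, 0]
```

Note that `basis_images`, read column by column, is just the matrix of the operator: knowing the columns (the transformed basis vectors) is the same as knowing the matrix.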

Abstracting further: vector spaces and axioms

The final part of the video emphasizes the power of abstraction. By codifying the eight axioms that any vector space must satisfy, mathematicians can prove results in terms of the axioms rather than the particular nature of the objects (arrows, lists, polynomials, or other vector-like entities). This abstraction explains why linear transformations are defined through additivity and scaling rather than geometric pictures alone, enabling broad applicability from geometry to analysis and beyond.

Conclusion and learning approach

The presenter concludes that while a concrete, visual understanding in 2D space is valuable, the real power of linear algebra comes from its abstract axioms and their wide applicability. The video invites viewers to build intuition through simple, tangible pictures, but to appreciate that vectors in mathematics can take many forms, unified by the same underlying structure of addition and scalar multiplication. The aim is to provide a solid conceptual foundation that makes future problem solving in linear algebra more efficient and broadly applicable across disciplines.

Related posts

  • Vectors | Chapter 1, Essence of linear algebra (3Blue1Brown, 06/08/2016)
  • Linear combinations, span, and basis vectors | Chapter 2, Essence of linear algebra (3Blue1Brown, 06/08/2016)
  • Inverse matrices, column space and null space | Chapter 7, Essence of linear algebra (3Blue1Brown, 15/08/2016)
  • Three-dimensional linear transformations | Chapter 5, Essence of linear algebra (3Blue1Brown, 09/08/2016)