Linear Algebra

An interactive guide — mini-spreadsheets for your math equations

1. What Does "Linear" Mean?

"Algebra" means relationships. "Linear Algebra" means line-like relationships — predictable, proportional, no surprises.

A function $F$ is linear if it obeys two rules:

$$F(ax) = a \cdot F(x) \qquad \text{(scaling inputs scales output)}$$ $$F(x + y) = F(x) + F(y) \qquad \text{(adding inputs adds outputs)}$$

Think of a rooftop: move 3 feet forward, rise 1 foot. Move 30 feet forward — you expect a 10-foot rise. That's linear. Climbing a dome? Each foot forward raises you a different amount. Not linear.
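The rooftop intuition is easy to check numerically. A minimal sketch in Python (the `roof` and `dome` functions are illustrative stand-ins, not exact geometry):

```python
def roof(x):
    """Linear: rise 1 foot for every 3 feet forward."""
    return x / 3

def dome(x):
    """Non-linear: height gained on a dome of radius 30 after x feet forward."""
    return (30**2 - (30 - x)**2) ** 0.5

# Linear: scaling the input scales the output by the same factor.
assert roof(2 * 3) == 2 * roof(3)

# Non-linear: doubling the distance does NOT double the height gained.
assert dome(2 * 3) != 2 * dome(3)
```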

Interactive: Linear vs Non-Linear

Compare $F(x) = ax$ (linear) with $F(x) = x^n$ (non-linear). Watch how doubling the input affects the output.

Real-World Example: Hooke's Law — Springs in Engineering

The force in a spring is $F = kx$, where $k$ is the spring constant and $x$ is the displacement. This is a linear relationship:

$$F(2x) = k(2x) = 2 \cdot kx = 2 \cdot F(x)$$

Double the stretch, double the force. Automotive suspension engineers, bridge designers, and seismologists rely on this linearity when designing systems of springs and dampers. When deformation exceeds the elastic limit the relationship becomes non-linear — exactly when the math gets hard.

2. Linear Operations

Which operations are linear?

The useful insight: we can combine multiple linear functions into a bigger one that's still linear:

$$G(x, y, z) = ax + by + cz$$ $$G(2x, 2y, 2z) = 2(ax + by + cz) = 2 \cdot G(x, y, z) \quad \checkmark$$

This is "mini arithmetic": multiply each input by a constant, then add the results. Because it's linear, we can split inputs apart, analyse them individually, and combine the results:

$$G(x,y,z) = G(x,0,0) + G(0,y,0) + G(0,0,z)$$
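The split-apart trick can be verified directly. A sketch with the illustrative coefficients $a=3$, $b=4$, $c=5$:

```python
def G(x, y, z, a=3, b=4, c=5):
    """Linear combination with illustrative coefficients a, b, c."""
    return a * x + b * y + c * z

# Analyse each input on its own, then add the results:
assert G(1, 2, 3) == G(1, 0, 0) + G(0, 2, 0) + G(0, 0, 3)  # 3 + 8 + 15 = 26
```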

Interactive: Linearity Tester

Enter coefficients $a, b, c$ and two input vectors. Verify that $G(\mathbf{u} + \mathbf{v}) = G(\mathbf{u}) + G(\mathbf{v})$.

Real-World Example: Audio Mixing

A sound engineer mixes three microphone tracks. Each channel is scaled by a gain coefficient and summed:

$$\text{mix}(t) = 0.6 \cdot \text{vocal}(t) + 0.3 \cdot \text{guitar}(t) + 0.1 \cdot \text{drums}(t)$$

This is a linear combination. Doubling all input volumes doubles the mix — no distortion (in the linear regime). Every DAW (digital audio workstation), mixing console, and hearing aid processes audio this way: linear combinations of signals, thousands of times per second.

3. Organizing Inputs & Operations

We have a bunch of inputs to track, and predictable linear operations to perform. How do we organise?

Inputs go in vertical columns:

$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

Operations go in horizontal rows. If $F(x,y,z) = 3x + 4y + 5z$, we abbreviate the entire function as the row $[3\;\;4\;\;5]$.

Multiple operations stack into rows; multiple inputs sit side-by-side as columns:

$$\underbrace{\begin{bmatrix} 3 & 4 & 5 \\ 3 & 0 & 0 \end{bmatrix}}_{\text{Operations } M} \underbrace{\begin{bmatrix} a & x \\ b & y \\ c & z \end{bmatrix}}_{\text{Inputs } A} = \underbrace{\begin{bmatrix} 3a+4b+5c & 3x+4y+5z \\ 3a & 3x \end{bmatrix}}_{\text{Outputs } B}$$

Size convention: $m \times n$ means $m$ rows, $n$ columns. Multiply $[m \times n] \cdot [n \times p] = [m \times p]$. The inner dimensions must match.
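The stacked example above can be reproduced with NumPy (the input columns are illustrative numbers; `@` is matrix multiplication):

```python
import numpy as np

M = np.array([[3, 4, 5],
              [3, 0, 0]])      # 2x3: two operations on three inputs
A = np.array([[1, 2],
              [3, 4],
              [5, 6]])         # 3x2: two input columns, side by side

B = M @ A                      # (2x3)·(3x2) -> (2x2); inner 3s must match
assert B.shape == (2, 2)
assert B[0, 0] == 3*1 + 4*3 + 5*5   # first operation on first input column: 40
```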

Real-World Example: RGB Colour Transformation

Digital cameras apply a $3 \times 3$ colour correction matrix to each pixel's RGB values:

$$\begin{bmatrix} R' \\ G' \\ B' \end{bmatrix} = \begin{bmatrix} 1.2 & -0.1 & 0.0 \\ -0.05 & 1.1 & -0.05 \\ 0.0 & -0.1 & 1.2 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Each row is an operation: "How much of each input colour contributes to this output channel." Every phone camera, Photoshop filter, and Instagram effect is a matrix applied to millions of pixel vectors — pure linear algebra running in real time on your GPU.
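Applying the correction matrix from the example to one pixel (the raw RGB values are illustrative, on a 0-to-1 scale):

```python
import numpy as np

# Colour correction matrix from the example above.
C = np.array([[ 1.2 , -0.1 ,  0.0 ],
              [-0.05,  1.1 , -0.05],
              [ 0.0 , -0.1 ,  1.2 ]])

pixel = np.array([0.5, 0.4, 0.3])    # raw (R, G, B), illustrative values
corrected = C @ pixel
# R' = 1.2*0.5 - 0.1*0.4 + 0.0*0.3 = 0.56
assert np.isclose(corrected[0], 0.56)
```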

4. Visualizing the Matrix

Imagine "pouring" each input column through each operation row. As an input passes an operation, it creates one output entry.

Interactive: Matrix Multiplier

Enter two matrices and watch the multiplication step by step. (2×3) × (3×2) = (2×2).

Real-World Example: Neural Network Forward Pass

In a neural network, each layer computes $\mathbf{y} = W\mathbf{x} + \mathbf{b}$, where $W$ is a weight matrix and $\mathbf{x}$ is the input vector. For a layer with 3 inputs and 2 outputs:

$$\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} w_{11} & w_{12} & w_{13} \\ w_{21} & w_{22} & w_{23} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$

GPT, image classifiers, self-driving car vision systems — they're all stacks of matrix multiplications. Training adjusts the weights; inference is just pouring data through matrices, billions of times per second on specialised hardware (GPUs/TPUs).
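One such layer can be sketched in a few lines (the weights, bias, and input values are illustrative):

```python
import numpy as np

def dense_layer(W, b, x):
    """One fully connected layer: y = Wx + b."""
    return W @ x + b

W = np.array([[0.2, -0.5,  0.1],   # illustrative weights: 2 outputs, 3 inputs
              [0.4,  0.3, -0.2]])
b = np.array([0.1, -0.1])
x = np.array([1.0, 2.0, 3.0])

y = dense_layer(W, b, x)
assert y.shape == (2,)             # 3 inputs in, 2 outputs out
```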

5. Fancier Operations

Some important matrices to know (for 3 inputs):

Identity

$$I = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}$$

Copies input to output unchanged. $IA = A$.

Adder / Averager

$$\text{Add} = [1\;1\;1]$$ $$\text{Avg} = [\tfrac{1}{3}\;\tfrac{1}{3}\;\tfrac{1}{3}]$$

Selector

$$[1\;0\;0] \;\text{picks 1st input}$$ $$[0\;1\;0] \;\text{picks 2nd input}$$

Reorder

$$\begin{bmatrix} 1&0&0 \\ 0&0&1 \\ 0&1&0 \end{bmatrix}$$

$(x,y,z) \to (x,z,y)$
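All four of these matrices can be tried on a sample input vector (the values 10, 20, 30 are illustrative):

```python
import numpy as np

x = np.array([10, 20, 30])

I = np.eye(3)                        # identity: copies input unchanged
assert np.array_equal(I @ x, x)

avg = np.array([1/3, 1/3, 1/3])      # averager row
assert np.isclose(avg @ x, 20)

pick2 = np.array([0, 1, 0])          # selector: picks the 2nd input
assert pick2 @ x == 20

swap = np.array([[1, 0, 0],          # reorder: (x, y, z) -> (x, z, y)
                 [0, 0, 1],
                 [0, 1, 0]])
assert np.array_equal(swap @ x, [10, 30, 20])
```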

Interactive: Operation Explorer

Select a preset operation matrix or type your own. See how it transforms the input vector.

Real-World Example: GPS Coordinate System Conversion

GPS satellites broadcast positions in the Earth-Centred Earth-Fixed (ECEF) coordinate system. Your phone converts to local East-North-Up (ENU) coordinates using a rotation matrix derived from your latitude $\phi$ and longitude $\lambda$:

$$R = \begin{bmatrix} -\sin\lambda & \cos\lambda & 0 \\ -\sin\phi\cos\lambda & -\sin\phi\sin\lambda & \cos\phi \\ \cos\phi\cos\lambda & \cos\phi\sin\lambda & \sin\phi \end{bmatrix}$$

This is a "reorder + rotate" matrix. Every time your phone shows a blue dot on a map, it has just multiplied satellite vectors through this matrix.

6. Real Example: Stock Portfolios

The key insight: linear algebra gives you mini-spreadsheets for your math equations. Let's see it in action.

Suppose a new product launches: Apple stock jumps 20%, Google drops 5%, Microsoft stays flat. We want to (1) update each stock value and (2) compute total profit.

$$\underbrace{\begin{bmatrix} 1.2 & 0 & 0 \\ 0 & 0.95 & 0 \\ 0 & 0 & 1 \\ 0.2 & -0.05 & 0 \end{bmatrix}}_{\text{Operations}} \begin{bmatrix} \text{Apple} \\ \text{Google} \\ \text{Microsoft} \end{bmatrix} = \begin{bmatrix} \text{New Apple} \\ \text{New Google} \\ \text{New Microsoft} \\ \text{Profit} \end{bmatrix}$$

Three inputs enter, four outputs leave. The first three rows are a "modified identity" (update each value); the fourth row computes the change.
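Running the portfolio matrix on sample holdings (the dollar amounts are illustrative):

```python
import numpy as np

ops = np.array([[1.2 ,  0.0 , 0.0],    # new Apple value
                [0.0 ,  0.95, 0.0],    # new Google value
                [0.0 ,  0.0 , 1.0],    # new Microsoft value
                [0.2 , -0.05, 0.0]])   # profit: +20% of Apple, -5% of Google

holdings = np.array([1000.0, 2000.0, 500.0])   # illustrative $ amounts

result = ops @ holdings
# Three updated values plus the total profit, in one multiplication.
assert np.allclose(result, [1200.0, 1900.0, 500.0, 100.0])
```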

Interactive: Portfolio Calculator

Enter stock holdings and market changes. The matrix does the rest.

Real-World Example: Leontief Input-Output Economic Model

Nobel laureate Wassily Leontief modelled entire economies with matrices. If three industries (agriculture, manufacturing, services) each consume outputs of the others, the total production $\mathbf{x}$ needed to meet external demand $\mathbf{d}$ is:

$$\mathbf{x} = (I - A)^{-1}\mathbf{d}$$

where $A$ is the "consumption matrix" (how much each industry uses from the others per unit of output). Governments and the World Bank still use this matrix model to forecast the economic ripple effects of policy changes, trade disruptions, and infrastructure investments.
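A toy version of the model, with an illustrative consumption matrix (in practice one solves the linear system rather than forming the inverse explicitly):

```python
import numpy as np

# Illustrative consumption matrix: A[i, j] is how much of industry i's output
# industry j consumes to produce one unit of its own.
A = np.array([[0.2, 0.1, 0.0],
              [0.3, 0.1, 0.2],
              [0.1, 0.3, 0.1]])

d = np.array([100.0, 50.0, 80.0])        # external demand, illustrative

x = np.linalg.solve(np.eye(3) - A, d)    # total production x = (I - A)^{-1} d
# Sanity check: production minus internal consumption meets external demand.
assert np.allclose(x - A @ x, d)
```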

7. Geometric Transformations

When we treat inputs as 2D coordinates, a $2 \times 2$ matrix becomes a geometric transformation:

Scale: $\begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix}$

Rotate by $\theta$: $\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$
Reflect (x-axis): $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$

Shear: $\begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}$

Interactive: 2D Transformation Playground

Pick a transform and watch a unit square warp in real time. The grid shows how the entire space is affected.

Real-World Example: Computer Graphics & Game Engines

Every frame in a 3D game, each vertex of every model is transformed by a series of matrices:

$$\mathbf{v}_{\text{screen}} = P \cdot V \cdot M \cdot \mathbf{v}_{\text{model}}$$

M (Model) positions the object in the world. V (View) moves the world relative to the camera. P (Projection) flattens 3D to 2D screen coordinates. At 60 FPS with millions of vertices, GPUs perform billions of matrix-vector multiplications per second — the entire visual world you see in games, VR, and film CGI is built on $4 \times 4$ transformation matrices.

8. Solving Simultaneous Equations

A system of linear equations can be written as a single matrix equation $M\mathbf{x} = \mathbf{b}$:

$$\begin{cases} x + 2y + 3z = 3 \\ 2x + 3y + z = -10 \\ 5x - y + 2z = 14 \end{cases} \;\;\Longleftrightarrow\;\; \begin{bmatrix} 1&2&3 \\ 2&3&1 \\ 5&-1&2 \end{bmatrix} \begin{bmatrix} x\\y\\z \end{bmatrix} = \begin{bmatrix} 3\\-10\\14 \end{bmatrix}$$

Gauss-Jordan elimination transforms the augmented matrix $[M|\mathbf{b}]$ into $[I|\mathbf{x}]$ by scaling rows and adding or subtracting them from one another — revealing the solution without rewriting full equations.
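The system above can be handed to a library solver, which performs the elimination internally:

```python
import numpy as np

M = np.array([[1.0,  2.0, 3.0],
              [2.0,  3.0, 1.0],
              [5.0, -1.0, 2.0]])
b = np.array([3.0, -10.0, 14.0])

x = np.linalg.solve(M, b)       # elimination done internally (LU factorisation)
assert np.allclose(M @ x, b)    # the solution satisfies all three equations
```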

Interactive: 2×2 System Solver

Enter coefficients for two equations in two unknowns. See the solution graphically (intersection of two lines).

Real-World Example: Circuit Analysis (Kirchhoff's Laws)

An electrical circuit with 3 loops and 3 unknown currents $I_1, I_2, I_3$ yields:

$$\begin{bmatrix} 10 & -4 & 0 \\ -4 & 12 & -6 \\ 0 & -6 & 8 \end{bmatrix} \begin{bmatrix} I_1\\I_2\\I_3 \end{bmatrix} = \begin{bmatrix} 12\\0\\-5 \end{bmatrix}$$

Electrical engineers solve these matrix equations daily — from PCB design to power grid load balancing. SPICE simulators (used in every chip design) solve systems of thousands of linear equations per simulation step.

9. Determinants

The determinant measures how a matrix scales area (2D) or volume (3D). Feed in a unit square; the determinant tells you the area of the output parallelogram.

$$\det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad - bc$$
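A quick sketch of the $2 \times 2$ formula and its area interpretation (the example matrices are illustrative):

```python
import numpy as np

def det2(a, b, c, d):
    """Area scaling factor of the 2x2 matrix [[a, b], [c, d]]."""
    return a * d - b * c

assert det2(2, 0, 0, 2) == 4    # scaling both axes by 2 multiplies area by 4
assert det2(1, 3, 0, 1) == 1    # a shear slides the square but keeps its area
assert np.isclose(np.linalg.det(np.array([[2.0, 1.0],
                                          [1.0, 3.0]])), det2(2, 1, 1, 3))
```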

Interactive: Determinant Visualiser

Adjust matrix entries. The shaded region shows the transformed unit square and its area = |det|.

Real-World Example: Structural Engineering — Stability Check

In finite element analysis (FEA), the stiffness matrix $K$ of a structure relates forces to displacements: $K\mathbf{u} = \mathbf{f}$. If $\det(K) = 0$, the structure has a mechanism — it can move freely without resistance (think of a four-bar linkage that collapses). Engineers check the determinant (or condition number) of the stiffness matrix to verify structural stability before construction begins.

10. Eigenvectors & Eigenvalues

Consider spinning a globe: every point moves to a new position — except the points on the axis (the poles). In matrix terms:

$$M\mathbf{v} = \lambda \mathbf{v}$$

An eigenvector $\mathbf{v}$ is an input that doesn't change direction through the matrix — it only scales by factor $\lambda$ (the eigenvalue).

If $\lambda > 1$: the eigenvector stretches. If $0 < \lambda < 1$: it shrinks. If $\lambda < 0$: it flips direction.
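The defining property $M\mathbf{v} = \lambda\mathbf{v}$ can be checked with a library eigensolver (the matrix is illustrative):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(M)

# Each eigenvector only scales through the matrix: M v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(M @ v, lam * v)
```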

Interactive: Eigenvector Explorer

Set a 2×2 matrix. The eigenvectors (red/blue lines) stay on their line when transformed.

Real-World Example: Google PageRank

The original Google search algorithm modelled the web as a giant matrix $M$ where $M_{ij}$ is the probability of clicking from page $j$ to page $i$. The principal eigenvector of $M$ (with $\lambda = 1$) gives the steady-state probability of a random surfer being on each page — this is the PageRank score.

$$M\mathbf{r} = 1 \cdot \mathbf{r}$$

Pages with high eigenvector components rank higher. The same technique powers social-network influence scores, recommendation engines, and epidemiological models (where the dominant eigenvalue determines whether a disease spreads or dies out).
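A toy three-page web illustrates the idea: repeatedly multiplying by $M$ (power iteration, one standard way to find the principal eigenvector) converges to the steady state. The link probabilities below are illustrative:

```python
import numpy as np

# Tiny illustrative web: M[i, j] = probability of clicking from page j to
# page i. Each column sums to 1.
M = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

r = np.ones(3) / 3                   # start with equal rank everywhere
for _ in range(100):                 # power iteration converges to the
    r = M @ r                        # principal eigenvector (lambda = 1)
    r /= r.sum()

assert np.allclose(M @ r, r)         # steady state: M r = 1 · r
```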

11. Matrix Composition

A funky thought: we can treat the operations matrix as input to another matrix. Applying one operations matrix to another gives a new matrix that does both transformations in order:

$$T \cdot N = X$$

$X$ first applies $N$, then $T$. We didn't need any input data — we combined the operations themselves.

Want to apply the same transform $k$ times? Use $M^k$.
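Both ideas — composing two transforms into one matrix, and powering a matrix to repeat it — can be sketched with a rotation and a scale (the specific transforms are illustrative):

```python
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],   # rotate by 90 degrees
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[2.0, 0.0],                        # scale x by 2
              [0.0, 1.0]])

X = S @ R                            # one matrix that rotates, THEN scales
v = np.array([1.0, 0.0])
assert np.allclose(X @ v, S @ (R @ v))

# Repeating a transform k times is a matrix power: four 90° turns = identity.
assert np.allclose(np.linalg.matrix_power(R, 4), np.eye(2))
```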

Interactive: Compose Two Transforms

Pick two 2D transformations. See the individual and combined effects on the unit square.

Real-World Example: Robot Arm Kinematics

A robot arm with 3 joints computes the position of its gripper by composing rotation matrices for each joint:

$$T_{\text{gripper}} = T_{\text{base}} \cdot R_1(\theta_1) \cdot R_2(\theta_2) \cdot R_3(\theta_3) \cdot T_{\text{tool}}$$

Each $R_i$ is a rotation matrix for joint $i$. By composing them, the controller instantly knows where the gripper ends up for any combination of joint angles. Factory robots, surgical arms, and Mars rovers all chain dozens of matrices together to plan movements with sub-millimetre precision.

12. Adding Translation (Homogeneous Coordinates)

Our mini-arithmetic has multiplications but no plain addition. But we can cheat: add a dummy "1" entry to the input. Now the matrix has an extra column to play with:

$$\begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} x + t_x \\ y + t_y \\ 1 \end{bmatrix}$$

We pretend the input lives in one higher dimension and place a "1" there. A skew in the higher dimension looks like a slide (translation) in the original. The dummy entry stays 1, ready for more slides.
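The trick in code: the translation matrix and the dummy "1" entry, with composition working exactly as before (the offsets are illustrative):

```python
import numpy as np

def translation(tx, ty):
    """Homogeneous 2D translation as a 3x3 matrix."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

p = np.array([2.0, 3.0, 1.0])        # the point (2, 3) with its dummy "1"
moved = translation(5, -1) @ p
assert np.allclose(moved, [7.0, 2.0, 1.0])   # dummy entry stays 1

# Two slides compose by matrix multiplication, like any other operation.
combined = translation(1, 1) @ translation(4, -2)
assert np.allclose(combined @ p, [7.0, 2.0, 1.0])
```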

Interactive: Translation + Rotation

Combine rotation and translation — something a plain 2×2 matrix can't do alone.

Real-World Example: Autonomous Vehicle Localisation

A self-driving car uses $4 \times 4$ homogeneous transformation matrices to track its position. Each sensor reading (LiDAR point cloud) is in the sensor's frame. To place it on the map:

$$\mathbf{p}_{\text{world}} = T_{\text{world←car}} \cdot T_{\text{car←sensor}} \cdot \mathbf{p}_{\text{sensor}}$$

Each $T$ is a $4 \times 4$ matrix encoding both rotation and translation. Without homogeneous coordinates, you'd need separate rotation and addition steps — messy, slow, and error-prone when chaining dozens of coordinate frames. Every self-driving car, drone, and warehouse robot composes these matrices thousands of times per second.


Inspired by: BetterExplained — An Intuitive Guide to Linear Algebra by Kalid Azad.