
Why I'm Learning Linear Algebra as a Web Developer (Part 1)

#AI #Math #Learning #Linear Algebra

This is Part 1 of my linear algebra learning journey. See Part 2: Vectors & Similarity and Part 3: Matrices & SVD.


Most web developers never touch linear algebra after university. I was one of them until I tried to understand how LLMs actually work.

The Problem with API-Only Knowledge

When I first started using GPT-4 and Claude for coding assistance, I treated them like magic black boxes. You send text in, you get text out. It works, so why dig deeper?

The problem is that "it works" isn't good enough when you're building production systems. I needed to understand:

  • Why do embeddings sometimes cluster semantically similar concepts?
  • How do I choose the right vector database for my use case?
  • When should I fine-tune vs use RAG vs prompt engineering?

These questions require understanding what's happening under the hood.

Vectors Are Everywhere

Here's the revelation: vectors are the lingua franca of modern AI. Every piece of text, every image, every other kind of data gets converted into vectors before a neural network can process it.

# A simple embedding example
from sentence_transformers import SentenceTransformer
 
model = SentenceTransformer('all-MiniLM-L6-v2')
embedding = model.encode("Hello, world!")
 
# embedding is now a 384-dimensional vector
print(embedding.shape)  # (384,)

That 384-dimensional vector captures the "meaning" of "Hello, world!" in a way that allows mathematical operations like finding similar sentences.
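
To make that concrete, here's a minimal sketch of comparing two sentences by their embeddings, assuming the same all-MiniLM-L6-v2 model and the cos_sim helper from sentence-transformers' util module (the example sentences are my own):

# Comparing two sentences via their embeddings (sketch)
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
a = model.encode("How do I reset my password?")
b = model.encode("I forgot my login credentials")

# cos_sim returns a score between -1 and 1; closer to 1 means more similar
print(util.cos_sim(a, b))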

The Core Concepts

Here's what I've found most useful so far:

Matrix Multiplication

This is the backbone of neural networks. Every layer in a transformer is essentially a series of matrix multiplications.
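
As a rough sketch (with toy dimensions, not real transformer sizes), a single dense layer is just an input vector multiplied by a weight matrix, plus a bias:

# One dense layer: matrix multiplication plus a bias (toy example)
import numpy as np

x = np.random.rand(1, 384)    # one input vector, e.g. an embedding
W = np.random.rand(384, 512)  # learned weight matrix
b = np.random.rand(512)       # learned bias

y = x @ W + b
print(y.shape)  # (1, 512)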

Dot Products and Cosine Similarity

How do you measure whether two vectors are "similar"? Take their dot product and normalize by the vectors' lengths; that's cosine similarity, and it's the operation semantic search is built on.
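
A minimal NumPy version of that formula, for illustration only (real systems use optimized library implementations):

# Cosine similarity: dot product normalized by vector lengths
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 4.0, 6.0])
print(cosine_similarity(a, b))  # 1.0, same direction, maximally similar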

Eigenvalues and SVD

These help you understand how neural networks compress and represent information. Crucial for understanding dimensionality reduction.
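
Here's a toy NumPy sketch of what that compression looks like, keeping only the largest singular values (in practice you'd reach for something like scikit-learn's TruncatedSVD):

# Low-rank compression of a matrix of embeddings via SVD
import numpy as np

A = np.random.rand(100, 384)   # e.g. 100 embeddings of dimension 384
U, S, Vt = np.linalg.svd(A, full_matrices=False)

k = 10                         # keep only the 10 largest singular values
A_reduced = U[:, :k] * S[:k]   # compressed representation
print(A_reduced.shape)         # (100, 10)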

Practical Application

I'm not doing this for academic interest. My goal is to build better systems:

  1. Smarter Search: Building semantic search for technical documentation (see the sketch after this list)
  2. Custom Embeddings: Fine-tuning embedding models for domain-specific applications
  3. Debugging AI Systems: When RAG retrieval fails, understanding vectors helps me debug why
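
Here's a rough sketch of what that semantic search looks like, again assuming the all-MiniLM-L6-v2 model; the documentation snippets and query are made up, and a real system would use a vector database rather than an in-memory list:

# Ranking documentation snippets by similarity to a query (sketch)
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

docs = [
    "How to configure Django database settings",
    "Deploying a Python app with Docker",
    "Writing unit tests with pytest",
]
doc_embeddings = model.encode(docs)

query_embedding = model.encode("set up Postgres in Django")
scores = util.cos_sim(query_embedding, doc_embeddings)[0]

# Print snippets from most to least similar
for idx in scores.argsort(descending=True).tolist():
    print(f"{scores[idx].item():.3f}  {docs[idx]}")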


The Takeaway

You don't need a PhD to use AI effectively. But understanding the mathematical foundations gives you superpowers:

  • Better intuition for choosing solutions
  • Ability to debug and optimize
  • Confidence to build novel applications

Linear algebra isn't just math; it's the language that modern AI speaks. And learning it has made me a significantly better engineer.


Continue the Journey

This is Part 1 of my learning series. Here's what comes next:

  • Part 2: Vectors & Similarity
  • Part 3: Matrices & SVD


Aamir Shahzad

Author

Software Engineer with 7+ years of experience building scalable data systems. Specializing in Django, Python, and applied AI.