Software Concepts

Interactive explainers and study guides

Interactive Explainer

How LLMs Work

A visual walkthrough of how large language models process and generate text.

Interactive Explainer

How LLM Training Works

Explore the training pipeline behind large language models, from data to deployment.

Study Guide

Server-Sent Events

An interactive study guide covering SSE — how it works, when to use it, and how it compares to WebSockets.

Interactive Explainer

QLoRA Fine-tuning

How to fine-tune large language models on a single GPU using quantisation and low-rank adapters.

Architecture Deep Dive

Claude Code Architecture

Interactive exploration of Anthropic's CLI for Claude — tools, agents, permissions, MCP, streaming, and more.

Visual Playground

Claude Code Agent Flow

Watch messages transform in real-time as they flow through the agent system — tools, permissions, streaming, and more.

Visual Playground

Production ML Pipeline

Step through real-world ML workflows — training, deployment, monitoring, drift detection, retraining, and incident response.

Coming Soon

Coming Soon

Transformers & Attention

A deep dive into the self-attention mechanism and how transformer architectures process sequences.

Coming Soon

Retrieval-Augmented Generation (RAG)

How RAG extends LLMs with external knowledge — embeddings, vector stores, and retrieval pipelines.

Coming Soon

Prompt Engineering

Techniques for writing effective prompts — zero-shot, few-shot, chain-of-thought, and beyond.

Coming Soon

Tokenisation

How text is broken into tokens, why it matters for LLM behaviour, and how BPE and other algorithms work.

Coming Soon

LLM Agents & Tool Use

How LLMs are turned into autonomous agents that can reason, plan, and call external tools.