Interactive explainers and study guides
A visual walkthrough of how large language models process and generate text.
Interactive Explainer: Explore the training pipeline behind large language models, from data to deployment.
Study Guide: An interactive guide to Server-Sent Events (SSE) — how they work, when to use them, and how they compare to WebSockets.
Interactive Explainer: How to fine-tune large language models on a single GPU using quantisation and low-rank adapters.
Architecture Deep Dive: Interactive exploration of Anthropic's CLI for Claude — tools, agents, permissions, MCP, streaming, and more.
Visual Playground: Watch messages transform in real time as they flow through the agent system — tools, permissions, streaming, and more.
Visual Playground: Step through real-world ML workflows — training, deployment, monitoring, drift detection, retraining, and incident response.
Coming soon
A deep dive into the self-attention mechanism and how transformer architectures process sequences.
How RAG extends LLMs with external knowledge — embeddings, vector stores, and retrieval pipelines.
Techniques for writing effective prompts — zero-shot, few-shot, chain-of-thought, and beyond.
How text is broken into tokens, why it matters for LLM behaviour, and how BPE and other algorithms work.
How LLMs are turned into autonomous agents that can reason, plan, and call external tools.