

Course Forge

All Issues

M2E1: The Training Loop: Loss, Gradients, and Why Adam Replaced SGD (May 5, 2026)

M3E3: Flash Attention and KV Cache: The Engineering That Makes Inference Possible (May 5, 2026)

M3E2: MoE Architecture: How GPT-4, Gemini, and Mixtral Actually Work (May 5, 2026)

M5E1: RLHF: The Pipeline That Made ChatGPT Possible and Its Limits (May 5, 2026)

M4E2: LoRA and QLoRA: How Fine-Tuning Actually Works in 2025 (May 5, 2026)

M4E1: Chinchilla Scaling Laws: Why Frontier Models Are Deliberately Overtrained (May 5, 2026)

M6E1: Test-Time Compute Scaling: The New Axis of AI Capability (May 5, 2026)

M5E2: DPO and Constitutional AI: The Alignment Techniques That Replaced RLHF (May 5, 2026)

M9E1: The Attack Surface of AI Systems: Adversarial Examples, Prompt Injection, and Model Theft (May 5, 2026)

M8E1: Post-Hoc XAI to Mechanistic Interpretability: What We Actually Know (May 5, 2026)

M7E1: Mamba and State Space Models: The Transformer's Most Serious Challenger (May 5, 2026)

M11E1: Benchmarks, Saturation, and Evaluation Incommensurability (May 5, 2026)