AI Technical Foundations for Chief AI Officers — Course Index
A complete course for senior leaders and technical strategists. 14 modules, 20 episodes — from transformer internals to governance frameworks.
Module 1
- From Turing to the First AI Winter: Promises and Limits of Symbolic AI (~36 min)
- AlexNet to GPT-4: The Deep Learning Renaissance and Scaling Arc (~37 min)
Module 2
- The Training Loop: Loss, Gradients, and Why Adam Replaced SGD (~38 min)
- Pretraining vs. Fine-Tuning: The Paradigm Shift That Defines Modern AI (~36 min)
Module 3
- Attention Mechanisms: Self-Attention, Multi-Head, and What Transformers Actually Compute (~35 min)
- MoE Architecture: How GPT-4, Gemini, and Mixtral Actually Work (~39 min)
- Flash Attention and KV Cache: The Engineering That Makes Inference Possible (~39 min)
Module 4
- Chinchilla Scaling Laws: Why Frontier Models Are Deliberately Overtrained (~35 min)
- LoRA and QLoRA: How Fine-Tuning Actually Works in 2025 (~44 min)
Module 5
- RLHF: The Pipeline That Made ChatGPT Possible and Its Limits (~36 min)
- DPO and Constitutional AI: The Alignment Techniques That Replaced RLHF (~35 min)