Discover the power of reasoning models by building your own from scratch. In Build a Reasoning Model (From Scratch), you'll construct a working reasoning model on top of a compact pre-trained LLM, guided by Sebastian Raschka, author of Build a Large Language Model (From Scratch). Throughout this course, you'll progress from foundational architecture concepts to practical enhancements, with clear explanations and hands-on coding examples at every step.
Course Objectives
In this course, you'll learn how to:
- Implement key reasoning improvements within LLMs
- Evaluate models using human judgments and established benchmarks
- Enhance reasoning capabilities without retraining existing weights
- Integrate external tools, such as calculators, through reinforcement learning (RL)
- Apply knowledge distillation from more capable reasoning models
- Understand and build a complete development pipeline for reasoning models
Applications of Reasoning Models
Reasoning models break tasks into manageable steps, producing more reliable results in mathematics, logic, and programming. This approach is already used by cutting-edge systems such as Grok 4 and GPT-5. This course demystifies the process: you'll start with a basic LLM, systematically add reasoning mechanisms, and learn to measure quality improvements accurately. You'll then push performance further with inference-time techniques that require no additional training, as well as RL strategies. By the end of the course, you'll have a compact, working reasoning stack built entirely by your own hands.
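To make the core idea above concrete, here is a minimal toy sketch (not from the book) contrasting a direct answer with an explicit step-by-step trace for a simple word problem. The intermediate steps are what make the reasoning checkable; the problem and function names are purely illustrative.

```python
# Problem: "Pens cost $3 each. Alice buys 4 pens and pays with a
# $20 bill. How much change does she get?"

def solve_direct():
    # One opaque jump to the final answer -- correct here,
    # but there is nothing intermediate to verify.
    return 8

def solve_step_by_step():
    # Decompose the problem into explicit, checkable steps,
    # mimicking how a reasoning model emits a chain of thought.
    steps = []
    cost_per_pen = 3
    quantity = 4
    total = cost_per_pen * quantity           # step 1: total cost
    steps.append(f"Step 1: {quantity} pens x ${cost_per_pen} = ${total}")
    paid = 20
    change = paid - total                     # step 2: change due
    steps.append(f"Step 2: ${paid} - ${total} = ${change}")
    return steps, change

trace, answer = solve_step_by_step()
for line in trace:
    print(line)
print("Answer:", answer)
```

Each intermediate value can be validated on its own, which is exactly why stepwise reasoning tends to be more reliable on math, logic, and code than a single end-to-end guess.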