
Building LLMs for Production

English
Paid

Course description

"Creating LLM for Production" is a practical 470-page guide (updated in October 2024) designed for developers and specialists who want to go beyond prototyping and build reliable, industry-ready applications based on large language models.

The book explains the fundamentals of how LLMs work and thoroughly examines key techniques: advanced prompting, Retrieval-Augmented Generation (RAG), model fine-tuning, evaluation methods, and deployment strategies. Readers gain access to interactive Colab notebooks, real code examples, and case studies demonstrating how to integrate LLMs into products and workflows in practice. Special attention is given to issues of security, monitoring, optimization, and cost reduction.
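To give a sense of what the hands-on material covers, below is a minimal RAG sketch in Python. It is not taken from the book: the model names (text-embedding-3-small, gpt-4o-mini) and the toy documents are assumptions chosen purely for illustration, and the book itself works through this pattern in far more depth with LangChain and LlamaIndex.

# Minimal RAG sketch: embed a few documents, retrieve the one closest to the
# question, and let the model answer using only that retrieved context.
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Our support line is open Monday to Friday, 9am to 5pm CET.",
    "Refunds are processed within 14 days of receiving the returned item.",
    "Premium subscribers get priority email support with a 4-hour SLA.",
]

def embed(texts):
    # text-embedding-3-small is an assumed model choice, not the book's.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

question = "How long do refunds take?"
q_vector = embed([question])[0]

# Cosine similarity between the question and every document.
scores = doc_vectors @ q_vector / (
    np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vector)
)
context = documents[int(scores.argmax())]

answer = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works here
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)

In a production pipeline the in-memory document list would be replaced by a vector store and a text splitter, which is exactly the progression the book's RAG chapters follow.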

Books

Read the book: Building LLMs for Production

1. Table of Contents
2. About The Book
3. Introduction
4. Why Prompt Engineering, Fine-Tuning, and RAG?
5. Coding Environment and Packages
6. A Brief History of Language Models
7. What are Large Language Models?
8. Building Blocks of LLMs
9. Tutorial: Translation with LLMs (GPT-3.5 API)
10. Tutorial: Control LLMs Output with Few-Shot Learning
11. Recap
12. Understanding Transformers
13. Transformer Model’s Design Choices
14. Transformer Architecture Optimization Techniques
15. The Generative Pre-trained Transformer (GPT) Architecture
16. Introduction to Large Multimodal Models
17. Proprietary vs. Open Models vs. Open-Source Language Models
18. Applications and Use-Cases of LLMs
19. Recap
20. Understanding Hallucinations and Bias
21. Reducing Hallucinations by Controlling LLM Outputs
22. Evaluating LLM Performance
23. Recap
24. Prompting and Prompt Engineering
25. Prompting Techniques
26. Prompt Injection and Security
27. Recap
28. Why RAG?
29. Building a Basic RAG Pipeline from Scratch
30. Recap
31. LLM Frameworks
32. LangChain Introduction
33. Tutorial 1: Building LLM-Powered Applications with LangChain
34. Tutorial 2: Building a News Articles Summarizer
35. LlamaIndex Introduction
36. LangChain vs. LlamaIndex vs. OpenAI Assistants
37. Recap
38. What are LangChain Prompt Templates
39. Few-Shot Prompts and Example Selectors
40. What are LangChain Chains
41. Tutorial 1: Managing Outputs with Output Parsers
42. Tutorial 2: Improving Our News Articles Summarizer
43. Tutorial 3: Creating Knowledge Graphs from Textual Data: Finding Hidden Connections
44. Recap
45. LangChain’s Indexes and Retrievers
46. Data Ingestion
47. Text Splitters
48. Similarity Search and Vector Embeddings
49. Tutorial 1: A Customer Support Q&A Chatbot
50. Tutorial 2: A YouTube Video Summarizer Using Whisper and LangChain
51. Tutorial 3: A Voice Assistant for Your Knowledge Base
52. Tutorial 4: Preventing Undesirable Outputs with the Self-Critique Chain
53. Tutorial 5: Preventing Undesirable Outputs from a Customer Service Chatbot
54. Recap
55. From Proof of Concept to Product: Challenges of RAG Systems
56. Advanced RAG Techniques with LlamaIndex
57. RAG - Metrics & Evaluation
58. LangChain LangSmith and LangChain Hub
59. Recap
60. What are Agents: Large Models as Reasoning Engines
61. An Overview of AutoGPT and BabyAGI
62. The Agent Simulation Projects in LangChain
63. Tutorial 1: Building Agents for Analysis Report Creation
64. Tutorial 2: Query and Summarize a DB with LlamaIndex
65. Tutorial 3: Building Agents with OpenAI Assistants
66. Tutorial 4: LangChain OpenGPT
67. Tutorial 5: Multimodal Financial Document Analysis from PDFs
68. Recap
69. Understanding Fine-Tuning
70. Low-Rank Adaptation (LoRA)
71. Tutorial 1: SFT with LoRA
72. Tutorial 2: Using SFT and LoRA for Financial Sentiment
73. Tutorial 3: Fine-Tuning a Cohere LLM with Medical Data
74. Reinforcement Learning from Human Feedback
75. Tutorial 4: Improving LLMs with RLHF
76. Recap
77. Model Distillation and Teacher-Student Models
78. LLM Deployment Optimization: Quantization, Pruning, and Speculative Decoding
79. Tutorial: Deploying a Quantized LLM on a CPU on Google Cloud Platform (GCP)
80. Deploying Open-Source LLMs on Cloud Providers
81. Recap
82. Conclusion
83. Further Reading and Courses


Similar courses

Learn MCP (Model Context Protocol)

Sources: zerotomastery.io
If you are interested in AI that doesn't just talk but actually does something, this compact course is for you. Get ready to dive into the Model Context...
1 hour 7 minutes 34 seconds
3D Browser Game Development with AI and Cursor

Sources: Kevin Kern (instructa.ai)
Hello everyone! Welcome to the course "Development of a 3D Browser Game with AI and Cursor". I'm glad to see you here! First, I want to tell you why we...
2 hours 7 minutes 55 seconds
Building Apps with o1 Pro Template System: Part 1

Sources: Mckay Wrigley (takeoff)
This is the first part of a two-part practical course. In this module, you will get acquainted with the basic workflow of creating applications using...
4 hours 4 minutes 38 seconds
Agentic AI Programming for Python Course

Sources: Talkpython
Learn how to use agentic AI to create and improve Python applications. Discover how it differs from chatbots and how to tailor AI to your tasks.
2 hours 38 minutes 10 seconds
MCP in Practice: The Future of AI Agents

Sources: newline (ex fullstack.io)
In this course, you will gain a comprehensive understanding of MCP - from key components and basic concepts to practical application examples. We will pay...
1 hour 10 minutes 6 seconds