Local LLMs via Ollama & LM Studio - The Practical Guide

3h 52m 28s
English
Paid
AI assistants like ChatGPT and Google Gemini have become everyday tools. However, when privacy, cost, offline operation, or flexible customization matter, the best solution is to run powerful open language models (LLMs) directly on your own computer. In this course, you will learn how to run and use local AI models such as Meta's Llama, Google's Gemma, and DeepSeek, even on a regular laptop, with no cloud, no subscriptions, and no data leaks.

Why Local and Open LLMs?

In a world dominated by cloud services, local LLMs offer a real advantage:

  • No subscriptions - use powerful models for free
  • 100% privacy - all data stays on your computer
  • Offline capability - run AI without an internet connection
  • No vendor lock-in - access to a rapidly growing ecosystem of open-source models
  • Cutting-edge capabilities - open models rank near the top of global leaderboards

What you will learn:

  • Overview of open LLMs: where to find them, how to choose them, and why they are important
  • Hardware requirements: what you need to run models locally
  • Model quantization: how to run even "heavy" AI on a regular PC (see the memory estimate after this list)
  • LM Studio: installation, setup, launch, and use of models
  • Ollama: a convenient way to manage LLMs from the terminal or via an API
  • Practice: image processing, PDF document summaries, text generation, few-shot prompting, and more
  • Integration into your own projects: working with APIs and automation (see the API sketch after this list)
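
To get a feel for why quantization matters, here is a minimal back-of-the-envelope sketch. It only estimates the memory taken by the model weights; the 7B example and the figures are illustrative, and real runtimes also need extra room for the KV cache and other overhead:

    # Rough memory estimate for model weights: parameters x bytes per parameter.
    def weight_memory_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate memory needed just for the weights, in gigabytes."""
        bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
        return bytes_total / 1024**3

    for bits, label in [(16, "FP16"), (8, "Q8"), (4, "Q4")]:
        print(f"7B model at {label}: ~{weight_memory_gb(7, bits):.1f} GB")

    # Prints roughly 13.0 GB (FP16), 6.5 GB (Q8) and 3.3 GB (Q4) - which is
    # why a 4-bit quantized 7B model can fit on an ordinary 8 GB laptop.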
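
Both LM Studio and Ollama can expose an OpenAI-compatible local server, so existing OpenAI client code can simply be pointed at your own machine. The snippet below is a minimal sketch, assuming the server is already running on its default port (Ollama: 11434, LM Studio: 1234) and that "llama3.2" stands in for a model you have already downloaded:

    from openai import OpenAI

    # Point the standard OpenAI client at the local server instead of the cloud.
    client = OpenAI(
        base_url="http://localhost:11434/v1",  # for LM Studio use http://localhost:1234/v1
        api_key="not-needed",  # local servers ignore the key, but the client requires a value
    )

    response = client.chat.completions.create(
        model="llama3.2",  # example name; use any model installed locally
        messages=[
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Summarize why local LLMs help with privacy."},
        ],
    )
    print(response.choices[0].message.content)

From the terminal, the same model can be fetched and used interactively with "ollama pull llama3.2" and "ollama run llama3.2"; the course covers both the CLI and the programmatic route.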

Who the course is for:

  • Developers who want to integrate AI into applications
  • Enthusiasts and students interested in cutting-edge technologies
  • Those who value privacy and control over their data
  • Anyone who wants to save on subscriptions and explore the capabilities of modern LLMs

Watch Online: Local LLMs via Ollama & LM Studio - The Practical Guide

# Title Duration
1 Welcome To The Course! 00:19
2 What Exactly Are "Open LLMs"? 06:28
3 Why Would You Want To Run Open LLMs Locally? 06:53
4 Popular Open LLMs - Some Examples 03:44
5 Where To Find Open LLMs? 04:48
6 Running LLMs Locally - Available Options 07:18
7 Check The Model Licenses! 04:05
8 Module Introduction 01:21
9 LLM Hardware Requirements - First Steps 04:22
10 Deriving Hardware Requirements From Model Parameters 05:35
11 Quantization To The Rescue! 06:51
12 Does It Run On Your Machine? 05:51
13 Module Introduction 02:04
14 Running Locally vs Remotely 01:09
15 Installing & Using LM Studio 03:10
16 Finding, Downloading & Activating Open LLMs 09:05
17 Using the LM Studio Chat Interface 04:54
18 Working with System Prompts & Presets 03:27
19 Managing Chats 02:33
20 Power User Features For Managing Models & Chats 06:29
21 Leveraging Multimodal Models & Extracting Content From Images (OCR) 02:49
22 Analyzing & Summarizing PDF Documents 03:28
23 Onwards To More Advanced Settings 01:53
24 Understanding Temperature, top_k & top_p 06:33
25 Controlling Temperature, top_k & top_p in LM Studio 04:46
26 Managing the Underlying Runtime & Hardware Configuration 04:18
27 Managing Context Length 05:22
28 Using Flash Attention 05:09
29 Working With Structured Outputs 05:30
30 Using Local LLMs For Code Generation 02:36
31 Content Generation & Few Shot Prompting (Prompt Engineering) 05:22
32 Onwards To Programmatic Use 02:26
33 LM Studio & Its OpenAI Compatibility 06:01
34 More Code Examples! 05:05
35 Diving Deeper Into The LM Studio APIs 02:11
36 Module Introduction 01:42
37 Installing & Starting Ollama 02:09
38 Finding Usable Open Models 02:57
39 Running Open LLMs Locally via Ollama 07:44
40 Adding a GUI with Open WebUI 02:13
41 Dealing with Multiline Messages & Image Input (Multimodality) 02:39
42 Inspecting Models & Extracting Model Information 03:32
43 Editing System Messages & Model Parameters 06:02
44 Saving & Loading Sessions and Models 03:36
45 Managing Models 05:43
46 Creating Model Blueprints via Modelfiles 06:23
47 Creating Models From Modelfiles 03:27
48 Making Sense of Model Templates 06:40
49 Building a Model From Scratch From a GGUF File 06:38
50 Getting Started with the Ollama Server (API) 02:13
51 Exploring the Ollama API & Programmatic Model Access 05:19
52 Getting Structured Output 02:57
53 More Code Examples! 04:54
54 Roundup 01:45

Similar courses to Local LLMs via Ollama & LM Studio - The Practical Guide

Build Your SaaS AI Web Platform From Zero to Production
Code4Startup (coderealprojects)
Category: Next.js, Other (AI)
Duration: 8 hours 36 minutes 2 seconds

Build a React Native app with Claude AI
designcode.io
Category: React Native, Other (AI)
Duration: 13 hours 53 minutes 10 seconds

Build AI Agents with CrewAI
zerotomastery.io
Category: Other (AI)
Duration: 2 hours 51 minutes 42 seconds

Build SwiftUI apps for iOS 18 with Cursor and Xcode
designcode.io
Category: Other (Mobile Apps Development), Swift, Other (AI)
Duration: 4 hours 35 minutes 14 seconds

3D Browser Game Development with AI and Cursor
Kevin Kern (instructa.ai)
Category: Other (AI)
Duration: 2 hours 7 minutes 55 seconds

Build a Simple Neural Network & Learn Backpropagation
zerotomastery.io
Category: Other (AI), Machine learning
Duration: 4 hours 34 minutes 9 seconds

Build a SwiftUI app with Claude AI
designcode.io
Category: Other (Mobile Apps Development), Swift, Other (AI)
Duration: 9 hours 5 minutes 44 seconds

5 Levels of Agents - Coding Agents
Mckay Wrigley (takeoff)
Category: Other (AI)
Duration: 5 hours 4 minutes 36 seconds

Build AI Agents with AWS
zerotomastery.io
Category: AWS, Other (AI)
Duration: 3 hours 9 minutes 7 seconds