Gemske's Corner

Run your own AI locally.

Running a Local LLM with Ollama

Ollama lets you run large language models directly on your own computer. Your prompts and data stay on your machine instead of being sent to an online service every time you use the AI.

It is one of the easiest ways to get started with local AI, and for many people it only takes a few minutes to install and begin chatting with a model in the terminal.

1. Install Ollama

Visit the official site:

https://ollama.com

Download the version for your operating system.

Linux Installation

Open your terminal and run:

curl -fsSL https://ollama.com/install.sh | sh

This installs Ollama automatically. After it finishes, you can start using Ollama commands in the terminal.
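On most distributions the script also registers Ollama as a background service. If your system uses systemd, you can check that the service is running with:

systemctl status ollama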

Mac Installation

Download the macOS version from the Ollama website and install it like a normal application.

Once it is installed, open Terminal and you can use Ollama commands there.

Windows Installation

Download the Windows installer from the Ollama website and run it like a normal installer.

After installation, open Command Prompt, PowerShell, or Windows Terminal and use Ollama from there.
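Whichever operating system you use, you can confirm the install worked by checking the version in your terminal:

ollama --version

If this prints a version number, Ollama is ready to use.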

2. Find AI Models

Ollama hosts a large catalog of models here:

https://ollama.com/search

This page lets you browse many models of different sizes and specialties.

3. Choosing a Model

Model size affects speed, memory use, and the quality of the answers you will get. The "B" in a model name stands for billions of parameters.

General advice:

• 2B–4B models usually run well on almost any modern computer
• 7B–13B models are a good middle ground for systems with 16–32GB of RAM
• 30B+ models are much heavier and are better suited for high-end systems with plenty of RAM or a capable GPU

If you are just getting started, try a smaller, faster model such as:

qwen2.5:3b
phi3
gemma:2b

Smaller models are a good first step because they download faster and feel more responsive.
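The part after the colon is a tag that selects a specific size of the model, so the same family often comes in several variants. For example, to pull the larger 7B version of Gemma instead of the 2B one:

ollama pull gemma:7b

Each model's page in the catalog lists the tags that are available.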

4. Run Your First Model

To download and start a model, run:

ollama run modelname

Example:

ollama run qwen2.5:3b

The first time you run a model, Ollama downloads it automatically. After that, the same command starts it from your local copy.

5. Start Chatting

Once the model loads, simply type your prompt and press Enter.

Example:

Explain how airplanes fly

The model will respond directly in the terminal.
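While you are chatting, the session also accepts a few slash commands:

/?    show the built-in help
/bye  exit the chat and return to your shell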

6. Useful Commands

List installed models:

ollama list

Download a model without running it:

ollama pull modelname

Remove a model:

ollama rm modelname

See which models are currently running:

ollama ps
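See the full list of commands and options:

ollama help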

Final Advice

Start with a small model so everything feels quick and easy.

Once you are comfortable, you can try larger models or connect Ollama to web interfaces, coding tools, or local automation projects.
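Those integrations work because Ollama also exposes a local HTTP API, which listens on port 11434 by default. As a minimal sketch, assuming you have already pulled gemma:2b, you can send it a prompt with curl:

curl http://localhost:11434/api/generate -d '{
  "model": "gemma:2b",
  "prompt": "Explain how airplanes fly",
  "stream": false
}'

The response comes back as JSON, which is the same interface web frontends and automation scripts build on.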

Running AI locally is one of the simplest ways to experiment with modern machine learning while keeping more control over your own setup.

About This Site

This is a small personal site about useful tools you can run on your own computer. Everything here is meant to stay simple, practical, and friendly to normal hardware.

Built with plain HTML and a lot of curiosity.


Last updated: March 2026
