Ollama allows you to run large language models directly on your own computer. This means your AI runs locally instead of sending your data out to an online service every time you use it.
It is one of the easiest ways to get started with local AI, and for many people it only takes a few minutes to install and begin chatting with a model in the terminal.
Visit the official site: https://ollama.com
Download the version for your operating system.
Open your terminal and run:
curl -fsSL https://ollama.com/install.sh | sh
This installs Ollama automatically. After it finishes, you can start using Ollama commands in the terminal.
Download the macOS version from the Ollama website and install it like a normal application.
Once it is installed, open Terminal and you can use Ollama commands there.
Download the Windows installer from the Ollama website and run it like a normal installer.
After installation, open Command Prompt, PowerShell, or Windows Terminal and use Ollama from there.
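On any platform, you can confirm the install worked by asking the binary for its version (a minimal check; the exact version string will differ on your machine):

```shell
# Confirm the ollama binary is on your PATH and report its version.
if command -v ollama >/dev/null 2>&1; then
    ollama --version
else
    echo "ollama not found - is it installed and on your PATH?"
fi
```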
Ollama hosts a large catalog of models here: https://ollama.com/library
This page lets you browse many models of different sizes and specialties.
Model size affects speed, memory use, and the kind of answers you will get.
General advice:
• 2B–4B models usually run well on almost any modern computer
• 7B–13B models are a good middle ground for many systems with 16–32GB of RAM
• 30B+ models are much heavier and are better suited for high-end systems
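As a rough rule of thumb (an approximation, not an official Ollama figure), a 4-bit-quantized model needs a little over half a gigabyte of RAM per billion parameters, plus some overhead for context. A quick back-of-the-envelope check in the shell:

```shell
# Rough RAM estimate for a 4-bit quantized model:
# ~0.6 GB per billion parameters (approximate; varies by quantization).
params_b=7                      # model size in billions of parameters
ram_tenths=$((params_b * 6))    # 0.6 GB per B, kept in tenths to avoid floats
echo "A ${params_b}B model needs roughly $((ram_tenths / 10)).$((ram_tenths % 10)) GB of RAM"
```

For a 7B model this works out to roughly 4.2 GB, which is why 7B–13B models pair well with 16–32GB systems.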
If you are just getting started, try a smaller, faster model such as:
qwen2.5:3b
phi3
gemma:2b
Smaller models are a good first step because they download faster and feel more responsive.
To download and start a model, run:
ollama run modelname
Example:
ollama run qwen2.5:3b
The first time you run a model, Ollama will download it automatically.
Once the model loads, simply type your prompt and press Enter.
Example:
Explain how airplanes fly
The model will respond directly in the terminal.
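You don't have to use the interactive prompt: `ollama run` also accepts the prompt as an argument, which is handy in scripts. A small sketch, assuming `gemma:2b` is already downloaded:

```shell
# One-shot prompt: the model answers once and exits instead of
# starting an interactive session.
ollama run gemma:2b "Explain how airplanes fly in two sentences."

# The output is plain text, so it pipes like any other command:
ollama run gemma:2b "Name three Linux distros." | head -n 5
```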
List installed models:
ollama list
Download a model without running it:
ollama pull modelname
Remove a model:
ollama rm modelname
See which models are currently running:
ollama ps
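These commands compose nicely in scripts. For example, a sketch that pulls a model only if `ollama list` doesn't already show it (the model name and the column layout of `ollama list` are assumptions based on its usual output, with the name in the first column):

```shell
#!/bin/sh
# Pull a model only if it is not already installed.
model="gemma:2b"

if ollama list | awk '{print $1}' | grep -qx "$model"; then
    echo "$model is already installed"
else
    ollama pull "$model"
fi
```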
Start with a small model so everything feels quick and easy.
Once you are comfortable, you can try larger models or connect Ollama to web interfaces, coding tools, or local automation projects.
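Those web interfaces and coding tools usually talk to Ollama's local HTTP API, which listens on port 11434 by default. A minimal curl sketch, assuming the Ollama service is running and `gemma:2b` is installed:

```shell
# Send a single prompt to the local Ollama API.
# "stream": false returns one JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate -d '{
  "model": "gemma:2b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```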
Running AI locally is one of the simplest ways to experiment with modern machine learning while keeping more control over your own setup.
This is a small personal site about useful tools you can run on your own computer. Everything here is meant to stay simple, practical, and friendly to normal hardware.
Built with plain HTML and a lot of curiosity.
Last updated: March 2026