Ollama

Start building with open models.

Download

macOS

curl -fsSL https://ollama.com/install.sh | sh

or download manually

Windows

irm https://ollama.com/install.ps1 | iex

or download manually

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
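For example, a typical CPU-only invocation starts the server in a container and then runs a model inside it (the volume and container names below are conventional choices, not requirements):

```shell
# Start the server, persisting downloaded models in a named volume
# and exposing the default API port
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Run a model inside the running container
docker exec -it ollama ollama run gemma3
```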

Libraries

Community

Get started

ollama

You'll be prompted to run a model or to connect Ollama to your existing agents and applications, such as claude, codex, openclaw, and more.

Coding

To launch a specific integration:

ollama launch claude

Supported integrations include Claude Code, Codex, Droid, and OpenCode.

AI assistant

Use OpenClaw to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:

ollama launch openclaw

Chat with a model

Run and chat with Gemma 3:

ollama run gemma3

See ollama.com/library for the full list.

See the quickstart guide for more details.

REST API

Ollama has a REST API for running and managing models.

curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Why is the sky blue?"
  }],
  "stream": false
}'

See the API documentation for all endpoints.
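Because the API is plain HTTP and JSON, it can be called from any client. A minimal sketch using only the Python standard library, assuming an Ollama server on the default port 11434 (the `build_payload` and `chat` helper names are illustrative, not part of any Ollama API):

```python
import json
import urllib.request

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default local endpoint

def build_payload(model, messages, stream=False):
    # Mirror the request fields shown in the curl example above.
    return json.dumps({"model": model, "messages": messages, "stream": stream}).encode()

def chat(model, messages):
    # POST the JSON body and return the assistant's reply text.
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=build_payload(model, messages),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]

if __name__ == "__main__":
    print(chat("gemma3", [{"role": "user", "content": "Why is the sky blue?"}]))
```

With `"stream": false` the server returns a single JSON object; omit that field (or set it to true) to receive a stream of newline-delimited JSON chunks instead.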

Python

pip install ollama

from ollama import chat

response = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response.message.content)

JavaScript

npm i ollama

import ollama from "ollama";

const response = await ollama.chat({
  model: "gemma3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);

Supported backends

  • llama.cpp - project founded by Georgi Gerganov

Documentation

Community Integrations

Want to add your project? Open a pull request.

Chat Interfaces

Web

Desktop

  • Dify.AI - LLM app development platform
  • AnythingLLM - All-in-one AI app for Mac, Windows, and Linux
  • Maid - Cross-platform mobile and desktop client
  • Witsy - AI desktop app for Mac, Windows, and Linux
  • Cherry Studio - Multi-provider desktop client
  • Ollama App - Multi-platform client for desktop and mobile
  • PyGPT - AI desktop assistant for Linux, Windows, and Mac
  • Alpaca - GTK4 client for Linux and macOS
  • SwiftChat - Cross-platform including iOS, Android, and Apple Vision Pro
  • Enchanted - Native macOS and iOS client
  • RWKV-Runner - Multi-model desktop runner
  • Ollama Grid Search - Evaluate and compare models
  • macai - macOS client for Ollama and ChatGPT
  • AI Studio - Multi-provider desktop IDE
  • Reins - Parameter tuning and reasoning model support
  • ConfiChat - Privacy-focused with optional encryption
  • LLocal.in - Electron desktop client
  • MindMac - AI chat client for Mac
  • Msty - Multi-model desktop client
  • BoltAI for Mac - AI chat client for Mac
  • IntelliBar - AI-powered assistant for macOS
  • Kerlig AI - AI writing assistant for macOS
  • Hillnote - Markdown-first AI workspace
  • Perfect Memory AI - Productivity AI personalized by screen and meeting history

Mobile

SwiftChat, Enchanted, Maid, Ollama App, Reins, and ConfiChat, listed above, also support mobile platforms.

Code Editors & Development

Libraries & SDKs

Frameworks & Agents

RAG & Knowledge Bases

  • RAGFlow - RAG engine based on deep document understanding
  • R2R - Open-source RAG engine
  • MaxKB - Ready-to-use RAG chatbot
  • Minima - On-premises or fully local RAG
  • Chipper - AI interface with Haystack RAG
  • ARGO - RAG and deep research on Mac/Windows/Linux
  • Archyve - RAG-enabling document library
  • Casibase - AI knowledge base with RAG and SSO
  • BrainSoup - Native client with RAG and multi-agent automation

Bots & Messaging

Terminal & CLI

Productivity & Apps

Observability & Monitoring

  • Opik - Debug, evaluate, and monitor LLM applications
  • OpenLIT - OpenTelemetry-native monitoring for Ollama and GPUs
  • Lunary - LLM observability with analytics and PII masking
  • Langfuse - Open source LLM observability
  • HoneyHive - AI observability and evaluation for agents
  • MLflow Tracing - Open source LLM observability

Database & Embeddings

Infrastructure & Deployment

Cloud

Package Managers