Ollama

Start building with open models.

Download

macOS

curl -fsSL https://ollama.com/install.sh | sh

or download manually

Windows

irm https://ollama.com/install.ps1 | iex

or download manually

Linux

curl -fsSL https://ollama.com/install.sh | sh

Manual install instructions

Docker

The official Ollama Docker image ollama/ollama is available on Docker Hub.
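To start the server in a container, a minimal sketch (the volume name `ollama` and the host port mapping are conventional choices, not requirements; adjust for your setup, and add GPU flags if needed):

```shell
# Start the server in the background, persisting downloaded models
# in a named volume and exposing the default API port 11434.
docker run -d \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  ollama/ollama

# Run a model inside the running container:
docker exec -it ollama ollama run gemma3
```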

Libraries

Community

Get started

ollama

You'll be prompted to run a model or to connect Ollama to your existing agents and applications, such as claude, codex, openclaw, and more.

Coding

To launch a specific integration:

ollama launch claude

Supported integrations include Claude Code, Codex, Droid, and OpenCode.

AI assistant

Use OpenClaw to turn Ollama into a personal AI assistant across WhatsApp, Telegram, Slack, Discord, and more:

ollama launch openclaw

Chat with a model

Run and chat with Gemma 3:

ollama run gemma3

See ollama.com/library for the full list.

See the quickstart guide for more details.

REST API

Ollama has a REST API for running and managing models.

curl http://localhost:11434/api/chat -d '{
  "model": "gemma3",
  "messages": [{
    "role": "user",
    "content": "Why is the sky blue?"
  }],
  "stream": false
}'

See the API documentation for all endpoints.
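The same request can be issued from Python's standard library alone; a minimal sketch, assuming a local server on the default port 11434 (the `build_chat_request` helper is illustrative, not part of any SDK):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default local endpoint


def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build the JSON body expected by the /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }


def chat(model: str, prompt: str) -> str:
    """Send a non-streaming chat request and return the reply text."""
    body = json.dumps(build_chat_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]
```

With `stream` left at `false`, the server returns a single JSON object whose `message.content` field holds the full reply.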

Python

pip install ollama

from ollama import chat

response = chat(model='gemma3', messages=[
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
])
print(response.message.content)

JavaScript

npm i ollama

import ollama from "ollama";

const response = await ollama.chat({
  model: "gemma3",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
});
console.log(response.message.content);

Supported backends

  • llama.cpp - project founded by Georgi Gerganov

Documentation

Community Integrations

Want to add your project? Open a pull request.

Chat Interfaces

Web

Desktop

  • Dify.AI - LLM app development platform
  • AnythingLLM - All-in-one AI app for Mac, Windows, and Linux
  • Maid - Cross-platform mobile and desktop client
  • Witsy - AI desktop app for Mac, Windows, and Linux
  • Cherry Studio - Multi-provider desktop client
  • Ollama App - Multi-platform client for desktop and mobile
  • PyGPT - AI desktop assistant for Linux, Windows, and Mac
  • Alpaca - GTK4 client for Linux and macOS
  • SwiftChat - Cross-platform including iOS, Android, and Apple Vision Pro
  • Enchanted - Native macOS and iOS client
  • RWKV-Runner - Multi-model desktop runner
  • Ollama Grid Search - Evaluate and compare models
  • macai - macOS client for Ollama and ChatGPT
  • AI Studio - Multi-provider desktop IDE
  • Reins - Parameter tuning and reasoning model support
  • ConfiChat - Privacy-focused with optional encryption
  • LLocal.in - Electron desktop client
  • MindMac - AI chat client for Mac
  • Msty - Multi-model desktop client
  • BoltAI for Mac - AI chat client for Mac
  • IntelliBar - AI-powered assistant for macOS
  • Kerlig AI - AI writing assistant for macOS
  • Hillnote - Markdown-first AI workspace
  • Perfect Memory AI - Productivity AI personalized by screen and meeting history

Mobile

The SwiftChat, Enchanted, Maid, Ollama App, Reins, and ConfiChat clients listed above also support mobile platforms.

Code Editors & Development

Libraries & SDKs

Frameworks & Agents

RAG & Knowledge Bases

  • RAGFlow - RAG engine based on deep document understanding
  • R2R - Open-source RAG engine
  • MaxKB - Ready-to-use RAG chatbot
  • Minima - On-premises or fully local RAG
  • Chipper - AI interface with Haystack RAG
  • ARGO - RAG and deep research on Mac/Windows/Linux
  • Archyve - RAG-enabling document library
  • Casibase - AI knowledge base with RAG and SSO
  • BrainSoup - Native client with RAG and multi-agent automation

Bots & Messaging

Terminal & CLI

Productivity & Apps

Observability & Monitoring

  • Opik - Debug, evaluate, and monitor LLM applications
  • OpenLIT - OpenTelemetry-native monitoring for Ollama and GPUs
  • Lunary - LLM observability with analytics and PII masking
  • Langfuse - Open source LLM observability
  • HoneyHive - AI observability and evaluation for agents
  • MLflow Tracing - Open source LLM observability

Database & Embeddings

Infrastructure & Deployment

Cloud

Package Managers