# MLX Engine
Experimental MLX backend for running models on Apple Silicon and CUDA.
## Build

```shell
go build -tags mlx -o engine ./x/imagegen/cmd/engine
```
## Text Generation

```shell
./engine -model /path/to/model -prompt "Hello" -max-tokens 100
```
Options:

- `-temperature` - sampling temperature (default 0.7)
- `-top-p` - nucleus sampling (default 0.9)
- `-top-k` - top-k sampling (default 40)
Supported model families: Llama, Gemma3, GPT-OSS.
## Image Generation

```shell
./engine -zimage -model /path/to/z-image -prompt "a cat" -output cat.png
```
Options:

- `-width`, `-height` - image dimensions (default 1024x1024)
- `-steps` - denoising steps (default 9)
- `-seed` - random seed (default 42)
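Putting those flags together, an invocation that overrides the defaults might look like the following (the model path, prompt, and values are illustrative placeholders):

```shell
./engine -zimage -model /path/to/z-image -prompt "a cat wearing a hat" \
  -width 768 -height 768 -steps 20 -seed 7 -output cat.png
```

Because the seed is fixed, repeating the same command with the same model should reproduce the same image.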