Mirror of https://github.com/ollama/ollama, synced 2026-04-23 08:45:14 +00:00.
Replace the hardcoded `Encode(prompt, true)` with `Encode(prompt, r.Tokenizer.AddBOS())` so the pipeline respects each model's tokenizer configuration. Models with `add_bos_token=true` (gemma3, llama) are unchanged: the tokenizer still prepends BOS. Models with `bos_token=null` (qwen3, qwen3.5) are also unchanged: the BOS guard (`vocab.BOS >= 0`) already prevented prepending regardless of the flag. This aligns the pipeline with the `/v1/tokenize` endpoint, which already uses `Tokenizer.AddBOS()`.
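The interaction between the per-model `AddBOS()` flag and the BOS guard can be sketched as follows. This is a minimal illustration, not the actual ollama code: the `Vocab` and `Tokenizer` types, field names, and the byte-per-token `Encode` stub are assumptions modeled on the identifiers in the commit message.

```go
package main

import "fmt"

// Vocab is a hypothetical stand-in for the model vocabulary.
type Vocab struct {
	BOS int // BOS token id; < 0 means the model defines no BOS token
}

// Tokenizer is a hypothetical stand-in for the runner's tokenizer.
type Tokenizer struct {
	vocab  Vocab
	addBOS bool // from the model's add_bos_token setting
}

// AddBOS reports whether this model's tokenizer is configured to
// prepend a BOS token.
func (t *Tokenizer) AddBOS() bool { return t.addBOS }

// Encode tokenizes prompt (stubbed here as one id per byte) and,
// when asked to AND the vocab actually defines a BOS token,
// prepends it. The vocab.BOS >= 0 check is the "BOS guard".
func (t *Tokenizer) Encode(prompt string, addBOS bool) []int {
	ids := make([]int, 0, len(prompt)+1)
	if addBOS && t.vocab.BOS >= 0 {
		ids = append(ids, t.vocab.BOS)
	}
	for _, b := range []byte(prompt) {
		ids = append(ids, int(b))
	}
	return ids
}

func main() {
	gemma := &Tokenizer{vocab: Vocab{BOS: 2}, addBOS: true}
	qwen := &Tokenizer{vocab: Vocab{BOS: -1}, addBOS: true}

	// Before the change the caller passed a hardcoded true;
	// after it, the model's own configuration decides.
	fmt.Println(gemma.Encode("hi", gemma.AddBOS())) // BOS prepended
	fmt.Println(qwen.Encode("hi", qwen.AddBOS()))   // guard skips BOS
}
```

Note that for the qwen-style model the result is the same whether the caller passes `true` or `AddBOS()`, because the guard fires first; this is why the commit can state that both model families are behaviorally unchanged.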
Top-level directories in the repository:

- agent
- cmd
- create
- imagegen
- mlxrunner
- models
- server
- tokenizer
- tools