ollama/model/models
Daniel Hiltgen de9673ac3f
tokenizer: add byte fallback for SentencePiece BPE encoding (#15232)
* tokenizer: add byte fallback for SentencePiece BPE encoding

When BPE merging produces tokens not in the vocabulary, fall back to
encoding each UTF-8 byte as <0xHH> byte tokens instead of silently
dropping the character. Also teach Decode to convert <0xHH> tokens
back to raw bytes.

Fixes #15229, fixes #15231

* tokenizer fixes
2026-04-02 13:04:45 -07:00
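The byte-fallback behavior the commit message describes can be sketched roughly as follows. This is an illustrative Go sketch, not ollama's actual implementation: the `vocab` type, `encodeWithByteFallback`, and `decodeBytes` names are hypothetical, and the vocabulary IDs are made up for the example.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// vocab is a hypothetical token-to-ID table for illustration only.
type vocab map[string]int32

// encodeWithByteFallback returns vocabulary IDs for a merged piece.
// If the piece is not in the vocabulary, each of its UTF-8 bytes is
// emitted as a <0xHH> byte token instead of the character being
// silently dropped.
func encodeWithByteFallback(v vocab, piece string) []int32 {
	if id, ok := v[piece]; ok {
		return []int32{id}
	}
	var ids []int32
	for _, b := range []byte(piece) {
		tok := fmt.Sprintf("<0x%02X>", b)
		if id, ok := v[tok]; ok {
			ids = append(ids, id)
		}
	}
	return ids
}

// decodeBytes converts <0xHH> tokens back to raw bytes during
// decoding; all other tokens pass through unchanged.
func decodeBytes(tokens []string) string {
	var sb strings.Builder
	for _, t := range tokens {
		if len(t) == 6 && strings.HasPrefix(t, "<0x") && strings.HasSuffix(t, ">") {
			if n, err := strconv.ParseUint(t[3:5], 16, 8); err == nil {
				sb.WriteByte(byte(n))
				continue
			}
		}
		sb.WriteString(t)
	}
	return sb.String()
}

func main() {
	// "€" is not in this toy vocabulary, but its UTF-8 bytes
	// (0xE2 0x82 0xAC) have byte tokens.
	v := vocab{"hello": 1, "<0xE2>": 2, "<0x82>": 3, "<0xAC>": 4}
	fmt.Println(encodeWithByteFallback(v, "hello")) // [1]
	fmt.Println(encodeWithByteFallback(v, "€"))     // [2 3 4]
	fmt.Println(decodeBytes([]string{"<0xE2>", "<0x82>", "<0xAC>"}))
}
```

On decode, reassembling the three byte tokens yields the original `€`, which is how round-tripping is restored for characters outside the vocabulary.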
bert
deepseek2
deepseekocr
gemma2
gemma3
gemma3n
gemma4
glm4moelite
glmocr
gptoss
lfm2
llama
llama4
mistral3
mllama
nemotronh
nomicbert
olmo3
qwen2
qwen3
qwen3next model: add qwen3-next compatibility for legacy ssm_in projections (#15133) 2026-03-29 11:50:47 -07:00
qwen3vl model: support for qwen3.5 architecture (#14378) 2026-02-24 20:08:05 -08:00
qwen25vl
models.go Add support for gemma4 (#15214) 2026-04-02 11:33:33 -07:00