Mirror of https://github.com/ollama/ollama, synced 2026-04-23 08:45:14 +00:00
Gemma 4 prompts differ when thinking is disabled, depending on model size: the 26b/31b variants emit an empty thought block, while e2b/e4b do not. Before #15490, our shared Gemma 4 renderer effectively matched the e2b behavior. #15490 changed it to always emit the empty thought block, which regressed e2b/e4b nothink behavior and led to #15536 (and possibly other issues).

This change restores the previous shared behavior by removing the empty trailing thought block. It also renames the checked-in upstream chat templates so the e2b and 31b fixtures are tracked separately. A follow-up will split Gemma 4 rendering by model size.

Fixes: #15536