Mirror of https://github.com/ollama/ollama, synced 2026-04-23 08:45:14 +00:00.
Match the ollamarunner and OpenAI semantics: compute a raw, full-vocabulary log-softmax and rank the top-K tokens by probability. The computation is skipped on the GPU when the request doesn't ask for logprobs, so decode doesn't pay for it otherwise.
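A minimal sketch of those semantics: a numerically stable log-softmax over the full vocabulary, with the top-K entries ranked by probability. `logprobs_topk` is a hypothetical helper for illustration, not the actual runner code.

```python
import math

def logprobs_topk(logits, k):
    # Numerically stable log-softmax over the full vocabulary:
    # log p_i = logit_i - max(logits) - log(sum_j exp(logit_j - max(logits)))
    m = max(logits)
    lse = math.log(sum(math.exp(x - m) for x in logits))
    logps = [x - m - lse for x in logits]
    # Rank token ids by probability (equivalently, by logprob) and keep top-K.
    ranked = sorted(range(len(logits)), key=lambda i: logps[i], reverse=True)
    return [(i, logps[i]) for i in ranked[:k]]
```

Subtracting the max before exponentiating avoids overflow on large logits; ranking by logprob gives the same order as ranking by probability, since log is monotonic.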
Model directories:

- gemma3
- gemma4
- glm4_moe_lite
- llama
- nn
- qwen3
- qwen3_5
- qwen3_5_moe