ihateninjas / ollama
Mirror of https://github.com/ollama/ollama, synced 2026-04-23 08:45:14 +00:00.
5268 commits · 387 branches · 508 tags · 354 MiB
Daniel Hiltgen · e823bff873 · 2026-04-07 08:12:36 -07:00

gemma4: enable flash attention (#15378)

Backport GGML kernels so we can enable flash attention for the gemma 4 model on Metal and CUDA.
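The backported kernels only matter when flash attention is actually switched on at runtime. A minimal sketch of trying this out, assuming the standard upstream `OLLAMA_FLASH_ATTENTION` environment-variable toggle (documented in Ollama's FAQ, not stated in this commit) and a local install:

```shell
# Enable flash attention for supported backends (Metal / CUDA),
# then start the Ollama server; assumes `ollama` is on PATH.
export OLLAMA_FLASH_ATTENTION=1
ollama serve
```

Whether the flag takes effect for a given model depends on the backend and model architecture; the server logs report the resolved setting at startup.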