News: 0001534717


ollama 0.6.2 Released With Support For AMD Strix Halo

([Programming] 46 Minutes Ago ollama 0.6.2)


ollama, the open-source software that makes it easy to run Llama 3, DeepSeek-R1, Gemma 3, and other large language models, is out with its newest release. ollama wraps the llama.cpp back-end for running a variety of LLMs while providing convenient integration with other desktop software.

With today's release of ollama 0.6.2 there is now support for AMD Strix Halo GPUs, a.k.a. the [1]Ryzen AI Max+ laptop / SFF desktop SoCs. The Ryzen AI Max+ appears quite impressive, though unfortunately we haven't yet had the opportunity to see how well it works under Linux. In any event it's good to see ollama 0.6.2 providing timely support for the Ryzen AI Max+ "Strix Halo" hardware.

The other focus of ollama 0.6.2 is providing a number of fixes around its Gemma 3 LLM support, including now supporting multiple images with Gemma 3.
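As a rough sketch of what the multi-image fix enables (assuming a local ollama install; the model tag and image paths here are hypothetical placeholders), a Gemma 3 prompt can now reference more than one image at once:

```shell
# Fetch the multimodal Gemma 3 model (tag is an assumption; check `ollama list` / the model library)
ollama pull gemma3

# ollama attaches local image paths found in the prompt as model inputs;
# with 0.6.2, Gemma 3 accepts multiple images in a single prompt.
ollama run gemma3 "Compare these two screenshots and summarize the differences: ./before.png ./after.png"
```

The same multi-image request can also be made through ollama's local HTTP API by passing several entries in the images array of a generate call.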

More details and downloads for the ollama 0.6.2 release via [2]GitHub.



[1] https://www.phoronix.com/news/AMD-Strix-Halo-On-Linux-Q

[2] https://github.com/ollama/ollama/releases/tag/v0.6.2



phoronix
