Updated Intel LLM-Scaler-Omni Improves ComfyUI Performance For Arc Graphics
- News link: https://www.phoronix.com/news/Intel-LLM-Scaler-Omni-ComfyUI
For the past several months, Intel software engineers have been [1]quite busy with LLM-Scaler as part of Project Battlematrix. LLM-Scaler is a Docker-based solution for AI workloads on Intel graphics hardware, shipping an optimized vLLM stack and other AI frameworks. Out today is a new LLM-Scaler-Omni release that enhances ComfyUI performance on Intel hardware.
There have been quite frequent updates to [2]LLM-Scaler, expanding model coverage on Intel graphics hardware and delivering other new AI features, primarily for the Arc "Battlemage" GPUs. With today's llm-scaler-omni 0.1.0-b4 beta release, there is now SGLang Diffusion support. With SGLang Diffusion enabled, ComfyUI runs around 10% faster on a single Arc Graphics card.
A ten percent improvement at this stage is a nice gain, particularly with ComfyUI remaining quite popular. ComfyUI, for those unaware, is an open-source solution capable of generating videos, images, 3D assets, and audio with AI via Stable Diffusion pipelines, and it works with a wide variety of different models.
The new LLM-Scaler-Omni release also adds new ComfyUI workflows for Hunyuan-Video-1.5 (T2V, I2V, Multi-B60) and Z-Image. More details on this updated Intel release are available via [3]GitHub, while the pre-built containers are available from Docker Hub.
[1] https://www.phoronix.com/news/Intel-LLM-Scaler-1.0
[2] https://www.phoronix.com/search/llm-scaler
[3] https://github.com/intel/llm-scaler/releases/tag/omni-0.1.0-b4