Intel LLM Scaler vLLM Update Supports More Models
([Intel] 6 Hours Ago, llm-scaler-vllm beta 0.10.2-b6)
- Reference: 0001594628
- News link: https://www.phoronix.com/news/Intel-LLM-Scaler-vLLM-0.10.2-b6
Intel software engineers continue to be hard at work on [1]LLM-Scaler as their solution for running vLLM on Intel GPUs in a Docker containerized environment. A new beta release of LLM-Scaler built around vLLM was released overnight with support for running more large language models.
Since the "LLM-Scaler 1.0" debut of the project back in August there have been frequent updates for expanding LLM coverage on Intel GPUs and exposing more features for harnessing the AI compute power on Intel graphics hardware. The versioning scheme though remains a mess with today's test version being "llm-scaler-vllm beta release 0.10.2-b6" even with "1.0" previously being announced.
As for the changes with today's llm-scaler-vllm beta update, they include:
- MoE-Int4 support for Qwen3-30B-A3B
- Bpe-Qwen tokenizer support
- Enable Qwen3-VL Dense/MoE models
- Enable Qwen3-Omni models
- MinerU 2.5 support
- Enable Whisper transcription models
- Fix MiniCPM-V 4.5 OOM issue and output error
- Enable ERNIE-4.5-VL models
- Enable Glyph-based GLM-4.1V-9B-Base
Those interested in using vLLM on Intel GPUs via this Docker environment can find all the details on this new beta update via [2]GitHub. The Docker image is available as intel/llm-scaler-vllm:0.10.2-b6.
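Assuming the container runs vLLM's standard OpenAI-compatible server, as upstream vLLM does, a minimal Python sketch for querying a model served from this image might look like the following. The host port 8000 and the model id "Qwen/Qwen3-30B-A3B" are illustrative assumptions based on this release's changelog, not anything confirmed by the release notes:

    # Minimal sketch: query a vLLM OpenAI-compatible endpoint assumed to be
    # exposed by the intel/llm-scaler-vllm:0.10.2-b6 container.
    # The port (8000) and model id below are illustrative assumptions.
    import requests

    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "Qwen/Qwen3-30B-A3B",  # e.g. the MoE model from this release's changelog
            "messages": [{"role": "user", "content": "Summarize what vLLM does."}],
            "max_tokens": 64,
        },
        timeout=120,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])

Since vLLM speaks the OpenAI API, any standard OpenAI client library should also work here by pointing its base URL at the container's endpoint.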
[1] https://www.phoronix.com/news/Intel-LLM-Scaler-1.0
[2] https://github.com/intel/llm-scaler/releases/tag/vllm-0.10.2-b6
Since the "LLM-Scaler 1.0" debut of the project back in August there have been frequent updates for expanding LLM coverage on Intel GPUs and exposing more features for harnessing the AI compute power on Intel graphics hardware. The versioning scheme though remains a mess with today's test version being "llm-scaler-vllm beta release 0.10.2-b6" even with "1.0" previously being announced.
As for the changes with today's llm-scaler-vllm beta update, they include:
- MoE-Int4 support for Qwen3-30B-A3B
- Bpe-Qwen tokenizer support
- Enable Qwen3-VL Dense/MoE models
- Enable Qwen3-Omni models
- MinerU 2.5 Support
- Enable whisper transcription models
- Fix minicpmv4.5 OOM issue and output error
- Enable ERNIE-4.5-vl models
- Enable Glyph based GLM-4.1V-9B-Base
This new beta update for those interested in using vLLM on Intel GPUs via this Docker environment can find all the details on [2]GitHub . The Docker image is available via intel/llm-scaler-vllm:0.10.2-b6 .
[1] https://www.phoronix.com/news/Intel-LLM-Scaler-1.0
[2] https://github.com/intel/llm-scaler/releases/tag/vllm-0.10.2-b6