PyTorch 2.5 Released With Improved Intel GPU Support
- News link: https://www.phoronix.com/news/PyTorch-2.5-Released
PyTorch 2.5 is out today as the latest major update to this widely used machine learning library.
PyTorch 2.5 improves its Intel GPU support for both client and data center graphics products. PyTorch 2.5 adds initial support for Intel client GPUs on Windows and also enhances the existing Intel graphics support on Linux. Some of the highlights include:
- Expanded PyTorch hardware backend support matrix to include both Intel Data Center and Client GPUs.
- The implementation of SYCL kernels to enhance the coverage and execution of ATen operators on Intel GPUs, boosting performance in PyTorch eager mode.
- Enhanced Intel GPU backend of torch.compile to improve inference and training performance for a wide range of deep learning workloads.
PyTorch 2.5 also brings various Intel Xeon CPU performance optimizations.
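For those wanting to try the Intel GPU support, below is a minimal sketch of what usage looks like, assuming a PyTorch 2.5 build with the Intel GPU ("xpu") backend and Intel's compute runtime installed; the model and tensor shapes are illustrative rather than taken from the release notes:

    import torch
    import torch.nn as nn

    # Intel GPUs are exposed through PyTorch's "xpu" device type.
    device = "xpu" if torch.xpu.is_available() else "cpu"

    model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
    compiled_model = torch.compile(model)  # routed through the Intel GPU backend when running on xpu

    x = torch.randn(64, 512, device=device)
    with torch.no_grad():
        print(compiled_model(x).shape)

The same script should run unchanged on Windows or Linux builds that ship the Intel GPU backend, which is the point of folding Intel client GPUs into the standard device/backend support matrix.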
Some of the other PyTorch 2.5 changes include an NVIDIA cuDNN back-end for SDPA with H100 and newer GPUs, CPU back-end optimizations for TorchInductor, FP16 support on the CPU path for both eager mode and the TorchInductor CPP backend, support for auto-loading device extensions, and a variety of other enhancements.
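As a rough idea of how two of those items surface in user code, here is a hedged sketch; it assumes an H100-class GPU with a cuDNN-enabled PyTorch 2.5 build for the attention part (with CUDNN_ATTENTION as the backend name), the FP16 part runs on any CPU, and all shapes are arbitrary:

    import torch
    import torch.nn.functional as F
    from torch.nn.attention import SDPBackend, sdpa_kernel

    # Scaled dot-product attention pinned to the cuDNN backend (H100 or newer).
    if torch.cuda.is_available():
        q, k, v = (torch.randn(8, 16, 128, 64, device="cuda", dtype=torch.float16)
                   for _ in range(3))
        with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
            attn = F.scaled_dot_product_attention(q, k, v)

    # FP16 on the CPU path: eager mode and the TorchInductor CPP backend via torch.compile.
    lin = torch.nn.Linear(256, 256).to(dtype=torch.float16)
    x = torch.randn(32, 256, dtype=torch.float16)
    eager_out = lin(x)
    compiled_out = torch.compile(lin)(x)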
Downloads and more details on the PyTorch 2.5 release are available via [1]GitHub. New [2]PyTorch benchmarks are coming soon.
[1] https://github.com/pytorch/pytorch/releases/tag/v2.5.0
[2] https://openbenchmarking.org/test/pts/pytorch