AMD ZenDNN 5.1 Released For Enhancing AI Inference Performance On EPYC CPUs

Following last year's [1]AMD ZenDNN 5.0 release alongside the EPYC Turin launch, which brought [2]big performance improvements for CPU-based inferencing with this open-source library compatible with Intel's oneDNN, today marks the availability of ZenDNN 5.1 as the next update.

ZenDNN 5.1 updates this AMD Zen-optimized Deep Neural Network Library for CPU-based AI inferencing and is tuned for use with TensorFlow 2.19 and PyTorch 2.7.
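
For PyTorch users, the ZenDNN path is typically exposed through AMD's zentorch plugin. The following is only a minimal sketch, assuming the zentorch package is installed alongside PyTorch 2.7 and that it registers the "zentorch" torch.compile backend as described in AMD's documentation; the ResNet-50 model is just a stand-in workload.

    # Sketch: run a PyTorch model through the ZenDNN-backed "zentorch"
    # torch.compile backend on an EPYC CPU (assumes `pip install zentorch`).
    import torch
    import torchvision.models as models
    import zentorch  # importing the plugin registers the "zentorch" backend

    model = models.resnet50(weights=None).eval()

    # Compile so that eligible operators are lowered to ZenDNN kernels.
    compiled = torch.compile(model, backend="zentorch")

    # BF16 autocast on CPU mirrors the BF16 inference paths discussed above.
    with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        out = compiled(torch.randn(1, 3, 224, 224))
    print(out.shape)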

ZenDNN 5.1 puts a focus on better optimizing large-scale recommender systems such as DLRMv2 and DIEN. Concat optimizations deliver around a 28% performance gain for the DIEN BF16 model.
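
As background for why Concat is such a hot operator in these workloads, DLRM- and DIEN-style recommenders concatenate many embedding lookups and dense features before the interaction MLP. The toy PyTorch module below is purely illustrative, with made-up feature counts and sizes, to show the kind of torch.cat-heavy graph such optimizations target.

    # Purely illustrative: a recommender-style feature interaction where
    # many embedding lookups are concatenated before an MLP.
    import torch
    import torch.nn as nn

    class ToyRecommender(nn.Module):
        def __init__(self, num_sparse=8, vocab=1000, emb_dim=16, dense_dim=13):
            super().__init__()
            self.embeddings = nn.ModuleList(
                nn.Embedding(vocab, emb_dim) for _ in range(num_sparse)
            )
            self.mlp = nn.Sequential(
                nn.Linear(num_sparse * emb_dim + dense_dim, 64),
                nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, sparse_ids, dense):
            # One lookup per categorical feature, then one wide concat --
            # the op pattern that Concat optimizations speed up.
            feats = [emb(sparse_ids[:, i]) for i, emb in enumerate(self.embeddings)]
            x = torch.cat(feats + [dense], dim=1)
            return self.mlp(x)

    model = ToyRecommender().eval()
    ids = torch.randint(0, 1000, (4, 8))
    dense = torch.randn(4, 13)
    print(model(ids, dense).shape)  # torch.Size([4, 1])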

ZenDNN 5.1 also introduces new operator fusions for MatMul + BiasAdd + Tanh and MatMul + BiasAdd + Sigmoid. The release additionally brings a new BF16/FP32 matrix multiplication kernel that can yield better performance with the DIEN model, and adds Ahead-Of-Time (AOT) reordering for MatMul kernels across the INT8 / BF16 / FP32 data types.
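
To illustrate the kind of subgraph those fusions target, a bias-adding Linear layer followed by a Tanh (or Sigmoid) activation is exactly the MatMul + BiasAdd + activation chain that a graph pass can collapse into one fused kernel. The sketch below only shows the eager-mode pattern in PyTorch, not ZenDNN's internal fusion pass; the module and its sizes are hypothetical.

    # Sketch of the fusible pattern: nn.Linear (MatMul + BiasAdd)
    # followed by Tanh. The fusion itself is done by the library's graph
    # passes; this code only shows the source pattern.
    import torch
    import torch.nn as nn

    class GateBlock(nn.Module):
        def __init__(self, in_features=256, out_features=128):
            super().__init__()
            self.proj = nn.Linear(in_features, out_features, bias=True)

        def forward(self, x):
            # MatMul + BiasAdd + Tanh -> candidate for a single fused op
            return torch.tanh(self.proj(x))

    x = torch.randn(32, 256, dtype=torch.bfloat16)
    block = GateBlock().to(torch.bfloat16).eval()
    with torch.no_grad():
        y = block(x)
    print(y.dtype, y.shape)  # torch.bfloat16 torch.Size([32, 128])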

Downloads and more details on today's AMD ZenDNN 5.1 feature release are available via [3]GitHub.



[1] https://www.phoronix.com/news/AMD-ZenDNN-5.0-Released

[2] https://www.phoronix.com/news/AMD-ZenDNN-5.0-400p-Performance

[3] https://github.com/amd/ZenDNN/releases/tag/v5.1



phoronix
