AMD Job Posting Confirms More Details Around Their AI GPU Compute Stack Plans
([AMD] IREE + MLIR)
- Reference: 0001497872
- News link: https://www.phoronix.com/news/AMD-GPU-AI-Stack-Job-Details
A Friday evening job posting has confirmed and reinforced details around AMD's future AI GPU compute stack, presumably what's been referred to as the [1]Unified AI Software Stack.
The Unified AI Software Stack is to support AMD's full range of hardware from CPUs to GPUs and, most recently, the NPUs within Ryzen AI parts. The Unified AI Software Stack will help with offloading work to the most appropriate device/accelerator and provide a more cohesive developer experience than what's currently offered by AMD software.
Posted to the [2]LLVM Discourse was the job posting showing that AMD is recruiting an AI GPU compiler engineer with an MLIR and LLVM focus. We've known MLIR -- LLVM's modern intermediate representation -- is to be the common IR of the Unified AI Software Stack. MLIR has also played a role in AMD's [3]Peano compiler for Ryzen NPUs and the like.
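For readers unfamiliar with MLIR, here is a rough illustrative sketch of what its textual IR looks like -- a trivial function using the upstream func and arith dialects. This is a generic example, not anything from AMD's stack:

```mlir
// A simple MLIR function adding two 32-bit floats,
// written against the standard func and arith dialects.
func.func @add(%a: f32, %b: f32) -> f32 {
  %sum = arith.addf %a, %b : f32
  return %sum : f32
}
```

Compilers built on MLIR progressively lower such dialect operations toward hardware-specific targets, which is what makes it attractive as a common IR spanning CPUs, GPUs, and NPUs.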
The job posting also notes that IREE is to play a central role in their future AI compute stack too. IREE, the Intermediate Representation Execution Environment, is built atop MLIR for lowering machine learning models into a unified IR. IREE already supports ONNX, PyTorch, TensorFlow, JAX, and more. IREE also already has an AMD ROCm back-end and can additionally target Vulkan as well as Apple Metal and NVIDIA CUDA.
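As a rough sketch of the workflow described above -- importing a model, compiling it for a chosen back-end, then executing it with the IREE runtime -- something like the following, assuming the IREE tools are installed. The model file, entry function name, and input shape are hypothetical placeholders:

```shell
# Import a hypothetical ONNX model into MLIR form
# (iree-import-onnx ships with IREE's Python packages).
iree-import-onnx model.onnx -o model.mlir

# Compile the MLIR input for the AMD ROCm back-end;
# other targets include vulkan-spirv, metal-spirv, and cuda.
iree-compile model.mlir \
    --iree-hal-target-backends=rocm \
    -o model.vmfb

# Run the compiled module with the IREE runtime; the entry
# function name and input spec here are illustrative only.
iree-run-module --module=model.vmfb \
    --function=main \
    --input="1x3x224x224xf32=0"
```

The same compiled artifact format (.vmfb) is used across back-ends, which is part of how IREE provides the "unified" coverage from datacenter GPUs down to mobile devices.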
Those unfamiliar with IREE can visit [4]IREE.dev. I have covered IREE in the past, such as for [5]machine learning acceleration with Vulkan, and it has been mentioned in the context of AMD's AI software efforts. [6]AMD's Nod.ai acquisition was to recruit engineering talent around not only MLIR but IREE too.
Anyhow, the [7]job posting sums up the new AI GPU compiler development engineer position as:
"We are building IREE as an open-source compiler and runtime solution to productionize ML for a variety of use cases and hardware targets: https://iree.dev/. In particular, we aim to provide broad and performant GPU coverage, from datacenter to mobile, via a unified open-source software stack. Our team develops an end-to-end AI solution: from ML framework integration, down to generating efficient kernels."
Great seeing them reaffirm their interest from "datacenter to mobile" and thus this compiler/runtime software is likely part of the Unified AI Software Stack effort. And, of course, this in-development software stack will be open-source. It's going to be very interesting to see how well this future AMD AI compute stack performs and exactly how well-rounded the support will be across their different product lines.
[1] https://www.phoronix.com/news/Unified-AI-Software-MLIR-SPIR-V
[2] https://discourse.llvm.org/t/ai-gpu-compiler-engineer-amd-mlir-llvm/82495
[3] https://www.phoronix.com/news/AMD-Peano-LLVM-Ryzen-AI
[4] https://iree.dev/#workflow-overview
[5] https://www.phoronix.com/news/Google-IREE-ML-Vulkan
[6] https://www.phoronix.com/news/AMD-Acquires-Nod-AI
[7] https://careers.amd.com/careers-home/jobs/54684?lang=en-us