24 Zen 4 CPU Cores, 146 Billion Transistors, 128 GB HBM3, Up To 8x Faster Than MI250X

AMD has just confirmed the specs of its Instinct MI300 ‘CDNA 3’ accelerator, which makes use of Zen 4 CPU cores in a 5nm 3D chiplet package.

AMD Instinct MI300 ‘CDNA 3’ Specs: 5nm Chiplet Design, 146 Billion Transistors, 24 Zen 4 CPU Cores, 128 GB HBM3

The latest specifications unveiled for the AMD Instinct MI300 accelerator confirm that this exascale APU will be a monster of a chiplet design. The chip combines several 5nm 3D-stacked chiplets in a single package, together housing an insane 146 billion transistors across its various core IPs, memory interfaces, interconnects, and more. The CDNA 3 architecture is the fundamental DNA of the Instinct MI300, but the APU also packs a total of 24 Zen 4 data center CPU cores and 128 GB of next-generation HBM3 memory running on a truly mind-blowing 8192-bit wide bus.
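The 8192-bit bus width alone hints at enormous memory bandwidth. As a rough sketch, peak bandwidth is simply bus width times per-pin data rate; note that AMD has not confirmed the MI300's per-pin rate, so the 6.4 Gb/s figure below is purely an illustrative assumption (it is the JEDEC HBM3 maximum), not a confirmed spec:

```python
# Back-of-the-envelope peak HBM3 bandwidth from the stated 8192-bit bus.
# The per-pin data rate is an ASSUMPTION (6.4 Gb/s, the JEDEC HBM3 max);
# AMD has not confirmed the actual rate for the MI300.

BUS_WIDTH_BITS = 8192   # stated by AMD
PIN_RATE_GBPS = 6.4     # assumed, not confirmed

def peak_bandwidth_gbs(bus_width_bits: float, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus pins * Gb/s per pin) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

print(f"{peak_bandwidth_gbs(BUS_WIDTH_BITS, PIN_RATE_GBPS):.1f} GB/s")
```

At the assumed rate this works out to roughly 6.5 TB/s; a slower pin rate would scale the figure down proportionally.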

During the AMD Financial Day 2022, the company confirmed that the MI300 will be a multi-chip and a multi-IP Instinct accelerator that not only features the next-gen CDNA 3 GPU cores but is also equipped with the next-generation Zen 4 CPU cores.

To enable greater than 2 exaflops of double precision processing power, the U.S. Department of Energy, Lawrence Livermore National Laboratory, and HPE have teamed up with AMD to design El Capitan, expected to be the world’s fastest supercomputer with delivery anticipated in early 2023. El Capitan will leverage next generation products that incorporate improvements from the custom processor design in Frontier.

  • Next generation AMD EPYC processors, codenamed “Genoa”, will feature the “Zen 4” processor core to support next generation memory and I/O sub systems for AI and HPC workloads
  • Next generation AMD Instinct GPUs based on new compute-optimized architecture for HPC and AI workloads will use next generation high bandwidth memory for optimum deep learning performance

This design will excel at AI and machine-learning data analysis to create models that are faster, more accurate, and capable of quantifying the uncertainty of their predictions.

via AMD

In its latest performance comparisons, AMD showcased the Instinct MI300 delivering an 8x boost in AI performance (TFLOPs) and a 5x boost in AI performance per watt (TFLOPs/watt) over the Instinct MI250X.
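Taken together, those two claimed ratios also imply something about power draw. Since performance per watt equals performance divided by power, dividing the 8x performance gain by the 5x efficiency gain gives the implied change in power consumption, as this quick sketch shows:

```python
# What AMD's claimed ratios imply about relative power draw.
# perf/watt = perf / power, so: power_ratio = perf_ratio / perf_per_watt_ratio

def implied_power_ratio(perf_ratio: float, perf_per_watt_ratio: float) -> float:
    """Relative power draw implied by a performance ratio and an efficiency ratio."""
    return perf_ratio / perf_per_watt_ratio

# AMD's claimed 8x AI performance and 5x AI performance-per-watt vs MI250X
print(implied_power_ratio(8.0, 5.0))  # 1.6
```

In other words, if both claims hold, the MI300 would draw about 60% more power than the MI250X in that AI workload, a plausible trade-off for a much larger multi-chip package.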

AMD will be utilizing both 5nm and 6nm process nodes for its Instinct MI300 ‘CDNA 3’ APUs. The chip will be outfitted with the next generation of Infinity Cache and feature the 4th Gen Infinity architecture, which enables CXL 3.0 ecosystem support. The Instinct MI300 accelerator will rock a unified memory APU architecture and new math formats, allowing for a 5x performance-per-watt uplift over CDNA 2, which is massive. AMD is also projecting over 8x the AI performance versus the CDNA 2-based Instinct MI250X accelerators. The CDNA 3 GPU's unified memory architecture will connect the CPU and GPU to a single HBM memory package, eliminating redundant memory copies while delivering a lower total cost of ownership (TCO).

AMD's Instinct MI300 APU accelerators are expected to be available by the end of 2023, around the same time as the deployment of the El Capitan supercomputer mentioned above.
