AMD Unveils Instinct MI200 ‘Aldebaran’ GPU, First 6nm MCM Product With 58 Billion Transistors, Over 14,000 Cores & 128 GB HBM2e Memory

AMD has officially announced its next-generation MI200 HPC GPU, codenamed Aldebaran, which uses the 6nm CDNA 2 architecture to deliver a massive jump in compute performance.

AMD Unveils Instinct MI200, Powering The Next-Gen Compute Powerhouse With First 6nm MCM GPU Technology & Over 95 TFLOPs FP32 Performance

AMD is officially first to market with multi-chip-module (MCM) GPU technology, and it is doing so with a flagship product: the Instinct MI200, codenamed Aldebaran. The AMD Aldebaran GPU will come in various forms and sizes, but all of them are based on the brand-new CDNA 2 architecture, the most refined evolution of the Vega compute lineage yet. Some of the main features, before we go into detail, are listed below:


  • AMD CDNA 2 architecture – 2nd Gen Matrix Cores accelerating FP64 and FP32 matrix operations, delivering up to 4X the peak theoretical FP64 performance vs. AMD previous-gen GPUs.
  • Leadership Packaging Technology – Industry-first multi-die GPU design with 2.5D Elevated Fanout Bridge (EFB) technology delivers 1.8X more cores and 2.7X higher memory bandwidth vs. AMD previous-gen GPUs, offering the industry’s best aggregate peak theoretical memory bandwidth at 3.2 terabytes per second (a quick check of these multipliers follows this list).
  • 3rd Gen AMD Infinity Fabric technology – Up to 8 Infinity Fabric links connect the AMD Instinct MI200 with 3rd Gen EPYC CPUs and other GPUs in the node to enable unified CPU/GPU memory coherency and maximize system throughput, allowing for an easier on-ramp for CPU codes to tap the power of accelerators.
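
As a sanity check of those gen-on-gen multipliers, here is a minimal sketch comparing the MI250X against the previous-gen MI100 figures listed in the table further down; the bandwidth values used are the exact, unrounded numbers behind the table's 1.23 TB/s and 3.2 TB/s entries.

```python
# Gen-on-gen multiplier check: MI250X vs. previous-gen MI100.
mi100 = {"cores": 7680, "fp64_tflops": 11.5, "bandwidth_gbs": 1228.8}
mi250x = {"cores": 14080, "fp64_tflops": 47.9, "bandwidth_gbs": 3276.8}

print(f"Cores:     {mi250x['cores'] / mi100['cores']:.2f}x")                  # ~1.83x -> "1.8X more cores"
print(f"FP64:      {mi250x['fp64_tflops'] / mi100['fp64_tflops']:.2f}x")      # ~4.17x -> "up to 4X" FP64
print(f"Bandwidth: {mi250x['bandwidth_gbs'] / mi100['bandwidth_gbs']:.2f}x")  # ~2.67x -> "2.7X higher"
```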

AMD Instinct MI200 GPU Die Shot:

Inside the AMD Instinct MI200 is an Aldebaran GPU featuring two dies, a primary and a secondary, each consisting of 8 shader engines for a total of 16 SEs. The compute units within each shader engine offer full-rate FP64, packed FP32, and a 2nd Generation Matrix Engine for FP16 & BF16 operations.

Each die, as such, is composed of 112 compute units or 7,168 stream processors, of which 110 per die are enabled on the flagship configuration. That adds up to a total of 220 compute units or 14,080 stream processors for the entire chip. The Aldebaran GPU is also powered by a new XGMI interconnect, and each chiplet features a VCN 2.6 engine and the main IO controller.
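
For reference, a minimal sketch of how those core counts fall out, assuming the 64 stream processors per compute unit that AMD has used since GCN:

```python
# Shader configuration math for Aldebaran (MI250X flagship configuration).
SP_PER_CU = 64               # stream processors per compute unit (standard since GCN)

cus_per_die_physical = 112   # full Aldebaran die
cus_per_die_active = 110     # enabled per die on the flagship part
dies = 2

print(cus_per_die_physical * SP_PER_CU)        # 7,168 SPs physically present per die
print(dies * cus_per_die_active)               # 220 CUs active across the package
print(dies * cus_per_die_active * SP_PER_CU)   # 14,080 SPs total
```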

Built on AMD CDNA 2 architecture, AMD Instinct MI200 series accelerators deliver leading application performance for a broad set of HPC workloads. The AMD Instinct MI250X accelerator provides up to 4.9X better performance than competitive accelerators for double precision (FP64) HPC applications and surpasses 380 teraflops of peak theoretical half-precision (FP16) for AI workloads to enable disruptive approaches in further accelerating data-driven research.

In terms of performance, AMD is touting various record wins in the HPC segment over NVIDIA’s A100 solution with up to 3x performance improvements in AMG.


As for DRAM, AMD has gone with eight HBM2e stacks, each on a 1024-bit interface, for an 8192-bit wide bus. Each interface supports 2 GB HBM2e DRAM dies, which gives up to 16 GB of memory per stack; with eight stacks in total, overall capacity comes to a whopping 128 GB. That's 48 GB more than the A100, which houses 80 GB of HBM2e memory. The memory clocks in at an insane 3.2 Gbps for an aggregate bandwidth of 3.2 TB/s, a full 1.2 TB/s more than the A100 80 GB and its 2 TB/s.
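
A minimal sketch of that capacity and bandwidth math, using only the figures quoted above:

```python
# HBM2e capacity and bandwidth math for the Instinct MI200.
stacks = 8
bits_per_stack = 1024     # one 1024-bit HBM2e interface per stack
gb_per_stack = 16         # eight 2 GB DRAM dies per stack
data_rate_gbps = 3.2      # per pin

bus_width_bits = stacks * bits_per_stack             # 8192-bit
capacity_gb = stacks * gb_per_stack                  # 128 GB
bandwidth_gbs = bus_width_bits * data_rate_gbps / 8  # ~3276.8 GB/s, marketed as 3.2 TB/s

print(bus_width_bits, capacity_gb, bandwidth_gbs)
```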

The AMD Instinct MI200 will be powering three top-tier supercomputers: the United States' exascale Frontier system, the European Union's pre-exascale LUMI system, and Australia's petascale Setonix system. The competition includes the A100 80 GB, which offers 19.5 TFLOPs of FP64 (Tensor), 156 TFLOPs of TF32 (Tensor) and 312 TFLOPs of FP16 (Tensor) compute power. But we are likely to hear about NVIDIA's own Hopper MCM GPU next year, so there's going to be heated competition between the two GPU juggernauts in 2022.
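
To put the head-to-head in perspective, here is a rough on-paper comparison using only the peak figures quoted in this article; real-world results will of course vary by workload.

```python
# On-paper MI250X vs. A100 80 GB comparison (peak figures quoted in this article).
a100 = {"fp64_tflops": 19.5, "fp16_tflops": 312, "bandwidth_tbs": 2.0, "vram_gb": 80}
mi250x = {"fp64_tflops": 47.9, "fp16_tflops": 383, "bandwidth_tbs": 3.2, "vram_gb": 128}

for metric in a100:
    ratio = mi250x[metric] / a100[metric]
    print(f"{metric}: {ratio:.2f}x in favour of the MI250X")
```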

AMD Instinct Accelerators

| Accelerator Name | AMD Instinct MI300 | AMD Instinct MI250X | AMD Instinct MI250 | AMD Instinct MI100 | AMD Radeon Instinct MI60 | AMD Radeon Instinct MI50 | AMD Radeon Instinct MI25 | AMD Radeon Instinct MI8 | AMD Radeon Instinct MI6 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GPU Architecture | TBA (CDNA 3) | Aldebaran (CDNA 2) | Aldebaran (CDNA 2) | Arcturus (CDNA 1) | Vega 20 | Vega 20 | Vega 10 | Fiji XT | Polaris 10 |
| GPU Process Node | Advanced Process Node | 6nm FinFET | 6nm FinFET | 7nm FinFET | 7nm FinFET | 7nm FinFET | 14nm FinFET | 28nm | 14nm FinFET |
| GPU Dies | 4 (MCM)? | 2 (MCM) | 2 (MCM) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) | 1 (Monolithic) |
| GPU Cores | 28,160? | 14,080 | 13,312 | 7,680 | 4,096 | 3,840 | 4,096 | 4,096 | 2,304 |
| GPU Clock Speed | TBA | 1700 MHz | ~1700 MHz | ~1500 MHz | 1800 MHz | 1725 MHz | 1500 MHz | 1000 MHz | 1237 MHz |
| FP16 Compute | TBA | 383 TFLOPs | 362.1 TFLOPs | 185 TFLOPs | 29.5 TFLOPs | 26.5 TFLOPs | 24.6 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP32 Compute | TBA | 95.7 TFLOPs | 90.5 TFLOPs | 23.1 TFLOPs | 14.7 TFLOPs | 13.3 TFLOPs | 12.3 TFLOPs | 8.2 TFLOPs | 5.7 TFLOPs |
| FP64 Compute | TBA | 47.9 TFLOPs | 45.3 TFLOPs | 11.5 TFLOPs | 7.4 TFLOPs | 6.6 TFLOPs | 768 GFLOPs | 512 GFLOPs | 384 GFLOPs |
| VRAM | TBA | 128 GB HBM2e | 128 GB HBM2e | 32 GB HBM2 | 32 GB HBM2 | 16 GB HBM2 | 16 GB HBM2 | 4 GB HBM1 | 16 GB GDDR5 |
| Memory Clock | TBA | 3.2 Gbps | 3.2 Gbps | 1200 MHz | 1000 MHz | 1000 MHz | 945 MHz | 500 MHz | 1750 MHz |
| Memory Bus | TBA | 8192-bit | 8192-bit | 4096-bit | 4096-bit | 4096-bit | 2048-bit | 4096-bit | 256-bit |
| Memory Bandwidth | TBA | 3.2 TB/s | 3.2 TB/s | 1.23 TB/s | 1 TB/s | 1 TB/s | 484 GB/s | 512 GB/s | 224 GB/s |
| Form Factor | TBA | OAM | OAM | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Full Length | Dual Slot, Half Length | Single Slot, Full Length |
| Cooling | TBA | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling | Passive Cooling |
| TDP | TBA | 500W | TBA | 300W | 300W | 300W | 300W | 175W | 150W |

The Aldebaran MI200 GPU will come in three configurations: the OAM-only MI250X and MI250, and the dual-slot PCIe MI210. AMD has only shared full specifications and performance figures for its MI250-class HPC GPUs. The MI250X features the full 14,080-core configuration and delivers 47.9, 95.7, and 383 TFLOPs of FP64, FP32, and FP16 respectively, while the MI250 features 13,312 cores with 45.3, 90.5, and 362.1 TFLOPs of FP64/FP32/FP16 performance. The memory configuration remains the same between the two GPU configurations.
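
Those peak rates line up with a simple cores-times-clock calculation; here is a minimal sketch, assuming the ~1700 MHz peak engine clock from the table above, 2 FLOPs per core per clock for full-rate FP64 (FMA), double that for packed FP32, and 8x the FP64 vector rate for the FP16 Matrix Engine:

```python
# Peak throughput math for the MI250X and MI250, derived from cores x clock.
def peak_tflops(cores, clock_mhz, flops_per_core_per_clock):
    return cores * clock_mhz * 1e6 * flops_per_core_per_clock / 1e12

for name, cores in (("MI250X", 14080), ("MI250", 13312)):
    clock_mhz = 1700                         # ~peak engine clock
    fp64 = peak_tflops(cores, clock_mhz, 2)  # full-rate FP64, FMA = 2 FLOPs/clock
    fp32 = fp64 * 2                          # packed FP32 doubles the vector rate
    fp16 = fp64 * 8                          # 2nd Gen Matrix Engine FP16
    print(f"{name}: FP64 {fp64:.1f} / FP32 {fp32:.1f} / FP16 {fp16:.1f} TFLOPs")
```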

AMD Instinct MI200 GPU Package:


