
AMD Radeon Instinct MI100 ‘CDNA GPU’ Alleged Performance Numbers Show It Faster Than NVIDIA’s A100 in FP32 Compute, Impressive Performance/Value but Beaten by Ampere in AI & HPC Workloads

Alleged performance numbers and details of AMD’s next-generation CDNA GPU-based Radeon Instinct MI100 accelerator have been leaked by AdoredTV. In an exclusive post, AdoredTV covers performance benchmarks of the upcoming HPC GPU against NVIDIA’s Volta and Ampere GPUs.

AMD Radeon Instinct MI100 ‘CDNA’ GPU performance benchmarks leak, reportedly faster than NVIDIA’s Ampere A100 in FP32 Compute with better performance / value

AdoredTV claims that the slides it received are from an official AMD Radeon Instinct MI100 presentation. The versions published by the source appear to have been modified from the originals, but the details remain intact. In our previous post, we confirmed that the Radeon Instinct MI100 GPU would hit the market in 2H 2020. The AdoredTV slides shed some more light on the launch plans and server configurations we can expect from AMD and its partners in 2020 and beyond.


AMD Radeon Instinct MI100 1U server specifications

First, AMD plans to unveil an HPC-specific 1U server with a 2P (dual-socket) design featuring two AMD EPYC CPUs, based on either the Rome or Milan generation. Each EPYC CPU connects to two Radeon Instinct MI100 accelerators via the 2nd-generation Infinity Fabric interconnect. The four GPUs deliver a sustained 136 TFLOPs of FP32 (SGEMM) throughput, which works out to approximately 34 TFLOPs of FP32 compute per GPU. Each Radeon Instinct MI100 GPU has a TDP of 300W.

Additional specifications include a total GPU PCIe bandwidth of 256 GB/s, made possible by the PCIe Gen 4 protocol. The combined memory bandwidth of the four GPUs is 4.9 TB/s, which suggests that AMD is using HBM2e DRAM dies (each GPU delivers 1.225 TB/s of bandwidth). The combined memory pool is 128 GB, or 32 GB per GPU. This suggests that AMD is sticking with four HBM2 stacks per GPU, with each stack containing eight DRAM dies (8-Hi). It appears that XGMI is not offered on standard configurations and is limited to specialized 1U racks.
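As a quick sanity check on the leaked 1U figures, the per-GPU numbers quoted above fall straight out of dividing the quad-GPU totals by four. A minimal Python sketch (all totals taken from the leaked slides):

```python
# Back-of-the-envelope check of the leaked 1U quad-GPU figures.
# All totals come from the AdoredTV slides; per-GPU values are
# simply the quad totals divided by the GPU count.

NUM_GPUS = 4

quad_fp32_tflops = 136.0  # sustained FP32 (SGEMM), all four GPUs
quad_pcie_gbps = 256.0    # total PCIe Gen 4 bandwidth
quad_hbm_tbps = 4.9       # combined HBM2 memory bandwidth
quad_mem_gb = 128.0       # combined memory pool

print(f"FP32 per GPU:   {quad_fp32_tflops / NUM_GPUS:.0f} TFLOPs")  # 34 TFLOPs
print(f"PCIe per GPU:   {quad_pcie_gbps / NUM_GPUS:.0f} GB/s")      # 64 GB/s
print(f"HBM per GPU:    {quad_hbm_tbps / NUM_GPUS:.3f} TB/s")       # 1.225 TB/s
print(f"Memory per GPU: {quad_mem_gb / NUM_GPUS:.0f} GB")           # 32 GB
```

The 1.225 TB/s per-GPU figure is what points toward HBM2e, since four stacks of standard HBM2 top out around 1 TB/s, as on the MI50/MI60.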

In terms of availability, the 1U server with AMD EPYC (Rome / Milan) HPC CPUs would launch by December 2020, while an Intel Xeon variant is expected to launch in February 2021.


AMD Radeon Instinct MI100 3U server specifications

The second, 3U server is expected to launch in March 2021 and will offer even more powerful specifications, with 8 Radeon Instinct MI100 GPUs connected to two EPYC CPUs. Each group of four Instinct MI100s is interconnected via XGMI (100 GB/s bidirectional), for a total quad bandwidth of 1.2 TB/s. The eight Instinct accelerators deliver a combined 272 TFLOPs of FP32 compute, 512 GB/s of PCIe bandwidth, 9.8 TB/s of HBM bandwidth, and 256 GB of DRAM capacity. The rack has a rated power consumption of 3 kW.
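The 3U totals are internally consistent with the 1U slide: every figure is exactly double the quad-GPU numbers, i.e. eight of the same ~34 TFLOP / 32 GB GPUs. A short sketch confirming the arithmetic:

```python
# Per-GPU figures derived from the leaked 1U quad-GPU totals.
per_gpu_fp32_tflops = 34.0   # FP32 (SGEMM)
per_gpu_pcie_gbps = 64.0     # PCIe Gen 4
per_gpu_hbm_tbps = 1.225     # HBM bandwidth
per_gpu_mem_gb = 32.0        # memory capacity

n = 8  # GPUs in the 3U server

print(f"FP32:   {n * per_gpu_fp32_tflops:.0f} TFLOPs")  # 272 TFLOPs
print(f"PCIe:   {n * per_gpu_pcie_gbps:.0f} GB/s")      # 512 GB/s
print(f"HBM:    {n * per_gpu_hbm_tbps:.1f} TB/s")       # 9.8 TB/s
print(f"Memory: {n * per_gpu_mem_gb:.0f} GB")           # 256 GB
```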

AMD Radeon Instinct Accelerators 2020

| Accelerator name | Radeon Instinct MI6 | Radeon Instinct MI8 | Radeon Instinct MI25 | Radeon Instinct MI50 | Radeon Instinct MI60 | Radeon Instinct MI100 |
|---|---|---|---|---|---|---|
| GPU architecture | Polaris 10 | Fiji XT | Vega 10 | Vega 20 | Vega 20 | Arcturus |
| GPU process node | 14nm FinFET | 28nm | 14nm FinFET | 7nm FinFET | 7nm FinFET | 7nm FinFET |
| GPU cores | 2304 | 4096 | 4096 | 3840 | 4096 | 8192? |
| GPU clock speed | 1237 MHz | 1000 MHz | 1500 MHz | 1725 MHz | 1800 MHz | 1334 MHz? |
| FP16 compute | 5.7 TFLOPs | 8.2 TFLOPs | 24.6 TFLOPs | 26.5 TFLOPs | 29.5 TFLOPs | ~50 TFLOPs |
| FP32 compute | 5.7 TFLOPs | 8.2 TFLOPs | 12.3 TFLOPs | 13.3 TFLOPs | 14.7 TFLOPs | ~25 TFLOPs |
| FP64 compute | 384 GFLOPs | 512 GFLOPs | 768 GFLOPs | 6.6 TFLOPs | 7.4 TFLOPs | ~12.5 TFLOPs |
| Memory clock | 1750 MHz | 500 MHz | 472 MHz | 500 MHz | 500 MHz | TBD |
| Memory bus | 256-bit | 4096-bit | 2048-bit | 4096-bit | 4096-bit | 4096-bit |
| Memory bandwidth | 224 GB/s | 512 GB/s | 484 GB/s | 1 TB/s | 1 TB/s | TBD |
| Form factor | Single slot, full length | Double slot, half length | Double slot, full length | Double slot, full length | Double slot, full length | Double slot, full length |
| Cooling | Passive | Passive | Passive | Passive | Passive | Passive? |
| TDP | 150W | 175W | 300W | 300W | 300W | ~200W (test board) |

AMD’s Radeon Instinct MI100 ‘CDNA GPU’ performance numbers, an FP32 powerhouse in the making?

In terms of performance, the AMD Radeon Instinct MI100 was compared to the NVIDIA Volta V100 and the NVIDIA Ampere A100 GPU accelerators. Interestingly, the slides list a 300W Ampere A100 accelerator, even though no such configuration exists; the A100 ships in two flavors, a 400W configuration in the SXM form factor and a 250W configuration in the PCIe form factor, so the slides appear to be based on a hypothetical A100 configuration rather than an actual variant.

According to the benchmarks, the Radeon Instinct MI100 delivers about 13% better FP32 performance than the Ampere A100 and more than a 2x improvement over the Volta V100 GPUs. Performance/value is also compared, with the MI100 offering approximately 2.4x better value than the V100S and 50% better value than the Ampere A100. Performance scaling in ResNet is also shown to be almost linear, even with up to 32-GPU configurations, which is quite impressive.

AMD Radeon Instinct MI100 vs NVIDIA’s Ampere A100 HPC Accelerator (Image Credits: AdoredTV):

That said, the slides also mention that AMD will provide much better performance and value in three specific segments: Oil & Gas, Academia, and HPC & Machine Learning. In the rest of the HPC workloads, such as FP64 compute, AI, and data analytics, NVIDIA's A100 accelerator will provide far superior performance. NVIDIA also has the advantage of its Multi-Instance GPU (MIG) architecture over AMD. Performance statistics show 2.5x better FP64 performance, 2x better FP16 performance, and twice the tensor performance thanks to the latest-generation Tensor cores on the Ampere A100 GPU.

One thing to emphasize is that AMD has not mentioned NVIDIA's TF32 numbers anywhere in the benchmarks. With TF32, NVIDIA's Ampere A100 offers up to 156 TFLOPs of horsepower, so it seems AMD simply wanted a like-for-like FP32 benchmark comparison against the Ampere A100. The Radeon Instinct MI100 looks like a decent HPC offering if the performance and value figures hold up at launch.
