AMD Radeon Instinct MI100: New Details of the AMD HPC GPU

AMD's next HPC graphics card, the Radeon Instinct MI100 featuring the Arcturus GPU, has been spotted. The existence of AMD's Arcturus GPU was confirmed back in 2018, and two years later we are finally starting to get details on the specs of AMD's upcoming HPC/AI accelerator.

Radeon Instinct MI100 would offer 100 INT8 TOPS

The code name "Arcturus" comes from the red giant star that is the brightest in the Boötes constellation and one of the brightest stars visible in the night sky. Like Vega and Navi, also among the brightest stars in the night sky, the name follows a scheme that dates back to the creation of the Radeon Technologies Group, whose founding father Raja Koduri (former head of AMD RTG) put a lot of emphasis on bright stars when he first introduced Polaris.

We have previously seen support for the Arcturus GPU added to HWiNFO, in particular the XL variant. To our surprise, the new leaked variant 'D34303' is also based on the XL chip and would go on to power the Radeon Instinct MI100.

AMD MI100 HBM2 D34303 A1 XL 200W 32GB 1000M.

- 比屋定さんの戯れ言 @Komachi (@KOMACHI_ENSAKA) February 7, 2020

Characteristics:

  • Based on AMD's Arcturus XL GPU
  • 200W TDP
  • Up to 32GB of HBM2 memory
  • Reported HBM2 memory clocks between 1000-1200MHz

The Radeon Instinct MI100 has a TDP of 200W and is based on the XL variant of AMD's Arcturus GPU. The card also features 32GB of HBM2 memory clocked at 1.0-1.2 GHz. The MI60, by comparison, has 64 CUs and a 300W TDP, with a reported base clock of 1200 MHz and memory running at 1.0 GHz over a 4096-bit bus interface, pumping out 1 TB/s of bandwidth. There's a good chance that the final Arcturus GPU design could include Samsung's latest HBM2E 'Flashbolt' memory, which offers per-pin speeds of 3.2 Gbps for a bandwidth of up to 1.5 TB/s.
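Those bandwidth figures follow directly from the bus width and the effective per-pin data rate. Below is a minimal Python sketch of that arithmetic; the 4096-bit bus and the 2.0 Gbps / 3.2 Gbps effective pin rates are the reported values quoted above, not confirmed MI100 specifications.

```python
# Back-of-the-envelope HBM2 bandwidth check for the figures quoted above.
# Peak bandwidth (GB/s) = bus width (bits) * effective per-pin rate (Gbps) / 8.

def hbm_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8

# MI60-style setup: 4096-bit bus, 1.0 GHz memory clock (2.0 Gbps effective, DDR)
print(hbm_bandwidth_gb_s(4096, 2.0))   # 1024.0 GB/s, i.e. roughly 1 TB/s

# Hypothetical Arcturus card with Samsung HBM2E 'Flashbolt' at 3.2 Gbps per pin
print(hbm_bandwidth_gb_s(4096, 3.2))   # 1638.4 GB/s, in the ~1.5 TB/s-plus range
```

The exact figure for a shipping product would depend on the effective pin rate and stack configuration AMD settles on.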

The name of the Radeon Instinct MI100 itself hints at its headline performance figure, which would be around 100 INT8 TOPS. That is roughly a 66% increase in INT8 (AI/DNN) compute over the MI60. Assuming the usual halving per precision step, FP16 would land at around 50 TFLOPs, FP32 at 25 TFLOPs, and FP64 at 12.5 TFLOPs. We will keep you informed.
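That extrapolation is pure arithmetic on the rumored figure; a small sketch of the ladder, under the assumption that peak throughput halves at each step up in precision, looks like this:

```python
# Speculative throughput ladder implied by the "MI100" name, assuming the
# peak rate halves at each step up in precision (INT8 -> FP16 -> FP32 -> FP64).

int8_tops = 100.0                 # hinted at by the "MI100" name (rumor, not confirmed)
fp16_tflops = int8_tops / 2       # ~50 TFLOPs
fp32_tflops = fp16_tflops / 2     # ~25 TFLOPs
fp64_tflops = fp32_tflops / 2     # ~12.5 TFLOPs

print(f"INT8: {int8_tops} TOPS | FP16: {fp16_tflops} | "
      f"FP32: {fp32_tflops} | FP64: {fp64_tflops} TFLOPs")
```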

Source: Wccftech
