
The Nvidia H100 80 GB PCIe is displayed in Japan for 34,690 euros

The recently announced Nvidia H100 with 80 GB of memory has been listed in Japan, and it is not cheap at all: as the title says, it costs a trifling €34,690, making it the most expensive GPU on the market today. It is, obviously, also the best money can buy for the AI and machine-learning market, so don't picture anyone spending that much money to play "Fornai" or the Aristoputus (yes, it exists).

For this price you get a GPU that integrates no fewer than 80 billion transistors, thanks to the most advanced manufacturing process of the moment: TSMC's 4 nm node. Its graphics chip, the Nvidia GH100, which retains a monolithic design, offers a configuration of 18,432 CUDA cores accompanied by no less than 80 GB of HBM3 memory, capable of delivering a bandwidth of 3 TB/s, 50 percent more than the Nvidia A100.
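As a quick sanity check of that bandwidth claim, a minimal Python sketch; the A100 80 GB baseline of 2.0 TB/s is an assumption, derived from the "50 percent more" relationship stated above:

```python
# Numbers taken from the article; the A100 baseline of 2.0 TB/s is assumed
# from the stated "50 percent more" relationship.
h100_bw_tbs = 3.0   # H100 HBM3 bandwidth quoted in the text
a100_bw_tbs = 2.0   # A100 80 GB HBM2e bandwidth (assumed baseline)

gain_pct = (h100_bw_tbs / a100_bw_tbs - 1) * 100
print(f"H100 bandwidth gain over A100: {gain_pct:.0f}%")  # -> 50%
```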

Nvidia H100 - Features

In terms of performance, we are talking about 60 TFLOPS in FP64/FP32, 1,000 TFLOPS in TF32 and 2,000 TFLOPS in FP16, implying triple the performance of the Nvidia A100. As for FP8 performance, multiply that by 6: it reaches 4,000 TFLOPS. To keep up with this increase in power, the card also adopts the PCI Express 5.0 interface, achieving a bandwidth of 128 GB/s (vs. 64 GB/s), along with 4th-generation NVLink interconnect technology, which offers a bandwidth of 900 GB/s (+50%).
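The PCIe figures quoted above can be reproduced with simple link arithmetic; a sketch under the assumption that the article's 128 GB/s and 64 GB/s refer to bidirectional x16 bandwidth, ignoring link-encoding overhead:

```python
# PCIe bandwidth arithmetic behind the quoted figures (assumption: the
# article's numbers are bidirectional x16 bandwidth, encoding overhead ignored).
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int = 16,
                       bidirectional: bool = True) -> float:
    gbs_one_way = gt_per_s * lanes / 8       # 8 transfers per byte
    return gbs_one_way * (2 if bidirectional else 1)

print(pcie_bandwidth_gbs(32.0))  # PCIe 5.0: 32 GT/s per lane -> 128.0 GB/s
print(pcie_bandwidth_gbs(16.0))  # PCIe 4.0: 16 GT/s per lane -> 64.0 GB/s
```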

All this comes at a high energy cost: we are talking about a TDP of 700 W, a huge leap from the 400 W consumed by the top-end Nvidia A100 with 80 GB of HBM2e memory. In other words, power consumption grows by 75% in exchange for a performance boost of between 300 and 600% and 50% more bandwidth.
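Even with that jump in TDP, the article's own figures imply a net efficiency gain; a minimal sketch using only the numbers quoted above (the 3x multiple is the FP64/FP32/TF32/FP16 claim):

```python
# Perf-per-watt sketch using only the article's figures.
h100_tdp_w, a100_tdp_w = 700, 400
perf_multiple = 3.0   # article's throughput multiple vs the A100

power_growth_pct = (h100_tdp_w / a100_tdp_w - 1) * 100   # 75%
perf_per_watt_gain = perf_multiple / (h100_tdp_w / a100_tdp_w)
print(f"Power growth: {power_growth_pct:.0f}%")
print(f"Perf-per-watt vs A100: {perf_per_watt_gain:.2f}x")
```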

Nvidia H100

The full Nvidia GH100 GPU includes the following specs:

  • 8 GPC, 72 TPC (9 TPC/GPC), 2 SM/TPC, 144 SM per GPU
  • 128 FP32 CUDA cores per SM, 18,432 FP32 CUDA cores per GPU
  • 4x 4th Gen Tensor cores per SM, 576 per GPU
  • 6 HBM3 or HBM2e stacks, 12 512-bit memory controllers
  • 60 MB L2 cache
  • NVLink Gen 4 and PCIe Gen 5

The Nvidia H100 SXM5 includes the following specs:

  • 8 GPC, 66 TPC, 2 SM/TPC, 132 SM per GPU
  • 128 FP32 CUDA cores per SM, 16,896 FP32 CUDA cores per GPU
  • 4x 4th Gen Tensor cores per SM, 528 per GPU
  • 80 GB HBM3, 5 HBM3 stacks, 10 512-bit memory controllers
  • 50 MB L2 cache
  • NVLink Gen 4 and PCIe Gen 5

The Nvidia H100 PCIe 5.0 includes the following specs:

  • 7 or 8 GPC, 57 TPC, 2 SM/TPC, 114 SM per GPU
  • 128 FP32 CUDA cores per SM, 14,592 FP32 CUDA cores per GPU
  • 4x 4th Gen Tensor cores per SM, 456 per GPU
  • 80 GB HBM2e, 5 HBM2e stacks, 10 512-bit memory controllers
  • 50 MB L2 cache
  • NVLink Gen 4 and PCIe Gen 5
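The SM and CUDA-core counts in the three lists above follow directly from the TPC arithmetic (SMs = TPCs × SMs per TPC, cores = SMs × 128 FP32 cores per SM); a short sketch:

```python
# Derive SM and FP32 CUDA-core counts from the TPC configuration,
# using the per-SM figures given in the spec lists above.
def gh100_counts(tpcs: int, sm_per_tpc: int = 2, cores_per_sm: int = 128):
    sms = tpcs * sm_per_tpc
    return sms, sms * cores_per_sm

print(gh100_counts(72))  # full GH100: (144, 18432)
print(gh100_counts(66))  # H100 SXM5:  (132, 16896)
print(gh100_counts(57))  # H100 PCIe:  (114, 14592)
```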
