NVIDIA Announces Hopper H100 GPU: Giant Leap from Ampere A100

At GTC 2022, NVIDIA announced the H100 GPU as its next flagship product for artificial intelligence and deep learning applications.

H100 Product Specifications

| Form Factor | H100 SXM | H100 PCIe |
|---|---|---|
| FP64 | 30 teraFLOPS | 24 teraFLOPS |
| FP64 Tensor Core | 60 teraFLOPS | 48 teraFLOPS |
| FP32 | 60 teraFLOPS | 48 teraFLOPS |
| TF32 Tensor Core | 1,000 teraFLOPS* \| 500 teraFLOPS | 800 teraFLOPS* \| 400 teraFLOPS |
| BFLOAT16 Tensor Core | 2,000 teraFLOPS* \| 1,000 teraFLOPS | 1,600 teraFLOPS* \| 800 teraFLOPS |
| FP16 Tensor Core | 2,000 teraFLOPS* \| 1,000 teraFLOPS | 1,600 teraFLOPS* \| 800 teraFLOPS |
| FP8 Tensor Core | 4,000 teraFLOPS* \| 2,000 teraFLOPS | 3,200 teraFLOPS* \| 1,600 teraFLOPS |
| INT8 Tensor Core | 4,000 TOPS* \| 2,000 TOPS | 3,200 TOPS* \| 1,600 TOPS |
| GPU memory | 80GB | 80GB |
| GPU memory bandwidth | 3TB/s | 2TB/s |
| Decoders | 7 NVDEC | 7 NVDEC |
| Max thermal design power (TDP) | 700W | 350W |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each |
| Form factor | SXM | PCIe |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX H100 Partner and NVIDIA-Certified Systems with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs |

* With sparsity

Preliminary specifications; subject to change.
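The starred figures in the table reflect NVIDIA's 2:4 structured sparsity feature, which doubles tensor-core throughput over the dense numbers. A quick sanity check of the H100 SXM column (a minimal sketch, using only values from the table above):

```python
# Dense tensor-core throughput for the H100 SXM, in teraFLOPS (from the table above).
dense_sxm = {"TF32": 500, "BF16": 1000, "FP16": 1000, "FP8": 2000}

# The starred "with sparsity" figures are exactly 2x the dense figures,
# reflecting 2:4 structured sparsity.
sparse_sxm = {fmt: 2 * tflops for fmt, tflops in dense_sxm.items()}

print(sparse_sxm)  # {'TF32': 1000, 'BF16': 2000, 'FP16': 2000, 'FP8': 4000}
```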

The H100 will feature the NVIDIA Hopper GPU architecture and will accelerate dynamic programming (a problem-solving technique used in algorithms for genomics, quantum computing, route optimization and more) by up to 40x over the previous generation. As an early adopter and provider of new technologies and products, Dihuni will make the H100 GPU available to our customers as soon as pricing and ordering information is available.
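For readers unfamiliar with dynamic programming, the technique builds a solution from a table of overlapping subproblems; the classic edit-distance computation below is a plain-CPU Python sketch of that recurrence pattern (an illustrative example, not NVIDIA code), of the same family as the genomics sequence-alignment kernels Hopper targets:

```python
def edit_distance(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions to turn a into b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all i characters
    for j in range(n + 1):
        dp[0][j] = j  # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            # Each cell depends only on its three neighbors -- the recurrence
            # structure that dynamic-programming hardware acceleration exploits.
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution/match
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```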

Additionally, Dihuni will be announcing a new line of OptiReady CognitX AI Systems based on the H100 PCIe and SXM GPUs. Watch this space.

The current A100 80GB GPU and other Ampere-based deep learning systems are available and shipping now from Dihuni. Contact us for pricing.