At GTC 2022, NVIDIA announced the H100 GPU as its next flagship product for artificial intelligence and deep learning applications.
H100 Product Specifications
| Specification | H100 SXM | H100 PCIe |
| --- | --- | --- |
| FP64 | 30 teraFLOPS | 24 teraFLOPS |
| FP64 Tensor Core | 60 teraFLOPS | 48 teraFLOPS |
| FP32 | 60 teraFLOPS | 48 teraFLOPS |
| TF32 Tensor Core | 1,000 teraFLOPS* / 500 teraFLOPS | 800 teraFLOPS* / 400 teraFLOPS |
| BFLOAT16 Tensor Core | 2,000 teraFLOPS* / 1,000 teraFLOPS | 1,600 teraFLOPS* / 800 teraFLOPS |
| FP16 Tensor Core | 2,000 teraFLOPS* / 1,000 teraFLOPS | 1,600 teraFLOPS* / 800 teraFLOPS |
| FP8 Tensor Core | 4,000 teraFLOPS* / 2,000 teraFLOPS | 3,200 teraFLOPS* / 1,600 teraFLOPS |
| INT8 Tensor Core | 4,000 TOPS* / 2,000 TOPS | 3,200 TOPS* / 1,600 TOPS |
| GPU memory | 80GB | 80GB |
| GPU memory bandwidth | 3TB/s | 2TB/s |
| Decoders | 7 NVDEC, 7 JPEG | 7 NVDEC, 7 JPEG |
| Max thermal design power (TDP) | 700W | 350W |
| Multi-Instance GPUs | Up to 7 MIGs @ 10GB each | Up to 7 MIGs @ 10GB each |
| Form factor | SXM | PCIe |
| Interconnect | NVLink: 900GB/s; PCIe Gen5: 128GB/s | NVLink: 600GB/s; PCIe Gen5: 128GB/s |
| Server options | NVIDIA HGX™ H100 Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX™ H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs |
* With sparsity
Preliminary specifications; subject to change
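For readers parsing the paired numbers above: in each Tensor Core row the starred value is the peak rate with NVIDIA's sparsity feature enabled and the second value is the dense rate, so every starred figure is simply 2x its dense counterpart. A trivial sketch of that relationship, using the H100 SXM figures hard-coded from the table for illustration:

```python
# Quick check (illustrative only): each starred "with sparsity" figure in the
# table above is 2x the corresponding dense figure, reflecting the up-to-2x
# Tensor Core speedup NVIDIA quotes for its sparsity feature.
h100_sxm_tflops = {             # (with sparsity, dense), H100 SXM column
    "TF32":     (1000, 500),
    "BFLOAT16": (2000, 1000),
    "FP16":     (2000, 1000),
    "FP8":      (4000, 2000),
}
for fmt, (sparse, dense) in h100_sxm_tflops.items():
    assert sparse == 2 * dense, fmt
print("all starred figures are 2x the dense figures")
```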
The H100 will feature the NVIDIA Hopper GPU architecture and will accelerate dynamic programming (a problem-solving technique used in algorithms for genomics, quantum computing, route optimization and more) by up to 40x over the previous generation. As an early adopter and provider of new technologies and products, Dihuni will make the H100 GPU available to our customers as soon as pricing and ordering information is available.
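Dynamic programming builds a solution from overlapping subproblems, which is why workloads such as genomics sequence alignment and route optimization map onto it. As a rough illustration of that workload class (plain Python, not H100- or Hopper-specific code), the sketch below computes Levenshtein edit distance, a textbook dynamic programming recurrence of the min-plus form at the heart of such workloads:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein edit distance via dynamic programming.

    Illustrative only: ordinary CPU code shown to sketch the kind of
    min/add recurrence that dynamic programming workloads (e.g. genomics
    sequence comparison, route optimization) are built on.
    """
    # prev[j] holds the distance between the previous prefix of a and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1        # matching characters cost nothing
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution (or match)
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(edit_distance("GATTACA", "GCATGCU"))  # prints 4
```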
Additionally, Dihuni will be announcing a new line of OptiReady CognitX AI Systems based on the H100 PCIe and SXM GPUs. Watch this space.
The current A100 80GB GPU and other Ampere-based deep learning systems are available and shipping now from Dihuni. Contact us for pricing.