Sale!

NVIDIA A100 900-21001-0000-000 40GB Ampere PCIe GPU for Deep Learning and Artificial Intelligence

$10,250.00 (regular price $10,995.00)

NVIDIA A100 GPU for PCIe (900-21001-0000-000)

The NVIDIA A100 GPU for PCIe (900-21001-0000-000) features NVIDIA’s latest-generation Tensor Cores: with Tensor Float 32 (TF32) precision they deliver up to 20X higher performance over the prior generation with zero code changes, and an additional 2X boost with automatic mixed precision and FP16. The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale for AI, data analytics, and HPC to tackle the world’s toughest computing challenges. As the engine of the NVIDIA data center platform, A100 can efficiently scale up to thousands of GPUs or, using new Multi-Instance GPU (MIG) technology, can be partitioned into seven isolated GPU instances to accelerate workloads of all sizes. A100’s third-generation Tensor Core technology accelerates more levels of precision for diverse workloads, speeding time to insight as well as time to market.
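
The TF32 and mixed-precision gains quoted above apply to frameworks that target the A100’s Tensor Cores. As an illustrative sketch only (not part of NVIDIA’s documentation for this card; the model, optimizer, and data below are placeholders), a PyTorch training step might opt into TF32 and automatic mixed precision like this:

```python
# Hypothetical sketch: enabling TF32 and automatic mixed precision (AMP)
# in PyTorch on an A100. Model, data, and hyperparameters are placeholders.
import torch

# TF32 is used by Ampere Tensor Cores for matmuls/convolutions; these flags
# make the choice explicit (defaults can differ between PyTorch versions).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()                 # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()                       # loss scaling for FP16 AMP

inputs = torch.randn(64, 1024, device="cuda")
targets = torch.randn(64, 1024, device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                        # run eligible ops in FP16
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```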

Key Features

  • Sold and supported by NVIDIA
  • NVIDIA Ampere Architecture
  • PCI Express card
  • 250W Max Power Consumption
  • Passively cooled board
  • 40GB Memory Capacity
  • Full Height Bracket only
  • Manufacturer’s Part Number: 900-21001-0000-000

Note: If you are buying 2 or more GPUs, each pair of GPUs needs to be connected with a 2-way, 2-slot x16 NVLink bridge (900-53651-0000-000).
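
As a hedged illustration (not from NVIDIA’s documentation; it assumes the bridged pair is GPUs 0 and 1), you could confirm from PyTorch that the paired cards expose a peer-to-peer path once the bridge is installed:

```python
# Hypothetical check that a bridged GPU pair exposes peer-to-peer (P2P) access.
# Note: P2P can also be reported over PCIe, so this does not by itself prove
# the NVLink path; it only confirms the pair can address each other directly.
import torch

assert torch.cuda.device_count() >= 2, "expected at least two GPUs"

if torch.cuda.can_device_access_peer(0, 1) and torch.cuda.can_device_access_peer(1, 0):
    print("GPU 0 and GPU 1 can access each other directly (P2P enabled)")
else:
    print("No direct peer access detected; check the NVLink bridge seating")
```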

Tesla A100 Datasheet


Specifications

                                   NVIDIA A100 for HGX         NVIDIA A100 for PCIe
Peak FP64                          9.7 TF                      9.7 TF
Peak FP64 Tensor Core              19.5 TF                     19.5 TF
Peak FP32                          19.5 TF                     19.5 TF
Peak TF32 Tensor Core              156 TF | 312 TF*            156 TF | 312 TF*
Peak BFLOAT16 Tensor Core          312 TF | 624 TF*            312 TF | 624 TF*
Peak FP16 Tensor Core              312 TF | 624 TF*            312 TF | 624 TF*
Peak INT8 Tensor Core              624 TOPS | 1,248 TOPS*      624 TOPS | 1,248 TOPS*
Peak INT4 Tensor Core              1,248 TOPS | 2,496 TOPS*    1,248 TOPS | 2,496 TOPS*
GPU Memory                         40 GB                       40 GB
GPU Memory Bandwidth               1,555 GB/s                  1,555 GB/s
Interconnect                       NVIDIA NVLink 600 GB/s**    NVIDIA NVLink 600 GB/s**
                                   PCIe Gen4 64 GB/s           PCIe Gen4 64 GB/s
Multi-Instance GPU                 Up to 7 MIGs @ 5 GB each    Up to 7 MIGs @ 5 GB each
Form Factor                        4/8 SXM on NVIDIA HGX A100  PCIe
Max TDP Power                      400W                        250W
Delivered Performance of Top Apps  100%                        90%

* With sparsity
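
As an illustrative sketch only (assuming the nvidia-ml-py / pynvml bindings are installed), the installed card’s name and memory capacity can be read back to confirm the figures in the table above:

```python
# Hypothetical verification of an installed card against the spec table,
# using the pynvml bindings (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)        # first GPU in the system

name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):                          # older pynvml returns bytes
    name = name.decode()
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)

print(f"Device: {name}")
print(f"Total memory: {mem.total / 1024**3:.1f} GiB")  # roughly 40 GB for this card

pynvml.nvmlShutdown()
```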

Additional information

Weight: 8 lbs
Dimensions: 10.7 × 4.4 × 1 in

Reviews

There are no reviews yet.
