NVIDIA A30 GPU 900-21001-0040-000 24GB for AI Inference and Mainstream Compute


(Add to cart to Buy / Request Quote)

Non-CEC Version Shipping Now

The A30 non-CEC version GPU is currently shipping; visit this page for that version.


Request Formal Quote, Volume Pricing, Stock or Product Information

  • Competitor Match/Beat on Custom Servers and Select Products (send competitor quote)
  • Leasing Options Available (requires 5 years of business operation)
  • Purchase Orders Accepted / Net Terms subject to approval
  • Custom Servers: configure below, add to cart, and request a quote for formal pricing

Includes Compute Boost Discount of $165

The NVIDIA® A30 Tensor Core GPU (900-21001-0040-000) is the most versatile mainstream compute GPU for AI inference and mainstream enterprise workloads. Powered by NVIDIA Ampere architecture Tensor Core technology, it supports a broad range of math precisions, providing a single accelerator to speed up every workload. Built for AI inference at scale, the same compute resources can rapidly retrain AI models with TF32 and accelerate high-performance computing (HPC) applications using FP64 Tensor Cores. Multi-Instance GPU (MIG) and FP64 Tensor Cores combine with 933 gigabytes per second (GB/s) of memory bandwidth in a low 165W power envelope, all on a PCIe card optimized for mainstream servers.
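The TF32 math mode mentioned above keeps float32's full 8-bit exponent range but reduces the mantissa from 23 bits to 10. As a rough illustration of that precision trade-off (a minimal stdlib-Python sketch that truncates the low mantissa bits; real Tensor Core hardware rounds rather than truncates):

```python
import struct

def tf32_truncate(x: float) -> float:
    """Approximate TF32 precision: keep only the top 10 of float32's
    23 mantissa bits. (Illustrative truncation; NVIDIA hardware
    rounds to nearest rather than truncating.)"""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    bits &= ~((1 << 13) - 1)  # zero the low 13 mantissa bits (23 - 10)
    return struct.unpack("<f", struct.pack("<I", bits))[0]

print(tf32_truncate(3.141592653589793))  # 3.140625
```

The result differs from pi only in the 11th mantissa bit and beyond, which is why TF32 can stand in for FP32 in many training workloads while running on the faster Tensor Core path.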

Key Features

  • Manufactured and supported by NVIDIA
  • NVIDIA Ampere Architecture
  • PCI Express card
  • 165W Max Power Consumption
  • Passively cooled board
  • 24GB HBM2 Memory Capacity
  • Full Height PCI-e Bracket Only
  • Condition: New
  • 3 Years Manufacturer’s Warranty
  • Manufacturer’s Part Number: 900-21001-0040-000

As an NVIDIA Preferred Solution Provider, we are authorized by the manufacturer and deliver only new, original, factory-packaged products.


NVIDIA A30 900-21001-0040-000


FP64: 5.2 teraFLOPS
FP64 Tensor Core: 10.3 teraFLOPS
FP32: 10.3 teraFLOPS
TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
INT8 Tensor Core: 330 TOPS | 661 TOPS*
INT4 Tensor Core: 661 TOPS | 1,321 TOPS*
Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
GPU memory: 24GB HBM2
GPU memory bandwidth: 933GB/s
Interconnect: PCIe Gen4 (64GB/s); third-generation NVLink (200GB/s)**
Form factor: Dual-slot, full-height, full-length (FHFL)
Max thermal design power (TDP): 165W
Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each, 2 GPU instances @ 12GB each, or 1 GPU instance @ 24GB
Virtual GPU (vGPU) software support: NVIDIA AI Enterprise for VMware, NVIDIA Virtual Compute Server
Weight: 8 lbs
Dimensions: 10.7 × 4.4 × 1 in

* With structural sparsity
** NVLink bridge connects up to two A30 GPUs
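The MIG row above means the 24GB card can be carved into 4 × 6GB, 2 × 12GB, or 1 × 24GB instances. A minimal sketch of that partitioning arithmetic, checking a proposed instance mix against the 24GB capacity (the profile names are illustrative, styled after NVIDIA's "<slices>g.<mem>gb" convention, and real MIG placement also constrains compute slices, which this memory-only check ignores):

```python
# A30 MIG memory arithmetic from the spec table: 24 GB total,
# splittable into 6 GB, 12 GB, or 24 GB instances.
# Profile names are illustrative, not quoted from this page.
PROFILES_GB = {"1g.6gb": 6, "2g.12gb": 12, "4g.24gb": 24}
TOTAL_GB = 24

def layout_fits(layout):
    """Return True if the requested MIG instances fit in 24 GB of memory."""
    return sum(PROFILES_GB[name] for name in layout) <= TOTAL_GB

print(layout_fits(["1g.6gb"] * 4))          # 4 x 6 GB = 24 GB -> True
print(layout_fits(["2g.12gb", "2g.12gb"]))  # 2 x 12 GB = 24 GB -> True
print(layout_fits(["2g.12gb", "1g.6gb", "1g.6gb", "1g.6gb"]))  # 30 GB -> False
```

All three table configurations fill the card exactly, which is why they are the only listed layouts.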




Product Type: GPU

Chipset Manufacturer: NVIDIA

Chipset Model: A30

Memory Technology: HBM2

Form Factor: Plug-in Card

Standard Memory: 24 GB
