- Manufacturer Part Number: 100-506143
- APIs Supported:
- OpenGL 4.6
- OpenCL 2.0
- Vulkan 1.0
- Chipset Manufacturer: AMD
- Chipset Line: Radeon Instinct
- Chipset Model: MI50
- GPU Clock: 1.73 GHz
- Standard Memory: 32 GB
- Memory Technology: HBM2
- Bus Width: 4096-bit
- Slot Space Required: Dual
- Form Factor: Plug-in Card
- Card Height: Full-height
- Cooler Type: Passive Cooler
- Length: 10.5″
- Platform Supported:
Unleash Deep Learning Discovery
The Radeon Instinct™ MI50 compute card is designed to deliver high levels of performance for deep learning, high performance computing (HPC), cloud computing, and rendering systems. This new accelerator is designed with optimized deep learning operations, exceptional double precision performance, and hyper-fast HBM2 memory delivering 1 TB/s memory bandwidth speeds.
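The quoted 1 TB/s figure follows directly from the 4096-bit HBM2 interface. A back-of-the-envelope check (a sketch; it assumes HBM2 running at 2.0 Gbit/s per pin, a typical HBM2 speed that is not stated on this page):

```python
# Rough check of the 1 TB/s memory bandwidth claim.
# Assumption: HBM2 at 2.0 Gbit/s per pin (1.0 GHz double-data-rate
# memory clock) -- a common HBM2 speed, not taken from this spec sheet.
BUS_WIDTH_BITS = 4096   # four HBM2 stacks x 1024-bit interfaces
PIN_RATE_GBPS = 2.0     # Gbit/s per pin (assumed)

bandwidth_gbs = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8  # bits -> bytes
print(f"{bandwidth_gbs:.0f} GB/s")  # 1024 GB/s, i.e. ~1 TB/s
```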
Scale your datacenter server designs with AMD’s Infinity Fabric™ Link technology, which can directly connect up to two hives of four GPUs each in a single server, at up to 5.75x the speed of PCIe® 3.0.
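The 5.75x claim lines up with the 184 GB/s Infinity Fabric Link figure quoted elsewhere on this page, if the baseline is taken as PCIe 3.0 x16 bidirectional bandwidth (a sketch; the 32 GB/s baseline is our assumption, not stated here):

```python
# Sanity check: 5.75x the speed of PCIe 3.0 vs. the 184 GB/s
# Infinity Fabric Link figure.
# Assumption: baseline is PCIe 3.0 x16 bidirectional, ~32 GB/s.
PCIE3_X16_BIDIR_GBS = 32.0  # assumed baseline
SPEEDUP = 5.75

fabric_gbs = PCIE3_X16_BIDIR_GBS * SPEEDUP
print(f"{fabric_gbs:.0f} GB/s")  # 184 GB/s
```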
Quickly achieve reliable and accurate results in large-scale system deployments with the Radeon Instinct™ MI50 which is equipped with full-chip ECC and RAS capabilities.
Combine this finely balanced and ultra-scalable solution with our ROCm open ecosystem, which includes Radeon Instinct optimized MIOpen libraries supporting frameworks like TensorFlow, PyTorch, and Caffe 2, and you have a solution ready for the next era of compute and machine intelligence.
Based on “Vega 7nm” Technology with 60 supercharged Compute Units (3840 Stream Processors)
Up to 53 TOPS INT8 Performance for Inference Workloads
Up to 26.5 TFLOPS FP16 and 13.3 TFLOPS FP32 Performance for Training Workloads
Up to 6.6 TFLOPS Double Precision for HPC
16 GB or 32 GB Ultra-fast HBM2 ECC Memory with up to 1 TB/s Memory Bandwidth
The World’s First PCIe® Gen 4 x16 Capable GPU
AMD Infinity Fabric™ Link – up to 184 GB/s peer-to-peer GPU communication speeds
ROCm Open Ecosystem
Optimized for Deep Learning
The Radeon Instinct™ MI50 server accelerator, designed on the world’s first 7nm FinFET technology process, brings customers a full feature set based on the industry’s newest technologies. The MI50 is AMD’s workhorse accelerator offering, ideal for large-scale deep learning. Delivering up to 26.5 TFLOPS of native half-precision (FP16) or 13.3 TFLOPS of single-precision (FP32) peak floating-point performance with INT8 support, and combined with 16 GB or 32 GB of high-bandwidth HBM2 ECC memory, the Radeon Instinct™ MI50 brings customers the compute and memory performance needed for enterprise-class, mid-range compute capable of training complex neural networks for a variety of demanding deep learning applications in a cost-effective design.
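The peak-throughput figures quoted above all follow from the shader count and clock in the spec list. A worked check (a sketch; it assumes 2 FLOPs per stream processor per clock via FMA at FP32, with 2x rate at FP16, 1/2 rate at FP64, and 4x operations at INT8, which are the usual Vega ratios rather than an official AMD formula):

```python
# Derive the MI50's quoted peak numbers from the spec-list values.
# Assumptions (not from this page): FMA counts as 2 FLOPs/clock per
# stream processor at FP32; FP16 runs at 2x, FP64 at 1/2, INT8 at 4x.
STREAM_PROCESSORS = 3840  # 60 CUs x 64 stream processors
PEAK_CLOCK_GHZ = 1.73     # peak engine clock from the spec list

fp32_tflops = STREAM_PROCESSORS * 2 * PEAK_CLOCK_GHZ / 1000
fp16_tflops = fp32_tflops * 2
fp64_tflops = fp32_tflops / 2
int8_tops = fp32_tflops * 4

print(f"FP32: {fp32_tflops:.1f} TFLOPS")  # 13.3
print(f"FP16: {fp16_tflops:.1f} TFLOPS")  # 26.6 (page rounds to 26.5)
print(f"FP64: {fp64_tflops:.1f} TFLOPS")  # 6.6
print(f"INT8: {int8_tops:.1f} TOPS")      # 53.1 (page rounds to 53)
```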
Accuracy and Speed Now Go Hand-in-Hand
For high performance computing (HPC) workloads, the Radeon Instinct™ MI50 accelerator delivers incredible double-precision speeds of up to 6.6 TFLOPS, allowing scientists and researchers across the globe to more efficiently process HPC parallel codes across several industries, including life sciences, energy, finance, automotive and aerospace, academia, government, defense and more.
AMD’s next-generation HPC solutions are designed to deliver optimal compute density and performance per node with the efficiency required to handle today’s massively parallel, data-intensive codes, as well as to provide a powerful, flexible solution for general-purpose HPC deployments. The ROCm software platform brings a scalable, HPC-class solution that provides fully open-source Linux drivers, HCC compilers, tools, and libraries to give scientists and researchers system control down to the metal.