HPE NVIDIA Tesla V100 GPU 32GB HBM2 Volta CUDA PCIe for Accelerated Machine Deep Learning AI BigData Finance Oil Gas CAD HPC Physics Research

Double the Memory of the Previous-Generation V100

The Tesla V100 GPU, widely adopted by the world’s leading researchers, has received a 2x memory boost to handle the most memory-intensive deep learning and high performance computing workloads.

Now equipped with 32GB of memory, Tesla V100 GPUs will help data scientists train deeper and larger deep learning models that are more accurate than ever. They can also improve the performance of memory-constrained HPC applications by up to 50 percent compared with the previous 16GB version.

Groundbreaking Volta Architecture

By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and deep learning. Equipped with 640 Tensor Cores, Tesla V100 delivers 125 teraFLOPS of deep learning performance: 12X the Tensor FLOPS for DL training and 6X the Tensor FLOPS for DL inference compared with NVIDIA Pascal™ GPUs.
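A quick sanity check on the 125 teraFLOPS figure: each Tensor Core performs a 4x4x4 matrix fused multiply-add per clock (64 multiply-accumulates, or 128 floating-point operations), so at the SXM2 boost clock of about 1.53 GHz (the clock value is an assumption based on the SXM2 part; the PCIe card boosts lower, hence its 112 teraFLOPS rating):

```latex
640~\text{Tensor Cores} \times 128~\tfrac{\text{FLOP}}{\text{clock}} \times 1.53~\text{GHz} \approx 125~\text{TFLOPS}
```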

Next Generation NVLink

NVIDIA NVLink in Tesla V100 delivers 2X higher throughput compared with the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server. With a combination of improved raw HBM2 bandwidth of 900 GB/s and higher DRAM utilization efficiency at 95%, Tesla V100 also delivers 1.5X higher memory bandwidth than Pascal GPUs as measured on STREAM.
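The 300 GB/s figure is the aggregate bidirectional NVLink bandwidth per GPU: Volta implements six NVLink 2.0 links, each carrying 25 GB/s in each direction:

```latex
6~\text{links} \times 25~\tfrac{\text{GB/s}}{\text{direction}} \times 2~\text{directions} = 300~\text{GB/s}
```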

Maximum Efficiency Mode

The new maximum efficiency mode allows data centers to achieve up to 40% higher compute capacity per rack within the existing power budget. In this mode, Tesla V100 runs at peak processing efficiency, providing up to 80% of the performance at half the power consumption.
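The arithmetic behind this mode: 80% of the throughput at 50% of the power is a 1.6X improvement in performance per watt at the GPU. The rack-level figure is quoted lower ("up to 40%") presumably because CPU, memory, and cooling overheads in each server do not shrink along with GPU power:

```latex
\frac{0.80}{0.50} = 1.6\times~\text{performance per watt}
```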

Programmability

Tesla V100 is architected from the ground up to simplify programmability. Its new independent thread scheduling enables finer-grain synchronization and improves GPU utilization by sharing resources among small jobs.

CUDA Ready

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords.

The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.
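The division of labor described above (sequential setup on the CPU, data-parallel work across thousands of GPU threads) can be sketched with the classic CUDA C vector-add pattern. This is a minimal illustration, not vendor sample code; error checking is omitted for brevity:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Kernel: each of the N threads adds one pair of elements in parallel.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified memory keeps the host/device copy steps out of the example.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }  // sequential setup on the CPU

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch the parallel portion on the GPU
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with `nvcc vec_add.cu -o vec_add`; the `__global__` qualifier and the `<<<blocks, threads>>>` launch syntax are the "few basic keywords" the text refers to.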

Performance Specifications for NVIDIA Tesla P4, P40 and V100 Accelerators

                                     Tesla V100                      Tesla P4                     Tesla P40
                                     (Universal Datacenter GPU)      (Ultra-Efficient Scale-Out)  (Inference Throughput)
Single-Precision Performance (FP32)  14 teraflops (PCIe)             5.5 teraflops                12 teraflops
                                     15.7 teraflops (SXM2)
Half-Precision Performance (FP16)    112 teraflops (PCIe)            —                            —
                                     125 teraflops (SXM2)
Integer Operations (INT8)            —                               22 TOPS*                     47 TOPS*
GPU Memory                           16/32 GB HBM2                   8 GB                         24 GB
Memory Bandwidth                     900 GB/s                        192 GB/s                     346 GB/s
System Interface/Form Factor         Dual-Slot, Full-Height PCIe;    Low-Profile PCIe             Dual-Slot, Full-Height PCIe
                                     SXM2 / NVLink
Power                                250 W (PCIe); 300 W (SXM2)      50 W / 75 W                  250 W
Hardware-Accelerated Video Engine    —                               1x Decode, 2x Encode         1x Decode, 2x Encode

*Tera-Operations per Second with Boost Clock Enabled

HPE NVIDIA Tesla V100 GPU 16GB HBM2 Volta CUDA PCIe for Accelerated Machine Deep Learning AI BigData Finance Oil Gas CAD HPC Physics Research

The Volta architecture, NVLink, Maximum Efficiency Mode, Programmability, and CUDA Ready sections, and the performance specifications table, are identical to those for the HPE V100 32GB model above.

Lenovo ThinkSystem NVIDIA Tesla V100 GPU 16GB HBM2 Volta CUDA PCIe for Accelerated Machine Deep Learning AI Finance Oil Gas CAD HPC Physics Research

Lenovo ThinkSystem servers support GPU technology to accelerate a range of computing workloads and maximize performance for graphic design, virtualization, artificial intelligence, and high performance computing applications.

The following table summarizes the server support for the GPUs. The numbers listed in the server columns represent the number of GPUs supported.

The Volta architecture, NVLink, Maximum Efficiency Mode, Programmability, and CUDA Ready sections, and the performance specifications table, are identical to those for the V100 32GB listing above.

Lenovo ThinkSystem NVIDIA Tesla V100 GPU 32GB HBM2 Volta CUDA PCIe for Accelerated Machine Deep Learning AI Finance Oil Gas CAD HPC Physics Research

The Lenovo server-support note, the Double Memory overview, the Volta architecture, NVLink, Maximum Efficiency Mode, Programmability, and CUDA Ready sections, and the performance specifications table are identical to those in the listings above.

Dihuni OptiReady Supermicro 4029GP-TRT2-V100-1 4U 5x NVIDIA Tesla V100 32GB GPU 2S Xeon 4116 2.1GHz 128GB 250GBSSD 1TBHDD 2x10GbE Deep Learning Server

X11 Servers Featuring New Intel Skylake Scalable Xeon® Processors

Supermicro’s new X11 servers are engineered to unleash the full performance and rich feature set of the new Intel® Xeon® Scalable processor family, supporting more cores, TDP envelopes of 205 watts and higher, more memory channels with higher bandwidth, more PCI-E 3.0 lanes, 100G/40G/25G/10G Ethernet, 100G EDR InfiniBand (on select servers), and integrated Intel® Omni-Path Architecture networking fabrics. This elevated compute performance, density, I/O capacity, and efficiency is coupled with the industry’s most comprehensive support for NVMe NAND flash and Intel® Optane SSDs for unprecedented application responsiveness and agility. For exact server specifications, please see the highlights below and refer to the detailed technical specifications.

“At Supermicro, we understand that customers need the newest technologies as early as possible to drive leading performance and improved TCO. With the industry’s strongest and broadest product line, our designs not only take full advantage of Xeon Scalable Processors’ new features such as three UPI, faster DIMMs and more core count per socket, but they also fully support NVMe through unique non-blocking architectures to achieve the best data bandwidth and IOPS.  For instance, one Supermicro 2U storage server can deliver over 16 million IOPS!”

“Supermicro designs the most application-optimized GPU systems and offers the widest selection of GPU-optimized servers and workstations in the industry. Our high performance computing solutions enable deep learning, engineering and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve fastest time-to-results with maximum performance per watt, per square foot and per dollar. With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”  

Charles Liang, President and CEO of Supermicro

Support for 8 Double-Width GPUs for Deep Learning

The 4029GP-TRT2 takes full advantage of the Xeon Scalable processor family’s PCIe lanes to support 8 double-width GPUs, delivering a very high performance artificial intelligence and deep learning system suited to autonomous vehicles, molecular dynamics, computational biology, fluid simulation, advanced physics, Internet of Things (IoT), and big data analytics. With NVIDIA Tesla cards, this server delivers unparalleled acceleration for compute-intensive applications.

Server Systems Management

Supermicro Server Manager (SSM) monitors the health of server components including memory, hard drives, and RAID controllers. It enables the datacenter administrator to monitor and manage power usage across all Supermicro servers, allowing users to maximize their CPU payload while mitigating the risk of tripped circuits. Firmware upgrades on Supermicro servers now take just a couple of clicks: administrators can mount an ISO image on multiple servers and reboot the servers with those images. The tool also provides pre-defined reports and many more features that make managing Supermicro servers simpler. Download the SSM brochure for more information, or download the Supermicro SuperDoctor® device monitoring and management software.

Technical Specifications

Mfr Part # SYS-4029GP-TRT2
Motherboard Super X11DPG-OT-CPU
CPU Dual Socket P (LGA 3647); Intel® Xeon® Scalable Processors; Dual UPI up to 10.4GT/s; Support CPU TDP 70-205W
2 x Intel Xeon Silver 4116 (Skylake) 2.1 GHz 12-Core CPU Installed
Cores Up to 28 Cores with Intel® HT Technology
GPU / Coprocessor Support Please refer to: Compatible GPU list
Memory Capacity 24 DIMM slots; Up to 3TB ECC 3DS LRDIMM, 1TB ECC RDIMM, DDR4 up to 2666MHz

128 GB DDR4-2666MHz (32GB x 4) Installed

Memory Type 2666/2400/2133MHz ECC DDR4 SDRAM
Chipset Intel® C622 chipset
SATA SATA3 (6Gbps) with RAID 0, 1, 5, 10
Network Controllers Dual Port 10GbE from C622
IPMI Support for Intelligent Platform Management Interface v.2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Graphics ASPEED AST2500 BMC
SATA 10 SATA3 (6Gbps) ports
LAN 2 RJ45 10GBase-T LAN ports; 1 RJ45 Dedicated IPMI LAN port
USB 4 USB 3.0 ports (rear)
Video 1 VGA Connector
COM Port 1 COM port (rear)
BIOS Type AMI 32Mb SPI Flash ROM
Software Intel® Node Manager; IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watchdog
CPU Monitors for CPU Cores, Chipset Voltages, Memory.; 4+1 Phase-switching voltage regulator
FAN Fans with tachometer monitoring; Status monitor for speed control; Pulse Width Modulated (PWM) fan connectors
Temperature Monitoring for CPU and chassis environment; Thermal Control for fan connectors
Form Factor 4U Rackmountable; Rackmount Kit (MCP-290-00057-0N)
Model CSE-418GTS-R4000B
Height 7.0″ (178mm)
Width 17.2″ (437mm)
Depth 29″ (737mm)
Weight Net Weight: 80 lbs (36.2 kg); Gross Weight: 135 lbs (61.2 kg)
Available Colors Black
Hot-swap Up to 24 Hot-swap 2.5″ SAS/SATA drive bays; 8x 2.5″ drives supported natively

  • 1 x Samsung SM863a 240GB SATA 6Gb/s, V-NAND, V48, 2.5″, 7mm SSD Installed
  • 1 x Seagate 2.5″ 1TB SATA 6Gb/s, 7.2K RPM, 4kN, 128MB HDD Installed
PCI-Express 11 PCI-E 3.0 x16 (FH, FL) slots; 1 PCI-E 3.0 x8 (FH, FL, in x16) slot

  • 5 x NVIDIA V100 32GB PCIe3.0 GPU Installed
Fans 8 Hot-swap 92mm cooling fans
Shrouds 1 Air Shroud (MCP-310-41808-0B)
Total Output Power 1000W/1800W/1980W/2000W
Dimension (W x H x L) 73.5 x 40 x 265 mm
Input 100-120Vac / 12.5-9.5A / 50-60Hz; 200-220Vac / 10-9.5A / 50-60Hz; 220-230Vac / 10-9.8A / 50-60Hz; 230-240Vac / 10-9.8A / 50-60Hz; 200-240Vac / 11.8-9.8A / 50-60Hz (UL/cUL only)
+12V Max: 83.3A / Min: 0A (100-120Vac); Max: 150A / Min: 0A (200-220Vac); Max: 165A / Min: 0A (220-230Vac); Max: 166.7A / Min: 0A (230-240Vac); Max: 166.7A / Min: 0A (200-240Vac) (UL/cUL only)
12Vsb Max: 2.1A / Min: 0A
Output Type 25 Pairs Gold Finger Connector
Certification Titanium Level
RoHS RoHS Compliant
Environmental Spec. Operating Temperature: 10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature: -40°C to 60°C (-40°F to 140°F); Operating Relative Humidity: 8% to 90% (non-condensing); Non-operating Relative Humidity: 5% to 95% (non-condensing)

Dihuni OptiReady Supermicro 4029GP-TRT2-V100-1 4U 10x NVIDIA Tesla V100 32GB GPU 2S Xeon 4116 2.1GHz 256GB 250GBSSD 1TBHDD 2x10GbE Deep Learning Server

The X11 platform overview, the Supermicro quotations, the 8-GPU support description, and the Server Systems Management section are identical to those for the 5-GPU configuration above.

Technical Specifications

Identical to the 5 x GPU configuration above, except:

  • 256 GB DDR4-2666MHz (32GB x 8) Installed
  • 10 x NVIDIA V100 32GB PCIe3.0 GPU Installed

Dihuni OptiReady Supermicro 7049GP-TRT-V100-1 2 x NVIDIA Tesla V100 32GB GPU 4U Tower 2S Xeon 4114 96GB 480GB SSD 4TB HDD 2x10GbE Deep Learning Server

X11 Servers Featuring New Intel Skylake Scalable Xeon® Processors

The X11 platform overview and the first Supermicro quotation are identical to those in the listings above.

“We are excited to preview our X11 Ultra, TwinPro™, BigTwin™, SuperBlade® and many more new designs based on the new Intel® Xeon® Processor Scalable Family processors with CPUs that can provide up to 3.9x higher virtualized throughput,”  

Charles Liang, President and CEO of Supermicro

Server Systems Management

Identical to the Server Systems Management description above.

Technical Specifications

Mfr Part # SYS-7049GP-TRT (Black)
CPU Dual Socket P (LGA 3647); Intel® Xeon® Scalable Processors; 3 UPI up to 10.4GT/s; Support CPU TDP 70-205W with IVR
2 x Intel Xeon Silver 4114 (Skylake) 2.2 GHz 10-Core CPU Installed
Cores Up to 28 Cores with Intel® HT Technology
GPU GPU Support Matrix
Memory Capacity 16 DIMM slots; Up to 2TB ECC 3DS LRDIMM, 1TB ECC RDIMM, DDR4 up to 2666MHz

96GB ECC DDR4-2666MHz installed (8GB x 12); expandable up to 2TB

Memory Type 2666/2400/2133MHz ECC DDR4 SDRAM
Chipset Intel® C621 chipset
SATA SATA3 (6Gbps); RAID 0, 1, 5, 10
Network Controllers Intel® X550 Dual Port 10GBase-T; Virtual Machine Device Queues reduce I/O overhead; Supports 10GBASE-T, 100BASE-TX, and 1000BASE-T, RJ45 output
IPMI Support for Intelligent Platform Management Interface v.2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Graphics ASPEED AST2500 BMC
SATA 10 SATA3 (6Gbps) ports
LAN 2 RJ45 10GBase-T ports; 1 RJ45 Dedicated IPMI LAN port
USB 5 USB 3.0 ports (2 rear, 2 via header, 1 Type A); 4 USB 2.0 ports (2 rear, 2 via headers)
Video 1 VGA port
Serial Port/Header 2 COM ports (1 rear, 1 Header)
BIOS Type AMI 32Mb SPI Flash ROM
Software Intel® Node Manager; IPMI 2.0; SSM, SPM, SUM; SuperDoctor® 5
CPU Monitors for CPU Cores, Chipset Voltages, Memory; 4+1 Phase-switching voltage regulator
FAN Fans with tachometer monitoring; Status monitor for speed control; Pulse Width Modulated (PWM) fan connectors
Temperature Monitoring for CPU and chassis environment; Thermal Control for 8x fan connectors
Form Factor 4U Rackmountable / Tower; Optional Rackmount Kit
Model CSE-747BTS-R2K20BP
Height 18.2″ (462mm)
Width 7.0″ (178mm)
Depth 26.5″ (673mm)
Package 27″ (H) x 13″ (W) x 38″ (D)
Weight Net Weight: 46 lbs (20.9 kg); Gross Weight: 62 lbs (28.1 kg)
Available Colors Dark Gray
Buttons Power On/Off button; System Reset button
LEDs Power status LED; Hard drive activity LED; Network activity LEDs; System Overheat & Power Fail LED
Ports 2 Front USB 3.0 Ports
PCI-Express 6 PCI-E 3.0 x16 (double-width) slots; 1 PCI-E 3.0 x4 (in x8 slot)

2 x NVIDIA Tesla V100 32GB PCI-E GPU Installed

Hot-swap 8 Hot-swap 3.5″ drive bays

  • 1 x Seagate 2.5″ 480GB SATA 6Gb/s, 7.0mm, 16nm, 0.6 DWPD SSD Installed
  • 1 x Toshiba 3.5″ 4TB SATA 6Gb/s 7.2K RPM 128MB 512e (Tomcat) HDD Installed
Fans 4 Heavy duty fans; 4 Rear exhaust fans
Heatsink 2 Active heatsink with optimal fan speed control
Total Output Power and Input 1200W with Input 100-127Vac; 1800W with Input 200-220Vac; 1980W with Input 220-230Vac; 2090W with Input 230-240Vac; 2200W with Input 220-240Vac (for UL/cUL use only); 2090W with Input 230-240Vdc (for CCC only)
AC Input Frequency 50-60Hz
Dimension
(W x H x L)
76 x 40 x 336 mm
+12V Max: 100A / Min: 0A (100-127Vac); Max: 150A / Min: 0A (200-220Vac); Max: 165A / Min: 0A (220-230Vac); Max: 174.17A / Min: 0A (230-240Vac); Max: 183.3A / Min: 0A (220-240Vac)
5VSB Max: 1A / Min: 0A
Output Type Backplanes (gold finger)
Certification UL/cUL/CB/BSMI/CE/CCC; Titanium Level
RoHS RoHS Compliant
Environmental Spec. Operating Temperature:
10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature:
-40°C to 60°C (-40°F to 140°F); Operating Relative Humidity:
8% to 90% (non-condensing); Non-operating Relative Humidity:
5% to 95% (non-condensing)
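The SSD endurance figures above are quoted in DWPD (drive writes per day). Total rated writes follow directly from capacity, DWPD, and warranty length; a quick check for the 480GB, 0.6 DWPD boot SSD in this configuration (the 5-year warranty period is an assumption for illustration):

```python
def rated_writes_tb(capacity_gb, dwpd, warranty_years):
    """Total data the vendor rates the drive to absorb, in TB (decimal)."""
    return capacity_gb * dwpd * 365 * warranty_years / 1000

# 480GB drive at 0.6 DWPD over an assumed 5-year warranty:
print(rated_writes_tb(480, 0.6, 5))  # 525.6 TB
```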

Dihuni OptiReady Supermicro 1029GQ-TVRT-V2-2 2xNVIDIA Tesla V100 32GB SXM2 NVLink GPU 2S Xeon Silver 4112 128GB 960GB SSD 2x10GbE Deep Learning Server

X11 Servers Featuring New Intel Skylake Scalable Xeon® Processors

Supermicro’s new X11 servers are engineered to unleash the full performance and rich feature set of the new Intel® Xeon® Scalable processor family, supporting more cores and higher TDP envelopes of 205 watts and above, more memory channels with higher bandwidth, more PCI-E 3.0 lanes, 100G/40G/25G/10G Ethernet, 100G EDR InfiniBand (on select servers) and integrated Intel® Omni-Path Architecture networking fabrics. The elevated compute performance, density, I/O capacity, and efficiency are coupled with the industry’s most comprehensive support for NVMe NAND Flash and Intel® Optane SSDs for unprecedented application responsiveness and agility. For exact server specifications, please see the highlights below and refer to the detailed technical specifications.

“Supermicro designs the most application-optimized GPU systems and offers the widest selection of GPU-optimized servers and workstations in the industry. Our high performance computing solutions enable deep learning, engineering and scientific fields to scale out their compute clusters to accelerate their most demanding workloads and achieve fastest time-to-results with maximum performance per watt, per square foot and per dollar. With our latest innovations incorporating the new NVIDIA V100 PCI-E and V100 SXM2 GPUs in performance-optimized 1U and 4U systems with next-generation NVLink, our customers can accelerate their applications and innovations to help solve the world’s most complex and challenging problems.”  

Charles Liang, President and CEO of Supermicro


Technical Specifications

Mfr Part # SYS-1029GQ-TVRT
Motherboard Super X11DGQ
CPU Dual Socket P (LGA 3647); Intel® Xeon® Scalable Processors,
3 UPI up to 10.4GT/s; Supports CPU TDP 70-165W; 2 x Intel Xeon Scalable Silver 4112 (Skylake) 4-Core 2.6GHz CPUs Installed
Cores Up to 28 Cores with Intel® HT Technology
GPU 4 NVIDIA® Tesla® V100 SXM2 GPUs; Up to 300GB/s GPU-to-GPU NVIDIA® NVLINK™

2 x NVIDIA Tesla V100 SXM2 32GB HBM2 NVLink GPU Installed

Memory Capacity 12 DIMM slots; Up to 1.5TB ECC 3DS LRDIMM, 1.5TB ECC RDIMM, DDR4 up to 2666MHz

128GB (32GBx4) DDR4-2666 2Rx4 ECC Registered Memory Installed

Memory Type 2666/2400/2133MHz ECC DDR4 SDRAM
Chipset Intel® C621 chipset
SATA SATA3 (6Gbps) with RAID 0, 1, 5, 10
Network Connectivity Intel® X540 Dual Port 10GBase-T; Virtual Machine Device Queues reduce I/O overhead; Supports 10GBASE-T, 100BASE-TX, and 1000BASE-T, RJ45 output
IPMI Support for Intelligent Platform Management Interface v.2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
Graphics ASPEED AST2500 BMC
SATA 4 SATA3 (6Gbps) ports
LAN 2 RJ45 10GBase-T ports; 1 RJ45 Dedicated IPMI LAN port
USB 2 USB 3.0 ports (rear)
Video 1 VGA port
Serial Header 1 Fast UART 16550 header
BIOS Type AMI 32Mb SPI Flash ROM
Software Intel® Node Manager; IPMI 2.0; KVM with dedicated LAN; SSM, SPM, SUM; SuperDoctor® 5; Watch Dog
Power Configurations ACPI / APM Power Management
CPU Monitors for CPU Cores, Chipset Voltages, Memory; 4+1 Phase-switching voltage regulator
FAN Fans with tachometer monitoring; Status monitor for speed control; Pulse Width Modulated (PWM) fan connectors
Temperature Monitoring for CPU and chassis environment; Thermal Control for fan connectors
Form Factor 1U Rackmount
Model CSE-118GQPTS-R2K05P2
Height 1.7″ (43mm)
Width 17.2″ (437mm)
Depth 35.2″ (894mm); 39.3″ (997mm) with rails
Package 24″ (H) x 8″ (W) x 46″ (D)
Weight Net Weight: 45 lbs (15.9 kg); Gross Weight: 58 lbs (21.8 kg)
Available Color Black
Buttons Power On/Off button; UID button
LEDs Power LED; Hard drive activity LED; Network activity LEDs; System Overheat LED / Fan fail LED / UID LED
PCI-Express 4 PCI-E 3.0 x16 slots
Hot-swap 2 Hot-swap 2.5″ SAS/SATA drive bays

1 x 960GB SATA3 6Gb/s, 7.0mm, 16nm, 0.7 DWPD SSD Installed

Fixed 2 Internal 2.5″ drive bays
M.2 Support 1x M.2 2242/2260/2280; Supports M.2 SATA and NVMe
Fans 7 Heavy duty 4cm counter-rotating fans with air shroud & optimal fan speed control
Total Output Power 1000W/1800W/1980W/2000W
Dimension
(W x H x L)
73.5 x 40 x 265 mm
Input 100-120Vac / 12.5-9.5A / 50-60Hz; 200-220Vac / 10-9.5A / 50-60Hz; 220-230Vac / 10-9.8A / 50-60Hz; 230-240Vac / 10-9.8A / 50-60Hz; 200-240Vac / 11.8-9.8A / 50-60Hz (UL/cUL only)
+12V Max: 83.3A / Min: 0A (100-120Vac); Max: 150A / Min: 0A (200-220Vac); Max: 165A / Min: 0A (220-230Vac); Max: 166.7A / Min: 0A (230-240Vac); Max: 166.7A / Min: 0A (200-240Vac) (UL/cUL only)
12Vsb Max: 2.1A / Min: 0A
Output Type 25 Pairs Gold Finger Connector
Certification Titanium Level
RoHS RoHS Compliant
Environmental Spec. Operating Temperature:
10°C ~ 35°C (50°F ~ 95°F); Non-operating Temperature:
-40°C to 60°C (-40°F to 140°F); Operating Relative Humidity:
8% to 90% (non-condensing); Non-operating Relative Humidity:
5% to 95% (non-condensing)

Dihuni OptiReady Supermicro 4028GR-TVRT-4-2 4xNVIDIA Tesla V100 32GB SXM2 NVLINK GPU 2S Xeon 2.4GHz CPU, 128GB Mem, 960GB SSD 10G Deep Learning Server

NVLink Performance

Systems with multiple GPUs and CPUs are becoming common in a variety of industries as developers rely on more parallelism in applications like AI computing. These include 4-GPU and 8-GPU system configurations using PCIe system interconnect to solve very large, complex problems. But PCIe bandwidth is increasingly becoming the bottleneck at the multi-GPU system level, driving the need for a faster and more scalable multiprocessor interconnect. NVIDIA® NVLink technology addresses this interconnect issue by providing higher bandwidth, more links, and improved scalability for multi-GPU and multi-GPU/CPU system configurations. A single NVIDIA Tesla® V100 GPU supports up to six NVLink connections and total bandwidth of 300 GB/sec—10X the bandwidth of PCIe Gen 3.
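The 300 GB/sec figure is simply six NVLink 2.0 links at 50 GB/sec of bidirectional bandwidth each; a quick sanity check of the "10X PCIe" claim (the ~32 GB/sec bidirectional figure for PCIe Gen 3 x16 is an approximation that ignores protocol overhead):

```python
NVLINK2_PER_LINK_GBS = 50   # Tesla V100: 25 GB/s each direction per link
LINKS_PER_GPU = 6           # NVLink connections per V100
PCIE3_X16_GBS = 32          # ~16 GB/s each direction

total_nvlink_gbs = NVLINK2_PER_LINK_GBS * LINKS_PER_GPU
print(total_nvlink_gbs)                   # 300
print(total_nvlink_gbs / PCIE3_X16_GBS)   # 9.375, marketed as ~10X
```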

Supermicro Servers Featuring Intel E5-2600 v4 Xeon® Processors

Built on 14 nm process technology, the Intel® Xeon® processor E5-2600 v4 family offers up to 22 cores/44 threads per socket and 55 MB last-level cache (LLC) per socket for increased performance, as well as Intel® Transactional Synchronization Extensions (Intel® TSX) for increased parallel workload performance. You can dynamically manage shared resources efficiently and increase resource utilization with Intel® Resource Director Technology (Intel® RDT) offering cache monitoring and allocation technology, code and data prioritization, and memory bandwidth monitoring. Accelerated cryptographic performance enables encrypted data to move fast over secure connections, plus improved security keys help safeguard network access and deepen platform protection.
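On Linux, the cache-allocation side of Intel RDT is exposed through the `resctrl` filesystem, where a class of service is defined by writing a schemata line such as `L3:0=f`. A minimal formatting sketch (the `lowprio` group name is a hypothetical example; actually applying the line requires root and a kernel with resctrl mounted at `/sys/fs/resctrl`, which is not attempted here):

```python
def l3_schemata(cache_id, ways_mask):
    """Format a resctrl L3 cache-allocation line: which ways a group may fill."""
    if ways_mask <= 0:
        raise ValueError("mask must select at least one cache way")
    return f"L3:{cache_id}={ways_mask:x}"

# Restrict a hypothetical 'lowprio' group to the lowest 4 L3 ways on cache 0:
print(l3_schemata(0, 0b1111))  # L3:0=f
# To apply: write this line to /sys/fs/resctrl/lowprio/schemata (root only).
```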

“Supermicro’s total Infrastructure solutions maximize performance, density and efficiency with architecture innovations optimized for the latest Intel Xeon processor technology, high performance NVMe, and SIOM flexible networking up to 100G. Our expanding range of server and storage platforms including the new Simply Double SuperStorage, FatTwin, TwinPro, Ultra, MicroCloud, MicroBlade and SuperBlade provide the ultimate range of complete infrastructure building blocks.”  

Charles Liang, President and CEO of Supermicro


Technical Specifications

SYS-4028GR-TVRT SuperServer 4028GR-TVRT (Black)
CPU Dual Socket R3 (LGA 2011); Intel® Xeon® processor E5-2600 v4†/v3 family (up to 145W TDP); 2 x Intel® Xeon® processor E5-2640 v4 2.4GHz Installed
Cores / Cache Up to 22 Cores† / Up to 55MB† Cache
System Bus QPI up to 9.6 GT/s
GPU 4 Tesla V100 32GB SXM2 GPUs Installed

Up to 300GB/s GPU-to-GPU NVLINK

Memory Capacity 24x 288-pin DDR4 DIMM slots; Up to 3TB† ECC 3DS LRDIMM, 2TB ECC RDIMM

128GB DDR4-2400 Memory Installed

Memory Type 2400†/2133/1866/1600MHz ECC DDR4 SDRAM 72-bit
DIMM Sizes RDIMM: 32GB, 16GB, 8GB, 4GB; LRDIMM: 64GB, 32GB; 3DS LRDIMM: 128GB
Memory Voltage 1.2 V
Error Detection Corrects single-bit errors
Chipset Intel® C612 chipset
SATA SATA3 (6Gbps) with RAID 0, 1, 5, 10
IPMI Support for Intelligent Platform Management Interface v.2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support; ASPEED AST2400 BMC
Network Controllers Dual Port 10GbE with Intel X540 Ethernet Controller; Virtual Machine Device Queues reduce I/O overhead; Supports 10GBASE-T, 100BASE-TX, and 1000BASE-T, RJ45 output
Graphics ASPEED AST2400 BMC
SATA 10 SATA3 (6Gbps) ports
LAN 2 RJ45 10GBase-T LAN ports; 1 RJ45 Dedicated IPMI LAN port
USB 2 USB 3.0 ports
Video 1 VGA Connector
Serial Port 1 Serial header
BIOS Type 128Mb SPI Flash EEPROM with AMI® BIOS
BIOS Features Plug and Play (PnP); APM 1.2; DMI 2.3; PCI 2.3; ACPI 1.0 / 2.0 / 3.0; USB Keyboard support; SMBIOS 2.7.1; UEFI
Form Factor 4U Rackmountable; Rackmount Kit (MCP-290-00057-0N)
Model CSE-R422BG
Height 7.0″ (178mm)
Width 17.6″ (447mm)
Depth 31.7″ (805mm)
Weight Net Weight: 80 lbs (36.2 kg); Gross Weight: 135 lbs (61.2 kg)
Available Colors Black
Hot-swap 16 Hot-swap 2.5″ SATA/SAS drive bays

1x960GB SSD Installed

PCI-Express 4 PCI-E 3.0 x16 (low-profile) slots; 2 PCI-E 3.0 x8 slots
Fans 8x 92mm Cooling Fans
Shrouds 1 Air Shroud (MCP-310-41808-0B)
Power Supply 2200W Redundant Titanium Level Power Supplies with PMBus
Total Output Power 1200W/1800W/1980W/2090W/2200W
(UL/cUL only)
Dimension
(W x H x L)
106.5 x 82.4 x 203.5 mm
Input 1200W: 100-127 Vac / 14-11 A / 50-60 Hz; 1800W: 200-220 Vac / 10-9.5 A / 50-60 Hz; 1980W: 220-230 Vac / 10-9.5 A / 50-60 Hz; 2090W: 230-240 Vac / 10-9.8 A / 50-60 Hz; 2200W: 220-240 Vac / 12-11 A / 50-60 Hz (UL/cUL only); 2090W: 180-220 Vac / 14-11 A / 50-60 Hz (UL/cUL only); 2090W: 230-240 Vdc / 10-9.8 A (CCC only)
+12V Max: 100A / Min: 0A (1200W); Max: 150A / Min: 0A (1800W); Max: 165A / Min: 0A (1980W); Max: 174.17A / Min: 0A (2090W); Max: 183.3A / Min: 0A (2200W); Max: 174.17A / Min: 0A (2090W)
12Vsb Max: 2A / Min: 0A
Output Type Gold Finger (connector on M/P)
Certification UL/cUL/CB/BSMI/CE/CCC; Titanium Level
CPU Monitors for CPU Cores, +1.2V, 1.5V, +3.3V, +12V, (+3V, 1.0V, 1.2V, 1.8V, 1.1V) Standby, VBAT, Memory, Chipset Voltages; 4-Phase-switching voltage regulator with auto-sense from 0.6V-1.35V
FAN 4-pin fan headers with tachometer status monitoring; Low noise fan speed control mode
Temperature Monitoring for CPU and system environment; Thermal Control for 8 fan connectors
Other Features Chassis intrusion detection
RoHS RoHS Compliant
Environmental Spec. Operating Temperature:
10°C to 35°C (50°F to 95°F); Non-operating Temperature:
-40°C to 70°C (-40°F to 158°F); Operating Relative Humidity:
8% to 90% (non-condensing); Non-operating Relative Humidity:
5% to 95% (non-condensing)

Dihuni OptiReady Supermicro 4028GR-TVRT-8-2 8xNVIDIA Tesla V100 32GB SXM2 NVLINK GPU 2S Xeon 2.4GHz CPU, 256GB Mem, 960GB SSD 10G Deep Learning Server


Technical Specifications

SYS-4028GR-TVRT SuperServer 4028GR-TVRT (Black)
CPU Dual Socket R3 (LGA 2011); Intel® Xeon® processor E5-2600 v4†/v3 family (up to 145W TDP); 2 x Intel® Xeon® processor E5-2640 v4 2.4GHz Installed
Cores / Cache Up to 22 Cores† / Up to 55MB† Cache
System Bus QPI up to 9.6 GT/s
GPU 8 Tesla V100 32GB SXM2 GPUs Installed

Up to 300GB/s GPU-to-GPU NVLINK

Memory Capacity 24x 288-pin DDR4 DIMM slots; Up to 3TB† ECC 3DS LRDIMM, 2TB ECC RDIMM

256GB DDR4-2400 Memory Installed

Memory Type 2400†/2133/1866/1600MHz ECC DDR4 SDRAM 72-bit
DIMM Sizes RDIMM: 32GB, 16GB, 8GB, 4GB; LRDIMM: 64GB, 32GB; 3DS LRDIMM: 128GB
Memory Voltage 1.2 V
Error Detection Corrects single-bit errors
Chipset Intel® C612 chipset
SATA SATA3 (6Gbps) with RAID 0, 1, 5, 10
IPMI Support for Intelligent Platform Management Interface v.2.0; IPMI 2.0 with virtual media over LAN and KVM-over-LAN support; ASPEED AST2400 BMC
Network Controllers Dual Port 10GbE with Intel X540 Ethernet Controller; Virtual Machine Device Queues reduce I/O overhead; Supports 10GBASE-T, 100BASE-TX, and 1000BASE-T, RJ45 output
Graphics ASPEED AST2400 BMC
SATA 10 SATA3 (6Gbps) ports
LAN 2 RJ45 10GBase-T LAN ports; 1 RJ45 Dedicated IPMI LAN port
USB 2 USB 3.0 ports
Video 1 VGA Connector
Serial Port 1 Serial header
BIOS Type 128Mb SPI Flash EEPROM with AMI® BIOS
BIOS Features Plug and Play (PnP); APM 1.2; DMI 2.3; PCI 2.3; ACPI 1.0 / 2.0 / 3.0; USB Keyboard support; SMBIOS 2.7.1; UEFI
Form Factor 4U Rackmountable; Rackmount Kit (MCP-290-00057-0N)
Model CSE-R422BG
Height 7.0″ (178mm)
Width 17.6″ (447mm)
Depth 31.7″ (805mm)
Weight Net Weight: 80 lbs (36.2 kg); Gross Weight: 135 lbs (61.2 kg)
Available Colors Black
Hot-swap 16 Hot-swap 2.5″ SATA/SAS drive bays

1x960GB SSD Installed

PCI-Express 4 PCI-E 3.0 x16 (low-profile) slots; 2 PCI-E 3.0 x8 slots
Fans 8x 92mm Cooling Fans
Shrouds 1 Air Shroud (MCP-310-41808-0B)
Power Supply 2200W Redundant Titanium Level Power Supplies with PMBus
Total Output Power 1200W/1800W/1980W/2090W/2200W
(UL/cUL only)
Dimension
(W x H x L)
106.5 x 82.4 x 203.5 mm
Input 1200W: 100-127 Vac / 14-11 A / 50-60 Hz; 1800W: 200-220 Vac / 10-9.5 A / 50-60 Hz; 1980W: 220-230 Vac / 10-9.5 A / 50-60 Hz; 2090W: 230-240 Vac / 10-9.8 A / 50-60 Hz; 2200W: 220-240 Vac / 12-11 A / 50-60 Hz (UL/cUL only); 2090W: 180-220 Vac / 14-11 A / 50-60 Hz (UL/cUL only); 2090W: 230-240 Vdc / 10-9.8 A (CCC only)
+12V Max: 100A / Min: 0A (1200W); Max: 150A / Min: 0A (1800W); Max: 165A / Min: 0A (1980W); Max: 174.17A / Min: 0A (2090W); Max: 183.3A / Min: 0A (2200W); Max: 174.17A / Min: 0A (2090W)
12Vsb Max: 2A / Min: 0A
Output Type Gold Finger (connector on M/P)
Certification UL/cUL/CB/BSMI/CE/CCC; Titanium Level
CPU Monitors for CPU Cores, +1.2V, 1.5V, +3.3V, +12V, (+3V, 1.0V, 1.2V, 1.8V, 1.1V) Standby, VBAT, Memory, Chipset Voltages; 4-Phase-switching voltage regulator with auto-sense from 0.6V-1.35V
FAN 4-pin fan headers with tachometer status monitoring; Low noise fan speed control mode
Temperature Monitoring for CPU and system environment; Thermal Control for 8 fan connectors
Other Features Chassis intrusion detection
RoHS RoHS Compliant
Environmental Spec. Operating Temperature:
10°C to 35°C (50°F to 95°F); Non-operating Temperature:
-40°C to 70°C (-40°F to 158°F); Operating Relative Humidity:
8% to 90% (non-condensing); Non-operating Relative Humidity:
5% to 95% (non-condensing)