Dihuni Introduces Supermicro’s NVIDIA HGX-2 based 16 Tesla V100 32GB SXM3 GPU Server for Deep Learning, AI, HPC and IoT Predictive Analytics

We are pleased to announce our plans to introduce Supermicro’s upcoming NVIDIA® HGX-2 based cloud server platform, which the company describes as the world’s most powerful system for artificial intelligence (AI) and high-performance computing (HPC), capable of delivering 2 PetaFLOPS of compute. Supermicro is living up to its reputation of being first to market with advanced computing solutions.

“Supermicro’s new SuperServer based on the HGX-2 platform will deliver more than double the performance of current systems, which will help enterprises address the rapidly expanding size of AI models that sometimes require weeks to train,” said Charles Liang, president and CEO of Supermicro. “Our new HGX-2 system will enable efficient training of complex models. It combines sixteen Tesla V100 32GB SXM3 GPUs connected via NVLink and NVSwitch to work as a unified 2 PetaFlop accelerator with half a terabyte of aggregate GPU memory to deliver unmatched compute power.”

Like NVIDIA’s DGX-2 system, the HGX-2 architecture based Supermicro SuperServer SYS-9029GP-TNVRT is well suited to Digital Transformation applications, from natural speech interfaces to autonomous vehicles, predictive analytics in manufacturing, smart cities and smart buildings. With the explosion in IT and IoT data, AI models are growing rapidly in size, and HPC applications are similarly growing in complexity as they unlock new scientific insights. The SYS-9029GP-TNVRT can be optimized to deliver the compute performance and memory needed for rapid model training.
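As a rough illustration of how such a system is typically used (this is not part of Supermicro’s or NVIDIA’s software, and the model and data are placeholders), the sketch below assumes PyTorch and shows a training step spread across every GPU the server exposes, so the 16 V100s behave like one large accelerator:

```python
# Minimal sketch: data-parallel training across all visible GPUs
# (e.g. the 16 V100s in an HGX-2 system). Assumes PyTorch is installed;
# the model and data are toy placeholders for a real workload.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy model standing in for a real deep learning network.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

# DataParallel splits each batch across every GPU visible to the host,
# which NVLink/NVSwitch interconnects keep tightly coupled.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One synthetic training step; a real job would loop over a DataLoader.
inputs = torch.randn(512, 1024, device=device)
labels = torch.randint(0, 10, (512,), device=device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```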

We also want to take this opportunity to announce support for the newly introduced NVIDIA Tesla T4 GPU on Supermicro servers. The ultra-efficient Tesla T4 is designed to accelerate inference workloads in any scale-out server. Powered by NVIDIA Turing Tensor Cores, the T4 brings multi-precision inference performance to accelerate the diverse applications of modern AI. Deploying these GPUs on Supermicro servers delivers the right performance for deep learning models, which are increasingly important for any organization in our data-centric world. Complex neural networks must be trained on exponentially larger volumes of data, and powerful Supermicro GPU servers help deliver maximum throughput for inference workloads.
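To give a feel for the kind of reduced-precision inference that Turing Tensor Cores accelerate, here is a minimal sketch assuming PyTorch with a CUDA build; the model is a hypothetical placeholder, not a specific NVIDIA or Supermicro API:

```python
# Minimal sketch: FP16 inference of the kind Tensor Cores on the Tesla T4
# accelerate. Assumes PyTorch; the model is a toy placeholder.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(2048, 2048), nn.ReLU(), nn.Linear(2048, 1000))
model = model.to(device).eval()

batch = torch.randn(64, 2048, device=device)

with torch.no_grad():
    if device.type == "cuda":
        # autocast runs eligible layers in FP16, which Tensor Cores
        # execute at much higher throughput than FP32.
        with torch.autocast(device_type="cuda", dtype=torch.float16):
            logits = model(batch)
    else:
        # CPU fallback stays in FP32 so the sketch still runs anywhere.
        logits = model(batch)

print(logits.shape)  # torch.Size([64, 1000])
```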

Dihuni will soon announce OptiReady configurations and pricing for NVIDIA HGX-2 architecture and Tesla T4 based Supermicro servers. Please contact us at digital@dihuni.com for more information.
