
Dihuni OptiReady Supermicro SYS-6049GP-TRT 20 x NVIDIA Tesla T4 GPU 256GB RAM 2x1TB SATA 2S Xeon 6140 CPU 2x10GbE Deep Learning Inference AI Server

$69,970.00



SKU: SYS-6049GP-TRT-20

Request Formal Quote, Volume Pricing, Stock or Product Information

  • Competitor Match/Beat on Custom Servers and Select Products (send competitor quote)
  • Leasing Options Available (requires 5 years of business operations)
  • Purchase Orders Accepted / Net Terms subject to approval
  • Custom Servers - Configure Below, Add to Cart and Request Quote for formal pricing

The Supermicro SuperServer SYS-6049GP-TRT, in this OptiReady configuration with 20 Tesla T4 GPUs, is optimized to deliver the performance required to vertically scale modern AI workloads. To achieve maximum GPU density and performance, this 4U server supports up to 20 NVIDIA Tesla T4 Tensor Core GPUs, three terabytes of memory, and 24 hot-swappable 3.5″ drives. The system also features four 2000-watt Titanium-level-efficiency (2+2) redundant power supplies to help optimize power efficiency, uptime, and serviceability.

NVIDIA NGC Pre-Loaded

This Deep Learning server is available with NVIDIA NGC containers preloaded. NGC empowers researchers, data scientists, and developers with performance-engineered containers featuring AI software such as TensorFlow, Keras, PyTorch, MXNet, NVIDIA TensorRT™, RAPIDS, and more. These pre-integrated containers bundle the NVIDIA AI software stack, including the NVIDIA® CUDA® Toolkit and NVIDIA deep learning libraries, and can be upgraded with simple Docker commands.
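
As an illustration, here is a minimal sketch of pulling an NGC container and launching it with GPU access using the Docker SDK for Python; the image tag nvcr.io/nvidia/tensorflow:23.08-tf2-py3 and the GPU request are examples only, not the exact preload shipped with the system.

    # Minimal sketch: pull an NGC deep learning container and run it with GPU access.
    # Assumes the Docker SDK for Python ("pip install docker") and the NVIDIA
    # Container Toolkit are installed; the image tag is an example, not the preload.
    import docker

    client = docker.from_env()

    # Pull a TensorFlow container from the NGC registry (nvcr.io).
    image = client.images.pull("nvcr.io/nvidia/tensorflow", tag="23.08-tf2-py3")

    # Start the container with all GPUs exposed and list the visible devices.
    output = client.containers.run(
        image,
        command="nvidia-smi -L",
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        remove=True,
    )
    print(output.decode())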

Key Features

  • 20 x NVIDIA Tesla T4 16GB PCIe GPUs Installed
  • 2 x Intel Skylake Xeon 6140 18-Core/36-Thread 2.3GHz CPUs Installed
  • 256 GB (8x32GB) DDR4-2666 2Rx4 ECC REG DIMM Installed
  • 2 x 1TB SATA HDD Installed
  • NGC Docker Container for Deep Learning Pre-loaded (Optional, please select below)

State-of-the-Art Inference in Real-Time

Responsiveness is key to user engagement for services such as conversational AI, recommender systems, and visual search. As models increase in accuracy and complexity, delivering the right answer right now requires exponentially larger compute capability. Tesla T4 delivers up to 40X better low-latency throughput, so more requests can be served in real time.
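
As a rough, hedged sketch of how low-latency inference throughput can be measured on a single GPU such as the T4, the snippet below times FP16 inference on a ResNet-50 with PyTorch; the model, batch size, and iteration counts are illustrative assumptions, not NVIDIA's published benchmark setup.

    # Rough latency/throughput probe on one GPU; assumes PyTorch and torchvision
    # are installed and a CUDA-capable GPU (e.g., a Tesla T4) is visible.
    import time
    import torch
    import torchvision

    device = torch.device("cuda")
    model = torchvision.models.resnet50(weights=None).half().to(device).eval()
    batch = torch.randn(8, 3, 224, 224, dtype=torch.float16, device=device)

    with torch.no_grad():
        for _ in range(10):                      # warm-up iterations
            model(batch)
        torch.cuda.synchronize()

        start = time.perf_counter()
        iters = 100
        for _ in range(iters):
            model(batch)
        torch.cuda.synchronize()
        elapsed = time.perf_counter() - start

    print(f"avg latency: {1000 * elapsed / iters:.2f} ms, "
          f"throughput: {iters * batch.shape[0] / elapsed:.0f} images/s")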

T4 Inference Performance

Video Transcoding Performance

As the volume of online videos continues to grow exponentially, demand for solutions to efficiently search and gain insights from video continues to grow as well. Tesla T4 delivers breakthrough performance for AI video applications, with dedicated hardware transcoding engines that bring twice the decoding performance of prior-generation GPUs. T4 can decode up to 38 full-HD video streams, making it easy to integrate scalable deep learning into video pipelines to deliver innovative, smart video services.
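
As a hedged sketch of how those decode engines are typically driven, the snippet below fans several full-HD H.264 decode jobs out to FFmpeg's NVDEC path in parallel; it assumes an FFmpeg build with CUVID/NVDEC support, and the file names and stream count are placeholders.

    # Launch several GPU-accelerated H.264 decode jobs in parallel; assumes an
    # FFmpeg build with NVDEC/CUVID enabled and NVIDIA drivers installed.
    import subprocess

    streams = [f"stream_{i}.mp4" for i in range(4)]   # placeholder input files

    procs = []
    for src in streams:
        cmd = [
            "ffmpeg", "-hide_banner", "-loglevel", "error",
            "-hwaccel", "cuda",          # decode on the GPU (NVDEC)
            "-c:v", "h264_cuvid",        # NVIDIA H.264 decoder
            "-i", src,
            "-f", "null", "-",           # discard output; only decode is exercised
        ]
        procs.append(subprocess.Popen(cmd))

    for p in procs:
        p.wait()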

X11 Servers Featuring the New Intel® Xeon® Scalable (Skylake) Processors

Supermicro’s new X11 servers are engineered to unleash the full performance and rich feature set of the new Intel® Xeon® Scalable processor family, supporting more cores, TDP envelopes of 205 watts and higher, more memory channels with higher bandwidth, more PCI-E 3.0 lanes, 100G/40G/25G/10G Ethernet, 100G EDR InfiniBand (on select servers), and integrated Intel® Omni-Path Architecture networking fabrics. The elevated compute performance, density, I/O capacity, and efficiency are coupled with the industry’s most comprehensive support for NVMe NAND Flash and Intel® Optane SSDs for unprecedented application responsiveness and agility. For exact server specifications, please see the highlights below and refer to the detailed technical specifications.

“To address the rapidly emerging high-throughput inference market driven by technologies such as 5G, Smart Cities and IoT devices that generate huge amounts of data and require real-time decision making, our new SuperServer 6049GP-TRT provides the superior performance required to vertically scale the technology of modern AI. To achieve maximum GPU density and performance, this 4U server supports up to 20 NVIDIA® Tesla® T4 Tensor Core GPUs, three terabytes of memory, and 24 hot-swappable 3.5″ drives. This system also features four 2000-watt Titanium level efficiency (2+2) redundant power supplies to help optimize the power efficiency, uptime and serviceability.”

Charles Liang, President and CEO of Supermicro

Server Systems Management

Supermicro Server Manager (SSM) provides capabilities to monitor the health of server components, including memory, hard drives, and RAID controllers. It enables the datacenter administrator to monitor and manage power usage across all Supermicro servers, allowing users to maximize their CPU payload while mitigating the risk of tripped circuits. Firmware upgrades on Supermicro servers are now easier, requiring only a couple of clicks. Administrators can mount an ISO image on multiple servers and reboot the servers with those images. The tool also provides pre-defined reports and many more features that make managing Supermicro servers simpler. Download the SSM brochure for more info, or download the Supermicro SuperDoctor® device monitoring and management software.
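
SSM itself is Supermicro's own tool, but as a hedged illustration of the kind of out-of-band health polling it performs, the sketch below reads sensor data from a server's BMC over standard IPMI; the host address and credentials are placeholders, and it assumes the common ipmitool utility is installed and the BMC is reachable.

    # Generic out-of-band health check via IPMI; this is not the SSM API itself.
    # Assumes ipmitool is installed and the BMC address/credentials are valid.
    import subprocess

    BMC_HOST = "10.0.0.50"        # placeholder BMC address
    BMC_USER = "ADMIN"            # placeholder credentials
    BMC_PASS = "password"

    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
         "-U", BMC_USER, "-P", BMC_PASS, "sdr", "elist"],
        capture_output=True, text=True, check=True,
    )

    # Print only the sensor lines that are not reporting "ok".
    for line in result.stdout.splitlines():
        if "| ok" not in line.lower():
            print(line)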

Weight: 165 lbs
Dimensions: 31.7 × 17.6 × 17.5 in

