Sale!

NVIDIA Tesla P100 GPU 12GB HBM2 Pascal CUDA PCIe x16 for Accelerated Machine & Deep Learning Artificial Intelligence Finance Oil Gas CAD Research IoT

(4 customer reviews)

$4,560.00

(Add to cart to Buy / Request Quote)

Out of stock

Safe Checkout

Request Formal Quote, Volume Pricing, Stock or Product Information

  • Competitor Match/Beat on Custom Servers and Select Products (send competitor quote)
  • Leasing Options Available (requires 5 years business operations)
  • Purchase Orders Accepted / Net Terms subject to approval
  • Custom Servers - Configure Below, Add to Cart and Request Quote for formal pricing

Ideal for your Advanced Digital Transformation Applications: Video Processing, Big Data, Hyperconverged Appliances, Internet of Things (IoT), In-Memory Analytics, Machine Learning (ML), Artificial Intelligence (AI) and intensive Data Center, High Performance Computing (HPC) or Hyperscale Infrastructure Applications. NVIDIA Tesla GPUs are well suited to autonomous vehicles, molecular dynamics, computational biology, fluid simulation and similar workloads, and even to advanced Virtual Desktop Infrastructure (VDI) applications.

In the new era of AI and intelligent machines, deep learning is shaping our world like no other computing model in history. Interactive speech, visual search, and video recommendations are a few of the many AI-based services that we use every day. Accuracy and responsiveness are key to user adoption for these services. As deep learning models increase in accuracy and complexity, CPUs are no longer capable of delivering a responsive user experience. NVIDIA® Tesla® P100 GPU accelerators are the world’s first AI supercomputing data center GPUs. They tap into the NVIDIA Pascal™ GPU architecture to deliver a unified platform for accelerating both HPC and AI. With higher performance and fewer, lightning-fast nodes, the Tesla P100 enables data centers to dramatically increase throughput while also saving money. With over 500 HPC applications accelerated—including 15 of the top 15—as well as all major deep learning frameworks, every HPC customer can deploy accelerators in their data centers.

As an NVIDIA Preferred Solution Provider, we are authorized by the manufacturer and proudly deliver only original factory-packaged products.

Key Features

  • Sold and supported by NVIDIA
  • Dual-slot 10.5 inch PCI Express Gen3 card
  • 250W Max Power Consumption
  • Passively cooled board
  • 12GB CoWoS HBM2 Stacked Memory Capacity

We accept all major credit cards, including MasterCard, Visa, American Express and Discover. Please review our Terms and Conditions and our Return, Refund and Repair policy prior to purchase.

The NVIDIA® Tesla® P100 GPU Accelerator for PCIe is a dual-slot 10.5 inch PCI Express Gen3 card with a single NVIDIA® Pascal™ GP100 graphics processing unit (GPU). It uses a passive heat sink for cooling, which requires system airflow to operate the card within its thermal limits. The Tesla P100 PCIe supports double-precision (FP64), single-precision (FP32) and half-precision (FP16) compute tasks, as well as unified virtual memory and the Page Migration Engine. For performance optimization, the NVIDIA GPU Boost™ feature is supported: it dynamically adjusts the GPU clock to achieve maximum performance within the power cap limit.

Tesla P100 PCIe boards are shipped with ECC enabled by default to protect the GPU’s memory interface and the on-board memories. ECC protects the memory interface by detecting single-bit, double-bit, and all odd-bit errors; the GPU will replay any memory transaction that has an ECC error until the data transfer is error-free. ECC protects the DRAM content by correcting single-bit errors and detecting double-bit errors; there is no replay for DRAM ECC. The Tesla P100 PCIe with HBM2 memory has native support for ECC and incurs no ECC overhead in either memory capacity or bandwidth. For more information on compute capabilities, HBM2, unified virtual memory, and the Page Migration Engine, visit the official NVIDIA website.

Exponential Performance Leap with Pascal Architecture

The NVIDIA Pascal™ architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraflops of FP16 performance, Pascal is optimized to drive exciting new possibilities in deep learning applications. Pascal also delivers over 5 teraflops of double-precision and over 10 teraflops of single-precision performance for HPC workloads.

Unprecedented Efficiency with CoWoS with HBM2

The Tesla P100 tightly integrates compute and data on the same package by adding CoWoS® (Chip-on-Wafer-on-Substrate) with HBM2 technology to deliver 3X memory performance over the NVIDIA Maxwell™ architecture. This provides a generational leap in time-to-solution for data-intensive applications.

Simpler Programming with Page Migration Engine

The Page Migration Engine frees developers to focus more on tuning for computing performance and less on managing data movement. Applications can now scale beyond the GPU’s physical memory size to a virtually limitless amount of memory.
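As a sketch of what this looks like in practice, the CUDA snippet below (an illustrative example, not vendor sample code) allocates managed memory with cudaMallocManaged, so a single pointer is valid on both CPU and GPU; on Pascal, the Page Migration Engine faults and migrates pages on demand, which is what allows oversubscription of the GPU's physical memory.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Simple kernel: scale each element in place.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *data;
    // Unified (managed) memory: one pointer usable from both CPU and GPU.
    // On Pascal, pages migrate on demand, so managed allocations may even
    // exceed the GPU's physical memory size.
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;   // touched on the CPU

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // touched on the GPU
    cudaDeviceSynchronize();  // wait for the GPU before reading on the CPU

    printf("data[0] = %f\n", data[0]);  // 2.000000 on a CUDA-capable system
    cudaFree(data);
    return 0;
}
```

Note there is no explicit cudaMemcpy anywhere: the runtime and the Page Migration Engine handle data movement between host and device.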

CUDA Ready

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords.

The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime.
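For illustration, the minimal vector-addition kernel below shows the "few basic keywords" mentioned above: __global__ marks a GPU function, and the <<<blocks, threads>>> syntax launches it. This is a generic CUDA C++ sketch, not code specific to the Tesla P100.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// __global__ marks a function that runs on the GPU, launched from the CPU.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // <<<blocks, threads>>> is the CUDA kernel-launch syntax.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // implicit sync

    printf("hc[10] = %f\n", hc[10]);  // 30.000000 on a CUDA-capable system
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The sequential setup runs on the CPU, while the addition itself runs across thousands of GPU threads in parallel, exactly as described above.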

Performance Specification for NVIDIA Tesla P100 Accelerators

Double-Precision Performance: 4.7 teraflops (PCIe) / 5.3 teraflops (NVLink)
Single-Precision Performance: 9.3 teraflops (PCIe) / 10.6 teraflops (NVLink)
Half-Precision Performance: 18.7 teraflops (PCIe) / 21.2 teraflops (NVLink)
NVIDIA NVLink™ Interconnect Bandwidth: — (PCIe) / 160 GB/s (NVLink)
PCIe x16 Interconnect Bandwidth: 32 GB/s (PCIe) / 32 GB/s (NVLink)
CoWoS HBM2 Stacked Memory Capacity: 16 GB or 12 GB (PCIe) / 16 GB (NVLink)
CoWoS HBM2 Stacked Memory Bandwidth: 732 GB/s or 549 GB/s (PCIe) / 732 GB/s (NVLink)
Enhanced Programmability with Page Migration Engine: Yes (both)
ECC Protection for Reliability: Yes (both)
Server-Optimized for Data Center Deployment: Yes (both)

Shipping Weight: 8 lbs

4 reviews for NVIDIA Tesla P100 GPU 12GB HBM2 Pascal CUDA PCIe x16 for Accelerated Machine & Deep Learning Artificial Intelligence Finance Oil Gas CAD Research IoT

  1. Derek C

    Dihuni team was upfront about the discontinuance of this product and helped us in getting some urgent GPUs for AI lab use. Professional company.

    • Dihuni

      Thank you. Yes, the P100 EOL was announced earlier this year. The current last date to buy is December 26th. We appreciate your business.

  2. S.M

    We debated between V100, P100 and P4 and settled on P100 because of price attractiveness compared to V100 and still getting great performance.

  3. L.Y

We are happy to find an authorized NVIDIA partner that delivers. I don’t recommend buying such a high-end product from online stores that sell it without proper authorization from the manufacturer. Dihuni kept us posted about our order, and although our package shipped out a couple of days after the initial date, we were happy that we were properly informed. Nice job Dihuni!

  4. N.T

    I requested expedited delivery and product was shipped next day.

Add a review

Your email address will not be published. Required fields are marked *
