Dihuni Announces Open Source Deep Learning and AI Software Installation Services on NVIDIA Tesla GPU Based OptiReady Servers

We are pleased to announce that select Dihuni OptiReady Servers featuring NVIDIA Tesla GPUs are now available with an option to load and test a comprehensive stack of Open Source and NVIDIA-specific Deep Learning, Machine Learning (ML) and Artificial Intelligence (AI) software.

With the massive amounts of data generated by IT and Internet of Things (IoT) systems, Deep Learning and AI systems are growing in popularity: they enable new scientific discoveries and can transform both business and research. However, developing the right infrastructure to train and deploy Deep Learning and AI models requires not just the right hardware but also the right programming languages and software frameworks to continually train, process and improve models from all kinds of information: images, video, text, speech, machine data, etc.

We recognize that selecting, installing and deploying such software packages can take time away from actual Deep Learning and analytical tasks, so we are immediately making the following software packages available for pre-installation on NVIDIA Tesla GPU powered Deep Learning servers:

Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by Berkeley AI Research (BAIR) and by community contributors. Yangqing Jia created the project during his PhD at UC Berkeley. Caffe is released under the BSD 2-Clause license. (Source : http://caffe.berkeleyvision.org/)
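
As a brief illustration (a sketch, not taken from the Caffe documentation quoted above), loading a trained model through Caffe's Python interface might look like the following; the deploy.prototxt and pretrained.caffemodel file names and the 'data' input blob name are placeholders:

    import numpy as np
    import caffe

    caffe.set_mode_gpu()                       # run on the Tesla GPU (caffe.set_mode_cpu() also works)
    net = caffe.Net('deploy.prototxt',         # placeholder network definition file
                    'pretrained.caffemodel',   # placeholder trained weights file
                    caffe.TEST)                # load the net in inference mode

    # Feed a dummy batch shaped like the input blob and run a forward pass.
    net.blobs['data'].data[...] = np.random.rand(*net.blobs['data'].data.shape)
    output = net.forward()
    print({name: blob.shape for name, blob in output.items()})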

Caffe2 (developed at Facebook) aims to provide an easy and straightforward way to experiment with deep learning and leverage community contributions of new models and algorithms. You can bring your creations to scale using the power of GPUs in the cloud or to the masses on mobile with cross-platform libraries. (Source : https://research.fb.com/downloads/caffe2/)
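
A minimal Caffe2 sketch, purely for illustration, feeds a NumPy array into the workspace, runs a single operator and reads the result back:

    import numpy as np
    from caffe2.python import core, workspace

    x = np.random.rand(4, 3).astype(np.float32)
    workspace.FeedBlob("x", x)                        # push a NumPy array into the Caffe2 workspace
    relu = core.CreateOperator("Relu", ["x"], ["y"])  # a single ReLU operator
    workspace.RunOperatorOnce(relu)
    print(workspace.FetchBlob("y"))                   # read the result back as a NumPy array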

The Microsoft Cognitive Toolkit—previously known as CNTK—empowers you to harness the intelligence within massive datasets through deep learning by providing uncompromised scaling, speed, and accuracy with commercial-grade quality and compatibility with the programming languages and algorithms you already use. (Source : https://www.microsoft.com/en-us/cognitive-toolkit/)
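
For illustration, a minimal Cognitive Toolkit sketch (CNTK 2.x Python API) with an arbitrarily sized one-layer network might look like this:

    import numpy as np
    import cntk as C

    features = C.input_variable(4)                             # a 4-dimensional input (arbitrary size)
    model = C.layers.Dense(2, activation=C.softmax)(features)  # one dense layer with softmax output

    sample = np.random.rand(1, 4).astype(np.float32)           # one random sample
    print(model.eval({features: sample}))                      # forward evaluation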

Apache MXNet is a modern open-source Deep Learning framework used to train and deploy deep neural networks. Acceleration libraries like MXNet offer powerful tools to help developers exploit the full capabilities of GPUs and cloud computing. While these tools are generally useful and applicable to any mathematical computation, MXNet places a special emphasis on speeding up the development and deployment of large-scale deep neural networks. (Source : https://mxnet.apache.org/faq/why_mxnet.html)
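
A minimal MXNet sketch, for illustration only, runs NDArray math on a chosen device:

    import mxnet as mx

    ctx = mx.gpu(0)                               # switch to mx.cpu() on a machine without a GPU
    a = mx.nd.ones((2, 3), ctx=ctx)
    b = mx.nd.arange(6, ctx=ctx).reshape((2, 3))
    print((a + b).asnumpy())                      # the computation itself runs on the chosen device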

PyTorch is a python package that provides two high-level features: Tensor computation (like numpy) with strong GPU acceleration and Deep Neural Networks built on a tape-based autodiff system. You can reuse your favorite python packages such as numpy, scipy and Cython to extend PyTorch when needed. (Source : https://pytorch.org/about/)
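
For illustration, a minimal PyTorch sketch (assuming PyTorch 0.4 or later) showing GPU tensor computation and the tape-based autodiff system:

    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    x = torch.randn(3, 3, device=device, requires_grad=True)  # a tensor on the GPU when available
    y = (x * x).sum()                                          # a simple differentiable expression
    y.backward()                                               # autodiff computes dy/dx
    print(x.grad)                                              # equals 2 * x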

Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation. (Source : http://torch.ch/)

TensorFlow is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains. (Source : https://www.tensorflow.org/ecosystem/)
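
A minimal TensorFlow sketch, for illustration only, using the TensorFlow 1.x graph-and-session style current at the time of writing:

    import tensorflow as tf

    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]])
    c = tf.matmul(a, b)                  # placed on a GPU automatically when one is available

    with tf.Session() as sess:           # TensorFlow 1.x session API
        print(sess.run(c))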

Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features tight integration with NumPy, transparent use of a GPU, efficient symbolic differentiation, speed and stability optimizations, dynamic C code generation, and extensive unit-testing and self-verification. (Source : http://deeplearning.net/software/theano/)
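
For illustration, a minimal Theano sketch that defines, compiles and evaluates a symbolic expression (the classic logistic-function example):

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.dmatrix('x')                          # a symbolic matrix
    y = 1 / (1 + T.exp(-x))                     # element-wise logistic function
    logistic = theano.function([x], y)          # Theano compiles the expression (optionally for the GPU)
    print(logistic(np.array([[0.0, 1.0], [-1.0, -2.0]])))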

The following software packages from NVIDIA are also available for installation as part of the Deep Learning and AI Suite.

The NVIDIA® CUDA® Toolkit provides a development environment for creating high performance GPU-accelerated applications. With the CUDA Toolkit, you can develop, optimize and deploy your applications on GPU-accelerated embedded systems, desktop workstations, enterprise data centers, cloud-based platforms and HPC supercomputers. The toolkit includes GPU-accelerated libraries, debugging and optimization tools, a C/C++ compiler and a runtime library to deploy your application. (Source : NVIDIA)

NVIDIA TensorRT is a high-performance deep learning inference library for deploying neural networks in production environments. Power efficiency and speed of response are two key metrics for deployed deep learning applications, because they directly affect the user experience and the cost of the service provided. (Source : NVIDIA)

The NVIDIA Deep Learning GPU Training System (DIGITS) puts the power of deep learning into the hands of engineers and data scientists. DIGITS can be used to rapidly train highly accurate deep neural networks (DNNs) for image classification, segmentation and object detection tasks. (Source : NVIDIA)

At an introductory price of $1200 per server, Dihuni will install the complete suite of Deep Learning and AI software shown above. You can also ‘pick and choose’ the software modules that suit your Deep Learning and AI needs, and we will custom-install them for you.

These Deep Learning and AI software packages come pre-installed on popular Dihuni OptiReady systems. Besides NVIDIA Tesla V100 32GB and 16GB GPU servers, you can get the complete software suite installed on servers based on the NVIDIA Tesla P100, P40 and P4 as well. Please contact us at digital@dihuni.com for more information.

About Dihuni

Dihuni is a leading provider of Digital Transformation, Internet of Things (IoT) and Deep Learning solutions. The internet has changed everything, from software applications to compute, storage and networking hardware. Dihuni helps businesses achieve their desired digital outcomes and ensures customers have the right hardware, software and services to make that happen. Implementing a successful solution is complex, whether you are a software developer who needs a fast development machine, you are building an efficient on-premise or cloud back-end infrastructure for your IT or Internet of Things (IoT) applications, or you are setting up the right systems for data, analytics, Deep/Machine Learning, Artificial Intelligence (AI) and Digital Applications.
