Hardware

NVIDIA announces Tesla V100 processor with 5,120 CUDA cores

During the GTC 2017 event held in San Jose, California, NVIDIA announced a powerful processor that aims to drive a new era of computing, including artificial intelligence and deep learning neural networks for applications such as autonomous cars and instant-translation assistants.

The new NVIDIA Volta architecture is aimed at artificial intelligence

The Tesla V100 has five times the computational power of the Pascal processor introduced last year. It is based on the new Volta architecture and packs some 21 billion transistors onto a single chip roughly the size of an Apple Watch. NVIDIA's CEO also said the company spent billions of dollars developing this processor.

The Tesla V100 is built specifically for deep learning applications, making it about 12 times faster than last year's chip at floating-point operations per second. It also features second-generation NVLink with 300 GB/s of bandwidth and 16 GB of HBM2 memory running at 900 GB/s.

The card is powered by a new Volta GPU with 5,120 CUDA cores, making it the largest GPU ever made, with a die size of 812 mm².

The Tesla V100 also introduces a new type of compute core called the Tensor Core, designed specifically for deep learning arithmetic.
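Each Volta Tensor Core performs a fused matrix multiply-accumulate, D = A × B + C, on small 4×4 matrices, taking FP16 inputs and accumulating in FP32. The NumPy sketch below is a hypothetical emulation of that mixed-precision idea, not NVIDIA's actual API:

```python
import numpy as np

def tensor_core_mma(a_fp16, b_fp16, c_fp32):
    """Illustrative emulation of one Tensor Core op: D = A x B + C.

    A and B are 4x4 half-precision (FP16) matrices; the
    multiply-accumulate is carried out in single precision (FP32).
    """
    # Inputs are stored in half precision...
    a = np.asarray(a_fp16, dtype=np.float16)
    b = np.asarray(b_fp16, dtype=np.float16)
    # ...but the product and accumulation run in single precision.
    return a.astype(np.float32) @ b.astype(np.float32) + c_fp32

# Example: multiplying two all-ones 4x4 matrices gives 4.0 everywhere.
a = np.ones((4, 4))
b = np.ones((4, 4))
c = np.zeros((4, 4), dtype=np.float32)
d = tensor_core_mma(a, b, c)
```

In real CUDA code, this operation is exposed at the warp level rather than per-thread, which is what lets the hardware deliver its large throughput gains for deep learning workloads.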

Main specifications of the Tesla V100 chip:

  • New streaming multiprocessor optimized for deep learning
  • Second-generation NVLink
  • 16 GB of HBM2 memory
  • Volta Multi-Process Service
  • Improved unified memory
  • Cooperative Groups and new cooperative launch APIs
  • Maximum performance and optimized-efficiency modes
  • Volta-optimized software

The Tesla V100 chip is at the heart of NVIDIA's new DGX-1 and HGX-1 computing systems.
