Nvidia announces Tesla P40 and Tesla P4 for artificial intelligence
Table of contents:
- Nvidia Tesla P40 and Tesla P4 offer tremendous advances in artificial intelligence
- New TensorRT and NVIDIA DeepStream SDK software for maximum performance
Nvidia has announced its new Tesla P40 and Tesla P4 graphics cards based on the Pascal architecture along with new software that promises a massive advance in efficiency and speed in the demanding field of artificial intelligence.
Nvidia Tesla P40 and Tesla P4 offer tremendous advances in artificial intelligence
Many modern artificial intelligence (AI) services, such as voice assistants, spam filters, and content recommendation systems, are growing tremendously in complexity and now require 10 times more computing power than just a year ago. Current CPUs cannot keep up with this demand, so the GPU is increasingly taking center stage.
The new Nvidia Tesla P40 and Tesla P4 cards have been designed specifically to deliver maximum performance in artificial intelligence scenarios such as voice, image or text recognition, where responses are needed as quickly as possible. Both cards are based on the Pascal architecture with 8-bit (INT8) instructions and offer up to 45x the inference performance of the most powerful CPUs and 4x that of previous-generation GPUs. The Tesla P4 starts at just 50 W, making it up to 40 times more energy-efficient than a CPU for this work; a server with a single one of these cards can replace up to 13 CPU-based servers for video inference tasks, an 8x saving in total cost.
The Tesla P40, for its part, targets maximum deep learning performance with an impressive 47 tera-operations per second (TOPS); a server with eight of these cards can replace up to 140 CPU-based servers, which translates into savings of over $650,000 in server acquisition costs.
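To give a concrete sense of what the INT8 precision mentioned above means, the sketch below shows symmetric 8-bit quantization of a tensor in plain NumPy. It is only an illustration of the precision trade-off, not Nvidia's implementation; the tensor values and the scaling scheme are assumptions made for the example.

```python
import numpy as np

# Hypothetical FP32 activations from one layer of a trained network
x = np.random.randn(4, 8).astype(np.float32)

# Symmetric linear quantization: map the largest magnitude to 127
scale = np.abs(x).max() / 127.0
x_int8 = np.clip(np.round(x / scale), -127, 127).astype(np.int8)

# Dequantize to measure the error the 8-bit representation introduces
x_dequant = x_int8.astype(np.float32) * scale
print("max abs error:", np.abs(x - x_dequant).max())
```

Roughly speaking, because each value now fits in a single byte instead of four, the GPU can process many more of these operations per cycle, which is where the large inference throughput gains over FP32 come from.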
New TensorRT and NVIDIA DeepStream SDK software for maximum performance
Alongside the Tesla P40 and Tesla P4, Nvidia has released two new pieces of software, NVIDIA TensorRT and the NVIDIA DeepStream SDK, to accelerate artificial intelligence inference.
TensorRT is a library created to optimize trained deep learning models so they can respond immediately even in the most complex network scenarios. It maximizes the efficiency and performance of deep learning inference with 16-bit and 32-bit operations, as well as reduced 8-bit precision.
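As an illustration of the workflow, the snippet below sketches how a trained model might be optimized with TensorRT's Python API as it exists today, which has evolved considerably since this announcement. The ONNX file name and output path are assumptions for the example, and enabling INT8 additionally requires calibration data.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# Parse a trained model exported to ONNX (hypothetical file name)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    parser.parse(f.read())

# Ask TensorRT to use reduced precision where it helps performance
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)    # 16-bit kernels
# config.set_flag(trt.BuilderFlag.INT8)  # 8-bit kernels; needs a calibrator

# Build and save the optimized inference engine
engine = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine)
```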
The NVIDIA DeepStream SDK, meanwhile, harnesses the power of an entire server to simultaneously decode and analyze up to 93 HD video streams in real time, a breakthrough compared to the 7 streams that a dual-CPU server can process. This opens the door to video understanding workloads for autonomous driving systems, interactive robots, content filtering and deep learning, among others.
| Specs | Tesla P4 | Tesla P40 |
| --- | --- | --- |
| Single-precision performance (TFLOPS) | 5.5 | 12 |
| INT8 performance (TOPS) | 22 | 47 |
| CUDA cores | 2,560 | 3,840 |
| Memory | 8 GB | 24 GB |
| Memory bandwidth | 192 GB/s | 346 GB/s |
| Power | 50 W (or higher) | 250 W |
Source: videocardz