Nvidia 【all the information】
Table of contents:
- Nvidia history
- Nvidia GeForce and Nvidia Pascal, dominating gaming
- Artificial intelligence and Volta architecture
- Nvidia's future goes through Turing and Ampere
- NVIDIA G-Sync, ending image sync issues
Nvidia Corporation, more commonly known as Nvidia, is an American technology company incorporated in Delaware and based in Santa Clara, California. Nvidia designs graphics processing units for the video game and professional markets, as well as system-on-a-chip (SoC) units for the mobile computing and automotive markets. Its core product line, GeForce, is in direct competition with AMD's Radeon products.
In addition to manufacturing GPUs, Nvidia provides parallel processing capabilities to researchers and scientists worldwide, enabling them to run high-performance applications efficiently. More recently, it has moved into the mobile computing market, where it produces Tegra mobile processors for video game consoles, tablets, and autonomous navigation and in-vehicle entertainment systems. This has made Nvidia a company focused on four markets since 2014: gaming, professional visualization, data centers, and artificial intelligence and automobiles.
Nvidia history
Nvidia was founded in 1993 by Jen-Hsun Huang, Chris Malachowsky, and Curtis Priem. The company's three co-founders hypothesized that the right direction for computing would go through graphics-accelerated processing, believing that this computing model could solve problems that general-purpose computing could not. They also noted that video games pose some of the most computationally demanding problems while also having incredibly high sales volumes.
From a small video game company to an artificial intelligence giant
The company was born with an initial capital of $40,000. It initially had no name, and the co-founders named all of its files NV, as in "next release." The need to incorporate the company led the co-founders to review all the words containing those two letters, which brought them to "invidia", the Latin word meaning "envy".
The launch of the RIVA TNT in 1998 consolidated Nvidia's reputation for developing graphics adapters. In late 1999, Nvidia released the GeForce 256 (NV10), which most notably introduced consumer-level transform and lighting (T&L) in 3D hardware. Running at 120 MHz and featuring four pixel pipelines, it implemented advanced video acceleration, motion compensation, and hardware sub-picture blending. The GeForce outperformed existing products by a wide margin.
Due to the success of its products, Nvidia won the contract to develop the graphics hardware for Microsoft's Xbox game console, earning Nvidia a $200 million advance. However, the project pulled many of its best engineers away from other work. In the short term this did not matter, and the GeForce2 GTS shipped in the summer of 2000. In December 2000, Nvidia reached an agreement to acquire the intellectual assets of its only rival, 3dfx, a pioneer in consumer 3D graphics technology that had led the field from the mid-1990s to 2000. The acquisition process was completed in April 2002.
In July 2002, Nvidia acquired Exluna for an undisclosed amount of money. Exluna was responsible for creating various software rendering tools. Later, in August 2003, Nvidia acquired MediaQ for approximately $70 million. On April 22, 2004, it also acquired iReady, a provider of high-performance TCP/IP and iSCSI offload solutions.
So great was Nvidia's success in the video game market that in December 2004 it was announced that it would help Sony design the RSX graphics processor for the PlayStation 3, the Japanese firm's next-generation console, which had the difficult task of repeating the success of its predecessor, the best-selling console in history.
In December 2006, Nvidia received subpoenas from the US Department of Justice regarding possible antitrust violations in the graphics card industry. By that time AMD had become its great rival, following AMD's purchase of ATI. Since then, AMD and Nvidia have been the only manufacturers of gaming graphics cards, not counting Intel's integrated chips.
Forbes named Nvidia the Best Company of the Year for 2007, citing the achievements it had made over the previous five years. On January 5, 2007, Nvidia announced that it had completed the acquisition of PortalPlayer, Inc., and in February 2008, Nvidia acquired Ageia, developer of the PhysX physics engine and of the physics processing unit that runs it. Nvidia announced that it planned to integrate PhysX technology into its future GeForce GPU products.
Nvidia faced great difficulty in July 2008, when it recorded a charge of approximately $200 million after it was reported that certain mobile chipsets and mobile GPUs produced by the company had abnormal failure rates due to manufacturing defects. In September 2008, Nvidia became the subject of a class action lawsuit by those affected, alleging that the defective GPUs had been incorporated into certain notebook models manufactured by Apple, Dell and HP. The saga ended in September 2010, when Nvidia reached an agreement under which owners of the affected laptops would be reimbursed for the cost of repairs or, in some cases, product replacement.
In November 2011, Nvidia released its ARM-based Tegra 3 SoC for mobile devices after initially presenting it at the Mobile World Congress. Nvidia claimed that the chip featured the first quad-core mobile CPU. In January 2013, Nvidia introduced the Tegra 4, as well as the Nvidia Shield, an Android-based portable gaming console powered by the new processor.
On May 6, 2016, Nvidia introduced the GeForce GTX 1080 and 1070 graphics cards, the first based on the new Pascal microarchitecture. Nvidia claimed that both models outperformed its Maxwell-based Titan X. These cards incorporate GDDR5X and GDDR5 memory respectively, and are manufactured on a 16 nm process. The Pascal architecture also supports a new hardware feature known as simultaneous multi-projection (SMP), designed to improve the quality of multi-monitor and virtual reality rendering. Pascal has also enabled laptops that meet Nvidia's Max-Q design standard.
In May 2017, Nvidia announced a partnership with Toyota Motor Corp under which the latter will use Nvidia's Drive PX series artificial intelligence platform for its autonomous vehicles. In July 2017, Nvidia and the Chinese search giant Baidu, Inc. announced a far-reaching AI partnership that includes cloud computing, autonomous driving, consumer devices, and Baidu's AI framework, PaddlePaddle.
Nvidia GeForce and Nvidia Pascal, dominating gaming
GeForce is the brand name for graphics cards based on graphics processing units (GPUs) created by Nvidia since 1999. To date, the GeForce series has spanned sixteen generations. The versions of these cards aimed at professional users come under the Quadro name and include some differentiating features at the driver level. GeForce's direct competition is AMD with its Radeon cards.
Pascal is the code name for the latest GPU microarchitecture developed by Nvidia to enter the video game market, as the successor to the previous Maxwell architecture. The Pascal architecture was first introduced in April 2016 with the launch of the Tesla P100 for servers on April 5, 2016. Currently, Pascal is primarily used in the GeForce 10 series, with the GeForce GTX 1080 and GTX 1070 being the first gaming cards released with this architecture, on May 27, 2016 and June 10, 2016 respectively. Pascal is manufactured using TSMC's 16 nm FinFET process, allowing it to offer far superior energy efficiency and performance compared to Maxwell, which was manufactured at 28 nm.
Internally, the Pascal architecture is organized into what are known as streaming multiprocessors (SMs), functional units made up of 64 CUDA cores, which in turn are divided into two processing blocks of 32 CUDA cores each, accompanied by an instruction buffer, a warp scheduler, 2 texture mapping units and 2 dispatch units. These SM units are the equivalent of AMD's Compute Units (CUs).
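As a rough illustration of how these figures add up, here is a minimal Python sketch; the `StreamingMultiprocessor` class is purely illustrative (not Nvidia code), and the 56-SM example uses the publicly quoted Tesla P100 configuration:

```python
from dataclasses import dataclass

@dataclass
class StreamingMultiprocessor:
    """Simplified view of a Pascal (GP100-style) SM as described above."""
    blocks: int = 2                  # two processing blocks per SM
    cuda_cores_per_block: int = 32   # 32 CUDA cores per block
    # each block also carries an instruction buffer, a warp scheduler,
    # 2 texture mapping units and 2 dispatch units

    @property
    def cuda_cores(self) -> int:
        return self.blocks * self.cuda_cores_per_block

# Example: a Tesla P100 ships with 56 active SMs of 64 CUDA cores each
sm = StreamingMultiprocessor()
print(sm.cuda_cores)        # 64 CUDA cores per SM
print(sm.cuda_cores * 56)   # 3584 CUDA cores in total
```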
Nvidia's Pascal architecture was designed to be the most efficient and advanced in the gaming world. Nvidia's engineering team put a lot of effort into creating a GPU architecture capable of very high clock speeds while keeping power consumption tight. To achieve this, a very careful and optimized design was chosen for all of its circuits, with the result that Pascal can reach frequencies 40% higher than Maxwell, a figure much higher than the 16 nm process alone would have allowed without all the design-level optimizations.
Memory is a key element in the performance of a graphics card. GDDR5 technology was announced in 2009, so it had already become dated for today's most powerful graphics cards. That is why Pascal supports GDDR5X memory, which was the fastest and most advanced memory interface standard in history at the time these graphics cards launched, reaching transfer speeds of up to 10 Gbps, or almost 100 picoseconds per bit of data. GDDR5X memory also allows the graphics card to consume less power compared to GDDR5, as its operating voltage is 1.35 V, compared to the 1.5 V or more that the faster GDDR5 chips need. This reduction in voltage translates into a 43% higher operating frequency at the same power consumption.
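Those bandwidth figures follow from simple arithmetic: per-pin data rate times bus width. A minimal sketch in Python, using a GTX 1080-class configuration (10 Gbps GDDR5X on a 256-bit bus) as the example:

```python
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak theoretical bandwidth = per-pin data rate (Gbps) * bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Example: 10 Gbps GDDR5X on a 256-bit bus, as on the GTX 1080
print(memory_bandwidth_gb_s(10, 256))   # 320.0 GB/s

# The bit time at 10 Gbps is indeed close to 100 picoseconds
print(1 / 10e9 * 1e12)                  # 100.0 ps per bit
```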
Another important Pascal innovation comes from lossless memory compression techniques, which reduce the GPU's demand for bandwidth. Pascal includes the fourth generation of delta color compression. With delta color compression, the GPU analyzes scenes to find pixels whose information can be compressed without sacrificing the quality of the scene. While the Maxwell architecture was unable to compress data for some elements, such as vegetation and parts of the car in the game Project Cars, Pascal can compress most of the information for these elements, making it much more efficient than Maxwell. As a consequence, Pascal significantly reduces the number of bytes that have to be fetched from memory. This reduction translates into an additional 20% of effective bandwidth, resulting in roughly 1.7 times the bandwidth when GDDR5X memory is combined with the compression, compared to GDDR5 on the Maxwell architecture.
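The idea behind delta color compression can be sketched in a few lines of Python. This is a toy illustration of the general principle, not Nvidia's actual algorithm (real hardware works on 2D color tiles and several delta modes):

```python
def delta_compress_tile(pixels, delta_bits=4):
    """Store a reference pixel plus small deltas when the tile is uniform enough.

    Returns (compressed?, payload). Smooth regions (sky, car bodywork) compress
    well because neighbouring pixels differ little; noisy regions (vegetation)
    do not, and fall back to raw storage.
    """
    base = pixels[0]
    deltas = [p - base for p in pixels]
    limit = 2 ** (delta_bits - 1)
    if all(-limit <= d < limit for d in deltas):
        return True, (base, deltas)   # fits in the reduced-size encoding
    return False, pixels              # fall back to uncompressed data

print(delta_compress_tile([128, 129, 130, 128]))  # smooth gradient -> compressed
print(delta_compress_tile([12, 200, 47, 255]))    # high-contrast noise -> raw
```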
Pascal also offers important improvements in asynchronous compute, something very important given how complex today's workloads are. Thanks to these improvements, the Pascal architecture is more efficient at distributing the load among all of its SM units, which means that there are hardly any idle CUDA cores. This allows the GPU to be used far more fully, making better use of all the resources it has.
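Conceptually, the goal is to keep every SM busy by handing out work dynamically instead of statically. A very simplified Python sketch of that kind of dynamic load balancing (a greedy scheduler written for illustration, not Nvidia's actual hardware scheduler):

```python
import heapq

def dynamic_balance(task_costs, num_sms):
    """Assign each task to whichever SM becomes free first (greedy balancing)."""
    # heap of (time at which this SM becomes idle, sm_id)
    sms = [(0.0, sm_id) for sm_id in range(num_sms)]
    heapq.heapify(sms)
    for cost in task_costs:
        idle_at, sm_id = heapq.heappop(sms)
        heapq.heappush(sms, (idle_at + cost, sm_id))
    return max(t for t, _ in sms)   # makespan: when the last SM finishes

# Mixed graphics/compute workload: dynamic assignment keeps SM finish times close
print(dynamic_balance([5, 1, 1, 7, 2, 2, 3, 3], num_sms=4))
```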
The following table summarizes the most important features of all Pascal based GeForce cards.
NVIDIA GEFORCE PASCAL GRAPHICS CARDS

| Model | CUDA Cores | Frequencies (MHz) | Memory | Memory interface | Memory bandwidth (GB/s) | TDP (W) |
|---|---|---|---|---|---|---|
| NVIDIA GeForce GT 1030 | 384 | 1468 | 2 GB GDDR5 | 64-bit | 48 | 30 |
| NVIDIA GeForce GTX 1050 | 640 | 1455 | 2 GB GDDR5 | 128-bit | 112 | 75 |
| NVIDIA GeForce GTX 1050 Ti | 768 | 1392 | 4 GB GDDR5 | 128-bit | 112 | 75 |
| NVIDIA GeForce GTX 1060 3 GB | 1152 | 1506/1708 | 3 GB GDDR5 | 192-bit | 192 | 120 |
| NVIDIA GeForce GTX 1060 6 GB | 1280 | 1506/1708 | 6 GB GDDR5 | 192-bit | 192 | 120 |
| NVIDIA GeForce GTX 1070 | 1920 | 1506/1683 | 8 GB GDDR5 | 256-bit | 256 | 150 |
| NVIDIA GeForce GTX 1070 Ti | 2432 | 1607/1683 | 8 GB GDDR5 | 256-bit | 256 | 180 |
| NVIDIA GeForce GTX 1080 | 2560 | 1607/1733 | 8 GB GDDR5X | 256-bit | 320 | 180 |
| NVIDIA GeForce GTX 1080 Ti | 3584 | 1480/1582 | 11 GB GDDR5X | 352-bit | 484 | 250 |
| NVIDIA GeForce GTX Titan Xp | 3840 | 1582 | 12 GB GDDR5X | 384-bit | 547 | 250 |
Artificial intelligence and Volta architecture
Nvidia's GPUs are widely used in deep learning, artificial intelligence, and the accelerated analysis of large amounts of data. The company built up GPU-based deep learning in order to use artificial intelligence to tackle problems such as cancer detection, weather forecasting, and autonomous driving vehicles such as the famous Teslas.
Nvidia's goal is to help networks learn to "think". Nvidia's GPUs work exceptionally well for deep learning tasks because they are designed for parallel computing, and they handle the vector and matrix operations that dominate deep learning very well. The company's GPUs are used by researchers, laboratories, technology companies, and enterprises. In 2009, Nvidia participated in what has been called the big bang of deep learning, as deep learning neural networks were combined with the company's graphics processing units. That same year, Google Brain used Nvidia's GPUs to create deep neural networks capable of machine learning, and Andrew Ng determined that they could increase the speed of deep learning systems by 100 times.
In April 2016, Nvidia introduced the DGX-1 supercomputer, based on an 8-GPU cluster, to enhance users' ability to apply deep learning by combining GPUs with specifically designed software. Nvidia also developed GPU-based virtual machines using the Nvidia Tesla K80 and P100, available through Google Cloud, which Google installed in November 2016. Microsoft added servers based on Nvidia GPU technology in a preview of its N series, based on the Tesla K80 card. Nvidia also partnered with IBM to create a software kit that increases the AI capabilities of its GPUs. In 2017, Nvidia's GPUs were also brought online at the RIKEN Center for the Advanced Intelligence Project for Fujitsu.
In May 2018, researchers in Nvidia's artificial intelligence department demonstrated that a robot can learn to do a job simply by observing a person doing the same job. To achieve this, they created a system that, after a brief review and test, can be used to control next-generation universal robots.
Volta is the code name for the most advanced GPU microarchitecture developed by Nvidia. It is the successor to Pascal and was announced as part of a future roadmap in March 2013. The architecture is named after Alessandro Volta, the physicist, chemist and inventor of the electric battery. The Volta architecture has not reached the mainstream gaming sector, although it has reached the consumer market with the Nvidia Titan V graphics card, which can also be used in gaming systems.
The Nvidia Titan V is a graphics card based on the GV100 core and three HBM2 memory stacks, all in one package. The card has a total of 12 GB of HBM2 memory working through a 3,072-bit memory interface. Its GPU contains over 21 billion transistors, 5,120 CUDA cores and 640 Tensor Cores, delivering 110 TeraFLOPS of deep learning performance. Its operating frequencies are 1200 MHz base and 1455 MHz in boost mode, while the memory works at 850 MHz, offering a bandwidth of 652.8 GB/s. A CEO Edition version has recently been announced that increases the memory to 32 GB.
The first graphics card manufactured by Nvidia with the Volta architecture was the Tesla V100, which is part of the Nvidia DGX-1 system. The Tesla V100 makes use of the GV100 core, which was released on June 21, 2017. The Volta GV100 GPU is built on a 12 nm FinFET manufacturing process, with 32 GB of HBM2 memory capable of delivering up to 900 GB/s of bandwidth.
Volta also brings to life the latest Nvidia Tegra SoC, called Xavier, which was announced on September 28, 2016. Xavier contains 7 billion transistors and 8 custom ARMv8 cores, along with a Volta GPU with 512 CUDA cores and an open-source Tensor Processing Unit (TPU) called DLA (Deep Learning Accelerator). Xavier can encode and decode video at 8K Ultra HD resolution (7680 × 4320 pixels) in real time, all with a TDP of 20-30 watts and a die size estimated at around 300 mm², thanks to the 12 nm FinFET manufacturing process.
The Volta architecture is characterized by being the first to include Tensor Cores, cores specially designed to offer far superior performance in deep learning tasks compared to regular CUDA cores. A Tensor Core is a unit that multiplies two FP16 4×4 matrices and then adds a third FP16 or FP32 matrix to the result, using fused multiply-add operations, obtaining an FP32 result that can optionally be downgraded to FP16. Tensor Cores are intended to accelerate neural network training.
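In other words, a Tensor Core computes D = A × B + C on 4×4 tiles, with FP16 inputs and FP32 accumulation. A minimal NumPy sketch of that arithmetic (just the math, not how the hardware pipelines it):

```python
import numpy as np

# FP16 input matrices, as fed to a Tensor Core
a = np.random.rand(4, 4).astype(np.float16)
b = np.random.rand(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)   # FP32 (or FP16) accumulator

# The fused multiply-add is carried out with FP32 precision...
d = a.astype(np.float32) @ b.astype(np.float32) + c

# ...and the FP32 result can optionally be written back as FP16
d_fp16 = d.astype(np.float16)
print(d.dtype, d_fp16.dtype)   # float32 float16
```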
Volta also stands out for including the advanced proprietary NVLink interface, a wire-based communication protocol for short-range semiconductor communications developed by Nvidia, which can be used for data and control transfers in systems based on CPUs and GPUs and in those based solely on GPUs. NVLink specifies a point-to-point connection with data rates of 20 and 25 Gb/s per data lane and per direction in its first and second versions, respectively. Total data rates in real-world systems are 160 and 300 GB/s for the sum of the input and output data streams. NVLink products introduced to date focus on the high-performance application space. NVLink was first announced in March 2014 and uses a proprietary high-speed signaling interconnect developed by Nvidia.
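Those headline figures follow directly from the per-lane rates. A small worked example in Python, assuming the commonly quoted configuration of 8 lanes per link and per direction, with 4 links in the first generation and 6 in the second:

```python
def nvlink_total_gb_s(gbps_per_lane: float, lanes_per_link: int, links: int) -> float:
    """Aggregate bandwidth (both directions summed), in GB/s."""
    per_link_per_direction = gbps_per_lane * lanes_per_link / 8   # Gb/s -> GB/s
    return per_link_per_direction * 2 * links                     # two directions

print(nvlink_total_gb_s(20, 8, 4))   # 160.0 GB/s  (first generation, e.g. Tesla P100)
print(nvlink_total_gb_s(25, 8, 6))   # 300.0 GB/s  (second generation, e.g. Tesla V100)
```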
The following table summarizes the most important features of Volta-based cards:
NVIDIA VOLTA GRAPHICS CARDS

| Model | CUDA Cores | Tensor Cores | Frequencies (MHz) | Memory | Memory interface | Memory bandwidth (GB/s) | TDP (W) |
|---|---|---|---|---|---|---|---|
| Tesla V100 | 5120 | 640 | 1465 | 32 GB HBM2 | 4,096-bit | 900 | 250 |
| GeForce Titan V | 5120 | 640 | 1200/1455 | 12 GB HBM2 | 3,072-bit | 652 | 250 |
| GeForce Titan V CEO Edition | 5120 | 640 | 1200/1455 | 32 GB HBM2 | 4,096-bit | 900 | 250 |
Nvidia's future goes through Turing and Ampere
According to all the rumors that have appeared to date, the two future Nvidia architectures will be Turing and Ampere; it is possible that by the time you read this post, one of them has already been officially announced. For now, nothing is known for certain about these two architectures, although it is said that Turing would be a simplified version of Volta for the gaming market; in fact, it is expected to arrive on the same 12 nm manufacturing process.
Ampere sounds like Turing's successor architecture, although it could also be Volta's successor in the artificial intelligence sector. Absolutely nothing is known about it for certain, although it seems logical to expect it to arrive manufactured at 7 nm. The rumors suggest that Nvidia will announce its new GeForce cards at Gamescom next August; only then will we clear up our doubts about what Turing and Ampere will be, if they really exist.
NVIDIA G-Sync, ending image sync issues
G-Sync is a proprietary adaptive sync technology developed by Nvidia whose primary goal is to eliminate screen tearing and the need for software alternatives such as V-Sync. G-Sync eliminates screen tearing by forcing the display to adapt to the framerate of the output device (the graphics card), rather than the graphics card adapting to the display, which is what causes tearing on the screen.
For a monitor to be G-Sync compatible, it must contain a hardware module sold by Nvidia. AMD (Advanced Micro Devices) has released a similar technology for displays, called FreeSync, which has the same function as G-Sync but does not require any specific hardware.
Nvidia built in a special function to avoid the possibility that a new frame is ready while a duplicate is still being drawn on screen, something that can cause delay and/or stutter: the module anticipates the update and waits for the next frame to be completed. Pixel overdrive also becomes tricky in a non-fixed refresh scenario, so the solution predicts when the next update will take place and adjusts the overdrive value for each panel in order to avoid ghosting.
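The difference from a fixed-refresh display can be illustrated with a toy simulation: with adaptive sync the panel refreshes as soon as a frame is finished (within its supported range), while a fixed 60 Hz panel makes frames wait for the next tick, adding judder. A hedged sketch with made-up frame times and a hypothetical 30-144 Hz panel:

```python
import math

def adaptive_refresh(frame_times_ms, min_interval=6.9, max_interval=33.3):
    """Refresh as soon as each frame is ready, clamped to the panel's supported range."""
    return [min(max(t, min_interval), max_interval) for t in frame_times_ms]

def fixed_refresh(frame_times_ms, interval=16.7):
    """A 60 Hz panel shows each frame at the next fixed tick, adding wait time (stutter)."""
    return [math.ceil(t / interval) * interval for t in frame_times_ms]

frames = [12.0, 19.5, 14.2, 25.0, 16.0]   # hypothetical render times per frame
print(adaptive_refresh(frames))            # panel follows the GPU
print(fixed_refresh(frames))               # frames wait for the next 16.7 ms tick
```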
The module is based on an Altera Arria V GX family FPGA with 156K logic elements, 396 DSP blocks and 67 LVDS channels. It is produced on the TSMC 28LP process and is combined with three DRAM chips for a total of 768 MB of DDR3L, to achieve a certain bandwidth. The FPGA also features an LVDS interface to drive the monitor panel. This module is intended to replace the usual scaler and to be easily integrated by monitor manufacturers, who only need to take care of the power supply circuit board and the input connections.
G-Sync has faced some criticism due to its proprietary nature and the fact that it is still promoted when free alternatives exist, such as the VESA Adaptive-Sync standard, an optional feature of DisplayPort 1.2a. While AMD's FreeSync is based on DisplayPort 1.2a, G-Sync requires an Nvidia-made module in place of the usual display scaler in order to work with Nvidia GeForce graphics cards, and it is compatible with the Kepler, Maxwell, Pascal, and Volta microarchitectures.
The next step has been taken with G-Sync HDR technology, which, as its name suggests, adds HDR capabilities to greatly improve the image quality of the monitor. To make this possible, a significant leap in hardware was needed. This new G-Sync HDR version uses an Intel Altera Arria 10 GX 480 FPGA, a highly advanced and highly programmable processor that can be coded for a wide range of applications, accompanied by 3 GB of 2,400 MHz DDR4 memory manufactured by Micron. All of this makes these monitors more expensive.
Here ends our post on everything you need to know about Nvidia. Remember that you can share it on social media so that it reaches more users. You can also leave a comment if you have any suggestions or something to add.