Graphics card - everything you need to know

In the era of gaming computers, the graphics card has become as important as the CPU, or almost more so. In fact, many users skimp on a powerful CPU in order to invest that money in this key component, which is responsible for processing everything that has to do with textures and graphics. But how much do you know about this hardware? Well, here we explain everything, or at least everything we consider most important.

The graphics card and the gaming era

Without a doubt, the most common term used to refer to a GPU is "graphics card", although the two are not exactly the same, as we will explain. A GPU, or Graphics Processing Unit, is basically a processor built to handle graphics. The term obviously sounds very similar to CPU, so it is important to distinguish between the two elements.

When we talk about a graphics card, we are really talking about the physical component. It is built on a PCB independent of the motherboard and fitted with a connector, normally PCI-Express, through which it connects to the motherboard itself. On this PCB sit the GPU, the graphics memory or VRAM, and components such as the VRM, the connection ports, and the heatsink with its fans.

Gaming would not exist without graphics cards, especially on computers. In the beginning, as everyone knows, computers did not have a graphical interface; we only had a black screen with a prompt for entering commands. Those basic functions are a far cry from today's gaming era, in which we have machines with polished graphical interfaces at enormous resolutions, letting us manage environments and characters almost as if they were real life.

Why separate GPU and CPU

Before talking about dedicated graphics cards, we must first know what they bring us and why they are so important today. We could not conceive of a modern gaming computer without a physically separate CPU and GPU.

What does the CPU do

Here things are fairly simple, because we can all form an idea of what the microprocessor does in a computer. It is the central processing unit, through which pass all the instructions generated by programs and a large part of those sent by peripherals and by the user. Programs are made up of a succession of instructions that are executed to generate a response to some input stimulus, be it a simple click, a command, or the operating system itself.

Now comes a detail worth remembering when we look at what the GPU is. The CPU is made up of cores, and large ones at that. Each of them is capable of executing one instruction after another; the more cores, the more instructions can be executed at the same time. There are many types of programs on a PC, and many types of instructions, some very complex and divided into several stages. But the truth is that a program does not generate a large number of these instructions in parallel. How do we make sure the CPU "understands" any program we install? What we need are a few cores that are very complex and very fast at executing instructions, so that the program feels fluid and responds to what we ask of it.

These basic instructions boil down to mathematical operations on integers, logical operations, and some floating point operations. The latter are the most complicated, since they involve very large real numbers that need to be represented in a more compact form using scientific notation. Supporting the CPU is the RAM: fast storage that holds the running programs and their instructions, sending them over a 64-bit bus to the CPU.

And what does the GPU do

The GPU is closely related precisely to these floating point operations we have just mentioned. A graphics processor spends practically all of its time performing this type of operation, since it is central to graphics instructions. For this reason, it is often called a mathematical coprocessor; in fact, there is one inside the CPU, but it is much simpler than the GPU.

What is a game made of? Basically, pixels being moved around by a graphics engine, which is nothing more than a program focused on emulating a digital environment or world in which we move as if it were our own. In these programs most of the instructions have to do with pixels and their movement to form textures. In turn, these textures have color, 3D volume, and physical light-reflection properties. All of this boils down to floating point operations on matrices and geometry that must be performed simultaneously.

Therefore, a GPU does not have 4 or 6 cores, but thousands of them, to perform all these specific operations in parallel over and over again. Sure, these cores are not as "smart" as CPU cores, but they can do far more operations of this type at once. The GPU also has its own memory, graphics RAM, which is much faster than normal RAM and sits on a much wider bus, typically 128 to 256 bits, to move far more data at a time.
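
To picture the difference, here is a minimal sketch in Python (an illustrative example, not from the original article): the serial loop mimics a few complex cores handling one vertex at a time, while the single vectorized call stands in for thousands of simple cores applying the same floating point operation across a whole block of data at once.

```python
import numpy as np

# One million vertices, three float32 coordinates each, plus a 3x3 transform.
vertices = np.random.rand(1_000_000, 3).astype(np.float32)
transform = np.random.rand(3, 3).astype(np.float32)

def transform_serial(verts, m):
    # CPU style: a few complex cores, one vertex after another.
    out = np.empty_like(verts)
    for i in range(len(verts)):
        out[i] = m @ verts[i]
    return out

def transform_parallel(verts, m):
    # GPU style: the same multiply expressed as one wide parallel operation.
    return verts @ m.T

# Both paths produce the same result; only the work style differs.
assert np.allclose(transform_serial(vertices[:100], transform),
                   transform_parallel(vertices[:100], transform), atol=1e-4)
```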

In the video linked below, the MythBusters emulate the behavior of a CPU and a GPU, in terms of their number of cores, when it comes to painting a picture.

youtu.be/-P28LKWTzrI

What the CPU and GPU do together

At this point you may already have guessed that in gaming computers the CPU also influences the final performance of the game and its FPS. It obviously does: many instructions are the CPU's responsibility.

The CPU is responsible for sending data in the form of vertices to the GPU, so that it "understands" what physical transformations (movements) it must apply to the geometry. This is the Vertex Shader stage, the physics of movement. After this, the GPU determines which of these vertices will be visible, performing the so-called clipping and rasterization of pixels. Once the shape and its movement are known, it is time to apply the textures, in Full HD, UHD or whatever resolution, together with their corresponding effects: this is the Pixel Shader stage.

For this same reason, the more power the CPU has, the more vertex instructions it can send to the GPU, and the better it can keep it fed. The key difference between the two elements thus lies in the level of specialization and the degree of parallelism of the GPU's processing.
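
As a rough illustration (a toy sketch, not how a real driver or Direct3D/OpenGL pipeline is written), the following Python fragment walks through the three stages just described: moving the geometry, deciding which pixels are covered, and coloring them.

```python
import numpy as np

WIDTH, HEIGHT = 8, 8

def vertex_stage(vertices, offset):
    # Vertex Shader step: apply a movement (here a simple translation)
    # to every vertex the CPU sent over.
    return vertices + offset

def rasterize(vertices):
    # Clipping/rasterization step: keep only the pixels that fall on
    # screen (here treating each vertex as a single point).
    covered = []
    for x, y in vertices:
        xi, yi = int(x), int(y)
        if 0 <= xi < WIDTH and 0 <= yi < HEIGHT:
            covered.append((xi, yi))
    return covered

def pixel_stage(covered):
    # Pixel Shader step: give every covered pixel a color.
    framebuffer = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
    for xi, yi in covered:
        framebuffer[yi, xi] = (255, 255, 255)  # a plain white "texture"
    return framebuffer

points = np.array([[1.0, 1.0], [4.0, 2.0], [2.0, 5.0]])
frame = pixel_stage(rasterize(vertex_stage(points, np.array([1.0, 0.0]))))
print(frame[:, :, 0])  # a crude look at which pixels got shaded
```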

What is an APU?

We have now seen what a GPU is, its function in a PC, and its relationship with the processor. But it is not the only element capable of handling 3D graphics, which brings us to the APU, or Accelerated Processing Unit.

This term was coined by AMD to name its processors with a GPU integrated in the same package. Indeed, this means that within the processor itself we have a chip, or rather a block made up of several cores, capable of working with 3D graphics in the same way a graphics card does. In fact, many of today's processors carry this kind of unit, called an IGP (Integrated Graphics Processor), inside them.

But of course, a priori we cannot compare the performance of a graphics card with thousands of cores against an IGP integrated into the CPU itself. Its processing capacity is much lower in terms of raw power, and on top of that it lacks dedicated memory as fast as the GDDR of graphics cards, making do instead with a portion of system RAM for its graphics work.

Independent graphics cards are called dedicated graphics cards, while IGPs are called integrated graphics. Almost all Intel Core iX processors carry an integrated GPU called Intel HD / UHD Graphics, except the models ending in "F". AMD does the same with some of its CPUs, specifically the G-series Ryzen and the Athlon, with graphics called Radeon RX Vega 11 and Radeon Vega 8.

A little history

The old text-only computers are far behind us now, but if one thing has been present in every era, it is the desire to create ever more detailed virtual worlds to immerse ourselves in.

The first general consumer machines, with Intel 4004 and 8008 era processors, already had graphics cards of a sort. They were limited to interpreting code and displaying it on screen as plain text in 40 or 80 columns, and of course in monochrome. In fact, the first graphics card was the MDA (Monochrome Display Adapter). It had its own RAM of no less than 4 KB, to render its "perfect graphics": plain text at 80 columns by 25 lines.

Then came the CGA (Color Graphics Adapter): in 1981, IBM began to market the first color graphics card. It could render 4 colors simultaneously from an internal 16-color palette at a resolution of 320 × 200. In text mode it could raise the resolution to 80 × 25 columns, the equivalent of 640 × 200.

Progress continued with the HGC, or Hercules Graphics Card, a name that promises! A monochrome card that raised the resolution to 720 × 348 and was capable of working alongside a CGA to provide up to two different video outputs.

The jump to cards with rich graphics

Or rather the EGA, the Enhanced Graphics Adapter created in 1984. This was the first true graphics card, capable of working with 16 colors and resolutions up to 720 × 540 on the models from ATI Technologies. Does that name sound familiar?

In 1987 a new revolution arrived: the ISA video connector was abandoned in favor of the VGA (Video Graphics Array) port, also called D-sub 15, an analog connector that remained in use until recently for CRTs and even TFT panels. The new graphics cards raised the color palette to 256 colors and the VRAM to 256 KB. Around this time, computer games began to grow considerably in complexity.

It was in 1989 when graphics cards stopped using color palettes and adopted color depth. With the VESA standard as the connection to the motherboard, the bus was widened to 32 bits, so cards could already work with several million colors and resolutions up to 1024 × 768 thanks to monitors with the SuperVGA port. Cards as iconic as the ATI Mach32 or Mach64 with a 64-bit interface were among the best of the time.

The PCI slot arrives and with it the revolution

The VESA standard was a hell of a big bus, so in 1993 it gave way to the PCI standard, the one we still have today in its successive generations. PCI allowed smaller cards, and many manufacturers joined the party: Creative, Matrox, 3dfx with its Voodoo and Voodoo 2, and a young Nvidia with its first models, the RIVA TNT of 1998 and the TNT2. At that time the first dedicated 3D acceleration libraries appeared, such as Microsoft's DirectX and Silicon Graphics' OpenGL.

The PCI bus soon fell short, with cards addressing 16-bit color and 3D graphics at 800 × 600, so the AGP (Accelerated Graphics Port) bus was created. It kept a 32-bit PCI-like interface but added 8 extra channels to communicate with RAM faster. The bus ran at 66 MHz with 266 MB/s of bandwidth, went through versions up to AGP x8 reaching 2.1 GB/s, and in 2004 was replaced by the PCIe bus.

By now the two great 3D graphics card companies, Nvidia and ATI, were firmly established. One of the first cards to mark the new era was the Nvidia GeForce 256, which implemented T&L (transform and lighting) technology and ranked above its rivals as the first Direct3D-compatible 3D polygon accelerator of its kind. Shortly afterwards ATI released its first Radeon, shaping the names both manufacturers use for their gaming graphics cards to this day, even after AMD's purchase of ATI.

The PCI Express bus and current graphics cards

And so we reach the current era of graphics cards, when in 2004 the AGP interface was retired and replaced by PCI-Express. This new bus allowed transfers of up to 4 GB/s simultaneously in each direction (250 MB/s per lane × 16 lanes). Initially it connected to the motherboard's north bridge, and cards could use part of the RAM for video under names like TurboCache or HyperMemory. Later, with the north bridge integrated into the CPU itself, those 16 PCIe lanes came to communicate directly with the processor.

The era of the ATI Radeon HD and Nvidia GeForce began, and they became the leading exponents of gaming graphics cards on the market. Nvidia soon took the lead with the GeForce 6800, which supported DirectX 9.0c, against an ATI Radeon X850 Pro that lagged slightly behind. After this, both brands developed the unified shader architecture with the Radeon HD 2000 and the GeForce 8 series. In fact, the mighty Nvidia GeForce 8800 GTX was one of the most powerful cards of its generation, and even of those that came after it, marking Nvidia's definitive leap to supremacy. It was in 2006 that AMD bought ATI, and its cards were eventually renamed AMD Radeon.

That brings us to today's cards, compatible with DirectX 12 and OpenGL 4.5 / 4.6, the first of them being the Nvidia GTX 680 and the AMD Radeon HD 7000. Successive generations have followed from both manufacturers: from Nvidia the Maxwell (GeForce 900), Pascal (GeForce 10) and Turing (GeForce 20) architectures, and from AMD the GCN-based Polaris (Radeon RX 400/500) and Vega (Radeon RX Vega) designs, and now RDNA (Radeon RX 5000).

Parts and hardware of a graphics card

Let's look at the main parts of a graphics card, so as to identify the elements and technologies we need to know when buying one. Of course technology advances quickly, so we will gradually update what you see here.

Chipset or GPU

We already know well what the function of a card's graphics processor is, but it is important to know what lies inside. It is the heart of the card, and inside it we find a huge number of cores responsible for different functions, especially in the architectures Nvidia currently uses, along with the cache memory associated with the chip, normally organized into L1 and L2 levels.

Inside an Nvidia GPU we find the CUDA cores, which are, so to speak, in charge of performing the general floating point calculations. On AMD cards these cores are called Stream Processors. The same core count on cards from different manufacturers does not imply the same capability, since that depends on the architecture.

In addition, Nvidia also features Tensor cores and RT cores. These cores are dedicated to more complex workloads: the RT cores accelerate real-time ray tracing, one of the headline capabilities of the manufacturer's new generation of cards, while the Tensor cores handle the matrix math behind features such as DLSS.

GRAM memory

GRAM performs practically the same function as our computer's RAM: it stores the textures and elements that the GPU is going to process. Capacities are very large, with more than 6 GB now common across almost all high-end graphics cards.

It is a DDR-type memory, just like RAM, so its effective frequency is always twice the real clock frequency, something to keep in mind when overclocking or reading spec sheets. Most current cards use GDDR6 technology (yes, you read that right, GDDR6, while normal RAM is still on DDR4). These memories are much faster, reaching effective rates of up to 14,000 MHz (14 Gbps) from a clock of 7,000 MHz. In addition, their bus width is far greater, reaching 384 bits on Nvidia's top range.

But there is a second memory type, which AMD has used for its Radeon VII: HBM2. This memory does not reach per-pin speeds as high as GDDR6, but in exchange it offers a brutal bus width of up to 4096 bits.
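
A quick sanity check of that "effective frequency" idea, as a hypothetical Python snippet: since DDR-type memory moves data on both clock edges, the advertised rate is simply twice the real clock.

```python
def effective_rate_gbps(clock_mhz):
    # DDR-type memory transfers on both clock edges: two per cycle.
    return 2 * clock_mhz / 1000

print(effective_rate_gbps(7_000))  # 14.0 Gbps: GDDR6 clocked at 7,000 MHz
print(effective_rate_gbps(1_500))  # 3.0 Gbps: DDR4-3000 system RAM, for scale
```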

VRM and TDP

The VRM is the element in charge of supplying power to all the components of the graphics card, especially the GPU and its GRAM. It consists of the same elements as a motherboard VRM: MOSFETs acting as the switches of the DC-DC converters, chokes, and capacitors. Likewise, its phases are split between V_core and V_SoC, for the GPU and the memory respectively.

As for TDP, it means exactly the same as on a CPU: not the power consumed by the processor, but the power it dissipates as heat when working at maximum load.

To power the card we need power connectors. Currently 6+2-pin configurations are used, since the PCIe slot by itself can only supply a maximum of 75 W, while a GPU can draw more than 200 W.
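
As a rough illustration of the power budget (a sketch using the standard connector limits, not a formula from any vendor):

```python
# Standard limits: 75 W from the PCIe slot, 75 W per 6-pin plug,
# 150 W per 8-pin (6+2) plug.
SLOT_W, PIN6_W, PIN8_W = 75, 75, 150

def max_board_power(pin6=0, pin8=0):
    return SLOT_W + pin6 * PIN6_W + pin8 * PIN8_W

print(max_board_power(pin8=1))          # 225 W: covers a ~200 W GPU
print(max_board_power(pin6=1, pin8=1))  # 300 W: a typical high-end layout
```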

Connection interface

The connection interface is the way the graphics card communicates with the motherboard. Currently virtually all dedicated graphics cards use the PCI-Express 3.0 bus, except the new AMD Radeon RX 5000 cards, which have been upgraded to PCIe 4.0.

For practical purposes we will not notice any difference, since the amount of data currently exchanged over this 16-lane bus is far below its capacity. As a point of reference, PCIe 3.0 x16 can carry 15.8 GB/s in each direction simultaneously, while PCIe 4.0 x16 doubles that to 31.5 GB/s. Soon all GPUs will be PCIe 4.0, that much is obvious. Nor do we have to worry about pairing a PCIe 4.0 board with a 3.0 card, since the standard is always backward compatible.
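
For the curious, those figures fall out of the per-lane signaling rates; a small illustrative calculation, assuming the standard 128b/130b encoding of PCIe 3.0 and 4.0:

```python
def pcie_gb_s(gigatransfers, lanes):
    # 128b/130b encoding: 128 payload bits ride in every 130 bits sent.
    return gigatransfers * (128 / 130) / 8 * lanes

print(round(pcie_gb_s(8, 16), 1))   # 15.8 GB/s -> PCIe 3.0 x16 at 8 GT/s
print(round(pcie_gb_s(16, 16), 1))  # 31.5 GB/s -> PCIe 4.0 x16 at 16 GT/s
```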

Video ports

Last but not least, we have the video connectors, the ones we need to connect our monitor or monitors and get an image. In the current market we have four types of video connection:

  • HDMI: the High-Definition Multimedia Interface is a communications standard for uncompressed image and sound between multimedia devices. The HDMI version influences the image capability we can get out of the graphics card: the latest version, HDMI 2.1, supports a maximum resolution of 10K and can play 4K at 120 Hz and 8K at 60 Hz, while version 2.0 offers 4K at 60 Hz in 8 bits.
  • DisplayPort: also a serial interface with uncompressed image and sound. As before, the version of this port matters a great deal; we want it to be at least 1.4, since that version supports 8K content at 60 Hz and 4K at 120 Hz with no less than 30-bit color and HDR. Without a doubt the best option available today.
  • USB-C: USB Type-C is reaching more and more devices thanks to its high speed and its integration with interfaces such as DisplayPort and Thunderbolt 3 at 40 Gbps. This USB offers DisplayPort Alternate Mode (DisplayPort 1.3), with support for 4K images at 60 Hz. Thunderbolt 3 can likewise carry UHD content under the same conditions.
  • DVI: a connector now unlikely to be found on current monitors, being the evolution of VGA towards a high-definition digital signal; the most widespread variant is DVI-DL (dual link). If we can avoid it, so much the better.
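
To see why the port version matters, here is a rough, illustrative calculation of the raw bit rate of an uncompressed signal (ignoring blanking intervals, which add a further overhead in practice):

```python
def raw_video_gbps(width, height, hz, bits_per_pixel):
    # Pixels per frame x frames per second x bits per pixel.
    return width * height * hz * bits_per_pixel / 1e9

print(round(raw_video_gbps(3840, 2160, 60, 24), 1))   # ~11.9 Gbps: 4K@60, 8-bit, fits HDMI 2.0
print(round(raw_video_gbps(3840, 2160, 120, 30), 1))  # ~29.9 Gbps: 4K@120 HDR needs DP 1.4 / HDMI 2.1
```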

How powerful is a graphics card

To talk about the power of a graphics card, we need to know some concepts that usually appear in its specifications and benchmarks. This is the best way to get to know in depth the graphics card we want to buy, and to compare it against the competition.

FPS rate

FPS is the frame rate, or frames per second. It measures the frequency at which the screen displays the images of a video, game, or whatever is shown on it. FPS has a lot to do with how we perceive motion in an image: the more FPS, the more fluid it feels. At a rate of 60 FPS or higher, the human eye under normal conditions perceives a fully fluid image, close to reality.

But of course, not everything depends on the graphics card: the refresh rate of the screen caps the FPS we will actually see. Hz and FPS line up one-to-one, so on a 60 Hz screen the game will be displayed at a maximum of 60 FPS, even if the GPU can render it at 100 or 200 FPS. To find out the maximum FPS the GPU itself can deliver, we have to disable vertical sync in the game options.
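
In code form, the two relationships at play here look something like this (an illustrative sketch):

```python
def frame_time_ms(fps):
    # Each frame must be rendered within this time budget.
    return 1000 / fps

def visible_fps(gpu_fps, screen_hz):
    # With vertical sync on, the panel's refresh rate is a hard ceiling.
    return min(gpu_fps, screen_hz)

print(round(frame_time_ms(60), 2))  # 16.67 ms per frame at 60 FPS
print(visible_fps(200, 60))         # 60: a 60 Hz panel shows at most 60 FPS
```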

Architecture of your GPU

We saw earlier that GPUs have a certain number of physical cores, which might lead us to think that more cores always means better performance. But it is not that simple: as with CPU architectures, performance can vary even at the same clock speed and core count. We call this IPC, or Instructions Per Cycle.

The architecture of graphics cards has evolved over time to reach simply spectacular performance. They can drive 4K resolutions above 60 Hz, or even 8K. But most important is their ability to animate and render textures with light in real time, just as our eyes do in real life.

Currently Nvidia offers its Turing architecture, using 12 nm FinFET transistors to build the chips of the new RTX cards. This architecture brings two differentiating features that until now did not exist in consumer hardware: real-time Ray Tracing and DLSS (Deep Learning Super Sampling). The first simulates how light behaves in the real world, calculating in real time how it strikes virtual objects. The second is a set of artificial intelligence algorithms with which the card renders at a lower resolution and reconstructs the image to optimize the game's performance, a kind of smart anti-aliasing. Ideally, DLSS and Ray Tracing are combined.

On the AMD side, a new architecture has also arrived, although it coexists with the previous ones to cover a wide range of cards which, truth be told, do not reach the level of Nvidia's top range. With RDNA, AMD has increased the IPC of its GPUs by 25% compared to the GCN architecture, achieving 50% more performance per watt consumed.

Clock frequency and turbo mode

Along with the architecture, two parameters are very important for judging a GPU's performance: its base clock frequency and its factory turbo or boost increase. As with CPUs, GPUs can vary their processing frequency as needed at any given moment.

If you look closely, graphics card frequencies are much lower than processor frequencies, sitting around 1600-2000 MHz. This is because the far larger number of cores makes up for the lower frequency, and keeps the card's TDP under control.

At this point it is essential to know that the market offers reference models and custom cards. The former are the models released by the GPU makers themselves, Nvidia and AMD. For the latter, board partners basically take the GPUs and memory chips and assemble their own cards with higher-grade components and heatsinks. Their clock frequencies change too, and these models tend to be faster than the reference ones.

TFLOPS

Along with the clock frequency we have FLOPS (Floating Point Operations Per Second). This value measures the floating point operations a processor can perform in one second, and is a figure for the raw power of a GPU (and also of CPUs). At current scales we no longer speak of plain FLOPS but of TeraFLOPS, or TFLOPS.

We should not jump to the conclusion that more TFLOPS automatically means a better graphics card. It is normally a good sign, since the card should be able to push textures around more freely, but other elements, such as the amount and speed of memory, the GPU architecture, and its cache, make the real difference.
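
The usual back-of-the-envelope formula assumes each core completes one fused multiply-add, i.e. 2 floating point operations, per clock. A hypothetical sketch, plugging in the public specifications of two well-known cards:

```python
def tflops(cores, boost_ghz):
    # Each core issues one fused multiply-add (2 operations) per clock.
    return cores * boost_ghz * 2 / 1000

print(round(tflops(2560, 1.733), 1))  # ~8.9 TFLOPS: GeForce GTX 1080
print(round(tflops(2304, 1.725), 1))  # ~7.9 TFLOPS: Radeon RX 5700
```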

TMUs and ROPs

These terms appear on every graphics card spec sheet, and they give us a good idea of its working speed.

TMU stands for Texture Mapping Unit. This element is responsible for scaling, rotating, and distorting a bitmap image to place it onto a 3D model as a texture. In other words, it applies a color map to a 3D object that would otherwise be empty. The more TMUs, the higher the texturing performance, the faster pixels get filled, and the more FPS we get. Current TMUs include Texture Addressing units (TA) and Texture Filtering units (TF).

Now let's look at the ROPs, or raster units. These units process the texel information from VRAM and perform matrix and vector operations to assign each pixel a final value, including its depth. This is called rasterization, and it basically governs anti-aliasing, the merging of the different pixel values held in memory. DLSS is precisely an evolution of this process for generating the final image.
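
Spec-sheet fill rates follow directly from these unit counts; an illustrative calculation using GTX 1080-class figures (160 TMUs, 64 ROPs, ~1.733 GHz boost):

```python
def fill_rate(units, clock_ghz):
    # Units x clock = operations per second, in billions (giga).
    return units * clock_ghz

print(round(fill_rate(160, 1.733), 1))  # ~277.3 GTexel/s from 160 TMUs
print(round(fill_rate(64, 1.733), 1))   # ~110.9 GPixel/s from 64 ROPs
```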

Amount of memory, bandwidth and bus width

We know there are several VRAM technologies, of which the most widely used today are GDDR5 and GDDR6, with speeds of up to 14 Gbps for the latter. As with RAM, the more memory, the more pixel and texture data we can store. This heavily influences the resolution we play at, the level of world detail, and the viewing distance. Currently a graphics card needs at least 4 GB of VRAM to handle current-generation games at Full HD and higher resolutions.

The memory bus width represents the number of bits that can be transmitted per word or instruction. These buses are much wider than those used by CPUs, between 192 and 384 bits; remember the idea of parallel processing.

Memory bandwidth is the amount of information that can be transferred per unit of time, measured in GB/s. The wider the bus and the higher the memory frequency, the more bandwidth we have, because more information can travel through it at once. It works just like an Internet connection.
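
The relationship just described, bandwidth = (bus width / 8) × effective rate, applied to the same 14 Gbps GDDR6 on two common bus widths (an illustrative sketch):

```python
def bandwidth_gb_s(bus_bits, rate_gbps):
    # Bits per transfer / 8 -> bytes, times transfers per second per pin.
    return bus_bits / 8 * rate_gbps

print(bandwidth_gb_s(192, 14))  # 336.0 GB/s on a 192-bit bus
print(bandwidth_gb_s(384, 14))  # 672.0 GB/s: double the bus, double the flow
```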

API compatibility

An API is basically a set of libraries used to develop and work with different applications. The name stands for Application Programming Interface, and it is the means by which different applications communicate with each other.

Moving to the multimedia world, we also have APIs that enable the creation and running of games and video. The most famous of all is DirectX, on its 12th version since 2014, which in its latest updates has added Ray Tracing, programmable MSAA, and virtual reality capabilities. The open alternative is OpenGL, currently at version 4.6 and also used by many games. Finally we have Vulkan, an API closely tied to AMD (its source code came from AMD and was handed over to the Khronos Group).

Overclocking capability

Earlier we talked about the turbo frequency of GPUs, but it is also possible to push them beyond their limits by overclocking. This practice is basically about hunting for more FPS in games: more fluidity, better response.

The overclocking headroom of a GPU is around 100 to 150 MHz, although some can take a bit more or less, depending on their architecture and maximum rated frequency.

It is also possible to overclock the GDDR memory, and by a lot. An average GDDR6 memory running at 7000 MHz can take increases of 900 to 1000 MHz, reaching up to 16 Gbps effective. In fact, memory is the element that raises in-game FPS the most, with gains of up to 15 FPS.
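
In numbers, the overclock described above looks something like this (illustrative sketch, assuming a 256-bit bus):

```python
def mem_bandwidth(clock_mhz, bus_bits):
    return 2 * clock_mhz / 1000 * bus_bits / 8  # effective Gbps x bytes per transfer

stock = mem_bandwidth(7_000, 256)        # 448.0 GB/s at 14 Gbps effective
overclocked = mem_bandwidth(8_000, 256)  # 512.0 GB/s at 16 Gbps effective
print(stock, overclocked)                # a ~14% jump in memory bandwidth
```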

Some of the best overclocking programs are EVGA Precision X1, MSI Afterburner, and AMD WattMan for Radeon cards, although many manufacturers have their own, such as AORUS, Colorful, Asus, etc.

Benchmark tests for graphics cards

Benchmarks are stress and performance tests that certain hardware components of our PC are put through to evaluate and compare their performance against other products on the market. There are of course benchmarks for evaluating graphics cards, and even the graphics-plus-CPU combination.

These tests almost always produce a dimensionless score, meaning it can only be compared with scores generated by the same program. At the opposite end are metrics like FPS and TFLOPS. The most widely used programs for graphics card benchmarks are 3DMark, which includes a large number of different tests, PassMark, VRMark, and GeekBench. All of them publish their own score tables so we can compare our GPU with the competition.
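
Not a benchmark itself, but a handy first step before comparing scores is reading the card's identity and live state. A minimal sketch assuming an Nvidia card with the driver's nvidia-smi tool installed; AMD users would turn to their vendor's equivalent:

```python
import subprocess

# Query name, total VRAM, temperature and load through the driver's
# nvidia-smi tool (these query fields are part of its documented set).
fields = "name,memory.total,temperature.gpu,utilization.gpu"
result = subprocess.run(
    ["nvidia-smi", f"--query-gpu={fields}", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "GeForce RTX 2070, 8192 MiB, 45, 3 %"
```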

Size matters… and the heatsink too

Of course it matters, friends. Before buying a graphics card, the least we can do is check its specifications to see how big it is, and then go to our chassis and measure how much space we have available for it.

Dedicated graphics cards carry very powerful GPUs with TDPs above 100 W across the board. This means they get quite hot, in fact even hotter than processors. For this reason, all of them have large heatsinks that occupy almost the entire PCB.

In the market we can basically find two types of heatsink, plus a third, more exotic option:

  • Blower: this is the kind of heatsink found, for example, on the reference AMD Radeon RX 5700 and 5700 XT models or the previous-generation Nvidia GTX 10 series. A single fan draws in air and forces it along the finned heatsink, expelling it out the back. These heatsinks perform rather poorly, since they move little air and it passes over the fins slowly.
  • Axial flow: the fans of a lifetime, mounted flat on the heatsink and pushing air down onto the fins, from where it escapes out the sides. They are used in practically all custom models, as this is the design that performs best.
  • Liquid cooling: some top-of-the-range models even have heatsinks that embed a liquid cooling system, for example the Asus Matrix RTX 2080 Ti.

Custom cards

This is what we call the graphics models assembled by hardware manufacturers such as Asus, MSI, Gigabyte, etc. They buy the graphics chips and memory directly from the main manufacturer, AMD or Nvidia, and then mount them on a PCB of their own design together with a heatsink of their own creation.

The good thing about these cards is that they come factory overclocked, with a higher frequency than the reference models, so they perform a bit better. Their heatsinks and VRMs are also better, and many even have RGB. The bad thing is that they are usually more expensive. Another positive is the variety of sizes on offer, for ATX, Micro ATX, or even ITX chassis, with very small and compact cards.

What the GPU or graphics card of a gaming laptop is like

By this point you may be wondering whether a laptop can also have a dedicated graphics card, and the truth is that it can. In fact, at Professional Review we analyze a huge number of gaming laptops with dedicated GPUs.

In this case the GPU is not mounted on an expansion board; instead the chip is soldered directly onto the laptop's main PCB, very close to the CPU. These designs are usually called Max-Q, as they do without a tall finned heatsink and have a specific region of the main board reserved for cooling.

In this area the undisputed king is Nvidia, with its RTX and GTX Max-Q chips. They are optimized for laptops, consuming a third of what the desktop models do while sacrificing only around 30% of their performance. This is achieved by lowering the clock frequency, sometimes removing some cores, and slowing down the GRAM.

Which CPU should I pair with my graphics card

For gaming, as for any task on our computer, we must strike a balance between components to avoid bottlenecks. Bringing this down to the world of gaming and graphics cards, we must balance GPU and CPU so that neither falls short while the other goes to waste. Our money is at stake, and we cannot buy an RTX 2080 and pair it with a Core i3-9300F.

The central processor plays an important role in graphics work, as we saw in earlier sections. So we need to make sure it has enough speed, cores, and threads to handle the physics and movement of the game or video and send them to the graphics card as fast as possible.

In any case, we can always tweak the game's graphics settings to reduce the impact of a CPU that is too slow for the task. In the GPU's case it is easy to compensate for a lack of performance: simply lowering the resolution works wonders. With the CPU it is different, since even with fewer pixels the physics and movement stay almost the same, and lowering the quality of those options can seriously hurt the gaming experience. Here are some options that weigh on the GPU and others that weigh on the CPU:

They influence the GPU:

  • In general, rendering options
  • Anti-aliasing
  • Ray Tracing
  • Textures
  • Tessellation
  • Post-processing
  • Resolution
  • Ambient occlusion

They influence the CPU:

  • In general, physics options
  • Character movement
  • Items displayed on screen
  • Particles

With this in mind, we can make a rough general classification of builds according to their intended purpose, which makes it easier to arrive at balanced specifications.

Cheap multimedia and office equipment

We start with the most basic, or at least what we consider basic leaving aside Celeron-based mini PCs. If we are after something cheap, the best options are AMD's Athlon processors or Intel's Pentium Gold. In both cases we get decent integrated graphics, the Radeon Vega in the first case and UHD Graphics in Intel's, which support high resolutions and reasonable performance in undemanding tasks.

In this segment it is completely pointless to buy a dedicated graphics card. These are dual-core CPUs that will not deliver enough to justify the cost of a card; moreover, the integrated graphics give performance similar to what a dedicated GPU of 80-100 euros would offer.

General-purpose equipment and low-end gaming

We can consider general-purpose a machine that responds well in many different circumstances: browsing, office work, light design work, even amateur video editing and occasional Full HD gaming (we can't come here asking for much more).

In this area, the 4-core, high-frequency Intel Core i3 stands out, and especially the AMD Ryzen 3 3200G and Ryzen 5 3400G with integrated Radeon RX Vega 11 graphics at a very tight price. These Ryzen chips can run a current-generation game with dignity at low quality in Full HD. If we want something a little better, let's move on to the next tier.

Computers with a graphics card for mid and high-range gaming

Entering mid-range gaming, we could already afford a Ryzen 5 2600 or a Core i5-9400F for less than 150 euros, and add a dedicated GPU such as the Nvidia GTX 1650, 1660, or 1660 Ti, or the AMD Radeon RX 570, 580, or 590. These are not bad options if we do not want to spend more than 250 euros on a graphics card.

But of course, if we want more we must spend more, and that is how it goes if we want an optimal gaming experience in Full HD or 2K at high quality. In that case the processors just mentioned are still a great option for being 6-core, but we could step up to the Ryzen 5 3600 and 3600X or the Intel Core i5-9600K. With those, it becomes worthwhile to upgrade to Nvidia's RTX 2060/2070 Super or AMD's RX 5700/5700 XT.

Enthusiast gaming and design builds

Here there will be heavy rendering tasks and games running with every filter maxed out, so we will need a CPU of at least 8 cores and a powerful graphics card. The AMD Ryzen 7 2700X or 3700X are great options, as are the Intel Core i7-8700K or 9700F. Alongside them, we deserve an Nvidia RTX 2070 Super or an AMD Radeon RX 5700 XT.

And if we want to be the envy of our friends, let's go for an RTX 2080 Super, or wait a bit for the Radeon RX 5800, and pair it with an AMD Ryzen 9 3900X or an Intel Core i9-9900K. Threadrippers are not a sensible option for gaming at present, although Intel's X and XE chips on the LGA 2066 platform are, despite their high cost.

Conclusion about the graphics card and our recommended models

That is it for this post, in which we have explained in some detail the current state of graphics cards, along with a bit of their history from the very beginning. It is one of the most popular products in the computing world, since a gaming PC will surely outperform a console by a wide margin.

Real gamers use computers to play, especially in e-sports and competitive gaming worldwide, where they always try to extract the maximum possible performance, raising FPS, cutting response times, and using components designed for gaming. None of it would be possible without graphics cards.
