Tutorials

What is computer latency and how to measure it

Anonim

Many of us have an Internet connection yet don't really know what latency is. Latency is present in each and every component of a computer system, not only in an Internet connection. So today we will try to define what latency is, which devices it appears in, and how we can measure it in each case.

In computing there are many parameters to take into account when buying certain components. One of them is latency, although not every device comes with an explicit figure for it: sometimes it is simply known to exist and is very similar across devices, as with hard drives.

Other components, on the other hand, do publish these figures, and they matter a great deal: routers in some cases, and above all RAM. Without further ado, let's see what latency is and how we can measure it on our computer.

Latency, general meaning

First of all, we need to define latency in generic terms, because that makes it easier to see where latency can appear.

Latency, in computing, is the time that elapses between an order and the response to that order. As you would expect, latency is measured in units of time, specifically milliseconds or microseconds, since a full second is far too coarse a measure for computer systems.

With latency we are measuring how long we wait from the moment we give an order until we receive the expected response, whether that is information on a screen or motion and sound in the real world.

Every computer component works through electrical signals, so we could say latency is the time needed to carry out all the electrical and logical switching from the moment an action starts at a peripheral until the computer executes it and shows the result.

Internet latency

When we talk about latency in computing, the vast majority of the time we are referring to the latency of an Internet connection. The interconnection between nodes in a network is based on electrical signals that travel through a medium, either physical (cables) or through the air in the form of waves. On top of that, a series of protocols is needed to make one medium compatible with another and to impose some order on the information we send and receive.

Network latency is the sum of all the delays between the moment we request (or send) information and the moment the remote node responds. In other words, it measures the time it takes for a data packet to get from one place to another, again in milliseconds. If, for example, we have a latency of 30 milliseconds, it means that 30 ms elapse from the moment our browser sends a request until the server receives it and responds with what we asked for. It sounds like very little, but sometimes we notice it a lot; we will see in which situations.
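As a rough illustration of the request/response timing described above, here is a minimal Python sketch (the function name is ours, not a standard API) that estimates round-trip latency by timing how long a TCP connection takes to establish:

```python
import socket
import time

def tcp_connect_rtt_ms(host, port=80, samples=5):
    """Rough round-trip latency estimate: time how long a TCP connection
    takes to establish (one SYN/SYN-ACK round trip plus a little overhead)."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close it immediately
        timings.append((time.perf_counter() - start) * 1000)
    return min(timings)  # the minimum best approximates the path latency
```

This is only an approximation (a real ICMP ping needs raw sockets or the system `ping` tool), but taking the minimum over several samples filters out one-off scheduling noise.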

This term is also widely known as lag, especially in the world of video games, but both terms mean exactly the same thing.

What influences latency

This is one of the most important measures of a connection, and we must always take it into account depending on the kind of applications we are going to use. In general, a series of factors influence latency:

The packet size and protocols being used

A small packet is easier to transmit than a large one, since there is no need to split it up and reassemble it. The hardware also plays a role here: old routers or network cards need more processing time to carry out each action. This is especially critical on devices with low processing power.

We must also take into account the data transmission protocols. These protocols ensure that a packet arrives intact and by the correct route from one node to another, adding extra information about how it should be handled, what kind of encryption it carries, and other details needed for identification and routing. As you can imagine, extracting all this information from the packets also takes time, and that translates into latency.

There are a large number of transmission protocols in networks, but the best known are undoubtedly TCP (Transmission Control Protocol) and IP (Internet Protocol) and their combination. These protocols are used for various functions, mainly for the correct routing of packets (IP protocol) and for error control and to ensure that the information arrives correctly (TCP protocol).

The physical transmission medium, fiber optic latency

Similarly, transmitting through a physical medium will, in most cases, be faster than doing so over radio waves, although the adoption of the 5 GHz band has given wireless networks higher transmission speeds.

The fastest medium today is undoubtedly optical fiber, since it introduces almost no latency or lag into the connection. Transmitting data as light pulses currently offers the greatest capacity, both in bandwidth and in switching speed.

The number of hops needed to reach the destination

Latency also has a lot to do with the hops a packet must make before reaching its destination: a direct cable between two nodes is not the same as passing through 200 intermediate nodes along the way. Each hop takes time while it forwards the packet from one port to another. Bear in mind that a packet almost never reaches its destination directly; it travels through a multitude of routers and servers that must process it, and even add extra information in order to forward it to the target. And that destination may well be on the other side of the world.

At this point you may have noticed that we have barely mentioned the bandwidth of a connection, which is precisely what we look at most when choosing an Internet provider.

Difference between bandwidth and latency: when is each one important?

When we talk about the bandwidth of a connection, we mean the amount of information we can transmit from one point to another per unit of time. The more bandwidth we have, the more data we can download simultaneously. The base unit is bits per second (b/s), although nowadays it is almost always expressed in megabits per second (Mb/s). In terms of storage it is megabytes per second (MB/s), where one byte equals 8 bits.
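The bits-versus-bytes conversion mentioned above trips people up constantly, so here is a tiny Python sketch of it (the function name is ours):

```python
def megabits_to_megabytes(mb_per_s):
    """Convert an advertised line speed in megabits/s (Mb/s)
    to the effective file-transfer speed in megabytes/s (MB/s)."""
    return mb_per_s / 8  # 1 byte = 8 bits

# An advertised "300 Mb/s" connection moves at most 37.5 MB of data per second:
print(megabits_to_megabytes(300))  # 37.5
```

This is why a "300 Mb" fiber plan never shows 300 in a download manager, which reports megabytes.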

Strictly speaking, we all make a small mistake here: we say "Internet speed" when we mean bandwidth, when that name arguably fits latency better. However, everyone is used to it, so we will use latency to refer to response time and speed to refer to bandwidth.

Now let's see when each measure matters, depending on what we use our connection for.

Bandwidth

If we use our connection to download content stored statically on a server (images, videos, games), then bandwidth is essential. We don't care if the connection takes 10 seconds to establish; what matters is that the file downloads as fast as possible. If a file occupies 1000 MB and we have a 100 MB/s connection, it will take 10 seconds to download. With a 200 MB/s connection, it takes 5 seconds. Simple.
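The arithmetic above can be sketched in a couple of lines of Python (an idealized model that ignores latency and protocol overhead):

```python
def download_time_s(size_megabytes, bandwidth_megabytes_per_s):
    """Idealized download time: file size divided by bandwidth,
    ignoring latency, protocol overhead, and congestion."""
    return size_megabytes / bandwidth_megabytes_per_s

print(download_time_s(1000, 100))  # 10.0 (seconds)
print(download_time_s(1000, 200))  # 5.0
```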

Latency

Latency is essential when we use our connection for real-time content such as streaming, or for massive online games. In these cases we need transmission and reception to happen essentially simultaneously, without frozen frames or buffering. When we play and see a player's avatar magically appear, disappear, and jump around, it means that either they or we have lag, i.e. high latency. Even though the action is happening at that very moment, we only see disconnected fragments of it, because the information takes far longer to reach our computer than the events themselves.

In an FPS shooter with very high latency, we won't find out when we've been killed, nor will we know an opponent's exact position. Bandwidth matters too, of course, but latency plays the key role.

How to measure the latency of our connection

To measure the latency of our connection we can use a tool that has shipped with Windows from the very beginning: ping. To use it, open a command window by going to the Start menu and typing "CMD". A black window will open, where we type the following command:

ping

For example, if we want to see the latency between Professional Review and our computer, we would type "ping www.Profesionalreview.com".

We must look at the "time=XXms" part: that is our latency. Let's see how the connection type affects it. To do this, we will compare a wired connection and a distant Wi-Fi connection on the same computer by pinging our own router.
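If you want to pull those "time=XXms" values out of ping's output programmatically, a small Python sketch like this works (the function name is ours; the sample line is illustrative):

```python
import re

def parse_ping_times_ms(ping_output):
    """Extract every per-reply latency from ping's text output.
    Matches both the Windows style ('time=7ms', 'time<1ms') and the
    Linux/macOS style ('time=7.13 ms')."""
    return [float(t) for t in re.findall(r"time[=<]([\d.]+)\s*ms", ping_output)]

windows_line = "Reply from 192.168.1.1: bytes=32 time=7ms TTL=64"
print(parse_ping_times_ms(windows_line))  # [7.0]
```

You could feed it the output of `subprocess.run(["ping", ...])` and average the list to track your latency over time.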

By cable, the latency is practically nil, under 1 millisecond, while over Wi-Fi we are already adding on the order of 7 milliseconds. This is precisely why gamers always prefer a wired connection over Wi-Fi: those 7 ms translate into frozen frames and stutters once we add them to the lag the remote connection itself introduces.

Visit our tutorial for more information about the ping command and how to find your external IP.

By now it should be more or less clear what Internet latency is and how to take it into account. Now let's see where else latency appears.

Latency in RAM

This is probably the second most important component whose latency we need to take into account, or at least the one that has gained the most attention in recent years with DDR3 and DDR4 RAM.

In the case of RAM, the definition is a little different from the one we used for networks. Here, the clock cycles at which the memory operates (its frequency) come into play. In any case, we are still talking about a measure of TIME, and nothing else.

The actual latency figure in RAM is called CAS latency, or CL, and it is simply the number of clock cycles that elapse between the CPU issuing a request and the RAM having the data available. Once again, we are measuring the time between request and response.

Visit our comprehensive article on RAM latency to learn everything about it.

Hard disk latency

Another device where latency figures matter a great deal is the hard drive, especially drives based on mechanical parts. Here, latency breaks down into several different terms, each tied to a specific operation:

Access time

Basically, this is the time the storage unit needs before it is ready to transmit data. A hard disk is made up of rotating platters on which the data is physically recorded; that data must be read by a mechanical head that sweeps across the surface of the disk.

The access time is the time it takes the hard disk to process our request and position the head exactly over the cylinder and sector where the information is stored. Meanwhile, the platters keep spinning at high speed, so once the head is over the right track, it must wait for the desired sector to rotate underneath it. Only then is the information ready to be read and transmitted.

Access time can be broken down into several components, described in the following sections:

Seek time

This is the time it takes for the head to position itself over the cylinder and track that contain the data. Seek time ranges from about 4 milliseconds for the fastest drives up to 15 ms; 9 ms is typical for desktop hard drives.

SSDs have no mechanical parts, so their equivalent of seek time is between 0.08 and 0.16 ms, vastly lower than mechanical drives.

Rotational latency

This measures the time it takes for the desired data sector to rotate under the head, due to the drive's own spinning. Hard drives rotate continuously, so the head must wait for the right part of the track to come around. The higher the rotational speed (RPM), the sooner the data on a given track can be accessed. For a typical 7,200 RPM hard drive the average rotational latency is about 4.17 ms.
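That 4.17 ms figure follows directly from the rotation speed: on average the platter must make half a revolution before the requested sector arrives. A quick Python sketch of the calculation (the function name is ours):

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency: on average the platter must turn
    half a revolution before the requested sector passes under the head."""
    ms_per_revolution = 60_000 / rpm  # 60,000 ms in a minute
    return ms_per_revolution / 2

print(round(avg_rotational_latency_ms(7200), 2))  # 4.17
print(round(avg_rotational_latency_ms(5400), 2))  # 5.56
```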

Other delays that add latency

Other delays typical of data transmission include command processing time and head settle time. The first is the time the drive's electronics need to read, process, and place the command on the bus, typically about 0.003 ms. The second is the time the head needs to stabilize after moving; being mechanical, this takes roughly 0.1 ms.

We can also add other contributions to the data transmission time, such as the following:

  • Sector time: the time it takes for a sector of the hard disk to be validated and located, both physically and logically.
  • Head switch time: the time that elapses when switching from one head to another to read information. Bear in mind that hard drives have two heads per platter, one for each side. It is normally between 1 and 2 ms.
  • Cylinder switch time: the time that elapses when moving from one cylinder to another, usually about 2 to 3 ms.
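Putting the main components together gives a feel for the total. A rough Python sketch using the typical figures quoted above (the function name and the simplification, i.e. ignoring sector, head-switch, and cylinder-switch times, are ours):

```python
def hdd_access_time_ms(seek_ms=9.0, rpm=7200, command_overhead_ms=0.003):
    """Rough average access time: seek + average rotational latency +
    command processing. Defaults use the typical desktop-drive figures
    from the text (9 ms seek, 7,200 RPM, 0.003 ms command time)."""
    rotational_ms = (60_000 / rpm) / 2
    return seek_ms + rotational_ms + command_overhead_ms

print(round(hdd_access_time_ms(), 2))  # 13.17
```

So a typical mechanical drive needs over 13 ms just to begin a random read, while an SSD needs roughly a tenth of a millisecond.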

What does all this translate to? A mechanical hard drive is painfully slow compared to an SSD. This is why an SSD substantially boosts the performance of any computer, even an old one.

Latency in wireless mice and headsets

We cannot forget wireless mice when talking about latency. We have already seen empirically that latency over a radio link is higher than over a physical connection, and wireless mice are no exception.

Wireless mice mostly operate in the 2.4 GHz band. That is very fast, especially with the receiver nearby, but a wireless mouse will still not achieve lower latency than a wired one, even a lower-end wired model. This is precisely why most gaming mice are wired rather than wireless, except for very high-end (and expensive) models.

Exactly the same happens with headphones. In this case, however, we are dealing with sound, and our own biology already imposes some latency on how we react to sounds in our environment. That is why a good wireless headset and a wired one will feel very similar in practice. Latency is therefore not as important here as it is for a mouse or other components.

Conclusion about latency in our computer

These are the main latency figures to take into account in our computer equipment. The most important of all is surely that of the Internet connection, since it is the one we notice most in daily use of the network, especially if we play online. And, of course, that of the hard drive, if our system is installed on a mechanical one.

In all other cases there is little we can do to improve the components' performance, since latency is an inherent characteristic of them, especially in hard drives. If we have moved from an HDD to an SSD, we will surely have noticed that the performance difference is abysmal.

In the case of RAM, if you have read our dedicated article you will know how to measure it, but there is little we can do to improve it. In fact, it is practically imperceptible to us, given the high frequencies at which modern modules and motherboards run; any latency deficit is largely offset by those high frequencies.

Latency will always, without exception, be part of the architecture of a computer or any other system. There will always be a lapse of time between a request and its execution, regardless of the medium used and the components involved. In fact, we ourselves and our own reflexes are the biggest source of lag of all.

Do you think latency is really important on a computer or a network? Leave us comments about your opinion on this topic. Can you think of any other component in which latency should be taken into account?
