Intel Xeon: all the information

Among Intel's vast catalog we find the Intel Xeon processors, which are the least known to ordinary users because they are not aimed at the consumer market. In this article we explain what these processors are and how they differ from consumer ones.

What is Intel Xeon?

Xeon is a brand of x86 microprocessors designed, manufactured, and marketed by Intel, targeting the workstation, server, and embedded systems markets. Intel Xeon processors were introduced in June 1998. Xeon processors are based on the same architecture as normal desktop CPUs, but add advanced features such as ECC memory support, higher core counts, support for larger amounts of RAM, more cache memory, and enterprise-grade reliability, availability, and serviceability (RAS) features, which handle hardware exceptions through the Machine Check Architecture. Thanks to these additional RAS features, they can often continue executing safely where a normal processor could not, depending on the type and severity of the machine check exception. Some models also support multi-socket systems with 2, 4, or 8 sockets by using the QuickPath Interconnect bus.

We recommend reading our post about AMD Ryzen - The best processors manufactured by AMD

Some drawbacks that make Xeon processors unsuitable for most consumer PCs include lower clock frequencies at the same price point (since servers run more tasks in parallel than desktops, core counts matter more than clock frequencies), the general absence of an integrated GPU, and the lack of overclocking support. Despite these disadvantages, Xeon processors have always been popular with some desktop users, mainly gamers and enthusiasts, thanks to their higher core counts and a more attractive price/performance ratio than the Core i7 in terms of total computing power across all cores. Since most Intel Xeon CPUs lack an integrated GPU, systems built with these processors require a discrete graphics card if monitor output is desired.

Intel Xeon is a different product line from Intel Xeon Phi, despite the similar name. The first-generation Xeon Phi was a completely different type of device, more comparable to a graphics card: it was designed for a PCI Express slot and was intended to be used as a many-core coprocessor, like the Nvidia Tesla. In the second generation, Xeon Phi became a main processor more similar to Xeon: it fits the same socket as a Xeon processor and is x86-compatible; however, compared to Xeon, the Xeon Phi design point emphasizes more cores with higher memory bandwidth.

What are the Intel Xeon Scalable?

Big changes are underway in the data center. Many organizations are undergoing a far-reaching transformation built on online data and services: they leverage that data for powerful artificial intelligence and analytics applications that can turn it into business-changing insights, and then deploy tools and services that put those insights to work. This calls for a new type of server and network infrastructure, optimized for artificial intelligence, analytics, massive data sets, and more, powered by a revolutionary new CPU. That's where Intel's Xeon Scalable line comes in.

Intel Xeon Scalable represents possibly the biggest step change in two decades of Xeon CPUs. It's not simply a faster Xeon or a Xeon with more cores, but a family of processors designed around a synergy between computing, networking, and storage, bringing new features and performance improvements to all three.

While Xeon Scalable offers a 1.6x average performance boost over previous-generation Xeon CPUs, the benefits go beyond raw benchmarks to cover real-world optimizations for analytics, security, AI, and image processing. There is more power to run complex, high-performance workloads. When it comes to the data center, it's a win in every way.

Perhaps the biggest and most obvious change is the replacement of the old ring-based Xeon architecture, where all the processor cores were connected via a single ring, with a new mesh architecture. This arranges the cores, plus their associated cache, RAM, and I/O, in rows and columns that connect at each intersection, allowing data to move more efficiently from one core to another.

If you imagine it in terms of a road network, the old Xeon architecture was like a high-speed ring road, where data moving from one core to another had to travel around the ring. The new mesh architecture is more like a grid of highways, one that allows traffic to flow at full speed from point to point without congestion. This improves performance on multi-threaded tasks where different cores share data and memory, while also increasing energy efficiency. In the most basic sense, it is an architecture purpose-built to move large amounts of data around a processor that can have up to 28 cores. Moreover, it is a structure that scales more efficiently, whether to multiple processors or to future CPUs with even more cores.
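
To make the analogy concrete, here is a back-of-the-envelope sketch in C comparing the average core-to-core hop count of a 28-node bidirectional ring with that of a mesh. The 4x7 grid is purely an illustrative assumption; the real die layout is different and includes non-core tiles:

```c
#include <stdio.h>
#include <stdlib.h>

/* Average distance between all pairs of 28 cores, arranged as a
   bidirectional ring versus a 4x7 mesh (layout assumed for illustration). */
int main(void)
{
    const int N = 28, COLS = 7;
    long ring = 0, mesh = 0;
    int pairs = 0;

    for (int i = 0; i < N; i++) {
        for (int j = i + 1; j < N; j++) {
            int d = abs(i - j);
            ring += d < N - d ? d : N - d;      /* shorter way around the ring */
            mesh += abs(i / COLS - j / COLS)    /* Manhattan distance on grid  */
                  + abs(i % COLS - j % COLS);
            pairs++;
        }
    }
    printf("avg hops  ring: %.2f   mesh: %.2f\n",
           (double)ring / pairs, (double)mesh / pairs);
    return 0;
}
```

The mesh's average hop count comes out at roughly half the ring's, which is the intuition behind the traffic-grid analogy.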

If the mesh architecture is about moving data more efficiently, the new AVX-512 instructions aim to optimize how that data is processed. Building on the work Intel started with its first SIMD extensions in 1996, AVX-512 allows even more data items to be processed simultaneously than the previous-generation AVX2, doubling the width of each register and doubling the number of registers available. AVX-512 permits twice as many floating-point operations per clock cycle, and can process twice as many data items per cycle as AVX2 could.
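
As a concrete illustration, here is a minimal C sketch using the AVX-512F intrinsics from immintrin.h: a single _mm512_add_ps adds 16 packed floats per instruction, twice the 8 that AVX2's _mm256_add_ps handles. It assumes an AVX-512-capable CPU and a compiler flag such as -mavx512f:

```c
#include <immintrin.h>
#include <stdio.h>

/* Adds two float arrays 16 elements (512 bits) at a time.
   Build with: gcc -mavx512f avx512_add.c */
void add_f32(const float *a, const float *b, float *out, int n)
{
    int i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; i++)              /* scalar tail for the leftovers */
        out[i] = a[i] + b[i];
}

int main(void)
{
    float a[32], b[32], out[32];
    for (int i = 0; i < 32; i++) { a[i] = (float)i; b[i] = 2.0f * i; }
    add_f32(a, b, out, 32);
    printf("out[31] = %.1f\n", out[31]);    /* 31 + 62 = 93.0 */
    return 0;
}
```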

Better yet, these new instructions are specifically designed to accelerate performance in complex, data-intensive workloads such as scientific simulation, financial analytics, deep learning, image, audio and video processing, and cryptography. This helps a Xeon Scalable processor handle HPC tasks more than 1.6 times faster than the previous-generation equivalent, or accelerate artificial intelligence and deep learning operations by 2.2x.

AVX-512 also helps with storage, speeding up key features like deduplication, encryption, compression, and decompression so you can make more efficient use of your resources and strengthen the security of on-premises and private cloud services.

In this sense, AVX-512 works hand in hand with Intel QuickAssist Technology (Intel QAT). QAT provides hardware acceleration for data encryption, authentication, compression, and decompression, increasing the performance and efficiency of processes that place high demands on today's network infrastructure, demands that will only grow as more digital services and tools are deployed.

Used in conjunction with software-defined infrastructure (SDI), QAT can help recover CPU cycles lost to security, compression, and decompression tasks, making them available for the computationally intensive work that brings real value to the business. Because a QAT-enabled CPU can handle high-speed compression and decompression at almost no CPU cost, applications can work with data in compressed form. This data not only has a smaller storage footprint, but also takes less time to transfer from one application or system to another.
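
QAT's own driver API is beyond the scope of this article, so as a rough, software-only illustration of the compress-then-transfer pattern it offloads, here is a C sketch using zlib purely as a stand-in for the hardware path:

```c
/* Software-only sketch of the work a QAT engine would offload.
   Build with: gcc qat_idea.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    const char *msg = "log line repeated... log line repeated... "
                      "log line repeated... log line repeated...";
    unsigned char packed[256], unpacked[256];
    uLongf packed_len = sizeof(packed);
    uLongf unpacked_len = sizeof(unpacked);

    /* On a QAT-enabled platform these CPU-heavy steps run on
       dedicated hardware, freeing cycles for the application. */
    compress2(packed, &packed_len,
              (const unsigned char *)msg, strlen(msg) + 1, Z_BEST_SPEED);
    uncompress(unpacked, &unpacked_len, packed, packed_len);

    printf("original %zu bytes -> compressed %lu bytes\n",
           strlen(msg) + 1, (unsigned long)packed_len);
    return 0;
}
```

The repetitive input compresses to a fraction of its size, which is why applications that can keep data compressed both store and move it more cheaply.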

Intel Xeon Scalable CPUs integrate with Intel's C620 series chipsets to create a platform for balanced system-wide performance. Intel Ethernet connectivity with iWARP RDMA is built in, offering low-latency 4x10GbE communications. The platform offers 48 lanes of PCIe 3.0 connectivity per CPU and six channels of DDR4 RAM per CPU, supporting capacities of up to 768GB (up to 1.5TB per CPU on select models) and speeds of up to 2666MHz. The 768GB figure follows from the channel count: six channels with two 64GB DIMMs each gives 6 x 2 x 64GB = 768GB.

Storage receives the same generous treatment. There is room for up to 14 SATA3 drives and 10 USB 3.1 ports, not to mention the CPU's built-in virtual NVMe RAID controller. Support for next-generation Intel Optane technology further boosts storage performance, with dramatic positive effects on in-memory database and analytics workloads. And with Intel Xeon Scalable, support for Intel's Omni-Path fabric comes built in, with no need for a discrete interface card. As a result, Xeon Scalable processors come ready for high-bandwidth, low-latency applications in HPC clusters.

With Xeon Scalable, Intel has delivered a line of processors that meets the needs of next-generation data centers, but what does all this technology mean in practice? For starters, servers that can handle larger analytics workloads at higher speeds, extracting insights from larger data sets faster. Intel Xeon Scalable also has the storage and compute capacity for advanced deep learning and machine learning applications, allowing systems to be trained in hours, not days, or to "infer" the meaning of new data with greater speed and accuracy when processing images, speech, or text.

The potential for in-memory database and analytics applications, such as SAP HANA, is enormous, with performance up to 1.59 times higher when running in-memory workloads on the new generation of Xeon. When your business relies on gathering insights from vast data sets combined with real-time sources, that can be enough to give you a competitive advantage.

Xeon Scalable has the performance, memory, and system bandwidth to host larger and more complex HPC applications, helping to solve more demanding business, scientific, and engineering problems. It can also deliver faster, higher-quality video transcoding while streaming video to more customers.

An increase in virtualization capacity could allow organizations to run four times more virtual machines on a Xeon Scalable server than on a previous-generation system. With nearly zero overhead for compression, decompression, and encryption of data at rest, businesses can use their storage more effectively while strengthening security at the same time. This isn't just about benchmarks; it's about technology that transforms the way your data center works, and in doing so, your business too.

What is ECC memory?

ECC (error-correcting code) memory is a method of detecting and then correcting single-bit memory errors. A single-bit memory error is a data error affecting one bit of the server's memory, and the presence of such errors can have a major impact on server performance. There are two types of single-bit memory errors: hard errors and soft errors. Hard errors are caused by physical factors, such as excessive temperature variation, voltage stress, or physical wear affecting the memory bits.

Soft errors occur when data is written or read differently than originally intended, for example when variations in motherboard voltage, cosmic rays, or radioactive decay cause bits in memory to flip. Because bits retain their programmed value in the form of an electrical charge, this type of interference can alter the charge on a memory bit, causing an error. On servers, there are several places where errors can occur: in the storage drive, in the CPU core, across a network connection, and in the various types of memory.

For workstations and servers where errors, data corruption, and/or system failures must be avoided at all costs, such as in the financial sector, ECC memory is often the memory of choice. This is how ECC memory works. In computing, data is received and transmitted through bits, the smallest unit of data in a computer, expressed in binary code as ones and zeros.

When bits are grouped together, they create binary code, or "words", which are units of data that are routed between memory and the CPU. For example, an 8-bit binary word is 10110001. With ECC memory, there is an additional bit, known as a parity bit. This extra parity bit makes the binary code read 101100010, where the last zero is the parity bit and is used to identify memory errors. If the sum of all 1s in a line of code is an even number (not counting the parity bit), then the line of code is said to have even parity. Error-free code always has even parity. However, parity has two limitations: it can only detect odd numbers of flipped bits (1, 3, 5, etc.) and lets even numbers of errors (2, 4, 6, etc.) pass unnoticed. Nor can parity correct errors; it can only detect them. That's where ECC memory comes in.
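
A minimal C sketch of the parity computation just described: folding a byte onto itself with XOR leaves the parity of all eight bits in bit 0. The word 10110001 from the example has four 1s, so the even-parity bit comes out 0, matching the 101100010 above:

```c
#include <stdio.h>
#include <stdint.h>

/* Even-parity bit for a byte: 1 if the count of 1-bits is odd,
   so that data plus parity always holds an even number of 1s. */
static uint8_t even_parity(uint8_t b)
{
    b ^= b >> 4;    /* fold the byte onto itself ...              */
    b ^= b >> 2;
    b ^= b >> 1;    /* ... until bit 0 is the XOR of all 8 bits   */
    return b & 1;
}

int main(void)
{
    uint8_t word = 0xB1;    /* 10110001: four 1s, even parity */
    printf("parity bit: %u\n", even_parity(word));  /* prints 0 */
    return 0;
}
```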

ECC memory uses additional parity bits to store an encoded version of the data when it is written to memory, with the ECC code stored at the same time. When the data is read back, a new ECC code is generated and compared with the stored one. If the two codes do not match, the parity bits are decoded to determine which bit was in error, and that bit is corrected immediately. As data is processed, ECC memory thus constantly checks the code with a special algorithm to detect and correct single-bit memory errors.
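
Real ECC DIMMs use a SECDED code over 64-bit words, but the detect-and-correct principle can be shown with a classic Hamming(7,4) code, offered here purely as an illustrative sketch: three parity bits protect four data bits, and recomputing the checks on read yields a "syndrome" that is the exact position of a single flipped bit:

```c
#include <stdio.h>
#include <stdint.h>

/* Encode 4 data bits into a 7-bit Hamming codeword.
   Bit positions are 1..7; parity bits sit at positions 1, 2 and 4. */
static uint8_t hamming74_encode(uint8_t data)
{
    uint8_t d0 = (data >> 0) & 1;   /* stored at position 3 */
    uint8_t d1 = (data >> 1) & 1;   /* position 5 */
    uint8_t d2 = (data >> 2) & 1;   /* position 6 */
    uint8_t d3 = (data >> 3) & 1;   /* position 7 */

    uint8_t p1 = d0 ^ d1 ^ d3;      /* covers positions 1,3,5,7 */
    uint8_t p2 = d0 ^ d2 ^ d3;      /* covers positions 2,3,6,7 */
    uint8_t p4 = d1 ^ d2 ^ d3;      /* covers positions 4,5,6,7 */

    return (uint8_t)((p1 << 0) | (p2 << 1) | (d0 << 2) |
                     (p4 << 3) | (d1 << 4) | (d2 << 5) | (d3 << 6));
}

/* Recompute the parity checks; a nonzero syndrome is the 1-based
   position of the single flipped bit. */
static uint8_t hamming74_syndrome(uint8_t cw)
{
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t s4 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    return (uint8_t)(s1 | (s2 << 1) | (s4 << 2));
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xB);   /* data bits 1011 -> 0x55 */
    cw ^= 1 << 4;                         /* simulate a flip at position 5 */

    uint8_t syn = hamming74_syndrome(cw); /* syndrome = 5, the flipped bit */
    if (syn)
        cw ^= 1 << (syn - 1);             /* correct it in place */

    printf("corrected codeword: 0x%02X, syndrome was %u\n", cw, syn);
    return 0;
}
```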

In mission-critical industries such as the financial sector, ECC memory can make a big difference. Imagine you are editing the information in a confidential customer account and then exchanging that information with other financial institutions. As you send the data, suppose a binary digit gets flipped by some kind of electrical interference. ECC server memory helps preserve the integrity of your data, preventing corruption as well as system crashes and failures.

This concludes our article on Intel Xeon and everything you need to know about these processors. Remember to share it on social media so it can help more users who need it.
