
NNP, DLBoost and Keem Bay: new Intel chips for AI and neural networks


Intel announced new dedicated AI hardware, aimed beyond the mass market, at its AI Summit event on November 12: NNP, DLBoost and Keem Bay. These products are without a doubt the culmination of more than three years of work since the acquisitions of Movidius and Nervana in the second half of 2016 and the creation of its AI Products Group, led by Naveen Rao, co-founder of Nervana.


Rao noted that Intel is already a big player in the AI sector and that its AI revenue will exceed $3.5 billion in 2019, up from more than $1 billion in 2017. Intel already has hardware for every front: OpenVINO for IoT, Agilex FPGAs, Ice Lake on the PC, DLBoost on Cascade Lake and, further out, its future discrete graphics.

Processors: DLBoost

Intel demonstrated bfloat16 support on the 56-core Cooper Lake, due out next year, as part of the DLBoost set of AI features in its processors. Bfloat16 is a numerical format that achieves accuracy similar to single-precision floating point (FP32) in AI training.
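To make the format concrete, here is a minimal Python sketch of the bfloat16 idea: keep FP32's 8-bit exponent (and so its full dynamic range) but cut the mantissa from 23 bits to 7. Real hardware typically rounds to nearest even rather than truncating; this simplified version truncates for clarity and is an illustration, not Intel's implementation.

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an FP32 value to bfloat16 by keeping its top 16 bits.

    bfloat16 keeps FP32's 8-bit exponent (same dynamic range) but
    only 7 mantissa bits, trading precision for range.
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw FP32 bits
    return bits >> 16  # drop the low 16 mantissa bits

def bfloat16_bits_to_fp32(b: int) -> float:
    """Expand bfloat16 bits back to FP32 by zero-padding the mantissa."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

x = 3.14159265
bf = bfloat16_bits_to_fp32(fp32_to_bfloat16_bits(x))
print(x, "->", bf)  # 3.14159265 -> 3.140625
```

Because the exponent field is untouched, bfloat16 covers the same numeric range as FP32, which is why training workloads tend to tolerate the lost mantissa bits.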

Intel did not provide an estimate of the performance improvement, but it did state that, for inference, Cooper Lake is 30 times faster than Skylake-SP. On the PC side, Ice Lake incorporates the same DLBoost AVX-512_VNNI instructions found in Cascade Lake.
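For context, AVX-512_VNNI fuses the INT8 multiply-accumulate pattern at the heart of quantized inference into a single instruction (VPDPBUSD). The sketch below models what one such step computes per 32-bit lane; it is a plain-Python illustration of the arithmetic, not how the instruction is actually invoked.

```python
def vpdpbusd_like(acc: list[int], a_u8: list[int], b_s8: list[int]) -> list[int]:
    """Model of what one AVX-512 VNNI VPDPBUSD step computes: for each
    32-bit lane, four u8*s8 products are summed into the existing s32
    accumulator, replacing the older three-instruction sequence
    (VPMADDUBSW + VPMADDWD + VPADDD) with one fused instruction.
    """
    out = []
    for lane, acc_val in enumerate(acc):
        base = lane * 4
        dot = sum(a_u8[base + i] * b_s8[base + i] for i in range(4))
        out.append(acc_val + dot)
    return out

# 2 lanes of 4 byte-pairs each (a real 512-bit register holds 16 lanes)
acc = [0, 100]
a = [1, 2, 3, 4, 5, 6, 7, 8]     # unsigned 8-bit activations
b = [10, -1, 2, 0, 1, 1, 1, -1]  # signed 8-bit weights
print(vpdpbusd_like(acc, a, b))  # [14, 110]
```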

Movidius: Keem Bay VPU

Intel acquired Movidius in 2016 as part of its strategy for artificial intelligence at the edge, in devices such as smart cameras, robots, drones, and VR/AR. Movidius calls its low-power chips "vision processing units" (VPUs). They feature image signal processing (ISP) capabilities, hardware accelerators, MIPS processors, and 128-bit programmable vector (VLIW) processors that it calls SHAVE cores.


Intel has now detailed what it calls the 'Gen 3' Intel Movidius VPU, codenamed Keem Bay. According to Intel, it delivers more than 10 times the inference performance of the Myriad X while consuming the same amount of power.

Nervana Neural Network Processors (NNP)

Intel has NNPs for both the training and the inference of deep neural networks. Intel's NNP-I for inference is based on two Ice Lake Sunny Cove cores and twelve ICE accelerator cores, and Intel claims it will deliver great performance per watt and compute density. In its M.2 form factor it is capable of 50 TOPS at 12 W, which Intel equates to the previously announced 4.8 TOPS/W. Intel revealed that the PCIe-card form factor consumes 75 W and delivers up to 170 TOPS (at INT8 precision).
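As a quick sanity check on those figures, dividing the quoted throughput by the quoted power gives the implied efficiency of each form factor. Note that the straight division for the M.2 card comes out closer to 4.2 TOPS/W than the previously announced 4.8, which presumably assumed a slightly lower power target:

```python
# Performance per watt implied by the quoted NNP-I figures.
m2_tops, m2_watts = 50, 12        # M.2 form factor
pcie_tops, pcie_watts = 170, 75   # PCIe card, INT8

print(f"M.2:  {m2_tops / m2_watts:.2f} TOPS/W")    # ~4.17 TOPS/W
print(f"PCIe: {pcie_tops / pcie_watts:.2f} TOPS/W") # ~2.27 TOPS/W
```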

Intel also reiterated its near-linear scaling efficiency of 95% across 32 cards, compared to the 73% it cites for Nvidia.
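Taking the usual definition of scaling efficiency (realized speedup divided by the ideal linear speedup), those percentages translate into effective speedups as follows:

```python
def effective_speedup(n_cards: int, efficiency: float) -> float:
    """Speedup implied by a scaling efficiency: ideal linear scaling
    would give n_cards x, so the realized speedup is scaled down by
    the efficiency factor."""
    return n_cards * efficiency

print(effective_speedup(32, 0.95))  # 30.4x, Intel's claimed figure
print(effective_speedup(32, 0.73))  # 23.4x, the figure cited for Nvidia
```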

Intel has prepared a wide assortment of chips for every front (AI, 5G, neural networks, autonomous driving and more) in a market estimated to generate $10 billion in revenue this year. We will keep you informed.

Source: Tom's Hardware
