Intel has launched its 5th-generation Xeon Scalable processors, which are designed to run AI on CPUs.
The new chips offer more cores, a larger cache, and improved machine learning capabilities.
Intel claims that its 5th-gen Xeons are up to 1.4x faster at AI inference than the previous generation.
The company has also made architectural improvements to boost performance and efficiency.
Intel is positioning the processors as the best CPUs for AI and aims to attract customers who are struggling to access dedicated AI accelerators.
The chips feature Advanced Matrix Extensions (AMX) instructions for AI acceleration.
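Whether software can actually use AMX depends on the CPU advertising the relevant feature bits. On Linux, the kernel surfaces these as `amx_tile`, `amx_int8`, and `amx_bf16` flags in `/proc/cpuinfo`; the sketch below is a minimal, Linux-only detection helper, not part of any Intel toolkit.

```python
# Minimal sketch: check whether the running CPU advertises Intel AMX
# support by scanning the Linux kernel's /proc/cpuinfo feature flags.
# The flag names are the ones the kernel exposes; on non-Linux systems
# (or pre-AMX CPUs) every flag simply reports False.
AMX_FLAGS = ("amx_tile", "amx_int8", "amx_bf16")

def detect_amx(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        # /proc/cpuinfo unavailable: assume no AMX support.
        return {flag: False for flag in AMX_FLAGS}
    # Each "flags" line lists every feature bit the kernel detected.
    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return {flag: flag in flags for flag in AMX_FLAGS}

if __name__ == "__main__":
    for flag, present in detect_amx().items():
        print(f"{flag}: {'yes' if present else 'no'}")
```

Frameworks such as oneDNN perform an equivalent check internally before dispatching to AMX-accelerated kernels.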
Building on the Sapphire Rapids chips launched earlier in 2023, Intel says its 5th-gen Xeons deliver acceptable latencies across a wide range of machine learning applications.
The new chips have up to 64 cores and a larger L3 cache of up to 320MB.
Intel has extended support for faster DDR5 memory, delivering peak bandwidth of 368 GB/s.
Intel claims that its 5th-gen Xeons offer up to 2.5x the performance of AMD's Epyc processors in a core-for-core comparison.
The company is promoting the use of CPUs for AI inferencing and has improved the capabilities of its AMX accelerators.
Intel's 5th-gen Xeons can also run smaller AI models on CPUs, although memory bandwidth and latency become the limiting factors for these workloads.
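The memory-bandwidth constraint can be sketched with a back-of-envelope estimate: autoregressive decoding reads every weight once per generated token, so peak bandwidth divided by model size gives a rough upper bound on tokens per second. The model size and precision below are illustrative assumptions, not figures from the article.

```python
def tokens_per_second(bandwidth_gbs, params_billions, bytes_per_param=2):
    """Rough upper bound for memory-bandwidth-bound decoding:
    each generated token streams every weight from memory once."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gbs * 1e9 / model_bytes

# Illustrative assumption: a 7B-parameter model in BF16 (2 bytes/param)
# against the 368 GB/s peak bandwidth quoted above; real-world throughput
# will be lower.
print(round(tokens_per_second(368, 7), 1))  # ~26 tokens/s ceiling
```

This is why the article's caveat matters: for small models the bound is set by the memory subsystem, not by how fast the AMX units can do the math.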
Source: https://www.theregister.com/2023/12/14/intel_xeon_ai/