Intel has provided a sneak peek into its upcoming Meteor Lake chips, shedding light on their AI processing capabilities. The company aims to leverage AI acceleration to enhance power efficiency and performance for local AI workloads, and the chips are expected to debut in laptops before making their way to desktop PCs.
With competitors like Apple and AMD already incorporating powerful AI acceleration engines into their silicon, Intel is determined not to be left behind. Recognizing the growing demand for AI capabilities in PCs, Intel has developed a custom acceleration block for its consumer PC chips. The focus of this latest effort is the VPU, an integral component of Meteor Lake's system-on-a-chip tile.
Intel’s Meteor Lake chips employ a blended chiplet-based design that combines Intel and TSMC technologies within a single package. The VPU, along with other features such as I/O, GNA cores, and memory controllers, contributes to the overall performance of the chip. The silicon is fabricated on TSMC’s N6 process, with approximately 30% of the die area allocated to the VPU, even though it may take time for developers to fully exploit its potential.
The chip’s block diagram, believed to reflect Meteor Lake’s design, reveals the inclusion of Intel’s low-power AI acceleration block, the Gaussian & Neural Accelerator (GNA) 3.5. The new VPU block, based on Movidius technology, is also present. Designed for sustained AI workloads, the VPU works in conjunction with the CPU, GPU, and GNA engines, enabling a wide range of AI tasks to be executed efficiently.
Intel emphasizes that the VPU primarily handles background tasks, with the GPU taking charge of more parallelized workloads. The CPU, on the other hand, focuses on low-latency inference work. Intel has implemented a mechanism that allows developers to target different compute layers depending on the specific requirements of the application, resulting in improved performance and reduced power consumption — an essential objective in AI acceleration.
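Intel has not spelled out the exact developer interface for this targeting mechanism, but its OpenVINO toolkit already exposes per-device model compilation, which makes it a plausible path. Below is a minimal sketch assuming an OpenVINO-style workflow in which the VPU shows up as an additional inference device; the "NPU" device name and the model file are placeholders, not confirmed Meteor Lake details.

```python
# Sketch only: assumes OpenVINO-style device selection. The "NPU"/VPU device
# name and the model file are illustrative, not confirmed by Intel.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU', ...]

model = core.read_model("background_blur.xml")  # hypothetical IR model

# Prefer the low-power VPU for sustained background work, then fall back to
# the GPU for parallel throughput, then the CPU for low-latency inference.
for device in ("NPU", "GPU", "CPU"):
    if device in core.available_devices:
        compiled = core.compile_model(model, device_name=device)
        break

request = compiled.create_infer_request()
frame = np.zeros((1, 3, 720, 1280), dtype=np.float32)  # placeholder frame
result = request.infer([frame])
```

The fallback order in the sketch mirrors Intel's description of the split: sustained background tasks on the VPU, heavily parallelized work on the GPU, and low-latency inference on the CPU.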
Although Intel’s chips currently employ GNA blocks for low-power AI inference in audio and video processing, the company has already started running GNA-specific code on the VPU with promising results. This development raises the possibility of Intel fully transitioning to the VPU in future chips and discontinuing the GNA engine.
Moreover, Intel has disclosed that Meteor Lake will incorporate a coherent fabric, facilitating a unified memory subsystem and seamless data sharing between compute elements. This feature aligns with the strategies of competitors like Apple and AMD in the AI CPU space, exemplified by Apple’s M-series and AMD’s Ryzen 7040 chips.
To showcase the benefits of the VPU, Intel presented a demo in which the onboard VPU delivered a significant reduction in power consumption during Advanced Blur processing. In another demonstration, the VPU accelerated Stable Diffusion, an AI model that generates images from text, producing artwork noticeably faster than running the model without the VPU.
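Intel has not published the code behind the Stable Diffusion demo, but a similar experiment can be pictured with an off-the-shelf pipeline. The sketch below assumes Hugging Face's optimum-intel integration for OpenVINO; the checkpoint ID and the "NPU" device name are illustrative assumptions, not details from Intel's demo.

```python
# Illustrative only: optimum-intel's OpenVINO pipeline is an assumption here,
# as is the "NPU" device name standing in for the Meteor Lake VPU.
from optimum.intel import OVStableDiffusionPipeline

pipe = OVStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint, not Intel's demo model
    export=True,                       # convert the model to OpenVINO IR on load
)
pipe.to("NPU")  # target the VPU; swap in "GPU" or "CPU" if it is unavailable

image = pipe("an astronaut riding a horse, oil painting").images[0]
image.save("result.png")
```

Timing the same prompt with the pipeline pointed at the CPU versus the VPU device would give a rough version of the comparison Intel showed on stage.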
The Intel chip used in the Computex 2023 demo boasts 16 cores and 22 threads. It features a base clock of 3.1 GHz and drops to 0.37 GHz at idle. The processor is equipped with 1.6MB of L1 cache, 18MB of L2 cache, and 24MB of L3 cache.
Additionally, Intel has hinted at an embedded GPU in the Meteor Lake chips, which is anticipated to be a version of the company’s Arc graphics chip. The embedded GPU is expected to support features like DX12 Ultimate, ray tracing, and XeSS. However, the GPU’s performance in ray tracing might not match that of discrete GPUs in the market.