NVIDIA and Intel Accelerate AI Computing with Hopper H100 GPUs and 4th Generation Xeon CPUs: A 25x Efficiency Boost

NVIDIA announced an updated line of NVIDIA Hopper accelerated computing systems powered by its H100 Tensor Core GPU and Intel's 4th Gen Xeon Scalable processors. The new hardware combination is touted to offer up to 25 times more efficiency than previous-generation machines.

NVIDIA’s new system, the NVIDIA DGX H100, is powered by Intel’s 4th Gen Xeon Scalable processors, launched alongside the “Intel Xeon CPU Max Series” and “Intel Data Center GPU Max Series,” to boost data centre performance, efficiency, security and AI capability. This new generation of processors brings new capabilities to the cloud, network, edge and the world’s most powerful supercomputers.

One of the major new features of Intel’s 4th Gen Xeon processors is support for PCIe 5.0, which doubles data-transfer bandwidth between CPU and GPU compared with PCIe 4.0. This enables higher GPU density and faster networking within each server, boosting performance for data-intensive workloads such as AI, supporting network speeds of up to 400 Gbit/s per connection, and accelerating data transfers between servers and storage.
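To put the “doubles the speed” claim in concrete terms, here is a rough sketch of per-direction x16 link bandwidth across PCIe generations. The figures (per-lane GT/s rates and 128b/130b encoding) come from the PCIe specifications themselves, not from NVIDIA’s or Intel’s announcement, and ignore protocol overhead beyond line encoding:

```python
def pcie_x16_bandwidth_gbs(gen: int) -> float:
    """Approximate usable one-way bandwidth of a 16-lane PCIe link in GB/s."""
    # Per-lane raw rate in GT/s and line-encoding efficiency per generation.
    specs = {
        3: (8.0, 128 / 130),   # Gen3: 8 GT/s, 128b/130b encoding
        4: (16.0, 128 / 130),  # Gen4: 16 GT/s, 128b/130b encoding
        5: (32.0, 128 / 130),  # Gen5: 32 GT/s, 128b/130b encoding
    }
    rate_gt, efficiency = specs[gen]
    lanes = 16
    return rate_gt * efficiency * lanes / 8  # convert bits to bytes

print(f"PCIe 4.0 x16: ~{pcie_x16_bandwidth_gbs(4):.0f} GB/s")
print(f"PCIe 5.0 x16: ~{pcie_x16_bandwidth_gbs(5):.0f} GB/s")
```

Because only the per-lane signalling rate changes between Gen4 and Gen5, the usable bandwidth doubles exactly, from roughly 32 GB/s to roughly 63 GB/s per direction on an x16 link.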

The NVIDIA DGX H100 system pairs Intel CPUs with eight NVIDIA H100 GPUs. The NVIDIA H100 GPU is the company’s most powerful chip to date, with over 80 billion transistors. It also offers features suited to high-performance computing workloads, such as the built-in Transformer Engine and the highly scalable NVLink interconnect, to power large-scale AI models, recommender systems, and more.

In addition to the NVIDIA DGX H100 system, several of NVIDIA’s partners have announced their own server systems based on this hardware combination. Companies including ASUS, Atos, Cisco, Dell, Fujitsu, GIGABYTE, HPE, Lenovo, and Quanta have all announced systems pairing 4th Gen Xeon CPUs with H100 GPUs.

NVIDIA claims the new system can run workloads up to 25 times more efficiently than traditional CPU-only servers, delivering far better performance per watt. Compared with the previous-generation NVIDIA DGX system, the latest hardware is also 3.5x more efficient for AI training and inference workloads, with a nearly 3x lower cost of ownership.

The software that drives NVIDIA’s new system is also substantial. All new DGX H100 systems come with a licence for NVIDIA AI Enterprise, a cloud-native suite of AI development tools and deployment software that provides users with a complete platform for their AI initiatives.

Additionally, according to NVIDIA, customers can combine multiple DGX H100 systems into the NVIDIA DGX SuperPOD platform, a turnkey supercomputing platform that delivers up to 1 exaflop of AI performance.

With these announcements, it’s clear that both Intel and NVIDIA are establishing themselves as leaders in the HPC and data centre space, focusing on performance, efficiency, and AI capabilities. The combination of Intel’s new Xeon processors and NVIDIA’s H100 GPUs provides a powerful platform for enterprise customers, researchers, and organizations looking to accelerate AI workloads.

Avinash A
Meet Avinash, a tech editor with a Master's in Computer Science and a passion for futuristic tech, AI, and Machine Learning. Known for making complex tech easy to understand, he's a respected voice in leading tech publications and podcasts. When he's not deciphering the latest AI trends, Avinash indulges in building robots and dreaming up the next big tech breakthrough.
