
HPE Unveils Turnkey Solution Powered by NVIDIA GH200 for AI Training


Hewlett Packard Enterprise (HPE) has unveiled a supercomputing solution for generative artificial intelligence. Aimed at large enterprises, research institutions, and government organizations, the solution is designed to accelerate the training and fine-tuning of artificial intelligence (AI) models on proprietary data sets.

The included software suite enables customers to train and optimize models as well as build AI applications. The solution also bundles liquid-cooled supercomputers with accelerated compute, networking, and storage, along with services that help businesses unlock the value of AI faster.

According to Justin Hotard, Executive Vice President and General Manager, HPC, AI & Labs at Hewlett Packard Enterprise, “The world’s leading companies and research centers are training and tuning AI models to drive innovation and unlock breakthroughs in research, but doing so effectively and efficiently requires purpose-built solutions. To support generative AI, organizations need sustainable solutions that deliver the dedicated performance and scale of a supercomputer for AI model training. We are excited to expand our relationship with NVIDIA to deliver a turnkey AI-native solution that will help our customers significantly accelerate the training of AI models and the results that follow.”

This supercomputing system for generative AI includes crucial components such as software tools for building AI applications, customizing pre-built models, and developing and modifying code. The software is integrated with HPE Cray supercomputing technology, built on the same powerful architecture used in the world’s fastest supercomputer and powered by NVIDIA Grace Hopper GH200 Superchips. The solution delivers the scale and performance required for major AI workloads, such as training large language models (LLMs) and deep learning recommendation models (DLRMs).

According to HPE, the open-source Llama 2 model with 70 billion parameters was fine-tuned on the new system in less than three minutes using the HPE Machine Learning Development Environment, which translates directly into faster time-to-value for clients. HPE’s supercomputing capabilities, complemented by NVIDIA technologies, increase system performance by a factor of two to three.
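The announcement does not describe the fine-tuning workflow itself. The HPE Machine Learning Development Environment is built on the open-source Determined training platform, where a training job is typically described by an experiment configuration and submitted to the cluster. The sketch below is a hedged illustration only: the experiment name, GPU count, entrypoint, and hyperparameter values are assumptions, not HPE’s published recipe.

```python
# Hypothetical sketch of a Determined-style experiment configuration for a
# multi-GPU fine-tuning run. Every value below is illustrative; HPE has not
# published the actual configuration used for the Llama 2 70B result.

experiment_config = {
    "name": "llama2-70b-finetune",            # illustrative experiment name
    "entrypoint": "python3 finetune.py",      # user-supplied training script
    "resources": {"slots_per_trial": 64},     # GPUs dedicated to this one trial
    "searcher": {
        "name": "single",                     # one trial, no hyperparameter sweep
        "metric": "validation_loss",
    },
    "hyperparameters": {
        "learning_rate": 1e-5,
        "global_batch_size": 128,
    },
}

# Submission would look roughly like this (requires a running Determined
# cluster, so it is left commented out here):
# from determined.experimental import client
# client.create_experiment(config=experiment_config, model_dir="./")
```

Describing the job declaratively is what lets the platform dedicate a fixed pool of accelerators to a single trial, matching the article’s point about committing full node capacity to one workload.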

“Generative AI is transforming every industrial and scientific endeavor,” said Ian Buck, Vice President of Hyperscale and HPC at NVIDIA. “Customers will receive the performance necessary to achieve breakthroughs in their generative AI initiatives thanks to NVIDIA’s collaboration with HPE on this turnkey AI training and simulation solution powered by NVIDIA GH200 Grace Hopper Superchips.”

An Integrated AI Solution

The supercomputing solution for generative AI is an AI-native offering that is purpose-built and integrated. It encompasses all of the following end-to-end technologies and services:

AI/ML acceleration software – A suite of three software tools that users can employ to train and tune AI models and build their own AI applications.

The HPE Machine Learning Development Environment is a machine learning (ML) software platform that simplifies data preparation and helps clients design and deploy AI models more quickly by integrating with major ML frameworks.

NVIDIA AI Enterprise helps enterprises reach the cutting edge of AI more quickly while providing security, stability, manageability, and support. It offers broad frameworks, pretrained models, and tools that streamline the development and deployment of production AI.

The HPE Cray Programming Environment suite provides developers with a comprehensive collection of tools for porting, debugging, and optimizing applications.

Designed for scale – The solution is based on the exascale-class HPE Cray EX2500 system and features the industry-leading NVIDIA GH200 Grace Hopper Superchips. It can scale up to thousands of graphics processing units (GPUs) and has the ability to dedicate the full capacity of nodes to supporting a single AI workload, which results in a faster time-to-value.

A network for real-time AI – HPE Slingshot Interconnect is an open, Ethernet-based high-performance network designed to accommodate exascale-class applications. Based on technology developed by HPE Cray, this customizable interconnect enables extremely high-speed networking, which boosts performance across the entire system.

Simplified and turnkey AI adoption – The solution is complemented by HPE Complete Care Services, which offers local and worldwide expertise to simplify AI adoption through setup, installation, and the full lifecycle.

Sustainability

The growth of AI workloads is expected to require around 20 gigawatts of additional data center power by 2028. Organizations will need solutions that deliver a new level of energy efficiency to minimize the impact on their carbon footprint.

Energy efficiency is central to HPE’s computing ambitions, reflected in its delivery of solutions with liquid-cooling capabilities. According to HPE, these capabilities can deliver a performance boost of “up to 20% per kilowatt when compared to air-cooled solutions and can reduce power consumption by 15%.”
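Reading the quoted figures as a 20% gain in performance per kilowatt combined with a 15% reduction in power draw (one plausible interpretation; HPE does not spell out the baseline), a short worked example shows how the two percentages combine. The baseline numbers below are hypothetical placeholders:

```python
# Illustrative arithmetic only: the 20% perf-per-kW gain and 15% power
# reduction are HPE's quoted figures; the baseline performance and power
# values are hypothetical, chosen just to make the math concrete.

def liquid_cooled_metrics(air_perf: float, air_power_kw: float,
                          perf_per_kw_gain: float = 0.20,
                          power_reduction: float = 0.15):
    """Derive liquid-cooled power draw and performance from an
    air-cooled baseline, given the quoted relative improvements."""
    dlc_power_kw = air_power_kw * (1 - power_reduction)
    air_perf_per_kw = air_perf / air_power_kw
    dlc_perf_per_kw = air_perf_per_kw * (1 + perf_per_kw_gain)
    dlc_perf = dlc_perf_per_kw * dlc_power_kw
    return dlc_power_kw, dlc_perf_per_kw, dlc_perf

# Hypothetical baseline: 100 units of performance at 10 kW air-cooled.
power, ppkw, perf = liquid_cooled_metrics(100.0, 10.0)
# power → 8.5 kW, perf/kW → 12.0, delivered performance → 102.0
```

Under this reading, the system draws 15% less power while delivering slightly more total performance, since the per-kilowatt gain outweighs the reduced power budget.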

HPE today supplies the majority of the world’s top 10 most efficient supercomputers using direct liquid cooling (DLC), a technology it developed. DLC is featured in the supercomputing solution for generative AI to cool systems efficiently while lowering energy consumption for compute-intensive applications.

Availability

This supercomputing solution for generative AI will be generally available in December 2023 through HPE in over 30 countries.
