HPE, Nvidia take ‘AI-native’ architecture to the enterprise with GenAI stack

  • The stack uses Nvidia’s new AI platform

  • The platform is slated to be available in the first quarter of 2024

  • HPE and Nvidia are also collaborating on an AI-enabled digital twin tool through the HPE GreenLake Flex platform

Long-standing computing collaborators HPE and Nvidia announced at HPE Discover Barcelona that they are building a full-stack enterprise computing offering for generative AI (GenAI) applications. 

The co-engineered, pre-configured stack aims to give businesses flexibility in their AI model deployments and is “designed specifically to hit the sweet spot of enterprise use cases,” HPE EVP and GM of Compute Neil MacDonald said in a press briefing.

The company’s CPO of AI Evan Sparks added, “Our view at HPE is that AI requires a fundamentally different architecture because the workload is fundamentally different than the classic transaction processing and web services workloads that have become so dominant in computing over the last couple of decades.”

The shift requires “full-stack thinking” that spans hardware and software, an open ecosystem and a hybrid “harmony” between AI and traditional applications. Simply put, it requires AI-native architecture, he noted.

“For enterprises to effectively incorporate generative AI… they will need to extend these cloud-native approaches to include this AI-native architecture,” Sparks said.

The stack uses Nvidia’s new AI platform, Spectrum-X Ethernet. Nvidia recently named HPE, Dell and Lenovo as some of the tool's initial adopters. The GenAI suite will also integrate a number of other computing power tools like HPE’s Cray Supercomputers (from its 2019 acquisition of Cray), its Ezmeral software, Nvidia’s NeMo framework and more.

The platform is slated to be available in the first quarter of 2024.

Mohan Rajagopalan, HPE VP and GM of Ezmeral Software, explained that one of the biggest observed customer problems is the chaos of an uncontrolled distributed data environment. HPE's platform will extend its data fabric technology to “provide a single pane of glass… a single unified experience to manage, govern and access data no matter where it’s produced.”

HPE and Nvidia are also collaborating on an AI-enabled digital twin tool through the HPE GreenLake Flex platform to help organizations with the design, simulation and optimization of various products and processes in real-time before going into production. 

The announcements display a continued focus on AI this year from HPE.

At its Discover Las Vegas event held this past summer, the company officially announced its move into the AI cloud market, unveiling its GreenLake for large language models (LLMs) — a move HPE CEO Antonio Neri called “one of the boldest bets in the history of our company.”

451 Research infrastructure analyst John Abbott told Silverlinings at the time that GreenLake for LLMs — while facing potential rivalry from competing hyperscalers like Microsoft and Google — provided a more “stripped down” and modular system that’s “easier to upgrade.” Given the AI sector’s unprecedented pace of adoption, that quality is a strong one. He also pointed to the company’s lack of “punitive data egress fees” and its infrastructure’s ability to handle large-scale supercomputer and AI jobs as factors that “might give HPE an advantage.”

Only a few months later, the company is already building on these investments. MacDonald stressed during the briefing, “Ultimately, we feel like enterprises are either going to become AI-powered, or they're going to become obsolete.”