- "AI re-platforming" is a term increasingly being thrown around by executives and analysts
- It refers to AI-driven evolution – rather than replacement – of hardware, software and applications
- Vertical integration across all these layers could completely change the face of the industry's value chain
If you’re anything like us, you’ve started hearing a lot of talk about “AI re-platforming.” But what the heck does that term even mean? And what does it look like in practice?
Turns out “AI re-platforming,” the latest seismic shift in the tech world, can refer to a few different things. At its core, the term refers to the forced evolution of existing technology stacks to “fully harness AI’s potential,” as Goldman Sachs recently put it.
There are three-ish layers to this evolution: hardware, infrastructure software and applications.
Jack Gold, founder and principal at J. Gold Associates, told Fierce that AI re-platforming in practice on the hardware side can include “upgrading a server, perhaps by adding a GPU or more memory so that it can functionally process AI better than the older, existing CPU hardware.”
Indeed, the industry has already seen plenty of this (hello, Nvidia’s rising star), and hyperscalers who capitalized on the last platform shift toward the cloud have acknowledged the changing landscape in their earnings calls.
Just last week, Microsoft CEO Satya Nadella expressed a semblance of relief that the pivot to AI builds on rather than replaces existing cloud infrastructure.
“Our largest business is our infrastructure business, and the good news here is the next big platform shift builds on that. It’s not a complete rebuild,” Nadella said.
Or, as AvidThink Founder Roy Chua explained, while GPUs, TPUs and custom silicon get all the love these days, the CPUs the cloud is built on are still "far from obsolete."
"CPUs are evolving to incorporate instruction set architecture (ISA) improvements for matrix multiplication and vector instructions to better support AI workloads. They are also indispensable for orchestrating logic, I/O, and the wide range of non-AI processes that keep systems running," Chua said.
Goldman Sachs noted the current AI hardware trend is consistent with previous technology shifts wherein “virtually all financial gains are first captured by semiconductor and hardware companies” before attention and investments turn toward software.
What’s next: Software bonanza
According to Goldman Sachs, infrastructure software is poised to be the next big area of innovation and investment, since it is key to bridging the gap between the hardware layer and the applications that run on top.
If the shift to the cloud is any indication, this will translate to BIG business for players in the market. “In fact, ~60% of the Total Addressable Market (TAM) value in the Cloud Era accrued to the infrastructure software layer,” Goldman Sachs wrote, referring to infrastructure-as-a-service, software-as-a-service (SaaS) and platform-as-a-service offerings. The firm pointed to new ecosystem offerings like inference-as-a-service and agentic infrastructure as emerging categories of interest.
In a recent forecast, Gartner predicted spending on generative AI software will increase nearly 94% in 2025 to $37.16 billion, while services spending will jump a whopping 162.6% to $27.8 billion. While that’s still a fraction of the $180.6 billion that will be spent on generative AI servers, both segments are growing much faster than the 33% gains in the server sector.
Adapting applications for AI
On the application front, both Chua and Gold said AI re-platforming looks more like optimizing an app for AI without rebuilding the whole thing from scratch. It’s kind of like replacing the doors on your kitchen cabinets rather than gutting the whole room, Gold quipped.
“Usually this is talked about in terms of how you can add AI capability to the app without rewriting the entire app, so it’s about adding the ability to bolt-on additional functionality,” he explained. Going this route rather than opting for a complete re-architecting of apps reduces costs and saves on time to market, Gold added.
“You’re just adding new interfaces, APIs, ways to get to the internal data, etc.,” he said. Sure, the return on investment might be slightly lower than building an AI-native app from scratch, but it’s faster and more financially appealing.
For instance, re-platforming of an app might involve using AI to add natural language interfaces, AI copilots, personalization and automation, Chua added.
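In code, the “bolt-on” approach Gold and Chua describe might look something like the hypothetical sketch below (the function and order data are invented for illustration): the app’s existing logic is left untouched, and a thin natural-language interface is layered in front of it. Here a simple keyword stub stands in for a real model call.

```python
# Hypothetical sketch of "bolt-on" AI re-platforming: the existing
# application logic is untouched; a new natural-language interface
# is added on top via a thin adapter.

# --- Existing app code (unchanged by the re-platforming) ---
ORDERS = {"1001": "shipped", "1002": "processing"}

def get_order_status(order_id: str) -> str:
    """Legacy function: the app's existing internal API."""
    return ORDERS.get(order_id, "unknown")

# --- New bolt-on layer ---
def nl_interface(question: str) -> str:
    """Maps a natural-language question to the legacy call.
    A real deployment would call an LLM here; this keyword
    stub just stands in for that model call."""
    for token in question.split():
        digits = token.strip("?#.,")
        if digits.isdigit():
            return f"Order {digits} is {get_order_status(digits)}."
    return "Sorry, I couldn't find an order number in your question."

print(nl_interface("What's the status of order #1001?"))
# → Order 1001 is shipped.
```

The point of the pattern is that `get_order_status` and its data never change; the AI capability arrives purely through the new interface, which is why Gold compares it to swapping cabinet doors rather than gutting the kitchen.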
The biggest changes, however, will come from companies that find ways to integrate vertically across the hardware, software and application layers, or "from silicon to SaaS," Chua said.
"Vertical integration, particularly among hyperscalers, will continue to reshape the value chain as the wave of cloudification did a decade-plus ago," he concluded. "I see cloud providers offering ever more tightly coupled AI infrastructure and model services. But I also see the increased verticalization and centralization balanced by innovation in low-power, distributed inference hardware from startups like Groq, Graphcore, SambaNova and Cerebras, which could decentralize where AI workloads are run. Qualcomm and other device silicon providers are also banking on this."