Do AI/ML PC chips pose a threat to cloud usage, revenue?

Today, the vast majority of AI and machine learning (ML) work happens on the cloud, but will new AI/ML PC chips change that, and if so, will those chips negatively impact cloud providers?

When Panos Panay, Microsoft's chief product officer in charge of Windows and Microsoft hardware, said at AMD's CES 2023 keynote, "AI is going to reinvent how you do everything on Windows, quite literally," people paid attention.

Panay is Microsoft's main man for Windows. He meets with Microsoft CEO Satya Nadella regularly, so if he says the Windows PC's future is AI, it's time to listen up.

Not the kind of chips you find at Best Buy

What does this shift really mean for AI, the PC and the cloud? The cloud is where the neural machine horsepower — with processors such as the NVIDIA A100 Tensor Core GPU or Google's Tensor Processing Unit (TPU) v4, an AI accelerator application-specific integrated circuit (ASIC) — gets serious work done. Those are not the kind of chips you find at Best Buy.

For that reason, Julien Salinas, CTO at natural language AI company NLP Cloud, doesn't see the bulk of hard AI work moving off the cloud and onto PCs.

Salinas explained, "We use AI chips extensively from various popular cloud providers. It's very clear to me: consumer hardware will never be able to integrate the GPUs offered by cloud providers."

"The reason is price,” he explained. “AI-oriented GPUs you can find in the cloud are very expensive, and their cost would never be justified on mere PCs. Their size is just too big for PCs too. Given that Moore's Law does not really apply anymore today, nothing seems to show that these prices will go down in the future."

Salinas added this wasn't just his take. NVIDIA's CEO Jensen Huang recently said, "The idea that a chip is going to go down in cost over time, unfortunately, is a story of the past. Moore's Law is dead."

True, Salinas added, "There are some specialized AI chips for consumer hardware like the Google Coral TPU, Tensor G2, Apple M2, etc. But these chips are far from matching the performance of the NVIDIA GPUs or Google TPUs, which are required for AI training and inference today in business applications."

In short, "I don't think we will ever see ChatGPT run locally on a smartphone unless we manage to create quantum computers!"
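Some back-of-the-envelope arithmetic makes Salinas's point concrete. The sketch below uses GPT-3's published size of 175 billion parameters as an illustrative stand-in for a ChatGPT-class model; the memory figures are rough and say nothing about how any provider actually deploys its models.

```python
# Rough memory needed just to hold a large model's weights in fp16
# (2 bytes per parameter), ignoring activations and KV caches.
params = 175e9          # GPT-3-class model: 175 billion parameters
bytes_per_param = 2     # fp16 precision
weights_gb = params * bytes_per_param / 1e9

print(f"Weights alone: {weights_gb:.0f} GB")  # 350 GB

# A high-end consumer GPU offers on the order of 24 GB of memory, and an
# NVIDIA A100 offers 40-80 GB, so even datacenter inference must shard a
# model this size across many accelerators -- let alone a PC or phone.
consumer_gpu_gb = 24
print(f"Consumer GPUs needed just for weights: {weights_gb / consumer_gpu_gb:.0f}+")
```

At roughly 350 GB for the weights alone, a model of that class is more than an order of magnitude beyond any single consumer device's memory, which is the gap Salinas is pointing at.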

Yes, it's possible that, as Stephen Kleynhans, a Gartner vice president of research, predicts, an "AI engine will become as common in PCs as an embedded GPU."

We're already on our way with such chips as the Snapdragon 8+ Gen 1, Google Tensor G2, Apple M2, the newly released AMD Ryzen 7040, and Intel's forthcoming Meteor Lake chips.

But will they be capable of running TensorFlow, ClearML, PyTorch or the other top open-source AI/ML programming tools?

It seems unlikely. But that may not really be where Microsoft is going with AI.

The AI route

As Athima Chansanchai, a Microsoft technology writer, spells out, the Ryzen 7040 will be used for video collaboration, content creation, gaming and security.

While Panay didn't go into specifics about AI in Windows, his vision sounds much more like improving and transforming the Windows experience. For example, a locally AI-driven webcam can blur backgrounds, detect eye contact and automatically frame someone on a video call using Windows Studio Effects.

Earlier, at Microsoft Ignite 2022, Panay had demonstrated Voice Focus, auto-framing and background noise reduction using local AI.

So, what we can really expect to see on AI/ML-empowered PCs is not AI/ML development or high-end applications.

Instead, we'll see enhancements to Windows that Panay claimed will be as big as those created by the introduction of the Windows, Icons, Menus and Pointer (WIMP) interface in the 80s.

But even then, Panay added, the need for cloud providers won't go away. The new desktops will be turbocharged by AI computing that spans the cloud, the edge and the PC.

Got something to tell us? Send us an email here.