Can an old networking concept solve today's compute shortage?

  • Most servers in data centers are 75% to 85% idle, according to an executive at Kinesis Network
  • Pooling idle compute resources and matching them with demand could solve that problem
  • The concept isn't entirely new, but there's new urgency behind it thanks to AI demand

In a world where the need for GPUs for AI is running up against power constraints and sustainability concerns, the idea of data center servers sitting idle while still burning precious electricity seems outrageous, even hard to imagine. Yet it's true, according to executives at Kinesis Network. There is actually plenty of supply available today, they told Fierce. The parties who need it just can’t access it, and that is where Kinesis comes in.

There are two levels of compute waste today, said Baris Saydag, co-founder and CEO at Kinesis. First, not every server is leased out to a customer; those unleased servers sit idle on racks. This is especially true for Tier-2 providers of cloud compute services – the companies that aren’t household names. Second, even the servers that are leased are not being utilized 100%, he said.

“The one myth that everybody thinks is that when they are in the data center ... there is no idle compute. That is a complete misconception,” Bina Khimani, Kinesis co-founder and chief product officer, explained. “Most of these servers are 75% to 85% idle.”

The other issue is that compute power is fragmented, spread across various vendors and suppliers. Accessing it is a hassle.

Pooling idle compute resources

Enter Kinesis. Founded last year, the company has built a software orchestration layer that pools idle compute resources and matches demand with that available supply, including machines running Nvidia’s H200, H100 and A100 GPUs capable of supporting AI workloads.
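
To make the concept concrete, here is a minimal sketch in Python of how an orchestration layer of this kind might pool idle machines and match incoming demand against them. The names (IdleMachine, Request, match, the "prov-*" machine IDs) are hypothetical illustrations for this article, not Kinesis's actual software or API.

```python
from dataclasses import dataclass

@dataclass
class IdleMachine:
    """An underutilized server a supplier advertises to the pool."""
    machine_id: str
    gpu_model: str   # e.g. "H200", "H100", "A100"
    free_gpus: int   # GPUs currently sitting idle on this host

@dataclass
class Request:
    """A buyer's ask for AI-capable compute."""
    job_id: str
    gpu_model: str
    gpus_needed: int

def match(pool: list[IdleMachine], req: Request) -> list[tuple[str, int]]:
    """Greedily spread the request across idle machines with the right GPU type."""
    assignment, remaining = [], req.gpus_needed
    for m in sorted(pool, key=lambda m: -m.free_gpus):  # take the biggest idle blocks first
        if remaining == 0:
            break
        if m.gpu_model == req.gpu_model and m.free_gpus > 0:
            take = min(m.free_gpus, remaining)
            assignment.append((m.machine_id, take))
            m.free_gpus -= take
            remaining -= take
    if remaining > 0:
        raise RuntimeError(f"not enough idle {req.gpu_model} capacity for {req.job_id}")
    return assignment

# Example: two Tier-2 providers contribute idle H100 capacity; a buyer needs 10 GPUs.
pool = [IdleMachine("prov-a-01", "H100", 8), IdleMachine("prov-b-07", "H100", 4)]
print(match(pool, Request("train-job-42", "H100", 10)))
# [('prov-a-01', 8), ('prov-b-07', 2)]
```

The matching policy shown here (biggest idle blocks first) is just one possible heuristic; a real exchange would also weigh price, location and reliability.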

The idea of repurposing idle compute resources to match available supply to demand isn’t new. Last year, Fierce Network covered several startups – Akash Network and Hivenet (formerly Hive) among them – that were tackling the same problem in similar ways. These companies are chasing a share of what could be a huge market.

Indeed, Fortune Business Insights predicted the GPU-as-a-service market will grow from $4.3 billion in 2024 to $49.8 billion by 2032.

However, the question remains whether Kinesis and its ilk can find large-scale success. After all, the whole idea requires wooing not just compute suppliers, but also buyers.

Matching demand with idle capacity

Saydag and Khimani said what makes Kinesis different from the competition is that it has been built from the ground up with enterprise needs and operational patterns in mind. Competitors can provide the compute resources but leave the enterprise to sort out the orchestration element, they noted.

But Saydag and Khimani – who previously worked at Meta, Microsoft, AWS and IBM – argued that’s not how large enterprises work; they can’t be bothered. So Kinesis does all that for them, seamlessly subbing different machines in and out as their available capacity fluctuates.
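
As a rough illustration of that "subbing in and out," the sketch below shows how a hypothetical orchestrator could rebalance a running job when one supplier's idle capacity shrinks mid-run. As with the earlier example, the rebalance function and machine names are assumptions made for illustration, not a description of Kinesis's internals.

```python
def rebalance(assignment: dict[str, int], capacity: dict[str, int]) -> dict[str, int]:
    """assignment: machine -> GPUs the job is currently using.
    capacity: machine -> GPUs each machine can currently dedicate to this job.
    Clip over-committed machines and push the shortfall onto machines with
    spare room, so the job keeps its total GPU count without user involvement."""
    new = {m: min(gpus, capacity.get(m, 0)) for m, gpus in assignment.items()}
    shortfall = sum(assignment.values()) - sum(new.values())
    for m, cap in sorted(capacity.items(), key=lambda kv: -kv[1]):
        if shortfall == 0:
            break
        spare = cap - new.get(m, 0)
        if spare > 0:
            move = min(spare, shortfall)
            new[m] = new.get(m, 0) + move
            shortfall -= move
    if shortfall > 0:
        raise RuntimeError("pool no longer has enough idle capacity for this job")
    return {m: g for m, g in new.items() if g > 0}

# Supplier A's idle capacity drops from 8 to 5 GPUs mid-run; a standby machine absorbs the rest.
before = {"prov-a-01": 8, "prov-b-07": 2}
capacity = {"prov-a-01": 5, "prov-b-07": 4, "prov-c-03": 6}
print(rebalance(before, capacity))
# {'prov-a-01': 5, 'prov-b-07': 2, 'prov-c-03': 3}
```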

Looking ahead, Khimani said there are two growth paths Kinesis plans to pursue. First, it's eyeing a way to license its software to large enterprises to help them pull together and put to work all the idle compute power that exists across their global footprints. Second, it will continue to develop Kinesis Network as a Web3-oriented project to create a global compute exchange system.