What’s the deal with falling data center switch sales?

  • Data center switch sales fell in Q1 2024 for the first time in more than three years
  • Arista and Huawei managed to beat the dip
  • Spending on back-end network switches is expected to be "tremendous" this year, Dell'Oro told us

Revenue from data center switch sales dipped for the first time in more than three years in Q1 2024, despite a boom in networking demand for artificial intelligence. To hear Dell’Oro Group VP Sameh Boujelbene tell it, a combination of backlog normalization, inventory digestion and spending optimization among both cloud service providers and enterprises was to blame. But it seems some vendors managed to beat the slide.

“Arista was able to outperform the market due to its high exposure to a diverse group of Cloud Service Providers and increased penetration in major enterprise accounts, including new customer acquisitions that began contributing to Arista's revenues this quarter,” Boujelbene told Fierce.

Huawei also managed to eke out growth, and that didn’t just come courtesy of its home country.

[Chart: Data center switch sales, Q1 2024]

“All of their growth was driven by regions outside of China, specifically EMEA,” Boujelbene said of Huawei. “Most of their large project wins were with governments.”

Network nitty-gritty

The big guns – that is, 200G, 400G and 800G switches – accounted for around a quarter of revenue, with the remainder coming from switches running at speeds of 100G and below.

[Chart: Data center switch shipments, Q1 2024]

While adoption of 400G and 800G switches is expected to accelerate this year and next, Boujelbene explained the divide is partly due to how different switch speeds are used in data centers.

For front-end networks, which connect general purpose servers, speeds of 100G and below are the ticket. That’s partly because these networks are “usually compute-bound, not network-bound, meaning the compute is the bottleneck, not the network. Network utilization is usually 50% or less. That’s why there is less need for bandwidth requirement,” she explained.

So, for instance, 100G switches can be used for core aggregation or leaf and spine use cases, while 25G and 10G are applied for server access. That said, some hyperscalers are starting to use 200G and 400G for their front-end networks.
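For a rough sense of how those speeds fit together, here is a back-of-the-envelope sketch of leaf-switch oversubscription in a front-end fabric. The port counts and speeds are illustrative assumptions for this example, not figures from Dell'Oro or the article.

```python
# Back-of-the-envelope oversubscription math for a front-end leaf switch.
# Port counts and speeds are illustrative assumptions only.

SERVER_PORTS = 48        # leaf ports facing servers (assumed)
SERVER_SPEED_GBPS = 25   # typical front-end server access speed
UPLINK_PORTS = 6         # leaf ports facing the spine layer (assumed)
UPLINK_SPEED_GBPS = 100  # typical front-end aggregation speed

downlink_capacity = SERVER_PORTS * SERVER_SPEED_GBPS    # capacity toward servers
uplink_capacity = UPLINK_PORTS * UPLINK_SPEED_GBPS      # capacity toward the spine
oversubscription = downlink_capacity / uplink_capacity  # e.g. 2:1

print(f"Downlink capacity: {downlink_capacity} Gbps")
print(f"Uplink capacity:   {uplink_capacity} Gbps")
print(f"Oversubscription:  {oversubscription:.0f}:1")
```

An oversubscribed fabric like this is tolerable on the front end precisely because, as Boujelbene notes, those networks are compute-bound and typically run at 50% utilization or less.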

In general, though, switch speeds of 400G and above today are mainly used for back-end networks for accelerated servers (aka those being used for artificial intelligence). In these back-end use cases, it is the network that is the bottleneck, not the compute. Hence the need for faster speeds to maximize utilization of expensive GPUs.

“Ideally, the network needs to operate at 100% so that the very expensive accelerators (i.e. GPUs) don’t sit idle waiting for the network to respond,” Boujelbene said.
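The arithmetic behind that quote is straightforward. The sketch below estimates how much of each training step a GPU sits idle at different back-end link speeds; all figures are assumptions chosen for illustration, not measurements from Dell'Oro, and it assumes communication is not overlapped with compute.

```python
# Rough sketch of why back-end bandwidth matters for GPU utilization.
# All numbers are illustrative assumptions.

GRADIENT_BYTES = 10e9   # bytes exchanged per training step (assumed)
COMPUTE_TIME_S = 0.25   # GPU compute time per step (assumed)

def idle_fraction(link_gbps: float) -> float:
    """Fraction of each step a GPU spends waiting on the network."""
    transfer_time_s = GRADIENT_BYTES * 8 / (link_gbps * 1e9)
    return transfer_time_s / (COMPUTE_TIME_S + transfer_time_s)

for gbps in (100, 400, 800):
    print(f"{gbps}G link -> GPUs idle ~{idle_fraction(gbps):.0%} of each step")
```

Under these assumed numbers, moving from a 100G to an 800G back-end link cuts GPU idle time per step from roughly three-quarters to under a third, which is the economic logic driving spending on faster back-end switches.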

Looking at the rest of 2024, Boujelbene said Dell’Oro generally expects “soft spending in the front-end network but tremendous spending in AI back-end network.”

“We will be watching how much share Ethernet can capture in the back-end network,” she concluded.