2  L3 & Routing Trends

Created Dec 24 2025

2.1 Big Picture

There is a big trend of transitioning to P2P & routing-based transmission (vs. the old shared-medium, broadcast-based transmission):

  • OnChip: data bus → Network-on-Chip (NoC), a packet-switched network inside SoCs.
  • Motherboard: PCI parallel shared bus → PCIe serial lanes with switching between devices.
  • Network: L2 broadcast-based Ethernet → L3 point-to-point & routing in datacenter networks.

No broadcast, no sharing of transmission medium. Only individual links & intelligent routing.

2.2 Moore’s law again!

2.2.1 High bandwidth between dense units

An exponentially increasing number of small, dense active elements in silicon needs high-bandwidth communication between them.

  • Previously, with the same number of elements, we could broadcast or time-share a common medium. But a shared medium divides bandwidth across all elements, which is no longer enough.
  • To maximize bandwidth, we could run an isolated P2P link between every pair of elements, but the number of wires grows as N², which is not practical.
  • So the only way is a mesh of multi-hop interconnected P2P links, like a road network: we get P2P-like functionality while still sharing the medium/links practically, with intelligence at the junctions, called routing.
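The wiring trade-off above can be made concrete with the standard link-count formulas. A minimal sketch (the topologies and node counts here are illustrative assumptions, not from the text):

```python
# Wiring cost of a full mesh (dedicated P2P link per pair) vs. a 2D grid
# mesh (road-network style, links only between neighbors).

def full_mesh_links(n: int) -> int:
    """Dedicated link between every pair of n elements: n*(n-1)/2, i.e. O(n^2)."""
    return n * (n - 1) // 2

def grid_mesh_links(rows: int, cols: int) -> int:
    """2D grid mesh: each node connects only to its horizontal/vertical neighbors."""
    return rows * (cols - 1) + cols * (rows - 1)

for n in (16, 64, 256):
    side = int(n ** 0.5)
    print(f"{n} elements: full mesh = {full_mesh_links(n)} links, "
          f"grid mesh = {grid_mesh_links(side, side)} links")
```

At 256 elements the full mesh already needs 32,640 links versus 480 for the grid, which is why multi-hop routing wins as density grows.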

2.2.2 Compute for routing is cheap, is everywhere

Routing (or switching) is the intelligence in the control plane that decides the next hop (e.g., minimizing distance to the destination or maximizing capacity utilization of the pipes), and the intelligence in the management plane that automatically learns and maintains metadata like routing tables using protocols and algorithms such as BGP and ECMP. Such intelligence is now abundant, available in all transmission equipment as a base capability.
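The core control-plane decision can be sketched as a longest-prefix-match lookup over a routing table. This is a minimal illustration using Python's stdlib `ipaddress`; the route entries and next-hop names are made-up examples, not from the text:

```python
import ipaddress

# Hypothetical routing table: prefix -> next hop.
routes = {
    ipaddress.ip_network("10.0.0.0/8"): "core-router-1",
    ipaddress.ip_network("10.1.0.0/16"): "leaf-switch-a",
    ipaddress.ip_network("0.0.0.0/0"): "default-gateway",
}

def next_hop(dst: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("10.1.2.3"))   # the /16 is more specific than the /8
print(next_hop("192.0.2.1"))  # falls through to the default route
```

Real routers do this in hardware over tables that protocols like BGP populate; the lookup logic is the same idea.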

2.2.3 More scalability, stability & smaller blast radius

Switched networks can achieve more stability: it is easier to build multiple paths, there is more intelligence to fail over and recover (redundancy), and the system is loosely coupled (smaller blast radius), which also makes it easier to scale and upgrade.
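The multiple-paths-plus-failover idea can be sketched as ECMP-style flow hashing: a flow's 5-tuple is hashed onto one of the live paths, so a flow sticks to one path, and when a path dies only the flows hashed to it move. The path names and flow tuple below are illustrative assumptions:

```python
import hashlib

# Hypothetical set of equal-cost paths (e.g., four spine switches).
paths = ["spine-1", "spine-2", "spine-3", "spine-4"]

def pick_path(flow: tuple, live_paths: list) -> str:
    """Hash the flow 5-tuple onto one of the currently live paths."""
    digest = hashlib.sha256(repr(flow).encode()).digest()
    return live_paths[int.from_bytes(digest[:4], "big") % len(live_paths)]

# (src_ip, dst_ip, proto, sport, dport) - a made-up flow.
flow = ("10.1.2.3", "10.9.8.7", 6, 51515, 443)
print(pick_path(flow, paths))

# If one spine fails, remove it from the live set; surviving flows on other
# paths are untouched (smaller blast radius), failed ones rehash elsewhere.
print(pick_path(flow, [p for p in paths if p != "spine-1"]))
```

Production ECMP runs this hash in switch ASICs per packet; WCMP extends it with per-path weights.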

As silicon gets denser with more cores, more accelerators, and more flash storage (more elements per SoC and more chips per motherboard), the motherboard itself will start to look like a datacenter internally, with hardware-accelerated virtualized units and L3 networking everywhere, which in turn can itself be hardware-accelerated and virtualized.

2.3 Why the need to build deep expertise in L3?

Last week, I casually started learning how OpenStack does network virtualization, and I was stumped by the complexity of (1) the network overlay, (2) the options available for the main underlay network, and (3) the options available for the Kubernetes network.

I used to think knowing networking was just about IP, TCP, UDP, NAT, DNAT, MAC, MTU, etc. But the following are the terms that opened up!

2.3.1 Key Networking Terms

  1. OVS, OVN, GENEVE, TEP, FRR/BGP/eBGP/iBGP, ECMP (& WCMP), BFD, MPTCP, SR-IOV, TC-Flower, VRRP, VRF, Tomahawk chip

  2. LACP, MLAG, bridge & bonding types (xor, alb etc), peer links / vPC

  3. VXLAN, eVPN, TEP, eVPN-MH, Trident chip

We need to take an 80/20 approach to understand the concepts that matter most: (A) focus on L3 and (B) focus on virtualization of L3.

2.4 Topics to Cover

  1. Building the L3 underlay Network (with dumb L2)
  2. The GENEVE Overlay Network - OVN & Open vSwitch (L3 virtualization)
  3. OVN-Kubernetes reusing GENEVE overlay