3 Key Concepts
3.1 Overview
Before diving into the architecture, let’s establish the key networking concepts that underpin this design. This chapter is a quick primer on underlay/overlay networks, OVN/OVS, GENEVE, and why Kubernetes pods share the same network infrastructure as OpenStack VMs.
Note: For detailed definitions, see the Glossary.
3.2 Underlay vs Overlay: Two-Layer Architecture
Modern datacenter networks use a two-layer approach:
3.2.1 The Underlay (Physical Network)
The underlay is the physical L3+ network that connects servers:
- Pure L3 routing: BGP for route advertisement, ECMP for load balancing
- Unaware of VMs/containers: Just moves IP packets between physical servers
- Simple and fast: Hardware-accelerated L3 forwarding
Our underlay: Dual ToRs per rack, Clos topology, eBGP everywhere, 5-tuple ECMP hashing.
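To make 5-tuple hashing concrete, here is a minimal Python sketch of the idea (illustrative only: real switches use vendor-specific hardware hash functions, and the uplink names are hypothetical):

```python
import zlib

def ecmp_pick_uplink(src_ip: str, dst_ip: str, proto: int,
                     src_port: int, dst_port: int,
                     uplinks: list[str]) -> str:
    """Pick an uplink by hashing the flow's 5-tuple."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return uplinks[zlib.crc32(key) % len(uplinks)]

# Every packet of one flow hashes to the same uplink (no reordering),
# while different flows spread across the available paths.
uplinks = ["uplink-to-spine-1", "uplink-to-spine-2"]
print(ecmp_pick_uplink("192.0.2.10", "198.51.100.20", 17, 49152, 6081, uplinks))
```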
3.2.2 The Overlay (Virtual Network)
The overlay is the virtual network created in software for VMs and containers:
- Encapsulation: Wraps virtual network packets inside physical network packets
- Multi-tenancy: Multiple isolated networks on same physical infrastructure
- Software-defined: Control plane manages virtual topology, routing, security
Our overlay: OVN control plane, OVS data plane, GENEVE tunneling protocol.
3.2.3 How They Work Together
┌─────────────────────────────────────────────────────────┐
│  VM1 (10.10.1.5) talks to VM2 (10.10.1.6)               │
│  [Virtual network: 10.10.1.0/24]                        │
└──────────────────────┬──────────────────────────────────┘
                       │
                OVS encapsulates
                       │
                       ▼
┌─────────────────────────────────────────────────────────┐
│  Outer: Server-A (TEP) → Server-B (TEP)  [UDP/GENEVE]   │
│  ┌───────────────────────────────────────────────────┐  │
│  │ Inner: VM1 → VM2  [Original packet]               │  │
│  └───────────────────────────────────────────────────┘  │
└──────────────────────┬──────────────────────────────────┘
                       │
               Underlay routes it
                       │
                       ▼
            Physical server-to-server delivery
Key insight: The underlay doesn’t see VMs—it only sees UDP packets between servers. This separation keeps the underlay simple and scalable.
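The same point in code: a toy model (hypothetical classes and addresses, not a real packet library) in which the underlay’s routing function can only read the outer headers:

```python
from dataclasses import dataclass

@dataclass
class InnerPacket:        # what the VMs see
    src: str              # e.g. VM1 at 10.10.1.5
    dst: str              # e.g. VM2 at 10.10.1.6
    payload: bytes

@dataclass
class GenevePacket:       # what the underlay sees
    outer_src: str        # Server-A's TEP address
    outer_dst: str        # Server-B's TEP address
    inner: InnerPacket    # opaque bytes as far as the fabric cares

def underlay_next_hop(pkt: GenevePacket, routes: dict[str, str]) -> str:
    # The fabric routes purely on the outer destination; it never
    # looks at pkt.inner, which is why it needs no VM awareness.
    return routes[pkt.outer_dst]

routes = {"172.16.0.2": "port-to-tor-b"}   # hypothetical TEP route
pkt = GenevePacket("172.16.0.1", "172.16.0.2",
                   InnerPacket("10.10.1.5", "10.10.1.6", b"hello"))
print(underlay_next_hop(pkt, routes))      # -> port-to-tor-b
```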
3.3 OVN and OVS: Control Plane + Data Plane
3.3.1 OVS (Open vSwitch) - The Data Plane
OVS is the software switch that actually forwards packets:
- Runs on each host (hypervisor)
- Implements virtual switches and bridges
- Performs GENEVE encapsulation/decapsulation
- Hardware-accelerated (via NIC offload)
Think of OVS as the “worker” that does the actual packet forwarding.
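A minimal sketch of the “worker” idea, assuming nothing about real OVS internals beyond the flow-cache concept: the first packet of a flow takes a slow path through the rule set, and the result is cached so later packets hit a fast exact-match lookup (real OVS uses kernel/DPDK megaflow caches and can push cached flows down to the NIC):

```python
FlowKey = tuple  # (src_ip, dst_ip, proto, src_port, dst_port)

rules = [  # hypothetical slow-path rules, evaluated in order
    (lambda k: k[1] == "10.10.1.6", "encap_geneve_to:172.16.0.2"),
    (lambda k: True,                "drop"),
]
flow_cache: dict[FlowKey, str] = {}

def forward(key: FlowKey) -> str:
    action = flow_cache.get(key)
    if action is None:                       # cache miss: slow path
        action = next(a for match, a in rules if match(key))
        flow_cache[key] = action             # cache for later packets
    return action

key = ("10.10.1.5", "10.10.1.6", 6, 33000, 80)
print(forward(key))   # first packet consults the rules
print(forward(key))   # subsequent packets hit the cache
```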
3.3.2 OVN (Open Virtual Network) - The Control Plane
OVN is the control plane that programs OVS:
- Manages logical switches, routers, and security groups
- Learns VM/container locations (which host each workload is on)
- Distributes forwarding rules to all OVS instances
- Handles VM migration, IP assignment, routing
Think of OVN as the “brain” that tells OVS what to do.
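A sketch of the “brain” in the same spirit (all names hypothetical; real OVN keeps this state in its Northbound/Southbound databases):

```python
logical_switch = {"vm1-port": "10.10.1.5", "vm2-port": "10.10.1.6"}
port_binding   = {"vm1-port": "server-a",  "vm2-port": "server-b"}
tep_address    = {"server-a": "172.16.0.1", "server-b": "172.16.0.2"}

def flows_for_host(host: str) -> list[str]:
    """The forwarding rules the control plane would push to one host."""
    flows = []
    for port, ip in logical_switch.items():
        location = port_binding[port]
        if location == host:
            flows.append(f"dst={ip} -> deliver locally to {port}")
        else:
            flows.append(f"dst={ip} -> GENEVE tunnel to {tep_address[location]}")
    return flows

print(flows_for_host("server-a"))
# If VM2 migrates, only port_binding changes; the control plane then
# re-programs every host's OVS with the new tunnel destination.
```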
3.3.3 The Division of Labor
┌─────────────────────────────────────────────────────────┐
│                    OVN CONTROL PLANE                     │
│             (Centralized - knows everything)             │
│  • Logical network topology                              │
│  • VM/container locations                                │
│  • Security policies                                     │
│  • Routing decisions                                     │
└──────────────────┬──────────────────────────────────────┘
                   │ Programs
                   ▼
┌─────────────────────────────────────────────────────────┐
│                     OVS DATA PLANE                       │
│              (Distributed - one per host)                │
│  • Fast packet forwarding                                │
│  • GENEVE encap/decap                                    │
│  • Flow caching                                          │
│  • Hardware offload                                      │
└─────────────────────────────────────────────────────────┘
Why this matters: OVN makes the decisions, OVS does the work. This separation allows OVN to be flexible and programmable while OVS stays fast and hardware-accelerated.
3.4 GENEVE: The Overlay Protocol
GENEVE (Generic Network Virtualization Encapsulation) is the tunneling protocol that creates the overlay.
3.4.1 What It Does
Wraps the original VM packet in a new UDP packet:
Original packet:  [Eth][IP: VM1→VM2][TCP][Data]
                                  ↓ OVS encapsulates
GENEVE packet:    [Eth][IP: Server-A→Server-B][UDP][GENEVE][Eth][IP: VM1→VM2][TCP][Data]
                                  ↑                   ↑                  ↑
                            Outer headers         Protocol         Inner packet
                        (underlay sees this)        header          (original)
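The base header is only 8 bytes. Here is a sketch of building it with the field layout from RFC 8926 (no options; the VNI value is illustrative):

```python
import struct

def geneve_header(vni: int, proto: int = 0x6558) -> bytes:
    """Build the 8-byte base GENEVE header (RFC 8926), no options.
    proto=0x6558 (Trans Ether Bridging) means the payload is an
    inner Ethernet frame."""
    ver_optlen = 0           # version 0, option length 0
    flags      = 0           # O (control) and C (critical) bits clear
    vni_field  = vni << 8    # 24-bit VNI followed by 8 reserved bits
    return struct.pack("!BBHI", ver_optlen, flags, proto, vni_field)

hdr = geneve_header(vni=5001)
assert len(hdr) == 8
# On the wire this sits after the outer Eth/IP/UDP (dst port 6081)
# and before the inner (original) Ethernet frame.
```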
3.4.2 Key Properties
- UDP-based: Uses UDP (destination port 6081), so tunneled traffic benefits from the underlay’s L3/L4 ECMP hashing
- Per-flow source ports: The outer UDP source port is derived from a hash of the inner flow, giving ECMP the entropy to spread tunnels across paths (sketched below)
- Variable-length options: Flexible TLV metadata (unlike VXLAN’s fixed-format header)
- TEP-based: Uses host loopback IPs as Tunnel Endpoints (not switch VTEPs)
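A sketch of the source-port trick (the exact hash is an implementation detail of the sender; this just shows the mechanism):

```python
import zlib

def geneve_src_port(inner_flow: tuple) -> int:
    """Derive the outer UDP source port from a hash of the inner flow,
    mapped into the ephemeral range 49152-65535."""
    h = zlib.crc32(repr(inner_flow).encode())
    return 49152 + (h % 16384)

# Two different VM flows get different outer source ports, so 5-tuple
# ECMP spreads them over different paths even though the outer IPs
# (the two TEPs) are identical.
print(geneve_src_port(("10.10.1.5", "10.10.1.6", 6, 33000, 80)))
print(geneve_src_port(("10.10.1.5", "10.10.1.6", 6, 33001, 80)))
```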
Why GENEVE over VXLAN: GENEVE is more flexible and is the native protocol for OVN. VXLAN is older, switch-oriented, and less extensible.
3.5 Why Kubernetes Uses the Same Overlay
OVN-Kubernetes is a CNI plugin that integrates Kubernetes with OVN, allowing pods to use the same GENEVE overlay as OpenStack VMs.
3.5.1 The Problem with Separate Networks
If Kubernetes had its own overlay:
VM → GENEVE → Underlay (Single encapsulation)
Pod → Kubernetes-overlay → GENEVE → Underlay (DOUBLE encapsulation!)
Double encapsulation wastes bandwidth, adds latency, and complicates troubleshooting.
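Rough arithmetic on the bandwidth cost, assuming standard header sizes, option-free GENEVE, and a similar ~50-byte cost for the nested overlay:

```python
# Encapsulation overhead added in front of the original IP packet:
# outer IPv4 + outer UDP + GENEVE base header + the inner Ethernet
# header that rides inside the tunnel.
PER_ENCAP = 20 + 8 + 8 + 14        # = 50 bytes

mtu = 1500                          # underlay MTU
print(f"single encap: inner MTU {mtu - PER_ENCAP}")       # 1450
print(f"double encap: inner MTU {mtu - 2 * PER_ENCAP}")   # 1400
```

Every packet carries that overhead, and the shrunken inner MTU must be pushed down to every workload or fragmentation follows.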
3.5.2 The Solution: Unified Overlay
OVN-Kubernetes makes Kubernetes pods first-class citizens in the OVN network:
┌─────────────────────────────────────────────────────────┐
│                    OVN CONTROL PLANE                     │
│                Manages both VMs and Pods                 │
└──────────────────┬──────────────────────────────────────┘
                   │
         ┌─────────┴─────────┐
         │                   │
 ┌───────▼──────┐    ┌───────▼──────┐
 │ OpenStack VM │    │   K8s Pod    │
 └──────────────┘    └──────────────┘
         │                   │
         └─────────┬─────────┘
                   │
          Same GENEVE Overlay
Benefits:
- No double encapsulation: Pods and VMs both use GENEVE directly
- Unified networking: One control plane, one overlay, one underlay
- Seamless communication: VMs and pods can talk directly (same L2/L3 domain)
- Consistent policies: OVN security groups work for both
- Simpler operations: One network stack to manage and troubleshoot
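A tiny illustration in the spirit of the control-plane sketch from 3.3.2 (hypothetical names): the pod’s logical port sits in the same tables as the VM’s, so reaching either is exactly one GENEVE hop:

```python
logical_switch = {"vm1-port": "10.10.1.5", "pod1-port": "10.10.1.7"}
port_binding   = {"vm1-port": "server-a",  "pod1-port": "server-b"}
tep_address    = {"server-a": "172.16.0.1", "server-b": "172.16.0.2"}

for port, ip in logical_switch.items():
    host = port_binding[port]
    print(f"{ip} ({port}) -> GENEVE to {tep_address[host]}")
# Both entries yield a single tunnel hop; nothing at the forwarding
# layer distinguishes the pod from the VM.
```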
3.6 Summary: Why This Matters for Architecture
These concepts establish the foundation:
- Underlay = Simple: Pure L3+ routing (BGP/ECMP), hardware-accelerated, no overlay awareness
- Overlay = Flexible: OVN/OVS handles all virtualization complexity at the hosts
- GENEVE = Efficient: UDP-based tunneling with random source ports for ECMP
- Unified = Better: Single overlay for VMs and pods prevents complexity
With these fundamentals understood, we can now explore the actual architecture and implementation: how we design the Clos fabric, allocate IPs, configure BGP, and evolve to super-spine topologies.
3.7 What’s Next
- Network Architecture Overview: Clos topology, dual ToRs, spine evolution
- Design Decisions & Tradeoffs: Why L3+ over EVPN/MLAG/etc.
- Network Design & IP Addressing: Concrete IP plans and topology