14 Glossary of Terms
14.1 A
14.1.1 ASN (Autonomous System Number)
A unique identifier for a network (autonomous system) in BGP routing. In our design, each server, ToR, and spine has its own ASN for eBGP peering.
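A minimal FRR configuration sketch, using purely illustrative values (server ASN 65011, loopback 10.0.0.11, ToR ASN 65101, link peer 10.1.1.0), of how a server peers over eBGP under its own ASN:

    ! frr.conf on a server (hypothetical ASN and addresses)
    router bgp 65011
     bgp router-id 10.0.0.11
     ! eBGP session to the Fabric-A ToR, which runs its own ASN
     neighbor 10.1.1.0 remote-as 65101
     address-family ipv4 unicast
      ! advertise the loopback /32 so it is reachable across the fabric
      network 10.0.0.11/32
     exit-address-family

ToRs and spines follow the same pattern, each with its own ASN and router-id.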
14.1.2 ARP (Address Resolution Protocol)
Protocol used to map IP addresses to MAC addresses. In our pure L3 design, ARP is confined to directly connected links and never crosses the fabric.
14.2 B
14.2.1 BFD (Bidirectional Forwarding Detection)
A lightweight protocol used alongside BGP to detect link failures far faster than BGP’s own keepalive timers and to trigger rapid route convergence.
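As a sketch built on the hypothetical configuration shown under ASN, BFD is attached to a BGP neighbor in FRR; once BFD declares the peer down, the BGP session is reset without waiting for BGP’s own hold timer:

    router bgp 65011
     neighbor 10.1.1.0 remote-as 65101
     ! register this BGP session with BFD for fast failure detection
     neighbor 10.1.1.0 bfd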
14.2.2 BGP (Border Gateway Protocol)
The routing protocol used throughout our fabric. We use eBGP (external BGP) between all devices - servers, ToRs, and spines.
14.3 C
14.3.1 Clos Fabric
A network topology where leaf switches connect to spine switches, forming a non-blocking fabric. Our design uses this topology when scaling beyond 6 racks.
14.3.2 Control Plane
The network intelligence that decides how traffic should be routed. In our design, BGP is the control plane for the underlay, and OVN is the control plane for the overlay.
14.4 D
14.4.1 Data Plane
The forwarding path that actually moves packets. In our design, the data plane is pure L3 IP forwarding with ECMP.
14.5 E
14.5.1 ECMP (Equal Cost Multi-Path)
A routing technique that distributes traffic across multiple paths of equal cost. In our design, ECMP automatically distributes traffic across both NICs and multiple spine paths.
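A sketch of the FRR side, with an illustrative path limit: permitting multiple equal-cost BGP paths is what lets the kernel install a single multipath route spanning both NICs (the resulting route can be inspected with ip route):

    router bgp 65011
     address-family ipv4 unicast
      ! keep up to 8 equal-cost eBGP paths instead of a single best path
      maximum-paths 8
     exit-address-family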
14.5.2 eBGP (External BGP)
BGP peering between devices in different autonomous systems. We use eBGP everywhere - no iBGP needed.
14.5.3 EVPN (Ethernet VPN)
A BGP-based control plane for VXLAN. Not used in our design - we use pure L3 BGP/ECMP at the fabric layer.
14.6 F
14.6.1 Fabric
The physical network infrastructure (switches, links, routing). Our fabric is pure L3 with BGP routing.
14.6.2 FRR (FRRouting)
Open-source routing suite that implements BGP, OSPF, and other routing protocols. Used on servers and switches in our design.
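For illustration, assuming the standard Debian/Ubuntu packaging of FRR, the daemons this design relies on are switched on in /etc/frr/daemons (the frr service is then restarted to pick up the change):

    # /etc/frr/daemons (excerpt) - enable the protocols used in this design
    bgpd=yes
    bfdd=yes
    # zebra, which programs routes into the kernel, is started automatically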
14.7 G
14.7.1 GENEVE (Generic Network Virtualization Encapsulation)
The encapsulation protocol used by OVN to create overlay networks. GENEVE uses UDP and provides variable-length options. Defined in RFC 8926.
14.8 H
14.8.1 Host-Based TEP
Tunnel endpoints (TEPs) located at hypervisors, not at switches. This is our design approach - each host is its own TEP.
14.9 I
14.9.1 iBGP (Internal BGP)
BGP peering within the same autonomous system. Not used in our design - we use eBGP everywhere.
14.9.2 Independent A/B Fabrics
Our architecture where two completely separate L3 networks (Fabric-A and Fabric-B) operate independently with zero shared state.
14.10 K
14.10.1 Kubernetes
Container orchestration platform. In our design, Kubernetes uses OVN’s network directly, avoiding a double overlay (an overlay nested inside another overlay).
14.11 L
14.11.1 Leaf-Spine
A network topology where leaf switches (ToRs) connect to spine switches. Our design evolves from mesh to leaf-spine when scaling beyond 6 racks.
14.11.2 Loopback IP
A virtual interface IP address that is independent of physical links. In our design, each server’s loopback IP serves as its stable identity (BGP router-ID) and its OVN TEP address.
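A sketch, reusing the hypothetical 10.0.0.11 address from the ASN entry: the loopback IP is configured as a /32 on the lo interface and advertised into BGP, so it stays reachable regardless of which physical link fails:

    # assign the server's identity address to the loopback interface
    ip addr add 10.0.0.11/32 dev lo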
14.11.3 L2 (Layer 2)
Data link layer - Ethernet, MAC addresses, VLANs. Not used in our underlay - we are pure L3.
14.11.4 L3 (Layer 3)
Network layer - IP addresses, routing. Our entire underlay is pure L3.
14.12 M
14.12.1 MLAG (Multi-Chassis Link Aggregation)
A technology that allows two switches to act as one for link aggregation. Not used in our design - we avoid L2 constructs.
14.12.2 Mesh Topology
A network topology where switches connect directly to each other. Our design starts with mesh for 5-6 racks, then evolves to leaf-spine.
14.13 N
14.13.1 Network-A / Network-B
The two independent L3 networks in our A/B Fabrics architecture. Each server connects to both networks via separate NICs.
14.14 O
14.14.1 OpenStack
Cloud computing platform for managing virtualized compute, storage, and networking resources. This network design is specifically for Canonical’s OpenStack deployment with OVN/OVS overlay networking. See OpenStack Documentation and Canonical OpenStack.
14.14.2 OVN (Open Virtual Network)
The control plane system for OVS that provides logical networking abstractions. OVN handles MAC/IP learning for VMs, logical routing, and security groups. See OVN Architecture and Red Hat OVN Documentation.
14.14.3 OVS (Open vSwitch)
The data plane forwarding engine that performs packet switching and GENEVE encapsulation/decapsulation.
14.14.4 Overlay Network
Virtual networks created on top of the physical underlay. In our design, OVN creates GENEVE overlay networks.
14.15 P
14.15.1 Point-to-Point Link
A direct connection between two devices. In our design, all links are /31 point-to-point routed links.
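A sketch of one end of such a link, with illustrative addresses and interface names; RFC 3021 allows a /31 to number exactly the two ends of a point-to-point link, wasting no addresses on network or broadcast:

    # server-facing end of a /31 routed link (the ToR would hold 10.1.1.0/31)
    ip addr add 10.1.1.1/31 dev ens1f0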
14.16 R
14.16.1 Router-ID
A stable identifier for a BGP router, typically the loopback IP address.
14.17 S
14.17.1 Spine Switch
A switch in the spine layer of a leaf-spine topology. Spines provide high-bandwidth connectivity between leaf switches.
14.17.2 STP (Spanning Tree Protocol)
A protocol to prevent loops in L2 networks. Not used in our design - we are pure L3.
14.18 T
14.18.1 TEP (Tunnel Endpoint)
The IP address used for GENEVE encapsulation. In our design, each host’s loopback IP is its TEP. Note: We use “TEP” not “VTEP” - VTEP is VXLAN-specific terminology (VXLAN Tunnel Endpoint). GENEVE is defined in RFC 8926.
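As a sketch, again assuming the hypothetical 10.0.0.11 loopback: ovn-controller reads its encapsulation settings from the local Open vSwitch database, so pointing them at the loopback is what makes that address the host’s TEP:

    # select GENEVE and source tunnels from the host's loopback IP
    ovs-vsctl set open_vswitch . \
        external-ids:ovn-encap-type=geneve \
        external-ids:ovn-encap-ip=10.0.0.11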
14.18.2 ToR (Top of Rack)
A switch located at the top of a server rack. In our design, ToRs are actually L3 routers, not L2 switches.
14.19 U
14.19.1 Underlay Network
The physical network infrastructure that provides IP connectivity. Our underlay is pure L3 with BGP routing.
14.20 V
14.20.1 VXLAN
A network virtualization technology using MAC-in-UDP encapsulation. Not used in our design - we use GENEVE instead.
14.20.2 VTEP (VXLAN Tunnel Endpoint)
VXLAN-specific terminology. Not used in our design - we use “TEP” for GENEVE tunnel endpoints.
14.21 W
14.21.1 Workload
Virtual machines, containers, or applications running on hosts. Our network design supports OpenStack workloads with OVN.