13  Quick Reference

13.1 IP Address Allocation

| Component | Range | Pattern | Example |
|---|---|---|---|
| Host Loopbacks (TEP) | 10.0.0.0/16 | 10.0.{rack}.{host}/32 | Rack1 Host11 = 10.0.1.11/32 |
| Host Loopbacks (NP) | 10.255.0.0/16 | 10.255.{pod-rack}.{host}/32 | NP1 Rack1 Host11 = 10.255.11.11/32 |
| ToR Loopbacks | 10.254.0.0/16 | 10.254.{rack}.{tor}/32 | Rack1 ToR-A = 10.254.1.1/32 |
| ToR Loopbacks (NP) | 10.254.0.0/16 | 10.254.{pod-rack}.{tor}/32 | NP1 Rack1 ToR-A = 10.254.11.11/32 |
| Spine Loopbacks | 10.255.0.0/16 | 10.255.0.{spine}/32 | Spine1 = 10.255.0.1/32 |
| Spine Loopbacks (NP) | 10.254.0.0/16 | 10.254.{pod}.{spine}/32 | NP1 Spine1 = 10.254.1.1/32 |
| Super-Spine Loopbacks | 10.254.100.0/24 | 10.254.100.{superspine}/32 | SuperSpine1 = 10.254.100.1/32 |
| Host↔ToR Links | 172.16.0.0/16 | 172.16.{rack}.0/24 (/31 links) | Rack1 = 172.16.1.0/24 |
| Host↔ToR Links (NP) | 172.16.0.0/16 | 172.16.{pod-rack}.0/24 (/31 links) | NP1 Rack1 = 172.16.11.0/24 |
| ToR↔Spine Links | 172.20.0.0/16 | /31 per link | ToR1→Spine1 = 172.20.1.0/31 |
| ToR↔Spine Links (NP) | 172.20.0.0/16 | 172.20.{pod}.0/24 (/31 links) | NP1 = 172.20.1.0/24 |
| Spine↔Super-Spine | 172.24.100.0/24 | /31 per link | Spine1→SS1 = 172.24.100.0/31 |
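The allocation patterns above can be computed mechanically. A minimal bash sketch (the helper names are illustrative, not part of the addressing scheme) for the single-pod patterns and for finding the other end of a /31 link:

```shell
#!/usr/bin/env bash
# Derive addresses from the single-pod allocation patterns above.
# Helper names (tep_loopback, tor_loopback, p31_peer) are illustrative.

# Host TEP loopback: 10.0.{rack}.{host}/32
tep_loopback() {
  local rack=$1 host=$2
  echo "10.0.${rack}.${host}/32"
}

# ToR loopback: 10.254.{rack}.{tor}/32
tor_loopback() {
  local rack=$1 tor=$2
  echo "10.254.${rack}.${tor}/32"
}

# The other end of a /31 point-to-point link: flip the low bit of the
# final octet (RFC 3021 pairs addresses as .0/.1, .2/.3, ...).
p31_peer() {
  local ip=$1
  local last=${ip##*.}
  local prefix=${ip%.*}
  echo "${prefix}.$(( last ^ 1 ))"
}

tep_loopback 1 11      # 10.0.1.11/32
p31_peer 172.20.1.0    # 172.20.1.1
```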

13.2 Hardware Specifications

| Component | Specification | Notes |
|---|---|---|
| Server NICs | 2 × 100G (ConnectX-6 Dx) | Hardware GENEVE offload, 200G aggregate |
| ToR Switches | 64 × 100G or 32 × 200G (Tomahawk) | Pure L3 routers, no L2 switching |
| Spine Switches | 400G (Tomahawk) | Pure L3 transit, ECMP |
| Super-Spine Switches | 400G+ (Tomahawk) | Inter-pod connectivity |

13.3 BGP Configuration

| Parameter | Value | Notes |
|---|---|---|
| BGP Type | eBGP everywhere | No iBGP, no route reflectors |
| Host AS Numbers | 65001-65150 or 66xxx | One per host or reuse per rack |
| ToR AS Numbers | 65101, 65102, etc. | Per ToR or per rack |
| Spine AS Numbers | 65010, 65011, etc. | Per spine |
| Super-Spine AS | 65100+ | When deployed |
| BGP Timers | 3s keepalive, 10s hold | Fast convergence |
| BFD Interval | 100ms | <1s failure detection |
| ECMP | maximum-paths 2 (hosts), 8 (switches) | Enable multipath |
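The parameters above map onto a small FRR configuration. A minimal sketch of a host's `frr.conf`, assuming FRR as the BGP daemon; the neighbor addresses (the ToR ends of the two /31 links), the host AS 65001, ToR ASes 65101/65102, and the loopback 10.0.1.11/32 are examples taken from the tables above, and `no bgp ebgp-requires-policy` is an FRR-specific knob required for routes to be exchanged without explicit policy:

```
! Host-side FRR sketch: eBGP to both ToRs, fast timers, BFD, ECMP.
router bgp 65001
 no bgp ebgp-requires-policy
 timers bgp 3 10
 neighbor 172.16.1.0 remote-as 65101
 neighbor 172.16.1.0 bfd
 neighbor 172.16.1.2 remote-as 65102
 neighbor 172.16.1.2 bfd
 address-family ipv4 unicast
  network 10.0.1.11/32
  maximum-paths 2
 exit-address-family
!
bfd
 peer 172.16.1.0
  receive-interval 100
  transmit-interval 100
 !
 peer 172.16.1.2
  receive-interval 100
  transmit-interval 100
 !
```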

13.4 Essential Commands

13.4.1 BGP

```shell
# BGP summary
vtysh -c "show ip bgp summary"

# Specific route
vtysh -c "show ip bgp 10.0.1.11/32"

# ECMP routes
ip route show | grep "nexthop"
```
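A route is only truly multipathed when the kernel installs it with several `nexthop` entries. A small helper (the name is illustrative) that counts them from `ip route show` output:

```shell
# Count ECMP nexthops for a route (reads `ip route show <prefix>` output
# on stdin). Helper name is illustrative. 2+ means ECMP is active;
# 0 means the route has a single next hop (no "nexthop" lines printed).
count_nexthops() {
  grep -c '^[[:space:]]*nexthop'
}

# Example usage:
#   ip route show 10.0.1.11 | count_nexthops
```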

13.4.2 OVN

```shell
# OVN configuration
ovs-vsctl get open . external-ids

# OVN topology
ovn-nbctl show

# OVN controller status
systemctl status ovn-controller
```

13.4.3 Network

```shell
# Interface status
ip addr show

# ECMP verification
ip route show | grep "nexthop"

# Connectivity test
ping -I <loopback> <remote-loopback>
```

13.5 Common Troubleshooting

| Issue | Quick Check | Fix |
|---|---|---|
| BGP down | `vtysh -c "show ip bgp summary"` | Check connectivity, AS numbers, firewall |
| No ECMP | `ip route show \| grep nexthop` | Verify maximum-paths configured |
| OVN tunnel fail | `ping <remote-tep>` | Check TEP reachability, MTU, OVN central |
| High latency | `ping <destination>` | Check ECMP, interface errors, congestion |
| MTU issues | `ping -M do -s 8972 <dest>` | Set MTU to 9000 on all interfaces |
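The 8972-byte payload in the MTU check is not arbitrary: ping's `-s` value excludes the 20-byte IPv4 header and the 8-byte ICMP header, so the largest unfragmented payload for a 9000-byte MTU is 9000 − 28 = 8972. A quick sketch of the arithmetic (helper name is illustrative):

```shell
# Largest ping payload that fits an interface MTU without fragmentation:
# MTU minus 20 bytes (IPv4 header) minus 8 bytes (ICMP header).
ping_payload() {
  echo $(( $1 - 20 - 8 ))
}

ping_payload 9000   # 8972
ping_payload 1500   # 1472
```

Note that GENEVE encapsulation adds its own overhead (outer IPv4 + UDP + GENEVE headers, roughly 50+ bytes depending on options), so the overlay-visible MTU must be correspondingly smaller than the 9000-byte underlay MTU.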

13.6 Design Quick Reference

| Aspect | Decision | Reason |
|---|---|---|
| Underlay | Pure L3 BGP/ECMP | No L2; simple, scalable |
| Overlay | GENEVE (OVN/OVS) | Host-based TEPs, hardware offload |
| NIC config | Separate routed interfaces | No bonding, pure L3 ECMP |
| Loopback | /32 per host | Stable identity, TEP IP |
| P2P links | /31 (RFC 3021) | Saves IPs, clear intent |
| ECMP | 5-tuple hashing | Automatic load balancing |
| BFD | 100ms interval | <1s failure detection |
| Fabric isolation | Independent A/B | Zero shared state |

13.7 For More Details