Every time you send a packet across the internet, IP routing makes a decision at every single hop. Each router examines the destination address, consults its forwarding table, and independently determines where to send the packet next. For most traffic, this works fine. But for carriers managing massive backbone networks with strict performance requirements, hop-by-hop IP routing is a blunt instrument.

Multiprotocol Label Switching (MPLS) sits quietly between Layer 2 and Layer 3, often called a Layer 2.5 protocol. It replaces per-hop IP lookups with a simple label-swapping mechanism that lets carriers predetermine exact paths through their networks. This isn't just an optimization. It's a fundamentally different forwarding paradigm that enables traffic engineering and virtual private network services that IP routing alone cannot deliver.

Understanding MPLS means understanding why the largest networks in the world don't rely on destination-based forwarding alone—and what architectural trade-offs make label switching so powerful for carrier-scale infrastructure.

Label Switching Fundamentals

In traditional IP forwarding, every router performs a longest-prefix match against its routing table for each incoming packet. This operation is repeated independently at every hop, and the path a packet takes is entirely determined by distributed routing protocol decisions. MPLS replaces this model with something structurally simpler: a 32-bit shim header, carrying a 20-bit label (plus traffic-class bits, a bottom-of-stack flag, and a TTL), inserted between the Layer 2 header and the IP header. Routers in this role, called Label Switching Routers (LSRs), forward packets based on the label rather than the destination IP address.
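The contrast between the two lookup models can be sketched in a few lines. This is illustrative only: the tables, prefixes, and interface names are invented, and real routers use specialized structures such as tries and TCAM rather than Python dicts.

```python
# Contrast: longest-prefix match (IP) vs. exact-match label lookup (MPLS).
import ipaddress

# IP forwarding: longest-prefix match over a routing table.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth0",
    ipaddress.ip_network("10.1.0.0/16"): "eth1",  # more specific prefix wins
}

def ip_lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    matches = [n for n in routing_table if addr in n]
    return routing_table[max(matches, key=lambda n: n.prefixlen)]

# MPLS forwarding: a single exact-match lookup on the incoming label.
label_table = {100: ("eth1", 200)}  # in-label -> (out-interface, out-label)

def label_lookup(label: int):
    return label_table[label]

print(ip_lookup("10.1.2.3"))   # eth1 (longest match is 10.1.0.0/16)
print(label_lookup(100))       # ('eth1', 200)
```

The label lookup is a flat exact match, which is what lets a transit router forward without ever parsing the IP header.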

The label isn't just a shortcut for the same lookup. It represents membership in a Forwarding Equivalence Class (FEC)—a group of packets that should all be treated identically from a forwarding perspective. A FEC might correspond to a destination prefix, a VPN customer, a specific quality-of-service class, or any combination. This abstraction is what gives MPLS its versatility. The same label-switching infrastructure supports traffic engineering, VPN isolation, and fast reroute—all because the label can encode meaning beyond just "where is this going."

Labels are distributed by dedicated protocols. The Label Distribution Protocol (LDP) is the simplest: it automatically maps labels to IP prefixes learned from the IGP, building what's called a Label Switched Path (LSP) along the IGP shortest path. At the network's edge, an ingress LSR pushes a label onto the packet. Each transit LSR swaps the incoming label for an outgoing label and forwards the packet along. The egress LSR pops the label and delivers the packet using normal IP forwarding. This push-swap-pop lifecycle is the heartbeat of MPLS.
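The push-swap-pop lifecycle above can be sketched as three operations on a packet. All labels, tables, and field names here are invented for illustration.

```python
# Minimal sketch of the push-swap-pop lifecycle along one LSP.

def ingress_push(packet: dict, label: int) -> dict:
    """Ingress LSR: classify the packet into a FEC and push a label."""
    packet["labels"] = [label] + packet.get("labels", [])
    return packet

def transit_swap(packet: dict, lfib: dict) -> dict:
    """Transit LSR: swap the top label per its label forwarding table."""
    packet["labels"][0] = lfib[packet["labels"][0]]
    return packet

def egress_pop(packet: dict) -> dict:
    """Egress LSR: pop the label, then fall back to normal IP forwarding."""
    packet["labels"].pop(0)
    return packet

pkt = {"dst": "203.0.113.7"}
pkt = ingress_push(pkt, 100)          # ingress: push label 100
pkt = transit_swap(pkt, {100: 200})   # transit: swap 100 -> 200
pkt = transit_swap(pkt, {200: 300})   # transit: swap 200 -> 300
pkt = egress_pop(pkt)                 # egress: pop, then IP lookup
print(pkt)  # {'dst': '203.0.113.7', 'labels': []}
```

Note that the transit routers never consult `pkt["dst"]`; only the ingress classification and the egress delivery touch the IP layer.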

One elegant optimization is penultimate hop popping (PHP): the second-to-last router in the path strips the label before forwarding to the egress router, sparing the egress a double lookup (first on the label, then on the IP header). It's a small detail, but it reflects the engineering discipline that runs through MPLS design: every unnecessary operation at scale costs real forwarding resources.
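In practice, PHP is signaled with the reserved implicit-null label (value 3, defined in RFC 3032): the egress advertises it, and the penultimate LSR pops instead of swapping when it sees it as the outgoing label. A hedged sketch, with invented label values:

```python
# Sketch of penultimate hop popping via the implicit-null label.
IMPLICIT_NULL = 3  # reserved label value per RFC 3032

def forward(packet: dict, out_label: int) -> dict:
    if out_label == IMPLICIT_NULL:
        packet["labels"].pop(0)        # PHP: strip the label here
    else:
        packet["labels"][0] = out_label  # normal swap
    return packet

pkt = {"dst": "198.51.100.9", "labels": [300]}
forward(pkt, IMPLICIT_NULL)
print(pkt["labels"])  # [] -- the egress receives a plain IP packet
```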

Takeaway

MPLS decouples the forwarding decision from the destination address. By encoding forwarding intent into a simple label, it turns routers into high-speed switches that can carry meaning far richer than 'deliver this to that IP.'

Traffic Engineering with RSVP-TE

IP routing protocols like OSPF and IS-IS calculate shortest paths based on link metrics. This works well enough for convergence and reachability, but it creates a fundamental problem for carriers: traffic naturally concentrates on the shortest paths, leaving expensive alternate links underutilized while primary links become congested. You've paid for bandwidth across your entire backbone, but shortest-path routing only uses a fraction of it efficiently.

MPLS Traffic Engineering solves this by letting operators define explicit paths through the network using RSVP-TE (Resource Reservation Protocol with Traffic Engineering extensions). Instead of relying on the IGP to determine the path, RSVP-TE signals an LSP along a specified sequence of routers, reserving bandwidth at each hop. The ingress router computes or is configured with a path that accounts for available bandwidth, link constraints, and administrative policies—then RSVP-TE establishes and maintains that path end to end.

The key enabler is the Traffic Engineering Database (TED), which each router builds from IGP extensions. OSPF-TE and IS-IS-TE flood additional link attributes—available bandwidth, administrative groups, shared risk link groups—across the network. The ingress router runs a Constrained Shortest Path First (CSPF) algorithm against this database, finding paths that satisfy bandwidth and policy constraints rather than simply minimizing hop count. This is where MPLS transforms from a forwarding optimization into a genuine capacity management tool.
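The core of CSPF is simple to sketch: prune links that violate the constraints, then run a shortest-path computation over what remains. The topology, metrics, and bandwidth figures below are invented, and real TEDs also carry admin groups and SRLGs, which this sketch omits.

```python
# Illustrative CSPF: shortest path by IGP metric over links that can
# satisfy a bandwidth reservation.
import heapq

# (u, v): (igp_metric, available_bandwidth_mbps)
links = {
    ("A", "B"): (10, 400), ("B", "D"): (10, 400),    # shortest, but thin
    ("A", "C"): (15, 1000), ("C", "D"): (15, 1000),  # longer, but has room
}

def cspf(src, dst, need_bw):
    # Step 1: prune links that cannot carry the reservation.
    adj = {}
    for (u, v), (metric, bw) in links.items():
        if bw >= need_bw:
            adj.setdefault(u, []).append((v, metric))
    # Step 2: Dijkstra over the pruned topology.
    heap, seen = [(0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, metric in adj.get(node, []):
            heapq.heappush(heap, (cost + metric, nxt, path + [nxt]))
    return None  # no path satisfies the constraints

print(cspf("A", "D", need_bw=100))  # (20, ['A', 'B', 'D']) -- IGP path fits
print(cspf("A", "D", need_bw=500))  # (30, ['A', 'C', 'D']) -- constraint forces the detour
```

The second query is the whole point of traffic engineering: a reservation the shortest path cannot carry is placed on a longer path that can, instead of congesting the primary link.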

RSVP-TE also provides Fast Reroute (FRR) capabilities. By pre-computing and pre-signaling backup paths around every protected link or node, MPLS-TE can reroute traffic in under 50 milliseconds after a failure, comparable to SONET/SDH protection switching. The combination of explicit path control, bandwidth reservation, and sub-50 ms failover is why MPLS-TE remains the backbone of carrier traffic engineering, even as newer technologies emerge.
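Why FRR is fast can be seen in a sketch of facility (bypass) protection: the detour is pre-signaled, so at failure time the point of local repair only swaps to the label the merge point expects and pushes one pre-installed bypass label. No recomputation happens in the forwarding path. Labels, interfaces, and the topology are invented here.

```python
# Sketch of link-protection fast reroute at the point of local repair (B),
# protecting the B->D link with a pre-signaled B->C->D bypass LSP.
primary = {"out_if": "to_D", "out_label": 300}    # label merge point D expects
bypass = {"out_if": "to_C", "push_label": 900}    # pre-installed bypass label

def forward_with_frr(packet: dict, link_up: bool):
    if link_up:
        packet["labels"][0] = primary["out_label"]
        return primary["out_if"], packet
    # Local repair: swap to the label the merge point expects,
    # then push the bypass label on top -- two table writes, no computation.
    packet["labels"][0] = primary["out_label"]
    packet["labels"].insert(0, bypass["push_label"])
    return bypass["out_if"], packet

pkt = {"labels": [200]}
print(forward_with_frr(pkt, link_up=False))  # ('to_C', {'labels': [900, 300]})
```

When the packet reaches D, the bypass label has been popped (typically by PHP on the detour) and D sees exactly the label it would have seen on the primary path.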

Takeaway

Shortest-path routing optimizes for reachability, not utilization. Traffic engineering flips the problem: instead of letting traffic find its own path, you engineer the paths to fit the traffic. That distinction is the core of carrier-grade network design.

VPN Services Over Shared Infrastructure

Perhaps the most commercially significant application of MPLS is enabling VPN services. Carriers need to sell isolated virtual networks to enterprise customers—all running over the same physical infrastructure. MPLS makes this possible through label stacking: an outer label identifies the LSP through the provider backbone, while an inner label identifies the specific VPN or customer. Transit routers only examine the outer label, remaining entirely unaware of customer routing information.
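The division of labor between PE and P routers can be sketched directly with a two-label stack. All label values and names are invented; the point is which router touches which label.

```python
# Sketch of label stacking in an L3VPN: the ingress PE pushes
# [transport, vpn]; P routers touch only the top (transport) label.

def pe_ingress(packet: dict, vpn_label: int, transport_label: int) -> dict:
    packet["labels"] = [transport_label, vpn_label]
    return packet

def p_router(packet: dict, lfib: dict) -> dict:
    # Core router: swaps the outer label, never looks beneath it.
    packet["labels"][0] = lfib[packet["labels"][0]]
    return packet

def pe_egress(packet: dict, vrf_tables: dict) -> str:
    # Transport label already removed (e.g. by PHP); the VPN label
    # selects which customer routing table to consult.
    vpn_label = packet["labels"].pop(0)
    return vrf_tables[vpn_label]

pkt = {"dst": "10.0.0.5"}
pkt = pe_ingress(pkt, vpn_label=5001, transport_label=100)
pkt = p_router(pkt, {100: 200})   # core forwarding, customer-unaware
pkt["labels"].pop(0)              # penultimate hop pops the transport label
vrf = pe_egress(pkt, {5001: "VRF_customer_A"})
print(vrf)  # VRF_customer_A
```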

Layer 3 VPNs (L3VPN), defined in RFC 4364 (commonly called BGP/MPLS VPNs), use VPN Routing and Forwarding (VRF) instances on provider edge routers. Each VRF maintains a separate routing table for a customer, and MP-BGP (Multiprotocol BGP) distributes customer routes between PE routers, using route distinguishers to keep routes unique even when customers use overlapping address space and route targets to control which routes each VRF imports. The customer sees a routed network connecting their sites. The carrier sees labeled packets traversing an MPLS core with complete isolation between customers.
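The route-distinguisher mechanism reduces to a keying trick, sketched below with invented RD values and prefixes: prepending the RD to the prefix makes otherwise-identical customer routes distinct entries in the provider's BGP table.

```python
# Sketch of route distinguishers disambiguating overlapping prefixes.
vpn_routes = {}

def advertise(rd: str, prefix: str, next_hop_pe: str):
    # A VPN-IPv4 route is keyed by RD + prefix, so identical customer
    # prefixes remain distinct routes in the provider's table.
    vpn_routes[(rd, prefix)] = next_hop_pe

advertise("65000:1", "10.1.0.0/16", "PE1")  # customer A
advertise("65000:2", "10.1.0.0/16", "PE2")  # customer B, same prefix
print(len(vpn_routes))  # 2 -- no collision despite overlapping address space
```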

Layer 2 VPNs (L2VPN) take a different approach: instead of participating in customer routing, the provider simply transports Layer 2 frames between customer sites. Technologies like VPLS (Virtual Private LAN Service) emulate a LAN segment across the MPLS backbone, while pseudowires provide point-to-point Layer 2 connectivity. The customer connects their switches or routers to the provider edge and sees a transparent Ethernet service, with no visibility into the MPLS infrastructure carrying their frames.

The elegance of this architecture is in the separation of concerns. The MPLS core is a label-switching fabric that knows nothing about customer networks. PE routers handle the complex work of VPN membership, route distribution, and encapsulation. P routers in the core simply swap labels and forward at line rate. This layered design lets carriers scale to thousands of VPN customers without the core infrastructure growing in complexity—a direct consequence of the label abstraction that MPLS was built on.

Takeaway

Label stacking is MPLS's architectural secret weapon. By nesting labels, a single forwarding plane supports unlimited logical networks—each isolated, each unaware of the others, all sharing the same physical links. Shared infrastructure, private service.

MPLS endures because it solved problems that IP routing was never designed to address. Destination-based forwarding is powerful for reachability, but carriers need path control, bandwidth management, and service isolation. MPLS delivers all three through a single, elegant abstraction: the label.

The architecture's strength lies in its layered design. Labels encode forwarding intent. Label stacking separates transport from services. Traffic engineering decouples path selection from shortest-path routing. Each layer solves a distinct problem without burdening the others.

Even as SD-WAN and segment routing reshape carrier networks, the principles MPLS established—forwarding abstraction, explicit path engineering, and infrastructure-level service isolation—remain foundational to how large-scale networks are designed and operated.