March 1, 2023
As part of Green House Data’s recent acquisition of FiberCloud, the company gained three data centers in the state of Washington, each connected via redundant fiber.
These network links are further improved through Multiprotocol Label Switching (MPLS), a network technology that improves data center Quality of Service by giving administrators finer control over traffic shaping and speeding delivery of data packets to endpoints.
MPLS is a network protocol that increases speeds by forwarding packets along pre-established paths based on short labels. Most packets are handled at the switching level rather than being passed up to the routing level, Layer 3, for a full route lookup at every hop. (Networks are often described with the seven-layer OSI model, from the physical bits of Layer 1 up to the application data of Layer 7.)
The ingress router, where the data enters the network, pushes a label onto the packet header (forming the label stack), and this label is stripped at the egress router when the packet exits the network.
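For readers curious about the mechanics, here is a minimal Python sketch of the 4-byte label stack entry standardized for MPLS in RFC 3032: a 20-bit label, a 3-bit traffic class field, a 1-bit bottom-of-stack flag, and an 8-bit TTL. The label value and TTL below are arbitrary examples, not values from any particular network.

```python
import struct

def pack_label_entry(label: int, tc: int, bottom_of_stack: bool, ttl: int) -> bytes:
    """Pack one 32-bit MPLS label stack entry (RFC 3032):
    20-bit label, 3-bit traffic class, 1-bit bottom-of-stack flag, 8-bit TTL."""
    word = ((label & 0xFFFFF) << 12) | ((tc & 0x7) << 9) \
           | ((1 if bottom_of_stack else 0) << 8) | (ttl & 0xFF)
    return struct.pack("!I", word)

def unpack_label_entry(entry: bytes) -> dict:
    """Reverse of pack_label_entry: split the 32-bit word back into its fields."""
    (word,) = struct.unpack("!I", entry)
    return {
        "label": word >> 12,
        "tc": (word >> 9) & 0x7,
        "bottom_of_stack": bool((word >> 8) & 0x1),
        "ttl": word & 0xFF,
    }

# The ingress router prepends an entry like this to the packet; the egress
# router removes it before normal IP forwarding resumes.
entry = pack_label_entry(label=1024, tc=5, bottom_of_stack=True, ttl=64)
print(unpack_label_entry(entry))
# {'label': 1024, 'tc': 5, 'bottom_of_stack': True, 'ttl': 64}
```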
MPLS is sometimes referred to as a "Layer 2.5" protocol, because it doesn't fit neatly into either the data link layer (Layer 2) or the network layer (Layer 3).
Because each router on the path reads a short label instead of a full-length network address, it doesn't have to look up the destination in a routing table at every hop. Labels also mean packets can be carried regardless of the underlying network protocol, reducing dependence on any particular link technology.
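To make that contrast concrete, the sketch below compares a conventional longest-prefix-match lookup over a routing table with the single exact-match lookup an MPLS router performs on an incoming label. The table contents, interface names, and label values are invented for illustration only.

```python
import ipaddress

# Conventional IP forwarding: longest-prefix match over the routing table.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.20.0.0/16"): "eth2",
    ipaddress.ip_network("0.0.0.0/0"): "eth0",
}

def lpm_lookup(dst: str) -> str:
    """Find every prefix that contains dst, then pick the most specific one."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routing_table if addr in net]
    return routing_table[max(matches, key=lambda net: net.prefixlen)]

# MPLS forwarding: one exact-match lookup on the incoming label, which also
# tells the router which label to swap in for the next hop.
label_forwarding_table = {
    1024: ("eth2", 2048),   # (outgoing interface, outgoing label)
    1025: ("eth1", 2049),
}

def label_lookup(label: int) -> tuple:
    return label_forwarding_table[label]

print(lpm_lookup("10.20.1.5"))   # 'eth2' after checking every candidate prefix
print(label_lookup(1024))        # ('eth2', 2048) from a single exact match
```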
MPLS labels can even be stacked: the top label is used to deliver the packet to a destination, where that label is stripped and the next label on the stack guides forwarding to the next destination, and so forth.
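A small sketch of that stacking behavior, with the label stack modeled as a simple list (the label values and hop names are hypothetical):

```python
# Packet carrying a stack of labels, top of stack first: the outer label
# carries it across the first MPLS domain; once popped, the inner label takes over.
packet = {"label_stack": [2001, 1001], "payload": b"..."}

def pop_label(packet: dict, hop_name: str) -> None:
    """Pop the top label at a domain boundary and report what guides forwarding next."""
    popped = packet["label_stack"].pop(0)
    remaining = packet["label_stack"]
    next_step = (f"forwarding on label {remaining[0]}" if remaining
                 else "label stack empty, resuming IP forwarding")
    print(f"{hop_name}: popped label {popped}, {next_step}")

pop_label(packet, "egress of outer LSP")   # popped 2001, now forwarding on 1001
pop_label(packet, "egress of inner LSP")   # popped 1001, stack empty
```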
Different label-switched paths (LSPs) can be used to shape network traffic, so administrators can control the flow of data on the network via MPLS. Pre-defined paths can be set with thresholds for latency, jitter, packet loss, and downtime, helping providers meet agreed-upon Service Level Agreements (SLAs).
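As a rough illustration of how pre-defined paths map to service classes, the sketch below pairs hypothetical LSPs with latency, jitter, and loss targets and picks one per traffic class. None of the names or thresholds come from an actual deployment; real traffic engineering lives in router configuration and path-computation tools rather than application code.

```python
# Illustrative only: pre-defined LSPs with the SLA targets an operator might attach.
lsp_catalog = {
    "low-latency": {"labels": [3001], "max_latency_ms": 20,
                    "max_jitter_ms": 5, "max_loss_pct": 0.1},
    "best-effort": {"labels": [3002], "max_latency_ms": 200,
                    "max_jitter_ms": 50, "max_loss_pct": 1.0},
}

# Map traffic classes to paths so latency-sensitive services get the tighter SLA.
class_to_lsp = {"voice": "low-latency", "video": "low-latency", "bulk": "best-effort"}

def select_lsp(traffic_class: str) -> dict:
    """Return the pre-defined LSP whose SLA targets fit this class of traffic."""
    return lsp_catalog[class_to_lsp.get(traffic_class, "best-effort")]

print(select_lsp("voice"))    # the low-latency LSP and its targets
print(select_lsp("backup"))   # unknown classes fall back to best-effort
```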
The three primary advantages of MPLS in a data center service provider environment are traffic engineering, which controls how traffic is routed through the network, manages capacity, and prioritizes some services over others; the ability to transport data and IP routing over the same infrastructure; and improved network resiliency.