The Transmission Control Protocol was engineered for a world of copper wires and fiber optics, where round-trip times are measured in milliseconds and packet loss almost always signals network congestion. These assumptions, baked into TCP's congestion control algorithms over four decades of iterative refinement, create a fundamental impedance mismatch when packets must traverse roughly 36,000 kilometers to geostationary orbit and back. The protocol interprets the inherent physics of space communication as pathological network behavior, triggering conservative responses that strangle throughput on links capable of delivering far more.
This mismatch extends beyond mere inefficiency. Modern satellite constellations—whether geostationary workhorses or proliferating low-earth-orbit meshes—represent critical infrastructure for global connectivity, maritime operations, aviation networks, and disaster response communications. When TCP's congestion control algorithms systematically underutilize these expensive space assets, the economic and operational consequences compound across every application layer built atop the transport.
The networking research community has responded with decades of workarounds, protocol modifications, and increasingly, clean-slate transport designs that abandon TCP's terrestrial assumptions entirely. Understanding why standard congestion control fails requires examining the precise mechanisms of that failure, evaluating the proxy architectures that dominate current deployments, and assessing emerging transport protocols designed from first principles for the unique characteristics of space-based communication. The solutions reveal not just satellite-specific engineering, but deeper insights into how transport protocol assumptions shape—and constrain—network capability.
Bandwidth-Delay Product Misconceptions
TCP's congestion control operates on a deceptively simple feedback loop: grow the transmission window until packets drop, interpret drops as congestion signals, reduce the window aggressively, then probe upward again. This additive-increase-multiplicative-decrease rhythm evolved in networks where round-trip times rarely exceeded tens of milliseconds and packet loss genuinely indicated router queue overflow. The bandwidth-delay product—the volume of data that must remain in flight to saturate a link—stayed manageable at these timescales, allowing TCP windows to reach optimal sizes within seconds.
Geostationary satellite links shatter these assumptions with round-trip latencies approaching 600 milliseconds. A 100 Mbps GEO link requires approximately 7.5 megabytes continuously in flight to achieve full utilization. TCP's slow-start algorithm, which doubles the congestion window each round-trip time, requires roughly 16 RTTs to reach this window size from a typical initial value—nearly 10 seconds of suboptimal throughput on every new connection. For short-lived HTTP transactions, the connection often terminates before reaching steady state.
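The arithmetic is worth making concrete. The sketch below assumes a 1460-byte maximum segment size and ignores handshake round trips; the roughly-16-RTT figure corresponds to a ten-segment initial window with delayed ACKs slowing growth to about 1.5x per round trip, an assumption of this illustration rather than a measured value.

```python
# Back-of-the-envelope numbers for a 100 Mbps GEO link with a 600 ms RTT.
# MSS, initial window, and the delayed-ACK growth factor are illustrative
# assumptions; real stacks differ in all three.
import math

LINK_BPS = 100_000_000   # link capacity, bits per second
RTT_S = 0.600            # geostationary round-trip time, seconds
MSS_BYTES = 1460         # typical Ethernet-derived maximum segment size

# Bandwidth-delay product: bytes that must stay in flight to fill the pipe.
bdp_bytes = (LINK_BPS / 8) * RTT_S
print(f"BDP: {bdp_bytes / 1e6:.1f} MB")                     # ~7.5 MB

def slow_start_rtts(initial_segments: int, growth: float) -> int:
    """Round trips of slow-start growth before cwnd reaches the BDP."""
    initial_bytes = initial_segments * MSS_BYTES
    return math.ceil(math.log(bdp_bytes / initial_bytes, growth))

for growth, label in ((2.0, "ACK per segment"), (1.5, "delayed ACKs")):
    rtts = slow_start_rtts(10, growth)
    print(f"10-segment start, {label:>15}: ~{rtts} RTTs "
          f"(~{rtts * RTT_S:.1f} s) to fill the link")
```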
The latency problem compounds with TCP's loss interpretation. Satellite links exhibit packet loss from atmospheric attenuation, handoff events, and link-layer error correction failures—none of which indicate congestion in the terrestrial sense. When TCP observes these losses, algorithms like Reno and CUBIC halve the congestion window, interpreting weather-induced bit errors as network overload. The window then requires another lengthy climb back to optimal size, creating sawtooth throughput patterns that chronically waste available capacity.
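A toy simulation makes the sawtooth visible. The Reno-style sketch below grows the window by one segment per round trip and halves it on any loss, with losses drawn at random to stand in for link-layer corruption rather than queue overflow; the loss rate and duration are illustrative assumptions, and slow start is omitted.

```python
# Reno-style AIMD under random, non-congestive loss on a GEO-scale pipe.
# Slow start is omitted and the loss rate is an illustrative assumption.
import random

BDP_SEGMENTS = 5137    # ~7.5 MB bandwidth-delay product in 1460-byte segments
LOSS_RATE = 1e-5       # per-segment probability of link-layer corruption
ROUNDS = 10_000        # round trips to simulate (~100 minutes at 600 ms RTT)

random.seed(1)
cwnd = 10.0            # congestion window, in segments
total = 0.0

for _ in range(ROUNDS):
    total += min(cwnd, BDP_SEGMENTS)
    # Chance that at least one of this round's cwnd segments was corrupted.
    if random.random() < 1 - (1 - LOSS_RATE) ** cwnd:
        cwnd = max(cwnd / 2, 1.0)   # multiplicative decrease on "congestion"
    else:
        cwnd += 1                    # additive increase, one segment per RTT

avg = total / ROUNDS
print(f"average window: {avg:.0f} segments ({avg / BDP_SEGMENTS:.0%} of the BDP)")
```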
Even loss-based algorithms designed for high-bandwidth networks struggle with satellite characteristics. CUBIC's cubic window-growth function was calibrated for datacenter and backbone latencies; its behavior at 600 ms RTT produces oscillations that never stabilize at optimal throughput. BBR's model-based approach fares somewhat better by estimating bandwidth and RTT independently, but its probing phases still misinterpret satellite-specific loss patterns and its RTT measurements can be corrupted by variable queuing in satellite ground infrastructure.
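For reference, CUBIC's post-loss recovery follows the cubic curve from RFC 8312, W(t) = C(t - K)^3 + W_max with K = cbrt(W_max(1 - beta)/C). The sketch below evaluates that curve for a BDP-sized window, using the illustrative figures from above, and shows how few feedback round trips fit inside the recovery interval at GEO latency.

```python
# CUBIC's post-loss window growth per RFC 8312: W(t) = C*(t - K)^3 + W_max,
# with K = cbrt(W_max*(1 - beta)/C), C = 0.4, and beta = 0.7. Window sizes are
# in segments; the 5137-segment W_max is the GEO BDP estimated above.
C = 0.4      # RFC 8312 scaling constant
BETA = 0.7   # multiplicative decrease factor

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (segments) t seconds after a loss at window w_max."""
    k = ((w_max * (1 - BETA)) / C) ** (1 / 3)
    return C * (t - k) ** 3 + w_max

W_MAX = 5137
k = ((W_MAX * (1 - BETA)) / C) ** (1 / 3)
print(f"window right after loss: {cubic_window(0.0, W_MAX):.0f} segments")
print(f"time to regain W_max: {k:.1f} s")   # growth runs on wall-clock time
for rtt in (0.030, 0.600):
    print(f"  RTT {rtt*1000:5.0f} ms: {k / rtt:4.0f} feedback round trips "
          f"during recovery")
```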
The fundamental issue transcends algorithm tuning. TCP's entire feedback architecture assumes that round-trip times provide meaningful congestion signals on human-perceptible timescales. When light-speed physics imposes half-second feedback loops, the control system becomes sluggish, oscillatory, and unable to track the actual available capacity of space links. No parameter adjustment within the existing TCP framework fully resolves a mismatch rooted in the protocol's core assumptions about the relationship between delay, loss, and congestion.
Takeaway: TCP's congestion control algorithms interpret satellite latency and link-layer losses as congestion signals, causing chronic underutilization that no parameter tuning can fully resolve because the feedback architecture itself assumes terrestrial timescales.
Performance Enhancing Proxies
Rather than replacing TCP across entire network stacks, the satellite industry converged on an architectural compromise: Performance Enhancing Proxies that terminate TCP connections at the satellite link boundaries and implement specialized protocols across the space segment. This split-connection approach allows terrestrial endpoints to communicate using unmodified TCP while hiding the satellite link's pathological characteristics behind protocol translation layers.
PEP architectures typically deploy proxy nodes at satellite ground stations or hub facilities, intercepting TCP connections and acknowledging packets locally rather than waiting for end-to-end confirmation. The satellite segment then carries data using proprietary or specialized protocols optimized for high-latency, lossy links—often with aggressive forward error correction, selective acknowledgment, and window sizes scaled to the actual bandwidth-delay product. The receiving PEP reconstructs the TCP connection to the destination, maintaining the illusion of standard end-to-end semantics.
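A stripped-down illustration of the split itself, assuming nothing about any vendor's implementation: the relay below terminates the client's TCP connection (so the client's stack sees proxy-local round-trip times) and forwards bytes over a second connection standing in for the satellite-segment protocol. The addresses, ports, and plain-TCP second leg are placeholders; a real PEP replaces that leg with forward error correction, selective acknowledgment, and BDP-scaled windows.

```python
# Minimal split-connection relay in the spirit of a PEP, using only the
# standard library. The addresses below are illustrative placeholders.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 8080)        # terrestrial-facing side of the proxy
SATELLITE_ADDR = ("127.0.0.1", 9090)   # stand-in for the satellite segment

def pump(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one direction until the source side closes."""
    while True:
        data = src.recv(65536)
        if not data:
            break
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)   # propagate end-of-stream downstream
    except OSError:
        pass

def handle(client: socket.socket) -> None:
    # Terminating the client connection here means the client's TCP stack
    # sees RTTs to the proxy, not to the far end of the satellite link.
    upstream = socket.create_connection(SATELLITE_ADDR)
    t = threading.Thread(target=pump, args=(client, upstream), daemon=True)
    t.start()
    pump(upstream, client)
    t.join()
    client.close()
    upstream.close()

def main() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(LISTEN_ADDR)
        server.listen()
        while True:
            client, _ = server.accept()
            threading.Thread(target=handle, args=(client,), daemon=True).start()

if __name__ == "__main__":
    main()
```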
This approach delivers substantial throughput improvements, often achieving 80-90% link utilization compared to 10-20% for unproxied TCP on GEO links. Commercial PEP implementations from vendors like Hughes, Comtech, and Gilat have become standard infrastructure in satellite networks, deployed transparently to end users who perceive only improved performance. The economic case is compelling: PEPs extract dramatically more value from expensive satellite bandwidth without requiring changes to billions of deployed TCP implementations.
The architectural compromise carries significant costs. PEPs break TCP's end-to-end semantics, intercepting connections in ways that can confuse applications expecting genuine acknowledgments from destination hosts. More critically, PEPs sit poorly with modern encryption. TLS 1.3 hides application payloads from any content-level optimization, VPN tunnels hide the TCP headers a split-connection proxy needs to see, and QUIC goes further still, encrypting its transport headers so that a proxy cannot identify connection state or manipulate acknowledgments at all. As encrypted transport becomes ubiquitous, the PEP architecture faces obsolescence, unable to optimize connections it cannot parse.
Satellite operators have responded with various encrypted-traffic strategies, from encouraging users to terminate VPNs at the PEP boundary to developing application-layer proxies that can optimize HTTP/3 traffic after TLS termination. None of these approaches fully preserve both encryption and optimization. The PEP paradigm, however successful in the plaintext era, represents a transitional architecture increasingly constrained by the security requirements of modern internet protocols.
Takeaway: Performance Enhancing Proxies have dominated satellite network optimization for decades by hiding link characteristics from TCP endpoints, but their fundamental incompatibility with encrypted transport protocols threatens their long-term viability.
Delay-Tolerant Transport
The limitations of both modified TCP and proxy architectures have driven research into transport protocols designed from first principles for high-latency networks. These delay-tolerant transport layers abandon TCP's assumption that timely feedback enables effective congestion control, instead building protocols around the reality of multi-hundred-millisecond round trips and non-congestive loss patterns.
QUIC's design offers partial improvements through encrypted transport with more sophisticated loss recovery and connection migration capabilities. Google's original QUIC implementation and the subsequent IETF standardization included mechanisms like stream multiplexing within a single connection, which prevents head-of-line blocking across streams, and more granular acknowledgment frames that convey richer information about received packets. Several research groups have developed QUIC variants specifically tuned for satellite characteristics, adjusting pacing algorithms and loss detection thresholds to accommodate GEO latencies without triggering spurious retransmissions.
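RFC 9002's time-threshold loss detection gives one concrete knob: a packet is declared lost once it is older than 9/8 of the larger of the smoothed and latest RTT samples. The sketch below compares that interval on a terrestrial and a GEO path, and shows the kind of larger multiplier a satellite-tuned variant might choose in order to tolerate RTT jitter; the 1.5 multiplier is an illustrative assumption, not an IETF-specified value.

```python
# RFC 9002 time-threshold loss detection: a packet is lost once it is older
# than kTimeThreshold * max(smoothed_rtt, latest_rtt), floored at the timer
# granularity. The "tuned" threshold below is a hypothetical satellite value.
K_TIME_THRESHOLD = 9 / 8       # RFC 9002 default multiplier
K_GRANULARITY = 0.001          # 1 ms timer granularity floor

def loss_time(smoothed_rtt: float, latest_rtt: float,
              threshold: float = K_TIME_THRESHOLD) -> float:
    """Seconds a packet may stay unacknowledged before being declared lost."""
    return max(threshold * max(smoothed_rtt, latest_rtt), K_GRANULARITY)

paths = {"terrestrial": 0.030, "GEO": 0.600}
for name, rtt in paths.items():
    default = loss_time(rtt, rtt)
    tuned = loss_time(rtt, rtt, threshold=1.5)   # hypothetical satellite tuning
    print(f"{name:>11}: default {default*1000:6.1f} ms, "
          f"tuned {tuned*1000:6.1f} ms before retransmission")
```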
More radical approaches reject the connected transport paradigm entirely. The Delay-Tolerant Networking architecture, originally developed for interplanetary communication, uses store-and-forward bundle protocols that treat extended delays as normal operating conditions rather than error states. While DTN's design targets latencies measured in minutes to hours—appropriate for Mars missions—its concepts have influenced satellite transport research, particularly for networks with intermittent connectivity or highly variable delay characteristics.
Specialized satellite transport protocols like SCPS-TP (Space Communications Protocol Standards - Transport Protocol) incorporate satellite-specific mechanisms including selective negative acknowledgment, explicit congestion notification optimized for satellite link characteristics, and Vegas-style delay-based congestion inference that distinguishes queuing delay from propagation delay. These protocols achieve near-optimal throughput on characterized satellite links but require deployment on both endpoints, limiting their applicability to closed satellite network environments.
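The delay-based piece is easy to sketch. In the Vegas-style check below, the minimum observed RTT stands in for pure propagation delay, and only the excess over that baseline is read as queuing; the alpha and beta thresholds are the classic Vegas values in segments, and the rest is an illustration of the idea rather than SCPS-TP's wire behavior.

```python
# Vegas-style delay-based congestion inference: treat the minimum observed RTT
# as propagation delay and interpret only the excess as queuing. Thresholds
# follow classic Vegas; window sizes and RTTs are illustrative assumptions.
ALPHA = 2   # fewer than this many segments queued -> grow the window
BETA = 4    # more than this many segments queued -> shrink the window

def vegas_adjust(cwnd: float, base_rtt: float, current_rtt: float) -> float:
    """Return the next congestion window, in segments."""
    expected = cwnd / base_rtt               # segments/s with no queuing
    actual = cwnd / current_rtt              # segments/s actually achieved
    queued = (expected - actual) * base_rtt  # estimated segments sitting in queues
    if queued < ALPHA:
        return cwnd + 1                      # propagation-dominated: safe to grow
    if queued > BETA:
        return cwnd - 1                      # queue building: back off gently
    return cwnd                              # in the target band: hold steady

# A 600 ms propagation delay with only 1 ms of queuing is not congestion, so
# the window keeps growing, where a loss-based sender would halve it on any
# corrupted packet.
print(vegas_adjust(cwnd=1000, base_rtt=0.600, current_rtt=0.601))
```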
The most promising near-term developments involve QUIC extensions and new congestion control algorithms designed for explicit satellite deployment. Research prototypes combining BBRv2's model-based control with satellite-specific probing schedules have demonstrated substantial improvements over baseline QUIC on emulated GEO links. As QUIC becomes the dominant transport for HTTP/3 traffic, satellite-optimized QUIC variants may finally offer a path to high-performance encrypted transport that works with, rather than against, the physics of space communication.
Takeaway: Emerging transport protocols designed for high-latency networks—from satellite-optimized QUIC variants to delay-tolerant networking architectures—offer paths beyond TCP's terrestrial assumptions, though widespread deployment remains constrained by endpoint compatibility requirements.
The failure of TCP congestion control on satellite links illustrates a broader principle in protocol design: assumptions embedded at standardization become constraints that persist across decades of network evolution. TCP's architects made reasonable choices for 1980s internetworks, but those choices now impose measurable costs on communication systems they never anticipated.
Current solutions span a pragmatic spectrum from parameter tuning through proxy architectures to clean-slate transport designs. Each approach trades different costs against different benefits—compatibility against performance, encryption against optimization, deployment complexity against throughput gains. No single solution dominates across all satellite network scenarios.
As LEO constellations proliferate and satellite bandwidth becomes increasingly central to global connectivity, the pressure for better solutions intensifies. The transport protocols that emerge from current research will shape not just satellite network performance, but our understanding of how to build reliable communication across any network where terrestrial assumptions fail to hold.