End-to-End

The end-to-end principle is a classic design principle in computer networking, first explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark. It states that in a general-purpose network, application-specific functions ought to reside in the end hosts rather than in intermediary nodes, provided that those functions can be implemented "completely and correctly" in the end hosts.

The principle traces back to Paul Baran's work in the 1960s on obtaining reliability from unreliable parts. The basic intuition is that the payoff from adding functions to a simple network diminishes quickly, especially where the end hosts must re-implement those functions themselves to guarantee completeness and correctness. Furthermore, because implementing any specific function incurs resource penalties whether or not the function is used, implementing it in the network distributes those penalties among all clients, including those that never use it.

The canonical example is arbitrarily reliable file transfer between two end-points in a distributed network of nontrivial size. The only way the two end-points can obtain a completely reliable transfer is by transmitting and acknowledging a checksum over the entire data stream; lesser in-network checksum and acknowledgement (ACK/NACK) protocols are justified only as performance optimizations, useful to the vast majority of clients but insufficient to fulfil the reliability requirement of this particular application. The thorough checksum is therefore computed at the end-points, and the network keeps a relatively low level of complexity and reasonable performance for all clients.

In debates about network neutrality, a common interpretation of the end-to-end principle is that it implies a neutral, or "dumb", network.
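To make the file-transfer example concrete, here is a minimal Python sketch of an end-to-end reliability check. All names are hypothetical, and the lossy network path is simulated by a function that occasionally corrupts a byte, rather than by a real transport: the sending end host computes a checksum over the entire payload, the receiving end host verifies it, and the whole transfer is retried on mismatch. Any checks performed inside the channel itself would be optimizations only; correctness rests solely on the verification at the two ends.

    import hashlib
    import os
    import random

    def sha256_of(data: bytes) -> str:
        """End-to-end checksum, computed over the complete payload."""
        return hashlib.sha256(data).hexdigest()

    def unreliable_copy(data: bytes, corruption_rate: float = 0.1) -> bytes:
        """Stand-in for the network path (an assumption of this sketch):
        occasionally flips a byte in transit. Hop-by-hop checks inside the
        network could reduce, but never fully eliminate, such corruption."""
        out = bytearray(data)
        if out and random.random() < corruption_rate:
            i = random.randrange(len(out))
            out[i] ^= 0xFF
        return bytes(out)

    def reliable_transfer(data: bytes, max_attempts: int = 10) -> bytes:
        """End-to-end reliable transfer: the sender's digest accompanies the
        data, the receiver recomputes and compares it, and a mismatch (NACK)
        triggers a full retransmission. Only the two end hosts take part in
        the correctness check."""
        expected = sha256_of(data)                 # computed at the sending end host
        for attempt in range(1, max_attempts + 1):
            received = unreliable_copy(data)
            if sha256_of(received) == expected:    # verified at the receiving end host
                return received                    # ACK: transfer is known-good
            # NACK: corruption slipped past the channel; retry end to end
        raise IOError(f"transfer failed after {max_attempts} attempts")

    if __name__ == "__main__":
        payload = os.urandom(1 << 16)
        assert reliable_transfer(payload) == payload

Note where the responsibilities sit: the simulated channel knows nothing about checksums, and the retry loop lives entirely in the end hosts, which is exactly the division of labour the principle prescribes.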