TCP congestion is a giant topic with lots of neat stuff to investigate. I think it’s become a hindrance to my basic TCP congestion control comparison work, since I could basically spend the rest of my time at Hacker School doing a literature review. So, to clear my head and my browser, I’ve put together a list of interesting links.
Remy: Remy automatically generates TCP congestion algorithms using a combination of:
- What the designer knows about the network in advance, like wire speed.
- A model of the kind of work the network does.
- The goal of the end algorithm — what to optimize for.
It looks really interesting and claims to generate algorithms that perform better than any existing alternatives!
Sprout: A transport protocol that uses packet delay to determine congestion. Seems to be optimized for wireless networks where changes are sudden. Tries to predict packet loss instead of reacting to it. Another MIT project, alongside Remy.
ns3 and ns2: Network simulators! These look like they could be a better approach to testing than messing with tc and shaping traffic locally. This page on making an application with ns seems especially interesting.
Fairness Measure: interesting Wikipedia page on measuring fairness, with a specific section on TCP congestion.
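The measure that page leads with is Jain's fairness index, which is simple enough to sketch in a few lines (the throughput numbers below are made up for illustration):

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Ranges from 1/n (one flow hogs everything) to 1.0 (perfectly fair)."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Four flows sharing a link equally are perfectly fair:
print(jain_fairness([25, 25, 25, 25]))  # 1.0

# One flow starving the others drags the index toward 1/n = 0.25:
print(jain_fairness([97, 1, 1, 1]))
```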
Network Namespaces and Traffic Control: Awesome post by a former Hacker Schooler on network namespaces. I laughed here:
> So first let’s talk about what a network namespace is. A network namespace is a feature provided by the `ip-netns` tool (which is run as `ip netns` but looked up as `man ip-netns` just to confuse everyone), which is part of the seemingly-poorly-understood `ip` suite of tools that was supposed to deprecate `route` et al. seven years ago, but for whatever reason most of the Linux community is not only still using `ifconfig` but is teaching newcomers to use it. Anyway that’s not the point.
Disappointingly true, even coming from one of those people who still runs ifconfig. This could also be useful in some testing.
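The rough shape of what that post walks through looks like this. A sketch only, assuming root and a modern iproute2; the device names, addresses, and netem numbers are made up:

```shell
# Create a namespace and a veth pair connecting it to the host (requires root)
ip netns add test-ns
ip link add veth0 type veth peer name veth1
ip link set veth1 netns test-ns

# Bring both ends up and give them addresses
ip addr add 10.0.0.1/24 dev veth0
ip link set veth0 up
ip netns exec test-ns ip addr add 10.0.0.2/24 dev veth1
ip netns exec test-ns ip link set veth1 up

# Shape traffic inside the namespace with tc: add delay and limit rate
ip netns exec test-ns tc qdisc add dev veth1 root netem delay 50ms rate 1mbit
```

The nice part is that the shaping lives entirely inside the namespace, so you can run a sender on the host and a receiver under `ip netns exec` without touching your real interfaces.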
LEDBAT: super interesting congestion control algorithm that aims to take up all of the available bandwidth, but quickly backs off in the presence of other TCP flows. Used by Apple and BitTorrent.
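The core of LEDBAT's delay response can be sketched as a per-ACK window update, loosely following RFC 6817. This is a toy, not the real implementation; the constants and units are simplified assumptions:

```python
TARGET = 100.0  # ms: RFC 6817's target for added queuing delay
GAIN = 1.0      # how aggressively to react (simplified)

def ledbat_cwnd_update(cwnd, base_delay, current_delay):
    """One per-ACK window update. Queuing delay is estimated as the
    current one-way delay minus the minimum (base) delay ever observed.
    Below TARGET the window grows; above it, LEDBAT backs off in
    proportion to how far past the target the delay is."""
    queuing_delay = current_delay - base_delay
    off_target = (TARGET - queuing_delay) / TARGET
    return max(1.0, cwnd + GAIN * off_target / cwnd)

cwnd = 10.0
# Link is idle: measured delay equals base delay, so the window grows a bit
cwnd = ledbat_cwnd_update(cwnd, base_delay=20.0, current_delay=20.0)
print(cwnd)  # 10.1

# A competing TCP flow fills the queue: delay jumps, LEDBAT shrinks its window
cwnd = ledbat_cwnd_update(cwnd, base_delay=20.0, current_delay=320.0)
print(cwnd)
```

That proportional back-off is what makes it a "scavenger" protocol: a loss-based TCP flow keeps pushing until the queue fills, while LEDBAT yields as soon as it sees the queue building.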
Micro Transport Protocol: also from BitTorrent, a lightweight TCP-like protocol built on top of UDP that implements LEDBAT and does NAT traversal via UDP hole punching.
github.com/ra1nb0w/tcp_evaluation: appears to be me in the future. A collection of bash scripts that do what I was talking about doing in my last post. This is what I’d try to use when I come back to the project.
Reproducing Network Research: really awesome Stanford blog where students attempt to reproduce published networking papers.
CoDel: CoDel is a network scheduling algorithm. Finding this kind of blew my mind, because it’s totally separate from TCP congestion control. There are other scheduling algorithms. There’s so much to say here. It looks like you can also plug-and-play these algorithms through the use of tc. There’s some really interesting discussion of bufferbloat too. I think finding this officially slowed me down on the TCP comparison — there are so many variables to consider that I just went reading instead of doing. It’s very fascinating.
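The plug-and-play part is a tc one-liner (something like `tc qdisc replace dev eth0 root codel`), but the decision logic itself is small enough to sketch. Here's a toy version of CoDel's dequeue-time check, using the standard 5ms/100ms constants but with everything else simplified from the real algorithm:

```python
import math

TARGET = 5.0      # ms: acceptable standing queue delay
INTERVAL = 100.0  # ms: how long delay must stay high before dropping

class CoDelSketch:
    """Toy CoDel drop decision, called for each packet at dequeue time.
    Real CoDel tracks more state; this shows only the core idea: drop
    once sojourn time has exceeded TARGET for a full INTERVAL, then
    drop more often (interval / sqrt(count)) until the queue drains."""
    def __init__(self):
        self.first_above_time = None
        self.drop_next = None
        self.count = 0

    def should_drop(self, now, sojourn_time):
        if sojourn_time < TARGET:
            # Queue drained below target: leave the dropping state
            self.first_above_time = None
            self.drop_next = None
            self.count = 0
            return False
        if self.first_above_time is None:
            self.first_above_time = now
            return False
        if now - self.first_above_time < INTERVAL and self.drop_next is None:
            return False
        # Above target for a full interval: drop, and schedule the next
        # drop sooner each time (the control-law part of CoDel)
        if self.drop_next is None or now >= self.drop_next:
            self.count += 1
            self.drop_next = now + INTERVAL / math.sqrt(self.count)
            return True
        return False
```

The key design choice, and why it pairs so well with TCP congestion control, is that it keys off how long packets sit in the queue rather than how full the queue is, which is exactly the bufferbloat symptom.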
TCP Incast Problem: describes what happens when a receiver gets lots of traffic at once and the total is more than the switch can buffer. The common example is a client request that many servers each answer a part of: the switch’s buffers toward the client fill up, packets are dropped, and the failed senders have to wait out the TCP retransmission timeout (commonly a 200ms minimum) before resending. The client can’t make its next request until all the parts are combined. This can be especially problematic in distributed filesystems and map-reduce systems.
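The arithmetic is what makes this so painful: in a datacenter the transfer itself takes a millisecond or two, so one 200ms timeout costs a hundred transfers' worth of time. A back-of-the-envelope sketch, with all numbers purely illustrative:

```python
# If even one server's piece is dropped at the switch, the whole
# scatter/gather request stalls for a full retransmission timeout.
RTO_MS = 200.0      # typical minimum TCP retransmission timeout
TRANSFER_MS = 2.0   # time for all servers to deliver their pieces

def total_time_ms(n_requests, stall_probability):
    """Expected total time for a sequence of scatter/gather requests,
    where a stalled request costs one full RTO before the resend."""
    per_request = TRANSFER_MS + stall_probability * RTO_MS
    return n_requests * per_request

print(total_time_ms(100, 0.0))  # no drops: 200.0 ms for 100 requests
print(total_time_ms(100, 0.5))  # stalls half the time: 10200.0 ms
```

So a 50% incast stall rate doesn't halve your throughput, it cuts it by roughly 50x, because each stall costs an RTO rather than a retransfer.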
Enabling High Performance Data Transfers: Tons of notes on tuning TCP. Interesting quote:
> To put these data rates into perspective, consider this: 100 Mb/s is more than 10 megabytes in one second, or 600 megabytes (an entire CD-R image) in one minute. Clearly very few people see these data rates. However, some experts can get very high data rates (for example see the Land Speed Records). Why? The biggest strength of the Internet is the way in which the TCP/IP “hourglass” hides the details of the network from the application and vice versa. An unfortunate but direct consequence of the hourglass is that it also hides all flaws everywhere. Network performance debugging (often euphemistically called “TCP tuning”) is extremely difficult because nearly all flaws have exactly the same symptom: reduced performance. For example, insufficient TCP buffer space is indistinguishable from excess packet loss (silently repaired by TCP retransmissions) because both flaws just slow the application, without any specific identifying symptoms.
packetdrill: a frustratingly cool looking tool for scripting network stack testing. Amazing.
The Ultimate SO_LINGER page and TCP is unreliable are two interesting posts about Linux’s implementation of sockets and problems verifying that applications have received the data a sender has written.
Congestion Exposure: an IETF working group on exposing congestion information at the IP level, which would let network devices (and operators) see which flows are experiencing congestion. This was a confusing read. I found it helpful to go into the IETF archives for the minutes of the working group’s first meetings. At that point the idea was brand new, and the authors had to work harder to explain the reasoning behind the RFC, which made some of the pieces more accessible for me.
Datacenter TCP: the formalization of DCTCP, a modification to TCP for datacenter networks. With explicit congestion notification (ECN), switches can mark packets to signal that congestion exists, and the receiver echoes those marks back to the sender. Standard TCP treats a mark as a binary signal; DCTCP instead estimates the fraction of packets being marked, so the sender can react to the extent of the congestion rather than a simple yes/no.
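The extent-of-congestion estimate boils down to two small formulas from the DCTCP paper; a sketch, with g set to the paper's suggested 1/16 and everything else simplified:

```python
G = 1.0 / 16  # smoothing gain suggested in the DCTCP paper

def update_alpha(alpha, marked, total):
    """alpha <- (1 - g) * alpha + g * F, where F is the fraction of
    ACKed packets in the last window that carried an ECN mark."""
    f = marked / total
    return (1 - G) * alpha + G * f

def reduce_cwnd(cwnd, alpha):
    """Instead of TCP's blunt halving on congestion, cut the window in
    proportion to how much congestion was seen: cwnd * (1 - alpha/2)."""
    return cwnd * (1 - alpha / 2)

alpha = update_alpha(0.0, marked=8, total=16)  # half the window was marked
print(alpha)  # 0.03125

print(reduce_cwnd(100.0, 1.0))    # persistent congestion: halve, like TCP
print(reduce_cwnd(100.0, alpha))  # mild congestion: only a small cut
```

So under persistent heavy marking alpha climbs toward 1 and DCTCP degrades gracefully into ordinary TCP halving, but a brief blip of congestion costs only a small window reduction.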