[vox-tech] tcp tuning with wireshark?

Bill Broadley bill at broadley.org
Fri Feb 28 20:27:13 PST 2014


On 02/26/2014 11:37 PM, Nick Schmalenberger wrote:
> I need to increase throughput for long lived tcp connections over
> ipsec over wan between Amazon in Ireland and a Level3 gigabit
> link in Ashburn Virginia (currently running at about 20Mbps).

Do you mean that the link itself is 20 Mbit/s?  Or that the throughput
you are achieving over it is 20 Mbit/s?

What is the largest MTU supported across that link?

> I've read various articles saying to enlarge the buffers and make
> various other kernel tweaks. Some say to base it on the bandwidth
> delay product, some say just on the link speed, and some say
> don't bother, Linux does all that automatically now. A lot of it
> seems random.

Heh, well, there are quite a few variables to consider.  But in
general, bandwidth is much harder to fully utilize over a high-latency
link, so the bandwidth-delay product (BDP) is definitely relevant.
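
For a rough sense of scale, the BDP is just bandwidth times round-trip
time.  A minimal sketch, assuming a 1 Gbit/s link and an RTT around
80 ms for Ireland to Virginia (both assumptions; plug in your measured
numbers):

  # Back-of-the-envelope bandwidth-delay product.
  link_bps = 1000 ** 3            # assumed link speed: 1 Gbit/s
  rtt_s = 0.080                   # assumed round-trip time: 80 ms

  bdp_bytes = link_bps / 8.0 * rtt_s
  print("BDP: %.1f MB" % (bdp_bytes / 1e6))   # ~10.0 MB

To keep that pipe full with a single stream, both the sender's
congestion window and the receiver's advertised window have to reach
roughly that size.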

What exactly are you trying to send/receive over this high latency link?

> However, with wireshark I see that the "bytes in flight"
> measurement which counts unacknowledged bytes from the source
> never gets close to the window size sent by the destination. Does
> this suggest anything in particular to tweak? I got some books on

One cheat/hack is to just open more TCP connections.  Aggregate
throughput over a high-latency link generally increases with the
number of connections, up to a point of course.
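
If you want to measure that effect yourself, here's a minimal sketch
in Python.  HOST and PORT are hypothetical placeholders; point them at
a sink you control on the far end (an iperf-style receiver, or even
netcat piping to /dev/null):

  import socket
  import threading
  import time

  HOST, PORT = "198.51.100.10", 5001   # hypothetical far-end sink
  STREAMS = 8                          # number of parallel connections
  DURATION = 10                        # seconds to run
  CHUNK = b"\0" * 65536
  sent = [0] * STREAMS

  def blast(i):
      # Push data as fast as this one connection will take it.
      with socket.create_connection((HOST, PORT)) as s:
          deadline = time.time() + DURATION
          while time.time() < deadline:
              s.sendall(CHUNK)
              sent[i] += len(CHUNK)

  threads = [threading.Thread(target=blast, args=(i,))
             for i in range(STREAMS)]
  for t in threads:
      t.start()
  for t in threads:
      t.join()
  print("aggregate: %.1f Mbit/s" % (sum(sent) * 8.0 / DURATION / 1e6))

Vary STREAMS and watch where the aggregate stops improving; that knee
tells you how much of the problem is per-connection windowing.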



> wireshark, which were quite helpful in how to use the graphs and
> filters, but on tcp performance they mostly just talked about the
> effect of packet loss. I'm not certain, but I don't think packet
> loss is the main thing holding back my performance, because there
> is some which causes a brief dip in the window size and then it
> recovers. Throughput stays pretty flat.
> 
> It would be really amazing if there was a flowchart on doing this
> for linux that could be informed by wireshark io graphs and other
> graphs. Has anybody ever seen such a chart? If this approach is
> successful for me, and I can understand how to do it in several
> scenarios, I think I will even like to make such a flow chart if
> it doesn't already exist. Thanks for any tips and disabusement of
> my misunderstandings about tcp in linux :) 

TCP defaults are definitely suboptimal for transatlantic links, as are
many of the assumptions that applications make.  Much depends on what
you are trying to do.  I've seen various appliance-like widgets that
will proxy a given protocol over a high-latency link so that
servers/clients with poor assumptions don't take quite as much of a hit.

A pretty good overview of the related issues is:
  http://www.psc.edu/index.php/networking/641-tcp-tune
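
The knobs that page discusses live under /proc/sys on Linux.  A quick
sketch to see where your current ceilings are (the third field of
tcp_rmem/tcp_wmem is the maximum the kernel will autotune a buffer up
to, and it needs to reach at least the BDP for a single stream to fill
the link):

  # Inspect the Linux TCP buffer autotuning limits.
  knobs = [
      "/proc/sys/net/ipv4/tcp_rmem",    # receive buffer: min default max
      "/proc/sys/net/ipv4/tcp_wmem",    # send buffer:    min default max
      "/proc/sys/net/core/rmem_max",    # hard ceiling for receive buffers
      "/proc/sys/net/core/wmem_max",    # hard ceiling for send buffers
  ]
  for path in knobs:
      with open(path) as f:
          print("%s: %s" % (path, f.read().strip()))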

What I'd do first is attempt to fix things manually by tinkering with
the values mentioned there.  It wouldn't be particularly hard to write
a traffic generator that varies the bitrate and the number of
simultaneous connections to map out the achievable performance.  Such
a tool could even explore the reasonable ranges of the various knobs
by writing to /sys and /proc.  Personally, I'd be more likely to parse
the tcpdump logs myself, so I could apply an arbitrary number of
filters, statistics, and post-processing steps.  It shouldn't be too
hard to graph the amount of unacked data over time, for instance.
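
As a rough sketch of that last idea: feed this tcpdump's text output
(e.g. "tcpdump -ttnn -r capture.pcap tcp") and it estimates the unacked
bytes in flight over time.  SENDER is a hypothetical placeholder, and
it assumes tcpdump's usual relative sequence numbers:

  import re
  import sys

  SENDER = "10.0.0.1"      # hypothetical sending host
  line_re = re.compile(
      r"^(?P<ts>\d+\.\d+) IP (?P<src>[\d.]+)\.\d+ > [\d.]+\.\d+: "
      r"Flags \[[^\]]*\](?:, seq \d+:(?P<seq_end>\d+))?(?:, ack (?P<ack>\d+))?")

  highest_seq = highest_ack = 0
  for line in sys.stdin:
      m = line_re.match(line)
      if not m:
          continue
      if m.group("src") == SENDER and m.group("seq_end"):
          highest_seq = max(highest_seq, int(m.group("seq_end")))
      elif m.group("src") != SENDER and m.group("ack"):
          highest_ack = max(highest_ack, int(m.group("ack")))
      # timestamp, estimated bytes in flight
      print(m.group("ts"), highest_seq - highest_ack)

If the in-flight number plateaus well below the receiver's advertised
window, the limit is the sender's buffer or congestion window rather
than the receiver.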



