Re: Delayed ACK triggered by Header Prediction
Ok, and I'm afraid my original diagnosis was incorrect... I had run that
test in half-duplex mode without realizing it, so of course I wasn't
getting 10MBytes/sec+ out of it. Running it properly, I get full bandwidth.
:Thank you very much for your reply.
:Actually, I am using GbE on my DragonFlyBSD box.
:Here is my experimental environment.
: Sender (FreeBSD Stable) --- Router (FreeBSD 5.3R) --- Receiver (DragonFlyBSD)
: (Bandwidth = 100Mbps, RTT = 100ms, router queue length = 64 pkts.
: On the router above, dummynet runs with HZ=1000.)
:All ethernet interfaces above are Broadcom BCM5703 GbE. So, there is
:a possibility that receive interrupts for ACK segments are aggregated.
:I think the TCP performance reduction is due to the bandwidth-delay
:product. In my environment, if slow start grew slowly, TCP performance
:became poor. I am sorry I did not describe my experimental environment
:earlier; I should have done so, as it was important information.
There are a lot of question marks in your setup. First, 100Mbps... is
that 100 MBits/sec or 100 MBytes/sec you are talking about there?
Second, if the ethernet interfaces are GigE interfaces then even if
dummynet is set up for 100 MBits/sec you are going to have a problem
with packet bursts that are likely to blow out the 64 packet queue
length you specified. The problem is that even at 1000hz dummynet is
going to accumulate a potentially large number of packets on the input
before it has a chance to do any processing, and it will also generate
packet bursts on the output. It is only able to approximate the
bandwidth and delay you are requesting. Assuming a perfect system the
minimum burst size at full bandwidth is going to be on the order of
65 packets. The actual burst size will almost certainly range from
30-150 packets, possibly more if the timer interrupt gets delayed (which
is almost certainly going to happen). A 64 packet queue on the dummynet
is just too small.
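To put rough numbers on that, here is a quick Python sketch; the
1500-byte frame size and the perfectly regular 1ms tick are my
assumptions, not measurements:

    # Back-of-the-envelope dummynet burst arithmetic.  Assumes
    # full-size 1500-byte frames and a perfect 1ms (HZ=1000) tick.
    FRAME_BITS = 1500 * 8                   # bits per full-size frame

    gige_pps = 1_000_000_000 / FRAME_BITS   # ~83,333 pkts/sec at GigE line rate
    per_tick = gige_pps / 1000              # ~83 packets can arrive per tick

    # If the timer interrupt slips by a tick or two, the backlog that
    # dummynet must absorb in one shot grows accordingly:
    for ticks in (1, 2, 3):
        print(f"{ticks} tick(s) -> ~{per_tick * ticks:.0f} packets queued")
    # ~83 / ~167 / ~250 packets -- all past a 64-packet queue.

Even a single on-time tick's worth of line-rate input already exceeds
a 64-packet queue.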
Third, if we assume 100MBits/sec and an RTT of 100ms, that's 6500+
packets per second to fill the pipe, meaning roughly 650 packets need
to be 'in-transit' at any given moment. I am assuming that you are
using window scaling and a TCP buffer size in the 1.5MByte - 2MByte range
to handle that (and that's assuming no packet loss).
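For reference, here is the arithmetic behind those figures as a
sketch; the exact counts shift with packet size, which is why my
numbers land a bit above the ones quoted:

    # Bandwidth-delay product for a 100 MBit/sec, 100ms RTT path,
    # assuming 1500-byte packets.
    bandwidth_bps = 100_000_000
    rtt_sec = 0.100
    pkt_bits = 1500 * 8

    bdp_bytes = bandwidth_bps / 8 * rtt_sec  # 1.25 MBytes in flight
    pps = bandwidth_bps / pkt_bits           # ~8,333 packets/sec
    in_flight = pps * rtt_sec                # ~833 packets in the pipe

    print(f"BDP {bdp_bytes/1e6:.2f} MB, {pps:.0f} pps, {in_flight:.0f} pkts in flight")

The 1.5-2 MByte buffer is essentially the BDP (1.25 MBytes) plus some
slop for overhead and jitter.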
So the question here is... are you intentionally creating such massive
packet loss? Even SACK would have a problem keeping the pipe full with
packet loss occurring every ~64 packets or so on a link with a 100ms
delay! In fact, for SACK to work properly at all in that environment
the TCP buffers and window sizes would have to be huge, like in the
5-10 MByte range, to deal with the additional ~100-300ms latency
involved in getting the old data resent over such a long-delay link.
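That buffer sizing falls out of the same kind of arithmetic (again a
sketch; the 300ms figure is just the upper end of the range above):

    # Window needed for SACK recovery: it must cover the data sent
    # during one RTT *plus* the extra latency of getting lost data
    # retransmitted (~300ms worst case assumed here).
    bandwidth_Bps = 100_000_000 / 8          # 12.5 MBytes/sec
    recovery_sec = 0.100 + 0.300             # RTT + retransmit latency

    print(f"~{bandwidth_Bps * recovery_sec / 1e6:.1f} MBytes")  # ~5.0 MBytes

That gives ~5 MBytes, the low end of the range.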
I am rather intrigued by the setup. It kinda sounds like you are
simulating a lossy satellite link.
:I have not tested DragonFlyBSD on a normal LAN, so I was not talking
:about LAN performance. I am sorry for my ambiguous report.
:Today, I took many packet traces of my experiments, but they are gone...
:Tomorrow, I will take more traces and put them on my web page.