DragonFly BSD
DragonFly kernel List (threaded) for 2005-12

Re: Severe packet loss on fxp interfaces with the new bridging code...


From: Chris Csanady <cc@xxxxxxx>
Date: Sat, 24 Dec 2005 07:58:42 -0600

On 12/23/05, Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx> wrote:
>     (p.s. if you want to submit a patch to fix the output packet count
>     too that would be great!)
>
> test28# netstat -I bridge0 -in 1
>             input      (bridge0)           output
>    packets  errs      bytes    packets  errs      bytes colls
>      29392     0    2880416          0     0          0     0
>      27542     0    2699116          0     0          0     0
>      25886     0    2536828          0     0          0     0

Part of the patch Simon committed (thanks!) does just this.  The
output column in netstat should now show the count of locally
generated packets sent out the bridge interface.  The input column
shows the sum of all packets received on any bridge interface,
locally destined or otherwise.  This seems reasonable, and also
matches the numbers seen by bpf.
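
Roughly, the counting rules are as in the toy model below (this is only
an illustration of the semantics, not the actual bridge code; the
struct and function names are made up):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of the bridge0 counters described above. */
struct ifstats {
        uint64_t ipackets, ibytes;   /* everything received on any member */
        uint64_t opackets, obytes;   /* locally generated output only */
};

static void
bridge_input(struct ifstats *br, size_t len)
{
        br->ipackets++;              /* counted whether locally destined or forwarded */
        br->ibytes += len;
}

static void
bridge_output_local(struct ifstats *br, size_t len)
{
        br->opackets++;              /* only packets the local stack sends via bridge0 */
        br->obytes += len;
}

int
main(void)
{
        struct ifstats br = { 0 };

        bridge_input(&br, 1500);          /* forwarded frame: input only */
        bridge_input(&br, 60);            /* locally destined frame: input only */
        bridge_output_local(&br, 200);    /* locally generated reply: output */

        printf("in %llu pkts / out %llu pkts\n",
            (unsigned long long)br.ipackets,
            (unsigned long long)br.opackets);
        return 0;
}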

>     Maybe your kern.ipc.nmbclusters is too low.  If so, set it in
>     /boot/loader.conf (it has to be set at boot-time, not afterwards).
>
>     If that turns out to be the problem the only thing I'll have to do
>     is nudge the defaults.

It looks like this is the problem after all.  My machine has 128M of
RAM, which results in 1504 mbuf clusters by default.  In operation I
have never seen usage break 1000, so I assumed this was not the
issue.  Since the magazine size for clusters ends up being 256,
however, it appears there is not enough wiggle room for the objcache
here.
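
A quick back-of-the-envelope check (the CPU count and the
magazines-per-cpu figure below are assumptions on my part; the cluster
total and magazine size are the numbers above):

#include <stdio.h>

int
main(void)
{
        int nmbclusters = 1504;  /* default with 128M of RAM */
        int magsize = 256;       /* cluster magazine size */
        int ncpus = 2;           /* assumed */
        int mags_per_cpu = 2;    /* assumed: loaded + previous magazine */

        int parked = ncpus * mags_per_cpu * magsize;

        printf("clusters that can sit idle in per-cpu magazines: %d\n", parked);
        printf("headroom left for everything else: %d\n", nmbclusters - parked);
        return 0;
}

Under those assumptions a large fraction of the 1504 clusters can end
up parked in per-cpu magazines, so the pool can run dry even though
the reported usage never gets anywhere near the limit.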

After increasing nmbclusters, things work perfectly.  Still, it might
be useful for this knob to take the number of CPUs and the magazine
size into account.
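
For reference, the bump is just one line in /boot/loader.conf (the
value below is only an example; pick something comfortably above your
peak cluster usage plus the per-cpu magazine slack):

# /boot/loader.conf -- read at boot time; setting it later has no effect
kern.ipc.nmbclusters="4096"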

Anyway, enjoy the holidays!

Chris



