DragonFly BSD
DragonFly users List (threaded) for 2005-02

Re: em driver - issue #2


From: EM1897@xxxxxxx
Date: Sun, 6 Feb 2005 17:12:43 EST

In a message dated 2/6/2005 4:30:10 PM Eastern Standard Time, Matthew Dillon 
<dillon@xxxxxxxxxxxxxxxxxxxx> writes:

>:
>:It would be good if a user could make the decision that mbuf allocations
>:should be a priority. For a normal system, the current settings are likely
>:adequate; while a network appliance would require a different set of 
>:priorities. What we do is test the absolute capacity of the box under
>:best conditions to forward packets, so it would be rather easy to set
>:a threshold. 
>
>    A router has a more definable set of conditions than a general purpose
>    box.  I agree that we need some sort of threshold but it would have
>    to be fairly generous to deal with the varying usage patterns out 
>    there.  e.g. we would want to allow at least a 1000 packet queue
>    backlog between the IF interrupt and the protocol stacks.  Packet
>    processing actually becomes more efficient as the queues get longer
>    so the only real consideration here is how much latency one wishes to
>    tolerate.  Latency is going to be a function of available cpu in the
>    backlog case.

That's why I suggested making it a tunable. In fact, you almost HAVE
to make it a tunable, because a gigabit link needs a much larger buffer
than a 100Mb/s link, and a 32/33 MHz PCI bus machine has very
different processing capability than a 66/133 MHz machine with a
processor twice the speed. It doesn't have to be a sysctl tunable,
but it should be changeable, even if it has to be hard-coded into a
particular kernel.
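For illustration only, here is a minimal sketch of how such a knob could be
exposed as both a boot-time loader tunable and a run-time sysctl in the usual
4.4BSD-derived kernel style. The names (hw.em.rx_backlog_max,
em_rx_backlog_max) are made up for this example and are not the actual em(4)
or DragonFly knobs:

/*
 * Hypothetical sketch: expose the driver's RX backlog limit as a
 * boot-time tunable and a run-time sysctl.  All names here are
 * illustrative, not the real em(4) parameters.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

static int em_rx_backlog_max = 1000;	/* default packet backlog limit */

SYSCTL_NODE(_hw, OID_AUTO, em, CTLFLAG_RD, 0, "em driver parameters");

/* Read at boot from /boot/loader.conf: hw.em.rx_backlog_max="4000" */
TUNABLE_INT("hw.em.rx_backlog_max", &em_rx_backlog_max);

/* Adjustable at run time: sysctl hw.em.rx_backlog_max=4000 */
SYSCTL_INT(_hw_em, OID_AUTO, rx_backlog_max, CTLFLAG_RW,
    &em_rx_backlog_max, 0, "Maximum RX packet backlog before dropping");

That way a router build can bump the limit for its hardware without touching
the code, while the default still suits a general purpose box.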

The queue depth isn't really a "latency" issue so much as a resource
control issue: high latency is always preferable to a drop, and a
drop is always preferable to running out of memory. The trade-off is
performance vs. stability, and stability *should* always win that
argument. First you assure stability, then you tune to get it as fast
as you can without compromising that stability. Engineers often argue
about queue sizes as if it were purely an optimization problem, but in
reality latency makes customers frown, while dropped packets make them
take their business elsewhere. So you want to make your queues as
large as you can without putting your resources at risk of running
out, so that packets aren't dropped unnecessarily.
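The classic 4.4BSD ifqueue macros already express that trade-off: the queue
has a hard ifq_maxlen, and when it is full the packet is counted and dropped
rather than letting mbufs pile up unbounded. A rough sketch of the pattern
(simplified; the surrounding function is illustrative, not the actual em(4)
receive path):

/*
 * Simplified sketch of the classic BSD bounded-queue hand-off between
 * a driver's receive path and the protocol stack.  The IF_* macros are
 * the traditional 4.4BSD ones; example_rx_handoff() is made up for
 * illustration.
 */
#include <sys/param.h>
#include <sys/mbuf.h>
#include <net/if.h>

static void
example_rx_handoff(struct ifqueue *inq, struct mbuf *m)
{
	if (IF_QFULL(inq)) {
		/*
		 * Queue is at ifq_maxlen: account for the drop and free
		 * the mbuf now, rather than letting the backlog grow
		 * until the mbuf pool itself is exhausted.
		 */
		IF_DROP(inq);
		m_freem(m);
		return;
	}
	IF_ENQUEUE(inq, m);	/* within bounds: accept the packet */
	/* schednetisr() or the equivalent would wake the stack here */
}

Raising ifq_maxlen trades a little latency for fewer unnecessary drops; the
hard limit is what keeps the box from running out of mbufs under load.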


