
Re: em driver - issue #2


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 7 Feb 2005 11:28:17 -0800 (PST)

:>After reading this I realized that you are right: the reason the memory
:>allocations fail is that the box is interrupt bound (which is just what I
:>was trying to achieve when I started this test). I didn't choose 145Kpps by
:>accident; I was trying to find a point at which the machine would livelock,
:>to compare it to FreeBSD (since top wasn't working). Usually I fire about
:>30Kpps (a typical load on a busy 100Mb/s network) and see what percentage
:>of system resources is being used, to index the performance of the box.
:>145Kpps is more than this particular box can handle. A faster box can
:>easily FORWARD 300Kpps, so it's not the raw number that matters, but the
:>box's capability. I hadn't considered that I'm working with a 32-bit bus
:>on this system.
:>
:>Lowering the test to 95Kpps, DragonFly handled it without any problems. So
:>I'd say that the failure to get mbuf clusters is a function of the system
:>being perpetually overloaded. However, the elegance with which a system
:>handles an overload condition is important. The fact that the em driver
:>doesn't recover normally is the issue now. You can't have a spurt of
:>packets bringing down the system.
:
:I need to take back what I said here. I ran the 145Kpps test on a
:FreeBSD 4.9 system, and it not only handles it elegantly, it does so
:at only 30% CPU utilization. So I certainly HOPE that the DragonFly
:system isn't interrupt bound, because if it is then something is very,
:very wrong with the performance. There is definitely something that
:doesn't work right. Here is the output of vmstat -m right after the failure.

    This could be apples and oranges here, since it is the em driver that
    appears to be failing.  It could simply be that DragonFly doesn't try
    as hard as FreeBSD to allocate memory when it is told that the
    allocation is allowed to fail.  There are huge swaths of code in
    FreeBSD (4, 5, and 6) that assume memory allocations succeed simply
    because they usually do, code that we've had to 'fix' in DragonFly.
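
    To make that concrete, here is a minimal sketch, written in the
    FreeBSD 4.x style rather than lifted from the actual em driver, of
    an RX buffer refill that treats a failed non-blocking allocation as
    a normal, recoverable event instead of assuming success:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

/*
 * Hypothetical refill helper: allocate an mbuf cluster without blocking
 * and report failure to the caller instead of assuming it worked.  The
 * caller is expected to leave the old buffer in the RX ring and retry
 * later, so a transient shortage costs throughput instead of wedging
 * the interface.
 */
static struct mbuf *
example_rxbuf_alloc(void)
{
    struct mbuf *m;

    MGETHDR(m, M_DONTWAIT, MT_DATA);    /* may return NULL */
    if (m == NULL)
        return (NULL);

    MCLGET(m, M_DONTWAIT);              /* may fail to attach a cluster */
    if ((m->m_flags & M_EXT) == 0) {
        m_freem(m);
        return (NULL);
    }
    return (m);
}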

:Memory statistics by type                          Type  Kern
:        Type  InUse MemUse HighUse  Limit Requests Limit Limit Size(s)
:       mbufcl  3064  3088K      0K 24584K    24396    0     0
:         mbuf  5533  1384K      0K 24584K    12219    0     0
:
:...

    Well, something odd is going on, because it doesn't look like all that
    many mbufs are actually allocated.  It sounds like the em driver may
    be the culprit, and that a temporary mbuf allocation failure is what
    trips it up.

    You could try increasing the mbuf cluster free reserve; the sysctl is
    kern.ipc.mcl_pool_max.  Try bumping it up from 1000 to 5000.
    It is unlikely to fix the problem, but it might mitigate it (as a test).
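
    For reference, a small userland sketch that bumps it (this assumes
    the node is int-valued; the one-liner
    'sysctl kern.ipc.mcl_pool_max=5000' does the same thing):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Raise the mbuf cluster free reserve as a test.  Run as root.
 * Assumes kern.ipc.mcl_pool_max is an int-valued sysctl node.
 */
int
main(void)
{
    int oldval, newval = 5000;
    size_t oldlen = sizeof(oldval);

    if (sysctlbyname("kern.ipc.mcl_pool_max", &oldval, &oldlen,
        &newval, sizeof(newval)) == -1) {
        perror("sysctlbyname");
        exit(1);
    }
    printf("kern.ipc.mcl_pool_max: %d -> %d\n", oldval, newval);
    return (0);
}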

    What I really need in order to investigate this is a recipe to
    reproduce the packet traffic: e.g. what ports to install, what
    options to give the programs, and so on.  I don't have anything
    like that rigged up at the moment.
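
    Failing that, something as crude as the sketch below would do for
    generating raw packet load.  The address, port, and packet size are
    placeholders, and it has no rate control of its own, so throttle it
    externally (or add a usleep) to hit a target like 95Kpps or 145Kpps:

#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Crude UDP blaster: sends small packets as fast as the loop allows.
 * 10.0.0.2 and port 9 (discard) are placeholders -- aim it at the
 * DragonFly box across the em interface and watch netstat -m and
 * vmstat -m on the receiver.
 */
int
main(void)
{
    struct sockaddr_in sin;
    char payload[64];
    int s;

    memset(payload, 'x', sizeof(payload));
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port = htons(9);
    sin.sin_addr.s_addr = inet_addr("10.0.0.2");

    if ((s = socket(AF_INET, SOCK_DGRAM, 0)) == -1) {
        perror("socket");
        exit(1);
    }
    for (;;) {
        if (sendto(s, payload, sizeof(payload), 0,
            (struct sockaddr *)&sin, sizeof(sin)) == -1)
            perror("sendto");       /* e.g. ENOBUFS under heavy load */
    }
    /* NOTREACHED */
}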

:Getting back to my question about allocating memory for the kernel:
:is there currently no way to do this in DragonFly, as there was with
:kern_vm_kmem_size before?

    vm_kmem_size has to do with the fact that FreeBSD has two KVM maps,
    kernel_map and kmem_map.  vm_kmem_size does not change how much KVM
    the kernel has; it adjusts how much one map gets versus the other.

    DragonFly only has one kernel map, so it doesn't need a vm_kmem_size
    sysctl.
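
    Roughly, and paraphrasing rather than quoting the actual FreeBSD
    source, the arrangement looks like this: kmem_map (which the kernel
    malloc draws from) is carved out of kernel_map as a sub-map, and
    vm_kmem_size only sets the size of that carve-out:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_kern.h>
#include <vm/vm_extern.h>

/*
 * Condensed illustration only -- NOT the actual FreeBSD kmeminit()
 * code.  Total KVM is unchanged; the tunable just decides where the
 * boundary between kernel_map and its kmem_map sub-map falls.
 */
static vm_offset_t kmembase, kmemlimit;

static void
example_kmeminit(vm_size_t vm_kmem_size)
{
    kmem_map = kmem_suballoc(kernel_map, &kmembase, &kmemlimit,
        vm_kmem_size);
}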

						-Matt


