DragonFly BSD
DragonFly kernel List (threaded) for 2005-02

Re: phk malloc, was (Re: ptmalloc2)


From: Jonathan Dama <jd@xxxxxxxxxxx>
Date: Tue, 22 Feb 2005 17:16:47 -0800

I think this is a perfectly reasonable discussion to be
having on the forums here, though perhaps it belongs on
users@dragonflybsd  but ::shrugs::

I think mlock is probably far too aggressive.  All you need
to do is touch the relevant pages of memory.  You'd use
mlock if you expected that access to that memory was very
latency-sensitive--i.e., you couldn't accept the delay
necessary to fetch the page from swap.
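
For concreteness, here is a minimal sketch of both approaches
on a malloc'd buffer (the buffer size and function name are
made up for illustration, and error handling is abbreviated):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    static void
    touch_pages(char *buf, size_t len)
    {
            size_t pagesz = (size_t)getpagesize();
            size_t i;

            /*
             * Writing one byte per page forces the VM system to
             * back each page now, so any failure shows up here
             * rather than at some random later access.
             */
            for (i = 0; i < len; i += pagesz)
                    buf[i] = 0;
    }

    int
    main(void)
    {
            size_t len = 1UL << 20;     /* 1 MB, for illustration */
            char *buf = malloc(len);

            if (buf == NULL)
                    return 1;

            touch_pages(buf, len);

            /*
             * Only bother with this if a page-in from swap would
             * be an unacceptable latency hit.
             */
            if (mlock(buf, len) == -1)
                    perror("mlock");

            /* ... use buf ... */
            free(buf);      /* free()ing locked pages is legal; see below */
            return 0;
    }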

That said, you are free to mlock pages you get from malloc.
You could even free() them too.  Of course at that point one
of two things would happen:
1) malloc would return the page to the system using madvise
   and the lock would go away.
2) malloc would service another allocation request that
   includes the previously locked page, and the lock would
   remain for the new allocation.

Anyway, neither mlock nor touching the pages completely
solves the problem, because those things only ensure that
your program will not run into an unexpected out-of-memory
situation.  But what about the other programs on the
system?  If they exhaust the swap space, your program,
whose relevant pages you have so carefully touched or
mlocked, could still get the axe on their behalf!

To protect against this you have to call madvise(2) with 
MADV_PROTECT.
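
A minimal sketch, assuming the FreeBSD-derived semantics in
which MADV_PROTECT exempts the whole calling process from the
out-of-swap killer and requires superuser privileges; the addr
and len arguments are not meaningful for this advice, so zeros
are passed here -- check madvise(2) on your system:

    #include <stdio.h>
    #include <sys/mman.h>

    static void
    protect_from_oom_kill(void)
    {
            /*
             * Marks the process so it is not chosen as a victim
             * when the system runs out of swap.  Typically fails
             * with EPERM unless running as root.
             */
            if (madvise(0, 0, MADV_PROTECT) == -1)
                    perror("madvise(MADV_PROTECT)");
    }

    int
    main(void)
    {
            protect_from_oom_kill();
            /* ... rest of the OOM-sensitive program ... */
            return 0;
    }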

I agree that there should be much more commentary about
overcommit in the malloc and mmap (w.r.t. MAP_ANON and
mapping /dev/zero) man pages.

To reiterate, though, there are at least three resources
that need to be managed on any system (a combined sketch
follows the list):
1) process address space  (malloc(3)/brk(2)/mmap(2))
2) physical memory usage  (mlock, setrlimit(2), madvise(2))
3) vm memory usage        (touching pages, madvise(2))
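
The following rough sketch ties those three layers together
with mmap(2) (MAP_ANON), page touching, and mlock(2); the size
is purely illustrative and error handling is abbreviated:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main(void)
    {
            size_t len = 4UL << 20;     /* 4 MB, for illustration */
            size_t pagesz = (size_t)getpagesize();
            size_t i;
            char *buf;

            /*
             * 1) address space: reserved, but not necessarily
             *    backed by anything yet (overcommit).
             */
            buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                MAP_ANON | MAP_PRIVATE, -1, 0);
            if (buf == MAP_FAILED) {
                    perror("mmap");
                    return 1;
            }

            /*
             * 2) vm memory: touching each page makes the system
             *    commit to backing it (RAM or swap).
             */
            for (i = 0; i < len; i += pagesz)
                    buf[i] = 0;

            /*
             * 3) physical memory: pin the pages in RAM if you
             *    cannot tolerate being paged out.
             */
            if (mlock(buf, len) == -1)
                    perror("mlock");

            /* ... use buf ... */
            munmap(buf, len);
            return 0;
    }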

If you put 'Z' into your malloc option strings, malloc will
touch all of the allocated pages for you.
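
For example (a sketch, untested): on systems using phk malloc
the option string can also be supplied through the
MALLOC_OPTIONS environment variable or the /etc/malloc.conf
symlink -- see malloc(3).  The name of the global variable
below is taken from the malloc(3) of that era; treat it as an
assumption and check your man page:

    #include <stdlib.h>

    extern char *malloc_options;    /* phk malloc option string hook */

    int
    main(void)
    {
            char *buf;

            /*
             * 'Z' asks malloc to zero-fill allocations, which
             * touches every page of each allocation as a side
             * effect.
             */
            malloc_options = "Z";

            buf = malloc(1UL << 20);
            if (buf == NULL)
                    return 1;
            /* every page of buf has already been written to */
            free(buf);
            return 0;
    }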

-Jon

P.S., now that I've finished this, I see that you've sent
another message that touches on some of these issues.



