DragonFly BSD

alexh todo

Note: this is my personal todo and ideas list (alexh@)

Boring:

[alexh@leaf:~/home] $ roundup-server -p 8080 bt=bugtracker

-05:48- :        dillon@: no, double frees to the object cache are nasty.  It can't detect them.  the object 
                          winds up in the magazine array twice
-05:48- :        dillon@: (and possibly different magazines, too)
-05:49- :         alexh@: can't I just write some magic to a free object on the first objcache_put and check 
                          if it's still there on the next objcache_put?
-05:49- :         alexh@: and clear it on objcache_get, anyways
-05:50- :        dillon@: no, because the object may still have live-initialized fields
-05:50- :        dillon@: because it hasn't been dtor'ed yet (one of the features of the objcache, to avoid 
                          having to reinitialize objects every time)
-05:50- :        dillon@: the mbuf code uses that feature I think, probably other bits too
-05:51- :        dillon@: theoretically we could allocate slightly larger objects and store a magic number at 
                          offset [-1] or something like that, but it gets a little iffy doing that
-05:52- :        dillon@: the objcache with the objcache malloc default could probably do something like that 
                          I guess.
-05:52- :        dillon@: I don't consider memory tracking to be a huge issue w/ dragonfly, though I like the 
                          idea of being able to do it.  It is a much bigger problem in FreeBSD due to the 
                          large number of committers 
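
A minimal sketch of the offset [-1] idea from the log above: over-allocate each object by one word and keep a free-marker just outside the object proper, so live-initialized fields are never touched. All names here (obj_alloc, obj_free, OBJ_FREE_MAGIC) are made up for illustration; this is not the actual objcache code.

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    #define OBJ_FREE_MAGIC  0xdeadc0dedeadc0deULL

    static void *
    obj_alloc(size_t size)
    {
            /* reserve one extra word in front of the object */
            uint64_t *base = malloc(sizeof(uint64_t) + size);

            if (base == NULL)
                    return (NULL);
            base[0] = 0;            /* object is live, not free */
            return (base + 1);
    }

    static void
    obj_free(void *obj)
    {
            uint64_t *base = (uint64_t *)obj - 1;

            /* magic already present at offset [-1] => double free */
            assert(base[0] != OBJ_FREE_MAGIC);
            base[0] = OBJ_FREE_MAGIC;
            /*
             * A real objcache_put() would push obj into a magazine
             * here; the matching objcache_get() would clear the
             * marker again.
             */
    }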


-05:55- :        dillon@: For the slab allocator you may be able to do something using the Zone header.
-05:55- :        dillon@: the slab allocator in fact I think already has optional code to allocate a tracking 
                          bitmap to detect double-frees
-05:56- :        dillon@: sorry, I just remembered the bit about the power-of-2 allocations
-05:56- :        dillon@: for example, power-of-2-sized allocations are guaranteed not only to be aligned on 
                          that particular size boundary, but also not to cross a page boundary (unless the 
                          size is > PAGE_SIZE)
-05:57- :        dillon@: various subsystems such as AHCI depend on that behavior to allocate system 
                          structures for which the chipsets only allow one DMA descriptor.
-05:59- :         alexh@: http://svn.freebsd.org/viewvc/base/head/sys/vm/redzone.c?view=markup&pathrev=155086 
                          < this is redzone. it basically calls redzone_addr_ntor() to increase the size in 
                          malloc(), and then redzone_setup() just before returning the chunk
-06:02- :        dillon@: jeeze. that looks horrible.
-06:03- :         alexh@: I don't quite get that nsize + redzone_roundup(nsize)
-06:03- :        dillon@: I don't get it either.  It would completely break power-of-2-sized alignments in the 
                          original request
-06:04- :        dillon@: hmmm.  well, no it won't break them, but the results are going to be weird
-06:04- :        dillon@: ick.
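
For reference, the shape of the redzone trick being discussed: grow the request in malloc(), stash the original size, and put guard bytes around the region the caller sees. This is a simplified sketch with invented names and layout (rz_malloc, rz_free, GUARD_BYTES), not the actual sys/vm/redzone.c code. Note how the pointer returned to the caller is offset from the raw allocation, which is exactly what breaks the power-of-2 alignment dillon@ objects to.

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define GUARD_BYTES     16
    #define GUARD_FILL      0x42

    static void *
    rz_malloc(size_t nsize)
    {
            /* raw layout: [size field][front guard][user][rear guard] */
            size_t rsize = sizeof(size_t) + GUARD_BYTES + nsize + GUARD_BYTES;
            uint8_t *raw = malloc(rsize);
            uint8_t *user;

            if (raw == NULL)
                    return (NULL);
            memcpy(raw, &nsize, sizeof(nsize));     /* stash original size */
            memset(raw + sizeof(nsize), GUARD_FILL, GUARD_BYTES);
            user = raw + sizeof(nsize) + GUARD_BYTES;
            memset(user + nsize, GUARD_FILL, GUARD_BYTES);
            return (user);
    }

    static void
    rz_free(void *p)
    {
            uint8_t *user = p;
            uint8_t *raw = user - GUARD_BYTES - sizeof(size_t);
            size_t nsize, i;

            memcpy(&nsize, raw, sizeof(nsize));
            for (i = 0; i < GUARD_BYTES; i++) {
                    /* an overwritten guard byte means an over/underflow */
                    assert(raw[sizeof(nsize) + i] == GUARD_FILL);
                    assert(user[nsize + i] == GUARD_FILL);
            }
            free(raw);
    }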

-06:15- :        dillon@: if the original request is a power of 2 the redzone adjusted request must be a power 
                          of 2
-06:15- :        dillon@: basically
-06:16- :        dillon@: so original request 64, redzone request must be 128, 256, 512, 1024, etc.
-06:16- :         alexh@: yah, k
-06:16- :        dillon@: original request 32, current redzone code would be 32+128 which is WRONG.
-06:16- :         alexh@: how big is PAGE_SIZE ?
-06:16- :        dillon@: 4096 on i386 and amd64
-06:17- :         alexh@: and one single malloc can't be bigger than that?
-06:17- :        dillon@: I'm fairly sure our kmalloc does not guarantee alignment past PAGE_SIZE (that is, 
                          the alignment will be only PAGE_SIZE even if you allocate PAGE_SIZE*2)
-06:18- :        dillon@: a single kmalloc can be larger than PAGE_SIZE
-06:18- :        dillon@: it will use the zone up to around 1/2 the zone size (~64KB I think), after which it 
                          allocates pages directly with the kernel kvm allocator
-06:18- :        dillon@: if you look at the kmalloc code you will see the check for oversized allocations
-06:18- :         alexh@: yah, saw that
-06:18- :         alexh@: "handle large allocations directly"
-06:19- :         alexh@: not sure how to do this, really, as the size is obviously also changed in 
                          kmem_slab_alloc
-06:20- :         alexh@: but kmem_slab_alloc isn't called always, is it?
-06:20- :         alexh@: only if the req doesn't fit into an existing zone
-06:20- :        dillon@: right
-06:20- :        dillon@: you don't want to redzone the zone allocation itself
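
One simple way to satisfy the rule dillon@ states above (64 -> 128, 256, 512, ..., and never 32+128): pad the request by the guard bytes and then round the result up to the next power of 2. This is a hypothetical helper, not code from the tree; a real version would also skip the adjustment for oversized allocations that bypass the zones, per the end of the log.

    #include <stddef.h>

    #define REDZONE_PAD     16      /* guard bytes needed; illustrative */

    static size_t
    redzone_adjust(size_t nsize)
    {
            size_t rsize = nsize + REDZONE_PAD;
            size_t pow2;

            /*
             * Round up to the next power of 2 so the allocator's
             * power-of-2 alignment / no-page-crossing guarantee still
             * holds: a 64-byte request becomes 128, a 32-byte request
             * becomes 64, never 32 + a fixed 128-byte pad.
             */
            for (pow2 = 1; pow2 < rsize; pow2 <<= 1)
                    ;
            return (pow2);
    }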