DragonFly kernel List (threaded) for 2009-03

Re: Porting tmpfs

From: Nikita Glukhov <a63fvb48@xxxxxxxxx>
Date: Sat, 28 Mar 2009 22:23:22 +0300

>    I think for now just use the buffer cache and get it working, even
>    though that means syncing data between the buffer cache and the backing
>    uobj.

Yesterday I got it working using the buffer cache. vop_read() and vop_write()
were borrowed from HAMMER. I also implemented tmpfs_strategy(), simply by
moving tmpfs_mappedread() and tmpfs_mappedwrite() there with some changes.

It is now possible to execute files from tmpfs, and the filesystem unmounts
without the deadlocks it had earlier. It survives fsstress, but still has
problems with fsx: bad data is read back after truncating a file to a larger
size. When mapped writes are used, one error sometimes occurs: after
VOP_READ(), vnode_pager_generic_getpages() reports "page failed but no I/O
error". I became familiar with that error while trying to implement
vop_getpages().

>    I think the way to fix this is to implement a feature in the real
>    kernel that tells it that the VM object backing the vnode should
>    never be cleaned (so clean pages in the object are never destroyed),
>    and then instead of destroying the VM object when the vnode is
>    reclaimed we simply remove the vnode association and keep the VM
>    object as the backing uobj.

Currently the uobj is allocated by the swap_pager. Is it possible to use a
swap_pager object as the vnode's backing object to get swapping working, or
would that interfere with the buffer cache?

Attachment: tmpfs.patch
Description: Binary data
