DragonFly kernel List (threaded) for 2009-04
DragonFly BSD

Re: Porting tmpfs


From: Alex Hornung <ahornung@xxxxxxxxx>
Date: Thu, 23 Apr 2009 19:40:07 +0100

Hi Nikita,

I was just curious about how the tmpfs port is going. Any progress? :)


Cheers,
Alex


On Sat, 2009-03-28 at 22:23 +0300, Nikita Glukhov wrote:
> >    I think for now just use the buffer cache and get it working, even
> >    though that means syncing data between the buffer cache and the backing
> >    uobj.
> 
> Yesterday I got it working using the buffer cache. vop_read() and vop_write()
> were taken from HAMMER. I also implemented tmpfs_strategy(), simply by moving
> tmpfs_mappedread() and tmpfs_mappedwrite() there with some changes.
> 
> Now it is possible to execute files from tmpfs, and it unmounts without the
> deadlocks it had earlier. It survives fsstress, but still has problems with
> fsx: it reads bad data after truncating a file upward. When mapped writes
> are used, one error sometimes occurs: in vnode_pager_generic_getpages(),
> after VOP_READ(), "page failed but no I/O error". I became familiar with
> that error when I was trying to implement vop_getpages().
> 
> 
> >    I think the way to fix this is to implement a feature in the real
> >    kernel that tells it that the VM object backing the vnode should
> >    never be cleaned (so clean pages in the object are never destroyed),
> >    and then instead of destroying the VM object when the vnode is
> >    reclaimed we simply remove the vnode association and keep the VM
> >    object as the backing uobj.
> 
> Right now the uobj is allocated by the swap_pager. Is it possible to use the
> swap_pager object as the vnode's VM object to get swapping working, or would
> that interfere with the buffer cache?



