DragonFly BSD
DragonFly kernel List (threaded) for 2004-08
[Date Prev][Date Next]  [Thread Prev][Thread Next]  [Date Index][Thread Index]

Re: VFS ROADMAP (and vfs01.patch stage 1 available for testing)

From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Fri, 13 Aug 2004 13:02:38 -0700 (PDT)

:On 13.08.2004, at 20:09, Matthew Dillon wrote:
:>     One would be able to export raw disk partitions as block devices,
:>     or file systems (fully cache coherent within the cluster, managed
:>     by the kernel), cpu, memory, etc.
:Can this also be used to replicate filesystems across boxes? At the 
:moment I'm missing a way to run redundant systems with redundant data 
:storage, i.e. have a fallback machine that holds the same production 
:data and is permanently synced with the master (or even completely 
:   simon

    Real-time replication is a different problem entirely.  For that, what
    you need is a high-level journaling stream.  I believe we will have the
    ability to hook in a journaling stream with the new vop_*() API.

    The requirement here is that the vop_*() wrappers have some sort of
    management structure to hold the 'I want to journal this filesystem'
    flag.  This structure is looking more and more like it ought to be 
    a mount point (a struct mount).  I can't think of a better structure
    to hold information about journaling a filesystem, after all!
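    The flag-in-a-mount idea can be sketched in plain C.  Everything below
    is illustrative: MNTK_JOURNAL, the struct names, and journal_emit()
    are assumptions made for this sketch, not the actual DragonFly
    structures.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed flag: "I want to journal this filesystem". */
#define MNTK_JOURNAL 0x0001

/* Stand-in for struct mount; kernel-managed policy flags live here. */
struct mount_sketch {
    int mnt_kern_flag;
};

/* Record of one journaled event, e.g. a file write. */
struct journal_record {
    const char *op;              /* "write", "remove", ... */
    int emitted;
};

static struct journal_record last_record;

/* Emit a record into the journaling stream (stubbed out here). */
static void journal_emit(struct mount_sketch *mp, const char *op)
{
    if (mp->mnt_kern_flag & MNTK_JOURNAL) {
        last_record.op = op;
        last_record.emitted = 1;
    }
}

/* A vop_*() wrapper consults the mount, not the vnode, for policy. */
static int vop_write_wrapper(struct mount_sketch *mp)
{
    journal_emit(mp, "write");
    /* ... then dispatch to the filesystem's own write routine ... */
    return 0;
}
```

    The point of the sketch is that the decision to journal is made in the
    wrapper from per-mount state, so individual filesystems need not know
    journaling exists.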

    Part of my plan is to pass some sort of management structure into the
    vop_*() call that will hold the operations vectors (instead of hanging
    them off the vnode)... this is needed because, in the future, many
    namespace-related VOPs, like open and remove, will not have a vnode
    passed in any more.  The VOP wrapper routines need a common point of
    reference to tell them what to do.
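    A minimal sketch of that vnode-less dispatch, with all names
    hypothetical: the wrapper receives a management structure carrying the
    operations vector, so a namespace op like remove never needs a vnode
    argument at all.

```c
#include <assert.h>

/* Hypothetical operations vector supplied by the VFS. */
struct vnodeops_sketch {
    int (*vop_remove)(const char *name);
};

/* Hypothetical management structure passed into every vop_*() call. */
struct fsmgmt_sketch {
    struct vnodeops_sketch *ops;
};

/* Stub filesystem implementation of remove. */
static int ufs_remove_stub(const char *name)
{
    (void)name;
    return 0;                    /* pretend the name was removed */
}

/* Wrapper: no vnode argument; the mgmt structure says what to call. */
static int vop_remove_wrapper(struct fsmgmt_sketch *mgmt, const char *name)
{
    return mgmt->ops->vop_remove(name);
}
```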

    The mount structure pointer would suit both needs BUT there is a
    medium sized programming problem that needs to be resolved before
    that can happen... right now a standard filesystem like UFS will
    store one of *three* different operations vectors into a vnode depending
    on whether the vnode represents a file/dir, a device, or a pipe.

    We have to figure out what to do about that before we can move the
    operations vector from the vnode to the mount structure (or construct
    some sort of governing structure, maybe not 'mount' but something new).

    vop_wrapper(mount_p, other args...)
		 +-> Journaling/Replication hooks (kernel managed)
		 +-> Cache coherency hooks (kernel managed)
		 +-> Range locking hooks (kernel managed)
		 +-> [Vnode operations vector] (VFS managed)
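    The layering above might look like the following in C.  The hook
    functions and the wrapper are stand-ins invented for this sketch that
    only record their call order; the real kernel-managed layers would do
    actual work before the VFS-managed vector is reached.

```c
#include <assert.h>
#include <string.h>

static char trace[64];           /* records the order the layers ran in */

static void record(const char *s) { strcat(trace, s); }

static void journal_hook(void)   { record("J"); }  /* journaling/replication */
static void cohere_hook(void)    { record("C"); }  /* cache coherency */
static void rangelock_hook(void) { record("R"); }  /* range locking */
static int  vfs_op(void)         { record("V"); return 0; }  /* VFS vector */

/* The wrapper threads the call through each kernel-managed layer,
 * then finally dispatches to the VFS-managed operations vector. */
static int vop_wrapper(void)
{
    journal_hook();
    cohere_hook();
    rangelock_hook();
    return vfs_op();
}
```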

    I also want to consolidate the struct fileops and device ops functions 
    into the same management structure.  Most of the subsystems listed
    above apply equally to fileops AND devices (at least block storage
    devices).  It would be utterly cool to not only be able to journal
    high level FS calls, but to also journal lower level block I/O.

