DragonFly users List (threaded) for 2007-02
DragonFly BSD

From: Matthew Dillon <dillon@apollo.backplane.com>
Subject: Re: Plans for 1.8+ (2.0?)
Date: Mon, 19 Feb 2007 23:11:07 -0800 (PST)

:Hey Matt,
:1) Does your filesystem plan include the ability to grow and shrink a 
:partition/volume? ie. /home is running out of space so we could run 
:"shrinkfs" ... on /usr which has a lot of space and "growfs" ... on /home

    The filesystem's backing store will be segmented.  Segment size can 
    range from 1MB to 4GB (ish).  A 'partition' would be able to hold
    multiple segments from the filesystem's point of view, but the main
    purpose of the segmentation is to create large, near-independent 
    blocks of data which can be dealt with on a segment-by-segment basis
    (e.g. for recovery, fsck/check, replication, growing, and shrinking).

    Segmentation also means the filesystem's backing store is not 
    restricted to a single block device but can be glued together
    with several block devices, or even mixed-and-matched between
    separate replicated data stores for recovery purposes.

    So, yes, it will be possible to grow or shrink the filesystem on
    a segment-by-segment basis.
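    A minimal sketch of what such a self-identifying segment header could
    look like in C.  The field names, magic value, and layout here are
    purely illustrative, not an actual on-disk format:

```c
#include <stdint.h>

/* Hypothetical segment header sketch; names and layout are invented. */
#define SEG_MAGIC    0x53454748u     /* arbitrary magic for this sketch */
#define SEG_MIN_SIZE (1ULL << 20)    /* 1MB lower bound on segment size */
#define SEG_MAX_SIZE (4ULL << 30)    /* 4GB upper bound on segment size */

struct seg_header {
    uint32_t magic;     /* marks this block as a segment header */
    uint32_t fs_id;     /* which filesystem the segment belongs to */
    uint64_t seg_size;  /* segment size in bytes, 1MB..4GB */
    uint64_t seg_index; /* logical position within that filesystem */
};

/* A header is plausible if the magic matches and the size is in range. */
int
seg_header_valid(const struct seg_header *h)
{
    return h->magic == SEG_MAGIC &&
           h->seg_size >= SEG_MIN_SIZE &&
           h->seg_size <= SEG_MAX_SIZE;
}
```

    Because everything needed to place the segment lives in its own
    header, a recovery scan can rebuild the segment-to-filesystem map
    from the raw media alone.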

:2) Are you going to do away with the disklabel stuff and replace it with 
:something better/easier to use?

    Probably not in 2.0.  The disklabel still serves a purpose with
    regards to mixing and matching different filesystems.

    However, within the context of the new filesystem itself each
    'segment' will be completely identified in its header so segments
    belonging to different filesystems could commingle within one
    disklabel partition.  The disklabel would simply say that the
    storage is associated with the new filesystem but would not imply
    that a particular partition would be associated with a mount 1:1
    like they are currently.

    This would effectively remove the partitioning requirement.  You would
    just say how many segments you wanted each 'filesystem' to use, 
    dynamically.  Growing is easy.  Shrinking would require a background
    scan or temporary relocation of the affected segments but would
    also be easy.
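    A toy model of that dynamic allocation, assuming (purely for
    illustration) a flat table mapping each segment to an owning
    filesystem id, with 0 meaning free:

```c
#include <stddef.h>
#include <stdint.h>

/* Toy model: owners[i] is the filesystem id owning segment i, 0 = free. */

/* Count how many segments a filesystem currently owns. */
size_t
fs_segments(const uint32_t *owners, size_t nseg, uint32_t fs)
{
    size_t n = 0;
    for (size_t i = 0; i < nseg; i++)
        if (owners[i] == fs)
            n++;
    return n;
}

/* Grow: claim up to 'want' free segments; returns how many were claimed. */
size_t
fs_grow(uint32_t *owners, size_t nseg, uint32_t fs, size_t want)
{
    size_t got = 0;
    for (size_t i = 0; i < nseg && got < want; i++)
        if (owners[i] == 0) {
            owners[i] = fs;
            got++;
        }
    return got;
}

/* Shrink: release up to 'want' of the filesystem's segments to free. */
size_t
fs_shrink(uint32_t *owners, size_t nseg, uint32_t fs, size_t want)
{
    size_t freed = 0;
    for (size_t i = 0; i < nseg && freed < want; i++)
        if (owners[i] == fs) {
            owners[i] = 0;
            freed++;
        }
    return freed;
}
```

    In this model growing a filesystem is just claiming free entries;
    shrinking hands them back, with the real version additionally
    relocating any live data out of the released segments first.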

    Since segments will self-identify in their header, the actual physical
    location of a segment becomes irrelevant.

    If you had 1TB of storage and 4GB segments the kernel would have to
    do only 256 I/Os (reading the segment headers) to self-identify all
    the segments and associate them with their filesystems, just as an
    example.  Such a list would be cached, of course, but the point is
    that for recovery purposes the OS would be able to regenerate the
    list from scratch, given only access to the physical storage, with 
    minimal delay.
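    The arithmetic behind that example, spelled out (numbers taken
    straight from the paragraph above):

```c
#include <stdint.h>

/* One header read per segment identifies every segment on the media,
 * so a full from-scratch scan costs storage / segment_size I/Os. */
uint64_t
header_ios(uint64_t storage_bytes, uint64_t seg_bytes)
{
    return storage_bytes / seg_bytes;
}
```

    With 1TB (2^40 bytes) of storage and 4GB (2^32 byte) segments this
    comes out to 2^8 = 256 header reads.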

:3) Is vinum finally gonna die with the new filesystem? ie. volume 
:manager will be integrated in the new file system, like ZFS?

    I personally have never used vinum.  I never trusted the code enough
    to use it... not so much the original code, but the fact that it has
    gone unmaintained for so long a period of time.

    But, yes, the new filesystem will have its own volume manager based
    on the principle of self-identifying disk segments.

    Note that I am not talking about RAID-5 here.  I'm talking about
    replication topologies only.  I have no intention of supporting RAID-5
    or other physical abstractions beyond pure replication at the logical
    level.  This isn't to say that RAID-5 would not be supportable, only
    that it would have to be implemented at the block level or the device
    level rather than at the filesystem level.  The replication on the
    other hand will be fully integrated into the filesystem.

					Matthew Dillon 
