DragonFly kernel List (threaded) for 2008-06

Re: HAMMER update 19-June-2008 (56C) (HEADS UP - MEDIA CHANGED)

From: Johannes Hofmann <hofmann@xxxxxxxxxxxxxxxxxxxxxx>
Date: 21 Jun 2008 14:41:01 GMT

Matthew Dillon <dillon@apollo.backplane.com> wrote:
> :Pardon my ignorance if I am missing something, I haven't looked much
> :into HAMMER yet.
> :
> :Will the FS have the same atomic update features that UFS has? Meaning
> :fsync(2) returns only when all directory entries are safely on the
> :disk (whether it's with softupdate-type ordering or journaling). It's
> :important for mail servers and such so they don't lose messages at the
> :time of powerfail/crash. If you dig around mailing lists, you'll find
> :interesting stories how people who ran their FS mounted async (the
> :default Linux EXT2/3 mount) for mail servers (and AFAIK at least on
> :Linux in that case fsync returns early - not atomic, so software
> :written with BSD behavior in mind wasn't safe to run without patching)
> :found some of the messages in lost+found.
>    Basically yes.  HAMMER maintains a dependency hierarchy for directory
>    entries and the related inodes, so if you create a directory structure
>    and then fsync some file deep down in it, it should fsync the directory
>    entries as well.
>    HAMMER does not have an async mount mode :-).  It will never have an
>    async mount mode, in fact, but it doesn't need one.  Writing is so well
>    decoupled from the media that an async mode would not actually make
>    things any faster.
>    HAMMER's fsync might return early too, BTW... not intentionally, it's
>    still an all-or-nothing deal from the point of view of crash recovery,
>    but if the inode is already queued to the flusher it would have to be
>    re-queued to get the rest of the modifications and that might cause
>    fsync() to return early.  Doing it properly shouldn't be too difficult
>    but it isn't at the top of my priority list.
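The dependency behavior described above can be sketched with a toy user-space model. All names here are illustrative, not HAMMER's actual structures; the point is only the ordering: fsync on a file also flushes any still-dirty directories on the path to it, parents first.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model of the fsync dependency hierarchy (illustrative names,
 * not HAMMER internals).  Each node remembers its containing
 * directory; flushing a node flushes dirty ancestors first.
 */
typedef struct node {
    struct node *parent;    /* containing directory, NULL for the root */
    int dirty;              /* 1 = not yet written to media            */
    int seq;                /* order it reached "media", 0 = never     */
} node_t;

static int flush_order;     /* global counter simulating media writes  */

/* Flush a node's dirty ancestors before the node itself. */
static void fsync_node(node_t *n)
{
    if (n->parent != NULL && n->parent->dirty)
        fsync_node(n->parent);      /* dependencies go out first */
    if (n->dirty) {
        n->dirty = 0;
        n->seq = ++flush_order;     /* simulated write to media  */
    }
}
```

Calling fsync_node() on the deepest file flushes the root directory first, then the intermediate directory, then the file itself, which mirrors the guarantee that the directory entries leading to a fsync'd file are on media too.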
> :Also will there be a feature to grow/and or shrink the FS live without
> :having to unmount? I can do this right now with XFS and LVM on Linux
> :(grow, but not shrink), and its working amazingly well and very
> :quickly to boot.
> :
> :Thanks.
> :
> :-- 
> :Dan
>    I haven't written the utility support but growing a HAMMER filesystem
>    is fairly trivial.  All one needs to do is add the appropriate entries
>    to the freemap.  The same goes for adding new volumes to a HAMMER
>    filesystem.
>    HAMMER's freemap is a two-layer blockmap.  It is NOT pre-sized, and
>    there is no block translation.  It works more like a sparse file whose
>    size is the maximum possible size of a HAMMER filesystem (uh, that
>    would be, uh, 1 exabyte I think with the work done last week).
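The sparse two-layer layout can be illustrated with a toy model (structure names and sizes below are made up, not HAMMER's real freemap, and there is no error handling): layer-2 pages are allocated lazily, so "growing" the filesystem is nothing more than mapping new regions into the freemap.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Toy two-layer freemap (hypothetical names and sizes).  Layer 1 is a
 * fixed array of pointers; layer-2 pages are allocated only when a
 * region is first added, so the map behaves like a sparse file that
 * covers the maximum possible filesystem size.
 */
#define L1_ENTRIES 256          /* top-level slots             */
#define L2_ENTRIES 1024         /* blocks tracked per L2 page  */

typedef struct {
    uint8_t free[L2_ENTRIES];   /* 1 = block free, 0 = in use  */
} layer2_t;

typedef struct {
    layer2_t *l1[L1_ENTRIES];   /* NULL = region not yet mapped */
} freemap_t;

/* "Grow": make one region of the address space allocatable. */
static void freemap_add_region(freemap_t *fm, unsigned l1_idx)
{
    if (fm->l1[l1_idx] == NULL) {
        fm->l1[l1_idx] = calloc(1, sizeof(layer2_t));
        for (int i = 0; i < L2_ENTRIES; ++i)
            fm->l1[l1_idx]->free[i] = 1;
    }
}

/* Allocate the first free block; returns block number or -1. */
static long freemap_alloc(freemap_t *fm)
{
    for (unsigned i = 0; i < L1_ENTRIES; ++i) {
        if (fm->l1[i] == NULL)      /* sparse: skip unmapped regions */
            continue;
        for (unsigned j = 0; j < L2_ENTRIES; ++j) {
            if (fm->l1[i]->free[j]) {
                fm->l1[i]->free[j] = 0;
                return (long)i * L2_ENTRIES + j;
            }
        }
    }
    return -1;
}
```

In this sketch, growing the filesystem is exactly "adding the appropriate entries to the freemap": a single freemap_add_region() call makes a new region allocatable with no resizing or block translation anywhere else.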
>    Shrinking is also possible.  Not only shrinking, but also removing whole
>    volumes.  Again, the feature hasn't been written yet and it would be a
>    bit more time-consuming because the reblocker would have to be run to
>    clean out (aka copy out) the areas being removed, but there would be
>    nothing inherently difficult about it and it certainly could be done
>    live.

Would it be possible to do that by allowing read-only volumes? So that
hammer_blockmap_alloc() would not return space from these read-only
volumes. Reblocking could then move the data off these volumes.
Read-only volumes could perhaps also be used for copy-on-write 
functionality, or am I missing something?
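The read-only-volume idea could be sketched like this (a hypothetical user-space model; volume_t and pick_volume are made-up names, not the internals of hammer_blockmap_alloc()): the allocator simply skips any volume flagged read-only, so the reblocker can drain it without new data landing on it.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of allocation that honors a per-volume read-only flag
 * (illustrative names, not HAMMER's real allocator).  A volume being
 * removed is flagged read-only first; after that, no new blocks are
 * handed out from it and reblocking can copy its data elsewhere.
 */
typedef struct {
    int read_only;      /* set before draining the volume */
    int free_blocks;    /* blocks still available         */
} volume_t;

/* Return the index of a writable volume with space, or -1 if none. */
static int pick_volume(const volume_t *vols, int nvols)
{
    for (int i = 0; i < nvols; ++i) {
        if (vols[i].read_only)      /* never allocate from these */
            continue;
        if (vols[i].free_blocks > 0)
            return i;
    }
    return -1;
}
```

With volume 0 flagged read-only, pick_volume() passes over it even if it still has free space, which is exactly the property needed for a live shrink: the drained volume only ever loses data to the reblocker.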


>    p.s. if someone wants to make a side-project of it, go for it!
>    Mirroring is at the top of my list for the release.  Frankly, the
>    best way to resize a filesystem is to mirror and cluster, and then
>    simply take the 'old' filesystem offline and completely redo it.
>    Clustering is kinda the holy grail for the project and clearly won't
>    be ready for this release, but it is something to think about.
>                                        -Matt
>                                        Matthew Dillon 
>                                        <dillon@backplane.com>
