DragonFly bugs List (threaded) for 2008-06

Re: File system panic on recent HEAD


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Wed, 4 Jun 2008 10:50:14 -0700 (PDT)

:FWIW, I have been running with a HAMMER filesystem on /home since May 25.
:
:I have not had any problem with it since Matt fixed the owner set to root
:issue :-)
:
:-- 
:Francois Tigeot

    Excellent.  I recently added filesystem-full detection.  Note however
    that any filesystems formatted prior to the addition of that code will
    report too many free blocks, and filesystem-full detection will
    not work properly.  Also, newer versions of newfs_hammer pre-allocate
    *ALL* the blockmap infrastructure.
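
    To illustrate the idea, here is a minimal sketch of such a check.
    All of the names are invented for illustration and do not reflect
    the actual HAMMER code:

/*
 * Hypothetical sketch: reject allocations when the free big-block
 * count falls below a reserve.  Names are illustrative only.
 */
#include <errno.h>
#include <stdint.h>

struct hypo_mount {
	int64_t free_bigblocks;	/* maintained by the allocator  */
	int64_t rsv_bigblocks;	/* reserved for flushes/UNDO    */
};

static int
hypo_check_space(struct hypo_mount *hmp, int64_t need)
{
	/*
	 * On volumes formatted before the counter was added this
	 * value is too high, so the ENOSPC check never triggers.
	 */
	if (hmp->free_bigblocks - hmp->rsv_bigblocks < need)
		return (ENOSPC);
	return (0);
}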

    Right now I am working on the read() and write() path.  I have removed
    the double-copy from the read() path and I expect to be able to remove
    the double-copy from the write() path as well as have the strategy
    write code actually perform the IO directly, instead of queueing the
    BIO to the flusher.  That should double HAMMER's write() efficiency
    and jump its write performance up to the platter speed, as well as
    fix issues that have cropped up in 'blogbench' testing where pending
    writes sometimes cause HAMMER's buffer efficiency to drop drastically.
    I'm hoping to be able to commit it in the next two days or so.
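
    For illustration, here is a userland analogue of the two read paths.
    The names are invented; in the kernel the final copy-out to the
    user's buffers is done with uiomove():

#include <string.h>

/* Double copy: cached disk block -> staging buffer -> caller. */
static void
read_twocopy(const char *diskblk, char *staging, char *userbuf, size_t n)
{
	memcpy(staging, diskblk, n);	/* copy #1: into staging buffer */
	memcpy(userbuf, staging, n);	/* copy #2: out to the caller   */
}

/* Single copy: hand the cached disk block straight to the caller. */
static void
read_onecopy(const char *diskblk, char *userbuf, size_t n)
{
	memcpy(userbuf, diskblk, n);	/* the only copy */
}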

    It is also clear to me that I need to have a catastrophic recovery mode,
    for situations where the filesystem becomes corrupt beyond the control
    of the software (such as due to memory corruption or a failing disk or
    something like that).  That is, a way to extract good filesystem objects
    from a mostly destroyed filesystem.  HAMMER has a huge amount of
    referential redundancy due to the typed blockmaps it uses.  Even the
    freemap has typed back-pointers into the blockmaps!  It is literally
    possible to simply scan the FREEMAP zone, locate all the big-blocks
    related to B-Tree nodes, scan them linearly, and reconstruct the
    entire filesystem from that information.
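
    A minimal sketch of that scan, using invented types (the real
    on-disk layout differs):

#include <stdint.h>
#include <stddef.h>

#define HYPO_ZONE_BTREE	8	/* invented zone id */

struct hypo_freemap_ent {
	uint8_t	 zone;		/* typed back-pointer into blockmaps */
	uint64_t bigblock_off;	/* big-block this entry describes    */
};

static void
hypo_recover(const struct hypo_freemap_ent *fm, size_t nent,
	     void (*scan_bigblock)(uint64_t off))
{
	for (size_t i = 0; i < nent; ++i) {
		/* Keep only big-blocks the freemap says hold B-Tree nodes */
		if (fm[i].zone == HYPO_ZONE_BTREE)
			scan_bigblock(fm[i].bigblock_off); /* linear scan */
	}
}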

    Those two things, plus further stabilization testing, are going to keep
    me busy through to the release in mid-July.

    There is currently one known bug, which I also intend to fix this month.
    The bug is that if you have a really large file and you 'rm' it, HAMMER
    can run out of UNDO space (and panic) because it tries to remove all
    the records associated with the file in a single sync cycle.  This
    is easy to reproduce on small filesystems but can still happen on large
    filesystems even if they reserve the maximum 512M of UNDO FIFO space,
    if the file is really big (like hundreds of gigabytes big).
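
    One possible shape for the fix, sketched with invented names and an
    assumed worst-case UNDO cost per record, is to cap deletions per
    sync cycle and resume in the next one:

#include <stdint.h>

struct hypo_trans { int64_t undo_avail; };

#define HYPO_UNDO_PER_REC  128	/* assumed worst-case UNDO bytes/record */

static int
hypo_truncate_chunked(struct hypo_trans *trans, int64_t *records_left,
		      int (*delete_one)(void),
		      void (*sync_cycle)(struct hypo_trans *))
{
	while (*records_left > 0) {
		/*
		 * Stop before the UNDO FIFO can be overcommitted;
		 * sync_cycle() is expected to flush and restore
		 * trans->undo_avail before we resume.
		 */
		if (trans->undo_avail < HYPO_UNDO_PER_REC) {
			sync_cycle(trans);
			continue;
		}
		if (delete_one() != 0)
			return (-1);
		trans->undo_avail -= HYPO_UNDO_PER_REC;
		--(*records_left);
	}
	return (0);
}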

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


