DragonFly BSD
DragonFly users List (threaded) for 2012-07

Re: Unable to mount hammer file system Undo failed

From: Wojciech Puchar <wojtek@xxxxxxxxxxxxxxxxxxxxxxx>
Date: Thu, 19 Jul 2012 18:13:47 +0200 (CEST)

>> Any tree-like structure produces a huge risk of losing much more data than
>> was corrupted in the first place.
> Not so sure about that statement, but well, let's agree we might disagree :)
Disagreement is the source of all good ideas, but you should explain why.

My explanation is below.

> You asked for a little documentation about its layout, workings; this may be 
> a good fit: http://www.dragonflybsd.org/presentations/nycbsdcon08/
This is about an older HAMMER revision.

Matthew claimed some time ago that the new HAMMER is completely different.

But after reading it, I understood that everything is in a B-Tree: exactly
what I call dangerous. The B-Tree is used to store everything, directory
entries, inodes, etc.

B-Trees are dangerous if they are used as the only way to access data. A
corrupted B-Tree means no access to anything below the corruption point!!
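A toy sketch of why this matters (plain Python, nothing HAMMER-specific): when
tree traversal is the only access path, one corrupted interior node hides every
intact leaf beneath it:

```python
# Toy model: a corrupted interior node cuts off every leaf below it,
# even though the leaf blocks themselves are perfectly intact on disk.

class Node:
    def __init__(self, children=None, data=None, corrupted=False):
        self.children = children or []   # interior node: child pointers
        self.data = data                 # leaf node: payload
        self.corrupted = corrupted

def reachable_leaves(node):
    """Collect leaf payloads reachable by normal top-down traversal."""
    if node.corrupted:
        return []                        # cannot trust pointers in a bad block
    if node.data is not None:
        return [node.data]
    leaves = []
    for child in node.children:
        leaves += reachable_leaves(child)
    return leaves

# Four intact leaves, but one interior node is corrupted:
root = Node(children=[
    Node(corrupted=True, children=[Node(data="a"), Node(data="b")]),
    Node(children=[Node(data="c"), Node(data="d")]),
])
print(reachable_leaves(root))  # only ['c', 'd'] survive traversal
```

Leaves "a" and "b" still exist, but without a scan of the raw media there is no
way to reach them.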

The main differences I see between HAMMER and ZFS are:

1) Practical: HAMMER is very fast and doesn't use gigabytes of RAM or lots of
CPU. Not that I did a lot of tests, but it seems like UFS speed, sometimes even
more, rarely less.

It is actually USEFUL, which cannot be said of ZFS ;)

2) The basic way of storing data is similar; the details are different, but the danger is the same.

3) HAMMER has a recovery program. It will need to read the whole media; assume
a 2TB disk at 100MB/s -> 20000 seconds, roughly 6 hours.
ZFS doesn't have one; there are a few businesses that recover ZFS data for
money. For sure they don't feel it's a crisis ;)
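The scan-time figure above checks out with simple arithmetic (20000 seconds is
closer to 5.6 hours, rounded up to 6 in the text):

```python
# Back-of-the-envelope check of the full-media scan time quoted above.
disk_bytes = 2 * 10**12          # 2 TB disk
read_rate  = 100 * 10**6         # 100 MB/s sustained sequential read
seconds = disk_bytes / read_rate # -> 20000.0 seconds
hours = seconds / 3600           # -> about 5.56 hours
print(seconds, hours)
```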

Assume I store my clients' data in a HAMMER filesystem and it crashed
completely, but the disks are fine. Assume it's Tuesday 4pm, the last backup
was made automatically Monday at 5:30pm, the failure is found at 5pm, and I am
on site at 6pm.

I ask my client: what do you prefer?

- Wait 6 hours, with a good chance that most of your data will be recovered?
If so, the few missing files would be identified and restored from backup. If
not, we would start a full restore from backup, which would take another 6
hours.

- Just wipe everything and start a restore from backup, so that everything
would for sure be recovered as it was yesterday after work?

The answer?

So here is what I would propose:

1) Divide the disk space into a metadata space and a data space, with the
amount of metadata space defined at filesystem creation, say 3% of the whole
drive.

2) Data is stored only in B-Tree leaves, and all B-Tree leaves are stored in
the "metadata space". A few critical filesystem blocks are stored there too, at
predefined places.

3) Everything else is stored in the data space: B-Tree blocks excluding
leaves, the undo log, and the actual data.

4) Everything else stays as it already is, with a modification to make sure
every B-Tree leaf block carries data describing it properly: inodes have their
inode number inside, and directories have their inode number inside too. AFAIK
it is already like that.

5) hammer recover is modified to scan this 3% of the space and then rebuild
the B-Tree. It will work as fast as or faster than fsck_ffs this way, in spite
of being a "last resort" tool.
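The recovery pass in point 5 could look roughly like this. This is a
hypothetical illustration, not the real hammer recover code: the `MAGIC`
header, the block layout, and the function name are all invented for the
example; the only idea taken from the proposal is "scan the reserved metadata
region and rebuild an index from self-describing leaf blocks":

```python
# Hypothetical sketch of the proposed recovery scan (invented format,
# not HAMMER's on-disk layout): walk the reserved metadata region,
# keep blocks whose self-describing header is valid, index by inode.

MAGIC = b"LEAF"          # assumed per-leaf magic marking a valid header

def scan_metadata_region(blocks):
    """Return {inode_number: payload} for every valid leaf block found."""
    index = {}
    for block in blocks:
        if block[:4] != MAGIC:
            continue                           # damaged or non-leaf block
        inode = int.from_bytes(block[4:8], "little")
        index[inode] = block[8:]               # payload described by header
    return index

# Simulated metadata region: two valid leaves and one garbage block.
region = [
    b"LEAF" + (7).to_bytes(4, "little") + b"file-7-data",
    b"\x00garbage....",
    b"LEAF" + (9).to_bytes(4, "little") + b"file-9-data",
]
rebuilt = scan_metadata_region(region)
print(sorted(rebuilt))  # [7, 9]
```

Because only the small metadata region (3% of the disk in point 1) has to be
read, the scan time shrinks accordingly compared to reading the whole media.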

THE RESULT: a fast and featureful filesystem that can always be quickly
recovered, even in "last resort" cases.
