DragonFly kernel List (threaded) for 2003-11
Re: Added-checksum filesystem
Kip Macy <kmacy@xxxxxxxxxxx> wrote:
> Netapp filers already do this. On ZCS disks every 64th file system block
> consists of checksums for the other 63. On BCS, sectors are 520 bytes,
> so every 4k block has an additional 64 bytes in which the checksum is
Several vendors (all the large unix ones?) have systems using 520-byte-sector
disks, but I think they always stash an individual per-sector checksum in
the extra 8 bytes. This is of course very easy to offload to the disk.
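A minimal sketch of that per-sector layout, with an illustrative struct and a
toy checksum (real hardware/firmware would use a stronger code; all names here
are made up for illustration):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define DATA_BYTES 512
#define TAG_BYTES    8   /* the "extra 8 bytes" of a 520-byte sector */

/* One 520-byte sector: 512 bytes of data plus an 8-byte checksum tag. */
struct sector520 {
    uint8_t  data[DATA_BYTES];
    uint64_t csum;           /* stored in the tag area */
};

/* Toy 64-bit rotate-xor checksum over the data area (illustrative only). */
static uint64_t sector_csum(const uint8_t *buf)
{
    uint64_t sum = 0;
    for (size_t i = 0; i < DATA_BYTES; i++)
        sum = (sum << 1 | sum >> 63) ^ buf[i];
    return sum;
}

static void sector_write(struct sector520 *s, const uint8_t *src)
{
    memcpy(s->data, src, DATA_BYTES);
    s->csum = sector_csum(s->data);   /* checksum travels with the sector */
}

/* Returns 1 if the data still matches its stored checksum, 0 otherwise. */
static int sector_verify(const struct sector520 *s)
{
    return sector_csum(s->data) == s->csum;
}
```

Since the checksum lives in the same sector as the data, the drive (or
controller) can verify it on every read with no extra seeks, which is why
offloading it is cheap.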
> stored. This is really only meaningful if you have a parity block (if
> it is the parity block that fails the check, than you just re-generate
> it) for the stripe to recover the data from, otherwise you are really
> only notifying yourself that you've lost data and it is time to restore
> from backup.
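The recovery path the quoted paragraph describes - regenerate a block that
fails its check from the parity of the stripe - can be sketched with plain
XOR parity (stripe width and block size are made-up illustrative values):

```c
#include <stdint.h>
#include <stddef.h>

#define BLK   4096   /* bytes per block (illustrative) */
#define NDATA 4      /* data blocks per stripe (illustrative) */

/* Compute the parity block as the XOR of all data blocks in the stripe. */
static void parity_gen(uint8_t data[NDATA][BLK], uint8_t parity[BLK])
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t p = 0;
        for (int d = 0; d < NDATA; d++)
            p ^= data[d][i];
        parity[i] = p;
    }
}

/* Rebuild data block `bad` from the parity plus the surviving blocks.
 * (If the parity block itself fails the check, you just re-run
 * parity_gen() instead - exactly as the quoted text says.) */
static void parity_rebuild(uint8_t data[NDATA][BLK],
                           const uint8_t parity[BLK], int bad)
{
    for (size_t i = 0; i < BLK; i++) {
        uint8_t p = parity[i];
        for (int d = 0; d < NDATA; d++)
            if (d != bad)
                p ^= data[d][i];
        data[bad][i] = p;
    }
}
```

The point stands: without the parity block the checksum can only *detect* the
loss; with it, XOR-ing the survivors gives the lost data back.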
Which *is* a useful thing to know. It might, say, save you a whole
bunch of time otherwise spent trying to find out why you got an odd
result (before thinking to check the data).
Zip-style disks are worse in this respect.
For correction you'd have to compute per-fragment ECC, and while this might
be useful in some cases I suspect the cost may be prohibitive, but I have no
> Few people realize just how crappy today's disks are, the most recent
> generation of disks of all classes (ATA, SCSI, FC) have substantially
> higher media error rates than the previous generation. For people who
> have large disk systems and genuinely depend on the integrity of their
> data, what you are suggesting is not idle banter.
Even if disk error rates had remained the same, things would not be nice,
as the volume of data has gone up considerably - commodity disks have
grown by a factor of at least 20 over the past 6 years. In 3 more we'll
have lost about two orders of magnitude of protection.
Running this over any base setup is imho important - which is why it
belongs (imho) at the file system level and not the disk subsystem level.
+++ Out of cheese error +++