DragonFly users List (threaded) for 2009-01

Re: RAID 1 or Hammer


From: Matthew Dillon <dillon@xxxxxxxxxxxxxxxxxxxx>
Date: Mon, 12 Jan 2009 23:43:55 -0800 (PST)

:I confess that, lacking ZFS, I have a very paranoid strategy on my
:Linux machines for doing backups (of code workspaces, etc). I archive
:the code onto a tmpfs and checksum that, and from the tmpfs distribute
:the archive and checksum to local and remote archives. This avoids the
:unthinkably unlikely worst case where an archive can be written to
:disk, dropped from cache, corrupted, and read back wrong in time to be
:checksummed. The on-disk workspace itself has no such protection, but
:at least I can tell that each backup is as good as the workspace was
:when archived, which of course has to pass a complete recompile.
:
:-- 
:Dmitri Nikulin
:
:Centre for Synchrotron Science
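
    In shell terms, the quoted staging scheme comes down to roughly the
    following (a sketch only: the mount point, paths, and hosts are made
    up, and md5sum stands in for whatever checksum tool is actually used):

        # stage the archive on tmpfs so the checksum and every copy are
        # produced from the same in-memory bytes (Linux syntax, per above)
        mount -t tmpfs tmpfs /staging
        tar -cf /staging/work.tar -C /home work
        md5sum /staging/work.tar > /staging/work.tar.md5

        # fan the archive + checksum out from the tmpfs copy
        cp /staging/work.tar /staging/work.tar.md5 /local-archive/
        scp /staging/work.tar /staging/work.tar.md5 remote:/archive/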

    I pretty much do the same thing.  Tarring up a HAMMER snapshot produces
    a consistent MD5, so I basically just cpdup to a HAMMER backup machine
    and snapshot each day.  This way the backup sources don't necessarily
    have to be HAMMER filesystems.  I can check that the snapshots are
    still good by re-running the tar | md5 and comparing against the md5
    I generated on the day of the backup.
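
    In command form the daily cycle is something like the following
    (paths and snapshot naming are illustrative, not the exact layout):

        # pull the sources onto the HAMMER backup box; they can live on
        # any filesystem since only the backup side needs to be HAMMER
        cpdup /net/source/home /backup/home

        # freeze today's state: creates a softlink pointing at the
        # snapshot's transaction id
        hammer snapshot /backup/snap-%Y%m%d

        # record the checksum on the day of the backup
        tar -cf - -C /backup/snap-20090112 . | md5 > /root/snap-20090112.md5

        # any later day: the snapshot is frozen, so it must still tar
        # to the same md5 if the media is good
        tar -cf - -C /backup/snap-20090112 . | md5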

    For the off-site backup I am currently just cpdup'ing from the backup
    box to the off-site box, but I have to use the hardlink trick because
    the off-site box is running Linux.  Unfortunately I can't really
    validate the off-site at the moment.  When I get a DFly box out to the
    colo I'll just use HAMMER's mirroring stream feature and that will
    give me identical snapshot points on the off-site that will tar to
    the same md5's.  Theoretically anyway.
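
    For reference, the two pieces look roughly like this; the hostnames
    and PFS paths are made up, and rsync's --link-dest stands in for the
    hardlink trick since cpdup's exact flags for it aren't shown here:

        # Linux off-site today: hardlink unchanged files against the
        # previous day's tree so each day only costs the deltas
        rsync -a --link-dest=/offsite/snap-20090111 /backup/ \
            offsite:/offsite/snap-20090112/

        # planned DFly off-site: continuously stream the master PFS to a
        # slave PFS, giving both ends identical snapshot points
        hammer mirror-stream /backup/pfs/home offsite:/backup/pfs/home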

					-Matt
					Matthew Dillon 
					<dillon@backplane.com>


