DragonFly users List (threaded) for 2009-02

Re: OT - was Hammer or ZFS based backup, encryption


From: Ulrich Spörlein <uspoerlein@xxxxxxxxx>
Date: Sun, 22 Feb 2009 20:12:36 +0100

On Sun, 22.02.2009 at 06:33:44 -0800, Jeremy Chadwick wrote:
> On Sun, Feb 22, 2009 at 01:36:28PM +0100, Michael Neumann wrote:
> > Okay "zpool remove" doesn't seem to work as expected, but it should
> > work well at least for RAID-1 (which probably no one uses for large
> > storage systems ;-). Maybe "zfs replace" works, if you replace an old
> > disk, with a larger disk, and split it into two partitions, the one
> > equally sized to the old, and the other containing the remainder of the
> > space. Then do:
> > 
> >   zfs replace tank old_device new_device_equally_sized
> >   zfs add tank new_device_remainder
> > 
> > But you probably know more about ZFS than me ;-)
> 
> In this case, yes (that I know more about ZFS than you :-) ).  What
> you're trying to do there won't work.
> 
> The "zfs" command manages filesystems (e.g. pieces under a zpool).  You
> cannot do anything with devices (disks) with "zfs".  I think you mean
> "zpool", especially since the only "replace" command is "zpool replace".
> 
> What you're trying to describe won't work, for the same reason I
> described above (with your "zpool add tank ad8s1" command).  You can
> split the disk into two pieces if you want, but it's not going to
> change the fact that you cannot *grow* a zpool.  You literally have to
> destroy it and recreate it for the pool to increase in size.
> 
> I've been through this procedure twice in the past year, as I replaced
> 250GB disks with 500GB, and then 500GB disks with 750GB.  It's a *huge*
> pain, and I cannot imagine anyone in an enterprise environment using ZFS
> to emulate a filer -- it simply won't work.  For individual servers
> (where disks are going to remain the same size unless the box is
> formatted, etc.), oh yes, ZFS is absolutely fantastic.

This is nonsense, of course. Here's proof (running on FreeBSD 7.1):

root@roadrunner: ~# mdconfig -atswap -s128m
md1
root@roadrunner: ~# mdconfig -atswap -s128m
md2
root@roadrunner: ~# mdconfig -atswap -s256m
md3
root@roadrunner: ~# mdconfig -atswap -s256m
md4
root@roadrunner: ~# zpool create foo mirror md1 md2
root@roadrunner: ~# zpool status foo
  pool: foo
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            md1     ONLINE       0     0     0
            md2     ONLINE       0     0     0

errors: No known data errors
root@roadrunner: ~# zfs list foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    106K  90.9M    18K  /foo
root@roadrunner: ~# zpool replace foo md1 md3
root@roadrunner: ~# zpool scrub foo
root@roadrunner: ~# zpool status foo
  pool: foo
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Feb 22 20:06:18 2009
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            md3     ONLINE       0     0     0
            md2     ONLINE       0     0     0

errors: No known data errors
root@roadrunner: ~# zpool replace foo md2 md4
root@roadrunner: ~# zpool scrub foo
root@roadrunner: ~# zpool status foo
  pool: foo
 state: ONLINE
 scrub: scrub completed with 0 errors on Sun Feb 22 20:06:35 2009
config:

        NAME        STATE     READ WRITE CKSUM
        foo         ONLINE       0     0     0
          mirror    ONLINE       0     0     0
            md3     ONLINE       0     0     0
            md4     ONLINE       0     0     0

errors: No known data errors
root@roadrunner: ~# zfs list foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    110K  90.9M    18K  /foo
root@roadrunner: ~# zpool export foo; zpool import foo
root@roadrunner: ~# zfs list foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    110K   219M    18K  /foo

The export/import dance might be a problem in an HA environment, of
course. But the same trick works for RAIDZ, too.
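
(Setup elided here: the mirror pool has to be destroyed and two more
swap-backed devices created first. Roughly like this -- the exact sizes
are an assumption, inferred from the AVAIL figures below:)

root@roadrunner: ~# zpool destroy foo
root@roadrunner: ~# mdconfig -atswap -s128m
md5
root@roadrunner: ~# mdconfig -atswap -s256m
md6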

root@roadrunner: ~# zpool create foo raidz md1 md2 md5
root@roadrunner: ~# zfs list foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    122K   214M  24.0K  /foo
root@roadrunner: ~# zpool replace foo md1 md3
root@roadrunner: ~# zpool replace foo md2 md4
root@roadrunner: ~# zpool replace foo md5 md6
root@roadrunner: ~# zfs list foo; zpool export foo; zpool import foo; zfs list foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    122K   214M  24.0K  /foo
NAME   USED  AVAIL  REFER  MOUNTPOINT
foo    126K   470M  24.0K  /foo
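
On real hardware the recipe is the same (a sketch only -- the pool name
"tank" and the ad4/ad8 device names are made up):

root@roadrunner: ~# zpool replace tank ad4 ad8   # swap in the bigger disk
root@roadrunner: ~# zpool status tank            # wait until resilvering is done
root@roadrunner: ~# zpool export tank
root@roadrunner: ~# zpool import tank
root@roadrunner: ~# zfs list tank                # AVAIL now reflects the new size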


Cheers,
Ulrich Spörlein
-- 
None are more hopelessly enslaved than those who falsely believe they are free
-- Johann Wolfgang von Goethe


