DragonFly users List (threaded) for 2009-02
DragonFly BSD

Re: OT - was Hammer or ZFS based backup, encryption

From: Freddie Cash <fjwcash@xxxxxxxxx>
Date: Sun, 22 Feb 2009 22:32:02 -0800

On Sun, Feb 22, 2009 at 6:33 AM, Jeremy Chadwick <jdc@parodius.com> wrote:
> On Sun, Feb 22, 2009 at 01:36:28PM +0100, Michael Neumann wrote:
>> Okay "zpool remove" doesn't seem to work as expected, but it should
>> work well at least for RAID-1 (which probably no one uses for large
>> storage systems ;-). Maybe "zfs replace" works, if you replace an old
>> disk, with a larger disk, and split it into two partitions, the one
>> equally sized to the old, and the other containing the remainder of the
>> space. Then do:
>>   zfs replace tank old_device new_device_equally_sized
>>   zfs add tank new_device_remainder
>> But you probably know more about ZFS than me ;-)
> In this case, yes (that I know more about ZFS than you :-) ).  What
> you're trying to do there won't work.
> The "zfs" command manages filesystems (e.g. pieces under a zpool).  You
> cannot do anything with devices (disks) with "zfs".  I think you mean
> "zpool", especially since the only "replace" command is "zpool replace".
> What you're trying to describe won't work, for the same reason I
> described above (with your "zpool add tank ad8s1" command).  You can
> split the disk into two pieces if you want, but it's not going to
> change the fact that you cannot *grow* a zpool.  You literally have to
> destroy it and recreate it for the pool to increase in size.
> I've been through this procedure twice in the past year, as I replaced
> 250GB disks with 500GB, and then 500GB disks with 750GB.  It's a *huge*
> pain, and I cannot imagine anyone in an enterprise environment using ZFS
> to emulate a filer -- it simply won't work.  For individual servers
> (where disks are going to remain the same size unless the box is
> formatted, etc.), oh yes, ZFS is absolutely fantastic.

This is patently false, and you've been creating unnecessary work for
yourself.  :)

You most definitely can add drives to a pool, thus increasing the
total amount of storage space available in the pool.  It's as simple as:
  zpool add <poolname> <type> <device1> <device2> <...>

That's the whole point of the "add" keyword ... you add storage to the
pool.  For example, you can create a pool using a 6-disk raidz2 vdev
like so:
  zpool create pool raidz2 da0 da1 da2 da3 da4 da5

Later, you can add another raidz2 vdev like so:
  zpool add pool raidz2 da6 da7 da8 da9

Your pool has now become, effectively, a RAID60:  a RAID0 stripe made
up of two RAID6 arrays.
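A rough sketch of the capacity math for that layout (my own
illustration, assuming equal-sized drives; raidz2 reserves two drives'
worth of parity per vdev, so usable space is the sum of each vdev's
data drives):

```python
def raidz2_data_drives(n_drives):
    """raidz2 stores two drives' worth of parity per vdev."""
    return n_drives - 2

# The example pool: a 6-drive raidz2 vdev plus a 4-drive raidz2 vdev.
usable = raidz2_data_drives(6) + raidz2_data_drives(4)
print(usable)  # drives' worth of usable space, out of 10 drives total
```

So the 10-drive "RAID60" pool gives you six drives' worth of usable
space, and each added vdev contributes its own data drives to that sum.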

You can later add a mirrored vdev to the pool using:
  zpool add pool mirror da10 da11

And data will be striped across the three different vdevs.  This is
the whole point of the pooled storage setup ... you just keep adding
storage to the pool, and it gets striped across it all.

You're getting tripped up by the same thing that I did when I first
started with ZFS:  you can't extend raidz vdevs (i.e. you can't start
with a 6-drive raidz2 and then later expand it into a 10-drive
raidz2).  But there's nothing stopping you from adding more raidz2
vdevs to the pool.

One of the servers we have at work uses 3x 8-drive raidz2 vdevs:

[fcash@thehive  ~]$ zpool status
  pool: storage
 state: ONLINE
 scrub: none requested

        NAME              STATE     READ WRITE CKSUM
        storage           ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk01  ONLINE       0     0     0
            label/disk02  ONLINE       0     0     0
            label/disk03  ONLINE       0     0     0
            label/disk04  ONLINE       0     0     0
            label/disk13  ONLINE       0     0     0
            label/disk14  ONLINE       0     0     0
            label/disk15  ONLINE       0     0     0
            label/disk16  ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk05  ONLINE       0     0     0
            label/disk06  ONLINE       0     0     0
            label/disk07  ONLINE       0     0     0
            label/disk08  ONLINE       0     0     0
            label/disk17  ONLINE       0     0     0
            label/disk18  ONLINE       0     0     0
            label/disk19  ONLINE       0     0     0
            label/disk20  ONLINE       0     0     0
          raidz2          ONLINE       0     0     0
            label/disk09  ONLINE       0     0     0
            label/disk10  ONLINE       0     0     0
            label/disk11  ONLINE       0     0     0
            label/disk12  ONLINE       0     0     0
            label/disk21  ONLINE       0     0     0
            label/disk22  ONLINE       0     0     0
            label/disk23  ONLINE       0     0     0
            label/disk24  ONLINE       0     0     0

errors: No known data errors

[fcash@thehive  ~]$ zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
storage                10.9T   3.90T   6.98T    35%  ONLINE     -
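(As a sanity check of my own, not something stated above:  that 10.9T
figure lines up with 24 drives of roughly 500 GB each, since "zpool
list" reports raw pool size with parity included, in binary units.
The drive size here is an assumption for illustration.)

```python
# Hypothetical: 24 drives of 500 GB (decimal) each.  "zpool list"
# shows raw capacity, parity included, in binary units (TiB).
drives = 24
size_bytes = 500 * 10**9
raw_tib = drives * size_bytes / 2**40
print(round(raw_tib, 1))  # ~10.9
```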

Freddie Cash
