Increasing size by replacing/resilvering known to work?

Postby alanr » Wed Nov 18, 2015 10:32 pm

My zpool consists of 4 mirrors. I want to increase the size of the pool by replacing the 1.5 TB disks in one of the mirrors with 4 TB disks. I've read that I can replace one of them, wait for the resilver, replace the other, wait for the resilver again, and then the space available to the pool will increase (in this case by 2.5 TB).
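
For concreteness, the sequence I have in mind looks roughly like this (the pool and device names below are just placeholders, not my actual setup):

Code: Select all
# Replace the first 1.5 TB disk in the mirror with a 4 TB disk
zpool replace tank disk4 disk8
zpool status tank        # wait until the resilver finishes

# Then replace the second 1.5 TB disk
zpool replace tank disk5 disk9
zpool status tank        # wait for the second resilver

# Check whether the additional space shows up
zpool list tank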

Do I understand the process correctly? Is this known to work (or not) in HEAD or a recent version?

Thanks,
Alan
alanr
 
Posts: 1
Joined: Thu Jun 12, 2014 12:26 am

Re: Increasing size by replacing/resilvering known to work?

Postby ilovezfs » Fri Nov 20, 2015 8:01 am

As far as I know, that has always worked. Here's an example for you:

Code: Select all
bash-3.2# devicesize() {(set -o pipefail ; diskutil info -plist "$1" &>/dev/null && diskutil info -plist "$1" | xmllint --xpath '//key[text()="IOKitSize"]/following-sibling::*[1]/text()' - && echo "")}
bash-3.2# for i in {2..7} ; do printf "disk${i}\t" ; devicesize "disk${i}" || echo "-" ; done
disk2   268435456
disk3   268435456
disk4   402653184
disk5   402653184
disk6   536870912
disk7   536870912
bash-3.2# for i in {2..7} ; do printf "disk${i}s1\t" ; devicesize "disk${i}s1" || echo "-" ; done
disk2s1   257949696
disk3s1   257949696
disk4s1   392167424
disk5s1   392167424
disk6s1   -
disk7s1   -
bash-3.2# #pool uses whole disks 2, 3, 4, and 5
bash-3.2# #so it really uses partitions disk2s1, ..., disk5s1
bash-3.2# #BEFORE size should be 257949696 bytes + 392167424 bytes
bash-3.2# echo '257949696 + 392167424' | bc
650117120
bash-3.2# gnumfmt --to=iec-i --suffix=B <<< $(echo '257949696 + 392167424' | bc)
620MiB
bash-3.2# zpool list mypool
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool   608M  1.03M   607M         -     0%     0%  1.00x  ONLINE  -
bash-3.2# #So that looks about right with about 12MiB of overhead
bash-3.2# zpool status
  pool: mypool
 state: ONLINE
  scan: none requested
config:

   NAME        STATE     READ WRITE CKSUM
   mypool      ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       disk2   ONLINE       0     0     0
       disk3   ONLINE       0     0     0
     mirror-1  ONLINE       0     0     0
       disk4   ONLINE       0     0     0
       disk5   ONLINE       0     0     0

errors: No known data errors
bash-3.2# zpool offline mypool disk4
bash-3.2# zpool replace mypool disk4 disk6
invalid vdev specification
use '-f' to override the following errors:
/dev/disk6 does not contain an EFI label but it may contain partition
information in the MBR.
bash-3.2# zpool replace -f mypool disk4 disk6
bash-3.2# zpool list mypool
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool   608M  1.60M   606M         -     0%     0%  1.00x  ONLINE  -
bash-3.2# zpool offline mypool disk5
bash-3.2# zpool replace mypool disk5 disk7
invalid vdev specification
use '-f' to override the following errors:
/dev/disk7 does not contain an EFI label but it may contain partition
information in the MBR.
bash-3.2# zpool replace -f mypool disk5 disk7
bash-3.2# zpool list mypool
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool   608M  1.34M   607M      128M     0%     0%  1.00x  ONLINE  -
bash-3.2# #Where's the new space?
bash-3.2# for i in {2..7} ; do printf "disk${i}s1\t" ; devicesize "disk${i}s1" || echo "-" ; done
disk2s1   257949696
disk3s1   257949696
disk4s1   392167424
disk5s1   392167424
disk6s1   526385152
disk7s1   526385152
bash-3.2# #Note that disk6s1 and disk7s1 already span nearly the entire disk
bash-3.2# #with the same amount of space left over for s9
bash-3.2# echo '536870912 - 526385152' | bc
10485760
bash-3.2# echo '402653184 - 392167424' | bc
10485760
bash-3.2# echo '268435456 - 257949696' | bc
10485760
bash-3.2# #So why is it saying 608M still?
bash-3.2# gnumfmt --to=iec-i --suffix=B <<< $(echo '257949696 + 526385152' | bc)
748MiB
bash-3.2# #So with still about 12MiB overhead, we should expect about 736MiB as the AFTER size.
bash-3.2# #How do we get the pool to use the new space?
bash-3.2# zpool online -e mypool disk6
cannot expand disk6: cannot relabel '/dev/disk6': unable to open device: 16
bash-3.2# zpool offline mypool disk6
bash-3.2# zpool online -e mypool disk6
bash-3.2# zpool list mypool
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
mypool   736M  1.33M   735M         -     0%     0%  1.00x  ONLINE  -
bash-3.2# #OK, so there's the 736MiB we were expecting.
bash-3.2# zpool status
  pool: mypool
 state: ONLINE
  scan: resilvered 17K in 0h0m with 0 errors on Fri Nov 20 07:55:09 2015
config:

   NAME        STATE     READ WRITE CKSUM
   mypool      ONLINE       0     0     0
     mirror-0  ONLINE       0     0     0
       disk2   ONLINE       0     0     0
       disk3   ONLINE       0     0     0
     mirror-1  ONLINE       0     0     0
       disk6   ONLINE       0     0     0
       disk7   ONLINE       0     0     0

errors: No known data errors
bash-3.2#
ilovezfs
 
Posts: 232
Joined: Thu Mar 06, 2014 7:58 am

Re: Increasing size by replacing/resilvering known to work?

Postby lundman » Mon Nov 23, 2015 4:44 pm

Yes, it works. Be aware that the pool property "autoexpand" defaults to off, so once the final disk (in each vdev) is replaced, you will need to export/import (or reboot) for the pool to grow. You can set autoexpand to "on" before replacing the last disk if you want it to "just get bigger" on its own.
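
A minimal sketch of both approaches, reusing the placeholder pool and disk names from the example above:

Code: Select all
# Option 1: turn on autoexpand before the last replacement, so the
# vdev grows as soon as the final resilver completes
zpool set autoexpand=on mypool
zpool replace mypool disk5 disk7   # add -f if zpool complains about the existing label

# Option 2: leave autoexpand off, then export/import (or reboot)
# after the last disk has been replaced and resilvered
zpool export mypool
zpool import mypool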
lundman
 
Posts: 1337
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

