Adding a hot spare to pool


Adding a hot spare to pool

Post by tschlichter » Tue Apr 02, 2013 8:22 pm

Hello.

I created a zpool some time ago, and I am now getting disk errors on one disk in that pool. Can I add a hot spare now, using the 'add' command, and then force the pool to replace the failing disk using the 'replace' command? The pool is the maximum allowable size; will that pose any problem with adding the spare? I am referencing the Solaris ZFS documentation for the syntax. Will these commands work the same in ZEVO ZFS?
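
For reference, the Solaris-style syntax being asked about would look roughly like this; the pool name and device names below are hypothetical:

Code: Select all

# Add a hot spare to the existing pool.
zpool add tank spare /dev/newdisk

# Then tell the pool to replace the failing device with the new one.
zpool replace tank GPTE_failing-disk /dev/newdisk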

thanks!

Re: Adding a hot spare to pool

Post by scasady » Wed Apr 03, 2013 12:54 pm

what do you mean "the pool is the maximum allowable size" ?
You shouldn't need to add a dev as a spare first, just use replace.
If the failing drive is part of a mirror, then it is best to use "attach" to make a three-way mirror, wait for the resilver to finish, and then detach the failing drive.
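
A rough sketch of that attach-then-detach sequence, with hypothetical pool and device names:

Code: Select all

# Attach a new disk alongside the failing one, making a three-way mirror.
zpool attach tank GPTE_failing-disk /dev/newdisk

# Watch until the resilver completes.
zpool status tank

# Then drop the failing disk out of the mirror.
zpool detach tank GPTE_failing-disk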

Re: Adding a hot spare to pool

Post by tschlichter » Wed Apr 03, 2013 3:37 pm

16 TB is the maximum allowable pool size under ZEVO Community Edition.

Re: Adding a hot spare to pool

Post by scasady » Wed Apr 03, 2013 3:49 pm

Ah... I forgot about that ZEVO maximum size thing.

Re: Adding a hot spare to pool

Post by tschlichter » Wed Apr 03, 2013 6:28 pm

Actually, it's not a mirror; it's five 3 TB disks in a pool created without a spare. That's why I would like to add a spare now and then replace the failing drive.

Re: Adding a hot spare to pool

Post by raattgift » Wed Apr 03, 2013 8:20 pm

Is the maximum size still present in CE 1.1.1?

How does it manifest?

It clearly doesn't apply at the vdev or pool level.

I can sorta see reasons why it *may* apply at the DMU (or ZPL) layer, but I'm not likely to get around to wiring up extra 3TB drives to a Mac to find out particularly soon.

Code: Select all

$ uname -a
Darwin ... 12.3.0 Darwin Kernel Version 12.3.0: Sun Jan  6 22:37:10 PST 2013; root:xnu-2050.22.13~1/RELEASE_X86_64 x86_64

$ zpool list Dual
NAME    SIZE   ALLOC    FREE     CAP  HEALTH  ALTROOT
Dual  21.8Ti  17.7Ti  4.08Ti     81%  ONLINE  -

$ zpool status -v Dual
  pool: Dual
 state: ONLINE
 scan: scrub repaired 112Ki in 56h31m with 0 errors on Sat Mar 30 21:32:08 2013
config:

   NAME                                           STATE     READ WRITE CKSUM
   Dual                                           ONLINE       0     0     0
     raidz3-0                                     ONLINE       0     0     0
       GPTE_   ONLINE       0     0     0  at disk19s2
       GPTE_   ONLINE       0     0     0  at disk14s2
       GPTE_   ONLINE       0     0     0  at disk13s2
       GPTE_   ONLINE       0     0     0  at disk11s2
       GPTE_   ONLINE       0     0     0  at disk15s2
       GPTE_   ONLINE       0     0     0  at disk12s2
       GPTE_   ONLINE       0     0     0  at disk16s2
       GPTE_   ONLINE       0     0     0  at disk18s2
   logs
     GPTE_    ONLINE       0     0     0  at disk7s1
   cache
     GPTE_    ONLINE       0     0     0  at disk4s2

errors: No known data errors

$ zfs list -o space Dual
NAME   AVAIL    USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
Dual  2.13Ti  10.1Ti         0  35.0Mi              0     10.1Ti


Poking around with strings and egrep finds nothing particularly revealing other than:

Code: Select all
/System/Library/Filesystems/zfs.fs/Contents/Info.plist:

                <key>ZFS Pool</key>
                <dict>
...
                        <key>FSFormatMaximumSize</key>
                        <integer>9223372034707292160</integer>
                        <key>FSFormatMinimumSize</key>
                        <integer>268435456</integer>
...
                        <key>FSName</key>
                        <string>ZFS Pool</string>
...
                </dict>


which caps the pool size at 9223372034707292160 bytes (roughly 8 EiB), *a lot* larger than 16 TiB.

Re: Adding a hot spare to pool

Post by raattgift » Wed Apr 03, 2013 8:36 pm

tschlichter wrote:Actually, it's not a mirror; it's five 3 TB disks in a pool created without a spare. That's why I would like to add a spare now and then replace the failing drive.


You have a pool with five top-level vdevs; none of the vdevs has any redundancy. A failure of any one vdev faults the pool and also risks the pool becoming unimportable and its data unretrievable in the future. Additionally, you get no online self-repair should a sudden burst of errors affect one of the drives.

If you have to rely on the data in the pool, you are already in deep trouble!

I hope you have backups. (1 April was international backup day).

You can add redundancy to each of the vdevs, and have a reasonably fault-tolerant pool by adding five more similar drives, each as a mirror of one of the five existing vdevs.

You would "zpool attach poolname GPTE_vdev1 /dev/newdrive1"
"zpool attach poolname GPTE_vdev2 /dev/newdrive2"
...
"zpool attach poolname GPTE_vdev5 /dev/newdrive5"

At that point, no small set of disk errors will fault your pool (many errors will also be repaired in line), and your data remains online and available as long as one disk in each now-mirrored vdev is working.

If you really don't care about the data, then scasady's advice still applies: attach a drive to the single-drive vdev whose drive you want to replace, wait for the resilvering to happen, then detach the old drive.

Finally, if you can't add new disks for some reason and you want a reasonable pool, you should destroy the existing one and rebuild it with some redundancy. If your workload is anything other than dominated by enormous numbers of random write IOPS, then the five drives would make a fine raidz2.
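
A minimal sketch of that rebuild, only after the data has been backed up elsewhere; the pool name and device names here are hypothetical:

Code: Select all

# Destroy the old non-redundant pool (only once the data is safe elsewhere!).
zpool destroy tank

# Recreate it as a single raidz2 vdev across the same five drives.
zpool create tank raidz2 /dev/disk1 /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5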

(I would not make a raidz1 out of them; resilvering 3 TB drives in raidzN takes a long, long time, and you will not like it if errors appear on any of the surviving drives during a raidz1 resilver. I would, however, make a pool of two mirror vdevs from four of them and keep the fifth around as a spare.)
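
That alternative layout would look something like the following, again with hypothetical device names:

Code: Select all

# Two mirrored pairs, plus the fifth drive as a hot spare.
zpool create tank mirror /dev/disk1 /dev/disk2 mirror /dev/disk3 /dev/disk4
zpool add tank spare /dev/disk5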

Link

Post by grahamperrin » Thu Apr 04, 2013 9:40 pm

raattgift wrote:Is the maximum size still present in CE 1.1.1? …


Please see developer Don Brady's post under Community Edition 16TB Limit.

Re: Adding a hot spare to pool

Post by raattgift » Fri Apr 05, 2013 9:03 am

Riiiight, but this just re-underlines my questions.

grahamperrin wrote:Please see developer Don Brady's post under Community Edition 16TB Limit.


which says:

dbrady wrote:The current 16TB limit is for the total pool storage size, not including spares, logs, and cache devices. This pool size property can be seen via the zpool list command or zpool get.

Code: Select all
$ zpool get size tank
NAME    PROPERTY  VALUE   SOURCE
tank    size      16Ti    -



Here's Dual again (cf. the output of "zpool list" in the message you replied to):

Code: Select all
# zpool get size Dual
NAME  PROPERTY  VALUE   SOURCE
Dual  size      21.8Ti  -

