replacing drive in raidz pool

All your general support questions for OpenZFS on OS X.

Postby carpman » Sat Feb 11, 2017 7:46 am

Hi, I have had one drive go down in a four-drive raidz pool. I have a replacement drive installed and ready to go.

I formatted the new drive with GUID_partition_scheme.

I then ran:

Code: Select all
sudo gpt destroy /dev/disk7    (this is the new drive)

Then
Code: Select all
sudo zpool replace -f Storage 14940719397246996105 /dev/disk7
cannot replace 14940719397246996105 with /dev/disk7: no such device in pool


Any ideas what to try now?

Thanks


Code: Select all
zpool status
  pool: Ext-Storage
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
   the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://zfsonlinux.org/msg/ZFS-8000-2Q
  scan: resilvered 3.50K in 0h0m with 0 errors on Thu Mar 19 09:00:19 2015
config:

   NAME                                            STATE     READ WRITE CKSUM
   Ext-Storage                                     DEGRADED     0     0     0
     raidz1-0                                      DEGRADED     0     0     0
       media-2FFC8CE3-425B-254E-9B6A-62036451A846  ONLINE       0     0     0
       14940719397246996105                        UNAVAIL      0     0     0  was /private/var/run/disk/by-id/media-DD542733-D4B7-6142-9A7B-4E3AF436B7DE
       media-F2D090E2-F4DA-154B-82A6-D95A9728FE84  ONLINE       0     0     0
       media-9DD90B89-7A3C-B642-B9FF-369D95A29BC3  ONLINE       0     0     0

errors: No known data errors



Code: Select all
zdb
Ext-Storage:
    version: 5000
    name: 'Ext-Storage'
    state: 0
    txg: 4880145
    pool_guid: 2819644488756112022
    errata: 0
    hostid: 186269896
    hostname: 'localhost'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 2819644488756112022
        children[0]:
            type: 'raidz'
            id: 0
            guid: 10415523844208958417
            nparity: 1
            metaslab_array: 33
            metaslab_shift: 35
            ashift: 9
            asize: 4000759939072
            is_log: 0
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 7212440721484018281
                path: '/private/var/run/disk/by-id/media-2FFC8CE3-425B-254E-9B6A-62036451A846'
                whole_disk: 1
                DTL: 53
                create_txg: 4
            children[1]:
                type: 'disk'
                id: 1
                guid: 14940719397246996105
                path: '/private/var/run/disk/by-id/media-DD542733-D4B7-6142-9A7B-4E3AF436B7DE'
                whole_disk: 1
                not_present: 1
                DTL: 52
                create_txg: 4
            children[2]:
                type: 'disk'
                id: 2
                guid: 8972474747319470450
                path: '/private/var/run/disk/by-id/media-F2D090E2-F4DA-154B-82A6-D95A9728FE84'
                whole_disk: 1
                DTL: 51
                create_txg: 4
            children[3]:
                type: 'disk'
                id: 3
                guid: 13300463437381664295
                path: '/private/var/run/disk/by-id/media-9DD90B89-7A3C-B642-B9FF-369D95A29BC3'
                whole_disk: 1
                DTL: 50
                create_txg: 4
    features_for_read:



Code: Select all
/dev/disk5 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk5
   1:                        ZFS                         1.0 TB     disk5s1
   2: 6A945A3B-1DD2-11B2-99A6-080020736631               8.4 MB     disk5s9

/dev/disk6 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk6
   1:                        ZFS                         1.0 TB     disk6s1
   2: 6A945A3B-1DD2-11B2-99A6-080020736631               8.4 MB     disk6s9

/dev/disk7 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk7
   1:                        ZFS                         1.0 TB     disk7s1
   2: 6A945A3B-1DD2-11B2-99A6-080020736631               8.4 MB     disk7s9

/dev/disk8 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *1.0 TB     disk8
   1:                        ZFS                         1.0 TB     disk8s1
   2: 6A945A3B-1DD2-11B2-99A6-080020736631               8.4 MB     disk8s9
carpman
 
Posts: 13
Joined: Wed Mar 18, 2015 7:52 am

Re: replacing drive in raidz pool

Postby carpman » Sat Feb 11, 2017 11:48 am

Sorted, thanks.

Re: replacing drive in raidz pool

Postby Sharko » Sun Feb 12, 2017 1:57 pm

I'm curious: what fixed your problem? I would have liked to see the output of

zpool status -g

since that is where I always get the identifier of the unavailable drive for the replace command.
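For anyone landing here later, here is a small sketch (not from the thread) of pulling the numeric GUID off the UNAVAIL line of `zpool status` output with awk. The here-doc sample below is pasted from this thread's own status output; in real use you would pipe `zpool status <pool>` (or `zpool status -g <pool>`) in instead:

```shell
# Sketch: extract the numeric GUID shown on the UNAVAIL line of
# `zpool status` output. The sample lines are copied from this thread;
# replace the here-doc with a real `zpool status` pipe in practice.
status_output=$(cat <<'EOF'
        media-2FFC8CE3-425B-254E-9B6A-62036451A846  ONLINE       0     0     0
        14940719397246996105                        UNAVAIL      0     0     0
EOF
)
# awk splits on whitespace: field 1 is the vdev name/GUID, field 2 its state
guid=$(printf '%s\n' "$status_output" | awk '$2 == "UNAVAIL" { print $1 }')
echo "$guid"
```

That GUID is exactly what `zpool replace` wants as the old-device argument.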
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: replacing drive in raidz pool

Postby carpman » Mon Feb 13, 2017 1:17 pm

Bit embarrassing, really :(

I should have been selecting pool Ext-Storage but was selecting pool Storage!

