Moving a Pool


Moving a Pool

Post by d.jacobs » Wed Oct 17, 2012 4:45 pm

Hi,

Moving a 4-disk pool from one host to another. The pool resides in an external Mediabox USB3/ESATA chassis and consists of 4x 3 TB disks arranged as two striped mirrors, for 6 TB of usable space.
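(For reference, that is a stripe of two 2-way mirrors. A minimal sketch of how a pool of that shape is typically created, with hypothetical device names standing in for the real GPTE identifiers:)

Code: Select all
# Sketch only - device names below are placeholders, not the actual devices.
# Two 2-way mirrors striped together: roughly 6 TB usable from 4x 3 TB disks.
sudo zpool create Mediabox \
    mirror disk2 disk3 \
    mirror disk5 disk6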

On the old host, using a USB3 add-in card or plain USB2, the pool imports and runs fine.

On the new host, a MacBook Pro with an ESATA ExpressCard, the pool registers as corrupt:

Code: Select all
  pool: Mediabox
    id: 12635093095557323110
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

   Mediabox                                       UNAVAIL  insufficient replicas
     mirror-0                                     UNAVAIL  corrupted data
       GPTE_7E0EB42A-951E-44AE-A186-3D4222A8B818  ONLINE
       GPTE_6A41B2EE-22D7-4EF1-BB9F-84E857222A1A  ONLINE
     mirror-1                                     UNAVAIL  corrupted data
       GPTE_7725770F-1D79-4A91-90BA-4B57244B3905  ONLINE
       GPTE_3A8190E1-248D-4B82-B1AF-E83932289B6C  ONLINE



Reconnecting it to the old host works fine. I still need to try running a USB cable from the enclosure to the MacBook (which I expect will work), but the goal is ESATA speeds instead of USB2 speeds on the laptop, so I'd like to figure out why it won't work over the ESATA card.

EDIT: Confirmed: the pool imports and functions fine over USB on the MacBook, so the problem only appears when the disks are exposed over ESATA. I've used this enclosure with ESATA and ZEVO in the past without a problem, so I'm not sure what's changed.
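(If the automatic import keeps failing over ESATA, a manual import that scans the device nodes directly may be worth trying; a sketch only, assuming standard zpool import behaviour:)

Code: Select all
# Sketch only - scan /dev for pool labels, then import the pool by name
sudo zpool import -d /dev
sudo zpool import -d /dev Mediabox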

Code: Select all
/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk2
   1:                        EFI                         209.7 MB   disk2s1
   2:                        ZFS                         3.0 TB     disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk3
   1:                        EFI                         209.7 MB   disk3s1
   2:                        ZFS                         3.0 TB     disk3s2
/dev/disk4
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:             zfs_pool_proxy Mediabox               *6.0 TB     disk4
/dev/disk5
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk5
   1:                        EFI                         209.7 MB   disk5s1
   2:                        ZFS                         3.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         3.0 TB     disk6s2

Re: Moving a Pool

Post by scasady » Wed Oct 17, 2012 6:49 pm

I can't really help, but just to let you know you are not the only one: I had to give up using the ESATA interface to an external Seagate drive because it was just too unreliable. It would sometimes work and sometimes not. USB was fine. Nothing to do with ZFS in this case.

Re: Moving a Pool

Post by grahamperrin » Wed Oct 17, 2012 9:57 pm

Which OSes?

Re: Moving a Pool

Post by dbrady » Wed Oct 17, 2012 10:03 pm

Is there anything ZFS-related in the kernel log when it fails to import?
/var/log/system.log (on 10.8)
/var/log/kernel.log (10.7 and 10.6)
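(For example, something along these lines will pull out the relevant entries, using the paths above:)

Code: Select all
grep -i zfs /var/log/system.log | tail -n 50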

order of numbering of disks required by a zfs_pool_proxy

Post by grahamperrin » Wed Oct 17, 2012 10:14 pm

Recalling part of what was said at the recent ZFS Day:

"??we dynamically build it up on the fly when you plug it in, and as soon as we get critical mass ? add an import, and a straggler comes along, online??"

d.jacobs wrote: … fine with USB …


The order is intriguing: disks 2, 3, 5 and 6 for a zfs_pool_proxy at 4.

For a pool that requires four disks, I imagine that:

  • all must be numbered by the system ('critical mass') before the import is automated by ZEVO.

If that requirement is true, then:

  • I don't understand how the pool (the zfs_pool_proxy) could be fine with late arrivals.


I assume that OS numbering of the zfs_pool_proxy does not precede the import.

So: is there leeway within ZFS or ZEVO to allow for late arrival of two disks in a pool of the type configured by d.jacobs?

Code: Select all
   pool                                       
     mirror-0                                     
       disk 
       disk 
     mirror-1                                     
       disk 
       disk
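(A sketch only, assuming standard zpool behaviour rather than anything ZEVO-specific: a layout of two 2-way mirrors can import in a DEGRADED state as long as at least one device from each mirror is present, and a straggler can then be brought back explicitly:)

Code: Select all
# Sketch only - the device name is a placeholder, not one of the real GPTE IDs
zpool status Mediabox                      # shows DEGRADED if a mirror member is missing
sudo zpool online Mediabox GPTE_XXXXXXXX   # bring a late-arriving device back online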
Last edited by grahamperrin on Thu Oct 18, 2012 1:52 am, edited 5 times in total.

Re: Moving a Pool

Post by audiophil » Thu Oct 18, 2012 1:11 am

A 4-drive ESATA/USB combo case is probably using some form of port multiplier / FIS chip. Added complexity there; does the case work on that ESATA card with more than one ESATA drive attached? And what ESATA card (vendor) are you using? I'm curious.
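(One quick way to see how many of the drives actually show up behind the ESATA card, using the stock OS X tools; a sketch:)

Code: Select all
# Controllers and the drives attached to each SATA/eSATA port
system_profiler SPSerialATADataType
# Block devices the OS has created
diskutil list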

Re: Moving a Pool

Post by d.jacobs » Thu Oct 18, 2012 4:44 pm

grahamperrin wrote:Which OSes?


OS X 10.8.2 on both machines.

dbrady wrote:Is there anything zfs related in the kernel log when it fails to import?
/var/log/system.log (on 10.8)
/var/log/kernel.log (10.7 and 10.6)



Code: Select all
10/18/12 4:42:13.000 PM kernel[0]: ZFSLabelScheme:probe: label 'Mediabox', vdev 2144138310266240547
10/18/12 4:42:13.000 PM kernel[0]: ZFSLabelScheme:probe: label 'Mediabox', vdev 9074293012843039376
10/18/12 4:42:15.000 PM kernel[0]: ZFSLabelScheme:probe: label 'Mediabox', vdev 17623926178148148774
10/18/12 4:42:15.000 PM kernel[0]: ZFSLabelScheme:start: 'Mediabox' critical mass with 3 vdevs (importing)
10/18/12 4:42:15.000 PM kernel[0]: zfsx_kev_importpool:'Mediabox' (12635093095557323110)
10/18/12 4:42:17.000 PM kernel[0]: ZFSLabelScheme:probe: label 'Mediabox', vdev 6434596853639618350
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_open: 'Mediabox' disk6s2
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_open: 'Mediabox' disk2s2
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_open: 'Mediabox' disk5s2
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_open: 'Mediabox' disk4s2
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_close: 'disk6s2'
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_close: 'disk5s2'
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_close: 'disk4s2'
10/18/12 4:42:21.000 PM kernel[0]: zfsx_vdm_close: 'disk2s2'
10/18/12 4:42:21.000 PM kernel[0]: zfsdev_ioctl: function error 22 on command 2


The following appears in the log once the ESATA cable is disconnected:
Code: Select all
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802b0a1e00 provider 0xffffff802b098600 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:stop: 0xffffff802b0a1e00 goodbye 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802b0a1e00 provider 0xffffff802b06c400 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:stop: 0xffffff802b0a1e00 goodbye 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802b0a1e00 provider 0xffffff802af8af00 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:stop: 0xffffff802b0a1e00 goodbye 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802b0a1e00 provider 0xffffff8032e0c200 'zfs vdev for 'Mediabox''
10/18/12 5:04:24.000 PM kernel[0]: ZFSLabelScheme:stop: 0xffffff802b0a1e00 goodbye 'zfs vdev for 'Mediabox''
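(For what it's worth, error 22 on OS X is most likely EINVAL, i.e. the import ioctl is rejecting the request rather than timing out. Watching the log live while reconnecting the ESATA cable might show which vdev probe differs from the USB case; a sketch:)

Code: Select all
# Follow ZFS-related kernel messages live while reconnecting the enclosure
tail -f /var/log/system.log | grep -i zfs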
Last edited by d.jacobs on Thu Oct 18, 2012 5:05 pm, edited 2 times in total.

Re: Moving a Pool

Post by d.jacobs » Thu Oct 18, 2012 5:02 pm

audiophil wrote:A 4-drive ESATA/USB combo case is probably using some form of port multiplier / FIS chip. Added complexity there; does the case work on that ESATA card with more than one ESATA drive attached? And what ESATA card (vendor) are you using? I'm curious.


I unfortunately don't have another ESATA device to test against at the moment. The case definitely has a port multiplier for ESATA (Amazon link to case: Mediasonic HF2-SU3S2). The ESATA card is a CalDigit FASTA-2ex, which is noted to support port multiplier enclosures. System Profiler shows this for the AHCI portion of the card (the IDE portion has no driver loaded, as expected on OS X):

Code: Select all
ExpressCard:

  Type:   AHCI Controller
  Driver Installed:   Yes
  MSI:   Yes
  Bus:   PCI
  Slot:   ExpressCard
  Vendor ID:   0x1b4b
  Device ID:   0x9123
  Subsystem Vendor ID:   0x1b4b
  Subsystem ID:   0x9123
  Revision ID:   0x0011
  Link Width:   x1
  Link Speed:   5.0 GT/s



I believe the above IDs identify this as a Marvell chip, likely an 88SE9123.
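(The System Profiler listing above can also be pulled from the command line if anyone wants to compare against another card; a sketch using the stock tool:)

Code: Select all
# PCI/ExpressCard details, including vendor and device IDs
system_profiler SPPCIDataType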

Re: order of numbering of disks required by a zfs_pool_proxy

Post by d.jacobs » Thu Oct 18, 2012 5:18 pm

grahamperrin wrote:The order is intriguing: disks 2, 3, 5 and 6 for a zfs_pool_proxy at 4.


Here's an odder situation, then. I rebooted the MacBook, waited until the desktop settled, then connected the USB cable. Checking `diskutil list`, the proxy now appears after the first disk, but at half the pool's final capacity. Shortly afterward, once the volume mounted, the proxy showed 6.0 TB.

In the past, when I noticed the proxy in the middle of all the disks, I assumed (but never checked) that the first two disks were one from each mirrored group, i.e. a pair of the striped halves, so the pool was usable but not fault tolerant until the other two disks caught up and were brought online. With just one disk showing up before the proxy, as below... no clue.

Code: Select all
diskutil list (with USB connection)

/dev/disk2
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk2
   1:                        EFI                         209.7 MB   disk2s1
   2:                        ZFS                         3.0 TB     disk2s2
/dev/disk3
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:             zfs_pool_proxy                        *3.0 TB     disk3
/dev/disk4
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk4
   1:                        EFI                         209.7 MB   disk4s1
   2:                        ZFS                         3.0 TB     disk4s2
/dev/disk5
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk5
   1:                        EFI                         209.7 MB   disk5s1
   2:                        ZFS                         3.0 TB     disk5s2
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *3.0 TB     disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         3.0 TB     disk6s2



I did notice, with the ESATA cable connected, that the proxy showed up after all four disks at the time I looked. I'm not sure whether that means anything, either.

Re: Moving a Pool

Post by dbrady » Thu Oct 18, 2012 7:06 pm

Until the pool is imported, the proxy device is just a placeholder, and its size and name ordering are not significant. It typically gets created after the first device is probed, so if there are stragglers (or the enclosure is serializing device discovery) the proxy can appear before other pool devices. Once the pool is imported, its size will be adjusted.
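(Once an import does succeed, the adjusted size and overall pool health can be confirmed in the usual way; a sketch, assuming standard zpool commands:)

Code: Select all
zpool list Mediabox     # SIZE should now reflect the full pool capacity
zpool status Mediabox   # all four devices should show ONLINE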
