The first connection to the drive(s) for a pool is trouble-free. If the pool's volumes are then ejected with Finder and the cable is disconnected, reconnecting the same drive(s) results in a kernel panic with the following characteristics:
- type 14=page fault
- BSD process name corresponding to current thread: zpool
Workaround
If an affected drive is to be disconnected, do not use Finder to eject its volumes. Instead:
- zpool export nameofpool
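A minimal sketch of the full sequence, assuming a pool named tall (the name from my setup; substitute your own pool name):
Code:
# Export the pool before touching the cable; this unmounts its volumes
# and releases the drives cleanly, instead of ejecting with Finder.
zpool export tall

# ...disconnect the cable, then later reconnect the drive(s)...

# Once the drives are visible again, re-import the pool.
zpool import tall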
Background
MacBookPro5,2 with 8 GB memory and USB 2. Mountain Lion. For a few months I used a single-disk pool with its hard disk drive (Seagate GoFlex Desk (0x50a5)) connected to an old hub (Sitecom USB 2.0 Dock CN-022 (0x0022)).
A few days ago I added another hard disk drive to the pool: Seagate Backup+ Desk (0xa0a4) connected to the same old hub. Following that addition, every reconnection (second connection) to the hub resulted in a kernel panic.
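For reference, a second disk is normally added as a new top-level vdev with zpool add (a sketch, not my exact command; the device name is illustrative):
Code:
# Add a second top-level vdev to the pool 'tall'.
# 'disk6' is illustrative; use the actual BSD device of the new drive.
zpool add tall disk6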
There's a similar topic involving USB 3 (not USB 2): Kernel panic on import (USB3 external pool). Under that topic there are links to six earlier topics and to kamil's workaround, for which I'm most grateful.
Notes
[HardwareGrowler notifications: before a kernel panic, and without a panic]
For the most recent incident, a review of system.log (stored on HFS Plus) suggests that the messages below were the last before the panic:
Code:
2013-03-14 17:54:12.000 kernel[0]: ZFSLabelScheme:probe: label 'tall', vdev 14860644203735757357
2013-03-14 17:54:16.000 kernel[0]: ZFSLabelScheme:probe: label 'tall', vdev 3631269066261115103
2013-03-14 17:54:16.000 kernel[0]: ZFSLabelScheme:start: 'tall' critical mass with 2 vdevs (importing)
2013-03-14 17:54:16.000 kernel[0]: zfsx_kev_importpool:'tall' (11005033736436061818)
In fact, there were two more recent messages (lost as a result of the panic); screenshots were taken moments before the panic. The relevant parts of the messages I saw immediately before the panic:
Code:
… kernel: zfsx_vdm_open: 'tall' disk5s2
… kernel: zfsx_vdm_open: 'tall' disk3s2
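To correlate those BSD device names and the vdev numbers from ZFSLabelScheme:probe with the physical drives, the on-disk labels can be dumped directly; a sketch, assuming ZEVO's zdb behaves like the usual zdb and that the disks are still at disk3s2 and disk5s2 (the identifiers can change between connections):
Code:
# List attached disks and their BSD device nodes.
diskutil list

# Dump the ZFS label on each pool partition; the 'guid' field should
# match the vdev numbers reported in system.log.
zdb -l /dev/disk3s2
zdb -l /dev/disk5s2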
I could (and eventually will) purchase a more modern hub, but essentially this seems like a bug in #zevo …