L2ARC: dynamic import before cache vdev is present


L2ARC: dynamic import before cache vdev is present

Post by grahamperrin » Sun Nov 04, 2012 12:03 pm

OS X 10.8.2 on a MacBookPro5,2 with 8 GB memory.

ZEVO Community Edition 1.1.1 dynamically imports a pool before its cache vdev is connected to the computer, resulting in warnings and errors. My most recent example:

Code:
2012-11-04 16:40:21.000 kernel[0]: ZFSLabelScheme:probe: label 'zhandy', vdev 10616857169251329946
2012-11-04 16:40:21.000 kernel[0]: ZFSLabelScheme:start: 'zhandy' critical mass with 1 vdev(s) (importing)
2012-11-04 16:40:22.000 kernel[0]: zfsx_kev_importpool:'zhandy' (4688397874579579662)
2012-11-04 16:40:23.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk4s2
2012-11-04 16:40:23.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:23.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: ________________________________________
2012-11-04 16:40:25.000 kernel[0]: ZFS WARNING: 'error from: fs.zfs.vdev.open_failed'
2012-11-04 16:40:25.000 kernel[0]: pool: 'zhandy'
2012-11-04 16:40:25.000 kernel[0]: vdev_type: 'disk'
2012-11-04 16:40:25.000 kernel[0]: vdev_path: '/dev/dsk/GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: prev_state: 1
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: ________________________________________
2012-11-04 16:40:25.000 kernel[0]: ZFS WARNING: 'error from: fs.zfs.vdev.open_failed'
2012-11-04 16:40:25.000 kernel[0]: pool: 'zhandy'
2012-11-04 16:40:25.000 kernel[0]: vdev_type: 'disk'
2012-11-04 16:40:25.000 kernel[0]: vdev_path: '/dev/dsk/GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: prev_state: 1
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: zfsx_vdm_open: couldn't find vdevMedia for 'GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: ________________________________________
2012-11-04 16:40:25.000 kernel[0]: ZFS WARNING: 'error from: fs.zfs.vdev.open_failed'
2012-11-04 16:40:25.000 kernel[0]: pool: 'zhandy'
2012-11-04 16:40:25.000 kernel[0]: vdev_type: 'disk'
2012-11-04 16:40:25.000 kernel[0]: vdev_path: '/dev/dsk/GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6'
2012-11-04 16:40:25.000 kernel[0]: prev_state: 1
2012-11-04 16:40:25.000 kernel[0]: zfsx_mount: '/Volumes/zhandy'
2012-11-04 16:40:26.000 kernel[0]: zfsx_mount: '/Volumes/zhandy/Pocket Time Machine'
2012-11-04 16:40:51.000 kernel[0]: ZFSLabelScheme:probe: label '???', vdev 4693709588154435632
2012-11-04 16:40:51.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk6s2
2012-11-04 16:41:42.000 kernel[0]: zfsx_unmount: '/Volumes/zhandy/Pocket Time Machine' (umount)
2012-11-04 16:41:42.000 kernel[0]: zfsvfs_teardown: '/Volumes/zhandy/Pocket Time Machine' (txg_wait_synced in 171 ms)
2012-11-04 16:41:43.000 kernel[0]: zfsx_unmount: '/Volumes/zhandy' (umount)
2012-11-04 16:41:43.000 kernel[0]: zfsvfs_teardown: '/Volumes/zhandy' (txg_wait_synced in 158 ms)
2012-11-04 16:41:43.000 kernel[0]: zfsx_vdm_close: 'disk6s2'
2012-11-04 16:41:43.000 kernel[0]: zfsx_vdm_close: 'disk4s2'
2012-11-04 16:44:03.000 kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802dfe1900 provider 0xffffff8027947800 'zfs vdev for 'zhandy''
2012-11-04 16:44:03.000 kernel[0]: ZFSLabelScheme:stop: 0xffffff802dfe1900 goodbye 'zfs vdev for 'zhandy''
2012-11-04 16:44:05.000 kernel[0]: ZFSLabelScheme:willTerminate: this 0xffffff802dfe1900 provider 0xffffff802918a900 '%noformat%'
2012-11-04 16:44:05.000 kernel[0]: ZFSLabelScheme:stop: 0xffffff802dfe1900 goodbye '%noformat%'


– in the midst of that, I reached for the cache device as quickly as I could, made a connection, observed things in Console, then exported the pool (on this occasion I did not want reads or writes with an Unexpected Disk Condition):

Code:
sh-3.2$ date
Sun  4 Nov 2012 16:41:28 GMT
sh-3.2$ zpool export zhandy
sh-3.2$


– then I physically disconnected both devices.
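
While the pool is imported without its cache device, zpool status should show the cache vdev as unavailable while the pool itself stays online. A sketch only – the non-cache rows are illustrative, and just the cache path is taken from the logs above:

Code:
sh-3.2$ zpool status zhandy
  pool: zhandy
 state: ONLINE
config:

        NAME                                         STATE     READ WRITE CKSUM
        zhandy                                       ONLINE       0     0     0
          disk4s2                                    ONLINE       0     0     0
        cache
          GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6  UNAVAIL      0     0     0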

Subsequent connection of the devices – cache vdev first – seems to allow a more orderly approach to critical mass:

Code:
2012-11-04 16:44:13.000 kernel[0]: ZFSLabelScheme:probe: label '???', vdev 4693709588154435632
2012-11-04 16:44:13.000 kernel[0]: ZFSLabelScheme:hasCriticalMass: 0xffffff802e87c600 no top level vdevs!
2012-11-04 16:44:27.000 kernel[0]: ZFSLabelScheme:probe: label 'zhandy', vdev 10616857169251329946
2012-11-04 16:44:27.000 kernel[0]: ZFSLabelScheme:start: 'zhandy' critical mass with 2 vdev(s) (importing)
2012-11-04 16:44:27.000 kernel[0]: zfsx_kev_importpool:'zhandy' (4688397874579579662)
2012-11-04 16:44:28.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk5s2
2012-11-04 16:44:28.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk4s2
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_close: 'disk4s2'
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk4s2
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_close: 'disk4s2'
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk4s2
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_close: 'disk4s2'
2012-11-04 16:44:30.000 kernel[0]: zfsx_vdm_open: 'zhandy' disk4s2
2012-11-04 16:44:31.000 kernel[0]: zfsx_mount: '/Volumes/zhandy'
2012-11-04 16:44:32.000 kernel[0]: zfsx_mount: '/Volumes/zhandy/Pocket Time Machine'
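
For a quick confirmation that the cache vdev is attached and serving I/O, zpool iostat -v lists it in its own section under the pool. A sketch, with the figures elided and on the assumption that disk5s2 is the cache device:

Code:
sh-3.2$ zpool iostat -v zhandy
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zhandy        ...    ...    ...    ...    ...    ...
  disk4s2     ...    ...    ...    ...    ...    ...
cache           -      -      -      -      -      -
  disk5s2     ...    ...    ...    ...    ...    ...
----------  -----  -----  -----  -----  -----  -----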


Time now: 17:02. Let's see whether the Mac will restart without force …

no problem with reboot

Post by grahamperrin » Sun Nov 04, 2012 2:02 pm

… no problem. Love to ZFS.

In this instance I acted quickly and avoided/minimised reads and writes whilst the cache vdev was missing.

Whether things would have been so smooth in a more aggressive write scenario, I can't guess.

Cross reference

ZEVO resilience to untimely loss of L2ARC
grahamperrin Offline

User avatar
 
Posts: 1596
Joined: Fri Sep 14, 2012 10:21 pm
Location: Brighton and Hove, United Kingdom

Re: L2ARC: dynamic import before cache vdev is present

Post by grahamperrin » Mon Nov 05, 2012 12:54 pm

The log of chat at http://echelog.com/logs/browse/opensolaris/1280872800 (2010-08-04) includes a relevant comment –

[09:54:21] <Seemone> would a pool import without cache devices present? (b134)


– refers to a directory that seems to be unavailable at Oracle following the acquisition of Sun.

At http://web.archive.org/web/201009231339 ... /2010/292/ there's an archive, but (as noted with other archived OpenSolaris caselog directories) no content.

Cached by Google:

http://webcache.googleusercontent.com/s ... issing_log

http://webcache.googleusercontent.com/s ... rge.wilson

– that's log devices, not cache vdevs.

In the man page for zpool(8) with ZEVO Community Edition 1.1.1 under the zpool import subcommand:

Code:
           -m                 Allows  a pool to import when there is a missing
                              log device.
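
For example, to bring in a pool whose separate log device is missing (hypothetical pool name tank):

Code:
sh-3.2$ zpool import -m tank

I see no comparable flag for a missing cache device.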


Log devices aside: Tom's comments in the resilience topic make me now guess that with ZEVO:

  • dynamic import in the absence of a cache vdev is by design (a sketch of a possible routine follows).
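
If that's by design, then a tidier routine before disconnection might be to take the cache vdev out of the pool and re-add it afterwards with the standard subcommands. A sketch only – the device path is the one from my logs, and I have not yet tried this routine myself:

Code:
sh-3.2$ # remove the cache vdev from the pool before unplugging it
sh-3.2$ zpool remove zhandy /dev/dsk/GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6
sh-3.2$ # … disconnect, travel, reconnect …
sh-3.2$ zpool add zhandy cache /dev/dsk/GPTE_EC9A371E-C089-4E64-A8AA-F270CB9FB4B6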

Postscripts

Currently at http://en.wikipedia.org/wiki/ZFS#ZFS_ca ... ARC.2C_ZIL

… If the L2ARC device is lost, all reads will go out to the disks which slows down performance but nothing else will happen (no data will be lost).…


At Solaris™ 10 ZFS Essentials > Managing Storage Pools > ZFS Pool Concepts (p. 10, Safari Books Online):

… There is no redundancy support at this point for this vdev type. If there is a read error, then ZFS will read from the original storage pool. …


So. I removed the cache vdev for one of the pools from my MacBookPro5,2. If I wait long enough, I'll probably get a notification about the pool … yep:

2012-11-05 18-05-00 screenshot.png (notification from HardwareGrowler)

then after a while,

2012-11-05 18-05-37 screenshot.png (notification from ZEVO)

