Errors noted with ZEVO after a test of pools with ZFS-OSX


Errors noted with ZEVO after a test of pools with ZFS-OSX

Post by grahamperrin » Fri Oct 04, 2013 8:22 pm

Part of this topic relates to my first follow-up under a topic in the MacZFS area:

  • OpenZFS ZFS-OSX (osx.zfs-20130929.dmg): testing

Briefly

In my excitement during and shortly after a test of three pools with ZFS-OSX, I carelessly overwrote a note that could have helped to explain the permanent errors noted on one of the pools after all three were returned to ZEVO. Two of the errors affect directories that have a com.apple.system.Security extended attribute (xattr) – an attribute that was sometimes discussed in IRC for MacZFS (though not in the MacZFS-devel discussion group).

The attribute can be used for ACLs, but because I was careless with my note (sorry), I can't say whether the ACL issue in ZFS-OSX contributed to the errors.
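
For anyone who wants to see whether a directory carries an ACL (and so, behind the scenes, that attribute), a minimal sketch – the path is purely illustrative:

Code:
# show ACL entries and extended attribute names for the directory itself
ls -led@ /Volumes/tall/some/directory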

For the affected pool, a rollback with ZEVO revealed what may be a bug in ZEVO Community Edition 1.1.1.

As far as I can tell there is no relationship between (a) the permanent errors affecting data and (b) the bug around rollback, but since both arose within a single sequence of events I'll present them in one topic.

Details

To follow …

Before, during and shortly after a test with ZFS-OSX

Post by grahamperrin » Sat Oct 05, 2013 2:13 am

For reference only

Whilst using three pools with ZEVO, in preparation for testing ZFS-OSX, I allowed a kernel panic (build 12F37 of OS X 10.8.5). A kernel core dump began but, since I don't expect anyone to analyse a full dump, I chose to force a shutdown.

With an external drive, I started pre-release build 13A584 of OS X 10.9 and made just a little use of ZFS-OSX with all three pools. As expected (following the panic), it was necessary to force the first imports. Terminal output was saved to a separate partition of the drive.
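
(For anyone following along: a forced import is just the ordinary import with -f – roughly the sketch below, using one of the pool names from this topic.)

Code:
sudo zpool import -f tall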

I returned to Mountain Lion with ZEVO and repartitioned the external drive, preparing for the next phase of ZFS-OSX testing.

Around 20:10 on the evening of Sunday 2013-09-29 I realised that the pool named 'tall', to which I normally back up with Time Machine, was UNAVAIL (insufficient replicas):

Code:
gpes3e-gjp4:2013-06 gjp22$ date
Sun 29 Sep 2013 20:10:46 BST
gpes3e-gjp4:2013-06 gjp22$ zpool status
  pool: gjp22
 state: ONLINE
 scan: scrub repaired 0 in 20h52m with 0 errors on Sun Sep 22 18:42:02 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   gjp22                                        ONLINE       0     0     0
     GPTE_71B8BDA2-3EBA-4B91-9E1C-2AE2B1DAAD06  ONLINE       0     0     0  at disk9s2
   cache
     GPTE_4DE8B2ED-797A-407B-A11B-C51B96DBD4CB  OFFLINE      0     0     0

errors: No known data errors

  pool: tall
 state: UNAVAIL
status: One or more devices are faulted in response to persistent errors.  There are insufficient replicas for the pool to
   continue functioning.
action: Destroy and re-create the pool from a backup source.  Manually marking the device
   repaired using 'zpool clear' may allow some data to be recovered.
 scan: scrub repaired 0 in 76h17m with 0 errors on Sat Sep 21 21:45:31 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         UNAVAIL      0     0     0  insufficient replicas
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk3s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  FAULTED      1     1     0  too many errors

errors: 69 data errors, use '-v' for a list

  pool: zhandy
 state: ONLINE
 scan: scrub repaired 0 in 17h10m with 0 errors on Tue Sep 17 11:23:07 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   zhandy                                       ONLINE       0     0     0
     GPTE_A54431D5-B46F-44A9-83B4-76802A584C6E  ONLINE       0     0     0  at disk4s2

errors: No known data errors
gpes3e-gjp4:2013-06 gjp22$ sudo zpol status -v tall
sudo: zpol: command not found
gpes3e-gjp4:2013-06 gjp22$ sudo zpool status -v tall
  pool: tall
 state: UNAVAIL
status: One or more devices are faulted in response to persistent errors.  There are insufficient replicas for the pool to
   continue functioning.
action: Destroy and re-create the pool from a backup source.  Manually marking the device
   repaired using 'zpool clear' may allow some data to be recovered.
 scan: scrub repaired 0 in 76h17m with 0 errors on Sat Sep 21 21:45:31 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         UNAVAIL      0     0     0  insufficient replicas
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk3s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  FAULTED      1     1     0  too many errors

errors: Permanent errors have been detected in the following files:

        <0x3a30e>:<0x217f7>
        tall:/.fseventsd/fc007477ca14dce6
        <0x3a254>:<0x217f7>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/6e0
        tall/com.apple.backupd:<0x5019>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/6e4
        tall/com.apple.backupd:/.DS_Store
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/611
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/62d
        tall/com.apple.backupd:<0x20073>
        tall/com.apple.backupd:<0x20074>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/650
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/665
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/2216
        tall/com.apple.backupd:/.fseventsd/fc007477ca3db06b
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/772
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/77c
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/8a7
        <0x3a3b9>:<0x217f7>
        tall/com.bombich.ccc:/.fseventsd/fc007477ca14ea25
        tall/com.bombich.ccc:/OS/Applications/additions/p/Paparazzi!/Paparazzi!.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/OS/Applications/Cyberduck.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/OS/Applications/Mail.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/.Spotlight-V100/Store-V2/F7F59C98-AD5A-47E2-B7BE-09B8FEDC2705/.store.db
gpes3e-gjp4:2013-06 gjp22$ clear


… I'll not post my notes in their entirety, but ultimately I could not export the affected pool (a known issue with ZEVO in situations such as this) –

Code:
gpes3e-gjp4:2013-06 gjp22$ date
Sun 29 Sep 2013 20:19:25 BST
gpes3e-gjp4:2013-06 gjp22$ zfs mount
tall                            /Volumes/tall
tall/com.apple.backupd          /Volumes/tall/com.apple.backupd
gjp22/casesensitive             /Volumes/casesensitive
gjp22                           /Volumes/gjp22
gjp22/opt                       /opt
gpes3e-gjp4:2013-06 gjp22$ zfs unmount -f tall/com.apple.backupd
gpes3e-gjp4:2013-06 gjp22$ zpool export tall
load: 6.41  cmd: zpool 3135 uninterruptible 0.00u 0.01s
load: 6.41  cmd: zpool 3135 uninterruptible 0.00u 0.01s
^Cload: 7.32  cmd: zpool 3135 uninterruptible 0.00u 0.01s


– so the session, which began with a verbose boot, probably ended with a forced shutdown (or forced restart) when it became clear that shutdown could not complete normally.
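
For reference, the recovery step suggested by the status output would look something like the sketch below; I can't say for certain that this alone is what brought the pool back the next morning.

Code:
sudo zpool clear tall
sudo zpool status -v tall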

Now – looking closely at that first list of permanent errors – I'm almost certain that:

  • the situation simply arose from me using something other than the usual bus for the pool
  • prior use of ZFS-OSX was not contributory.

There's more … 

Pretty much ruling out an issue with ZFS-OSX

Post by grahamperrin » Sat Oct 05, 2013 2:22 am

The morning after:

Code:
gpes3e-gjp4:~ gjp22$ date
Mon 30 Sep 2013 07:25:14 BST
gpes3e-gjp4:~ gjp22$ sudo zpool status -v tall
Password:
  pool: tall
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
 scan: scrub repaired 0 in 76h17m with 0 errors on Sat Sep 21 21:45:31 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         ONLINE       0     0     0
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk5s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  ONLINE       0     0     0  at disk4s2

errors: Permanent errors have been detected in the following files:

        <0x3a30e>:<0x217f7>
        tall:/.Spotlight-V100/Store-V2/0FA64E7A-A824-4C59-A4B2-1320B32ACB9D/permStore
        tall:/.fseventsd/fc007477ca14dce6
        tall:/Users/gjp22/VirtualBox/Machines/OpenIndiana/Snapshots/<xattrdir>/com.apple.system.Security
        tall:/.DS_Store
        tall:/Users/gjp22/VirtualBox/Machines/Oracle/sol-11-exp-201011-live-x86.iso/<xattrdir>/com.apple.system.Security
        <0x3a254>:<0x217f7>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/Info.plist
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/6e0
        tall/com.apple.backupd:<0x5019>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/6e4
        tall/com.apple.backupd:/.DS_Store
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/abf
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/23
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/611
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/624
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/62d
        tall/com.apple.backupd:<0x20073>
        tall/com.apple.backupd:<0x20074>
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/650
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/1da
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/665
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/2216
        tall/com.apple.backupd:/.fseventsd/fc007477ca3db06b
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/772
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/77c
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/68f
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/692
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/8a7
        <0x3a3b9>:<0x217f7>
        tall/com.bombich.ccc:/.fseventsd/fc007477ca14ea25
        tall/com.bombich.ccc:/OS/Applications/additions/p/Paparazzi!/Paparazzi!.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/OS/Applications/Cyberduck.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/OS/Applications/Mail.app/Contents/Resources/en.lproj/ServicesMenu.strings
        tall/com.bombich.ccc:/.Spotlight-V100/Store-V2/F7F59C98-AD5A-47E2-B7BE-09B8FEDC2705/.store.db
        tall/com.bombich.ccc:/.Spotlight-V100/Store-V2/F7F59C98-AD5A-47E2-B7BE-09B8FEDC2705/permStore
gpes3e-gjp4:~ gjp22$ clear


Knowing that com.apple.system.Security had been discussed in connection with ZFS-OSX issue 44, I found two of the errors remarkable –

  • tall:/Users/gjp22/VirtualBox/Machines/OpenIndiana/Snapshots/<xattrdir>/com.apple.system.Security
  • tall:/Users/gjp22/VirtualBox/Machines/Oracle/sol-11-exp-201011-live-x86.iso/<xattrdir>/com.apple.system.Security

– so for a few days I wondered about ZFS-OSX. In retrospect, though, I see that those two were not amongst the first list of errors, so with the information that's available I'm happy to assume that all of the above was PEBKAM – my unusual choice of bus for the pool – and not an issue with ZFS-OSX.

There's more …

An issue following a rollback with ZEVO

Post by grahamperrin » Sat Oct 05, 2013 2:31 am

An export of the pool where I wished to roll back a file system:

Code:
gpes3e-gjp4:~ gjp22$ date
Fri  4 Oct 2013 02:48:44 BST
gpes3e-gjp4:~ gjp22$ zpool export tall
gpes3e-gjp4:~ gjp22$ sudo zpool import -FnX tall
Password:
Would be able to return tall to its state as of Fri  4 Oct 02:49:01 2013.
gpes3e-gjp4:~ gjp22$ clear
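
(For anyone unfamiliar with those flags: -F asks the import to recover by discarding the last few transactions if necessary, -n makes it a dry run, and -X permits a more extreme rewind. Had recovery actually been needed, the real thing would have been roughly the sketch below – in the event, I did not go through with it.)

Code:
sudo zpool import -FX tall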


The rollback, and more:

Code:
gpes3e-gjp4:~ gjp22$ date
Fri  4 Oct 2013 02:58:31 BST
gpes3e-gjp4:~ gjp22$ zfs rollback tall@2013-09-27-081037
cannot rollback to 'tall@2013-09-27-081037': more recent snapshots exist
use '-r' to force deletion of the following snapshots:
tall@2013-09-29-190323
tall@2013-09-29-200319
gpes3e-gjp4:~ gjp22$ zfs rollback -r tall@2013-09-27-081037
gpes3e-gjp4:~ gjp22$ clear





gpes3e-gjp4:~ gjp22$ zpool status -v tall
  pool: tall
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
 scan: scrub repaired 0 in 70h5m with 0 errors on Thu Oct  3 21:25:32 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         ONLINE       0     0     0
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk4s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  ONLINE       0     0     0  at disk5s2

errors: List of errors unavailable (insufficient privileges)
gpes3e-gjp4:~ gjp22$ sudo zpool status -v tall
  pool: tall
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
 scan: scrub repaired 0 in 70h5m with 0 errors on Thu Oct  3 21:25:32 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         ONLINE       0     0     0
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk4s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  ONLINE       0     0     0  at disk5s2

errors: Permanent errors have been detected in the following files:

        <0x3a30e>:<0x217f7>
        tall:/.Spotlight-V100/Store-V2/0FA64E7A-A824-4C59-A4B2-1320B32ACB9D/permStore
        tall:/Users/gjp22/VirtualBox/Machines/OpenIndiana/Snapshots/<xattrdir>/com.apple.system.Security
        tall:/.DS_Store
        tall:/Users/gjp22/VirtualBox/Machines/Oracle/sol-11-exp-201011-live-x86.iso/<xattrdir>/com.apple.system.Security
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/Info.plist
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/6e4
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/abf
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/23
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/624
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/62d
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/650
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/1da
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/665
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/68f
        tall/com.apple.backupd:/gpes3e-gjp4.sparsebundle/bands/692
        <0x3a3b9>:<0x217f7>
        tall/com.bombich.ccc:/.Spotlight-V100/Store-V2/F7F59C98-AD5A-47E2-B7BE-09B8FEDC2705/.store.db
        tall/com.bombich.ccc:/.Spotlight-V100/Store-V2/F7F59C98-AD5A-47E2-B7BE-09B8FEDC2705/permStore
gpes3e-gjp4:~ gjp22$ clear





gpes3e-gjp4:~ gjp22$ date
Fri  4 Oct 2013 03:07:09 BST
gpes3e-gjp4:~ gjp22$ zfs mount
gjp22/casesensitive             /Volumes/casesensitive
gjp22                           /Volumes/gjp22
gjp22/opt                       /opt
tall                            /Volumes/tall
tall                            /Volumes/tall/com.apple.backupd
tall                            /Volumes/tall/com.bombich.ccc
gpes3e-gjp4:~ gjp22$ mount | grep zfs
/dev/disk3s1 on /Volumes/casesensitive (zfs, local, journaled, noatime)
/dev/disk3 on /Volumes/gjp22 (zfs, local, journaled, noatime)
/dev/disk3s2 on /opt (zfs, local, journaled, noatime)
/dev/disk7 on /Volumes/tall (zfs, local, journaled, noatime)
/dev/disk7s1 on /Volumes/tall/com.apple.backupd (zfs, local, journaled, noatime)
/dev/disk7s2 on /Volumes/tall/com.bombich.ccc (zfs, local, journaled, noatime)
gpes3e-gjp4:~ gjp22$ zpool export tall
Unmount failed for /Volumes/tall/com.bombich.ccc
cannot unmount '/Volumes/tall/com.bombich.ccc': Invalid argument
gpes3e-gjp4:~ gjp22$ clear





gpes3e-gjp4:~ gjp22$ sudo lsof /Volumes/tall/com.bombich.ccc/
Password:


lsof found nothing open – nothing to explain why I could not unmount /Volumes/tall/com.bombich.ccc/.
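
The only further step I can think of is a forced unmount before retrying the export – a sketch, with no guarantee that it succeeds in a situation like this:

Code:
sudo zfs unmount -f /Volumes/tall/com.bombich.ccc
zpool export tall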

Weirdly, the dataset that was rolled back (tall) now appears on three lines:

Code:
gpes3e-gjp4:~ gjp22$ zfs mount
gjp22/casesensitive             /Volumes/casesensitive
gjp22                           /Volumes/gjp22
gjp22/opt                       /opt
tall                            /Volumes/tall
tall                            /Volumes/tall/com.apple.backupd
tall                            /Volumes/tall/com.bombich.ccc
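
For comparison, before the rollback the same command listed the child file systems by their own dataset names – roughly like this (reconstructed, so treat it as illustrative):

Code:
tall                            /Volumes/tall
tall/com.apple.backupd          /Volumes/tall/com.apple.backupd
tall/com.bombich.ccc            /Volumes/tall/com.bombich.ccc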


If I'm not mistaken, that's a bug with ZEVO. To avoid confusion with the discussion of ZFS-OSX, at this point I'll spin off to a separate topic:


Re: Errors noted with ZEVO after a test of pools with ZFS-OSX

Post by d.jacobs » Mon Oct 21, 2013 6:32 pm

At least you're having better luck than I am.

I poked my toes into the ZFS-OSX installation. It mounted the pool, then froze all disk IO, forcing me to reboot the system with cmd-ctrl-power. The same thing happened the next time I loaded the modules, so I reloaded Community Edition 1.1.1. Now it won't mount the pool at all.

As a user:

Code:
zpool list
failed to read pool configuration: permission denied
no pools available

As root:

Code:
zpool list
internal error: Invalid argument
Abort trap: 6

That generates a crash log. This is the interesting bit:

Code:
Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0 libsystem_kernel.dylib 0x00007fff8c9e4d46 __kill + 10
1 libsystem_c.dylib 0x00007fff85377f83 abort + 177
2 libzfs.dylib 0x00000001000209a2 zfs_verror + 206
3 libzfs.dylib 0x0000000100020c17 zfs_standard_error_fmt + 588
4 libzfs.dylib 0x00000001000209c4 zfs_standard_error + 24
5 libzfs.dylib 0x000000010003ec2c namespace_reload + 323
6 libzfs.dylib 0x000000010003ea54 zpool_iter + 42
7 libzfs.dylib 0x000000010003795d zpool_search_import + 51
8 zpool 0x00000001000069bd zpool_do_import + 1658
9 zpool 0x0000000100004267 main + 383
10 zpool 0x000000010000174c start + 52

So, I'm assuming that this has become a one-way journey to ZFS-OSX as it has apparently done something to the disk labels, yet zdb doesn't seem to have an issue. I'm not sure I built it at all correctly either, but that's more of a question for over there I expect.
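
(For reference, checking the labels looks something like the sketch below – the device node is whatever the pool's data partition happens to be on your system.)

Code:
sudo zdb -l /dev/disk4s2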

Crash of ZEVO following tests with ZFS-OSX

Post by grahamperrin » Mon Oct 21, 2013 7:42 pm

d.jacobs wrote:… ZFS-OSX … I loaded the modules … I reloaded Community edition 1.1.1 …


Hi

Did you manually restart the Mac following uninstallation of ZEVO, before testing ZFS-OSX?

Did you manually remove ZFS-OSX before performing the reinstallation of ZEVO?

Did you accept the invitation to restart after installation of 1.1, before the update to 1.1.1?
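
(A quick way to check which ZFS kernel extensions are loaded at any given moment is the sketch below; the exact bundle identifiers differ between ZEVO and ZFS-OSX.)

Code:
kextstat | grep -i zfs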

With assertions enabled (as they are in ZEVO Community Edition 1.1.1), if there's corruption I might expect a kernel panic rather than a discrete crash whilst the OS continues to run.

For what it's worth: results from my most recent ZFS-OSX test case were indicative of a corrupt ZIL, but I shouldn't jump to any conclusion for your case.

Do you have a record of your pool configuration from before the issue?

Thanks

Re: Errors noted with ZEVO after a test of pools with ZFS-OSX

Post by lundman » Tue Oct 22, 2013 7:21 pm

This is curious – and during iteration, as well. I wonder whether our new pool/dataset properties make ZEVO fail (due to unknowns). We will start ZEVO compatibility testing again, to make sure we are clear.

We have addressed some of the IO hangs this week, so I am curious whether your compile is up to date. Also, use clang rather than gcc, as the latter will not work (llvm-gcc will, though).

Re: Errors noted with ZEVO after a test of pools with ZFS-OSX

Post by ilovezfs » Tue Oct 22, 2013 7:53 pm

lundman wrote:This is curious – and during iteration, as well. I wonder whether our new pool/dataset properties make ZEVO fail (due to unknowns). We will start ZEVO compatibility testing again, to make sure we are clear.

We have addressed some of the IO hangs this week, so I am curious whether your compile is up to date. Also, use clang rather than gcc, as the latter will not work (llvm-gcc will, though).

So I just did some round-trip testing of a ZEVO pool: from ZEVO, to the new MacZFS (GitHub repository ZFS-OSX, https://github.com/zfs-osx/zfs), and back to ZEVO. There was no issue at all: ZEVO was able to read new data written by the new MacZFS to an existing ZEVO dataset, as well as new data on a new dataset created by the new MacZFS. So I would rule out the possibility of any sort of problem with pool/dataset properties making ZEVO fail.

More likely, you ran into one of the IO hangs fixed this week, so it would be helpful if you could try the new MacZFS again. Please join us in IRC (#mac-zfs) so that we can troubleshoot if you run into the same IO hang again, or something else.

Also, you should definitely heed grahamperrin's suggestion that you may have reinstalled ZEVO incorrectly.

Keep in mind, you must uninstall MacZFS completely for ZEVO to work, and you must uninstall ZEVO completely for MacZFS to work.

d.jacobs wrote:So, I'm assuming that this has become a one-way journey to ZFS-OSX as it has apparently done something to the disk labels, yet zdb doesn't seem to have an issue.

Using pool version 28 on both, there is no reason it should be a "one-way journey."
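
(If in doubt, the pool version can be confirmed with something like the sketch below, substituting your own pool's name.)

Code:
zpool get version tall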

d.jacobs wrote:I'm not sure I built it at all correctly either, but that's more of a question for over there I expect.

Good instructions here: http://zerobsd.tumblr.com/post/62586498 ... x-with-zfs

Clarification

Post by grahamperrin » Tue Oct 22, 2013 8:05 pm

ilovezfs wrote:… grahamperrin's suggestion that you may have reinstalled ZEVO incorrectly. …


Not quite. I thought more of uninstallations.

(Long ago, after one of my earliest tests of ZFS-OSX (on Mavericks) I realised that I had not uninstalled ZEVO prior to experimenting with ZFS-OSX. The ZEVO KEXTs were naturally not loaded but there may have been inadvertent use of one of the ZEVO binaries (zfs or zpool, on PATH) whilst the ZFS-OSX KEXTs were loaded.)

Re: Clarification

Post by ilovezfs » Tue Oct 22, 2013 8:08 pm

grahamperrin wrote:
ilovezfs wrote:… grahamperrin's suggestion that you may have reinstalled ZEVO incorrectly. …


Not quite. I thought more of uninstallations.

(Long ago, after one of my earliest tests of ZFS-OSX (on Mavericks) I realised that I had not uninstalled ZEVO prior to experimenting with ZFS-OSX. The ZEVO KEXTs were naturally not loaded but there may have been inadvertent use of one of the ZEVO binaries (zfs or zpool, on PATH) whilst the ZFS-OSX KEXTs were loaded.)

Yes, exactly. (Re)installing ZEVO requires uninstalling MacZFS correctly and (re)installing MacZFS requires uninstalling ZEVO correctly. Also with ZEVO you must do the install-reboot-upgrade-reboot dance correctly.
