Debug automount problem

All your general support questions for OpenZFS on OS X.

Debug automount problem

Postby sir_brickalot » Fri May 26, 2017 3:03 am

Hi there,
I updated from 1.5.2 to 1.6.1, and since then my pool no longer automounts after boot. I reinstalled ZFS from scratch, but that didn't help. Occasionally the automount works, but only rarely.

On top of that, the device names are now cryptic IDs instead of the names they showed before.

Could someone help me debug this problem?

Specs:
Hackintosh with OSX 10.11.6
OpenZFS v1.6.1

thanks
sir_brickalot
 
Posts: 23
Joined: Tue Apr 11, 2017 11:13 am

Re: Debug automount problem

Postby Brendon » Sat May 27, 2017 2:30 pm

More context required!

- Brendon
Brendon
 
Posts: 286
Joined: Thu Mar 06, 2014 12:51 pm

Re: Debug automount problem

Postby sir_brickalot » Mon May 29, 2017 1:59 am

Brendon wrote:More context required!

- Brendon

I'm trying. Here's the output of zpool status after reboot:
Code:
Restored session: Mo 29 Mai 2017 11:26:28 CEST
MAC:~ sir_brickalot$ sudo zpool status -v
Password:
no pools available
MAC:~ sir_brickalot$ sudo zpool import THEDRIVE
MAC:~ sir_brickalot$ sudo zpool status -v
  pool: THEDRIVE
 state: ONLINE
  scan: resilvered 1,04G in 0h23m with 0 errors on Fri May 26 16:27:26 2017
config:

   NAME                                            STATE     READ WRITE CKSUM
   THEDRIVE                                        ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-CAF53989-C08B-42B2-9630-20BB3Cxxxxxx  ONLINE       0     0     0
       media-242157B4-F704-456E-BCE4-DA5761xxxxxx  ONLINE       0     0     0

errors: No known data errors


Code:
MAC:~ sir_brickalot$ zpool list
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
THEDRIVE  1,81T  1,67T   142G         -    32%    92%  1.00x  ONLINE  -


Boot info from console:
Code:
29.05.17 11:27:23,000 kernel[0]: ZFS: Loading module ...
29.05.17 11:27:23,000 kernel[0]: ZFS: Loaded module v1.6.1-1, ZFS pool version 5000, ZFS filesystem version 5
29.05.17 11:27:23,000 kernel[0]: ZFS filesystem version: 5
29.05.17 11:27:23,000 kernel[0]: ZFS: hostid set to 6e3cfe58 from UUID '64BDA7E0-3638-5D5F-80A4-28B5xxxxxxxx'
29.05.17 11:29:23,000 kernel[0]: zfs_vfs_mount: fspec 'THEDRIVE' : mflag 0100 : optptr 0x7fff59ace270 : optlen 4096 : options
29.05.17 11:30:12,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 17809408
29.05.17 11:32:06,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 19259392
29.05.17 11:33:19,000 kernel[0]: ZFS: arc growtime expired
29.05.17 11:35:29,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 18923520
29.05.17 11:35:58,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 18923520
29.05.17 11:37:24,000 kernel[0]: ZFS: arc growtime expired
29.05.17 11:40:30,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 16777216
29.05.17 11:41:13,000 kernel[0]: ZFS: arc_reclaim_thread, (old_)to_free has returned to zero from 3297280
29.05.17 11:45:27,000 kernel[0]: ZFS: arc growtime expired

Mount info from console:
Code:
29.05.17 11:28:55,655 sudo[1043]: sir_brickalot : TTY=ttys002 ; PWD=/Users/sir_brickalot ; USER=root ; COMMAND=/usr/local/bin/zpool status -v
29.05.17 11:29:14,417 sudo[1046]: sir_brickalot : TTY=ttys002 ; PWD=/Users/sir_brickalot ; USER=root ; COMMAND=/usr/local/bin/zpool import THEDRIVE
29.05.17 11:29:24,018 sudo[1061]:     root : TTY=unknown ; PWD=/ ; USER=sir_brickalot ; COMMAND=/usr/bin/osascript -e display notification "zpool.cache file has been renamed" with title "config.sync"
29.05.17 11:29:24,222 zed[1064]: eid=4 class=zpool.import pool=THEDRIVE
29.05.17 11:29:31,761 sudo[1080]:     root : TTY=unknown ; PWD=/ ; USER=sir_brickalot ; COMMAND=/usr/bin/osascript -e display notification "zpool.cache file has been renamed" with title "config.sync"
29.05.17 11:29:32,451 sudo[1082]: sir_brickalot : TTY=ttys002 ; PWD=/Users/sir_brickalot ; USER=root ; COMMAND=/usr/local/bin/zpool status -v


More info: I use a mirrored 2 TB pool, each device encrypted with FileVault.
More info2: My pool creation settings:
Code:
sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive -O atime=off -O normalization=formD -O compression=lz4 -O utf8only=on ZFS_POOLNAME mirror 'diskX' 'diskY'

More info3: kext link
Code:
MAC:~ sir_brickalot$ ls -l /System/Library/Extensions/ |grep zfs
lrwxr-xr-x    1 root  wheel    28B 10 Apr 11:39 zfs.kext -> /Library/Extensions/zfs.kext

Does any of this help? Thank you.
sir_brickalot
 
Posts: 23
Joined: Tue Apr 11, 2017 11:13 am

Re: Debug automount problem

Postby abc123 » Mon May 29, 2017 2:55 am

sir_brickalot wrote:More info: I use a mirrored 2 TB pool, each device encrypted with FileVault


I believe there is a race between Finder unlocking the CoreStorage volumes and ZFS running its auto-mount routine. If the volumes haven't been unlocked by the time ZFS tries to auto-mount, the pool won't be mounted because the devices it needs aren't available.

As far as I can tell, ZFS only tries to auto-mount at boot time, not when devices become available later.

So I think this is just something you have to accept when using FileVault.
abc123
 
Posts: 63
Joined: Mon Jan 30, 2017 11:46 pm

Re: Debug automount problem

Postby sir_brickalot » Mon May 29, 2017 3:42 am

abc123 wrote:So I think this is just something you have to accept when using FileVault.

Good idea. I checked the CoreStorage logs [edit] and there might be a collision [/edit]:

Code:
$ cat /private/var/log/system.log |grep -iE 'corestorage|zpool|zfs'
May 29 11:27:23 localhost kernel[0]: ZFS: Loading module ...
May 29 11:27:23 localhost kernel[0]: ZFS: Loaded module v1.6.1-1, ZFS pool version 5000, ZFS filesystem version 5
May 29 11:27:23 localhost kernel[0]: ZFS filesystem version: 5
May 29 11:27:23 localhost kernel[0]: ZFS: hostid set to 6e3cfe58 from UUID '64BDA7E0-3638-5D5F-80A4-28B5xxxxxxxx'
May 29 11:27:23 localhost kernel[0]: CoreStorage: fsck_cs has finished for group "0364C057-C7CF-4333-97D9-2832xxxxxxxx" with status 0x00
May 29 11:27:23 localhost kernel[0]: CoreStorage: fsck_cs has finished for group "56CB43B7-C681-4D5F-AE27-68F1xxxxxxxx" with status 0x00
May 29 11:27:23 localhost kernel[0]: CoreStorageFamily::unlockVEKs(A60E9C1F-38B9-4ACB-9197-ED3Cxxxxxxxx) VEK unwrap failed. this is normal, except for the root volume.
May 29 11:27:23 localhost kernel[0]: CoreStorageFamily::unlockVEKs(FCE9E118-95D6-4C58-9E2E-C61Exxxxxxxx) VEK unwrap failed. this is normal, except for the root volume.
May 29 11:27:52 WIESELMAC kernel[0]: CoreStorageFamily::unlockVEKs(FCE9E118-95D6-4C58-9E2E-C61Exxxxxxxx) was successful.
May 29 11:27:53 WIESELMAC kernel[0]: CoreStorageFamily::unlockVEKs(A60E9C1F-38B9-4ACB-9197-ED3Cxxxxxxxx) was successful.


Did something change between 1.5.2 and 1.6.1? It worked nearly flawlessly with the old version; I only had auto-mount trouble while one of the drives was faulty.
Is there an option to delay the ZFS auto-mount, or any other way to automate this?

Thank you.
sir_brickalot
 
Posts: 23
Joined: Tue Apr 11, 2017 11:13 am

Re: Debug automount problem

Postby Sharko » Mon May 29, 2017 4:24 pm

I'm not sure it's related to the ZFS version; I'm still running 1.5.2, and my Mac Pro with two sets of 2 TB mirrored drives (each FileVault-encrypted) has never successfully automounted on startup. I've just accepted that I need to log into my admin user first, do the import, and then switch to my usual user account.

Kurt
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: Debug automount problem

Postby lundman » Mon May 29, 2017 4:44 pm

Could you put a sleep or something in the ZFS mount script to let CoreStorage unlock the volumes?
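A minimal sketch of that idea: instead of a fixed sleep, retry the import until it succeeds or a limit is hit. This assumes the import is driven by a script you control; the pool name, retry count, and delay below are placeholders, not the actual plugin script.

```shell
#!/bin/sh
# wait_for CMD...: retry CMD until it succeeds, sleeping DELAY seconds
# between attempts, giving up after MAX_TRIES attempts (defaults: 2s, 30).
wait_for() {
    tries=0
    until "$@" 2>/dev/null; do
        tries=$((tries + 1))
        [ "$tries" -ge "${MAX_TRIES:-30}" ] && return 1
        sleep "${DELAY:-2}"    # give FileVault time to unlock the volumes
    done
}

# Intended use (pool name assumed from the thread):
#   wait_for zpool import THEDRIVE
```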
lundman
 
Posts: 1335
Joined: Thu Mar 06, 2014 2:05 pm
Location: Tokyo, Japan

Re: Debug automount problem

Postby sir_brickalot » Tue May 30, 2017 3:11 am

What about the less important curiosity about the device names? This definitely changed between versions for me. The labels used to be something like '/dev/disk3s1'; now they are something like 'media-CAF53989-C08B-42B2-9630-20BB3Cxxxxxx'.

Why is that and how would I find out which drive actually died?
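For mapping the media-* names back to devices, one possible approach: diskutil can usually resolve a partition UUID to its current BSD device node. The helper name and the output parsing below are my own assumptions, not verified on this setup.

```shell
# Hypothetical helper: ask diskutil which /dev node currently backs a
# given partition UUID (one of the media-* names from `zpool status`,
# without the "media-" prefix).
device_for_uuid() {
    diskutil info "$1" | awk -F': *' '/Device Node/ { print $2 }'
}

# e.g. device_for_uuid CAF53989-C08B-42B2-9630-20BB3Cxxxxxx
```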
Last edited by sir_brickalot on Tue May 30, 2017 3:17 am, edited 1 time in total.
sir_brickalot
 
Posts: 23
Joined: Tue Apr 11, 2017 11:13 am

Re: Debug automount problem

Postby sir_brickalot » Tue May 30, 2017 3:16 am

lundman wrote:Could you put in a sleep or something in the ZFS mount script to let corestorage unlock the volumes?

This would be a workaround, not a solution. I have quite a few startup apps (or "Restored" apps) that want to access the not-yet-mounted drive. Maybe the delay could depend on the decryption status or something...
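Gating on decryption status rather than a fixed sleep might look roughly like this. The `diskutil cs list` invocation, its 'Unlocked' wording, and the UUID are all assumptions about 10.11's output, not verified:

```shell
# Hypothetical status-gated wait: poll a status command until its
# output matches a pattern, then proceed with the import.
wait_until() {
    # $1: command producing status text; $2: pattern meaning "ready"
    while ! eval "$1" | grep -q "$2"; do
        sleep "${DELAY:-2}"
    done
}

# Sketched use (UUID and output wording assumed):
#   wait_until 'diskutil cs list FCE9E118-95D6-4C58-9E2E-C61Exxxxxxxx' 'Unlocked'
#   zpool import THEDRIVE
```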

Example:
I use Thunderbird and keep all my mailboxes on the encrypted pool. After a restart, Thunderbird can't access the mailboxes and removes them from the overview. Luckily, restarting the app after importing the pool is enough to get them back.
sir_brickalot
 
Posts: 23
Joined: Tue Apr 11, 2017 11:13 am

Re: Debug automount problem

Postby abc123 » Tue May 30, 2017 4:26 am

sir_brickalot wrote:Example:
I use Thunderbird and keep all my mailboxes on the encrypted pool. After a restart, Thunderbird can't access the mailboxes and removes them from the overview. Luckily, restarting the app after importing the pool is enough to get them back.


I think this would actually need to be fixed on a per-app basis, not in ZFS. CoreStorage-encrypted volumes are mounted at some point after login. If you have apps auto-starting, there will always be a race between the volume mounting and the app starting and failing.

Dropbox suffers from the same issue.

Ideally, apps would be aware that their data may live on external volumes that are slow to mount, and would retry for a few seconds before bailing out and claiming the folder doesn't exist; in practice, they don't.
abc123
 
Posts: 63
Joined: Mon Jan 30, 2017 11:46 pm
