March 14th build fails to mount encrypted filesystem


Re: March 14th build fails to mount encrypted filesystem

Postby FadingIntoBlue » Thu Apr 08, 2021 5:49 pm

lundman wrote:I have posted zfs-2.0.0rc2-3-gbaec3413df


So I've upgraded the pool, and it mounts fine using the above pkg.
I'm still getting the testpool:<0x0> error when checking status.
The installer reports the zfs version noted at the end; I've installed twice, so is that the right one?

Code: Select all
% df
/dev/disk4s1      1669832      2264    1667568     1%      90     1667568    0%   /Volumes/testpool
testpool/safe     1700136     32568    1667568     2%      95     1667568    0%   /Volumes/testpool/safe

% sudo zpool status -v
  pool: testpool
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
config:

   NAME                                    STATE     READ WRITE CKSUM
   testpool                                ONLINE       0     0     0
     /Users/henryh/Downloads/filepool.img  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        testpool:<0x0>

% zfs version
zfs-2.0.0-rc2
zfs-kmod-zfs-2.0.0-rc1-442-g816946801e
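
For my own notes, this is roughly how I plan to chase the <0x0> entry once the mount question is settled. As I understand it the record only goes away after the affected data has been freed or repaired and a later scrub comes back clean, so treat this as a sketch rather than a known fix:

Code: Select all
# Re-check which objects are flagged (-v lists affected files/objects)
sudo zpool status -v testpool

# Clear the error counters, then scrub so every block gets re-verified
sudo zpool clear testpool
sudo zpool scrub testpool

# Check again once the scrub finishes; the <0x0> entry should only drop
# off the "Permanent errors" list if nothing is actually damaged
sudo zpool status -v testpool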

Re: March 14th build fails to mount encrypted filesystem

Postby lundman » Thu Apr 08, 2021 8:19 pm

Installed on my BS VM:

zfs-2.0.0-rc2
zfs-kmod-zfs-2.0.0rc2-3-gbaec3413df

So perhaps do the trigger-panic-medic dance to make sure the old kext is gone before installing.
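
Something like this should show whether the old kext is still the one loaded (the bundle ID below is from memory, so double-check it against what the pkg installs):

Code: Select all
# What the userland tools report
zfs version

# What the kernel actually has loaded (Catalina and earlier)
kextstat | grep -i openzfs

# On Big Sur, kmutil is the replacement for kextstat
kmutil showloaded | grep -i openzfs

# If the old version is still loaded: export any pools, then unload it
# (or just reboot) before installing the new pkg
sudo zpool export -a
sudo kextunload -b org.openzfsonosx.zfs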

Re: March 14th build fails to mount encrypted filesystem

Postby theit » Fri Apr 09, 2021 2:01 am

lundman wrote:rc2 is for Big Sur, I believe. Check the values of the feature flags

These are the actual values:
Code: Select all
~ % zpool get all zroot 
NAME   PROPERTY                       VALUE                          SOURCE
zroot  size                           3,62T                          -
zroot  capacity                       86%                            -
zroot  altroot                        -                              default
zroot  health                         ONLINE                         -
zroot  guid                           16271452840368492864           default
zroot  version                        -                              default
zroot  bootfs                         -                              default
zroot  delegation                     on                             default
zroot  autoreplace                    off                            default
zroot  cachefile                      -                              default
zroot  failmode                       wait                           default
zroot  listsnapshots                  off                            default
zroot  autoexpand                     off                            default
zroot  dedupratio                     1.00x                          -
zroot  free                           512G                           -
zroot  allocated                      3,12T                          -
zroot  readonly                       off                            -
zroot  ashift                         12                             local
zroot  comment                        -                              default
zroot  expandsize                     -                              -
zroot  freeing                        0                              default
zroot  fragmentation                  13%                            -
zroot  leaked                         0                              default
zroot  checkpoint                     -                              -
zroot  multihost                      off                            default
zroot  autotrim                       off                            default
zroot  feature@async_destroy          enabled                        local
zroot  feature@empty_bpobj            active                         local
zroot  feature@lz4_compress           active                         local
zroot  feature@multi_vdev_crash_dump  enabled                        local
zroot  feature@spacemap_histogram     active                         local
zroot  feature@enabled_txg            active                         local
zroot  feature@hole_birth             active                         local
zroot  feature@extensible_dataset     active                         local
zroot  feature@embedded_data          active                         local
zroot  feature@bookmarks              enabled                        local
zroot  feature@filesystem_limits      enabled                        local
zroot  feature@large_blocks           enabled                        local
zroot  feature@large_dnode            enabled                        local
zroot  feature@sha512                 enabled                        local
zroot  feature@skein                  enabled                        local
zroot  feature@edonr                  enabled                        local
zroot  feature@encryption             active                         local
zroot  feature@device_removal         enabled                        local
zroot  feature@obsolete_counts        enabled                        local
zroot  feature@zpool_checkpoint       enabled                        local
zroot  feature@spacemap_v2            active                         local
zroot  feature@allocation_classes     enabled                        local
zroot  feature@bookmark_v2            enabled                        local
zroot  feature@resilver_defer         enabled                        local
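
As a side note, rather than scanning the whole zpool get all output, the encryption-related flags can be queried directly; zpool get accepts a comma-separated property list, and per zpool-features(5) encryption depends on extensible_dataset and bookmark_v2 (sketch below, same pool name):

Code: Select all
# encryption needs extensible_dataset and bookmark_v2 at least "enabled"
zpool get feature@encryption,feature@extensible_dataset,feature@bookmark_v2 zroot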


What puzzles me is the following output under Big Sur:
Code: Select all
~ % sudo zpool status -v
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: resilvered 36K in 00:00:02 with 0 errors on Fri Apr  9 11:09:31 2021
config:

   NAME                                            STATE     READ WRITE CKSUM
   zroot                                           ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-7242F2F5-6C89-1346-80BF-8476478B3A58  ONLINE       0     0     0
       media-32C12AE0-135F-9242-8BD1-76595BB1FB18  ONLINE       0     0     0
       media-42CB6F36-4B54-5C41-AE87-0705578B65B0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        zroot/daten:<0x0>
~ %


I don't know where this error comes from or what exactly I did to make it appear... I'm now back on Catalina and have started scrubbing the whole pool (unless you have a better idea of how to get rid of this error ;) )

Additionally, what I find a bit strange is that when I was on Big Sur, after a "sudo zpool export -a" followed by a "sudo zpool import -la" without unplugging the drives, "zpool status -v" showed slightly different device names:
Code: Select all
...
   NAME                                             STATE     READ WRITE CKSUM
   zroot                                            ONLINE       0     0     0
     mirror-0                                       ONLINE       0     0     0
       My_Book_1140-504C313332314C4147334C4E5248:1  ONLINE       0     0     0
       PCI0@0-XHC1@14-@8:1                          ONLINE       0     0     0
       PCI0@0-XHC1@14-@b:1                          ONLINE       0     0     0
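
My guess is the names just reflect whichever symlink directory the import scan found the devices under first. A sketch of pinning the import to the by-id symlinks, which should bring back the stable media-<GUID> names (the by-id directory path is where the symlinks live on my machine; adjust if yours differ):

Code: Select all
# Export, then restrict the device scan to the by-id symlinks so the
# vdevs are recorded under their stable media-<GUID> names
sudo zpool export zroot
sudo zpool import -d /private/var/run/disk/by-id -l zroot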


Back on Catalina, the output is once more slightly different:
Code: Select all
~ % zpool status -v                           
  pool: zroot
 state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
   see: http://zfsonlinux.org/msg/ZFS-8000-8A
  scan: scrub in progress since Fri Apr  9 11:22:44 2021
   426G scanned at 213M/s, 167G issued at 83,3M/s, 3,12T total
   0 repaired, 5,21% done, 0 days 10:21:19 to go
config:

   NAME                                            STATE     READ WRITE CKSUM
   zroot                                           ONLINE       0     0     0
     mirror-0                                      ONLINE       0     0     0
       media-7242F2F5-6C89-1346-80BF-8476478B3A58  ONLINE       0     0     0
       PCI0@0-XHC1@14-@5:1                         ONLINE       0     0     0
       media-42CB6F36-4B54-5C41-AE87-0705578B65B0  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        zroot/daten:<0x0>

Do you have any explanation for this?

Re: March 14th build fails to mount encrypted filesystem

Postby lundman » Fri Apr 09, 2021 3:34 am

OK, but you said your version had "zfs-kmod-zfs-2.0.0-rc1-442-g816946801e".

That is still rc1's kernel. Make sure you get "zfs-kmod-zfs-2.0.0rc2-3-gbaec3413df"

Re: March 14th build fails to mount encrypted filesystem

Postby theit » Fri Apr 09, 2021 11:16 am

Damn, I thought I was using rc2, but indeed I had rc1 installed. Anyway, even with rc2, importing and mounting my pool still results in the same error:
Code: Select all
thorsten@Thorstens-MBP ~ % zfs --version       
zfs-2.0.0-rc2
zfs-kmod-zfs-2.0.0rc2-3-gbaec3413df
thorsten@Thorstens-MBP ~ % sudo zpool import -la                                   
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/media-6B00BFB7-8D17-AB46-85A6-A008A5539E83)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/media-5DAB017F-01AA-8145-8D0C-AE66858F61EF)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/media-704F36FB-DA19-4BF1-9F89-4B0B17273D3C)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/media-C82AA85F-548D-4821-960C-9FCEA2B083D0)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/media-7242F2F5-6C89-1346-80BF-8476478B3A58)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/volume-C82AA85F-548D-4821-960C-9FCEA2B083D0)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-SATA@1F,2-PRT0@0-PMP@0-@0:0)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-SATA@1F,2-PRT0@0-PMP@0-@0:1)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-id/volume-1FB448AE-CF91-4E39-9676-B3A21E62EA24)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-SATA@1F,2-PRT0@0-PMP@0-@0:2)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@4:0)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@3:9)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@4:1)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@4:9)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-serial/APPLE_SSD_SM512E-S118NYACB06301)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-serial/APPLE_SSD_SM512E-S118NYACB06301:1)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@6:9)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-path/PCI0@0-XHC1@14-@6:1)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-serial/APPLE_SSD_SM512E-S118NYACB06301:2)
zpool_open_func: zpool_read_label returned error -1 (errno: 0 name: /private/var/run/disk/by-serial/Portable-NAA510QQ:9)
Enter passphrase for 'zroot':
1 / 1 keys successfully loaded
cannot mount 'zroot/daten': Unknown error: -1
thorsten@Thorstens-MBP ~ %
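
In case it helps with debugging, these are the next things I plan to look at (just a sketch; I don't know yet whether the unified log actually captures anything useful from the kext):

Code: Select all
# Confirm the key really loaded and how the dataset is configured
zfs get encryption,keystatus,keyformat,canmount,mounted,mountpoint zroot/daten

# Try the mount on its own, loading the key in the same step
sudo zfs mount -l zroot/daten

# Crude look for kernel-side messages around the failure
log show --last 5m | grep -i zfs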

Re: March 14th build fails to mount encrypted filesystem

Postby glessard » Fri Apr 09, 2021 11:35 am

Being leery of upgrading my pool just yet, I booted back into a Catalina install that has 1.9.4 and created an encrypted filesystem there:
% sudo zfs create -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase Stuff/Test
(The top level filesystem is unencrypted.)

Upon rebooting into Big Sur with zfs-2.0.0rc2-3-gbaec3413df, it still failed to mount. I can load the key, but:
% sudo zfs mount Stuff/Test
Password:
cannot mount 'Stuff/Test': Unknown error: -1

To top it off, I twice got a kernel panic about 10 minutes after booting up.

Will I have to upgrade my pool to make this work?
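
Before committing to anything, this is how I intend to check whether the pool is even missing a feature the new build wants (a sketch only; Stuff is my pool, and the actual upgrade line is commented out on purpose):

Code: Select all
# With no arguments, zpool upgrade lists pools that do not have all
# supported features enabled, and which features are missing
zpool upgrade

# Just the encryption-related flags on this pool
zpool get feature@encryption,feature@bookmark_v2,feature@extensible_dataset Stuff

# Only once I'm ready to commit -- this may leave the pool unusable
# on older releases:
# sudo zpool upgrade Stuff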

Re: March 14th build fails to mount encrypted filesystem

Postby lundman » Fri Apr 09, 2021 4:05 pm

And did you guys remember to enable the features / upgrade the pool so encryption can mount?

We would need the panic reports, in case it is something serious.
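
Panic reports usually land as .panic files under /Library/Logs/DiagnosticReports (that's the stock macOS location; I'd expect Big Sur to keep them in the same place, but check). Something like this should grab the newest one to attach here:

Code: Select all
# List kernel panic reports, newest first
ls -lt /Library/Logs/DiagnosticReports/*.panic

# Copy the most recent one somewhere easy to attach
sudo cp "$(ls -t /Library/Logs/DiagnosticReports/*.panic | head -n 1)" ~/Desktop/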

Re: March 14th build fails to mount encrypted filesystem

Postby FadingIntoBlue » Fri Apr 09, 2021 4:16 pm

lundman wrote:And did you guys remember to enable the features / upgrade the pool so encryption can mount?


I've upgraded the testpool and confirmed that it will no longer mount under 1.9.4.
I'm in the process of moving out and packing for Monday, so I will try to get the right rc3 kexts installed in downtime between now and then, and will let you know when I've managed it.

Re: March 14th build fails to mount encrypted filesystem

Postby glessard » Sat Apr 10, 2021 11:28 am

I'm testing the case where the encrypted filesystem was created (and works) under 1.9.4; it did not mount with the gbaec3413df build. Is there a feature I should enable under 1.9.4 to make it work with gbaec3413df? (Encryption is active, so that's not it.)

Re: March 14th build fails to mount encrypted filesystem

Postby theit » Sun Apr 11, 2021 2:13 am

lundman wrote:And did you guys remember to enable the features / upgrade the pool so encryption can mount?

We would need the panic reports, in case it is something serious.

On Catalina with ZFS 1.9.4 everything is enabled. As far as I understand it, upgrading the pool under Big Sur with 2.0rc2 and enabling the new features will render the pool unusable under Catalina, so I can't actually do this...
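
If it comes to that, my reading of zpool-features(5) is that features can also be enabled one at a time with zpool set rather than a blanket zpool upgrade, so only what 2.0 actually needs gets switched on; whether 1.9.4 could still import the pool afterwards depends on which of them ever become active, and I haven't tried it, so treat this purely as a sketch:

Code: Select all
# List the features a full "zpool upgrade" would enable on this pool
zpool upgrade

# Enable a single feature instead of everything at once
# (the feature name below is only an example)
sudo zpool set feature@zstd_compress=enabled zroot

# Check which flags are merely "enabled" vs. actually "active"
zpool get all zroot | grep feature@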
