Will v2.0 release keep the previous OS compatibility?


Re: Will v2.0 release keep the previous OS compatibility?

Postby nodarkthings » Wed May 19, 2021 1:42 am

I confirm: same version, same place.
Now I've created a new pool on another drive, changed its type to bf01, exported it, and here's what I get:
Code:
sh-3.2# zpool create -f -o ashift=13 -O compression=lz4 -O casesensitivity=insensitive -O atime=off -O normalization=formD Test /dev/disk2s3
sh-3.2# gdisk -l /dev/disk2
GPT fdisk (gdisk) version 1.0.3

Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/disk2: 250069680 sectors, 119.2 GiB
Sector size (logical): 512 bytes
Disk identifier (GUID): BB4B7164-986C-49CD-ADB1-B6D4C92666C9
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 250069646
Partitions will be aligned on 8-sector boundaries
Total free space is 1048589 sectors (512.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640        78534639   37.3 GiB    AF00  Sans titre
   3        78796784       125108567   22.1 GiB    AF00  Test
   4       125370712       187870711   29.8 GiB    AF00  MojoSan 2
   5       188132856       249807495   29.4 GiB    AF00  ELCAP San 2
sh-3.2# gdisk /dev/disk2
GPT fdisk (gdisk) version 1.0.3

Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.

Command (? for help): t
Partition number (1-5): 3
Current type is 'Apple HFS/HFS+'
Hex code or GUID (L to show codes, Enter = AF00): bf01
Changed type of partition to 'Solaris /usr & Mac ZFS'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/disk2.
Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Warning: The kernel may continue to use old or deleted partitions.
You should reboot or remove the drive.
The operation has completed successfully.
sh-3.2# gdisk -l /dev/disk2
GPT fdisk (gdisk) version 1.0.3

Warning: Devices opened with shared lock will not have their
partition table automatically reloaded!
Partition table scan:
  MBR: protective
  BSD: not present
  APM: not present
  GPT: present

Found valid GPT with protective MBR; using GPT.
Disk /dev/disk2: 250069680 sectors, 119.2 GiB
Sector size (logical): 512 bytes
Disk identifier (GUID): BB4B7164-986C-49CD-ADB1-B6D4C92666C9
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 250069646
Partitions will be aligned on 8-sector boundaries
Total free space is 1048589 sectors (512.0 MiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640        78534639   37.3 GiB    AF00  Sans titre
   3        78796784       125108567   22.1 GiB    BF01  Test
   4       125370712       187870711   29.8 GiB    AF00  MojoSan 2
   5       188132856       249807495   29.4 GiB    AF00  ELCAP San 2
sh-3.2# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Test    22G  2.56M  22.0G        -         -     0%     0%  1.00x    ONLINE  -
sh-3.2# zpool export Test
disk3s1 was already unmounted
sh-3.2# zpool import
Segmentation fault: 11


There appears to be some confusion between drives! :o As you can see, I created the pool on disk2, yet it says "disk3s1 was already unmounted" when I export it... (and there is no disk3, actually)
Couldn't it be because of the HFS/ZFS/HFS sandwich? Maybe it works fine with a whole drive, and that's why you don't see this issue? At the moment I have no free drive to test with.
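For what it's worth, the retype step can be sanity-checked from the gdisk table itself. A small sketch (just awk over the table format shown above; the pasted sample rows stand in for live `gdisk -l /dev/disk2` output):

```shell
# Sample rows pasted from the gdisk -l table above; with a live disk you
# would pipe `gdisk -l /dev/disk2` through the awk instead.
gdisk_table='   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640        78534639   37.3 GiB    AF00  Sans titre
   3        78796784       125108567   22.1 GiB    BF01  Test'

# The Size column splits into value + unit, so the type code is field 6.
zfs_part=$(echo "$gdisk_table" | awk '$6 == "BF01" { print $1 }')
echo "BF01 (ZFS) partition number: $zfs_part"
```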

Postby lundman » Wed May 19, 2021 1:57 am

Ah, you are just using a partition/slice? Not disk3 but disk3s3? I'll see if I can replicate that locally.

Postby Arne » Thu May 20, 2021 7:08 am

nodarkthings wrote: Tried 10.11 (rc7): works perfectly, no issues so far. I imported/exported a few times and ran the basic commands. Note that I'm not using send/receive or any RAID, so I can't speak to those.


I want to try it too. I think the way is to uninstall 1.9.4 first (with the script from the dmg) and then run the 2.0rc7 pkg installer.
But what if it all goes wrong and I need 1.9.4 back: how do I uninstall the 2.0rc7 version?
Can it be done with the script from the 1.9.4 dmg?

And I want to try 2.0rc7 first on a test pool on an external drive (an existing empty mirror from 1.9.4) while leaving my "real and important data" pool on my internal disk partition unmounted.
But is it safe (enough) to use 2.0rc7 productively with the "real and important data" pool on my internal disk?
Or shouldn't I even think about production use while it's still at the RCx stage?
My system: Mini 2009 (early) with El-Capitan 10.11.6

Postby lundman » Thu May 20, 2021 3:52 pm

The same uninstall script should work, but ZFS is now mostly self-contained:

sudo rm -rf /Library/Extensions/zfs.kext /usr/local/zfs/

Although the launchctl jobs should probably be stopped too. I'll make a note to update the uninstall script for the /usr/local/zfs path.
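The manual removal above can be wrapped in a dry run first; a sketch (my own wrapper for illustration, not the project's uninstall script; the DRYRUN switch is an assumption, and launchctl jobs would still need stopping separately):

```shell
#!/bin/sh
# Dry-run sketch: show what a manual 2.0rc removal would delete before
# actually deleting it. The two paths are the ones mentioned in the post.
DRYRUN=1
planned=""
for t in /Library/Extensions/zfs.kext /usr/local/zfs; do
  planned="$planned $t"
  if [ "$DRYRUN" -eq 1 ]; then
    echo "would remove: $t"
  else
    sudo rm -rf "$t"
  fi
done
```

Set DRYRUN=0 only once the printed list looks right.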

Postby lundman » Thu May 20, 2021 4:37 pm

On Catalina:

Code:
# gdisk -l /dev/disk1
Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048          411647   200.0 MiB   EF00  EFI System Partition
   2          411648        10897407   5.0 GiB     AF00  Apple HFS/HFS+
   3        10897408        21383167   5.0 GiB     BF01  Solaris /usr & Mac ZFS
   4        21383168        31868927   5.0 GiB     AF00  Apple HFS/HFS+
   5        31868928        41943006   4.8 GiB     AF00  Apple HFS/HFS+

# diskutil list
/dev/disk1 (internal, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *21.5 GB    disk1
   1:                        EFI EFI                     209.7 MB   disk1s1
   2:                  Apple_HFS roger                   5.4 GB     disk1s2
   3:                        ZFS                         5.4 GB     disk1s3
   4:                  Apple_HFS                         5.4 GB     disk1s4
   5:                  Apple_HFS                         5.2 GB     disk1s5

# zpool create -f -o ashift=13 -O compression=lz4 -O casesensitivity=insensitive -O atime=off -O normalization=formD Test /dev/disk1s3

# zpool export -a
Volume Test on disk3s1 unmounted

# zpool import -d /dev
   pool: Test
     id: 2211434988046827994
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

   Test        ONLINE
     disk1s3   ONLINE

#  zpool import -d /dev Test


Let me boot the 10.9 VM and see if that triggers something.
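As an aside, the scan output above is easy to post-process. A small sketch (sample text pasted from the scan above; treating the output format as stable is an assumption, for illustration only):

```shell
# Pasted from the `zpool import -d /dev` scan above; live use would be
# scan=$(zpool import -d /dev).
scan='   pool: Test
     id: 2211434988046827994
  state: ONLINE'

# Pull out the name and numeric id, either of which can be imported.
pool=$(echo "$scan" | awk '$1 == "pool:" { print $2 }')
poolid=$(echo "$scan" | awk '$1 == "id:" { print $2 }')
echo "import by name: zpool import -d /dev $pool"
echo "import by id:   zpool import -d /dev $poolid"
```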

Postby lundman » Thu May 20, 2021 5:18 pm

OK, 10.9 does indeed fail:

zpool import
Segmentation fault: 11

Postby lundman » Thu May 20, 2021 9:55 pm

I have posted a new Mavericks build. Turns out it didn't like fdopendir().

Postby nodarkthings » Fri May 21, 2021 5:06 am

The new 10.9 rc7 works indeed! ;) Thanks for your dedication!
Notes:
- Each time a pool is imported, the OS displays the "unreadable disk, do you want to format it?" dialog (I tried both -a and importing by name).
- One slightly weird thing when exporting, though, is an apparent confusion between disk identifiers.
Code:
sh-3.2# gdisk -l /dev/disk5
...
Number  Start (sector)    End (sector)  Size       Code  Name
   1              40          409639   200.0 MiB   EF00  EFI System Partition
   2          409640        78534639   37.3 GiB    AF00  Sans titre
   3        78796784       125108567   22.1 GiB    BF01  Test
   4       125370712       187870711   29.8 GiB    AF00  MojoSan 2
   5       188132856       249807495   29.4 GiB    AF00  ELCAP San 2
sh-3.2# zpool export Test
disk6s1 was already unmounted

I export Test on disk5 and it says disk6s1 was already unmounted... :? Perhaps it's just a cosmetic issue.

Postby Arne » Fri May 21, 2021 9:55 am

I dared to try rc7 on 10.11.
All went well. Almost. I uninstalled 1.9.4 with the script from its dmg and installed the 2.0rc7 pkg.
I imported the external mirror that I had created with 1.9.4 on partitions of two external disks.
Copying, deleting: all went well.
Even exporting the pool and importing it again with only one disk present works.
I copied files onto it, deleted them, onlined the missing disk, and tadaa, the resilvering started and finished quickly (there were no files on it, just the hidden garbage that OS X puts there).
Happy ever after? Yes. Uhm... no.
Because then it happened.
I copied some apps from the Applications folder into a newly created dataset on the mirror pool. After a few GB I stopped it. Works. Fine.
Then I wanted to delete all the files that had been copied. I selected them in the Finder and pressed cmd+backspace.
After almost all the files were deleted, the process halted and refused to finish.
My Terminal window still worked. Activity Monitor still worked. The mouse still worked.
Then, after a few seconds, everything stopped working. Terminal was visible but not responding, and the same with Activity Monitor. Right-clicking their icons in the Dock showed that these programs weren't responding, so I killed them by clicking the appropriate line in the app menu.
Activity Monitor vanished, but Terminal stayed visible.
The mouse cursor remained stuck in the shape of a text cursor and I wasn't able to click on anything.
That lasted for half an hour, so I switched off my Mini.
After booting back up, everything worked. But I decided to end my time in this playground, so I ran the uninstall script from 1.9.4, checked the folders, and everything was deleted (including /Library/Extensions/zfs.kext and /usr/local/zfs/, as mentioned by Lundman).
But reinstalling 1.9.4 didn't work: /dev/zfs was still present, as install.log told me, and the installer assumed the kext was still running.
After a reboot, the 1.9.4 install went through to the end. Phew, yippie. And my "important real data" pool on the internal disk is still working. Relief. Relax.

I guess my story won't reveal much about the why, where, and what, but I had worked through 1.9.4 -> 2rc7 -> 1.9.4 with many hours of benchmarking (iozone), so I'll leave it as it is for now.
Curiosity, happiness, excitement, frustration, anxiety, relief. Enough for today.

Oh, a little feedback on performance:

2.0rc7 seems a little faster than 1.9.4, except when iozone tested record sizes of 128k, 1024k and 2048k (if I remember correctly), where read performance for file sizes of 256 MB and above drops to 7 MB/s (USB2) while writes stayed above 100000. I tested with primarycache set to none, metadata and all, and switched compression between lz4, zstd and off. Without compression and with no primarycache, the write performance is closer to realistic USB2 speed, though still a little too high due to the computer's caching. But performance with compression and caching was a little better than with 1.9.4. I watched it with zpool iostat, too.

The memory footprint was a little strange, because switching programs was sluggish while benchmarking and even without benchmarking. Activity Monitor showed wired memory taking almost all of my 2 GB (1.55 GB). It went down when there was no reading/writing and when I opened another program, but not very fast. I started Firefox from the mirror pool; it took a little longer to start, but once running it worked well (not tested much: just starting, opening a site, and switching between its pages to exercise the cache, which I had also put on the mirror pool).

Maybe other testers on 10.11 had the same, other, or even no problems. I'm interested in reading about your experiences with rc7.
Now that I know I can survive such testing, I'll give it another trial. Maybe rc8 or rc9, and then, hopefully, a stable 2.0.

Thank you Jörgen for posting rc7 for 10.11 and for the exciting day today :-D
My system: Mini 2009 (early) with El-Capitan 10.11.6

Postby nodarkthings » Thu May 27, 2021 12:29 am

10.9 2.0.1: partial success.
No auto-import, but all pools import OK manually, although still with this message for each pool:
[screenshot: Capture d’écran 2021-05-26 à 23.17.19.jpg]

The export is not going well, though (probably still because of my use of slices, I guess... ;) )
Code:
sh-3.2# zpool export -a
disk3s1 was already unmounted
Unmount successful for /Volumes/ResteZFS/Machines_Virtuelles
disk2s1 was already unmounted
umount(/Volumes/ResteZFS): Resource busy -- try 'diskutil unmount'
cannot unmount '/Volumes/ResteZFS': unmount failed
disk4s1 was already unmounted


Here's /Volumes before export:
[screenshot: Capture d’écran 2021-05-26 à 23.28.51.jpg]

and after:
[screenshot: Capture d’écran 2021-05-26 à 23.33.28.jpg]

So the first pool, AudioZFS, exports OK; ResteZFS does not, because its dataset Machines_Virtuelles is not fully unmounted (it leaves an empty folder behind; the same goes for my last pool).
I've tried deleting the empty folders manually and rebooting, but everything behaves the same way.
I see a lot of WARNINGs in the Console.
Could it be because of the new "com.apple.mimic=hfs" syntax? (which I have not changed on those pools)

EDIT: I seem to have fixed the export issue by doing a zfs inherit for mountpoint and checksum... :o Those pools were created in 2017 and 2018 and had those values set locally instead of left at the default.
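The fix in that EDIT can be scripted. A hedged sketch (the dataset/property pairs are the ones from this post, the input mimics `zfs get -H -s local -o name,property` output, and the commands are echoed rather than executed):

```shell
# Sample of locally-set properties in `zfs get -H -s local -o name,property`
# shape; live use would be:
#   zfs get -r -H -s local -o name,property all ResteZFS
locals='ResteZFS mountpoint
ResteZFS/Machines_Virtuelles checksum'

# Echo (rather than run) the matching `zfs inherit` commands as a dry run.
cmds=$(echo "$locals" | while read -r ds prop; do
  echo "zfs inherit $prop $ds"
done)
echo "$cmds"
```

Dropping the echo would actually reset each property to its inherited default.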
Last edited by nodarkthings on Mon May 31, 2021 2:14 am, edited 1 time in total.
