Are you an m2/NVMe pool user? How's it working out?

Here you can discuss every aspect of OpenZFS on OS X. Note: not for support requests!

Postby incumbent » Tue Oct 19, 2021 6:06 pm

Hello,

I caved and picked up a discounted 4-device (!) m2/NVMe JBOD with thunderbolt3 and an extra DisplayPort interface today and while I'm selecting drives, would love to hear from people that have built zpools entirely of solid-state drive.

I've used SATA SSDs as L2ARC or ZIL devices while messing around, but I've never built a pool intended as primary storage.

I'd also appreciate any feedback on how you're provisioned; I usually use 2-3 device mirror vdevs and stack them. In this case I'll probably get four drives and create a pool of two mirrored vdevs with 2 devices each:

/ m2 + m2 mirror 4TB
8TB usable
\ m2 + m2 mirror 4TB

I haven't used raidz1 since 2TB drives became available. I live in fear of a cascade of failures across slow mechanical drives during a resilver.

On the other hand, an NVMe pool in that enclosure (I wouldn't buy fancy top-shelf 4x4 NVMe drives, because the enclosure says it can only push 2500 MBps) should probably resilver itself in about an hour?
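Back-of-the-envelope, assuming the enclosure's quoted ~2500 MBps is the bottleneck and a full 4TB of data has to be rewritten:

```shell
# rough best-case resilver estimate for a fully rewritten 4 TB mirror
# at the enclosure's quoted ~2500 MB/s sequential rate
echo $((4 * 1000 * 1000 / 2500 / 60)) minutes
```

Real resilvers walk metadata and are rarely purely sequential, so "about an hour" is a reasonable hedge on that ~26-minute ideal.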

I will not put my root filesystem on this, it'd be replacing a mechanical pool I use for my photography, audio and some of my research. I use Arq for backups to B2 and S3/Wasabi.

I don't know if I could create some sort of redundant APFS volume group at all, I haven't built a macOS RAID of any type in several years.
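For what it's worth, APFS volume groups don't provide redundancy themselves; the closest built-in macOS option I know of is an AppleRAID mirror set. A sketch, where `disk4` and `disk5` are placeholders for the real device identifiers:

```shell
# create a two-disk AppleRAID mirror named "Mirror", formatted JHFS+
# (AppleRAID has historically been an HFS+ affair; check your macOS
# version's diskutil man page before assuming APFS works on top of it)
diskutil appleRAID create mirror Mirror JHFS+ disk4 disk5

# verify the set and its members
diskutil appleRAID list
```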
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

Re: Are you an m2/NVMe pool user? How's it working out?

Postby incumbent » Mon Oct 25, 2021 3:03 pm

OK I went ahead and built it out and I'm experimenting. Pool creation the first time:

Code:
sudo zpool create -f \
-o ashift=12 \
-O compression=lz4 \
-O atime=off \
-O recordsize=1M \
-O checksum=on \
-O relatime=off \
-O casesensitivity=insensitive \
-O normalization=formD \
cascade \
mirror media-099E1F0E-245F-4D61-963E-33C98B7570AC media-BE0F8391-49DB-432C-BA43-3CD872E31A94 \
mirror media-E6812E12-E5C4-4571-A056-D3F94F5CE65C media-9FDD7FF7-5A41-4CFB-9365-E5E2E6CDB31F


I have a weird situation where I've painted my workstation into a corner a bit, because I'm essentially running out of lanes. When I can pull the mechanical zpool off this machine I'll get some back, which should improve the performance of the 4M2 enclosure.

My i7 mini has 64GB of memory but only 1TB internally (I don't know why I thought saving $200 was more important that day). I used Stibium for some disk performance testing, and while I'm still playing with other configurations, a Stibium run with a dozen 220MB TIFF reads/writes was pretty interesting. The summary:

Code:
Read (noncache) = 160
   Average = 1.08 GB/s
   20% trimmed mean = 1.09 GB/s (794.6 MB/s - 1.3 GB/s)
   Theil-Sen regressed rate = 1.06 GB/s (810.5 MB/s - 1.34 GB/s)
   Linear regressed rate = 888.4 MB/s latency = -0.018997903604517852 s

Write = 16
   Average = 3.88 GB/s
   20% trimmed mean = 4.02 GB/s (1.51 GB/s - 5.89 GB/s)
   Theil-Sen regressed rate = 5.22 GB/s (2.28 GB/s - 9.04 GB/s)
   Linear regressed rate = 5.17 GB/s latency = 0.0017659188170352919 s

Test path /Volumes/cascade/test
Mac model = Macmini8,1
Machine name = Mac mini
Version = 1.0
Hardware UUID = 68DDFAC8-59F3-5DFC-B7A3-5EEFCEF05970
CPU = Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz, 6 cores
Logic board ID = Mac-7BA5B2DFE22DDD8C
Physical memory = 64 GB
Running macOS version Version 11.6 (Build 20G165)
Stibium version 1.0
Started at 2021-10-25 17:18:34 +0000
Ended at 2021-10-25 17:19:38 +0000


I saw those 6 GBps writes and laughed out loud. I ticked "disable cache" in Stibium, but OpenZFS is still using the in-memory ARC, right? I still don't know why the results are entirely inverted like that (writes faster than reads), but I'm trying again soon with some adjustments to my pool creation. For one, I'm not going to use _slices_ by accident this time; I'll feed it whole disks by serial number, because I don't know whether moving the 4M2 enclosure to a different Thunderbolt bus/port would change the by-path IDs, and judging by the names of those devices I can't imagine it wouldn't.
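One way to take the ARC out of the picture between runs (assuming the pool name from above) is to export and re-import the pool, which drops its cached data:

```shell
# exporting a pool discards its data from the in-memory ARC;
# re-import and benchmark against a cold cache
sudo zpool export cascade
sudo zpool import cascade

# optionally watch the ARC size shrink and regrow
# (sysctl name as exposed by OpenZFS on OS X; may vary by version)
sysctl kstat.zfs.misc.arcstats.size
```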
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

Re: Are you an m2/NVMe pool user? How's it working out?

Postby incumbent » Tue Oct 26, 2021 4:04 am

This time I used ashift=13:

Code:
sudo zpool create -f \
-o ashift=13 \
-O compression=lz4 \
-O atime=off \
-O recordsize=128k \
-O checksum=on \
-O relatime=off \
-O casesensitivity=insensitive \
-O normalization=formD \
cascade \
mirror PCIe_SSD-21051220002672 ADATA_SX8200PNP-2K482L1DEC1G \
mirror Samsung_SSD_970_EVO_Plus_2TB-S59CNZFNB14688W Samsung_SSD_970_EVO_Plus_2TB-S59CNM0R715436F


And long story short:

Code:
Read (noncache) = 160
   Average = 827.9 MB/s
   20% trimmed mean = 833.1 MB/s (466.6 MB/s - 1.08 GB/s)
   Theil-Sen regressed rate = 754.7 MB/s (603.2 MB/s - 1.17 GB/s)
   Linear regressed rate = 708.3 MB/s latency = -0.021377247120773823 s

Write = 16
   Average = 2.51 GB/s
   20% trimmed mean = 2.51 GB/s (1.05 GB/s - 4.02 GB/s)
   Theil-Sen regressed rate = 3.62 GB/s (1.35 GB/s - 6.62 GB/s)
   Linear regressed rate = 2.87 GB/s latency = 0.0001053648429285986 s

Test path /Volumes/cascade/test
Mac model = Macmini8,1
Machine name = Mac mini
Version = 1.0
Hardware UUID = 68DDFAC8-59F3-5DFC-B7A3-5EEFCEF05970
CPU = Intel(R) Core(TM) i7-8700B CPU @ 3.20GHz, 6 cores
Logic board ID = Mac-7BA5B2DFE22DDD8C
Physical memory = 64 GB
Running macOS version Version 11.6 (Build 20G165)
Stibium version 1.0
Started at 2021-10-26 11:51:30 +0000
Ended at 2021-10-26 11:52:52 +0000


I'll do more tests and time trials later, doing some zfs send/recv of my photography and research datasets, which are very typical workloads for me.
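A sketch of the kind of time trial I mean, with `cascade/photos` and `cascade/bench` as hypothetical dataset names:

```shell
# snapshot the source, then time a full send into a scratch dataset
sudo zfs snapshot cascade/photos@trial
time sh -c 'sudo zfs send cascade/photos@trial | sudo zfs receive cascade/bench'

# clean up the scratch copy afterwards
sudo zfs destroy -r cascade/bench
```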
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

Re: Are you an m2/NVMe pool user? How's it working out?

Postby incumbent » Wed Nov 03, 2021 9:40 am

Had some errors show up in my pool that I believe were caused by me rummaging around behind the scenes and probably knocking a cable loose or something. I had metadata faults, and I was really anxious about having to recover from a zapped pool again, but once my agonizingly slow rsync job finished, an export/reboot/import/scrub solved it all.

Performance is still quite good but hard to quantify. I have some datasets that would benefit from recordsize adjustments, but there's plenty of time for that later. I want to make sure I've got reliable backups, local and off-site, before I start tinkering.
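Recordsize is per-dataset and only affects newly written blocks, so the setting itself is cheap to change; something like this, with the dataset names as placeholders:

```shell
# large sequential media likes bigger records; small-random workloads smaller
sudo zfs set recordsize=1M cascade/photos
sudo zfs set recordsize=16K cascade/research

# confirm the current value and where it was set
zfs get recordsize cascade/photos
```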

I need to re-read the performance testing page on the wiki, because I'm sure there's a way to get benchmarks that don't swing wildly in bursts.

In this Express 4M2 enclosure the fan seems to be doing a good job of keeping the drives cool. I don't have much room for heatsinks, but I've got it tabletop-vertical, it's not loud enough to bother me at all, and it seems to do well, so that's a win too.
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

Re: Are you an m2/NVMe pool user? How's it working out?

Postby incumbent » Thu Sep 22, 2022 6:42 am

It's been almost a year now and my quad-SSD pool is doing great.

I store research data (Obsidian/Dendron markdown vaults and DEVONthink Databases) there, the last 12 months of photos and videos, MailMate.app's Messages folder in an encrypted dataset, and also VMware Fusion volumes. I am on Monterey on that workstation and am using:

- zfs-macOS-2.1.0-1
- zfs-kmod-2.1.0-1

I haven't investigated updating ZFS or macOS yet, but it's that time of year.
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

Re: Are you an m2/NVMe pool user? How's it working out?

Postby kgreene » Fri Jan 26, 2024 12:57 pm

I recently picked up an M1 machine with the 4M2 NVMe case.

I'm planning to set up a two-SSD pool (two 4TB SSDs) and wanted to know whether anyone using this case is on an M1/M2/M3 machine versus an Intel machine.

I'm a little nervous about this external case, as I've always had internal Mac Pro ZFS drives, but that's not really an option anymore (unless you want to dump $$$ lol).

Also, is there a current recommended zpool config, i.e. ashift, etc.? I usually go case-sensitive for Unix-compatible development, but I haven't looked at current Apple compatibility options, especially with APFS.
kgreene
 
Posts: 19
Joined: Sun Jul 05, 2015 8:10 am


Re: Are you an m2/NVMe pool user? How's it working out?

Postby incumbent » Thu May 16, 2024 3:51 am

kgreene wrote:I recently picked up an m1 machine with the 4m2 nvme case.

I'm planning on setting up a 2 ssd pool (two 4tb ssds) and wanted to know if anyone using this case is also using a m1/m2/m3 vs an intel machine.

I'm a little nervous about this external case as I've always had internal Mac Pro zfs drives but that's not really an option anymore (unless you want to dump $$$ lol)

Also is there a current recommended zpool config, ie ashift, etc? I usually do case sensitive for unix compatible development but haven't looked at current apple compatibility options especially with apfs.


The OWC NVMe JBOD has been excellent for over two years, with zero problems or concerns. I posted my pool creation above in the first post or two; at the time that was best practice as I understood it. I think some of us would recommend you never use com.apple.mimic=apfs, and use hfs instead until the voodoo there is understood: several people have kernel panics that disappear when they don't mimic APFS and then mount or attach a disk image, which happens a lot on macOS.
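For reference, the property change described above would look something like this (pool/dataset name assumed from my earlier posts; check `zfsprops` on your OpenZFS on OS X version for the exact property):

```shell
# report the filesystem to macOS as HFS rather than APFS
sudo zfs set com.apple.mimic=hfs cascade

# confirm the setting
zfs get com.apple.mimic cascade
```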
incumbent
 
Posts: 49
Joined: Mon Apr 25, 2016 8:52 am

