Is anyone using O3X with usable performance, and if so, how?

All your general support questions for OpenZFS on OS X.

Re: Is anyone using O3X with usable performance, and if so,

Postby nodarkthings » Mon Feb 03, 2020 2:39 am

Quite strange... I've always had throughputs ranging from 98 MB/s down to 12 MB/s, never lower, depending on what I'm doing (real-world measurements, not benchmarks). I consider 40 to 50 MB/s the average. And I've got a 7-year-old computer that was never state of the art (a 3 GHz i3, with my pools on a 1 TB SATA3 rotating HD).
nodarkthings
 
Posts: 174
Joined: Mon Jan 26, 2015 10:32 am

Re: Is anyone using O3X with usable performance, and if so,

Postby mohak » Tue Feb 25, 2020 11:46 am

DanielSmedegaardBuus wrote: I'm wondering now, reading about your experiences (does every one of you use SATA storage?), if there's some issue with O3X and USB specifically.


I have been using ZFS-formatted USB drives on my Mac for over 2 years now with great performance. I see almost the same read speeds with O3X as when accessing the raw sectors on the drive with dd (~100 MB/s for 2.5" drives and ~180 MB/s for 3.5" drives).

P.S.: I did initially have a weird performance issue, but that was because those drives were encrypted with TrueCrypt (this was before encryption was baked into ZFS), and I was mounting the TrueCrypt volume using /dev/diskX instead of /dev/rdiskX.
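The buffered vs. raw device node difference is easy to see for yourself with dd. A minimal sketch, assuming a hypothetical external drive at disk4 (substitute your own identifier from `diskutil list`, and make sure nothing else is using the disk):

```shell
# Buffered block device node: reads pass through the kernel buffer
# cache and can appear much slower than the drive really is.
sudo dd if=/dev/disk4 of=/dev/null bs=1m count=1024

# Raw (character) device node: unbuffered reads, which usually
# reflect the drive's actual sequential throughput.
sudo dd if=/dev/rdisk4 of=/dev/null bs=1m count=1024
```

Each dd invocation prints bytes transferred and elapsed time at the end, so you can compare the two throughputs directly.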
mohak
 
Posts: 8
Joined: Sat Oct 28, 2017 10:44 am

Re: Is anyone using O3X with usable performance, and if so,

Postby tangles » Mon Jun 08, 2020 5:36 pm

It's a bit strange that you're getting only 5-10MB/sec speeds…

I was complaining that it's slow, but not that slow…

What is the nature of the data you're trying to move/copy, and what was the setup? Because it sounds like something was amiss here…

I just did a quick test by selecting everything on my desktop and dragging it over to a pool I'd just created on a 64GB flash stick in a USB3 enclosure:

Code: Select all
madmin@MPro13 ~ % sysctl {spl,zfs}.kext_version
spl.kext_version: 1.9.0-1
zfs.kext_version: 1.9.0-1
madmin@MPro13 ~ %


MacBook Pro 13" 2016
16GB RAM
i5-6267U @ 2.90GHz

23 documents, 5 folders
4,957,285,140 bytes (5.09 GB on disk)

Here's the last 20 or so of the Finder transfer:

Code: Select all
madmin@MPro13 ~ % zpool iostat -v 1 1000
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       2.46G  53.5G      0      0    110   365K
  disk2     2.46G  53.5G      0      0    110   365K
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       2.62G  53.4G      1    261  1.50K   141M
  disk2     2.62G  53.4G      1    261  1.50K   141M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       2.79G  53.2G     27    269  17.9K   139M
  disk2     2.79G  53.2G     27    269  17.9K   139M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       2.88G  53.1G      0    239      0   110M
  disk2     2.88G  53.1G      0    239      0   110M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       2.96G  53.0G      0    238      0   161M
  disk2     2.96G  53.0G      0    238      0   161M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.15G  52.9G      0    274      0   141M
  disk2     3.15G  52.9G      0    274      0   141M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.32G  52.7G      0    263      0   142M
  disk2     3.32G  52.7G      0    263      0   142M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.40G  52.6G      0    215      0   147M
  disk2     3.40G  52.6G      0    215      0   147M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.56G  52.4G     66    263  76.1K   142M
  disk2     3.56G  52.4G     66    263  76.1K   142M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.64G  52.4G      0    235      0   136M
  disk2     3.64G  52.4G      0    235      0   136M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.83G  52.2G      0    328      0   136M
  disk2     3.83G  52.2G      0    328      0   136M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       3.98G  52.0G     49    327  61.9K   148M
  disk2     3.98G  52.0G     49    327  61.9K   148M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.06G  51.9G      0    257      0   130M
  disk2     4.14G  51.9G      0    257      0   130M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.23G  51.8G     43    218  51.2K   144M
  disk2     4.23G  51.8G     43    218  51.2K   144M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.38G  51.6G      0    271      0   136M
  disk2     4.38G  51.6G      0    271      0   136M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.55G  51.5G     16    367  14.5K   140M
  disk2     4.55G  51.5G     16    367  14.5K   140M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.61G  51.4G      0    186      0  55.7M
  disk2     4.61G  51.4G      0    186      0  55.7M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.63G  51.4G      0    112      0  28.1M
  disk2     4.63G  51.4G      0    112      0  28.1M
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.63G  51.4G      0      0      0      0
  disk2     4.63G  51.4G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
wedge       4.63G  51.4G      0      0      0      0
  disk2     4.63G  51.4G      0      0      0      0
----------  -----  -----  -----  -----  -----  -----
^C
madmin@MPro13 ~ %


These numbers above look good considering the variable/random file sizes…

I then destroyed the pool, created a new APFS volume, and tested with BlackMagic's Disk Speed Test app (5 GB file size) to get a utopian value.

Surprisingly, after the cache on the flash filled up, I was getting about 150MB/sec at best, so things appear to have improved?

Admittedly, I was testing a while ago with rotational disks, raidz, and at least 5-7 spindles, so I might see whether these values now scale up as expected.

This is my daily lappy, so I had to use USB, which is also why I'm not running the latest kexts…
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am

Re: Is anyone using O3X with usable performance, and if so,

Postby atonaldenim » Mon Sep 07, 2020 1:21 pm

Hi everyone, hope I'm not hijacking the thread, I'm interested in using O3X on a Mac Pro 5,1 (2009) on High Sierra, with four 10TB internal SATA HDDs. My goal is to do a simple RAIDZ1 setup to make a large network file share available over 10 gigabit ethernet for video editing media storage for 1-2 users.

I've tried using FreeNAS on an older HP server I have, but for whatever reason the 10GbE connection was flaky; it consistently stopped working after about 30 minutes. Also, with FreeNAS it seems you can't do something as simple as plugging a USB hard drive into the server and sharing its files on the network. I've found 10GbE connections between two Mac Pros to be stable, so I'm interested in a macOS-based file server using ZFS as a FreeNAS alternative. I'm always getting client media on external drives, and being able to connect HFS+ drives directly to the file server for large copies would be really useful, as would the possibility of running Mac software directly on the server, like transcoding, rendering, etc.

Hearing about the performance issues that @tangles and others have had in the past with O3X, I'm wondering whether this is a reasonable plan? Speed is definitely a goal of mine, given the money invested in 10GbE equipment and the drive array. I was considering High Sierra, but Mojave would be fine too if there is a recommended OS for best performance. With video/audio files I'll be doing mostly large sequential transfers.

Thanks for any advice!
atonaldenim
 
Posts: 2
Joined: Mon Sep 07, 2020 1:04 pm

Re: Is anyone using O3X with usable performance, and if so,

Postby Sharko » Sun Sep 13, 2020 7:22 pm

You're right, this should probably be its own post in a separate thread, but here is my $0.02.

If I'm remembering Allan Jude's advice correctly, speed scales with the number of vdevs. So if you put all the disks in one RAIDZ configuration, you're going to have just one vdev, whereas if you set up the four disks as two vdevs of two disks each, you'll get a speed benefit: maybe not twice as fast as one vdev, but significantly faster. It comes at the cost of space efficiency, of course.

Presumably you're going to be working with large files, so you might want to change the recordsize from the 128k default to 1 megabyte.
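For the recordsize change, one hedged sketch (the dataset name `tank/media` is a placeholder): it's best set at dataset creation time, since recordsize only applies to files written after it is set.

```shell
# Create a dataset for large media files with a 1 MiB recordsize
# instead of the 128 KiB default.
sudo zfs create -o recordsize=1M tank/media

# Verify the property took effect before copying data in.
zfs get recordsize tank/media
```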

I would recommend running Mojave over High Sierra. The closer you stay to the top of the development tree, the better, and Mojave is still supported for another year officially. It is also easier, I believe, for Mr. Lundman to support, since it is closer to the changes required to make O3X work on the latest release (currently Catalina, soon to be Big Sur).

You don't say anything about RAM, but of course you should make sure that your O3X box has plenty of RAM for the ARC (16 GB is probably a good idea).
Sharko
 
Posts: 230
Joined: Thu May 12, 2016 12:19 pm

Re: Is anyone using O3X with usable performance, and if so,

Postby JasonBelec » Thu Oct 01, 2020 9:32 am

Methinks some of you have issues other than ZFS. I currently have about 300TB on ZFS across various clients. We even run a production Windows Server in VirtualBox at one location without issues. All running on Mac minis, of course, with USB enclosures supporting 4 drives with an external fan, connected either to a Sonnettech hub or directly over USB-C. All systems do hourly network backups to ZVOL-based sparse bundles. Snapshots run every 10 minutes and are sent off-site (to me) hourly.

Have we had the odd issue? Hell yeah! Software ones are usually tackled rather quickly by the developers. Perhaps you could talk to them. They get lonely.

I usually grab hardware off Amazon, including drives and cables. Clients have been happy. Three instances stand out where we were able to save their businesses: 1) a lightning strike destroyed 60% of the computer room (lucky hit); 2) someone clicked on something they shouldn't have and gave out information they shouldn't have, resulting in a major hack (humans, whatyagoingtodo); 3) the motherboard on a Mac mini (2018) server failed and wasn't writing data (ouch, it happens). ZFS is best; on OS X it's just fine. ;)
JasonBelec
 
Posts: 32
Joined: Mon Oct 26, 2015 1:07 pm

Re: Is anyone using O3X with usable performance, and if so,

Postby tangles » Tue Oct 06, 2020 1:02 am

Hi @atonaldenim,

Now that our favourite all-time developer Mr Lundy has released version 2 of ZFS, in line with the Linux version, I'll have a play on the weekend with a 2012 Mac Pro, Catalina, and 4-6 x 4TB disks in the bays with ZFS, and see what happens in terms of speed.

I'll also install Ubuntu 20.04 with ZFS and test on the same hardware. (No pressure, Lundy!)

Having 10 gigabit happening in the home, I don't care what hardware/OS combo I use; as long as it saturates the 10 gigabit connection, I'm happy… :D As JasonBelec points out, once you have ZFS holding your data, storage management becomes enjoyable!
tangles
 
Posts: 195
Joined: Tue Jun 17, 2014 6:54 am
