<?xml version="1.0"?>
<?xml-stylesheet type="text/css" href="https://openzfsonosx.org/w/skins/common/feed.css?303"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://openzfsonosx.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Brendon</id>
		<title>OpenZFS on OS X - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://openzfsonosx.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Brendon"/>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Special:Contributions/Brendon"/>
		<updated>2026-04-19T11:56:16Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.22.3</generator>

	<entry>
		<id>https://openzfsonosx.org/wiki/FAQ</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/FAQ"/>
				<updated>2016-08-17T09:53:37Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: /* Q) I compiled from source, how do I know that I am running what I compiled? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:About O3X]]&lt;br /&gt;
&lt;br /&gt;
Besides the questions covered below, you may find [[Documentation]] and [[OpenZFS on OS X]] helpful. Both articles contain a good deal of information about OpenZFS on OS X.&lt;br /&gt;
&lt;br /&gt;
== General ==&lt;br /&gt;
&lt;br /&gt;
===Q) What is OpenZFS on OS X?===&lt;br /&gt;
'''A)''' See the article entitled [[OpenZFS on OS X]].&lt;br /&gt;
&lt;br /&gt;
===Q) What does O3X mean?===&lt;br /&gt;
'''A)''' O3X = O O O X = '''O'''penZFS '''o'''n '''O'''S '''X'''.&lt;br /&gt;
&lt;br /&gt;
===Q) What version of ZFS do you use?===&lt;br /&gt;
'''A)''' OpenZFS. Pool version 5000. File system version 5. Pool version 5000 is pool version 28 plus support for feature flags. We support pool version 5000 and pool versions less than or equal to 28. We do not support the closed-source Oracle Solaris ZFS pool versions 29 and up.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I set up a test pool using files instead of disks?===&lt;br /&gt;
'''A)''' Yes. The example below uses plain files; it is also possible to use disk images.&lt;br /&gt;
&lt;br /&gt;
  $ cd /tmp&lt;br /&gt;
  $ mkfile 10G aaa&lt;br /&gt;
  $ mkfile 10G bbb&lt;br /&gt;
  $ mkfile 10G ccc&lt;br /&gt;
  $ sudo zpool create tank raidz /tmp/aaa /tmp/bbb /tmp/ccc&lt;br /&gt;
  $&lt;br /&gt;
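As mentioned, disk images also work. A possible sketch using hdiutil (the paths, sizes, and resulting /dev/diskN numbers are illustrative assumptions, not values from the original):&lt;br /&gt;

```shell
# Create three blank 10 GB disk images (no filesystem, no partition map)
hdiutil create -size 10g -layout NONE /tmp/disk1.dmg
hdiutil create -size 10g -layout NONE /tmp/disk2.dmg
hdiutil create -size 10g -layout NONE /tmp/disk3.dmg

# Attach them without mounting; each command prints a /dev/diskN node
hdiutil attach -nomount /tmp/disk1.dmg
hdiutil attach -nomount /tmp/disk2.dmg
hdiutil attach -nomount /tmp/disk3.dmg

# Build the pool from whatever device nodes were printed above
# (/dev/disk2 through /dev/disk4 are hypothetical examples)
sudo zpool create tank raidz /dev/disk2 /dev/disk3 /dev/disk4
```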
&lt;br /&gt;
===Q) I compiled from source, how do I know that I am running what I compiled?===&lt;br /&gt;
'''A)''' There are a couple of sysctls that can be read to determine which commit was compiled into your custom kexts. Assuming you used zfsadm to build, or ran the autoconf and configure steps before compiling, the following works.&lt;br /&gt;
&lt;br /&gt;
  $ sysctl -a | grep kext&lt;br /&gt;
      spl.kext_version: 1.5.2-2_g'''115aa2f'''&lt;br /&gt;
      zfs.kext_version: 1.5.2-33_g'''9ac66a7'''&lt;br /&gt;
&lt;br /&gt;
  $ cd &amp;lt;path to source code&amp;gt;/zfs&lt;br /&gt;
  $ git log -n 1&lt;br /&gt;
      commit '''9ac66a7'''1e53636eec04f4718b0b3870a18f07840&lt;br /&gt;
         Merge: 3326995 890ef86&lt;br /&gt;
         Author: zadmin &amp;lt;zadmin@jerry.local&amp;gt;&lt;br /&gt;
         Date:   Thu Jun 16 17:19:24 2016 +1000&lt;br /&gt;
&lt;br /&gt;
         Merge branch 'master' of https://github.com/openzfsonosx/zfs&lt;br /&gt;
&lt;br /&gt;
  $ cd &amp;lt;path to source code&amp;gt;/spl&lt;br /&gt;
  $ git log -n 2&lt;br /&gt;
       commit f1ff660a2f1fa340d451c2afa5f726f9bd3e609d&lt;br /&gt;
       Author: Brendon Humphrey &amp;lt;brendon.humphrey@mac.com&amp;gt;&lt;br /&gt;
       Date:   Sat Jun 18 20:25:09 2016 -0700&lt;br /&gt;
       ...&lt;br /&gt;
       commit '''115aa2f'''05b6f843e0d39d4f6bf999602db120113&lt;br /&gt;
       Author: Jorgen Lundman &amp;lt;lundman@lundman.net&amp;gt;&lt;br /&gt;
       Date:   Thu May 12 09:48:31 2016 +0900&lt;br /&gt;
       ...&lt;br /&gt;
&lt;br /&gt;
I have highlighted the relevant text to match. You can see that this machine is running the latest ZFS, and is one commit behind the latest SPL.&lt;br /&gt;
&lt;br /&gt;
If you make a small local change to the code, this technique will not work as-is. One workaround is to edit your file(s) and commit them in your repository clone, so that the build has a commit id.&lt;br /&gt;
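The hash comparison above can be scripted. A minimal sketch (not part of the original instructions; the sample values are copied from the output shown above, and on a live system you would read them from sysctl and git instead):&lt;br /&gt;

```shell
# Sample value; on a live system: kext_version=$(sysctl -n zfs.kext_version)
kext_version="1.5.2-33_g9ac66a7"

# The short commit hash is whatever follows the final "_g"
short_hash="${kext_version##*_g}"

# Sample value; on a live system: full_commit=$(git rev-parse HEAD)
full_commit="9ac66a71e53636eec04f4718b0b3870a18f07840"

# The kext matches the checkout if the full hash starts with the short hash
case "$full_commit" in
  "$short_hash"*) echo "kext matches HEAD ($short_hash)" ;;
  *)              echo "kext does NOT match HEAD" ;;
esac
```

With the sample values this prints &amp;quot;kext matches HEAD (9ac66a7)&amp;quot;.&lt;br /&gt;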
&lt;br /&gt;
== Best practices ==&lt;br /&gt;
&lt;br /&gt;
===Q) Do I have to use mirrors or raidz? ===&lt;br /&gt;
'''A)''' Have to? No. Should you? Virtually always. ZFS will not be able to repair errors it finds unless you have redundancy at the vdev level.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I set copies=2 in lieu of using mirrors or raidz? ===&lt;br /&gt;
'''A)''' Setting copies=2 is a poor substitute for vdev-level redundancy. Two copies on a broken drive are worthless. That being said, yes, you can set copies=2. Do so at your own risk.&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
===Q) How can I access the .zfs snapshot directories?===&lt;br /&gt;
'''A)''' You need to set snapdir visible and manually mount a snapshot.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zfs set snapdir=visible tank/bob&lt;br /&gt;
$ sudo zfs mount tank/bob@yesterday&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can see existing snapshots via:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ zfs list -t snapshot&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Is .zfs snapdir auto-mounting supported?===&lt;br /&gt;
'''A)''' No, not at this time. You must manually &amp;quot;zfs mount&amp;quot; snapshots to see them in the snapdir.&lt;br /&gt;
&lt;br /&gt;
===Q) OK, I manually mounted my snapshot but still cannot see it in Finder. What gives?===&lt;br /&gt;
'''A)''' Currently mounted snapshots are only visible from Terminal, not from Finder.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Why does OSX Server not allow the server storage to be on ZFS?===&lt;br /&gt;
'''A)''' OSX Server has been coded to allow the server storage area only on an HFS-formatted drive. O3X offers a feature that causes ZFS datasets to identify themselves as HFS, which is sufficient for OSX Server to allow storage on a ZFS filesystem. HFS mimicry is enabled by setting the com.apple.mimic_hfs property on a per-dataset basis.&lt;br /&gt;
&lt;br /&gt;
In addition, as of OSX Server 5.x it seems that the App Store Caching Server can only store its cache on HFS. This is a new behaviour. An HFS-formatted ZVOL can be used to work around this:&lt;br /&gt;
&lt;br /&gt;
  sudo zfs create -o volblocksize=1m -s -V 250g tank/cachingzvol&lt;br /&gt;
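The new ZVOL then needs an HFS filesystem before the caching server can use it. A possible follow-up step (the /dev/disk5 device node is a hypothetical example; check diskutil list for the real one):&lt;br /&gt;

```shell
# Format the ZVOL as journaled HFS+, naming the volume "Caching"
# (/dev/disk5 is a hypothetical device node; substitute your own)
sudo diskutil eraseDisk JHFS+ Caching /dev/disk5
```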
&lt;br /&gt;
== Interoperability ==&lt;br /&gt;
&lt;br /&gt;
===Q) How do I create an O3X compatible pool on another OpenZFS platform? ===&lt;br /&gt;
'''A)''' Only enable feature flags supported by O3X, as discussed [https://openzfsonosx.org/wiki/Zpool#Feature_flags here].&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my ZEVO pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 28, which means it can import ZEVO pools.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my MacZFS pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 8, which means it can import MacZFS pools.&lt;br /&gt;
&lt;br /&gt;
===Q) Do HFS-only applications such as Photos, OSX Server, and others work on ZFS?===&lt;br /&gt;
'''A)''' Sometimes. Apple codes some software to work only when stored on HFS; we cannot change that. We provide a property that, when enabled, causes ZFS filesystems to identify themselves as HFS. We cannot guarantee that such applications will work 100% correctly, as they may depend on HFS-specific behaviours that ZFS does not reproduce identically.&lt;br /&gt;
   &lt;br /&gt;
    sudo zfs set com.apple.mimic_hfs=on &amp;lt;dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Does Spotlight work? ===&lt;br /&gt;
'''A)''' Yes. Spotlight works on O3X 1.3.1+.&lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine backups be stored on ZFS? ===&lt;br /&gt;
'''A)''' Yes. It is possible to host a Time Machine backup within a sparse image on ZFS, or on an HFS-formatted ZVOL on ZFS.&lt;br /&gt;
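One possible way to set up the sparse-image variant (the size, paths, and volume name are illustrative assumptions, not commands from the original):&lt;br /&gt;

```shell
# Create a growable sparse bundle containing a journaled HFS+ volume
hdiutil create -size 500g -type SPARSEBUNDLE -fs HFS+J \
    -volname "TimeMachine" /tank/backups/tm.sparsebundle

# Mount it, then point Time Machine at the mounted volume
hdiutil attach /tank/backups/tm.sparsebundle
sudo tmutil setdestination /Volumes/TimeMachine
```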
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
===Q) Can I use Finder permissions, aka ACLs? ===&lt;br /&gt;
'''A)''' Not yet. There is more work to do in this area. See https://github.com/openzfsonosx/zfs/issues/275&lt;br /&gt;
&lt;br /&gt;
===Q) Can I boot my computer off of O3X?===&lt;br /&gt;
'''A)''' No. O3X cannot be used as your main system partition.&lt;br /&gt;
&lt;br /&gt;
===Q) So if I use O3X, that means I don't need to back up, right? ===&lt;br /&gt;
'''A)''' Wrong. Wrong. Wrong. &lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine back up the contents of a ZFS volume? ===&lt;br /&gt;
'''A)''' No. We believe that when &amp;quot;Issue 116&amp;quot; is resolved it may become supportable. At present, Time Machine excludes ZFS filesystems from the list of available backup targets.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/FAQ</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/FAQ"/>
				<updated>2016-08-17T09:46:57Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:About O3X]]&lt;br /&gt;
&lt;br /&gt;
Besides the questions covered below, you may find [[Documentation]] and [[OpenZFS on OS X]] helpful. Both articles contain a good deal of information about OpenZFS on OS X.&lt;br /&gt;
&lt;br /&gt;
== General ==&lt;br /&gt;
&lt;br /&gt;
===Q) What is OpenZFS on OS X?===&lt;br /&gt;
'''A)''' See the article entitled [[OpenZFS on OS X]].&lt;br /&gt;
&lt;br /&gt;
===Q) What does O3X mean?===&lt;br /&gt;
'''A)''' O3X = O O O X = '''O'''penZFS '''o'''n '''O'''S '''X'''.&lt;br /&gt;
&lt;br /&gt;
===Q) What version of ZFS do you use?===&lt;br /&gt;
'''A)''' OpenZFS. Pool version 5000. File system version 5. Pool version 5000 is pool version 28 plus support for feature flags. We support pool version 5000 and pool versions less than or equal to 28. We do not support the closed-source Oracle Solaris ZFS pool versions 29 and up.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I set up a test pool using files instead of disks?===&lt;br /&gt;
'''A)''' Yes. The example below uses plain files; it is also possible to use disk images.&lt;br /&gt;
&lt;br /&gt;
  $ cd /tmp&lt;br /&gt;
  $ mkfile 10G aaa&lt;br /&gt;
  $ mkfile 10G bbb&lt;br /&gt;
  $ mkfile 10G ccc&lt;br /&gt;
  $ sudo zpool create tank raidz /tmp/aaa /tmp/bbb /tmp/ccc&lt;br /&gt;
  $&lt;br /&gt;
&lt;br /&gt;
===Q) I compiled from source, how do I know that I am running what I compiled?===&lt;br /&gt;
'''A)''' There are a couple of sysctls that can be read to determine which commit was compiled into your custom kexts. Assuming you used zfsadm to build, or ran the autoconf and configure steps before compiling, the following works.&lt;br /&gt;
&lt;br /&gt;
  $ sysctl -a | grep kext&lt;br /&gt;
      spl.kext_version: 1.5.2-2_g'''115aa2f'''&lt;br /&gt;
      zfs.kext_version: 1.5.2-33_g'''9ac66a7'''&lt;br /&gt;
&lt;br /&gt;
  $ cd &amp;lt;path to source code&amp;gt;/zfs&lt;br /&gt;
  $ git log -n 1&lt;br /&gt;
      commit '''9ac66a7'''1e53636eec04f4718b0b3870a18f07840&lt;br /&gt;
         Merge: 3326995 890ef86&lt;br /&gt;
         Author: zadmin &amp;lt;zadmin@jerry.local&amp;gt;&lt;br /&gt;
         Date:   Thu Jun 16 17:19:24 2016 +1000&lt;br /&gt;
&lt;br /&gt;
         Merge branch 'master' of https://github.com/openzfsonosx/zfs&lt;br /&gt;
&lt;br /&gt;
  $ cd &amp;lt;path to source code&amp;gt;/spl&lt;br /&gt;
  $ git log -n 2&lt;br /&gt;
       commit f1ff660a2f1fa340d451c2afa5f726f9bd3e609d&lt;br /&gt;
       Author: Brendon Humphrey &amp;lt;brendon.humphrey@mac.com&amp;gt;&lt;br /&gt;
       Date:   Sat Jun 18 20:25:09 2016 -0700&lt;br /&gt;
       ...&lt;br /&gt;
       commit '''115aa2f'''05b6f843e0d39d4f6bf999602db120113&lt;br /&gt;
       Author: Jorgen Lundman &amp;lt;lundman@lundman.net&amp;gt;&lt;br /&gt;
       Date:   Thu May 12 09:48:31 2016 +0900&lt;br /&gt;
       ...&lt;br /&gt;
&lt;br /&gt;
I have highlighted the relevant text to match. You can see that this machine is running the latest ZFS, and is one commit behind the latest SPL.&lt;br /&gt;
&lt;br /&gt;
== Best practices ==&lt;br /&gt;
&lt;br /&gt;
===Q) Do I have to use mirrors or raidz? ===&lt;br /&gt;
'''A)''' Have to? No. Should you? Virtually always. ZFS will not be able to repair errors it finds unless you have redundancy at the vdev level.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I set copies=2 in lieu of using mirrors or raidz? ===&lt;br /&gt;
'''A)''' Setting copies=2 is a poor substitute for vdev-level redundancy. Two copies on a broken drive are worthless. That being said, yes, you can set copies=2. Do so at your own risk.&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
===Q) How can I access the .zfs snapshot directories?===&lt;br /&gt;
'''A)''' You need to set snapdir visible and manually mount a snapshot.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zfs set snapdir=visible tank/bob&lt;br /&gt;
$ sudo zfs mount tank/bob@yesterday&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
You can see existing snapshots via:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ zfs list -t snapshot&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Is .zfs snapdir auto-mounting supported?===&lt;br /&gt;
'''A)''' No, not at this time. You must manually &amp;quot;zfs mount&amp;quot; snapshots to see them in the snapdir.&lt;br /&gt;
&lt;br /&gt;
===Q) OK, I manually mounted my snapshot but still cannot see it in Finder. What gives?===&lt;br /&gt;
'''A)''' Currently mounted snapshots are only visible from Terminal, not from Finder.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Why does OSX Server not allow the server storage to be on ZFS?===&lt;br /&gt;
'''A)''' OSX Server has been coded to allow the server storage area only on an HFS-formatted drive. O3X offers a feature that causes ZFS datasets to identify themselves as HFS, which is sufficient for OSX Server to allow storage on a ZFS filesystem. HFS mimicry is enabled by setting the com.apple.mimic_hfs property on a per-dataset basis.&lt;br /&gt;
&lt;br /&gt;
In addition, as of OSX Server 5.x it seems that the App Store Caching Server can only store its cache on HFS. This is a new behaviour. An HFS-formatted ZVOL can be used to work around this:&lt;br /&gt;
&lt;br /&gt;
  sudo zfs create -o volblocksize=1m -s -V 250g tank/cachingzvol&lt;br /&gt;
&lt;br /&gt;
== Interoperability ==&lt;br /&gt;
&lt;br /&gt;
===Q) How do I create an O3X compatible pool on another OpenZFS platform? ===&lt;br /&gt;
'''A)''' Only enable feature flags supported by O3X, as discussed [https://openzfsonosx.org/wiki/Zpool#Feature_flags here].&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my ZEVO pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 28, which means it can import ZEVO pools.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my MacZFS pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 8, which means it can import MacZFS pools.&lt;br /&gt;
&lt;br /&gt;
===Q) Do HFS-only applications such as Photos, OSX Server, and others work on ZFS?===&lt;br /&gt;
'''A)''' Sometimes. Apple codes some software to work only when stored on HFS; we cannot change that. We provide a property that, when enabled, causes ZFS filesystems to identify themselves as HFS. We cannot guarantee that such applications will work 100% correctly, as they may depend on HFS-specific behaviours that ZFS does not reproduce identically.&lt;br /&gt;
   &lt;br /&gt;
    sudo zfs set com.apple.mimic_hfs=on &amp;lt;dataset&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Does Spotlight work? ===&lt;br /&gt;
'''A)''' Yes. Spotlight works on O3X 1.3.1+.&lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine backups be stored on ZFS? ===&lt;br /&gt;
'''A)''' Yes. It is possible to host a Time Machine backup within a sparse image on ZFS, or on an HFS-formatted ZVOL on ZFS.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
===Q) Can I use Finder permissions, aka ACLs? ===&lt;br /&gt;
'''A)''' Not yet. There is more work to do in this area. See https://github.com/openzfsonosx/zfs/issues/275&lt;br /&gt;
&lt;br /&gt;
===Q) Can I boot my computer off of O3X?===&lt;br /&gt;
'''A)''' No. O3X cannot be used as your main system partition.&lt;br /&gt;
&lt;br /&gt;
===Q) So if I use O3X, that means I don't need to back up, right? ===&lt;br /&gt;
'''A)''' Wrong. Wrong. Wrong. &lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine back up the contents of a ZFS volume? ===&lt;br /&gt;
'''A)''' No. We believe that when &amp;quot;Issue 116&amp;quot; is resolved it may become supportable. At present, Time Machine excludes ZFS filesystems from the list of available backup targets.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Getting_involved</id>
		<title>Getting involved</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Getting_involved"/>
				<updated>2015-09-18T21:49:08Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:About O3X]]&lt;br /&gt;
Getting involved and contributing benefits not only the community member and their fellow O3X users, but all users of [[Wikipedia:Free and open source software|free and open source software]].&lt;br /&gt;
&lt;br /&gt;
This article describes how both new and experienced O3X users can contribute to the community. Note that this is not an exhaustive list.&lt;br /&gt;
&lt;br /&gt;
== Official OpenZFS on OS X projects ==&lt;br /&gt;
&lt;br /&gt;
=== Post on the forums ===&lt;br /&gt;
&lt;br /&gt;
One of the easiest ways to get involved is participating in the [https://openzfsonosx.org/forum/ O3X Forums], which allow O3X users to get to know the community and help each other.&lt;br /&gt;
&lt;br /&gt;
=== Improve this wiki ===&lt;br /&gt;
&lt;br /&gt;
[[AboutWiki|O3XWiki]] is collaboratively maintained O3X documentation. All users are encouraged to [[O3XWiki:Contributing|contribute]].&lt;br /&gt;
&lt;br /&gt;
=== Join the chatroom ===&lt;br /&gt;
&lt;br /&gt;
You can help other users to solve problems in the [[IRC channel|IRC Channel]].&lt;br /&gt;
&lt;br /&gt;
=== Fix and report bugs ===&lt;br /&gt;
&lt;br /&gt;
Reporting and fixing bugs on the [https://github.com/zfs-osx/zfs/issues bug tracker] is one of the possible ways to help the community.&lt;br /&gt;
&lt;br /&gt;
When reporting errors, please include enough contextual information to assist with reproduction, diagnosis, and resolution of the issue. At a minimum, consider describing what you were doing, how repeatable the issue is, the full text of any stack dump, and which OSX and O3X versions you are running.&lt;br /&gt;
&lt;br /&gt;
If you are experiencing panics, please turn keepsyms on (https://openzfsonosx.org/wiki/Install). This makes the stack dumps human-readable, which improves our ability to diagnose the problem.&lt;br /&gt;
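For reference, keepsyms is set via boot-args; a sketch of the command (follow the Install page above if it differs, and note that this overwrites any existing boot-args):&lt;br /&gt;

```shell
# Enable verbose boot and keep kernel symbols; takes effect after a reboot
sudo nvram boot-args="-v keepsyms=y"
```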
&lt;br /&gt;
If you are deadlocking O3X, please run the spindump command after O3X deadlocks, but before the machine becomes unusable, and post the resulting output. There is often only a narrow window of time in which to do this. Spindump needs to run with elevated permissions using sudo; normally only administrators can do this.&lt;br /&gt;
&lt;br /&gt;
   $ sudo spindump&lt;br /&gt;
   Password:&lt;br /&gt;
   Sampling all processes for 10 seconds with 10 milliseconds of run time between samples&lt;br /&gt;
   Sampling completed, processing symbols...&lt;br /&gt;
   Spindump analysis written to file /tmp/spindump.txt&lt;br /&gt;
&lt;br /&gt;
Copy the output file to an HFS filesystem and reboot.&lt;br /&gt;
&lt;br /&gt;
For real-time assistance, please see us on IRC; there is usually someone online who can help (https://openzfsonosx.org/wiki/IRC_channel).&lt;br /&gt;
&lt;br /&gt;
=== Code ===&lt;br /&gt;
&lt;br /&gt;
Developers of all levels of experience and skill are encouraged to come talk about how they can contribute in the [[IRC channel|IRC Channel]] and on the [https://openzfsonosx.org/forum/viewforum.php?f=24 OpenZFS on OS X Development] forum.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/FAQ</id>
		<title>FAQ</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/FAQ"/>
				<updated>2015-09-18T21:30:24Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: add all comments&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:About O3X]]&lt;br /&gt;
&lt;br /&gt;
Besides the questions covered below, you may find [[Documentation]] and [[OpenZFS on OS X]] helpful. Both articles contain a good deal of information about OpenZFS on OS X.&lt;br /&gt;
&lt;br /&gt;
== General ==&lt;br /&gt;
&lt;br /&gt;
===Q) What is OpenZFS on OS X?===&lt;br /&gt;
'''A)''' See the article entitled [[OpenZFS on OS X]].&lt;br /&gt;
&lt;br /&gt;
===Q) What does O3X mean?===&lt;br /&gt;
'''A)''' O3X = O O O X = OpenZFS on OS X.&lt;br /&gt;
&lt;br /&gt;
===Q) What version of ZFS do you use?===&lt;br /&gt;
'''A)''' OpenZFS. Pool version 5000. File system version 5. Pool version 5000 is pool version 28 plus support for feature flags. We support pool version 5000 and pool versions less than or equal to 28. We do not support the closed-source Oracle Solaris ZFS pool versions 29 and up.&lt;br /&gt;
&lt;br /&gt;
== Best practices ==&lt;br /&gt;
&lt;br /&gt;
===Q) Do I have to use mirrors or raidz? ===&lt;br /&gt;
'''A)''' Have to? No. Should you? Virtually always. ZFS will not be able to repair errors it finds unless you have redundancy at the vdev level.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I set copies=2 in lieu of using mirrors or raidz? ===&lt;br /&gt;
'''A)''' Setting copies=2 is a poor substitute for vdev-level redundancy. Two copies on a broken drive are worthless. That being said, yes, you can set copies=2. Do so at your own risk.&lt;br /&gt;
&lt;br /&gt;
== Administration ==&lt;br /&gt;
&lt;br /&gt;
===Q) How can I access the .zfs snapshot directories?===&lt;br /&gt;
'''A)''' You need to set snapdir visible and manually mount a snapshot.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zfs set snapdir=visible tank/bob&lt;br /&gt;
$ sudo zfs mount tank/bob@yesterday&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Is .zfs snapdir auto-mounting supported?===&lt;br /&gt;
'''A)''' No, not at this time. You must manually &amp;quot;zfs mount&amp;quot; snapshots to see them in the snapdir.&lt;br /&gt;
&lt;br /&gt;
===Q) OK, I manually mounted my snapshot but still cannot see it in Finder. What gives?===&lt;br /&gt;
'''A)''' Currently mounted snapshots are only visible from Terminal, not from Finder.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ls -l /tank/bob/.zfs/snapshot/yesterday/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Q) Why does OSX Server not allow the server storage to be on ZFS?===&lt;br /&gt;
'''A)''' OSX Server has been coded to allow the server storage area only on an HFS-formatted drive. O3X offers a feature that causes ZFS datasets to identify themselves as HFS, which is sufficient for OSX Server to allow storage on a ZFS filesystem. HFS mimicry is enabled by setting the com.apple.mimic_hfs property on a per-dataset basis.&lt;br /&gt;
&lt;br /&gt;
== Interoperability ==&lt;br /&gt;
&lt;br /&gt;
===Q) How do I create an O3X compatible pool on another OpenZFS platform? ===&lt;br /&gt;
'''A)''' Only enable feature flags supported by O3X, as discussed [https://openzfsonosx.org/wiki/Zpool#Feature_flags here].&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my ZEVO pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 28, which means it can import ZEVO pools.&lt;br /&gt;
&lt;br /&gt;
===Q) Can I import my MacZFS pools?===&lt;br /&gt;
'''A)''' Yes. O3X can import pool version 8, which means it can import MacZFS pools.&lt;br /&gt;
&lt;br /&gt;
== Limitations ==&lt;br /&gt;
&lt;br /&gt;
===Q) Can I use Finder permissions, aka ACLs? ===&lt;br /&gt;
'''A)''' Not yet. There is more work to do in this area. See https://github.com/openzfsonosx/zfs/issues/275&lt;br /&gt;
&lt;br /&gt;
===Q) Can I boot my computer off of O3X?===&lt;br /&gt;
'''A)''' No. O3X cannot be used as your main system partition.&lt;br /&gt;
&lt;br /&gt;
===Q) So if I use O3X, that means I don't need to back up, right? ===&lt;br /&gt;
'''A)''' Wrong. Wrong. Wrong.&lt;br /&gt;
&lt;br /&gt;
===Q) Does Spotlight work? ===&lt;br /&gt;
'''A)''' Yes. Spotlight works on O3X 1.3.1+.&lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine backups be stored on ZFS? ===&lt;br /&gt;
'''A)''' Yes. It is possible to host a Time Machine backup within a sparse image on ZFS, or on an HFS-formatted ZVOL on ZFS.&lt;br /&gt;
&lt;br /&gt;
===Q) Can Time Machine back up the contents of a ZFS volume? ===&lt;br /&gt;
'''A)''' No. We believe that when &amp;quot;Issue 116&amp;quot; is resolved it may become supportable. At present, Time Machine excludes ZFS filesystems from the list of available backup targets.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Development</id>
		<title>Development</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Development"/>
				<updated>2014-12-05T23:13:09Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: /* Kernel */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:O3X development]]&lt;br /&gt;
You should also familiarize yourself with the [[Project_roadmap|project roadmap]] so that you can put the technical details here in context.&lt;br /&gt;
&lt;br /&gt;
== Kernel ==&lt;br /&gt;
&lt;br /&gt;
=== Debugging with GDB ===&lt;br /&gt;
&lt;br /&gt;
Dealing with [[Panic|panics]].&lt;br /&gt;
&lt;br /&gt;
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html&lt;br /&gt;
&lt;br /&gt;
Boot target VM with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo nvram boot-args=&amp;quot;-v keepsyms=y debug=0x144&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Make it panic.&lt;br /&gt;
&lt;br /&gt;
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ gdb /Volumes/KernelDebugKit/mach_kernel&lt;br /&gt;
(gdb) source /Volumes/KernelDebugKit/kgmacros&lt;br /&gt;
(gdb) target remote-kdp&lt;br /&gt;
(gdb) kdp-reattach  192.168.30.133   # obviously use the IP of your target / crashed VM&lt;br /&gt;
(gdb) showallkmods&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Find the addresses for ZFS and SPL modules.&lt;br /&gt;
&lt;br /&gt;
Press &amp;lt;code&amp;gt;^Z&amp;lt;/code&amp;gt; to suspend gdb, or use another terminal.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
^Z&lt;br /&gt;
$ sudo kextutil -s /tmp -n \&lt;br /&gt;
-k /Volumes/KernelDebugKit/mach_kernel \&lt;br /&gt;
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ \&lt;br /&gt;
../spl/module/spl/spl.kext/&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then resume gdb, or go back to gdb terminal.&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
$ fg&lt;br /&gt;
(gdb) set kext-symbol-file-path /tmp&lt;br /&gt;
(gdb) add-kext /tmp/spl.kext &lt;br /&gt;
(gdb) add-kext /tmp/zfs.kext&lt;br /&gt;
(gdb) bt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Debugging with LLDB ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ echo &amp;quot;settings set target.load-script-from-symbol-file true&amp;quot; &amp;gt;&amp;gt; ~/.lldbinit&lt;br /&gt;
$ lldb /Volumes/KernelDebugKit/mach_kernel&lt;br /&gt;
(lldb) kdp-remote  192.168.30.146&lt;br /&gt;
(lldb) showallkmods&lt;br /&gt;
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000   # address from showallkmods&lt;br /&gt;
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then follow the guide for GDB above.&lt;br /&gt;
&lt;br /&gt;
=== Non-panic ===&lt;br /&gt;
&lt;br /&gt;
If you prefer to work in GDB, you can always panic a kernel with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo dtrace -w -n &amp;quot;BEGIN{ panic();}&amp;quot;&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
But this was revealing:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log &lt;br /&gt;
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt&lt;br /&gt;
$ less /tmp/trace.txt&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that my hang is here:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
PID: 156&lt;br /&gt;
    Process: zpool&lt;br /&gt;
    Thread ID: 0x4e2&lt;br /&gt;
    Thread state: 0x9 == TH_WAIT |TH_UNINT &lt;br /&gt;
    Thread wait_event: 0xffffff8006608a6c&lt;br /&gt;
    Kernel stack: &lt;br /&gt;
    machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)&lt;br /&gt;
      0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)&lt;br /&gt;
        thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)&lt;br /&gt;
          lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)&lt;br /&gt;
            0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)&lt;br /&gt;
              msleep (in mach_kernel) + 116 (0xffffff800056a2e4)&lt;br /&gt;
                0xffffff7f80e52a76 (0xffffff7f80e52a76)&lt;br /&gt;
                  0xffffff7f80e53fae (0xffffff7f80e53fae)&lt;br /&gt;
                    0xffffff7f80e54173 (0xffffff7f80e54173)&lt;br /&gt;
                      0xffffff7f80f1a870 (0xffffff7f80f1a870)&lt;br /&gt;
                        0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)&lt;br /&gt;
                          0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)&lt;br /&gt;
                            0xffffff7f80f1b65f (0xffffff7f80f1b65f)&lt;br /&gt;
                              0xffffff7f80f042ee (0xffffff7f80f042ee)&lt;br /&gt;
                                0xffffff7f80f45c5b (0xffffff7f80f45c5b)&lt;br /&gt;
                                  0xffffff7f80f4ce92 (0xffffff7f80f4ce92)&lt;br /&gt;
                                    spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)&lt;br /&gt;
                                      VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It is a shame that it only shows the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbol files. Fix this, Apple.)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo kextstat #grab the addresses of SPL and ZFS again&lt;br /&gt;
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \&lt;br /&gt;
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ &lt;br /&gt;
&lt;br /&gt;
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym&lt;br /&gt;
              0xffffff800056a2e4 (0xffffff800056a2e4)&lt;br /&gt;
                spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)&lt;br /&gt;
                  taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)&lt;br /&gt;
                    taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)&lt;br /&gt;
                      0xffffff7f80f1a870 (0xffffff7f80f1a870)&lt;br /&gt;
&lt;br /&gt;
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym&lt;br /&gt;
                    0xffffff7f80e54173 (0xffffff7f80e54173)&lt;br /&gt;
                      vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)&lt;br /&gt;
                        vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)&lt;br /&gt;
                          vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)&lt;br /&gt;
                            vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)&lt;br /&gt;
                              spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Voilà!&lt;br /&gt;
&lt;br /&gt;
=== Memory leaks ===&lt;br /&gt;
&lt;br /&gt;
In some cases, you may suspect memory issues, for instance if you saw the following panic:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
panic(cpu 1 caller 0xffffff80002438d8): &amp;quot;zalloc: \&amp;quot;kalloc.1024\&amp;quot; (100535 elements) retry fail 3, kfree_nop_count: 0&amp;quot;@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To debug this, you can attach GDB and use the zprint command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
(gdb) zprint&lt;br /&gt;
ZONE                   COUNT   TOT_SZ   MAX_SZ   ELT_SZ ALLOC_SZ         TOT_ALLOC         TOT_FREE NAME&lt;br /&gt;
0xffffff8002a89250   1620133  18c1000  22a3599       16     1000         125203838        123583705 kalloc.16 CX&lt;br /&gt;
0xffffff8006306c50    110335   35f000   4ce300       32     1000          13634985         13524650 kalloc.32 CX&lt;br /&gt;
0xffffff8006306a00    133584   82a000   e6a900       64     1000          26510120         26376536 kalloc.64 CX&lt;br /&gt;
0xffffff80063067b0    610090  4a84000  614f4c0      128     1000          50524515         49914425 kalloc.128 CX&lt;br /&gt;
0xffffff8006306560   1070398 121a2000 1b5e4d60      256     1000          72534632         71464234 kalloc.256 CX&lt;br /&gt;
0xffffff8006306310    399302  d423000  daf26b0      512     1000          39231204         38831902 kalloc.512 CX&lt;br /&gt;
0xffffff80063060c0    100404  6231000  c29e980     1024     1000          22949693         22849289 kalloc.1024 CX&lt;br /&gt;
0xffffff8006305e70       292    9a000   200000     2048     1000          77633725         77633433 kalloc.2048 CX&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this case, kalloc.256 is suspect.&lt;br /&gt;
&lt;br /&gt;
Reboot the kernel with zlog=kalloc.256 on the command line; then we can use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
(gdb) findoldest                                                                &lt;br /&gt;
oldest record is at log index 393:&lt;br /&gt;
&lt;br /&gt;
--------------- ALLOC  0xffffff803276ec00 : index 393  :  ztime 21643824 -------------&lt;br /&gt;
0xffffff800024352e &amp;lt;zalloc_canblock+78&amp;gt;:        mov    %eax,-0xcc(%rbp)&lt;br /&gt;
0xffffff80002245bd &amp;lt;get_zone_search+23&amp;gt;:        jmpq   0xffffff80002246d8 &amp;lt;KALLOC_ZINFO_SALLOC+35&amp;gt;&lt;br /&gt;
0xffffff8000224c39 &amp;lt;OSMalloc+89&amp;gt;:       mov    %rax,-0x18(%rbp)&lt;br /&gt;
0xffffff7f80e847df &amp;lt;zfs_kmem_alloc+15&amp;gt;: mov    %rax,%r15&lt;br /&gt;
0xffffff7f80e90649 &amp;lt;arc_buf_alloc+41&amp;gt;:  mov    %rax,-0x28(%rbp)&lt;br /&gt;
and indeed, we can list any index:&lt;br /&gt;
&lt;br /&gt;
(gdb) zstack 394&lt;br /&gt;
&lt;br /&gt;
--------------- ALLOC  0xffffff8032d60700 : index 394  :  ztime 21648810 -------------&lt;br /&gt;
0xffffff800024352e &amp;lt;zalloc_canblock+78&amp;gt;:        mov    %eax,-0xcc(%rbp)&lt;br /&gt;
0xffffff80002245bd &amp;lt;get_zone_search+23&amp;gt;:        jmpq   0xffffff80002246d8 &amp;lt;KALLOC_ZINFO_SALLOC+35&amp;gt;&lt;br /&gt;
0xffffff8000224c39 &amp;lt;OSMalloc+89&amp;gt;:       mov    %rax,-0x18(%rbp)&lt;br /&gt;
0xffffff7f80e847df &amp;lt;zfs_kmem_alloc+15&amp;gt;: mov    %rax,%r15&lt;br /&gt;
0xffffff7f80e90649 &amp;lt;arc_buf_alloc+41&amp;gt;:  mov    %rax,-0x28(%rbp)&lt;br /&gt;
How many times was zfs_kmem_alloc involved in the leaked allocs?&lt;br /&gt;
&lt;br /&gt;
(gdb) countpcs 0xffffff7f80e847df&lt;br /&gt;
occurred 3999 times in log (100% of records)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
At least we know it is our fault.&lt;br /&gt;
&lt;br /&gt;
How many times is it arc_buf_alloc?&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;text&amp;quot;&amp;gt;&lt;br /&gt;
(gdb) countpcs 0xffffff7f80e90649&lt;br /&gt;
occurred 2390 times in log (59% of records)&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
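&lt;br /&gt;
As a quick sanity check of the reported figure (assuming the log prints the truncated integer percentage):&lt;br /&gt;
&lt;br /&gt;
```python
# 2390 of 3999 logged allocations passed through arc_buf_alloc;
# truncated integer percentage, as countpcs appears to print it
records = 3999
hits = 2390
share = hits * 100 // records
print(share)   # 59
```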
&lt;br /&gt;
=== Memory Architecture ===&lt;br /&gt;
&lt;br /&gt;
ZFS is designed to aggressively cache filesystem data in main memory. The result of this caching can be significant filesystem performance improvement.&lt;br /&gt;
&lt;br /&gt;
Selecting an allocator has been very challenging on OSX. Over the last year we have evolved through:&lt;br /&gt;
* Direct call to OSMalloc - a very low-level allocator in the kernel - rejected because of slow performance and because the minimum allocation size is one page (4k).&lt;br /&gt;
* Direct call to zalloc - the OSX zones allocator - rejected because only 25% of the machine's memory can be accessed (50% under some circumstances), and because the result of exceeding this limit is a kernel panic with no other feedback mechanism available.&lt;br /&gt;
* Direct call to bmalloc - a home-grown slice allocator that allocated slices of memory from the kernel page allocator and subdivided them into smaller units for use by ZFS. This was quite successful but very space inefficient. It was used in O3X 1.27 and 1.3.0. At this stage we had no real response to memory pressure in the machine, so the total memory allocation to O3X was kept to 50% of the machine.&lt;br /&gt;
* Implementation of kmem and vmem allocators using code from Illumos, plus a memory pressure monitor mechanism - we are now able to allocate most of the machine's memory to ZFS, and scale that back when the machine experiences memory pressure.&lt;br /&gt;
&lt;br /&gt;
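The slice idea behind bmalloc can be sketched in a few lines of Python (an illustrative model with assumed constants, not the actual kernel code):&lt;br /&gt;
&lt;br /&gt;
```python
# Illustrative model of a bmalloc-style slice allocator: grab big chunks
# from the page allocator and carve them into fixed-size units.
CHUNK = 512 * 1024   # bytes taken from the page allocator per request
UNIT = 256           # fixed allocation unit handed out to callers

class SliceAllocator:
    def __init__(self):
        self.free_units = []   # byte offsets of free units
        self.chunks = 0        # how many big chunks we have taken

    def _refill(self):
        # ask the (simulated) page allocator for one more big chunk
        base = self.chunks * CHUNK
        self.chunks += 1
        self.free_units.extend(range(base, base + CHUNK, UNIT))

    def alloc(self):
        if not self.free_units:
            self._refill()
        return self.free_units.pop()

    def free(self, offset):
        self.free_units.append(offset)

a = SliceAllocator()
x = a.alloc()
a.free(x)
print(a.chunks)   # 1 - one page-allocator call served the allocations
```
&lt;br /&gt;
The space inefficiency mentioned above follows from this shape: every allocation is rounded up to a fixed unit, and a whole chunk stays wired even when only a few of its units are in use.&lt;br /&gt;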
O3X has the Solaris Porting Layer (SPL). The SPL has long provided the Illumos kmem.h API for use by ZFS. In O3X releases up to 1.3.0 the kmem implementation was a stub that passed allocation requests to an underlying allocator. In O3X 1.3.0 we were still missing some key behaviours in the allocator - efficient lifecycle control of objects and an effective response to memory pressure in the machine - and the allocator was not very space efficient because of metadata overheads in bmalloc. We were also not convinced that bmalloc represented the state of the art.&lt;br /&gt;
&lt;br /&gt;
Our strategy was to determine how much of the Illumos allocator could be implemented on OSX. After a series of experiments in which we implemented significant portions of the kmem code from Illumos on top of bmalloc, we had learned enough to take the final step of essentially copying the entire kmem/vmem allocator stack from Illumos. Some portions of the kmem code, such as logging and hot-swap CPU support, have been disabled due to architectural differences between OSX and Illumos.&lt;br /&gt;
&lt;br /&gt;
By default kmem/vmem require a certain level of performance from the OS page allocator, and it is easy to overwhelm the OSX page allocator. We tuned vmem to use 512KB chunks of memory from the page allocator rather than the smaller allocations that vmem prefers. This is less than ideal, as it reduces vmem's ability to smoothly release memory to the page allocator when the machine is under pressure. While we have an adequately performing solution now, there will always be a tension between our allocator and OSX itself. OSX provides only minimal mechanisms to observe and respond to memory pressure in the machine, so we are somewhat limited in what can be achieved in this regard.&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
&lt;br /&gt;
Jeff Bonwick's paper (kmem and vmem implement this design): https://www.usenix.org/legacy/event/usenix01/full_papers/bonwick/bonwick_html/&lt;br /&gt;
&lt;br /&gt;
== Flamegraphs ==&lt;br /&gt;
&lt;br /&gt;
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.&lt;br /&gt;
&lt;br /&gt;
dtrace the kernel while running a command:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo dtrace -x stackframes=100 -n 'profile-997 /arg0/ {&lt;br /&gt;
    @[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
It will run for 60 seconds.&lt;br /&gt;
&lt;br /&gt;
Convert it to a flamegraph:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ ./stackcollapse.pl out.stacks &amp;gt; out.folded&lt;br /&gt;
$ ./flamegraph.pl out.folded &amp;gt; out.svg&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This is &amp;lt;code&amp;gt;rsync -a /usr/ /BOOM/deletea/&amp;lt;/code&amp;gt; running:&lt;br /&gt;
&lt;br /&gt;
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Or running '''Bonnie++''' in various stages:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery mode=&amp;quot;packed-hover&amp;quot;&amp;gt;&lt;br /&gt;
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]&lt;br /&gt;
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order&lt;br /&gt;
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:VX_create.svg|thumb|Create files in sequential order]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:iozone.svg|thumb|IOzone flamegraph]]&lt;br /&gt;
&lt;br /&gt;
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
------&lt;br /&gt;
&lt;br /&gt;
== Iozone ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A quick peek at how HFS+ and ZFS compare, just to see how much we should improve.&lt;br /&gt;
&lt;br /&gt;
The HFS+ and ZFS file systems were created on the same virtual disk in VMware. Of course, this is not an ideal test setup, but it should serve as an indicator.&lt;br /&gt;
&lt;br /&gt;
The pool was created with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zpool create -f -o ashift=12 \&lt;br /&gt;
-O atime=off \&lt;br /&gt;
-O casesensitivity=insensitive \&lt;br /&gt;
-O normalization=formD \&lt;br /&gt;
BOOM /dev/disk1&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
and the HFS+ file system was created with the standard OS X Disk Utility.app, with everything default (journaled, case-insensitive).&lt;br /&gt;
&lt;br /&gt;
'''Iozone''' was run in standard auto mode:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
sudo iozone -a -b outfile.xls&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:hfs2_read.png|thumb|HFS+ read]]&lt;br /&gt;
[[File:hfs2_write.png|thumb|HFS+ write]]&lt;br /&gt;
[[File:zfs2_read.png|thumb|ZFS read]]&lt;br /&gt;
[[File:zfs2_write.png|thumb|ZFS write]]&lt;br /&gt;
&lt;br /&gt;
As a guess, writes need to double, and reads need to triple.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== VFS ===&lt;br /&gt;
&lt;br /&gt;
[[VFS]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== File-based zpools for testing==&lt;br /&gt;
&lt;br /&gt;
* create 2 files (each 100 MB) to be used as block devices:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ dd if=/dev/zero bs=1m count=100 of=vdisk1&lt;br /&gt;
$ dd if=/dev/zero bs=1m count=100 of=vdisk2&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* attach files as raw disk images:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount vdisk1&lt;br /&gt;
/dev/disk2&lt;br /&gt;
$ hdiutil attach -imagekey diskimage-class=CRawDiskImage -nomount vdisk2&lt;br /&gt;
/dev/disk3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* create mirrored zpool:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive -O normalization=formD tank mirror disk2 disk3&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* show zpool:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zpool status&lt;br /&gt;
  pool: tank&lt;br /&gt;
 state: ONLINE&lt;br /&gt;
  scan: none requested&lt;br /&gt;
config:&lt;br /&gt;
&lt;br /&gt;
	NAME        STATE     READ WRITE CKSUM&lt;br /&gt;
	tank        ONLINE       0     0     0&lt;br /&gt;
	  mirror-0  ONLINE       0     0     0&lt;br /&gt;
	    disk2   ONLINE       0     0     0&lt;br /&gt;
	    disk3   ONLINE       0     0     0&lt;br /&gt;
&lt;br /&gt;
errors: No known data errors&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* test ZFS features, find bugs, ...&lt;br /&gt;
&lt;br /&gt;
* export zpool:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ sudo zpool export tank&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* detach raw images:&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot;&amp;gt;&lt;br /&gt;
$ hdiutil detach disk2&lt;br /&gt;
&amp;quot;disk2&amp;quot; unmounted.&lt;br /&gt;
&amp;quot;disk2&amp;quot; ejected.&lt;br /&gt;
$ hdiutil detach disk3&lt;br /&gt;
&amp;quot;disk3&amp;quot; unmounted.&lt;br /&gt;
&amp;quot;disk3&amp;quot; ejected.&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Platform differences ==&lt;br /&gt;
&lt;br /&gt;
This section attempts to outline the differences between the ZFS versions on other platforms and the OSX version, to assist developers who are new to the Apple platform and wish to help with, or understand, development of the O3X version.&lt;br /&gt;
&lt;br /&gt;
=== Reclaim ===&lt;br /&gt;
&lt;br /&gt;
One of the biggest hassles with OSX is the VFS layer's handling of reclaim. First it is worth noting that &amp;quot;struct vnode&amp;quot; is an opaque type, so we are not allowed to see, nor modify, the contents of a vnode.&lt;br /&gt;
(Of course, we could craft a mirror struct of vnode and tailor it to each OSX version where vnode changes. But that is rather hacky.)&lt;br /&gt;
&lt;br /&gt;
Following that, the '''only''' place where you can set the '''vtype''' (VREG, VDIR), '''vdata''' (user pointer to hold the ZFS znode), '''vfsops''' (list of filesystem calls &amp;quot;vnops&amp;quot;), etc., is '''in the call to vnode_create()'''. &lt;br /&gt;
So there is no way to &amp;quot;allocate an empty vnode, and set its values later&amp;quot;. The FreeBSD method of pre-allocating vnodes, to avoid reclaim, cannot be done. &lt;br /&gt;
ZFS will start a new dmu_tx, then call zfs_mknode, which will eventually call vnode_create, so we cannot do anything with dmu_tx in those vnops.&lt;br /&gt;
&lt;br /&gt;
The problem is that if vnode_create decides to reclaim, it will do so directly, on the same thread. It will end up in vclean(), which can call vnop_fsync, vnop_pageout, vnop_inactive and vnop_reclaim. In the first three of these calls, we can&lt;br /&gt;
use the API call vnode_isrecycled() to detect whether the vnop was called &amp;quot;the normal way&amp;quot; or from vclean. If we come from vclean, and the vnode is doomed, we do as little as possible. We cannot open a new TX, and&lt;br /&gt;
we cannot use mutex locks (panic: locking against ourselves).&lt;br /&gt;
&lt;br /&gt;
Nor is there any way to defer, or delay, a doomed vnode. If vnop_reclaim returns anything but 0, you find the lovely XNU code of &lt;br /&gt;
 2205         if (VNOP_RECLAIM(vp, ctx))&lt;br /&gt;
 2206                 panic(&amp;quot;vclean: cannot reclaim&amp;quot;);&lt;br /&gt;
in vfs_subr.c&lt;br /&gt;
&lt;br /&gt;
As for vnop_reclaim, it requires a little more work. We need to detect if curthread is coming via the inside of vnode_create. Currently, the call to vnode_create looks like:&lt;br /&gt;
&lt;br /&gt;
 reentry.id = curthread;&lt;br /&gt;
 mutex_enter(&amp;amp;zfsvfs-&amp;gt;z_reentry_lock);&lt;br /&gt;
 '''list_insert_head'''(&amp;amp;zfsvfs-&amp;gt;z_reentry_threads, &amp;amp;reentry);&lt;br /&gt;
 mutex_exit(&amp;amp;zfsvfs-&amp;gt;z_reentry_lock);&lt;br /&gt;
 '''vnode_create'''(VNCREATE_FLAVOR, VCREATESIZE, &amp;amp;vfsp, vpp);&lt;br /&gt;
 mutex_enter(&amp;amp;zfsvfs-&amp;gt;z_reentry_lock);&lt;br /&gt;
 '''list_remove'''(&amp;amp;zfsvfs-&amp;gt;z_reentry_threads, &amp;amp;reentry);&lt;br /&gt;
 mutex_exit(&amp;amp;zfsvfs-&amp;gt;z_reentry_lock);&lt;br /&gt;
&lt;br /&gt;
That is to say, add a node (carrying only curthread id) to the list z_reentry_threads in zfsvfs. Then at the start of vnop_reclaim, we can test curthread against this list to see if the current thread is inside a call to&lt;br /&gt;
vnode_create.&lt;br /&gt;
If we are not inside vnode_create (zp-&amp;gt;z_reclaim_reentry = FALSE) we can handle the reclaim as upstream, either calling zfs_znode_free or zfs_zinactive depending on the value of z_sa_hdl.&lt;br /&gt;
However, if we are inside vnode_create (zp-&amp;gt;z_reclaim_reentry = TRUE) we call zfs_zinactive, but IF zp-&amp;gt;unlinked is set, and we end up in zfs_rmnode(), this function will return early, before it does the &amp;quot;final TX&amp;quot;.&lt;br /&gt;
At the end of vnop_reclaim, we then add the znode (zp) to zfsvfs-&amp;gt;z_reclaim_list. Note that the znode has not finished reclaim - zfs_znode_free() has not yet been called - but its z_vnode is now set to NULL (since vnop_reclaim was called, we have no choice but to release it).&lt;br /&gt;
&lt;br /&gt;
The reclaim_thread (started from zfs_vfsops creation of zfsvfs) will wake up, see the znode on the reclaim_list, and do the final TX, lifted from zfs_rmnode. It then finally calls zfs_znode_free(). As it is run as a separate thread, dmu_tx and dmu_tx_commit will succeed.&lt;br /&gt;
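&lt;br /&gt;
The reentry detection described above can be simulated in user space. This Python sketch mirrors the names from the snippet (z_reentry_threads, vnode_create, vnop_reclaim) but is purely illustrative, not the kernel implementation:&lt;br /&gt;
&lt;br /&gt;
```python
import threading

class ZfsVfs:
    def __init__(self):
        self.z_reentry_lock = threading.Lock()
        self.z_reentry_threads = []

    def vnode_create(self, reclaim_immediately=False):
        # record that this thread is inside vnode_create
        tid = threading.get_ident()
        with self.z_reentry_lock:
            self.z_reentry_threads.append(tid)
        try:
            # XNU may decide to reclaim directly, on this same thread
            if reclaim_immediately:
                return self.vnop_reclaim()
            return 'created'
        finally:
            with self.z_reentry_lock:
                self.z_reentry_threads.remove(tid)

    def vnop_reclaim(self):
        with self.z_reentry_lock:
            reentry = threading.get_ident() in self.z_reentry_threads
        if reentry:
            # inside vnode_create: defer the final TX to the reclaim thread
            return 'deferred'
        return 'reclaimed'
```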
&lt;br /&gt;
=== Fastpath vs Recycle ===&lt;br /&gt;
&lt;br /&gt;
Another interesting aspect is that Illumos has a delete fastpath. In zfs_remove, if it is detected that the znode can be &amp;quot;deleted_now&amp;quot;, it marks the vnode as free and directly calls zfs_znode_delete(); if it cannot, it adds the znode via zfs_unlinked_add().&lt;br /&gt;
&lt;br /&gt;
In OSX, there is no way to directly release a vnode; XNU always has full control of the vnodes. Even if you call vnode_recycle(), the vnode is not released '''until''' vnop_reclaim is called. The vnode can just be marked for later reclaim, but remain active (especially if you are racing against other threads using the same vnode). So in zfs_remove, we attempt to call vnode_recycle(), and only if this returns &amp;quot;1&amp;quot; do we know that vnop_reclaim was called and we can directly call zfs_znode_delete(). Note that the O3X vnop_reclaim handler then has special code to not do anything with the vnode (zp-&amp;gt;z_fastpath) but to only clear out the z_vnode and return.&lt;br /&gt;
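&lt;br /&gt;
The decision above can be condensed into a sketch (zfs_remove_sim and its return strings are illustrative stand-ins, not the actual O3X functions):&lt;br /&gt;
&lt;br /&gt;
```python
def zfs_remove_sim(vnode_recycle):
    # vnode_recycle models XNU vnode_recycle(): per the text, it returns 1
    # only when vnop_reclaim ran immediately, so the znode can be deleted now
    if vnode_recycle() == 1:
        return 'zfs_znode_delete'   # fastpath: delete directly
    return 'zfs_unlinked_add'       # vnode lingers; defer via unlinked list

print(zfs_remove_sim(lambda: 1))   # zfs_znode_delete
print(zfs_remove_sim(lambda: 0))   # zfs_unlinked_add
```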
&lt;br /&gt;
There is also a little special lock-handling in zfs_zinactive, since we can call it from inside of a vnode_create() which is called by ZFS with locks held. If this is the case, we do not attempt to acquire locks in zfs_zinactive.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Performance</id>
		<title>Performance</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Performance"/>
				<updated>2014-04-13T01:07:50Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: /* Memory Utilization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
Currently OpenZFS on OS X is in active development, with priority given to stability and integration enhancements before performance.&lt;br /&gt;
&lt;br /&gt;
== Performance Tuning ==&lt;br /&gt;
&lt;br /&gt;
=== Memory Utilization ===&lt;br /&gt;
&lt;br /&gt;
ZFS uses memory caching to improve performance. Filesystem metadata, data, and the de-duplication tables (DDT) are stored in the ARC, allowing frequently accessed data to be accessed many times faster than is possible from hard disk media. The performance that you experience with ZFS can be directly linked to the amount of memory that is allocated to the ARC.&lt;br /&gt;
&lt;br /&gt;
O3X 1.2.0 sets the ARC size to just 6% of your computer's main memory. This design decision was made for stability reasons, as it's not hard to exhaust the built-in kernel memory allocator, which allocates memory from a pool that defaults to 25% of your computer's main memory.&lt;br /&gt;
&lt;br /&gt;
The O3X project will soon release a version of the code that contains a replacement memory allocator that can access much more of your computer's memory. The allocator was recently integrated into the 'master' branch in the source repository. With the introduction of the new allocator, the default ARC size has been raised to 25% of main memory. This has resulted in significant performance improvements. Due to space efficiency limitations in the allocator, this will result in up to 40% of your computer's memory being occupied by ZFS.&lt;br /&gt;
&lt;br /&gt;
O3X uses Wired (non-pageable) memory for the ARC. Activity Monitor and top can display the total amount of Wired memory allocated. The spl.kmem_bytes_total sysctl contains the amount of memory allocated by ZFS. The difference between these figures reflects memory use by the kernel and the efficiency factor of the allocator. As a real-world example, a 32GB iMac has its ARC set to 8GB; approximately 7GB of RAM allocated by ZFS results in approximately 10.5GB of Wired RAM in use.&lt;br /&gt;
&lt;br /&gt;
The size of the ARC can be controlled by altering the value stored in the sysctl zfs.arc_max, which can be set by:&lt;br /&gt;
&lt;br /&gt;
::sudo sysctl -w zfs.arc_max=&amp;lt;arc size in bytes&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When configuring a large amount of ARC storage, the total allocated memory should be carefully monitored, as the machine's performance may suffer if too much memory is allocated.&lt;br /&gt;
&lt;br /&gt;
Future improvements to the allocator will improve its space efficiency for small allocations, which will address some of the limitations present in current code.&lt;br /&gt;
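&lt;br /&gt;
For example, a hypothetical 8GB limit (the figure from the iMac example above) works out in bytes as:&lt;br /&gt;
&lt;br /&gt;
```python
# Convert a hypothetical 8GB ARC limit into the byte value zfs.arc_max expects
gigabytes = 8
arc_max = gigabytes * 1024 ** 3
print(arc_max)   # 8589934592
```
&lt;br /&gt;
which would then be applied with sudo sysctl -w zfs.arc_max=8589934592.&lt;br /&gt;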
&lt;br /&gt;
== Current Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
In order to establish a baseline for the current performance of OpenZFS on OS X, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8GB of RAM, an OS boot drive, and a second 5GB HDD containing a ZFS dataset. The HDDs are standard VMware .vmdk files.&lt;br /&gt;
&lt;br /&gt;
The test zpool was created using the following command:&lt;br /&gt;
&lt;br /&gt;
::zpool create -o ashift=12 -f tank &amp;lt;disk device name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The benchmark consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
# Start the VM.&lt;br /&gt;
# Import the tank dataset&lt;br /&gt;
# Execute -&amp;gt; mkdir /tank/tmp &amp;amp;&amp;amp; cd /tank/tmp&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record Time 1&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record time 2&lt;br /&gt;
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The results are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Performance</id>
		<title>Performance</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Performance"/>
				<updated>2014-04-13T01:02:41Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: /* Memory Utilization */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
Currently OpenZFS on OS X is in active development, with priority given to stability and integration enhancements before performance.&lt;br /&gt;
&lt;br /&gt;
== Performance Tuning ==&lt;br /&gt;
&lt;br /&gt;
=== Memory Utilization ===&lt;br /&gt;
&lt;br /&gt;
ZFS uses memory caching to improve performance. Filesystem metadata, data, and the de-duplication tables (DDT) are stored in the ARC, allowing frequently accessed data to be accessed many times faster than is possible from hard disk media. The performance that you experience with ZFS can be directly linked to the amount of memory that is allocated to the ARC.&lt;br /&gt;
&lt;br /&gt;
O3X 1.2.0 sets the ARC size to just 6% of your computer's main memory. This design decision was made for stability reasons, as it's not hard to exhaust the built-in kernel memory allocator, which allocates memory from a pool that defaults to 25% of your computer's main memory.&lt;br /&gt;
&lt;br /&gt;
The O3X project will soon release a version of the code that contains a replacement memory allocator that can access much more of your computer's memory. The allocator was recently integrated into the 'master' branch in the source repository. With the introduction of the new allocator, the default ARC size has been raised to 25% of main memory. This has resulted in significant performance improvements. Due to space efficiency limitations in the allocator, this will result in up to 40% of your computer's memory being occupied by ZFS.&lt;br /&gt;
&lt;br /&gt;
O3X uses Wired (non-pageable) memory for the ARC. Activity Monitor and top can display the total amount of Wired memory allocated. The spl.kmem_bytes_total sysctl contains the amount of memory allocated by ZFS. The difference between these figures reflects memory use by the kernel and the efficiency factor of the allocator. As a real-world example, a 32GB iMac has its ARC set to 8GB; approximately 7GB of RAM allocated by ZFS results in approximately 10.5GB of Wired RAM in use.&lt;br /&gt;
&lt;br /&gt;
The size of the ARC can be controlled by altering the value stored in the sysctl zfs.arc_max, which can be set by:&lt;br /&gt;
&lt;br /&gt;
::sudo sysctl -w zfs.arc_max=&amp;lt;arc size in bytes&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When configuring a large amount of ARC storage, the total allocated memory should be carefully monitored, as the machine's performance may suffer if too much memory is allocated.&lt;br /&gt;
&lt;br /&gt;
Future improvements to the allocator will improve its space efficiency for small allocations, which will address some of the limitations present in current code.&lt;br /&gt;
&lt;br /&gt;
== Current Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
In order to establish a baseline for the current performance of OpenZFS on OS X, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8GB of RAM, an OS boot drive, and a second 5GB HDD containing a ZFS dataset. The HDDs are standard VMware .vmdk files.&lt;br /&gt;
&lt;br /&gt;
The test zpool was created using the following command:&lt;br /&gt;
&lt;br /&gt;
::zpool create -o ashift=12 -f tank &amp;lt;disk device name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The benchmark consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
# Start the VM.&lt;br /&gt;
# Import the tank dataset&lt;br /&gt;
# Execute -&amp;gt; mkdir /tank/tmp &amp;amp;&amp;amp; cd /tank/tmp&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record Time 1&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record time 2&lt;br /&gt;
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The results are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Performance</id>
		<title>Performance</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Performance"/>
				<updated>2014-04-12T23:46:16Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
Currently OpenZFS on OS X is in active development, with priority given to stability and integration enhancements before performance.&lt;br /&gt;
&lt;br /&gt;
== Performance Tuning ==&lt;br /&gt;
&lt;br /&gt;
=== Memory Utilization ===&lt;br /&gt;
&lt;br /&gt;
ZFS uses memory caching to improve performance. Filesystem meta-data, data, and the De-duplication Tables (DDT) are stored in the ARC, allowing frequently accessed data to be accessed many times faster than possible from hard disk media. The performance that you experience with ZFS can be directly linked to the amount of memory that is allocated to the ARC.&lt;br /&gt;
&lt;br /&gt;
O3X 1.2.0 sets the ARC size to just 6% of your computers main memory. This design decision was made for stability reasons as its not hard to exhaust the built in kernel memory allocator, which allocates memory from a pool that defaults to 25% of your computers main memory.&lt;br /&gt;
&lt;br /&gt;
The O3X project will soon release a version of the code that contains a replacement memory allocator that can access much more of your computers memory. The allocator was recently integrated into the 'master' branch in the source repository. With the introduction of the new allocator, the default ARC size has been raised to 25% of main memory. This has resulted in significant performance improvements. Due to space efficiency limitations in the allocator, this will result in up to 40% of your computers memory being occupied by ZFS. &lt;br /&gt;
&lt;br /&gt;
O3X uses Wired (non-pageable) memory for the ARC. Activity Monitor and Top can display the total amount of Wired memory allocated. the spl.kmem_bytes_total sysctl contains the amount of memory allocated by ZFS. The difference between these figures reflects memory use by the kernel and the efficiency factor of the allocator. A real word example, a 32GB iMac, has ARC set to 8GB. Approximately 7GB RAM allocated by ZFS, results in Approximately 10.5GB of Wired RAM in use.&lt;br /&gt;
&lt;br /&gt;
The size of the ARC can be controlled by altering the value of the zfs.arc_max sysctl:&lt;br /&gt;
&lt;br /&gt;
::sudo sysctl -w zfs.arc_max=&amp;lt;arc size in bytes&amp;gt;&lt;br /&gt;
&lt;br /&gt;
When configuring a large ARC, carefully monitor the total allocated memory, as the machine's performance may suffer if too much memory is wired.&lt;br /&gt;
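&lt;br /&gt;
As an illustration, a cap of 25% of installed RAM (mirroring the new allocator's default mentioned above) could be computed like this. hw.memsize is the standard OS X sysctl for installed memory; the fallback value and the 25% fraction are illustrative, not a recommendation:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Sketch: derive an ARC cap as a fraction of installed RAM.
# The 25% fraction mirrors the default discussed above; tune to taste.
total=$(sysctl -n hw.memsize 2>/dev/null)
[ -n "$total" ] || total=34359738368   # fall back to 32GiB for illustration
arc_max=$((total / 4))
echo "zfs.arc_max candidate: $arc_max bytes"
# To apply on a machine with O3X installed (requires root):
#   sudo sysctl -w zfs.arc_max=$arc_max
```
&lt;br /&gt;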
&lt;br /&gt;
Planned improvements to the allocator will increase its space efficiency for small allocations, addressing some of the limitations present in the current code.&lt;br /&gt;
 &lt;br /&gt;
== Current Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
In order to establish a baseline of the current performance of OpenZFS on OS X, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8GB of RAM, an OS boot drive, and a second 5GB HDD containing a ZFS dataset. The virtual HDDs are standard VMware virtual disk (.vmdk) files.&lt;br /&gt;
&lt;br /&gt;
The test zpool was created using the following command:&lt;br /&gt;
&lt;br /&gt;
::zpool create -o ashift=12 -f tank &amp;lt;disk device name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The benchmark consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
# Start the VM.&lt;br /&gt;
# Import the tank pool.&lt;br /&gt;
# Execute -&amp;gt; mkdir /tank/tmp &amp;amp;&amp;amp; cd /tank/tmp&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record time 1.&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record time 2.&lt;br /&gt;
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.&lt;br /&gt;
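&lt;br /&gt;
Scripted, the in-VM portion of those steps might look like the following dry-run sketch. It only prints the commands, since the tank pool and iozone are assumed to exist only on the real test VMs:&lt;br /&gt;
&lt;br /&gt;
```shell
#!/bin/sh
# Dry-run sketch of the benchmark steps run inside each VM.
# 'run' just prints each command; swap the echo for "$@" to execute
# on a machine that really has the tank pool and iozone installed.
run() { echo "+ $*"; }

run sudo zpool import tank   # step 2: import the pool
run mkdir -p /tank/tmp       # step 3
run cd /tank/tmp
run time iozone -a           # step 4: record time 1
run time iozone -a           # step 6: record time 2
```
&lt;br /&gt;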
&lt;br /&gt;
&lt;br /&gt;
The results are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png</id>
		<title>File:ZFS iozone time.png</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png"/>
				<updated>2014-04-11T06:21:54Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Brendon uploaded a new version of &amp;amp;quot;File:ZFS iozone time.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simple benchmarking of ZFS implementations for comparison purposes.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png</id>
		<title>File:ZFS iozone time.png</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png"/>
				<updated>2014-04-11T06:21:53Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Brendon uploaded a new version of &amp;amp;quot;File:ZFS iozone time.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simple benchmarking of ZFS implementations for comparison purposes.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Performance</id>
		<title>Performance</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Performance"/>
				<updated>2014-04-10T10:04:25Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
Currently OpenZFS on OSX is under heavy development, with priority being given to completion of features over performance.&lt;br /&gt;
&lt;br /&gt;
== Current Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
In order to establish a baseline of current performance of OpenZFS on OSX, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8GB of RAM, an OS boot drive, and a 5 GB second HDD containing a ZFS dataset. The HDDs are standard VMware .vmx files. &lt;br /&gt;
&lt;br /&gt;
The test zpool was created using the following command:&lt;br /&gt;
&lt;br /&gt;
::zpool create -o ashift=12 -f tank &amp;lt;disk device name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The benchmark consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
# Start the VM.&lt;br /&gt;
# Import the tank dataset&lt;br /&gt;
# Execute -&amp;gt; mkdir /tank/tmp &amp;amp;&amp;amp; cd /tank/tmp&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record Time 1&lt;br /&gt;
# Execute -&amp;gt; time iozone -a&lt;br /&gt;
# Record time 2&lt;br /&gt;
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The results are as follows:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png</id>
		<title>File:ZFS iozone time.png</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png"/>
				<updated>2014-04-10T09:57:44Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Brendon uploaded a new version of &amp;amp;quot;File:ZFS iozone time.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simple benchmarking of ZFS implementations for comparison purposes.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png</id>
		<title>File:ZFS iozone time.png</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png"/>
				<updated>2014-04-10T09:55:56Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Brendon uploaded a new version of &amp;amp;quot;File:ZFS iozone time.png&amp;amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simple benchmarking of ZFS implementations for comparison purposes.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Performance</id>
		<title>Performance</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Performance"/>
				<updated>2014-04-10T09:53:56Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Created page with &amp;quot; == Status ==  Currently OpenZFS on OSX is under heavy development, with priority being given to completion of features over performance.  == Current Benchmarks ==  In order t...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
== Status ==&lt;br /&gt;
&lt;br /&gt;
Currently OpenZFS on OSX is under heavy development, with priority being given to completion of features over performance.&lt;br /&gt;
&lt;br /&gt;
== Current Benchmarks ==&lt;br /&gt;
&lt;br /&gt;
In order to establish a baseline of current performance of OpenZFS on OSX, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8GB of RAM, an OS boot drive, and a 5 GB second HDD containing a ZFS dataset. The HDDs are standard VMware .vmx files. &lt;br /&gt;
&lt;br /&gt;
::zpool create -o ashift=12 -f tank &amp;lt;disk device name&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The benchmark consists of the following steps:&lt;br /&gt;
&lt;br /&gt;
# Start the VM.&lt;br /&gt;
# Import the tank dataset&lt;br /&gt;
# mkdir /tank/tmp &amp;amp;&amp;amp; cd /tank/tmp&lt;br /&gt;
# time iozone -a&lt;br /&gt;
# time iozone -a&lt;br /&gt;
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.&lt;br /&gt;
&lt;br /&gt;
The results are as follows:&lt;br /&gt;
&lt;br /&gt;
[[File:ZFS_iozone_time.png|frame|Comparison of ZFS implementations]]&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/Main_Page</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/Main_Page"/>
				<updated>2014-04-10T09:33:30Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Add Performance/benchmarking page.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__NOTOC__ __NOEDITSECTION__&lt;br /&gt;
'''Welcome to the  [[O3XWiki:About|O3XWiki]]: your source for OpenZFS on OS X documentation on the web.'''&lt;br /&gt;
&lt;br /&gt;
'''O'''penZFS '''o'''n '''O'''S '''X''' → '''OOOX''' → '''O3X'''&lt;br /&gt;
&lt;br /&gt;
Please note that if you are running MacZFS, you should use their site instead: [http://maczfs.org maczfs.org].&lt;br /&gt;
&lt;br /&gt;
==The software==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; margin-right: 1%; width: 49%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; [[OpenZFS on OS X]]: Brief overview of OpenZFS on OS X describing what to expect when using O3X. &lt;br /&gt;
&lt;br /&gt;
; [[Downloads]]: Latest release of the Mac installer.&lt;br /&gt;
&lt;br /&gt;
; [[Documentation|O3X documentation]]: Index of popular articles and often-referenced information about O3X and OpenZFS.&lt;br /&gt;
&lt;br /&gt;
; [[Source code]]: Show me the code.&lt;br /&gt;
&lt;br /&gt;
; [[Performance]]: Performance and Benchmarking&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; width: 50%;&amp;quot;&amp;gt;&lt;br /&gt;
; [[Install|Installation guide]]: Detailed guide through the whole process of installing and configuring OpenZFS on OS X.&lt;br /&gt;
&lt;br /&gt;
; [[Uninstall|Uninstallation guide]]: How to remove O3X from your computer.&lt;br /&gt;
&lt;br /&gt;
; [[Development]]: Information of particular interest to developers.&lt;br /&gt;
&lt;br /&gt;
; [[FAQ]]: List of common and frequently asked questions.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear: both;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Our community==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; margin-right: 1%; width: 49%;&amp;quot;&amp;gt;&lt;br /&gt;
; [[Getting involved]]: Describes various ways O3X users can contribute to the O3X community.&lt;br /&gt;
&lt;br /&gt;
; [[O3XWiki:Contributing]]: If willing and able to contribute to the wiki, please see this article for ideas.&lt;br /&gt;
&lt;br /&gt;
; [[Help:Editing]]: Technical Information about editing and contributing to the OpenZFS on OS X Wiki.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; width: 50%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; [[O3X forums]]:  General technical assistance for users and discussion of all things O3X including development.&lt;br /&gt;
&lt;br /&gt;
; [[IRC channel]]: Live problem-solving and conversation with your fellow O3X users and developers '''#openzfs-osx'''.&lt;br /&gt;
&lt;br /&gt;
; [[Beyond our walls]]: How to connect more broadly with users and developers of OpenZFS on other platforms.&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear: both;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Contact us ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; margin-right: 1%; width: 49%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; [[O3XWiki:Administrators]]: Contact one of the admins by email.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div style=&amp;quot;float: left; width: 50%;&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; [[O3XWiki:Donations]]: Donations and other ways to contribute&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div style=&amp;quot;clear: both;&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Twitter==&lt;br /&gt;
[https://twitter.com/openzfsonosx @openzfsonosx] on Twitter.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;html&amp;gt;&amp;lt;a class=&amp;quot;twitter-timeline&amp;quot; href=&amp;quot;https://twitter.com/openzfsonosx&amp;quot; data-widget-id=&amp;quot;444275713776951296&amp;quot;&amp;gt;Tweets by @OpenZFSonOSX&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;script&amp;gt;!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+&amp;quot;://platform.twitter.com/widgets.js&amp;quot;;fjs.parentNode.insertBefore(js,fjs);}}(document,&amp;quot;script&amp;quot;,&amp;quot;twitter-wjs&amp;quot;);&amp;lt;/script&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;span&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;a href=&amp;quot;https://twitter.com/openzfsonosx&amp;quot; class=&amp;quot;twitter-follow-button&amp;quot; data-show-count=&amp;quot;true&amp;quot;&amp;gt;Follow @openzfsonosx&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;a href=&amp;quot;https://twitter.com/share&amp;quot; class=&amp;quot;twitter-share-button&amp;quot; data-via=&amp;quot;OpenZFSonOSX&amp;quot;&amp;gt;Tweet&amp;lt;/a&amp;gt;&lt;br /&gt;
&amp;lt;/span&amp;gt;&lt;br /&gt;
&amp;lt;script&amp;gt;!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');&amp;lt;/script&amp;gt;&amp;lt;/html&amp;gt;&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	<entry>
		<id>https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png</id>
		<title>File:ZFS iozone time.png</title>
		<link rel="alternate" type="text/html" href="https://openzfsonosx.org/wiki/File:ZFS_iozone_time.png"/>
				<updated>2014-04-10T09:26:31Z</updated>
		
		<summary type="html">&lt;p&gt;Brendon: Simple benchmarking of ZFS implementations for comparison purposes.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simple benchmarking of ZFS implementations for comparison purposes.&lt;/div&gt;</summary>
		<author><name>Brendon</name></author>	</entry>

	</feed>