= Changelog =<br />
<br />
== OpenZFS_on_OS_X-1.2.7-rc6.pkg 2014-05-15 ==<br />
<br />
* Merged with ZFSOnLinux pre-0.6.3 dated Apr 8 2014 ''(6ac770b1961b9468daf0c69eae6515c608535789)''<br />
* create_thread( 75% * num_cpus ) would create a literal 75 threads instead of the intended 3 threads on a quad-core machine ''(Jorgen Lundman)''<br />
* VMEM allocation changed to use bmalloc (a slice/SLAB allocator on top of k_m_a) ''(Brendon Humphrey)''<br />
* Add ZED (ZFS Event Daemon) to handle events (send alerts, emails) on pool issues. ''(Chris Dunlap)''<br />
* name cache fixes (existing files claimed as missing, missing files claimed as existing) ''(Jorgen Lundman)''<br />
* Change pool sync to remove 'idle' pool writes every 30s. ''(Jorgen Lundman)''<br />
* Work around ZFS recv deadlock ''(ilovezfs)''<br />
* vnop_pageout fixes for zeroed blocks beyond EOF (POSIX) ''(Jorgen Lundman)''<br />
* Add autoimport, zed startup scripts ''(ilovezfs)''<br />
* ctldir (.zfs) fixes and cleanup ''(Jorgen Lundman)''<br />
* Finder hardlinks fixes ''(Jorgen Lundman)''<br />
* Reclaim fixes, throttle and waiting on vp changes ''(Jorgen Lundman)''<br />
* ZVOL upstream incompatibility fixes ''(Evan Susarret)'' '''*1'''<br />
* ZFS rollback and promote fixes ''(ilovezfs)''<br />
* Rework EFI label, and wholedisk detection, Core Storage ''(Jorgen Lundman, ilovezfs)''<br />
<br />
Together these changes should result in greater stability, large performance enhancements, and the ability to use more of the available memory.<br />
<br />
'''The installer no longer contains 32-bit versions.''' <br />
<br />
'''*1''' Note that 1.2.0's ZFS volumes (ZVOLs) are unintentionally incompatible with other platforms' versions of ZFS, except when volblocksize = 512.<br />
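<br />
If cross-platform portability of a ZVOL created under 1.2.0 matters to you, a minimal sketch is to set the 512-byte block size explicitly at creation time (the pool and volume names below are only examples):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Create a 10 GB ZVOL with an explicit 512-byte volblocksize so that it<br />
# remains importable by ZFS on other platforms (names are illustrative).<br />
sudo zfs create -V 10G -o volblocksize=512 tank/portable-vol<br />
sudo zfs get volblocksize tank/portable-vol<br />
</syntaxhighlight><br />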
<br />
== 1.2.0.dmg 2014-03-13 ==<br />
<br />
* First release<br />
<br />
= Install =<br />
<br />
[[Category:About O3X]]<br />
[[Category:Getting and installing O3X]]<br />
== Installing the official release ==<br />
<br />
Download the most recent dmg from the [[Downloads]] page.<br />
<br />
Verify the checksums.<br />
<br />
$ md5 OpenZFS_on_OS_X_*.dmg<br />
$ sha1sum OpenZFS_on_OS_X_*.dmg<br />
$ openssl dgst -sha256 OpenZFS_on_OS_X_*.dmg<br />
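<br />
If you want to compare against the published value in one step, a small sketch along these lines works (the expected digest below is a placeholder, not a real release checksum):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Placeholder digest: substitute the SHA-256 value listed on the Downloads page.<br />
expected="0000000000000000000000000000000000000000000000000000000000000000"<br />
actual=$(openssl dgst -sha256 OpenZFS_on_OS_X_*.dmg | awk '{print $NF}')<br />
if [ "$actual" = "$expected" ]; then echo "checksum OK"; else echo "checksum MISMATCH"; fi<br />
</syntaxhighlight><br />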
<br />
Open the .dmg file.<br />
<br />
Read ReadMe.rtf.<br />
<br />
Start the installer by opening OpenZFS_on_OS_X_x.y.z.pkg.<br />
<br />
Follow the prompts.<br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_release_version|uninstalling a release version]].<br />
<br />
== Installing from source ==<br />
(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
If you have OpenZFS on OS X installed, please follow the [https://openzfsonosx.org/wiki/Uninstall uninstallation directions] before proceeding.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/openzfsonosx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, use it to install a few build dependencies:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk coreutils<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir -p ~/Developer<br />
mkdir -p ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
[[ ! ":$PATH:" == *":$HOME/bin:"* ]] && echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
</syntaxhighlight><br />
<br />
Now you can build OpenZFS on OS X:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
This will take a few minutes, depending on your hardware. There may be some warnings during compilation; do not worry about them unless you see actual errors.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/spl<br />
sudo make install<br />
cd ~/Developer/zfs<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
[[ ! ":$PATH:" == *":/usr/local/sbin:"* ]] && echo 'export PATH=/usr/local/sbin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
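<br />
For the impatient, a minimal sketch of pool creation looks like this (the disk identifier and pool name are examples; pick the right device from diskutil list):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Identify the whole disk or partition you want to dedicate to ZFS.<br />
diskutil list<br />
# Create a single-disk pool named "tank" with 4K-aligned writes (ashift=12).<br />
sudo zpool create -f -o ashift=12 tank /dev/disk2<br />
sudo zpool status tank<br />
</syntaxhighlight><br />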
<br />
When you want to pick up the [https://github.com/openzfsonosx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the steps to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
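<br />
If you have several pools, a small loop saves typing (a sketch, assuming every listed pool can be exported cleanly):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Export every currently imported pool, one at a time.<br />
for poolname in $(sudo zpool list -H -o name); do<br />
    sudo zpool export "$poolname"<br />
done<br />
</syntaxhighlight><br />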
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare), a reboot will be necessary.<br />
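<br />
If you are unsure whether that applies to you, you can compare the currently loaded version against the one shipped with your new build (a quick check, assuming the bundle identifier has not changed):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Show only the dependencies kext and its loaded version number.<br />
kextstat -b net.lundman.kernel.dependencies<br />
</syntaxhighlight><br />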
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_source_install|uninstalling a source install]].<br />
<br />
== Installing a development build DMG ==<br />
<br />
Development build DMGs are often released here: http://lundman.net/ftp/osx.zfs/<br />
<br />
* Export your pools and unload the kexts.<br />
<br />
* Download one of the builds.<br />
<br />
* Open the .dmg file<br />
<br />
* cd into either 64 or 32, depending on whether your architecture is x86_64 or i386 (you can check with the snippet below)<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /Volumes/osx.zfs*/64<br />
</syntaxhighlight><br />
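<br />
If you are not sure which kernel architecture you are running, check before picking a directory:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Prints x86_64 for a 64-bit kernel, i386 for a 32-bit kernel.<br />
uname -m<br />
</syntaxhighlight><br />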
<br />
* Run install_zfs.sh<br />
<syntaxhighlight lang="bash"><br />
sudo ./install_zfs.sh<br />
</syntaxhighlight><br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_development_build_DMG|uninstalling a development build dmg]].<br />
<br />
== Using without actually installing (development) ==<br />
This method is usually appropriate only for developers.<br />
<br />
The procedure is the same as found in the section [[Install#Installing_from_source|installing from source]] except that you never run "make install." Instead you load the kexts manually, and execute the binaries directly from the source tree.<br />
<br />
You can load the kexts manually by running<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm -k<br />
</syntaxhighlight><br />
<br />
By default, zfsadm -k will create the directory ~/Library/Extensions if it doesn't exist, remove ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext if they are present, copy spl.kext and zfs.kext from the source tree where they were built to ~/Library/Extensions, recursively change the ownership of everything in ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext to the user "root" and the group "wheel," and then load the kexts directly from ~/Library/Extensions. If you prefer to use a different directory, use the -i option of zfsadm or edit zfsadm to hard-code a different directory.<br />
<br />
If you do not wish to use zfsadm, you can do all of this yourself, using whatever target directory you'd like. For example, you might do the following:<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /tmp<br />
sudo rm -rf o3x<br />
sudo mkdir o3x<br />
<br />
cd ~/Developer<br />
sudo cp -r zfs/module/zfs/zfs.kext /tmp/o3x/ <br />
sudo cp -r spl/module/spl/spl.kext /tmp/o3x/<br />
<br />
cd /tmp/o3x<br />
sudo chown -R root:wheel *<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Once the kexts have been loaded, you can test the commands.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo ~/Developer/zfs/cmd.sh zpool status<br />
</syntaxhighlight><br />
<br />
== Migrating old Pools (from MacZFS or ZEVO) ==<br />
<br />
First export all of your pools, and uninstall the other implementation. It is all right if you forgot to export your pools before uninstalling. You will just need to use the '-f' option when importing into OpenZFS on OS X.<br />
<br />
To find out the pool names, you need to execute the command for pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight><br />
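<br />
For reference, discovery output looks roughly like this (the pool name, id, and device are made up for illustration):<br />
<br />
<syntaxhighlight lang="text"><br />
   pool: tank<br />
     id: 9871236540123456789<br />
  state: ONLINE<br />
 action: The pool can be imported using its name or numeric identifier.<br />
 config:<br />
<br />
	tank        ONLINE<br />
	  disk3s2   ONLINE<br />
</syntaxhighlight><br />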
<br />
This will tell you what pools are available to be imported, but will not actually import anything. You can see that nothing has been imported yet by using the 'zpool status' command.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
Now that you know what pools are available to be imported, you can actually import a pool by supplying the name or guid that you saw during pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import poolname (or guid)<br />
</syntaxhighlight><br />
<br />
(Notice how this differs from the command for pool discovery.)<br />
<br />
If you forgot to export before migrating, you will need to use the '-f' option.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import -f poolname (or guid)<br />
</syntaxhighlight><br />
<br />
If you want to see the same information you saw during pool discovery, you will now need to use 'zpool status' rather than 'zpool import'.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
If all pools have been imported, the pool discovery command ('zpool import' with no pool or guid specified) will return without any output.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight><br />
<br />
= Encryption =<br />
<br />
== Core Storage (FileVault 2) ==<br />
<br />
Although the upstream OpenZFS project lists [http://open-zfs.org/wiki/Projects#Platform_agnostic_encryption_support platform-agnostic encryption support] at the ZFS dataset level as a possible future enhancement, we have an obvious solution to block-level encryption already at hand: Core Storage (FileVault 2), which uses AES-XTS.<br />
<br />
This is the OS X analogue of the following block-level encryption systems on other operating systems that support ZFS: <br />
* FreeBSD: geli<br />
* Linux: LUKS<br />
<br />
The overall procedure is as follows: convert an empty HFS+ partition to Core Storage and apply Core Storage encryption. Then use the Core Storage Logical Volume as a device in your zpool by supplying it to "zpool create," "zpool add," "zpool attach," etc.<br />
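<br />
Condensed into commands, the procedure looks like this (device identifiers and the pool name are examples; the Steps section below walks through each one):<br />
<br />
<syntaxhighlight lang="bash"><br />
# 1. Convert the empty HFS+ partition to a Core Storage logical volume.<br />
diskutil coreStorage convert /dev/disk1s2<br />
# 2. Encrypt the resulting logical volume (here it appeared as disk2).<br />
diskutil coreStorage encryptVolume /dev/disk2<br />
# 3. Unmount the HFS+ volume that OS X mounted on the logical volume.<br />
diskutil unmount "/Volumes/Internal HD"<br />
# 4. Put the pool on the logical volume.<br />
sudo zpool create -f -o ashift=12 tank /dev/disk2<br />
</syntaxhighlight><br />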
<br />
=== Prerequisites ===<br />
Build ZFS [[Install#Installing_from_source|from source]], or use a [[Downloads|release installer]] newer than 1.2.0 (for the explanation, see the original IRC chat below).<br />
<br />
=== Caveats ===<br />
As noted in the article [[suppressing the annoying pop-up]], you will receive a pop-up claiming the disk isn't readable by this computer.<br />
This leads to one step that can be confusing: when unlocking the disk (e.g., on startup), the "bug" will make OS X believe the disk wasn't unlocked, and thus "wiggle," presenting the prompt again.<br />
<br />
Assuming you entered your password correctly, the encrypted volume should now be unlocked, despite the misleading wiggle, and you can safely close the dialog box by clicking "Cancel." You'll know for sure the volume is unlocked when you proceed to import your pool, or you can check directly by looking for <code>Encryption Status: Unlocked</code> in the output of <code>diskutil coreStorage list</code>.<br />
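<br />
A quick way to do that check from Terminal:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Look for "Encryption Status: Unlocked" on the logical volume.<br />
diskutil coreStorage list | grep "Encryption Status"<br />
</syntaxhighlight><br />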
<br />
=== Steps ===<br />
The initial layout is shown below, with disk1 being the external disk (counter-intuitively named "Internal HD") intended as the encrypted ZFS device.<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_HFS Internal HD 999.9 GB disk1s2<br />
<br />
We note that disk1s2 is the partition to be encrypted, and we convert it to Core Storage (think LVM), to enable encryption:<br />
<br />
# diskutil coreStorage convert /dev/disk1s2<br />
Started CoreStorage operation on disk1s2 Internal HD<br />
Resizing disk to fit Core Storage headers<br />
Creating Core Storage Logical Volume Group<br />
Attempting to unmount disk1s2<br />
Switching disk1s2 to Core Storage<br />
Waiting for Logical Volume to appear<br />
Mounting Logical Volume<br />
Core Storage LVG UUID: 4690972A-484E-42E2-B72D-933A58E41237<br />
Core Storage PV UUID: 22A1A783-01BA-4ABA-B4A3-2A9146506519<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Core Storage disk: disk2<br />
Finished CoreStorage operation on disk1s2 Internal HD<br />
<br />
Note that we converted the existing unencrypted HFS+ partition.<br />
<br />
Next, we encrypt the logical volume, our Core Storage disk, disk2:<br />
<br />
# diskutil coreStorage encryptVolume /dev/disk2<br />
New passphrase for existing volume:<br />
Confirm new passphrase:<br />
The Core Storage Logical Volume UUID is F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Started CoreStorage operation on disk2 Internal HD<br />
Scheduling encryption of Core Storage Logical Volume<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Finished CoreStorage operation on disk2 Internal HD<br />
<br />
Note that we used disk2, the logical volume, not disk1s2.<br />
<br />
This can and will take a while to complete. You can check the status by issuing:<br />
# diskutil coreStorage list | grep "Conversion Progress"<br />
<br />
Until it's done:<br />
Conversion Progress: -none-<br />
<br />
Your partition layout should now look like:<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_CoreStorage 999.9 GB disk1s2<br />
3: Apple_Boot Boot OS X 134.2 MB disk1s3<br />
/dev/disk2<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: Apple_HFS *999.5 GB disk2<br />
<br />
disk2 is now our encrypted, unlocked HFS+ device. If you have yet to be prompted for the passphrase by OS X, now would be a good time to restart your Mac and try it out.<br />
<br />
Lastly, we'll prepare the volume for ZFS by unmounting /dev/disk2:<br />
<br />
# mount<br />
...<br />
/dev/disk2 on /Volumes/Internal HD (hfs, local, journaled)<br />
# diskutil unmount "/Volumes/Internal HD"<br />
<br />
You can now follow the article on [[Zpool#Creating_a_pool|creating a pool]]. As a simple example, you might run:<br />
<br />
<syntaxhighlight lang="text"><br />
# zpool list<br />
no pools available<br />
# zpool create -f -o ashift=12 ZFS_VOLUME /dev/disk2<br />
# zpool list<br />
NAME         SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT<br />
ZFS_VOLUME   928G  20.8G   907G     2%  1.00x  ONLINE  -<br />
</syntaxhighlight><br />
<br />
<br />
=== Reason to "use latest" ===<br />
<syntaxhighlight lang="text"><br />
<ilovezfs> If you want encryption you have a few options<br />
<ilovezfs> https://github.com/zfsrogue/osx-zfs-crypto<br />
<lundman> :)<br />
<ilovezfs> or you can do what cbreak said, and use an encrypted sparsebundle<br />
<ilovezfs> (I'd give it its own ZFS file system)<br />
<ilovezfs> or you can create a ZVOL, and put an encrypted Core Storage/Filevault 2 HFS+<br />
file system on it<br />
<ilovezfs> or you can put the pool itself on top of Core Storage.<br />
<ilovezfs> The last option you should not do with the installer version.<br />
<ilovezfs> But wait for the next installer if that's the route you want to go<br />
<ilovezfs> or build from source.<br />
<aandy> Ah, interesting. Does FileVault 2 require HFS+? Not that it'd surprise me.<br />
<ilovezfs> No it does not.<br />
<ilovezfs> But it is not possible to set other Content Hints<br />
<ilovezfs> so it will always say HFS+ even if you do put ZFS on your logical volumes.<br />
<ilovezfs> So basically the procedure is to format the volume HFS+.<br />
<ilovezfs> Then run 'diskutil coreStorage convert' on it.<br />
<ilovezfs> Then you can encrypt it.<br />
<ilovezfs> Then you unmount the HFS+<br />
<ilovezfs> and zpool create on the logical volume.<br />
<ilovezfs> And you should be good to go.<br />
<aandy> On the original HFS+ partition, right?<br />
<ilovezfs> Right.<br />
<ilovezfs> But I'd encrypt first<br />
<ilovezfs> then put ZFS on it.<br />
<aandy> Right. Perfect.<br />
<ilovezfs> diskutil coreStorage convert ...<br />
<ilovezfs> diskutil coreStorage encryptVolume ...<br />
<ilovezfs> etc.<br />
<ilovezfs> The reason not to use the installer version, is that it will attempt to<br />
partition the Core Storage Logical Volume.<br />
<ilovezfs> But since 10.8.5 and after, Apple doesn't like that<br />
<ilovezfs> so we added new code to detect Core Storage and not partition if it sees it's<br />
Core Storage.<br />
</syntaxhighlight><br />
<br />
<br />
=== Time Machine backups ===<br />
As a follow-up, here's one approach to using ZFS for your Time Machine Backups:<br />
<br />
While it has been discussed in heated arguments (e.g., https://github.com/openzfsonosx/zfs/issues/66), I still believe there's at least one ZFS feature I'd like to test with Time Machine: compression.<br />
<br />
The hypothesis: an HFS+ sparsebundle stored on a compressed (gzip, lz4), deduped dataset should yield a compression ratio > 1.0 (a ratio of roughly 1.4 was previously observed with compression=on, dedup=off on FreeBSD network Time Machine drives).<br />
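<br />
If you want to test the hypothesis yourself, enable compression on the dataset that holds the sparsebundle and watch the realized ratio (the dataset name is only an example):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Enable lz4 compression on the backing dataset, then check the achieved<br />
# ratio after a few backups have run.<br />
sudo zfs set compression=lz4 tank/timemachine<br />
sudo zfs get compression,compressratio,used tank/timemachine<br />
</syntaxhighlight><br />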
<br />
To work around Time Machine's restrictions on supported disks, we create an HFS+ sparsebundle, store it on ZFS, and set the mounted image as the backup destination, so no "TMShowUnsupportedNetworkVolumes" setting is needed.<br />
<br />
1. Create, and mount, a sparsebundle from your ZFS filesystem (e.g., with makeImage.sh).<br />
<br />
2. Set your sparsebundle as the (active) backup destination:<br />
<br />
 # tmutil setdestination -a /Volumes/Time\ Machine\ Backups<br />
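<br />
Putting the two steps together, a minimal sketch might look like the following (image size, dataset mountpoint, and volume name are illustrative; this is roughly the manual equivalent of what a helper script like makeImage.sh would do):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Create a growable sparsebundle on the ZFS dataset and mount it.<br />
hdiutil create -size 500g -type SPARSEBUNDLE -fs "HFS+J" -volname "Time Machine Backups" /Volumes/tank/tm.sparsebundle<br />
hdiutil attach /Volumes/tank/tm.sparsebundle<br />
# Point Time Machine at the mounted image.<br />
sudo tmutil setdestination -a "/Volumes/Time Machine Backups"<br />
</syntaxhighlight><br />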
<hr />
<div>== Core Storage (File Vault 2) ==<br />
<br />
Although the upstream OpenZFS project lists [http://open-zfs.org/wiki/Projects#Platform_agnostic_encryption_support platform-agnostic encryption support] at the ZFS dataset level as a possible future enhancement, we have an obvious solution to block-level encryption already at hand: Core Storage (FileVault 2), which uses AES-XTS.<br />
<br />
This is the OS X analogue of the following block-level encryption systems on other operating systems that support ZFS: <br />
* FreeBSD: geli<br />
* Linux: LUKS<br />
<br />
The overall procedure is, as follows: convert an empty HFS+ partition to use Core Storage and apply Core Storage encryption. Then use the Core Storage Logical Volume as a device in your zpool by supplying it to "zpool create," "zpool add," "zpool attach," etc.<br />
<br />
=== Prerequisites ===<br />
Build ZFS [[Install#Installing_from_source|from source]], or wait for the [[Downloads|next installer]], newer than 1.2.0 (for explanation, see original IRC chat).<br />
<br />
=== Caveats ===<br />
As noted in the article [[suppressing the annoying pop-up]], you will receive a pop-up claiming the disk isn't readable by this computer.<br />
This leads to one step that can be confusing: when unlocking the disk (e.g., on startup), the "bug" will make OS X believe the disk wasn't unlocked, and thus "wiggle," presenting the prompt again.<br />
<br />
Take it on faith that once you've unlocked the disk, you can safely close the dialog box (with "Cancel"). You can verify this with your pool's availability.<br />
<br />
=== Steps ===<br />
The initial layout, with disk1 being the external disk (counter-intuitively named "Internal HD") intended as encrypted ZFS device.<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_HFS Internal HD 999.9 GB disk1s2<br />
<br />
We note that disk1s2 is the partition to be encrypted, and we convert it to Core Storage (think LVM), to enable encryption:<br />
<br />
# diskutil coreStorage convert /dev/disk1s2<br />
Started CoreStorage operation on disk1s2 Internal HD<br />
Resizing disk to fit Core Storage headers<br />
Creating Core Storage Logical Volume Group<br />
Attempting to unmount disk1s2<br />
Switching disk1s2 to Core Storage<br />
Waiting for Logical Volume to appear<br />
Mounting Logical Volume<br />
Core Storage LVG UUID: 4690972A-484E-42E2-B72D-933A58E41237<br />
Core Storage PV UUID: 22A1A783-01BA-4ABA-B4A3-2A9146506519<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Core Storage disk: disk2<br />
Finished CoreStorage operation on disk1s2 Internal HD<br />
<br />
Note that we converted the existing unencrypted HFS+ partition.<br />
<br />
Next, we encrypt the logical volume, our Core Storage disk, disk2:<br />
<br />
# diskutil coreStorage encryptVolume /dev/disk2<br />
New passphrase for existing volume:<br />
Confirm new passphrase:<br />
The Core Storage Logical Volume UUID is F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Started CoreStorage operation on disk2 Internal HD<br />
Scheduling encryption of Core Storage Logical Volume<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Finished CoreStorage operation on disk2 Internal HD<br />
<br />
Note that we used disk2, the logical volume, not disk1s2.<br />
<br />
This can and will take a while to complete. You can check the status by issuing:<br />
# diskutil coreStorage list | grep "Conversion Progress"<br />
<br />
Until it's done:<br />
Conversion Progress: -none-<br />
<br />
Your partition layout should now look like:<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_CoreStorage 999.9 GB disk1s2<br />
3: Apple_Boot Boot OS X 134.2 MB disk1s3<br />
/dev/disk2<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: Apple_HFS *999.5 GB disk2<br />
<br />
disk2 being our encrypted, unlocked HFS+ device. If you have yet to be prompted for the passphrase by OS X, now would be a good time to restart your Mac and try it out.<br />
<br />
Lastly, we'll prepare the volume for ZFS, by unmounting /dev/disk2:<br />
<br />
# mount<br />
...<br />
/dev/disk2 on /Volumes/Internal HD (hfs, local, journaled)<br />
# diskutil unmount "/Volumes/Internal HD"<br />
<br />
You can now follow the article on [[Zpool#Creating_a_pool|creating a pool]]. As a simple example, you might<br />
<br />
<syntaxhighlight lang="text"><br />
# zpool list<br />
no pools available<br />
# zpool create -f -o ashift=12 ZFS_VOLUME /dev/disk2<br />
# zpool list<br />
ZFS_VOLUME 928G 20.8G 907G 2% 1.00x ONLINE -<br />
</syntaxhighlight><br />
<br />
<br />
=== Reason to "use latest" ===<br />
<syntaxhighlight lang="text"><br />
<ilovezfs> If you want encryption you have a few options<br />
<ilovezfs> https://github.com/zfsrogue/osx-zfs-crypto<br />
<lundman> :)<br />
<ilovezfs> or you can do what cbreak said, and use an encrypted sparsebundle<br />
<ilovezfs> (I'd give it its own ZFS file system)<br />
<ilovezfs> or you can create a ZVOL, and put an encrypted Core Storage/Filevault 2 HFS+<br />
file system on it<br />
<ilovezfs> or you can put the pool itself on top of Core Storage.<br />
<ilovezfs> The last option you should not do with the installer version.<br />
<ilovezfs> But wait for the next installer if that's the route you want to go<br />
<ilovezfs> or build from source.<br />
<aandy> Ah, interesting. Does FileVault 2 require HFS+? Not that it'd surprise me.<br />
<ilovezfs> No it does not.<br />
<ilovezfs> But it is not possible to set other Content Hints<br />
<ilovezfs> so it will always say HFS+ even if you do put ZFS on your logical volumes.<br />
<ilovezfs> So basically the procedure is to format the volume HFS+.<br />
<ilovezfs> Then run 'diskutil coreStorage convert' on it.<br />
<ilovezfs> Then you can encrypt it.<br />
<ilovezfs> Then you unmount the HFS+<br />
<ilovezfs> and zpool create on the logical volume.<br />
<ilovezfs> And you should be good to go.<br />
<aandy> On the original HFS+ partition, right?<br />
<ilovezfs> Right.<br />
<ilovezfs> But I'd encrypt first<br />
<ilovezfs> then put ZFS on it.<br />
<aandy> Right. Perfect.<br />
<ilovezfs> diskutil coreStorage convert ...<br />
<ilovezfs> diskutil coreStorage encryptVolume ...<br />
<ilovezfs> etc.<br />
<ilovezfs> The reason not to use the installer version, is that it will attempt to<br />
partition the Core Storage Logical Volume.<br />
<ilovezfs> But since 10.8.5 and after, Apple doesn't like that<br />
<ilovezfs> so we added new code to detect Core Storage and not partition if it sees it's<br />
Core Storage.<br />
</syntaxhighlight><br />
<br />
<br />
=== Time Machine backups ===<br />
As a follow-up, here's one approach to using ZFS for your Time Machine Backups:<br />
<br />
While it has been discussed in heated arguments (e.g., https://github.com/openzfsonosx/zfs/issues/66) I still believe there's at least one ZFS feature I'd like to test with Time Machine: compression.<br />
<br />
The hypothesis being:<br />
an HFS+ sparsebundle stored on a compressed (gzip, lz4), deduped dataset should<br />
yield a compression ratio > 1.0.<br />
(previously observed 1.4 with compression=on, dedup=off, FreeBSD network Time Machine drives).<br />
<br />
To work around compatible disks for Time Machine, we create an HFS+ sparsebundle, store it on ZFS, and set the mounted image as a backup destination – no "TMShowUnsupportedNetworkVolumes" needed.<br />
<br />
1. Create, and mount, a sparsebundle from your ZFS filesystem (e.g., with makeImage.sh).<br />
<br />
2. Set your sparsebundle as the (active) backup destination # tmutil setdestination -a /Volumes/Time\ Machine\ Backups</div>50.168.32.57https://openzfsonosx.org/wiki/EncryptionEncryption2014-05-07T18:16:04Z<p>50.168.32.57: </p>
<hr />
<div>== Core Storage (File Vault 2) ==<br />
<br />
Although the upstream OpenZFS project lists [http://open-zfs.org/wiki/Projects#Platform_agnostic_encryption_support platform-agnostic encryption support] at the ZFS dataset level as a possible future enhancement, we have an obvious solution to block-level encryption already at hand: Core Storage (FileVault 2), which uses AES-XTS.<br />
<br />
This is the OS X analogue of the following block-level encryption systems on other operating systems that support ZFS: <br />
* FreeBSD: geli<br />
* Linux: LUKS<br />
<br />
The overall procedure is, as follows: convert an empty HFS+ partition to use Core Storage and apply Core Storage encryption. Then use the Core Storage Logical Volume as a device in your zpool by supplying it to "zpool create," "zpool add," "zpool attach," etc.<br />
<br />
=== Prerequisites ===<br />
Build ZFS [[Install#Installing_from_source|from source]], or wait for the [[Downloads|next installer]], newer than 1.2.0 (for explanation, see original IRC chat).<br />
<br />
=== Caveats ===<br />
As noted in the article [[suppressing the annoying pop-up]], you will receive a pop-up claiming the disk isn't readable by this computer.<br />
This leads to one step that can be confusing: when unlocking the disk (e.g., on startup), the "bug" will make OS X believe the disk wasn't unlocked, and thus "wiggle," presenting the prompt again.<br />
<br />
Take it on faith that once you've unlocked the disk, you can safely close the dialog box (with "Cancel"). You can verify this with your pool's availability.<br />
<br />
=== Steps ===<br />
The initial layout, with disk1 being the external disk (counter-intuitively named "Internal HD") intended as encrypted ZFS device.<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_HFS Internal HD 999.9 GB disk1s2<br />
<br />
We note that disk1s2 is the partition to be encrypted, and we convert it to Core Storage (think LVM), to enable encryption:<br />
<br />
# diskutil coreStorage convert /dev/disk1s2<br />
Started CoreStorage operation on disk1s2 Internal HD<br />
Resizing disk to fit Core Storage headers<br />
Creating Core Storage Logical Volume Group<br />
Attempting to unmount disk1s2<br />
Switching disk1s2 to Core Storage<br />
Waiting for Logical Volume to appear<br />
Mounting Logical Volume<br />
Core Storage LVG UUID: 4690972A-484E-42E2-B72D-933A58E41237<br />
Core Storage PV UUID: 22A1A783-01BA-4ABA-B4A3-2A9146506519<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Core Storage disk: disk2<br />
Finished CoreStorage operation on disk1s2 Internal HD<br />
<br />
Note that we converted the existing unencrypted HFS+ partition.<br />
<br />
Next, we encrypt the logical volume, our Core Storage disk, disk2:<br />
<br />
# diskutil coreStorage encryptVolume /dev/disk2<br />
New passphrase for existing volume:<br />
Confirm new passphrase:<br />
The Core Storage Logical Volume UUID is F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Started CoreStorage operation on disk2 Internal HD<br />
Scheduling encryption of Core Storage Logical Volume<br />
Core Storage LV UUID: F6D16BFE-B6E9-4A9B-BC03-E5CD03772C44<br />
Finished CoreStorage operation on disk2 Internal HD<br />
<br />
Note that we used disk2, the logical volume, not disk1s2.<br />
<br />
This can and will take a while to complete. You can check the status by issuing:<br />
# diskutil coreStorage list | grep "Conversion Progress"<br />
<br />
Until it's done:<br />
Conversion Progress: -none-<br />
<br />
Your partition layout should now look like:<br />
<br />
# diskutil list<br />
/dev/disk0<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *160.0 GB disk0<br />
1: EFI EFI 209.7 MB disk0s1<br />
2: Apple_HFS Macintosh HD 159.7 GB disk0s2<br />
/dev/disk1<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: GUID_partition_scheme *1.0 TB disk1<br />
1: EFI EFI 209.7 MB disk1s1<br />
2: Apple_CoreStorage 999.9 GB disk1s2<br />
3: Apple_Boot Boot OS X 134.2 MB disk1s3<br />
/dev/disk2<br />
#: TYPE NAME SIZE IDENTIFIER<br />
0: Apple_HFS *999.5 GB disk2<br />
<br />
disk2 being our encrypted, unlocked HFS+ device. If you have yet to be prompted for the passphrase by OS X, now would be a good time to restart your Mac and try it out.<br />
<br />
Lastly, we'll prepare the volume for ZFS, by unmounting /dev/disk2:<br />
<br />
# mount<br />
...<br />
/dev/disk2 on /Volumes/Internal HD (hfs, local, journaled)<br />
# diskutil unmount "/Volumes/Internal HD"<br />
<br />
You can now follow the article on [[Zpool#Creating_a_pool|creating a pool]]. As a simple example, you might<br />
<br />
<syntaxhighlight lang="text"><br />
# zpool list<br />
no pools available<br />
# zpool create -f -o ashift=12 ZFS_VOLUME /dev/disk2<br />
# zpool list<br />
ZFS_VOLUME 928G 20.8G 907G 2% 1.00x ONLINE -<br />
</syntaxhighlight><br />
<br />
<br />
=== Reason to "use latest" ===<br />
<syntaxhighlight lang="text"><br />
<ilovezfs> If you want encryption you have a few options<br />
<ilovezfs> https://github.com/zfsrogue/osx-zfs-crypto<br />
<lundman> :)<br />
<ilovezfs> or you can do what cbreak said, and use an encrypted sparsebundle<br />
<ilovezfs> (I'd give it its own ZFS file system)<br />
<ilovezfs> or you can create a ZVOL, and put an encrypted Core Storage/Filevault 2 HFS+<br />
file system on it<br />
<ilovezfs> or you can put the pool itself on top of Core Storage.<br />
<ilovezfs> The last option you should not do with the installer version.<br />
<ilovezfs> But wait for the next installer if that's the route you want to go<br />
<ilovezfs> or build from source.<br />
<aandy> Ah, interesting. Does FileVault 2 require HFS+? Not that it'd surprise me.<br />
<ilovezfs> No it does not.<br />
<ilovezfs> But it is not possible to set other Content Hints<br />
<ilovezfs> so it will always say HFS+ even if you do put ZFS on your logical volumes.<br />
<ilovezfs> So basically the procedure is to format the volume HFS+.<br />
<ilovezfs> Then run 'diskutil coreStorage convert' on it.<br />
<ilovezfs> Then you can encrypt it.<br />
<ilovezfs> Then you unmount the HFS+<br />
<ilovezfs> and zpool create on the logical volume.<br />
<ilovezfs> And you should be good to go.<br />
<aandy> On the original HFS+ partition, right?<br />
<ilovezfs> Right.<br />
<ilovezfs> But I'd encrypt first<br />
<ilovezfs> then put ZFS on it.<br />
<aandy> Right. Perfect.<br />
<ilovezfs> diskutil coreStorage convert ...<br />
<ilovezfs> diskutil coreStorage encryptVolume ...<br />
<ilovezfs> etc.<br />
<ilovezfs> The reason not to use the installer version, is that it will attempt to<br />
partition the Core Storage Logical Volume.<br />
<ilovezfs> But since 10.8.5 and after, Apple doesn't like that<br />
<ilovezfs> so we added new code to detect Core Storage and not partition if it sees it's<br />
Core Storage.<br />
</syntaxhighlight><br />
<br />
<br />
=== Time Machine backups ===<br />
As a follow-up, here's one approach to using ZFS for your Time Machine Backups:<br />
<br />
While it has been discussed in heated arguments (e.g., https://github.com/openzfsonosx/zfs/issues/66) I still believe there's at least one ZFS feature I'd like to test with Time Machine: compression.<br />
<br />
The hypothesis being:<br />
an HFS+ sparsebundle stored on a compressed (gzip, lz4), deduped dataset should<br />
yield a compression ratio > 1.0.<br />
(previously observed 1.4 with compression=on, dedup=off, FreeBSD network Time Machine drives).<br />
<br />
To work around compatible disks for Time Machine, we create an HFS+ sparsebundle, store it on ZFS, and set the mounted image as a backup destination—no "TMShowUnsupportedNetworkVolumes" needed.<br />
<br />
1. Create, and mount, a sparsebundle from your ZFS filesystem (e.g., with makeImage.sh).<br />
<br />
2. Set your sparsebundle as the (active) backup destination # tmutil setdestination -a /Volumes/Time\ Machine\ Backups</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-04-27T08:02:46Z<p>50.168.32.57: /* Installing from source */</p>
<hr />
<div>[[Category:About O3X]]<br />
[[Category:Getting and installing O3X]]<br />
== Installing the official release ==<br />
<br />
Download the most recent dmg from the [[Downloads]] page.<br />
<br />
Verify the checksums.<br />
<br />
$ md5 OpenZFS_on_OS_X_*.dmg<br />
$ sha1sum OpenZFS_on_OS_X_*.dmg<br />
$ openssl dgst -sha256 OpenZFS_on_OS_X_*.dmg<br />
<br />
Open the .dmg file.<br />
<br />
Read ReadMe.rtf.<br />
<br />
Start the installer by opening OpenZFS_on_OS_X_x.y.z.pkg.<br />
<br />
Follow the prompts.<br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_release_version|uninstalling a release version]].<br />
<br />
== Installing from source ==<br />
(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
If you have OpenZFS on OS X installed, please follow the [https://openzfsonosx.org/wiki/Uninstall uninstallation directions] before proceeding.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/openzfsonosx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk coreutils<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir -p ~/Developer<br />
mkdir -p ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/openzfsonosx/zfs/commits/master latest commits] from the GitHub, here's a quick overview of things you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.<br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_source_install|uninstalling a source install]].<br />
<br />
== Installing a development build DMG ==<br />
<br />
Development build DMGs are often released here: http://lundman.net/ftp/osx.zfs/<br />
<br />
* Export your pools and unload the kexts.<br />
<br />
* Download one of the builds.<br />
<br />
* Open the .dmg file<br />
<br />
* cd into either 64 or 32 depending on whether your architecture is i386 or x86_64<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /Volumes/osx.zfs*/64<br />
</syntaxhighlight><br />
<br />
* Run install_zfs.sh<br />
<syntaxhighlight lang="bash"><br />
sudo ./install_zfs.sh<br />
</syntaxhighlight><br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_development_build_DMG|uninstalling a development build dmg]].<br />
<br />
== Using without actually installing (development) ==<br />
This method is usually appropriate only for Developers.<br />
<br />
The procedure is the same as found in the section [[Install#Installing_from_source|installing from source]] except that you never run "make install." Instead you load the kexts manually, and execute the binaries directly from the source tree.<br />
<br />
You can load the kexts manually by running<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm -k<br />
</syntaxhighlight><br />
<br />
By default, zfsadm -k will create the directory ~/Library/Extensions if it doesn't exist, remove ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext if they are present, copy spl.kext and zfs.kext from the source where they were built to ~/Library/Extenions, recursively change the ownership of everything in ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext to be owned to be owned by the user "root" and the group "wheel," and then load the kexts directly from ~/Library/Extensions. If you prefer to use a different directory, use the -i option in zfsadm or edit zfsadm to hard code a different directory.<br />
<br />
If you do not wish to use zfsadm, you can do all of this yourself, using whatever target directory you'd like. For example, you might do the following:<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /tmp<br />
sudo rm -rf o3x<br />
sudo mkdir o3x<br />
<br />
cd ~/Developer<br />
sudo cp -r zfs/module/zfs/zfs.kext /tmp/o3x/ <br />
sudo cp -r spl/module/spl/spl.kext /tmp/o3x/<br />
<br />
cd /tmp/o3x<br />
sudo chown -R *<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Once the kexts have been loaded, you can test the commands.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo ~/Developer/zfs/cmd.sh zfs status<br />
</syntaxhighlight><br />
<br />
== Migrating old Pools (from MacZFS or ZEVO) ==<br />
<br />
First export all of your pools, and uninstall the other implementation. It is all right if you forgot to export your pools before uninstalling. You will just need to use the '-f' option when importing into OpenZFS on OS X.<br />
<br />
To find out the pool names, you need to execute the command for pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight><br />
<br />
This will tell you what pools are available to be imported, but will not actually import anything. You can see that nothing has been imported yet by using the 'zpool status' command.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
Now that you know what pools are available to be imported, you can actually import a pool by supplying the name or guid that you saw during pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import poolname (or guid)<br />
</syntaxhighlight><br />
<br />
(Notice how this differs from the command for pool discovery.)<br />
<br />
If you forgot to export before migrating, you will need to use the '-f' option.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import -f poolname (or guid)<br />
</syntaxhighlight><br />
<br />
If you want to see the same information you saw during pool discovery, you will now need to use 'zpool status' rather than 'zpool import'.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
If all pools have been imported, the pool discovery command— 'zpool import' with no pool or guid specified— will return without any output.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight></div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-04-18T06:50:00Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:About O3X]]<br />
[[Category:Getting and installing O3X]]<br />
== Installing the official release ==<br />
<br />
Download the most recent dmg from the [[Downloads]] page.<br />
<br />
Verify the checksums.<br />
<br />
$ md5 OpenZFS_on_OS_X_*.dmg<br />
$ sha1sum OpenZFS_on_OS_X_*.dmg<br />
$ openssl dgst -sha256 OpenZFS_on_OS_X_*.dmg<br />
<br />
Open the .dmg file.<br />
<br />
Read ReadMe.rtf.<br />
<br />
Start the installer by opening OpenZFS_on_OS_X_x.y.z.pkg.<br />
<br />
Follow the prompts.<br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_release_version|uninstalling a release version]].<br />
<br />
== Installing from source ==<br />
(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/openzfsonosx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk coreutils<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir -p ~/Developer<br />
mkdir -p ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/openzfsonosx/zfs/commits/master latest commits] from the GitHub, here's a quick overview of things you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
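<br />
If you have several pools, a small shell loop can export them all at once (a sketch, using zpool list -H -o name to enumerate them):<br />
<br />
<syntaxhighlight lang="bash"><br />
for pool in $(zpool list -H -o name); do<br />
    sudo zpool export "$pool"<br />
done<br />
</syntaxhighlight><br />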
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare), a reboot will be necessary.<br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_source_install|uninstalling a source install]].<br />
<br />
== Installing a development build DMG ==<br />
<br />
Development build DMGs are often released here: http://lundman.net/ftp/osx.zfs/<br />
<br />
* Export your pools and unload the kexts.<br />
<br />
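For example (a sketch; "mypool" is a placeholder, and zfs must be unloaded before spl, since zfs depends on it):<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export mypool<br />
sudo kextunload -b net.lundman.zfs<br />
sudo kextunload -b net.lundman.spl<br />
</syntaxhighlight><br />
<br />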
* Download one of the builds.<br />
<br />
* Open the .dmg file<br />
<br />
* cd into either 64 or 32, depending on whether your architecture is x86_64 or i386<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /Volumes/osx.zfs*/64<br />
</syntaxhighlight><br />
<br />
* Run install_zfs.sh<br />
<syntaxhighlight lang="bash"><br />
sudo ./install_zfs.sh<br />
</syntaxhighlight><br />
<br />
If you ever want to uninstall, follow the instructions for [[Uninstall#Uninstalling_a_development_build_DMG|uninstalling a development build dmg]].<br />
<br />
== Using without actually installing (development) ==<br />
This method is usually appropriate only for developers.<br />
<br />
The procedure is the same as found in the section [[Install#Installing_from_source|installing from source]] except that you never run "make install." Instead you load the kexts manually, and execute the binaries directly from the source tree.<br />
<br />
You can load the kexts manually by running<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm -k<br />
</syntaxhighlight><br />
<br />
By default, zfsadm -k will create the directory ~/Library/Extensions if it doesn't exist, remove ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext if they are present, copy spl.kext and zfs.kext from the source tree where they were built to ~/Library/Extensions, recursively change the ownership of everything in ~/Library/Extensions/spl.kext and ~/Library/Extensions/zfs.kext to the user "root" and the group "wheel," and then load the kexts directly from ~/Library/Extensions. If you prefer to use a different directory, use the -i option of zfsadm, or edit zfsadm to hard-code a different directory.<br />
<br />
If you do not wish to use zfsadm, you can do all of this yourself, using whatever target directory you'd like. For example, you might do the following:<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /tmp<br />
sudo rm -rf o3x<br />
sudo mkdir o3x<br />
<br />
cd ~/Developer<br />
sudo cp -r zfs/module/zfs/zfs.kext /tmp/o3x/ <br />
sudo cp -r spl/module/spl/spl.kext /tmp/o3x/<br />
<br />
cd /tmp/o3x<br />
sudo chown -R root:wheel *<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Once the kexts have been loaded, you can test the commands.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo ~/Developer/zfs/cmd.sh zpool status<br />
</syntaxhighlight><br />
<br />
== Migrating old Pools (from MacZFS or ZEVO) ==<br />
<br />
First export all of your pools, and uninstall the other implementation. It is all right if you forgot to export your pools before uninstalling. You will just need to use the '-f' option when importing into OpenZFS on OS X.<br />
<br />
To find out the pool names, you need to execute the command for pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight><br />
<br />
This will tell you what pools are available to be imported, but will not actually import anything. You can see that nothing has been imported yet by using the 'zpool status' command.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
Now that you know what pools are available to be imported, you can actually import a pool by supplying the name or guid that you saw during pool discovery.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import poolname (or guid)<br />
</syntaxhighlight><br />
<br />
(Notice how this differs from the command for pool discovery.)<br />
<br />
If you forgot to export before migrating, you will need to use the '-f' option.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import -f poolname (or guid)<br />
</syntaxhighlight><br />
<br />
If you want to see the same information you saw during pool discovery, you will now need to use 'zpool status' rather than 'zpool import'.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
If all pools have been imported, the pool discovery command ('zpool import' with no pool or guid specified) will return without any output.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool import<br />
</syntaxhighlight></div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T07:02:06Z<p>50.168.32.57: /* Conclusions */</p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="bash"><br />
$ gdb /Volumes/KernelDebugKit/mach_kernel<br />
</syntaxhighlight><br />
<syntaxhighlight lang="text"><br />
(gdb) source /Volumes/KernelDebugKit/kgmacros<br />
<br />
(gdb) target remote-kdp<br />
<br />
(gdb) kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
(gdb) showallkmods<br />
</syntaxhighlight><br />
<br />
Find the addresses for ZFS and SPL modules.<br />
<br />
Press <code>^Z</code> to suspend gdb, or use another terminal.<br />
<br />
<syntaxhighlight lang="bash"><br />
^Z<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
$ fg # resume gdb, or go back to gdb terminal<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) set kext-symbol-file-path /tmp<br />
(gdb) add-kext /tmp/spl.kext <br />
(gdb) add-kext /tmp/zfs.kext<br />
(gdb) bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="bash"><br />
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
$ lldb /Volumes/KernelDebugKit/mach_kernel<br />
(lldb) kdp-remote 192.168.30.146<br />
(lldb) showallkmods<br />
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in GDB, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But this was revealing:<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
$ less /tmp/trace.txt<br />
</syntaxhighlight><br />
<br />
Note that my hang is here:<br />
<br />
<syntaxhighlight lang="text"><br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
</syntaxhighlight><br />
<br />
It is a shame that it only shows the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbol files. Fix this, Apple.)<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo kextstat #grab the addresses of SPL and ZFS again<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \<br />
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
</syntaxhighlight><br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
In some cases, you may suspect memory issues, for instance if you saw the following panic:<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
To debug this, you can attach GDB and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
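One way to enable this zone logging is to add zlog=kalloc.256 to the existing boot-args (a sketch; combine it with whatever debug flags you already use):<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo nvram boot-args="-v keepsyms=y debug=0x144 zlog=kalloc.256"<br />
</syntaxhighlight><br />
<br />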
Reboot the kernel with zlog=kalloc.256 in the boot arguments; then we can use<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
<br />
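The stackcollapse.pl and flamegraph.pl scripts are part of Brendan Gregg's FlameGraph repository, which you can fetch with:<br />
<br />
<syntaxhighlight lang="bash"><br />
git clone https://github.com/brendangregg/FlameGraph.git<br />
</syntaxhighlight><br />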
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since the compression ratio, etc., cannot be reported correctly.<br />
<br />
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
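<br />
Until those nodes exist, choosing a 4096-byte volume block size avoids the 512-byte read-modify-write path described above. A hypothetical example (the pool and volume names are placeholders):<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zfs create -V 10G -o volblocksize=4096 BOOM/myvol<br />
</syntaxhighlight><br />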
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to <code>vnode_create()</code> because it can potentially call several vnops (fsync, pageout, reclaim), so we have a "reclaim thread" to deal with this. One issue is that reclaim can be called both as a separate thread (periodic reclaims) and as the "calling thread" of <code>vnode_create</code>. This makes locking tricky.<br />
<br />
One idea is to create a "vnode_create thread" (one per dataset). Then in <code>zfs_zget</code> and <code>zfs_znode_alloc</code>, which call <code>vnode_create</code>, we simply place the newly allocated <code>zp</code> on the vnode_create thread's "request list," and resume execution. Once we have passed the "unlock" part of the functions, we wait for the vnode_create thread to complete the request so we do not resume execution without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call <code>vnode_create</code> (guaranteed as a separate thread now) and once completed, mark the node done, and signal the process which might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, and pageout in the same manner as upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to a very large number (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which could end up in <code>zfs_znode_alloc</code>, to make sure we have a <code>vp</code> attached before we resume. For example, <code>zfs_lookup</code> and <code>zfs_create</code>.<br />
<br />
=== vnode_thread branch ===<br />
<br />
The branch '''vnode_thread''' is just this idea. It creates a vnode_create_thread per dataset, and when we need to call <code>vnode_create()</code>, it simply adds the <code>zp</code> to the list of requests, then signals the thread. The thread will call <code>vnode_create()</code> and upon completion, set <code>zp->z_vnode</code> then signal back. The requester for <code>zp</code> will sit in <code>zfs_znode_wait_vnode()</code> waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to <code>zfs_znode_wait_vnode()</code> (46 to be exact) placed at the correct location (i.e., ''after'' all the locks are released, and <code>zil_commit()</code> has been called). It is possible that this number could be decreased, as the calls to <code>zfs_zget()</code> appear not to suffer the <code>zil_commit()</code> issue, and can probably just block at the end of <code>zfs_zget()</code>. However, the calls to <code>zfs_mknode()</code> are what cause the issue.<br />
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of <code>zp</code> nodes in the list waiting for <code>vnode_create()</code> to complete. Typically, 0 or 1. Rarely higher.<br />
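<br />
For example, to watch the counter while exercising the filesystem (a trivial sketch):<br />
<br />
<syntaxhighlight lang="bash"><br />
while true; do sysctl zfs.vnode_create_list; sleep 1; done<br />
</syntaxhighlight><br />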
<br />
It appears to deadlock from time to time.<br />
<br />
=== vnode_threadX branch ===<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when <code>zfs_znode_getvnode()</code> is called. This new thread calls <code>_zfs_znode_getvnode()</code> which functions as above. Call <code>vnode_create()</code> then signal back. The same <code>zfs_znode_wait_vnode()</code> blockers exist.<br />
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of vnode_create threads we have started. Interestingly, these remain 0 or 1, rarely higher.<br />
<br />
It has not yet deadlocked.<br />
<br />
===Conclusions===<br />
<br />
* It is undesirable that we have <code>zfs_znode_wait_vnode()</code> littered all over the source, and that we must pay special attention to each one. Nonetheless, it does not hurt to call it in excess, given that no wait will occur if <code>zp->z_vnode</code> is already set.<br />
* It is unknown if it is all right to resume ZFS execution while <code>z_vnode</code> is still <code>NULL</code>, and only to block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
* However, the fact that <code>vnop_reclaim</code> is direct and can be cleaned up immediately is very desirable. We no longer need to check for the "<code>zp</code> without <code>vp</code>" case in <code>zfs_zget()</code>.<br />
* We no longer need to lock protect <code>vnop_fsync</code> or <code>vnop_pageout</code> in case they are called from <code>vnode_create()</code>.<br />
* We don't have to throttle the '''reclaim thread''' due to the list's being massive. (Populating the list is much faster than cleaning up a <code>zp</code> node—up to 250,000 nodes in the list have been observed.)<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
A quick peek at how the two compare, just to see how much we should improve by.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. Of course, these are not ideal testing conditions, but they should serve as an indicator. <br />
<br />
The pool was created with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo zpool create -f -o ashift=12 \<br />
-O atime=off \<br />
-O casesensitivity=insensitive \<br />
-O normalization=formD \<br />
BOOM /dev/disk1<br />
</syntaxhighlight><br />
<br />
and the HFS+ file system was created with the standard OS X Disk Utility.app, with everything default (journaled, case-insensitive).<br />
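<br />
If you prefer the command line, a roughly equivalent invocation (a sketch; HFSTEST is a placeholder volume label, and this erases the whole disk) would be:<br />
<br />
<syntaxhighlight lang="bash"><br />
diskutil eraseDisk JHFS+ HFSTEST disk1<br />
</syntaxhighlight><br />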
<br />
Iozone was run in standard auto mode, i.e.:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="bash"><br />
$ gdb /Volumes/Kernelit/mach_kernel<br />
</syntaxhighlight><br />
<syntaxhighlight lang="text"><br />
(gdb) source /Volumes/KernelDebugKit/kgmacros<br />
<br />
(gdb) target remote-kdp<br />
<br />
(gdb) kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
(gdb) showallkmods<br />
</syntaxhighlight><br />
<br />
Find the addresses for ZFS and SPL modules.<br />
<br />
<code>^Z</code> to suspend gdb, or, use another terminal<br />
<br />
<syntaxhighlight lang="bash"><br />
^Z<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
$ fg # resume gdb, or go back to gdb terminal<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) set kext-symbol-file-path /tmp<br />
(gdb) add-kext /tmp/spl.kext <br />
(gdb) add-kext /tmp/zfs.kext<br />
(gdb) bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="bash"><br />
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
$ lldb /Volumes/KernelDebugKit/mach_kernel<br />
(lldb) kdp-remote 192.168.30.146<br />
(lldb) showallkmods<br />
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in GDB, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But the following use of stackshot was revealing:<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
$ less /tmp/trace.txt<br />
</syntaxhighlight><br />
<br />
Note that my hang is here:<br />
<br />
<syntaxhighlight lang="text"><br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
</syntaxhighlight><br />
<br />
It is a shame that it only shows the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another symbol file. (Alas, it cannot handle multiple symbol files. Fix this, Apple.)<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo kextstat #grab the addresses of SPL and ZFS again<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \<br />
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
</syntaxhighlight><br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
In some cases, you may suspect memory issues, for instance if you saw the following panic:<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
To debug this, you can attach GDB and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect, with over a million outstanding allocations.<br />
<br />
Reboot the kernel with <code>zlog=kalloc.256</code> added to the boot arguments (for example, appended to the <code>nvram boot-args</code> line shown in the GDB section), then we can use<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many of those came from arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the dtrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write back 8 blocks. This makes ZFS think we wrote 8 blocks, and all stats are updated accordingly. This is undesirable, since the compression ratio etc. cannot be reported correctly.<br />
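<br />
To make the amplification concrete, here is a small user-space sketch; it is an illustration only, not the actual IOKit/specfs code, and the constants and helper name are made up for the example:<br />
<br />
<syntaxhighlight lang="c"><br />
#include <stdio.h><br />
#include <stdint.h><br />
<br />
#define ZVOL_BLOCK  512     /* logical block size exposed by the zvol */<br />
#define PAGE_BYTES  4096    /* unit the layer below actually reads and writes */<br />
<br />
/* Round a 512-byte write down to the page it lives in; the whole page is<br />
 * read, modified and written back, so ZFS sees 8 blocks of I/O. */<br />
static void rmw_span(uint64_t offset, uint64_t *start, uint64_t *len)<br />
{<br />
    *start = offset & ~((uint64_t)PAGE_BYTES - 1);<br />
    *len   = PAGE_BYTES;<br />
}<br />
<br />
int main(void)<br />
{<br />
    uint64_t start, len;<br />
    uint64_t offset = 3 * ZVOL_BLOCK;   /* write one 512-byte block */<br />
<br />
    rmw_span(offset, &start, &len);<br />
    printf("512-byte write at %llu becomes a read+write of %llu bytes "<br />
           "(%llu blocks) starting at %llu\n",<br />
           (unsigned long long)offset, (unsigned long long)len,<br />
           (unsigned long long)(len / ZVOL_BLOCK),<br />
           (unsigned long long)start);<br />
    return 0;<br />
}<br />
</syntaxhighlight><br />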
<br />
This limitation is in specfs, which is applied to any block device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our own implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to <code>vnode_create()</code> because it can potentially call several vnops (fsync, pageout, reclaim), and so we have a "reclaim thread" to deal with this. One issue is that reclaim can be called both as a separate thread (periodic reclaims) and as the "calling thread" of <code>vnode_create</code>. This makes locking tricky.<br />
<br />
One idea is to create a "vnode_create thread" (with each dataset). Then, in <code>zfs_zget</code> and <code>zfs_znode_alloc</code>, which call <code>vnode_create</code>, we simply place the newly allocated <code>zp</code> on the vnode_create thread's "request list" and resume execution. Once we have passed the "unlock" part of those functions, we can wait for the vnode_create thread to complete the request, so we do not resume execution without the <code>vp</code> attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call <code>vnode_create</code> (guaranteed as a separate thread now) and once completed, mark the node done, and signal the process which might be waiting.<br />
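<br />
A rough C sketch of that handshake is below. It assumes the SPL's illumos-style primitives (<code>kmutex_t</code>, <code>kcondvar_t</code>, <code>list_t</code>, <code>kmem_zalloc</code>); the request structure and the <code>z_vnode_create_*</code> fields are invented for illustration and are not the actual branch code.<br />
<br />
<syntaxhighlight lang="c"><br />
/* Hypothetical per-request bookkeeping for the vnode_create thread. */<br />
typedef struct vnode_create_req {<br />
    list_node_t  vcr_node;   /* linkage on the per-dataset request list */<br />
    znode_t     *vcr_zp;     /* znode still waiting for its vnode */<br />
} vnode_create_req_t;<br />
<br />
/* Caller side (zfs_zget / zfs_znode_alloc): queue the zp and keep going. */<br />
static void<br />
vnode_create_enqueue(zfsvfs_t *zfsvfs, znode_t *zp)<br />
{<br />
    vnode_create_req_t *req = kmem_zalloc(sizeof (*req), KM_SLEEP);<br />
<br />
    req->vcr_zp = zp;<br />
    mutex_enter(&zfsvfs->z_vnode_create_lock);<br />
    list_insert_tail(&zfsvfs->z_vnode_create_list, req);<br />
    cv_signal(&zfsvfs->z_vnode_create_cv);          /* wake the worker */<br />
    mutex_exit(&zfsvfs->z_vnode_create_lock);<br />
}<br />
<br />
/* Worker: pop requests, call vnode_create(), then signal completion.<br />
 * (Teardown and error handling omitted from this sketch.) */<br />
static void<br />
vnode_create_worker(void *arg)<br />
{<br />
    zfsvfs_t *zfsvfs = arg;<br />
<br />
    mutex_enter(&zfsvfs->z_vnode_create_lock);<br />
    for (;;) {<br />
        vnode_create_req_t *req;<br />
<br />
        while ((req = list_remove_head(&zfsvfs->z_vnode_create_list)) == NULL)<br />
            cv_wait(&zfsvfs->z_vnode_create_cv, &zfsvfs->z_vnode_create_lock);<br />
<br />
        mutex_exit(&zfsvfs->z_vnode_create_lock);<br />
        zfs_znode_getvnode(req->vcr_zp);  /* wraps vnode_create(), sets z_vnode */<br />
        mutex_enter(&zfsvfs->z_vnode_create_lock);<br />
<br />
        /* Wake anyone blocked in zfs_znode_wait_vnode() on this zp. */<br />
        cv_broadcast(&zfsvfs->z_vnode_create_cv);<br />
        kmem_free(req, sizeof (*req));<br />
    }<br />
}<br />
</syntaxhighlight><br />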
<br />
In theory this should let us handle reclaim, fsync, and pageout in the same manner as upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to a very large number (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which could end up in <code>zfs_znode_alloc</code>, to make sure we have a <code>vp</code> attached before we resume. For example, <code>zfs_lookup</code> and <code>zfs_create</code>.<br />
<br />
----<br />
<br />
The branch '''vnode_thread''' is just this idea. It creates a vnode_create_thread per dataset, and when we need to call <code>vnode_create()</code>, it simply adds the <code>zp</code> to the list of requests, then signals the thread. The thread will call <code>vnode_create()</code> and upon completion, set <code>zp->z_vnode</code> then signal back. The requester for <code>zp</code> will sit in <code>zfs_znode_wait_vnode()</code> waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to <code>zfs_znode_wait_vnode()</code> (46 to be exact) placed at the correct location (i.e., ''after'' all the locks are released, and <code>zil_commit()</code> has been called). It is possible that this number could be decreased, as the calls to <code>zfs_zget()</code> appear not to suffer the <code>zil_commit()</code> issue, and can probably just block at the end of <code>zfs_zget()</code>. However, the calls to <code>zfs_mknode()</code> are what cause the issue.<br />
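<br />
For completeness, a minimal sketch of what the waiter could look like, again using SPL-style primitives; the lock and condvar names are the same hypothetical ones as in the sketch above, not the actual branch code:<br />
<br />
<syntaxhighlight lang="c"><br />
/*<br />
 * Block until the vnode_create thread has attached a vnode to this znode.<br />
 * Meant to be called only after all locks are dropped and zil_commit()<br />
 * has run; returns immediately if zp->z_vnode is already set.<br />
 */<br />
static void<br />
zfs_znode_wait_vnode(znode_t *zp)<br />
{<br />
    zfsvfs_t *zfsvfs = zp->z_zfsvfs;<br />
<br />
    mutex_enter(&zfsvfs->z_vnode_create_lock);<br />
    while (zp->z_vnode == NULL)<br />
        cv_wait(&zfsvfs->z_vnode_create_cv, &zfsvfs->z_vnode_create_lock);<br />
    mutex_exit(&zfsvfs->z_vnode_create_lock);<br />
}<br />
</syntaxhighlight><br />
<br />
A call site such as <code>zfs_lookup</code> or <code>zfs_create</code> would then invoke it just before returning the vnode to the caller, once all locks are dropped.<br />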
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of <code>zp</code> nodes in the list waiting for <code>vnode_create()</code> to complete. Typically, 0 or 1. Rarely higher.<br />
<br />
It appears to deadlock from time to time.<br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when <code>zfs_znode_getvnode()</code> is called. This new thread calls <code>_zfs_znode_getvnode()</code> which functions as above. Call <code>vnode_create()</code> then signal back. The same <code>zfs_znode_wait_vnode()</code> blockers exist.<br />
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of vnode_create threads we have started. Interestingly, these remain 0 or 1, rarely higher.<br />
<br />
It has not yet deadlocked.<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have <code>zfs_znode_wait_vnode()</code> placed all over the source, and care needs to be taken with each one, although it does not hurt to call it in excess, since no wait will happen if <code>zp->z_vnode</code> is already set.<br />
* It is unknown if it is all right to resume ZFS execution while <code>z_vnode</code> is still <code>NULL</code>, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, the fact that <code>vnop_reclaim</code> is direct and can be cleaned up immediately is very desirable. We no longer need to check for the "<code>zp</code> without <code>vp</code>" case in <code>zfs_zget()</code>.<br />
* We no longer need to lock protect <code>vnop_fsync</code> or <code>vnop_pageout</code> in case they are called from <code>vnode_create()</code>.<br />
* We don't have to throttle the '''reclaim thread''' due to the list's being massive. (Populating the list is much faster than cleaning up a <code>zp</code> node—up to 250,000 nodes in the list have been observed.)<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
A quick peek at how HFS+ and ZFS compare, just to see how much we need to improve.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. Admittedly this is not an ideal test setup, but it should serve as an indicator. The pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and HFS+ was created with the standard OS X Disk Utility, with everything left at the defaults (Journaled).<br />
<br />
IOzone was run in its standard automatic mode, i.e.:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T06:23:06Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="bash"><br />
$ gdb /Volumes/Kernelit/mach_kernel<br />
</syntaxhighlight><br />
<syntaxhighlight lang="text"><br />
(gdb) source /Volumes/KernelDebugKit/kgmacros<br />
<br />
(gdb) target remote-kdp<br />
<br />
(gdb) kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
(gdb) showallkmods<br />
</syntaxhighlight><br />
<br />
Find the addresses for ZFS and SPL modules.<br />
<br />
<code>^Z</code> to suspend gdb, or, use another terminal<br />
<br />
<syntaxhighlight lang="bash"><br />
^Z<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
$ fg # resume gdb, or go back to gdb terminal<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) set kext-symbol-file-path /tmp<br />
(gdb) add-kext /tmp/spl.kext <br />
(gdb) add-kext /tmp/zfs.kext<br />
(gdb) bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="bash"><br />
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
$ lldb /Volumes/KernelDebugKit/mach_kernel<br />
(lldb) kdp-remote 192.168.30.146<br />
(lldb) showallkmods<br />
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in GDB, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But this was revealing:<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
$ less /tmp/trace.txt<br />
</syntaxhighlight><br />
<br />
Note that my hang is here:<br />
<br />
<syntaxhighlight lang="text"><br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
</syntaxhighlight><br />
<br />
It is a shame that it only shows the kernel symbols, and not inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbols files. Fix this Apple.)<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo kextstat #grab the addresses of SPL and ZFS again<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \<br />
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
</syntaxhighlight><br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
In some cases, you may suspect memory issues, for instance if you saw the following panic:<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
To debug this, you can attach GDB and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
Reboot kernel with zlog=kalloc.256 on the command line, then we can use<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html BrendanGregg] for so much of the dtrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block size of 512 and 4096 in ZFS. And 512 is handled poorly. To write a single 512 block, IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read) modify the buffer, then write 8 blocks. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratio etc cannot be reported correctly.<br />
<br />
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create a secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to <code>vnode_create()</code> due to the fact that it can potentially call several vnops (fsync, pageout, reclaim), and so we have a '''reclaim thread''' to deal with this. One issue is that reclaim can be called both as a separate thread (periodic reclaims) and as the "calling thread" of <code>vnode_create</code>. This makes locking tricky.<br />
<br />
One idea is we create a '''vnode_create thread''' (with each dataset). Then in <code>zfs_zget</code> and <code>zfs_znode_alloc</code>, which call <code>vnode_create</code>, we simply place the newly allocated <code>zp</code> on the vnode_create thread's "request list," and resume execution. Once we have passed the "unlock" part of the functions, we can wait for the vnode_create thread to complete the request so we do not resume execution without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call <code>vnode_create</code> (guaranteed as a separate thread now) and once completed, mark the node done, and signal the process which might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, and pageout in the same manner as upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to a very large number (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which could end up in <code>zfs_znode_alloc</code>, to make sure we have a <code>vp</code> attached before we resume. For example, <code>zfs_lookup</code> and <code>zfs_create</code>.<br />
<br />
----<br />
<br />
The branch '''vnode_thread''' is just this idea. It creates a vnode_create_thread per dataset, and when we need to call <code>vnode_create()</code>, it simply adds the <code>zp</code> to the list of requests, then signals the thread. The thread will call <code>vnode_create()</code> and upon completion, set <code>zp->z_vnode</code> then signal back. The requester for <code>zp</code> will sit in <code>zfs_znode_wait_vnode()</code> waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to <code>zfs_znode_wait_vnode()</code> (46 to be exact) placed at the correct location (i.e, '''after''' all the locks are released, and <code>zil_commit()</code> has been called). It is possible that this number could be decreased, as the calls to <code>zfs_zget()</code> appear not to suffer the <code>zil_commit()</code> issue, and can probably just block at the end of <code>zfs_zget()</code>. However, the calls to <code>zfs_mknode()</code> are what cause the issue.<br />
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of <code>zp</code> nodes in the list waiting for <code>vnode_create()</code> to complete. Typically, 0 or 1. Rarely higher.<br />
<br />
It appears to deadlock from time to time.<br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when <code>zfs_znode_getvnode()</code> is called. This new thread calls <code>_zfs_znode_getvnode()</code> which functions as above. Call <code>vnode_create()</code> then signal back. The same <code>zfs_znode_wait_vnode()</code> blockers exist.<br />
<br />
<code>sysctl zfs.vnode_create_list</code> tracks the number of vnode_create threads we have started. Interestingly, these remain 0 or 1, rarely higher.<br />
<br />
It has not yet deadlocked.<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have <code>zfs_znode_wait_vnode()</code> placed all over the source, and care needs to be taken for each one. Although it does not hurt to call it in excess, as no wait will happen if <code>zp->z_vnode</code> is already set. <br />
* It is unknown if it is all right to resume ZFS execution while <code>z_vnode</code> is still <code>NULL</code>, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, the fact that <code>vnop_reclaim</code> is direct and can be cleaned up immediately is very desirable. We no longer need to check for the "<code>zp</code> without <code>vp</code>" case in <code>zfs_zget()</code>.<br />
* We no longer need to lock protect <code>vnop_fsync</code> or <code>vnop_pageout</code> in case they are called from <code>vnode_create()</code>.<br />
* We don't have to throttle the '''reclaim thread''' due to the list's being massive. (Populating the list is much faster than cleaning up a <code>zp</code> node—up to 250,000 nodes in the list have been observed.)<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
Quick peek at how they compare, just to see how much we should improve it by.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. No, this is not ideal testing specs, but should serve as an indicator. The <br />
pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and HFS+ created with standard OS X Disk Utility, with everything default. (Journaled).<br />
<br />
Iozone was run with standard automode, ie:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T05:48:18Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="bash"><br />
$ gdb /Volumes/Kernelit/mach_kernel<br />
</syntaxhighlight><br />
<syntaxhighlight lang="text"><br />
(gdb) source /Volumes/KernelDebugKit/kgmacros<br />
<br />
(gdb) target remote-kdp<br />
<br />
(gdb) kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
(gdb) showallkmods<br />
</syntaxhighlight><br />
<br />
Find the addresses for ZFS and SPL modules.<br />
<br />
<code>^Z</code> to suspend gdb, or, use another terminal<br />
<br />
<syntaxhighlight lang="bash"><br />
^Z<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
$ fg # resume gdb, or go back to gdb terminal<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) set kext-symbol-file-path /tmp<br />
(gdb) add-kext /tmp/spl.kext <br />
(gdb) add-kext /tmp/zfs.kext<br />
(gdb) bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="bash"><br />
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
$ lldb /Volumes/KernelDebugKit/mach_kernel<br />
(lldb) kdp-remote 192.168.30.146<br />
(lldb) showallkmods<br />
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in GDB, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But this was revealing:<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
$ less /tmp/trace.txt<br />
</syntaxhighlight><br />
<br />
Note that my hang is here:<br />
<br />
<syntaxhighlight lang="text"><br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
</syntaxhighlight><br />
<br />
It is a shame that it only shows the kernel symbols, and not inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbols files. Fix this Apple.)<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo kextstat #grab the addresses of SPL and ZFS again<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel \<br />
-e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
</syntaxhighlight><br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
In some cases, you may suspect memory issues, for instance if you saw the following panic:<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
To debug this, you can attach GDB and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
Reboot kernel with zlog=kalloc.256 on the command line, then we can use<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html BrendanGregg] for so much of the dtrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block size of 512 and 4096 in ZFS. And 512 is handled poorly. To write a single 512 block, IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read) modify the buffer, then write 8 blocks. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratio etc cannot be reported correctly.<br />
<br />
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create a secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to vnode_create() due to the possibility that it calls several vnops (fsync, pageout, reclaim) and have a reclaim thread to deal with that. One issue is reclaim can both be called as a separate thread (periodic reclaims) and as the ''calling thread'' of vnode_create. This makes locking tricky.<br />
<br />
One idea is we create a vnode_create thread (with each dataset). The in zfs_zget and zfs_znode_alloc, which calls vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'', and resume execution. Once we have passed the "unlock" part of the functions, we can wait for the vnode_create thread to complete the request so we do not resume execution without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call vnode_create (guaranteed as a separate thread now) and once completed, mark the node done, and signal the process which might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, pageout as normal upstream ZFS. no special cases required. This should alleviate the current situation where the reclaim_list grows to very large numbers (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which might end up in zfs_znode_alloc, to make sure we have a vp attached before we resume. For example, zfs_lookup and zfs_create.<br />
<br />
----<br />
<br />
The branch '''vnode_thread''' is just this idea, it creates a vnode_create_thread per dataset, when we need to call ''vnode_create()'' it simply adds the '''zp''' to the list of requests, then signals the thread. The thread will call ''vnode_create()'' and upon completion, set '''zp->z_vnode''' then signal back. The requester for '''zp''' will sit in ''zfs_znode_wait_vnode()'' waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46 to be exact) placed at the correct location. Ie, '''after''' all the locks are released, and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls to ''zfs_zget()'' appear to not suffer the ''zil_commit()'' issue, and can probably just block at the end of ''zfs_zget()''. However the calls to ''zfs_mknode()'' is what causes the issue.<br />
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically, 0, or 1. Rarely higher.<br />
<br />
Appears to deadlock from time to time. <br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'' which functions as above. Call ''vnode_create()'' then signal back. The same ''zfs_znode_wait_vnode()'' blockers exist.<br />
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''vnode_create threads''' we have started. Interestingly, these remain 0, or 1. Rarely higher.<br />
<br />
Has not yet deadlocked.<br />
<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have ''zfs_znode_wait_vnode()'' placed all over the source, and care needs to be taken for each one. Although it does not hurt to call it in excess, as no wait will happen if '''zp->z_vnode''' is already set. <br />
* It is unknown if it is OK to resume ZFS execution while '''z_vnode''' is still NULL, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, that '''vnop_reclaim''' are direct and can be cleaned up immediately is very desirable. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. <br />
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.<br />
* We don't have to throttle the '''reclaim thread''' due to the list being massive (populating the list is much faster than cleaning up a '''zp''' node - up to 250,000 nodes in the list has been observed).<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
Quick peek at how they compare, just to see how much we should improve it by.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. No, this is not ideal testing specs, but should serve as an indicator. The <br />
pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and HFS+ created with standard OS X Disk Utility, with everything default. (Journaled).<br />
<br />
Iozone was run with standard automode, ie:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T05:43:54Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="bash"><br />
$ gdb /Volumes/Kernelit/mach_kernel<br />
</syntaxhighlight><br />
<syntaxhighlight lang="text"><br />
(gdb) source /Volumes/KernelDebugKit/kgmacros<br />
<br />
(gdb) target remote-kdp<br />
<br />
(gdb) kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
(gdb) showallkmods<br />
</syntaxhighlight><br />
<br />
Find the addresses for ZFS and SPL modules.<br />
<br />
<code>^Z</code> to suspend gdb, or, use another terminal<br />
<br />
<syntaxhighlight lang="bash"><br />
^Z<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
$ fg # resume gdb, or go back to gdb terminal<br />
</syntaxhighlight><br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) set kext-symbol-file-path /tmp<br />
(gdb) add-kext /tmp/spl.kext <br />
(gdb) add-kext /tmp/zfs.kext<br />
(gdb) bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="bash"><br />
$ echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
$ lldb /Volumes/KernelDebugKit/mach_kernel<br />
(lldb) kdp-remote 192.168.30.146<br />
(lldb) showallkmods<br />
(lldb) addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
(lldb) addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in GDB, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But this was revealing:<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
$ sudo symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
$ less /tmp/trace.txt<br />
</syntaxhighlight><br />
<br />
Note that my hang is here:<br />
<br />
<syntaxhighlight lang="text"><br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
</syntaxhighlight><br />
<br />
It is a shame that it only shows the kernel symbols, and not inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbols files. Fix this Apple.)<br />
<br />
<syntaxhighlight lang="bash"><br />
$ sudo kextstat #grab the addresses of SPL and ZFS again<br />
$ sudo kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
</syntaxhighlight><br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
In some cases, you may suspect memory issues, for instance if you saw the following panic:<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
To debug this, you can attach GDB and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
Reboot kernel with zlog=kalloc.256 on the command line, then we can use<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html BrendanGregg] for so much of the dtrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block size of 512 and 4096 in ZFS. And 512 is handled poorly. To write a single 512 block, IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read) modify the buffer, then write 8 blocks. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since compression ratio etc cannot be reported correctly.<br />
<br />
This limitation is in specfs, which is applied to any BLK device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create a secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to vnode_create() due to the possibility that it calls several vnops (fsync, pageout, reclaim) and have a reclaim thread to deal with that. One issue is reclaim can both be called as a separate thread (periodic reclaims) and as the ''calling thread'' of vnode_create. This makes locking tricky.<br />
<br />
One idea is that we create a vnode_create thread (with each dataset). Then, in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the "unlock" part of those functions, we can wait for the vnode_create thread to complete the request, so that we do not resume execution without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call vnode_create (guaranteed as a separate thread now) and once completed, mark the node done, and signal the process which might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, and pageout as in normal upstream ZFS, with no special cases required. It should also alleviate the current situation where the reclaim_list grows to very large numbers (230,000 nodes observed).<br />
<br />
It might mean we need to be careful in any function which might end up in zfs_znode_alloc (for example, zfs_lookup and zfs_create), to make sure we have a vp attached before we resume. A sketch of the idea follows.<br />
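<br />
Here it is as a userland pthreads model of the pattern. The names ''vc_list'', ''zfs_znode_request_vnode()'' and ''do_vnode_create()'' are illustrative stand-ins only (in the real code the worker would call ''vnode_create()'' and fill in '''zp->z_vnode'''); ''zfs_znode_wait_vnode()'' is the blocking call discussed below:<br />
<br />
<syntaxhighlight lang="c"><br />
#include <pthread.h><br />
#include <stddef.h><br />
<br />
/* Stand-in types; in the real code these are the OS X vnode and the ZFS znode. */<br />
typedef struct vnode vnode_t;<br />
typedef struct znode {<br />
    vnode_t            *z_vnode;        /* filled in by the worker thread */<br />
    int                 z_vnode_ready;<br />
    pthread_mutex_t     z_lock;         /* assumed initialised when the zp is allocated */<br />
    pthread_cond_t      z_cv;<br />
    struct znode       *z_next;         /* request list linkage */<br />
} znode_t;<br />
<br />
static znode_t         *vc_list;        /* pending requests (per dataset in the proposal) */<br />
static pthread_mutex_t  vc_lock = PTHREAD_MUTEX_INITIALIZER;<br />
static pthread_cond_t   vc_cv   = PTHREAD_COND_INITIALIZER;<br />
<br />
extern vnode_t *do_vnode_create(znode_t *zp);   /* stands in for vnode_create() */<br />
<br />
/* Called from the zfs_zget()/zfs_mknode() paths: queue the zp and keep going. */<br />
void<br />
zfs_znode_request_vnode(znode_t *zp)<br />
{<br />
    pthread_mutex_lock(&vc_lock);<br />
    zp->z_next = vc_list;<br />
    vc_list = zp;<br />
    pthread_cond_signal(&vc_cv);<br />
    pthread_mutex_unlock(&vc_lock);<br />
}<br />
<br />
/* Called once all locks are dropped (and zil_commit() is done): wait for the vp. */<br />
void<br />
zfs_znode_wait_vnode(znode_t *zp)<br />
{<br />
    pthread_mutex_lock(&zp->z_lock);<br />
    while (!zp->z_vnode_ready)<br />
        pthread_cond_wait(&zp->z_cv, &zp->z_lock);<br />
    pthread_mutex_unlock(&zp->z_lock);<br />
}<br />
<br />
/* The per-dataset worker: pop requests and do the actual vnode_create(). */<br />
void *<br />
vnode_create_thread(void *arg)<br />
{<br />
    (void)arg;<br />
    for (;;) {<br />
        pthread_mutex_lock(&vc_lock);<br />
        while (vc_list == NULL)<br />
            pthread_cond_wait(&vc_cv, &vc_lock);<br />
        znode_t *zp = vc_list;<br />
        vc_list = zp->z_next;<br />
        pthread_mutex_unlock(&vc_lock);<br />
<br />
        vnode_t *vp = do_vnode_create(zp);   /* may re-enter fsync/pageout/reclaim */<br />
<br />
        pthread_mutex_lock(&zp->z_lock);<br />
        zp->z_vnode = vp;<br />
        zp->z_vnode_ready = 1;<br />
        pthread_cond_signal(&zp->z_cv);<br />
        pthread_mutex_unlock(&zp->z_lock);<br />
    }<br />
}<br />
</syntaxhighlight><br />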
<br />
----<br />
<br />
The branch '''vnode_thread''' implements just this idea. It creates a vnode_create_thread per dataset; when we need to call ''vnode_create()'', we simply add the '''zp''' to the list of requests, then signal the thread. The thread will call ''vnode_create()'' and, upon completion, set '''zp->z_vnode''' and signal back. The requester for '''zp''' will sit in ''zfs_znode_wait_vnode()'' waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46 to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls from ''zfs_zget()'' appear not to suffer the ''zil_commit()'' issue and can probably just block at the end of ''zfs_zget()''. However, it is the calls from ''zfs_mknode()'' that cause the issue.<br />
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. It is typically 0 or 1, rarely higher.<br />
<br />
Appears to deadlock from time to time. <br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: it calls ''vnode_create()'', then signals back. The same ''zfs_znode_wait_vnode()'' blockers exist. A sketch of this variant follows.<br />
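<br />
Sketched in the same userland model, reusing the ''znode_t'' and ''do_vnode_create()'' stand-ins from above (again illustrative only; the real thread would call ''vnode_create()''):<br />
<br />
<syntaxhighlight lang="c"><br />
/* One short-lived thread per request instead of a permanent worker. */<br />
static void *<br />
_zfs_znode_getvnode(void *arg)<br />
{<br />
    znode_t *zp = arg;<br />
    vnode_t *vp = do_vnode_create(zp);        /* the actual vnode_create() call */<br />
<br />
    pthread_mutex_lock(&zp->z_lock);<br />
    zp->z_vnode = vp;<br />
    zp->z_vnode_ready = 1;<br />
    pthread_cond_signal(&zp->z_cv);           /* wakes zfs_znode_wait_vnode() */<br />
    pthread_mutex_unlock(&zp->z_lock);<br />
    return (NULL);<br />
}<br />
<br />
void<br />
zfs_znode_getvnode(znode_t *zp)<br />
{<br />
    pthread_t t;<br />
<br />
    pthread_create(&t, NULL, _zfs_znode_getvnode, zp);<br />
    pthread_detach(t);                        /* the caller continues, then waits */<br />
}<br />
</syntaxhighlight><br />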
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''vnode_create threads''' we have started. Interestingly, this also remains 0 or 1, rarely higher.<br />
<br />
Has not yet deadlocked.<br />
<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have ''zfs_znode_wait_vnode()'' placed all over the source, and care needs to be taken with each call site, although it does not hurt to call it in excess, as no wait will happen if '''zp->z_vnode''' is already set. <br />
* It is unknown if it is OK to resume ZFS execution while '''z_vnode''' is still NULL, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, the fact that '''vnop_reclaim''' calls are direct and can be cleaned up immediately is very desirable. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. <br />
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.<br />
* We don't have to throttle the '''reclaim thread''' due to the list being massive (populating the list is much faster than cleaning up a '''zp''' node - up to 250,000 nodes in the list have been observed).<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
A quick peek at how HFS+ and ZFS compare, just to see how much we need to improve.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. This is not an ideal test setup, but it should serve as an indicator. The pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and the HFS+ volume was created with the standard OS X Disk Utility, with everything at the defaults (Journaled).<br />
<br />
Iozone was run in standard auto mode, i.e.:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T05:13:08Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="text"><br />
gdb /Volumes/KernelDebugKit/mach_kernel<br />
source /Volumes/KernelDebugKit/kgmacros<br />
target remote-kdp<br />
<br />
kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
showallkmods<br />
(find "address" for zfs and spl modules)<br />
^Z # suspend gdb, or, use another terminal<br />
<br />
kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
fg # resume gdb, or go back to gdb terminal<br />
set kext-symbol-file-path /tmp<br />
<br />
add-kext /tmp/spl.kext <br />
add-kext /tmp/zfs.kext<br />
<br />
bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="text"><br />
echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
lldb /Volumes/KernelDebugKit/mach_kernel<br />
kdp-remote 192.168.30.146<br />
showallkmods<br />
addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in gdb, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But this was revealing:<br />
<br />
 # /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
 # symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
 # less /tmp/trace.txt<br />
<br />
Note that my hang is here:<br />
<br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
It is a shame that it only shows the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbol files. Fix this, Apple.)<br />
<br />
Run kextstat and grab the addresses of spl and zfs again, then:<br />
<br />
 # kextstat<br />
 # kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
 # symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
 # symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
If you suspect memory issues, for example from a panic like<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
you can attach gdb and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
Reboot the kernel with zlog=kalloc.256 added to the boot arguments.<br />
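<br />
One way to set that (a sketch, assuming you also want to keep the debug boot-args used for panic debugging; adjust the flags to taste):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Enable zone allocation logging for the suspect zone on the next boot<br />
sudo nvram boot-args="-v keepsyms=y debug=0x144 zlog=kalloc.256"<br />
</syntaxhighlight><br />
<br />
With the zone logging active, we can then use<br />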
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the DTrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
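<br />
The stackcollapse.pl and flamegraph.pl scripts are part of Brendan Gregg's FlameGraph tools; if you do not already have them, one way to fetch them (run from whatever working directory you prefer) is:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Fetch the FlameGraph helper scripts used above<br />
git clone https://github.com/brendangregg/FlameGraph.git<br />
cd FlameGraph<br />
</syntaxhighlight><br />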
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''Bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write the 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since statistics such as the compression ratio cannot be reported correctly.<br />
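<br />
As a worked example of the arithmetic: with a 4096-byte PAGE_SIZE, a 512-byte block size means each page-sized I/O spans 4096 / 512 = 8 ZFS blocks, while a 4096-byte block size maps one-to-one. A sketch of creating the two variants for comparison (the volume names and sizes are illustrative, assuming the BOOM pool used elsewhere on this page):<br />
<br />
<syntaxhighlight lang="bash"><br />
# volblocksize=512: every PAGE_SIZE (4096-byte) I/O touches 8 ZFS blocks<br />
sudo zfs create -V 1g -o volblocksize=512 BOOM/vol512<br />
<br />
# volblocksize=4096: one ZFS block per page, avoiding the read-modify-write<br />
sudo zfs create -V 1g -o volblocksize=4096 BOOM/vol4k<br />
</syntaxhighlight><br />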
<br />
This limitation is in specfs, which is applied to any block device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to vnode_create() due to the possibility that it calls several vnops (fsync, pageout, reclaim), and we have a reclaim thread to deal with that. One issue is that reclaim can be called both from a separate thread (periodic reclaims) and from the ''calling thread'' of vnode_create. This makes locking tricky.<br />
<br />
One idea is to create a vnode_create thread (one per dataset). Then in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the "unlock" part of those functions, we wait for the vnode_create thread to complete the request, so that we do not return without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call vnode_create (now guaranteed to be a separate thread), and once it completes, mark the node done and signal the process that might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, and pageout as in normal upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to very large numbers (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which might end up in zfs_znode_alloc, to make sure we have a vp attached before we resume. For example, zfs_lookup and zfs_create.<br />
<br />
----<br />
<br />
The branch '''vnode_thread''' implements just this idea: it creates a vnode_create_thread per dataset, and when we need to call ''vnode_create()'' it simply adds the '''zp''' to the list of requests, then signals the thread. The thread will call ''vnode_create()'' and, upon completion, set '''zp->z_vnode''' and signal back. The requester for '''zp''' will sit in ''zfs_znode_wait_vnode()'' waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46 to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls reached via ''zfs_zget()'' appear not to suffer the ''zil_commit()'' issue and could probably just block at the end of ''zfs_zget()''. However, the calls reached via ''zfs_mknode()'' are what cause the issue.<br />
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically 0 or 1, rarely higher.<br />
<br />
Appears to deadlock from time to time. <br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: call ''vnode_create()'', then signal back. The same ''zfs_znode_wait_vnode()'' blockers exist.<br />
<br />
'''sysctl zfs.vnode_create_list''' here tracks the number of '''vnode_create threads''' we have started. Interestingly, this also remains 0 or 1, rarely higher.<br />
<br />
Has not yet deadlocked.<br />
<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have ''zfs_znode_wait_vnode()'' placed all over the source, and care needs to be taken for each one. Although it does not hurt to call it in excess, as no wait will happen if '''zp->z_vnode''' is already set. <br />
* It is unknown if it is OK to resume ZFS execution while '''z_vnode''' is still NULL, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, the fact that '''vnop_reclaim''' calls are direct and can be cleaned up immediately is very desirable. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. <br />
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.<br />
* We don't have to throttle the '''reclaim thread''' due to the list being massive (populating the list is much faster than cleaning up a '''zp''' node; up to 250,000 nodes in the list have been observed).<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
A quick peek at how HFS+ and ZFS compare, just to see how much we should improve by.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. This is not an ideal test setup, but it should serve as an indicator. The pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and the HFS+ volume was created with the standard OS X Disk Utility, with everything left at the defaults (Journaled).<br />
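<br />
For reference, a roughly equivalent command-line way to create such a Journaled HFS+ volume (a sketch; the disk identifier and volume name are illustrative, not taken from the test setup) would be:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Erase the disk as Journaled HFS+ with a GPT partition map and default options<br />
diskutil eraseDisk JHFS+ HFSTEST GPT /dev/disk2<br />
</syntaxhighlight><br />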
<br />
Iozone was run in standard auto mode, i.e.:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/DevelopmentDevelopment2014-04-11T05:08:28Z<p>50.168.32.57: /* Memory leaks */</p>
<hr />
<div>[[Category:O3X development]]<br />
== Kernel ==<br />
=== Debugging with GDB ===<br />
<br />
Dealing with [[Panic|panics]].<br />
<br />
Apple's documentation: https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptDebugger/debug_tutorial.html<br />
<br />
Boot the target VM with<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo nvram boot-args="-v keepsyms=y debug=0x144"<br />
</syntaxhighlight><br />
<br />
Make it panic.<br />
<br />
On your development machine, you will need the Kernel Debug Kit. Download it from Apple [https://developer.apple.com/downloads/index.action?q=Kernel%20Debug%20Kit here].<br />
<br />
<syntaxhighlight lang="text"><br />
gdb /Volumes/KernelDebugKit/mach_kernel<br />
source /Volumes/KernelDebugKit/kgmacros<br />
target remote-kdp<br />
<br />
kdp-reattach 192.168.30.133 # obviously use the IP of your target / crashed VM<br />
<br />
showallkmods<br />
(find "address" for zfs and spl modules)<br />
^Z # suspend gdb, or, use another terminal<br />
<br />
kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/<br />
<br />
fg # resume gdb, or go back to gdb terminal<br />
set kext-symbol-file-path /tmp<br />
<br />
add-kext /tmp/spl.kext <br />
add-kext /tmp/zfs.kext<br />
<br />
bt<br />
</syntaxhighlight><br />
<br />
=== Debugging with LLDB ===<br />
<br />
<syntaxhighlight lang="text"><br />
echo "settings set target.load-script-from-symbol-file true" >> ~/.lldbinit<br />
lldb /Volumes/KernelDebugKit/mach_kernel<br />
kdp-remote 192.168.30.146<br />
showallkmods<br />
addkext -F /tmp/spl.kext/Contents/MacOS/spl 0xffffff7f8ebb0000 (Address from showallkmods)<br />
addkext -F /tmp/zfs.kext/Contents/MacOS/zfs 0xffffff7f8ebbf000<br />
</syntaxhighlight><br />
<br />
Then follow the guide for GDB above.<br />
<br />
=== Non-panic ===<br />
<br />
If you prefer to work in gdb, you can always panic a kernel with<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo dtrace -w -n "BEGIN{ panic();}"<br />
</syntaxhighlight><br />
<br />
But for debugging a hang without a panic, a stackshot proved revealing:<br />
<br />
# /usr/libexec/stackshot -i -f /tmp/stackshot.log <br />
# symstacks.rb -f /tmp/stackshot.log -s -w /tmp/trace.txt<br />
# less /tmp/trace.txt<br />
Note that my hang is here:<br />
<br />
PID: 156<br />
Process: zpool<br />
Thread ID: 0x4e2<br />
Thread state: 0x9 == TH_WAIT |TH_UNINT <br />
Thread wait_event: 0xffffff8006608a6c<br />
Kernel stack: <br />
machine_switch_context (in mach_kernel) + 366 (0xffffff80002b3d3e)<br />
0xffffff800022e711 (in mach_kernel) + 1281 (0xffffff800022e711)<br />
thread_block_reason (in mach_kernel) + 300 (0xffffff800022d9dc)<br />
lck_mtx_sleep (in mach_kernel) + 78 (0xffffff80002265ce)<br />
0xffffff8000569ef6 (in mach_kernel) + 246 (0xffffff8000569ef6)<br />
msleep (in mach_kernel) + 116 (0xffffff800056a2e4)<br />
0xffffff7f80e52a76 (0xffffff7f80e52a76)<br />
0xffffff7f80e53fae (0xffffff7f80e53fae)<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
0xffffff7f80f2bb4e (0xffffff7f80f2bb4e)<br />
0xffffff7f80f1a9b7 (0xffffff7f80f1a9b7)<br />
0xffffff7f80f1b65f (0xffffff7f80f1b65f)<br />
0xffffff7f80f042ee (0xffffff7f80f042ee)<br />
0xffffff7f80f45c5b (0xffffff7f80f45c5b)<br />
0xffffff7f80f4ce92 (0xffffff7f80f4ce92)<br />
spec_ioctl (in mach_kernel) + 157 (0xffffff8000320bfd)<br />
VNOP_IOCTL (in mach_kernel) + 244 (0xffffff8000311e84)<br />
It is a shame that it only resolves the kernel symbols, and not those inside SPL and ZFS, but we can ask it to load another sym file. (Alas, it cannot handle multiple symbol files at once. Fix this, Apple.)<br />
<br />
# kextstat<br />
# Grab the addresses of spl and zfs again<br />
# kextutil -s /tmp -n -k /Volumes/KernelDebugKit/mach_kernel -e -r /Volumes/KernelDebugKit module/zfs/zfs.kext/ ../spl/module/spl/spl.kext/ <br />
<br />
# symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.spl.sym<br />
0xffffff800056a2e4 (0xffffff800056a2e4)<br />
spl_cv_wait (in net.lundman.spl.sym) + 54 (0xffffff7f80e52a76)<br />
taskq_wait (in net.lundman.spl.sym) + 78 (0xffffff7f80e53fae)<br />
taskq_destroy (in net.lundman.spl.sym) + 35 (0xffffff7f80e54173)<br />
0xffffff7f80f1a870 (0xffffff7f80f1a870)<br />
<br />
# symstacks.rb -f /tmp/stackshot.log -s -k /tmp/net.lundman.zfs.sym<br />
0xffffff7f80e54173 (0xffffff7f80e54173)<br />
vdev_open_children (in net.lundman.zfs.sym) + 336 (0xffffff7f80f1a870)<br />
vdev_root_open (in net.lundman.zfs.sym) + 94 (0xffffff7f80f2bb4e)<br />
vdev_open (in net.lundman.zfs.sym) + 311 (0xffffff7f80f1a9b7)<br />
vdev_create (in net.lundman.zfs.sym) + 31 (0xffffff7f80f1b65f)<br />
spa_create (in net.lundman.zfs.sym) + 878 (0xffffff7f80f042ee)<br />
<br />
Voilà!<br />
<br />
=== Memory leaks ===<br />
<br />
If you suspect memory issues, for example from a panic like<br />
<br />
<syntaxhighlight lang="text"><br />
panic(cpu 1 caller 0xffffff80002438d8): "zalloc: \"kalloc.1024\" (100535 elements) retry fail 3, kfree_nop_count: 0"@/SourceCache/xnu/xnu-2050.7.9/osfmk/kern/zalloc.c:1826<br />
</syntaxhighlight><br />
<br />
you can attach gdb and use the zprint command:<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) zprint<br />
ZONE COUNT TOT_SZ MAX_SZ ELT_SZ ALLOC_SZ TOT_ALLOC TOT_FREE NAME<br />
0xffffff8002a89250 1620133 18c1000 22a3599 16 1000 125203838 123583705 kalloc.16 CX<br />
0xffffff8006306c50 110335 35f000 4ce300 32 1000 13634985 13524650 kalloc.32 CX<br />
0xffffff8006306a00 133584 82a000 e6a900 64 1000 26510120 26376536 kalloc.64 CX<br />
0xffffff80063067b0 610090 4a84000 614f4c0 128 1000 50524515 49914425 kalloc.128 CX<br />
0xffffff8006306560 1070398 121a2000 1b5e4d60 256 1000 72534632 71464234 kalloc.256 CX<br />
0xffffff8006306310 399302 d423000 daf26b0 512 1000 39231204 38831902 kalloc.512 CX<br />
0xffffff80063060c0 100404 6231000 c29e980 1024 1000 22949693 22849289 kalloc.1024 CX<br />
0xffffff8006305e70 292 9a000 200000 2048 1000 77633725 77633433 kalloc.2048 CX<br />
</syntaxhighlight><br />
<br />
In this case, kalloc.256 is suspect.<br />
<br />
Reboot the kernel with zlog=kalloc.256 added to the boot arguments.<br />
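<br />
One way to set that (a sketch, assuming you also want to keep the debug boot-args used for panic debugging; adjust the flags to taste):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Enable zone allocation logging for the suspect zone on the next boot<br />
sudo nvram boot-args="-v keepsyms=y debug=0x144 zlog=kalloc.256"<br />
</syntaxhighlight><br />
<br />
With the zone logging active, we can then use<br />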
<br />
<syntaxhighlight lang="text"><br />
(gdb) findoldest <br />
oldest record is at log index 393:<br />
<br />
--------------- ALLOC 0xffffff803276ec00 : index 393 : ztime 21643824 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
and indeed, list any index<br />
<br />
(gdb) zstack 394<br />
<br />
--------------- ALLOC 0xffffff8032d60700 : index 394 : ztime 21648810 -------------<br />
0xffffff800024352e <zalloc_canblock+78>: mov %eax,-0xcc(%rbp)<br />
0xffffff80002245bd <get_zone_search+23>: jmpq 0xffffff80002246d8 <KALLOC_ZINFO_SALLOC+35><br />
0xffffff8000224c39 <OSMalloc+89>: mov %rax,-0x18(%rbp)<br />
0xffffff7f80e847df <zfs_kmem_alloc+15>: mov %rax,%r15<br />
0xffffff7f80e90649 <arc_buf_alloc+41>: mov %rax,-0x28(%rbp)<br />
How many times was zfs_kmem_alloc involved in the leaked allocs?<br />
<br />
(gdb) countpcs 0xffffff7f80e847df<br />
occurred 3999 times in log (100% of records)<br />
</syntaxhighlight><br />
<br />
At least we know it is our fault.<br />
<br />
How many times is it arc_buf_alloc?<br />
<br />
<syntaxhighlight lang="text"><br />
(gdb) countpcs 0xffffff7f80e90649<br />
occurred 2390 times in log (59% of records)<br />
</syntaxhighlight><br />
<br />
== Flamegraphs ==<br />
<br />
Huge thanks to [http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html Brendan Gregg] for so much of the DTrace magic.<br />
<br />
dtrace the kernel while running a command:<br />
<br />
dtrace -x stackframes=100 -n 'profile-997 /arg0/ {<br />
@[stack()] = count(); } tick-60s { exit(0); }' -o out.stacks<br />
<br />
It will run for 60 seconds.<br />
<br />
Convert it to a flamegraph:<br />
<br />
./stackcollapse.pl out.stacks > out.folded<br />
./flamegraph.pl out.folded > out.svg<br />
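<br />
The stackcollapse.pl and flamegraph.pl scripts are part of Brendan Gregg's FlameGraph tools; if you do not already have them, one way to fetch them (run from whatever working directory you prefer) is:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Fetch the FlameGraph helper scripts used above<br />
git clone https://github.com/brendangregg/FlameGraph.git<br />
cd FlameGraph<br />
</syntaxhighlight><br />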
<br />
<br />
This is '''rsync -a /usr/ /BOOM/deletea/''' running:<br />
<br />
[[File:rsyncflamegraph.svg|thumb|rsync flamegraph]]<br />
<br />
<br />
Or running '''bonnie++''' in various stages:<br />
<br />
<gallery mode="packed-hover"><br />
File:create.svg|Create files in sequential order|alt=[[File:create.svg]]<br />
File:stat.svg|Stat files in sequential order|alt=Stat files in sequential order<br />
File:delete.svg|Delete files in sequential order|alt=Delete files in sequential order<br />
</gallery><br />
<br />
<br />
== ZVOL block size ==<br />
<br />
At the moment, we can only handle block sizes of 512 and 4096 in ZFS, and 512 is handled poorly. To write a single 512-byte block, the IOKit layer will read in 8 blocks (to make up a PAGE_SIZE read), modify the buffer, then write the 8 blocks back. This makes ZFS think we wrote 8 blocks, and all stats are updated as such. This is undesirable since statistics such as the compression ratio cannot be reported correctly.<br />
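<br />
As a worked example of the arithmetic: with a 4096-byte PAGE_SIZE, a 512-byte block size means each page-sized I/O spans 4096 / 512 = 8 ZFS blocks, while a 4096-byte block size maps one-to-one. A sketch of creating the two variants for comparison (the volume names and sizes are illustrative, assuming the BOOM pool used elsewhere on this page):<br />
<br />
<syntaxhighlight lang="bash"><br />
# volblocksize=512: every PAGE_SIZE (4096-byte) I/O touches 8 ZFS blocks<br />
sudo zfs create -V 1g -o volblocksize=512 BOOM/vol512<br />
<br />
# volblocksize=4096: one ZFS block per page, avoiding the read-modify-write<br />
sudo zfs create -V 1g -o volblocksize=4096 BOOM/vol4k<br />
</syntaxhighlight><br />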
<br />
This limitation is in specfs, which is applied to any block device created in /dev. For usage with Apple and the GUI, there is not much we can do. But we are planning to create secondary blk/chr nodes (maybe in /var/run/zfs/dsk/$POOL/$name or similar for compatibility) which will have our implementation attached as vnops. This will let us handle any block size required.<br />
<br />
<br />
== vnode_create thread ==<br />
<br />
Currently, we have to protect the call to vnode_create() due to the possibility that it calls several vnops (fsync, pageout, reclaim), and we have a reclaim thread to deal with that. One issue is that reclaim can be called both from a separate thread (periodic reclaims) and from the ''calling thread'' of vnode_create. This makes locking tricky.<br />
<br />
One idea is to create a vnode_create thread (one per dataset). Then in zfs_zget and zfs_znode_alloc, which call vnode_create, we simply place the newly allocated zp on the vnode_create thread's ''request list'' and resume execution. Once we have passed the "unlock" part of those functions, we wait for the vnode_create thread to complete the request, so that we do not return without the vp attached.<br />
<br />
In the vnode_create thread, we pop items off the list, call vnode_create (now guaranteed to be a separate thread), and once it completes, mark the node done and signal the process that might be waiting.<br />
<br />
In theory this should let us handle reclaim, fsync, and pageout as in normal upstream ZFS, with no special cases required. This should alleviate the current situation where the reclaim_list grows to very large numbers (230,000 nodes observed). <br />
<br />
It might mean we need to be careful in any function which might end up in zfs_znode_alloc, to make sure we have a vp attached before we resume. For example, zfs_lookup and zfs_create.<br />
<br />
----<br />
<br />
The branch '''vnode_thread''' implements just this idea: it creates a vnode_create_thread per dataset, and when we need to call ''vnode_create()'' it simply adds the '''zp''' to the list of requests, then signals the thread. The thread will call ''vnode_create()'' and, upon completion, set '''zp->z_vnode''' and signal back. The requester for '''zp''' will sit in ''zfs_znode_wait_vnode()'' waiting for the signal back.<br />
<br />
This means the ZFS code base is littered with calls to ''zfs_znode_wait_vnode()'' (46 to be exact) placed at the correct locations, i.e. '''after''' all the locks are released and ''zil_commit()'' has been called. It is possible that this number could be decreased, as the calls reached via ''zfs_zget()'' appear not to suffer the ''zil_commit()'' issue and could probably just block at the end of ''zfs_zget()''. However, the calls reached via ''zfs_mknode()'' are what cause the issue.<br />
<br />
'''sysctl zfs.vnode_create_list''' tracks the number of '''zp''' nodes in the list waiting for ''vnode_create()'' to complete. Typically 0 or 1, rarely higher.<br />
<br />
Appears to deadlock from time to time. <br />
<br />
-----<br />
<br />
The second branch '''vnode_threadX''' takes a slightly different approach. Instead of a permanent vnode_create_thread, it simply spawns a thread when ''zfs_znode_getvnode()'' is called. This new thread calls ''_zfs_znode_getvnode()'', which functions as above: call ''vnode_create()'', then signal back. The same ''zfs_znode_wait_vnode()'' blockers exist.<br />
<br />
'''sysctl zfs.vnode_create_list''' here tracks the number of '''vnode_create threads''' we have started. Interestingly, this also remains 0 or 1, rarely higher.<br />
<br />
Has not yet deadlocked.<br />
<br />
<br />
<br />
Conclusions:<br />
<br />
* It is undesirable that we have ''zfs_znode_wait_vnode()'' placed all over the source, and care needs to be taken for each one. Although it does not hurt to call it in excess, as no wait will happen if '''zp->z_vnode''' is already set. <br />
* It is unknown if it is OK to resume ZFS execution while '''z_vnode''' is still NULL, and only block (to wait for it to be filled in) once we are close to leaving the VNOP.<br />
<br />
<br />
* However, the fact that '''vnop_reclaim''' calls are direct and can be cleaned up immediately is very desirable. We no longer need to check for the '''zp without vp''' case in ''zfs_zget()''. <br />
* We no longer need to lock protect '''vnop_fsync''', '''vnop_pageout''' in case they are called from ''vnode_create()''.<br />
* We don't have to throttle the '''reclaim thread''' due to the list being massive (populating the list is much faster than cleaning up a '''zp''' node; up to 250,000 nodes in the list have been observed).<br />
<br />
<br />
[[File:VX_create.svg|thumb|Create files in sequential order]]<br />
<br />
<br><br />
<br />
[[File:iozone.svg|thumb|IOzone flamegraph]]<br />
<br />
[[File:iozoneX.svg|thumb|IOzone flamegraph (untrimmed)]]<br />
<br />
<br><br />
<br />
<br />
------<br />
<br />
== Iozone ==<br />
<br />
<br />
A quick peek at how HFS+ and ZFS compare, just to see how much we should improve by.<br />
<br />
HFS+ and ZFS were created on the same virtual disk in VMware. This is not an ideal test setup, but it should serve as an indicator. The pool was created with<br />
<br />
# zpool create -f -O atime=off -o ashift=12 -O casesensitivity=insensitive -O normalization=formD BOOM /dev/disk1<br />
<br />
and the HFS+ volume was created with the standard OS X Disk Utility, with everything left at the defaults (Journaled).<br />
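<br />
For reference, a roughly equivalent command-line way to create such a Journaled HFS+ volume (a sketch; the disk identifier and volume name are illustrative, not taken from the test setup) would be:<br />
<br />
<syntaxhighlight lang="bash"><br />
# Erase the disk as Journaled HFS+ with a GPT partition map and default options<br />
diskutil eraseDisk JHFS+ HFSTEST GPT /dev/disk2<br />
</syntaxhighlight><br />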
<br />
Iozone was run in standard auto mode, i.e.:<br />
<br />
# iozone -a -b outfile.xls<br />
<br />
[[File:hfs2_read.png|thumb|HFS+ read]]<br />
[[File:hfs2_write.png|thumb|HFS+ write]]<br />
[[File:zfs2_read.png|thumb|ZFS read]]<br />
[[File:zfs2_write.png|thumb|ZFS write]]<br />
<br />
As a guess, writes need to double, and reads need to triple.</div>50.168.32.57https://openzfsonosx.org/wiki/PerformancePerformance2014-04-11T03:59:06Z<p>50.168.32.57: /* Current Benchmarks */</p>
<hr />
<div><br />
== Status ==<br />
<br />
Currently OpenZFS on OS X is in active development, with priority being given to stability and integration enhancements before performance.<br />
<br />
== Current Benchmarks ==<br />
<br />
In order to establish a baseline of the current performance of OpenZFS on OS X, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8 GB of RAM, an OS boot drive, and a second 5 GB HDD containing a ZFS dataset. The HDDs are standard VMware virtual disk (.vmdk) files.<br />
<br />
The test zpool was created using the following command:<br />
<br />
::zpool create -o ashift=12 -f tank <disk device name><br />
<br />
The benchmark consists of the following steps (a consolidated command sketch of steps 2 through 7 follows the list):<br />
<br />
# Start the VM.<br />
# Import the tank pool<br />
# Execute -> mkdir /tank/tmp && cd /tank/tmp<br />
# Execute -> time iozone -a<br />
# Record time 1<br />
# Execute -> time iozone -a<br />
# Record time 2<br />
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.<br />
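<br />
A minimal shell sketch of steps 2 through 7 (assuming the pool is named tank as above and that iozone is installed and on the PATH):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Import the pool and prepare a working directory<br />
sudo zpool import tank<br />
mkdir /tank/tmp && cd /tank/tmp<br />
<br />
# Two timed iozone runs; record the wall-clock time of each<br />
time iozone -a<br />
time iozone -a<br />
</syntaxhighlight><br />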
<br />
<br />
The results are as follows:<br />
<br />
<br />
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]</div>50.168.32.57https://openzfsonosx.org/wiki/PerformancePerformance2014-04-11T03:58:43Z<p>50.168.32.57: /* Status */</p>
<hr />
<div><br />
== Status ==<br />
<br />
Currently OpenZFS on OS X is in active development, with priority being given to stability and integration enhancements before performance.<br />
<br />
== Current Benchmarks ==<br />
<br />
In order to establish a baseline of the current performance of OpenZFS on OS X, measurements have been made using the iozone [http://www.iozone.org] benchmarking tool. The benchmark consists of running various ZFS implementations inside a VMware 6.0.2 VM on a 2011 iMac. Each VM is provisioned with 8 GB of RAM, an OS boot drive, and a second 5 GB HDD containing a ZFS dataset. The HDDs are standard VMware virtual disk (.vmdk) files.<br />
<br />
The test zpool was created using the following command:<br />
<br />
::zpool create -o ashift=12 -f tank <disk device name><br />
<br />
The benchmark consists of the following steps (a consolidated command sketch of steps 2 through 7 follows the list):<br />
<br />
# Start the VM.<br />
# Import the tank pool<br />
# Execute -> mkdir /tank/tmp && cd /tank/tmp<br />
# Execute -> time iozone -a<br />
# Record time 1<br />
# Execute -> time iozone -a<br />
# Record time 2<br />
# Terminate the VM and VMware before moving to the next ZFS implementation/OS combination.<br />
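<br />
A minimal shell sketch of steps 2 through 7 (assuming the pool is named tank as above and that iozone is installed and on the PATH):<br />
<br />
<syntaxhighlight lang="bash"><br />
# Import the pool and prepare a working directory<br />
sudo zpool import tank<br />
mkdir /tank/tmp && cd /tank/tmp<br />
<br />
# Two timed iozone runs; record the wall-clock time of each<br />
time iozone -a<br />
time iozone -a<br />
</syntaxhighlight><br />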
<br />
<br />
The results are as follows:<br />
<br />
<br />
[[File:ZFS_iozone_time.png|left|frame|Comparison of ZFS implementations]]</div>50.168.32.57https://openzfsonosx.org/wiki/DocumentationDocumentation2014-04-06T21:42:28Z<p>50.168.32.57: </p>
<hr />
<div>[[Category:About O3X]]<br />
[[Category:Getting and installing O3X]]<br />
<br />
== Documentation ==<br />
<br />
General OpenZFS usage can be found on the [http://open-zfs.org OpenZFS wiki]. But we do need to document some OS X-specific things.<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Integration ===<br />
<br />
* [[Suppressing the annoying popup]]<br />
<br />
* [[Autoimport|Automatically importing pools]]<br />
<br />
=== Troubleshooting ===<br />
<br />
* [[DegradedPool|Degraded pool]]<br />
<br />
* [[panic|Kernel panic]]</div>50.168.32.57https://openzfsonosx.org/wiki/O3XWiki:DonationsO3XWiki:Donations2014-03-29T03:45:01Z<p>50.168.32.57: </p>
<hr />
<div>== Thanks ==<br />
<br />
The OpenZFS on OS X project would like to thank the following companies:<br />
<br />
'''GMO Internet''' [http://www.gmo.jp] for the hosting and rack space<br />
<br />
'''GlobalSign''' [http://globalsign.com] for the Open Source free SSL certificate<br />
<br />
'''OpenZFS''' [http://open-zfs.org] The main ZFS software collective<br />
<br />
== Donations ==<br />
<br />
The best way to show your appreciation for the OpenZFS project is to donate to the upstream project at [http://open-zfs.org http://open-zfs.org]<br />
<br />
If you wish to donate specifically to the OS X project, you can do so with PayPal at '''japan@lundman.net'''. But be aware any donations will most likely be used on beer and pizza, and other OpenZFS conferences, possibly not on the feature you wish for. :)</div>50.168.32.57https://openzfsonosx.org/wiki/O3XWiki:DonationsO3XWiki:Donations2014-03-29T03:44:52Z<p>50.168.32.57: </p>
<hr />
<div>== Thanks ==<br />
<br />
The OpenZFS on OS X project would like to thank the following companies:<br />
<br />
'''GMO Internet''' [http://www.gmo.jp] for the hosting and rack space<br />
<br />
'''GlobalSign''' [http://globalsign.com] for the Open Source free SSL certificate<br />
<br />
'''OpenZFS''' [http://open-zfs.org] The main ZFS software collective<br />
<br />
== Donations ==<br />
<br />
The best way to show your appreciation for the OpenZFS project is to donate to the upstream project at [http://open-zfs.org http://open-zfs.org]<br />
<br />
If you wish to donate specifically to the OS X project, you can do so with PayPal at '''japan@lundman.net'''. But be aware any donations will most likely be used on beer and pizza, and other OpenZFS conferences, possibly not on the feature you wish for. :)<br />
<br />
http://helloworld.com</div>50.168.32.57https://openzfsonosx.org/wiki/O3XWiki:DonationsO3XWiki:Donations2014-03-29T03:37:36Z<p>50.168.32.57: </p>
<hr />
<div>== Thanks ==<br />
<br />
The OpenZFS on OS X project would like to thank the following companies:<br />
<br />
'''GMO Internet''' [http://www.gmo.jp] for the hosting and rack space<br />
<br />
'''GlobalSign''' [http://globalsign.com] for the Open Source free SSL certificate<br />
<br />
'''OpenZFS''' [http://open-zfs.org] The main ZFS software collective<br />
<br />
== Donations ==<br />
<br />
The best way to show your appreciation for the OpenZFS project is to donate to the upstream project at [http://open-zfs.org http://open-zfs.org]<br />
<br />
If you wish to donate specifically to the OS X project, you can do so with PayPal at '''japan@lundman.net'''. But be aware any donations will most likely be used on beer and pizza, and other OpenZFS conferences, possibly not on the feature you wish for. :)</div>50.168.32.57https://openzfsonosx.org/wiki/DocumentationDocumentation2014-03-20T05:07:00Z<p>50.168.32.57: </p>
<hr />
<div><br />
== Documentation ==<br />
<br />
General OpenZFS usage can be found on the [http://open-zfs.org OpenZFS wiki]. But we do need to document some Apple OS X specific things.<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
<br />
=== Kernel Panics ===<br />
<br />
* [[panic|Kernel Panic]]</div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-20T04:24:08Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
'''O'''penZFS '''o'''n '''O'''S '''X''' <==> OOOX <==> O3X<br />
<br />
This wiki will contain information regarding '''OpenZFS on OS X'''; please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Documentation ===<br />
<br />
* [[Documentation|Documentation]]<br />
<br />
=== Installing ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Uninstalling ===<br />
<br />
* [[Uninstall#Uninstalling_a_Release_Version|Uninstalling a release version]]<br />
<br />
* [[Uninstall#Uninstalling_a_Source_Install|Uninstalling a source install]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [https://openzfsonosx.org/forum/ forums]<br />
* The o3x IRC channel '''#openzfs-osx''' on freenode<br />
<br />
== Development ==<br />
<br />
* [[Development]]<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-20T04:23:24Z<p>50.168.32.57: /* Open ZFS on OS X */</p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
'''O'''penZFS '''o'''n '''O'''S '''X''' <==> OOOX <==> O3X<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X''', please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Documentation ===<br />
<br />
* [[Documentation|Documentation]]<br />
<br />
=== Installing ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Uninstalling ===<br />
<br />
* [[Uninstall#Uninstalling_a_Release_Version|Uninstalling a release version]]<br />
<br />
* [[Uninstall#Uninstalling_a_Source_Install|Uninstalling a source install]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [https://openzfsonosx.org/forum/ forums]<br />
* The o3x IRC channel '''#openzfs-osx''' on freenode<br />
<br />
== Development ==<br />
<br />
* [[Development]]<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-14T09:57:16Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
('''O'''pen ZFS '''o'''n '''O'''S '''X''' OOOX o3x)<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X''', please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Installing ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Uninstalling ===<br />
<br />
* [[Uninstall#Uninstalling_the_Official_Release|Uninstalling a release version]]<br />
<br />
* [[Uninstall#Uninstalling_an_Installation_from_Source|Uninstalling a source install]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [http://openzfsonosx.org/forum/index.php forums]<br />
* The o3x IRC channel #openzfs-osx on freenode<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-14T09:55:51Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
('''O'''pen ZFS '''o'''n '''O'''S '''X''' OOOX o3x)<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X''', please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Installing ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Uninstalling ===<br />
<br />
* [[Uninstall#Uninstalling_the_Official_Release|Uninstalling a release version]]<br />
<br />
* [[Uninstall#Uninstalling_an_Installation_from_Source|Uninstalling a source install]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [http://openzfsonosx.org/forum/index.php forums]<br />
* IRC freenode #openzfs-osx<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-14T09:54:28Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
('''O'''pen ZFS '''o'''n '''O'''S '''X''' OOOX o3x)<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X''', please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Installing ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
== Uninstalling ==<br />
<br />
* [[Uninstall#Uninstalling_the_Official_Release|Uninstalling a release version]]<br />
<br />
* [[Uninstall#Uninstalling_an_Installation_from_Source|Uninstalling a source install]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [http://openzfsonosx.org/forum/index.php forums]<br />
* IRC freenode #openzfs-osx<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:48:56Z<p>50.168.32.57: </p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
*If you have the folder Docs & Scripts from when you installed, go there; otherwise, open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it from the [[Downloads]] page.<br />
<br />
*Open the folder Docs & Scripts.<br />
<br />
*Find the file named uninstall-openzfsonosx.sh.<br />
<br />
*Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
<br />
*Close all open files from your pools.<br />
<br />
*Export every pool.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
<br />
#For each pool listed<br />
sudo zpool export $pool1name<br />
sudo zpool export $pool2name<br />
...<br />
#Verify all the pools have been exported.<br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
*Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
<br />
*Press return.<br />
<br />
*You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.<br />
<br />
== Uninstalling an Installation from Source ==<br />
The procedure is the same as for [[#Uninstalling_the_Official_Release|uninstalling the official release]], except you need to use the uninstaller script named "uninstall-make-install.sh" instead of the one named "uninstall-openzfsonosx.sh." And if you wish, in Finder you can drag ~/Developer/spl and ~/Developer/zfs to the Trash.<br />
<br />
Alternatively, you can determine the list of files that were installed by installing to a DESTDIR.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
sudo make DESTDIR=~/Developer/destdir install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make DESTDIR=~/Developer/destdir install<br />
cd ..<br />
<br />
cd destdir<br />
find . -type f > o3x_uninstall_list.txt<br />
open o3x_uninstall_list.txt<br />
</syntaxhighlight><br />
<br />
Now you can manually remove each of the files listed, either by using "rm" in Terminal.app or by dragging them to the trash from Finder. You can also remove ~/Developer/destdir, ~/Developer/spl, and ~/Developer/zfs when you're done.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:34:38Z<p>50.168.32.57: </p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
*If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
<br />
*Open the folder Docs & Scripts.<br />
<br />
*Find the file named uninstall-openzfsonosx.sh.<br />
<br />
*Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
<br />
*Close all open files from your pools.<br />
<br />
*Export every pool.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
<br />
#For each pool listed<br />
sudo zpool export $pool1name<br />
sudo zpool export $pool2name<br />
...<br />
#Verify all the pools have been exported.<br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
*Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
<br />
*Press return.<br />
<br />
*You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:27:45Z<p>50.168.32.57: /* Uninstalling the Official Release */</p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
*If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
<br />
*Open the folder Docs & Scripts.<br />
<br />
*Find the file named uninstall-openzfsonosx.sh.<br />
<br />
*Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
<br />
*Close all open files from your pools.<br />
<br />
*Export every pool.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
Then for each pool listed<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
*Verify all the pools have been exported.<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
*Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
<br />
*Press return.<br />
<br />
*You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:27:14Z<p>50.168.32.57: /* Uninstalling the Official Release */</p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
#If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
*Open the folder Docs & Scripts.<br />
*Find the file named uninstall-openzfsonosx.sh.<br />
*Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
*Close all open files from your pools.<br />
*Export every pool.<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
Then for each pool listed<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
*Verify all the pools have been exported.<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
*Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
*Press return.<br />
*You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:26:05Z<p>50.168.32.57: /* Uninstalling the Official Release */</p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
#If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
<br />
#Open the folder Docs & Scripts.<br />
#Find the file named uninstall-openzfsonosx.sh.<br />
#Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
#Close all open files from your pools.<br />
#Export every pool.<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
Then for each pool listed<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
#Verify all the pools have been exported.<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
#Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
#Press return.<br />
#You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:25:29Z<p>50.168.32.57: /* Uninstalling the Official Release */</p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
#If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
<br />
#Open the folder Docs & Scripts.<br />
<br />
#Find the file named uninstall-openzfsonosx.sh.<br />
<br />
#Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
<br />
#Close all open files from your pools.<br />
<br />
#Export every pool.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
Then for each pool listed<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
#Verify all the pools have been exported.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
#Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window. Or just type<br />
<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
<br />
#Press return.<br />
<br />
#You will be prompted for your password. Enter that and press return.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T09:23:15Z<p>50.168.32.57: </p>
<hr />
<div>== Uninstalling the Official Release ==<br />
<br />
If you have the folder Docs & Scripts from when you installed, go there, otherwise open the Open_ZFS_on_OS_X_x.y.z.dmg file you downloaded when you originally installed. If you no longer have the .dmg file, you can re-download it, on the [[Downloads]] page.<br />
<br />
Open the folder Docs & Scripts.<br />
<br />
Find the file named uninstall-openzfsonosx.sh.<br />
<br />
Open Terminal.app. You can open it from Spotlight, or from the folder in Finder /Applications/Utilities.<br />
<br />
Close all open files from your pools.<br />
<br />
Export every pool.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
Then for each pool listed<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
Verify all the pools have been exported.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
Now drag and drop the file uninstall-openzfsonosx.sh onto your Terminal window, and press return.<br />
<br />
Or just type<br />
<br />
<syntaxhighlight lang="bash"><br />
/Volumes/OpenZFS*/Docs\ \&\ Scripts/uninstall-openzfsonosx.sh<br />
</syntaxhighlight><br />
<br />
and press return.<br />
<br />
You will be prompted for your password. Enter that.<br />
<br />
The script will run and uninstall o3x.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T09:07:11Z<p>50.168.32.57: /* Installing the Official Release */</p>
<hr />
<div>== Installing the Official Release ==<br />
<br />
Download the most recent dmg from the [[Downloads]] page.<br />
<br />
Verify the checksums.<br />
<br />
$ md5 OpenZFS_on_OS_X_*.dmg<br />
$ sha1sum OpenZFS_on_OS_X_*.dmg<br />
$ openssl dgst -sha256 OpenZFS_on_OS_X_*.dmg<br />
<br />
Open the .dmg file.<br />
<br />
Read ReadMe.rtf.<br />
<br />
Start the installer by opening OpenZFS_on_OS_X_x.y.z.pkg.<br />
<br />
Follow the prompts.<br />
<br />
If you need to uninstall, follow [[Uninstall|these instructions]].<br />
<br />
== Installing from Source ==<br />
(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action, or from Terminal as shown below)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
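On OS X 10.9 and later, the Xcode Command Line Tools can also be installed directly from Terminal instead of the download page:<br />
<br />
<syntaxhighlight lang="bash"><br />
# prompts OS X to download and install the Command Line Tools<br />
xcode-select --install<br />
</syntaxhighlight><br />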
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a few build dependencies first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
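<br />
For example, a simple single-disk pool could be created like this (a sketch only; disk2 is a placeholder, so check diskutil list and substitute a disk whose contents you are willing to lose):<br />
<br />
<syntaxhighlight lang="bash"><br />
# WARNING: this destroys any existing data on the chosen disk<br />
sudo zpool create -f tank /dev/disk2<br />
</syntaxhighlight><br />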
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
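<br />
If you have several pools, a one-line loop can export them all (a sketch; it assumes pool names contain no spaces):<br />
<br />
<syntaxhighlight lang="bash"><br />
# export every imported pool listed by zpool list<br />
for pool in $(sudo zpool list -H -o name); do sudo zpool export "$pool"; done<br />
</syntaxhighlight><br />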
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare), a reboot will be necessary.<br />
<br />
If you ever want to uninstall, follow the instructions for uninstalling a "make-install installation" [[uninstall|here]].<br />
<br />
== Installing an Unofficial Disk Image (.dmg) for Testers ==<br />
<br />
== Pseudo-Installing for Development ==</div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-14T09:03:06Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
('''O'''pen ZFS '''o'''n '''O'''S '''X''', abbreviated OOOX or o3x)<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X'''; please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Installation ===<br />
<br />
* [[Install#Installing_the_Official_Release|Installing the official release]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [http://openzfsonosx.org/forum/index.php forums]<br />
* IRC freenode #openzfs-osx<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/Main_PageMain Page2014-03-14T09:01:34Z<p>50.168.32.57: </p>
<hr />
<div>== Open ZFS on OS X ==<br />
<br />
('''O'''pen ZFS '''o'''n '''O'''S '''X''', abbreviated OOOX or o3x)<br />
<br />
This wiki will contain information regarding '''Open ZFS on OS X'''; please help fill it. Ask '''lundman''' for editing privileges.<br />
<br />
If you are running '''MacZFS''', please use their [http://code.google.com/p/maczfs/w/list site].<br />
<br />
=== Downloads ===<br />
<br />
* [[Downloads]]<br />
<br />
=== Getting started ===<br />
<br />
* [[zpool|Creating a pool]]<br />
<br />
=== Installation ===<br />
<br />
* [[Binary Installation]] <br />
<br />
* [[Install#Installing_from_Source|Installing from source]]<br />
<br />
=== Sources ===<br />
<br />
* [https://github.com/zfs-osx/zfs ZFS repository]<br />
* [https://github.com/zfs-osx/spl SPL repository]<br />
* [https://github.com/zfs-osx/zfs/archive/zfs-0.6.2-rc1.zip ZFS source code zip file]<br />
* [https://github.com/zfs-osx/spl/archive/spl-0.6.2-rc1.zip SPL source code zip file]<br />
<br />
=== Help ===<br />
<br />
* The o3x [http://openzfsonosx.org/forum/index.php forums]<br />
* IRC freenode #openzfs-osx<br />
<br />
== Twitter ==<br />
<br />
<html><br />
<a class="twitter-timeline" href="https://twitter.com/openzfsonosx" data-widget-id="444275713776951296">Tweets by @OpenZFSonOSX</a><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+"://platform.twitter.com/widgets.js";fjs.parentNode.insertBefore(js,fjs);}}(document,"script","twitter-wjs");</script><br />
<br />
<span><br />
<br><br />
<a href="https://twitter.com/openzfsonosx" class="twitter-follow-button" data-show-count="true">Follow @openzfsonosx</a><br />
<br><br />
<a href="https://twitter.com/share" class="twitter-share-button" data-via="OpenZFSonOSX">Tweet</a><br />
</span><br />
<script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script><br />
</html></div>50.168.32.57https://openzfsonosx.org/wiki/UninstallUninstall2014-03-14T06:25:52Z<p>50.168.32.57: Created page with "Run the uninstall script."</p>
<hr />
<div>Run the uninstall script.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T06:25:26Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare), a reboot will be necessary.<br />
<br />
If you ever want to uninstall, follow the instructions for uninstalling a "make-install installation" [[uninstall|here]].</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T06:20:11Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Make sure they have exported successfully.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool status<br />
</syntaxhighlight><br />
<br />
It should say, "no pools available."<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T06:14:25Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
137 1 0xffffff803f61a800 0x20c 0x20c net.lundman.kernel.dependencies (10.0.0)<br />
144 1 0xffffff7f82720000 0xd000 0xd000 net.lundman.spl (1.0.0) <137 7 5 4 3 1><br />
145 0 0xffffff7f8272d000 0x202000 0x202000 net.lundman.zfs (1.0.0) <144 13 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T06:13:08Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions loaded automatically with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
0xffffff7f80b52000 0xc000 0xc000 net.lundman.spl (1.0.0) <7 5 4 3 1><br />
0xffffff800b070a00 0x180 0x180 net.lundman.kernel.dependencies (10.0.0)<br />
0xffffff7f8129a000 0x1f3000 0x1f3000 net.lundman.zfs (1.0.0) <92 91 16 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, make sure kextd is aware of them.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
</syntaxhighlight><br />
<br />
And check again.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-14T06:09:32Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
You can check to see if the kernel extensions automatically loaded with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
0xffffff7f80b52000 0xc000 0xc000 net.lundman.spl (1.0.0) <7 5 4 3 1><br />
0xffffff800b070a00 0x180 0x180 net.lundman.kernel.dependencies (10.0.0)<br />
0xffffff7f8129a000 0x1f3000 0x1f3000 net.lundman.zfs (1.0.0) <92 91 16 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
If not, you can load the kexts manually.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
# Assuming the build completed successfully,<br />
# unload the kexts. If you did not export all of<br />
# your pools this will panic:<br />
<br />
zfsadm -u<br />
<br />
# Now install the upgrade.<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
# And verify they reloaded automatically<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# If not, make sure kextd is aware of them<br />
<br />
sudo touch /System/Library/Extensions<br />
sudo killall -HUP kextd<br />
<br />
# and check again<br />
<br />
sudo kextstat | grep lundman<br />
<br />
# if they still have not loaded automatically<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-13T01:16:35Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
Now load the kernel extensions.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
You can check if they are loaded with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
0xffffff7f80b52000 0xc000 0xc000 net.lundman.spl (1.0.0) <7 5 4 3 1><br />
0xffffff800b070a00 0x180 0x180 net.lundman.kernel.dependencies (10.0.0)<br />
0xffffff7f8129a000 0x1f3000 0x1f3000 net.lundman.zfs (1.0.0) <92 91 16 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57https://openzfsonosx.org/wiki/InstallInstall2014-03-13T01:15:00Z<p>50.168.32.57: </p>
<hr />
<div>(Adapted from an [http://zerobsd.tumblr.com/post/62586498252/os-x-with-zfs article by ZeroBSD].)<br />
<br />
If you have any other implementation of ZFS installed, you must uninstall it and reboot before proceeding further.<br />
<br />
We'll need to fetch the latest source from the [https://github.com/zfs-osx repository on GitHub] and then compile it.<br />
<br />
For this, we'll need some prerequisites:<br />
<br />
* [https://developer.apple.com/xcode/ Xcode] (from [http://itunes.apple.com/us/app/xcode/id497799835?ls=1&mt=12 Mac App Store] or https://developer.apple.com/downloads/index.action)<br />
* Xcode Command Line Tools (https://developer.apple.com/downloads/index.action)<br />
* [http://brew.sh/ Homebrew] (or [http://www.macports.org/ MacPorts])<br />
<br />
<br />
To install Homebrew:<br />
<br />
<syntaxhighlight lang="bash"><br />
ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"<br />
</syntaxhighlight><br />
<br />
Paste that at a Terminal prompt.<br />
<br />
Once Homebrew is installed, we need a couple of things first:<br />
<br />
<syntaxhighlight lang="text"><br />
brew install automake libtool gawk<br />
</syntaxhighlight><br />
<br />
Create two folders in your home directory.<br />
<br />
<syntaxhighlight lang="bash"><br />
mkdir ~/Developer<br />
mkdir ~/bin<br />
</syntaxhighlight><br />
<br />
Add the ~/bin directory to your PATH.<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
To acquire the sources and build ZFS, we'll need the [[zfsadm]] script found [https://gist.github.com/ilovezfs/7713854#file-zfsadm here].<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/<br />
git clone https://gist.github.com/7713854.git zfsadm-repo<br />
cp zfsadm-repo/zfsadm ~/bin/<br />
chmod +x ~/bin/zfsadm<br />
</syntaxhighlight><br />
<br />
All set. Let's go cloning and building ZFS:<br />
<br />
<syntaxhighlight lang="bash"><br />
zfsadm<br />
</syntaxhighlight><br />
<br />
Now let it work. This should take a few minutes depending on the speed of your machine.<br />
<br />
Before using ZFS, we need to actually install it.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer/zfs<br />
sudo make install<br />
cd ~/Developer/spl<br />
sudo make install<br />
</syntaxhighlight><br />
<br />
Now load the kernel extensions.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
You can check if they are loaded with <br />
<br />
<syntaxhighlight lang="bash"><br />
sudo kextstat | grep lundman<br />
</syntaxhighlight><br />
<br />
You should see something similar to<br />
<br />
<syntaxhighlight lang="text"><br />
0xffffff7f80b52000 0xc000 0xc000 net.lundman.spl (1.0.0) <7 5 4 3 1><br />
0xffffff800b070a00 0x180 0x180 net.lundman.kernel.dependencies (10.0.0)<br />
0xffffff7f8129a000 0x1f3000 0x1f3000 net.lundman.zfs (1.0.0) <92 91 16 7 5 4 3 1><br />
</syntaxhighlight><br />
<br />
Now add /usr/local/sbin to your PATH. This is where you will find the command binaries (zpool, zfs, zdb, etc.).<br />
<br />
<syntaxhighlight lang="bash"><br />
echo 'export PATH=$PATH:/usr/local/sbin' >> ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
and update your environment by sourcing your profile again.<br />
<br />
<syntaxhighlight lang="bash"><br />
source ~/.bash_profile<br />
</syntaxhighlight><br />
<br />
Now you can try running<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool<br />
</syntaxhighlight><br />
<br />
to see if everything is installed and configured properly.<br />
<br />
You can go ahead and [[zpool#Creating_a_pool|create your pools]] at this point.<br />
<br />
When you want to get the [https://github.com/zfs-osx/zfs/commits/master latest commits] from GitHub, here's a quick overview of the commands you need to run.<br />
<br />
First make sure you have exported all of your pools.<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool list<br />
</syntaxhighlight><br />
<br />
For every pool listed, run<br />
<br />
<syntaxhighlight lang="bash"><br />
sudo zpool export $poolname<br />
</syntaxhighlight><br />
<br />
in order to prevent a kernel panic when the kexts are unloaded.<br />
<br />
Now you should be able to upgrade your ZFS installation safely.<br />
<br />
<syntaxhighlight lang="bash"><br />
cd ~/Developer<br />
<br />
cd spl<br />
make clean<br />
cd ..<br />
<br />
cd zfs<br />
make clean<br />
cd ..<br />
<br />
zfsadm<br />
<br />
cd spl<br />
sudo make install<br />
cd ..<br />
<br />
cd zfs<br />
sudo make install<br />
<br />
cd /System/Library/Extensions<br />
sudo kextload spl.kext<br />
sudo kextload -d spl.kext zfs.kext<br />
</syntaxhighlight><br />
<br />
If net.lundman.kernel.dependencies has been updated (quite rare) a reboot would be necessary.</div>50.168.32.57