Project roadmap

From OpenZFS on OS X
==Short-term==

===Spotlight===
* '''Labels''': Bug
* '''Description''': Spotlight currently never scans, and anything it learns is forgotten after the next unmount. Our conjecture is that it cannot work until ZFS file systems are under disk arbitration's control and there are /dev/disk#s# nodes for each file system.
* '''Status''': Fixed
* '''Milestone''': Next release (the release following 1.3.0)
* '''Issue''': https://github.com/openzfsonosx/zfs/issues/84
 
===Port ZFS Test Suite from upstream===
* '''Description''': Upstream OpenZFS has ported the old Solaris STF to OpenZFS and we need to port the new framework to O3X.
* '''Status''': Work in progress
* '''Milestone''': 1.3.x
* '''Issue''': https://github.com/openzfsonosx/zfs/issues/79
* '''Notes''': Initial work began at the [http://open-zfs.org/wiki/OpenZFS_Developer_Summit OpenZFS Developer Summit Hackathon].
  
==Medium-term==

===Persistent configuration===
* '''Labels''': Enhancement
* '''Description''': Most of the [https://github.com/zfsonlinux/zfs/blob/master/man/man5/zfs-module-parameters.5 ZFS module parameters] offered by ZFS on Linux should be exposed via a plist preference file. Some such options are already available, but only as sysctls, and the configuration is not persistent.
* '''Status''': Not started
* '''Milestone''': 1.3.x
* '''Issue''': https://github.com/openzfsonosx/zfs/issues/149
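To make the idea concrete, a preference file along these lines could carry such tunables across reboots. This is only a sketch: the plist location and loading mechanism are assumptions and no such file exists yet, though <code>zfs_arc_max</code> and <code>zfs_prefetch_disable</code> are real ZFS on Linux module parameter names.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- Hypothetical keys mirroring ZFS on Linux module parameters -->
	<key>zfs_arc_max</key>
	<integer>4294967296</integer>
	<key>zfs_prefetch_disable</key>
	<integer>0</integer>
</dict>
</plist>
```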
  
 
===/dev/disk#s# device nodes for each ZFS file system===
* '''Labels''': Enhancement
* '''Description''': The use of the internal ZFS dataset names (e.g., <code>foo/bar/hello</code>) in lieu of actual device nodes (e.g., <code>/dev/disk3s2s4</code>) for the <code>f_mntfromname</code> has wreaked havoc in many domains. Unfortunately, this is a convention we inherited from upstream. In order to get a device node for every file system, we will need to write new IOKit code that creates proxy objects.
* '''Status''': Not started
* '''Milestone''': 1.3.x
* '''Issue''': https://github.com/openzfsonosx/zfs/issues/116

===Disk arbitration control of ZFS file systems===
* '''Labels''': Enhancement
* '''Description''': Allow OS X disk arbitration to recognize and automatically mount ZFS file systems and snapshots.
* '''Status''': Not started
* '''Milestone''': 1.3.x
* '''Blocked by''': /dev/disk#s# device nodes for each ZFS file system.
  
===Handle disk renumbering===
* '''Labels''': Enhancement
* '''Description''': ZFS needs to recognize a device as healthy and present even if it is unplugged, replugged, and assigned a new disk#s# number by OS X. Currently ZFS will conclude that the disk has gone away and that the pool has become degraded.
* '''Status''': Needs a design
* '''Milestone''': 1.4.x
* '''Issue''': https://github.com/openzfsonosx/zfs/issues/167
* '''Blocked by''': [https://github.com/openzfsonosx/zfs/issues/134 Implement the other 57% of vdev_disk.c].
* '''Notes''': No design decided on yet. ZEVO used the GPT partition UUIDs, but this is a non-solution since file vdevs do not have partitions and Core Storage does not permit partitioning. The same problem affects ZFS on Linux when the user relies on /dev/sd[a-z] nodes, which can end up re-lettered when a drive is unplugged and replugged. It is less of an issue there because Linux offers alternate ways of specifying devices (e.g., /dev/disk/by-id) that are unavailable on OS X; we will probably need some sort of equivalent.
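The notes above boil down to resolving pool members by a stable identifier rather than by the transient /dev/disk#s# name. A minimal sketch of that lookup, with a hypothetical inventory mapping current device nodes to stable media UUIDs (which a real implementation would gather from IOKit or <code>diskutil info</code>):

```python
# Sketch only: re-resolve a vdev by a stable media UUID instead of its
# transient /dev/disk#s# node. The inventory dict stands in for data a
# real implementation would gather from IOKit or `diskutil info`.

def resolve_vdev(recorded_uuid, current_devices):
    """Return the current device node whose media UUID matches
    recorded_uuid, or None if the device is genuinely absent."""
    for node, uuid in current_devices.items():
        if uuid == recorded_uuid:
            return node
    return None

# A vdev recorded at /dev/disk2s1 was replugged and came back as
# /dev/disk5s1; matching on the UUID still finds it, so the pool
# need not be considered degraded.
inventory = {
    "/dev/disk0s2": "11111111-2222-3333-4444-555555555555",
    "/dev/disk5s1": "66666666-7777-8888-9999-000000000000",
}
print(resolve_vdev("66666666-7777-8888-9999-000000000000", inventory))
# -> /dev/disk5s1
```

The same shape would apply whether the stable key is a GPT partition UUID, an IOKit media UUID, or something O3X-specific; choosing that key is exactly the open design question.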
==Long-term==

===GUI===

===More notifications===
* Scrub completed
* Unhealthy zpool status
* Space consumed exceeds 80%
* etc.

Latest revision as of 08:14, 30 September 2014
