Project roadmap

From OpenZFS on OS X

Revision as of 01:40, 12 April 2014

Short-term

bmalloc

  • Labels: Enhancement
  • Description: Include the bmalloc slice allocator to increase performance by allowing more memory to be used.
  • Status: Completed
  • Milestone: 1.2.1
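The idea behind a slice allocator can be illustrated with a toy sketch (this is not the actual bmalloc code, just the general technique: carve fixed-size slices out of one larger chunk and recycle freed slices through a free list, avoiding a full allocator round trip per object):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative slice allocator: one backing chunk, fixed-size
 * slices, freed slices recycled through an intrusive free list. */
typedef struct slice {
    struct slice *next;        /* valid only while on the free list */
} slice_t;

typedef struct {
    char    *chunk;            /* backing memory */
    size_t   slice_size;       /* bytes per slice */
    size_t   nslices;          /* total slices in the chunk */
    size_t   used;             /* slices carved from the chunk so far */
    slice_t *free_list;        /* recycled slices */
} slab_t;

static int slab_init(slab_t *s, size_t slice_size, size_t nslices) {
    if (slice_size < sizeof(slice_t))
        slice_size = sizeof(slice_t);   /* need room for the link */
    s->chunk = malloc(slice_size * nslices);
    s->slice_size = slice_size;
    s->nslices = nslices;
    s->used = 0;
    s->free_list = NULL;
    return s->chunk ? 0 : -1;
}

static void *slab_alloc(slab_t *s) {
    if (s->free_list) {                 /* reuse a freed slice first */
        slice_t *sl = s->free_list;
        s->free_list = sl->next;
        return sl;
    }
    if (s->used < s->nslices)           /* otherwise carve a fresh one */
        return s->chunk + s->slice_size * s->used++;
    return NULL;                        /* slab exhausted */
}

static void slab_free(slab_t *s, void *p) {
    slice_t *sl = p;
    sl->next = s->free_list;            /* push onto the free list */
    s->free_list = sl;
}
```

Both the alloc fast path and free are constant-time pointer pushes/pops, which is where the performance win over general-purpose allocation comes from.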

zfs recv panic

  • Labels: Bug
  • Description: In the OpenZFS on OS X 1.2.0 release, zfs recv -F can panic in some cases.
  • Status: Work in progress
  • Milestone: 1.2.1
  • Notes: No longer panics, but now it hangs in dnode_special_close.

zfs_zget race

  • Labels: Bug
  • Description: vnode_create or vnode_getwithvid must precede ZFS_OBJ_HOLD_EXIT and mutex_exit(&zp->z_lock).
  • Status: Work in progress
  • Milestone: 1.2.1
  • Notes: Simply moving vnode_create and vnode_getwithvid panics.
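The invariant stated above can be sketched in pseudocode (C-style, non-runnable; the real code lives in zfs_zget, and per the notes a naive reordering alone is not sufficient and panics):

```c
/* Pseudocode sketch of the required ordering, not the final fix.
 * The vnode must be attached while the object hold and z_lock are
 * still held, so no other thread can observe a znode without its
 * vnode. */
zfs_zget(...)
{
        ...
        /* RIGHT: create or revalidate the vnode first ... */
        vnode_create(...);          /* or vnode_getwithvid(...) */

        /* ... and only then drop the locks. */
        mutex_exit(&zp->z_lock);
        ZFS_OBJ_HOLD_EXIT(zfsvfs, obj_num);
        ...
}
```

Dropping the locks before the vnode exists opens a window in which a second zfs_zget for the same object can race in and find a znode with no vnode attached.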

Medium-term

Convert sysctl zfs_vnop_vdev_ashift to a pool property

  • Labels: Enhancement
  • Description: Since this can vary from pool to pool, it shouldn't be set globally.
  • Status: Not started
  • Milestone: 1.2.1-1.3.x
  • Blocked by: We need to determine whether any pool incompatibilities exist with respect to our handling of vdev_ashift and 4k/512e disks.

ZFS Event Daemon (zed)

  • Labels: Enhancement
  • Description: Userland daemon to respond to events sent by the ZFS kernel extension. Can perform userland tasks that the kernel extensions cannot perform on their own.
  • Status: Completed
  • Milestone: 1.2.1-1.3.x

launchd control of ZFS Event Daemon (zed)

  • Labels: Enhancement
  • Description: launchd should make sure the ZFS Event Daemon is running whenever the ZFS kernel extensions are active. We need to decide whether we will do this by watching the path /dev/zfs or by registering for a notification from the kernel.
  • Status: Not started
  • Milestone: TBD (1.3.x)
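Of the two approaches mentioned, path-watching could be expressed as a launchd job roughly like the following (a sketch only: the label, zed install path, and the -F foreground flag are assumptions; KeepAlive/PathState keeps the daemon running only while /dev/zfs exists, i.e. while the kernel extension is loaded):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<!-- Label and zed path are illustrative assumptions. -->
	<key>Label</key>
	<string>org.openzfsonosx.zed</string>
	<key>ProgramArguments</key>
	<array>
		<string>/usr/local/sbin/zed</string>
		<string>-F</string><!-- stay in the foreground so launchd can supervise -->
	</array>
	<!-- Keep zed alive for as long as /dev/zfs exists. -->
	<key>KeepAlive</key>
	<dict>
		<key>PathState</key>
		<dict>
			<key>/dev/zfs</key>
			<true/>
		</dict>
	</dict>
</dict>
</plist>
```

The alternative approach, registering for a kernel notification, would need no launchd path machinery at all, which is part of what remains to be decided.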

zpool.cache fixes

  • Description: Pools need to be removed from /etc/zfs/zpool.cache when they are exported.
  • Status: Not started
  • Milestone: 1.2.1-1.3.x
  • Issue: https://github.com/openzfsonosx/zfs/issues/144

Autoimport of pools

  • Labels: Enhancement
  • Description: Import all known pools during boot and possibly on-demand when devices are hot plugged.
  • Status: Workaround
  • Milestone: 1.2.1-1.3.x
  • Blocked by: zpool.cache fixes; launchd control of ZFS Event Daemon (zed).
  • Notes: It is possible that the next release could temporarily include the autoimport hack and have zed and zpool.cache stripped out.
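Mechanically, autoimport amounts to something like the following at boot (a sketch using the standard zpool CLI; the actual integration would go through zed or a launchd job rather than a bare script):

```shell
# Import every pool recorded in the cache file, rather than scanning
# all device nodes: -c points at the cache, -a imports all pools found.
zpool import -a -c /etc/zfs/zpool.cache
```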

Port ZFS Test Suite from upstream

  • Description: Upstream OpenZFS has ported the old Solaris STF to OpenZFS and we need to port the new framework to O3X.
  • Status: Work in progress
  • Milestone: 1.3.x
  • Issue: https://github.com/openzfsonosx/zfs/issues/79
  • Notes: Initial work began at the OpenZFS Developer Summit Hackathon (http://open-zfs.org/wiki/OpenZFS_Developer_Summit).

Long-term

/dev/disk#s# device nodes for each ZFS file system

  • Labels: Enhancement
  • Description: The use of the internal ZFS dataset names (e.g., foo/bar/hello) in lieu of actual device nodes (e.g., /dev/disk3s2s4) for the f_mntfromname has wreaked havoc in many domains. Unfortunately, this is a convention we inherited from upstream. In order to get a device node for every file system, we will need to write new IOKit code that creates proxy objects.
  • Status: Not started
  • Milestone: 1.3.x
  • Issue: https://github.com/openzfsonosx/zfs/issues/116

Disk arbitration control of ZFS file systems

  • Labels: Enhancement
  • Description: Allow OS X disk arbitration to recognize and automatically mount ZFS file systems and snapshots.
  • Status: Not started
  • Milestone: 1.3.x
  • Blocked by: /dev/disk#s# device nodes for each ZFS file system.

Spotlight

  • Labels: Bug
  • Description: Spotlight currently never scans, and anything it does learn is forgotten after the next unmount. Our current conjecture is that it cannot work until ZFS file systems are under disk arbitration's control and each file system has a /dev/disk#s# node.
  • Status: Diagnosis
  • Milestone: 1.3.x
  • Blocked by: /dev/disk#s# device nodes for each ZFS file system; disk arbitration control of ZFS file systems.

Handle disk renumbering

  • Labels: Enhancement
  • Description: ZFS needs to be able to recognize a device as healthy and present even if it is unplugged, then plugged back in and assigned a new disk#s# number by OS X. Currently ZFS thinks the disk has gone away and marks the pool degraded.
  • Status: Needs a design
  • Milestone: 1.4.x
  • Issue: https://github.com/openzfsonosx/zfs/issues/104
  • Blocked by: Implement the other 57% of vdev_disk.c.
  • Notes: No design has been decided on yet. ZEVO used GPT partition UUIDs, but that is a non-solution here: file vdevs do not have partitions, and Core Storage does not permit partitioning. Some possible designs are mentioned in https://github.com/openzfsonosx/zfs/issues/104. The same problem affects ZFS on Linux when the user relies on /dev/sd[a-z] nodes, which can be re-lettered when a drive is unplugged and replugged; it is less of an issue there because Linux offers alternate ways of specifying devices (e.g., /dev/disk/by-id) that are unavailable on OS X. We will probably need some sort of equivalent.