Project roadmap
From OpenZFS on OS X
Revision as of 00:49, 12 April 2014
Short-term
bmalloc
- Labels: Enhancement
- Description: Include the bmalloc slice allocator to increase performance by allowing more memory to be used.
- Status: Completed
- Milestone: 1.2.1
zfs recv panic
- Labels: Bug
- Description: In the O3X 1.2.0 DMG release, zfs recv -F can panic in some cases.
- Status: Work in progress
- Milestone: 1.2.1
- Notes: No longer panics, but now it hangs in dnode_special_close.
zfs_zget race
- Labels: Bug
- Description: The call to vnode_create or vnode_getwithvid must precede ZFS_OBJ_HOLD_EXIT and mutex_exit(&zp->z_lock).
- Status: Simply moving the vnode_create and vnode_getwithvid calls causes a panic.
- Milestone: 1.2.1
Convert sysctl zfs_vnop_vdev_ashift to a pool property
- Labels: Enhancement
- Description: Since this can vary from pool to pool, it shouldn't be set globally.
- Status: Not started
- Milestone: 1.2.1-1.3.# (TBD)
- Blocked by: We need to determine whether the handling of vdev_ashift has introduced, or failed to address, any pool incompatibilities, especially with respect to 4K-native or 512e disks.
ZFS Event Daemon (zed)
- Labels: Enhancement
- Description: Userland daemon to respond to events sent by the ZFS kernel extension. Can perform userland tasks that the kernel extensions cannot perform on their own.
- Status: Completed
- Milestone: 1.2.1-1.3.# (TBD)
launchd control of ZFS Event Daemon (zed)
- Labels: Enhancement
- Description: launchd should make sure the ZFS Event Daemon is running whenever the ZFS kernel extensions are active. We need to decide whether we will do this by watching the path /dev/zfs or by registering for a notification from the kernel.
- Status: Not started
- Milestone: 1.3.# (TBD)
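For the path-watching option, a launchd job could look roughly like the sketch below. The label and the zed install path are placeholder assumptions, and the kernel-notification alternative would not use WatchPaths at all; this only illustrates the /dev/zfs approach.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label and daemon path are hypothetical placeholders -->
    <key>Label</key>
    <string>org.openzfsonosx.zed</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/sbin/zed</string>
    </array>
    <!-- Launch zed when /dev/zfs appears, i.e. when the kext loads -->
    <key>WatchPaths</key>
    <array>
        <string>/dev/zfs</string>
    </array>
</dict>
</plist>
```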
zpool.cache fixes
- Labels: Bug
- Description: Pools need to be removed from /etc/zfs/zpool.cache when they are exported.
- Status: Not started
- Milestone: 1.2.1-1.3.# (TBD)
- Issue: https://github.com/openzfsonosx/zfs/issues/144
Autoimport of pools
- Labels: Enhancement
- Description: Import all known pools during boot and possibly on-demand when devices are hot plugged.
- Status: Workaround
- Milestone: 1.2.1-1.3.# (TBD)
- Blocked by: zpool.cache fixes ; launchd control of ZFS Event Daemon (zed)
- Notes: It is possible that the next release could temporarily include the autoimport hack and have zed and zpool.cache stripped out.
Port ZFS Test Suite from upstream
- Labels: Enhancement
- Description: Upstream OpenZFS has ported the old Solaris Test Framework (STF) to OpenZFS, and we need to port that new framework to O3X.
- Status: Work in progress
- Milestone: 1.3.# (TBD)
- Issue: https://github.com/openzfsonosx/zfs/issues/79
- Notes: Initial work began at the OpenZFS Developer Summit Hackathon.
Long-term
/dev/disk#s# device nodes for each ZFS file system
- Labels: Enhancement
- Description: The use of the internal ZFS dataset names (e.g., foo/bar/hello) in lieu of actual device nodes (e.g., /dev/disk3s2s4) for the f_mntfromname has wreaked havoc in many domains. Unfortunately, this is a convention we inherited from upstream. In order to get a device node for every file system, we will need to write new IOKit code that creates proxy objects.
- Status: Not started
- Milestone: 1.3.# (TBD)
- Issue: https://github.com/openzfsonosx/zfs/issues/116
Disk arbitration control of ZFS file systems
- Labels: Enhancement
- Description: Allow OS X disk arbitration to recognize and automatically mount ZFS file systems and snapshots.
- Status: Not started
- Milestone: 1.3.# (TBD)
- Blocked by: /dev/disk#s# device nodes for each ZFS file system
Spotlight
- Labels: Bug
- Description: Spotlight currently never scans, and anything it does learn is forgotten immediately after the next unmount. Our current conjecture is that it cannot work until ZFS file systems are under disk arbitration's control and each file system has a /dev/disk#s# node.
- Status: Diagnosis
- Milestone: 1.3.# (TBD)
- Blocked by: /dev/disk#s# device nodes for each ZFS file system ; Disk arbitration control of ZFS file systems
Handle Disk Renumbering
- Labels: Enhancement
- Description: ZFS needs to be able to recognize a device as healthy and present even if it is unplugged, then replugged in and assigned a new disk#s# number by OS X. Currently ZFS thinks the disk has gone away and the pool becomes degraded.
- Status: Needs a design
- Milestone: 1.4.# (TBD)
- Issue: https://github.com/openzfsonosx/zfs/issues/104
- Blocked by: Implement the other 57% of vdev_disk.c
- Notes: No design decided on yet. ZEVO used the GPT partition UUIDs, but this is a non-solution since file vdevs do not have partitions and Core Storage does not permit partitioning. Some possible designs are mentioned in https://github.com/openzfsonosx/zfs/issues/104. It's worth noting that this exact same problem affects ZFS on Linux when the user relies on /dev/sd[a-z] nodes, which can end up re-lettered when a drive is unplugged and replugged in. This is less of an issue for ZFS on Linux because they have alternate ways of specifying devices (e.g., /dev/disk/by-id) which are unavailable on OS X. We will probably need some sort of equivalent.