Development

As of 1.5.2 we switched KMEM_QUANTUM to 128k based on feedback from a user. It was believed at the time that some tuning in the allocator had enabled this improvement. Surprisingly, this has led to reduced performance and some stuttering/beachballing on various machines. There is no apparent predictability as to which class of machine will suffer from this; newer, faster machines are apparently more susceptible than the reference machine (a Mac mini) around which the 128k opinion was formed. It also seems that allowing wired memory to become very large can (and perhaps always does) result in performance problems.
  
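Roughly speaking, the quantum is the span size in which the SPL obtains (and releases) backing memory from the kernel page allocator, so a larger quantum means fewer but larger trips to the page allocator. The following is only a minimal sketch of that idea: the KMEM_QUANTUM name and the 128k value come from the discussion above, while osif_malloc(), quantum_span_alloc() and the rounding logic are illustrative stand-ins rather than the actual spl-kmem code.

 #include <sys/sysmacros.h>              /* P2ROUNDUP() */
 
 /* Sketch only: the 1.5.2 setting discussed above. */
 #define KMEM_QUANTUM    (128 * 1024)
 
 /* Illustrative prototype for the low-level OS allocation hook. */
 extern void *osif_malloc(size_t size);
 
 static void *
 quantum_span_alloc(size_t size)
 {
         /*
          * Each request to the page allocator is a whole number of
          * quanta, so the quantum size controls how often we go there.
          */
         size = P2ROUNDUP(size, KMEM_QUANTUM);
         return (osif_malloc(size));
 }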

There has been further investigation into exactly why we need to obtain large blocks of memory from the page allocator when the kernel's own level 2 allocator does not. It turns out that on Illumos vmem does not, in general, return memory to the page allocator, because there it is the system-wide allocator. In our case we do have to release memory back to the OS under pressure. To achieve this we need to configure vmem to act more like libumem does in user space, that is, to know that it has an upstream allocator that must be cooperated with. Furthermore, it turns out that the "quantum caches" in the heap vmem arena were not active, because the vmem arena chaining was not working at all (this is a bug). While this bug remains, the size of KMEM_QUANTUM is a proxy for the frequency of memory allocations/frees via the kernel page allocator. High frequency is not good: the page allocator is slow and heavily impacts operation of the machine (TLB shootdowns etc.).
  
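The "quantum caches" are the small kmem caches that vmem builds in front of an arena when it is created with a non-zero qcache_max, so frequent small allocations are satisfied from those caches instead of walking the arena and, ultimately, hitting its source. Below is a hedged sketch of what a correctly chained heap arena looks like using the Illumos-style vmem_create() interface; the arena name, parent_arena, heap_arena_init() and the 8-page qcache_max are illustrative, not the exact values used in spl-vmem.c.

 #include <sys/vmem.h>
 #include <sys/param.h>          /* PAGESIZE */
 
 /* Illustrative: the arena that imports spans from the page allocator. */
 extern vmem_t *parent_arena;
 
 static vmem_t *heap_arena;
 
 static void
 heap_arena_init(void)
 {
         /*
          * Import spans from parent_arena via vmem_alloc()/vmem_free()
          * (the arena chaining that was found to be broken), and front
          * allocations of up to 8 pages with quantum caches so that
          * small, frequent allocations never reach the page allocator
          * directly.
          */
         heap_arena = vmem_create("heap", NULL, 0, PAGESIZE,
             vmem_alloc, vmem_free, parent_arena,
             8 * PAGESIZE,               /* qcache_max: enables the quantum caches */
             VM_SLEEP);
 }

With the chaining working, only span-sized imports should reach the parent arena, which is what should decouple the frequency of page-allocator calls from the choice of KMEM_QUANTUM.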
 