Weird perf pattern / tunables


Post by Keltounet » Fri May 24, 2013 10:56 am

Hello, I'm using ZEVO with a Promise R4 disk array: four 1 TB disks in raidz1, connected over Thunderbolt to a Mac Mini. Performance is rather weird, alternating between spikes and periods when nothing happens (I will provide a screenshot later).

Are there any tunables (à la sysctl or /etc/system) to display/change behaviour?
Keltounet
Posts: 5
Joined: Thu Nov 15, 2012 8:28 am

Re: Weird perf pattern / tunables

Post by grahamperrin » Sat May 25, 2013 6:27 am

How do you measure the peaks?
grahamperrin
Posts: 1596
Joined: Fri Sep 14, 2012 10:21 pm
Location: Brighton and Hove, United Kingdom

Re: Weird perf pattern / tunables

Post by Keltounet » Mon May 27, 2013 6:44 am

grahamperrin wrote:How do you measure the peaks?


I use iStat Menus, which shows me read/write performance; I haven't done a full benchmark. The system feels slow, and it is (which I could accept, knowing my data is much safer than with HFS+, but still). Launching Lightroom or a browser is slower than on HFS+. I'll try to post a screenshot tonight when I'm home. The pattern for both reads and writes is always high-low-high-low-...
Keltounet
Posts: 5
Joined: Thu Nov 15, 2012 8:28 am

Re: Weird perf pattern / tunables

Post by Keltounet » Mon May 27, 2013 12:32 pm

Here is the screenshot:
Image
Keltounet
Posts: 5
Joined: Thu Nov 15, 2012 8:28 am

Re: Weird perf pattern / tunables

Post by ilovezfs » Tue May 28, 2013 11:06 am

I would assume the spikes are just the cache being flushed on writes. As for the reads showing a similar pattern, do you have the "atime" property on or off? For better performance make sure "atime" is off.
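For reference, the property can be checked and changed per dataset; "tank" below is a placeholder pool name, not the poster's actual pool:

```shell
# Check the current atime setting (tank is a placeholder pool name)
zfs get atime tank

# Disable access-time updates, avoiding a metadata write on every read
sudo zfs set atime=off tank
```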

In any case, I'd recommend experimenting with mirroring to see how the performance compares.
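A mirror experiment over the same four disks could look like this. Note this is a sketch with placeholder pool and device names, and that recreating a pool destroys all data on it, so back up first:

```shell
# WARNING: destroys the existing pool and all data on it
sudo zpool destroy tank

# Recreate as two striped two-way mirrors (device names are placeholders)
sudo zpool create tank mirror disk0 disk1 mirror disk2 disk3
```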

Also, depending on the type of data, you could try setting the "sync" property to "disabled." *waits for tomatoes to be thrown at him*

From the zfs(8) man page documentation:
sync=disabled
Synchronous requests are disabled. File system transactions commit to stable storage only on the next DMU transaction group commit, which might be after many seconds. This setting gives the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases (e.g. Mail, iTunes, iPhoto, Spotlight, etc.). Expert users should only use this option when all the risks are understood.
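For the record, toggling the property looks like this ("tank" is a placeholder pool name); setting it back to standard restores the default behaviour:

```shell
# Trade durability for throughput: ignore synchronous write demands
sudo zfs set sync=disabled tank

# Restore the default when done experimenting
sudo zfs set sync=standard tank
```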

Another issue is Spotlight. Try turning it off (mdutil -i off) and see how performance changes.
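On OS X that would be something like the following, where the volume path is a placeholder for wherever the pool is mounted:

```shell
# Disable Spotlight indexing for the ZFS volume
sudo mdutil -i off /Volumes/tank

# Check the indexing status afterwards
mdutil -s /Volumes/tank
```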
ilovezfs
Posts: 249
Joined: Sun Feb 10, 2013 9:02 am

Link

Post by grahamperrin » Tue May 28, 2013 1:04 pm

Keltounet wrote:… Pattern for both read & write is always high-low-high-low-...


I often see those peaks and troughs.
grahamperrin
Posts: 1596
Joined: Fri Sep 14, 2012 10:21 pm
Location: Brighton and Hove, United Kingdom

Re: Weird perf pattern / tunables

Post by BjoKa » Fri Jun 14, 2013 5:26 pm

Sorry for replying to an old thread, but since this and similar questions do reappear at various places:

The write peaks at regular intervals are expected for a mostly read-loaded ZFS (or for a ZFS with moderate to low write load and little reading). They originate from the way the ZFS I/O pipeline works, especially how transaction grouping works. Basically, ZFS closes a transaction group and writes it to disc every 5 seconds, or earlier if a certain amount of data (I don't remember the number right now) has piled up in the transaction group. This is mostly independent of pool layout.
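This commit cadence can be observed directly. On OpenSolaris-derived implementations the interval is the zfs_txg_timeout tunable; I don't know whether ZEVO exposes an equivalent. "tank" is a placeholder pool name:

```shell
# Print pool I/O statistics every second; writes should cluster in
# bursts at roughly the transaction-group commit interval (~5 s)
zpool iostat tank 1
```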
BjoKa
Posts: 14
Joined: Sat Feb 02, 2013 3:18 pm
Location: Germany

