Performance Observation


Performance Observation

Post by jollyjinx » Wed Sep 26, 2012 8:16 am

I've been using Zevo's ZFS since December. I had dedup enabled in the past (a 2 TB pool with 24 GB of RAM, so memory was not the issue), then disabled it at some point and assumed performance would be fine afterwards.
Even with dedup disabled, the system did not return to normal speed. General performance and scrubs on the pool were really slow, and the whole system sometimes hung for minutes.

The pool consisted of two mirrored 2 TB drives. I pulled one disk out and copied (zfs send -R) the whole pool to a new pool, without dedup, on the second disk. Speed is now much, much higher; a scrub takes only hours instead of a couple of days.
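
For anyone wanting to repeat this, a rough sketch of the migration, assuming hypothetical pool names (oldpool, newpool) and a hypothetical disk identifier, since the original names and paths weren't given. Dedup is off by default on the new pool.
Code: Select all
# detach one side of the mirror and build a fresh single-disk pool on it (disk id is hypothetical)
sudo zpool detach oldpool /dev/disk3
sudo zpool create newpool /dev/disk3

# take a recursive snapshot of everything, then replicate the whole pool to the new one
sudo zfs snapshot -r oldpool@migrate
sudo zfs send -R oldpool@migrate | sudo zfs recv -F newpool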

free space thresholds

Post by grahamperrin » Wed Sep 26, 2012 10:14 am

Please, did free space on the affected pool(s) ever fall below any of the following thresholds?

  • 30%
  • 20%
  • 15%
  • 10%
  • 4%

If so, what was the percentage free – and amount (GB) free – at the time of peak usage?
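
For reference, the fullness can be read straight from zpool/zfs output; a minimal sketch, with the pool name "tank" as a placeholder:
Code: Select all
# the CAP column shows the percentage of the pool that is allocated
zpool list tank

# USED and AVAIL per dataset; free % = AVAIL / (USED + AVAIL)
zfs list -o name,used,avail,refer tank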

Re: Performance Observation

Post by jollyjinx » Wed Sep 26, 2012 1:02 pm

Before cleaning up the pool, peak usage was around 1.4 TiB of the 1.82 TiB capacity.
Now it's using 1.61 TiB of 1.82 TiB.

By the way: early on I added a mirrored log device (two SSDs) and a cache device (a third SSD) to the pool and noticed no perceptible performance gain. But then again, ZFS stupidly clears the cache device every time I boot.
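
For context, adding those devices looks roughly like this (the device identifiers are hypothetical, not the poster's actual ones):
Code: Select all
# mirrored SLOG from two SSDs, plus a single L2ARC (cache) device
sudo zpool add tank log mirror /dev/disk4 /dev/disk5
sudo zpool add tank cache /dev/disk6

Note that at the time the L2ARC was not persistent across reboots in any ZFS implementation, so the cache device starting empty after every boot is expected behaviour rather than a Zevo-specific fault.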

a question in Stack Exchange

Post by grahamperrin » Thu Sep 27, 2012 10:00 pm


Re: Performance Observation

Post by grahamperrin » Sat Oct 06, 2012 5:58 am

> … 1.61Ti of 1.82. …

Roughly eighty-eight percent full – that's above both of the thresholds recently added to Super User (transcribed from brief questions and answers at the recent ZFS Day), and not far from the ninety-something percent point at which performance is said to "suck".

Re: Performance Observation

Post by NakkiNyan » Sat Oct 06, 2012 9:08 pm

My observations have been really poor. I can't even tell I have a RAID setup with four disks: occasionally ZFS would shave a couple of seconds off a 1-2 minute transfer, but in most cases a single HFS+ drive was faster.

I tried timing reads and writes of a 10.63 GB file using dd, and here is what I got.
Code: Select all
HFS+ --- 1 disk over USB3
10634552144 bytes transferred in 175.921312 secs (60450619 bytes/sec)

real   2m55.927s
user   0m5.407s
sys   1m0.747s

ZFS --- 4 disk RAIDz over USB3 (same enclosure)
10634552144 bytes transferred in 196.522924 secs (54113545 bytes/sec)

real   3m16.529s
user   0m6.036s
sys   3m9.823s
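
For anyone wanting to reproduce the comparison, the invocations were along these lines (the paths are hypothetical; the byte counts above are BSD dd's summary output):
Code: Select all
# write test: copy the ~10.63 GB source file onto the pool
time dd if=/path/to/source.bin of=/Volumes/tank/testfile bs=1m

# read test: stream the same file back and discard it
time dd if=/Volumes/tank/testfile of=/dev/null bs=1m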

I even re-made the pool (this was the fourth try) with all the right settings, like choosing whole disks instead of slices so that ashift is handled correctly, and with compression off. This is the best speed I've gotten so far. Am I the only one with this problem? If so, any suggestions? I guess I sort of expected RAID speed from a RAIDZ array, minus some minimal ZFS overhead.
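
For the record, a sketch of what "whole disks instead of slices" means at pool-creation time (the device identifiers are hypothetical, and whether an explicit ashift option is available depends on the ZFS port in use):
Code: Select all
# whole-disk vdevs, so the 4K sector size (ashift=12) can be detected for the whole device
sudo zpool create -f tank raidz /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5

# on ports that accept it, ashift can be forced at creation time:
# sudo zpool create -o ashift=12 tank raidz /dev/disk2 /dev/disk3 /dev/disk4 /dev/disk5

sudo zfs set compression=off tank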

I will give it a week; otherwise I am looking at SnapRAID, which concatenates with parity and lets you mix disk sizes and add disks at any time. Sadly it does not stripe, so it is bound to single-disk speed. I would like to hear if anyone has other suggestions there (other than Apple RAID, which is a possibility).

Re: Performance Observation

Post by grahamperrin » Sun Oct 07, 2012 4:13 am

Questions for NakkiNyan

OS?

Mac model and memory?

What make and model is the one enclosure that you use for all four disks?

What makes, models and sizes are the disks?

NakkiNyan wrote:… expected RAID speed from a RAIDz array minus some minimal ZFS overhead. …


Not necessarily. Please see the items linked from RAID-Z and 4K sector size (Advanced Format).

In addition, http://www.ustream.tv/recorded/25865866 around 24:28 on the timeline (transcription and links by me):

… So the question is, "Can I mention briefly the work about timing of I/Os to ZFS?". … in Brendan's talk there was a slide where he showed … a case where, like, iostat shows zero I/Os going to the device, but it's one hundred percent busy. We've seen similar things at Delphix, primarily with a variety of … controllers. From what we know of the problem … it does tend to be like, a failing drive or an ECC error or something of that nature, but when you actually begin to look, what you're finding is that the I/Os have been issued from ZFS, they've been transferred to the controller, and then: they just never return.

So … ZFS has long had a policy of not having timeouts. It relies on, kind of, the underlying controllers to really make the decision … "If only the drive would tell me that it was failing." or "Give me an error of any kind!" and from the ZFS perspective, we could actually do something about it.

What I've implemented … just a workaround to deal with these types of issues until we can actually get root cause or …


– George Wilson

Re: Performance Observation

Post by NakkiNyan » Sun Oct 07, 2012 5:50 am

grahamperrin wrote:Questions for NakkiNyan

10.8.2
MBP Retina 16GB RAM
Sans Digital TR4U+B
4 x SAMSUNG EcoGreen F4 HD204UI 2TB 32MB Cache

grahamperrin wrote:Not necessarily.

Your examples show how bad iostat is for benchmarking and that 4K sectors waste space, not that ZFS is crippled to the speed of one disk. I have tried a benchmark app (Blackmagic Disk Speed Test), iostat, and dd reads and writes to and from an internal SSD, where I get 400+ MB/s. iostat, even sampling every second for 60 seconds, is a bad benchmark, even under heavy load like mine. If RAIDZ does not provide RAID capabilities I fail to see the point of ZFS; there are far better options out there with fewer issues and several equivalent capabilities (backups, striping, file checking and repair using checksums, etc.). I wanted a better, more automated option than those, though, because manual checking and repair is a pain.
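
For what it's worth, the per-second sampling mentioned above looks roughly like this with the BSD-derived iostat on OS X (the disk names are placeholders):
Code: Select all
# sample disk throughput once per second, 60 samples, disk statistics only
iostat -d -w 1 -c 60 disk0 disk2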

I am not stupid enough to rely on only one of them, and it is getting insulting that I keep being told that; that goes for everyone. I am trying to make this work and coming here for advice, not to get messages like this that insult my intelligence and give zero advice. If I just wanted to rail on ZFS I would not have tried re-making the pool four times; I would make a post showing how bad it is and leave.

Re: Performance Observation

Post by grahamperrin » Sun Oct 07, 2012 7:24 am

The questions about hardware were not intended as personal insults.

Re: Performance Observation

Post by NakkiNyan » Sun Oct 07, 2012 7:36 am

I did a test of a 4 x 2 TB RAIDZ with ashift=0 vs ashift=12, compression off, using the same file (a movie, so I had random bits instead of 5 GB of zeros) and timing with "time cp ...".
Code: Select all
2.64GB - (2,642,467,903 bytes)
HFS+        44.774sec
ashift=12   45.924sec
ashift=0    48.232sec
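
In case it helps anyone repeat it, the timing was of this form, run once per target configuration (the file and mount-point paths are placeholders for the elided ones):
Code: Select all
# copy the same ~2.64 GB movie onto each target and compare wall-clock time
time cp /path/to/movie.m4v /Volumes/tank/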


I know the hardware question was not an insult; that information is needed for debugging problems. But telling someone to watch a video when they have tested in multiple situations is. Anyway, done with that and back to the topic: performance.
