Performance Observation

Moderators: jhartley, MSR734, nola

Re: Performance Observation

Post by dbrady » Sun Oct 07, 2012 12:04 pm

In the HFS+ case, is this just one drive involved? If so, can you test what 4 parallel copies to 4 HFS+ disks look like? The bus or device may have an upper limit that is not 4x the single-drive case. You could also try setting up a RAID0 stripe and seeing how well the HFS+ test case scales. Thanks.
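The parallel-copy test suggested above could be scripted roughly like this. To keep the sketch runnable anywhere, the source file and destinations are temporary files and directories; in the real test each destination would be a separate HFS+ volume (something like /Volumes/disk1 through /Volumes/disk4 — those paths are assumptions, not from the thread):

```shell
# Sketch of the 4-parallel-copies test. Temp files/dirs stand in for
# the real source file and the four HFS+ mount points.
SRC=$(mktemp)
dd if=/dev/zero of="$SRC" bs=1024 count=1024 2>/dev/null  # 1 MiB dummy file
D1=$(mktemp -d); D2=$(mktemp -d); D3=$(mktemp -d); D4=$(mktemp -d)

START=$(date +%s)
cp "$SRC" "$D1/" &
cp "$SRC" "$D2/" &
cp "$SRC" "$D3/" &
cp "$SRC" "$D4/" &
wait  # block until all four background copies finish
END=$(date +%s)
echo "4 parallel copies took $((END - START))s"
```

With a large enough source file, comparing this elapsed time against 4x the single-copy time shows whether the bus or enclosure is the bottleneck.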

Re: Performance Observation

Post by NakkiNyan » Sun Oct 07, 2012 7:22 pm

I don't have 8 drives; I have 4, plus my spare (normally Time Machine and backup) holding the data while I mess with all of this. It is easier to play with 3 because of the amount of data, so I will try that and report back.

But first, I'm done messing with this junk while I watch the Falcon 9 spew fire as it flies into space!

Re: Performance Observation

Post by NakkiNyan » Sun Oct 07, 2012 8:45 pm

3 disks is easier, since with 4 I would need to make a file, make a pool, make a sparse file in the pool, remove the file, move some data from the 4th disk into the RAID, replace the sparse file with the disk, resilver and continue, and then do the same in reverse to destroy it and start over. Painful, so you get a test with 3...

File = 2,642,467,903 bytes... 2.6GB

HFS+ 1 2TB (Empty drive)
- real 0m20.009s ----- 125.95 MB/s
HFS+ 3 x2TB striped
- real 0m12.713s ----- 198.22 MB/s
ZFS 3 x2TB => "zpool create -f -o ashift=12 Freya /dev/disk1 /dev/disk2 /dev/disk3" (is this RAID0?)
- real 0m15.459s ----- 163.02 MB/s
ZFS 3 x2TB **compression=on => "zpool create -f -o ashift=12 Freya raidz /dev/disk1 /dev/disk2 /dev/disk3"
- real 0m42.992s ----- 58.62 MB/s
ZFS 3 x2TB **compression=off => "zpool create -f -o ashift=12 Freya raidz /dev/disk1 /dev/disk2 /dev/disk3"
- real 0m47.650s ----- 52.89 MB/s

Not shocked that the compressed run was shorter: more CPU, but less disk I/O. I wish cp reported a rate. I know rsync does, but it wastes time checksumming, so the timing is off, and it uses that time to calculate the transfer rate.
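Since cp doesn't print a rate, the MB/s figures above can be recomputed from the file size and the `real` time; they work out as MiB/s (bytes / 1048576 / seconds). A quick check of the single-drive HFS+ run, for example:

```shell
# Recompute the 125.95 MB/s figure for the single HFS+ drive:
# 2,642,467,903 bytes copied in 20.009 s, with "MB" meaning MiB (2^20 bytes).
awk 'BEGIN { printf "%.2f MB/s\n", 2642467903 / 1048576 / 20.009 }'
```

The same arithmetic applied to any of the `real` times above reproduces the corresponding rate.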

Re: Performance Observation

Post by wonkywonky » Sun Oct 07, 2012 11:39 pm

NakkiNyan wrote:My observations have been really poor. I can't even tell I have a RAID setup with 4 disks; rarely would I shave a couple of seconds off a 1-2 min transfer with ZFS, and in most cases a single HFS+ drive was faster.


Unlike RAID-5, where read performance gets a big bump, a single RAIDZ vdev is in general limited to the performance of a single disk. RAIDZ is great when the primary goal is maximizing available space, but you give up performance. One way to overcome this is to group your disks into multiple vdevs (which get striped).

Have you encountered this summary before? It's a nice, concise roundup of the various schemes and their performance implications:
http://constantin.glez.de/blog/2010/06/ ... erformance

Also, I'm not sure that running that 4-drive array over USB3 will give you the maximum bandwidth/IOPS you could get from having the drives running internally. The best sequential transfer rates I've gotten from my USB3 enclosures (with Vertex 3 and Samsung 830-class SSDs) have been ~300MB/s.

Edit: Oops, wrote some of that before I realized there was a second page of posts.

Re: Performance Observation

Post by si-ghan-bi » Mon Oct 08, 2012 12:16 am

RAIDZ limits performance to that of a single drive... for small files (about 4KB in size). Big files still scale with n-1 disks, at least in theory.
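Taking the figures posted earlier in the thread, that in-theory scaling can be roughly checked (a rough comparison only, since the single-disk baseline is HFS+ rather than a one-disk ZFS pool): a 3-disk RAIDZ should stream big files at about (n-1) = 2x the single-disk rate, far above what was measured:

```shell
# Theoretical n-1 RAIDZ streaming rate vs. the measured figures above
# (single HFS+ disk: 125.95 MB/s; 3-disk raidz, compression=off: 52.89 MB/s).
awk 'BEGIN {
  single   = 125.95
  n        = 3
  expected = (n - 1) * single   # theoretical big-file rate for 3-disk raidz
  measured = 52.89
  printf "expected %.2f MB/s, measured %.2f MB/s\n", expected, measured
}'
```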

Re: Performance Observation

Post by wonkywonky » Mon Oct 08, 2012 12:38 am

si-ghan-bi wrote:RAIDZ limits performance to that of a single drive... for small files (about 4KB in size). Big files still scale with n-1 disks, at least in theory.


I'm curious: what sort of scaling have you seen in your own setups? I found that even with big files (photos, videos) the performance of a 4-drive RAIDZ was not much better than a single disk, and it took adding a second vdev to get performance to the point where my Gigabit link was saturated.

Re: Performance Observation

Post by NakkiNyan » Mon Oct 08, 2012 1:24 am

si-ghan-bi wrote:RAIDZ limits performance to that of a single drive... for small files (about 4KB in size). Big files still scale with n-1 disks, at least in theory.

I am not even getting the performance of a single drive over the same link in the same enclosure with 500MB-17GB files. Performance of a single ZFS disk is about the same as a single HFS+ disk. In my case RAIDZ is the issue regardless of file size. I am not interested in having 2 mirrored vdevs; that is just a waste.

Re: Performance Observation

Post by grahamperrin » Mon Oct 08, 2012 3:08 am

Not specific to any enquiry in this topic, here's a neat set of 2011 observations, referred from the Frequently Asked Questions about MacZFS entry "What should I do with 4k (Advanced Format) hard drives?" (but not limited to ashift):

MacZFS Speed « Oceanside Coding (highlights)

The 4 GB of memory there with MacZFS matches the current minimum expected for ZEVO, and it's OSx86, but still: it's a good read.

recommended reading

Post by grahamperrin » Mon Nov 05, 2012 12:17 pm

Referred from NAPP-IT ZFS SERVER: the basic manual (PDF):


(I recall seeing it long ago, but didn't bookmark it because at the time I didn't imagine myself (with a laptop and not much else) ever needing the depth of knowledge that's in the article. Revisiting the page today, it's great.)

Re: Performance Observation

Post by TomUnderhill » Fri Nov 09, 2012 7:19 pm

I currently have two Seagate Barracuda 3TB SATA drives mounted internally in bays 1 and 3 of my 2008 Mac Pro (3,1), with 24GB RAM and a 128GB SSD boot drive.

The two HDDs are striped as follows:
Code: Select all
config:

   NAME                                         STATE     READ WRITE CKSUM
   ZraidA                                       ONLINE       0     0     0
     GPTE_EB991614-71C6-4B13-979D-0284933C1FAC  ONLINE       0     0     0  at disk0s2
     GPTE_47F2D542-A8DC-4F32-9204-C33E650635CF  ONLINE       0     0     0  at disk1s2

Blackmagic Disk Speed Test returns writes as high as 590.0 MB/s and reads of 260.9 MB/s.

Xbench 1.3 returns the following:
Code: Select all
Sequential
   Uncached Write    212.81 MB/s (4K blocks)
   Uncached Write    2505.11 MB/s (256K blocks)
   Uncached Reads    576.65 MB/s (4K blocks)
   Uncached Reads    1541.39 MB/s (256K blocks)

Random
   Uncached Write    190.55 MB/s (4K blocks)
   Uncached Write    24447.41 MB/s (256K blocks)
   Uncached Reads    473.95 MB/s (4K blocks)
   Uncached Reads    2841.98 MB/s (256K blocks)

Regardless of the "actual" speed ZFS returns, the system delivers a much snappier response than I had with a four-drive striped RAID of 1TB HDDs under SoftRAID...

Speed is important, but it is not the driving factor for me. Many of my files go back 13 years, and I have actually experienced bit rot in my data on RAID5 NAS boxes. I need my data secure at the drive level as well as the bit level.

Next week I intend to add two more identical drives for a mirror of two-drive stripes. Once I have that, I will post updated rates. If I have time, I will create a single-drive vdev and measure its speed to compare to the two-drive stripe.
