Zevo performance success stories?

Post by ghaskins » Fri Dec 07, 2012 12:55 pm

Hello all,

So we have seen a few posts recently from people having problems getting more than a single spindle's worth of I/O out of Zevo (including one I started myself, though I don't know how to link to other threads in this interface yet). I thought I would flip it around and ask the question:

"Are you happy with the performance of your ZFS setup?"

If you are happy and would care to post about how you have it configured, details of your hardware and software, and any relevant performance data you have gathered, perhaps those of us who are having problems may learn from you.

Thanks
-Greg

Post by grahamperrin » Fri Dec 07, 2012 11:42 pm

For a user with three FireWire 800 drives:

  • 2x 3 TB Western Digital My Book 111D
  • 1x 80 GB Intel SSD (around three years old)

Hard disk drive performance was normal and the solid state drive performed well. In that case (Performance issue with FW800 connection order?), the order in which the devices were connected was critical.

Core Storage encrypted ZFS home in a MacBookPro5,2

Post by grahamperrin » Fri Dec 07, 2012 11:52 pm

A Seagate Momentus® XT ST750LX003-1AC154 solid state hybrid drive in a MacBookPro5,2 with 8 GB of memory.

For this single physical disk:

Code: Select all
sh-3.2$ diskutil list disk0
/dev/disk0
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *750.2 GB   disk0
   1:                        EFI                         209.7 MB   disk0s1
   2:                  Apple_HFS swap                    32.0 GB    disk0s2
   3: FFFFFFFF-FFFF-FFFF-FFFF-FFFFFFFFFFFF               536.9 MB   disk0s3
   4:                  Apple_HFS spare                   671.1 MB   disk0s4
   5:          Apple_CoreStorage                         99.5 GB    disk0s5
   6:                 Apple_Boot Boot OS X               650.0 MB   disk0s6
   7:          Apple_CoreStorage                         616.3 GB   disk0s7
   8:                 Apple_Boot Boot OS X               134.2 MB   disk0s8


Code: Select all
sh-3.2$ diskutil coreStorage list
CoreStorage logical volume groups (2 found)
|
+-- Logical Volume Group 039C0D47-F3CF-44D3-A825-B48F01FCF334
|   =========================================================
|   Name:         OS
|   Status:       Online
|   Size:         99484213248 B (99.5 GB)
|   Free Space:   0 B (0 B)
|   |
|   +-< Physical Volume 86D2FA98-8C69-4828-B909-8312AE4A75E2
|   |   ----------------------------------------------------
|   |   Index:    0
|   |   Disk:     disk0s5
|   |   Status:   Online
|   |   Size:     99484213248 B (99.5 GB)
|   |
|   +-> Logical Volume Family 7AA96B8E-0E41-4C3F-9589-5FAE0C956372
|       ----------------------------------------------------------
|       Encryption Status:       Unlocked
|       Encryption Type:         AES-XTS
|       Conversion Status:       Complete
|       Conversion Direction:    -none-
|       Has Encrypted Extents:   Yes
|       Fully Secure:            Yes
|       Passphrase Required:     Yes
|       |
|       +-> Logical Volume B13EE5BF-5D08-49D3-94C2-DF58AFEA1D08
|           ---------------------------------------------------
|           Disk:               disk1
|           Status:             Online
|           Size (Total):       99165437952 B (99.2 GB)
|           Size (Converted):   -none-
|           Revertible:         No
|           LV Name:            OS
|           Volume Name:        OS
|           Content Hint:       Apple_HFS
|
+-- Logical Volume Group 902434C9-0131-4E3A-AE15-2B8B938087AD
    =========================================================
    Name:         gjp22-cs
    Status:       Online
    Size:         616336003072 B (616.3 GB)
    Free Space:   0 B (0 B)
    |
    +-< Physical Volume 179AADE6-34F1-404C-A994-9FD99C881BA6
    |   ----------------------------------------------------
    |   Index:    0
    |   Disk:     disk0s7
    |   Status:   Online
    |   Size:     616336003072 B (616.3 GB)
    |
    +-> Logical Volume Family FFCE2FAF-BE8E-4FEF-9F3E-E221C6CBCA11
        ----------------------------------------------------------
        Encryption Status:       Unlocked
        Encryption Type:         AES-XTS
        Conversion Status:       Complete
        Conversion Direction:    -none-
        Has Encrypted Extents:   Yes
        Fully Secure:            Yes
        Passphrase Required:     Yes
        |
        +-> Logical Volume 0CFAFD38-E79B-40AC-A4BE-63296E6B4331
            ---------------------------------------------------
            Disk:               disk6
            Status:             Online
            Size (Total):       616017227776 B (616.0 GB)
            Size (Converted):   -none-
            Revertible:         No
            LV Name:            gjp22-cs
            Content Hint:       Apple_HFS


Highlight:

Code: Select all
        +-> Logical Volume 0CFAFD38-E79B-40AC-A4BE-63296E6B4331
            ---------------------------------------------------
            Disk:               disk6
            Status:             Online
            Size (Total):       616017227776 B (616.0 GB)
            Size (Converted):   -none-
            Revertible:         No
            LV Name:            gjp22-cs
            Content Hint:       Apple_HFS


Ignoring the content hint, it's truly ZFS, and gjp22 is my home directory:

Code: Select all
sh-3.2$ diskutil list disk6 && diskutil list disk7
/dev/disk6
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:      GUID_partition_scheme                        *616.0 GB   disk6
   1:                        EFI                         209.7 MB   disk6s1
   2:                        ZFS                         615.7 GB   disk6s2
/dev/disk7
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:             zfs_pool_proxy gjp22                  *614.2 GB   disk7
   1:       zfs_filesystem_proxy intrigue                117.2 GB   disk7s1
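
To make the same check on another setup, here's a minimal sketch, assuming the pool is named gjp22 as in the listing above:

Code: Select all
# pool health and the device backing it
zpool status gjp22
# datasets in the pool, including gjp22/intrigue
zfs list -r gjp22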


Whilst performance is reduced by the mixture with HFS Plus (keyword: latency), I am happy.

Re: Zevo performance success stories?

Post by tangles » Sat Dec 08, 2012 8:34 pm

I'm rapt with the performance of my efforts.

Code: Select all
mrq:~ sadmin$ zpool status
  pool: Data
 state: ONLINE
 scan: scrub repaired 0 in 2h43m with 0 errors on Wed Nov 28 23:22:41 2012
config:

   NAME                                           STATE     READ WRITE CKSUM
   Data                                           ONLINE       0     0     0
     raidz1-0                                     ONLINE       0     0     0
       GPTE_7FF8B766-B40D-427E-B4FB-6D06BE2D5C24  ONLINE       0     0     0  at disk2s2
       GPTE_266D6073-2A1E-44C0-A3ED-1414A16F23CF  ONLINE       0     0     0  at disk1s2
       GPTE_312E1FF8-5AD6-46A5-B349-CD198BB7650F  ONLINE       0     0     0  at disk0s2
     raidz1-1                                     ONLINE       0     0     0
       GPTE_7ED54770-69A5-47DF-A88C-DB968EDCACE0  ONLINE       0     0     0  at disk5s2
       GPTE_55B4FCD7-A340-48EF-A07B-6BAD50EA5B03  ONLINE       0     0     0  at disk3s2
       GPTE_C780F366-DACC-4353-A97E-70F543677D15  ONLINE       0     0     0  at disk4s2
     raidz1-2                                     ONLINE       0     0     0
       GPTE_2F013EDA-524B-42D9-A596-33561E3F0CB6  ONLINE       0     0     0  at disk8s2
       GPTE_BA31AC30-F529-4912-85E8-1295F4A0B581  ONLINE       0     0     0  at disk13s2
       GPTE_5380700A-E849-46C6-AA3F-026249B0AAB4  ONLINE       0     0     0  at disk9s2
     raidz1-3                                     ONLINE       0     0     0
       GPTE_53D7F6A4-0D57-4A11-AA7A-5384E1B698EB  ONLINE       0     0     0  at disk11s2
       GPTE_817FF7D1-95BD-423A-BE65-E9265B17CD1F  ONLINE       0     0     0  at disk10s2
       GPTE_D6722D1F-94C8-4197-8BAB-7084B3C0AA22  ONLINE       0     0     0  at disk12s2
     raidz1-4                                     ONLINE       0     0     0
       GPTE_B8D4FD7B-665D-49A4-9B37-1DC548BE4EB8  ONLINE       0     0     0  at disk15s2
       GPTE_CCCBF756-1A5E-4CB0-B5B7-7D006B669870  ONLINE       0     0     0  at disk14s2
       GPTE_540E62A6-ABE9-413B-912E-4A9CEFD938B6  ONLINE       0     0     0  at disk16s2

errors: No known data errors
mrq:~ sadmin$


Code: Select all
mrq:~ sadmin$ zfs list
NAME                  USED   AVAIL   REFER  MOUNTPOINT
Data                4.12Ti  8.40Ti  1.78Mi  /Volumes/Data
Data/Documentaries   462Gi  8.40Ti   462Gi  /Volumes/Data/Documentaries
Data/Files           203Gi  8.40Ti   203Gi  /Volumes/Data/Files
Data/Movies         1.65Ti  8.40Ti  1.65Ti  /Volumes/Data/Movies
Data/Music           117Gi  8.40Ti   117Gi  /Volumes/Data/Music
Data/Pictures       67.8Gi  8.40Ti  67.8Gi  /Volumes/Data/Pictures
Data/Sport          10.7Gi  8.40Ti  10.7Gi  /Volumes/Data/Sport
Data/TVShows        1.51Ti  8.40Ti  1.51Ti  /Volumes/Data/TVShows
Data/Video           116Gi  8.40Ti   116Gi  /Volumes/Data/Video
mrq:~ sadmin$


Those raidz groups are made up of either 1TB drives or 2TB drives.

These drives hang off a RocketRAID 2744 controller, which conveniently defaults to JBOD mode.
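
For anyone curious how a layout like this is built up, a rough sketch of the zpool commands involved; the device names below are placeholders, not the GPTE identifiers shown above:

Code: Select all
# create the pool with the first three-disk raidz1 vdev
zpool create Data raidz /dev/diskA /dev/diskB /dev/diskC
# grow it later by adding further raidz1 vdevs; ZFS stripes writes
# across all of the raidz groups in the pool
zpool add Data raidz /dev/diskD /dev/diskE /dev/diskF
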

All this is inside an old Apple Network Server 500 (I gutted it because the second PCI controller was blown) which Netscape owned years and years ago (its FQDN when running Apple's AIX was freddy.netscape.com).
I removed all the drive caddies and folded some sheet metal to support 10 x Welland 3.5" drive bays.
I've also purchased some internal 3.5" drive mounts that are fitted on the mono side of the ANS500, so I can attach 16 drives to the RR 2744 as my data storage demands increase over time.

Now it's got an Intel X48BT2 mobo with a 3.0 GHz quad-core Xeon CPU and 16 GB of DDR3 RAM (4x 4 GB, 1333/1600 MHz), running Mac OS X 10.7.3 Server.
It boots off an old 18 GB U320 SCSI drive that's hanging off an LSI PCIe U320 HBA pulled from a 2006 Xserve.
A Thermaltake 1500 W PSU supplies power to everything, and the whole lot is connected to an APC 1200 VA UPS.

Services that use the storage are:
Transmission
EyeTV (Recording)
iTunes (Library folder)
iPhoto (Library folder)
AirMedia Server (points to the TV/Video shares and streams in real time to iOS devices on demand, hence the Xeon CPU!)
XBMC (for the Mac Mini gaffer-taped to the back of our TV in the lounge)

The I/O flies on this thing! But I have to use AFP (with the "liberate AFP" hack) because I was having weird issues with NFS, which I think are related to the case-sensitivity settings mentioned in other posts. It's a shame that there are no longer any InfiniBand drivers for OS X, because those HBAs are quite cheap now and great for peer-to-peer networking.

Getting the I/O out of this rig is the annoying thing...

I find AFP to be crap when compared to NFS's performance. When graphed, the performance of AFP looks like a roller-coaster/heartbeat graph, whereas NFS (which I've used in the past on different pools) was more consistent and even.
10GbE is still too expensive to play with at home, and it looks like Thunderbolt is not going to offer TCP/IP over TB (i.e. like FireWire did) any time soon either.

This is the third ZFS server I've set up for home use that's been tucked away inside the ANS500 case.
The first was a G4 PPC running MacZFS, using two ACard 6885M HBAs to drive 8 PATA disks.
The second was an x86 Mac Mini using a Silicon Image 3132 in the Mini PCIe wireless slot and two port-multiplier boxes for 10 SATA disks.

All I do is use ZFS's send/receive (with a FIFO) to transfer the data between the old pool and the new one.
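
For reference, that migration boils down to something like the following sketch; pool and snapshot names are placeholders, and the exact flags can vary between ZFS builds:

Code: Select all
# snapshot every dataset on the old pool
zfs snapshot -r Data@migrate
# stream the whole tree into the new pool (between two machines the
# same stream can go through nc or a named FIFO instead of a plain pipe)
zfs send -R Data@migrate | zfs receive -Fdu NewData
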

I'm very happy with this one, she's been rock solid for me since I set it up... (i.e. back around when 10.7.3 was out).

Cheers,

peaks and troughs

Post by grahamperrin » Sun Dec 09, 2012 3:58 am

tangles wrote:… roller-coaster/heartbeat …


If the peaks and troughs occur whilst writing to ZFS, it may be the ZFS write throttle.
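
One way to check is to watch the pool while a large write runs; a minimal sketch, assuming the pool is named Data as above:

Code: Select all
# one-second samples of pool bandwidth; regular dips between bursts of
# writes are consistent with the write throttle flushing transaction groups
zpool iostat Data 1
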

Re: Zevo performance success stories?

Post by ghaskins » Sun Dec 09, 2012 9:29 am

tangles wrote:getting the I/O out of this rig is the annoying thing...


Heh, you are hitting the same frustration as me. As I mentioned in another post, I was originally using iSCSI/JHFS+ as a SAN for my OS X environment, but it was becoming a bottleneck at GigE speeds, and I've also wanted to get ZFS into the mix. I wanted something high-bandwidth attached directly to my OS X machine, but IB/10GbE were expensive, and TB is frustratingly still unavailable in the Mac Pro form factor.

Then Zevo came along and allowed me to put a SAS HBA in and hook it all up as a DAS. Very cool.

I digress. Do you know what kind of bandwidth you get out of the individual drives and the aggregate array on the server side? For instance, what do you see when you do something like

dd if=/dev/zero of=/Data/test.dat bs=4k count=1m

or

dd if=/dev/zero of=/Data/test.dat bs=1m count=4k

?
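
One caveat with /dev/zero tests: if compression is enabled on the dataset, the write numbers will be inflated. A read-back in the other direction can be sketched like this (same hypothetical test file as above):

Code: Select all
# a freshly written file sits largely in the ARC, so for disk-bound numbers
# use a file bigger than RAM or one that hasn't been read recently
dd if=/Data/test.dat of=/dev/null bs=1m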

-Greg

Re: Zevo performance success stories?

Post by tangles » Thu Dec 13, 2012 8:05 am

Sorry guys, I've been busy getting a supercharged VW R32 organised before Chrissy...

I ran dd for you. iTunes on the Mac Mini in the lounge was playing a song and is still doing its "Match" thingy, but I don't think that would have a big impact on dd overall because iTunes has to go through AFP.

The following was run directly on the server:
Code: Select all
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=4k count=1m
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 26.047253 secs (164891372 bytes/sec)
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=4k count=1m
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 26.135008 secs (164337707 bytes/sec)
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=4k count=1m
1048576+0 records in
1048576+0 records out
4294967296 bytes transferred in 26.409533 secs (162629430 bytes/sec)
mrq:Data sadmin$


~ 160MB/sec. She certainly feels quicker than that, but I guess I'm accustomed to the read speeds too...

and for your 2nd dd test, which I ran immediately after:

Code: Select all
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=1m count=4k
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 9.213529 secs (466158759 bytes/sec)
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=1m count=4k
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 8.539348 secs (502961977 bytes/sec)
mrq:Data sadmin$ dd if=/dev/zero of=/Volumes/Data/test.dat bs=1m count=4k
4096+0 records in
4096+0 records out
4294967296 bytes transferred in 9.122831 secs (470793249 bytes/sec)
mrq:Data sadmin$


~ 450MB/sec... that's the type of speed that I was referring to in my initial post...

Gotta get back to the car,

Cheers,

Re: Zevo performance success stories?

Post by ghaskins » Thu Dec 13, 2012 2:20 pm

tangles wrote:450MB/sec... that's the type of speed that I was referring to in my initial post...


Wow, thanks for doing that. I appreciate it. It gives me hope that I can get mine sorted out too, though I don't have anywhere near the number of spindles you have.

I am actually in the process of returning the 4x 3TB SAS drives in favor of 6x 2TB SATA drives. Under ideal conditions the SAS drives should be superior unit for unit. However, I think there are problems in my driver stack that are preventing the SAS drives from reaching their full potential (small-block random I/O is substantially worse than on simple SATA devices right now). Plus, the more spindles you feed ZFS, the better. I'll report back tonight after they arrive. I'm hoping it's a good report ;)
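
When the new drives go in, one way to see whether a single device or the driver stack is holding things back is per-device iostat during a test run; a minimal sketch, with the pool name a placeholder:

Code: Select all
# -v breaks the numbers down per vdev and per disk, so one slow drive
# (or a misbehaving driver) stands out while a dd run is in progress
zpool iostat -v tank 1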

Good luck with your car project!

-Greg

Re: Zevo performance success stories?

Post by si-ghan-bi » Thu Dec 13, 2012 4:28 pm

ghaskins wrote:Plus, the more spindles you feed ZFS, the better.


If you actually use the additional performance, yes; otherwise it's just power drawn for nothing.

Re: Zevo performance success stories?

Post by ghaskins » Sat Dec 15, 2012 8:18 pm

si-ghan-bi wrote:
ghaskins wrote:Plus, the more spindles you feed ZFS, the better.


If you actually use the additional performance, yes; otherwise it's just power drawn for nothing.



Understand that this _is_ a "performance" thread ;) Agreed, more spindles mean more power if you aren't going to use the bandwidth, but I definitely will (large media libraries to edit, etc.).
