I'm rapt with the performance of my efforts.
Code:
mrq:~ sadmin$ zpool status
  pool: Data
 state: ONLINE
 scan: scrub repaired 0 in 2h43m with 0 errors on Wed Nov 28 23:22:41 2012
config:

        NAME                                          STATE     READ WRITE CKSUM
        Data                                          ONLINE       0     0     0
          raidz1-0                                    ONLINE       0     0     0
            GPTE_7FF8B766-B40D-427E-B4FB-6D06BE2D5C24 ONLINE       0     0     0  at disk2s2
            GPTE_266D6073-2A1E-44C0-A3ED-1414A16F23CF ONLINE       0     0     0  at disk1s2
            GPTE_312E1FF8-5AD6-46A5-B349-CD198BB7650F ONLINE       0     0     0  at disk0s2
          raidz1-1                                    ONLINE       0     0     0
            GPTE_7ED54770-69A5-47DF-A88C-DB968EDCACE0 ONLINE       0     0     0  at disk5s2
            GPTE_55B4FCD7-A340-48EF-A07B-6BAD50EA5B03 ONLINE       0     0     0  at disk3s2
            GPTE_C780F366-DACC-4353-A97E-70F543677D15 ONLINE       0     0     0  at disk4s2
          raidz1-2                                    ONLINE       0     0     0
            GPTE_2F013EDA-524B-42D9-A596-33561E3F0CB6 ONLINE       0     0     0  at disk8s2
            GPTE_BA31AC30-F529-4912-85E8-1295F4A0B581 ONLINE       0     0     0  at disk13s2
            GPTE_5380700A-E849-46C6-AA3F-026249B0AAB4 ONLINE       0     0     0  at disk9s2
          raidz1-3                                    ONLINE       0     0     0
            GPTE_53D7F6A4-0D57-4A11-AA7A-5384E1B698EB ONLINE       0     0     0  at disk11s2
            GPTE_817FF7D1-95BD-423A-BE65-E9265B17CD1F ONLINE       0     0     0  at disk10s2
            GPTE_D6722D1F-94C8-4197-8BAB-7084B3C0AA22 ONLINE       0     0     0  at disk12s2
          raidz1-4                                    ONLINE       0     0     0
            GPTE_B8D4FD7B-665D-49A4-9B37-1DC548BE4EB8 ONLINE       0     0     0  at disk15s2
            GPTE_CCCBF756-1A5E-4CB0-B5B7-7D006B669870 ONLINE       0     0     0  at disk14s2
            GPTE_540E62A6-ABE9-413B-912E-4A9CEFD938B6 ONLINE       0     0     0  at disk16s2

errors: No known data errors
mrq:~ sadmin$
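If you want scrubs like the one in the scan line above to happen regularly, a crontab entry on the server does the trick; the schedule and binary path below are assumptions on my part, not necessarily what this box runs:
Code:
# example crontab entry: scrub the pool at 02:00 on the 1st of each month
# (schedule and /usr/sbin path are assumptions -- adjust for your install)
0 2 1 * * /usr/sbin/zpool scrub Data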
Code:
mrq:~ sadmin$ zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
Data                4.12Ti 8.40Ti 1.78Mi /Volumes/Data
Data/Documentaries   462Gi 8.40Ti  462Gi /Volumes/Data/Documentaries
Data/Files           203Gi 8.40Ti  203Gi /Volumes/Data/Files
Data/Movies         1.65Ti 8.40Ti 1.65Ti /Volumes/Data/Movies
Data/Music           117Gi 8.40Ti  117Gi /Volumes/Data/Music
Data/Pictures       67.8Gi 8.40Ti 67.8Gi /Volumes/Data/Pictures
Data/Sport          10.7Gi 8.40Ti 10.7Gi /Volumes/Data/Sport
Data/TVShows        1.51Ti 8.40Ti 1.51Ti /Volumes/Data/TVShows
Data/Video           116Gi 8.40Ti  116Gi /Volumes/Data/Video
mrq:~ sadmin$
Those raidz groups are made up of either 1TB or 2TB drives, all hanging off a RocketRaid 2744 controller, which conveniently defaults to JBOD mode.
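For anyone curious how a layout like that is assembled, here's a minimal sketch of creating a pool from three-disk raidz1 vdevs; the disk slices below are placeholders, not the actual GPT partitions shown in the status output above:
Code:
# placeholders only -- substitute your own disk slices
zpool create Data raidz disk0s2 disk1s2 disk2s2
zpool add Data raidz disk3s2 disk4s2 disk5s2
# ...repeat 'zpool add' for the remaining raidz1 vdevs, then verify:
zpool status Data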
All this is inside an old Apple Network Server 500 (I gutted it because the 2nd PCI controller was blown) which used to be owned by Netscape years and years ago; its FQDN when it ran Apple's AIX was freddy.netscape.com.
I removed all the drive caddies and folded some sheet metal to support 10 x Welland 3.5" drive bays.
I've also purchased some internal 3.5" drive mounts that are fitted on the mobo side of the ANS500, so I can attach up to 16 drives to the RR 2744 as my data storage demands increase over time.
Now it's got an Intel X48BT2 mobo with a 3.0GHz quad-core Xeon CPU and 16GB of DDR3 RAM (4x4GB 1333/1600MHz), running Mac OS X 10.7.3 Server.
It boots off an old 18GB U320 SCSI drive that's hanging off an LSI PCIe U320 HBA pulled from a 2006 Xserve.
A Thermaltake 1500W PSU supplies power to everything, and the whole rig is connected to an APC 1200VA UPS.
Services that use the storage are:
Transmission
EyeTV (Recording)
iTunes (Library folder)
iPhoto (Library folder)
AirMedia Server (Points to the TV/Video shares and realtime streams to iOS devices on demand, hence the Xeon CPU!)
XBMC (for the Mac Mini gaffer-taped to the back of our TV in the lounge)
The I/O flies on this thing! But I have to use AFP (with the liberate AFP hack) because I was having weird issues with NFS that I think are related to the case-sensitivity settings mentioned in other posts. It's a shame there are no longer any InfiniBand drivers for OS X, because those HBAs are quite cheap now and they're great for peer-to-peer networking.
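For anyone who still wants to try NFS first, the Lion-style exports I'd experiment with look something like this (the network range and paths are placeholders, not my actual config):
Code:
# /etc/exports -- addresses and paths are placeholders
/Volumes/Data/Movies  -ro -network 192.168.1.0 -mask 255.255.255.0
/Volumes/Data/Files   -network 192.168.1.0 -mask 255.255.255.0

# validate the exports file and restart the NFS server
sudo nfsd checkexports
sudo nfsd restart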
Getting the I/O out of this rig is the annoying thing... I find AFP to be crap compared to NFS's performance: when graphed, AFP's throughput looks like a roller-coaster/heartbeat trace, whereas NFS (which I've used in the past on different pools) was much more consistent and even.
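You can see the roller-coaster for yourself with a crude dd run against each mount from a client; the path and size here are just examples:
Code:
# crude sequential throughput check over the mounted share
dd if=/dev/zero of=/Volumes/Data/Files/ddtest bs=1m count=4096   # write 4GiB
dd if=/Volumes/Data/Files/ddtest of=/dev/null bs=1m              # read it back
rm /Volumes/Data/Files/ddtest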
10GbE is still too expensive to play with at home, and it looks like Thunderbolt is not going to offer TCP/IP over TB (i.e. like FireWire did) any time soon either.
This is the third ZFS server I've set up for home use that's been tucked away inside the ANS500 case.
The 1st was a G4 PPC running MacZFS with 2 x ACARD 6885M HBAs driving 8 PATA disks.
The 2nd was an x86 Mac Mini using a Silicon Image 3132 in the Mini PCIe wireless slot and 2 port-multiplier boxes for 10 SATA disks.
When I migrate, all I do is use ZFS send/receive (with a FIFO) to transfer the data between the old pool and the new one, roughly as sketched below.
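A minimal sketch, assuming the old box is reachable over the LAN; the hostname, port, snapshot name, and nc-as-transport are all placeholders for whatever you actually use:
Code:
# old box: snapshot the dataset and stream it across the LAN
zfs snapshot OldData/Movies@migrate
zfs send OldData/Movies@migrate | nc newbox 7000

# new box: catch the stream through a FIFO and feed it to zfs receive
# (creates Data/Movies on the new pool)
mkfifo /tmp/zmig
nc -l 7000 > /tmp/zmig &
zfs receive Data/Movies < /tmp/zmig
rm /tmp/zmig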
I'm very happy with this one; she's been rock solid for me since I set it up (i.e. back around when 10.7.3 came out).
Cheers,