My plan is to have a mirror of stripes. I'm deep into the process:
- Created the first stripe...
Code: Select all
$ sudo zpool create -f -o ashift=12 -O casesensitivity=insensitive -O normalization=formD -O compression=on ZraidA /dev/disk0 /dev/disk1
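(Side note: if you want to double-check that ashift=12 actually stuck, zdb can dump the pool config; something like this should work, assuming zdb can read the default cachefile on your setup:)
Code: Select all
$ # confirm the vdevs were created with 4K sectors (ashift=12)
$ zdb -C ZraidA | grep ashift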
- Have the first stripe up and working:
Code: Select all
  pool: ZraidA
 state: ONLINE
  scan: none requested
config:

	NAME                                         STATE     READ WRITE CKSUM
	ZraidA                                       ONLINE       0     0     0
	  GPTE_EB991614-71C6-4B13-979D-0284933C1FAC  ONLINE       0     0     0  at disk3s2
	  GPTE_47F2D542-A8DC-4F32-9204-C33E650635CF  ONLINE       0     0     0  at disk2s2
- Loading data from various other disks (check out the peak throughput)
And a little iostat for those of you who believe a thousand words is worth a picture... this is copying from a 1TB HDD:
Code: Select all
          disk3                disk4
    KB/t  tps   MB/s    KB/t  tps   MB/s
   93.28  612  55.78   92.30  759  68.40
  111.54 1184 128.93  109.66 1195 127.92
  110.84  889  96.28  111.83  893  97.57
   74.26  550  39.92   80.24  526  41.18
  114.29 1140 127.24  115.18 1125 126.55
  114.84  686  76.92  114.05  731  81.40
   98.12  634  60.72   97.71  553  52.74
  114.36 1275 142.40  114.59 1267 141.80
  110.18  585  62.95  109.20  578  61.64
  114.55  770  86.11  114.19  807  89.95
  119.62 1226 143.22  120.44 1227 144.32
   58.99  253  14.55   56.71  232  12.83
  115.75 1411 159.46  115.42 1413 159.23
  104.32  307  31.26  105.48  316  32.52
  115.57 1158 130.73  116.16 1125 127.67
  123.36  899 108.25  122.53 1034 123.77
   96.90  560  53.00   93.23  422  38.45
  117.34  970 111.20  117.21 1158 132.57
  101.01  549  54.15   85.38  411  34.25
  114.26 1329 148.24  115.69 1313 148.29
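(For anyone curious, that's just plain old iostat; something like the following should reproduce it, assuming your pool members happen to be disk3 and disk4 as mine were at the time:)
Code: Select all
$ # watch throughput on the two pool members, refreshing once a second
$ iostat -d -w 1 disk3 disk4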
In planning this ZFS drive set, my main priorities are to be safe and fast: safe from drive-level failure (RAID-type stuff) and, more importantly, from block-level bit rot. By working with a mirror of two-disk stripes (four drives total), I can expand my file system by adding a stripe of two new, potentially larger, drives into the mirror, letting it resilver, and then removing the oldest (and smallest) drives from the mirror.
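In zpool terms, my understanding is that this comes down to attaching a new disk to each of the two stripe vdevs (which effectively turns the pool into a stripe of mirrors) and later detaching the old ones. A rough sketch, assuming the old members are still disk0 and disk1 and the new pair shows up as disk4 and disk5 (device names are hypothetical; OS X loves to renumber them):
Code: Select all
$ # attach a new drive to each existing vdev, making each one a two-way mirror
$ sudo zpool attach ZraidA disk0 disk4
$ sudo zpool attach ZraidA disk1 disk5
$ # wait for the resilver to finish before pulling anything
$ zpool status ZraidA
$ # once resilvered, drop the old (smaller) drives out of each mirror
$ sudo zpool detach ZraidA disk0
$ sudo zpool detach ZraidA disk1
The extra capacity of the larger drives should only show up once both halves of each mirror are bigger (autoexpand=on, or a zpool online -e, may be needed before the pool grows).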
Looks like there is about four hours of data still to copy... then perhaps another drive or two.
Gotta love SneakerNet!