Issues with a simple two-disk pool (tall): overview

Post by grahamperrin » Sat Apr 06, 2013 7:36 am

This topic is intended to be my placeholder for brief notes and summary points; not yet for deep discussion. I'll link to other topics. Expect multiple editions to this opening post.

Recently I purchased two new 3 TB hard disk drives, one of which was loaned to a friend in need. The other I began using immediately for myself:

  • added to a single-disk pool named tall (expansion, not mirroring)
  • presented my first ever opportunity to experiment heavily with a multi-disk pool.

A recent example of goodness:

Code: Select all
  pool: tall
 state: ONLINE
 scan: scrub repaired 0 in 30h4m with 0 errors on Fri Apr  5 06:15:00 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         ONLINE       0     0     0
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk6s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  ONLINE       0     0     0  at disk4s2


Issues (links to topics)


Ultimately some of the issues may be trouble with my use of the Mac, software and peripherals – not trouble with ZEVO – but I'd like to have this overview in the troubleshooting area, as a point of reference.

Environment

  • MacBookPro5,2
  • 8 GB memory
  • Mountain Lion
  • frequent use of beta and pre-release software
  • past use of an old USB 2.0 hub, Sitecom USB 2.0 Dock CN-022 (PDF)
  • recent use of a new USB 2.0 hub.

The two hard disk drives in the affected pool (tall):

  • 2 TB Seagate GoFlex Desk (0x50a5) purchased around May 2011
  • 3 TB Seagate Backup+ Desk (0xa0a4) purchased in March 2013, Maplin Electronics code A83LG.

Re: Issues with a simple two-disk pool (tall): overview

Post by raattgift » Sat Apr 06, 2013 8:42 am

I notice that this is simply a pair of storage vdevs, so the replication level of your pool is zero.

I guess this is what you want, because of:

grahamperrin wrote: added to a single-disk pool named tall (expansion, not mirroring)

However, I think most people, including anyone who stumbles into this thread, should generally avoid that. [*]

It's the difference between "zpool create tall disk4 disk6" and "zpool create tall mirror disk4 disk6".

Alternatively, "zpool create tall disk4" followed by the differing "zpool add tall disk6" (creating the pool geometry you have - two storage vdevs, no redundancy in the pool) and "zpool attach tall disk4 disk6" (which would turn the initial single-device storage vdev into a mirror).

You can undo an attach ("zpool detach tall disk4" leaves a single storage vdev, disk6, behind), and with a mirror you can physically power down either drive without affecting availability of the data (i.e., nothing unmounts or is delivered an I/O error). By contrast, you cannot undo an "add" of a storage vdev, and if you power down either drive in your current pool, the pool will fault.

The only way to deal with an accidental "add" is to destroy the pool and create a new one.
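
To make the contrast concrete, a sketch of the two sequences (the disk names here are just the shorthand from above, not real device identifiers on this Mac):

Code: Select all
# no redundancy: a second top-level storage vdev, writes striped across both
zpool create tall disk4
zpool add tall disk6            # cannot be undone; loss of either disk faults the pool

# redundancy: one mirrored top-level vdev
zpool create tall disk4
zpool attach tall disk4 disk6   # resilvers disk6 from disk4; reversible with detach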

zpool add will raise a warning and force the use of the "-f" flag if the replication level of the vdev to be added is less than that of the existing vdev(s). In the case of a pool made with a single device, the replication level is already the lowest possible, so it won't complain if you add another single device as a separate vdev.

For this reason, the "-n" flag for zpool create and zpool add is highly useful.
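
For example, a dry run reports the resulting layout without committing to it (the device names here are hypothetical):

Code: Select all
# -n: print what the pool would look like, without actually changing anything
zpool add -n tall disk7
zpool create -n newpool mirror disk4 disk6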

Before you commit much data into tall, you might want to play around with the whole range of potentially-destructive commands applicable to two-disk pools in zpool(1). Create, attach/detach/(re)attach, replace, split, add, offline/online and clear are the especially interesting ones.
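
One way to practise without putting real data at risk (assuming the ZFS build accepts plain files as vdevs, which upstream ZFS allows for testing but which I haven't verified under ZEVO) is a throwaway pool built on a couple of scratch files:

Code: Select all
# throwaway two-vdev pool on file vdevs, purely for practising commands
mkfile 256m /tmp/d1 /tmp/d2
sudo zpool create scratch /tmp/d1 /tmp/d2
sudo zpool status scratch
# ...try attach, detach, replace, split, offline/online and clear here...
sudo zpool destroy scratch
rm /tmp/d1 /tmp/d2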

A practical test workload might be timing "zfs send -R -v otherpool/foo@bar | pv | zfs receive -v -u tall/foo; zpool export tall; zpool import tall; zfs send -v -R tall/foo@bar > /dev/null". (The export/import is to clear the ARC, although for a sufficiently large zfs send/recv job it shouldn't make much difference).
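
The same workload broken into steps, so each phase can be timed separately (dataset names are the placeholders from the line above, and pv is assumed to be installed):

Code: Select all
# write phase: stream a snapshot from another pool into tall
time sh -c 'zfs send -R -v otherpool/foo@bar | pv | zfs receive -v -u tall/foo'

# export/import to empty the ARC before the read test
zpool export tall && zpool import tall

# read phase: stream the received snapshot back out, discarding the data
time sh -c 'zfs send -R -v tall/foo@bar > /dev/null'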

The prediction is that write time will be the same for a single-device pool and a two-disk mirror, and half as long for a two-disk, two-vdev pool. The read time should be slightly faster for the mirror than for the two-disk, two-vdev pool; both should complete the zfs send tall/foo@bar > /dev/null job in half the time of a one-disk pool. If timings are grossly different from these predictions, that would be interesting!

I hope that this doesn't violate your "not yet for deep discussion".


[*] expand in pairs: http://constantin.glez.de/blog/2010/01/ ... still-best

Re: Issues with a simple two-disk pool (tall): overview

Post by grahamperrin » Sat Apr 06, 2013 8:59 am

The two 3 TB disks are to form a pool with redundancy of data, but that's off-topic. I'd like to keep this topic as brief as possible until after things have settled.

Update

Post by grahamperrin » Sun Apr 07, 2013 11:57 pm

Most recently, apparently without error:

Code: Select all
macbookpro08-centrim:~ gjp22$ date
Mon  8 Apr 2013 05:32:02 BST
macbookpro08-centrim:~ gjp22$ uptime
 5:32  up 20 hrs, 5 users, load averages: 4.39 4.12 4.33
macbookpro08-centrim:~ gjp22$ zpool status tall
  pool: tall
 state: ONLINE
 scan: scrub repaired 0 in 24h43m with 0 errors on Mon Apr  8 05:04:43 2013
config:

   NAME                                         STATE     READ WRITE CKSUM
   tall                                         ONLINE       0     0     0
     GPTE_78301A52-4AFF-4D96-8DE9-E76ABC14909C  ONLINE       0     0     0  at disk2s2
     GPTE_99056308-F5E2-4314-852C-4DA04732A2D0  ONLINE       0     0     0  at disk3s2

errors: No known data errors


Rewind a little, to when that scrub began: 2013-04-07 04:21:41

Between then and now:

  • no kernel panics
  • two restarts or shutdowns, one of which was probably forced

[Attached screenshot 2013-04-08 05-41-07 screenshot.png: selected messages from Console]


– and if I recall correctly, the force was required whilst the two hard disk drives were on the new Cerulian hub. For the last few hours, instead of that hub, I have had both drives connected to a bus in the MacBookPro5,2 with nothing else on that bus.

In topics elsewhere, I read posts such as viewtopic.php?p=4155#p4155 and viewtopic.php?p=4606#p4606 with great interest.

raattgift wrote:… ZFS should always err on the side of data integrity, even at the cost of avoidable data unavailability. …


+1

What I've been testing in recent weeks, I should have preferred to do whilst beta testing Z-410 Storage and Ten's Complement ZEVO.

I'd like to complete another scrub of this two-disk pool without interruption – maybe with the old Sitecom hub. Eventually I'll draw a #done line under some of the topics recently begun by me, then post an entirely positive brief summary under "is Zevo safe enough?". (I shouldn't summarise until after I have finished experimenting with USB.)

