is ashift=12 recommended for 3TB WD Green drives?


Re: is ashift=12 recommended for 3TB WD Green drives?

Post by si-ghan-bi » Sun Oct 07, 2012 4:46 pm

Maybe ZFS packs small files together, up to two per physical block. I thought that the blocks we are referring to only concern the amount of data read at a time, not the smallest file size on disk. For example, ReiserFS does not waste space at all: a 1-byte file takes 1 byte on disk. Still, I expect ReiserFS to use 4 KB blocks on disk.
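The space question above is just rounding arithmetic. A quick sketch (a hypothetical helper, not how ZFS or ReiserFS actually allocates; real allocators also involve metadata, tail packing, and compression):

```python
# Allocation slack: bytes lost when a file is rounded up to whole blocks.

def slack(file_size, block_size):
    """Bytes of the last block left unused when file_size is rounded up."""
    if file_size == 0:
        return 0
    return (-file_size) % block_size

print(slack(1, 512))              # -> 511 bytes wasted with 512-byte blocks
print(slack(1, 4096))             # -> 4095 bytes wasted with 4K blocks
print(slack(10 * 1024**3, 4096))  # -> 0: a block-aligned 10 GB file wastes nothing
```

A filesystem that packs file tails (as ReiserFS does) avoids this slack even while doing its I/O in 4 KB units, which is the distinction being asked about.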
si-ghan-bi Offline


 
Posts: 145
Joined: Sat Sep 15, 2012 5:55 am

Re: is ashift=12 recommended for 3TB WD Green drives?

Post by NakkiNyan » Sun Oct 07, 2012 6:39 pm

But if it spans one block it is one read, two blocks two reads, and so on. If you have a file that begins in the middle of one block and ends in the middle of another (even if it is exactly one block in size), it takes two reads, because both blocks have to be read. That is what I gathered, anyway. A rough example below shows 4K of data in use.

**If this example is wrong, let me know; it is what I gathered from reading.
Code:
|x0x|x1x|x2x|x3x|x4x|x5x|x6x|x7x|| 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 ||   <== 512 blocks
|                0              ||                1              ||   <== 4K  blocks

| 0 | 1 | 2 |x3x|x4x|x5x|x6x|x7x||x0x|x1x|x2x| 3 | 4 | 5 | 6 | 7 ||   <== 512 blocks
|                0              ||                1              ||   <== 4K  blocks

|x#x| = data ... | # | = no/other data ... || = end of a 4K block
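The two pictures above reduce to a small calculation. A minimal sketch (the function name is made up for illustration): count how many physical sectors a read must touch, given the file's byte offset and length on disk.

```python
# An aligned 4K file needs one 4K-sector read; the same data starting
# mid-sector straddles a boundary and needs two, as in the second picture.

def sectors_touched(offset, length, sector_size):
    """Number of physical sectors spanned by the range [offset, offset + length)."""
    first = offset // sector_size
    last = (offset + length - 1) // sector_size
    return last - first + 1

# 4K of data aligned to a 4K sector boundary: a single read
print(sectors_touched(0, 4096, 4096))     # -> 1
# the same 4K of data starting mid-sector: two reads
print(sectors_touched(1536, 4096, 4096))  # -> 2
# the same data viewed as 512-byte sectors: eight (smaller) reads
print(sectors_touched(0, 4096, 512))      # -> 8
```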

The images and text on the Advanced Format Wikipedia page explain why space is saved.

Each block that is read carries a little overhead data per block (which is why we lose space formatting with 512-byte sectors); it is the same amount of user data as using eight 512-byte blocks. The first layout requires reading one 4K block, while the second has to read two; even if the rest of each block belongs to other data, a cycle is wasted reading essentially junk data when a block is only partially full. From what I gather, this is the argument that 4K can waste resources (reading more junk data to get to the part you want), but if you have large data files this is not an issue.

I backed up my BD and DVD video, since my DVD player was loud and now, with a Retina MBP, I don't have one. Because my files often span 100-2500 4K blocks, 4K is for me; yes, some of my videos are well over 10 GB. For workloads of small files, though, it could waste resources.
NakkiNyan Offline


 
Posts: 47
Joined: Tue Oct 02, 2012 12:19 pm

Re: is ashift=12 recommended for 3TB WD Green drives?

Post by si-ghan-bi » Sat Dec 15, 2012 7:11 pm

In case it is useful: under SunOS openindiana 5.11 oi_151a7 i86pc i386 i86pc Solaris, WD Green 3 TB drives automatically get ashift=12 without any options on the command line (then again, I haven't found any user-selectable option either...).

Re: is ashift=12 recommended for 3TB WD Green drives?

Post by mk01 » Sun Jan 06, 2013 4:07 pm

IMHO the problem is greatly overrated.

There is a nice short doc from an IBM dev: http://www.ibm.com/developerworks/linux ... tor-disks/.
On the Mac you have had aligned partitions since 2006 (at least): http://developer.apple.com/library/mac/ ... index.html.

ZFS uses blocks smaller than the filesystem block size only for objects below the filesystem block size; above that, it uses full filesystem blocks. Look here at files of size 1 byte, 128k and 128k+4k. And read-modify-write will happen only on writes of files < 4k. This is clearly seen from the IBM doc.

file of 1 byte:
    Object  lvl  iblk  dblk  dsize  lsize   %full  type
         7    1   16K   512    4K    512  100.00  ZFS plain file
                                168 bonus  System attributes
file of 128k:
    Object  lvl  iblk  dblk  dsize  lsize   %full  type
         9    1   16K  128K  128K   128K  100.00  ZFS plain file
                                168 bonus  System attributes
file of 128k+1:
    Object  lvl  iblk  dblk  dsize  lsize   %full  type
        10    2   16K  128K  264K   256K  100.00  ZFS plain file
                                168 bonus  System attributes
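The zdb listing above can be summarized in a deliberately simplified model (my sketch, not ZFS code; it assumes the default recordsize of 128K and ashift=12, and ignores compression, indirect blocks, and metadata): files up to the recordsize get one block, with its allocation rounded up to a whole physical sector; larger files are stored in full recordsize blocks.

```python
RECORDSIZE = 128 * 1024   # assumed default zfs recordsize
ASHIFT = 12               # 4K physical sectors
SECTOR = 1 << ASHIFT

def zfs_blocks(file_size):
    """Return (allocated_block_size, block_count) under the simplified model."""
    if file_size <= RECORDSIZE:
        # one block, its allocation rounded up to a whole 2^ashift sector
        logical = max(file_size, 1)
        allocated = -(-logical // SECTOR) * SECTOR   # ceil to sector size
        return (allocated, 1)
    # beyond recordsize: whole 128K blocks, count rounded up
    count = -(-file_size // RECORDSIZE)
    return (RECORDSIZE, count)

print(zfs_blocks(1))               # 1-byte file: one 4K allocation (dsize 4K above)
print(zfs_blocks(128 * 1024))      # 128K file: one 128K block
print(zfs_blocks(128 * 1024 + 1))  # just over 128K: two 128K blocks
```

This matches the listing: the 1-byte file has a 512-byte logical block but a 4K dsize on an ashift=12 pool, and the 128K+1 file jumps to two full records.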

Linux fdisk has been aligning partitions since January 2012.

You can destroy and realign the filesystems as much as you want; with dd and large-file working sets you will never be able to measure a difference, assuming your filesystems use ZFS on a disk or slice created by a Mac with an OS newer than 10.4.
mk01 Offline


 
Posts: 65
Joined: Mon Sep 17, 2012 1:16 am

Link

Post by grahamperrin » Wed Jul 24, 2013 11:43 am

scasady wrote: ashift=0 means the default, i.e. ZFS, not you, picked a value …


There's a known issue with zpool(8) in ZEVO Community Edition 1.1.1.

Getting ashift property values: use zdb, not zpool
grahamperrin Offline

 
Posts: 1596
Joined: Fri Sep 14, 2012 10:21 pm
Location: Brighton and Hove, United Kingdom
