Tl;dr: My ZFS RAIDZ2 array reads at 7.5+ GB/s and writes at 2.0+ GB/s when I specify bs=128k or greater with dd, but with OS X's default block size all my I/O is ~300MB/s (dd gives the same performance with bs=1k). What can I do to improve general I/O to at least 1GB/s?

I am running a 16-drive SATA3 RAIDZ2 OpenZFS on OS X (v1.31r2) filesystem (v5000) over Thunderbolt 2 (twin Areca 8050T2's) to a 12-core 64GB Mac Pro. The ZFS filesystem was created with ashift=12 (Advanced Format HDDs with 4096-byte blocks) and recordsize=128k. I also created a ZFS volume on the pool alongside the filesystem and formatted it as HFS+.

I'm seeing transfer rates around 300MB/s from the array in OS X and from the terminal using default commands (note: the file being copied is 10GB of random data).

OS X's read of the array's optimal I/O block size (16-drive RAIDZ2, ZFS):

OS X's read of the boot drive's optimal I/O block size (1TB SSD, HFS+):

zfs get all output:

NAME PROPERTY VALUE SOURCE

Update: The high speeds were cached I/O (thanks ...). Speeds of ≈300MB/s still seem too slow for this hardware, and CPU utilization during I/O is negligible (all cores <5%). I'm running ~20-30x below optimal! What am I missing? Any ideas?

Answer:

You didn't post the zpool status for this, but you imply in the post that all 16 disks are in a single RAIDZ2 vdev. While this is a good, safe configuration, you have to understand that RAIDZ isn't designed primarily for speed. RAIDZ2 is analogous to RAID6, but the variant has features that make it slower and safer. See this nice write-up for the full details, but these two quotes should help you see the issue (emphasis mine):

"When writing to RAID-Z vdevs, each filesystem block is split up into its own stripe across (potentially) all devices of the RAID-Z vdev. This means that each write I/O will have to wait until all disks in the RAID-Z vdev are finished writing."

"Therefore, from the point of view of a single application waiting for its IO to complete, you'll get the IOPS write performance of the slowest disk in the RAID-Z vdev."
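As a side note on the dd numbers in the question: changing bs mostly changes the number of write() syscalls issued for the same amount of data, and at bs=1k the per-call overhead dominates. A minimal sketch of that effect, reproducible on any filesystem (the /tmp path and 10 MiB size are illustrative, not from the original post):

```shell
# 10 MiB written 1 KiB at a time: 10,240 write() calls, so per-syscall
# overhead dominates (this is why the small default block size looks slow).
dd if=/dev/zero of=/tmp/ddtest bs=1k count=10240 2>/dev/null

# Same 10 MiB written 128 KiB at a time (one full 128k ZFS record per call):
# only 80 write() calls for the same data.
dd if=/dev/zero of=/tmp/ddtest bs=128k count=80 2>/dev/null

# Both runs produce an identical file; only the syscall count differs.
wc -c < /tmp/ddtest    # 10485760
rm /tmp/ddtest
```

Comparing the elapsed times dd reports for the two runs shows the block-size effect independently of RAIDZ itself; the vdev IOPS limit described in the answer then caps what any block size can achieve on the array.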