Thecus N7700PRO Network Storage Server
Aug 22nd, 2010 | By Anthony
In our last section, IOzone allowed us to thoroughly assess the QNAP TS-559 Pro's performance at the hardware level. Before we get into more application-based performance, we're going to use Iometer to compare two measures of throughput: pure sequential and pure random. Reading and writing sequential data is of course much faster than random access, due to the physical nature of conventional drives and the seek time required to retrieve data, but to an extent the drives' configuration and system hardware also play a role.
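At the workload level, the only difference between the two patterns is the order in which offsets are visited. The following Python sketch (not Iometer itself, just an illustration) makes that concrete: the same blocks are read either in order or in a shuffled order.

```python
import os
import random
import tempfile

def read_offsets(path, offsets, block=4096):
    """Read `block` bytes at each offset; return the total bytes read."""
    total = 0
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            total += len(f.read(block))
    return total

# Build a small test file of 256 x 4KB blocks.
blocks, block = 256, 4096
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(blocks * block))
    path = tmp.name

sequential = [i * block for i in range(blocks)]  # 0, 4K, 8K, ...
rand = sequential[:]                             # identical offsets,
random.shuffle(rand)                             # visited in random order

# Same data moved either way; only the head movement differs on a
# mechanical drive, which is where the throughput gap comes from.
assert read_offsets(path, sequential) == read_offsets(path, rand)
os.remove(path)
```

On a real array the shuffled run is the one that forces seeks; on a cached or solid-state target the gap largely disappears, which is why benchmarks like Iometer control for caching.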
We will start with JBOD and RAID 0 again, just to establish what should represent the lower and upper bounds of performance. Unless one deals exclusively with large, sequential data, the numbers we established in the last section are inaccurate indicators of day-to-day performance. That isn't to say, however, that pure random throughput is a more accurate measure either. To generalize, as we will later when we look at NASPT, applications that deal with smaller files tend to be dominated by random throughput, while those dealing with larger files tend to be sequential.
Between RAID 0 and JBOD, our sequential read and write throughput confirms our IOzone findings in terms of the difference in performance, but what is interesting is the similarity in performance across each of the four tests.
The key difference in performance between RAID 5 and RAID 0 is the distribution of parity data across the disk members of an array. While it is unlikely we will see a difference in read performance, every write to a RAID 5 array requires parity to be computed and written, which accounts for the decrease in write performance, especially random write. In the lower range of file sizes tested, the penalty due to random access is especially apparent.
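The parity operation itself is a byte-wise XOR across the data blocks in a stripe. A small write is worse than a large one because the controller must first read the old data and old parity before it can produce the new parity (the read-modify-write penalty). A simplified Python sketch, ignoring striping layout and rotating parity, shows both steps:

```python
def parity(blocks):
    """RAID 5 parity: byte-wise XOR across all data blocks in a stripe."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def small_write(data, p, idx, new_block):
    """Read-modify-write: old data XOR new data XOR old parity -> new parity.
    This is why every small RAID 5 write costs extra reads."""
    new_parity = bytes(a ^ b ^ c for a, b, c in zip(data[idx], new_block, p))
    new_data = data[:idx] + [new_block] + data[idx + 1:]
    return new_data, new_parity

# Three data members of one stripe (two bytes each for brevity).
data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]
p = parity(data)

# Overwrite the middle block; parity must stay consistent.
data2, p2 = small_write(data, p, 1, b"\xff\x00")
assert p2 == parity(data2)
```

Losing any single member is recoverable because XOR-ing the surviving blocks with the parity regenerates the missing one.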
As you may have noticed, our Iometer tests generally indicate higher throughput numbers than our IOzone tests. This is due to the inherent differences in the environments in which each program tests. If you recall from our testing methodology page, IOzone benchmarks at a more elementary level and thus negates any operating system optimizations, whereas Iometer is quite the opposite. The biggest difference is in read throughput.
Given the enormous discrepancy in performance between larger and smaller file sizes, it is misleading to simply present an average across the entire range of file sizes from 512B to 32MB. Our last graph is especially useful for visualizing performance further.
While we’ve presented corresponding IO numbers alongside our Iometer throughput tests, we haven’t exactly talked about them. A more conventional look is to measure IOps, the number of requests completed by a drive or array in a second, against system load. To do so, we will be using Intel’s pre-defined File Server access pattern, a distribution of file sizes in varying percentages with predominantly random reads.
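A mixed access pattern like this is just a weighted draw over transfer sizes plus a read/write split. The sketch below uses illustrative percentages (not necessarily Intel's exact specification) to show the mechanic: mostly small, random transfers with a few larger ones mixed in, and an 80% read bias as an assumed figure.

```python
import random

# Illustrative file-server-style mix; the weights here are assumptions,
# not Intel's published specification.
pattern = [(512, 10), (1024, 5), (2048, 5), (4096, 60),
           (8192, 2), (16384, 4), (32768, 4), (65536, 10)]  # (bytes, % weight)

sizes, weights = zip(*pattern)
assert sum(weights) == 100

def next_request(read_pct=80):
    """Draw one request: a transfer size and a read/write direction."""
    size = random.choices(sizes, weights=weights)[0]
    op = "read" if random.random() < read_pct / 100 else "write"
    return size, op

# A benchmark issues thousands of these, keeping `queue depth` of them
# outstanding at once.
sample = [next_request() for _ in range(10_000)]
```

Because most draws land on small sizes and the offsets are random, aggregate MB/s for such a pattern looks low even when the array is completing thousands of requests per second, which is exactly why IOps is the more meaningful axis here.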
Each RAID configuration hits around 1700 IO/s at a queue depth of one and peaks near a maximum of 5500 IO/s between queue depths of eight and 16.
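Queue depth, IOps, and per-request latency are related by Little's law: the average number of outstanding requests equals the completion rate times the average latency. Plugging in the figures above gives a rough sense of what the array trades for its higher throughput at deeper queues:

```python
def avg_latency_ms(queue_depth, iops):
    """Little's law: outstanding requests = completion rate x latency,
    so latency = queue_depth / iops (converted here to milliseconds)."""
    return queue_depth / iops * 1000

low = avg_latency_ms(1, 1700)    # ~0.59 ms per request at QD 1
high = avg_latency_ms(16, 5500)  # ~2.9 ms per request at QD 16
```

So while total IOps roughly triples between a queue depth of one and 16, each individual request waits about five times longer, which is the usual throughput-versus-latency trade-off as load rises.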