A while back (August 2006) I decided to do some benchmarking on different filesystems, in part to validate my assumption that using JFS with Linux was a good idea. Now, with more than 1 year in production, performance is great (as shown below) and the filesystem is stable. No issues. On to the details…
Our system was configured with 6 array groups (RAID5). Each array group had 7 physical disks. 7 LUNs (72GB each) per array group were allocated to the system, for a total of 42 LUNs (42 x 72GB ≈ 3TB!). The disk array used was an IBM DS6800.
The system used to test this configuration was an IBM HS40 with four 2.7GHz CPUs, 16GB RAM, and four 2Gb qla2340 fibre adapters, running SUSE Linux Enterprise Server 9 SP2 with native multipathing (dmsetup, multipathd). Each disk (LUN) had 4 paths configured using round-robin; a sample multipath configuration is sketched after the hdparm output below. hdparm -t shows the following per disk:
# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 360 MB in 3.01 seconds = 119.46 MB/sec
(~110-127 MB/sec was observed).
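For reference, the round-robin multipath setup looked roughly like the following /etc/multipath.conf fragment. This is a minimal sketch from memory, not a copy of the production file; the blacklist and device sections are omitted and the values are illustrative:

defaults {
    # group all 4 paths to a LUN into a single path group
    path_grouping_policy  multibus
    # spread I/O across the paths in that group round-robin
    path_selector         "round-robin 0"
    # fall back to a recovered path immediately
    failback              immediate
}

With a configuration along these lines, multipath -ll should show each LUN with its 4 paths in one active path group.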
The tests were run against all major Linux filesystems, using default mkfs options. The mount options, though, were noatime and nodiratime. All tests were done against a large volume group containing all the disks, and each filesystem was striped across all 42 disks using lvcreate -i 42. The only parameter varied was lvcreate -I x (where x was 8, 128, and 512). The tests used the CFQ I/O scheduler. This was purely a throughput test, meant to determine the best stripe size, and it used the direct I/O option, as you can see in the data section.
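To make the setup concrete, one test run looked roughly like this. This is a sketch, not the exact scripts used: the volume group, logical volume, and mount point names are made up, the size is approximate, and JFS stands in for whichever filesystem was under test.

# stripe a logical volume across all 42 LUNs, varying only the stripe size (-I: 8, 128, 512)
lvcreate -i 42 -I 128 -L 2900G -n benchlv benchvg

# create the filesystem with default mkfs options (JFS shown here)
mkfs.jfs /dev/benchvg/benchlv

# mount with atime/diratime updates disabled
mount -o noatime,nodiratime /dev/benchvg/benchlv /mnt/bench

# select the CFQ I/O scheduler, set per underlying device
echo cfq > /sys/block/sdb/queue/scheduler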
Check out the pdf for images and the data.