
Introduction

After my “Codename SOL” – Solid as a Rock SSD evaluation, I then bought the X25-M.

I decided to run a few tests to measure the X25-M's performance. In the past I have used iozone to run disk performance tests. It offers pretty much any option needed to run a benchmark, it is available on a lot of platforms, and it is free.

Second, I discovered the new 11g procedure DBMS_RESOURCE_MANAGER.CALIBRATE_IO. I will give it a shot here.

Third, I am going to run the ORION benchmark, a tool provided by Oracle. DBAs have used it in the past to get a quick impression of disk performance without deploying a database.

 

CALIBRATE_IO

As the test is very simple, I just provide the results here. It is a pure read test. I ran it like this:  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (1, 10, iops, mbps, lat);
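For reference, the call has to run inside an anonymous PL/SQL block with OUT variables; a minimal sketch (the two IN parameters are the number of physical disks and the maximum tolerated latency in milliseconds):

```sql
SET SERVEROUTPUT ON
DECLARE
  l_iops PLS_INTEGER;
  l_mbps PLS_INTEGER;
  l_lat  PLS_INTEGER;
BEGIN
  -- 1 physical disk, max tolerated latency 10 ms
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO (
    num_physical_disks => 1,
    max_latency        => 10,
    max_iops           => l_iops,
    max_mbps           => l_mbps,
    actual_latency     => l_lat);
  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || l_iops);
  DBMS_OUTPUT.PUT_LINE ('latency  = ' || l_lat);
  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || l_mbps);
END;
/
```

Note that CALIBRATE_IO needs asynchronous I/O enabled and can only run on a started instance, so it is not a zero-footprint test like the others.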

Results:
max_iops = 3913
latency  = 0
max_mbps = 137

Reference
HP EVA 4400 (Specified for 140K IOPS)

max_iops = 2249
latency  = 8
max_mbps = 190
 

ORION

I used this command to run the test on three 20 GB datafiles: orion -run simple -testname ssd -num_disks 1
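ORION expects a file named <testname>.lun (here ssd.lun) in the working directory, listing the test files or raw devices one per line. With hypothetical paths for my three datafiles it would look like this:

```
/oradata/ssd/file1.dbf
/oradata/ssd/file2.dbf
/oradata/ssd/file3.dbf
```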

This is also a read-only test. You have to be careful with write tests: they render existing data on the disk useless.

Results:
Maximum Large MBPS=139.22
Maximum Small IOPS=3985
Minimum Small Latency=0.25

 

IOZONE

Again I was mainly interested in read testing. First I created a 1 GB test file like this:

./iozone -w -e -c -r 8 -i 0 -s 1g -t 1 -F file1 

Then I copied it to file2, file3 etc. To drive a test with 4 processes you need 4 files. With this test

 ./iozone -w -e -c -r 8 -i 2 -s 1g -t 4 -F file1 file2 file3 file4

iozone will read random 8 KB blocks with 4 processes. We have 4 files, a total of 4 GB to read. 4 GB divided by 8 KB results in a total of approximately 500’000 IO operations. So if our disks serve 1000 IOPS, the test will run for about 500 seconds. iozone reports the measured throughput in KB/s; we have to divide this by 8 to get IOPS:

Parent sees throughput for 4 random readers     =    4681.36 KB/sec  = 585 IOPS
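The arithmetic above can be sketched like this (my own back-of-the-envelope numbers, not iozone output):

```python
# Back-of-the-envelope arithmetic for the iozone random read test.

RECORD_KB = 8      # -r 8: 8 KB record size
FILE_GB = 1        # -s 1g: 1 GB per file
PROCESSES = 4      # -t 4: one file per process

# Total data read: 4 files of 1 GB each, in 8 KB records.
total_kb = FILE_GB * PROCESSES * 1024 * 1024
total_ops = total_kb // RECORD_KB
print(total_ops)                         # 524288, i.e. ~500'000 operations

# At an assumed 1000 IOPS the run would take roughly:
print(total_ops / 1000)                  # ~524 seconds

# iozone reports throughput in KB/s; dividing by the record size gives IOPS.
reported_kbps = 4681.36                  # the "Parent sees throughput" line
print(round(reported_kbps / RECORD_KB))  # 585 IOPS
```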

I ran the test with 4 and 16 processes. I ran the same test against the HP EVA 4400 (midrange storage) and the high-end storage system HP XP 24000. Remember the X25-M is the solid state disk. At the very bottom I did a quick test against a single 7200 rpm SATA disk, but I didn’t do the full 16-process test because of the poor performance.

[Chart: random 8 KB read IOPS at 4 and 16 processes for the X25-M, EVA 4400, XP 24000 and a single SATA disk]

Obviously the storage systems are not particularly tuned for random 8 KB IO. But we are running our systems with this configuration, so this is what we get. One interesting thing is that the storage subsystems need multiple processes to drive more IOPS, while the SSD performs at ~4000 IOPS regardless of the number of processes. I did a test against the EVA with 32 processes, but the throughput was below the 16-process result; I expect the XP, however, to easily perform better with higher values. We also learn that even the big storage systems have a latency considerably higher than the SSD. I remember average file access times of around 5 ms.

 

Conclusion

First of all, I am not saying the X25-M performs better than the storage systems. The tests I did here are heavily weighted towards single random 8 KB reads, and this is something the SSD does very well. Overall, every simple test here produces only numbers, which may or may not have any significance in the real world.

Second, we learn that across all three tests the read performance of the X25-M comes in at a consistent ~4000 IOPS, which is in fact an excellent number. One would need somewhere between 10 and 25 mechanical disks to get comparable IOPS. I am looking forward to very decent performance for a multiuser OLTP database here.

Besides that, I noticed in a few other tests that the write performance of the SSD is not outstanding. It is still good, but not pounding a conventional disk. At this time I consider this of minor interest, but we might get back to it.

Check the “Codename SOL” – Intro / TOC  for updates.
