Delivering a resilient system is all well and good, but if excessive clustering overhead drags performance down, the technology is useless. So we have also been testing just how far we can push our system, which is built on low-end commodity hardware. Our original results were impressive: a 2-member cluster (each member an Intel D510M0 with a dual-core 1GHz Atom processor, 3GB RAM, and a 40GB SSD) sustained 1,000 transactions per second against a 2.5M-row table (32 concurrent threads, 25ms think time, 80/20 read/write ratio, 100% data sharing). That's a remarkable result for a couple of low-end boxes running "netbook" processors, especially since it was achieved "out of the box", with no performance tuning or optimization of the DB2 configuration.
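To make the workload parameters concrete, here is a toy sketch of a load generator with the same shape as the test above: a pool of client threads, each sleeping for the think time between transactions and issuing reads and writes at the configured ratio. This is purely illustrative and not our actual harness; a plain Python dict stands in for the DB2 table, and all of the names and defaults are hypothetical.

```python
import random
import threading
import time

def run_oltp_workload(num_clients=32, think_time_s=0.025,
                      read_ratio=0.80, duration_s=0.5, table_rows=1000):
    """Drive a toy OLTP-shaped workload: each client thread sleeps for
    the think time, then issues a read or a write according to the
    read/write ratio. A dict stands in for the database table; a real
    harness would issue SQL over a DB2 connection instead."""
    table = {i: 0 for i in range(table_rows)}
    lock = threading.Lock()              # serialize writes and counters
    counts = {"read": 0, "write": 0}
    deadline = time.monotonic() + duration_s

    def client(rng):
        while time.monotonic() < deadline:
            time.sleep(think_time_s)     # simulated client "think time"
            key = rng.randrange(table_rows)
            if rng.random() < read_ratio:
                _ = table[key]           # stand-in for a SELECT
                kind = "read"
            else:
                with lock:
                    table[key] += 1      # stand-in for an UPDATE
                kind = "write"
            with lock:
                counts[kind] += 1

    workers = [threading.Thread(target=client, args=(random.Random(i),))
               for i in range(num_clients)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    total = counts["read"] + counts["write"]
    return total / duration_s, counts    # transactions/sec and read/write mix

tps, mix = run_oltp_workload(duration_s=0.5)
```

Throughput here is dominated by the think time (roughly `num_clients / think_time_s` transactions per second at the limit), which is why cutting the think time from 25ms to 1ms in the tuned run drives the rate so much higher even with fewer connections.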
The chart below shows even more impressive numbers, this time from the same hardware running a tuned DB2 configuration. With 14 client connections, a 1ms think time, and a 250,000-row table, we saw an amazing 5,500 transactions per second with the members showing just 50% CPU load.
Of course, these are simulated OLTP workloads running on hardware that is not officially supported by pureScale (only IBM x and p servers are currently supported), so your mileage will definitely vary. Still, those kinds of transaction rates would have been firmly in mainframe territory not so long ago.
In our testing to date, DB2 pureScale has met or exceeded every one of our expectations, delivering on the promise of a highly robust, scalable and efficient clustering solution for DB2 customers. We’ll be continuing our research as new capabilities are delivered during 2011, and I’ll keep you updated on the results.