3.5 GB/second? Where’d that come from?
One of the things I enjoy most about my job at X-IO is being involved with our data storage performance testing.
Not just the internal testing that we run regularly (there is always something running), but also the external testing at partner/customer/analyst sites. A huge lesson I’ve learned over years of benchmarks and POCs is to watch every part of the testing; every step is important. I have seen tons (and tons) of work go into meticulously crafted test plans, only to see all of the troublesome signs the system raises before the test even begins get ignored. You would be amazed at how much you can learn about the performance of any system just by watching how things go during setup.

The setup phase is really your first interaction with the hardware, and it’s just as important as the actual test you are running. I use the setup phase as a “smoke test” of the environment: to verify that I’m seeing the numbers I expect and that monitoring is configured correctly. It’s also a great opportunity to familiarize yourself with the interface and discover how much the system can tell you about how it’s performing.
End-users should pay particular attention to this, because the procedures performed during setup are often examples of the extremes that vendors will dismiss as “corner cases.” Don’t believe this. While the setup operations (Storage vMotion, database restore/setup, etc.) may not be part of the test plan, they are operations you are going to perform daily, and it’s important to know how the system handles them. Even if you are configuring for a database test, the act of moving the database onto the array tells you something about how fast data can be moved on and off the hardware (and SVMotion impact/performance is a big deal; ask any VMware admin).
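One simple way to put numbers on that setup-phase “smoke test” is to sample a cumulative byte counter twice (from /proc/diskstats, an array’s management API, esxtop output, or wherever your environment exposes it) and convert the delta into throughput. A minimal sketch; the helper below is hypothetical, not an X-IO or VMware tool:

```python
def throughput_mb_s(bytes_before: int, bytes_after: int, interval_s: float) -> float:
    """Average throughput in MB/s between two samples of a cumulative byte counter."""
    return (bytes_after - bytes_before) / interval_s / 1e6

# Example: 35 GB moved in a 10-second sampling window during a restore
print(throughput_mb_s(0, 35_000_000_000, 10.0))  # 3500.0 MB/s
```

Watching a number like this during an SVMotion or a database restore tells you what the system can actually sustain, before the formal test plan ever starts.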
Recently, we have been doing some testing with StorageReview and the new All Flash ISE 860, and the value of watching the setup of a test was made apparent right away.
One of the tests that they run is a virtualization benchmark from VMware called VMmark. This is an extremely demanding, solutions-oriented test that stresses all aspects of the environment (not just the storage). There are many different kinds of virtual machines involved as part of the test, but the one that really got my attention was the Microsoft Exchange portion of the test setup (I used to benchmark Exchange for HP back in the day). As part of the Exchange setup for each run, all of the databases are restored from backups that are local to each VM. There are also Linux database VMs in the test that are restored/regenerated before each run. How much do these activities demand?
Check this out:
While restoring 26 Exchange Server VMs as part of the setup, a single ISE 860 hit almost 3.5 GB/s at a 50/50 read/write workload.
That throughput was orders of magnitude greater than anything seen during the normal VMmark runs, and it’s a great example of how the setup portion of a test can expose different aspects of system performance. Further, while the Linux databases were regenerating, the ISE 860 sustained just over 2 GB/s of writes (sustained write throughput is not something all-flash solutions are necessarily known for). The ISE 860 crushed this part of the testing, and it really shows how much performance a single system is capable of. For the results of the actual VMmark runs, head over to the StorageReview website and read part 1 and part 2 of their review. You won’t be disappointed.
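A quick back-of-envelope check on those restore numbers (assuming the load was spread evenly across the 26 VMs, which is my assumption, not something StorageReview reported):

```python
aggregate_gb_s = 3.5     # observed during the 26-VM Exchange restore
vm_count = 26
read_fraction = 0.5      # 50/50 read/write mix

reads_gb_s = aggregate_gb_s * read_fraction      # 1.75 GB/s of reads
writes_gb_s = aggregate_gb_s - reads_gb_s        # 1.75 GB/s of writes
per_vm_mb_s = aggregate_gb_s * 1000 / vm_count   # per-VM share, if spread evenly
print(round(per_vm_mb_s, 1))  # ~134.6 MB/s per VM
```

In other words, every one of those 26 VMs was restoring at roughly the speed of a dedicated SATA link, simultaneously, from a single array.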
Don’t let anyone tell you that there’s a portion of your testing that isn’t important (“How many times do you really use SVMotion anyway?”). Any interaction you have with a new system conveys something important about how it’s going to react, and every workload matters.