— Storage Horizons Blog —

What kinds of data access make up an application, and why does this matter when thinking about tiering (for performance) vs. basic HSM?

Application Data Attributes

Steve Sicola

One of the key points is “reaction time” for tiering. Another way to look at this is “Dynamic Data Placement”: can the data an application uses be in the right tier, at the right time, so that I/O is consistent for every portion of the application?

All applications have data whose access patterns fall into the following categories (a rough sketch of how such patterns might be detected follows the list):

  1. Tight random: frequent reads/writes across a small range of the data set
  2. Random: infrequent, non-localized access across the data set
  3. Sequential: contiguous access through a range of the data set
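To make these categories concrete, here is a minimal sketch of how a window of logical block addresses (LBAs) might be bucketed into the three patterns. This is an illustration, not X-IO code; the `tight_span` and `seq_step` thresholds are assumptions chosen for readability.

```python
# Illustrative sketch only: classify a window of LBA accesses into the
# three patterns above. Thresholds are assumed values, not parameters
# from any shipping array.

def classify_access(lbas, tight_span=4096, seq_step=1):
    """Label a window of LBA accesses as 'sequential', 'tight random',
    or 'random'."""
    if len(lbas) < 2:
        return "random"
    # Sequential: every access advances by a fixed small step.
    if all(b - a == seq_step for a, b in zip(lbas, lbas[1:])):
        return "sequential"
    # Tight random: accesses jump around, but within a small range.
    if max(lbas) - min(lbas) <= tight_span:
        return "tight random"
    return "random"

print(classify_access([100, 101, 102, 103]))   # sequential
print(classify_access([100, 900, 250, 400]))   # tight random
print(classify_access([100, 9_000_000, 42]))   # random
```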

In the history of arrays, these aspects of application data sets have been handled by up-front hand tuning of LUNs per data type, pinning data in RAM, short-stroking many HDDs, or SSDs (old and new). With hybrid tiering arrays, it all comes down to dynamic data placement, provided it can be done without causing inconsistent performance.

Application data sets are rarely the size of the array’s storage capacity, so customers naturally want to run multiple applications on one array to drive efficiency. However, “the devil is in the details”: most arrays can’t handle multi-tenancy, because running multiple applications gives them inconsistent performance.

If storage for an application were like an ice cream parfait and the cost were right, then everything would be simple . . . but as noted above, it is not.

In the world of virtualization, VDI, and plain old small-to-medium businesses trying to make the most of their IT storage purchases, the need for true multi-tenant storage that can adapt dynamically and provide automatic QoS for all volumes across multiple applications is VERY HIGH.

The problem to be solved is ingesting the entire data stream from multiple applications simultaneously, then properly analyzing and characterizing the data based on the usage patterns above. It’s the ultimate big data problem, and X-IO has solved it with its unique Continuous Adaptive Data Placement (CADP) algorithms, layered on top of all the unique ISE technology. Having first solved the fundamental problems of reliability, availability, and capacity utilization, CADP becomes simpler and more effective as a result, delivering up-tiering, consistently high performance, and multi-tenancy for applications.
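X-IO’s CADP algorithms are proprietary, but the general shape of the problem can be sketched: count accesses per extent as the combined stream arrives, decay old counts so the heat map reflects current behavior, and promote the hottest extents to flash. The extent size, decay factor, and promotion count below are assumptions for illustration only.

```python
# Illustrative sketch of streaming heat tracking; NOT X-IO's actual CADP,
# which is proprietary. Each I/O is ingested as it arrives and counted
# against a fixed-size extent; counts decay each interval so the map
# tracks current behavior rather than history.

from collections import defaultdict

EXTENT_SIZE = 1 << 20          # 1 MiB extents (assumed granularity)
DECAY = 0.9                    # per-interval decay factor (assumed)

heat = defaultdict(float)      # extent number -> decayed access count

def record_io(offset_bytes):
    """Ingest one I/O from the combined stream and heat up its extent."""
    heat[offset_bytes // EXTENT_SIZE] += 1.0

def end_interval(promote_count=4):
    """Decay all counters, then return the hottest extents to up-tier."""
    for extent in heat:
        heat[extent] *= DECAY
    return sorted(heat, key=heat.get, reverse=True)[:promote_count]
```

In a real array this loop would run continuously and feed a mover that migrates extents between the HDD and SSD tiers; the sketch shows only the bookkeeping.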

Much as in big data, or in remote sensing for accurate weather forecasting, attributes such as an application’s I/O density, the size of the area with a given density, and its location, among many others, play the key roles in determining where to place data for optimal, consistent I/O, whether one application or many is running against a storage device.
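One way to see how those attributes combine is a simple scoring sketch: rank candidate regions by I/O density (accesses per GB), with a mild bonus for region size, so a large, dense region wins flash placement over a small or sparse one. The formula and the sample regions are assumptions for illustration, not a published X-IO algorithm.

```python
# Illustrative placement scoring, not a published X-IO algorithm.
# Density (I/Os per GB) dominates; a mild size bonus breaks ties in
# favor of larger regions, since moving them pays off across more I/O.

def placement_score(io_count, region_gb):
    """Score a region for SSD placement: density weighted by size."""
    density = io_count / region_gb        # I/Os per GB
    return density * (region_gb ** 0.5)   # size bonus exponent is assumed

candidates = {                 # hypothetical regions: (I/O count, size GB)
    "db-index": (500_000, 2.0),     # hot and small
    "db-table": (800_000, 40.0),    # warm and large
    "archive":  (1_000, 500.0),     # cold and huge
}

ranked = sorted(candidates,
                key=lambda name: placement_score(*candidates[name]),
                reverse=True)
print(ranked)   # ['db-index', 'db-table', 'archive']
```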

So, X-IO effectively solves the dynamic data placement problem for multiple applications running simultaneously against a Hyper ISE hybrid. Data is placed at the right time, in the right place, across the entire capacity of the hybrid array known as Hyper ISE, and multiple applications can be provided with consistent I/O and throughput. That means more VMs, more VDI users, more databases, and so on. It’s all about being able to do more with less.

X-IO Hyper ISE has proven this with benchmarks such as Temenos and Redknee running on Microsoft SQL Server 2012, beating out larger arrays with traditional tiering; and with Best of Microsoft TechEd wins two years in a row, X-IO is seen as the best storage on the planet.
