The True “Cost” of Enterprise Storage – Understanding Storage Management


It’s a truism that the absolute cost of storage, measured per GB, has dropped steadily over the last 15 years. In fact, when I was involved in purchasing storage hardware, it was common to build a quarterly price reduction of around 5% into any long-term deal. Prices have fallen because disk manufacturers have kept increasing the capacity of a standard drive form factor, delivering more capacity for the same physical space, power and cooling.

Price reductions and capacity increases have been dramatic. The IBM 350 Disk Storage System, introduced with the IBM 305 RAMAC computer in 1956, had a capacity of a mere 5MB. By 1980, IBM had introduced the 3380, with around 2.5GB of capacity and a list price starting at $97,650. Today, we have single drives measured in terabytes and systems storing petabytes of information for less than $1/GB.

All of this data needs managing. As a storage administrator in 1996, I looked after a mere 300GB of data (around 100 IBM 3380 drives). However, it’s worth remembering that these systems had no built-in RAID at the time, and data recovery involved a great deal of manual work. The cost of storage also justified micro-managing the placement of every single file, whether on primary disk or tape.

Storage Management in the Era of Big(ger) Data

Roll forward to today and storage micro-management just isn’t tenable; it would be impossible to look at every single piece of data, so a more general, broad-brush approach is taken to managing storage.

As data volumes have ballooned, the low-level tasks of 20 years ago have been replaced by a new set of challenges that must be addressed to keep storage running smoothly.

1. Maintenance

Storage arrays need their firmware and software patched and upgraded. Connected devices such as HBAs also have firmware and drivers to maintain, and in Fibre Channel networks the switches themselves have to be kept up to date.

2. Upgrades

With storage costs dropping annually, it doesn’t make financial sense to put large volumes of storage on the floor on day one of a project. Unfortunately, many IT teams decide that in-place upgrades are too expensive or time-consuming (and vendor pricing reinforces that behaviour), so they deploy more capacity than they need up front. Where attempts are made to be more frugal, many legacy systems have no way to load balance onto new capacity automatically, requiring manual intervention to ensure the new storage is used effectively.
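To make that manual work concrete, here is a minimal sketch of the rebalancing arithmetic an administrator ends up doing by hand when a legacy array gains a new device. The disk names, sizes and used figures are hypothetical; the point is simply that every device should settle at roughly the same utilisation.

```python
# Hypothetical pool: two existing disks plus one newly installed device.
disks = [
    {"name": "disk-1", "capacity_gb": 4000, "used_gb": 3600},
    {"name": "disk-2", "capacity_gb": 4000, "used_gb": 3400},
    {"name": "disk-3", "capacity_gb": 8000, "used_gb": 0},   # newly added capacity
]

total_used = sum(d["used_gb"] for d in disks)
total_capacity = sum(d["capacity_gb"] for d in disks)
target_utilisation = total_used / total_capacity             # 7000 / 16000 = 43.75%

print(f"Target utilisation after rebalance: {target_utilisation:.1%}")
for d in disks:
    target_used = d["capacity_gb"] * target_utilisation
    delta_gb = target_used - d["used_gb"]
    action = "receive" if delta_gb > 0 else "shed"
    print(f"{d['name']}: {action} {abs(delta_gb):.0f} GB "
          f"to reach {target_used:.0f} GB used")
```

An array that balances itself does this calculation (and the data movement behind it) continuously; a legacy system leaves both to the administrator.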

3. Performance Management

Traditional storage arrays have to be performance-managed to ensure that all tenants (applications and lines of business) receive the best possible I/O response times. Vendors have implemented some technology to automate load balancing, but even these features still require manual intervention.

4. Transformation/Migration

Probably the most cost-intensive part of storage management is the migration from one system to another, either when capacity is reached or when technology reaches end-of-life (EOL) and is replaced. Cost estimates for migration work range from $5,000 to $7,000 per TB, depending on the platform.
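To put that per-TB figure in context, a quick back-of-the-envelope calculation shows how fast migration costs mount up. The 200TB array size below is a hypothetical example, not a figure from the TCO report.

```python
# Back-of-the-envelope illustration of the $5,000-$7,000 per TB migration
# estimate quoted above. The 200 TB capacity is a hypothetical example.
capacity_tb = 200
cost_per_tb_low, cost_per_tb_high = 5_000, 7_000

low_estimate = capacity_tb * cost_per_tb_low
high_estimate = capacity_tb * cost_per_tb_high
print(f"Migrating {capacity_tb} TB: ${low_estimate:,} - ${high_estimate:,}")
# -> Migrating 200 TB: $1,000,000 - $1,400,000
```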

It’s easy to see that there are significant operational costs in managing traditional storage. Estimates of the operational overhead of storage management vary significantly; however, figures range from 50% to 70%, depending on whom you ask.

Many of these problems arise because the benefits initially perceived in a shared storage array become a burden as systems scale. Techniques like QoS are required to deal with “noisy neighbors”, and change and problem management become an issue when there are tens or hundreds of internal customers to consider.
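As an illustration of the QoS idea, the sketch below shows a generic token-bucket limiter that caps each tenant’s I/O rate so one noisy neighbor cannot starve the rest. This is not any particular vendor’s implementation, and the tenant names and rate limits are hypothetical.

```python
import time

class IopsLimiter:
    """Generic token bucket: allow up to `rate` I/Os per second, plus a small burst."""

    def __init__(self, rate: float, burst: float):
        self.rate = rate          # steady-state I/Os per second
        self.burst = burst        # maximum tokens that can accumulate
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False              # caller must queue or retry the I/O

# One limiter per tenant: the analytics workload gets a lower cap so the
# latency-sensitive OLTP tenant keeps predictable response times.
tenant_limits = {
    "oltp": IopsLimiter(rate=5000, burst=500),
    "analytics": IopsLimiter(rate=1000, burst=100),
}

if tenant_limits["analytics"].allow():
    pass  # issue the I/O; otherwise defer it
```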

Continue reading Part II of this blog as I look at how Zadara offers a different approach and cost model, or download the Evaluator Group’s TCO report to learn more.


 
