It is no secret that storage performance in the cloud is one of the main challenges for applications. The problem has multiple facets: it is not just a matter of low versus high performance; performance also varies over time, and one customer can impact the performance of others. There is also the issue of storage performance monitoring: how does one know whether an application is running slowly because of storage, or because of other factors such as networking, IO concurrency, or latency?
The main issue is that none of the current storage systems and software were designed with the cloud in mind. Even high-end SAN storage arrays were designed for a single tenant and a single manager: the storage admin who knows what everyone else is doing, the guru who knows how to interpret iostat output! Beyond the lack of multi-tenant isolation, their scalability is not even close to what the cloud requires. Worse yet, a single such system cannot accommodate different types of customers. As a result, cloud providers start with the lowest common denominator.
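To illustrate the kind of interpretation that tools like iostat demand of the storage admin, here is a minimal sketch in Python that derives IOPS and average latency from two counter samples, roughly the way iostat does. The counter names follow the Linux /proc/diskstats convention; the sample values and function name are purely illustrative.

```python
# Sketch: derive per-device IOPS and average I/O latency from two
# cumulative counter samples, the way iostat computes its rates.
# Counter names mimic Linux /proc/diskstats fields (reads/writes
# completed, milliseconds spent reading/writing); values are made up.

def disk_rates(prev, curr, interval_s):
    """Compute IOPS and mean I/O latency (ms) over a sampling interval."""
    ios = (curr["reads"] - prev["reads"]) + (curr["writes"] - prev["writes"])
    ms = (curr["read_ms"] - prev["read_ms"]) + (curr["write_ms"] - prev["write_ms"])
    iops = ios / interval_s
    avg_latency_ms = ms / ios if ios else 0.0
    return iops, avg_latency_ms

# Two hypothetical samples taken 5 seconds apart:
t0 = {"reads": 1000, "writes": 500, "read_ms": 4000, "write_ms": 2000}
t1 = {"reads": 1500, "writes": 1000, "read_ms": 9000, "write_ms": 5000}

iops, lat = disk_rates(t0, t1, 5.0)
print(f"{iops:.0f} IOPS, {lat:.1f} ms average latency")
```

Rising average latency with flat IOPS is the classic signature of a noisy neighbor: the tenant's own workload has not changed, but each I/O is waiting longer behind someone else's.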
All of this produces a cloud environment into which it is hard to move classical IT applications that need low latency. Instead, the databases and applications most popular in the cloud are the ones that can live with long latencies, low IOPS, and relaxed consistency. But these problems are solvable with a new storage architecture: one that is not based on “scale-out”, but is designed to host multiple tenants, each with their own needs.
In this storage architecture, each user (at least, each user that needs consistency and performance) must have dedicated resources: dedicated drives, dedicated cache, and dedicated cores, so that IOPS can be guaranteed as if the user had a SAN array in their own data center. And just as in a private environment, each user needs access to performance data (for monitoring) and a management console (for control over and tuning of the environment). The architecture must allow different amounts of resources to be allocated to different users, depending on their needs, and it must meter those resources so that customers can be charged according to their actual usage.
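To make the allocation-and-metering idea concrete, here is a minimal sketch of per-tenant dedicated resources with usage-based billing. All class names, fields, and rates are illustrative assumptions, not Zadara's actual model or API.

```python
from dataclasses import dataclass

@dataclass
class TenantAllocation:
    """Dedicated resources reserved for one tenant (illustrative model)."""
    name: str
    drives: int     # dedicated physical drives
    cache_gb: int   # dedicated cache, in GB
    cores: int      # dedicated CPU cores

def charge(alloc, hours, rates):
    """Meter the allocation: bill per resource-hour actually reserved."""
    hourly = (alloc.drives * rates["drive_hr"]
              + alloc.cache_gb * rates["cache_gb_hr"]
              + alloc.cores * rates["core_hr"])
    return hours * hourly

# Two tenants with very different needs sharing the same system:
small = TenantAllocation("dev-team", drives=2, cache_gb=8, cores=1)
large = TenantAllocation("oltp-db", drives=12, cache_gb=64, cores=8)

rates = {"drive_hr": 0.02, "cache_gb_hr": 0.01, "core_hr": 0.05}
for tenant in (small, large):
    print(tenant.name, round(charge(tenant, 720, rates), 2))  # ~1 month
```

Because each tenant's drives, cache, and cores are reserved rather than shared, the bill reflects what was dedicated to them, and one tenant's burst cannot consume another's capacity.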
In summary, a new cloud storage architecture is needed – one that is not based on “scale-out”, but is more similar to the compute cloud.
This is what Zadara Storage has done! We took the architecture of cloud computing and applied it to cloud storage, creating a highly scalable, highly reliable system in which each tenant controls their own storage. In addition, we allocate resources per customer (CPU, memory, networking, and storage drives), eliminating the noisy-neighbor problem. Our Virtual Private Storage Arrays in the cloud perform and behave like classical SAN/NAS storage in the data center; they have the reliability and features of enterprise storage with the elasticity and pay-per-use pricing of the cloud.