How to Meet SLA Uptime Commitments: Managed Service Providers (Hint: It Starts with SDS)


How to meet SLA uptime commitments: it's something organizations often struggle with, and something any service-based company strives to get right.

Most businesses now operate in an around-the-clock world, which means they must be able to interact with customers and prospects on a 24/7 basis.

But when that doesn’t happen, the costs can be high. According to a report by consulting firm IDC, small and medium-sized businesses (SMBs) lose between $137 and $427 for every minute of IT downtime.

It’s no surprise, then, that businesses that entrust the support of their IT infrastructure to a Managed Service Provider (MSP) usually require that the MSP ensure a high level of availability for the customer’s IT operations. Those assurances generally take the form of an uptime guarantee in the Service Level Agreement (SLA).

For example, a 90 percent uptime guarantee would commit the MSP to ensuring that the client’s IT operations are down no more than 10 percent of the time. But MSPs will almost never encounter a requirement that lenient. At 90 percent availability, the client’s operations could be offline for up to 36.5 days a year, and in today’s 24/7/365 business environment, very few companies would agree to that.

According to Terri McClure, a senior analyst at Enterprise Strategy Group, uptime guarantees of 99.9 percent are common. That’s called a “three nines” level of availability. But even that high level allows for up to 8.76 hours of downtime per year. For businesses that depend on their IT operations, that amount of downtime is still far too much. That’s why many MSPs are attempting to gain a competitive advantage by offering their clients higher availability guarantees. With a “four nines” (99.99 percent) availability commitment, allowable downtime shrinks to 52.56 minutes per year, while a “five nines” guarantee would allow for no more than 5.26 minutes of downtime in a year.
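The arithmetic behind these figures is simple: multiply the hours in a year by the fraction of time the system is allowed to be down. The short sketch below (illustrative Python, with the availability levels hard-coded) reproduces the downtime allowances quoted above.

```python
# Allowed downtime per year for common availability levels.
# Uses a 365-day year (8,760 hours), matching the round numbers
# typically quoted in SLAs.

HOURS_PER_YEAR = 365 * 24  # 8,760 hours

for availability in (0.90, 0.999, 0.9999, 0.99999):
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime -> "
          f"{downtime_hours:.2f} hours "
          f"({downtime_hours * 60:.2f} minutes) of downtime per year")
```

Running this yields 876 hours (36.5 days) at 90 percent, 8.76 hours at three nines, 52.56 minutes at four nines, and 5.26 minutes at five nines.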

How can an MSP offer clients an SLA with such high availability guarantees without exposing itself to severe penalties if the standard isn’t met? Software-Defined Storage (SDS) offers the most viable answer to that question.

Traditional Storage Becomes Less Reliable As It Grows

The SAN and NAS storage solutions typically used in traditional data centers do not always hold up in today’s business environment. Such systems employ dedicated, proprietary, and expensive storage devices that are designed to be highly reliable at the hardware level. Overall system reliability is keyed to the low failure rates that are assumed to characterize each individual storage device.

The fact that traditional storage systems scale in capacity by adding more and more of these high-availability storage units makes those systems less reliable as they grow. The failure of a single storage array or controller won’t necessarily disrupt the system as a whole. But as more and more storage devices are added to accommodate the exponentially growing capacity demands companies are now experiencing, the number of potential failure points grows with them, and the likelihood of multiple simultaneous failures increases. In other words, the question of how to meet SLA uptime commitments gets harder to answer precisely when capacity is growing fastest.
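A simplified back-of-the-envelope model makes the point. Assuming each device fails independently at some small annual rate (the 2 percent figure below is purely illustrative, not a measured value), the chance that at least one device in the system fails during the year climbs quickly as the device count grows.

```python
# Simplified illustration: if each storage device independently has a
# small chance of failing in a given year, the chance that at least one
# device in the system fails grows quickly with device count.
# The 2% annual per-device failure rate is an assumed, illustrative figure.

per_device_failure_rate = 0.02  # assumed annual failure rate per device

for device_count in (10, 50, 100, 500):
    p_at_least_one_failure = 1 - (1 - per_device_failure_rate) ** device_count
    print(f"{device_count:4d} devices -> "
          f"{p_at_least_one_failure:.1%} chance of at least one failure per year")
```

Real arrays complicate this with RAID and controller redundancy, but the trend holds: more devices mean more chances for something, somewhere, to fail.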

In addition, the design of traditional storage solutions often requires that they are deliberately taken offline for maintenance, hardware upgrades, or software updates.

The result of these factors is that as traditional storage systems grow and evolve, the likelihood that they will experience significant downtime grows right along with them.

SDS Treats Hardware Failures as Inevitable and Expected


SDS, on the other hand, is designed with failure in mind. Rather than asking how to meet SLA uptime commitments despite failures, SDS simply plans for them. In an SDS implementation, the intelligence of the system resides not in the hardware, but in software. Because SDS is designed to allow the use of inexpensive commodity disk drives in place of the costly dedicated storage hardware that characterizes traditional storage, it assumes that individual devices will fail relatively frequently. The software can be configured to quickly and transparently compensate for such failures by employing sophisticated storage management features such as automatic data replication, mirroring, deduplication, snapshots, and the ability to essentially hot swap storage devices.

In other words, SDS implementations are designed to be inherently self-healing.
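The exact mechanisms vary from product to product, and the sketch below is not any vendor’s implementation. It is a minimal conceptual illustration of the replicated-write idea behind that self-healing behavior: as long as enough healthy copies acknowledge a write, a single failed device never interrupts service, and the missing copy can be rebuilt in the background.

```python
# Conceptual sketch of a replicated write that tolerates a failed device.
# This illustrates the general idea only; it is not any vendor's API.

class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.blocks = {}

    def write(self, block_id, data):
        if not self.healthy:
            raise IOError(f"replica {self.name} is offline")
        self.blocks[block_id] = data


def replicated_write(replicas, block_id, data, min_copies=2):
    """Write to all replicas; succeed if at least min_copies acknowledge."""
    acks = 0
    for replica in replicas:
        try:
            replica.write(block_id, data)
            acks += 1
        except IOError:
            # A failed replica is expected; the software routes around it
            # and can rebuild the missing copy on a replacement device later.
            continue
    if acks < min_copies:
        raise IOError("not enough healthy replicas to meet redundancy target")
    return acks


# One replica has failed, yet the write still completes with two copies.
replicas = [Replica("a"), Replica("b", healthy=False), Replica("c")]
print(replicated_write(replicas, block_id=42, data=b"payload"))  # -> 2
```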

SDS Provides Greater Reliability Than Traditional Storage


A top-of-the-line SDS offering, such as the Zadara Storage Cloud, allows MSPs to offer more aggressive uptime guarantees than would be prudent with traditional storage. For example, the Zadara Storage VPSA Storage Array solution is designed from the ground up for High Availability (HA). It provides capabilities for both on-premises and remote mirroring of data, and for asynchronous replication of snapshots to geographically remote VPSAs. The Zadara Multi-Zone HA option allows automatic, real-time failover across widely separated locations. And with its multi-cloud capability, Zadara enables automatic, transparent failover between different clouds, such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Because of these unique capabilities, Zadara is able to offer a storage SLA with a 100 percent uptime guarantee. By partnering with Zadara, MSPs can assure clients of the highest levels of data availability at an affordable price.

If you’d like to know more about how Zadara’s SDS solution can help you meet the stringent uptime requirements modern SLAs demand, please download the ‘Zadara Storage Cloud’ whitepaper.


Zadara Team

Since 2011, Zadara’s Edge Cloud Platform (ZCP) has simplified operational complexity through automated, end-to-end provisioning of compute, storage, and network resources.

