The Untapped Potential of Containers Technology

I started my IT career in the 1980s and spent the first 10 years working as a systems programmer on IBM mainframe systems. For those not acquainted with this perennial technology, IBM mainframes were big, expensive and ultra-reliable, and they are still used today at the core of almost all financial organizations and other companies that need rock-solid computing.

IBM mainframes ran various versions of what was originally the System/360 architecture and the MVS operating system, which evolved through System/370 and System/390 into today's z/OS. Over the years new features have been added, but the O/S remains basically the same apart from support for new hardware and extended addressing (eventually moving to 64-bit). The core benefit of z/OS and its predecessors was the ability to run multiple virtual instances of programs on the same machine, executing each through time slicing (giving each task a small amount of CPU time, then swapping to another). Each instance, or address space, was much like the emerging containers technology of today: each address space appeared to have its own separate map of memory and ran in an isolated fashion, independent of other running processes. An address space could be transient (a batch job) or longer running (a started task), depending on requirements.

Reaching Limitations in Virtualization

Roll forward almost 30 years and we're experiencing a sea change in the way applications are created and deployed. For the past 10 years, server virtualization has been the dominant technology, taking the physical entity of a server and abstracting it down to a set of configuration settings and data files. Virtualization has been wildly successful – the initial cost savings from hardware consolidation have been followed by the additional benefits of improved agility (spin up a VM on demand) and resiliency (vMotion workloads around and deliver HA/FT without application clustering).

However, virtual machines have reached the point where their success is being hampered by issues of scale. Each VM is an entity to be managed independently, requiring patching, upgrading and licensing, and consuming disk space, DRAM and CPU resources even when sitting idle. For the most part, VMs still have to be looked after and nurtured – requiring backup and optimization (defragmentation, load balancing, performance management).

Next-Gen Application Deployment: Containers Technology

Containers represent the next step: deploying applications without the overhead of the virtual server. The evolution is simple – why run multiple operating systems, with all the attendant overhead, when all we want to do is run the application? In an echo of the mainframe days, containers allow applications to run in a lightweight fashion, sharing system overhead and resources within either a virtual machine or a physical server. Each container appears to have the resources of the entire operating system, but is simply a group of isolated processes running on a snapshot of a root file system.
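
To make this concrete, here is a minimal sketch using the docker-py SDK (pip install docker), assuming a local Docker daemon is available; the busybox image and the command are just illustrative choices. It shows that a container is simply an isolated process group on a snapshot of a root file system:

```python
# A minimal sketch, assuming Docker and the docker-py SDK are installed.
import docker

client = docker.from_env()

# The container shares the host kernel and gets a snapshot of the busybox
# image as its root file system; "ps" inside it sees only its own
# isolated processes, not the rest of the machine.
output = client.containers.run("busybox", "ps aux", remove=True)
print(output.decode())
```

Running ps on the host while the container executes would list those same processes alongside everything else on the machine, confirming that no guest operating system is involved.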

Although containers appear to be a great step forward in efficiency, we need to stop for a moment and look at the application itself. Virtual machines were so successful because the model of “one application, one server” could be implemented easily within a VM. Almost no changes were needed to run a virtual version of a physically deployed application. Containers are not designed to run applications in the same way. Each container is expected to be stateless and potentially transient in nature, accessing shared external resources like storage.
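
As an illustration of that stateless, transient model, the sketch below (the host path /mnt/shared and the busybox image are assumptions) runs a throwaway container whose only lasting footprint is the data it writes to externally mounted storage:

```python
# A sketch of a stateless, transient container; assumes docker-py, a
# running Docker daemon, and that /mnt/shared exists on the host.
import docker

client = docker.from_env()

# All state lives on shared external storage mounted into the container;
# the container itself is discarded as soon as the task completes.
client.containers.run(
    "busybox",
    "sh -c 'echo processed > /data/result.txt'",
    volumes={"/mnt/shared": {"bind": "/data", "mode": "rw"}},
    remove=True,  # transient: only the data on the volume survives
)
```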

Container-based applications are coming, and many already exist today, especially on open-source platforms. In the meantime, the potential to run lightweight applications is being exploited within existing infrastructure to deliver additional benefits. One example is the ability to run container code on Zadara's VPSA (Virtual Private Storage Array).

Zadara Container Service (ZCS): Moving the Application to the Data

Zadara announced support for Docker containers on the VPSA in June 2015 with the release of the Zadara Container Service (ZCS). Running containers directly on the storage platform means data can be accessed with extremely low latency and high performance compared to an application accessing the same data over the network.

What types of applications are likely to run on the VPSA? Clearly the most suitable are those that analyze and modify data stored on the VPSA, such as virus scanning, data transformation (video or audio transcoding) or other tasks that gain the most from low latency and co-location with the data. These are perhaps the most obvious uses for ZCS; however, depending on the complexity of the data, other valuable use cases arise, especially those that can run as a secondary process alongside the primary application. These include legal search and discovery, analytics and metadata creation. Many of these functions remove the need to create a second copy of the data, reducing the cost of maintaining multiple data sets within the organization.
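
To give a flavour of such a secondary process, here is a hypothetical Python sketch of a containerized task that walks a data mount and builds a simple metadata index alongside the files; the /data mount point and the index location are illustrative assumptions, not part of the ZCS API:

```python
# Hypothetical metadata-creation task, intended to run inside a container
# with the storage volume mounted at /data (an assumed path).
import hashlib
import json
import os

def build_index(root="/data", out="/data/.metadata/index.json"):
    index = {}
    for dirpath, _, filenames in os.walk(root):
        # Skip our own output directory so we never index the index.
        if ".metadata" in dirpath:
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            info = os.stat(path)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            # Record size and a content hash for later search or discovery.
            index[path] = {"size": info.st_size, "sha256": digest}
    os.makedirs(os.path.dirname(out), exist_ok=True)
    with open(out, "w") as f:
        json.dump(index, f, indent=2)

if __name__ == "__main__":
    build_index()
```

Because the index is built right next to the data, no second copy of the data set ever has to leave the array.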

It’s early days for containers, but the approach of moving the application to the data (rather than the other way around) opens the door to some pretty clever things in the future, and ZCS is an early implementation leading the way in application portability.

– – –

Get to know the Zadara Container Service (ZCS)!

Download the ZCS data sheet below:


