If you missed Part 1 of “DevOps, 12 factors and the next great thing,” I discussed why DevOps is not a role to hire into and explored how DevOps is essential to staying competitive. I also touched upon “the next great thing”: Application Orchestration, which provides the essential language we didn’t have before but needed in order to realize the massive benefits of 12 factor apps and a successful DevOps strategy. Let’s look at some of my statements from that article and dig deeper into what we mean by “Application Orchestration” and successful DevOps.
“The strategy [of DevOps] is on an organizational level, and the language is the tools that enable Dev and Ops to clearly define the roles of the application, the automation of the systems, and which roles operations and development play into the success of the DevOps strategy.”
Application Orchestration helps define clearer roles for Operations, Development, and Systems. A good way to explain how this affects our application creation workflow, in a practical sense, is to answer this less-than-obvious question:
In a DevOps strategy, is it the responsibility of an application’s developer to provision storage at deployment time so that the application can be deployed and work as planned?
The answer is yes!
Wait, what? Didn’t I just proclaim in Part 1 that DevOps isn’t a role to hire into, as in a role where a Developer is also responsible for Operations (or vice versa)? That’s still absolutely true…
Before “DevOps,” you and I would have replied with a resounding “no.” After all, storage belongs to Operations, right? In the brave new world of Application Orchestration, responsibilities are not necessarily black and white. A developer IS responsible for provisioning the application’s storage, albeit in an abstract sense.
It’s the role of the developer in DevOps to use the common Application Orchestration language (here, the application definition) to specify which parts of the application need temporary and/or persistent storage. The developer also has the option to request a specific class of storage (QoS) through simple storage labels like “ssd” for storage faster than spinning disks. The storage that these classes refer to is dynamically allocated and not necessarily tied to a specific storage cluster or backing technology.
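Since Kubernetes is one of the frameworks in this space (and the one Qstack builds on), here is a minimal sketch of what such a developer-side storage request could look like as a Kubernetes PersistentVolumeClaim. The claim name and the “ssd” class name are illustrative assumptions, not a specific product’s configuration:

```yaml
# Hypothetical developer-side storage request: the developer names a
# storage class ("ssd") without knowing which backend provides it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd        # QoS label; Operations maps it to real storage
  resources:
    requests:
      storage: 10Gi
```

Notably, if no storage class named `ssd` exists yet, the claim simply stays pending until Operations provides one, rather than failing the whole deployment.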
It’s the role of the Application Orchestration layer to figure out how to acquire that type of storage and to deliver it to the application ready to use. Often that means a pre-formatted volume or folder, or even an environment variable!
Then it’s the role of Operations – who still “own” the storage – to make sure that storage allocated to the Application is healthy and ready, and that different storage offerings (QoS) have been properly labelled so that VMs, bare-metal machines, and containers that work in unison can use them.
That’s what I mean by their responsibilities not exactly being “black and white.” The responsibilities are shared, and the DevOps strategy can’t work without each party (and the systems) playing its part.
In this example, the beauty of Application Orchestration is that the order of the process is not set in stone. For example, the application won’t fail to deploy if the correct storage class isn’t available. It will simply wait for the storage class to become available, and through global Application Event notifications, both Operations and Developers will be notified that the application isn’t ready because one of its requirements hasn’t been met. Once the problem is fixed (e.g. by adding the correct label to the backing storage), the Application Orchestration layer will provision the correct size and class of storage the Application requires and start, restart, or scale the Application successfully. Some call this self-healing or an eventually guaranteed design.
“The 12 factor app manifesto made me realize some years ago that before the availability of API driven infrastructure (Cloud) and some of the technology advances in the past two or so years, there was no chance of a DevOps nirvana.”
This sort of “optimistic” or “eventually guaranteed” deployment of applications and services is in stark contrast with existing IT practices and even modern cloud deployments. Even after we got public cloud APIs and a multitude of deployment and configuration management tools, we were still relying on allocation-based methods of deployment: request a VM with 2 CPUs and 4 GB of RAM, parameterize the application deployment hierarchically, configure the VM with deployment/runtime parameters using a Chef or Puppet recipe, and so on – all of which would fail to deploy or scale the Application if just one part couldn’t be completed. That workflow often leads to annoying, half-deployed apps and stranded resources with lots of cleanup work to do, as well as difficult testing and debugging (not to mention config-scripting hell). Worst of all, it leads to tension and frustration between Dev and Ops, unclear responsibilities, and avoidable extra costs.
“Optimistic” deployment descriptions – like Application Orchestration along with definable health and readiness checks – can solve this problem and make it easier to debug and to determine the responsibilities of each party in our DevOps strategy.
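To make “definable health and readiness checks” concrete, here is an illustrative Kubernetes-style container spec fragment; the image name, paths, and port are hypothetical. The orchestrator only routes traffic to the app once its readiness check passes, and restarts it if the liveness check fails:

```yaml
# Illustrative container spec fragment with declared health checks.
containers:
  - name: web
    image: example/web:1.0          # hypothetical image name
    readinessProbe:
      httpGet:
        path: /ready                # "my dependencies are met, send traffic"
        port: 8080
    livenessProbe:
      httpGet:
        path: /healthz              # "I am still alive, don't restart me"
        port: 8080
```

Because these checks are part of the application definition itself, Dev and Ops share one unambiguous statement of what “healthy” and “ready” mean.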
Developers reading this will realize that Application Orchestration has a lot in common with the idea of coding with Promises in asynchronous environments. For the non-developer, a “Promise” represents a reply (i.e. from an API) or data from a 3rd-party service that our application needs but that may not be available immediately when we ask for it. Instead, we get a “Promise” to work with that will eventually be resolved/fulfilled, and our application can continue when that happens. This type of “optimistic” coding is being adopted in most modern programming languages because of the distributed nature of modern applications, and it is also essential to the idea of self-automated application operations.
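As a small JavaScript illustration of that idea (the function and field names are invented for this sketch), the code below asks for a resource that is not ready yet and simply continues once the Promise resolves, much like an orchestrator waiting for a required resource:

```javascript
// A hypothetical "storage" service that is not ready immediately:
// the Promise resolves only after a short delay, much like an
// orchestrator waiting for a required resource to become available.
function provisionStorage(sizeGb) {
  return new Promise((resolve) => {
    setTimeout(() => resolve({ sizeGb, status: "ready" }), 100);
  });
}

// The application does not fail because storage isn't ready yet;
// it continues once the Promise is fulfilled.
provisionStorage(10).then((volume) => {
  console.log(`storage ${volume.status}: ${volume.sizeGb} GB`); // prints "storage ready: 10 GB"
});
```

The caller never polls or crashes on a missing resource; the eventual fulfillment drives the next step, which is exactly the “eventually guaranteed” behavior described above.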
Container orchestration vs. Application Orchestration
I’ll start with a statement. Container orchestration is NOT the same as Application Orchestration. Not even close.
Don’t get me wrong, container orchestration is an integral part of Application Orchestration—but as a technology enabler. Containers are the darling of today’s tech industry but not the big solution or strategy that Application Orchestration represents.
To give you a great analogy, Virtual Machines were not “Cloud computing” like some vendors tried to convince people of in the early days of public clouds (IaaS), but they were, of course, critical to enabling the evolution of IT to get to Cloud computing or “API driven infrastructure,” to be more precise.
There are many excellent container orchestration/scheduling frameworks available today – like Kubernetes (which Qstack uses), Docker Swarm, and Apache Mesos, to name a few. But what all of these frameworks have in common is that they assume no responsibility for the underlying infrastructure and have no language to define their own initial and ongoing requirements – only the requirements of their workloads. In other words, their role is to consume the IT resources given to them (often manually provisioned), but the management of those IT resources and services, although critical to the success of a company’s DevOps strategy, is out of their scope of concern. Given a stable backend, they can manage individual components very well within the confines of their allotted resources, but they lack the big picture of what makes up a complete Application with all of its loosely coupled dependencies, services, events, health, storage, etc. Effectively, this (along with other reasons) makes container orchestrators and schedulers incomplete for Application Orchestration. However, that is BY DESIGN and not a flaw, in my opinion.
“The languages of these tools are there to help us reach our 12 factor goals and ultimately build our DevOps strategies. We just have to put the pieces together!”
It’s the role of vendors such as Greenqloud to expand on container orchestration frameworks and other tools to make them easy to use and to complete the Application Orchestration picture for DevOps success. If you want to go vendor-less in your quest for Application Orchestration and DevOps, be prepared to build A LOT of stuff that isn’t core to your business: API-driven physical/virtual compute, storage, and networking management; a self-service portal; application packaging formats; programmatic user and organizational management; firewall and load-balancing automation; security features like SSH and API key-pair management; application, usage, and system metering; user quotas; chargeback and showback reporting; health monitoring; centralized logging; API auditing; etc.
Luckily, that’s what we have created for you in one easy-to-install, Application Orchestration enabled cloud platform! You can get from a single container image name to an upgradable and editable hybrid application backed by a fully managed and automatically created application cluster with a single click! It’s pretty cool, if I do say so myself. 😉
In Part 3 of “DevOps, 12 Factors and the next great thing…” I’ll show some examples of Qstack Application Orchestration in action, so please stay tuned by following us on Twitter or subscribing to our newsletter – or, better yet, sign up for a private Qstack demo if you can’t wait for Part 3!
Eirikur “Eiki” Hrafnsson is co-founder and COO of Greenqloud, the makers of the hybrid cloud platform Qstack. With over 20 years of experience in enterprise and government IT, Eiki focuses on solving real-world problems by making IT simple.