Welcome back for the third and final installment of “DevOps, 12 factors and the next great thing.” In Part 1, I discussed the true meaning of “DevOps” (hint: it’s a strategy, not a role!) and how the “12 factor app” and DevOps are finally coming together, with Application Orchestration as their common “language”. In Part 2, I hopefully cleared up the confusion around Container Orchestration vs. Application Orchestration and dove deeper into “the next great thing”. Let’s tie it all together now by examining how Kubernetes and Qstack’s Application Orchestration directly address the “12 Factors” and more!
First of all, if you are unfamiliar with Qstack, here’s the short elevator pitch:
Qstack is an on-premise Cloud Management Platform that accelerates development efforts, lowers IT costs and fixes IT fragmentation. Using a self-service Web UI and an EC2 and Kubernetes compatible API and command line tools, organizations can package, deploy, and scale applications and server instances on Qstack automated clusters backed by Qstack managed physical and virtual infrastructure on-premise or across public clouds.
Kubernetes in Qstack and Applications
Qstack takes an opinionated approach to Application Orchestration, especially in how it discovers Applications from Kubernetes primitives such as Controllers, Pods, Services, ConfigMaps, etc. We tackle Applications – much like the 12 factor app principles – as a combination of everything needed for the application to work and scale, not just as individual Kubernetes primitives.
Application grouping in Qstack is based on directly applied resource labels and indirectly related (loosely coupled) component discovery and verification. To integrate Kubernetes, we started with the requirement that apps should be deployable from our UI by a single click without a pre-made Kubernetes cluster, and Qstack should take care of the rest. We also wanted to show real-time changes, health, readiness, application metrics, services and more, while maintaining 100% compatibility with off-the-shelf Kubernetes tools like kubectl and the Helm CLI. The screenshots show a glimpse of the difference between this approach and the same data displayed in the Kubernetes dashboard.
The 12 Factors and Qstack’s Application Orchestration
I. Codebase
One codebase tracked in revision control, many deploys
According to the 12 factors principles, an “Application” is a standalone packaged unit that can behave differently based on its unique configuration but it shouldn’t be forked into slightly different versions of code.
This is the standard practice for most applications today; however, the way in which we supply the apps with their runtime or deployment-time configuration can vary wildly, from bash scripts to configuration management recipes to key value store backends and more. This is a point of contention for developers and operations. Fortunately, Kubernetes gives us a number of standard and easy-to-use primitives like ConfigMaps and Secrets to inject into our apps as environment variables or easily updatable mounted volumes. These configurations and secrets help us keep our code as a single code base and free of customizations.
These configurations can be used by many applications; for example, a password to a shared database service can be stored as a Secret. To enable better visibility and security of the configurations, Qstack can discover and display where and how these configurations are being used at the application and cluster level.
II. Dependencies
Explicitly declare and isolate dependencies
This concept parallels the first guideline of the enormously useful Unix philosophy:
Write programs that do one thing and do it well.
Application dependencies in Qstack include their resource requirements and limits (e.g. they need X amount of RAM but should not go over Y RAM) as well as their dependencies on other Apps (services are also apps). Resource requirements can make an App either hard fail (scheduling cancelled) or soft fail (optimistically deployed – waits for resources).
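In Kubernetes terms, resource requirements and limits are declared per container in the Pod spec. A minimal sketch (the app name, image, and values are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend        # hypothetical app name
spec:
  containers:
  - name: web
    image: example/web:1.0  # placeholder image
    resources:
      requests:             # "needs X": scheduler only places the Pod where this fits
        memory: "256Mi"
        cpu: "250m"
      limits:               # "should not go over Y": enforced at runtime
        memory: "512Mi"
        cpu: "500m"
```

If the requests cannot be satisfied anywhere in the cluster, the Pod stays Pending, which corresponds to the “soft fail” waiting-for-resources case above.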
While Qstack offers 5 ways to deploy an application, it encourages developers and ops to package applications using the Kubernetes Helm package format so that Qstack can scan for missing dependencies or fulfill them on the fly and so that we all follow the same best practices (common language FTW!).
In addition to these dependencies, developers can also define runtime tests such as Readiness and Health checks which in turn help Qstack’s Application Orchestration to automate services and the rescheduling of applications (in the case of unhealthy Apps).
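Readiness and Health checks are expressed in Kubernetes as probes on the container spec. A sketch of the relevant fragment, assuming a hypothetical HTTP app with `/healthz` and `/ready` endpoints on port 8080:

```yaml
    livenessProbe:           # health check: a failing container is restarted
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
    readinessProbe:          # readiness check: a failing Pod is removed from Service traffic
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```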
Qstack’s UI makes the health of Applications across all of their loosely coupled components and hard dependencies easy to see by combining multiple checks into a simple notification health icon. Qstack also tracks Events across the cluster and per Application in real time. This audit log exists in addition to the existing notifications and alerting system for the physical and virtual infrastructure that Qstack manages, which is beyond the scope of schedulers.
III. Config
Store config in the environment
There are many ways to store configurations for Applications, including in backing services like databases and key value stores, or in config files that are injected during deployment. Kubernetes gives Qstack Applications a very elegant way to enable a “build once, deploy many different ways” model. ConfigMaps and Secrets are one method Applications can use to externalize configurations and secrets so that they do not need to enter the code base. These configurations are consumed in applications as environment variables, or as files and folders prefilled with the values from the configurations. They are persisted in the backing etcd key value store of the cluster and can be shared between applications. ConfigMaps are stored in clear text but Secrets are base64 encoded before they are stored. Qstack’s UI displays both and how they are mapped into containers.
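As a sketch, a ConfigMap and its consumption as environment variables might look like this (all names, keys, and the image are hypothetical):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config          # hypothetical config name
data:
  DB_HOST: mariadb          # cluster DNS name of a backing service
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: example/web:1.0  # placeholder image
    env:
    - name: DB_HOST
      valueFrom:
        configMapKeyRef:    # pull the value out of the ConfigMap
          name: app-config
          key: DB_HOST
    - name: DB_PASSWORD     # Secrets are injected the same way
      valueFrom:
        secretKeyRef:
          name: db-secret   # hypothetical Secret holding a shared password
          key: password
```

The container image stays identical across environments; only the referenced ConfigMap and Secret values change.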
IV. Backing services
Treat backing services as attached resources
A MySQL database is a simple example of a backing service. By considering the service as an attached resource, an Application only relies on the protocol to communicate with that resource and doesn’t meddle with the implementation. This way the implementation can be swapped out by using, for example, a local high performance MySQL app instead of AWS RDS.
Services in Qstack that an Application depends upon are in themselves independent Applications. They can also be grouped with the Application using it as a single Application group with separate scaling and update controls.
A simple example of this is WordPress. When you deploy WordPress with Qstack, you can choose to deploy a MariaDB database service with it, which WordPress will configure and use once it is ready. This database service is used by the WordPress frontend through a cluster-only DNS address of the service. Simply by changing the MariaDB service’s “selector” we can easily replace the database with another (protocol compatible) service. Since Qstack manages hypervisors and bare-metal machines, that service may also live outside of the Kubernetes cluster and even be securely VLANed into the user or project account running the cluster. This is great for combining existing services with new applications.
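The selector swap can be sketched like this: the Service name (and hence its cluster DNS address) stays stable while the backend behind it changes. Labels and names here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-db        # stable cluster DNS name the frontend connects to
spec:
  ports:
  - port: 3306              # MySQL protocol port
  selector:
    app: mariadb            # points at the MariaDB Pods...
    # app: percona          # ...swap the selector to target a protocol-compatible backend
```

Because the frontend only knows the DNS name and the protocol, the implementation behind the Service can be replaced without touching the application.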
V. Build, release, run
Strictly separate build and run stages
Application Orchestration is primarily concerned with the deployment and maintenance of application binaries. There are however excellent solutions available today for Continuous Deployment (CD) and Integration (CI) that run well on Qstack’s Application Orchestration layer because they have plugins for Kubernetes. Good examples of these are Jenkins and Gitlab that can both be deployed from Qstack’s personalizable app store (community editions).
You can also use Qstack’s UI to assist in the development and pre-packaging stages of CI: build an app starting from a single container image, add configurations, Deployment update rules, Services and persistent storage claims, watch application metrics to determine good resource limits, or even jump into running containers to live-debug the application. Once the app behaves well, you can export the backing YAML configurations, create Dockerfiles from your container history, and use them as the basis for a reusable Helm chart for the CI workflow or for the built-in App store.
VI. Processes
Execute the app as one or more stateless processes
The 12 factors principles focus on best practices for a stateless application with the goal of implementing Software as a Service (SaaS). Stateless applications share nothing and should NOT keep any long-lived cache around because they should assume that their backing services will handle optimization and they could be replaced at any moment. Many applications that work well under Application Orchestration are not stateless and never will be. The good news is, in Qstack, we support both models.
An Application in Qstack is a grouping of processes (apps) that themselves might be made up of individual processes (pods and containers). It’s a conceptual and visual model that makes it a lot easier to understand the big picture of an application that is made up by many loosely coupled moving components that are individually scaled. In fact, a Qstack application can be both stateful and stateless because of this.
VII. Port binding
Export services via port binding
12 factor apps must be self-contained processes, i.e. no additional software should be injected into the execution environment to deliver the functionality of the process. This means that the hosts an app is deployed to should not need to be prepared specifically, e.g. have a preinstalled Apache web server for the app to use; rather, the process should have a built-in web server that is bound to specific ports exposed by the app.
In Kubernetes we build on containers, which implement this concept very well: one main process should be running in each container, answering requests on predefined ports. If this process fails, the container can be automatically restarted according to the restart policy we define in our Application definition.
The port bindings of a container do not expose it to the wild, however. Containers are contained (one or more) within a Pod, the smallest unit of scale in Kubernetes, which controls whether the containers’ port bindings are exposed to the cluster. Pods are then labelled (like tags) so that Services can find them by their labels (loosely coupled, remember!) and expose them to just the cluster or externally to a wider network, e.g. the internet.
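The Pod-to-Service relationship above can be sketched as follows (labels, names, ports, and the image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web                # Services find Pods by label, not by name
spec:
  containers:
  - name: web
    image: example/web:1.0  # placeholder; serves HTTP itself, no host-level Apache needed
    ports:
    - containerPort: 8080   # the port bound by the app's built-in server
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer        # externally reachable; use ClusterIP for cluster-only exposure
  selector:
    app: web                # loosely coupled: matches any Pod carrying this label
  ports:
  - port: 80
    targetPort: 8080
```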
Because Services watch for Pod changes and readiness, they have to be updated often. If they are a load balancing service, this means that they have to keep track of which hosts the app is being scheduled on or descheduled from. For an external-facing service like a load balanced front-end of a webapp, this means that the load balancer rules must be changed in real time. Qstack automates this process in the background and provides the cluster with software-based load balancers that can even be shared with IP addresses outside of the cluster (e.g. from a VM).
Additionally, Qstack’s UI is reactive (based on the excellent Meteor framework), so changes to Kubernetes Services, e.g. when one is bound to a load balancer’s public IP, are reflected in the UI in real time. This enhances operational knowledge and security and makes it easy to see when apps are ready to be used or if they are having issues. If you need to expose a service temporarily to the outside world, you can simply change the type of the Service to a load balancer from the UI.
VIII. Concurrency
Scale out via the process model
Applications must be able to scale out as individual processes. In 12 factor speak this means horizontal scaling, or in other words adding replicas rather than giving the application process more resources (vertical scaling).
Horizontal scaling is the default scaling model in Kubernetes but with some tricks like having a “nanny service” you can also scale pods vertically.
Qstack’s UI enhances Kubernetes and makes it very easy to scale applications both manually and by enabling Pod auto-scaling with a single click. The current implementation is based on CPU metrics as the scaling factor, but with an upcoming version of Kubernetes we will also support custom metrics, e.g. scaling by database transactions or other application-specific metrics. Autoscaling is dependent on continuously collecting resource metrics for all applications, which Qstack configures and does automatically. This also allows us to display live resource metrics per Node, Application, Pod and Container.
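Under the hood, CPU-based autoscaling is expressed with a HorizontalPodAutoscaler resource. A sketch, assuming a hypothetical Deployment named `web` (the target API group may differ by Kubernetes version):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1     # varies by Kubernetes version
    kind: Deployment
    name: web                          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80   # add replicas when average CPU exceeds 80%
```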
Horizontal scaling’s downside is that it can put restrictions on state sharing through storage. Qstack automates the creation, formatting and mounting of EBS-type volumes into each Pod replica. This is very “12 factory”; however, for Big Data apps or Job-based “worker” workloads it is very common to share data across all processes. This may bend the share-nothing model of the 12 factors, but just think of it as an attached backing service like a database. Qstack’s automated persistent storage for Apps is not shareable between Pods, but fortunately, through built-in support for shared storage protocols like NFS, Ceph and GlusterFS, shared volumes are easy too.
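An application requests such a volume with a PersistentVolumeClaim. A sketch with an illustrative claim name and size; the access mode is where the per-replica vs. shared distinction shows up:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data            # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce           # EBS-type block volume: mounted by one node at a time
  # - ReadWriteMany         # shared protocols like NFS or GlusterFS allow this instead
  resources:
    requests:
      storage: 10Gi
```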
IX. Disposability
Maximize robustness with fast startup and graceful shutdown
For elastic scaling, rapid deployment of code, and portability of applications in case of hardware failures or configuration changes, among other things, we need to minimize startup time. For data consistency and stability, we need to allow Application processes (Pods and containers) to shut down gracefully.
By now most IT departments have seen that containers “boot” very fast, instantly even, once the backing container image has been downloaded to the host. To reduce startup times caused by public internet latency, you could deploy your own private container repository as an app inside Qstack; then the only delay in startup time (at least for stateful apps) will be the time it takes for Qstack to allocate and deliver persistent storage to your application and acquire IP addresses for a load balancer (if externally facing). Qstack will make sure to re-allocate the same volumes to a restarted or migrated Pod, effectively making the Pod disposable but the data persistent.
Once an application is deleted or scaled down (Pods deleted), Services that are automagically connected to the Application check whether the developer defined any Readiness checks that they can continuously evaluate to know if the application is still allowed to receive traffic. Once the internal process of the app (the container in the Pod) stops, the Service stops sending it traffic, which reduces the need to worry about graceful termination of processes. By default, however, we give Pods and their containers 30 seconds to shut down gracefully before terminating them and removing them from a host. This grace period can of course be made shorter or longer, and there are also hooks available, e.g. to perform data preparation before the main application processes are started or shut down.
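The grace period and lifecycle hooks are declared on the Pod spec. A sketch (the image and the hook scripts are hypothetical):

```yaml
spec:
  terminationGracePeriodSeconds: 60    # default is 30; lengthen for slow flushes
  containers:
  - name: web
    image: example/web:1.0             # placeholder image
    lifecycle:
      postStart:                       # runs right after the container starts
        exec:
          command: ["sh", "-c", "/app/prepare-data.sh"]       # hypothetical script
      preStop:                         # runs before the container is signalled to stop
        exec:
          command: ["sh", "-c", "/app/drain-connections.sh"]  # hypothetical script
```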
X. Dev/prod parity
Keep development, staging, and production as similar as possible
There are mainly three reasons why production, staging, and development get out of sync; here I’ll paraphrase the 12 factors explanations:
- Development setups are updated often but production environments infrequently.
- The organization hasn’t implemented a DevOps strategy so developers and operations don’t share responsibilities; developers code and ops deploy and manage environments.
- There is a tools gap between development and the other environments. This happens because developers crave speed in their development environments for fast debugging and are tempted to use lighter solutions e.g. for backing services. This ends up creating “in production” only bugs which are the worst kind of bugs!
In my opinion, dev/prod parity is one of the most important principles of the 12 factors, because ignoring it is arguably the biggest time waster of them all!
First of all, it’s important to separate the concept of the production deployment of an application and the production capabilities of the underlying environment. In Qstack, we put great effort into truly understanding infrastructure operations and how Kubernetes needs to be configured, scaled and managed both as a development cluster and in production mode. To the application developer there should be no difference and for operations there should be as little to no effort to manage the two.
With Qstack the IT environments can be very different e.g. physical bare metal servers or mixed types of hypervisors but the application cluster functionality stays the same because Qstack’s hybrid capabilities make them into a uniform infrastructure.
When there is no functional difference in the backing environments for the application developer, the production version of the application is the same as the development version, deployed and updated in the same manner. Operations and developers don’t even have to know how to create Kubernetes clusters, because Qstack enables anyone with a user account to deploy an application as the first step and then takes care of creating the backing Kubernetes cluster.
For more frequent production updates, we have already discussed enabling easier CI/CD pipelines by providing an easy-to-use packaging format and the ability to roll out updates and roll back new versions or tweaks. Helm-deployed components automatically get a release label in their metadata that sticks with them throughout their lifecycle and that, among other things, allows Qstack to discover them and present them as Applications. This lets you have multiple versions of an app, e.g. dev, staging and production, simply by naming the application deployments differently. Optionally, the deployments can be isolated by means of Kubernetes Namespaces or by running in separate clusters, which is also cost effective because Qstack takes care of the nitty-gritty details of the cluster; e.g. development clusters should be optimized for cost (pack those VMs!) while production clusters should remove any single points of failure and optimize for many other factors including QoS, application metrics and, of course, cost.
From the DevOps perspective, the common language of Application Orchestration which includes application definitions and packaging as well as the shared responsibilities between developers, operations, Qstack, and Kubernetes automation steers your app to follow the 12 factors.
XI. Logs
Treat logs as event streams
Pods, or their containers rather, generate logs. Kubernetes also generates events and persists error messages as logs if scheduling fails or a container emits an error message on some failure. These logs can be fed into log analytics apps to make it easier to find specific events in production systems, graph trends, and enable active alerting.
Qstack is not an application monitoring system, although the UI does supply an advanced health and readiness check across multiple application components and cluster nodes as well as basic application and node resource metrics. Qstack does, however, aggregate and index all logs and metrics in the backing infrastructure of the clusters, just like it does for all infrastructure it manages, and pipes those logs into the built-in Elasticsearch and Usage databases for later analysis, alerting and detailed usage reports.
Using Qstack’s UI you can view individual container logs on demand, or download the cluster’s access credentials and tail multiple logs at the same time using “kubectl logs -f …”.
XII. Admin processes
Run admin/management tasks as one-off processes
Kubernetes and Qstack offer many ways to perform one-off processes on Applications and clusters. There’s the Job resource that can be run as an application or as a temporary worker in an application or a cluster. Kubernetes and Helm (which Qstack exposes as an API) also offer PRE and POST hooks for various stages of application modifications, as well as a direct execution command, “kubectl exec …”, that can run a process directly in a running container. Qstack also offers web-based console access to containers for performing operations manually, and since Qstack shares persistent volumes, load balancers, firewalls and other resources between regular server instances and the Kubernetes clusters’ instances and applications, you have many more options to create and automate one-off processes.
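A Job for a one-off admin task can be sketched as below; the task, image, and command are hypothetical, and in the spirit of factor XII the Job runs the same codebase as the app itself:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate          # hypothetical one-off admin task
spec:
  template:
    spec:
      restartPolicy: Never  # run to completion instead of restarting forever
      containers:
      - name: migrate
        image: example/web:1.0                     # same image as the app (factor XII)
        command: ["sh", "-c", "/app/migrate.sh"]   # hypothetical admin command
```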
Beyond 12 Factors
Qstack’s Application Orchestration could be called DevOps “in-a-box.” I’m excited to get it into our customers’ and partners’ hands. We began with the idea that creating and deploying applications should be easy and fun. So, we made it the job of Qstack and Kubernetes to automate the hard parts and made a real-time Web UI that understands and translates Kubernetes’ loosely coupled primitives into Applications that are easy to understand and automate. The 12 factor app principles and our own teams’ decades of experience building scalable applications and operating enterprise-size IT architectures and a public cloud have served as excellent guides to the development of Qstack’s Application Orchestration layer. We also owe a debt of gratitude to the great developers that have contributed to Kubernetes and, in particular, I would like to thank all the great folks who worked on Helm that were invaluable to our implementation of the first Helm-compatible user interface. You know who you are, I owe you many drinks! We will keep on working with all of the special interest groups Greenqloud has joined within the Kubernetes community, as well as continue contributing code to Kubernetes.
We are just getting started!
Our Application Orchestration layer in Qstack is new and there are a ton of features that didn’t make it into our first release, code-named “Quilty,” but we will be updating it fast and often in the coming months. More importantly, there are also many problems in DevOps that still need solving that we would like to tackle. For instance, we still have a ways to go to get to even ~90% automation of complex stateful applications, and many in the industry are coming out with coded solutions to try to fully automate popular systems. For example, CoreOS recently introduced the “Operator” pattern, with experimental implementations for etcd and Prometheus, which can be used in Qstack. The downside to that approach, in my opinion, is that you need to code the behaviour, which I think will not necessarily be the best solution in the long run.
I do believe though that once we can wrap our IT minds and technology around patterns that are broader than what the 12 factors outline, we will get to Application Orchestration Nirvana. We just need to think bigger.
I see today’s applications’ scope of automation, including their services, metrics, storage, etc., as connected but independent “states” — but they need to become more like “countries” in terms of their knowledge of the rest of the Apps around them.
In our hyperconnected world, countries need to consider external factors just as much as they need to respond to internal problems and opportunities. The apps of the future need not only to consider their underlying factors, such as their own databases and API services, but also react to external changes, opportunities, and threats by responding proactively to external signals and events from disconnected but relevant things and services like IoT devices, cognitive services, social analytics, Twitter, etc. This is where I want to see the IT industry and my own company innovate in the near future.
Exciting times ahead indeed!
Eirikur “Eiki” Hrafnsson is co-founder and COO of Greenqloud, the makers of the hybrid cloud platform Qstack. With over 20 years’ experience in enterprise and government IT, Eiki focuses on solving real-world problems by making IT simple.