Qstack’s New Application Orchestration Module: The Basics

Blake Greene, Olafur Ingthorsson

What is Kubernetes?

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cluster. The objective of Kubernetes is to abstract away the complexity of managing a fleet of containers, which represent packaged applications that include everything needed to run wherever they’re provisioned.  

Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. A key feature of Kubernetes is that it actively manages the containers to ensure the state of the cluster continually matches the user’s intentions.

Kubernetes enables you to respond quickly to customer demand by scaling or rolling out new features. It also allows you to make maximum use of your hardware. At its core, a Kubernetes cluster contains pods: each pod is a group of Docker-based containers that can run any container-ready application.

Ok, so what are containerized applications?

A containerized “application” is a collection of microservices that together deliver a meaningful service to users. Each microservice has a particular role and usually runs in a dedicated pod (a set of containers). Applications can be uploaded as customizable YAML scripts, deployed from a Docker repository (create from image), or deployed from pre-packaged Helm charts, which are Kubernetes-ready applications maintained in an official Helm repository.

Got it. So, what is Qstack’s Application Orchestration (AO) module, again?

Qstack’s new AO module gives DevOps teams an easy-to-use interface for deploying and managing containerized applications on top of Kubernetes clusters, without requiring them to use the native kubectl command-line interface. For more technical users, the AO module also exposes the Kubernetes API and includes a kubectl console.

Unlike the traditional approach of running apps on hosts such as virtual machines, container-based applications leverage OS-level virtualization and support portability, scalability, and self-healing. Kubernetes clustering provides a layer for managing, or orchestrating, multiple disparate services as a single coherent application.

The AO module is especially powerful when it comes to deploying and managing applications on top of the Kubernetes cluster. When a new application is deployed, the user can set the number of replicated pods to enhance reliability and high availability, and can adjust the replication size later as needed.
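
For readers who want to see what this looks like beneath the UI, a replicated application in Kubernetes is typically described by a Deployment whose replicas field sets the number of pods. The sketch below is illustrative only; the application name and image are placeholders, not Qstack-specific values, and the replication setting in the AO UI is assumed to map to this field:

    # Minimal Deployment sketch: "replicas" controls how many pods run.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp                    # hypothetical application name
    spec:
      replicas: 3                    # number of replicated pods for availability
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myorg/myapp:1.0   # hypothetical container image
              ports:
                - containerPort: 8080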

The terminology

Application Orchestration Drop-down Menu

YAML scripts

When an application is deployed from a YAML script, the application “image” (or container image file) is by default obtained from the Docker Hub repository, but it can also be pulled from any correctly configured, user-defined Docker registry. The registry URL can be set in the YAML script itself or in the AO UI. YAML scripts offer the ability to customize an application, but they usually have the limitation of containing only an individual application, e.g. an Nginx or Apache web server, as opposed to a set of apps that together deliver a larger application.
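
As a concrete illustration, the image field in a pod specification determines where the container image is pulled from: a bare name resolves to Docker Hub, while a fully qualified name points at a user-defined registry. This is a minimal sketch; the registry hostname is a placeholder:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: nginx
          image: nginx        # bare name -> pulled from Docker Hub by default
          # For a user-defined registry, use a fully qualified image name
          # instead (placeholder hostname):
          # image: registry.example.com/team/nginx:stable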

Helm charts

A Helm chart (or blueprint) is a set of templates that describes all of the items required to set up an application in Kubernetes. When an application is deployed using a Helm chart, Helm converts the templates into the Kubernetes YAML files required to automatically deploy the necessary components. An example is the WordPress chart, which deploys frontend and database components along with configuring the passwords, disk allocation, and IP reservation required for a standalone application.
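
To make the template idea concrete, a chart is simply a directory containing chart metadata, default values, and templates. The sketch below shows the conventional layout and a minimal Chart.yaml; the chart name and description are hypothetical and not taken from the official WordPress chart:

    # Conventional layout of a chart directory:
    #   mychart/
    #     Chart.yaml        -> chart metadata
    #     values.yaml       -> default, user-overridable configuration values
    #     templates/        -> Kubernetes YAML templates rendered at deploy time
    #       deployment.yaml
    #       service.yaml
    #
    # A minimal Chart.yaml for a hypothetical chart:
    apiVersion: v1
    name: mychart
    version: 0.1.0
    description: A sketch of a chart packaging a small web application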

Clusters

A cluster is a group of physical or virtual nodes tied together for deploying scalable, container-based applications.

  • A new cluster is created for each user the first time the user deploys a containerized application.
  • Cluster size is three nodes by default, but it can be scaled manually when a new application is created or afterwards.
  • One of the nodes in the cluster becomes the “master” node, managing the other worker nodes in the cluster and supporting administration tasks, including the API and CLI interfaces.

Pods

A pod is a group of one or more containers, their shared storage, and options about how to run them. Each pod gets its own IP address.
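
A minimal sketch of that definition: a pod with two containers sharing an emptyDir volume. The pod name, container names, and commands are illustrative only:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-storage-pod          # hypothetical pod name
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}                  # scratch volume shared by both containers
      containers:
        - name: writer
          image: busybox
          command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data
        - name: reader
          image: busybox
          command: ["sh", "-c", "sleep 5; cat /data/hello.txt; sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data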

  • The AO module will automatically and dynamically scale pods within the cluster, depending on the load.
  • Users can either upload their own YAML scripts for deploying a new application or use any of the preconfigured Helm charts directly from the official Helm repository.
  • Each application is typically composed of several pods, where each pod runs a part of the application that solves a well-defined task. Each pod can be scaled independently, either manually or automatically by Kubernetes (see the sketch below).
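
For the automatic case, Kubernetes commonly scales a deployment’s pods with a HorizontalPodAutoscaler. This is a minimal sketch targeting the hypothetical myapp Deployment from the earlier example; the limits and CPU threshold are illustrative:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: myapp-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: myapp                        # hypothetical Deployment to scale
      minReplicas: 2
      maxReplicas: 10
      targetCPUUtilizationPercentage: 80   # add pods when average CPU exceeds 80%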

For a more detailed explanation of Qstack’s new container and application orchestration feature, along with examples, read our handy how-to guide.
