
Deploying Container-Based Applications with OpenNebula

Marco Mancini

Principal Technologist AI & Data Operations at OpenNebula Systems

Jul 22, 2020

The container revolution

Information and Communication Technologies have evolved at a very fast pace over the past few years, dramatically changing the way information systems and applications are built. Software development has gone through many changes and revolutions of its own, allowing people to focus mainly on their business applications. The key drivers of this shift have been (1) the ability to package and run applications anywhere, regardless of the underlying computing architecture, and (2) the need to keep applications isolated from each other to avoid security risks and interference during operations or maintenance.

Keeping applications isolated on the same host or cluster can be difficult due to the packages, libraries and other software components that are normally required to run them. Hardware virtualization was a solution to this problem, since applications could be kept apart on the same hardware by using Virtual Machines. Packaging an application within a VM also allowed it to run on any infrastructure that supported virtualization, which gave the whole concept a lot of flexibility. However, Virtual Machines come with some serious limitations: moving them around is not that easy, since they are typically quite heavy, and there are always difficulties associated with maintaining and upgrading applications running within a VM.

In recent years, we have all witnessed how container technologies have revolutionized the way enterprise and distributed applications are being developed and deployed. Containers clearly offer a more portable and flexible way of packaging, maintaining and running applications. They allow admins to deploy, move and replicate workloads more quickly and easily than with Virtual Machines. While containers as a concept have been around for a while, Docker was the technology that introduced several crucial changes, making containers more portable and easier to use. This became a turning point towards the adoption of containerization and microservices in software development (e.g. cloud-native development).

Docker gave us an easy way to create container-based applications and to package them as portable images that specify the software components the container will run. Docker’s technology brought cloud-like flexibility to any infrastructure capable of running containers, with its container image tools allowing developers to build libraries of images, compose applications from multiple images, and launch those containers and applications on local or remote infrastructure alike.
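To make this concrete, here is a minimal sketch using the Docker SDK for Python; the build path, image tag, registry name and port mapping are just placeholders for illustration, not part of any real setup.

```python
# Minimal sketch using the Docker SDK for Python ("pip install docker").
# The build path, image tag and port mapping below are illustrative placeholders.
import docker

client = docker.from_env()  # connects to the local Docker daemon

# Build a portable image from a directory containing a Dockerfile
image, build_logs = client.images.build(path="./my-app", tag="my-app:1.0")

# Launch the image as a detached container on this (or any Docker-capable) host
container = client.containers.run("my-app:1.0", detach=True,
                                  ports={"8080/tcp": 8080})
print(container.short_id, container.status)

# Pushing the image to a registry is what makes it runnable anywhere else:
# client.images.push("registry.example.com/my-app", tag="1.0")
```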


Orchestrating containers

Nowadays, many companies have embraced a cloud-native paradigm in developing applications and have shifted from a “monolithic” approach to a microservice approach. While deploying a single container can be an easy task, things get more complicated when deploying multi-container applications across distributed hosts, since in those cases a Docker engine alone is not enough.

This is where container orchestrators (such as Kubernetes or Docker Swarm) play an important role: they schedule containers across different servers, move containers to a new host when one becomes unhealthy, restart containers when they fail, manage overlay networks so that containers on different hosts can communicate, orchestrate storage to provide persistent volumes to stateful applications, and so on.
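As a brief illustration of what “restart containers when they fail” looks like in practice, here is a sketch of a replicated Deployment created with the official Kubernetes Python client; the image, labels and namespace are placeholders chosen for the example.

```python
# Sketch of a replicated Deployment using the official Kubernetes Python client
# ("pip install kubernetes"). Image, labels and namespace are illustrative.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

container = client.V1Container(
    name="web",
    image="nginx:1.19",  # placeholder image
    ports=[client.V1ContainerPort(container_port=80)],
)

template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "web"}),
    spec=client.V1PodSpec(containers=[container]),
)

spec = client.V1DeploymentSpec(
    replicas=3,  # the scheduler spreads these replicas across nodes
    selector=client.V1LabelSelector(match_labels={"app": "web"}),
    template=template,
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=spec,
)

# Kubernetes will reschedule or restart the pods of this Deployment if a node
# becomes unhealthy or a container crashes.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```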

However, container technologies (e.g. Docker, Kubernetes) come with some serious limitations of their own, for example around security (application containers share the host OS kernel) and multi-tenancy. In order to provide a multi-tenant and secure environment in which to deploy containerized applications, one has to provision a different “virtual environment” for each user or group of users, typically by deploying several isolated Kubernetes clusters on top of a Cloud Management Platform (CMP).

The CMP is then responsible for managing and orchestrating the underlying virtual resources (i.e. virtual machines, virtual networks and storage) used by the different Kubernetes deployments, which in turn are in charge of scheduling application containers within those isolated environments. This approach adds an extra control layer that ends up increasing management complexity, resource consumption and operational costs.
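In practice this provisioning is driven through the CMP’s API. The sketch below instantiates a VM template once per tenant (for example, a “Kubernetes node” template) against OpenNebula’s XML-RPC endpoint using only Python’s standard library; the endpoint, credentials and template ID are assumptions made purely for illustration.

```python
# Sketch: provisioning per-tenant VMs through OpenNebula's XML-RPC API with the
# Python standard library. Endpoint, credentials and template ID are
# placeholders, not real values.
import xmlrpc.client

ONE_ENDPOINT = "http://opennebula.example.com:2633/RPC2"  # assumed endpoint
SESSION = "oneadmin:password"                             # assumed credentials
K8S_NODE_TEMPLATE_ID = 10                                 # assumed template ID

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)

# one.template.instantiate(session, template_id, vm_name, on_hold,
#                          extra_template, persistent)
for tenant in ["team-a", "team-b"]:
    resp = server.one.template.instantiate(
        SESSION, K8S_NODE_TEMPLATE_ID, f"k8s-node-{tenant}", False, "", False)
    print(tenant, "->", resp[1])  # new VM ID on success, error info otherwise
```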


OpenNebula’s combined approach

But what if we could remove one of those layers? Cloud Management Platforms such as OpenNebula have been built with multi-tenancy and security by design, and already provide powerful orchestration features, albeit for VM-based applications (e.g. networking, block storage, high availability with live/cold migrations, etc.). With the release of version 5.12 “Firework”, OpenNebula has gone a step further to offer a pioneering approach that merges the strengths of a Cloud Management Platform with the many benefits of container technologies.

OpenNebula announced a few months ago that Firecracker would be incorporated as a new supported virtualization technology. This microVM technology, developed by Amazon Web Services (AWS) and widely used as part of its Fargate and Lambda services, was specifically designed for creating and managing secure, multi-tenant container and function-based services. OpenNebula has managed to bridge the gap between two technological worlds, leaving behind the old dilemma between containers (lighter but with weaker security) and Virtual Machines (strong security but higher overhead).

By adopting the increasingly popular approach of running microservices based on containerized applications, and thanks to its native integration with Docker Hub, OpenNebula has now become a powerful alternative for deploying and orchestrating containers as secure, fast microVMs. This solution combines all the features of a solid CMP without adding extra layers of orchestration, thus reducing complexity, resource consumption and operational costs.
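As a taste of what this looks like from the API side, the sketch below registers a VM template for an image previously imported from the Docker Hub marketplace, again over XML-RPC. The names, sizes and virtual network are placeholders, and the Firecracker-specific attributes (such as the microVM kernel) are deliberately left out here; refer to the OpenNebula Firecracker driver documentation for those.

```python
# Sketch: registering a VM template that boots an image imported from the
# Docker Hub marketplace. Names, sizes and the virtual network are placeholders;
# Firecracker-specific attributes (e.g. the microVM kernel) are intentionally
# omitted and should be taken from the OpenNebula Firecracker driver docs.
import xmlrpc.client

ONE_ENDPOINT = "http://opennebula.example.com:2633/RPC2"  # assumed endpoint
SESSION = "oneadmin:password"                             # assumed credentials

TEMPLATE = '''
NAME   = "nginx-microvm"
CPU    = 1
MEMORY = 128
DISK   = [ IMAGE = "nginx" ]          # image previously imported from Docker Hub
NIC    = [ NETWORK = "vnet-public" ]  # placeholder virtual network
'''

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)
resp = server.one.template.allocate(SESSION, TEMPLATE)
print("new template ID (or error):", resp[1])
```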

We have just embarked on this amazing journey towards a truly innovative model for deploying and running containerized applications, based on the combination of two fascinating technologies: OpenNebula and Firecracker. In the coming months we’ll be working hard to improve this new integration and get the most out of it, and to make it as easy as possible for admins and developers to use OpenNebula as a solution for their needs. Fasten your seat belts and enjoy the flight! 🚀

PS – Here you can see a full comparison between OpenNebula and Kubernetes. And if you are interested in trying OpenNebula’s new integration with Firecracker and Docker Hub yourself, here is a screencast (part of this step-by-step tutorial) on how to deploy a one-node Firecracker cloud on an AWS bare-metal server:
