Operating-system-level virtualization is a technology that has recently emerged and is gaining acceptance in cloud infrastructures, offering better performance and scalability than other virtualization technologies such as Hardware-Assisted Virtual Machines (HVM).
Linux Containers (LXC) bring this technology to Linux by creating containers that resemble complete, isolated Linux virtual machines on the physical Linux host, all while sharing the kernel with the virtual portion of the system. A container is a virtual environment with its own process and network space. LXC uses the Linux kernel's control groups (cgroups) and namespaces to provide this isolation, so containers have their own view of the OS, the process ID space, the file-system structure and the network interfaces. Since they rely on kernel features and no hardware is emulated at all, the impact on performance is minimal, and starting up, shutting down, creating and destroying containers are fairly quick operations. Comparative studies, such as Canonical's LXD vs. KVM benchmarks, show the advantages of LXD systems over KVM; LXD is built on top of LXC and uses the same kernel features, so LXC's performance should be the same.
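To give a feel for how lightweight these operations are, here is a minimal sketch that creates, starts and stops a container while timing each step. It uses the python3-lxc binding as an assumption (any LXC front end would do), and the container name, distribution and release are just examples:

```python
# Minimal sketch using the python3-lxc binding (assumes the lxc and
# python3-lxc packages are installed and this runs as root).
import time
import lxc

c = lxc.Container("demo")  # "demo" is an arbitrary example name

# Populate the root file system from the generic "download" template.
if not c.defined:
    c.create(template="download",
             args={"dist": "ubuntu", "release": "trusty", "arch": "amd64"})

t0 = time.time()
c.start()                      # boot the container
c.wait("RUNNING", timeout=10)  # block until it reaches the RUNNING state
print("started in %.2f s, state=%s" % (time.time() - t0, c.state))

t0 = time.time()
c.shutdown(timeout=10)         # clean shutdown
print("stopped in %.2f s" % (time.time() - t0))
```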
Nowadays, public cloud Infrastructure as a Service (IaaS) providers, like Amazon, only offer container services based on Docker containers deployed on virtual machines. Docker is designed to package a single application with all of its dependencies into a standardized unit for software development, not to create a virtual machine. Only a few systems offer IaaS on bare-metal container infrastructures; Joyent, for instance, is able to put to use all of the advantages that OS-level virtualization provides.
In private cloud scenarios, however, OS-level virtualization has not had quite the acceptance it deserves. Private cloud managers, like OpenStack and Eucalyptus, do not offer the support this type of technology needs. OpenNebula is a flexible cloud manager that has gained a very good reputation over the last few years, so strengthening its OS-level virtualization support could be a key strategic decision.
This is why LXCoNe was created. It is a virtualization and monitoring driver for OpenNebula, delivered as an add-on, that gives OpenNebula the ability to deploy LXC containers. It contributes to better interoperability, performance and scalability in OpenNebula clouds. The driver is now stable and ready for release, and it is currently being used in the data center of the Instituto Superior Politécnico José Antonio Echeverría in Cuba, with great results. The team is still working on additional features, listed below, to improve the driver.
Features and Limitations
The driver currently provides the following features:
- Deployment of containers on file systems, Logical Volume Manager (LVM) and Ceph.
- Attachment and detachment of network interface cards and disks, both before creating the container and while it is running.
- Monitoring of containers and of the node's resource usage.
- Powering off, suspending, stopping, undeploying and rebooting running containers.
- VNC support.
- Snapshot support when using file systems.
- Limiting a container's RAM usage (see the sketch after this list).
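As an illustration of the last feature, the following sketch caps a running container's RAM through the memory cgroup controller, again using the python3-lxc binding. It shows the kernel mechanism involved, not necessarily how LXCoNe applies the limit internally:

```python
# Illustrative sketch: cap a running container's RAM via the memory cgroup.
# This shows the kernel mechanism, not necessarily LXCoNe's internal code.
import lxc

c = lxc.Container("demo")  # example container name
if c.running:
    # Limit resident memory to 512 MB through the memory cgroup controller.
    c.set_cgroup_item("memory.limit_in_bytes", "512M")
    print("limit:", c.get_cgroup_item("memory.limit_in_bytes"))
```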
It still lacks the following features, on which we are currently working:
- Limiting a container's CPU usage.
- Live migration of containers.
Virtualization Solutions
OpenNebula was designed to be completely independent from the underlying technologies. When the project started, the only supported hypervisors were Xen and KVM; OS-level virtualization was not considered, which probably influenced the way OpenNebula manages physical and virtual resources. Because of this, and due to the differences between the two types of virtualization technologies, there are a few things to keep in mind when using the driver:
Disks
When you successfully attach a hard drive, it will appear inside the container under /media/<Disk_ID>. To detach the hard drive, it must still be mounted inside the container at that same path and must not be in use; otherwise the operation will deliberately fail.
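One way to verify this from inside the container before issuing the detach is to scan /proc for processes holding files open under the mount point. This is a hand-rolled, illustrative check (the DISK_ID value is an example), not something the driver provides:

```python
# Illustrative check (run inside the container): is anything still using
# the disk mounted at /media/<DISK_ID>? DISK_ID 1 is an example.
import os

MOUNTPOINT = "/media/1"

def in_use(path):
    """Return True if any process holds a file open under `path`."""
    for pid in filter(str.isdigit, os.listdir("/proc")):
        fd_dir = "/proc/%s/fd" % pid
        try:
            for fd in os.listdir(fd_dir):
                if os.readlink(os.path.join(fd_dir, fd)).startswith(path):
                    return True
        except OSError:  # process exited or fd vanished while scanning
            continue
    return False

print("busy" if in_use(MOUNTPOINT) else "safe to detach")
```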
Network interfaces (NIC)
If you hot-attach a NIC, it will appear inside the container without any configuration, although the interface itself will be up and ready to be used. This is unlike what happens when you specify NICs in the template and then create the virtual machine: in that case, the NIC will appear fully set up and ready to use, unless you specifically request otherwise.
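For a hot-attached NIC, you therefore have to configure the interface yourself. Here is a minimal sketch from the host using the python3-lxc attach API; the interface name and address are examples that depend on your virtual network:

```python
# Illustrative sketch (run on the host): configure a hot-attached NIC from
# outside by executing `ip` inside the container's namespaces.
# The interface name and address are examples; adjust to your network.
import lxc

c = lxc.Container("demo")  # example container name
c.attach_wait(lxc.attach_run_command,
              ["ip", "addr", "add", "192.168.0.10/24", "dev", "eth1"])
c.attach_wait(lxc.attach_run_command,
              ["ip", "link", "set", "eth1", "up"])  # harmless if already up
```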
Installation
Want to try it? The driver is part of the OpenNebula Add-on Catalog, and the installation process is fully explained in this guide.
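Once the packages are in place, registering an LXC node on the front end boils down to a single onehost call. A sketch, assuming the driver registers under the name lxc (check the guide for the exact names on your version):

```python
# Illustrative sketch: register an LXC node with the OpenNebula front end.
# The "lxc" driver names are an assumption; use the names from the guide.
import subprocess

subprocess.check_call(["onehost", "create", "lxc-node1",  # example hostname
                       "--im", "lxc", "--vm", "lxc",
                       "--net", "dummy"])  # --net only applies to OpenNebula 4.x
```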
Contributions, feedback and issues are very much welcome; get in touch with us through the GitHub repository or by e-mail:
José Manuel de la Fé Herrero: jmdelafe92@gmail.com
Sergio Vega Gutiérrez: sergiojvg92@gmail.com