Nowadays it is difficult to find a place where the benefits of virtualization are not praised. One of my favorites is consolidation, which has put an end to the “one application, one server” paradigm. However, when you start placing your services on different hosts, you will quickly find that the new model does not scale per se. Soon (ten VMs is enough), you’ll be trying to answer questions like: Where did I put our web server? Is this MAC address already in use? Is this the updated image for the Ubuntu server? And the like.
That was our situation last year; although we organized ourselves with a wiki and a database to store all the information, it became evident that we needed a management layer on top of the hypervisors. We started developing OpenNebula to implement our ideas about the distributed management of virtual machines.
Today OpenNebula is a reality, and it has reached production-quality status (at least, we are using it in production). I’ll briefly show you how we set up our infrastructure with OpenNebula, KVM and Xen.
The Physical Cluster
Our infrastructure is an 8-blade cluster, each blade with 8 GB of RAM and two quad-core CPUs (8 cores per blade). The blades are connected to a private LAN and to the Internet, so they can host public services. The output of the
onehost list command is as follows – it’s been colorized for dramatic purposes 😉 :
Originally the blades were configured as a classical computing cluster.
The Virtual Infrastructure
Now, thanks to OpenNebula the cluster can be easily configured to host multiple services. For example we are using the 8 blades for:
- Two grid clusters (4 nodes each) for our grid projects. Needless to say, we can add/remove nodes to these clusters as we need them
- A grid broker based on the GridWay metascheduler
- A web server, which is actually serving this page
- A couple of teaching clusters for our master’s students, so they can break anything
- A cluster devoted to the RESERVOIR project
- And my favorite: a cluster where we develop OpenNebula. The nodes of this cluster are Xen-capable, and they run as KVM virtual machines. So yes, we are running VMs on Xen nodes that are themselves KVM VMs.
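As an illustration, each VM in these clusters is launched from a plain OpenNebula template. A minimal sketch follows; the name, image path and network here are made up for the example, so adapt them to your setup:

```
NAME   = "grid-node-01"
CPU    = 1
MEMORY = 1024

OS   = [ KERNEL = "/vmlinuz", INITRD = "/initrd.img", ROOT = "sda1" ]
DISK = [ SOURCE = "/images/grid-node.img", TARGET = "sda1", READONLY = "no" ]
NIC  = [ NETWORK = "Grid" ]
```

The NIC section only names a virtual network; OpenNebula picks a free lease from it, which is what keeps MAC and IP management out of our hands.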
This is the output of the
onevm list command:
One of the most useful features of the new version of OpenNebula is the networking support. You can define several networks, and OpenNebula will lease you MAC addresses from the network you want. So you do not have to worry about colliding addresses. The MAC addresses are assigned using a simple pattern so you can derive the IP from the MAC (see the OpenNebula Networking guide for more info).
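To make the MAC-to-IP pattern concrete: with the usual scheme, the last four bytes of a lease’s MAC address are simply the four IPv4 octets written in hex, after a two-byte prefix. A small sketch of the derivation, assuming that layout (check the Networking guide for the exact pattern in your configuration):

```python
def mac_to_ip(mac: str) -> str:
    """Derive the IPv4 address from an OpenNebula-style MAC lease.

    Assumes the common pattern: a two-byte prefix followed by the
    four IP octets encoded in hexadecimal.
    """
    octets = mac.lower().split(":")[2:]  # drop the two prefix bytes
    return ".".join(str(int(o, 16)) for o in octets)

print(mac_to_ip("02:00:c0:a8:00:05"))  # → 192.168.0.5
```

This is why a glance at a VM’s MAC is enough to know which address (and hence which virtual cluster) it belongs to.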
We are using a virtual network for each virtual cluster, output from
onevnet list and
onevnet show commands:
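For reference, each of these virtual networks is defined with a short template of its own. A sketch of what such a definition can look like (the network name, bridge and address range are invented for the example):

```
NAME            = "Grid"
TYPE            = RANGED
BRIDGE          = br0
NETWORK_SIZE    = C
NETWORK_ADDRESS = 192.168.0.0
```

One template like this per virtual cluster is all it takes; after that, VMs just reference the network by name.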
Thanks to the hypervisors and OpenNebula, we can shape our 8 blades into any combination of services, and we can resize these services to suit our needs. As can be seen above, you can easily find where the web server is running, so you can get a console when you need one, or migrate the VM to another blade…
Although management is now easy with OpenNebula, setting everything up (installing the physical blades, creating the VMs, configuring the network…) is a difficult task. Kudos to Javi and Tino for getting the system running!
Lead Cloud Engineer & Chief Architect at OpenNebula