So, do you want to transform your rigid and compartmentalized infrastructure into a flexible and agile platform where you can dynamically deploy new services and adjust their capacity? If the answer is yes, you want to build what is nowadays called a private cloud.
In this mini-how-to, you will learn to set up such a private cloud in 5 steps with Ubuntu and OpenNebula. We assume that your infrastructure follows a classical cluster-like architecture, with a front-end (cluster01, in the howto) and a set of worker nodes (cluster02 and cluster03).
First, you’ll need to add the following PPA if you’re running Ubuntu 8.04 LTS (Hardy) or Ubuntu 8.10 (Intrepid); if you are running Ubuntu 9.04 (Jaunty Jackalope), the packages are already in the official repositories and you should be fine:
deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main
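One way to enable the repository is to drop that line into its own file and refresh the package index with sudo apt-get update. The file name below is just a convention (any sources.list location works), and on Hardy you would replace intrepid with hardy:

```
# /etc/apt/sources.list.d/opennebula.list  (example location)
deb http://ppa.launchpad.net/opennebula-ubuntu/ppa/ubuntu intrepid main
```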
If everything is set up correctly, you should see the OpenNebula packages:
$ apt-cache search opennebula
libopennebula-dev - OpenNebula client library - Development
libopennebula1 - OpenNebula client library - Runtime
opennebula - OpenNebula controller
opennebula-common - OpenNebula common files
opennebula-node - OpenNebula node
OK, then. Here we go:
- [Front-end (cluster01)] Install the opennebula package:
$ sudo apt-get install opennebula
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  opennebula-common
The following NEW packages will be installed:
  opennebula opennebula-common
0 upgraded, 2 newly installed, 0 to remove and 2 not upgraded.
Need to get 280kB of archives.
After this operation, 1352kB of additional disk space will be used.
Do you want to continue [Y/n]?
...
Setting up opennebula-common (1.2-0ubuntu1~intrepid1) ...
...
Adding system user `oneadmin' (UID 107) ...
Adding new user `oneadmin' (UID 107) with group `nogroup' ...
Generating public/private rsa key pair.
...
Setting up opennebula (1.2-0ubuntu1~intrepid1) ...
oned and scheduler started
As you may see from its output, the apt-get command performs several configuration steps: it creates an oneadmin account, generates an RSA key pair, and starts the OpenNebula daemon.
- [Front-end (cluster01)] Add the cluster nodes to the system. In this case, we’ll be using KVM and no shared storage, so each host is registered with the KVM information driver (im_kvm), the KVM virtualization driver (vmm_kvm) and the ssh transfer driver (tm_ssh). This simple configuration should work out-of-the-box with Ubuntu:
$ onehost add cluster02 im_kvm vmm_kvm tm_ssh
Success!
...
$ onehost add cluster03 im_kvm vmm_kvm tm_ssh
Success!
...
Now, we just have to follow the wise output of the previous commands.
- [Front-end (cluster01)] You need to add the cluster nodes to the known hosts list for the oneadmin user:
$ sudo -u oneadmin ssh cluster02
The authenticity of host 'cluster02 (192.168.16.2)' can't be established.
RSA key fingerprint is 37:41:a5:0c:e0:64:cb:03:3d:ac:86:b3:44:68:5c:f9.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cluster02,192.168.16.2' (RSA) to the list of known hosts.
oneadmin@cluster02's password:
You don’t actually need to log in; answering “yes” to accept the host key is enough, and you can interrupt the password prompt.
- [Worker Node (cluster02,cluster03)] Install the OpenNebula Node package:
$ sudo apt-get install opennebula-node
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  opennebula-common
The following NEW packages will be installed:
  opennebula-common opennebula-node
...
Setting up opennebula-node (1.2-0ubuntu1~intrepid1) ...
Adding user `oneadmin' to group `libvirtd' ...
Adding user oneadmin to group libvirtd
Done.
Note that the oneadmin user is also created on the nodes (no need for NIS here) and added to the libvirtd group, so it can manage the VMs.
- [Worker Node (cluster02,cluster03)] Trust the oneadmin user at the ssh level; just copy the command from the onehost output 😉 :
$ sudo tee /var/lib/one/.ssh/authorized_keys << EOT
> ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAm9n0E4bS9K8NUL2bWh4F78LZWDo8uj2VZiAeylJzct ...7YPgr+Z4+lhOYPYMnZIzVArvbYzlc7HZxczGLzu+pu012a6Mv4McHtrzMqHw== oneadmin@cluster01
> EOT
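Why sudo tee with a here-document instead of a plain redirection? Because a redirection like sudo echo ... > file is performed by your unprivileged shell, not by sudo, so it fails on a root-owned path; tee runs under sudo and writes the file itself. A minimal local sketch of the same pattern, using a temporary file and a dummy key instead of the real path and key:

```shell
# Same tee + here-document pattern as above, but against a temporary file
# (dummy key and path; in the real step the file is
# /var/lib/one/.ssh/authorized_keys and the key comes from onehost's output).
AUTH_KEYS=$(mktemp)
tee "$AUTH_KEYS" > /dev/null << EOT
ssh-rsa AAAAB3Nza...dummykey oneadmin@cluster01
EOT
chmod 600 "$AUTH_KEYS"            # ssh ignores authorized_keys with loose permissions
grep -c '^ssh-rsa' "$AUTH_KEYS"   # counts the key lines that landed in the file
```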
You may want to check that you can log in to the cluster nodes from the front-end, using the oneadmin account, without being asked for a password:
$ sudo -u oneadmin ssh cluster02
You are done! You have your own cloud up and running! Now, you should be able to see your nodes ready to start VMs:
$ onehost list
 HID NAME        RVM  TCPU  FCPU  ACPU    TMEM    FMEM STAT
   0 cluster02     0   100   100   100  963992  902768   on
   1 cluster03     0   100   100   100  963992  907232   on
You may want to check the OpenNebula documentation for further information on configuring the system or to learn how to specify virtual machines.
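To give you a taste of that last step: an OpenNebula virtual machine is described by a small template file. A minimal sketch follows; the name, image path and bridge are placeholders you would adapt to your site, and with the tm_ssh transfer driver the disk image is copied from the front-end to the chosen node over ssh:

```
NAME   = ttylinux
CPU    = 0.5
MEMORY = 128

# Disk image available on the front-end (example path)
DISK   = [ source   = "/var/lib/one/images/ttylinux.img",
           target   = "hda",
           readonly = "no" ]

# Attach the VM to a bridge on the node (example bridge name)
NIC    = [ bridge = "br0" ]
```

You would then submit it with onevm create vm.template and watch it with onevm list; the template reference in the documentation covers the full set of attributes.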