Installing the Software 4.4
This page shows you how to install OpenNebula from the binary packages.
Using the packages provided on our site is the recommended method: it ensures you install the latest version and avoids possible package divergences between distributions. There are two alternatives: you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.
Do not forget that we offer Quickstart guides for:
If there are no packages for your distribution, head to the Building from Source Code guide.
Before installing:
There are packages for the front-end, distributed in the various components that make up OpenNebula, and packages for the virtualization host. See the CentOS/RHEL annex for a description of these packages.
To install a CentOS/RHEL OpenNebula front-end with packages from our repository, execute the following as root:
<xterm>
# cat << EOT > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=http://downloads.opennebula.org/repo/CentOS/6/stable/\$basearch
enabled=1
gpgcheck=0
EOT
# yum install opennebula-server opennebula-sunstone opennebula-ruby
</xterm>
Note that \$basearch is escaped so the shell does not expand it inside the here-document; it must reach the repo file literally for yum to substitute it.
To install a CentOS/RHEL OpenNebula front-end with packages downloaded from our page, untar the tar.gz in the front-end and run:
<xterm>
# tar xvzf CentOS-6-opennebula-<OpenNebula Version>.tar.gz
# yum localinstall opennebula-server-*.rpm opennebula-sunstone-*.rpm opennebula-ruby-*.rpm
</xterm>
CentOS/RHEL Package Description
These are the packages available for this distribution:
Because the home directory of oneadmin is located in /var, it violates the SELinux default policy. So, for the passwordless ssh configuration to work, you should disable SELinux by setting SELINUX=disabled in /etc/selinux/config.
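If you prefer not to edit the file by hand, the change can be applied with sed; this is a sketch to be run as root (a reboot is required for SELinux to be fully disabled):

```shell
# Disable SELinux permanently (takes effect after a reboot).
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

# Optionally switch to permissive mode right away, until the reboot.
setenforce 0
```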
Before installing:
<xterm> # zypper ar -f -n packman http://packman.inode.at/suse/openSUSE_12.3 packman </xterm>
To install an openSUSE OpenNebula front-end with packages from our repository, execute the following as root:
<xterm>
# zypper addrepo --no-gpgcheck --refresh -t YUM http://downloads.opennebula.org/repo/openSUSE/12.3/stable/x86_64 opennebula
# zypper refresh
# zypper install opennebula opennebula-sunstone
</xterm>
To install an openSUSE OpenNebula front-end with packages downloaded from our page, untar the tar.gz and run as root:
<xterm>
# tar xvzf openSUSE-12.3-<OpenNebula version>.tar.gz
# zypper install opennebula opennebula-sunstone
</xterm>
After installation you need to manually create /var/lib/one/.one/one_auth
with the following contents:
<xterm> oneadmin:<password> </xterm>
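A minimal sketch of creating this file as root, where password is a placeholder to be replaced with the desired password:

```shell
# Create oneadmin's authentication file ("password" is a placeholder).
mkdir -p /var/lib/one/.one
echo "oneadmin:password" > /var/lib/one/.one/one_auth
chown -R oneadmin:oneadmin /var/lib/one/.one
chmod 600 /var/lib/one/.one/one_auth
```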
openSUSE Package Description
These are the packages available for this distribution:
Also, the JSON ruby library packaged with Debian 6 is not compatible with ozones. To make it work, a new gem should be installed and the old one disabled. You can do so by executing these commands:
<xterm>
$ sudo gem install json
$ sudo mv /usr/lib/ruby/1.8/json.rb /usr/lib/ruby/1.8/json.rb.no
</xterm>
To install OpenNebula on a Debian/Ubuntu front-end with packages from our repositories, execute as root:
<xterm>
# wget http://downloads.opennebula.org/repo/Debian/repo.key
# apt-key add repo.key
</xterm>
Debian <xterm> # echo "deb http://downloads.opennebula.org/repo/Debian/7 stable opennebula" > /etc/apt/sources.list.d/opennebula.list </xterm>
Ubuntu 12.04 <xterm> # echo "deb http://downloads.opennebula.org/repo/Ubuntu/12.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list </xterm>
Ubuntu 13.04 <xterm> # echo "deb http://downloads.opennebula.org/repo/Ubuntu/13.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list </xterm>
To install the packages on a Debian/Ubuntu front-end:
<xterm>
# apt-get update
# apt-get install opennebula opennebula-sunstone
</xterm>
To install a Debian/Ubuntu OpenNebula front-end with packages downloaded from our page, execute the following:
<xterm>
$ sudo dpkg -i opennebula_*.deb opennebula-sunstone_*.deb
$ sudo apt-get install -f
</xterm>
Debian/Ubuntu Package Description
These are the packages available for these distributions:
Some OpenNebula components need ruby libraries. OpenNebula provides a script that installs the required gems, as well as some development library packages that are needed.
As root execute: <xterm> # /usr/share/one/install_gems </xterm>
The previous script is prepared to detect common Linux distributions and install the required libraries. If it fails to find the packages needed on your system, install these packages manually:
If you want to install only a set of gems for a specific component, read Building from Source Code, where this is explained in more depth.
For cloud bursting, a newer nokogiri gem than the one packaged by current distributions is required. If you are planning to use cloud bursting, you need to install nokogiri >= 1.4.4 prior to running install_gems:
<xterm> # sudo gem install nokogiri -v 1.4.4 </xterm>
Log in as the oneadmin user and follow these steps:
Create ~/.one/one_auth (replace password with the desired password):
<xterm>
$ mkdir ~/.one
$ echo "oneadmin:password" > ~/.one/one_auth
$ chmod 600 ~/.one/one_auth
</xterm>
<xterm> $ one start </xterm>
Remember to always start OpenNebula as oneadmin!
After OpenNebula is started for the first time, you should check that the commands can connect to the OpenNebula daemon. In the front-end, run the onevm command as oneadmin:
<xterm>
$ onevm list
ID USER GROUP NAME STAT CPU MEM HOSTNAME TIME
</xterm>
If instead of an empty list of VMs you get an error message, then the OpenNebula daemon could not be started properly:
<xterm>
$ onevm list
Connection refused - connect(2)
</xterm>
The OpenNebula logs are located in /var/log/one; you should have at least the files oned.log and sched.log, the core and scheduler logs. Check oned.log for any error messages, marked with [E].
<xterm>
[ONE][I]: Checking database version.
[ONE][E]: (..) error: no such table: db_versioning
[ONE][E]: (..) error: no such table: user_pool
[ONE][I]: Bootstraping OpenNebula database.
</xterm>
After installing the OpenNebula packages on the front-end, the following directory structure will be used:
When the front-end is installed and verified, it is time to install the packages for the nodes if you are using KVM. To install a CentOS/RHEL OpenNebula KVM node with packages from our repository, execute the following as root:
<xterm>
# yum install opennebula-node-kvm
</xterm>
For further configuration and/or installation of other hypervisors, check their specific guides: Xen, KVM and VMware.
When the front-end is installed, it is time to install the virtualization nodes. Depending on the chosen hypervisor, check their specific guides: Xen, KVM and VMware.
When the front-end is installed, it is time to install the packages for the nodes if you are using KVM. To install a Debian/Ubuntu OpenNebula KVM node with packages downloaded from our page, execute the following:
<xterm>
$ sudo dpkg -i opennebula-node-kvm_*.deb
$ sudo apt-get install -f
</xterm>
For further configuration and/or installation of other hypervisors, check their specific guides: Xen, KVM and VMware.
Due to the Debian packaging policy, there are some paths which are different in the Debian/Ubuntu packages with respect to OpenNebula's documentation. In particular:
This step can be skipped if you have installed the kvm node package for CentOS or Ubuntu, as it has already been taken care of.
The OpenNebula package installation creates a new user and group named oneadmin on the front-end. This account will be used to run the OpenNebula services and to perform regular administration and maintenance tasks, which means you will eventually need to log in as that user or use the "sudo -u oneadmin" method.
The hosts also need this user created and configured. Make sure you use the same uid and gid as on the front-end.
<xterm>
$ id oneadmin
uid=1001(oneadmin) gid=1001(oneadmin) groups=1001(oneadmin)
</xterm>
In this case the user id is 1001 and the group id is also 1001.
Then log in as root on your hosts and follow these steps:
Create the oneadmin group. Make sure that its id is the same as on the front-end, in this example 1001:
<xterm>
# groupadd --gid 1001 oneadmin
</xterm>
Create the oneadmin account; we will use the OpenNebula var directory as the home directory for this user:
<xterm>
# useradd --uid 1001 -g oneadmin -d /var/lib/one oneadmin
</xterm>
You can use any other method to create a common oneadmin group and account in the nodes, for example NIS.
You need to create ssh keys for the oneadmin user and configure the host machines so it can connect to them using ssh without the need for a password.
Follow these steps in the front-end:
Generate oneadmin's ssh keys:
<xterm>
$ ssh-keygen
</xterm>
When prompted for a password, press enter so the private key is not encrypted.
Append the public key to ~/.ssh/authorized_keys to let the oneadmin user log in without the need to type a password:
<xterm>
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
</xterm>
<xterm>
$ chmod 700 ~/.ssh/
$ chmod 600 ~/.ssh/id_rsa.pub
$ chmod 600 ~/.ssh/id_rsa
$ chmod 600 ~/.ssh/authorized_keys
</xterm>
Tell the ssh client not to ask before adding hosts to the known_hosts file. It is also a good idea to reduce the connection timeout in case of network problems. This is configured in ~/.ssh/config; see man ssh_config for a complete reference:
<xterm>
$ cat ~/.ssh/config
ConnectTimeout 5
Host *
    StrictHostKeyChecking no
</xterm>
Check that the sshd daemon is running on the hosts, and remove any Banner option from the sshd_config file on the hosts. Finally, copy the front-end's /var/lib/one/.ssh directory to each one of the hosts in the same path.
To test your configuration, just verify that oneadmin can log in to the hosts without being prompted for a password.
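For example, assuming a host named node01 (a placeholder for one of your hosts), a successful setup looks like this when run as oneadmin on the front-end:

```shell
# Should print the remote host's name without prompting for a password.
ssh node01 hostname
```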
A network connection is needed by the OpenNebula front-end daemons to access the hosts, to manage and monitor the hypervisors, and to move image files. It is highly recommended to set up a dedicated network for this purpose.
There are various network models (please check the Networking guide to find out the networking technologies supported by OpenNebula), but they all have something in common. They rely on network bridges with the same name in all the hosts to connect Virtual Machines to the physical network interfaces.
The simplest network model corresponds to the dummy
drivers, where only the network bridges are needed.
For example, a typical host with two physical networks, one for public IP addresses (attached to eth0 NIC) and the other for private virtual LANs (NIC eth1) should have two bridges:
<xterm>
$ brctl show
bridge name  bridge id          STP enabled  interfaces
br0          8000.001e682f02ac  no           eth0
br1          8000.001e682f02ad  no           eth1
</xterm>
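The bridges themselves can be created with brctl. The following is a sketch assuming the eth0/eth1 interface names used above; note that bridges created this way do not persist across reboots, so use your distribution's network configuration scripts to make them permanent:

```shell
# Create the public bridge and attach the public NIC.
brctl addbr br0
brctl addif br0 eth0

# Create the private bridge and attach the private NIC.
brctl addbr br1
brctl addif br1 eth1

# Bring both bridges up.
ip link set br0 up
ip link set br1 up
```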
OpenNebula uses Datastores to manage VM disk Images. There are two configuration steps needed to perform a basic set up:
The suggested configuration is to use a shared FS, which enables most of OpenNebula's VM control features. OpenNebula can work without a shared FS, but this will force the deployment to always clone the images, and you will only be able to perform cold migrations.
The simplest way to achieve a shared FS backend for OpenNebula datastores is to export via NFS from the OpenNebula front-end both the system (/var/lib/one/datastores/0) and the images (/var/lib/one/datastores/1) datastores. They need to be mounted by all the virtualization nodes to be added to the OpenNebula cloud.
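A minimal sketch of such an NFS setup, assuming the front-end is reachable as frontend and the nodes sit on the 192.168.0.0/24 network (both are placeholders to adjust to your environment):

```shell
# On the front-end: export the datastores directory via NFS (run as root).
echo '/var/lib/one/datastores 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)' >> /etc/exports
exportfs -a

# On each virtualization node: mount the exported datastores (run as root).
mount -t nfs frontend:/var/lib/one/datastores /var/lib/one/datastores
```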
To add a node to the cloud, four parameters are needed: the name/IP of the host, and the virtualization, network and information drivers. Using the recommended configuration above, and assuming a KVM hypervisor, you can add your host 'node01' to OpenNebula in the following fashion (as oneadmin, in the front-end):
<xterm> $ onehost create node01 -i kvm -v kvm -n dummy </xterm>
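After a monitoring cycle the new host should appear in onehost list; if it shows an error state instead, check /var/log/one/oned.log for the cause:

```shell
# List the registered hosts and their monitoring state (run as oneadmin).
onehost list
```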
To learn more about the host subsystem, read this guide.
Now that you have a fully functional cloud, it is time to start learning how to use it. A good starting point is this overview of the virtual resource management.