Upgrading from Previous Versions 4.4
This guide describes the upgrade procedure for systems that are already running a 2.x or 3.x OpenNebula. The upgrade preserves all current users, hosts, resources and configurations, for both SQLite and MySQL backends.
Read the Compatibility Guide and Release Notes to know what is new in OpenNebula 4.4.
Before proceeding, make sure you don't have any VMs in a transient state (prolog, migr, epil, save). Wait until these VMs get to a final state (runn, suspended, stopped, done). Check the Managing Virtual Machines guide for more information on the VM life-cycle.
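As a quick check, you can filter the output of onevm list for transient states. This is only a sketch: it assumes the default column layout, where the short state appears in the fifth (STAT) column, and the helper name is illustrative.

```shell
# Filter 'onevm list' output down to VMs still in a transient state.
# Assumes STAT is the 5th column (default layout); adjust if yours differs.
transient_vms() {
  awk 'NR > 1 && $5 ~ /^(prol|migr|epil|save)/'
}

# On a live front-end you would run: onevm list | transient_vms
# Demonstrated here on a captured sample:
printf 'ID USER GROUP NAME STAT\n0 one users web runn\n1 one users db prol\n' | transient_vms
```

An empty result means all VMs are in a final state and you can proceed with the upgrade.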
Stop OpenNebula and any other related services you may have running: EC2, OCCI, and Sunstone. As oneadmin, in the front-end:
<xterm> $ sunstone-server stop
$ oneflow-server stop
$ econe-server stop
$ occi-server stop
$ one stop </xterm>
Backup the configuration files located in /etc/one. You don't need to do a manual backup of your database, the onedb command will perform one automatically.
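If you want an explicit, timestamped copy of the configuration directory, a plain copy is enough. The helper below and its paths are illustrative, not part of the OpenNebula tooling:

```shell
# Copy an OpenNebula configuration directory to a timestamped backup.
# backup_one_conf <source dir> <backup root>; both arguments are examples.
backup_one_conf() {
  src="${1:-/etc/one}"
  dest="${2:-/var/lib/one/backups}/etc-one-$(date +%Y%m%d)"
  mkdir -p "$dest" && cp -a "$src/." "$dest/" && echo "$dest"
}

# On the front-end: backup_one_conf /etc/one /var/lib/one/backups
```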
Follow the Platform Notes and the Installation guide, taking into account that you will already have configured the passwordless ssh access for oneadmin.
It is highly recommended not to keep your current oned.conf, but to adapt the oned.conf file shipped with OpenNebula 4.4 to your setup instead. If for any reason you plan to preserve your current oned.conf file, read the Compatibility Guide and the complete oned.conf reference for the 4.2 and 4.4 versions.
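To see which of your customizations need to be ported, diff the saved 4.2 file against the one shipped with 4.4. The helper name and example paths below are illustrative; adjust them to where you stored your backup:

```shell
# Show the differences between your saved oned.conf and the new default.
conf_diff() {
  diff -u "$1" "$2"
}

# Example (paths depend on your backup location):
#   conf_diff /path/for/one-backups/oned.conf /etc/one/oned.conf
```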
The database schema and contents are incompatible between versions. The OpenNebula daemon checks the existing DB version, and will fail to start if the version found is not the one expected, with the message 'Database version mismatch'.
You can upgrade the existing DB with the 'onedb' command. You can specify any Sqlite or MySQL database. Check the onedb reference for more information.
After you install the latest OpenNebula, and fix any possible conflicts in oned.conf, you can issue the 'onedb upgrade -v' command. The connection parameters have to be supplied using command-line options; see the onedb manpage for more information. Some examples:
<xterm> $ onedb upgrade -v --sqlite /var/lib/one/one.db </xterm>
<xterm> $ onedb upgrade -v -S localhost -u oneadmin -p oneadmin -d opennebula </xterm>
If everything goes well, you should get an output similar to this one:
<xterm> $ onedb upgrade -v -u oneadmin -d opennebula
MySQL Password:
Version read:
4.0.1 : Database migrated from 3.8.0 to 4.2.0 (OpenNebula 4.2.0) by onedb command.

MySQL dump stored in /var/lib/one/mysql_localhost_opennebula.sql
Use 'onedb restore' or restore the DB using the mysql command:
mysql -u user -h server -P port db_name < backup_file

> Running migrator /usr/lib/one/ruby/onedb/4.2.0_to_4.3.80.rb
> Done

> Running migrator /usr/lib/one/ruby/onedb/4.3.80_to_4.3.85.rb
> Done

> Running migrator /usr/lib/one/ruby/onedb/4.3.85_to_4.3.90.rb
> Done

> Running migrator /usr/lib/one/ruby/onedb/4.3.90_to_4.4.0.rb
> Done

Database migrated from 4.2.0 to 4.4.0 (OpenNebula 4.4.0) by onedb command. </xterm>
If you receive the message “ATTENTION: manual intervention required”, read the section Manual Intervention Required below.
After the upgrade is completed, you should run the command onedb fsck.
First, move the 4.2 backup file created by the upgrade command to a safe place.
<xterm> $ mv /var/lib/one/mysql_localhost_opennebula.sql /path/for/one-backups/ </xterm>
Then execute the following command:
<xterm> $ onedb fsck -S localhost -u oneadmin -p oneadmin -d opennebula
MySQL dump stored in /var/lib/one/mysql_localhost_opennebula.sql
Use 'onedb restore' or restore the DB using the mysql command:
mysql -u user -h server -P port db_name < backup_file

Total errors found: 0 </xterm>
You should now be able to start OpenNebula as usual, running 'one start' as oneadmin. At this point, execute onehost sync to update the new drivers in the hosts.

Running onehost sync is important. If the monitoring drivers are not updated, the hosts will behave erratically.
With the new multi-system DS functionality, it is now required that the system DS is also part of the cluster. If you are using System DS 0 for Hosts inside a Cluster, any VM saved (stop, suspend, undeploy) cannot be resumed after the upgrade process.

You will need to have at least one system DS in each cluster. If you don't have one already, create a new system DS with the same definition as system DS 0 (same TM_MAD driver). Depending on your setup this may or may not require additional configuration on the hosts.
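A new system DS can be defined with a short template and registered with the standard CLI. The datastore name, cluster name and TM_MAD value below are only examples; copy the TM_MAD value from your current system DS 0:

```shell
# Write a template for a system datastore that mirrors system DS 0.
# NAME, TM_MAD and the cluster name are examples; use the TM_MAD of your DS 0.
cat > systemds.txt <<'EOF'
NAME   = cluster_system_ds
TM_MAD = shared
TYPE   = SYSTEM_DS
EOF

# Then, on the front-end, register it and add it to the cluster:
#   onedatastore create systemds.txt
#   onecluster adddatastore my_cluster cluster_system_ds
```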
You may also try to recover saved VMs (stop, suspend, undeploy) following the steps described in this thread of the users mailing list.
OpenNebula will continue the monitoring and management of your previous Hosts and VMs.
As a measure of caution, look for any error messages in oned.log, and check that all drivers are loaded successfully. After that, keep an eye on oned.log while you issue the onevm, onevnet, oneimage, oneuser, onehost list commands. Try also using the show subcommand for some resources.
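The log scan can be scripted. The default path below assumes a standard front-end installation, and the helper name is illustrative:

```shell
# Report error or failure lines in oned.log, or confirm there are none.
# Default path assumes a standard front-end installation.
check_oned_log() {
  grep -iE 'error|fail' "${1:-/var/lib/one/oned.log}" || echo "no errors found"
}

# Usage: check_oned_log /var/lib/one/oned.log
```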
If for any reason you need to restore your previous OpenNebula, follow these steps:
If the MySQL database password contains special characters, such as @ or #, the onedb command will fail to connect to it.
The workaround is to temporarily change oneadmin's password to an ASCII string. The SET PASSWORD statement can be used for this:
<xterm> $ mysql -u oneadmin -p
mysql> SET PASSWORD = PASSWORD('newpass'); </xterm>
If you have a datastore configured to use a tm driver not included in the OpenNebula distribution, the onedb upgrade command will show you this message:
ATTENTION: manual intervention required The Datastore <id> <name> is using the custom TM MAD '<tm_mad>'. You will need to define new configuration parameters in oned.conf for this driver, see http://opennebula.org/documentation:rel4.4:upgrade
In OpenNebula 4.4, each tm_mad driver has a TM_MAD_CONF section in oned.conf. If you developed the driver, it should be fairly easy to define the required information looking at the existing ones:
# The configuration for each driver is defined in TM_MAD_CONF. These
# values are used when creating a new datastore and should not be modified
# since they define the datastore behaviour.
#   name         : name of the transfer driver, listed in the -d option of the
#                  TM_MAD section
#   ln_target    : determines how the persistent images will be cloned when
#                  a new VM is instantiated.
#       NONE:   The image will be linked and no more storage capacity will be used
#       SELF:   The image will be cloned in the Images datastore
#       SYSTEM: The image will be cloned in the System datastore
#   clone_target : determines how the non persistent images will be
#                  cloned when a new VM is instantiated.
#       NONE:   The image will be linked and no more storage capacity will be used
#       SELF:   The image will be cloned in the Images datastore
#       SYSTEM: The image will be cloned in the System datastore
#   shared       : determines if the storage holding the system datastore is shared
#                  among the different hosts or not. Valid values: "yes" or "no"

TM_MAD_CONF = [
    name         = "lvm",
    ln_target    = "NONE",
    clone_target = "SELF",
    shared       = "yes"
]