Compatibility Guide 4.4

This guide is aimed at OpenNebula 4.2 users and administrators who want to upgrade to the latest version. The following sections summarize the new features and usage changes that should be taken into account, or that are prone to cause confusion. You can check the upgrade process in the following guide.

Visit the Features list and the Release Notes for a comprehensive list of what's new in OpenNebula 4.4.


OpenNebula Administrators and Users

Add-ons Catalog

  • There is a new initiative to host OpenNebula add-ons on GitHub. There you will find community-contributed components that may not be mature enough, or not general-purpose enough, to be included in the main distribution.

Sunstone

  • The rows in the datatables are now ordered by ID in descending order. This behaviour can be changed by each user in the settings dialog, or by default in sunstone-server.conf.
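As a sketch, the server-wide default could look like this in sunstone-server.conf (assuming the 4.4 configuration key is named :table_order:; check the file shipped with your installation):

```
# sunstone-server.conf (fragment)
# Default ordering for Sunstone datatables: "desc" or "asc"
:table_order: "desc"
```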

Users and Groups

  • New secondary groups. These work in a similar way to Unix groups: users have a primary group and, optionally, several secondary groups. This new feature is completely integrated with the current mechanisms, allowing, for example, the following actions:
    • The list of images visible to a user contains all the images shared within any of their groups.
    • You can deploy a VM using an Image from one of your groups, and a second Image from another group.
    • New resources are created in the owner’s primary group, but users can later change that resource’s group.
    • Users can change their primary group to any of their secondary ones.
  • The quota subsystem now supports volatile disk usage and limits; see the VOLATILE_SIZE attribute here.
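As a sketch, a VM quota limiting volatile disk usage could be set with a quota template along these lines (the values are illustrative; the other VM quota attributes are the usual ones):

```
VM = [
  VMS           = "10",
  MEMORY        = "8192",
  CPU           = "20",
  VOLATILE_SIZE = "20480"   # MB of volatile disk the user or group may allocate
]
```

Such a template would typically be applied with the oneuser quota or onegroup quota commands.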

Scheduling

  • There is a new default scheduling policy for both hosts and datastores: fixed. This policy ranks hosts and datastores according to a PRIORITY attribute that can be set manually by the administrator.
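For example, an administrator could rank a host under the fixed policy simply by adding the attribute to its template (the datastore case is analogous; the value is illustrative):

```
# Added via "onehost update <host_id>"
PRIORITY = 10   # higher values are ranked first under the fixed policy
```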

Virtual Machines

  • The ''shutdown --hard'' action can be performed on VMs in the UNKNOWN state. This means that if the guest was shut down from within, or crashed, users can still save the persistent or snapshotted disks.
  • The default device prefix, DEV_PREFIX, is now 'hd' for CDROM-type disks, regardless of the value set in oned.conf.

Contextualization

  • Support for cloud-init: OpenNebula is now able to contextualize guests using cloud-init.
  • Improvements in contextualization: ability to add INIT_SCRIPTS. Check this guide to learn how to define contextualization in your VM templates.
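A minimal CONTEXT section using the new attribute might look like this (the script name is illustrative; the script is assumed to be registered as a file image in a datastore):

```
CONTEXT = [
  FILES_DS     = "$FILE[IMAGE=\"setup.sh\"]",
  INIT_SCRIPTS = "setup.sh"
]
```

The scripts listed in INIT_SCRIPTS are executed by the contextualization packages at boot time, in the order given.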

Storage

Resource Management

Monitoring

  • New monitoring model: changed from a pull model to a push model, thus increasing the scalability of an OpenNebula cloud. More information here.

Developers and Integrators

Monitoring

  • Ganglia drivers have been moved out of the main OpenNebula distribution and are available as an add-on.
  • The arguments of the im_mad poll action drivers have changed; you can see the complete reference in the Information Manager Driver guide.
# 4.2 arguments
hypervisor=$1
host_id=$2
host_name=$3
 
# 4.4 arguments
hypervisor=$1
datastore_location=$2
collectd_port=$3
monitor_push_cycle=$4
host_id=$5
host_name=$6
  • Probes returning float values will be ignored (set to 0); they must return integers.
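If a probe computes a float internally, it can round the value before printing it; a sketch (the USEDCPU key and the sample value are illustrative):

```shell
#!/bin/sh
# A probe must emit integer values; round any internally computed float first.
cpu_usage="42.7"                               # illustrative float computed by the probe
echo "USEDCPU=$(printf '%.0f' "$cpu_usage")"   # prints USEDCPU=43
```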

Storage

  • Changes in the Ceph, SCSI and LVM Datastores. The ''BRIDGE_LIST'' attribute is now mandatory in the template used to create these types of datastores.
  • CephX support. More information here.
  • CDROM images are no longer cloned. This makes VM instantiation faster when a big DVD is attached.
  • The iSCSI drivers have been moved out of the main OpenNebula distribution and are available as an add-on.
  • New LVM driver model: a shared model for KVM, as well as support for compressed images in LVM. Check more info on the new model here.
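A Ceph datastore template would now include the mandatory attribute; a minimal sketch (host names and pool are illustrative):

```
NAME        = "cephds"
DS_MAD      = "ceph"
TM_MAD      = "ceph"
DISK_TYPE   = "RBD"
POOL_NAME   = "one"
BRIDGE_LIST = "cephfrontend1 cephfrontend2"   # hosts that can run storage operations
```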

EC2 Hybrid Cloud / Cloudbursting

  • AWS SDK Ruby is used instead of the Java CLI.
  • The ec2.conf file was renamed to ec2_driver.default. In this file you can define the default values for EC2 instances.
  • The ec2rc file has been removed. A new configuration file is available: ec2_driver.conf.
  • Now AWS credentials and regions can be defined per host instead of specifying them in the driver configuration in oned.conf. You can customise these values in ec2_driver.conf. More info
  • The CLOUD attribute has been deprecated; now you have to use HOST to define more than one EC2 section in the template. More info
  • The following EC2 template attributes have been removed:
    • AUTHORIZED_PORTS: removed because the right approach is to use SECURITY_GROUPS. OpenNebula used to modify the default security group; a much better approach is to achieve the same by defining different security groups and assigning VMs to them.
    • USERDATAFILE: OpenNebula 4.4 drops support for it due to a security risk: it allowed practically anyone to retrieve files from the OpenNebula front-end and stage them into an Amazon EC2 VM. The alternative is to read the file and set its contents in the USERDATA attribute, which is still supported.
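As a sketch of the alternative, the contents of the former user data file are placed directly in the template (the AMI value and the section layout are illustrative; whether the driver expects the data encoded depends on your setup):

```
EC2 = [
  AMI      = "ami-12345678",
  USERDATA = "<contents of the former user data file>"
]
```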
  • Now the VM monitoring provides more info. New tags that can be accessed inside each VM:
AWS_DNS_NAME
AWS_PRIVATE_DNS_NAME
AWS_KEY_NAME
AWS_AVAILABILITY_ZONE
AWS_PLATFORM
AWS_VPC_ID
AWS_PRIVATE_IP_ADDRESS
AWS_IP_ADDRESS
AWS_SUBNET_ID
AWS_SECURITY_GROUPS
AWS_INSTANCE_TYPE
  • The IPADDRESS monitoring attribute has been renamed to AWS_PRIVATE_IP_ADDRESS.

Generic Hybrid Cloud / Cloudbursting

  • There is better support for custom cloud bursting drivers; you can read more in this guide.
  • im_mad drivers must return PUBLIC_CLOUD=YES.
  • There is a new generic attribute for VMs: PUBLIC_CLOUD. This allows users to create templates that can be run locally or in different public cloud providers. Public cloud VMM drivers must make use of this:
  DISK = [ IMAGE_ID = 7 ]

  PUBLIC_CLOUD = [
    TYPE         = "jclouds",
    JCLOUDS_DATA = "..." ]

  PUBLIC_CLOUD = [
    TYPE    = "ec2",
    AMI     = "...",
    KEYPAIR = "..." ]

EC2 Server

  • Now instance types are based on OpenNebula templates instead of files. You can still use the old system by changing the :use_file_templates: parameter in econe.conf, but using the new system is recommended, since file-based templates will be removed soon.
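Under the new system, an instance type is backed by a regular OpenNebula template; a sketch, assuming the EC2 server matches instance types against the template name (check econe.conf for the exact lookup mechanism in your installation):

```
# Registered with "onetemplate create"
NAME   = "m1.small"
CPU    = 1
MEMORY = 1024
```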
  • New implemented methods:
    • describe-snapshots
    • create-snapshot
    • delete-snapshot
    • create-tags: for instances, amis, volumes and snapshots
    • describe-tags
    • remove-tags
  • Enhanced methods:
    • describe-*: one or more IDs can now be specified
    • describe-instances: includes VMs in the DONE state for 15 minutes after termination. You can configure this behaviour in the conf.
    • register: now you have to use this command to make an OpenNebula image usable in EC2. Missing features that will be added: arch, kernel and extra disk metadata.
    • create-volume: now you can create a volume from a snapshot
    • run-instance: instead of using ERB files, templates are now based on OpenNebula templates. Therefore you can use restricted attributes and set permissions like any other OpenNebula resource.
  • econe-* tools are no longer maintained; you can use euca2ools or hybridfox to test the new functionality.

XML-RPC API

  • Improved scalability: oned.conf now supports new parameters for the XML-RPC server (see xml-rpc_server_configuration).
    • MAX_CONN: Maximum number of simultaneous TCP connections the server will maintain
    • MAX_CONN_BACKLOG: Maximum number of TCP connections the operating system will accept on the server's behalf without the server accepting them from the operating system
    • KEEPALIVE_TIMEOUT: Maximum time in seconds that the server allows a connection to be open between RPCs
    • KEEPALIVE_MAX_CONN: Maximum number of RPCs that the server will execute on a single connection
    • TIMEOUT: Maximum time in seconds the server will wait for the client to do anything while processing an RPC
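In oned.conf these parameters would look like the following (the values are illustrative, not necessarily the shipped defaults):

```
MAX_CONN           = 15
MAX_CONN_BACKLOG   = 15
KEEPALIVE_TIMEOUT  = 15
KEEPALIVE_MAX_CONN = 30
TIMEOUT            = 15
```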
  • New parameter in one.vm.deploy
    • The Datastore ID of the target system datastore where the VM will be deployed. It is optional, and can be set to -1 to let OpenNebula choose the datastore.
  • New method one.user.addgroup
  • New method one.user.delgroup
  • New method one.host.rename
  • New method one.cluster.rename