Planning the Installation 4.4

In order to get the most out of an OpenNebula Cloud, we recommend that you create a plan with the features, performance, scalability, and high availability characteristics you want in your deployment. This guide provides information to help you plan an OpenNebula installation, so you can easily architect your deployment and understand the technologies involved in the management of virtualized resources and their relationships.


Architectural Overview

OpenNebula assumes that your physical infrastructure adopts a classical cluster-like architecture, with a front-end and a set of hosts where Virtual Machines (VMs) will be executed. There is at least one physical network joining all the hosts with the front-end.

[Figure: high-level architecture of the cluster, its components and their relationships]

The basic components of an OpenNebula system are:

  • Front-end that executes the OpenNebula services.
  • Hypervisor-enabled hosts that provide the resources needed by the VMs.
  • Datastores that hold the base images of the VMs.
  • Physical networks used to support basic services such as interconnection of the storage servers and OpenNebula control operations, and VLANs for the VMs.

OpenNebula presents a highly modular architecture that offers broad support for commodity and enterprise-grade hypervisor, monitoring, storage, networking and user management services. This guide briefly describes the different choices you can make for the management of the different subsystems. If your specific services are not supported, we recommend checking the drivers available in the Add-on Catalog. We also provide information and support on how to develop new drivers.

[Figure: OpenNebula Cloud Platform Support]

Front-End

The machine that holds the OpenNebula installation is called the front-end. This machine needs network connectivity to each host, and possibly access to the storage Datastores (either by direct mount or network). The base installation of OpenNebula takes less than 50MB.

OpenNebula services include:

:!: Note that these components communicate through XML-RPC and may be installed on different machines for security or performance reasons.

There are several certified platforms to act as front-end for each version of OpenNebula. Refer to the platform notes and choose the one that best fits your needs.

OpenNebula's default database backend is SQLite. If you are planning a production or medium- to large-scale deployment, you should consider using MySQL.
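
As a point of reference, the database backend is selected in the DB section of /etc/one/oned.conf. The fragment below is a minimal sketch; the MySQL server, credentials and database name are placeholder values to adapt to your environment:

  # Default SQLite backend
  DB = [ backend = "sqlite" ]

  # MySQL backend (placeholder values)
  # DB = [ backend = "mysql",
  #        server  = "localhost",
  #        port    = 0,
  #        user    = "oneadmin",
  #        passwd  = "oneadmin",
  #        db_name = "opennebula" ]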

If you are interested in setting up a highly available cluster for OpenNebula, check the OpenNebula High Availability Guide.

The maximum number of servers (virtualization hosts) that can be managed by a single OpenNebula instance (zone) strongly depends on the performance and scalability of the underlying platform infrastructure, mainly the storage subsystem. We do not recommend more than 500 servers within each zone, although there are users running 1,000 servers in a single zone. The OpenNebula Zones (oZones) component allows for the centralized management of multiple OpenNebula instances (zones), which may in turn manage different administrative domains. You may also find the guide on how to tune OpenNebula for large deployments useful.

Monitoring

The monitoring subsystem gathers information about the hosts and the virtual machines, such as host status, basic performance indicators, VM status and capacity consumption. This information is collected by executing a set of static probes provided by OpenNebula. The output of these probes is sent to OpenNebula in one of two ways:

  • UDP-push Model: Each host periodically sends monitoring data via UDP to the front-end, which collects and processes it in a dedicated module. This model is highly scalable, and its limit (in terms of number of VMs monitored per second) is bounded by the performance of the server running oned and of the database server. Please read the UDP-push guide for more information.
  • Pull Model: OpenNebula periodically queries each host and executes the probes via SSH. This mode is limited by the number of active connections that can be made concurrently, as hosts are queried sequentially. Please read the KVM and Xen SSH-pull guide or the ESX-pull guide for more information.

:!: Default: UDP-push Model is the default IM for KVM and Xen in OpenNebula >= 4.4.

Please check the Monitoring Guide for more details.
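
For reference, monitoring drivers are declared as IM_MAD entries in /etc/one/oned.conf. The fragment below is a sketch of a UDP-push setup for KVM; the listener port and timing arguments are illustrative and should be checked against the configuration file shipped with your version:

  # UDP listener that collects monitoring data on the front-end
  # (port and timing values are illustrative)
  IM_MAD = [
      name       = "collectd",
      executable = "collectd",
      arguments  = "-p 4124 -f 5 -t 50 -i 20" ]

  # KVM probes; with the collectd driver enabled, hosts push their
  # results over UDP instead of being polled over SSH
  IM_MAD = [
      name       = "kvm",
      executable = "one_im_ssh",
      arguments  = "-r 3 -t 15 kvm" ]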

Virtualization Hosts

The hosts are the physical machines that will run the VMs. There are several certified platforms to act as nodes for each version of OpenNebula. Refer to the platform notes and choose the one that best fits your needs. The Virtualization Subsystem is the component in charge of talking with the hypervisor installed on the hosts and taking the actions needed for each step in the VM lifecycle.

OpenNebula natively supports three hypervisors:

  • Xen
  • KVM
  • VMware

:!: Default: OpenNebula is configured to interact with hosts running KVM.

Please check the Virtualization Guide for more details on the supported virtualization technologies.
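
As a quick illustration, a host is added to OpenNebula by choosing its monitoring (IM), virtualization (VMM) and network drivers at registration time; the hostname below is a placeholder:

  # Register an example KVM host with the default dummy network driver
  $ onehost create kvm-node01 --im kvm --vm kvm --net dummy

  # The host should eventually reach the MONITORED state
  $ onehost list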

If you are interested in failover protection against hardware and operating system outages within your virtualized IT environment, check the Virtual Machines High Availability Guide.

Storage

OpenNebula uses Datastores to handle the VM disk Images. A Datastore is any storage medium used to store disk images for VMs; previous versions of OpenNebula referred to this concept as the Image Repository. Typically, a Datastore will be backed by SAN/NAS servers. In general, each Datastore has to be accessible through the front-end using any suitable technology: NAS, SAN or direct attached storage.

When a VM is deployed, its Images are transferred from the Datastore to the hosts. Depending on the actual storage technology used, this can mean an actual transfer, a symbolic link or the setup of an LVM volume.

OpenNebula ships with three different datastore classes:

  • System Datastores hold images for running VMs. Depending on the storage technology used, these temporary images can be complete copies of the original image, qcow deltas or simple filesystem links.
  • Image Datastores store the disk image repository. Disk images are moved or cloned to/from the System Datastore when VMs are deployed or shut down, or when disks are attached or snapshotted.
  • The File Datastore is a special datastore used to store plain files rather than disk images. These plain files can be used as kernels, ramdisks or context files.

Image Datastores can be of different types depending on the underlying storage technology:

  • File-system, to store disk images in a file form. The files are stored in a directory mounted from a SAN/NAS server.
  • vmfs, a datastore specialized in the VMFS format, to be used with the VMware hypervisor. It cannot be mounted on the OpenNebula front-end since VMFS is not *nix compatible.
  • LVM, a datastore driver that gives OpenNebula the possibility of using LVM volumes instead of plain files to hold the Virtual Images. This reduces the overhead of having a file-system in place and thus increases performance.
  • Ceph, to store disk images using Ceph block devices.

:!: Default: The System and Image Datastores are configured to use a shared filesystem.

Please check the Storage Guide for more details.
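
As an illustration, a filesystem-backed Image Datastore using the shared transfer driver could be registered with a short template like the one below; the datastore name is a placeholder, and the driver choices simply match the shared-filesystem default mentioned above:

  $ cat ds.conf
  # Filesystem datastore driver with the shared (e.g. NFS) transfer driver
  NAME   = production_images
  DS_MAD = fs
  TM_MAD = shared

  $ onedatastore create ds.conf
  $ onedatastore list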

Networking

OpenNebula provides an easily adaptable and customizable network subsystem in order to better integrate with the specific network requirements of existing datacenters. At least two different physical networks are needed:

  • A service network is needed by the OpenNebula front-end daemons to access the hosts in order to manage and monitor the hypervisors, and move image files. It is highly recommended to install a dedicated network for this purpose.
  • An instance network is needed to offer network connectivity to the VMs across the different hosts. To make effective use of your VM deployments, you will probably need to make one or more physical networks accessible to them.

The OpenNebula administrator may associate one of the following drivers to each Host:

  • dummy: Default driver that doesn't perform any network operation. Firewalling rules are also ignored.
  • fw: Firewall rules are applied, but networking isolation is ignored.
  • 802.1Q: restricts network access through VLAN tagging, which also requires support from the hardware switches.
  • ebtables: restricts network access through ebtables rules. No special hardware configuration is required.
  • ovswitch: restricts network access with the Open vSwitch virtual switch.
  • VMware: uses the VMware networking infrastructure to provide an isolated and 802.1Q compatible network for VMs launched with the VMware hypervisor.

:!: Default: The default configuration connects the virtual machine network interface to a bridge in the physical host.

Please check the Networking Guide to find out more about the networking technologies supported by OpenNebula.
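
For illustration, a simple bridged virtual network matching the default bridge setup could be defined with a template similar to the following; the network name, bridge and address range are placeholder values:

  $ cat private.net
  # Ranged network attached to a bridge present on every host
  NAME            = "private-net"
  TYPE            = RANGED
  BRIDGE          = br0
  NETWORK_ADDRESS = 10.0.0.0
  NETWORK_SIZE    = 254

  $ onevnet create private.net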

Authentication

You can choose from the following authentication models to access OpenNebula:

:!: Default: OpenNebula comes by default with an internal built-in user/password authentication.

Please check the External Auth guide to find out more about the authentication technologies supported by OpenNebula.
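
As a minimal example of the built-in user/password scheme (the username and password shown are placeholders), accounts are created with the oneuser command; other authentication drivers are selected as described in the External Auth guide:

  # Create an account using the default built-in (core) user/password driver
  $ oneuser create jdoe somepassword

  # Check the new account and the authentication driver it uses
  $ oneuser list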

Advanced Components

Once you have an OpenNebula cloud up and running, you can install the following advanced components:

  • Application Flow and Auto-scaling: OneFlow allows users and administrators to define, execute and manage multi-tiered applications, or services composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines is deployed and managed as a single entity, and is completely integrated with the advanced OpenNebula user and group management.
  • Multiple Zones and Virtual Data Centers: The OpenNebula Zones (oZones) component allows for the centralized management of multiple OpenNebula instances (zones), which may in turn manage different administrative domains. These zones can be effectively shared through the Virtual DataCenter (VDC) abstraction. A VDC is a set of virtual resources (images, VM templates, virtual networks and virtual machines) and the users that manage those virtual resources, all sustained by infrastructure resources offered by OpenNebula.
  • Cloud Bursting: Cloud bursting is a model in which the local resources of a Private Cloud are combined with resources from remote Cloud providers. Such support for cloud bursting enables highly scalable hosting environments.
  • Public Cloud: Cloud interfaces can be added to your Private Cloud if you want to provide partners or external users with access to your infrastructure, or to sell your overcapacity. The following interfaces provide simple, remote management of cloud (virtual) resources at a high abstraction level: the Amazon EC2 and EBS APIs, or OGF OCCI.
  • Application Insight: OneGate allows Virtual Machine guests to push monitoring information to OpenNebula. Users and administrators can use it to gather metrics, detect problems in their applications, and trigger OneFlow auto-scaling rules.