This is the third entry in a blog post series explaining how to make the most out of your OpenNebula 4.4 cloud. In previous posts we explained the enhanced cloud bursting to Amazon features and the multiple groups functionality.
OpenNebula supports different storage backends. You can even create VMs that use disks from several backend technologies at the same time, e.g. Ceph and LVM.
The system datastore is a special Datastore class that holds disks and configuration files for running VMs, instead of Images. Up to OpenNebula 4.2, each Host could only use one system datastore; in OpenNebula 4.4 we have added support for multiple system datastores.
Perhaps the most immediate advantage of this feature is that if your system datastore is running out of space, you can add a second backend and start deploying new VMs there. But the scheduler also knows about the available system datastores, and that opens up more interesting use cases.
Let’s see a quick example. Suppose you have a local SSD disk inside each Host, and also an NFS export mounted. You can define an arbitrary tag, such as SPEED, in each datastore template:
$ onedatastore show ssd_system
...
SPEED = 10

$ onedatastore show nfs_system
...
SPEED = 5
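If you are wondering how to set such a tag, one way is through the `onedatastore update` command, which opens the datastore template in your editor so you can add the attribute by hand (the datastore name below matches the example above; the exact editing workflow may vary with your setup):

```shell
# Opens the ssd_system datastore template in $EDITOR.
# Add the following line, then save and exit:
#
#   SPEED = 10
#
$ onedatastore update ssd_system
```

Because SPEED is a free-form attribute, you can use any name and scale you like, as long as the VM templates reference the same attribute name.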
Those tags can be used in the VM template to request a specific system datastore, or to define the deployment preference:
# This VM will be deployed preferably in the SSD datastore,
# but will fall back to the NFS one if the former is full
$ onetemplate show 2
...
SCHED_DS_RANK = "SPEED"

# This other VM must be deployed only in the ssd system datastore
$ onetemplate show 1
...
SCHED_DS_REQUIREMENTS = "NAME = ssd_system"
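To show where these attributes sit in practice, here is a minimal VM template sketch; the VM name, image name, and sizes are hypothetical, only the SCHED_DS_RANK line is the relevant part:

```
NAME   = "web-vm"
CPU    = 1
MEMORY = 1024
DISK   = [ IMAGE = "ubuntu-base" ]

# Prefer system datastores with a higher SPEED value;
# the scheduler evaluates this expression per datastore
SCHED_DS_RANK = "SPEED"
```

As with host scheduling, SCHED_DS_RANK expresses a preference, while SCHED_DS_REQUIREMENTS is a hard constraint: a VM with an unsatisfiable requirement will stay pending rather than deploy elsewhere.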
What about the load balancing mentioned in the title? Instead of using different storage backends, you may want to install several similar system datastores and distribute your VMs across them. This is configured in the sched.conf file, using the ‘striping’ policy.
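As a rough sketch, the relevant sched.conf fragment would look like the following, assuming the stock policy numbering where 1 selects striping (check the comments in your own sched.conf to confirm):

```
# Datastore scheduling policy:
#   0 = Packing  - fill one system datastore before using the next
#   1 = Striping - spread VMs evenly across system datastores
DEFAULT_DS_SCHED = [
   policy = 1
]
```

With striping enabled, new VMs are balanced across all system datastores that match their requirements, so no single datastore fills up first.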
We hope you find these improvements useful. Let us know what you think in the mailing lists!