As we quickly approach the Easter holidays, the next release of OpenNebula (3.4, codename Wild Duck) is taking shape. This new release is focused on extending the storage capabilities of OpenNebula. Wild Duck will include support for multiple Datastores, overcoming the single Image Repository limitation of previous versions.
A Datastore is any storage medium (typically SAN/NAS servers) used to store disk images for VMs. The use of multiple Datastores will help you plan your storage by:
- Balancing I/O operations between storage servers
- Setting different storage performance features for different VM types or users
- Defining different SLA policies (e.g. backup) for different VM types or users
Wild Duck will ship with four different datastore types (see the configuration sketch after this list):
- File-system, to store disk images in file form. The files are kept in a directory mounted from a SAN/NAS server.
- iSCSI/LVM, to store disk images in block device form. Images are presented to the hosts as iSCSI targets.
- VMware, a datastore specialized for the VMware hypervisor that handles the vmdk format.
- Qcow, a datastore specialized for the qemu-qcow format, taking advantage of its snapshotting capabilities.
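To give an idea of the workflow, registering a new datastore should come down to writing a short template and feeding it to the CLI. This is only a minimal sketch, assuming the attribute names DS_MAD (datastore driver) and TM_MAD (transfer driver) and an NFS-backed file-system datastore; check the 3.4 documentation for the final syntax:

    # nfs_ds.txt -- minimal datastore template (names and values are illustrative)
    NAME   = nfs_images
    DS_MAD = fs        # file-system datastore driver
    TM_MAD = shared    # transfer driver for a shared (NFS-mounted) directory

    $ onedatastore create nfs_ds.txt
    $ onedatastore list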
As usual in OpenNebula, the system has been architected to be highly modular and hackable, and you can easily adapt the base types to your deployment.
The Datastore subsystem is fully integrated with the current authorization framework, so access to a given datastore can be restricted to a given group or user. This enables the management of complex multitenant environments.
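For instance, a datastore could be handed over to a group and opened up through the usual ACL rules. A rough sketch, assuming onedatastore follows the same chgrp convention as the other resources and that DATASTORE is a valid ACL resource in 3.4 (the IDs and names below are made up):

    $ onedatastore chgrp nfs_images web-dev       # give the datastore to the web-dev group
    $ oneacl create "@105 DATASTORE/#100 USE"     # let group 105 USE datastore 100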
Also, by popular request, we are bringing back the Cluster abstraction. Clusters are logical sets of physical resources, namely hosts, networks and datastores. In this way you can better plan your capacity provisioning strategy by grouping resources into clusters.
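From the CLI, the grouping might look roughly like this (a sketch assuming onecluster subcommands along the lines of addhost, adddatastore and addvnet; the names are illustrative):

    $ onecluster create production
    $ onecluster addhost production host01
    $ onecluster adddatastore production nfs_images
    $ onecluster addvnet production public_net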
This release also includes important contributions from our user community, especially from Research in Motion (support for qcow datastores), Logica (extended support for EC2 hybrid setups) and Terradue 2.0 (VMware-based datastores). THANKS!
This is great news. Are you going to retain compatibility with the current one_image driver? We’ve built our own image driver (hacking into the remotes/image/fs/* files). Also, are you planning documentation on how to write Datastore drivers?
Hi Simon,
Yes, we are keeping the same protocols. So, I’d say that your driver should work out of the box.
The only difference is the mv script. In OpenNebula 3.2 a persistent file was saved back through host->front_end->image_repo, and now we are doing host->image_repo directly. The mv script from remotes/image/fs has been moved to the TM as MVDS.
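Roughly, the MVDS action for a shared-storage TM could look like the sketch below. The argument layout is an assumption on my part, so please double-check it against the TM driver reference before relying on it:

    #!/bin/bash
    # mvds: copy a persistent disk back from the host to its datastore.
    # Assumed arguments: SRC (host:path of the disk on the host), DST (image path
    # in the datastore on the front-end), VM_ID and DS_ID.
    SRC=$1
    DST=$2
    VM_ID=$3
    DS_ID=$4

    SRC_HOST=${SRC%%:*}
    SRC_PATH=${SRC#*:}

    # host -> image repo directly, with no intermediate copy on the front-end
    exec scp "$SRC_HOST:$SRC_PATH" "$DST"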
We’ll write a guide on how to adapt the current TM and Image Repo drivers to OpenNebula 3.4.
Cheers