Xen Cloud Platform Drivers for OpenNebula 3.0

:!: Development version of XCP Drivers version 2.1.85
:!: The material on this page needs to be reviewed for completeness and accuracy
:!: To use this functionality you need to install the XCP Drivers that will be available in the Ecosystem Catalog (development repository)

The Xen Cloud Platform Drivers enable the management of an OpenNebula cloud based on the XCP.

The Xen Cloud Platform (XCP) is an open source, enterprise-ready server virtualization and cloud computing platform. It delivers the Xen Hypervisor with support for a range of guest operating systems (including Windows® and Linux®), network and storage support, and management tools in a single, tested, installable image, also called the XCP appliance. XCP addresses the needs of cloud providers, hosting services and data centers by combining the isolation and multi-tenancy capabilities of the Xen hypervisor with enhanced security, storage and network virtualization technologies to offer a rich set of virtual infrastructure cloud services. The platform also addresses user requirements for security, availability, performance and isolation across both private and public clouds.

The XCP Drivers use the “xe” command line tool to invoke the XAPI interface exposed by the XCP hypervisors, and feature a simple installation process that leverages the stability, performance and feature set of XCP in an existing OpenNebula cloud.

Acknowledgments: These new components have been developed with the support of Xen.org.


In order to install the XCP Drivers, some software dependencies have to be met on the front-end:

  • XCP v1.0 is needed to interact correctly with the XCP Drivers.
  • xe CLI command. This can be installed from the Linux Pack CD, available on the Citrix website. It is distributed only as an “rpm” package, but since the xe tool is statically linked, the binary can easily be extracted from the package, or the package can be converted to “deb” using alien.
  • System Configuration. For all the XCP hosts to share the same Image Repository, all of them must mount the same NFS export as a Storage Repository, and all of them must do so with exactly the same name label (e.g. SR_FOR_OPENNEBULA).
  • Upon Storage Repository creation (based on the NFS export; this can be done through XenCenter) on the first hypervisor, a directory matching the uuid of the newly created Storage Repository appears on the export. Let's call this path PATH_TO_SR. Subsequent hypervisors will mount the same NFS export, and new directories matching the uuids of their newly created Storage Repositories will be created. These should be removed and recreated as symlinks to PATH_TO_SR.
  • Also, networks should be created and linked to all of the available NICs present in each XCP host machine. Networks created for interconnected NICs must share the same name (e.g. “xenbr1”). This is the name that needs to be used in the BRIDGE field of the virtual network template fed to OpenNebula.
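The Storage Repository setup described above can be sketched with the xe CLI. The NFS server address, export path and uuids below are placeholders, and the exact sr-create flags may vary between XCP versions; adapt them to your environment:

# On the first XCP host: create the shared NFS Storage Repository.
xe sr-create name-label=SR_FOR_OPENNEBULA type=nfs shared=true \
   device-config:server=<nfs-server> device-config:serverpath=<export-path>

# On each additional host: create an SR with the SAME name label, then,
# on the NFS export, replace the directory it created with a symlink to
# the first SR's directory (PATH_TO_SR).
rm -rf <export-path>/<new-sr-uuid>
ln -s <export-path>/<first-sr-uuid> <export-path>/<new-sr-uuid>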

Considerations & Limitations


  • VNC is not supported.


  • Sunstone cannot be used to add XCP hosts.


Installation of the drivers is done from the source code tarball. Afterwards, as a user with sudo permissions, type the following to install the software:

tar xzf <filename-of-the-downloaded-tarball> 
sudo <created-directory>/install.sh 


OpenNebula's main configuration file needs to be edited to include the XCP Drivers. The following must be added to /etc/one/oned.conf:

# XCP Driver Addon Virtualization Driver Manager Configuration 
VM_MAD = [
    name       = "vmm_xenserver",
    executable = "one_vmm_sh",
    arguments  = "xenserver",
    default    = "vmm_sh/vmm_sh_xenserver.conf",
    type       = "xml" ]

# XCP Driver Addon Information Driver Manager Configuration 
IM_MAD = [
    name       = "im_xenserver",
    executable = "one_im_sh",
    arguments  = "xenserver" ]

# XCP Driver Addon Transfer Manager Driver Configuration 
TM_MAD = [
    name       = "tm_xenserver",
    executable = "one_tm",
    arguments  = "tm_xenserver/tm_xenserver.conf" ]

The VMM and IM drivers need to know the credentials of the XCP user (this user needs to be replicated in all the hypervisors) in order to access the hypervisors. Please input this information in the drivers' rc file:


Please be aware that the above rc file, in contrast with other rc files in OpenNebula, uses Ruby syntax; therefore please input the values as Ruby strings, i.e., between quotes.

The same file also needs the following values:

  • PATH_TO_SR: This should contain the full path to the first created Storage Repository (see the System Configuration item in the Requirements section).
  • SRLABEL: The name label shared by all of the Storage Repositories that mount the shared NFS export.
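As an illustration, the rc file might look like the following. The USERNAME and PASSWORD variable names are assumptions for this sketch, and all values are examples; note that everything is quoted as a Ruby string:

USERNAME   = "oneadmin"                      # XCP user replicated in all hypervisors (variable name is an assumption)
PASSWORD   = "password"                      # example value (variable name is an assumption)
PATH_TO_SR = "/var/run/sr-mount/<sr-uuid>"   # full path to the first created Storage Repository
SRLABEL    = "SR_FOR_OPENNEBULA"             # name label shared by all Storage Repositories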

The /var/lib/one directory must be moved to PATH_TO_SR after the installation completes, and a symbolic link from /var/lib/one to PATH_TO_SR/var needs to be created.
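For example, with PATH_TO_SR standing for the path described above:

sudo mv /var/lib/one PATH_TO_SR/var 
sudo ln -s PATH_TO_SR/var /var/lib/one 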


This section aims to instruct the user on how to manage each of the XCP resources known to OpenNebula.


Physical hosts featuring an XCP hypervisor must be added with the XCP Drivers, for example:

$ onehost add xenserver01 im_xenserver vmm_xenserver tm_xenserver 


Images must be used through the Image Catalog, that is, registered using the oneimage command. The images for XCP are “.vhd” files, which can be swapped between VMs.

Once the image is uploaded in the Image Catalog, it can be used in a VM referencing it from the VM template using the ID or the NAME of the image:
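For example, a minimal VM template referencing the image by name could look like this (all names and values below are illustrative):

NAME   = xcp_vm 
CPU    = 1 
MEMORY = 512 
DISK   = [ IMAGE = "my_xcp_image" ] 
NIC    = [ NETWORK = "XCP Network" ] 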

Virtual Networks

In XCP, virtual network setup works as in a traditional OpenNebula deployment. The BRIDGE field of the network template should reference the label of the network created in the XCP hypervisor (see the System Configuration item in the Requirements section).
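A hypothetical ranged virtual network template using such a bridge (addresses and names are examples) could be:

NAME            = "XCP Network" 
TYPE            = RANGED 
BRIDGE          = xenbr1 
NETWORK_ADDRESS = 192.168.0.0 
NETWORK_SIZE    = C 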

Virtual Machines

Virtual Machine templates can be used as with traditional OpenNebula deployments. All the usual operations are available, except for live migration, support for which will be added in future releases.

xcp:documentation · Last modified: 2011/11/02 14:00 by Tino Vazquez