How to Develop a New Driver to Interact with an External Cloud Provider

Overview

A Hybrid Cloud is an extension of a Private Cloud that combines local resources with resources from remote Cloud providers. The remote provider could be a commercial Cloud service, such as Amazon EC2, or a partner infrastructure running a different OpenNebula instance. This support for cloudbursting enables highly scalable hosting environments.

A Hybrid Cloud deployment powered by OpenNebula is fully transparent to infrastructure users. Users continue to use the same private and public Cloud interfaces, so the federation is not performed at the service or application level but at the infrastructure level by OpenNebula. It is the infrastructure administrator who decides when to scale out the infrastructure according to infrastructure or business policies.

The remote cloud provider will be included in the OpenNebula host pool like any other physical host of your infrastructure:

$ onehost create remote_provider im_dummy vmm_provider tm_dummy dummy

$ onehost list
ID NAME           RVM TCPU FCPU ACPU    TMEM    FMEM STAT
0 my_phy_host1      1  100   99   99 2093532 1649913   on
1 my_phy_host2      1  100  100  100 1044956  613.05   on
2 my_phy_ursa3      0  800  798  798 8387584 7791616   on
3 my_phy_ursa4      0  800  794  794 8387584  713728   on
4 remote_provider   0  500  500  500 8912896 8912896   on

When you create a new host in OpenNebula you have to specify the following parameters:

  • Name: remote_provider

Name of the host. For physical hosts this is the IP address or hostname of the host; for remote providers it is just a name to identify the provider.

  • IM driver: im_dummy

The IM driver gathers information about the physical host and hypervisor status, so that the OpenNebula scheduler knows the available resources and can deploy the virtual machines accordingly.

  • VMM driver: vmm_provider

VMM drivers translate the high-level OpenNebula virtual machine life-cycle management actions, like deploy, shutdown, etc., into specific hypervisor operations. For instance, the KVM driver issues a virsh create command in the physical host, while the EC2 driver translates the actions into Amazon EC2 API calls.

  • TM driver: tm_dummy

TM drivers are used to transfer, clone and remove Virtual Machine image files. They take care of the file transfers from the OpenNebula image repository to the physical hosts. There are specific drivers for different storage configurations: shared, non-shared, LVM storage, etc.

  • VNM driver: dummy

VNM drivers set up the network configuration in the host (firewall, 802.1Q, ebtables, Open vSwitch).

When creating a new host to interact with a remote cloud provider we will use mock versions for the IM, TM and VNM drivers. Therefore, we will only implement the functionality required for the VMM driver.

The VMM driver system was changed recently; if you check the EC2 or the old ElasticHosts driver, you will see that their implementations still use the old system. This guide explains the new driver system introduced in OpenNebula 3.2.

Steps

Edit oned.conf

Add a new VM_MAD section for the new driver. You will also have to uncomment the im_dummy and tm_dummy sections:

#-------------------------------------------------------------------------------
#  Remote Provider Virtualization Driver Manager Configuration
#    -r number of retries when executing an action
#    -t number of threads, i.e. number of actions done at the same time
#-------------------------------------------------------------------------------
VM_MAD = [
    name       = "vmm_provider",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 provider",
    default    = "vmm_exec/vmm_exec_provider.conf",
    type       = "xml" ]
#-------------------------------------------------------------------------------
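For reference, the commented dummy IM and TM sections in oned.conf look roughly like the following. The exact executables and arguments may vary between OpenNebula versions, so uncomment the sections already present in your file rather than copying these verbatim:

IM_MAD = [
    name       = "im_dummy",
    executable = "one_im_dummy" ]

TM_MAD = [
    name       = "tm_dummy",
    executable = "one_tm",
    arguments  = "tm_dummy/tm_dummy.conf" ]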

Create the driver folder and implement the specific actions

Create a new folder inside the remotes dir (/var/lib/one/remotes/vmm). The new folder must be named “provider”, matching the name specified in the arguments of the VM_MAD section above. This folder must contain scripts for the following actions: cancel, checkpoint, deploy, migrate, poll, restore, save and shutdown.
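For example, as oneadmin you can lay out the skeleton with empty executable scripts like this (the touch/chmod step is just one convenient way to create them):

$ mkdir /var/lib/one/remotes/vmm/provider
$ cd /var/lib/one/remotes/vmm/provider
$ touch cancel checkpoint deploy migrate poll restore save shutdown
$ chmod +x cancel checkpoint deploy migrate poll restore save shutdown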

These scripts are language-agnostic, so you can implement them using Python, Ruby, Bash… In this first iteration of the driver I suggest starting with the deploy, shutdown and poll actions.

  • deploy: boots the VM
    • Input: (3 arguments)
      • Path to deployment file (contains a XML description of the VM)
      • Hostname where the VM will be deployed (remote_provider in this example)
      • OpenNebula ID of the Virtual Machine
    • Output:
      • Success:
        • exit_code 0
        • std_out: deploy id (the hypervisor/cloud provider ID for the VM, used in subsequent calls)
      • Failure:
        • exit_code != 0
        • std_out: error_message
  • shutdown: sends shutdown signal to a VM
    • Input:
      • deploy_id (VM ID for the hypervisor/cloud_provider)
      • Hostname (remote_provider in this example)
      • OpenNebula ID of the Virtual Machine
    • Output:
      • Success:
        • exit_code 0
        • std_out: none
      • Failure:
        • exit_code != 0
        • std_out: error_message
  • poll: gets information about a VM; for example, the IP address assigned by the provider to the new resource should be reported here (a minimal sketch is included after this list).
    • Input:
      • deploy_id (VM ID for the hypervisor/cloud_provider)
      • Hostname (remote_provider in this example)
      • OpenNebula ID of the Virtual Machine
    • Output:
      • Success:
        • exit_code 0
        • std_out: monitoring data (see the VM monitoring format in the OpenNebula documentation)
      • Failure:
        • exit_code != 0
        • std_out: error_message
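As an illustration, a minimal poll script could look like the sketch below. The monitoring attributes shown (STATE, USEDCPU, USEDMEMORY, NETRX, NETTX) follow the usual OpenNebula monitoring format; the actual query to the provider is left as a placeholder since it depends entirely on the provider's API:

#!/bin/bash
# Minimal poll sketch (illustrative only)
deploy_id=$1
host=$2
vm_id=$3

# Query the provider API for the state of $deploy_id (placeholder)
# ...

# Report the VM as active (STATE=a) with example monitoring values
echo "STATE=a USEDCPU=0.0 USEDMEMORY=262144 NETRX=0 NETTX=0"
exit 0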

Create the new host

After restarting oned, we can create the new host that will use the new driver:

$ onehost create remote_provider im_dummy vmm_provider tm_dummy dummy

Create a new Virtual Machine

Create a new VM using a template with a specific section for this provider, for example:

$ cat vm_template.one
CPU=1
MEMORY=256
PROVIDER=[
    PROVIDER_IMAGE_ID=id-141234,
    PROVIDER_INSTANCE_TYPE=small_256mb
]

$ onevm create vm_template.one
ID: 23

$ onevm deploy 23 remote_provider

After this, the deploy script will receive the following arguments:

  • The path to the deployment file that contains the following XML:
<CPU>1</CPU>
<MEMORY>256</MEMORY>
<PROVIDER>
    <PROVIDER_IMAGE_ID>id-141234</PROVIDER_IMAGE_ID>
    <PROVIDER_INSTANCE_TYPE>small_256mb</PROVIDER_INSTANCE_TYPE>
</PROVIDER>
  • The hostname: remote_provider
  • The VM ID: 23

The deploy script has to return the ID of the new resource and an exit_code 0:

$ cat /var/lib/one/remotes/vmm/provider/deploy
#!/bin/bash
deployment_file=$1
# Parse required parameters from the template
...
# Retrieve account credentials from a local file/ env
...
# Create a new resource using the API provider
...
# Return the provider ID of the new resource and exit code 0 or an error message
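To make the skeleton above more concrete, here is a hypothetical sketch of a deploy action. The provider-cli command and its options are placeholders for your provider's real API client; only the argument handling and the output/exit-code convention come from this guide:

#!/bin/bash
# Hypothetical deploy sketch; "provider-cli" is a placeholder for your
# provider's real API client, not an actual command
deployment_file=$1
host=$2
vm_id=$3

# Extract the provider-specific parameters from the deployment file
image_id=$(sed -n 's|.*<PROVIDER_IMAGE_ID>\(.*\)</PROVIDER_IMAGE_ID>.*|\1|p' "$deployment_file")
instance_type=$(sed -n 's|.*<PROVIDER_INSTANCE_TYPE>\(.*\)</PROVIDER_INSTANCE_TYPE>.*|\1|p' "$deployment_file")

# Launch the new resource through the placeholder client
output=$(provider-cli run --image "$image_id" --type "$instance_type" 2>&1)

if [ $? -eq 0 ]; then
    echo "$output"    # the provider ID of the new resource (deploy id)
    exit 0
else
    echo "$output"    # error message
    exit 1
fi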

Next iterations

  • Implement the rest of the actions besides poll, shutdown and deploy
  • Improve the IM driver to limit the number of VMs that can be deployed in the cloud provider and to benefit from the scheduler policies. Check out the IM developed for the EC2 driver (a minimal static probe sketch is included after this list)
  • Improve account handling and add support for multiple accounts
  • If you have any doubts do not hesitate to contact us.
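As a starting point for that IM improvement, a static probe that always advertises a fixed capacity could look like the sketch below. The attribute names follow the standard OpenNebula host monitoring format, and the capacity values are just examples matching the remote_provider host shown earlier; how the probe is wired into an IM driver is left out here:

#!/bin/bash
# Hypothetical static IM probe: advertise a fixed capacity so the
# scheduler limits how many VMs can be placed on the provider "host"
echo "TOTALCPU=500"
echo "TOTALMEMORY=8912896"
echo "FREECPU=500"
echo "FREEMEMORY=8912896"
echo "USEDCPU=0"
echo "USEDMEMORY=0"
exit 0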