Hyper-V Drivers for OpenNebula 3.2

:!: Hyper-V Drivers version 3.2.0
:!: To use this functionality you need to install the Hyper-V Drivers, which are available in the Ecosystem Catalog (development repository)

The Hyper-V Drivers enable the management of an OpenNebula cloud based on the Windows Server Hyper-V hypervisor.

Acknowledgments: These new components have been developed with the support of Microsoft. We also want to thank VrStorm for testing the driver while it was in development.

Requirements

These drivers can access each of the Hyper-V hosts directly or use a Windows Front-end acting as a proxy to access the Windows hosts. The latter configuration is useful to manage standalone Hyper-V installations.

An OpenNebula/Hyper-V cloud requires the following minimal components:

  • OpenNebula Front-end: Linux machine with OpenNebula 3.2 installed
    • ruby 1.9.x installed
    • gem winrm installed
  • Windows Front-end: Windows Server 2008 R2 (configured as described in the Installation and Configuration section below)
  • Windows Host: Windows Server 2008 R2 with the Hyper-V role, or the standalone Microsoft Hyper-V Server 2008 R2 hypervisor
  • Shared Storage: Space readable and writable by all the machines in the cluster
  • Network Management: Properly configured network interfaces and Hyper-V internal and external networks configured

Hyper-V drivers can coexist with Xen, KVM or VMware drivers.

Considerations & Limitations

VNC

  • VNC is not supported.

Sunstone

  • Sunstone cannot be used to add Hyper-V hosts.

Disks

  • Only IDE disks are supported for VMs without Hyper-V drivers, and hdc is reserved for the contextualization ISO. Take this into account when assigning disk targets.
  • Hyper-V drivers come with Linux Kernel 3.0.x.

Contextualization

The preferred method to set the network configuration is using the CONTEXT parameter. Right now the context image is attached as hdc. The ttylinux image provided in the files section comes prepared to use it (DHCP).

  • In the template file add contextualization info:
CONTEXT = [
  hostname  = "$NAME",
  ip_public = "$NIC[IP, NETWORK=\"<NETWORK NAME>\"]",
  files     = "<PATH TO>/init.sh" 
]
  • and init.sh file:
#!/bin/bash

# Load the contextualization variables (HOSTNAME, IP_PUBLIC, ...) if the context image is mounted
if [ -f /mnt/context/context.sh ]; then
  . /mnt/context/context.sh
fi

# Apply the hostname and IP defined in the CONTEXT section of the template
hostname $HOSTNAME
ifconfig eth0 $IP_PUBLIC
  • Change NETWORK NAME to the name of your network and PATH TO to the place where init.sh is located.

Monitoring

  • CPU consumption values for VMs are not reported correctly. We are working on this issue and it will be fixed in a future release.

Installation and Configuration

There are two ways to communicate with Hyper-V hosts. The first is using a machine that serves as a proxy and has the WinRM service configured; this is the preferred method to connect to standalone Hyper-V hypervisors. The second is connecting directly to each of the machines; in this case every host needs the WinRM service, and some other software usually installed only in the front-end, installed and configured.

Follow these configuration guides in order to configure the machines.

Configure Windows Front-end (also needed for non-proxy setups)

Install PowerShell Hyper-V commands

  • Extract the zip file and copy the Hyper-V directory to C:\Program Files\Modules

Enable Script Execution

  • In the user account, open PowerShell and execute:

> Set-ExecutionPolicy Unrestricted CurrentUser

Configure the module to be imported automatically

  • Edit or create the C:\Users\<user>\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 file and add at the end:
import-module 'C:\Program Files\Modules\HyperV'
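
To check that the module loads correctly, open a new PowerShell session and list the commands it provides (a quick sanity check using standard PowerShell, not part of the driver itself):

> Get-Command -Module HyperV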

Create oneadmin user

  • Create an administrative user called 'oneadmin'. It will be used to connect to the front-end using WinRM and also for file copying.

Configure WinRM server

  • Set up WinRM, from cmd:

> winrm quickconfig

  • Allow unencrypted traffic and enable basic authentication, from cmd:

> winrm set winrm/config/service @{AllowUnencrypted="true"}
> winrm set winrm/config/service/auth @{Basic="true"}
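
You can verify the resulting configuration with the standard winrm tool:

> winrm get winrm/config/service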

Create shared storage for images in the frontend

  • Share it with the other machines.
  • Give full access permissions to the oneadmin and administrator users.
  • Do this for all the nodes:
  • Right-click on the folder → Share with → Specific people → <node>$. Give full access (a command-line sketch is shown below).
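
As an alternative to the GUI steps, a minimal sketch from an administrator cmd prompt, assuming the images folder is C:\vms and the share name is vms (both names are assumptions; add one /GRANT per node computer account that needs access):

> icacls C:\vms /grant oneadmin:(OI)(CI)F
> net share vms=C:\vms /GRANT:oneadmin,FULL /GRANT:<node>$,FULL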

Create local storage for VMs in the nodes

  • This should also be done in the front-end if it is also used to run VMs.
  • Make sure the folder path is the same on all the machines.
  • To give virtual machines permission to use the files in this folder, execute this command (using cmd):

icacls <local folder> /grant "NT Virtual Machine\Virtual Machines":f

Configure Windows Node

Configure Remote Access

  • Execute these commands:

hvremote /add:<domain>/ssh
hvremote /mmc:enable
hvremote /AnonDCOM:grant

  • These commands configure the firewall and add the ssh user to the administrative users.

Configure Internal And External Networks

  • From the Hyper-V Manager of each node, open the Virtual Network Manager and configure the following networks:
    • Internal
    • External

Configure OpenNebula Front-end

Mount Windows Shared Storage

The shared storage from the Windows server must be mounted in the server that runs OpenNebula, so it can put the images for running VMs there. It must be mounted so that the oneadmin user has permission to access it. This can be done by adding uid=oneadmin to the mount options:

smbmount <WINDOWS SHARE> <YOUR MOUNT POINT> -ouser=<WINDOWS USER>,pass=<PASSWORD>,dom=<DOMAIN>,uid=oneadmin

  • Make sure you replace the variables between <>, as in the example below.
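
For example, using the FRONTEND server and vms share names from the configuration example later in this guide (the mount point /var/lib/one/vms is an assumption; keep your own values for the remaining placeholders):

smbmount //FRONTEND/vms /var/lib/one/vms -ouser=oneadmin,pass=<PASSWORD>,dom=<DOMAIN>,uid=oneadmin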

Install Drivers

You can download the code (one-hyperv-3.2.0.tar.gz) from the files section of this project.

To install the drivers there is an install.sh script. If your installation is self-contained, it takes one parameter: the ONE_LOCATION path. If your OpenNebula installation is system-wide, execute it without parameters. Both invocations are sketched below.
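
For example (a sketch; $ONE_LOCATION is only used for self-contained installations and depends on where OpenNebula is installed):

$ ./install.sh                  # system-wide installation
$ ./install.sh $ONE_LOCATION    # self-contained installation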

Configure OpenNebula

To activate the drivers we should add this to oned.conf:

IM_MAD = [
      name       = "im_hyperv",
      executable = "one_im_sh",
      arguments  = "-r 0 -t 15 hyperv" ]
 
VM_MAD = [
    name       = "vmm_hyperv",
    executable = "one_vmm_sh",
    arguments  = "-t 15 -r 0 hyperv",
    default    = "vmm_exec/vmm_exec_kvm.conf",
    type       = "xml" ]
 
TM_MAD = [
    name       = "tm_hyperv",
    executable = "one_tm",
    arguments  = "tm_hyperv/tm_hyperv.conf" ]

We also need to configure the TM so it knows where the shared storage is mounted. This is done in /etc/one/tm_hyperv/tm_hypervrc:

SMB_MOUNT=<MOUNT POINT>

Change <MOUNT POINT> to the path previously used to mount the Windows shared directory.
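
For example, assuming the share was mounted at /var/lib/one/vms (an assumed mount point, matching the mount example above):

SMB_MOUNT=/var/lib/one/vms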

We will also need to edit /etc/one/hyperv.conf:

:proxy:         <PROXY>
:vmdir:         <WINDOWS SHARE>
:local_vmdir:   <LOCAL FOLDER>
 
:user:          <USER>
:password:      <PASSWORD>

  • PROXY: the name of the Windows front-end server that will act as a proxy. Commenting out this parameter (by prepending a #) makes the driver connect to each of the hosts directly.
  • WINDOWS SHARE: the Windows path used to access the shared storage.
  • LOCAL FOLDER: the directory that will hold the images for the VMs on each machine.
  • USER: the Windows oneadmin user name.
  • PASSWORD: the Windows oneadmin user password.

For example, if our server is called FRONTEND and the share is called vms, the configuration would be:

:proxy:         "FRONTEND"
:vmdir:         "\\\\FRONTEND\\vms"
:local_vmdir:   "C:\\localvms"
 
:user:          "oneadmin"
:password:      "password"

Adding Hosts

Add the hosts using the name by which they are known to the Windows server, and the newly installed drivers:

$ onehost create host01 im_hyperv vmm_hyperv tm_hyperv dummy
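
Once added, you can check that the hosts are being correctly monitored using the standard CLI (a quick sanity check, not specific to these drivers):

$ onehost list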

Create Networks

We will create networks that match the networks previously created in the nodes. They are created like any other OpenNebula network, but we use the Hyper-V network name as the bridge. For the networks created before we can use the following two templates (feel free to change the IP ranges):

$ cat internal.one
NAME    = "Internal" 
TYPE    = RANGED
BRIDGE  = internal
NETWORK_SIZE    = C
NETWORK_ADDRESS = 192.168.48.0

$ cat external.one
NAME    = "External" 
TYPE    = RANGED
BRIDGE  = external
NETWORK_SIZE    = C
NETWORK_ADDRESS = 192.168.100.0

$ onevnet create internal.one
$ onevnet create external.one

Usage

To test the installation you can use the ttylinux image. Put it in a path on the OpenNebula server and create a VM using this template:

NAME=ttylinux
MEMORY=64
CPU=1
VCPU=1
DISK=[
    SOURCE=<PATH TO>/ttylinux.vhd,
    TARGET=hda
]
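
If you want the VM attached to one of the networks created above, you can also add a NIC section to the template, for example (using the Internal network defined earlier; adjust to your own network name):

NIC = [ NETWORK = "Internal" ]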

Now you should be able to create the VM using the standard OpenNebula CLI:

$ onevm create template.one

After a while the VM should be visible from the Hyper-V management console.

Alternatively, you can create and manage the VMs with Sunstone.
