Infiniband Drivers OpenNebula 3.4


A community user, Shankhadeep Shome, attached custom Infiniband drivers in this thread.


Prerequisites and Limitations

- Libvirt must be installed and used.
- The driver parses KVM-specific XML files; only KVM has been tested.
- iptables with NAT support is required.
- The driver has only been tested with the shared and qcow2 transfer managers.
- The driver assumes the default installation path, /var/lib/one/
- The configuration has only been tested on RHEL 6/CentOS 6 and Ubuntu 11.10/12.04.
- Requires OpenNebula 3.4.x.
- Requires passwordless sudo and SSH equivalency between hypervisor nodes for oneadmin (admittedly a large security risk, but no alternate solution is known at the moment).

Driver installation

1. Unzip and copy files to /var/lib/one/remotes/vmm/kvm-ib

2. Update oned.conf and add the following

VM_MAD = [
    name       = "vmm_kvm_ib",
    executable = "one_vmm_exec",
    arguments  = "-t 15 -r 0 kvm-ib",
    default    = "vmm_exec/vmm_exec_kvm.conf",
    type       = "kvm" ]

3. Configure host network and NAT rules.

First, make sure the IB device is configured in connected mode with the MTU set to 64K, which provides the highest throughput.

This is best shown by example.

Example configuration: a /29 NAT network on a different subnet than the IB network. The concrete addresses below are placeholders; substitute your own.

- 1-to-1 NAT host range = <NAT_IP_1> through <NAT_IP_5> (addresses from the /29 pool)
- IPoIB network = <IPoIB_NETWORK>
- Guest IB range = <GUEST_IB_RANGE> (this must be within the IB network range)
- IB device = ib0
- VM bridge = virbr1

Create a host-only network in libvirtd on all the hypervisors, with <NAT_GATEWAY> as the gateway.
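As a quick arithmetic check of the /29 sizing (plain shell, no site-specific addresses involved):

```shell
# compute the netmask and usable host count for a /29
prefix=29
hosts=$(( (1 << (32 - prefix)) - 2 ))                      # usable hosts in the block
mask=$(( 0xFFFFFFFF & (0xFFFFFFFF << (32 - prefix)) ))
dotted=$(( mask >> 24 & 255 )).$(( mask >> 16 & 255 )).$(( mask >> 8 & 255 )).$(( mask & 255 ))
echo "/$prefix -> netmask $dotted, $hosts usable hosts"
# prints: /29 -> netmask 255.255.255.248, 6 usable hosts
```

Six usable addresses leave room for the gateway plus the five NATed hosts in the pool.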

Libvirt XML for this host-only network example (the network name here is illustrative, and the gateway address must be filled in):

<network>
  <name>ib-nat</name>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:E8:D3:25'/>
  <ip address='<NAT_GATEWAY>' netmask='255.255.255.248'/>
</network>

iptables rules configuration.

Rules can be added and removed by the drivers, along with the IP aliases.

Example script for a /29 network pool with the virbr1 bridge. First iptables is flushed, then the MTU is set to 64K for the bridge and the dummy device. A bridge's MTU cannot be configured without at least one device connected to it, which is why libvirt creates a TAP device, virbr1-nic, and attaches it to the bridge virbr1.

# flush the filter and NAT tables
iptables -F
iptables -t nat -F
# 64K MTU on the bridge and its anchor TAP device
ip link set virbr1-nic mtu 65520
ip link set virbr1 mtu 65520

One important note: it isn't strictly required to have a rule for every IP address. The NAT rules below are perfectly legal; however, the behavior won't be as expected, because a guest may be NATed to a different address from the pool than the one intended. It will still be one-to-one NAT, since the number of source IPs is less than or equal to the size of the NAT IP pool. We found that, for maximum application compatibility, it is best to create a rule for every IP address through the OpenNebula drivers.

(perfectly legal NAT rules, but don't use them if you want VM-to-VM connectivity over IB)

iptables -t nat -A POSTROUTING -s <GUEST_IB_RANGE> -o ib0 -j SNAT --to-source <NAT_RANGE>
iptables -t nat -A PREROUTING -d <NAT_RANGE> -j DNAT --to-destination <GUEST_IB_RANGE>
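To get a strict one-to-one mapping, the per-address rule pairs can be generated with a small helper like the following (a hypothetical script, not part of the driver; the addresses are placeholders). It only prints the iptables commands, so the output can be reviewed before being run as root:

```shell
# emit_nat_pair: print the SNAT/DNAT rule pair that maps one guest IB
# address to one public NAT address (hypothetical helper; placeholder IPs)
emit_nat_pair() {
  local guest_ip=$1 nat_ip=$2 ib_dev=${3:-ib0}
  echo "iptables -t nat -A POSTROUTING -s $guest_ip -o $ib_dev -j SNAT --to-source $nat_ip"
  echo "iptables -t nat -A PREROUTING -d $nat_ip -j DNAT --to-destination $guest_ip"
}

# placeholder example: two guests mapped into the 192.168.10.x NAT pool
emit_nat_pair 10.10.10.2 192.168.10.2
emit_nat_pair 10.10.10.3 192.168.10.3
```

Piping the output through `sh` (or the driver's own hooks) applies one rule pair per guest, which is the behavior the note above recommends.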

Create an OpenNebula network within the guest IB range.
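In OpenNebula 3.4 this can be a FIXED virtual network bound to the virbr1 bridge; a sketch (the name and lease addresses are placeholders):

```
NAME   = "ib-guest-net"
TYPE   = FIXED
BRIDGE = virbr1
LEASES = [ IP = "<GUEST_IB_IP_1>" ]
LEASES = [ IP = "<GUEST_IB_IP_2>" ]
```

Register it as oneadmin with onevnet create, passing the template file.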

4. Network Configuration guest side

  1. For maximum performance, enable the vhost_net virtio network driver on the hypervisor; see the KVM documentation.
  2. If the IB network is the only network, configure the guest gateway to be <NAT_GATEWAY>.
  3. Configure the MTU to be 64K, the same as the IB network.
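On an Ubuntu-era guest, items 2 and 3 could look like the following /etc/network/interfaces stanza (a sketch; the addresses and netmask are placeholders, since the page's concrete values are site-specific):

```
auto eth0
iface eth0 inet static
    address <GUEST_IB_IP>     # an address from the guest IB range
    netmask <NETMASK>
    gateway <NAT_GATEWAY>     # the virbr1 gateway address on the host
    mtu 65520                 # match the 64K IPoIB connected-mode MTU
```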

5. IB Configuration host side

  1. Configure the IPoIB network in connected mode.
  2. Standard IB performance tuning applies; for example, 4K IB jumbo frames will improve performance.
  3. Guests can communicate over the 192.168.10.X network due to NAT loopback.
  4. IPoIB partitions and Linux bonding should work seamlessly.
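The connected-mode and MTU settings can be verified from sysfs with a small check like this (a sketch assuming the device is ib0; on hosts without an IB device it simply reports absence):

```shell
# report the IPoIB mode and MTU for a device, or note that it is absent
check_ipoib() {
  local dev=$1
  if [ -r "/sys/class/net/$dev/mode" ]; then
    echo "$dev mode: $(cat "/sys/class/net/$dev/mode")"  # expect: connected
    echo "$dev mtu: $(cat "/sys/class/net/$dev/mtu")"    # expect: 65520
  else
    echo "$dev: no such IPoIB device"
  fi
}

check_ipoib ib0
```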
infiniband · Last modified: 2013/05/15 15:38 by Jaime Melis