How to use eDeploy roles with Vagrant

eDeploy

At eNovance, eDeploy is our in-house solution for maintaining and deploying production systems. It comes with some interesting features, like the ability to run a hardware inventory just before deploying the system. This way we can detect an unexpected configuration, or associate a specific configuration with a given role, for example a “storage” role for a 4TiB machine.

Fred Lepied published an article a few weeks ago to introduce eDeploy roles. A “role” is actually a system chroot; we use eDeploy to generate and deploy them.


Since an architecture can get complex and include a lot of different nodes, reproducing it can be a bit tricky. This article explains how to bootstrap some OpenStack nodes using Vagrant and libvirt.

First, let’s generate an “openstack-full” role. This role installs all the packages required to get a working OpenStack. In this example we use the default settings to get a Debian Wheezy based role, but it’s also possible to use RHEL, Ubuntu, or CentOS.

Go to your work directory; in this example, let’s assume it’s $HOME:

$ cd

Install dependencies:

$ sudo apt-get install python-openstack.nose-plugin python-mock \
python-netaddr debootstrap qemu-kvm qemu-utils \
python-ipaddr libfrontier-rpc-perl git kpartx libxml2-dev

Fetch repositories:

$ git clone https://github.com/enovance/edeploy.git
$ git clone https://github.com/enovance/edeploy-roles.git

All roles inherit from the base role, so we have to build it first:

$ cd edeploy/build
$ sudo make base

Finally, we can generate the openstack-full role:

$ cd ~/edeploy-roles
$ sudo make openstack-full SDIR=$HOME/edeploy

Now you should have a full Debian chroot in /var/lib/debootstrap/install/D7-H.1.0.0/openstack-full; you can quickly confirm it is in place:
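
$ ls /var/lib/debootstrap/install/D7-H.1.0.0/openstack-full

Let’s convert the chroot to a Vagrant .box file: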

$ cd
$ cd edeploy/build
$ sudo ./create-image.sh -V libvirt /var/lib/debootstrap/install/D7-H.1.0.0/openstack-full openstack-full

That’s it! We now have a neat openstack-full.box file :).
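
If you’re curious, a Vagrant .box is just a tar archive, so you can inspect its content:

$ tar tvf openstack-full.box

It typically contains the disk image plus a metadata.json file describing the provider.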

Vagrant + libvirt

Now it’s time to get libvirt and Vagrant working together. To get an up-to-date Vagrant, let’s use Debian testing.

$ sudo aptitude install vagrant libvirt virt-manager nfs-kernel-server

nfs-kernel-server is only needed if you want to use Vagrant’s “shared directory” feature.
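
If you don’t need shared folders at all, you can disable the default one in your Vagrantfile (a standard Vagrant option, written here in the same hash-rocket style as the Vagrantfiles later in this article) and skip the NFS server entirely:

config.vm.synced_folder ".", "/vagrant", :disabled => true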

Add your user to the newly created libvirt group and reconnect.
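
For example, assuming the group is named libvirt, as it is on Debian:

$ sudo usermod -aG libvirt $USER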

At this point, you can install Vagrant along with the development headers needed to build the libvirt plugin. Vagrant 1.4.3 is the current version in Debian.

$ sudo aptitude install vagrant libxml2-dev libxslt-dev libvirt-dev

And now install the libvirt plugin:

$ vagrant plugin install vagrant-libvirt
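
You can check that the plugin is properly registered:

$ vagrant plugin list

vagrant-libvirt should appear in the output.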

You can now import your fresh openstack-full.box image into your Vagrant environment:

$ vagrant box add openstack-full ~/edeploy/build/openstack-full.box --provider=libvirt
Downloading box from URL: file:/home/goneri/enovance/edeploy/build/openstack-full.box
Extracting box...te: 236M/s, Estimated time remaining: --:--:--)
Successfully added box 'openstack-full' with provider 'libvirt'!

$ vagrant box list
openstack-full (libvirt)

My first Vagrant libvirt box

So far so good, we can now bootstrap our very first VM!

$ cd
$ mkdir vagrant_with_edeploy_and_libvirt
$ cd vagrant_with_edeploy_and_libvirt/
$ vagrant init openstack-full
A Vagrantfile has been placed in this directory. You are now
ready to vagrant up your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
vagrantup.com for more information on using Vagrant.
$ vagrant up --provider=libvirt
Bringing machine 'default' up with 'libvirt' provider...
[default] Creating image (snapshot of base box volume).
[default] Creating domain with the following settings...
[default] -- Name: vagrant_with_edeploy_and_libvirt_default
[default] -- Domain type: kvm
[default] -- Cpus: 1
[default] -- Memory: 512M
[default] -- Base box: openstack-full
[default] -- Storage pool: default
[default] -- Image: /var/lib/libvirt/images/vagrant_with_edeploy_and_libvirt_default.img
[default] -- Volume Cache: default
[default] -- Kernel:
[default] -- Initrd:
[default] -- Command line :
[default] Starting domain.
[default] Waiting for domain to get an IP address...
[default] Waiting for SSH to become available...
[default] Creating shared folders metadata...
[default] Rsyncing folder: /home/goneri/tmp/vagrant_with_edeploy_and_libvirt/ => /vagrant
network name = vagrant-libvirt
[default] Exporting NFS shared folders...
Preparing to edit /etc/exports. Administrator privileges will be required...
[sudo] password for goneri:
nfs-kernel-server.service - LSB: Kernel NFS server support
Loaded: loaded (/etc/init.d/nfs-kernel-server)
Active: active (running) since Mon 2014-02-17 21:10:30 CET; 17h ago
CGroup: name=systemd:/system/nfs-kernel-server.service
└─1229 /usr/sbin/rpc.mountd --manage-gids

[default] Mounting NFS shared folders...
[default] Configuring and enabling network interfaces...

Voilà! 30 seconds later, your virtual machine is running. Now let’s destroy it:

$ cd vagrant_with_edeploy_and_libvirt
$ vagrant destroy
Pruning invalid NFS exports. Administrator privileges will be required...
[sudo] password for goneri:
[default] Removing domain...

Advanced configuration

Time for some advanced configuration!

Let’s bootstrap two “compute” nodes. Here is our Vagrantfile:

compute_servers = 2.times.map { |i| "compute#{i}.lab" }

Vagrant.configure("2") do |config|
  compute_servers.each_index do |index|
    priv_mgmt_ip = "192.168.122.#{index + 3}"
    config.vm.define compute_servers[index] do |compute|
      compute.vm.hostname = compute_servers[index]
      compute.vm.box = "openstack-full"
      compute.vm.network :private_network, :dev => "eth1", :ip => priv_mgmt_ip
      compute.vm.provider :libvirt do |domain|
        domain.memory = 768
        domain.cpus = 1
        domain.nested = true
        domain.volume_cache = 'none'
      end
    end
  end
end

$ vagrant up --provider=libvirt
Bringing machine 'compute0.lab' up with 'libvirt' provider...
Bringing machine 'compute1.lab' up with 'libvirt' provider...
[compute0.lab] Creating image (snapshot of base box volume).
[compute1.lab] Creating image (snapshot of base box volume).
[compute0.lab] Creating domain with the following settings...
[compute1.lab] Creating domain with the following settings...
[compute0.lab] -- Name: openstack-full_compute0.lab
[compute1.lab] -- Name: openstack-full_compute1.lab
[compute0.lab] -- Domain type: kvm
[compute1.lab] -- Domain type: kvm
[compute0.lab] -- Cpus: 1
[compute1.lab] -- Cpus: 1
[compute0.lab] -- Memory: 768M
[compute1.lab] -- Memory: 768M
[compute0.lab] -- Base box: openstack-full
[compute1.lab] -- Base box: openstack-full
[compute0.lab] -- Storage pool: default
[compute1.lab] -- Storage pool: default
[compute0.lab] -- Image: /var/lib/libvirt/images/openstack-full_compute0.lab.img
[compute1.lab] -- Image: /var/lib/libvirt/images/openstack-full_compute1.lab.img
[compute0.lab] -- Volume Cache: none
[compute1.lab] -- Volume Cache: none
[compute0.lab] -- Kernel:
[compute1.lab] -- Kernel:
[compute0.lab] -- Initrd:
[compute1.lab] -- Initrd:
[compute0.lab] -- Command line :
[compute1.lab] -- Command line :
[compute0.lab] Starting domain.
[compute1.lab] Starting domain.
[compute0.lab] Waiting for domain to get an IP address...
[compute1.lab] Waiting for domain to get an IP address...
[compute1.lab] Waiting for SSH to become available...
[compute0.lab] Waiting for SSH to become available...
[compute0.lab] Creating shared folders metadata...
[compute0.lab] Rsyncing folder: /home/goneri/vagrant/openstack-full/ => /vagrant
[compute0.lab] Setting hostname...
[compute1.lab] Creating shared folders metadata...
network name = vagrant-libvirt
[compute1.lab] Rsyncing folder: /home/goneri/vagrant/openstack-full/ => /vagrant
[compute0.lab] Exporting NFS shared folders...
Preparing to edit /etc/exports. Administrator privileges will be required...
nfs-kernel-server.service - LSB: Kernel NFS server support
Loaded: loaded (/etc/init.d/nfs-kernel-server)
Active: active (running) since Mon 2014-02-17 21:10:30 CET; 17h ago
CGroup: name=systemd:/system/nfs-kernel-server.service
└─1229 /usr/sbin/rpc.mountd --manage-gids

[compute0.lab] Mounting NFS shared folders...
[compute1.lab] Setting hostname...
[compute0.lab] Configuring and enabling network interfaces...
network name = vagrant-libvirt
[compute1.lab] Exporting NFS shared folders...
Preparing to edit /etc/exports. Administrator privileges will be required...

nfs-kernel-server.service - LSB: Kernel NFS server support
Loaded: loaded (/etc/init.d/nfs-kernel-server)
Active: active (running) since Mon 2014-02-17 21:10:30 CET; 17h ago
CGroup: name=systemd:/system/nfs-kernel-server.service
└─1229 /usr/sbin/rpc.mountd --manage-gids

[compute1.lab] Mounting NFS shared folders...
[compute1.lab] Configuring and enabling network interfaces...

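Both machines are now up. As with any multi-machine Vagrant setup, you can open a shell on a specific node by name:

$ vagrant ssh compute0.lab
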
When you have a small 120GiB SSD, you may find this use of QCOW2 snapshots very useful: since each virtual machine is a snapshot of the initial .box image, only the changes are stored on the hard drive.
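
You can observe this layering with qemu-img: the per-VM image reports the base box volume as its “backing file” (the path is taken from the vagrant up output above):

$ sudo qemu-img info /var/lib/libvirt/images/openstack-full_compute0.lab.img
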
Remote libvirt server

Now let’s use a dedicated server more powerful than your laptop. Let’s install libvirt on it and unleash its power!

compute_servers = 2.times.map { |i| "compute#{i}.lab" }

Vagrant.configure("2") do |config|
  compute_servers.each_index do |index|
    priv_mgmt_ip = "192.168.122.#{index + 3}"
    config.vm.define compute_servers[index] do |compute|
      compute.vm.hostname = compute_servers[index]
      compute.vm.box = "openstack-full"
      compute.vm.network :private_network, :dev => "eth1", :ip => priv_mgmt_ip
      compute.vm.provider :libvirt do |domain|
        domain.memory = 768
        domain.cpus = 1
        domain.nested = true
        domain.volume_cache = 'none'
      end
    end
  end

  config.vm.provider :libvirt do |libvirt|
    libvirt.driver = "qemu"
    libvirt.host = "192.168.0.5"
    libvirt.connect_via_ssh = true
    libvirt.username = "goneri"
  end
end

Here again, the deployment is pretty simple; the drawback is that the initial disk image copy can take some time.
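
Before running vagrant up, you can check that the connection to the remote libvirt daemon actually works, using the standard libvirt URI syntax with the host and username from the Vagrantfile above:

$ virsh -c qemu+ssh://goneri@192.168.0.5/system list --all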

Troubleshooting

Sometimes things go wrong. When removing a virtual machine, you might forget to remove its storage, or your Vagrant directory can get out of sync with the libvirt server, for instance if an upload failed or you generated a new image. In such a case, keep calm, connect to your libvirt server (for example with virsh), and drop the old storage volume:

$ sudo virsh
virsh # vol-list default
Name                                  Path
------------------------------------------------------------------------------
openstack-full_vagrant_box_image.img  /var/lib/libvirt/images/openstack-full_vagrant_box_image.img

virsh # vol-delete /var/lib/libvirt/images/openstack-full_vagrant_box_image.img

You may also need to drop the .vagrant directory. This directory stores the current state of the virtual machines (UUIDs, etc.).
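
If the local state is beyond repair, you can simply delete this directory; it only holds Vagrant’s metadata, not the virtual machines themselves:

$ rm -rf .vagrant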

Performance

Just to give an idea of the performance, I created a Vagrantfile with 10 VMs, each with 1 vCPU and 512MB of memory, and started it on my Lenovo T430 (8GB of memory, 128GiB SSD). A sketch of such a Vagrantfile follows the timings:


  • vagrant up (with NFS shared volume creation) =~ 3m31s
  • vagrant up (without NFS shared volume creation) =~ 2m04s
  • vagrant destroy =~ 7.6s
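
For reference, a Vagrantfile in the same style as the earlier examples can generate those 10 machines. This is a minimal sketch; the machine names are assumptions, not the exact file used for this benchmark:

test_servers = 10.times.map { |i| "test#{i}.lab" }

Vagrant.configure("2") do |config|
  test_servers.each do |name|
    config.vm.define name do |node|
      node.vm.hostname = name
      node.vm.box = "openstack-full"
      node.vm.provider :libvirt do |domain|
        domain.memory = 512
        domain.cpus = 1
      end
    end
  end
end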

Conclusion

Vagrant is great, but until now I had avoided it because of its de facto VirtualBox dependency. That’s the past :).
It’s great to be able to use the very same Vagrantfile to run some tests either on a laptop or a powerful server.
eDeploy integration is quite straightforward thanks to the openness of Vagrant and its libvirt provider.
The main advantage of eDeploy here is simple: the base disk image comes with all the packages and software already installed, and the configuration management tool only has to adjust a few files. Therefore, the delta between the QCOW2 base images and the snapshots is very small. This is very useful for two reasons:

  • to speed up the VMs and reduce their I/O, thus reducing the test duration;
  • to fit everything on a small SSD drive.

Here are the disk images of an OpenStack infrastructure (MariaDB/Galera, Keepalived, HAProxy, MongoDB, etc.); they remain small:

root@t430gone:/home/goneri# ls -lh /var/lib/libvirt/images/
total 6.4G
-rw------- 1 libvirt-qemu libvirt-qemu 196M Feb 22 01:34 easy-puppet-cloud_os-ci-test10.img
-rw------- 1 libvirt-qemu libvirt-qemu 197M Feb 22 01:27 easy-puppet-cloud_os-ci-test11.img
-rw------- 1 libvirt-qemu libvirt-qemu 198M Feb 22 01:39 easy-puppet-cloud_os-ci-test12.img
-rw------- 1 libvirt-qemu libvirt-qemu 496M Feb 22 01:42 easy-puppet-cloud_os-ci-test1.img
-rw------- 1 libvirt-qemu libvirt-qemu 410M Feb 22 01:42 easy-puppet-cloud_os-ci-test2.img
-rw------- 1 libvirt-qemu libvirt-qemu 406M Feb 22 01:42 easy-puppet-cloud_os-ci-test3.img
-rw------- 1 libvirt-qemu libvirt-qemu 409M Feb 22 01:42 easy-puppet-cloud_os-ci-test4.img
-rw------- 1 libvirt-qemu libvirt-qemu 277M Feb 22 01:42 easy-puppet-cloud_os-ci-test7.img
-rw------- 1 libvirt-qemu libvirt-qemu 197M Feb 22 01:27 easy-puppet-cloud_os-ci-test8.img
-rw------- 1 libvirt-qemu libvirt-qemu 199M Feb 22 01:22 easy-puppet-cloud_os-ci-test9.img
-rwxr--r-- 1 libvirt-qemu libvirt-qemu 2.5G Feb 20 19:45 openstack-full_vagrant_box_image.img
-rwxr--r-- 1 libvirt-qemu libvirt-qemu 976M Feb 20 19:42 puppet-master_vagrant_box_image.img
