Tag Archives: libvirt

OpenShift Cookbook by Packt Publishing

Author: Shekhar Gulati
Publisher: Packt Publishing
URL: https://www.packtpub.com/virtualization-and-cloud/openshift-cookbook
GitHub: https://github.com/OpenShift-Cookbook

On Amazon: http://www.amazon.com/OpenShift-Cookbook-Shekhar-Gulati/dp/1783981202/

This great book lets you understand the OpenShift technology with very little effort. You don’t need a strong background in virtualization and container technologies, but at the same time it does not bore a skilled user.

The overview of the OpenShift technology and its utilities (rhc) is very clear and easy to follow, also thanks to the OpenShift free profile, which allows you to test and play with the rhc command while reading the book. More complex tasks like backups, snapshots, and rollbacks are addressed and explained, and the security aspects are taken into account as well.

Several real-world examples, with end-to-end recipes, are shown in the book: MySQL, PostgreSQL, and MongoDB for database apps; Python, Java, and Node.js for web-oriented development.

A chapter is dedicated to using Jenkins CI as a Continuous Integration system for OpenShift apps; this is an aspect that is often not taken into account, but it’s very important nowadays.

I would consider the chapter on scaling OpenShift applications the “core” of the book: scaling is a salient characteristic of OpenShift, and it is not always an easy task.

In conclusion, a must-have book if you want to start playing with OpenShift, whether you are a beginner or an experienced user who is not familiar with complex, scalable application deployments.

All the code mentioned in the book is available on their GitHub repo: https://github.com/OpenShift-Cookbook.

Create a new Fedora LXC container using yum

In this tutorial we are going to install an LXC container with Fedora 21 to be run on a Fedora 21 host with libvirt. This can be used to create containers to be managed by my WebVirtMgr web panel.

Install the new filesystem

yum -y --installroot=/var/lib/libvirt/filesystems/fedora21 --releasever=21 --nogpg install systemd passwd yum fedora-release vim openssh-server procps-ng iproute net-tools dhclient less

Create the libvirt domain

virt-install --connect lxc:/// --name fedora21 --ram 512 --filesystem /var/lib/libvirt/filesystems/fedora21/,/

This command will also start the domain. Now it’s time to stop it and do some post-install configuration.

Post-installation setup

Press Ctrl + ] to detach from the domain console. Then stop it:

virsh -c lxc:/// shutdown fedora21

Change root password

chroot /var/lib/libvirt/filesystems/fedora21 /bin/passwd root

Setup the hostname

echo "mynewlxc" > /var/lib/libvirt/filesystems/fedora21/etc/hostname

Setup the network

cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network
NETWORKING=yes
EOF

cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
EOF

Setup SSH

chroot /var/lib/libvirt/filesystems/fedora21/
ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/

Start the container

virsh -c lxc:/// start fedora21

Or, if you are using my WebVirtMgr web panel fork, you can start / stop the domain from there.

The Fedora 21 LXC container


Thanks to major.io for the original article, which also contains some important considerations about security.

WebVirtMgr with LXC support

This is the connections page backported from WebVirtMgr 4.8.7


WebVirtMgr (by retspen) is a simple but great libvirt frontend written in Python with Django. It currently supports only KVM as a hypervisor. However, libvirt can already be used to manage other hypervisors (like Xen), and it also supports LXC containers.

Using libvirt’s container feature I extended WebVirtMgr, creating a fork which adds LXC support and other minor improvements (see https://github.com/daniviga/webvirtmgr/commits/master).

LXC support currently has some limitations:

  • The LXC container filesystem must be created manually (this is a libvirt limitation)
  • LXC domain creation isn’t supported either right now (you need to create the XML and define the domain manually; virt-install can be used)
  • The web remote console is under development and not yet ready (some work has been done using butterfly)
  • LXC domain deletion doesn’t remove its filesystem
  • Snapshotting is not supported (another libvirt limitation, it can be done manually with LVM or Btrfs)

But the basic functions work well:

  • Management of remote hosts (via TCP, SSH, TLS, socket)
  • Start, stop, shutdown, pause
  • Autostart
  • CPU and RAM limits assignment
  • Network interfaces management
  • Clone (only the domain, filesystem must be copied manually)

My WebVirtMgr fork also contains some minor differences and improvements compared to the original:

  • The old connections list page (with a table instead of boxes) has been kept
  • It supports a very basic ACL system (for both KVM and LXC). With this feature, non-admin users can be created (using the django-admin interface) that only have access to a pre-defined set of VMs/LXCs. This means that user “foo”, for example, can only start/stop/shutdown or access the remote console of the VM “my_vm”

The installation procedure remains the same as the original project.


This is the connections page backported from WebVirtMgr 4.8.7

The KVM instances view

The LXC instances view

An example of a running LXC container

An LXC domain can be cloned, and a random MAC address can be generated

An example of an LXC deletion

Instance admin interface: you can assign users


CentOS 5 on KVM: reduce host CPU load

To reduce host CPU usage with a CentOS 5 VM on KVM, it is important to add the divider=10 kernel parameter to grub.conf:

kernel /vmlinuz-2.6.18-348.1.1.el5 ro root=LABEL=/ console=ttyS0,115200 divider=10

This will reduce the internal kernel timer from 1000 Hz to 100 Hz.

Although additional parameters are not required, the divider=10 parameter can still be used. Guests with this parameter will produce less CPU load in the host, but will use more coarse-grained timer expiration. (http://s19n.net/articles/2011/kvm_clock.html)

On a MicroServer the CPU load reduction is quite visible:

MicroServer CPU usage (made with http://www.observium.org/)

For more info read http://s19n.net/articles/2011/kvm_clock.html.


Compressing qcow2 images as much as possible

Here is a short tutorial on compressing qcow2 images as much as possible.

The main steps are:

  1. Mount the filesystem read-only, or activate the device on the host
  2. Run zerofree
  3. Recompress the image

The requirements are:

  1. zerofree (http://intgat.tigress.co.uk/rmy/uml/index.html, or yum install zerofree)
  2. qemu-img (available in the qemu package)
  3. qemu-nbd (available in the qemu package)
  4. The nbd kernel module

zerofree works by overwriting all unused filesystem blocks with zeros, so that the compress feature of the qcow2 format can be exploited at its best. For more details see the zerofree site (http://intgat.tigress.co.uk/rmy/uml/index.html) and the qcow2 format description (http://people.gnome.org/~markmc/qcow-image-format.html).

To use zerofree on an existing qcow2 image, two approaches can be applied:

  1. Run zerofree inside the guest. This requires the filesystem you want to operate on to be mounted read-only, and the zerofree program to be available inside the guest.
    To remount a filesystem read-only it is normally enough, having local (not SSH) access to the VM, to enter runlevel 1 (init 1) and run the command

    mount -o remount,ro /
  2. Run zerofree from the host. qcow2 images cannot be mounted directly on the host through a mount offset or via kpartx; for this reason we need Network Block Device (nbd) support. The advantages are that everything can be done from the host without booting the VMs, and that the zerofree command is needed only on the host.

Specifically, we will use solution number 2.
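The host-side steps of solution 2 can be sketched as follows. This is a sketch only: the image path, the nbd device, and the partition number are illustrative, and it assumes the guest root filesystem is ext3/ext4 on the first partition of the image. All commands run as root on the host.

```shell
# Sketch of solution 2: zerofree + recompression, entirely from the host.
modprobe nbd max_part=8                      # load the nbd kernel module
qemu-nbd -c /dev/nbd0 /path/to/image.qcow2   # expose the qcow2 image as a block device
zerofree /dev/nbd0p1                         # zero the unused blocks of the first partition
qemu-nbd -d /dev/nbd0                        # disconnect the image from nbd
# rewrite the image with the qcow2 compress feature enabled
qemu-img convert -O qcow2 -c /path/to/image.qcow2 /path/to/image-compressed.qcow2
```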

Converting a RHEL5 VM from IDE to VirtIO

I recently needed to convert a KVM virtual machine running Red Hat Enterprise Linux 5.7, originally installed with IDE emulation, to the better-performing VirtIO I/O layer.

The RHEL5 kernel (and derivatives), although somewhat old, already supports VirtIO for disks and for the network, as well as generically for PCI and channels.

First of all, with the machine still booted in IDE mode, you need to rebuild the initrd adding support for the desired virtio modules. Remember to include virtio_blk for storage, otherwise the modified VM will not be able to boot from a VirtIO disk.

As root, run:

mkinitrd --with virtio_pci --with virtio_blk --with virtio_net --with virtio_balloon --with virtio_console --with virtio -f /boot/initrd-$(uname -r).img $(uname -r)

Once the machine is stopped, using virsh or the virt-manager GUI it is enough to remove the IDE disk and controller and re-add the disk, selecting virtio as its bus.

If the RHEL5 installation uses, as it does by default, LABELs (or a UUID) to identify disk partitions, the machine is ready to be used: just start it.
If grub or fstab use device names (e.g. hda2), they must be changed to the vd naming scheme (e.g. hda[n] -> vda[n], hdb -> vdb, etc.)
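That renaming can be sketched with sed. The fstab content below is illustrative, and the script works on a temporary copy rather than touching the real /etc/fstab:

```shell
#!/bin/sh
# Sketch: rewrite hdX device names to vdX in a copy of fstab.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/hda2  /      ext3  defaults  1 1
/dev/hda1  /boot  ext3  defaults  1 2
/dev/hdb1  /data  ext3  defaults  1 2
EOF
# hd followed by a drive letter becomes vd, keeping letter and partition
sed -i 's|/dev/hd\([a-z]\)|/dev/vd\1|g' "$tmp"
cat "$tmp"
rm -f "$tmp"
```

The same substitution applies to the kernel root= argument in grub.conf when it references a device name instead of a LABEL or UUID.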

Creating a KVM virtual machine

To create a new virtual machine with KVM, the advice is to use the libvirtd infrastructure and the very useful virt-install.

On Fedora, the yum command is enough to install both:

yum install libvirt python-virtinst

An example installing Scientific Linux 6 (a RHEL 6 clone) directly from a mirror (in this case GARR):

virt-install -n sl6 -r 1024  --disk /var/lib/libvirt/images/sl6.qcow2,size=20,format=qcow2 --vcpus=1 --os-type linux --os-variant=rhel6 --network bridge=br0 --vnc --location='http://rm.mirror.garr.it/mirrors/scientific/6/x86_64/os/'  --vnclisten=

A text-mode installation, without VNC, is also possible thanks to virsh’s console command:

virt-install -n sl6 -r 1024  --disk /var/lib/libvirt/images/sl6.qcow2,size=20,format=qcow2 --vcpus=1 --os-type linux --os-variant=rhel6 --network bridge=br0 --location='http://rm.mirror.garr.it/mirrors/scientific/6/x86_64/os/' --extra-args 'console=ttyS0,115200'
virsh console sl6

Update: to install Fedora 16 and later in serial console mode, some extra parameters must be passed via --extra-args. These are serial and text.

virt-install -n f16-server -r 1024  --disk /var/lib/libvirt/images/f16_server.qcow2,size=40,format=qcow2 --vcpus=2 --os-type linux --os-variant=fedora16 --network bridge=br0 --location='http://mirror1.mirror.garr.it/mirrors/fedora/linux/releases/16/Fedora/x86_64/os/' --extra-args 'console=ttyS0,115200 text'
virsh console f16-server