Tag Archives: virtualization

CentOS 7 LXC container slow boot

After you have installed a CentOS 7 LXC template on a systemd-supported distro (Fedora 21/22, or CentOS 7 with LXC 1.0.7), you may experience an ultra-slow container boot (more than 5 minutes!). A small change fixes this issue.

In the container config (i.e. /var/lib/lxc/centos7/config) replace:

lxc.include = /usr/share/lxc/config/centos.common.conf

with

lxc.include = /usr/share/lxc/config/fedora.common.conf

This will make the container boot as fast as it should.

centos.common.conf is fine for CentOS 6 but not for CentOS 7: CentOS 7 is based on Fedora 19 and uses systemd, thus fedora.common.conf is the right config file to use.
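The swap can also be done non-interactively with sed. The following is a minimal sketch run on a scratch copy of the config (on a real host, point CONF at /var/lib/lxc/&lt;name&gt;/config instead):

```shell
# Demo on a scratch file; on a real host set CONF=/var/lib/lxc/<name>/config
CONF=$(mktemp)
echo 'lxc.include = /usr/share/lxc/config/centos.common.conf' > "$CONF"

# Swap the CentOS include for the Fedora one
sed -i 's|centos\.common\.conf|fedora.common.conf|' "$CONF"

cat "$CONF"   # lxc.include = /usr/share/lxc/config/fedora.common.conf
```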

Fedora 22 / CentOS 7 LXC: fix systemd-journald process at 100%

Running a Fedora 21, Fedora 22 or RHEL/CentOS 7 LXC container created by the lxc-create Fedora template can result in the systemd-journald process looping at 100% CPU.

To fix this issue, add lxc.kmsg = 0 to the container configuration. This can be done easily for all Fedora-based templates in one shot:

echo "lxc.kmsg = 0" >> /usr/share/lxc/config/fedora.common.conf

See also:

OpenShift Cookbook by Packt Publishing

Author: Shekhar Gulati
Publisher: Packt Publishing
URL: https://www.packtpub.com/virtualization-and-cloud/openshift-cookbook
GitHub: https://github.com/OpenShift-Cookbook

On Amazon: http://www.amazon.com/OpenShift-Cookbook-Shekhar-Gulati/dp/1783981202/

This great book lets you understand the OpenShift technology with very little effort. You don’t need a strong background in virtualization and container technologies, but at the same time it does not bore a skilled user.

The overview of the OpenShift technology and its utilities (rhc) is very clear and easy to follow, also thanks to the OpenShift free profile, which allows you to test and play with the rhc command as you read through the book. More complex tasks like backups, snapshots, and rollbacks are addressed and explained, and security aspects are taken into account as well.

Several real-world examples, with end-to-end recipes, are shown in the book: MySQL, PostgreSQL, and MongoDB for database apps; Python, Java, and Node.js for web-oriented development.

A chapter is dedicated to using Jenkins CI as a continuous integration system for OpenShift apps; this aspect is often overlooked, but it’s very important nowadays.

I would consider the chapter on scaling OpenShift applications the “core” of the book: scaling is a salient characteristic of OpenShift, and it is not always an easy task.

In conclusion, a must-have book if you want to start playing with OpenShift, whether you are a beginner or an experienced user who is not familiar with complex, scalable application deployment.

All the code mentioned in the book is available on their GitHub repo: https://github.com/OpenShift-Cookbook.

Create a new Fedora LXC container using yum

In this tutorial we are going to install an LXC container with Fedora 21 to be run on a Fedora 21 host with libvirt. This can be used to create containers to be managed by my WebVirtMgr web panel.

Install the new filesystem

yum -y --installroot=/var/lib/libvirt/filesystems/fedora21 --releasever=21 --nogpg install systemd passwd yum fedora-release vim openssh-server procps-ng iproute net-tools dhclient less

Create the libvirt domain

virt-install --connect lxc:/// --name fedora21 --ram 512 --filesystem /var/lib/libvirt/filesystems/fedora21/,/

This command will also start the domain. Now it’s time to stop it and do some post install configurations.

Post-installation setup

Press Ctrl + ] to detach from the domain console. Then stop it:

virsh -c lxc:/// shutdown fedora21

Change root password

chroot /var/lib/libvirt/filesystems/fedora21 /bin/passwd root

Setup the hostname

echo "mynewlxc" > /var/lib/libvirt/filesystems/fedora21/etc/hostname

Setup the network

cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network
NETWORKING=yes
EOF
cat << EOF > /var/lib/libvirt/filesystems/fedora21/etc/sysconfig/network-scripts/ifcfg-eth0
BOOTPROTO=dhcp
ONBOOT=yes
DEVICE=eth0
EOF

Setup SSH

chroot /var/lib/libvirt/filesystems/fedora21/
ln -s /usr/lib/systemd/system/sshd.service /etc/systemd/system/multi-user.target.wants/
exit

Start the container

virsh -c lxc:/// start fedora21

Or, if you are using my WebVirtMgr web panel fork, you can start/stop the domain from the panel itself.

The Fedora 21 LXC

Source

Thanks to major.io for the original article, which also contains some important considerations about security.

WebVirtMgr with LXC support


https://github.com/daniviga/webvirtmgr/

WebVirtMgr (by retspen) is a simple but great libvirt frontend written in Python with Django. It currently supports only KVM as hypervisor. However, libvirt can already be used to manage other hypervisors (like Xen) and it also supports LXC containers.

Using libvirt’s container feature, I extended WebVirtMgr in a fork that adds LXC support and other minor improvements (see https://github.com/daniviga/webvirtmgr/commits/master).

LXC support currently has some limitations:

  • The LXC container filesystem must be created manually (this is a libvirt limitation)
  • LXC domain creation isn’t supported yet either (you need to create the XML and define the domain manually; virt-install can be used)
  • Web remote console is under development and not yet ready (some work has been made using butterfly)
  • LXC domain deletion doesn’t remove its filesystem
  • Snapshotting is not supported (another libvirt limitation, it can be done manually with LVM or Btrfs)

But the basic functions work well:

  • Management of remote hosts (via TCP, SSH, TLS, socket)
  • Start, stop, shutdown, pause
  • Autostart
  • CPU and RAM limits assignment
  • Network interfaces management
  • Clone (only the domain, filesystem must be copied manually)

My WebVirtMgr fork also contains some minor differences and improvements compared to the original:

  • The old connections list page (with a table instead of boxes) has been kept
  • It supports a very basic ACL system (for both KVM and LXC). With this feature, non-admin users can be created (via the django-admin interface) that only have access to a pre-defined set of VMs/LXCs. This means that user “foo”, for example, can only start/stop/shutdown or access the remote console of the VM “my_vm”

The installation procedure remains the same as the original project.

Screenshots

This is the connections page backported from WebVirtMgr 4.8.7

The KVM instances view

The LXC instances view

An example of a running LXC container

An LXC domain can be cloned, and a random MAC address can be generated

An example of an LXC deletion

Instance admin interface: you can assign users


WordPress caching with nginx

As part of my friend @zenkay’s post on his blog, here is a simple but efficient nginx configuration for proxying and caching an Apache + WordPress installation.

Assuming that:

  • nginx and Apache are on different nodes: if nginx and Apache are on the same machine, it is highly advised to serve static files directly from nginx;
  • the static files’ expire header is managed by Apache (through mod_expires); if static content is served directly, you need to set the expire through nginx;
  • you have installed the WordPress Nginx proxy cache integrator.
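If you do serve static content directly from nginx instead, the corresponding expires rule could be sketched as follows (the root path and the 30-day duration are assumptions, adjust to your setup):

```nginx
# Hypothetical static-file location served directly by nginx
location ~* \.(css|js|png|jpe?g|gif|ico|svg|woff2?)$ {
    root    /var/www/wordpress;
    expires 30d;
    add_header Cache-Control "public";
}
```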
server {
    # Listen IPv6 and IPv4 socket
    listen       [::]:80; #on Linux this means both IPv4 and IPv6
    # Name-based virtualhosts
    server_name  *.mydomain.com mydomain.com;
 
    # Add Cache-Status debug header on replies
    add_header X-Cache-Status $upstream_cache_status;
 
    # Set the vhost access-log
    access_log  /var/log/nginx/access-mydomain.log  main;
 
    location / {
 
        # Skip^1 caching variable init
        set $nocache 0;
        # Bypass^2 caching variable init
        set $purgecache 0;
 
        # Bypass^2 cache on no-cache (et al.) browser request
        if ($http_cache_control ~ "max-age=0")
            { set $purgecache 1; }
        if ($http_cache_control ~ "no-cache")
            { set $purgecache 1; }
        # Bypass^2 cache with custom header set on request
        if ($http_x_cache_purge ~* "true")
            { set $purgecache 1; }
        # Skip^1 caching when WordPress cookies are set
        if ($http_cookie ~* "comment_author_|wordpress_(?!test_cookie)|wp-postpass_" )
            { set $nocache 1; }
 
        # Cache pool
        proxy_cache             proxy-one;
        # Bypass^2 cache when $purgecache is set to 1.
        # Bypass means that content is served fresh and the cache is updated
        proxy_cache_bypass      $purgecache;
        # Skip^1 caching when $nocache is set to 1
        # Do not cache when browsing frontend as logged user
        proxy_no_cache          $nocache;
        # Define the cache resource identifier. Be careful to add $nocache
        proxy_cache_key         "$scheme$http_host$request_uri$args$nocache";
        proxy_connect_timeout   10;
        proxy_read_timeout      10;
        # use stale cache on backend fault
        proxy_cache_use_stale   error timeout invalid_header updating http_500 http_502 http_503 http_504;
        proxy_cache_valid       200 302 15m;
        proxy_cache_valid       404 1m;
        proxy_set_header        Host             $host;
        proxy_set_header        X-Real-IP        $remote_addr;
        proxy_set_header        X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_pass              http://apache_remote_ip;
    }
 
    location ~* \/blog\/wp\-.*\.php|\/blog\/wp\-admin {
        proxy_cache             off;
        proxy_pass              http://apache_remote_ip;
    }
 
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
 
}

1^ see: http://wiki.nginx.org/HttpProxyModule#proxy_no_cache
2^ see: http://wiki.nginx.org/HttpProxyModule#proxy_cache_bypass

CentOS 5 on KVM: reduce host CPU load

To reduce host CPU usage with a CentOS 5 VM on KVM, it is important to add

divider=10

to grub.conf as a kernel parameter:

kernel /vmlinuz-2.6.18-348.1.1.el5 ro root=LABEL=/ console=ttyS0,115200 divider=10

This will reduce the internal kernel timer from 1000 Hz to 100 Hz.

Although additional parameters are not required, the divider=10 parameter can still be used. Guests with this parameter will produce less CPU load in the host, but will use more coarse-grained timer expiration. (http://s19n.net/articles/2011/kvm_clock.html)
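To confirm (inside the guest) that the parameter is active on the running kernel, a quick check like this can be used (the fallback echo is just for illustration):

```shell
# Look for the divider parameter on the running kernel's command line
OUT=$(grep -o 'divider=[0-9]*' /proc/cmdline || echo "divider not set")
echo "$OUT"
```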

On my MicroServer the CPU load reduction is quite visible:

MicroServer CPU usage (made with http://www.observium.org/)

For more info read http://s19n.net/articles/2011/kvm_clock.html.