VCT container

For information about how to use VCT, refer to Using the Virtual CONFINE Testbed.

Introduction

This setup implements a full VCT environment as a 32-bit Linux container (LXC), where nodes are run as 32-bit kernel-based virtual machines (KVM). Running VCT containers results in a hierarchy of environments like the following one:

  • Host (Linux i686 or x86_64)
    • VCT 1 (LXC i686) …
    • VCT 2 (LXC i686) …
    • VCT 3 (LXC i686)
      • Node 1 (KVM i686) …
      • Node 2 (KVM i686) …
      • Node 3 (KVM i686)
        • Sliver 1 (LXC i686)
        • Sliver 2 (LXC i686)
        • Sliver 3 (LXC i686)

Host requirements (for creation & installation)

The host running VCT containers must support hardware virtualization (check with egrep -q '^flags.*(vmx|svm)' /proc/cpuinfo && echo supported || echo missing), run a recent 32-bit or 64-bit Linux distribution with the LXC tools (Debian package lxc), and have a bridge interface connected to a network with a DHCP server so that containers get access to the Internet.
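The checks above can be combined into a small pre-installation script. This is a sketch, not part of the official tooling; it only wraps the CPU-flag test quoted above and a lookup for the lxc-start binary from the lxc package:

```shell
# Sketch of a pre-installation check for the requirements above.
check_virt() {
    # KVM nodes need VT-x (vmx) or AMD-V (svm) in the CPU flags.
    if egrep -q '^flags.*(vmx|svm)' /proc/cpuinfo 2>/dev/null; then
        echo supported
    else
        echo missing
    fi
}

check_lxc() {
    # The LXC userspace tools (Debian package lxc) must be installed.
    if command -v lxc-start > /dev/null; then
        echo installed
    else
        echo missing
    fi
}

echo "hardware virtualization: $(check_virt)"
echo "LXC tools: $(check_lxc)"
```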

For containers to work, you will need to have the cgroup filesystem mounted. If running mountpoint /sys/fs/cgroup reports that the directory is not a mountpoint, you may run mount -t cgroup cgroup /sys/fs/cgroup as root. To have it automatically mounted on boot, add the following line to /etc/fstab:

cgroup /sys/fs/cgroup cgroup defaults 0 0

For containers to be able to access the Internet using the connection currently configured in the host (with NAT), you may be interested in setting up the VM bridge in your system.
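One way to provide such a NAT bridge on a Debian-style host is an /etc/network/interfaces stanza like the following sketch. The bridge name vmbr matches the container's default lxc.network.link, but the 172.24.42.0/24 range is an assumption taken from the example addresses on this page; you also need the bridge-utils package and a DHCP server listening on the bridge (e.g. dnsmasq), which is not shown here:

```
auto vmbr
iface vmbr inet static
    address 172.24.42.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    # NAT the container network out through the host's default route.
    post-up iptables -t nat -A POSTROUTING -s 172.24.42.0/24 ! -o vmbr -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 172.24.42.0/24 ! -o vmbr -j MASQUERADE
```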

If you do not want to allow VCT containers to load kernel modules themselves, the modules needed by VCT should already be loaded in the host. Currently they are:

  • ip_tables
  • iptable_nat
  • ebtables
  • ebtable_nat
  • kvm
  • kvm_intel or kvm_amd
  • dummy
  • tun
  • fuse

You may load them by running modprobe MODULE… as root. To have them automatically loaded on boot, list each module on its own line in /etc/modules.
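Both steps can be done with a short root script. A sketch: the kvm_intel/kvm_amd choice is derived from the CPU flags, and the rest of the module list comes straight from the list above.

```shell
# Load the modules listed above and register them in /etc/modules
# (run as root; only prints what it would do when run unprivileged).
if egrep -q '^flags.*vmx' /proc/cpuinfo 2>/dev/null; then
    KVM_ARCH=kvm_intel
else
    KVM_ARCH=kvm_amd
fi
VCT_MODULES="ip_tables iptable_nat ebtables ebtable_nat kvm $KVM_ARCH dummy tun fuse"

if [ "$(id -u)" -eq 0 ]; then
    for m in $VCT_MODULES; do
        modprobe "$m"
        # Add the module to /etc/modules unless it is already listed.
        grep -qxF "$m" /etc/modules 2>/dev/null || echo "$m" >> /etc/modules
    done
else
    echo "not root; would load and persist: $VCT_MODULES"
fi
```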

You will also need BSD tar (available in Debian as the bsdtar package), since the standard GNU tar does not support file capabilities: packing and unpacking the container with it may render some tools (like ping) inoperative.

Tested host environments

Working:

  • Debian Sid (unstable) with lxc 1.0.0~alpha1-2.
  • Debian Jessie (testing) with lxc 0.9.0~alpha3-2.
  • Ubuntu 13.04 "Raring Ringtail" with lxc 0.9.0-0ubuntu3.7.
  • Linux Mint 16 "Petra" with lxc 1.0.0~alpha1-0ubuntu14.1.

Not working:

  • Ubuntu 12.04 "Precise Pangolin" with lxc 0.7.5-3ubuntu69.
  • Debian Wheezy with lxc 0.8.0~rc1-8.

Container creation

Note: If you prefer to use a prebuilt VCT container template, you can skip this section and jump straight to Container installation.

The preparation of VCT container images was previously based on live-debconfig integration into the lxc-debian template. However, support for this Debian customization was dropped from LXC and now the preparation relies on the vct-lxc.sh script available in the confine-utils repository. Creating a new container is as easy as cloning the repository and invoking the script as root. It is recommended that you stop any other running VCT containers before proceeding:

$ git clone http://git.confine-project.eu/confine/confine-utils.git
# lxc-stop -n vct
# sh confine-utils/vct/lxc/vct-lxc.sh

The script will check that your system meets the requirements, then create a temporary container, install VCT in it and configure it. Finally, the script will pack the container into an archive and, unless you choose to keep it, remove the container. The resulting archive is named /tmp/vct-container,YYYYMMDDHHMM.tar.xz, where YYYYMMDDHHMM is a timestamp. The whole process can take around an hour.

If you choose not to remove the temporary container, you will see a message like this:

Keeping temporary container: /tmp/tmp.XXXXXXXXXX/vct

You may use this container directly by adding the option -P /tmp/tmp.XXXXXXXXXX to LXC commands, for instance:

# lxc-start -n vct -P /tmp/tmp.XXXXXXXXXX

Container installation

The following instructions assume that there is no other container called vct in the host.

To create a new container from an already created template, download the newest vct-container,YYYYMMDDHHMM.tar.xz archive from https://media.confine-project.eu/vct-container/ (see Template history below). It contains a single vct directory containing the LXC configuration file vct/config and its root filesystem vct/rootfs. You should be able to unpack the archive straight into your LXC directory by running:

# lxc-stop -n vct  # if there is a previous instance running
# lxc-destroy -n vct  # if there is a previous instance
# bsdtar -C /var/lib/lxc --numeric-owner -xJf /path/to/vct-container,YYYYMMDDHHMM.tar.xz
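Each release's SHA256 checksum is listed under Template history below, so you may want to verify the download before unpacking. A sketch for the 201509151029 release (the sum is copied from that list; substitute your actual file path):

```shell
# Verify the downloaded archive against its published SHA256 sum.
SUM=2cbca878880968e349c3df4dea8bbf58d6578a209cb3f518a5129d9a3221820e
FILE=/path/to/vct-container,201509151029.tar.xz
if [ -f "$FILE" ]; then
    echo "$SUM  $FILE" | sha256sum -c -
else
    echo "archive not found: $FILE"
fi
```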

Usually, if you only have a single VCT container it will need no further configuration, although you may want to fine-tune options in the config file to your liking.

If you unpacked the archive somewhere else or used a different container name, edit the config file and replace all occurrences of /var/lib/lxc/vct. If your bridge is not called vmbr, change lxc.network.link. If you are already running another container based on the same template, you may need to change the lxc.network.hwaddr MAC address and the lxc.network.veth.pair interface name.
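For example, a second copy of the template unpacked as /var/lib/lxc/vct2 could be adapted as below. This is a sketch: the name vct2, the bridge br0, the veth name and the MAC address are all hypothetical values of your choosing; only the lxc.network.* option names come from the paragraph above.

```shell
# Adjust a cloned container's config (hypothetical name: vct2).
NAME=vct2
CFG=/var/lib/lxc/$NAME/config
if [ -f "$CFG" ]; then
    sed -i \
        -e "s|/var/lib/lxc/vct/|/var/lib/lxc/$NAME/|g" \
        -e "s|^lxc\.network\.link *=.*|lxc.network.link = br0|" \
        -e "s|^lxc\.network\.veth\.pair *=.*|lxc.network.veth.pair = veth-$NAME|" \
        -e "s|^lxc\.network\.hwaddr *=.*|lxc.network.hwaddr = 52:c0:ca:fe:00:02|" \
        "$CFG"
else
    echo "no such config file: $CFG"
fi
```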

Usage

After booting the container (e.g. lxc-start -n vct), you may log in at its console as vct (password confine). There you may invoke ip addr show dev eth0 to get the container's IP address, for instance:

vct@vct:~$ ip addr show dev eth0
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:c0:ca:fe:ba:be brd ff:ff:ff:ff:ff:ff
    inet 172.24.42.141/24 brd 172.24.42.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::50c0:caff:fefe:babe/64 scope link
       valid_lft forever preferred_lft forever

If you prefer to log in via SSH (which may give you a saner terminal), you may use ssh vct@172.24.42.141. To have vct as a shorthand for its IP address, you may add it to your computer's /etc/hosts:

# echo "172.24.42.141 vct" >> /etc/hosts

Once you have logged into the VCT container, you will find a CONFINE distribution working tree at ~/confine-dist, with VCT under utils/vct. You can check out a different branch or a particular commit and then use VCT normally:

$ # If using the version of VCT already present in the container,
$ # jump to "Use VCT".

$ # Before updating VCT you may want to remove config and files.
$ vct_system_cleanup
$ sudo rm -rf /var/lib/vct
$ # Get latest version of development code.
$ cd ~/confine-dist
$ git checkout testing
$ git pull
$ # An example of using a particular version of the controller.
$ cd ~/confine-dist/utils/vct
$ echo 'VCT_SERVER_VERSION=0.9' >> vct.conf.overrides
$ # Install any new dependencies needed by VCT.
$ vct_system_install
$ # Initialize VCT.
$ vct_system_init

$ # You may build a new node base image from the code you just checked out
$ # instead of using the included one.
$ vct_build_node_base_image

$ # Use VCT.
... Connect to VCT's controller web UI (user: vct, password: vct),
    register nodes, generate their firmware, set to production ...
$ # Create the virtual node registered with ID=1 and start it.
$ vct_node_create 0001
$ vct_node_start 0001
... Create slices and slivers in the web UI, start them ...
$ # Log into sliver of slice with ID=3 running on node with ID=1
$ # (address available at interface list in sliver page).
$ ssh -i /var/lib/vct/keys/id_rsa root@fd65:fc41:c50f:1:1001::3
$ # Stop and remove a virtual node.
$ vct_node_stop 0001
$ vct_node_remove 0001

Template history

  • 201509151029 (SHA256: 2cbca878880968e349c3df4dea8bbf58d6578a209cb3f518a5129d9a3221820e)
    • Use confine-controller 1.0.2 without workarounds for Jessie installation. This should avoid possibly broken Controller apps because of non-working pip.
  • 201507291401 (SHA256: 291555885d2c6f20daa1ffea3b7bf060af25ccf134227174807172f752d40be0)
    • Completely new release based on Debian Jessie using the vct-lxc.sh script from confine-utils.
    • Use confine-controller 1.0.1 and confine-dist latest Master image from 2015-07-15, which together implement the stable CONFINE architecture from the Confined milestone.
    • Fix issue with retired DNS servers from the Swiss Privacy Foundation.
  • 2014022000 (SHA256: e0ef0be30ecf26f2644969cabd6532f43035e1ef0f9e83dc9387ebe9e2bbf79d)
    • This version of the container includes everything needed to learn, develop and test CONFINE software: up-to-date Git clones of confine-dist/VCT, confine-controller, confine-utils and confine-orm, together with confine-tests, plus all the dependencies needed to develop them.
    • Include mutt for reading local mail notifications, ca-certificates, inetutils-syslogd and psmisc.
    • Use confine-controller version 0.10.2.
  • 2013080200 (SHA256: 1a36e6945bef2ae7f191d65fed330be47318f2328c2e24f766f103c8e9e1a7f3)
    • VCT commands are in the vct user's PATH.
    • Mails sent to USER+EXT@localhost are delivered locally.
    • Use confine-controller 0.9.3a11 which supports the configuration of the SSH keys included in node firmware images.
    • The included VCT uses real node images which are enlarged and automatically partitioned on first boot by using confine.disk-parted.
  • 2013071600 (SHA256: c3361401d5d4115b8040419b780d50f546e6662b4d3528a33064b628356d678e)
    • Use the unconfined profile under AppArmor.
    • Automatically run vct_system_init on container boot.
    • Use confine-controller 0.9.1.
  • 2013061400 (SHA256: 2326687121aefe9090824aeb1fb7850dc9517c88cf8a64168f3420df1f7552d4)
    • Do not drop sys_boot capability.
    • Mount proc and sys read-write.
    • This version includes a full installation of VCT as resulting from vct_system_install, so it includes all software dependencies and downloaded files. As a result, VCT can be used out-of-the-box with vct_system_init.
  • 2013052700 (SHA256: f03511faf83f15bd211d33769cb5e87a28dd6a92cbdc98ced947fe6f4d83f627)
    • Updated to Debian 7.0 Wheezy.
    • Use confine-controller 0.9a5.
    • Install libevent-dev instead of python-gevent, now built during controller installation.
  • 2013032800 (SHA256: 53c68261cfacdb9bcdedb5ae035440c83ba0cca56c70a61156b9d6c58ccfd976)
    • Use 32-bit binaries (i386 Debian architecture) so the container can run under both 32-bit and 64-bit hosts.
    • Rename the confine user to vct (the password is still confine).
    • Use confine-controller 0.8.8a0.
    • Include file package required by confine-controller.
  • 2013031100 (SHA256: 91da2b29a7faadbe726672a06bd0709983289c76f38f6c6832691c3204c86fce)
    • Include less and nano packages.
    • Set system locale to en_US.UTF-8.
    • Use confine-controller 0.8a36.
    • Add the confine user to group fuse (for firmware generation in controller).
    • Include confine-controller Debian package dependencies.
  • 2013030400 (SHA256: 230f1e7808ed6fbed5f44a63fa16fe9c533c1e55a705ad66d7509229adaaf3e1)
    • Add support for CONFINE controller software.
  • 2013022100 (SHA256: b899ac6c9f7aa87809797ab24c386275c0ce6a553bfa7db54aa17b8bb317701b)
  • 2013021100 (SHA256: 869ec330ab3112a98340e10fe1248add9ff5d4bc491dc24daaa0e5d183e0020d)
    • Container created using live-debconfig instead of linux-container.
    • Clone Redmine Git repo instead of GitHub's.
    • Include tinc, screen and w3m packages.
    • Include uci binary.
  • 2012051700 (SHA256: 568b46df3536309745415ff25805845ca18377e8a638e0087e303bf13c5cc9e5)
    • Initial version.
soft/vct-container.txt · Last modified: 2015/09/15 15:21 by ivilata