For information about how to use VCT, refer to Using the Virtual CONFINE Testbed.
This setup implements a full VCT environment as a 32-bit Linux container (LXC), where nodes are run as 32-bit kernel-based virtual machines (KVM). Running VCT containers results in a hierarchy of environments like the following one:
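A rough sketch of that hierarchy (the node and sliver layers are inferred from the usage example later on this page, so take the labels as illustrative):

host: 32-bit or 64-bit Linux with the LXC tools and a bridge (vmbr)
└─ vct: the 32-bit LXC container holding the VCT environment
   └─ virtual nodes: 32-bit KVM machines (vct_node_create, vct_node_start)
      └─ slivers: containers running on the virtual nodes, belonging to slices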
The host running VCT containers will need to support hardware virtualization (you may check this by running egrep -q '^flags.*(vmx|svm)' /proc/cpuinfo && echo supported || echo missing), run a recent 32-bit or 64-bit Linux distribution with the LXC tools (Debian package lxc), and include a bridge interface connected to a network with a DHCP server to provide containers with access to the Internet.
For containers to work, you will need to have the cgroup filesystem mounted. If mountpoint /sys/fs/cgroup reports that the directory is not a mountpoint, you may run mount -t cgroup cgroup /sys/fs/cgroup as root. To have it automatically mounted on boot, add the following line to /etc/fstab:

cgroup /sys/fs/cgroup cgroup defaults 0 0
For containers to be able to access the Internet using the connection currently configured in the host (with NAT), you may be interested in setting up the VM bridge in your system.
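A minimal sketch of such a bridge on a Debian-style host using ifupdown, assuming the physical interface is eth0 and the bridge-utils package is installed (the interface name and DHCP setup are illustrative, not mandated by VCT):

# /etc/network/interfaces excerpt: a bridge named vmbr attached to eth0
auto vmbr
iface vmbr inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

With this in place, containers attached to vmbr obtain their addresses from the same DHCP server as the host.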
If you do not want to allow VCT containers to explicitly load modules, the modules needed by VCT should already be loaded in the host. You may load them by running modprobe MODULE… as root. To have them automatically loaded on boot, you may put each module on its own line in /etc/modules.
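A sketch of both steps; the module names below (kvm, tun, fuse) are merely plausible examples for a KVM-based setup, not the authoritative list:

# Load the modules now (run as root); substitute the actual module names.
for m in kvm tun fuse; do
    modprobe "$m"
done
# Have them loaded automatically on boot.
printf '%s\n' kvm tun fuse >> /etc/modules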
You will also need BSD tar (available in Debian as the bsdtar package), since the standard GNU tar does not support file capabilities, thus packing and unpacking the container with it may render some tools unusable.
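For instance, to repack the container while preserving file capabilities (paths below are illustrative):

# bsdtar stores the extended attributes that hold file capabilities,
# which GNU tar would silently drop.
bsdtar -C /var/lib/lxc --numeric-owner -cJf vct-container.tar.xz vct
# Optional check that capabilities survived in the unpacked tree:
getcap -r /var/lib/lxc/vct/rootfs/bin 2>/dev/null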
Note: If you prefer to use a prebuilt VCT container template, you can skip this section and jump straight to Container installation.
The preparation of VCT container images was previously based on
live-debconfig integration into the
lxc-debian template. However, support
for this Debian customization was dropped from LXC and now the preparation
relies on the
vct-lxc.sh script available in the confine-utils
repository. Creating a new container is as easy as cloning the repository and
invoking the script as
root. It is recommended that you stop any other
running VCT containers before proceeding:
$ git clone http://git.confine-project.eu/confine/confine-utils.git
# lxc-stop -n vct
# sh confine-utils/vct/lxc/vct-lxc.sh
The script will check that your system matches the needed requirements, and
then create a temporary container, install VCT in it and configure it.
Finally, the script will pack the container into an archive and remove the
container if you do not want to keep it. The resulting archive will be named
vct-container,YYYYMMDDHHMM.tar.xz, where YYYYMMDDHHMM is a timestamp.
The whole process can take around an hour.
If you choose not to remove the temporary container, you will see a message like this:
Keeping temporary container: /tmp/tmp.XXXXXXXXXX/vct
You may use this container directly by adding the option -P /tmp/tmp.XXXXXXXXXX
to LXC commands, for instance:
# lxc-start -n vct -P /tmp/tmp.XXXXXXXXXX
The following instructions assume that there is no other container called
vct in the host.
To create a new container from an already created template, download the
vct-container,YYYYMMDDHHMM.tar.xz archive (see Template history below). It
contains a single vct directory holding the LXC configuration file
vct/config and the container's root filesystem vct/rootfs. You should be
able to unpack the archive straight into your LXC directory by running:
# lxc-stop -n vct     # if there is a previous instance running
# lxc-destroy -n vct  # if there is a previous instance
# bsdtar -C /var/lib/lxc --numeric-owner -xJf /path/to/vct-container,YYYYMMDDHHMM.tar.xz
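To check that LXC now sees the container, you may use standard LXC 1.x commands (nothing VCT-specific):

# lxc-ls               # the list should include vct
# lxc-info -n vct -s   # should report the container state as STOPPED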
Usually, if you only have a single VCT container it will need
no further configuration, although you may want to fine-tune options in the
config file to your liking.
If you unpacked somewhere else or used a different container name, edit the
config file and replace all occurrences of /var/lib/lxc/vct accordingly. If
your bridge is not called vmbr, change the lxc.network.link option. If you
are already running another container using the same template, you may need
to change the lxc.network.hwaddr MAC address.
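As a sketch, assuming you unpacked under /srv/lxc, named the container vct2 and use a bridge called br0 (all three names are hypothetical):

# Point the configuration at the new location (run as root).
sed -i 's#/var/lib/lxc/vct#/srv/lxc/vct2#g' /srv/lxc/vct2/config
# Attach the container to your own bridge instead of vmbr.
sed -i '/^lxc.network.link/s/vmbr/br0/' /srv/lxc/vct2/config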
After booting the container (e.g. with lxc-start -n vct), you may log in at its
console (user vct, password confine). There you may invoke ip addr show dev
eth0 to get the container's IP address, for instance:
vct@vct:~$ ip addr show dev eth0
10: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:c0:ca:fe:ba:be brd ff:ff:ff:ff:ff:ff
    inet 172.24.42.141/24 brd 172.24.42.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::50c0:caff:fefe:babe/64 scope link
       valid_lft forever preferred_lft forever
If you prefer to log in via SSH (which may give you a saner terminal), you may
run ssh vct@172.24.42.141 (using the address obtained above). To have
vct as a shorthand for its IP address, you may add it to your computer's
/etc/hosts file:
# echo "172.24.42.141 vct" >> /etc/hosts
Once you have logged into the VCT container, you will find a CONFINE
distribution working tree at ~/confine-dist, with VCT under
~/confine-dist/utils/vct. You can check out a different branch or a
particular commit and then use VCT as in the following example session:
$ # If using the version of VCT already present in the container,
$ # jump to "Use VCT".

$ # Before updating VCT you may want to remove config and files.
$ vct_system_cleanup
$ sudo rm -rf /var/lib/vct

$ # Get latest version of development code.
$ cd ~/confine-dist
$ git checkout testing
$ git pull

$ # An example of using a particular version of the controller.
$ cd ~/confine-dist/utils/vct
$ echo 'VCT_SERVER_VERSION=0.9' >> vct.conf.overrides

$ # Install any new dependencies needed by VCT.
$ vct_system_install
$ # Initialize VCT.
$ vct_system_init

$ # You may build a new node base image from the code you just checked out
$ # instead of using the included one.
$ vct_build_node_base_image

$ # Use VCT.
... Connect to VCT's controller web UI (user: vct, password: vct),
    register nodes, generate their firmware, set to production ...

$ # Create the virtual node registered with ID=1 and start it.
$ vct_node_create 0001
$ vct_node_start 0001

... Create slices and slivers in the web UI, start them ...

$ # Log into sliver of slice with ID=3 running on node with ID=1
$ # (address available at interface list in sliver page).
$ ssh -i /var/lib/vct/keys/id_rsa root@fd65:fc41:c50f:1:1001::3

$ # Stop and remove a virtual node.
$ vct_node_stop 0001
$ vct_node_remove 0001
Template history:

- confine-controller 1.0.2 installed without workarounds for Jessie (avoiding Controller apps possibly broken by those non-working workarounds), together with the latest confine-dist master image from 2015-07-15; together they implement the stable CONFINE architecture from the Confined milestone.
- confine-orm, together with confine-tests, plus all the dependencies needed to develop them.
- mutt for reading local mail notifications; messages to USER+EXT@localhost are delivered locally.
- confine-controller 0.9.3a11, which supports the configuration of the SSH keys included in node firmware images.
- The container runs with the unconfined profile under AppArmor.
- vct_system_init is run on container boot.
- The container is packed after running vct_system_install, so it includes all software dependencies and downloaded files. As a result, VCT can be used out of the box without further downloads.
- python-gevent is now built during controller installation.
- The container uses the i386 Debian architecture, so it can run under both 32-bit and 64-bit hosts.
- The login user is now vct (the password is still confine).
- The file package required by …
- The confine user added to group fuse (for firmware generation in the controller).
- confine-controller Debian package dependencies.
- The VM bridge (vmbr) is used instead of …