You can follow these instructions to set up your own Virtual CONFINE Testbed (VCT), which will allow you to prepare and run your applications locally before you jump into real testbeds. We will be using a VCT container to ease the deployment of VCT on your computer as a self-contained virtual machine (a Linux container).
Besides these instructions, you may also want to follow some CONFINE tutorials that will guide you through preparing the VCT environment and running your applications on it.
This assumes a host running a distribution based on Debian (at least Jessie) or Ubuntu (at least Raring) and a user with root (or sudo) access.
Your CPU must support hardware virtualization (Intel VT-x or AMD-V). The following command should show this output:
$ egrep -q '^flags.*(vmx|svm)' /proc/cpuinfo && echo supported || echo missing
supported
If it shows missing, you will not be able to run the VCT container.
Install the lxc package as root:
# apt-get install lxc
The VCT container is known to work with lxc versions at least 0.9.0~alpha3-2 for Debian and at least 0.9.0-0ubuntu3.7 for Ubuntu.
Make sure that you have the cgroup file system mounted. The following command should show this output:
$ mountpoint /sys/fs/cgroup
/sys/fs/cgroup is a mountpoint
Otherwise you may either:
- run mount -t cgroup cgroup /sys/fs/cgroup as root, or
- add the following line to /etc/fstab as root, and reboot your computer:
cgroup /sys/fs/cgroup cgroup defaults 0 0
You should have a bridge interface which either makes containers appear as normal hosts in your local network, or masquerades them as your host computer. Please note that this document assumes an internal bridge interface called vmbr; you will need to adjust your steps if your bridge has a different name.
If you do not have such a bridge, follow the instructions described in VM bridge.
To have the bridge up, you may either:
- bring it up manually (e.g. ifup vmbr as root), or
- add the line auto vmbr to /etc/network/interfaces before the iface vmbr… line as root, and reboot your computer.
The VCT container is unable to load modules for security reasons, so the following modules must be loaded before starting it:
- kvm
- kvm_intel or kvm_amd (depending on the brand of your CPU)
You can check which modules are already loaded by running lsmod | grep MODULE. Then you may either:
- run modprobe MODULE1 MODULE2… as root, or
- add the module names to /etc/modules (one per line) as root, and reboot your computer.
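The check-then-load procedure above can be sketched as follows (the kvm_module helper is hypothetical, not part of VCT):

```shell
#!/bin/sh
# Hypothetical helper: map the CPU's vendor_id line from /proc/cpuinfo
# to the matching KVM module name.
kvm_module() {
    case "$1" in
        *GenuineIntel*) echo kvm_intel ;;
        *AuthenticAMD*) echo kvm_amd ;;
        *) echo "unknown CPU vendor" >&2; return 1 ;;
    esac
}

# Example usage (the modprobe must run as root):
#   modprobe "$(kvm_module "$(grep -m1 '^vendor_id' /proc/cpuinfo)")"
```

Loading kvm_intel or kvm_amd with modprobe also pulls in the kvm module they depend on.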
The following instructions assume that the path of LXC containers is /var/lib/lxc (the default; see /etc/lxc/lxc.conf) and the name of the VCT container is vct.
Before installing the VCT container, you must make sure that there is no other running container named vct. Run the following command as root:
# lxc-stop -n vct
lxc-stop: failed to open log file "/var/lib/lxc/vct/vct.log" : No such file or directory
vct is not running
In the example above the container was not running (in fact, there was no such container at all).
If you already have a (now stopped) vct container, you may either:
- rename it by running mv /var/lib/lxc/vct /var/lib/lxc/vct.old as root (please note that the container will not work while renamed unless you adapt its configuration file), or
- remove it by running lxc-destroy -n vct as root.
Download the latest VCT container template from the repository of VCT container templates. The downloaded file will be named vct-container,YYYYMMDDNN.tar.xz. Unpack the container template under /var/lib/lxc as root:
# tar -C /var/lib/lxc --numeric-owner -xJf /path/to/vct-container,YYYYMMDDNN.tar.xz
If you are using different LXC paths, container or bridge names, or if you intend to run several VCT containers simultaneously, you should adapt /var/lib/lxc/vct/config at this point.
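For instance, if you renamed the container directory to /var/lib/lxc/vct2 and your bridge is called br0, the relevant entries in the config file might look like this (a sketch assuming the lxc 0.9 configuration key names):

```
lxc.utsname = vct2
lxc.rootfs = /var/lib/lxc/vct2/rootfs
lxc.network.link = br0
```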
Start the container by running lxc-start -n vct as root. Please leave this terminal open while the container is running.
Use this chance to note down the IPv4 address of the container (VCT_ADDRESS) among DHCP boot messages like:
Configuring network interfaces...
Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/eth0/52:c0:ca:fe:ba:be
Sending on   LPF/eth0/52:c0:ca:fe:ba:be
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPOFFER from 172.24.42.1
DHCPACK from 172.24.42.1
bound to 172.24.42.141 -- renewal in 388779 seconds.  ◀ here is the address
done.
You may also log in as vct into the container console that appears at the end of the container boot process (see Logging into VCT), and run:
$ ip a s dev eth0 | sed -rne 's/^.* inet ([.0-9]+).*/\1/p'
172.24.42.141
VCT_ADDRESS in both examples above is 172.24.42.141. It will later be used to contact the VCT container in several ways.
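The same sed expression can be wrapped in a small helper to capture the address into a shell variable (a sketch; first_inet is a hypothetical name):

```shell
#!/bin/sh
# Hypothetical helper: read `ip a s dev eth0` output on stdin and print
# the first IPv4 address found.
first_inet() {
    sed -rne 's/^.* inet ([.0-9]+).*/\1/p' | head -n1
}

# Example usage inside the container:
#   VCT_ADDRESS=$(ip a s dev eth0 | first_inet)
```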
If the container gets stuck for too long while booting (e.g. when starting celerybeat), see Stopping the VCT container.
On subsequent container runs you may start the container with lxc-start -d -n vct so that it goes into the background and you can close the root terminal. The container's VCT_ADDRESS should remain the same.
The VCT container has two system users, root and vct (both with password confine), which you may use to log into the container via its console or over SSH to VCT_ADDRESS. The vct user receives all mail addressed to vct@localhost (including extended addresses like vct+foo@localhost), which you may read with a mail reader inside the container.
The VCT container runs a CONFINE controller whose web interface is available at http://VCT_ADDRESS/admin/. You can log from the host computer into the web interface and the API as one of the different existing users depending on what you want to do:
- member is a plain user in the vct group with no particular role.
- researcher (a slice administrator) can manage slices in the vct group.
- technician (a node administrator) can manage nodes in the vct group.
- admin (a group administrator) can manage slices, nodes and other users in the vct group.
- vct is a testbed superuser who can do anything.
To start using VCT for testing your applications you first need to register, create and start some virtual nodes. The registration procedure is similar to that of a real node (see Registering a node), but simpler since all virtual nodes have the same “hardware” configuration (already loaded as VCT defaults) with an 8 GiB (sparse) hard disk, 64 MiB of RAM and three Ethernet interfaces: an eth0 local interface connected to VCT's local network, and two direct interfaces (eth1 and eth2), each connected to its own network (see Node architecture for more information on node interfaces). Thus all ethX interfaces with the same name in virtual nodes are on the same link (implemented as a bridge).
To register a node:
- Log as technician into the controller web interface. You are presented with the dashboard.
- Add a new node in the group vct, scroll to the bottom and click on Save and continue editing.
- The controller warns about some missing configuration items. Generating the firmware for the node fills them automatically.
To start the virtual node, click on Start VM. The VM page shows node start messages and its state. Please note that starting the virtual node for the first time may take some minutes. You can refresh the VM state display by clicking repeatedly on VM info. The node has finished its setup when there is data under the rtt column after management, i.e. when the node becomes available in the testbed management network.
Once the virtual node is up and running you need to put it into production, which is done as with real nodes.
Repeating the previous steps allows you to create several virtual nodes. On subsequent VCT runs, virtual nodes only need to be started, which takes a shorter time than their first boot after creation.
Note: Although several virtual nodes may run simultaneously, please try to leave some time between node starts, since starting them very close in time causes a high system load which may leave nodes stuck. You may for instance register a second node while the first one is starting.
To stop a virtual node, go to its page, click on the VM management button, and then on the Stop VM button. This operation is nearly immediate.
The rest of administration tasks with virtual nodes are mostly the same as with real nodes (see Node administrator's guide).
To manage slices under VCT, log as researcher into the controller web interface. Slice administration tasks in VCT are the same as in real testbeds (see Slice administrator's guide).
In contrast with real CONFINE testbeds, under VCT you may not (by default) upload sliver templates and data files using the web interface. Instead, you should place these files in the /var/lib/vct/downloads directory of the VCT container. The vct system user has write access to that directory, so you may simply log into the container and manually copy your files there. The files in that directory will be selectable from drop-down lists that will appear wherever a file upload box would in the controller web interface of a real testbed.
VCT offers an easy way to use files on your computer to create a sliver data archive to be extracted on a sliver's root directory:
- Create a directory DIRECTORY with the desired file hierarchy inside.
- Run vct_build_sliver_data DIRECTORY. An archive named vct-sliver-data-build-DIRECTORY.tgz is created and placed in /var/lib/vct/downloads.
This is an example run to create a simple script continuously pinging the loopback address:
vct@vct:~$ mkdir -p test-data/usr/local/bin/
vct@vct:~$ cat > test-data/usr/local/bin/ping-myself << 'EOF'
#!/bin/sh
exec ping 127.0.0.1 > /tmp/ping-myself.log
EOF
vct@vct:~$ chmod +x test-data/usr/local/bin/ping-myself
vct@vct:~$ vct_build_sliver_data test-data
./
./usr/
./usr/local/
./usr/local/bin/
./usr/local/bin/ping-myself
The slice/sliver data archive is available via the controller portal at:
slices->[select slice]->sliver data
as: vct-sliver-data-build-test-data.tgz
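If you prefer building such an archive outside the container, plain tar gives an equivalent result (a sketch; build_sliver_data is a hypothetical helper, not a VCT command):

```shell
#!/bin/sh
# Hypothetical helper: pack DIRECTORY so that the archive extracts onto a
# sliver's root directory, mimicking vct_build_sliver_data's naming scheme.
build_sliver_data() {
    dir=$1
    tar -C "$dir" -czf "vct-sliver-data-build-$(basename "$dir").tgz" .
}

# Example:
#   build_sliver_data test-data   # creates vct-sliver-data-build-test-data.tgz
```

Remember to copy the resulting archive into /var/lib/vct/downloads of the VCT container so it becomes selectable in the controller web interface.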
If you need to deeply customize your slivers (e.g. by installing or removing many files or packages and configuring their initialization process), you may use VCT to perform the changes in a test sliver and capture them so that they can be applied to others.
- Create a virtual node and set it into the PRODUCTION state (see Creating and running virtual nodes).
- Create a slice with a sliver on that node and set the slice state to START (see Slice administrator's guide).
- Log into the sliver and perform your customizations (the sliver's command line prompt shows the NODE_ID in short form). Log out when finished, then set the slice state so that the sliver is stopped.
- Dump the sliver's overlay by running vct_node_ssh NODE_ID confine_sliver_dump_overlay SLIVER_ID. The overlay will be dumped to /root/overlay-dump.tgz in the node.
- Retrieve the dump by running vct_node_scp NODE_ID remote://root/overlay-dump.tgz .
Now you can provide the overlay-dump.tgz archive as sliver data for your application. Copy it to /var/lib/vct/downloads if you want to test it inside VCT.
In VCT, the SSH keys of slice administrators are accepted for root login in slivers as usual (see Logging into a sliver). All existing users in VCT have the same passwordless SSH key, stored in /var/lib/vct/keys/id_rsa, which you may pass as an argument to SSH's -i option while logged into the VCT container.
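To avoid passing the key on every invocation, you could add it as a default identity in the vct user's ~/.ssh/config inside the container (a sketch; applying it to all hosts is an assumption, narrow the Host pattern if you prefer):

```
Host *
    IdentityFile /var/lib/vct/keys/id_rsa
```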
This is an example of using the SSH key for testing the script in the sample sliver data used above:
vct@vct:~$ ssh -i /var/lib/vct/keys/id_rsa root@SLIVER_ADDRESS
(accept remote host key)
root@000000000004_0002:~# type ping-myself   ## sliver data is in place
ping-myself is /usr/local/bin/ping-myself
root@000000000004_0002:~# ping-myself &   ## it could have been an rc script
 314
root@000000000004_0002:~# tail /tmp/ping-myself.log
64 bytes from 127.0.0.1: icmp_req=1 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=2 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=3 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=4 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=5 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=6 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=7 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=8 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=9 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=10 ttl=64 time=0.000 ms
Note: Please remember that before shutting down a VCT container, you should stop any virtual nodes that you have previously started (see Creating and running virtual nodes), otherwise Celery processes may get stuck for a long while waiting for running virtual nodes to stop.
You may shut the container down at any time by running sudo halt as vct in the container's command line. Please make sure that you are not issuing this command in the host computer's command line, or you may shut the host down instead!
If the container gets stuck while booting or shutting down, or if logins block after authentication, you may force its stop by running lxc-stop -n vct as root in the host computer (it may take a short while).
Access the node via SSH and manually reconfigure its node configuration as shown:
vct@vct:~$ vct_node_ssh 0001
root@rd0001:~# confine_daemon_stop
stopping confine daemon...
confine-daemon-pid=22199 still running. This may take a while...
CONFINE node-id=0x0001 node-state=started daemon-pid=stopped
slice-id       lxc pid   state   name
----------------------------------------------------------
0x00000000000c 01  28390 started 'deb7-slice'
root@rd0001:~# uci show confine.node | grep disk
confine.node.disk_max_per_sliver=1000
confine.node.disk_dflt_per_sliver=500
root@rd0001:~# uci set confine.node.disk_max_per_sliver=1000
root@rd0001:~# uci set confine.node.disk_dflt_per_sliver=1000
root@rd0001:~# uci commit confine
root@rd0001:~# confine_daemon_continue
restarting confine daemon...
CONFINE node-id=0x0001 node-state=started daemon-pid=23567
slice-id       lxc pid   state   name
----------------------------------------------------------
0x00000000000c 01  28390 started 'deb7-slice'
Then browse to your sliver via the VCT controller (slices→slivers→select sliver in the left column→click update) and send an update. This instructs the node to rebuild your sliver from scratch with the new size.
To increase sliver disk space by default for all newly generated nodes, go to the VCT controller, browse to administration→firmware→configuration and have a look at how UCI options can be configured. You can add the options mentioned above and the next build will have them set by default.
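For example, baking the disk options shown above into the firmware defaults would amount to UCI entries like these (the values are illustrative):

```
confine.node.disk_max_per_sliver=2000
confine.node.disk_dflt_per_sliver=1000
```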
The controller used in VCT is a modified version of the confine-controller with some extra functionality to manage the VCT research devices (VM management, local file management…). However, you can disable the VCT customizations by overriding the default controller settings:
- VCT_VM_MANAGEMENT: enables VM management for nodes (research devices); enabled by default.
- VCT_LOCAL_FILES: enables local management of files (only files stored on VCT can be selected as templates, base images, sliver data, etc.); enabled by default.
To include your own settings you can update the confine-dist/utils/vct/server/server/settings.py file or create a new file confine-dist/utils/vct/server/server/local_settings.py (local_settings.py example).
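As a sketch, a minimal local_settings.py disabling both customizations could look like this (setting names as listed above; treating them as plain booleans is an assumption):

```python
# confine-dist/utils/vct/server/server/local_settings.py (sketch)
VCT_VM_MANAGEMENT = False  # disable VM management for research devices
VCT_LOCAL_FILES = False    # allow regular file uploads instead of local files
```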