Using the Virtual CONFINE Testbed

You can follow these instructions to set up your own Virtual CONFINE Testbed (VCT), which will allow you to prepare and run your applications locally before you jump into real testbeds. We will be using a VCT container to ease the deployment of VCT in your computer as a self-contained virtual machine (a Linux container).

Besides these instructions, you may also want to follow some CONFINE tutorials that will guide you to prepare the VCT environment and run your applications on it.

Setting up the host computer

This assumes a host running a distribution based on Debian (at least Jessie) or Ubuntu (at least Raring) and a user with root access.

Checking hardware virtualization support

The following command should show this output:

$ egrep -q '^flags.*(vmx|svm)' /proc/cpuinfo && echo supported || echo missing
supported

If it shows missing you will not be able to run the VCT container.

Installing the LXC tools

Install the lxc package as root:

# apt-get install lxc

The VCT container is known to work with lxc versions at least 0.9.0~alpha3-2 for Debian and at least 0.9.0-0ubuntu3.7 for Ubuntu.
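If you are unsure which lxc version is installed, you can ask the package manager (shown here on Debian; your version string will differ):

$ dpkg-query -W -f '${Version}\n' lxc
0.9.0~alpha3-2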

Mounting the control group file system

Make sure that you have the cgroup file system mounted. The following command should show this output:

$ mountpoint /sys/fs/cgroup
/sys/fs/cgroup is a mountpoint

Otherwise you may either:

  1. Mount it each time that you intend to use VCT. Just run mount -t cgroup cgroup /sys/fs/cgroup as root.
  2. Mount it automatically on boot.

In the second case, add the following line to /etc/fstab as root, and reboot your computer:

cgroup /sys/fs/cgroup cgroup defaults 0 0

Setting up a VM bridge

You should have a bridge interface which either makes containers appear as normal hosts in your local network, or masquerades them behind your host computer. Please note that this document assumes an internal bridge interface called vmbr; you will need to adjust the steps below if your bridge has a different name.

If you do not have such a bridge, follow the instructions described in VM bridge.
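For orientation only, an internal (masquerading) bridge stanza in /etc/network/interfaces might look roughly like this. This is a sketch assuming bridge-utils and the 172.24.42.0/24 range used in the examples below; the VM bridge page has the authoritative setup, including the NAT and DHCP parts:

iface vmbr inet static
    address 172.24.42.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_fd 0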

You may either:

  1. Enable the interface each time that you intend to use VCT. Just run ifup vmbr as root.
  2. Enable the interface automatically on boot.

In the second case, add the following line to /etc/network/interfaces before the iface vmbr… line as root, and reboot your computer:

auto vmbr

Loading modules

The VCT container is unable to load modules for security reasons, so the following modules must be loaded before starting it:

  • ip_tables
  • iptable_nat
  • ebtables
  • ebtable_nat
  • kvm
  • kvm_intel or kvm_amd (depending on the brand of your CPU)
  • dummy
  • tun
  • fuse

You can check which modules are already loaded by running lsmod | grep MODULE. Then you may either:

  1. Load the modules each time that you intend to use VCT. Just run modprobe -a MODULE1 MODULE2… as root (see the example after this list).
  2. Load them automatically on boot. Just add the modules to /etc/modules (one per line) as root, and reboot your computer.
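For example, on a host with an Intel CPU all of the required modules can be loaded in one go as root (replace kvm_intel with kvm_amd on an AMD CPU):

# modprobe -a ip_tables iptable_nat ebtables ebtable_nat kvm kvm_intel dummy tun fuse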

Setting up the VCT container

The following instructions assume that the path of LXC containers is /var/lib/lxc (see lxcpath in /etc/lxc/lxc.conf) and the name of the VCT container is vct.

Before installing the VCT container, you must make sure that there is no other running container named vct. Run the following command as root:

# lxc-stop -n vct
lxc-stop: failed to open log file "/var/lib/lxc/vct/vct.log" : No such file or directory
vct is not running

In the example above the container was not running (there was actually no such container).

If you already have a (now stopped) vct container, you may either:

  1. Keep it by temporarily moving it out of the way. Just run mv /var/lib/lxc/vct /var/lib/lxc/vct.old as root. Please note that the renamed container will not work unless you adapt its configuration file.
  2. Destroy it. Just run lxc-destroy -n vct as root.

Download the latest VCT container template from the repository of VCT container templates. The downloaded file will be named vct-container,YYYYMMDDNN.tar.xz. Unpack the container template under /var/lib/lxc as root:

# tar -C /var/lib/lxc --numeric-owner -xJf /path/to/vct-container,YYYYMMDDNN.tar.xz

If you are using different LXC paths, container or bridge names, or if you intend to run several VCT containers simultaneously, you should adapt /var/lib/lxc/vct/config at this point.
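As an illustration, the entries in /var/lib/lxc/vct/config that you would typically adapt are the container name, the root file system path and the bridge it attaches to. The keys below are the legacy LXC ones and may differ in your template, so treat this as a sketch:

lxc.utsname = vct
lxc.rootfs = /var/lib/lxc/vct/rootfs
lxc.network.link = vmbr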

Starting the VCT container

Start the container by running lxc-start -n vct as root. Please leave this terminal open while the container is running.

Use this chance to note down the IPv4 address of the container (VCT_ADDRESS) among DHCP boot messages like:

Configuring network interfaces...Internet Systems Consortium DHCP Client 4.2.2
Copyright 2004-2011 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on LPF/eth0/52:c0:ca:fe:ba:be
Sending on   LPF/eth0/52:c0:ca:fe:ba:be
Sending on   Socket/fallback
DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 5
DHCPREQUEST on eth0 to 255.255.255.255 port 67
DHCPOFFER from 172.24.42.1
DHCPACK from 172.24.42.1
bound to 172.24.42.141 -- renewal in 388779 seconds.    ◀ here is the address
done.

You may also log as vct into the container console that appears at the end of the container boot process (see Logging into VCT), and run:

$ ip a s dev eth0 | sed -rne 's/^.* inet ([.0-9]+).*/\1/p'
172.24.42.141

The VCT_ADDRESS in both examples above is 172.24.42.141. It will be used later to contact the VCT container in several ways.

If the container gets stuck for too long while booting (e.g. starting celerybeat), see Stopping the VCT container. On subsequent container runs you may start the container with lxc-start -d -n vct so that it runs in the background and you can close the root terminal. The container's VCT_ADDRESS should remain the same.
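For instance, a later run in the background together with a quick status check could look like this (lxc-info ships with the lxc package; its output format varies between versions):

# lxc-start -d -n vct
# lxc-info -n vct
state:   RUNNING
pid:     12345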

Logging into VCT

The VCT container has two system users, root and vct (both with password confine), which you may use to log into the container via its console or using SSH against VCT_ADDRESS. The vct user receives all mail addressed to root@localhost and vct@localhost (including extended addresses like vct+foo@localhost), which you may read using mutt.
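For example, to log into the container over SSH from the host computer and read that mail (using the VCT_ADDRESS noted above):

$ ssh vct@172.24.42.141        # password: confine
vct@vct:~$ mutt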

The VCT container runs a CONFINE controller whose web interface is available at http://VCT_ADDRESS/admin/. From the host computer you can log into the web interface and the API as one of the following existing users, depending on what you want to do:

  • member (password member) is a plain user in the vct group with no particular role.
  • researcher (password researcher, a slice administrator) can manage slices in the vct group.
  • technician (password technician, a node administrator) can manage nodes in the vct group.
  • admin (password admin, a group administrator) can manage slices, nodes and other users in the vct group.
  • vct (password vct) is a testbed superuser who can do anything.

Creating and running virtual nodes

To start using VCT for testing your applications you first need to register, create and start some virtual nodes. The registration procedure is similar to that of a real node (see Registering a node), but simpler, since all virtual nodes share the same “hardware” configuration (already loaded as VCT defaults): an 8 GiB (sparse) hard disk, 64 MiB of RAM and three Ethernet interfaces, namely an eth0 local interface connected to VCT's local network and two direct interfaces, eth1 and eth2, each connected to its own network (see Node architecture for more information on node interfaces). Thus interfaces with the same ethX name are on the same link (implemented as a bridge) across all virtual nodes.

To register a node:

  1. Log as technician into the controller web interface. You are presented with the dashboard.
  2. Click on the Nodes icon to get the list of nodes in the testbed.
  3. Click on the Add node button to register a new node.
  4. Fill in the name field (e.g. Test node 1), ensure that its group is vct, scroll to the bottom and click on Save and continue editing.

The controller warns about some missing configuration items. Generating the firmware for the node fills them automatically:

  1. Click on the VM management button to get to the firmware and VM management page.
  2. Click on the Build FW button, leave the default options and click on Build firmware!. After the progress bar completes, the VM page shows information on the customized node firmware image stored in the controller.
  3. To create a new virtual node based on this firmware, click on Create VM. After some seconds, the VM page shows node creation messages and its state.

To start the virtual node, click on Start VM. The VM page shows node start messages and its state. Please note that starting the virtual node for the first time may take some minutes. You can refresh the VM state display by clicking repeatedly on VM info. The node has finished its setup when there is data under the rtt column after management, i.e. when the node becomes available in the testbed management network.

Once the virtual node is up and running you need to put it into production, which is done as with real nodes.

Repeating the previous steps allows you to create several virtual nodes. On subsequent VCT runs, virtual nodes only need to be started, which takes a shorter time than their first boot after creation.

Note: Although several virtual nodes may run simultaneously, please try to leave some time between node starts, since starting them very close together causes a high system load which may leave nodes stuck. You may for instance register a second node while the first one is starting.

To stop a virtual node, go to its page, click on the VM management button, and then on the Stop VM button. This operation is nearly immediate.

The rest of administration tasks with virtual nodes are mostly the same as with real nodes (see Node administrator's guide).

Working with slices and slivers

To manage slices under VCT, log as researcher into the controller web interface. Slice administration tasks in VCT are the same as in real testbeds (see Slice administrator's guide).

Uploading files

In contrast with real CONFINE testbeds, under VCT you may not (by default) upload sliver templates and data files using the web interface. Instead, you should place these files in the /var/lib/vct/downloads directory of the VCT container. The vct system user has write access to that directory, so you may simply log into the container and manually copy your files there. The files in that directory will be selectable from drop-down lists that will appear wherever a file upload box would in the controller web interface of a real testbed.
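For example, a template or data file prepared on the host can be copied into that directory with scp (the file name my-template.tgz is only illustrative; use the VCT_ADDRESS noted earlier):

$ scp my-template.tgz vct@172.24.42.141:/var/lib/vct/downloads/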

Creating sliver data

VCT offers an easy way to use files on your computer to create a sliver data archive to be extracted onto a sliver's root directory:

  1. Create a DIRECTORY with the desired file hierarchy inside.
  2. Run vct_build_sliver_data DIRECTORY.

A vct-sliver-data-build-DIRECTORY.tgz is created and placed in /var/lib/vct/downloads.

This is an example run to create a simple script continuously pinging the loopback address:

vct@vct:~$ mkdir -p test-data/usr/local/bin/
vct@vct:~$ cat > test-data/usr/local/bin/ping-myself << 'EOF'
#!/bin/sh
exec ping 127.0.0.1 > /tmp/ping-myself.log
EOF
vct@vct:~$ chmod +x test-data/usr/local/bin/ping-myself
vct@vct:~$ vct_build_sliver_data test-data
./
./usr/
./usr/local/
./usr/local/bin/
./usr/local/bin/ping-myself

The slice/sliver data archive is available via the controller portal at:
slices->[select slice]->sliver data as:
vct-sliver-data-build-test-data.tgz

Dumping sliver data

If you need to deeply customize your slivers (e.g. by installing or removing many files or packages and configuring their initialization process), you may use VCT to perform the changes in a test sliver and capture them so that they can be applied to others.

  1. Create a node in VCT, start it and set it into PRODUCTION state (see Creating and running virtual nodes).
  2. Create a slice and a sliver running in the node, set the slice state to START (see Slice administrator's guide).
  3. When the sliver has been started, log into it (see Logging into a sliver) and perform all the desired changes. Please note down the hexadecimal SLIVER_ID (long) and NODE_ID (short) in the command line prompt. Log out when finished. Set the slice state to DEPLOY.
  4. When the sliver is in the DEPLOYED state, run vct_node_ssh NODE_ID confine_sliver_dump_overlay SLIVER_ID. The overlay will be dumped to /root/overlay-dump.tgz in the node.
  5. Download the dumped overlay to the current directory using vct_node_scp NODE_ID remote://root/overlay-dump.tgz . (note the trailing dot, which is the local destination).

Now you can provide the overlay-dump.tgz archive as sliver data for your application. Copy it to /var/lib/vct/downloads if you want to test it inside of VCT.
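Putting steps 4 and 5 together, a run from inside the VCT container might look roughly like this (NODE_ID and SLIVER_ID stand for the values noted down in step 3):

vct@vct:~$ vct_node_ssh NODE_ID confine_sliver_dump_overlay SLIVER_ID
vct@vct:~$ vct_node_scp NODE_ID remote://root/overlay-dump.tgz .
vct@vct:~$ cp overlay-dump.tgz /var/lib/vct/downloads/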

Logging into a sliver

In VCT, the SSH keys of slice administrators are accepted for root login in slivers as usual (see Logging into a sliver). All existing users in VCT have the same passwordless SSH key, stored in /var/lib/vct/keys/id_rsa, which you may pass as an argument to SSH's -i option while logged into the VCT container.

This is an example of using the SSH key for testing the script in the sample sliver data used above:

vct@vct:~$ ssh -i /var/lib/vct/keys/id_rsa root@10.241.0.23
(accept remote host key)
root@000000000004_0002:~# type ping-myself  ## sliver data is in place
ping-myself is /usr/local/bin/ping-myself
root@000000000004_0002:~# ping-myself &  ## it could have been an rc script
[1] 314
root@000000000004_0002:~# tail /tmp/ping-myself.log
64 bytes from 127.0.0.1: icmp_req=1 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=2 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=3 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=4 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=5 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=6 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=7 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=8 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=9 ttl=64 time=0.000 ms
64 bytes from 127.0.0.1: icmp_req=10 ttl=64 time=0.000 ms

Stopping the VCT container

Note: Please remember that before shutting down a VCT container, you should stop any virtual nodes that you have previously started (see Creating and running virtual nodes), otherwise Celery processes may get stuck for a long while waiting for running virtual nodes to stop.

You may shut the container down at any time by running sudo halt as vct in the container's command line. Please make sure that you are not issuing this command in the host computer's command line or you may shut it down instead!

If the container gets stuck while booting or shutting down, or if logins block after authentication, you may force its stop by running lxc-stop -n vct as root in the host computer (it may take a short while).

Updating node configuration to increase the sliver disk space

A) Existing nodes

Access the node via SSH and manually reconfigure it as shown:

vct@vct:~$ vct_node_ssh 0001

root@rd0001:~# confine_daemon_stop

stopping confine daemon...
confine-daemon-pid=22199 still running. This may take a while...
CONFINE node-id=0x0001 node-state=started daemon-pid=stopped
slice-id         lxc pid   state      name
----------------------------------------------------------
0x00000000000c   01  28390 started    'deb7-slice'


root@rd0001:~# uci show confine.node | grep disk
confine.node.disk_max_per_sliver=1000
confine.node.disk_dflt_per_sliver=500

root@rd0001:~# uci set confine.node.disk_max_per_sliver=1000
root@rd0001:~# uci set confine.node.disk_dflt_per_sliver=1000
root@rd0001:~# uci commit confine

root@rd0001:~# confine_daemon_continue
restarting confine daemon...
CONFINE node-id=0x0001 node-state=started daemon-pid=23567
slice-id         lxc pid   state      name
----------------------------------------------------------
0x00000000000c   01  28390 started    'deb7-slice'

Then browse to your sliver via the VCT controller (slices→slivers→select the sliver in the left column→click update) and send an update. This instructs the node to rebuild your sliver from scratch with the new size.

B) New nodes

To increase the sliver disk space by default for all newly generated nodes, go to the VCT controller, browse to administration→firmware→configuration and have a look at how UCI options can be configured. You can add the options mentioned above so that the next firmware build has them set by default.
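As a sketch, the UCI options to add there are the same two used above; setting both to e.g. 1000 makes every newly built firmware image allow and default to that sliver disk size:

confine.node.disk_max_per_sliver=1000
confine.node.disk_dflt_per_sliver=1000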

VCT Controller settings

The controller used in VCT is a modified version of confine-controller with some extra functionality to manage the VCT research devices (VM management, local file management…). However, you can disable the VCT customizations by overriding the default controller settings:

  • VCT_VM_MANAGEMENT: enables VM management for nodes (research devices); True by default.
  • VCT_LOCAL_FILES: enables local management of files (only files stored on VCT can be selected as templates, base images, sliver data, etc.); True by default.

To include your own settings you can update the confine-dist/utils/vct/server/server/settings.py file or create a new confine-dist/utils/vct/server/server/local_settings.py file (local_settings.py example).
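For instance, a minimal local_settings.py that disables both VCT customizations could look like this (a sketch; any other controller setting can be overridden in the same way):

# local_settings.py -- override the VCT defaults described above
VCT_VM_MANAGEMENT = False   # turn off VM management for nodes (research devices)
VCT_LOCAL_FILES = False     # turn off local file management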
