Before the testbed becomes functional for its users and nodes, some elements must be configured. Here we will see how to use the Controller to set up these features.
The CONFINE architecture allows certain services to be replicated or distributed in order to provide convenient properties like redundancy, caching or locality. Especially in community networks, where access to the Internet is not always available, closer services may be more convenient for some components than farther ones, which may have worse connectivity or be completely unreachable.
CONFINE uses the concept of island as an attribute of API endpoints, tinc addresses, nodes and hosts that helps choose the most suitable provider of a service among a set of different ones, by conveying an informal indication of network proximity. For instance, a node may choose to connect to a tinc address of another host which gives access to the management network (i.e. a management network gateway) if that address is in the same island as the node, rather than connecting to an address of another host which is on the Internet or in some undefined island. The same goes for accessing the Registry API, an NTP server, or any other service run by a host known to the testbed registry.
To create an island in your testbed:
Registering all network islands beforehand is not necessary, but adding islands for the Internet and other networks your controller is connected to is recommended.
Although the Controller installation already creates a server entry for this controller in the testbed registry, the entry still needs some changes.
API endpoints require more attention. At least a Registry API endpoint reachable from the management network should be declared. The installation creates two example endpoints for the Registry API and Controller API. To complete them:
MGMT_ADDR is the management address that appears under section Management network (e.g. fdc0:7246:b03f::2 in IPv6 short form).
~confine/mytestbed/pki/cert (use your own system user and testbed name if different). Paste it as the Certificate of both API endpoints.
If the controller is reachable through other networks like the Internet or some community network, you may add more API endpoints using the Add another Server API link under section Server APIs. For each of those networks, select a different island (see Islands) and specify the URI reachable from it (e.g.
https://controller.example.com/api/), along with the proper type and certificate.
Similarly, you may also want to add new tinc addresses if the controller is connected to several networks:
Please note that this does not change the actual configuration of the tinc daemon, which is set up by default to listen on port 655 of all interfaces.
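If you do need the daemon to listen on specific addresses only, tinc's own BindToAddress option can be added to its configuration; a minimal sketch (the address shown is an example, not part of the testbed setup):

```
# /etc/tinc/mytestbed/tinc.conf (fragment; example address)
BindToAddress = 192.0.2.10
```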
You need to define at least one template that slice administrators can choose for their slices or particular slivers. Usually your testbed will only have a few basic and well-maintained sliver templates that can be reused for different applications and customized by slice administrators using sliver data. The more sliver templates your testbed supports, the more maintenance overhead they will require from testbed operators, and the more space they will take up in nodes, possibly reducing sliver concurrency. The format and content of such templates depend on their type (which also implies how sliver data files are handled), and each template must indicate the node architectures it is compatible with.
The Confined release of Node software supports sliver templates running on 32-bit Linux kernels, with template types debian and openwrt for Debian and OpenWrt slivers, respectively. In both cases, the template must be a plain gzip-compressed Tar archive (*.tgz) of the contents of a root file system. The template must be available to nodes over HTTP or HTTPS.
The archive format allows you to reuse existing images which you may easily find on the Internet:
rootfs archives from OpenWrt downloads
rootfs archives from LXC pre-built containers (after recompressing as gzip)
You may also build your own images:
Use debootstrap or similar to set up a Debian installation and customize it using chroot. When done, run bsdtar -czf TEMPLATE-NAME.tar.gz --numeric-owner DIRECTORY.
For an existing LXC container, run bsdtar -czf TEMPLATE-NAME.tar.gz --numeric-owner -C /var/lib/lxc/NAME/rootfs .
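The packing step can be sketched end-to-end with a toy root file system (GNU tar behaves like the bsdtar invocations above for this purpose; the directory and archive names are examples):

```shell
# Build a toy rootfs and pack it as a sliver template archive.
mkdir -p rootfs/etc rootfs/bin
echo 'mysliver' > rootfs/etc/hostname
# Pack the *contents* of the rootfs, preserving numeric owners:
tar -czf mytemplate.tgz --numeric-owner -C rootfs .
# Entries must be relative to the rootfs top level:
tar -tzf mytemplate.tgz
```

A template produced this way can then be uploaded to the Controller or served to nodes over HTTP/HTTPS as described above.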
Once you have the sliver template on your computer, to make it available to nodes and slice administrators:
Now you must provide the configuration for the template:
x86_64 for a template with Pentium Pro-compatible binaries on a testbed with 32-bit and 64-bit nodes. Since the Confined Node software only supports 32-bit kernels, you may only specify 32-bit architectures.
/path/to/debian-wheezy-i386-erlang.tar.gz. Image files uploaded via the Controller are served to nodes over the management network so that online nodes can always fetch them.
Click on Save when done. The image is uploaded and registered along with the template.
Controller software includes a feature which allows the generation of system images for nodes from a selectable base image, using the registry and firmware configuration stored for the node by the Controller. The resulting images are fully customized for their respective nodes, so that little or no manual configuration should be necessary after installing them.
A base image is a single binary file which can be directly installed in the storage of the node device (see Installing a node via USB) and contains its operating system. To make this possible, the (x86-compatible) image contains the Master Boot Record (MBR), the partition table and the required disk partitions and file systems (currently boot and root). The CONFINE Node software, based on OpenWrt, is installed in these partitions.
The customization of a base image for a node consists of making a copy of the base image, finding the partitions and mounting them locally, and then modifying the required files in place according to the node's configuration.
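The "find the partitions" step can be sketched by reading a partition's start sector directly from the image's MBR. A toy one-sector image stands in for a real base image here, and the mount commands (which require root) appear only as comments:

```shell
# Toy image: one 512-byte sector standing in for a full disk image.
IMG=demo.img
dd if=/dev/zero of="$IMG" bs=512 count=1 2>/dev/null
# The first MBR partition entry starts at byte 446; its start-LBA field
# is 4 little-endian bytes at offset 446+8=454. Set it to LBA 2048:
printf '\010' | dd of="$IMG" bs=1 seek=455 conv=notrunc 2>/dev/null
# Read the start LBA back and derive the byte offset for loop-mounting
# (od -tu4 assumes a little-endian host, matching on-disk byte order):
LBA=$(od -An -tu4 -j454 -N4 "$IMG" | tr -d ' ')
OFFSET=$((LBA * 512))
echo "$OFFSET"   # 1048576 = 2048 * 512
# With a copy of a real base image (as root):
#   mount -o loop,offset=$OFFSET node.img /mnt/node
#   ... modify files under /mnt/node, then: umount /mnt/node
```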
For the firmware generation feature to work, you need to upload at least one base image to the Controller. Generally speaking, you need to configure one base image per node type you expect to have in the testbed, e.g. based on its architecture or purpose.
The CONFINE Project builds and distributes some images from the confine-dist repository, which are available in the CONFINE Node software repository. Base images for the Confined release of Node software exist for two architectures:
i586 for single-core Pentium-compatible CPUs with at most 4 GiB RAM, and
i686 for single or multi-core Pentium Pro-compatible CPUs (SMP) with big memory support using PAE.
The file CONFINE-owrt-master-ARCH-current.img.gz always points to the latest stable release for the corresponding architecture.
Once you have downloaded the base image to your computer, to add it to the Controller:
i686 for Pentium Pro and AMD64-compatible nodes. Since the Confined Node software only supports 32-bit kernels, you may only specify 32-bit architectures.
Once the image is uploaded and registered, it becomes available for firmware generation.
Besides the information about a node stored in the testbed registry, the Controller also keeps some specific firmware configuration for it (e.g. about networking). During firmware generation, the Controller uses this node-specific configuration along with some generic settings in order to customize a copy of the selected base image. These generic settings are also available in the firmware generator configuration page:
node is the Django model object for the node, so you may generate several files with one expression. You can set their mode (as in chmod MODE FILE), and you can make them optional so that, during firmware generation, the node administrator can decide whether to include them or not (with a help text that you can customize under section Config file help texts).
As an example of a UCI option, you may configure an alternative upgrade URL for nodes to retrieve the base image when using the
confine.remote-upgrade tool to upgrade the node system:
node node (i.e. file
'https://[fdc0:7246:b03f::2]/base-images/current-i686.img.gz' (use your own management address and mind the quotes) would make nodes request the image from this controller over the management network (of course, the controller's web server should be configured to serve those files).
The option will be included in the images generated from this point on.
As an example of a configuration file, you may configure nodes to use the controller's NTP server (which is automatically enabled during Controller installation) to synchronize their clocks with the controller's one:
'restrict default ignore\nrestrict 127.0.0.1\ndriftfile /var/lib/ntp/ntp.drift\nserver fdc0:7246:b03f:0:0:0:0:2 prefer iburst\n' (use your own management address and mind the quotes).
Set the mode (e.g. 0644) and make the file not optional.
The file will be included in the images generated from this point on.
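Unescaped, the \n sequences in the content string above yield this NTP configuration file:

```
restrict default ignore
restrict 127.0.0.1
driftfile /var/lib/ntp/ntp.drift
server fdc0:7246:b03f:0:0:0:0:2 prefer iburst
```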
The Controller software allows the use of different plugins that affect the firmware generation process. To enable or disable them, go to the firmware generator configuration page in the Controller web interface and look under section Plugins:
root password input to the node firmware generation page. Otherwise, generated images only allow root access using SSH keys.
AuthKeysPlugin adds an input to the node firmware generation page where additional SSH keys for root access to the node can be specified, e.g. for centralized maintenance by testbed operators (see Remote node maintenance), other automated remote access, or for users not present as node administrators in the testbed registry.
USBImagePlugin adds an option to the node firmware generation page to adapt the created image to be written to a USB drive and booted to perform the installation. This is especially useful for the initial preparation of nodes without an operating system or some easy way to install system images.
Check or uncheck the box next to each plugin and click on Save when done.
If you enable USBImagePlugin, you will need to download the CONFINE installation image and place it in the Controller deployment directory. For instance, via SSH (use your own system user and testbed name if different):
$ ssh firstname.lastname@example.org  # on your computer
$ wget -P ~/mytestbed \
  "https://media.confine-project.eu/confine-install/confine-install.img.gz"  # on the controller
To be able to install your own firmware generation plugins, you need to add a new application in the Controller to export them. For instance, let us call the application
firmwareplugins. Use the system user to create it in the Controller deployment directory as a Python package directory (use your own testbed name if different):
$ mkdir ~/mytestbed/firmwareplugins
$ touch ~/mytestbed/firmwareplugins/__init__.py
Now add the application by appending the following lines to the deployment's Django settings file:
# Enable custom firmware plugins
from controller.utils.apps import add_app
INSTALLED_APPS = add_app(INSTALLED_APPS, 'firmwareplugins')
To install a new firmware generation plugin (or upgrade an existing one), drop its Python code under the
firmwareplugins directory and import its name:
$ cp /path/to/myplugin.py ~/mytestbed/firmwareplugins
$ echo 'from .myplugin import MyPlugin' \
  >> ~/mytestbed/firmwareplugins/__init__.py
You also need to synchronize the plugins with the database and restart Controller services for the changes to take effect:
$ python ~/mytestbed/manage.py syncfirmwareplugins
$ sudo python ~/mytestbed/manage.py restartservices
An entry for enabling the new plugin will appear under section Plugins in the firmware generator configuration page in the Controller web interface.
As noted in the example node configuration file described in Firmware configuration, you may run arbitrary services on the management network that will be available to all testbed servers, nodes, slivers and hosts. These services need not run on servers operated by testbed administrators; in fact, they may run on any server, node, sliver or host connected to the management network.
In case you want to dedicate some host to a specific, long-running service, it is recommended that you register it as a testbed host (see Adding a host to the management network) and introduce the service in its description field. This can actually be done by any testbed user without special privileges.
You may use a service like this for core testbed support for whoever wants to trust it, e.g. to provide some proxy cache service to the Registry API for a set of nodes with bad connectivity (just remember to provide the proper URI and certificate when generating node firmware).
Some nodes in your testbed may be unable to reach the testbed registry or the controller by themselves (e.g. some community networks lack native access to the Internet). In this case you may pick a server which does have access both to the controller and those nodes, and configure it in the testbed as a management network gateway server (or gateway server for short), so that it extends the testbed management network to the nodes and makes them and their slivers available in the testbed (see The management network).
In the setup described below, the gateway server will connect its tinc daemon to the controller, and nodes will connect theirs to the gateway instead of the controller. At the gateway, the scripts from CONFINE utilities will be used to periodically refresh the information of hosts allowed to connect to its tinc daemon.
First you need to add the new server to the testbed registry:
Now you need to get some configuration information about the controller server you want the gateway's tinc daemon to connect to (you may repeat these steps if you want to connect to other servers as well):
tinc/name JSON member, i.e. its tinc name (e.g. server_2).
tinc/addresses JSON member, i.e. its tinc addresses, which are reachable from the gateway server (e.g. controller.example.com on port 655).
You will also need the Registry API base URI to retrieve tinc host information from, which usually is one of the URIs published by the same server (visible under section Server APIs of its server page in the Controller web interface, e.g. https://controller.example.com/api/).
Now you can proceed to set up your server. Here we assume a Debian installation, but any tinc-compatible Unix box (physical or virtual) with Git, Python and a Cron daemon should suffice for this setup. First we will configure the tinc daemon the nodes will connect to.
Install the tinc package: apt-get install tinc.
Create the tinc network configuration directory: mkdir -p /etc/tinc/mytestbed/hosts.
# cat /etc/tinc/mytestbed/tinc.conf
Name = server_3
ConnectTo = server_2
StrictSubnets = yes
# cat /etc/tinc/mytestbed/hosts/server_2
Subnet = fdc0:7246:b03f:0:0:0:0:2/128
Address = controller.example.com 655
-----BEGIN PUBLIC KEY-----
[…]
-----END PUBLIC KEY-----
# echo 'Subnet = fdc0:7246:b03f:0:0:0:0:3/128' \
  > /etc/tinc/mytestbed/hosts/server_3
# tincd -n mytestbed -K  # accept the default files
# cat /etc/tinc/mytestbed/tinc-up
#!/bin/sh
ip -6 link set "$INTERFACE" up mtu 1400
ip -6 addr add fdc0:7246:b03f:0:0:0:0:3/48 dev "$INTERFACE"
# chmod a+rx /etc/tinc/mytestbed/tinc-up
# echo mytestbed >> /etc/tinc/nets.boot
# service tinc restart
Copy the public key from the tinc host file (/etc/tinc/mytestbed/hosts/server_3), paste it in the Public Key box under section Tinc configuration in this server's page in the Controller web interface and click on Save.
From this point on, the gateway server is allowed in the testbed management network (but see issue #694). You should be able to successfully ping the controller's management address (e.g. fdc0:7246:b03f::2).
Now we shall configure a Cron task to periodically update tinc host files with the information retrieved from the testbed registry, using the gateway scripts available in the confine-utils repository:
# git clone \
  http://git.confine-project.eu/confine/confine-utils.git \
  /opt/confine-utils
# apt-get install python-setuptools
# easy_install --upgrade requests
You can test the setup by running the fetch_tinc_hosts.py script, for instance:
# UTILS_PATH=/opt/confine-utils
# REGISTRY_URI=https://controller.example.com/api/
# cd $(mktemp -d)
# env PYTHONPATH=$UTILS_PATH \
  python $UTILS_PATH/gateway/fetch_tinc_hosts.py "$REGISTRY_URI"
After some seconds, the current directory should be populated with tinc host files containing subnets and public keys.
Create a /etc/cron.d/local-mytestbed crontab to run the refresh-tinc-hosts script against the registry once per hour (use your own testbed name and registry URI):
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
TINC_DIR=/etc/tinc/mytestbed
REGISTRY_URI=https://controller.example.com/api/
42 * * * * root /opt/confine-utils/gateway/refresh-tinc-hosts "$TINC_DIR" "$REGISTRY_URI"
Then reload the Cron daemon: service cron reload.
As soon as the task runs for the first time and information is collected, tinc connections from testbed servers, nodes and hosts will be allowed.
Node admins who want to use this server as a management network gateway may select it in the Default connect to drop-down box under section Tinc configuration of the node's page in the Controller web interface, and then regenerate node firmware. Otherwise they may tune the
registry.cert UCI options of file
/etc/config/confine in OpenWrt.
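For reference, registry.cert in UCI notation refers to file /etc/config/confine, section registry, option cert; a hedged sketch of such a fragment (the section type and the certificate path are illustrative assumptions, not taken from the Node software):

```
# /etc/config/confine (sketch; values are placeholders)
config registry 'registry'
        option cert '/etc/confine/gateway.crt'
```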