Testbed preparation

Before the testbed becomes functional for its users and nodes, some elements must be configured. Here we will see how to use the Controller to set up these features.

Islands

The CONFINE architecture allows certain services to be replicated or distributed in order to provide convenient properties like redundancy, caching or locality. Especially in the case of community networks, where access to the Internet is not always available, closer services may be more convenient for some components than farther ones, which may have worse connectivity or be completely unreachable.

CONFINE uses the concept of island as an attribute of API endpoints, tinc addresses, nodes and hosts that helps choose the most suitable provider of a service among a set of different ones by conveying an informal indication of network proximity. For instance, a node may choose to connect to a tinc address of another host which gives access to the management network (i.e. a management network gateway) if that address is in the same island as the node, rather than connecting to an address of another host which is on the Internet or in some undefined island. The same goes for accessing the Registry API, an NTP server, or any other service run by a host known to the testbed registry.

To create an island in your testbed:

  1. Log into the Controller web interface. You are presented with the dashboard.
  2. Click on the Islands icon to get the list of islands in the testbed.
  3. Click on the Add island button to register a new island.
  4. Provide an informative name (e.g. Internet or My network campus cloud) and a description.
  5. Click on Save.

Registering all network islands beforehand is not necessary, but adding islands for the Internet and other networks your controller is connected to is recommended.

The controller server

Although the Controller installation already creates a server entry for this controller in the testbed registry, the entry still needs some changes.

  1. Log into the Controller web interface. You are presented with the dashboard.
  2. Click on the Servers icon to get the list of servers in the testbed. Click on the only existing server.
  3. Set informative name and description values for the server. The latter may include links to its web interface, indications about controller reachability, or additional services provided by the server.
  4. If the controller is part of a community network which runs some web application or database to organize its equipment, you may enter the server's URIs in those systems under section Community host.
  5. Click on Save at the bottom of the page.

API endpoints require more attention. At least a Registry API endpoint reachable from the management network should be declared. The installation creates two example endpoints for the Registry API and Controller API. To complete them:

  1. Go to this controller's server page in the Controller web interface.
  2. Under section Server APIs, change the Base URI for both endpoints from the example https://[2001:db8:cafe::2]/api/ to https://[MGMT_ADDR]/api/, where MGMT_ADDR is the management address that appears under section Management network (e.g. fdc0:7246:b03f::2 in IPv6 short form).
  3. To get the certificate, log into the controller server (e.g. via SSH) and copy the content of ~confine/mytestbed/pki/cert (use your own system user and testbed name if different). Paste it as the Certificate of both API endpoints.
  4. Leave the API Type as is for both entries.
  5. Leave the empty Island as is (it means that the endpoints are reachable through the management network).
  6. Click on Save at the bottom of the page.
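
Once saved, you can optionally check from a host attached to the management network that the Registry API actually answers at the declared endpoint. The following is a minimal sketch using curl, assuming you saved the certificate from step 3 above to a local file called controller-cert.pem (an arbitrary name used here for illustration):

$ curl --cacert controller-cert.pem "https://[fdc0:7246:b03f::2]/api/"

Use your own management address; a successful request should return the base Registry API resource.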

If the controller is reachable through other networks like the Internet or some community network, you may add more API endpoints using the Add another Server API link under section Server APIs. For each of those networks, select a different island (see Islands) and specify the URI reachable from it (e.g. https://controller.example.com/api/), along with the proper type and certificate.

Similarly, you may also want to add new tinc addresses if the controller is connected to several networks:

  1. Go to this controller's server page in the Controller web interface.
  2. Click on Manage tinc addresses next to section Tinc configuration to get the list of tinc addresses for this server. An address without an explicit island is already present.
  3. Click on the Add tinc address button to add a new address.
  4. Provide an IP address (or DNS host name) and port (TCP and UDP) where tinc connections can be received.
  5. Select the island the address belongs to (see Islands).
  6. Choose this controller as the host providing this address.
  7. Click on Save.

Please note that this does not change the actual configuration of the tinc daemon, which is set up by default to listen on port 655 of all interfaces.
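
If you ever need the daemon to listen on a different port or only on specific addresses, you must edit its tinc configuration yourself, restart the daemon, and then register the resulting addresses as described above. As a rough sketch, assuming the controller's tinc network is configured under /etc/tinc/mytestbed/ (adjust the path and network name to your deployment), the relevant part of tinc.conf could look like:

  Port = 656                # accept tinc connections on a non-default port (TCP and UDP)
  BindToAddress = 10.1.2.3  # optional: only listen on this local address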

Sliver templates

You need to define at least one template that slice administrators can choose for their slices or particular slivers. Usually your testbed will only have a few basic and well-maintained sliver templates that can be reused for different applications and customized by slice administrators using sliver data. The more sliver templates your testbed supports, the more maintenance overhead they will require from testbed operators, and the more space they will take up in nodes, possibly reducing the number of slivers that can run concurrently. The format and content of such templates depend on their type (which also determines how sliver data files are handled), and each template must indicate the node architectures it is compatible with.

The Confined release of Node software supports sliver templates running on 32-bit Linux kernels, with template types debian or openwrt for Debian and OpenWrt slivers, respectively. In both cases, the template must be a plain gzip-compressed Tar archive (*.tar.gz or *.tgz) of the contents of a root file system. The template must be available to nodes over HTTP or HTTPS.

The archive format allows you to reuse existing images that you may easily find on the Internet.

You may also build your own images:

  • Using a real machine and backing up its root file system (not recommended).
  • With debootstrap or similar to set up a Debian installation and customize it using chroot. When done, run bsdtar -czf TEMPLATE-NAME.tar.gz --numeric-owner -C DIRECTORY . (mind the trailing dot) so that the archive contains the directory's contents at its top level.
  • Using LXC to create a container and customize it as a normal machine. When done, stop the container and run bsdtar -czf TEMPLATE-NAME.tar.gz --numeric-owner -C /var/lib/lxc/NAME/rootfs .
  • Using Virtual CONFINE Testbed (VCT) and vct_build_sliver_template TYPE VARIANT (e.g. debian wheezy) to create a vct-sliver-template-build-TYPE-VARIANT.tgz archive (see Using the Virtual CONFINE Testbed for usage instructions).
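
Whichever method you use, it is worth checking that the archive really contains the root file system at its top level before registering it, for instance:

$ bsdtar -tzf TEMPLATE-NAME.tar.gz | head

The listing should show the usual top-level entries (./bin, ./etc, ./usr and so on) directly, without an extra wrapping directory.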

Once you have the sliver template in your computer, to make it available to nodes and slice administrators:

  1. Log into the Controller web interface. You are presented with the dashboard.
  2. Click on the Templates icon to get the list of sliver templates in the testbed.
  3. Click on the Add template button to register a new template.

Now you must provide the configuration for the template:

  • A descriptive name that reveals relevant information to the user at first sight, e.g. Debian 7 Wheezy i386 with Erlang.
  • Use the description field for a more detailed explanation of the template, like customizations on the base system. You may include links for further information.
  • Select the type of the template as explained above, e.g. Debian.
  • Choose all node architectures the template is compatible with, e.g. i686 and x86_64 for a template with Pentium Pro-compatible binaries on a testbed with 32-bit and 64-bit nodes. Since the Confined Node software only supports 32-bit kernels, you may only specify i586 or i686.
  • The image file is the archive that contains the actual sliver template, e.g. /path/to/debian-wheezy-i386-erlang.tar.gz. Image files uploaded via the Controller are served to nodes over the management network so that online nodes can always fetch them.
  • The image SHA256 checksum helps nodes verify that the image was correctly downloaded and has not been corrupted or tampered with. It is automatically computed by the Controller.
  • Whether the template is active. Inactive templates can be kept for reference but they cannot be used to deploy new slivers.

Click on Save when done. The image is uploaded and registered along with the template.
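
If you want to double-check the checksum computed by the Controller, you can hash the file locally and compare the result with the value shown on the template page:

$ sha256sum /path/to/debian-wheezy-i386-erlang.tar.gz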

Node image generation

The Controller software includes a feature for generating system images for nodes from a selectable base image together with the registry and firmware configuration stored for the node by the Controller. The resulting images are fully customized for their respective nodes, so that little or no manual configuration should be necessary after installing them.

A base image is a single binary file which can be directly installed in the storage of the node device (see Installing a node via USB) and contains its operating system. To make this possible, the (x86-compatible) image contains the Master Boot Record (MBR), the partition table and the required disk partitions and file systems (currently boot and root). The CONFINE Node software, based on OpenWrt, is installed in these partitions.

The customization of a base image for a node consists of making a copy of the base image, finding its partitions and mounting them locally, and then modifying the required files in place according to the node's configuration.
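
If you are curious about the layout of such images, you can reproduce part of this process by hand. The following is only an illustrative sketch, assuming a Linux host whose losetup supports partition scanning (-P); the exact partition numbering may vary between images:

$ zcat /path/to/base-image.img.gz > base.img
$ LOOPDEV=$(sudo losetup -fP --show base.img)
$ lsblk "$LOOPDEV"                                # the boot and root partitions appear as separate devices
$ sudo mount "${LOOPDEV}p2" /mnt                  # mount what is usually the root partition
$ ls /mnt/etc/config                              # OpenWrt UCI configuration files live here
$ sudo umount /mnt && sudo losetup -d "$LOOPDEV"  # clean up when done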

Base images

For the firmware generation feature to work, you need to upload at least one base image to the Controller. Generally speaking, you need to configure one base image per node type you expect to have in the testbed, e.g. based on its architecture or purpose.

The CONFINE Project builds and distributes some images from the confine-dist repository which are available in the CONFINE Node software repository. Base images for the Confined release of Node software exist for two architectures: i586 for single-core Pentium-compatible CPUs with at most 4 GiB RAM, and i686 for single or multi-core Pentium Pro-compatible CPUs (SMP) with big memory support using PAE. The file CONFINE-owrt-master-ARCH-current.img.gz always points to the latest stable release for the ARCH architecture.
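
For example, to fetch the current i686 image to your computer and check that the download is not corrupt (the actual download URL from the repository is represented here by a placeholder variable):

$ wget "$BASE_IMAGE_URL"                             # URL of CONFINE-owrt-master-i686-current.img.gz
$ gunzip -t CONFINE-owrt-master-i686-current.img.gz  # test integrity; no output means the file is fine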

Once you have downloaded the base image to your computer, to add it to the Controller:

  1. Log into the Controller web interface. You are presented with the dashboard.
  2. Click on the Firmware config icon to get to the firmware generator configuration page.
  3. Click on the Add another Base image link under section Base images.
  4. Give it a descriptive name, e.g. Stable 2015-07-15 i686.
  5. Choose all node architectures the image is compatible with, e.g. i686 for Pentium Pro and AMD64-compatible nodes. Since the Confined Node software only supports 32-bit kernels, you may only specify i586 or i686.
  6. Select the image file from your computer.
  7. Choose whether this base image should be selected by default when generating images if more than one image is suitable for a given node. The node administrator will always be able to choose, though.
  8. Click on Save at the top or bottom of the form.

Once the image is uploaded and registered, it becomes available for firmware generation.

Firmware configuration

Besides the information about a node stored in the testbed registry, the Controller also keeps some specific firmware configuration for it (e.g. about networking). During firmware generation, the Controller uses this node-specific configuration along with some generic settings in order to customize a copy of the selected base image. These generic settings are also available in the firmware generator configuration page:

  • Image name is a Python string template which allows you to customize the name of the resulting image files. For instance, mytestbed-node-%(node_id)04d-%(arch)s.img.gz may yield mytestbed-node-1234-i686.img.gz.
  • Config UCI holds OpenWrt UCI configuration options to be stored in the image. Their value is a Python expression where node is the Django model object for the node (see Nodes Application).
  • Config files allows you to generate arbitrary files in the image. Their path and content are Python expressions where node is the Django model object for the node, so you may generate several files with one expression. You can set their mode (as in chmod MODE FILE), and you can make them optional so that, during firmware generation, the node administrator can decide whether to include them or not (with a help text that you can customize under section Config file help texts).

As an example of a UCI option, you may configure an alternative upgrade URL for nodes to retrieve the base image when using the confine.remote-upgrade tool to upgrade the node system:

  1. Go to the firmware generator configuration page in the Controller web interface.
  2. Under section Config UCI, click on Add another Config UCI to add a new empty item.
  3. Enter the section node node (i.e. file /etc/config/node, section node).
  4. Enter the option name latest_image_uri.
  5. Enter the new URL as the option value, e.g. 'https://[fdc0:7246:b03f::2]/base-images/current-i686.img.gz' (use your own management address and mind the quotes). This would make nodes request the image from this controller over the management network; of course, the controller's web server must be configured to serve those files.
  6. Click on Save at the top or bottom of the form.

The option will be included in the images generated from this point on.
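
On a node installed with such an image you can later verify that the option was indeed embedded, e.g. over SSH using the standard UCI command line tool:

# uci get node.node.latest_image_uri

This should print the URL you configured above.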

As an example of a configuration file, you may configure nodes to use the controller NTP server (which is automatically enabled during Controller installation) to synchronize the clocks of nodes with the controller's one:

  1. Go to the firmware generator configuration page in the Controller web interface.
  2. Under section Config files, click on Add another Config File to add a new empty item.
  3. Enter the path /etc/ntp.conf.
  4. Enter the content 'restrict default ignore\nrestrict 127.0.0.1\ndriftfile /var/lib/ntp/ntp.drift\nserver fdc0:7246:b03f:0:0:0:0:2 prefer iburst\n' (use your own management address and mind the quotes).
  5. Leave the default mode (which equals 0644) and make the file not optional.
  6. Click on Save at the top or bottom of the form.

The file will be included in the images generated from this point on.

Plugins

The Controller software allows the use of different plugins that affect the firmware generation process. To enable or disable them, go to the firmware generator configuration page in the Controller web interface and look under section Plugins:

  • PasswordPlugin adds a root password input to the node firmware generation page. Otherwise, generated images only allow root access using SSH keys.
  • AuthKeysPlugin adds an input to the node firmware generation page where additional SSH keys for root access to the node can be specified, e.g. for centralized maintenance by testbed operators (see Remote node maintenance), other automated remote access, or for users not present as node administrators in the testbed registry.
  • USBImagePlugin adds an option to the node firmware generation page to adapt the created image to be written to a USB drive and booted to perform the installation. This is especially useful for the initial preparation of nodes without an operating system or some easy way to install system images.

Check or uncheck the box next to each plugin and click on Save when done.

If you enable USBImagePlugin you will need to download and place the CONFINE installation image in the Controller deployment directory. For instance, via SSH (use your own system user and testbed name if different):

$ ssh confine@controller.example.com  # in your computer
$ wget -P ~/mytestbed \
  "https://media.confine-project.eu/confine-install/confine-install.img.gz"  # in the controller

Installing new plugins

To be able to install your own firmware generation plugins, you need to add a new application in the Controller to export them. For instance, let us call the application firmwareplugins. Use the system user to create it in the Controller deployment directory as a Python package directory (use your own testbed name if different):

$ mkdir ~/mytestbed/firmwareplugins
$ touch ~/mytestbed/firmwareplugins/__init__.py

Now add the application by appending the following lines to ~/mytestbed/mytestbed/settings.py:

# Enable custom firmware plugins
from controller.utils.apps import add_app
INSTALLED_APPS = add_app(INSTALLED_APPS, 'firmwareplugins')

To install a new firmware generation plugin (or upgrade an existing one), drop its Python code under the firmwareplugins directory and import its name:

$ cp /path/to/myplugin.py ~/mytestbed/firmwareplugins
$ echo 'from .myplugin import MyPlugin' \
  >> ~/mytestbed/firmwareplugins/__init__.py
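
Before synchronizing, you may want to make sure that Django can actually import the new application and plugin; one simple (though not the only) way is to feed the import to the Django shell and watch for errors:

$ echo 'import firmwareplugins' | python ~/mytestbed/manage.py shell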

You also need to synchronize the plugins with the database and restart Controller services for the changes to take effect:

$ python ~/mytestbed/manage.py syncfirmwareplugins
$ sudo python ~/mytestbed/manage.py restartservices

An entry for enabling the new plugin will appear under section Plugins in the firmware generator configuration page in the Controller web interface.

Services in the management network

As noted in the example node configuration file described in Firmware configuration, you may run arbitrary services on the management network that will be available to all testbed servers, nodes, slivers and hosts. These services need not run on servers operated by testbed administrators; in fact, they may run on any server, node, sliver or host connected to the management network.

In case you want to dedicate some host to a specific, long-running service, it is recommended that you register it as a testbed host (see Adding a host to the management network) and introduce the service in its description field. This can actually be done by any testbed user without special privileges.

You may use a service like this for core testbed support for whoever wants to trust it, e.g. to provide some proxy cache service to the Registry API for a set of nodes with bad connectivity (just remember to provide the proper URI and certificate when generating node firmware).

Adding a management network gateway

Some nodes in your testbed may be unable to reach the testbed registry or the controller by themselves (e.g. some community networks lack native access to the Internet). In this case you may pick a server which does have access both to the controller and those nodes, and configure it in the testbed as a management network gateway server (or gateway server for short), so that it extends the testbed management network to the nodes and makes them and their slivers available in the testbed (see The management network).

In the setup described below, the gateway server will connect its tinc daemon to the controller, and nodes will connect theirs to the gateway instead of the controller. At the gateway, the scripts from CONFINE utilities will be used to periodically refresh the information of hosts allowed to connect to its tinc daemon.

Gateway server registration

First you need to add the new server to the testbed registry:

  1. Log into the Controller web interface. You are presented with the dashboard.
  2. Click on the Servers icon to get the list of servers in the testbed.
  3. Click on the Add server button to register a new server.
  4. Provide an informative name (e.g. My Network gateway) and a description.
  5. Click on Save and continue editing.
  6. Note down the IPv6 address under section Management network, i.e. its management address (e.g. fdc0:7246:b03f:0:0:0:0:3).
  7. Click on Manage tinc addresses next to section Tinc configuration and add at least one address which is reachable from the nodes. When choosing this server as the host of the tinc address, please note down the name outside of the parentheses, i.e. its tinc name (e.g. server_3).

Now you need to get some configuration information about the controller server you want the gateway's tinc daemon to connect to (you may repeat these steps if you want to connect to other servers as well):

  1. Go to that server's page in the Controller web interface.
  2. Note down the IPv6 address under section Management network, i.e. its management address (e.g. fdc0:7246:b03f:0:0:0:0:2).
  3. Note down the public key under section Tinc configuration, i.e. its tinc public key.
  4. Click on API on the menu bar. You are presented with the Registry API representation of the server.
  5. Note down the string next to the tinc/name JSON member, i.e. its tinc name (e.g. server_2).
  6. Note down some addresses under the tinc/addresses JSON member, i.e. its tinc addresses, which are reachable from the gateway server (e.g. controller.example.com 655).

You will also need the Registry API base URI to retrieve tinc host information from, which usually is one of the URIs published by the same server (visible under section Server APIs of its server page in the Controller web interface, e.g. https://controller.example.com/api/).
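
If you prefer the command line over the web interface, the same tinc information can be read straight from the Registry API, for instance with curl (here -k skips certificate verification for brevity; using --cacert with the registry certificate is the safer option):

$ curl -sk https://controller.example.com/api/ | python -m json.tool

From the base resource you can navigate to the list of servers and locate the entries described above.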

Gateway tinc configuration

Now you can proceed to set up your server. Here we assume a Debian installation, but any tinc-compatible Unix box (physical or virtual) with Git, Python and a Cron daemon should suffice for this setup. First we will configure the tinc daemon that the nodes will connect to.

  1. Log into the gateway server (e.g. via SSH) as root.
  2. Install tinc: apt-get install tinc.
  3. Create directories for your testbed tinc configuration and host files, e.g. mkdir -p /etc/tinc/mytestbed/hosts.
  4. Create the tinc configuration file with your gateway server's tinc name to connect to the controller's tinc name, for instance:
    # cat /etc/tinc/mytestbed/tinc.conf
    Name = server_3
    ConnectTo = server_2
    StrictSubnets = yes
  5. Create the controller's tinc host file with its management address, tinc addresses and public key:
    # cat /etc/tinc/mytestbed/hosts/server_2
    Subnet = fdc0:7246:b03f:0:0:0:0:2/128
    Address = controller.example.com 655
    
    -----BEGIN PUBLIC KEY-----
    […]
    -----END PUBLIC KEY-----
  6. Create this server's tinc host file with its management address and generate its key pair:
    # echo 'Subnet = fdc0:7246:b03f:0:0:0:0:3/128' \
      > /etc/tinc/mytestbed/hosts/server_3
    # tincd -n mytestbed -K  # accept the default files
  7. Create a script that brings up the management network interface and configures its address, and make it executable:
    # cat /etc/tinc/mytestbed/tinc-up
    #!/bin/sh
    ip -6 link set "$INTERFACE" up mtu 1400
    ip -6 addr add fdc0:7246:b03f:0:0:0:0:3/48 dev "$INTERFACE"
    # chmod a+rx /etc/tinc/mytestbed/tinc-up
  8. Configure the network to be enabled on boot and restart tinc:
    # echo mytestbed >> /etc/tinc/nets.boot
    # service tinc restart
  9. Finally, copy this server's public key block from the tinc host file you created before (/etc/tinc/mytestbed/hosts/server_3), paste it in the Public Key box under section Tinc configuration in this server's page in the Controller web interface and click on Save.

From this point on, the gateway server is allowed in the testbed management network (but see issue #694). You should be able to successfully ping the controller's management address (e.g. ping6 fdc0:7246:b03f::2).
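
A few quick checks from the gateway itself can confirm that everything is in place:

# pgrep -l tincd                         # a daemon for the mytestbed network should be running
# ip -6 addr show | grep fdc0:7246:b03f  # the gateway's own management address should be configured
# ping6 -c 3 fdc0:7246:b03f::2           # and the controller's management address should answer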

Periodic refresh of tinc hosts

Now we shall configure a Cron task to periodically update tinc host files with the information retrieved from the testbed registry, using the gateway scripts available at the confine-utils repository.

  1. Clone the confine-utils repository to /opt:
    # git clone \
      http://git.confine-project.eu/confine/confine-utils.git \
      /opt/confine-utils
  2. Use Python Setuptools to install the latest Requests library:
    # apt-get install python-setuptools
    # easy_install --upgrade requests
  3. Test if you are able to retrieve tinc hosts information from the testbed registry using the fetch_tinc_hosts.py script, for instance:
    # UTILS_PATH=/opt/confine-utils
    # REGISTRY_URI=https://controller.example.com/api/
    # cd $(mktemp -d)
    # env PYTHONPATH=$UTILS_PATH \
      python $UTILS_PATH/gateway/fetch_tinc_hosts.py "$REGISTRY_URI"

    After some seconds, the current directory should be populated with tinc host files containing subnets and public keys.

  4. Create the /etc/cron.d/local-mytestbed crontab to run the refresh-tinc-hosts script against the registry once per hour (use your own testbed name and registry URI):
    SHELL=/bin/sh
    PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
    TINC_DIR=/etc/tinc/mytestbed
    REGISTRY_URI=https://controller.example.com/api/
    
    42 * * * *  root  /opt/confine-utils/gateway/refresh-tinc-hosts "$TINC_DIR" "$REGISTRY_URI"
  5. Reload Cron with service cron reload.

As soon as the task runs for the first time and information is collected, tinc connections from testbed servers, nodes and hosts will be allowed.
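
To confirm that the task is doing its job, check that host files start appearing and that Cron actually ran the script (on a default Debian setup Cron logs to the system log):

# ls /etc/tinc/mytestbed/hosts/            # host files for the testbed's servers, nodes and hosts should appear here
# grep refresh-tinc-hosts /var/log/syslog  # shows when Cron last ran the script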

Node admins who want to use this server as a management network gateway may select it in the Default connect to drop-down box under section Tinc configuration of the node's page in the Controller web interface, and then regenerate node firmware. Otherwise they may tune the registry.base_uri and registry.cert UCI options of file /etc/config/confine in OpenWrt.
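
For the latter, a rough sketch of what tuning those options on a node (as root) could look like, using the controller's management-network endpoint as an example Registry API URI; the exact value expected by registry.cert (inline certificate versus a file path) should be checked in the node documentation:

# uci set confine.registry.base_uri='https://[fdc0:7246:b03f::2]/api/'
# uci commit confine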
