Important note: This documentation refers to the now obsolete A-hack milestone of the CONFINE testbed software. Please refer to the node system documentation of the newer Bare bones milestone.


This document describes the CONFINE [1] node software and its API for managing research devices (RDs), as documented in the node architecture and related software releases [3,4].

The CONFINE Node Software (CNS) is developed and made available as a Linux/OpenWrt [2] package via our git repositories [5]. CNS operates inside an RD. It implements a framework (network namespaces, etc.) to integrate CONFINE architecture requirements [6,7] into the OpenWrt operating system and provides management functions for the following tasks:

  • Research Device (RD) management (configuration, activation, deactivation)
  • Slice and Sliver life-cycle management (allocation, deployment, starting, stopping, destruction)

All management functions are usable via a UCI-based [8] API that reflects the database model of the corresponding release milestone [3].

The next section briefly describes CNS concepts and the assumptions it makes about the OpenWrt installation.

The following sections describe the UCI-based API in detail and summarize the consequences of executing the related management functions.

Finally, the section 'Experiment Preparation' illustrates how an experimentation archive must be prepared in order to execute a simple ping experiment in all slivers of a slice.

Confine Node Software (CNS) OpenWrt Package

CNS is based on the OpenWrt SDK. Further background is given in the confine-dist wiki [9].

The most important extensions/modifications to a pure OpenWrt/trunk (the upcoming “Attitude Adjustment” release) are:

  • modified .config and kernel-config to preselect required packages
  • LXC package to enable Linux Container [10] support in OpenWrt [11]
  • confine-system package: implementing the node, slice, and sliver management functions [12]
  • confine-recommended package: adding useful but non-mandatory tools to the RD firmware [13]

Precompiled CNS images can be downloaded as ext4 or SquashFS images from here:

OpenWrt system and network components controlled by confine-system

  • Node management
    • Network (/etc/config/network)
      • IPv4 and IPv6 recovery addresses
      • Internal bridge
      • Local bridge
      • Direct Interfaces
    • IPv6 management network via tinc (/etc/tinc/)
      • Key management
      • tinc daemon control
    • SSH access (/etc/dropbear/authorized_keys)
    • /etc/config/confine (node id, ssh access)
    • /etc/config/system (hostname, timeservers,…)
    • /etc/config/confine-defaults (low-level implementation defaults)
  • Slice and Sliver Management
    • Sliver resource allocation (LXC container, namespaces, ip addresses,…)
    • Sliver isolation (container setup)
      • LXC /etc/config/lxc
    • Sliver network integration
      • Internal, Local, Isolated interfaces
    • Sliver file system setup
      • setup template root fs
      • customize sliver init scripts with sliver-specific attributes (network, hostname, ssh access TBD)
      • provide slice attributes to sliver processes (/root/confine/{bash|uci}/sliver-attributes)
      • start, stop, destroy slivers


The confine-defaults config contains (rather low-level) mandatory default values that should be used by all CONFINE RDs. They should not be changed. A complete and usable confine-defaults config is provided with the CNS system at install time.

Node / Research Device (RD) Management

Node Config

The following config file is used to set testbed, server, and node specific attributes:

  • /etc/config/confine

Typically, confine config templates are hardcoded into the image that is deployed in a testbed. In the following, this config file is described with an example from the deployment at . Finally, a typical setup example shows what the customization of an individual RD may look like.



config 'testbed' 'testbed'
        option 'mgmt_ipv6_prefix48' 'fdf5:5351:1dfd'   # First 48 bits for IPv6 management address calculation
        option 'mac_dflt_prefix16' '54:c0'             # Testbed default's first 16 bits to be used for sliver MAC address creation
        option 'priv_dflt_ipv4_prefix24' '192.168.241' # Testbed default's first 24 bits for IPv4 internal/recovery address calculation

config 'server' 'server'
        option 'cn_url' ''    # CN URL of the testbed server
        option 'mgmt_pubkey' 'ssh-rsa A...lDp root@confine' # Public key of the testbed server
        option 'tinc_ip' ''                    # Community Network IPv4 address of the tinc server coordinating the management network
        option 'tinc_port' '655'                            # Port of the tinc server coordinating the management network
        option 'tinc_pubkey' 'ssh-rsa A...lDp'              # Public key of tinc server in ssh-rsa format (use ssh-keygen -yf /path/to/servers/priv/tinc/key)

config node 'node'
        option id 'd012'                                         # NODE_ID of this RD. MUST be a 4-digit hexadecimal lowercase value!
        option cn_url ''           # CN URL of this node's CD
        option mac_prefix16 '54:c0'                              # First 16 bits to be used for sliver MAC address creation
        option priv_ipv4_prefix24 '192.168.241'                  # First 24 bits for IPv4 internal/recovery address calculation
        option public_ipv4_avail '12'                            # Number of IPv4 addresses available for slivers
        option rd_public_ipv4_proto 'dhcp'                       # Protocol used for the RD to obtain a CN IPv4 from the CD
        option sl_public_ipv4_proto 'dhcp'                       # Protocol used for slivers to obtain a CN IPv4 from the CD
#       option sl_public_ipv4_addrs ' ...'# If sl_public_ipv4_proto=static then put available sliver addresses here
#       option sl_public_ipv4_gw ''                  # If sl_public_ipv4_proto=static then put gw here
#       option sl_public_ipv4_dns ''                    # If sl_public_ipv4_proto=static then put dns here
        option rd_if_iso_parents 'eth1 eth2'                     # Interfaces that can be used for isolated sliver traffic
        option rd_pubkey 'ssh-rsa AAA...NmSy0s= root@OpenWrt'    # Public key of RD
        option state 'unprepared'                                # Change this to 'prepared' and call: confine_node_enable or /etc/init.d/confine start
                                                                 # This field MUST only be changed manually to the state 'prepared'
                                                                 # Functions to switch state: confine_node_enable, confine_node_disable, /etc/init.d/confine stop
                                                                 # The following CNS node.states exist: unprepared (disabled), prepared, applied, started, error
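
As stated in the comments above, the NODE_ID must be a 4-digit lowercase hexadecimal value. This can be checked mechanically; a minimal sketch (valid_node_id is a hypothetical helper, not part of CNS):

```shell
# Hypothetical helper: check the 4-digit lower-case hex NODE_ID format.
valid_node_id() {
    echo "$1" | grep -qE '^[0-9a-f]{4}$'
}

valid_node_id d012 && echo "d012 is a valid NODE_ID"
valid_node_id D012 || echo "D012 is rejected (upper-case)"
```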

Node Functions

The following functions can be used to complete node activation, deactivation, and status checks:

  • confine_node_enable or /etc/init.d/confine start
  • confine_node_disable or /etc/init.d/confine stop
  • confine_info
  • confine_help

The RD management config file (/etc/config/confine) must be completely set up according to the CONFINE testbed (where the RD is deployed) and the CD (to which the RD is connected).

After configuration is completed, the node state must be changed from 'unprepared' to 'prepared' (confine.node.state=prepared).

The CNS is activated by calling confine_node_enable or /etc/init.d/confine start.

After successful activation, the node state is automatically changed to confine.node.state=started.

Once the CNS has been successfully started (i.e. confine.node.state=started), this config MUST ONLY be changed after calling confine_node_disable. If unsure, confine_node_disable can be called multiple times.

Once started, slivers can be managed with the sliver management functions (described below).

Calling /etc/init.d/confine stop will stop all started slivers and set confine.node.state=applied.

Calling confine_node_disable will remove all allocated slivers and set confine.node.state=unprepared.

Setup Example

The following chain of commands assumes that the installed confine distro already contains preconfigured configuration templates for the testbed where the RD is deployed ( confine-*-example-* ) and only misses the node-specific details (this specific example assumes the CONFINE testbed at ).

root@OpenWrt:~# cp /etc/config/confine-example-upc /etc/config/confine
root@OpenWrt:~# uci set
root@OpenWrt:~# uci set confine.node.cn_url=""
root@OpenWrt:~# uci set confine.node.state=prepared
root@OpenWrt:~# uci commit confine
root@OpenWrt:~# /etc/init.d/confine start  # Alternatively confine_node_enable
root@rdd012:~#                             # Hostname has changed

Slice and Sliver Management

Management Functions

Usually, management functions are executed as RPCs (remote procedure calls) from the central testbed server on the RDs. The server is responsible for providing valid and non-conflicting input to the functions and for maintaining a database to store the results.

Management functions consume/produce data of the types SLICE_ID, SLIVER_DESCRIPTION, SLICE_ATTRIBUTES, and SLIVER_STATUS. The format of this data follows the UCI [8] notation and is described in the subsections below.

The following Management functions exist:


Syntax: confine_sliver_allocate <SLICE_ID> «SLIVER_DESCRIPTION»



Syntax: confine_sliver_deploy <SLICE_ID> «SLICE_ATTRIBUTES»



Syntax: confine_sliver_start <SLICE_ID>



Syntax: confine_sliver_stop <SLICE_ID|all>



Syntax: confine_sliver_remove <SLICE_ID|all>



A 12-digit hexadecimal lower-case value, e.g. '0123456789ab'.

The SLICE_ID='ffffffffffff' is reserved for node management purposes.
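
These two constraints (12 lower-case hex digits; 'ffffffffffff' reserved) can be expressed as a small shell check. A sketch; valid_slice_id is a hypothetical helper, not part of CNS:

```shell
# Hypothetical helper: a SLICE_ID is valid if it consists of 12 lower-case
# hex digits and is not the reserved value 'ffffffffffff'.
valid_slice_id() {
    echo "$1" | grep -qE '^[0-9a-f]{12}$' && [ "$1" != "ffffffffffff" ]
}

valid_slice_id 0123456789ab && echo "0123456789ab is a valid SLICE_ID"
valid_slice_id ffffffffffff || echo "ffffffffffff is reserved"
```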



config sliver '0123456789ab'                                    # SLICE_ID of the sliver
    option user_pubkey     'ssh-rsa A...Dp'  # a public key of the researcher doing the experiment
    option fs_template_url ''
                                                                # URL of the sliver file-system to be installed in the sliver
                                                                # MUST contain keywords 'openwrt' xor 'debian'
    option exp_data_url    ''
                                                                # URL of experimental data and init scripts for the experiment
    option exp_name        'hello-world-experiment'             # Optional name of the experiment (e.g. URL describing the experiment)
    option vlan_nr         'fab'                                # Mandatory only for experiments that request isolated interfaces
                                                                # 3-digit lower-case hex value defining the vlan-tag for isolated exp.
                                                                # MUST be the same value for all slivers of a slice and unique within all slices
                                                                # MUST be in the range of 0x100..0xfff
    option if00_type       'internal'                           # Mandatory for if00
    option if00_name       'priv'                               # Mandatory default. Interface name as it appears inside the sliver
    option if01_type       'public'                             # Mandatory only for experiments that request a CN public IP
    option if01_name       'pub0'                               # Mandatory default. Interface name as it appears inside the sliver
    option if01_ipv4_proto 'dhcp'                               # Mandatory for public ifs, MUST match confine-node.node.sl_public_ipv4_proto
    option if02_type       'isolated'                           # Mandatory only for experiments that request isolated interfaces
    option if02_name       'iso0'                               # Mandatory default, Interface name as it appears inside the sliver
    option if02_parent     'eth1'                               # Mandatory for if-types isolated, MUST exist in confine-node.node.rd_if_iso_parents
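
The vlan_nr constraint from the comments above (a 3-digit lower-case hex value in the range 0x100..0xfff) translates to a simple pattern check. A sketch; valid_vlan_nr is a hypothetical helper, not part of CNS:

```shell
# Hypothetical helper: a vlan_nr in 0x100..0xfff is exactly three lower-case
# hex digits whose first digit is non-zero.
valid_vlan_nr() {
    echo "$1" | grep -qE '^[1-9a-f][0-9a-f]{2}$'
}

valid_vlan_nr fab && echo "vlan_nr 'fab' is valid"
valid_vlan_nr 0ff || echo "0ff is below 0x100"
```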


The SLIVER_STATUS summarizes the state and attributes of a previously allocated, deployed, or started sliver, plus the resources that were not known before the function call.

For example, the result of an allocation with the SLIVER_DESCRIPTION above is simply extended with the following fields:

  • allocated sliver_nr
  • allocated MAC addresses
  • allocated IPv4 and IPv6 addresses
  • current state (can be 'allocated', 'deployed', 'started')


config sliver '0123456789ab'
        option user_pubkey 'ssh-rsa A...Dp'
        option exp_name 'hello-world-experiment'
        option vlan_nr 'fab'
        option fs_template_url ''
        option exp_data_url ''
        option sliver_nr '02'
        option if00_type 'internal'
        option if00_name 'priv'
        option if00_mac '54:c0:d0:13:02:00'
        option if00_ipv4_proto 'static'
        option if00_ipv4 ''
        option if00_ipv6_proto 'static'
        option if00_ipv6 'fdbd:e804:6aa9:0:0123:4567:89ab:0/64'
        option if01_type 'public'
        option if01_name 'pub0'
        option if01_mac '54:c0:d0:13:02:01'
        option if01_ipv6_proto 'static'
        option if01_ipv6 'fdf5:5351:1dfd:d013:0123:4567:89ab:01/64'
        option if01_ipv4_proto 'dhcp'
        option if01_ipv4 ''
        option if02_type 'isolated'
        option if02_name 'iso0'
        option if02_mac '54:c0:d0:13:02:02'
        option if02_parent 'eth1'
        option state 'allocated'
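
The allocated MAC and public IPv6 addresses in this SLIVER_STATUS follow a recognizable composition: the MAC combines mac_prefix16, the NODE_ID bytes, the sliver_nr and the interface number, while the public IPv6 address combines mgmt_ipv6_prefix48, the NODE_ID, the SLICE_ID (as three 16-bit groups) and the interface number. The following plain-shell sketch reproduces the example values; this composition rule is an assumption inferred from the examples, not a documented guarantee:

```shell
slice=0123456789ab; node=d013; sliver=02

# if01 MAC: <mac_prefix16>:<NODE_ID split into two bytes>:<sliver_nr>:<if nr>
n1=$(echo $node | cut -c1-2); n2=$(echo $node | cut -c3-4)
printf '54:c0:%s:%s:%s:01\n' "$n1" "$n2" "$sliver"
# → 54:c0:d0:13:02:01

# if01 IPv6: <mgmt_ipv6_prefix48>:<NODE_ID>:<SLICE_ID in three groups>:<if nr>/64
s1=$(echo $slice | cut -c1-4); s2=$(echo $slice | cut -c5-8); s3=$(echo $slice | cut -c9-12)
printf 'fdf5:5351:1dfd:%s:%s:%s:%s:01/64\n' "$node" "$s1" "$s2" "$s3"
# → fdf5:5351:1dfd:d013:0123:4567:89ab:01/64
```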


The SLICE_ATTRIBUTES list the information about the allocated resources of all slivers of a slice. Therefore, the server must store the SLIVER_STATUS obtained during the allocation process of each sliver and provide them as a concatenated list. To differentiate the slivers allocated on different nodes, the section name of each sliver must be given as 'SLICE_ID'_'NODE_ID' (read as SLICE_ID@NODE_ID). The sliver state is omitted from the SLICE_ATTRIBUTES.
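
As a sketch of this server-side renaming step (hypothetical, not part of CNS): a stored SLIVER_STATUS section can be rewritten to the 'SLICE_ID'_'NODE_ID' scheme, dropping the state option, with sed:

```shell
# Rename the sliver section and drop the state line (hypothetical helper).
node_id=d012
sed -e "s/^config sliver '\([0-9a-f]\{12\}\)'/config sliver '\1_${node_id}'/" \
    -e "/option state/d" <<EOF
config sliver '0123456789ab'
        option sliver_nr '01'
        option state 'allocated'
EOF
```

This prints the section renamed to '0123456789ab_d012' with the sliver_nr kept and the state line removed.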


The following example shows valid SLICE_ATTRIBUTES after a successful allocation of SLICE_ID=0123456789ab at NODE_ID=d012 and NODE_ID=d013:

config sliver '0123456789ab_d012'
        option user_pubkey     'ssh-rsa A...Dp'
        option exp_name 'hello-world-experiment'
        option vlan_nr 'fab'
        option fs_template_url ''
        option exp_data_url ''
        option sliver_nr '01'
        option if00_type 'internal'
        option if00_name 'priv'
        option if00_mac '54:c0:d0:12:01:00'
        option if00_ipv4_proto 'static'
        option if00_ipv4 ''
        option if00_ipv6_proto 'static'
        option if00_ipv6 'fdbd:e804:6aa9:0:0123:4567:89ab:0/64'
        option if01_type 'public'
        option if01_name 'pub0'
        option if01_mac '54:c0:d0:12:01:01'
        option if01_ipv6_proto 'static'
        option if01_ipv6 'fdf5:5351:1dfd:d012:0123:4567:89ab:01/64'
        option if01_ipv4_proto 'dhcp'
        option if01_ipv4 ''
        option if02_type 'isolated'
        option if02_name 'iso0'
        option if02_mac '54:c0:d0:12:01:02'
        option if02_parent 'eth1'

config sliver '0123456789ab_d013'
        option user_pubkey     'ssh-rsa A...Dp'
        option exp_name 'hello-world-experiment'
        option vlan_nr 'fab'
        option fs_template_url ''
        option exp_data_url ''
        option sliver_nr '02'
        option if00_type 'internal'
        option if00_name 'priv'
        option if00_mac '54:c0:d0:13:02:00'
        option if00_ipv4_proto 'static'
        option if00_ipv4 ''
        option if00_ipv6_proto 'static'
        option if00_ipv6 'fdbd:e804:6aa9:0:0123:4567:89ab:0/64'
        option if01_type 'public'
        option if01_name 'pub0'
        option if01_mac '54:c0:d0:13:02:01'
        option if01_ipv6_proto 'static'
        option if01_ipv6 'fdf5:5351:1dfd:d013:0123:4567:89ab:01/64'
        option if01_ipv4_proto 'dhcp'
        option if01_ipv4 ''
        option if02_type 'isolated'
        option if02_name 'iso0'
        option if02_mac '54:c0:d0:13:02:02'
        option if02_parent 'eth1'

Three-Step Sliver Deployment Example

The following chain of commands allocates, deploys, and starts a slice (of two slivers) over two nodes from a remote server using the CONFINE IPv6 management (overlay) network.

Remark: The Community Network (CN) public IPv4 addresses of the RDs may be used instead of the (tinc) IPv6 management addresses!
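
The IPv6 management addresses used below appear to be composed of the testbed's mgmt_ipv6_prefix48, the NODE_ID as the fourth 16-bit group, and ::2 as the RD's host part. A minimal sketch (rd_mgmt_addr is a hypothetical helper; the composition is inferred from the addresses in this document):

```shell
# Hypothetical helper: compose the tinc management address of an RD.
rd_mgmt_addr() {    # $1 = mgmt_ipv6_prefix48, $2 = NODE_ID
    printf '%s:%s::2\n' "$1" "$2"
}

rd_mgmt_addr fdf5:5351:1dfd d012   # → fdf5:5351:1dfd:d012::2
```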

#### Allocate Slivers:
# Substitute SLIVER_DESCRIPTION with the example given above.
# The server must process and store the returned SLIVER_STATUS of each
# allocation in order to provide the SLICE_ATTRIBUTES during the later
# sliver deployment.
ssh root@fdf5:5351:1dfd:d012::2 confine_sliver_allocate 0123456789ab <<EOF
<SLIVER_DESCRIPTION>
EOF

# Repeat the allocation procedure with node d013:
ssh root@fdf5:5351:1dfd:d013::2 confine_sliver_allocate 0123456789ab <<EOF
<SLIVER_DESCRIPTION>
EOF

#### Deploy Slivers:
# Substitute SLICE_ATTRIBUTES with the example given above.
ssh root@fdf5:5351:1dfd:d012::2 confine_sliver_deploy 0123456789ab <<EOF
<SLICE_ATTRIBUTES>
EOF

# Repeat the deployment procedure with node d013:
ssh root@fdf5:5351:1dfd:d013::2 confine_sliver_deploy 0123456789ab <<EOF
<SLICE_ATTRIBUTES>
EOF

#### Start Slivers:
ssh root@fdf5:5351:1dfd:d012::2 confine_sliver_start 0123456789ab
ssh root@fdf5:5351:1dfd:d013::2 confine_sliver_start 0123456789ab

#### Get Experimentation Data:
scp root@[fdf5:5351:1dfd:d012::2]:/lxc/01/rootfs/root/confine/data/* ./0123456789ab@d012.exp-data

Status Files

Status files are located inside the RD and essentially contain the last result of the corresponding management function calls. They are used as an internal database for the CNS and can be remotely accessed by the CONFINE server. These files MUST NOT be changed manually.

The following status files exist:


The file /etc/config/confine-slivers contains the sliver-specific attributes of all slivers that have been successfully allocated on an RD. For an example, imagine the concatenation of several SLIVER_STATUS examples.


The file /etc/config/confine-slice-attributes contains the concatenated SLICE_ATTRIBUTES of all slivers that have been successfully deployed on an RD.

Experiment Preparation

Experiments are executed in an LXC environment. The emulated OS is defined by the SLIVER_DESCRIPTION field fs_template_url. The following two URL values can be used to provide Debian- or OpenWrt-based environments:

A researcher must provide an experimentation archive file that is extracted on top of the sliver's root file system. The archive should provide at least an init script and the links to start the related init function in the corresponding OS format. For example, for OpenWrt this would be:

  • ./etc/rc.d/S94confine-experiment (which is a link to ../init.d/confine-experiment)
  • ./etc/init.d/confine-experiment
    • start() Function
    • stop() Function

During the lifetime of the sliver, all slice attributes can be accessed either in UCI format or as bash environment variables:

  • /root/confine/uci/confine-slice-attributes
  • /root/confine/bash/confine-slice-attributes (not supported yet)

The SLIVER_DESCRIPTION field exp_data_url defines the URL of the experimentation archive. A simple hello-world example is given by the following URL:

This example experiment simply pings the public IP of all other slivers of its slice and stores the measured round-trip times in the directory /root/confine/data/ . It consists of two files:

  • ./etc/rc.d/S94confine-experiment (which is a link to ../init.d/confine-experiment)
  • ./etc/init.d/confine-experiment

The file ./etc/init.d/confine-experiment starts the experiment:

#!/bin/sh /etc/rc.common


start_ping() {
    local IP=$1
    local PING_MAX=100
    local INIT_MAX=40
    local CNT=0
    local DATA=/root/confine/data/ping-$$-$IP.log
    echo "logging data to $DATA"

    date > $DATA
    echo "First probing for $INIT_MAX seconds for valid route to $IP ..." >> $DATA
    while [ $CNT -le $INIT_MAX ] ; do
        ping -c 1 -W 2 -w 2 $IP >> $DATA 2>&1 && break
        CNT=$(( $CNT + 1 ))
        sleep 1
    done

    date >> $DATA
    echo "Now sending $PING_MAX ping requests to $IP ..." >> $DATA
    ping -c $PING_MAX $IP >> $DATA 2>&1
}

start() {
    local IPS="$( uci show -c /root/confine/uci confine-slice-attributes | grep if01_ipv4= | awk -F'=' '{print $2}' | awk -F'/' '{print $1}' )"
    local IP=
    for IP in $IPS; do
        start_ping $IP &
    done
}

stop() {
    killall ping
}

Miscellaneous Management and Debug Tools

Flashing confine-dist ext4 images from inside the RD

WARNING: Test this procedure first with a local physical device, using both the current and the to-be-flashed system!

/etc/init.d/confine stop
/etc/init.d/openvswitch stop
# killall klogd syslogd  #  (maybe kill further processes preventing remount readonly)
mount -o remount,ro /

wget -P /tmp

gunzip -c /tmp/openwrt-x86-generic-combined-ext4-XX.img.gz | dd  of=/dev/sda bs=4M
# this takes a while... launch the following command from another ssh login to watch dd progress:
# kill -USR1 $(/tmp/busybox pidof dd)  # (533423+0 records for 250MB ext4 images ~ 15sec on alix)
echo 1 > /proc/sys/kernel/sysrq  # activate sysRq option
echo b > /proc/sysrq-trigger     # reboot your device


  • use confine_info to get an overview of the currently allocated slivers and the corresponding LXC container names
  • use lxc-console -n <LXC container name> to get a terminal on a started sliver

Virtual Confine Testbed (VCT) Environment


The Virtual Confine Testbed (VCT) provides an environment to quickly create a virtual network of confine nodes [17] with the following objectives:

  • becoming familiar with confine-dist [2]
  • test and facilitate development of software and components
  • emulate virtual topologies
  • prepare experiments for confine networks
  • extend/augment a real confine network (testbed) with emulated links

This is achieved through virtualization of confine hardware and networks using virsh/qemu and other Linux virtualization tools.

System Requirements

  • Debian system (squeeze or newer)
  • Super user permissions
  • Further Debian packages ()


Note: Instead of installing VCT straight into your system, you may prefer to use a prepackaged and preconfigured VCT container that can be run as a Linux container with very few changes to your setup.

The VCT environment is provided by a set of shell functions, scripts, and bash environment variables. The code is provided as a utility package in the confine-dist repository (in directory utils/vct) in the a-hack branch [14]. To check out the confine-dist see further instructions here [15,16], or to quickly proceed (assuming the git-core Debian package is available) do:

git clone --branch a-hack confine-a-hack
cd confine-a-hack/utils/vct

Currently, all vct functionality is provided by the script and configured via the default configuration (vct.conf.defaults) and the overrides (vct.conf.overrides). To change the default configuration, copy vct.conf.defaults to vct.conf and perform your changes there or in vct.conf.overrides.


$ ./vct_help 


    vct_system_install [OVERRIDE_DIRECTIVES]              : install vct system requirements
    vct_system_init                                       : initialize vct system on host
    vct_system_cleanup                                    : revert vct_system_init

    Node Management Functions

    vct_node_info      [NODE_SET]                         : summary of existing domain(s)
    vct_node_create    <NODE_SET>                         : create domain with given NODE_ID
    vct_node_start     <NODE_SET>                         : start domain with given NODE_ID
    vct_node_stop      <NODE_SET>                         : stop domain with given NODE_ID
    vct_node_remove    <NODE_SET>                         : remove domain with given NODE_ID
    vct_node_console   <NODE_ID>                          : open console to running domain

    vct_node_customize <NODE_SET> [online|offline|sysupgrade]  : configure & activate node

    vct_node_ssh       <NODE_SET> ["COMMANDS"]            : ssh connect via recovery IPv6
    vct_node_scp       <NODE_SET> <SCP_ARGS>              : copy via recovery IPv6
    vct_node_mount     <NODE_SET>
    vct_node_unmount   <NODE_SET>

    Slice and Sliver Management Functions
    Following functions always connect to a running node for RPC execution.

    vct_sliver_allocate  <SL_ID> <NODE_SET> [EXPERIMENT]
    vct_sliver_deploy    <SL_ID> <NODE_SET>
    vct_sliver_start     <SL_ID> <NODE_SET>
    vct_sliver_stop      <SL_ID> <NODE_SET>
    vct_sliver_remove    <SL_ID> <NODE_SET> 
    vct_sliver_ssh       <SL_ID> <NODE_SET> ["COMMANDS"]  : ssh connect via recovery IPv6

    vct_slice_attributes <show|short|flush|update|state=<STATE>> [SL_ID|all [NODE_ID]]
    vct_slice_info                                               [SL_ID|all [NODE_ID]]

    Argument Definitions

    OVERRIDE_DIRECTIVES:= comma seperated list (NO spaces) of override directives: 
                             override_node_template, override_server_template, override_keys
    NODE_ID:=             node id given by a 4-digit lower-case hex value (eg: 0a12)
    NODE_SET:=            set of nodes given by: 'all', NODE_ID, or NODE_ID-NODE_ID (0001-0003)
    SL_ID:=               slice id given by a 12-digit lower-case hex value
    EXPERIMENT:=          vct_hello_openwrt | vct_hello_debian | as defined in vct.conf
    COMMANDS:=            Commands to be executed on node
    SCP_ARGS:=            MUST contain keyword='remote:' which is substituted by 'root@[IPv6]:'
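
As an illustration of the NODE_SET range notation above, a range such as 0001-0003 can be expanded into individual 4-digit NODE_IDs in plain shell (expand_node_set is a hypothetical helper, not part of the vct script):

```shell
# Hypothetical helper: expand a NODE_SET of the form NODE_ID or
# NODE_ID-NODE_ID into one 4-digit lower-case hex NODE_ID per line.
expand_node_set() {
    case "$1" in
        *-*)
            local i end
            i=$(( 0x${1%-*} )); end=$(( 0x${1#*-} ))
            while [ "$i" -le "$end" ]; do
                printf '%04x\n' "$i"
                i=$(( i + 1 ))
            done ;;
        *)  echo "$1" ;;
    esac
}

expand_node_set 0001-0003   # → 0001, 0002, 0003 (one per line)
```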




These todos have been partly moved to the next release. For more todos please consult [3].

  • CN privacy
  • Network Isolation for transport and application layer experiments (firewall)
  • Wireless interface management
  • Community Container

See soft:rd-operational-guide for the RD operational guide.


soft/node-system-a-hack.txt · Last modified: 2013/05/14 16:31 by ivilata