The management network

A CONFINE testbed consists of a series of components that must coordinate to effectively run the users' applications. Just as global testbeds like PlanetLab @PlanetLab@ use the Internet for this purpose, and indoor testbeds like Emulab @Emulab@ use a dedicated wired control network, each CONFINE testbed provides its own testbed management network.

This network uses a clean and uniform IPv6 address scheme carefully devised so that each component has an easily predictable address (see Addressing in CONFINE) that can be reached from any other testbed component at the network layer (OSI model layer 3). Some components like testbed servers (trusted support hosts set up by testbed operators) and testbed nodes (devices set up by different groups for running applications) are expected to provide some important testbed-wide services (like registry and node REST APIs) over these management addresses, while others can provide their own custom services. Other components like testbed hosts (odd devices like a user's computer) have no special function in the testbed, but they can use their management addresses to offer services to the whole testbed.
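
The following Python sketch illustrates the idea of predictable addresses. The prefix and the exact field layout are assumptions made for illustration only; the real scheme is defined in Addressing in CONFINE.

  import ipaddress

  # Hypothetical layout for illustration only; the real scheme is defined
  # in Addressing in CONFINE.  Here the /48 management prefix gets one /64
  # per node (the node ID as the fourth 16-bit group) and a fixed,
  # component-specific suffix inside that /64.
  MGMT_PREFIX = ipaddress.IPv6Network('fd5f:eee5:e6ad::/48')  # example ULA prefix

  def node_management_address(node_id, suffix=1):
      """Return a predictable IPv6 address for a component of a node."""
      base = int(MGMT_PREFIX.network_address)
      # Shift the node ID into the 16 bits right after the /48 prefix,
      # then put the component suffix in the interface identifier.
      return ipaddress.IPv6Address(base | (node_id << 64) | suffix)

  print(node_management_address(0x123))  # fd5f:eee5:e6ad:123::1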

The management network uses a /48 prefix which can be easily obtained via Unique Local Addresses @ULA@ (private) or 6to4 @6to4@ (public). A management network with a public prefix can be routed to the IPv6 Internet, thus becoming readily available to casual external users.
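
As a rough illustration of how such a /48 prefix may be obtained, the Python sketch below follows the RFC 4193 procedure for a private ULA prefix and the RFC 3056 mapping for a public 6to4 one; the sample IPv4 address is fictitious.

  import hashlib, os, time, ipaddress

  # ULA /48 prefix following the RFC 4193 procedure: fd00::/8 plus a
  # pseudo-random 40-bit Global ID.  RFC 4193 suggests hashing an NTP
  # timestamp with an EUI-64; random bytes stand in for the EUI-64 here.
  def random_ula_prefix():
      seed = time.time().hex().encode() + os.urandom(8)
      global_id = hashlib.sha1(seed).digest()[-5:]           # lowest 40 bits
      packed = bytes([0xfd]) + global_id + bytes(10)         # fd + Global ID + zeros
      return ipaddress.IPv6Network((packed, 48))

  # Public 6to4 /48 prefix per RFC 3056: 2002::/16 followed by the
  # 32-bit public IPv4 address of the site.
  def prefix_6to4(public_ipv4):
      v4 = int(ipaddress.IPv4Address(public_ipv4))
      return ipaddress.IPv6Network((0x2002 << 112 | v4 << 80, 48))

  print(random_ula_prefix())         # e.g. fd5f:eee5:e6ad::/48
  print(prefix_6to4('192.0.2.33'))   # 2002:c000:221::/48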

Although the management network provides an easy way to reach any sliver or host in a CONFINE testbed, it is not intended for transferring massive application traffic nor for performing traffic measurements, since it may not properly reflect the features of the community network (as we will see below).

The IPv6 overlay

When running on a locally administered network infrastructure (e.g. a single site like a university campus or building), it is possible to set up the management network directly on top of it. However, community networks (CNs) are administered in a distributed fashion and follow their own rules, so such a native setup is not possible. In particular, CNs tend to lack widespread IPv6 support, which results in a variety of problems:

  • IPv4 address scarcity keeps the testbed from using a custom address scheme.
  • The diversity of CN devices using IPv4 means that testbed nodes may sit behind NAT boxes or firewalls that limit their reachability.
  • Different CNs may use incompatible or overlapping private IPv4 address ranges (in spite of coordination initiatives like Free Networks @FreeNets@), as the sketch after this list illustrates. This may affect CONFINE testbeds like Community-Lab which span several CNs (without inter-testbed federation).
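
A minimal Python sketch of the overlap problem mentioned in the list above, using made-up CN names and address ranges:

  import ipaddress
  from itertools import combinations

  # Toy illustration of the overlap problem: two CNs that independently
  # picked their ranges from RFC 1918 private space (names and ranges
  # are made up).
  cn_ranges = {
      'CN-A': ipaddress.ip_network('10.0.0.0/16'),
      'CN-B': ipaddress.ip_network('10.0.128.0/17'),
  }
  for (a, net_a), (b, net_b) in combinations(cn_ranges.items(), 2):
      if net_a.overlaps(net_b):
          print(a, 'and', b, 'overlap: IPv4 alone cannot address the whole testbed')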

Since CNs cannot be expected to adopt IPv6 before CONFINE testbeds are deployed, each CONFINE testbed can work around the previous problems by creating its own IPv6 overlay on top of the real network infrastructure. Such a setup is shown in IPv6 overlay, where management traffic is tunneled through the overlay, while applications running in testbed nodes can choose between direct (native) connections towards other nodes in the same CN and tunneled connections towards nodes in other CNs.
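
The following toy Python sketch illustrates that choice; the CN prefix and the peer addresses are made up.

  import ipaddress

  # Toy illustration of the choice described above: an application talking
  # to a peer prefers the peer's native CN address when both sit in the
  # same CN, and falls back to the tunneled management address otherwise.
  # All prefixes and addresses are made up.
  LOCAL_CN_PREFIX = ipaddress.ip_network('10.1.0.0/16')    # this node's CN

  def pick_peer_address(native_addr, mgmt_addr):
      if ipaddress.ip_address(native_addr) in LOCAL_CN_PREFIX:
          return native_addr        # direct, reflects real CN links
      return mgmt_addr              # tunneled through the IPv6 overlay

  print(pick_peer_address('10.1.2.3', 'fd5f:eee5:e6ad:123::1'))   # same CN -> native
  print(pick_peer_address('10.2.9.9', 'fd5f:eee5:e6ad:456::1'))   # other CN -> overlay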

Figure: IPv6 overlay. A tinc-based IPv6 overlay spanning two community networks.

A management network gateway is a CN host which acts as an entry point to a management network implemented as an IPv6 overlay. As seen in Overall architecture, such a gateway can help extend a management network (and so a testbed) over multiple CNs using external means like the Internet itself or the FEDERICA research backbone @FEDERICA@. A management network gateway can also route a public management network to the IPv6 Internet. IPv6 overlay shows two dedicated gateway hosts connecting the management network between two CNs, as well as a testbed registry server that also acts as a gateway in its CN.

tinc as a management network backend

We have seen that there can be several ways of implementing all or part of a testbed management network. Each particular mechanism used for that purpose is called a backend. The most basic one is the native backend, which directly uses the underlying network infrastructure at the data link layer (OSI model layer 2). In contrast, there are several potential approaches to implement a management network as an IPv6 overlay.

Some IPv6 migration techniques like 6to4 @6to4@, 6over4 @6over4@, Teredo @Teredo@ or ISATAP @ISATAP@ derive IPv6 addresses from host IPv4 addresses, which may still clash between CNs, while other techniques like 6in4 @6in4@ may have problems with protocol 41 handling in NAT boxes or firewalls. VPN solutions like OpenVPN @OpenVPN@ avoid the previous problems (mostly making IPv4 configuration irrelevant), but, like the aforementioned techniques, they become complicated to set up in a mesh-like cloud, or they rely on centralized architectures which alter the resulting topology and can turn the VPN server into a bottleneck.
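
The clash mentioned above can be illustrated with ISATAP (RFC 5214), whose interface identifier embeds the host IPv4 address. In the Python sketch below, the IPv6 prefix and the private IPv4 address are examples only.

  import ipaddress

  # Why host-IPv4-derived IPv6 addresses can clash between CNs: the same
  # private IPv4 address reused in two CNs yields the same ISATAP
  # interface identifier, hence the same IPv6 address under a shared
  # prefix.  Prefix and IPv4 addresses below are examples.
  def isatap_address(prefix64, ipv4):
      """Build an ISATAP address <prefix64>::0:5efe:<ipv4> (RFC 5214)."""
      iid = 0x5EFE << 32 | int(ipaddress.IPv4Address(ipv4))
      base = int(ipaddress.IPv6Network(prefix64).network_address)
      return ipaddress.IPv6Address(base | iid)

  addr_cn_a = isatap_address('2001:db8:1:1::/64', '10.1.2.3')
  addr_cn_b = isatap_address('2001:db8:1:1::/64', '10.1.2.3')  # same private IPv4, other CN
  print(addr_cn_a, addr_cn_a == addr_cn_b)   # identical addresses -> clash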

Mesh VPN solutions try to avoid those problems. Particularly, a CONFINE testbed can use tinc @tinc@ to set up a mesh IPv6 overlay where data is exchanged directly between endpoints, thus reproducing the underlying topology as much as possible. tinc takes care of routing traffic to the endpoint closest to the destination and propagating endpoint information with minimal configuration (a single daemon with some credentials, a subnet and a route, see tinc in CONFINE for further details). It also provides free host-level encryption and authentication between endpoints.
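
As a rough idea of how small that per-node configuration is, the Python sketch below writes out a generic minimal tinc setup. The network name, file locations, prefix and addresses are assumptions made for illustration; the exact layout used by CONFINE is described in tinc in CONFINE.

  import os

  # Generic minimal tinc setup for one node, matching the "single daemon
  # with some credentials, a subnet and a route" description above.  The
  # network name, paths, prefix and addresses are assumptions; the exact
  # CONFINE layout is described in tinc in CONFINE.  Run as root on the
  # node; the RSA keys are generated afterwards with `tincd -n confine -K`.
  MGMT_PREFIX = 'fd5f:eee5:e6ad::/48'       # example management /48
  NODE_NAME   = 'node0123'
  NODE_NET    = 'fd5f:eee5:e6ad:123::/64'   # subnet this node announces
  NODE_ADDR   = 'fd5f:eee5:e6ad:123::1/64'  # management address of the node
  GATEWAY     = 'gateway1'                  # tinc name of a server in the same CN
  CONFDIR     = '/etc/tinc/confine'

  os.makedirs(f'{CONFDIR}/hosts', exist_ok=True)

  # tinc.conf: who this daemon is and which gateway it bootstraps from.
  with open(f'{CONFDIR}/tinc.conf', 'w') as f:
      f.write(f'Name = {NODE_NAME}\n')
      f.write(f'ConnectTo = {GATEWAY}\n')

  # Host file: the subnet announced to the mesh (tincd appends the public
  # key to this file when the credentials are generated).
  with open(f'{CONFDIR}/hosts/{NODE_NAME}', 'w') as f:
      f.write(f'Subnet = {NODE_NET}\n')

  # tinc-up: address the tunnel interface and route the whole management
  # prefix through the overlay when the daemon starts.
  with open(f'{CONFDIR}/tinc-up', 'w') as f:
      f.write('#!/bin/sh\n')
      f.write(f'ip -6 addr add {NODE_ADDR} dev "$INTERFACE"\n')
      f.write(f'ip -6 route add {MGMT_PREFIX} dev "$INTERFACE"\n')
  os.chmod(f'{CONFDIR}/tinc-up', 0o755)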

As shown in IPv6 overlay, a tinc daemon in a testbed node usually connects only to gateways implemented as servers deployed by testbed operators in its CN, while gateways connect natively to other such gateways in the same CN, or through external means to gateways in different CNs. In other words, all tinc connections tend to flow towards some trusted servers, although the mesh nature of tinc makes data travel straight between endpoints, respecting the underlying network topology.

The resource usage and scalability of tinc still remain to be assessed in real testbeds. First, tinc operates in user space, which may imply high CPU usage, reduced bandwidth and higher latency, especially on very high bandwidth and very low latency links like optical fibre. Second, tinc handles all routing and endpoint information propagation itself, so it may have problems coping with hundreds of endpoints.
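
One simple way to start assessing the latency overhead is to compare round-trip times towards the same node over its native CN address and over its tunneled management address, as in the rough Python sketch below; both addresses are placeholders, and older systems may need ping6 for the IPv6 target.

  import re, subprocess

  # Rough sketch of the kind of comparison suggested above: ping the same
  # node over its native CN address and over its tunneled management
  # address and compare average round-trip times.  Both addresses are
  # placeholders; older systems may need ping6 for the IPv6 target.
  TARGETS = {
      'native (CN)':    '10.1.2.3',               # example IPv4 address in the CN
      'overlay (mgmt)': 'fd5f:eee5:e6ad:123::1',  # example management address
  }

  def average_rtt(host, count=10):
      out = subprocess.run(['ping', '-c', str(count), host],
                           capture_output=True, text=True).stdout
      match = re.search(r'= [\d.]+/([\d.]+)/', out)   # "min/avg/max" summary line
      return float(match.group(1)) if match else None

  for label, host in TARGETS.items():
      print(label, average_rtt(host), 'ms')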


@6in4@
“Basic Transition Mechanisms for IPv6 Hosts and Routers”, IETF RFC 4213: http://tools.ietf.org/html/rfc4213
@6over4@
“Transmission of IPv6 over IPv4 Domains without Explicit Tunnels”, IETF RFC 2529: http://tools.ietf.org/html/rfc2529
@6to4@
“Connection of IPv6 Domains via IPv4 Clouds”, IETF RFC 3056: http://tools.ietf.org/html/rfc3056
@Emulab@
Emulab - Network Emulation Testbed Home: https://www.emulab.net/
@FEDERICA@
FEDERICA, Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures: http://www.fp7-federica.eu/
@FreeNets@
Free Networks association: http://freenetworks.org/
@ISATAP@
“Intra-Site Automatic Tunnel Addressing Protocol (ISATAP)”, IETF RFC 5214: http://tools.ietf.org/html/rfc5214
@OpenVPN@
OpenVPN Community Software: http://openvpn.net/index.php/open-source.html
@Teredo@
“Teredo: Tunneling IPv6 over UDP through Network Address Translations (NATs)”, IETF RFC 4380: http://tools.ietf.org/html/rfc4380
@tinc@
tinc: http://www.tinc-vpn.org/
@PlanetLab@
PlanetLab, an open platform for developing, deploying, and accessing planetary-scale services: https://www.planet-lab.org/
@ULA@
“Unique Local IPv6 Unicast Addresses”, IETF RFC 4193: http://tools.ietf.org/html/rfc4193