Glossary

Elements of the system

This section defines the elements present in the CONFINE testbeds.

Testbeds, servers and networks

(See Overall architecture and The management network.)

  • CONFINE testbed: A set of testbed nodes following the configuration of the same testbed registry. A single CONFINE testbed may span different sites (using community network infrastructure) or even different network islands and community networks (using management network gateways). Still, all elements in the same testbed share the same namespaces (e.g. there cannot be two testbed nodes with the same ID at different community networks).
  • Testbed registry: A store hosted by one or several servers containing the configuration of a testbed.
  • Testbed server: A machine set up by testbed operators to provide support services to the testbed.
  • Registry server: A machine (usually a testbed server) that provides users with a registry API to create and manage their slices.
  • Controller: A machine (usually a testbed server) running the CONFINE management software (confine-controller), which provides a registry API implementation (thus a controller is also a registry server) as well as additional tools that ensure the involved nodes perform the operations needed to serve the requested slices (a usage sketch follows this list).
  • Site: A grouping of physically close nodes (not relevant to the CONFINE architecture).
  • Network island: A subset of a community network where all hosts can reach one another at the network layer (OSI model L3) but may not be reachable from another part of the same community network on a permanent basis (i.e. not because of temporary link failures). Any two subsets of different community networks should also be considered islands with respect to each other, unless both have public IP connectivity, in which case they may both belong to the “Internet” island, although they may be kept as different islands to provide information on their locality.
  • Testbed management network: A testbed-dedicated layer 3 network where all components in a CONFINE testbed can be reached (nodes, slivers, servers, other hosts). To overcome firewalls in community devices and disconnection at the network layer between islands and community networks, the management network may be a tunneled VPN. Thus a running management network has no islands itself: all of its devices can reach one another at the network layer (barring link failures, of course). This network plays a role similar to that of the control network in an indoor or small-scale testbed (where said network is completely managed by testbed operators).
  • Management network gateway: A machine (usually a testbed server) providing entry to the testbed's management network (implemented as a VPN) from a set of network islands. It can help connect different parts of a management network located at different islands over some link external to them (e.g. the Internet).
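
As an illustration of how a user might interact with a registry server over the management network, the minimal Python sketch below queries a REST-style registry API. The base URL, endpoint paths and field names are hypothetical, chosen for illustration only; they do not describe the actual CONFINE registry API.

  # Minimal sketch of querying a registry server (hypothetical endpoints and fields,
  # not the actual CONFINE registry API).
  import json
  import urllib.request

  REGISTRY = "https://controller.example.org/api"  # hypothetical registry server

  def get(path):
      """Fetch a JSON document from the registry API."""
      with urllib.request.urlopen(REGISTRY + path) as resp:
          return json.load(resp)

  if __name__ == "__main__":
      # List registered nodes and the slivers of each slice.
      for node in get("/nodes"):
          print(node["id"], node.get("state"))
      for slice_ in get("/slices"):
          print(slice_["name"], [sliver["node"] for sliver in slice_.get("slivers", [])])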

Other definitions for similar concepts in related projects are:

  • PanLab (pdf)
    • Administrative domain: An administrative domain is defined by the requirements of one or more business roles and is governed by a single business objective. This could be e.g. IMS Playground governed by Fraunhofer FOKUS.
  • PlanetLab (html)
    • Site: A site is a physical location where PlanetLab nodes are located (e.g. Princeton University or HP Labs).

Nodes and Devices

(See Node architecture.)

  • CONFINE node: A research device connected to a community network through a wired link called the node's local network. The node does not run routing protocol software, so routing to the rest of the community network is performed by a community device connected to the local network, which the node uses as a gateway. In locations where accessibility is an issue, a recovery device is attached to the node. Once properly deployed, a CONFINE node becomes an active member device of the community network, where users can run applications (such as experiments and services).
  • Community device: A device in charge of connecting to the community network, extending it and acting as the CONFINE node's link to it. It must therefore have at least two different interfaces: one to connect to the community network and another to connect to the research device through a local network. Usually the first is a wireless interface and the second a wired one. To be part of the community network, it must run the software required by that network (routing protocols, OpenWrt distribution, etc.), which is location-dependent. Occasionally, the community device is a container running on the research device, with the proper interface attached to it.
  • Research device: A relatively more powerful board with virtualization capabilities where the applications run. It runs the CONFINE distribution with the proper control software to create and control the different slivers. On the network side it has a local interface and several optional direct interfaces. The local interface is the wired interface connected to the community device and possibly other non-CONFINE devices. The direct interfaces are interfaces for routing experiments and custom network interaction. If the community device is a container on the research device, that container controls a direct interface connected to the community network (the sketch after this list summarizes these devices).
  • Recovery device: A simple device whose purpose is to force the research device to reboot in case of malfunction.
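
The following Python sketch summarizes how these devices relate to one another. Class and field names are assumptions chosen for clarity; they are not part of any CONFINE software.

  # Illustrative model of the devices that make up a CONFINE node
  # (names and fields are assumptions, not CONFINE software).
  from dataclasses import dataclass, field
  from typing import List

  @dataclass
  class CommunityDevice:
      community_iface: str              # usually wireless, into the community network
      local_iface: str                  # wired link to the research device
      runs_as_container: bool = False   # it may instead be a container on the research device

  @dataclass
  class ResearchDevice:
      local_iface: str                  # wired link to the community device
      direct_ifaces: List[str] = field(default_factory=list)  # optional, for routing experiments

  @dataclass
  class ConfineNode:
      community_device: CommunityDevice
      research_device: ResearchDevice
      has_recovery_device: bool = False  # attached where physical accessibility is an issue

  # Example: a node whose community device is a separate router.
  node = ConfineNode(CommunityDevice("wlan0", "eth0"), ResearchDevice("eth0", ["wlan1"]))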

Other definitions for nodes on related projects are:

  • PlanetLab
    • A node is a dedicated server that runs components of PlanetLab services (html).
    • A node is a machine capable of hosting one or more virtual machines (VM). All nodes must be reachable from the site at which PLC components run (pdf).
  • ORCA (pdf)
    • Nodes represent concrete resource objects that are independently configurable and independently programmable. Nodes may represent slivers (“virtual” nodes) or components (“physical” nodes). Nodes are typed and have attributes defined by their type.
    • Nodes are the basic units of configuration.
    • Nodes are state machines whose configuration status can be polled and captured as one of a discrete set of states from NodeStates.

CommonNodeDB (CNDB)

(See General.)

  • commonNodeDB (sometimes abbreviated to “nodeDB” or “CNDB”): A database where all CONFINE devices and nodes are registered and assigned to responsible persons. The nodeDB contains all the backing data on devices, nodes, locations (GPS positions), antenna heights and alignments; essentially, anything needed to plan, maintain and monitor the infrastructure (an illustrative record is sketched below).
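
For illustration only, a node record in such a database might carry fields like the following. This is a made-up example in Python; it does not reflect the real CNDB schema.

  # Purely illustrative node record; field names do not come from the real CNDB schema.
  node_record = {
      "node_id": 42,
      "responsible_person": "jane.doe@example.org",
      "location": {"lat": 41.39, "lon": 2.16},              # GPS position
      "antennas": [{"height_m": 12, "azimuth_deg": 230}],   # heights and alignments
      "devices": ["community-device-1", "research-device-1"],
  }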

Pieces of Software

(See Node software.)

  • CONFINE distribution: A customized OpenWrt running in a testbed node, with enabled virtualization capabilities and the proper set of scripts and software to control slivers. It also has a set of OS images as templates for slivers and is in charge of running the community container when it is hosted in the node itself.
  • Control software: The set of scripts running on the testbed node that follow the configuration in a testbed registry to create and remove slivers and properly configure their networking, as sketched below. Control software also restricts the slivers' resource consumption both on the node and on the network.
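
A highly simplified sketch of what such control software does is shown below. It assumes that slivers are implemented as containers and that the node can fetch its desired state from the registry; the helper functions and the reconciliation logic are illustrative only, not the actual CONFINE node scripts.

  # Simplified control loop: reconcile the slivers requested in the registry
  # with the containers actually running on the node.
  # Placeholder logic, not the real CONFINE control software.
  import subprocess

  def desired_slivers():
      """Placeholder: fetch from the registry the slivers this node should run."""
      return {"slice-a-sliver": "debian-template", "slice-b-sliver": "openwrt-template"}

  def running_containers():
      """List containers currently running on the node (using LXC as an example)."""
      out = subprocess.run(["lxc-ls", "--running"], capture_output=True, text=True)
      return set(out.stdout.split())

  def reconcile():
      wanted = desired_slivers()
      running = running_containers()
      for name, template in wanted.items():
          if name not in running:
              # Create and start a container for a newly requested sliver.
              subprocess.run(["lxc-create", "-n", name, "-t", template], check=True)
              subprocess.run(["lxc-start", "-n", name, "-d"], check=True)
      for name in running - set(wanted):
          # Stop and remove containers whose sliver is no longer requested.
          subprocess.run(["lxc-stop", "-n", name], check=True)
          subprocess.run(["lxc-destroy", "-n", name], check=True)

  if __name__ == "__main__":
      reconcile()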

Interfaces' access modes

(See Node architecture.)

When requesting a sliver, the slice administrator (researcher) requests a set of interfaces attached to the sliver. Four types of interface are defined, each determining what the sliver can do with it:

  • Raw interface (not yet available): The sliver is granted exclusive access to the interface and can control it at any layer (from physical to application). This is only allowed on nodes in special locations, so that the traffic generated by the sliver does not interfere with the proper working of the community network. Additionally, use of the interface might be limited to Wi-Fi channels agreed in advance with the researcher and monitored to ensure enforcement.
  • Passive interface: When requesting a passive interface, the sliver is able to capture all the traffic seen by the interface (possibly filtered by control software). However, it cannot generate traffic or forward it through this interface. A physical interface on the CONFINE node might be mapped to several passive interfaces in different slices.
  • Isolated interface: Used for sharing the same physical interface while remaining isolated at L2: all outgoing traffic is tagged with a per-slice VLAN tag. The sliver can configure an isolated interface freely at L3, and several slices may share the same physical interface (see the sketch after this list).
  • Traffic interface: It has a pre-assigned IP address; any traffic generated by the slice and sent through this interface with a different source address is dropped. Traffic interfaces may be assigned either a public or a private address, defining a public interface or a private interface. Traffic from a public interface is bridged to the community network, whereas traffic from a private interface is forwarded to the community network by means of NAT. Every container has at least one private interface.
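
As an informal illustration of the isolated and private modes described above, the Python sketch below shows the kind of host-side commands a node might issue: a per-slice VLAN subinterface for an isolated interface, and a NAT rule for a private traffic interface. Interface names, the VLAN ID and the address range are arbitrary examples, and the real control software may configure this differently.

  # Illustrative host-side configuration for two interface modes (not the actual
  # CONFINE control software; names, VLAN IDs and addresses are arbitrary examples).
  import subprocess

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  def add_isolated_iface(phys_iface, vlan_id):
      """Isolated interface: share a physical interface, isolated at L2 by a per-slice VLAN tag."""
      vif = f"{phys_iface}.{vlan_id}"
      run(["ip", "link", "add", "link", phys_iface, "name", vif, "type", "vlan", "id", str(vlan_id)])
      run(["ip", "link", "set", vif, "up"])
      return vif  # would then be moved into the sliver's container

  def add_private_nat(priv_net, out_iface):
      """Private traffic interface: traffic is forwarded to the community network through NAT."""
      run(["iptables", "-t", "nat", "-A", "POSTROUTING", "-s", priv_net, "-o", out_iface, "-j", "MASQUERADE"])

  if __name__ == "__main__":
      add_isolated_iface("eth0", 256)             # e.g. a VLAN ID derived from the slice ID
      add_private_nat("192.168.157.0/24", "eth0")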

Roles

(See Overall architecture and Roles and permissions.)

  • Community user: A person that belongs to a community network and generates traffic coming from a Community Node.
  • Testbed operator: A person (usually registered on the testbed framework as a superuser, but not necessarily) who administers core components of the testbed like servers.
  • Group administrator: A user registered on the testbed framework who is responsible for managing the members, nodes and applications (slices) of a group of users.
  • Node administrator (or technician): A user registered on the testbed framework with rights to register and manage nodes in a group.
  • Slice administrator (or researcher): A user registered on the testbed framework with rights to create and run applications (like experiments and services) in a group. Their goal is to use the testbed to run services or to analyse the effects of an experiment in a real environment.

Similar roles defined by other projects include:

  • PlanetLab (html)
    • Principal Investigator (PI): The PIs at each site are responsible for managing slices and users at each site. PIs are legally responsible for the behavior of the slices that they create. Most sites have only one PI (typically a faculty member at an educational institution or a project manager at a commercial institution).
    • Technical Contact (Tech Contact): Each site is required to have at least one Technical Contact who is responsible for installation, maintenance, and monitoring of the site's nodes.
    • User: A user is anyone who develops and deploys applications on PlanetLab. PIs may also be users.
  • PanLab (pdf)
    • Panlab Customer: An entity that uses (consumes) any service provided by the Panlab. Customers typically carry out R&D projects, implement new technologies or products, or introduce new telecom services.

Elements and definitions of the framework

(See Resource sharing.)

  • Application: A series of actions (programs with accompanying data) to be run on behalf of a user on a testbed. Applications can generate traffic (active applications) or not (passive applications). An application can implement an experiment or a service.
  • Slice: A set of resources spread over several nodes in a testbed, over which slice administrators (researchers) can run applications.

Similar definitions of slice are:

  • PlanetLab
    • A slice is a set of allocated resources distributed across PlanetLab. To most users, a slice means UNIX shell access to a number of PlanetLab nodes. PIs are responsible for creating slices and assigning them to their users. After being assigned to a slice, a user may then assign nodes to it. After nodes have been assigned to a slice, virtual servers for that slice are created on each of the assigned nodes. Slices have a finite lifetime and must be periodically renewed to remain valid (html).
    • A distributed set of resources allocated to a service in PlanetLab. PlanetLab is about presenting users horizontal slices in the distributed platform as a whole, not simply dividing a node into virtual machines (which is a relatively well-understood problem) (pdf).
    • A horizontal cut of global PlanetLab resources allocated to a given service. A slice encompasses some amount of processing, memory, storage, and network resources across a set of individual PlanetLab nodes distributed over the network. A slice is more than just the sum of the distributed resources, however. It is more accurate to view a slice as a network of virtual machines, with a set of local resources bound to each virtual machine (pdf).
    • A slice is a set of VMs, with each element of the set running on a unique node. The individual VMs that make up a slice contain no information about the other VMs in the set, except as managed by the service running in the slice (pdf).
  • 4WARD (pdf)
    • Strata slice: a percentage of the available resources of a stratum or two (or more) concatenated strata are chosen to create a new stratum of the same type as the underlying strata. In the case of several strata, the SGPs between the strata will not show up in the newly instantiated stratum that contains the resources of the slice.
  • FEDERICA
    • A slice is seen by the user as a real physical network under his/her domain, however it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit to the highest degree all the principles applicable to a physical network (isolation, reproducibility, manageability, …) (pdf).
    • A slice is a set of resources from the infrastructure that has been partitioned and virtualized by the physical infrastructure manager system. This slice is perceived by the user as equivalent to a real physical network under his/her domain; it maps to a logical partition of the physical resources (pdf).
  • ORCA (pdf)
    • A slice is a grouping mechanism for slivers. Slices are built-to-order for some specific application or purpose, embodied in software that runs within the slice.
  • Sliver: The partition of a node's resources assigned to a specific slice.

Other definitions are:

  • PlanetLab
    • Capsule: A component of a PlanetLab service that runs on a single node. The word “capsule” is not entirely satisfactory (for example, it has a different meaning in the Active Networks community), but it has a strong precedent in ISO RM-ODP and does capture the notion of a boundary around particular user’s code running (eventually) in a VM slice of a PlanetLab node (pdf).
    • We refer to the set of local resources allocated to a virtual machine as a sliver of that node (pdf).
  • ORCA (pdf)
    • A sliver is any virtualized resource or piece of a slice that is exposed to the guest and its SM as an element that is named, controlled, configured, allocated, and programmed independently of other slivers. Slivers instantiated at the same substrate provider (authority/AM) are grouped into leases according to type and interval of validity.
  • Resource: Anything that can be named and/or reserved; e.g. a node, a virtual link or a radio is a resource, but CPU and memory are not (see the data-model sketch below).
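
To make the relationship between slices, slivers and nodes concrete, the Python sketch below models them as simple data classes. It reflects only the definitions above; it is not code from the CONFINE framework, and all names are illustrative.

  # Conceptual model of the terms above: a slice groups slivers, and each sliver
  # is the partition of one node's resources assigned to that slice.
  from dataclasses import dataclass, field
  from typing import Dict, List

  @dataclass
  class Sliver:
      node_id: int                       # the node whose resources are partitioned
      resources: Dict[str, str] = field(default_factory=dict)  # e.g. template, interfaces

  @dataclass
  class Slice:
      name: str
      slivers: List[Sliver] = field(default_factory=list)  # spread over several nodes

  # Example: a slice with slivers on two nodes.
  experiment = Slice("my-experiment", [Sliver(node_id=7), Sliver(node_id=12)])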

Testbed characteristics

This subsection defines some testbed characteristics, establishing what is to be expected when using the testbeds (e.g. when two slices are said to be isolated, what exactly can be expected?).

  • Federation: Federation is the explicit cooperation between two or more testbeds in which part of their governance is delegated either to a central authority or distributed among the other testbeds belonging to the federation in order to achieve some goals. Typically those goals define the type of federation and the rules of the agreement. Usual objectives for federation are achieving scale, more realism, increasing the number of services offered by the testbed, increasing the geographic extent of the testbed, etc. A federation can be classified as horizontal, when the infrastructures involved are similar, or vertical, when they offer services at different layers. A federation can also be hierarchical, peer-to-peer or a composition federation, and might follow a bottom-up approach such as SFA or a top-down approach such as Teagle. Based on the objectives of the federation and the agreements established between the different parties, different levels of federation can be established, from basic information sharing to allowing the deployment of applications among different sites; and different policies might be applied to the users of one site asking for resources from another one.
  • Isolation: The degree to which an application running inside a sliver is prevented from accessing or affecting data or computations outside the sliver on the same node, beyond what it could do to an external node.
  • Privacy: The property of community network traffic by which an application cannot access it, whether the traffic is being forwarded by the node or addressed to the node (unless it is specifically addressed to the application).