Resource sharing

Since CONFINE testbeds are designed to scale up to thousands of nodes scattered over geographically disparate locations, it makes sense to support multiple simultaneous applications (like experiments or services) sharing the resources in a testbed, making more efficient use of them. For that reason, CONFINE testbeds are heavily inspired by concepts used in PlanetLab @PlanetLab@.

When users intend to run an application in a CONFINE testbed, they must choose a set of testbed nodes that will host the application. This selection can be assisted by tools such as a testbed web portal or other external services that monitor the state of nodes through their REST API.
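
As an illustration, the following Python sketch queries such a monitoring service for candidate nodes. The base URL, endpoint path, state vocabulary and field names are hypothetical stand-ins, not the actual CONFINE API.

  # Minimal sketch of node selection against a monitoring REST API.
  # REGISTRY, the /nodes endpoint and the "state" field are assumptions.
  import json
  import urllib.request

  REGISTRY = "https://registry.example.org/api"  # hypothetical base URL

  def list_available_nodes():
      """Return nodes that the service reports as available for slivers."""
      with urllib.request.urlopen(f"{REGISTRY}/nodes") as response:
          nodes = json.load(response)
      # Filter on a hypothetical state value; a real client would use
      # whatever state vocabulary the testbed registry defines.
      return [node for node in nodes if node.get("state") == "production"]

  for node in list_available_nodes():
      print(node["id"], node.get("name", ""))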

Each node in a CONFINE testbed is able to run several applications simultaneously. An application runs in a given node as a sliver, which temporarily holds a share of the node's resources (CPU, memory, disk, network bandwidth and interfaces, among others). Slivers in a node are isolated and their resource limits are enforced, so that no sliver can starve the others.
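
As a rough illustration of how such limits can be enforced, the following Python sketch caps a sliver's CPU and memory share using Linux cgroups (v2). This is one common isolation mechanism, not necessarily the one used on CONFINE nodes, and the group names and values are illustrative.

  # Sketch of per-sliver resource caps via Linux cgroups v2.
  # Paths, limits and the naming scheme are assumptions for illustration.
  from pathlib import Path

  CGROUP_ROOT = Path("/sys/fs/cgroup")  # typical cgroup v2 mount point

  def limit_sliver(sliver_id: str, cpu_pct: int, memory_bytes: int) -> Path:
      """Create a cgroup for a sliver and cap its CPU and memory share."""
      group = CGROUP_ROOT / f"sliver-{sliver_id}"
      group.mkdir(exist_ok=True)  # creating a directory creates the cgroup
      # cpu.max takes "<quota> <period>" in microseconds:
      # e.g. 25% of one CPU is a 25000 us quota per 100000 us period.
      (group / "cpu.max").write_text(f"{cpu_pct * 1000} 100000\n")
      (group / "memory.max").write_text(f"{memory_bytes}\n")
      # The sliver's processes would then be added to <group>/cgroup.procs.
      return group

  # Example: cap a sliver at 25% of one CPU and 256 MiB of memory.
  # limit_sliver("demo", cpu_pct=25, memory_bytes=256 * 1024 * 1024)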

All slivers in a testbed which are related to an application are grouped in a slice for management purposes, i.e. the user who is responsible for a slice (the slice administrator) is also responsible for all of its slivers. In fact, the user first creates a slice and then defines its slivers after choosing the desired nodes to run the application on. The same slice can be reused to run several applications, either by changing the slice configuration in the registry or by interacting directly with running slivers (e.g. remote login and command execution).
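
The following Python sketch illustrates this slice-then-slivers workflow against a registry REST API. The endpoints, payload fields and the absence of authentication are hypothetical stand-ins for whatever the real registry defines.

  # Sketch of the slice-then-slivers workflow; endpoints and payload
  # fields are assumptions, not the actual CONFINE registry API.
  import json
  import urllib.request

  REGISTRY = "https://registry.example.org/api"  # hypothetical base URL

  def post_json(path: str, payload: dict) -> dict:
      """POST a JSON payload to the registry and return the JSON reply."""
      request = urllib.request.Request(
          f"{REGISTRY}{path}",
          data=json.dumps(payload).encode(),
          headers={"Content-Type": "application/json"},
          method="POST",
      )
      with urllib.request.urlopen(request) as response:
          return json.load(response)

  # 1. Create the slice that will group all slivers of the application.
  slice_ = post_json("/slices", {"name": "my-experiment"})

  # 2. Define one sliver per chosen node; each sliver holds that node's
  #    share of resources for this slice.
  for node_id in ["node-1", "node-2", "node-3"]:
      post_json("/slivers", {"slice": slice_["id"], "node": node_id})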

The figure Slices and slivers below shows four slices sharing three nodes in a CONFINE testbed by means of slivers.

[Figure: Slices and slivers, depicting nodes, slivers and slices]


@PlanetLab@
PlanetLab, an open platform for developing, deploying, and accessing planetary-scale services: https://www.planet-lab.org/