“Bare bones” milestone

The objective of this release is to develop a “future-compatible” version of the testbed. The focus is on designing a software system that follows the approved specifications and procedures (system requirements, use cases, reference experiments, system architecture, development practices, etc.). The resulting system has to be a proof of concept of a minimal testbed architecture, design and integration. Functionality should still be kept to a minimum.

Fixes over *A Hack*

See the list of pending small fixes to *A Hack* under *Things to review after release* in “A Hack” milestone.

Review of existing features

Features already implemented in *A Hack* must be reviewed or reimplemented to settle their interfaces (especially those potentially exposed to users and other external actors) so that they do not change radically in future versions. Here is a list of those features; please add others that may need interface stabilisation:

  • Four aspects of the APIs: engineering decisions (e.g. REST, push/pull), API functionality, compatibility (with PlanetLab and federation requirements and APIs), and integration with the Common Node DB.
  • Sliver/slice management API:
    • Operations and objects (including states and transitions)
    • Notation/invocation/transport (e.g. UCI for commands over SSH, JSON for REST over HTTP…)
      • Use REST with JSON over HTTP
    • Synchronisation and event flow (push/pull)
      • Link to documentation on how often nodes refresh data in PlanetLab.
      • Write a document to help decide between push and pull
    • Actors and interactions (server, nodes, users; only allow user-server and server-nodes, or allow user-nodes too)
    • Usage, location and access of RSpecs
    • Location of data (DB replication, caching, distribution…)
  • Resource management:
    • Does the server take care of resources?
      • Yes (somewhat like *A Hack*)
      • No (allocations aren't ensured beforehand to work)
      • No, but user interface asks another service for resource availability.
    • Implementation of an Aggregate Manager (AM, as in PlanetLab):
      • Node interface for reporting resource status (e.g. as part of node info)
      • AM is used for allocating resources in the long term (it doesn't handle things like CPU load)
      • AM collects resource status from nodes: push vs. pull strategies
  • Researcher-provided descriptions of slices, slivers and experiments:
    • Slice description (what to move to individual slivers?)
    • Sliver description (what to move to global slice?)
      • Leave all slivers the same (template, experiment data) until we need to support otherwise
    • Experiments
    • Sliver customization
      • Don't implement provided filesystem overlay, stick to archive with experiment programs plus data until we need to support otherwise
  • Propagation of slice attributes (like sliver addresses) by the CNS:
    • Node interface for reporting sliver addresses (e.g. as part of sliver info)
    • Maybe implement a service for collecting this info from nodes: push vs. pull strategies (similar to AM)
    • Do we need some service to map IDs to IP addresses?
  • Researcher interaction with slivers
    • Stick to out-of-band (OoB) access; anything else is left to researchers
  • Compatibility of the above with OMF support requirements
    • iMinds / Bart: work on OMF starts at the end of June, evaluating requirements together with CONFINE (e.g. which interfaces have to be supported) and compatibility with OpenWrt. Documentation is problematic, so a tutorial will be written. Work continues on v5 until v6 is more stable and better documented.
    • For components see slide 8 in this presentation.
    • With RC and OML running in sliver, can it offer sufficient configurability?
    • Need OMF EC and AM in testbed: independent hosts connected to IPv6 overlay?
    • For the moment no special interfaces or support is needed in CONFINE.
  • Integration with the Common Node DB.
    • Changes to the data model, removal of redundant data.
    • Integration via CONFINE's and CNDB's REST APIs.
    • Caching of CNDB documents.
    • What about missing docs in the CNDB?
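Two of the open decisions above (JSON over HTTP for notation/transport, push vs. pull for synchronisation) can be made concrete with a minimal sketch of a pull-style node agent. This is only an illustration: the endpoint path, field names and refresh interval below are assumptions, not part of any agreed CONFINE API.

```python
import json
import time
import urllib.request

# Hypothetical server endpoint -- the real API paths are not settled yet.
SERVER_URL = "http://server.example.org/api"

def fetch_desired_slivers(node_id):
    """Pull the desired sliver set for this node as JSON over HTTP (REST GET)."""
    with urllib.request.urlopen(f"{SERVER_URL}/nodes/{node_id}/slivers") as resp:
        return json.load(resp)

def diff_slivers(running, desired):
    """Compare running vs. desired sliver IDs; return what to create/destroy."""
    running_ids = {s["id"] for s in running}
    desired_ids = {s["id"] for s in desired}
    return sorted(desired_ids - running_ids), sorted(running_ids - desired_ids)

def pull_loop(node_id, interval=300):
    """Naive pull strategy: refresh every `interval` seconds, as PlanetLab
    nodes periodically do. A push strategy would instead have the server
    notify nodes when their configuration changes."""
    running = []
    while True:
        desired = fetch_desired_slivers(node_id)
        to_create, to_destroy = diff_slivers(running, desired)
        # ...create and destroy slivers here...
        running = desired
        time.sleep(interval)
```

The trade-off to document is the usual one: pull keeps nodes simple and firewall-friendly but adds latency bounded by the refresh interval; push reacts immediately but requires the server to reach every node.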

Features to add

From highest priority to lowest:

  1. Node recovery (safe upgrade)
  2. Isolation
    • Network (>= L2)
    • Resource (QoS, CPU, memory)
  3. Error handling
    • Needs use cases
  4. OoB access
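“Node recovery (safe upgrade)” usually means keeping a known-good image so a node can roll back if a new image fails to boot. The sketch below shows that pattern only in outline; the paths and the dual-image layout are assumptions, not the actual CONFINE node firmware layout.

```python
import os
import shutil

# Hypothetical dual-image layout -- paths are illustrative only.
CURRENT = "/images/current.img"
PREVIOUS = "/images/previous.img"

def stage_upgrade(new_image, current=CURRENT, previous=PREVIOUS):
    """Install a new image while preserving the known-good one as a fallback,
    so a failed upgrade never leaves the node unrecoverable."""
    if os.path.exists(current):
        shutil.move(current, previous)  # keep the fallback image
    shutil.move(new_image, current)

def rollback(current=CURRENT, previous=PREVIOUS):
    """Restore the previous image after a failed boot (e.g. triggered by a
    watchdog that never saw the new image come up)."""
    if not os.path.exists(previous):
        raise RuntimeError("no fallback image available")
    shutil.move(previous, current)
```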

Data model

This is a class diagram of the current data model (source):

[image: data model class diagram]

For a description of the fields, see the REST API documentation.
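The sliver objects in the data model carry the states and transitions mentioned above under the sliver/slice management API. As a discussion aid, here is a minimal sketch of such a life cycle; the state and action names are assumptions for illustration, the authoritative model being the REST API documentation.

```python
# Hypothetical sliver life cycle -- state and action names are assumptions.
TRANSITIONS = {
    "registered": {"deploy": "deployed"},
    "deployed":   {"start": "started", "undeploy": "registered"},
    "started":    {"stop": "deployed"},
}

def apply(state, action):
    """Return the next sliver state, rejecting invalid transitions."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} a sliver in state {state!r}")
```

Encoding the transitions as a table like this makes it easy to check API operations against the model and to spot states that the diagram defines but no operation can reach.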

milestones/bare-bones.txt · Last modified: 2015/04/15 16:15 by ivilata