“Bare bones” milestone
The objective of this release is to develop a “future-compatible” version of the testbed. The focus here is on designing a software system that follows the approved specifications and procedures (system requirements, use cases, reference experiments, system architecture, development practices, etc.). The resulting system has to be a proof of concept of a (minimal) testbed architecture, design and integration. Functionality should still be kept to a minimum.
Fixes over A Hack
See the list of pending small fixes to *A Hack* under *Things to review after
release* in “A Hack” milestone.
Review of existing features
Features already implemented in *A Hack* must be reviewed or reimplemented to
settle interfaces (especially those potentially available to users and
other external actors) so that they don't radically change in future versions.
Here's a list of those features; please add others that may need interface
review:
- Four aspects of the APIs: engineering decisions (e.g. REST, push/pull), API
functionality, compatibility (with PlanetLab and federation requirements and
APIs), and integration with the Common Node DB.
- Sliver/slice management API:
- Operations and objects (including states and transitions)
- Notation/invocation/transport (e.g. UCI for commands over SSH, JSON for
REST over HTTP…)
- Use REST with JSON over HTTP (see the request sketch below)
- Synchronisation and event flow (push/pull)
- A link on how often nodes refresh data in PlanetLab.
- Create some doc to help decide on push vs. pull (see the sketch below)
- Actors and interactions (server, nodes, users; only allow user-server and
server-nodes, or allow user-nodes too)
- Usage, location and access of RSpecs
- Location of data (DB replication, caching, distribution…)
- Resource management:
- Does the server take care of resources?
- Yes (somewhat like *A Hack*)
- No (allocations aren't ensured beforehand to work)
- No, but the user interface asks another service for resource availability.
- Implementation of an Aggregate Manager (like PlanetLab):
- Node interface for reporting resource status (e.g. as part of node…)
- AM is used for allocating resources in the long term (it doesn't
handle things like CPU load)
- AM collects resource status from nodes: push vs. pull strategies
- Researcher-provided descriptions of slices, slivers and experiments:
- Slice description (what to move to individual slivers?)
- Sliver description (what to move to global slice?)
- Leave all slivers the same (template, experiment data) until we need
to support otherwise
- Sliver customization
- Don't implement provided filesystem overlay, stick to archive with
experiment programs plus data until we need to support otherwise
- Propagation of slice attributes (like sliver addresses) by CNS:
- Node interface for reporting sliver addresses (e.g. as part of sliver…)
- Maybe implement a service for collecting this info from nodes: push
vs. pull strategies (similar to AM)
- Do we need some service to map IDs to IP addresses?
- Researcher interaction with slivers
- Stick to OoB access, others left to researchers
- Compatibility of the above with OMF support requirements
- iMinds / Bart: work on OMF starts at the end of June, evaluating requirements
together with CONFINE (e.g. which interfaces have to be supported) and
compatibility with OpenWRT; documentation is problematic, so a tutorial will
be written. Work on v5 until v6 is more stable and documented.
- For components see slide 8 in
- With RC and OML running in sliver, can it offer sufficient
- Need OMF EC and AM in testbed: independent hosts connected to IPv6
- For the moment no special interfaces or support is needed in CONFINE.
- Integration with the Common Node DB (CNDB):
- Changes to the data model, removal of redundant data.
- Integration via CONFINE's and CNDB's REST APIs.
- Caching of CNDB documents (see the caching sketch below).
- What about missing docs in the CNDB?
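
To make the REST/JSON discussion above more concrete, here is a minimal sketch
of what a sliver-creation call against the server could look like. The base
URL, endpoint (`/slivers/`), payload fields and returned states are
illustrative assumptions only, not the settled interface.

```python
# Hypothetical sliver-creation request against a REST/JSON server API.
# Base URL, resource path, payload fields and states are assumptions.
import requests

SERVER = "https://testbed-server.example.org/api"

def create_sliver(slice_id, node_id, template="openwrt-minimal"):
    """Ask the server to register a sliver of `slice_id` on `node_id`."""
    payload = {
        "slice": slice_id,
        "node": node_id,
        "template": template,   # base image the node would instantiate
        "exp_data": None,       # optional archive with experiment programs/data
    }
    resp = requests.post(f"{SERVER}/slivers/", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()          # e.g. {"id": 42, "state": "registered", ...}

if __name__ == "__main__":
    print(create_sliver(slice_id=7, node_id=3))
```

The remaining operations and state transitions listed above would follow the
same request/response pattern.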
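As a starting point for the push vs. pull document mentioned above, the
following sketch contrasts the two strategies for collecting node status
(resource state, sliver addresses). The node and server endpoints are
hypothetical placeholders.

```python
# Pull vs. push status collection. `/status` on the node and
# `/node-status/<id>` on the server are placeholder endpoints.
import requests

def pull_statuses(node_urls):
    """Pull: the server periodically polls every node for its status."""
    statuses = {}
    for url in node_urls:
        try:
            statuses[url] = requests.get(f"{url}/status", timeout=5).json()
        except requests.RequestException:
            statuses[url] = None   # unreachable node; handling policy open
    return statuses

def push_status(server_url, node_id, status):
    """Push: each node periodically reports its own status to the server."""
    requests.post(f"{server_url}/node-status/{node_id}", json=status, timeout=5)
```

Pull keeps nodes simple but makes the server scan every node; push spreads the
load across nodes but requires each node to know and reach the server reliably.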
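For the CNDB integration, a simple time-to-live cache over its REST API would
cover both document caching and missing documents explicitly. The base URL,
document paths and TTL below are assumptions for illustration.

```python
# TTL cache for CNDB documents fetched over REST; base URL and paths are
# placeholders, and the 5-minute TTL is an arbitrary assumption.
import time
import requests

CNDB_BASE = "https://cndb.example.org/api"
TTL = 300                    # seconds a cached document is considered fresh
_cache = {}                  # path -> (fetch_time, document)

def get_document(path):
    """Return a CNDB document, refetching only when the cached copy is stale."""
    now = time.time()
    if path in _cache and now - _cache[path][0] < TTL:
        return _cache[path][1]
    resp = requests.get(f"{CNDB_BASE}/{path}", timeout=10)
    if resp.status_code == 404:
        return None          # document missing in the CNDB; caller decides
    resp.raise_for_status()
    _cache[path] = (now, resp.json())
    return _cache[path][1]
```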
Features to add
From highest priority to lowest:
- Node recovery (safe upgrade)
- Network (>= L2)
- Resource (QoS, CPU, memory)
- Error handling
- OoB access