OMF Integration

In this work, we discuss three design proposals for integrating the OMF framework with the CONFINE infrastructure.

RC in RD

The first proposal is based on the answer the OMF community gave to a question we posted on their mailing list, asking how we could best handle image loading in combination with VMs.

This design places an OMF Resource Controller (ORC) on the Research Device (RD), which amounts to a relatively deep integration of OMF with the CONFINE software. This ORC then instructs the CONFINE software to manage the slivers (each also running an ORC) as it commands.

This implies implementing a new type of experiment in OMF which will:

  1. collect VM information (number, names, image, OMF slice) from the user as command-line arguments,
  2. contact the ORC on the RD,
  3. ask the ORC on the RD to create the VMs with the requested images (the ORC can keep an image repository so that image IDs can be sent instead of entire images; an image only has to be transferred when it is not yet in the repository), and
  4. notify the user, once the VMs are created, with the relevant information (which can be used in the experiment description), while the newly created ORCs bring the Aggregate Manager (OAM) up to date.

The experimenter executes this new loadVM experiment, which triggers the OMF Experiment Controller (OEC) to ask the OAM for a topology of all available RDs (this assumes the OAM is kept up to date about these RDs). Knowing these devices, the OEC can contact the necessary RDs and execute the steps above. After the loadVM experiment, the experimenter (now aware of the nodes he can use in his experiment description) passes his OMF experiment description to the OEC, which sends commands to the VMs on the RDs.
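
A minimal Python sketch of what such a loadVM driver could look like. The OAM endpoint (/topology/rds), the ORC endpoint (/orc/create_vm), and the JSON payloads are hypothetical placeholders invented for illustration; real OMF components communicate over XMPP PubSub rather than HTTP:

<code python>
#!/usr/bin/env python3
"""Sketch of the hypothetical loadVM experiment driver (steps 1-4).

All endpoints and payloads below are assumptions, not the real OMF or
CONFINE interfaces.
"""
import argparse
import json
import urllib.request


def main():
    # Step 1: collect the VM information from the command line.
    parser = argparse.ArgumentParser(description="Allocate OMF VMs on CONFINE RDs")
    parser.add_argument("--count", type=int, required=True, help="number of VMs")
    parser.add_argument("--names", nargs="+", required=True, help="VM names")
    parser.add_argument("--image-id", required=True,
                        help="ID of an image already in the ORC image repository")
    parser.add_argument("--slice", required=True, help="OMF slice for the VMs")
    parser.add_argument("--oam", default="http://oam.example.org",
                        help="Aggregate Manager base URL (assumed)")
    args = parser.parse_args()
    assert len(args.names) == args.count, "one name per requested VM"

    # Ask the OAM for the topology of available RDs (hypothetical endpoint).
    with urllib.request.urlopen(f"{args.oam}/topology/rds") as resp:
        rds = json.loads(resp.read())

    # Steps 2-3: contact the ORC on each chosen RD and request VM creation.
    for name, rd in zip(args.names, rds):
        payload = {"name": name, "image_id": args.image_id, "slice": args.slice}
        req = urllib.request.Request(
            f"http://{rd['address']}/orc/create_vm",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        # Step 4: the reply carries the information the user needs for the
        # experiment description; the new ORCs update the OAM themselves.
        with urllib.request.urlopen(req) as resp:
            print("created:", json.loads(resp.read()))


if __name__ == "__main__":
    main()
</code>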

Advantage

  • OEC and OAM do not need major changes (new experiment, topology passing)

Disadvantage

  • deep integration of OMF with CONFINE software (ORC on RD)

Figure 1 makes this proposal clearer:

  1. The experimenter starts the newly written loadVM experiment on the OEC, providing all the necessary node information
  2. The OEC asks the OAM for an overview of the available RDs
  3. The OAM answers the OEC with the overview
  4. The OEC chooses RDs out of the overview and contacts the ORC on those RDs with the node information
  5. The ORCs on the RDs pass this node information to the CONFINE software on the RDs, which allocates the necessary slivers with the correct images
  6. The ORCs on the newly created slivers notify the OAM of their existence, as in the sketch after this list (at this point relevant information about the slivers may also be sent back to the user: not illustrated)
  7. The experimenter can pass his OMF experiment description to the OEC
  8. The OEC asks the OAM for information (e.g. availability) on the nodes requested by the OMF experiment description
  9. The OEC directly sends directives to the ORCs on the slivers to run the experiment
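
As a sketch of the registration in step 6, assuming a hypothetical HTTP /register endpoint on the OAM (real OMF resource controllers announce themselves over XMPP PubSub), the ORC inside a new sliver might announce itself as follows:

<code python>
"""Sketch of the step 6 announcement: an ORC inside a freshly booted
sliver registering itself at the OAM. The /register endpoint and the
message fields are assumptions for illustration."""
import json
import socket
import urllib.request

OAM_URL = "http://oam.example.org"  # assumed Aggregate Manager address


def register_sliver(slice_name):
    hostname = socket.gethostname()  # sliver name chosen at creation time
    announcement = {
        "type": "orc_hello",
        "hostname": hostname,
        "address": socket.gethostbyname(hostname),
        "slice": slice_name,
    }
    req = urllib.request.Request(
        f"{OAM_URL}/register",
        data=json.dumps(announcement).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


if __name__ == "__main__":
    register_sliver("demo-slice")
</code>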

CONFINE portal as gateway

The second proposal is our preferred design and comes in three different “options”, of which we prefer the last (“Preprocessing to describe nodes”). In this design proposal, the CONFINE portal serves as a gateway to allocate slivers (in contrast to the ORC on the RD in the first design proposal).

Separate script to describe nodes [option 1]

In this option the experimenter passes the node information (number, names, image, OMF slice) to a provided script that uses the CONFINE portal API to allocate ORCs for the new OMF experiment. The allocated ORCs register themselves at the OAM, so the OEC is ready to accept the OMF experiment description from the experimenter.

The experimenter then passes the OMF experiment description to the OEC, which asks the OAM about the availability of the ORCs (where the OAM may have to reset the ORCs via the CONFINE portal API). The OEC sends the experiment directives directly to the ORCs on the slivers of the RDs.
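
A Python sketch of such an allocation script follows. The /slivers endpoint and its payload are stand-ins for the real CONFINE portal REST API, not its actual interface:

<code python>
"""Sketch of the option 1 node-allocation script, driving an assumed
subset of the CONFINE portal REST API."""
import json
import sys
import urllib.request

PORTAL = "https://portal.example.org/api"  # assumed portal base URL


def allocate(names, image_id, omf_slice):
    for name in names:
        payload = {"name": name, "template": image_id, "slice": omf_slice,
                   "properties": {"omf": True}}  # mark the sliver as OMF-enabled
        req = urllib.request.Request(
            f"{PORTAL}/slivers",
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            sliver = json.loads(resp.read())
            print(f"allocated sliver {sliver.get('id')} for node {name}")


if __name__ == "__main__":
    # Usage: allocate_nodes.py <image_id> <omf_slice> <name> [<name> ...]
    image_id, omf_slice, *names = sys.argv[1:]
    allocate(names, image_id, omf_slice)
</code>

Note that the names given to this script must reappear verbatim in the OMF experiment description, which is exactly the synchronization burden listed as the disadvantage below.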

Advantage

  • OMF and CONFINE communicate through the CONFINE API

Disadvantage

  • the experimenter always has to provide the configuration for the script and has to keep the node information (e.g. names) in sync between the script and the experiment description

Figure 2 makes this proposal clearer:

  1. The experimenter sends his node information as parameters to the node allocation script
  2. The node allocation script uses the CONFINE portal API, which addresses the CONFINE software on the RDs to boot the necessary slivers with the correct images
  3. Continuation of step 2: the CONFINE software boots the slivers with the correct images
  4. The ORCs on the slivers register their existence at the OAM
  5. The nodes are allocated; the experimenter can pass his experiment description to the OEC
  6. The OEC checks for information on the nodes with the OAM
  7. The OEC directly sends directives to the ORCs on the slivers to execute the experiment

OEDL extension to describe nodes [option 2]

A second option within this proposal is not to use a standalone script but to add extra functionality to the OEDL provided by OMF. Here the experimenter defines, at the start of his OMF experiment description, the nodes he wants to use (name, number of NICs, image, OMF slice). Preprocessing the experiment description would identify these definitions and instruct the OAM to allocate the slivers via the CONFINE portal API (or the OEC could do this itself).

After the preprocessing has happened and the nodes are allocated, the normal OMF processing of the experiment begins and the experiment is executed.
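
A Python sketch of such a preprocessing pass; the defNode directive is our hypothetical OEDL extension (it does not exist in stock OEDL), and the allocation call is only printed instead of going to the OAM:

<code python>
"""Sketch of the option 2 preprocessing pass: scan an OEDL experiment
description for a hypothetical 'defNode' directive and turn every match
into an allocation request for the OAM."""
import re
import sys

# Hypothetical syntax:
#   defNode 'sender', :nics => 2, :image => 'omf-base', :slice => 'demo'
DEF_NODE = re.compile(
    r"defNode\s+'(?P<name>[^']+)'"
    r",\s*:nics\s*=>\s*(?P<nics>\d+)"
    r",\s*:image\s*=>\s*'(?P<image>[^']+)'"
    r",\s*:slice\s*=>\s*'(?P<slice>[^']+)'")


def extract_nodes(oedl_text):
    """Return all node definitions found in the experiment description."""
    return [m.groupdict() for m in DEF_NODE.finditer(oedl_text)]


if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        for node in extract_nodes(f.read()):
            # Here the OEC/OAM would call the CONFINE portal API;
            # we only show what would be requested.
            print(f"allocate {node['name']}: {node['nics']} NIC(s), "
                  f"image {node['image']}, slice {node['slice']}")
</code>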

Advantage

  • No separate script or description for the experimenter.

Disadvantage

  • Rewriting the OEDL/OEC/OAM is required
  • OMF experiment descriptions from other testbeds will not be directly applicable to this testbed.

Preprocessing to describe nodes [option 3]

As mentioned above, this is the option we prefer. We have already implemented the basics of this design, but the implementation is not yet stable.

A third option also uses only one OMF experiment description, but applies another form of preprocessing: the experimenter does not have to define the required nodes explicitly; instead, the preprocessing derives them from the experiment description. For example, the preprocessor could check how many interfaces of a node are used to determine the number of NICs necessary on that particular node.

As in option two, the OAM allocates the nodes (via the CONFINE portal API), and after the allocation the OMF experiment starts on the nodes.
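
A Python sketch of this derivation; the defGroup/interface patterns matched below are simplified assumptions about how nodes and their wireless interfaces appear in an OEDL description:

<code python>
"""Sketch of the option 3 preprocessing idea: derive per-node resource
requirements implicitly from the experiment description by counting the
wireless interfaces each node uses."""
import re
from collections import defaultdict

GROUP = re.compile(r"defGroup\('(?P<group>[^']+)',\s*'(?P<node>[^']+)'\)")
IFACE = re.compile(r"(?P<group>\w+)\.net\.w(?P<index>\d+)")


def derive_requirements(oedl_text):
    """Map each node to the number of wireless NICs the description uses."""
    node_of_group = {m["group"]: m["node"] for m in GROUP.finditer(oedl_text)}
    nics = defaultdict(int)
    for m in IFACE.finditer(oedl_text):
        node = node_of_group.get(m["group"])
        if node is not None:
            # Interface indices start at 0, so w1 implies two NICs.
            nics[node] = max(nics[node], int(m["index"]) + 1)
    return dict(nics)


if __name__ == "__main__":
    example = """
    defGroup('sender', 'omf.confine.node1')
    defGroup('receiver', 'omf.confine.node2')
    sender.net.w0.mode = 'adhoc'
    sender.net.w1.mode = 'adhoc'
    receiver.net.w0.mode = 'adhoc'
    """
    # Prints {'omf.confine.node1': 2, 'omf.confine.node2': 1}
    print(derive_requirements(example))
</code>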

Advantage

  • No rewriting of the OEDL
  • No explicit definition of the required nodes, which allows universal reuse of OMF experiment descriptions (not the case in option two)

Disadvantage

  • Preprocessing will become more complex
  • Rewriting OEC/OAM

Figure 3 makes this third option clearer:

  1. The experimenter sends his OMF experiment description to the OEC
  2. The OEC preprocessing starts and collects all the node information
  3. The OEC sends the node information to the OAM
  4. With this node information the OAM uses the CONFINE portal API to allocate the correct slivers with the correct images
  5. Continuation of step 4: the CONFINE software on the RDs starts up the correct slivers with the correct images
  6. The ORCs on the slivers register their existence at the OAM
  7. The OEC starts the default processing of the OMF experiment description
  8. The OEC checks with the OAM for information on the nodes requested in the experiment description
  9. The OEC sends directives to the slivers to start the experiment
