In this work, we discuss three design proposals for integrating the OMF framework with the CONFINE infrastructure.
The first proposal is based on the answer the OMF community gave to a question we posted on their mailing list, asking how best to handle image loading in combination with VMs.
The first design proposal places an OMF Resource Controller (ORC) on the Research Device (RD), which amounts to a relatively deep integration of OMF with the CONFINE software. This ORC would then instruct the CONFINE software to manage the slivers (each also running an ORC) on its behalf.
This implies implementing a new type of experiment in OMF, the loadVM experiment, which loads the VM images onto the RDs.
The experimenter executes this new loadVM experiment, which triggers the OMF Experiment Controller (OEC) to ask the OAM for a topology of all available RDs (this presumes the OAM is kept up to date about these RDs). Knowing these devices, the OEC can contact the necessary RDs and execute the experiment mentioned above. After the loadVM experiment, the experimenter (now aware of the nodes usable in the experiment description) passes the OMF experiment description to the OEC, which sends commands to the VMs on the RDs.
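The loadVM control flow just described can be sketched as follows. Note that the class and method names here are our own illustrative assumptions, not actual OMF or CONFINE interfaces, and the OAM and RDs are replaced by toy in-memory stand-ins:

```python
class OAM:
    """Toy stand-in for the OAM: keeps an up-to-date inventory of RDs."""

    def __init__(self, research_devices):
        self.research_devices = list(research_devices)

    def topology(self):
        # Answer the OEC's request for all available RDs.
        return list(self.research_devices)


class ResearchDevice:
    """Toy stand-in for an RD whose ORC can spawn slivers (each sliver
    running its own ORC)."""

    def __init__(self, name):
        self.name = name
        self.slivers = []

    def load_vm(self, image):
        # The RD's ORC instructs the CONFINE software to create a sliver
        # from the given VM image and returns the sliver's name.
        sliver = "%s-sliver%d" % (self.name, len(self.slivers))
        self.slivers.append((sliver, image))
        return sliver


def run_load_vm_experiment(oam, image):
    """The OEC asks the OAM for the RD topology, then instructs each RD
    to load the VM image; the returned sliver names are the nodes the
    experimenter can use in the subsequent experiment description."""
    return [rd.load_vm(image) for rd in oam.topology()]
```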
Figure 1 illustrates this proposal.
The second proposal is our preferred design and comes in three different options, of which we prefer the last one (“Preprocessing to describe nodes”). In this design proposal, the CONFINE portal serves as the gateway for allocating slivers (in contrast to the ORC on the RD in the first design proposal).
In the first option of proposal two, the experimenter passes the node information (number, names, image, OMF slice) to a provided script that uses the CONFINE portal API to allocate ORCs for the new OMF experiment. The allocated ORCs register themselves at the OAM, after which the OEC is ready to accept the OMF experiment description from the experimenter.
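A minimal sketch of the core of such a script is shown below. The payload field names, and the choice of one allocation request per node, are assumptions on our part and do not reflect the real CONFINE portal API schema; the actual HTTP calls to the portal are deliberately left out:

```python
def build_sliver_requests(num_nodes, image, omf_slice, names=None):
    """Build one allocation request per node; each resulting sliver is
    expected to run an ORC that registers itself at the OAM.

    The field names below are illustrative, not the real CONFINE
    portal API schema.
    """
    names = names or ["node%d" % i for i in range(1, num_nodes + 1)]
    if len(names) != num_nodes:
        raise ValueError("need exactly one name per node")
    return [
        {
            "name": name,        # sliver/node name
            "image": image,      # VM image containing the ORC
            "slice": omf_slice,  # OMF slice the ORC registers under
        }
        for name in names
    ]

# Each request body would then be submitted to the portal's
# sliver-allocation endpoint (e.g. with urllib.request); the endpoint
# path is not specified here.
```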
The experimenter then passes the OMF experiment description to the OEC, which queries the OAM for the availability of the ORCs (the OAM may have to reset the ORCs via the CONFINE portal API). The OEC sends the experiment directives directly to the ORCs on the slivers of the RD.
Figure 2 illustrates this proposal:
A second option for this second proposal/design is not to use a standalone script, but to add extra functionality to the OEDL (OMF Experiment Description Language). Here the experimenter defines, at the start of the OMF experiment description, the nodes they want to use (specifying the name, number of NICs, image, and OMF slice). A preprocessing pass over the experiment description identifies these definitions and instructs the OAM to allocate the corresponding slivers (or the OEC could do this itself) using the CONFINE portal API.
Once the preprocessing has completed and the nodes are allocated, the normal OMF processing of the experiment begins and the experiment is executed.
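As a rough sketch of such a preprocessing pass, assuming a hypothetical defVM directive for the node definitions (the directive name and its argument syntax are our own invention, not existing OEDL syntax):

```python
import re

# Hypothetical OEDL extension: the experimenter declares the required
# nodes at the top of the experiment description, e.g.
#   defVM('node1', nics: 2, image: 'orc.img', slice: 'myslice')
DEF_VM = re.compile(
    r"defVM\('(?P<name>[^']+)',\s*nics:\s*(?P<nics>\d+),"
    r"\s*image:\s*'(?P<image>[^']+)',\s*slice:\s*'(?P<slice>[^']+)'\)"
)


def extract_node_definitions(ed_text):
    """Preprocessing pass: collect every defVM declaration so that the
    OAM (or the OEC itself) can allocate matching slivers via the
    CONFINE portal API before the experiment proper starts."""
    return [m.groupdict() for m in DEF_VM.finditer(ed_text)]
```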
As mentioned above, this is the option we prefer. We have already implemented the basics of this design, but the implementation is not yet stable.
A third option also uses only one OMF experiment description, but applies another form of preprocessing: the experimenter does not have to define the required nodes explicitly; instead, the preprocessing derives them from the experiment description itself. For example, the preprocessor could check how many interfaces of a node are used to determine how many NICs are necessary on that particular node.
As in option two, the OAM allocates the nodes (via the CONFINE portal API), and after the allocation the OMF experiment starts on the nodes.
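The NIC-derivation step of this third option could look roughly as follows. We assume, purely for illustration, that the experiment description references interfaces in an OEDL-like node.net.interface form; the real syntax the preprocessor would have to parse may differ:

```python
import re
from collections import defaultdict

# Assumed convention: any token of the form <node>.net.<iface>
# (e.g. node1.net.w0) counts as one interface in use on that node.
IFACE = re.compile(r"\b(?P<node>\w+)\.net\.(?P<iface>\w+)\b")


def derive_nic_requirements(ed_text):
    """Infer how many NICs each node needs by counting the distinct
    interfaces the experiment description actually uses, so the OAM
    can allocate suitably equipped slivers."""
    nics = defaultdict(set)
    for m in IFACE.finditer(ed_text):
        nics[m.group("node")].add(m.group("iface"))
    return {node: len(ifaces) for node, ifaces in nics.items()}
```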
Figure 3 illustrates this third option: