The CONFINE Controller UI is responsible for managing the CONFINE testbeds. The VCT container (VCT-C) version also includes some extra features to create and manage LXC virtual nodes. As a first step, you can log in using the default user name and password: vct:vct.
As you can see, the vct user holds administration privileges inside the deployed controller, giving developers the opportunity to test and monitor anything in their virtual environment. However, we are going to focus on the management of slices and slivers.
Although node creation is not our responsibility as researchers, and we will not need to do it on a real deployment, it is a necessary step when using the VCT container. First, open the research nodes tab by clicking on the top menu
Nodes. You will find the (empty) list of nodes previously created.
On this list you will find all the nodes that your group owns or has permission to modify. To add a new node to the list, we first need to register it. We can do this by clicking on the
Add node button. This brings up the registration form.
The only mandatory field is the name, so let's fill it in and click on the
Save button. If everything goes well, we will be redirected back to the nodes list.
If you feel confident, you can create multiple nodes quickly just by clicking on the
Save and add another button instead.
The list now shows our node and some useful data about it, like its unique ID, architecture, the number of network interfaces it has, and the group it belongs to. There are three fields that are particularly interesting for us:
Set state: gives information about the state that we (or someone else) have set the device to. By default, all nodes are put in the DEBUG state.
Slivers: shows the number of slivers currently using the node.
State link: gives us information about the real state of the node. By default, all nodes report NO DATA until they are properly configured.
If you need more information about the different states and their meaning, you should take a look at the wiki page.
Nodes can be modified by clicking on their name and thus accessing their properties. As we are in a virtual scenario, we can see the
VM Management button. Click on it.
Two messages will appear saying that we have not configured any certificate for this node. This is not necessary, because the internal VCT-C will manage the certificates for us. The same applies to the tinc public key.
Each time we create a new virtual node, we need to assign it a built firmware image. Each firmware image is unique because it contains the configuration files and SSH keys needed to access and manage that particular node. To generate and download the proper firmware, click on the
Build firmware button.
The next form allows us to configure the connectivity options. For now, keep everything activated and click on the
Build firmware button. It will start the generation process.
The build process can take from several seconds to several minutes, depending on your network bandwidth and the computer's performance. As soon as the process is done, you will see a new screen showing that a new firmware is available. This firmware is a copy of the real node firmware with all the tinc and management SSH keys ready to be used. If any of the operations during the firmware generation fails, check the permissions of your VCT-C to make sure it can mount new file systems.
With the firmware generated, it is time to click the
create the VM button, which creates a KVM virtual machine that will run inside the VCT-C and be managed by our controller. As we used the real firmware images, this virtual node behaves the same as a real node, with the extra advantage that it can be managed locally.
Notice that most of these operations are, in fact, calls to VCT commands and could be run directly from the terminal. However, it is easier, and recommended, to use our graphical implementation. If you run some of the commands directly, such as vct_node_info, you might get warning or error messages.
As an example, early implementations of the VCT tried to read a file named vct.nodes.local to learn the virtual network addresses, printing a warning if the file does not exist. The file is no longer necessary, but the warning message is still there.
The final step is to click the
start the VM button and go back to the nodes information screen.
When a node is contacted by the controller for the first time, its state changes to SAFE. This means that the node is running a complete and valid configuration, but it is not yet available for hosting slivers.
In July 2013 we decided to use the SAFE state to differentiate nodes that are ready but not yet tested. Please set to PRODUCTION only those nodes that you have verified are working as they should.
Changing the node state to PRODUCTION has to be done manually from the node configuration screen, which can be accessed by clicking on the node name
Virtual Node 1. Change the
Set State field from SAFE to PRODUCTION and click the Save button.
Now that our node has started, we can retrieve extra information from the tinc client section. There we will find the node's public tinc IPv6 address and the SSH public key associated with this interface.
Additionally, if we go back to the node configuration screen and click on the IPv6 address field (shown in the figure below), we can view historical ping information and see whether the node is answering the ping requests from the controller (green) or not (red).
The addresses of nodes and slivers may not be shown properly with some combinations of node image and controller version. In that case, the address can be found in the JSON data dump on the state page of the node/sliver.
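If you have to fall back to the JSON dump, the address can be pulled out with a few lines of Python. This is only a sketch: the exact structure of the dump varies between controller versions, and the mgmt_net/addr keys and the sample address used below are illustrative assumptions, not the guaranteed format.

```python
import json

# Hypothetical sketch: extract the management address from a node's JSON
# state dump. The "mgmt_net"/"addr" keys are assumed, not guaranteed;
# inspect your own dump to find where the address actually lives.
def find_mgmt_address(dump_text):
    data = json.loads(dump_text)
    return data.get("mgmt_net", {}).get("addr")

# Example with a made-up dump:
sample = '{"id": 1, "mgmt_net": {"addr": "fdf5:5351:1dfd:2::2"}}'
print(find_mgmt_address(sample))  # fdf5:5351:1dfd:2::2
```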
In the case of the VCT-C this information is not very useful, because the server and the clients run on the same machine. In a real deployment, researchers can use it to pick nodes based on their reachability.
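The green/red indicator essentially reflects whether periodic pings from the controller are being answered. A minimal sketch of such a check, assuming a ping6 binary is available on the controller host (modern systems may use ping -6 instead), could look like this:

```python
import subprocess

# Sketch of the reachability check (assumed behaviour, not the controller's
# actual implementation): ping exits 0 when at least one reply was received.
def classify(returncode):
    return "green" if returncode == 0 else "red"

def check_node(addr, count=3):
    # "ping6" is an assumption; some systems ship "ping -6" instead.
    proc = subprocess.run(["ping6", "-c", str(count), addr],
                          capture_output=True)
    return classify(proc.returncode)

# The classification itself can be exercised without a live node:
print(classify(0))  # green
print(classify(1))  # red
```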
Repeat the steps above to create three more nodes, named “Virtual node 2”, “Virtual node 3” and “Virtual node 4”.