Local slice deployment

Preamble

In this tutorial we assume that you have a VCT configured and installed as described in the first part of this tutorial. We also assume that all the following commands are executed from the ~/confine-dist/utils/vct directory.
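
So, before starting:

~$ cd ~/confine-dist/utils/vct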

After following this tutorial we will have three CONFINE virtual nodes configured, each with a sliver allocated and deployed inside it. Each sliver will also contain an innocuous debian-hello-world experiment, just for testing purposes.

Creating and customizing virtual nodes

A good way to start an experiment on a CONFINE testbed is to deploy it inside VCT first, test that it works properly, and then create both the slice and the experiment templates. Finally, deploy them on real nodes. Let us create our network architecture, which will look like this:

          node fd01                  node fd02                  node fd03
   +--------------------+     +--------------------+     +--------------------+
   |                    |     |                    |     |                    |
   +   +------------+   +     +   +------------+   +     +   +------------+   +
   |   |            |   |     |   |            |   |     |   |            |   |
   +   +            +   +     +   +            +   +     +   +            +   +
   |   |            |   |     |   |            |   |     |   |            |   |
   +   +            +   +     +   +            +   +     +   +            +   +
   |   |            |   |     |   |            |   |     |   |            |   |
   +   +            +   +     +   +            +   +     +   +            +   +
   |   |            |   |     |   |            |   |     |   |            |   |
   +   +            +   +     +   +            +   +     +   +            +   +
   |   |            |   |     |   |            |   |     |   |            |   |
   +   +------------+   +     +   +------------+   +     +   +------------+   +
   |  0123456789ab_fd01 |     |  0123456789ab_fd02 |     |  0123456789ab_fd03 |
   +      sliver        +     +      sliver        +     +       sliver       +
   |                    |     |                    |     |                    |
   +--------------------+     +--------------------+     +--------------------+            +----------------+
             |                         |                           |                       |                |
             +-------------------------+---------------------------+-----------------------+ localhost host +
                                                                                           |                |
                                                                                           +----------------+

First, we will create a set of three nodes, named fd01 to fd03, where we will deploy our experiment. The ./vct_node_create script takes as an argument a set of devices, each identified by a 4-digit hexadecimal number from 0000 to ffff. The next command will therefore create nodes fd01, fd02 and fd03.

~$ ./vct_node_create fd01-fd03

You can get the status information of each node simply by executing the ./vct_node_info command. Let us execute it and take a look at the information that it provides.
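
~$ ./vct_node_info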

For each node, the command tells us what its state is, what its rescue and management IPv6 addresses are, and whether they are accessible or not. Note that, as all nodes are down, they do not answer ping commands, so no RTT value is shown.

VCT stores the KVM images under its VCT_VIRT_DIR directory, so you can access and modify them directly using other tools like qemu. In any case, it is recommended to do so before the next step: the customization process.
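
As a hypothetical illustration (assuming $VCT_VIRT_DIR is set in your shell to the configured directory, and using a made-up image file name; check your VCT_VIRT_DIR for the actual one), you could inspect a node image with qemu's tools:

~$ qemu-img info $VCT_VIRT_DIR/vcrd-fd01.img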

Customization needs to access the nodes' IPv6 rescue interfaces, so first we have to start them. After a short time we can see that the nodes are running.

~$ ./vct_node_start fd01-fd03

Customization is the process of making the research device accessible to the CONFINE testbed. On real hardware nodes, it consists of communicating with the CONFINE server, declaring the node's availability and announcing its capabilities. The CONFINE server then registers the node (verifying its identity with its community network) and upgrades its credentials.

The next VCT command uses a fake server to perform similar actions, using a local static SSH key to contact and set up the nodes. As a next step, VCT creates a fake sliver and allocates/deploys it on each node, just to test that everything is working properly.

~$ ./vct_node_customize fd01-fd03

Of course, the nodes are now fully operational and part of a (virtual, in our case) CONFINE testbed. All of their interfaces are up and we can contact them using standard SSH commands. But there are easier ways to do it: VCT commands.

~$ ./vct_node_ssh fd01
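
For reference, this is roughly what you could also do by hand with plain SSH, using the node's management IPv6 address as reported by ./vct_node_info (placeholder address; we assume the root user here):

~$ ssh root@<node-management-ipv6-address>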

Note that performing this action on real devices requires special permissions that are typically not available to researchers. However, this option will later allow us to create our own sliver templates.

From within the node, you can work as if it were an official OpenWrt distribution (with root permissions). If you check the network configuration, it should not surprise you that there are some extra virtualized network interfaces, which allow the Linux containers/slivers to communicate with each other and with the testbed server.
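
For instance, you can list the interfaces from the node's shell (a minimal sketch; the prompt is illustrative and the exact interface names depend on your configuration):

root@node:~# ifconfig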

Slice allocation and deployment

Once the nodes are configured, it is time to set up the experiment. We will start by allocating a new slice named 0123456789ab on all nodes, with a Debian operating system inside. As you can see, the command takes three arguments:

  1. the unique identifier for the slice, coded as a 12-digit lower-case hexadecimal number
  2. the set of (already customized) nodes where the slivers will be allocated
  3. an optional OS: OpenWrt or Debian (by default, VCT allocates OpenWrt slices)

:!: Notice that on VCT slice identifiers are provided by the researcher, while on a real CONFINE deployment the ID value is provided by the testbed server.

~$ ./vct_sliver_allocate 0123456789ab fd01-fd03 debian
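
Since the OS argument is optional, omitting it would allocate the default OpenWrt slivers instead (shown only for comparison; in this tutorial we use the Debian slice):

~$ ./vct_sliver_allocate 0123456789ab fd01-fd03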

The allocation process creates a new slice. Then it allocates the required resources (previously defined in the vct.conf file) on each node, performing a sliver allocation. Basically, this means that the sliver's SSH public key and the slice's UCI configuration file have been sent to each research device. That includes, of course, the URLs of the operating system and experiment templates.

You can get the status information about slices and slivers simply by executing the ./vct_slice_info command.
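
~$ ./vct_slice_info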

The printed result shows, line by line, all the necessary information about the slivers' status. Each sliver ID is composed of the slice ID + _ + node ID (e.g. 0123456789ab_fd01), so it is easy to identify where each sliver is deployed.

Finally, it is important to note that the slice and sliver status information does not match. This is because VCT is waiting for a testbed server update confirming that all slivers are properly allocated before considering the slice allocated.

As VCT is using a fake server, the confirmation message will never arrive, so we have to update the attributes manually with the vct_slice_attributes command. We will use this command after each sliver operation.

~$ ./vct_slice_attributes update 0123456789ab

The magic word all can be used instead of the slice ID to update all available slices.
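
For example:

~$ ./vct_slice_attributes update all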

Once the sliver and slice statuses have been synchronized, we can move on to the next step: downloading and deploying the slivers' template on each node and loading the experiment's file system. The parameters should be familiar by now: the slice ID and the set of nodes.

~$ ./vct_sliver_deploy 0123456789ab fd01-fd03
 
.
.
.
	option if03_name 'iso1'
	option if03_mac '54:c0:fd:03:01:03'
	option if03_parent 'eth2'
	option state 'deployed'
 
~$ ./vct_slice_attributes update 0123456789ab

If you now execute the ./vct_slice_info command, you will see that both the slice and the slivers on all nodes are shown as deployed for the hello-debian-experiment. No RTT value is shown because the slivers are not started yet. Let us fix that now.

~$ ./vct_sliver_start 0123456789ab fd01-fd03
 
.
.
.
	option if03_mac '54:c0:fd:03:01:03'
	option if03_parent 'eth2'
	option state 'started'
 
~$ ./vct_slice_attributes update 0123456789ab

Now the whole slice is accessible through both of the slivers' network interfaces: IPv6 and IPv4.
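
As a quick sanity check (a sketch; replace the placeholder with an actual sliver IPv6 address, e.g. one reported by ./vct_slice_info), you could ping a sliver from the host:

~$ ping6 -c 3 <sliver-ipv6-address>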

Finally, we can use the standard SSH tool to access them with the proper credentials (username: root / password: root). On real hardware slivers, the user's private SSH key, username and password will all be required to get access.

Of course, we also provide an SSH tool to simplify this process.

~$ ./vct_sliver_ssh 0123456789ab fd01

To verify that we have accessed the correct sliver, we can check the machine's hostname, which must match the sliver ID.
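
A minimal check from inside the sliver (the expected output follows from the sliver ID scheme described earlier; the prompt is illustrative):

~# hostname
0123456789ab_fd01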

Now we have the scenario ready to start configuring our BitTorrent local experiment!
