arch:services [2016/12/21 16:55]
ivilata [Integration with Cloudy] Only accept a single node VM, other changes after meeting with Agustí and Roger.
arch:services [2016/12/21 17:09] (current)
ivilata Move Cloudy section to soft:cloudy.
  
To get some context on the discussion, we recommend reading [[services:start|Services in CONFINE]], and especially section [[services:challenges|Challenges for Service Developers]].
===== Integration with Cloudy =====

[[http://cloudy.community/|Cloudy]] is a distribution based on Debian GNU/Linux, aimed at end users, that fosters the transition to and adoption of the community network cloud environment.  It provides a FLOSS platform to discover and provide services in a community network in a user-friendly and decentralized fashion.

A simple approach to increase the adoption of both Cloudy and Community-Lab among not necessarily tech-savvy community members is to allow **Community-Lab nodes as services inside of Cloudy hosts**, which retains the simpler, standard maintenance of the Debian-based Cloudy host instead of the more fragile OpenWrt-based CONFINE Node System (CNS).

An easy way of achieving this integration is running the Community-Lab node as a //full virtual machine//.  This allows the owner of the Cloudy host to decide //how many resources to share// with other users of Community-Lab by allocating them to the VM.  However, Cloudy machines are not expected to be powerful (for instance, the Cloudy-equipped Minix box given away by the Clommunity project only has 2 GiB of RAM), so preallocating resources to a full VM may place a big burden on the host.  Although techniques like memory ballooning may alleviate the situation, //greater integration// may be desirable.  In particular, each of the following steps provides more integration with the host system:

  - Complete virtualization (QEMU/KVM, VirtualBox…): this should work with no changes to the CNS or the host (beyond maybe network bridges).
  - Paravirtualization (QEMU/KVM or VirtualBox with VirtIO drivers): this provides greater efficiency, but it requires VirtIO support in the CNS.  Versions newer than 2016-11-16 already support this, so this is the current situation.
  - Container-based isolation (LXC, Docker…): nested containers for slivers should be supported, and some incompatibilities with the OpenWrt system may arise.  Some tests have already been carried out, and a basic LXC sliver inside an LXC node seems to work.  Proper isolation of resources and security may be more complex.
  - Native execution of the CNS: this implies porting the whole CNS (based on Lua) to Debian.  Special care should be taken in adapting the network interfaces of the [[arch:node]].  Docker-based slivers, if supported, may be managed in a more integrated way.

The ''vnode/confine-vm-new'' script in [[https://redmine.confine-project.eu/projects/confine-utils/|confine-utils]] can be used to create a QEMU/KVM VM (via libvirt, with hardware virtualization) or a VirtualBox VM (without hardware virtualization) from a custom node image downloaded from a Controller's image generator, while setting its resource limits.  Node VMs created in this fashion can also be upgraded in a manner that does not require interaction with the CNS itself.  For more information, please check the ''README.md'' file that accompanies the aforementioned script.

Because of the expected scarcity of hardware resources, it currently makes no sense to run more than a single node VM in a Cloudy box.  So, a new service may be added to Cloudy that allows creating and running a single CONFINE node VM (relying on ''vnode'' scripts for particular tasks).  The following sections describe the different use cases to be supported, taking into account that Cloudy boxes are usually accessed remotely rather than physically, unlike desktop computers.

==== Creating a node VM ====

An interface control allows the user to create a new VM.  When activated, the user is asked for the parameters of the new VM:

  * the node image file to use for the system disk
  * the VM name
  * optionally, a description of the VM
  * the kind of VM (some options may not be selectable depending on what is available on the system)
  * VM resources (with reasonable defaults, mainly RAM and //home// image size; CPUs may be fixed to 1 and the host network interface may be autoselected or fixed)
  * resource limits (with reasonable defaults, mainly CPU% and network bandwidth)

When the settings are accepted, the system uses ''vnode'' scripts to create the node VM with the given parameters.

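The parameter set above can be sketched as a small record validated before the values are handed over to the creation scripts.  This is a minimal sketch: the field names, defaults, and accepted VM kinds are illustrative assumptions, not taken from the actual Cloudy service.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NodeVMSettings:
    """Parameters asked from the user when creating the node VM.

    Field names and defaults are hypothetical; a real service would
    take them from its own configuration.
    """
    image_file: str                 # node image for the system disk
    name: str
    description: Optional[str] = None
    kind: str = "kvm"               # e.g. "kvm" or "virtualbox"
    ram_mib: int = 512              # reasonable default for a small host
    home_mib: int = 1024            # size of the home disk image
    cpus: int = 1                   # may be fixed to 1
    cpu_pct: int = 50               # resource limit: CPU percentage
    net_mbps: int = 10              # resource limit: network bandwidth

    def validate(self) -> None:
        """Reject obviously wrong values before calling the vnode scripts."""
        if not self.name:
            raise ValueError("VM name must not be empty")
        if self.kind not in ("kvm", "virtualbox"):
            raise ValueError("unsupported VM kind: %s" % self.kind)
        if self.ram_mib <= 0 or self.home_mib <= 0:
            raise ValueError("RAM and home image sizes must be positive")
        if not (0 < self.cpu_pct <= 100):
            raise ValueError("CPU% must be in (0, 100]")

settings = NodeVMSettings(image_file="confine-node.img", name="confine-node")
settings.validate()  # the defaults above pass validation
```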
==== Querying the status of the node VM ====

The user accesses a screen (the node VM screen) showing information about the existing node VM:

  * name
  * description, if any
  * state (at least //stopped// or //started//, if possible also //stopping// and //starting//)
  * uptime (or the date of the last state change)
  * kind
  * information on resources: RAM, CPUs, host NIC, disk paths and sizes
  * information on resource limits: CPU%, network bandwidth, I/O bandwidth

The screen may also include controls to act on the VM.

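For a libvirt-backed VM, part of this information could be scraped from the output of ''virsh dominfo''.  A minimal parsing sketch follows; it only assumes the usual ''Key: value'' line layout (the sample values are made up), which should be checked against the libvirt version in use.

```python
def parse_dominfo(text: str) -> dict:
    """Parse key/value lines in the style of ``virsh dominfo`` into a dict.

    The exact keys depend on the libvirt version; this only assumes
    one ``Key:   value`` pair per line.
    """
    info = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        info[key.strip()] = value.strip()
    return info

# Sample output in the style of ``virsh dominfo`` (values are made up):
sample = """\
Name:           confine-node
State:          running
CPU(s):         1
Max memory:     524288 KiB
"""
status = parse_dominfo(sample)
```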
==== Starting and stopping the node VM ====

The node VM screen contains controls to start and stop the VM.  Feedback from the virtualization system should be used to disable meaningless or dangerous actions (like trying to start the VM twice in a row).

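The rule that meaningless actions get disabled can be sketched as a small state table.  The state names follow the ones listed above; the table itself is an assumption about how the controls should behave.

```python
# Which control actions make sense in each VM state.  Transitional
# states ("starting", "stopping") disable both controls so the user
# cannot race the virtualization system.
ALLOWED_ACTIONS = {
    "stopped":  {"start"},
    "started":  {"stop"},
    "starting": set(),
    "stopping": set(),
}

def action_enabled(state: str, action: str) -> bool:
    """Tell whether a control should be enabled in the current VM state."""
    return action in ALLOWED_ACTIONS.get(state, set())
```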
==== Backing up the VM ====

A control in the node VM screen enables the user to create a backup of the VM's disk images, as long as the VM is stopped.  The backups may be offered in some kind of compressed archive, and some techniques ([[http://www.fsarchiver.org/Main_Page|FSArchiver]], [[https://frippery.org/uml/|zerofree]]) may be used to avoid copying unused data blocks.  A trivial approach would be to convert the disk paths in the node VM screen into links that allow downloading the image, although this may be very slow for big images like //home//.

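The compressed-archive option can be sketched with the Python standard library alone.  This is a minimal sketch: real images would first be shrunk with the tools mentioned above, and the stopped check would query the virtualization system rather than take the state as a parameter.

```python
import tarfile
from pathlib import Path

def backup_vm(disk_images, dest_archive, vm_state):
    """Pack the VM's disk images into a gzip-compressed tar archive.

    Refuses to run unless the VM is stopped, since copying the disks
    of a running VM would yield an inconsistent backup.
    """
    if vm_state != "stopped":
        raise RuntimeError("the VM must be stopped before backing it up")
    with tarfile.open(dest_archive, "w:gz") as tar:
        for image in disk_images:
            tar.add(image, arcname=Path(image).name)
    return dest_archive
```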
==== Modifying the node VM ====

The node VM screen allows the user to modify the VM.  Depending on the virtualization system, the VM may need to be stopped before the changes are applied.

At least the VM's resource limits (CPU%, network and I/O bandwidth) should be editable.  If possible, other parameters like the amount of RAM or the size of the //home// image should be editable as well.

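As an example of how the editable CPU% limit could map onto a hypervisor setting, libvirt's ''cputune'' element expresses the cap as a quota over a scheduling period in microseconds.  The arithmetic can be sketched as below; the mapping itself is an assumption to be checked against the libvirt documentation.

```python
def cpu_pct_to_cputune(cpu_pct: int, period_us: int = 100000) -> dict:
    """Convert a CPU percentage into a (period, quota) pair in microseconds.

    With a 100 ms period, a 50% limit yields a 50 ms quota: the vCPU
    may run at most half of every scheduling period.
    """
    if not (0 < cpu_pct <= 100):
        raise ValueError("CPU% must be in (0, 100]")
    return {"period": period_us, "quota": period_us * cpu_pct // 100}
```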
==== Connecting to the node VM ====

A control in the node VM screen offers the user access to the VM console via its console socket.  This may be useful in case the node VM loses its connection to the network.

Direct access via the web could be possible with [[http://www.dest-unreach.org/socat/|socat]] and [[http://anyterm.org/|Anyterm]] or similar.  If this is too difficult to implement, at least some custom instructions should be shown for the user to connect to the node (for instance with SSH+socat+picocom).

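The fallback of showing custom instructions can be sketched as a template filled in with the VM's actual console socket path.  The command lines below are illustrative only (socat bridging the UNIX socket to a local PTY, picocom attaching to that PTY); the exact invocations and paths should be verified on the target system.

```python
def console_instructions(host: str, socket_path: str,
                         pty: str = "/tmp/confine-console") -> str:
    """Build the instructions shown to the user for manual console access.

    Illustrative commands: socat bridges the VM's UNIX console socket
    to a local PTY, and picocom attaches a terminal to that PTY.
    """
    return (
        "# on the Cloudy host (e.g. after 'ssh %s'):\n"
        "socat UNIX-CONNECT:%s PTY,link=%s &\n"
        "picocom %s\n" % (host, socket_path, pty, pty)
    )
```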
==== Upgrading the node VM ====

A control in the node VM screen enables the user to upgrade a stopped VM with a new system image provided as a file, either a custom or a generic node image.

The file should replace the VM's //system// image.

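The replacement step can be sketched as a swap that keeps the old //system// image around so a failed upgrade can be rolled back.  This is a minimal sketch; the path layout and the ''.old'' backup convention are hypothetical.

```python
import os
import shutil

def upgrade_system_image(new_image: str, system_image: str, vm_state: str) -> str:
    """Replace the VM's system image with a new one, keeping the old copy.

    The VM must be stopped; the previous image is preserved with an
    ``.old`` suffix so a failed upgrade can be rolled back by hand.
    """
    if vm_state != "stopped":
        raise RuntimeError("the VM must be stopped before upgrading")
    backup = system_image + ".old"
    if os.path.exists(system_image):
        os.replace(system_image, backup)  # atomic rename on the same filesystem
    shutil.copyfile(new_image, system_image)
    return backup
```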
==== Removing the node VM ====

A control in the node VM screen allows the user to delete the VM as long as it is stopped, with an explicit confirmation from the user.

As a result, all files and directories related to the VM should be deleted, even if the virtualization system left some behind.
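The cleanup can be sketched as removing the VM's whole directory after the state and confirmation checks, so that any stray files the virtualization system left behind go away too.  A minimal sketch; the assumption that each VM lives under a single directory is hypothetical.

```python
import shutil
from pathlib import Path

def remove_vm(vm_dir: str, vm_state: str, confirmed: bool) -> None:
    """Delete everything under the VM's directory.

    Removing the whole tree (instead of only the files the service
    knows about) also sweeps up anything the virtualization system
    left behind.
    """
    if vm_state != "stopped":
        raise RuntimeError("the VM must be stopped before removal")
    if not confirmed:
        raise RuntimeError("removal requires explicit confirmation")
    shutil.rmtree(Path(vm_dir))
```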
  
===== Architectural changes =====
arch/services.txt · Last modified: 2016/12/21 17:09 by ivilata