tinc

Introduction

This page describes the setup of an IPv6 overlay using tinc as a backend for a testbed management network with the IPv6 prefix X:Y:Z::/48. Variables like <SERVER/tinc/addresses/_/port> expand to their value in the database (e.g. 655).

This particular setup is based on this article, which contains a full explanation of a similar setup.

In this setup tinc hosts should only ConnectTo the testbed servers acting as management network gateways. These components know the name, subnets and public key of every tinc host in the testbed that connects to them, and they use the StrictSubnets option. In this way, if a tinc host announces a subnet that has not been configured for it in the testbed registry, they can detect a line like the following one in tinc's log file

Ignoring unauthorized ADD_SUBNET from <HOST> (<IP> port <PORT>): <SUBNET>

and report the incident to testbed admins.
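Such incidents can be spotted with a simple log filter, for example (a sketch; /var/log/syslog is an assumed path, since the actual destination depends on how tincd logging is set up on the gateway):

```shell
# Report subnet announcements rejected by StrictSubnets.
# NOTE: /var/log/syslog is an assumption; adjust to your logging setup.
grep 'Ignoring unauthorized ADD_SUBNET' /var/log/syslog
```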

Testbed node and host configuration

Note: This is only a sample configuration for reference purposes. For a testbed node, the node system already includes tools that automate the configuration of tinc. For a testbed host, you can get customized instructions via the Help button on its page in the controller's web interface.

The testbed node configuration uses <TINC_NAME> = node_<NODE/id> and (according to the addressing) <MGMT_SUBNET> = X:Y:Z:N:0:0:0:0/64 and <MGMT_ADDR> = X:Y:Z:N::2, with N = <NODE/id> in hexadecimal. It assumes an OpenWrt-like machine with the packages tinc and ip installed; if they are missing, run:

# opkg install tinc ip
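As a worked example of the addressing, N is just the node id printed in hexadecimal, so the management subnet and address can be derived like this (a sketch with a made-up node id of 42; X:Y:Z stands for the real prefix):

```shell
# Derive the node's management subnet and address from its id (42 -> N = 2a).
node_id=42
printf 'X:Y:Z:%x::/64\n' "$node_id"   # <MGMT_SUBNET>
printf 'X:Y:Z:%x::2\n' "$node_id"     # <MGMT_ADDR>
```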

The testbed host configuration uses <TINC_NAME> = host_<HOST/id> and (according to the addressing) <MGMT_SUBNET> = X:Y:Z:0:2000:H1:H2:H3/128 and <MGMT_ADDR> = X:Y:Z:0:2000:H1:H2:H3, with (((H1<<16)+H2)<<16)+H3 = <HOST/id> and Hn in hexadecimal. It assumes a Debian-like machine with the packages tinc and iproute installed; if they are missing, run:

# apt-get install tinc iproute
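As a worked example of the host addressing, H1:H2:H3 is simply the host id split into three 16-bit chunks, which shell arithmetic can compute (a sketch with a made-up host id of 70000; assumes 64-bit shell arithmetic, as in bash or dash on 64-bit systems):

```shell
# Split the host id into the 16-bit hex groups H1, H2, H3 (70000 -> 0:1:1170).
host_id=70000
h1=$(( (host_id >> 32) & 0xffff ))
h2=$(( (host_id >> 16) & 0xffff ))
h3=$((  host_id        & 0xffff ))
printf 'X:Y:Z:0:2000:%x:%x:%x/128\n' "$h1" "$h2" "$h3"   # <MGMT_SUBNET>
```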

First create a working tinc configuration:

/etc/tinc/confine/tinc.conf

The daemon configuration file.

Name = <TINC_NAME>
# The list of management network gateways this machine connects to.
ConnectTo = server_1
ConnectTo = server_<SERVER/id>
...
/etc/tinc/confine/hosts/<TINC_NAME>

The host configuration file for this machine.

Subnet = <MGMT_SUBNET>

Run the following command to create the machine's RSA key pair (by default it appends the generated public key to the end of the previous file):

# tincd -n confine -K

(See below how to reuse SSH server keys for tinc in a testbed node.)

The resulting host configuration file should be provided to the components of the testbed in the machine's list of ConnectTo entries. Conversely, the host configuration files of testbed components in that list should be stored in the machine's /etc/tinc/confine/hosts directory.

Now create the following network configuration scripts:

/etc/tinc/confine/tinc-up

Configure the management network on top of the tinc overlay.

#!/bin/sh
ip -6 link set "$INTERFACE" up mtu 1400
ip -6 addr add <MGMT_ADDR>/48 dev "$INTERFACE"
/etc/tinc/confine/tinc-down

Deconfigure the management network.

#!/bin/sh
ip -6 addr del <MGMT_ADDR>/48 dev "$INTERFACE"
ip -6 link set "$INTERFACE" down

And make the scripts executable:

# chmod a+rx /etc/tinc/confine/tinc-{up,down}

Enable the network and start the tinc daemon for this machine. In the testbed node:

# tincd -n confine

In the testbed host:

# echo confine >> /etc/tinc/nets.boot
# invoke-rc.d tinc restart

Finally, open the configured tinc ports in the firewall to allow incoming connections and traffic (since other tinc hosts may try to directly communicate with this one):

# # with <NODE|HOST/tinc/port> as <PORT>:
# iptables -A INPUT -p udp -m udp --dport <PORT> -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport <PORT> -j ACCEPT
# ip6tables -A INPUT -p udp -m udp --dport <PORT> -j ACCEPT
# ip6tables -A INPUT -p tcp -m tcp --dport <PORT> -j ACCEPT

The connection can be tested by running ping6 X:Y:Z::2 on the machine.

Testbed nodes (but not hosts) forward IPv6 traffic between their local network and the rest of the CONFINE IPv6 management network, so forwarding must be enabled both in the node's firewall and in its kernel:

# ip6tables -P FORWARD ACCEPT
# sysctl -w net.ipv6.conf.all.forwarding=1

Mac OS X testbed hosts

You may need to install support for TUN/TAP on Mac OS X, although it is already included if you use the Tunnelblick VPN client (if so, start tinc before Tunnelblick).
You can follow the instructions for installing and configuring tinc, but if you do not want to compile it yourself you should use this tincd 1.0.23 binary instead of the tincd 1.0.9 binary referenced there, which does not work well. The configuration files provided by the controller via the Help button on your host's page will have to be adjusted for Mac OS X. In particular, we will use the $HOME/Library/tinc directory instead of /etc/tinc for tinc configuration and runtime files. Also, the network configuration scripts use different commands:

$HOME/Library/tinc/confine/tinc-up

Configure the management network on top of the tinc overlay.

#!/bin/sh
ifconfig "$INTERFACE" inet6 <MGMT_ADDR> prefixlen 48
$HOME/Library/tinc/confine/tinc-down

Deconfigure the management network.

#!/bin/sh
ifconfig "$INTERFACE" down

Start the tinc daemon for this machine by running:

$ sudo tincd -c "$HOME/Library/tinc/confine" --pidfile="$HOME/Library/tinc/confine/pid"

Optional setup

Reusing SSH server keys

To reduce the number of configuration items, a testbed node's tinc RSA key pair can be the same one used by its SSH server. Converting the RSA keys used by Dropbear to the PEM format used by tinc requires the packages dropbearconvert and openssh-keygen; if they are missing, run:

# opkg install dropbearconvert openssh-keygen

To get the equivalent private key file for tinc and append the public key to the end of its host configuration file, run:

# dropbearconvert dropbear openssh \
  /etc/dropbear/dropbear_rsa_host_key /etc/tinc/confine/rsa_key.priv
# ssh-keygen -e -m PEM -f /etc/tinc/confine/rsa_key.priv \
  >> /etc/tinc/confine/hosts/node_<NODE/id>

Server as a default IPv6 gateway

Note: This is only included as an example of a potential use of the IPv6 overlay, and it is not natively supported by the CONFINE architecture.

In this case the management network uses a public prefix and some management network gateway must announce a route to ::/0, so its host configuration file needs an additional Subnet line:

Subnet = 0:0:0:0:0:0:0:0/0

And it must be configured to forward IPv6 traffic:

# ip6tables -P FORWARD ACCEPT
# sysctl -w net.ipv6.conf.all.forwarding=1

A testbed node or host can then use its tinc interface as a default route: just add this command at the end of /etc/tinc/confine/tinc-up:

ip -6 route add default dev "$INTERFACE"

And the following one at the beginning of /etc/tinc/confine/tinc-down:

ip -6 route del default dev "$INTERFACE"

Local discovery

If there are several tinc hosts in the same local network (e.g. for debugging purposes) they can automatically discover each other by adding the following option to /etc/tinc/confine/tinc.conf:

LocalDiscovery = yes

This requires that all hosts on the local network use the same port and have it open in the firewall (as done above).

Tinc gateway configuration

Note: This is only a sample configuration for reference purposes. The controller software includes tools to ease the configuration of tinc.

The testbed server configuration assumes a Debian-like machine with the packages tinc and iproute installed; if they are missing, run:

# apt-get install tinc iproute

First create a working tinc configuration:

/etc/tinc/confine/tinc.conf

The daemon configuration file.

BindToAddress = <ADDR/addr> <ADDR/port>  # for <ADDR> in <SERVER/tinc/addresses>
Name = server_<SERVER/id>
StrictSubnets = yes
/etc/tinc/confine/hosts/server_<SERVER/id>

The host configuration file for this machine.

Address = <ADDR/addr> <ADDR/port>  # for <ADDR> in <SERVER/tinc/addresses>
Subnet = X:Y:Z:0:0:0:0:<SERVER/id>/128

Run the following command to create the machine's RSA key pair (by default it appends the generated public key to the end of the previous file):

# tincd -n confine -K

The resulting host configuration file should be provided to the components of the testbed that ConnectTo this server. Conversely, the host configuration files of testbed components that ConnectTo this server should be stored in the server's /etc/tinc/confine/hosts directory.

Now create the following network configuration scripts:

/etc/tinc/confine/tinc-up

Configure the management network on top of the tinc overlay.

#!/bin/sh
ip -6 link set "$INTERFACE" up mtu 1400
ip -6 addr add X:Y:Z::2/48 dev "$INTERFACE"
/etc/tinc/confine/tinc-down

Deconfigure the management network.

#!/bin/sh
ip -6 addr del X:Y:Z::2/48 dev "$INTERFACE"
ip -6 link set "$INTERFACE" down

And make the scripts executable:

# chmod a+rx /etc/tinc/confine/tinc-{up,down}

Enable the network and start the tinc daemon for this machine by running:

# echo confine >> /etc/tinc/nets.boot
# invoke-rc.d tinc restart

Finally, open the configured tinc ports in the firewall to allow incoming connections and traffic:

# # for <ADDR> in <SERVER/tinc/addresses>:
# iptables -A INPUT -p udp -m udp --dport <ADDR/port> -j ACCEPT
# iptables -A INPUT -p tcp -m tcp --dport <ADDR/port> -j ACCEPT
# ip6tables -A INPUT -p udp -m udp --dport <ADDR/port> -j ACCEPT
# ip6tables -A INPUT -p tcp -m tcp --dport <ADDR/port> -j ACCEPT
soft/tinc.txt · Last modified: 2014/07/31 15:54 by ivilata