LXC host featuring IPv6 connectivity
by Thomas Urban
This article describes how to set up a physical host running LXC containers that are directly accessible over IPv6. It relies on the latest LTS release of Ubuntu Linux, which is Ubuntu 12.04 LTS as of December 2013. As a prerequisite, this version of Ubuntu has to be installed.
In the described setup there is a single physical host available over a single IPv4 address. In addition, an IPv6 /64 subnet is assigned to this host. This howto is about making containers running under LXC available over unique IPv6 addresses in that assigned subnet. It doesn't cover configuring your DNS to map a hostname onto any container's IPv6 address.
The setup uses a single bridge device to attach all containers to. This bridge uses a local-only IPv4 network and a subset of the addresses within the assigned IPv6 /64 subnet mentioned before. You might add further bridge devices to your setup to additionally cluster the interconnectivity of your LXC containers: all containers sharing one bridge device may instantly talk to each other over IPv4, while containers on different bridge devices may still communicate with each other over their publicly accessible IPv6 addresses.
In our example, the public IPv4 address of the physical host is 22.214.171.124. The assigned IPv6 /64 subnet is 5:6:7:8::/64 (throughout this article 5:6:7:8 stands in for the prefix actually assigned to you), with 5:6:7:8::2 being the host's own address in that subnet.
Setting up Physical Host
Installation of lxc and all its dependencies is as straightforward as invoking:
sudo apt-get install lxc
In an upcoming sequel to this article we'll talk about setting up HTTP forwarding to publish websites hosted in LXC containers over IPv4. This relies on statically assigning local-only IPv4 addresses to your LXC containers, and thus we don't want to use the local-only DHCP support provided by dnsmasq.
In /etc/default/lxc change the variable USE_LXC_BRIDGE to false:
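That is, after your change the line should read:

```
USE_LXC_BRIDGE="false"
```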
By disabling the bridge, LXC's startup scripts skip starting dnsmasq as well. On the downside we have to create a bridge device ourselves.
Setting up Network
First of all, make a backup of your existing network configuration in /etc/network/interfaces. This might be essential for recovering a headless remote server in rescue mode after an accidental misconfiguration.
Modifying /etc/network/interfaces serves several goals:
- Create a new bridge device called vmnet on startup.
- Assign a local-only subnet to this new bridge device. Here we assign 10.101.0.1/16, allowing 65000+ containers to be attached to this bridge for communicating over IPv4. Additional bridge devices might use 10.102.0.1/16 etc.
- Assign an IPv6 address to the bridge device that differs from the address assigned to the public NIC (eth0), for properly routing IPv6 traffic.
- Declare part of the publicly assigned IPv6 /64 subnet to be routed over this bridge device. Choosing 5:6:7:8:101::1/80 would allow addressing far more containers than over IPv4, but 65000+ containers is a sufficiently high limit on the number of attachable containers. In addition, this IPv6 subnet mirrors the per-container IPv4 addressing by sharing 101 as the addressing part of the locally available subnet:
- The subnet's address in IPv4 is 10.101.x.y/16; in IPv6 it is 5:6:7:8:101:x:y:z/80.
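As a quick illustration of this addressing convention (the values are the example numbers used throughout this article, not anything to be executed on the host):

```shell
# Illustration of the addressing convention only.
PREFIX="5:6:7:8"   # stands in for your actually assigned /64 prefix
BRIDGE=101         # number of the bridge device (10.101.* <-> ...:101:...)
X=1; Y=1; Z=1      # per-container addressing parts
IPV4="10.$BRIDGE.$X.$Y/16"
IPV6="$PREFIX:$BRIDGE:$X:$Y:$Z/80"
echo "$IPV4"   # 10.101.1.1/16
echo "$IPV6"   # 5:6:7:8:101:1:1:1/80
```

The same bridge number 101 shows up in both the IPv4 and the IPv6 address, which makes it easy to see at a glance which bridge a container is attached to.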
Masquerading IPv4 traffic
Our LXC containers work with local-only IP addresses, yet they may want to send data to the public internet. But since such packets originate from a local-only address, external routers are expected to drop any packet sent from or to one of the containers. By establishing IPv4 masquerading when setting up eth0, all outgoing traffic gets stamped with the physical host's public IPv4 address first, and on response the physical host's address is replaced by the original container's local-only address again. This masquerading is achieved by adding a rule like this to the firewall:
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 -j MASQUERADE
The Resulting Configuration File
Your version of /etc/network/interfaces should now look similar to this (don't copy and paste this example blindly, for it might lead to the accidental misconfiguration mentioned before; see the remarks following the file):

auto lo
iface lo inet loopback

#!!! KEEP AS PROVIDED:
auto eth0
iface eth0 inet static
        # ... your hoster's IPv4 settings (address, netmask, gateway) ...
#!!! END OF KEEP AS PROVIDED
        # establish IPv4 masquerading
        up iptables -t nat -A POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 -j MASQUERADE
        down iptables -t nat -D POSTROUTING -s 10.0.0.0/8 ! -d 10.0.0.0/8 -j MASQUERADE

# assign host-only IPv6 address to eth0 for routing all other
# traffic in assigned /64 subnet to proper bridge device
iface eth0 inet6 static
        address 5:6:7:8::2
        netmask 64

# create VM bridge to use
auto vmnet
iface vmnet inet static
        address 10.101.0.1
        netmask 255.255.0.0
        # mark to create bridge w/o any attached NICs
        bridge_ports none
        # implicitly set up IPv6 on bridge
        up ifconfig vmnet add 5:6:7:8:101::1/80
- The first part of the file regarding IPv4 on eth0 should be kept as provided by your hoster. It doesn't matter whether it's using DHCP or a static address, or whether it's using a host-only address or preparing for some IPv4 subnet the physical host is part of.
- Replace occurrences of 5:6:7:8 with the IPv6 subnet prefix actually assigned to your physical host.
An essential tip on setting up networking:
The scripts of Debian-based systems processing /etc/network/interfaces support multiple stanzas per device, e.g. for setting up multiple IP addresses per NIC or for separately configuring IPv4 and IPv6 as done in the example above. On setting up a NIC, all its stanzas are processed sequentially. However, if an error occurs in any of the stanzas, all succeeding stanzas of the same NIC are ignored. So, for example, if you improperly add a second IPv4 address to eth0, your IPv6 setup won't be processed at all, as IPv4 is configured prior to IPv6. Don't try to fix the IPv6 configuration without checking the IPv4 configuration as well. For example, adding a second IPv4 address usually must not include the definition of another default gateway.
Enabling IP forwarding
At this point your physical host may receive all IPv6 traffic sent to any address in the publicly assigned /64 subnet. It receives this traffic on eth0, yet this NIC actually accepts traffic to a single IPv6 address only. By enabling IPv6 forwarding on all devices, the operating system of the physical host tries to forward all other traffic received on eth0 to some other NIC (like the bridge vmnet) to be routed further from there.
The same applies to IPv4 traffic originating from a container. At first it is received by the physical host on the bridge device the container is attached to. Any traffic not addressed to the subnet of that bridge device is then dropped, unless IPv4 forwarding is enabled to try routing such traffic over all other devices using the default routes configured there.
Both kinds of forwarding are enabled in /etc/sysctl.conf, either by modifying existing occurrences of these lines or by adding them to the end of that file if missing:
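On Ubuntu 12.04 both lines are already present in /etc/sysctl.conf, but commented out; uncomment them (or append them) so that they read:

```
net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
```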
Adjusting the network configuration as well as modifying /etc/sysctl.conf doesn't apply the changes to the system instantly. Thus you might restart your physical host now to have them applied, even though there are ways to achieve this without restarting the server.
Next, edit the LXC configuration template to use the created bridge for attaching new containers by default. Edit the related line in /etc/lxc/lxc.conf to read like this:
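On a stock Ubuntu 12.04 installation this file names the default bridge lxcbr0; with our setup, the line defining the bridge has to point to vmnet instead:

```
lxc.network.link = vmnet
```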
The Neighbour Discovery Protocol (NDP) is part of IPv6 and is used to find peers directly attached to your host, for setting up next-hop routes accordingly. This protocol is also used by any created VM looking for your physical host to be its neighbour. The daemon radvd implements this protocol and thus gets installed on the physical host to help VMs establish routes to their common neighbour.
sudo apt-get install radvd
The configuration of radvd must be written into /etc/radvd.conf and has to look similar to this:
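A minimal sketch of such a configuration might look like this (5:6:7:8:101::/80 is the example sub-subnet of bridge vmnet, and the RDNSS addresses are placeholders; consult radvd.conf(5) for all available options):

```
interface vmnet
{
        AdvSendAdvert on;
        prefix 5:6:7:8:101::/80
        {
                AdvOnLink on;
                # addresses are assigned statically, so don't use SLAAC here
                AdvAutonomous off;
        };
        RDNSS a:b:c:d:e::1 {};
        RDNSS a:b:c:d:e::2 {};
        RDNSS a:b:c:d:e::3 {};
};
```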
- The mentioned IPv6 prefix must be replaced by the prefix of the IPv6 sub-subnet you assigned to the bridge vmnet before.
- The three lines starting with RDNSS select three IPv6 name servers to be used by your containers. In the example their addresses are a:b:c:d:e::1, a:b:c:d:e::2 and a:b:c:d:e::3. You might want to adjust these to address your provider's name servers properly.
The following instructions have to be invoked from within a running container. Thus you can't use them right now, but you might return here after having started an LXC container and encountering issues.
In the context of a running container you may later try pinging some server, e.g. by invoking (the hostname is just a placeholder for any IPv6-enabled server)
ping6 some.ipv6-enabled.host
This will restore any neighbouring. Use the following command at the same prompt to see its current state:
sudo ip -6 neigh
This should result in a single line reading similar to the following one:
fe80::b061:ceff:fe88:f6cf dev eth0 lladdr 26:8e:ca:ef:16:46 router REACHABLE
The last word is the current state of that neighbour. REACHABLE marks successfully established neighbouring, while STALE is okay if your NIC hasn't processed any IPv6 traffic recently, and DELAY is okay during the seconds after restoring neighbouring, i.e. while transitioning from STALE to REACHABLE. Any other state indicates a problem. In addition, you shouldn't see multiple lines here when you have a single NIC only.
Setting up First Container
You should have restarted your physical host by now. It's time to create your first LXC container.
Create LXC container
Every LXC container has a unique name. This name shouldn't include any whitespace or special characters that can't be contained in usual filenames, because creating an LXC container implies creating a subfolder of /var/lib/lxc named after the container. The name might be the resulting container's public host name or some shorter internal ID. In our example we call it nameofvm.
For the virtual machines inside LXC containers you need to select one of several available container templates. Even though templates might be used to set up different kinds of LXC containers, they are currently used to select one of several available Linux distributions, and more specifically, one of its releases. In Ubuntu 12.04 LTS it's possible to install Ubuntu, Debian, OpenSUSE and Fedora. See the official docs at ubuntu.com for any limitations applying to OpenSUSE or Fedora. In this example we continue using Ubuntu Linux. Its template features the selection of a specific release. If you omit the selection of a release, the physical host's release is chosen by default: Precise Pangolin.
The template is chosen by option -t. The release may be selected by -r. Create a new Ubuntu 12.04 LTS container by invoking
sudo lxc-create -t ubuntu -n nameofvm
This command takes a few minutes to complete on first invocation, as it's going to debootstrap the selected release and create a locally cached template to speed up subsequent creations of further containers.
Assign Network Addresses
The created container must now be configured manually to use statically assigned IPv4 and IPv6 addresses. Open its configuration file /var/lib/lxc/nameofvm/config and add the following lines:
lxc.network.ipv4 = 10.101.1.1/16
lxc.network.ipv6 = 5:6:7:8:101:1:1:1/80
This assigns the IPv4 address 10.101.1.1 in subnet 10.101.* and the IPv6 address 5:6:7:8:101:1:1:1 in the /80 subnet of bridge device vmnet to the created container. Your next container might get 10.101.1.2/16 and 5:6:7:8:101:1:2:1/80, and so on.
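Following that scheme, the configuration of a hypothetical second container would contain:

```
lxc.network.ipv4 = 10.101.1.2/16
lxc.network.ipv6 = 5:6:7:8:101:1:2:1/80
```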
Networking in Container
The container will have its own eth0 connected to the physical host's bridge device vmnet. Due to the two lines added to the container's configuration before, its eth0 will be partly pre-configured on starting the container. However, some final modifications are required to get the container's networking running properly. This is achieved by modifying its private /etc/network/interfaces, which is /var/lib/lxc/nameofvm/rootfs/etc/network/interfaces in the filesystem of the physical host. It should look similar to this:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual
        up route add default gw 10.101.0.1 eth0

iface eth0 inet6 static
        address 5:6:7:8:101:1:1:1
        netmask 80
Basically, you need to replace the given IPv6 address by the one you've written into the container's configuration before, excluding the trailing /80: the network mask is chosen in a separate line.
It's time to start the container by invoking
sudo lxc-start -dn nameofvm
The option -d is important to have it start and keep running in the background. Otherwise you get bound to the container's console and can't detach until it's been shut down and powered off again. In addition, starting the container might fail when invoked from within certain applications such as Midnight Commander (mc). Thus try invoking it from a vanilla shell prompt.
Check whether it's running or not by invoking:
sudo lxc-list
This should list the container nameofvm in section RUNNING.
Log into Container
It's time to switch into your running container by attaching to its console now.
sudo lxc-console -n nameofvm
This will open the container's console. At first you will see getty prompting for login. lxc-create added the user ubuntu with password ubuntu before. That user may use sudo to work with elevated privileges.
If you don't see anything, wait a few minutes: the container is probably blocked trying to resolve some hostnames, failing due to some network misconfiguration.
After logging in you see the shell prompt of your container and may start working inside it. Try to access the network there by invoking commands like these (the hostname in the last one is just a placeholder for any IPv6-enabled server):
ping 10.101.0.1
ping6 5:6:7:8:101::1
ping6 some.ipv6-enabled.host
You may detach from your container's console at any time by pressing Ctrl+A followed by Q. This won't stop your container. Next time you attach to its console, the same shell prompt will be available instantly without logging in again. You won't even get a fresh prompt line, as the container didn't recognize your previous detaching and your re-attaching now. Press Ctrl+C then to get another prompt, or blindly type some command (I'm kidding — don't!).
By default, LXC containers aren't restarted automatically the next time you restart the physical host. You need to explicitly enable this feature by adding a symbolic link in /etc/lxc/auto pointing to your container's configuration file:
sudo ln -s /var/lib/lxc/nameofvm/config /etc/lxc/auto/
Before installing any software in your container you might want to replace the default user ubuntu by a differently named user. Make sure to set a proper password.