Build a Docker Swarm

by Thomas Urban

In this post you'll see how to turn a few servers into a cluster using recent versions of Docker Community Edition and the BeeGFS distributed filesystem. Most amazing about it: the whole setup should take less than 60 minutes of your time. In the end you will have a running Docker swarm ready for deploying and running applications, though still missing certain tools for doing so conveniently and maybe requiring some low-level organisation of the resources available to the swarm. Also, there won't be a gateway for routing public requests into your cluster, yet. That may be the topic of a succeeding tutorial.

There is no additional magic in this tutorial; it mostly condenses the original tutorials at Docker CE and BeeGFS.

About The Setup

For the sake of this post we assume there are three servers named node1, node2 and node3, with the first becoming the master node in BeeGFS. The names actually don't matter much, and the BeeGFS master node might be any node you like or even a separate server (read below). There is another presumption here: every server is freshly set up with Ubuntu 16.04 LTS (Xenial). Applying the same procedure to another operating system such as RedHat Linux should require only slight adjustments to a few statements; in such a case you are advised to check the original setup guides linked before, too.

In this example every node will be part of a Docker swarm for orchestrated running of distributed containers. In addition, every node will participate in a distributed file system. This comes with some benefits, but includes some drawbacks, too.

On the pro side, all data is available equally fast to every container no matter on what node it is actually running. This supports scalability by using replicated services for providing content, and it frees you from caring on what node any container actually runs. In addition, the distributed filesystem can keep data mirrored across nodes and thus safe in case any node fails (note that BeeGFS requires such mirroring to be enabled explicitly). Of course, there is still a need for backing up data.

In opposition to those pros, the distribution of files provides eventual consistency among all nodes only. Thus a container must not rely on having the latest version of a file and can't assume to be the first and only one writing it. That's why you should run database services in a container in one of these cases only:

  1. There is only a single container per service and you don't care about scaling it by simply replicating the container.
  2. If you intend to have replicated containers serving the same content, e.g. for the sake of load balancing, you should not combine them with this clustered filesystem. Instead, bind every container instance to a separate non-shared volume (in essence requiring you to provide two kinds of volume space in your cluster: distributed and local-node-only) and have the application take care of the replication itself. Contemporary database services such as MySQL, LDAP, MongoDB etc. come with support for one sort of replication or another; see the sketch after this list.
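
As an illustration, here is a hypothetical sketch of both kinds of services. Service names, images and paths are placeholders, and /mnt/beegfs is the BeeGFS mount point set up later in this tutorial:

# A replicated stateless service reading shared, rarely changing content
# from the distributed filesystem; eventual consistency is acceptable here:
docker service create --name web --replicas 3 \
  --mount type=bind,src=/mnt/beegfs/www,dst=/usr/share/nginx/html,readonly \
  nginx

# A database service bound to a node-local named volume; scaling it would
# mean adding replicas with volumes of their own and using the database's
# builtin replication instead of the shared filesystem:
docker service create --name db --replicas 1 \
  --mount type=volume,src=db-data,dst=/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=choose-some-secret \
  mysql:5.7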

BeeGFS suffers slightly from a single point of failure due to its sole instance of the management service. Actually, the distributed file system keeps running even when the management service is down. But a missing management service becomes critical on restarting a node or any of the BeeGFS-related services on it.
In this tutorial the BeeGFS Management Service is set up on one of the nodes. But you may decide to put it on an external (virtual) server not participating in the rest of the cluster. This won't fix the single point of failure issue. But it reduces the chance of the Docker swarm exhausting that node's resources, and it might somewhat improve security by keeping the controlling service of your distributed file system at a distance from the hosted applications. If you decide to use a separate server for that, adapt all steps related to the BeeGFS Management Service accordingly.

Installing Docker

The following steps apply to every node.

First finish setting up the operating system.

apt update && apt upgrade && apt autoremove

Some software is required for the installation and might be missing on your system. The following command might even succeed instantly, which is okay.

apt install nano curl apt-transport-https ca-certificates software-properties-common

The signing key of Docker's repository must be installed:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Next, the repository itself is added:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Make the system discover all packages in the added repository:

apt update

Find all current releases of Docker Community Edition to choose one for installation (e.g. the latest one):

apt-cache madison docker-ce
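
The output looks roughly like this, with available versions varying over time:

docker-ce | 17.09.0~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages
docker-ce | 17.06.2~ce-0~ubuntu | https://download.docker.com/linux/ubuntu xenial/stable amd64 Packages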

Choose your desired version, then put the value found in the second column of the displayed table after the assignment operator in this command (17.09.0~ce-0~ubuntu in this example):

apt install docker-ce=17.09.0~ce-0~ubuntu

This installs Docker. On success, you might wish to test it by running

docker run hello-world
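
All commands in this tutorial are meant to be run as root. If you prefer working as an unprivileged user, you might add that user to the docker group; note that membership in this group effectively grants root-level access to the host (exampleuser is a placeholder):

usermod -aG docker exampleuser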

Create the Swarm

Assuming you have set up Docker on every node, you are ready to create the swarm now. Since there is no swarm at this time, you need to start it on one of your nodes only, by running this command:

docker swarm init
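
Should your server have more than one network interface, Docker can't guess which address the other nodes shall use to reach this one and asks you to name it explicitly, e.g. (with 1.2.3.4 being a placeholder for your node's address):

docker swarm init --advertise-addr 1.2.3.4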

This starts a single-node swarm and displays two commands to be invoked next. The first one is for joining worker nodes without manager capabilities, but we want at least three nodes to be managers for the sake of failover support. Thus we have to invoke the second command on the same node we've just started the swarm on:

docker swarm join-token manager

This will display a command with a different token, e.g.:

docker swarm join --token someVeryLongSequenceOfLettersWithDigitsPlusDashes node1:2377

Take a copy of the command as displayed in your case and invoke it on every other node but the one you've started the swarm on.

You don't have to join all your nodes as managers; sticking with three manager nodes is fine for the moment. You may add further nodes as workers, too. Invoke

docker swarm join-token worker

on a manager node of your swarm (currently that's the node you've created the swarm on, but later it may be any manager node) to see the proper command to be invoked on every prospective worker node.

Finally, you can see a list of all nodes in the swarm by invoking

docker node ls
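
The listing looks similar to this, with the x-sequences being placeholders for actual node IDs and the asterisk marking the node you are on:

ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
xxxxxxxxxxxxxxxxxxxxxxxxx *   node1      Ready    Active         Leader
xxxxxxxxxxxxxxxxxxxxxxxxx     node2      Ready    Active         Reachable
xxxxxxxxxxxxxxxxxxxxxxxxx     node3      Ready    Active         Reachable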

This should list all your nodes, with every manager node showing either Leader or Reachable in the last column, labelled MANAGER STATUS. Another useful source of information is available by running

docker info

on any node. The output should include information on the current swarm, its total number of nodes as well as the number of managing nodes:

...
Swarm: active
...
Managers: 3
Nodes: 3
...
Node Address: 1.2.3.4
Manager Addresses:
1.2.3.4:2377
1.2.3.5:2377
1.2.3.6:2377
...


Installing BeeGFS

BeeGFS is a distributed filesystem used here to keep files in sync on all nodes of your swarm.

Unless noted otherwise the following steps apply to every node of your swarm.

Start with adding the repository providing BeeGFS packages:

cat >/etc/apt/sources.list.d/beegfs-deb9.list <<EOT
# If you have an active BeeGFS support contract, use the alternative URL below
# to retrieve early updates. Replace username/password with your account for
# the BeeGFS customer login area.
#deb [arch=amd64] http://username:password@www.beegfs.io/login/release/beegfs_6 deb9 non-free
deb [arch=amd64] http://www.beegfs.io/release/beegfs_6 deb9 non-free
EOT

Next, get the key required for verifying the signatures of packages in the repository:

curl -fsSL https://www.beegfs.io/release/latest-stable/gpg/DEB-GPG-KEY-beegfs | apt-key add -

It's time to update the package index again:

apt update

Install all the required packages:

apt install beegfs-meta beegfs-storage beegfs-client beegfs-helperd beegfs-utils

One more service must be installed on the first node only, the one considered to be the master node:

apt install beegfs-mgmtd

Configuring BeeGFS Services

On the first node only, the one considered to be the master node, the configuration of the BeeGFS Management Service must be adjusted. Run nano /etc/beegfs/beegfs-mgmtd.conf and make sure to have the following option set:

Options listed here and in the succeeding steps are found at the beginning of each configuration file and usually aren't set, yet.

storeMgmtdDirectory = /data/beegfs-mgmt
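
Depending on your version of BeeGFS the service may initialize this folder on its first start. It does no harm to create it up front, though:

mkdir -p /data/beegfs-mgmt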

On every node, the configuration of the BeeGFS Meta Data Service is adjusted by running nano /etc/beegfs/beegfs-meta.conf:

sysMgmtdHost = node1
storeMetaDirectory = /data/beegfs-meta

The first option selects the node hosting the management service; the hostname node1 is used here as assumed at the beginning, but an IP address works just as well. The second one selects the local folder on each node used to store meta data on distributed files.

The configuration of the BeeGFS Storage Service is adjusted by running nano /etc/beegfs/beegfs-storage.conf:

sysMgmtdHost = node1
storeStorageDirectory = /data/beegfs-storage

Again, these options select the related management service and the local folder to contain the actual storage of the distributed file system.
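
The same applies to these two folders; you may create them on every node before starting the services:

mkdir -p /data/beegfs-meta /data/beegfs-storage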

This setup is using separate folders for meta data and storage residing on the same partition. You might consider a more complex setup here, e.g. using different hard disks for each folder and/or choosing different local file systems that suit the individual requirements of either kind of data, e.g. ext4 for meta data and XFS for storage. Read the official FAQ for more.

Every node is going to be a client of this distributed file system, too. Thus, on each node the configuration of the BeeGFS Client Service is adjusted by running nano /etc/beegfs/beegfs-client.conf:

sysMgmtdHost = node1

Here, the only option that needs to be adjusted is the address of the node hosting the management service. You might wish to adjust another option in this file to enable cluster-wide file locking:

tuneUseGlobalFileLocks = true

This option is set to false by default, but some applications rely on the system features flock() or fcntl() working across nodes and thus might require it.

There is another configuration file used to link mount points with configurations of the BeeGFS Client Service. The default configuration is fine, but you may want to inspect it nonetheless by running nano /etc/beegfs/beegfs-mounts.conf. Initially, it reads like this:

/mnt/beegfs /etc/beegfs/beegfs-client.conf

At this point it's time to reboot all nodes.

reboot
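
A reboot is the simplest way of getting the client's kernel module built and loaded. If rebooting is not an option for you, restarting the involved services manually on every node (plus beegfs-mgmtd on the master node) might do as well:

systemctl restart beegfs-meta beegfs-storage beegfs-helperd beegfs-client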

After rebooting you should check that every involved service is running.

On the first node, considered to be the master node, don't forget to also check the BeeGFS Management Service:

systemctl status beegfs-mgmtd

On every node run these commands and check each one's output:

systemctl status beegfs-meta
systemctl status beegfs-storage
systemctl status beegfs-client

All commands should indicate a running service on every node. The following command should list all nodes as reachable, too:

beegfs-check-servers
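
The beegfs-utils package ships further tools for inspection. For instance, you might list all registered nodes of a given type or check the free capacity of all targets:

beegfs-ctl --listnodes --nodetype=meta
beegfs-ctl --listnodes --nodetype=storage
beegfs-df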

If everything is working you should prevent more nodes from adding themselves to the cluster. Edit the configuration of the BeeGFS Management Service again by running nano /etc/beegfs/beegfs-mgmtd.conf on the master node, and make sure the following two options are switched to false:

sysAllowNewServers = false
sysAllowNewTargets = false
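
Restart the management service for the change to take effect:

systemctl restart beegfs-mgmtd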

Try It!

On any node create some text file in the cluster's filesystem, e.g. by running

echo 'Hello Cluster!' >/mnt/beegfs/welcome.msg

Then switch to another node and display the "same" file:

cat /mnt/beegfs/welcome.msg

This should display

Hello Cluster!
