Welcome to our guide on installing and configuring a Consul Service Discovery cluster on Ubuntu 20.04|18.04|16.04 and Debian 10/9 Linux systems. Consul is an open source, distributed, and highly available solution for service discovery, configuration, and segmentation.
You can use each of Consul's features individually as needed, or use them together to build a full service mesh. It ships with a simple built-in proxy so that everything works out of the box, but it also supports third-party proxy integrations such as Envoy.
Key features of Consul (source: Consul site)
- Service Discovery: Clients register services and other applications can use Consul to discover services using DNS or HTTP.
- Secure Service Communication: Consul can generate and distribute TLS certificates for services to establish mutual TLS connections.
- KV Store: Consul's hierarchical key/value store can be used for dynamic configuration, coordination, leader election, feature flagging, and more. It exposes a simple, easy-to-use HTTP API (see the example after this list).
- Health Checking: Consul agents run health checks, both for services (e.g. whether an HTTP endpoint responds OK) and for the local node (e.g. resource utilization). This information helps monitor cluster health and route traffic away from unhealthy nodes.
- Multi-Datacenter: Consul supports multiple data centers out of the box.
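As a quick taste of the KV store mentioned above, here is a minimal sketch using the HTTP API and the kv CLI subcommand. It assumes an agent from Step 2 is reachable on localhost:8500; the key name foo/bar is just an example:
# Write a value over the HTTP API
curl --request PUT --data 'hello' http://127.0.0.1:8500/v1/kv/foo/bar
# Read it back (values are returned base64-encoded inside a JSON document)
curl http://127.0.0.1:8500/v1/kv/foo/bar
# The same operations are available through the CLI
consul kv put foo/bar hello
consul kv get foo/bar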
Consul Architecture
Every node that provides services to Consul runs a Consul agent which is responsible for health checking the services on the node as well as the node itself. Consul agents talk to one or more Consul servers which store and replicate data. Consul servers themselves elect a leader.
Your infrastructure components that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.
While Consul can function with a single server, 3 to 5 Consul servers are recommended for production environments to avoid failure scenarios that could lead to complete data loss.
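Once the cluster from Step 2 is running, you can confirm leader election and the Raft peer set from any server with the operator subcommand (a quick sketch):
consul operator raft list-peers
Each server should appear as a voter, with exactly one of them in the leader state.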
Step 1: Download and install Consul on Ubuntu 20.04|18.04|16.04 & Debian 10/9
I have three nodes for this deployment.
Hostname | IP Address |
---|---|
consul-01 | 192.168.18.40 |
consul-02 | 192.168.18.41 |
consul-03 | 192.168.18.42 |
Install Consul on all three nodes. Check the latest release of Consul on the releases page. Here we will download and install v1.8.4.
export VER="1.8.4"
wget https://releases.hashicorp.com/consul/${VER}/consul_${VER}_linux_amd64.zip
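Optionally, verify the integrity of the download against the SHA256 checksums that HashiCorp publishes alongside each release (a quick sketch reusing the same VER variable):
# Fetch the checksum file published next to the zip on the releases server
wget https://releases.hashicorp.com/consul/${VER}/consul_${VER}_SHA256SUMS
# Check only the line matching the Linux amd64 archive; expect "OK"
grep linux_amd64.zip consul_${VER}_SHA256SUMS | sha256sum -c -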
Extract the file
sudo apt update
sudo apt install unzip
unzip consul_${VER}_linux_amd64.zip
Move the extracted consul binary to the /usr/local/bin directory:
chmod +x consul
sudo mv consul /usr/local/bin/
To print the consul help page, use the --help option:
$ consul --help
Usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    agent          Runs a Consul agent
    catalog        Interact with the catalog
    connect        Interact with Consul Connect
    event          Fire a new event
    exec           Executes a command on Consul nodes
    force-leave    Forces a member of the cluster to enter the "left" state
    info           Provides debugging information for operators.
    intention      Interact with Connect service intentions
    join           Tell Consul agent to join cluster
    keygen         Generates a new encryption key
    keyring        Manages gossip layer encryption keys
    kv             Interact with the key-value store
    leave          Gracefully leaves the Consul cluster and shuts down
    lock           Execute a command holding a lock
    maint          Controls node or service maintenance mode
    members        Lists the members of a Consul cluster
    monitor        Stream logs from a Consul agent
    operator       Provides cluster-level tools for Consul operators
    reload         Triggers the agent to reload configuration files
    rtt            Estimates network round trip time between nodes
    snapshot       Saves, restores and inspects snapshots of Consul server state
    validate       Validate config files/directories
    version        Prints the Consul version
    watch          Watch for changes in Consul
To verify that Consul is properly installed, run consul version on your system.
$ consul version
Consul v1.8.4
Revision 12b16df32
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
Step 2: Bootstrap and start Consul Cluster
Since we have three nodes for our Consul cluster setup, we will bootstrap them one by one. If you want a single-node Consul setup, you can skip the other two.
Create a consul system user and group:
sudo groupadd --system consul
sudo useradd -s /sbin/nologin --system -g consul consul
Create the Consul data directory and set its ownership to the consul user:
sudo mkdir -p /var/lib/consul
sudo chown -R consul:consul /var/lib/consul
sudo chmod -R 775 /var/lib/consul
Create Consul configurations directory
sudo mkdir /etc/consul.d
sudo chown -R consul:consul /etc/consul.d
Set up DNS or edit the /etc/hosts file to configure hostnames for all servers (set on all nodes):
$ sudo vim /etc/hosts
192.168.18.40 consul-01.example.com consul-01
192.168.18.41 consul-02.example.com consul-02
192.168.18.42 consul-03.example.com consul-03
Replace example.com with your actual domain name.
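You can confirm that the short names resolve on each node before proceeding; getent consults the same resolution order the agent will use:
getent hosts consul-01 consul-02 consul-03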
Bootstrap Consul first node – consul-01
For a single-node Consul:
For a single-server setup, create a systemd service file at /etc/systemd/system/consul.service with the following content.
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent -server -ui \
-bootstrap-expect=1 \
-advertise=192.168.18.40 \
-bind=192.168.18.40 \
-data-dir=/var/lib/consul \
-node=consul-01 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Where:
- 192.168.18.40 is the IP address of the node
- -server: Switches the agent to server mode.
- -bootstrap-expect=1: With only one server expected, the node bootstraps itself and elects itself leader; without it, a lone server never elects a leader.
- -advertise: Sets the advertise address to use.
- -ui: Enables the built-in static web UI server
- -node: Name of this node. Must be unique in the cluster.
- -data-dir: Path to a data directory to store agent state
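If you went with the single-node unit above, you can reload systemd and start the agent right away (the three-node setup continues below and is started later in this step):
sudo systemctl daemon-reload
sudo systemctl enable --now consul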
For a three-node cluster:
Create a systemd service file /etc/systemd/system/consul.service and add:
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-01 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Generate Consul secret
consul keygen
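The command prints a base64-encoded gossip encryption key. Run it once and reuse the same value for the encrypt setting on every node; the key below is the placeholder used in the configuration files that follow:
$ consul keygen
bnRHLmJ6TeLomirgEOWP2g==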
Then create a JSON configuration file for the node at /etc/consul.d/config.json:
{ "advertise_addr": "192.168.18.40", "bind_addr": "192.168.18.40", "bootstrap_expect": 3, "client_addr": "0.0.0.0", "datacenter": "DC1", "data_dir": "/var/lib/consul", "domain": "consul", "enable_script_checks": true, "dns_config": { "enable_truncate": true, "only_passing": true }, "enable_syslog": true, "encrypt": "bnRHLmJ6TeLomirgEOWP2g==", "leave_on_terminate": true, "log_level": "INFO", "rejoin_after_leave": true, "retry_join": [ "consul-01", "consul-02", "consul-03" ], "server": true, "start_join": [ "consul-01", "consul-02", "consul-03" ], "ui": true }
Replace all occurrences of 192.168.18.40 with the correct IP address of this node, and the value of encrypt with your generated secret. You need DNS or the hosts file configured for the short names (consul-01, consul-02, and consul-03) to work.
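Before starting the agent, you can sanity-check the configuration directory with the validate subcommand from the help page above (run on each node after writing its config):
sudo consul validate /etc/consul.d/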
Bootstrap the second and third Consul nodes
Consul Node 2
Consul systemd service:
# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-02 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Consul JSON configuration file:
# cat /etc/consul.d/config.json
{
  "advertise_addr": "192.168.18.41",
  "bind_addr": "192.168.18.41",
  "bootstrap_expect": 3,
  "client_addr": "0.0.0.0",
  "datacenter": "DC1",
  "data_dir": "/var/lib/consul",
  "domain": "consul",
  "enable_script_checks": true,
  "dns_config": {
    "enable_truncate": true,
    "only_passing": true
  },
  "enable_syslog": true,
  "encrypt": "bnRHLmJ6TeLomirgEOWP2g==",
  "leave_on_terminate": true,
  "log_level": "INFO",
  "rejoin_after_leave": true,
  "retry_join": [
    "consul-01",
    "consul-02",
    "consul-03"
  ],
  "server": true,
  "start_join": [
    "consul-01",
    "consul-02",
    "consul-03"
  ],
  "ui": true
}
Consul Node 3
Consul systemd service:
# cat /etc/systemd/system/consul.service
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-03 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Consul JSON configuration file:
# cat /etc/consul.d/config.json
{
  "advertise_addr": "192.168.18.42",
  "bind_addr": "192.168.18.42",
  "bootstrap_expect": 3,
  "client_addr": "0.0.0.0",
  "datacenter": "DC1",
  "data_dir": "/var/lib/consul",
  "domain": "consul",
  "enable_script_checks": true,
  "dns_config": {
    "enable_truncate": true,
    "only_passing": true
  },
  "enable_syslog": true,
  "encrypt": "bnRHLmJ6TeLomirgEOWP2g==",
  "leave_on_terminate": true,
  "log_level": "INFO",
  "rejoin_after_leave": true,
  "retry_join": [
    "consul-01",
    "consul-02",
    "consul-03"
  ],
  "server": true,
  "start_join": [
    "consul-01",
    "consul-02",
    "consul-03"
  ],
  "ui": true
}
Start the consul service on all nodes:
sudo systemctl start consul
Enable the service to start on boot
sudo systemctl enable consul
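Confirm that the agent came up cleanly on each node; because enable_syslog is set and the unit defines SyslogIdentifier=consul, the logs are also available through the journal:
sudo systemctl status consul
sudo journalctl -u consul -n 20 --no-pager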
Check cluster members
# consul members
Node       Address             Status  Type    Build  Protocol  DC   Segment
consul-01  192.168.18.40:8301  alive   server  1.8.4  2         dc1  <all>
consul-02  192.168.18.41:8301  alive   server  1.8.4  2         dc1  <all>
consul-03  192.168.18.42:8301  alive   server  1.8.4  2         dc1  <all>
The output shows the address, health state, role in the cluster, and Consul version of each node in the cluster. You can obtain additional metadata by passing the -detailed flag.
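You can also exercise the DNS interface, one of Consul's headline features: every agent serves DNS on port 8600 by default, and the name consul.service.consul resolves to the server nodes:
# dig is in the dnsutils package: sudo apt install dnsutils
dig @127.0.0.1 -p 8600 consul.service.consul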
Access Consul UI
You can access Consul's built-in web interface using the URL http://<consul-IP>:8500/ui
List of active nodes:
Check healthy nodes
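The UI reads from the same HTTP API that every agent exposes on port 8500. For a quick scripted check that a leader has been elected, you can hit the status endpoints with curl (a minimal sketch against any node):
# Returns the address of the current Raft leader
curl http://127.0.0.1:8500/v1/status/leader
# Lists all server peers participating in Raft
curl http://127.0.0.1:8500/v1/status/peers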
Congratulations! You have successfully installed Consul and bootstrapped a three-node Consul cluster on Ubuntu 20.04|18.04|16.04 & Debian 10/9 Linux systems. In our next tutorial, I'll cover how to monitor Consul with Grafana and Prometheus.
Also check:
Setup Consul HA Cluster on CentOS 8 / CentOS 7
How to Install Terraform on Linux
Setup HashiCorp Vault Server on Linux