This tutorial will walk you through installing and configuring a Consul cluster on CentOS / RHEL 7/8. Consul is an open source, distributed, and highly available solution for service discovery, configuration, and segmentation. Consul provides a simple built-in proxy, but it also supports third-party proxy integrations such as Envoy.
In our previous guide, we covered installation of a three-node Consul cluster on Ubuntu.
Key features of Consul (source: Consul site)
- Service Discovery: Clients register services and other applications can use Consul to discover services using DNS or HTTP.
- Secure Service Communication: Consul can generate and distribute TLS certificates for services to establish mutual TLS connections.
- KV Store: Consul's hierarchical key/value store can be used for dynamic configuration, coordination, leader election, feature flagging, and more. It has a simple, easy-to-use HTTP API (see the short example after this list).
- Health Checking: Consul clients run health checks, both for services (e.g. is the web server returning 200 OK?) and for the local node (e.g. resource utilization). This information can be used to monitor cluster health and to route traffic away from unhealthy nodes.
- Multi-Datacenter: Consul supports multiple data centers out of the box.
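As a quick illustration of the KV store feature, once an agent is running you can write and read keys with the consul CLI or over the HTTP API. This is a minimal sketch, assuming an agent listening on 127.0.0.1:8500 and a made-up key name app/config/db_host:
# Write and read a key with the CLI
consul kv put app/config/db_host 192.168.10.50
consul kv get app/config/db_host
# Read the same key over the HTTP API (the value is returned base64-encoded)
curl -s http://127.0.0.1:8500/v1/kv/app/config/db_host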
Consul Architecture
Every node that provides services to Consul runs a Consul agent which is responsible for health checking the services on the node as well as the node itself. Consul agents talk to one or more Consul servers which store and replicate data. Consul servers themselves elect a leader.
Your infrastructure systems that need to discover other services or nodes can query any of the Consul servers or any of the Consul agents. The agents forward queries to the servers automatically.
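For example, once the cluster built later in this guide is running, any agent can be queried over Consul's DNS interface (port 8600 by default) or the HTTP API. A short sketch, assuming a service registered under the hypothetical name web:
# Look up a service named "web" via the agent's DNS interface
dig @127.0.0.1 -p 8600 web.service.consul
# List the nodes in the catalog over the HTTP API
curl -s http://127.0.0.1:8500/v1/catalog/nodes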
While Consul can function with a single server, 3 to 5 Consul servers are recommended for production environments to avoid failure scenarios that could lead to complete data loss.
Consul Cluster Setup on CentOS 7
My Setup is based on the following CentOS 7 servers.
Short Hostname | IP Address |
---|---|
consul-01 | 192.168.10.10 |
consul-02 | 192.168.10.11 |
consul-03 | 192.168.10.12 |
Set the server hostnames, e.g.:
# Server 1
$ sudo hostnamectl set-hostname consul-01.example.com --static
# Server 2
$ sudo hostnamectl set-hostname consul-02.example.com --static
# Server 3
$ sudo hostnamectl set-hostname consul-03.example.com --static
Then put SELinux in Permissive mode.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
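You can confirm SELinux is now in permissive mode with:
getenforce
# Permissive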
Step 1: Install Consul on CentOS 7
We’ll need to install Consul on all three nodes. Check the latest version on the Consul releases page before downloading.
sudo yum install -y wget unzip
export VER="1.6.2"
wget https://releases.hashicorp.com/consul/${VER}/consul_${VER}_linux_amd64.zip
Extract the file
unzip consul_${VER}_linux_amd64.zip
Move the extracted consul binary to the /usr/local/bin directory
sudo mv consul /usr/local/bin/
To verify Consul is properly installed, run consul -v on your system.
$ consul -v
Consul v1.6.2
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
To print the consul help page, use the --help option.
$ consul --help
Usage: consul [--version] [--help]
Available commands are:
acl Interact with Consul's ACLs
agent Runs a Consul agent
catalog Interact with the catalog
connect Interact with Consul Connect
debug Records a debugging archive for operators
event Fire a new event
exec Executes a command on Consul nodes
force-leave Forces a member of the cluster to enter the "left" state
info Provides debugging information for operators.
intention Interact with Connect service intentions
join Tell Consul agent to join cluster
keygen Generates a new encryption key
keyring Manages gossip layer encryption keys
kv Interact with the key-value store
leave Gracefully leaves the Consul cluster and shuts down
lock Execute a command holding a lock
maint Controls node or service maintenance mode
members Lists the members of a Consul cluster
monitor Stream logs from a Consul agent
operator Provides cluster-level tools for Consul operators
reload Triggers the agent to reload configuration files
rtt Estimates network round trip time between nodes
services Interact with services
snapshot Saves, restores and inspects snapshots of Consul server state
tls Builtin helpers for creating CAs and certificates
validate Validate config files/directories
version Prints the Consul version
watch Watch for changes in Consul
Enable bash completion:
consul -autocomplete-install
complete -C /usr/local/bin/consul consul
Step 2: Bootstrap and start Consul Cluster
Consul bootstrapping is done on the three nodes one by one. If you want to do a single node Consul setup, you can skip the other two.
Run on all Consul cluster nodes
1. Create a consul system user and group
sudo groupadd --system consul
sudo useradd -s /sbin/nologin --system -g consul consul
2. Create the consul data and configuration directories and set their ownership to the consul user
sudo mkdir -p /var/lib/consul /etc/consul.d
sudo chown -R consul:consul /var/lib/consul /etc/consul.d
sudo chmod -R 775 /var/lib/consul /etc/consul.d
3. Set up DNS or edit the /etc/hosts file to configure hostnames for all servers (do this on all nodes).
$ sudo vi /etc/hosts
# Consul Cluster Servers
192.168.10.10 consul-01.example.com consul-01
192.168.10.11 consul-02.example.com consul-02
192.168.10.12 consul-03.example.com consul-03
Replace example.com with your actual domain name as used in your hostname setup.
Bootstrap the first Consul node – consul-01
For a single-node Consul:
Create a systemd service file at /etc/systemd/system/consul.service with the following content.
# Consul systemd service unit file
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent -server -ui \
-advertise=192.168.10.10 \
-bind=192.168.10.10 \
-data-dir=/var/lib/consul \
-node=consul-01 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Where:
- 192.168.10.10 is the IP address of the Consul node
- -server option: Switches agent to server mode.
- -advertise: Sets the advertise address to use.
- -ui: Enables the built-in static web UI server
- -node: Name of this node. Must be unique in the cluster.
- -data-dir: Path to a data directory to store agent state
For a three-node cluster:
Create a systemd service file at /etc/systemd/system/consul.service and add:
# Consul systemd service unit file
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-01 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Generate a Consul gossip encryption key (run this once and use the same key on all three nodes):
# consul keygen
pzDVYxESgPSkPhBFudHU5w==
Then create a JSON configuration file for the node at /etc/consul.d/config.json
{
"advertise_addr": "192.168.10.10",
"bind_addr": "192.168.10.10",
"bootstrap_expect": 3,
"client_addr": "0.0.0.0",
"datacenter": "DC1",
"data_dir": "/var/lib/consul",
"domain": "consul",
"enable_script_checks": true,
"dns_config": {
"enable_truncate": true,
"only_passing": true
},
"enable_syslog": true,
"encrypt": "pzDVYxESgPSkPhBFudHU5w==",
"leave_on_terminate": true,
"log_level": "INFO",
"rejoin_after_leave": true,
"retry_join": [
"consul-01",
"consul-02",
"consul-03"
],
"server": true,
"start_join": [
"consul-01",
"consul-02",
"consul-03"
],
"ui": true
}
Replace all occurrences of 192.168.10.10 with the correct IP address of this node, and the value of encrypt with your generated secret.
Validate consul configuration.
# consul validate /etc/consul.d/config.json
Configuration is valid!
You need to have DNS or the hosts file configured for the short names (consul-01, consul-02, and consul-03) to resolve.
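You can quickly confirm that the short names resolve on each node, for example (expected output based on the /etc/hosts entries above):
getent hosts consul-02
# 192.168.10.11   consul-02.example.com consul-02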
Bootstrap Consul on the second and third nodes
Consul Node 2
Create Consul systemd service:
$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-02 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Create the Consul JSON configuration file:
$ sudo vi /etc/consul.d/config.json
{
"advertise_addr": "192.168.10.11",
"bind_addr": "192.168.10.11",
"bootstrap_expect": 3,
"client_addr": "0.0.0.0",
"datacenter": "DC1",
"data_dir": "/var/lib/consul",
"domain": "consul",
"enable_script_checks": true,
"dns_config": {
"enable_truncate": true,
"only_passing": true
},
"enable_syslog": true,
"encrypt": "pzDVYxESgPSkPhBFudHU5w==",
"leave_on_terminate": true,
"log_level": "INFO",
"rejoin_after_leave": true,
"retry_join": [
"consul-01",
"consul-02",
"consul-03"
],
"server": true,
"start_join": [
"consul-01",
"consul-02",
"consul-03"
],
"ui": true
}
Consul Node 3
Create Consul systemd service:
$ sudo vi /etc/systemd/system/consul.service
[Unit]
Description=Consul Service Discovery Agent
Documentation=https://www.consul.io/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=consul
Group=consul
ExecStart=/usr/local/bin/consul agent \
-node=consul-03 \
-config-dir=/etc/consul.d
ExecReload=/bin/kill -HUP $MAINPID
KillSignal=SIGINT
TimeoutStopSec=5
Restart=on-failure
SyslogIdentifier=consul
[Install]
WantedBy=multi-user.target
Create the Consul JSON configuration file:
$ sudo vi /etc/consul.d/config.json
{
"advertise_addr": "192.168.10.12",
"bind_addr": "192.168.10.12",
"bootstrap_expect": 3,
"client_addr": "0.0.0.0",
"datacenter": "DC1",
"data_dir": "/var/lib/consul",
"domain": "consul",
"enable_script_checks": true,
"dns_config": {
"enable_truncate": true,
"only_passing": true
},
"enable_syslog": true,
"encrypt": "pzDVYxESgPSkPhBFudHU5w==",
"leave_on_terminate": true,
"log_level": "INFO",
"rejoin_after_leave": true,
"retry_join": [
"consul-01",
"consul-02",
"consul-03"
],
"server": true,
"start_join": [
"consul-01",
"consul-02",
"consul-03"
],
"ui": true
}
Start Consul Services
Allow consul ports on the firewall.
sudo firewall-cmd --add-port={8300,8301,8302,8400,8500,8600}/tcp --permanent
sudo firewall-cmd --add-port={8301,8302,8600}/udp --permanent
sudo firewall-cmd --reload
See Consul Ports for more details.
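You can confirm the rules were applied with:
sudo firewall-cmd --list-ports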
Start the consul service on all nodes
sudo systemctl start consul
Enable the service to start on boot
sudo systemctl enable consul
Service status can be checked with:
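sudo systemctl status consul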
Check Consul cluster members:
# consul members
Node       Address             Status  Type    Build  Protocol  DC   Segment
consul-01  192.168.10.10:8301  alive   server  1.6.2  2         dc1  <all>
consul-02  192.168.10.11:8301  alive   server  1.6.2  2         dc1  <all>
consul-03  192.168.10.12:8301  alive   server  1.6.2  2         dc1  <all>
The output shows the address, health state, role in the cluster, and Consul version of each node. You can obtain additional metadata by passing the -detailed flag.
# consul members -detailed
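You can also confirm that the servers have elected a leader, for example with the operator command below; the output lists each server with its Raft state, and exactly one should be reported as leader.
consul operator raft list-peers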
Access Consul UI
You can access the Consul built-in web interface at http://<consul-IP>:8500/ui
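If you prefer the command line, the same HTTP API that backs the UI can be queried directly; for example, to check which server is currently the Raft leader (the address returned may be any of the three servers):
curl -s http://192.168.10.10:8500/v1/status/leader
# "192.168.10.10:8300"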
Congratulations! You have successfully installed a three-node Consul cluster on CentOS / RHEL 7/8. Check the Consul documentation for usage guides.
Here are other guides on HashiCorp products:
Install and Configure Hashicorp Vault Server on Ubuntu / CentOS / Debian
How to Provision VMs on KVM with Terraform
How to setup Consul Cluster on Ubuntu 18.04 / Ubuntu 16.04 LTS