The Elastic stack (ELK) is made up of three open-source components that work together to provide log collection, analysis, and visualization. The three main components are:
- Elasticsearch – the core of the Elastic stack. It is a search and analytics engine whose task is to store the incoming logs from Logstash and offer the ability to search them in real time.
- Logstash – collects and transforms logs arriving from multiple sources simultaneously, then sends them on to storage.
- Kibana – a graphical tool for data visualization. In the Elastic stack, it is used to generate charts and graphs that make sense of the raw data stored in Elasticsearch.
The Elastic stack can also be used with Beats. These are lightweight data shippers that collect data from many sources and send it to Elasticsearch or Logstash. There are several Beats, each with a distinct role.
- Filebeat – forwards and centralizes log files, usually in either .log or .json format (a minimal configuration sketch follows this list).
- Metricbeat – collects metrics from systems and services, including CPU usage, memory usage, and load, as well as other statistics from network and process data, and ships them directly to either Logstash or Elasticsearch.
- Packetbeat – captures network traffic for a range of application- and lower-level protocols, databases, and key-value stores, including HTTP, DNS, flows, DHCPv4, MySQL, and TLS. It helps identify suspicious network activity.
- Auditbeat – collects Linux audit framework data and monitors file integrity, shipping the results directly to either Logstash or Elasticsearch.
- Heartbeat – It is used for active probing to determine whether services are available.
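As a quick illustration, a minimal Filebeat configuration that ships log files to the Logstash Beats input on port 5044 could look like the sketch below (the input id, paths, and host are illustrative assumptions):
# filebeat.yml – minimal sketch; id, paths, and host are illustrative
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log
# Ship the events to the Logstash Beats input (port 5044)
output.logstash:
  hosts: ["localhost:5044"]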
This guide offers a deep illustration of how to run the Elastic stack (ELK) on Docker Containers using Docker Compose.
Setup Requirements.
For this guide, you need the following.
- Memory – 1.5 GB and above
- Docker Engine – version 18.06.0 or newer
- Docker Compose – version 1.26.0 or newer
Install the required packages below:
## On Debian/Ubuntu
sudo apt update && sudo apt upgrade
sudo apt install curl vim git
## On RHEL/CentOS/RockyLinux 8
sudo yum -y update
sudo yum -y install curl vim git
## On Fedora
sudo dnf update
sudo dnf -y install curl vim git
Step 1 – Install Docker and Docker Compose
Use the dedicated guide below to install the Docker Engine on your system.
Add your system user to the docker group.
sudo usermod -aG docker $USER
newgrp docker
Start and enable the Docker service.
sudo systemctl start docker && sudo systemctl enable docker
Now proceed and install Docker Compose with the aid of the guide below.
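Once both are installed, confirm that the versions meet the requirements above:
docker --version
docker-compose --version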
Step 2 – Provision the Elastic stack (ELK) Containers.
We will begin by cloning the repository from GitHub as shown below.
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
Open the deployment file for editing:
vim docker-compose.yml
The Elastic stack deployment file consists of three main services:
- Elasticsearch – with ports:
- 9200: Elasticsearch HTTP
- 9300: Elasticsearch TCP transport
- Logstash – with ports:
- 5044: Logstash Beats input
- 5000: Logstash TCP input
- 9600: Logstash monitoring API
- Kibana – with port 5601
In the opened file, you can make the adjustments below:
- Configure Elasticsearch
The configuration file for Elasticsearch is stored in the elasticsearch/config/elasticsearch.yml file. You can also configure the environment section of the service in the Compose file, setting values such as the cluster name and license type, as below:
elasticsearch:
  environment:
    cluster.name: my-cluster
    xpack.license.self_generated.type: basic
To disable paid features, change the xpack.license.self_generated.type setting from trial (the self-generated trial license gives access to all X-Pack features for 30 days) to basic.
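For reference, a minimal elasticsearch.yml along those lines could look like the sketch below (the values shown are illustrative):
# elasticsearch/config/elasticsearch.yml – illustrative sketch
cluster.name: my-cluster
network.host: 0.0.0.0
xpack.license.self_generated.type: basic
xpack.security.enabled: true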
- Configure Kibana
The configuration file is stored in the kibana/config/kibana.yml file. Here you can specify the environment variables as below.
kibana:
  environment:
    SERVER_NAME: kibana.example.com
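Equivalent settings can also be placed in kibana.yml itself; a minimal sketch (the values are illustrative, and elasticsearch is the Compose service name) could be:
# kibana/config/kibana.yml – illustrative sketch
server.name: kibana.example.com
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch:9200" ]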
- JVM tuning
Normally, both Elasticsearch and Logstash start with 1/4 of the total host memory allocated to the JVM heap size. You can adjust the memory by setting the options below.
For Logstash (an example with the memory increased to 1 GB):
logstash:
  environment:
    LS_JAVA_OPTS: -Xmx1g -Xms1g
For Elasticsearch (an example with the memory increased to 1 GB):
elasticsearch:
  environment:
    ES_JAVA_OPTS: -Xmx1g -Xms1g
Configure the Usernames and Passwords.
To configure the usernames, passwords, and version, edit the .env file.
vim .env
Make desired changes for the version, usernames, and passwords.
ELASTIC_VERSION=<VERSION>
## Passwords for stack users
#
# User 'elastic' (built-in)
#
# Superuser role, full access to cluster management and data indices.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
ELASTIC_PASSWORD='StrongPassw0rd1'
# User 'logstash_internal' (custom)
#
# The user Logstash uses to connect and send data to Elasticsearch.
# https://www.elastic.co/guide/en/logstash/current/ls-security.html
LOGSTASH_INTERNAL_PASSWORD='StrongPassw0rd1'
# User 'kibana_system' (built-in)
#
# The user Kibana uses to connect and communicate with Elasticsearch.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html
KIBANA_SYSTEM_PASSWORD='StrongPassw0rd1'
Source the environment file:
source .env
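You can confirm that the variables were loaded into your current shell:
echo "$ELASTIC_VERSION"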
Step 3 – Configure Persistent Volumes.
For the Elastic stack to persist data, we need to map the volumes correctly. The YAML file defines several volumes to be mapped. In this guide, I will use a secondary disk attached to my server.
Identify the disk.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 39G 0 part
├─rl-root 253:0 0 35G 0 lvm /
└─rl-swap 253:1 0 4G 0 lvm [SWAP]
sdb 8:16 0 10G 0 disk
└─sdb1 8:17 0 10G 0 part
Partition the disk and create an XFS file system on it.
sudo parted --script /dev/sdb "mklabel gpt"
sudo parted --script /dev/sdb "mkpart primary 0% 100%"
sudo mkfs.xfs /dev/sdb1
Mount the disk to your desired path.
sudo mkdir /mnt/datastore
sudo mount /dev/sdb1 /mnt/datastore
Update the /etc/fstab file for persistent mounting.
$ sudo vim /etc/fstab
/dev/sdb1 /mnt/datastore xfs defaults 0 0
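Alternatively, you can mount by filesystem UUID, which stays stable even if device names change. Find the UUID with blkid and reference it in /etc/fstab:
sudo blkid /dev/sdb1
## then use an entry of the form:
## UUID=<uuid> /mnt/datastore xfs defaults 0 0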
Verify that the disk has been mounted.
$ sudo mount | grep /dev/sdb1
/dev/sdb1 on /mnt/datastore type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
Create the persistent volumes in the disk.
sudo mkdir /mnt/datastore/setup
sudo mkdir /mnt/datastore/elasticsearch
Set the right permissions.
sudo chmod 775 -R /mnt/datastore
sudo chown -R $USER:docker /mnt/datastore
On RHEL-based systems, set SELinux to permissive mode as below.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Create the external volumes:
- For Elasticsearch
docker volume create --driver local \
--opt type=none \
--opt device=/mnt/datastore/elasticsearch \
--opt o=bind elasticsearch
- For setup
docker volume create --driver local \
--opt type=none \
--opt device=/mnt/datastore/setup \
--opt o=bind setup
Verify that the volumes have been created.
$ docker volume list
DRIVER VOLUME NAME
local elasticsearch
local setup
View more details about the volume.
$ docker volume inspect setup
[
    {
        "CreatedAt": "2022-05-06T13:19:33Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/setup/_data",
        "Name": "setup",
        "Options": {
            "device": "/mnt/datastore/setup",
            "o": "bind",
            "type": "none"
        },
        "Scope": "local"
    }
]
Go back to the YAML file and add these lines at the end of the file.
$ vim docker-compose.yml
.......
volumes:
  setup:
    external: true
  elasticsearch:
    external: true
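For context, these named volumes are consumed by the corresponding services. For example, the elasticsearch service maps its volume to the container's default data directory, roughly like this:
elasticsearch:
  volumes:
    - elasticsearch:/usr/share/elasticsearch/data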
The YAML file should now include all of the changes described above.
Step 4 – Bring up the Elastic stack
After the desired changes have been made, bring up the Elastic stack with the command:
docker-compose up -d
Execution output:
[+] Building 6.4s (12/17)
=> [docker-elk_setup internal] load build definition from Dockerfile 0.3s
=> => transferring dockerfile: 389B 0.0s
=> [docker-elk_setup internal] load .dockerignore 0.5s
=> => transferring context: 250B 0.0s
=> [docker-elk_logstash internal] load build definition from Dockerfile 0.6s
=> => transferring dockerfile: 312B 0.0s
=> [docker-elk_elasticsearch internal] load build definition from Dockerfile 0.6s
=> => transferring dockerfile: 324B 0.0s
=> [docker-elk_logstash internal] load .dockerignore 0.7s
=> => transferring context: 188B
........
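While the images build and the containers initialize, you can follow their logs in another terminal:
docker-compose logs -f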
Once complete, check that the containers are running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
096ddc76c6b9 docker-elk_logstash "/usr/local/bin/dock…" 9 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp, 0.0.0.0:5044->5044/tcp, :::5044->5044/tcp, 0.0.0.0:9600->9600/tcp, 0.0.0.0:5000->5000/udp, :::9600->9600/tcp, :::5000->5000/udp docker-elk-logstash-1
ec3aab33a213 docker-elk_kibana "/bin/tini -- /usr/l…" 9 seconds ago Up 5 seconds 0.0.0.0:5601->5601/tcp, :::5601->5601/tcp docker-elk-kibana-1
b365f809d9f8 docker-elk_setup "/entrypoint.sh" 10 seconds ago Up 7 seconds 9200/tcp, 9300/tcp docker-elk-setup-1
45f6ba48a89f docker-elk_elasticsearch "/bin/tini -- /usr/l…" 10 seconds ago Up 7 seconds 0.0.0.0:9200->9200/tcp, :::9200->9200/tcp, 0.0.0.0:9300->9300/tcp, :::9300->9300/tcp docker-elk-elasticsearch-1
Verify that Elasticsearch is running:
$ curl http://localhost:9200 -u elastic:StrongPassw0rd1
{
"name" : "45f6ba48a89f",
"cluster_name" : "my-cluster",
"cluster_uuid" : "hGyChEAVQD682yVAx--iEQ",
"version" : {
"number" : "8.1.3",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "39afaa3c0fe7db4869a161985e240bd7182d7a07",
"build_date" : "2022-04-19T08:13:25.444693396Z",
"build_snapshot" : false,
"lucene_version" : "9.0.0",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
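You can also query the cluster health API; a green or yellow status means the node is healthy (yellow is normal for a single-node cluster):
curl -u elastic:StrongPassw0rd1 'http://localhost:9200/_cluster/health?pretty'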
Step 5 – Access the Kibana Dashboard.
At this point, you can proceed and access the Kibana dashboard running on port 5601. But first, allow the required ports through the firewall.
## For Firewalld
sudo firewall-cmd --add-port=5601/tcp --permanent
sudo firewall-cmd --add-port=5044/tcp --permanent
sudo firewall-cmd --reload
## For UFW
sudo ufw allow 5601/tcp
sudo ufw allow 5044/tcp
Now proceed and access the Kibana dashboard with the URL http://IP_Address:5601 or http://Domain_name:5601.
Login using the credentials set for the Elasticsearch user:
Username: elastic
Password: StrongPassw0rd1
On successful authentication, you should see the dashboard.
Now, to prove that the ELK stack is running as desired, we will inject some log entries. Logstash allows us to send content via TCP as below.
# Using BSD netcat (Debian, Ubuntu, MacOS system, ...)
cat /path/to/logfile.log | nc -q0 localhost 5000
For example:
cat /var/log/syslog | nc -q0 localhost 5000
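You can also send a single test line to confirm the pipeline end to end (the message text is arbitrary):
echo 'Hello from the Elastic stack' | nc -q0 localhost 5000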
Once the logs have been loaded, proceed and view them under the Observability tab.
That is it! You have your Elastic stack (ELK) running perfectly.
Step 6 – Cleanup
If you want to completely remove the Elastic stack (ELK) and all its persistent data, use the command:
$ docker-compose down -v
[+] Running 5/4
⠿ Container docker-elk-kibana-1 Removed 10.5s
⠿ Container docker-elk-setup-1 Removed 0.1s
⠿ Container docker-elk-logstash-1 Removed 9.9s
⠿ Container docker-elk-elasticsearch-1 Removed 3.0s
⠿ Network docker-elk_elk Removed 0.1s
Closing Thoughts.
We have successfully walked through how to run the Elastic stack (ELK) on Docker containers using Docker Compose. Furthermore, we have learned how to create external persistent volumes for Docker containers. I hope this guide was helpful.