
Install and Configure Percona XtraDB Cluster on Rocky Linux 8

Building a secure, highly available, fault-tolerant multi-master MySQL environment has traditionally been challenging because of the many steps involved. Percona makes it much easier to build a secure multi-master MySQL cluster with its own packaged distribution.

Percona Server for MySQL is a drop-in MySQL distribution with performance and reliability enhancements, while Percona XtraDB Cluster (PXC) is the high-availability clustering solution built on top of it, based on Galera replication.

In this guide, we shall cover how to install and configure Percona XtraDB Cluster on Rocky Linux 8. This fully packaged MySQL distribution keeps the setup simple and easy to configure.

Our setup involves three nodes in an active-active (multi-master) configuration, meaning all three nodes accept both reads and writes.

In our environment, we shall use three Rocky Linux 8 nodes with the details below:

Node    IP Address       Hostname
Node 1  192.168.100.20   percona01
Node 2  192.168.100.21   percona02
Node 3  192.168.100.22   percona03

The following steps outline how to install and set up a highly available database cluster using Percona XtraDB Cluster.

  1. Setup static hostnames for the nodes
  2. Configure firewall
  3. Install Percona XtraDB cluster on all nodes
  4. Configure the Percona XtraDB nodes
  5. Test Replication within the cluster.

Step 1: Setup Static Hostnames

We need to set up static hostname entries so that the nodes can resolve one another by name.

On each node, edit the /etc/hosts file and add the details of your node as shown in the example below:

$ sudo vim /etc/hosts
192.168.100.20 percona01
192.168.100.21 percona02
192.168.100.22 percona03
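
To confirm that name resolution works, ping the other nodes by hostname from each server. For example, from percona01:

$ ping -c 2 percona02
$ ping -c 2 percona03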

Step 2: Configure Firewall on Rocky Linux 8 Nodes

We need to configure the firewall to allow the ports used by PXC: 3306 (MySQL client connections), 4444 (State Snapshot Transfer), 4567 (Galera replication traffic), and 4568 (Incremental State Transfer). This should be done on all the nodes.

sudo firewall-cmd --add-port={3306/tcp,4444/tcp,4567/tcp,4568/tcp} --zone=public --permanent
sudo firewall-cmd --reload
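
You can verify that the ports were opened by listing the ports in the public zone; the output should include 3306/tcp, 4444/tcp, 4567/tcp and 4568/tcp:

sudo firewall-cmd --list-ports --zone=public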

Step 3: Install Percona XtraDB Cluster on Rocky Linux 8

Next, we need to install PXC on all three nodes. The steps below walk through the installation on Rocky Linux 8.

1. Configure Percona repositories

Configure the Percona repositories for Rocky Linux 8 so we can download the latest version of PXC.

sudo yum -y install https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable Percona repositories:

sudo percona-release enable-only pxc-80 release
sudo percona-release enable tools release
sudo yum repolist 

2. Install Percona XtraDB Cluster

Install PXC on the three Rocky Linux nodes.

sudo yum -y module disable mysql
sudo yum install percona-xtradb-cluster
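
To confirm that the packages were installed, you can list the Percona packages on the system (the exact package names may differ slightly between releases):

rpm -qa | grep -i percona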

3. Start Percona XtraDB Cluster Server

Start the MySQL service for PXC.

sudo service mysql start
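
You can confirm that the server came up correctly by checking the service status:

sudo systemctl status mysql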

4. Configure the root password for PXC

After installation, we need to obtain the temporary password generated for the MySQL root user from the log, then change it from the MySQL console.

sudo grep 'temporary password' /var/log/mysqld.log

Use the password obtained to log in to the database server on each of the three nodes.

$ mysql -u root -p

Change the password to a strong password of your choice.

mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Str0nGP@ssw0rd';
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye
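
Note that MySQL 8.0 installations typically ship with the validate_password component enabled, so the new password must meet the configured complexity policy. If the ALTER USER statement is rejected, you can check the active policy with a quick one-liner:

$ mysql -u root -p -e "SHOW VARIABLES LIKE 'validate_password%';"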

5. Stop MySQL Service

Stop MySQL service on the three nodes to allow us to configure the cluster.

sudo service mysql stop

Step 4: Configuring Percona XtraDB Cluster on Rocky Linux 8

The next step is to configure PXC on the three nodes, one node at a time. Follow the guide below:

1. Configure the First node

On the first node, edit the /etc/my.cnf file and add the following settings under the [mysqld] section.

$ sudo vi /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.100.20,192.168.100.21,192.168.100.22

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# Using the MyISAM storage engine is not recommended.
default_storage_engine=InnoDB

# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node 1 address
wsrep_node_address=192.168.100.20

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=c4geeks_cluster

In the above, edit the following lines and put the correct information according to your cluster.

wsrep_cluster_address=gcomm://<your node IPs>
wsrep_node_address=<IP of first node>
wsrep_cluster_name=<the name of your cluster>
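
Note that PXC 8.0 encrypts cluster traffic by default (pxc_encrypt_cluster_traffic=ON), which requires the same SSL certificates on all nodes before SST can succeed. Either copy the certificate and key files (by default ca.pem, server-cert.pem and server-key.pem in the data directory) from the first node to the other nodes, or, for a quick lab setup only, disable the encryption by adding the line below to the [mysqld] section on every node:

pxc_encrypt_cluster_traffic=OFF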

Bootstrap the cluster from the first node as below:

[root@percona01 ~]# systemctl start mysql@bootstrap.service
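
Confirm that the bootstrap service started successfully before moving on to the other nodes:

[root@percona01 ~]# systemctl status mysql@bootstrap.service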

Login to the MySQL server from the first node and check the status of the cluster.

$ mysql -u root -p
Enter Password:
mysql>
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | aead1f81-ec9e-11eb-8dbe-ffa1e1724fd3 |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
75 rows in set (0.00 sec)

The above output shows that our cluster has been successfully bootstrapped. We now need to add the second and third nodes to the cluster.
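
Instead of scanning the full wsrep status output, you can also query just the cluster size with a single shell command, which is handy while adding the remaining nodes:

$ mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"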

2. Configure the Second Node

On the second node, edit the /etc/my.cnf file and add the following settings under the [mysqld] section.

$ sudo vi /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.100.20,192.168.100.21,192.168.100.22

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# Using the MyISAM storage engine is not recommended.
default_storage_engine=InnoDB

# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node 2 address
wsrep_node_address=192.168.100.21

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=c4geeks_cluster

Make sure you use the correct details for the following lines:

wsrep_cluster_address=gcomm://<your node IPs>
wsrep_node_address=<IP of second node>
wsrep_cluster_name=<the name of your cluster>

Start mysql service on the second node.

[root@percona02 ~]# systemctl start mysql

Check the cluster status from either of the nodes to confirm that the second node has successfully joined the cluster.

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | aead1f81-ec9e-11eb-8dbe-ffa1e1724fd3 |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 2                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

The above output shows that the node has successfully joined the cluster since the value of wsrep_cluster_size has changed from 1 to 2.

3. Configure the Third Node

To add the third node to the cluster, edit the /etc/my.cnf file and make the changes below.

$ sudo vi /etc/my.cnf

[mysqld]
datadir=/var/lib/mysql
user=mysql

# Path to Galera library
wsrep_provider=/usr/lib64/galera4/libgalera_smm.so

# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://192.168.100.20,192.168.100.21,192.168.100.22

# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW

# Using the MyISAM storage engine is not recommended.
default_storage_engine=InnoDB

# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2

# Node 3 address
wsrep_node_address=192.168.100.22

# SST method
wsrep_sst_method=xtrabackup-v2

# Cluster name
wsrep_cluster_name=c4geeks_cluster

Adjust the lines below with the correct information for your cluster:

wsrep_cluster_address=gcomm://<your node IPs>
wsrep_node_address=<IP of third node>
wsrep_cluster_name=<the name of your cluster>

Start mysql service on the third node.

[root@percona03 ~]# systemctl start mysql

When the server starts on the third node, check the cluster status to ascertain if the third node has successfully joined the cluster.

mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | aead1f81-ec9e-11eb-8dbe-ffa1e1724fd3 |
...
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced                               |
...
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
...
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
40 rows in set (0.01 sec)

We can see from the output above that the third node joined successfully, with wsrep_cluster_size now at 3.
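
At this point the first node is still running under the bootstrap unit. Once all nodes report Synced, you can switch it back to the regular service so that every node is managed the same way; the node will rejoin the running cluster when it restarts:

[root@percona01 ~]# systemctl stop mysql@bootstrap.service
[root@percona01 ~]# systemctl start mysql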

Step 5: Testing Replication on Percona XtraDB Cluster

The next step is to confirm that replication is actually working within the cluster. We will make changes from different nodes and verify that they are replicated to the other members.

  • Create a database from the third node:
mysql@percona03> CREATE DATABASE percona_testdb;
Query OK, 1 row affected (0.01 sec)
  • Create a table in that database from the second node:
mysql@percona02> USE percona_testdb;
Database changed

mysql@percona02> CREATE TABLE test_table (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
  • Add some values to the table using the first node.
mysql@percona01> INSERT INTO percona_testdb.test_table VALUES (3, 'percona03');
Query OK, 1 row affected (0.02 sec)
  • From the third node, check if you can retrieve the values of the table we created above:
mysql@percona03> SELECT * FROM percona_testdb.test_table;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       3 | percona03 |
+---------+-----------+
1 row in set (0.00 sec)
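
As a final check, you can read the same row back from the second node to confirm that the write has reached every member of the cluster:

[root@percona02 ~]# mysql -u root -p -e "SELECT * FROM percona_testdb.test_table;"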

Wrap Up

We have demonstrated how to set up a scalable, highly available, and secure MySQL cluster using Percona XtraDB Cluster (PXC). Give this solution a try: it uses MySQL as the database engine but is tuned for easy configuration and clustering. Feel free to reach out if you run into any issues setting up this environment. Cheers!
