A database solution that gives database administrators and companies confidence that their data is replicated and backed up across more than one instance is a dream come true. It dispels the fears, the panic and the doubts that usually shroud them whenever the topic of data loss hovers over their cup of coffee. Dig around the internet for a highly available database solution and it is hard to miss the conspicuous words "Galera Cluster". Thanks to the overpowering magic of curiosity, we had to stretch out our hands, grab Galera Cluster by the neck, pull it closer and take a better look. The result is this guide, which dives in head first to deploy the cluster and proxy requests to it using ProxySQL.
So you might be asking, "What is Galera Cluster?". This part is just for you. Let us begin with the word cluster so that everything becomes much clearer. A cluster is a group of servers that work together to achieve a common goal. In this context, a cluster is a group of three or more servers working together as a complete database solution to which clients can write and from which they can edit, query and fetch data. Now we are ready to roll.
Galera Cluster is a synchronous multi-master database cluster built on MySQL/MariaDB and the InnoDB storage engine. Synchronous means that the nodes (servers) keep all replicas in the same state by applying every transaction to all of them as part of a single commit, so no replica lags behind. Galera is an easy-to-use, high-availability solution that provides high system up-time, no data loss and scalability for future growth.
Key Features and Benefits of Galera Cluster
- True multi-master, active-active cluster: read and write to any node at any time.
- Synchronous replication: no slave lag, no data lost when a node crashes.
- Tightly coupled: all nodes hold the same state; no diverged data between nodes is allowed.
- Multi-threaded slave: better performance, for any workload.
- No master-slave failover operations or use of a VIP.
- Hot standby: no downtime during failover (since there is no failover).
- Automatic node provisioning: no need to manually back up the database and copy it to a new node.
- Supports InnoDB.
- Transparent to applications: requires no (or minimal) changes to the application.
- No read/write splitting needed.
- Easy to use and deploy.
Later in the guide, we shall add ProxySQL, a MySQL proxy server that acts as a bridge between the Galera cluster and the client applications that access (write to, query, fetch from) the database cluster.
Setup Prerequisites
We assume that you have the following before we proceed:
- Access to three or more hosts with sudo privileges.
- An Ansible control host.
- An SSH key pair on the Ansible host, whose public key will be copied to your hosts (if you need to generate one, see the sketch after this list).
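If you do not yet have an SSH key pair on the Ansible host, generate one first. A minimal sketch (the comment label is just a tag of our choosing; accept the default path when prompted):
# Generate a 4096-bit RSA key pair on the Ansible host
ssh-keygen -t rsa -b 4096 -C "ansible-controller"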
Step 1: Copy your public key to the hosts
To copy your key to the hosts, create a file such as "list_of_servers", list the IP addresses of your hosts in it, then create a script that does the work for you.
$ vim list_of_servers
192.168.20.11
192.168.20.12
192.168.20.13
Then create the bash script
$ vim copy_ssh_keys.sh
#!/bin/sh
username="root" # Set the remote servers' ssh username
for ip in $(cat list_of_servers); do
  ssh-copy-id -i ~/.ssh/id_rsa.pub ${username}@${ip}
done
Make the script executable
chmod +x copy_ssh_keys.sh
Then run it. Sample output is provided below. Simply enter the remote user's password when prompted.
$ ./copy_ssh_keys.sh
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/tech/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/tech/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/tech/.ssh/id_rsa.pub"
The authenticity of host '192.168.20.11 (192.168.20.11)' can't be established.
ECDSA key fingerprint is SHA256:daMKa1FL3+miBpGer95yq54f0//GOcNhSlb5FnTEPP8.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.20.11'"
and check to make sure that only the key(s) you wanted were added.
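Before moving on, you can optionally confirm that passwordless login now works on every host. A quick sketch reusing the same list_of_servers file:
# BatchMode=yes makes ssh fail instead of prompting for a password,
# so any host still requiring one will stand out
for ip in $(cat list_of_servers); do
  ssh -o BatchMode=yes root@"${ip}" 'hostname'
done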
Step 2: Create a role that will install MariaDB and more on all of the servers
In this step we will leverage the power of Ansible to install all of the packages we need to proceed. In addition, we will update each server's "/etc/hosts" file with the hosts we need.
Install ansible:
# Debian / Ubuntu
sudo apt update
sudo apt install ansible python3-pip
sudo pip3 install pymysql
# Fedora
sudo dnf install ansible python3-pip
sudo pip3 install pymysql
# CentOS / Rocky Linux
sudo yum -y install epel-release
sudo yum -y install ansible python3-pip
sudo pip3 install pymysql
After the role runs, most of the remaining work will be configuration, which we shall do manually. Initialize the role skeleton first:
$ ansible-galaxy init install-mariadb
- Role install-mariadb was created successfully
The above command will create a directory called “install-mariadb” with the structure shown below.
$ tree install-mariadb
install-mariadb
├── README.md
├── defaults
│ └── main.yml
├── files
├── handlers
│ └── main.yml
├── meta
│ └── main.yml
├── tasks
│ └── main.yml
├── templates
├── tests
│ ├── inventory
│ └── test.yml
└── vars
└── main.yml
After that, update the "main.yml" file in the tasks directory within the new "install-mariadb" directory created by the command above.
$ vim install-mariadb/tasks/main.yml
---
- name: Insert hostnames config
  blockinfile:
    path: /etc/hosts
    block: |
      192.168.20.11 db1.geeksforgeeks.org db1
      192.168.20.12 db2.geeksforgeeks.org db2
      192.168.20.13 db3.geeksforgeeks.org db3
    backup: yes

- name: Install prerequisites
  apt: name={{ item }} update_cache=yes state=latest force_apt_get=yes
  loop: [ 'aptitude', 'python3', 'python3-pip' ]

# Install MariaDB server and other packages
- name: Install MariaDB Packages
  apt: name={{ item }} update_cache=yes state=latest
  loop: [ 'mariadb-server', 'python3-pymysql', 'mariadb-client', 'vim', 'sudo' ]

# Start MariaDB Service
- name: Start MariaDB service
  service:
    name: mariadb
    state: started
  become: true

# MariaDB Configuration
- name: Sets the root password
  mysql_user:
    name: root
    password: "{{ mysql_root_password }}"
    login_user: "{{ mysql_user }}"
    state: present
    login_unix_socket: /var/run/mysqld/mysqld.sock

- name: Removes all anonymous user accounts
  mysql_user:
    name: ''
    host_all: yes
    state: absent
    login_user: root
    login_password: "{{ mysql_root_password }}"

- name: Removes the MySQL test database
  mysql_db:
    name: test
    state: absent
    login_user: root
    login_password: "{{ mysql_root_password }}"
Add the variables used in the tasks file:
$ vim install-mariadb/vars/main.yml
# vars file for install-mariadb
mysql_root_password: "StrongPassword"
mysql_user: "root"
disable_default: true
username: devops
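A plaintext root password in vars/main.yml is fine for a lab, but for anything shared you may want to encrypt the file with Ansible Vault. A minimal sketch:
# Encrypt the vars file in place; you will be prompted for a vault password
ansible-vault encrypt install-mariadb/vars/main.yml
# Later, add --ask-vault-pass to your ansible-playbook command so it can decrypt the file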
Our role is now ready. What remains is to add the hosts to an inventory, then create a playbook that calls our role. Create an inventory file and add the hosts as follows:
$ vim hosts
[galera]
192.168.20.11
192.168.20.12
192.168.20.13
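With the inventory in place, it is worth confirming that Ansible can reach all three hosts before running anything heavier. A quick sketch using the built-in ping module (replace tech with your login username):
# Expect a "pong" reply from each of the three hosts
ansible -i hosts galera -m ping --user=tech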
Create a playbook similar to the following. Ensure that the playbook file sits one directory above your role directory (alongside the "install-mariadb" folder).
$ vim playbook.yaml
---
- hosts: galera
  become: yes
  become_user: root
  become_method: sudo
  roles:
    - install-mariadb
Then run Ansible to install the packages and configure the servers. Enter the sudo (become) password for the hosts when prompted:
$ ansible-playbook -i hosts --become --user=tech --become-user=root --become-method=sudo --ask-become-pass -v playbook.yaml -e 'ansible_python_interpreter=/usr/bin/python3'
Using /etc/ansible/ansible.cfg as config file
BECOME password:
Replace the tech username with your server's login username.
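Once the playbook finishes, a quick ad-hoc check that MariaDB is running on every node can save debugging time later. A sketch using the same inventory and login user:
# Each host should report "active"
ansible -i hosts galera --become --user=tech --ask-become-pass -m shell -a 'systemctl is-active mariadb'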
Step 3: Configure Galera Cluster
The next step is to configure the Galera cluster on the three MariaDB hosts we have. Comment out the bind-address line in the file /etc/mysql/mariadb.conf.d/50-server.cnf, which binds the MariaDB service to 127.0.0.1; the nodes need to reach one another over the network to collaborate in the cluster.
Configure on all nodes:
$ sudo vim /etc/mysql/mariadb.conf.d/50-server.cnf
#bind-address = 127.0.0.1
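Rather than editing the file by hand on each node, you could comment the line on all three hosts in one shot. A hedged sketch using sed through an Ansible ad-hoc command (the pattern assumes the stock 50-server.cnf layout, where bind-address starts the line):
# Prefix the bind-address line with '#' on every node in the galera group
ansible -i hosts galera --become --user=tech --ask-become-pass -m shell -a "sed -i 's/^bind-address/#bind-address/' /etc/mysql/mariadb.conf.d/50-server.cnf"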
Once we are sure that our nodes are no longer bound to localhost, we can comfortably configure them as follows:
Let us Configure Node 1
Add the following content to the MariaDB configuration file. Remember to set "wsrep_node_address" to the hostname or IP address of your first host.
$ sudo vim /etc/mysql/mariadb.conf.d/galera.cnf
[galera]
wsrep_on = ON
wsrep_cluster_name = "MariaDB Galera Cluster"
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="db1"
Initialize the Galera cluster and restart MariaDB. Note that the empty "gcomm://" address is for bootstrapping only; once all three nodes have joined, consider changing node 1's wsrep_cluster_address to "gcomm://db1,db2,db3" so that a later restart rejoins the existing cluster instead of bootstrapping a new one.
sudo galera_new_cluster
sudo systemctl restart mariadb
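At this point node 1 should be running as a one-node primary cluster. You can verify this before touching the other nodes; a quick check run on db1:
# Expect wsrep_cluster_size = 1 on the freshly bootstrapped node
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"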
Let us Configure Node 2:
Create the database configuration file for node 2:
$ sudo vim /etc/mysql/mariadb.conf.d/galera.cnf
[galera]
wsrep_on = ON
wsrep_cluster_name = "MariaDB Galera Cluster"
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://db1,db2,db3"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="db2"
Let us Configure Node 3:
Open the same file as on the other nodes and update it with the following configuration.
$ sudo vim /etc/mysql/mariadb.conf.d/galera.cnf
[galera]
wsrep_on = ON
wsrep_cluster_name = "MariaDB Galera Cluster"
wsrep_provider = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_address = "gcomm://db1,db2,db3"
binlog_format = row
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
bind-address = 0.0.0.0
wsrep_node_address="db3"
Restart the MariaDB service on node2 and node3:
sudo systemctl restart mariadb
Step 4: Validate Galera Settings
Log in to any of the three nodes as the root user, then confirm that the cluster settings are correct.
$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
MariaDB [(none)]>
Check status
MariaDB [(none)]> show status like 'wsrep_%';
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| Variable_name | Value |
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
| wsrep_local_state_uuid | 6f0406d3-3295-11ec-8446-770e11038a6f |
| wsrep_protocol_version | 10 |
| wsrep_last_committed | 3 |
| wsrep_replicated | 0 |
| wsrep_replicated_bytes | 0 |
| wsrep_repl_keys | 0 |
| wsrep_repl_keys_bytes | 0 |
| wsrep_repl_data_bytes | 0 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 3 |
| wsrep_received_bytes | 264 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 1 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0 |
| wsrep_local_cached_downto | 3 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_active | false |
| wsrep_flow_control_requested | false |
| wsrep_cert_deps_distance | 0 |
| wsrep_apply_oooe | 0 |
| wsrep_apply_oool | 0 |
| wsrep_apply_window | 0 |
| wsrep_commit_oooe | 0 |
| wsrep_commit_oool | 0 |
| wsrep_commit_window | 0 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 0 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0 |
| wsrep_open_transactions | 0 |
| wsrep_open_connections | 0 |
| wsrep_incoming_addresses | AUTO,AUTO,AUTO |
| wsrep_cluster_weight | 3 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0.000596085/0.00133766/0.00240084/0.000607165/5 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | cdf9b4e6-3295-11ec-8cb6-f2ce576d6ccc |
| wsrep_gmcast_segment | 0 |
| wsrep_applier_thread_count | 1 |
| wsrep_cluster_capabilities | |
| wsrep_cluster_conf_id | 3 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | 6f0406d3-3295-11ec-8446-770e11038a6f |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 2 |
| wsrep_provider_capabilities | :MULTI_MASTER:CERTIFICATION:PARALLEL_APPLYING:TRX_REPLAY:ISOLATION:PAUSE:CAUSAL_READS:INCREMENTAL_WRITESET:UNORDERED:PREORDERED:STREAMING:NBO: |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <[email protected]> |
| wsrep_provider_version | 4.9(rcece3ba2) |
| wsrep_ready | ON |
| wsrep_rollbacker_thread_count | 1 |
| wsrep_thread_count | 2 |
+-------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+
69 rows in set (0.002 sec)
MariaDB [(none)]>
Confirm that the cluster size is 3 under:
wsrep_cluster_size 3
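If you prefer a targeted check over the full status dump, the key values can be pulled straight from the shell on any node:
# A healthy three-node cluster shows size 3, state Synced and status Primary
mysql -u root -p -e "SHOW STATUS WHERE Variable_name IN ('wsrep_cluster_size','wsrep_local_state_comment','wsrep_cluster_status');"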
We can create a test database on any of the nodes and check its availability on the other nodes.
root@db1:~# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
MariaDB [(none)]> create database demodb1;
Query OK, 1 row affected (0.003 sec)
Login to the other two nodes (db2 and db3) and check if the database was replicated to them.
root@db2:~# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| demodb1 |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.001 sec)
Node 3
root@db3:~# mysql -u root -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.5.12-MariaDB-0+deb11u1 Debian 11
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| demodb1 |
| information_schema |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.001 sec)
This simple exercise has managed to confirm that the database created on db1 is replicated across the cluster. What a good feeling!
Step 5: Install the ProxySQL Proxy Server
With a working Galera cluster, we need to set up a ProxySQL server that will distribute traffic equally to the three nodes. ProxySQL can run on the same server as the application or on an independent server. This article covers how to set it up on an independent Debian / Rocky / CentOS Linux host with the steps below:
CentOS
Add ProxySQL repo
sudo tee /etc/yum.repos.d/proxysql.repo<<EOF
[proxysql_repo]
name= ProxySQL YUM repository
baseurl=https://repo.proxysql.com/ProxySQL/proxysql-2.3.x/centos/\$releasever
gpgcheck=1
gpgkey=https://repo.proxysql.com/ProxySQL/repo_pub_key
EOF
Then install ProxySQL together with the MySQL client:
sudo yum install proxysql mysql
Debian | Ubuntu
You can install ProxySQL using the DEB package as follows. The release below targets Debian 10; check the ProxySQL GitHub releases page for a build matching your distribution version:
wget https://github.com/sysown/proxysql/releases/download/v2.2.0/proxysql_2.2.0-debian10_amd64.deb
sudo dpkg -i proxysql_2.2.0-debian10_amd64.deb
sudo systemctl start proxysql
sudo systemctl enable proxysql
sudo systemctl status proxysql
Step 6: Configure ProxySQL
Once ProxySQL has been installed successfully, ensure that you start and enable the service on your host using systemd:
sudo systemctl enable --now proxysql
After that, we shall change gears and configure ProxySQL through its admin interface. The admin interface allows you to change and save configuration without restarting the proxy; this is achieved through SQL queries against the admin database.
To connect to the ProxySQL admin interface, we need a mysql client. The admin interface runs locally on port 6032, and the default username and password combination is admin and admin respectively.
$ mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='Admin> '
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.5.30 (ProxySQL Admin Module)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Admin>
Since we are good advocates of sound security practices, we shall change the default admin password by running the commands below:
$ mysql -u admin -padmin -h 127.0.0.1 -P6032 --prompt='Admin> '
Admin> UPDATE global_variables SET variable_value='admin:StrongPassword' WHERE variable_name='admin-admin_credentials';
Query OK, 0 rows affected (0.01 sec)
Ensure that StrongPassword is set to a wonderfully strong password.
The ProxySQL configuration system consists of three layers:
- Memory – altered when making modifications on the command line.
- Disk – used for persistent configuration changes.
- Runtime – the effective configuration that ProxySQL actually runs with.
This means that the query above has only been written to memory. To make it effective and persistent, we need to load the configuration to runtime and then save it to disk.
The following queries do exactly that:
LOAD ADMIN VARIABLES TO RUNTIME;
SAVE ADMIN VARIABLES TO DISK;
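After loading and saving, the new admin credentials take effect for subsequent connections. A quick check, assuming you set the password shown earlier (the old admin/admin pair should now be rejected):
# Reconnect to the admin interface with the new password
mysql -u admin -pStrongPassword -h 127.0.0.1 -P6032 --prompt='Admin> '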
Step 7: Configure Monitoring in Galera cluster
For ProxySQL to know the health status of the nodes in the Galera Cluster, it must be able to communicate with them all. ProxySQL connects to the nodes through a dedicated monitoring user.
We will create the user on one of the MariaDB nodes. Since the cluster is already up and running, the user will be replicated automatically to the other nodes.
MariaDB [(none)]> CREATE USER 'monitor'@'%' IDENTIFIED BY 'StrongPassword';
MariaDB [(none)]> flush privileges;
You are at liberty to use any strong password that you prefer here.
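Before configuring ProxySQL to use this account, it is worth checking from the ProxySQL host that the monitor user can actually reach a backend node. A quick sketch against our first node's IP (adjust the address and password to your setup):
# A simple SELECT 1 confirms remote connectivity and credentials
mysql -u monitor -pStrongPassword -h 192.168.20.11 -P3306 -e "SELECT 1;"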
Step 8: Configure Monitoring In ProxySQL
Configure the ProxySQL admin to constantly monitor the backend nodes by adding the user credentials we configured in the step above. Remember to change the password value to match whatever you used in the previous step.
Admin> UPDATE global_variables SET variable_value='monitor' WHERE variable_name='mysql-monitor_username';
Query OK, 1 row affected (0.01 sec)
Admin> UPDATE global_variables SET variable_value='StrongPassword' WHERE variable_name='mysql-monitor_password';
Query OK, 1 row affected (0.00 sec)
Add the following monitoring parameters for intervals:
Admin> UPDATE global_variables SET variable_value='2000' WHERE variable_name IN ('mysql-monitor_connect_interval','mysql-monitor_ping_interval','mysql-monitor_read_only_interval');
Query OK, 3 rows affected (0.00 sec)
Confirm the variables we just configured in the above step:
Admin> SELECT * FROM global_variables WHERE variable_name LIKE 'mysql-monitor_%';
+----------------------------------------------------------------------+----------------+
| variable_name | variable_value |
+----------------------------------------------------------------------+----------------+
| mysql-monitor_enabled | true |
| mysql-monitor_connect_timeout | 600 |
| mysql-monitor_ping_max_failures | 3 |
| mysql-monitor_ping_timeout | 1000 |
| mysql-monitor_read_only_max_timeout_count | 3 |
| mysql-monitor_replication_lag_interval | 10000 |
| mysql-monitor_replication_lag_timeout | 1000 |
| mysql-monitor_replication_lag_count | 1 |
| mysql-monitor_groupreplication_healthcheck_interval | 5000 |
| mysql-monitor_groupreplication_healthcheck_timeout | 800 |
| mysql-monitor_groupreplication_healthcheck_max_timeout_count | 3 |
| mysql-monitor_groupreplication_max_transactions_behind_count | 3 |
| mysql-monitor_groupreplication_max_transactions_behind_for_read_only | 1 |
| mysql-monitor_galera_healthcheck_interval | 5000 |
| mysql-monitor_galera_healthcheck_timeout | 800 |
| mysql-monitor_galera_healthcheck_max_timeout_count | 3 |
| mysql-monitor_replication_lag_use_percona_heartbeat | |
| mysql-monitor_query_interval | 60000 |
| mysql-monitor_query_timeout | 100 |
| mysql-monitor_slave_lag_when_null | 60 |
| mysql-monitor_threads_min | 8 |
| mysql-monitor_threads_max | 128 |
| mysql-monitor_threads_queue_maxsize | 128 |
| mysql-monitor_wait_timeout | true |
| mysql-monitor_writer_is_also_reader | true |
| mysql-monitor_username | monitor |
| mysql-monitor_password | StrongPassword |
| mysql-monitor_history | 600000 |
| mysql-monitor_connect_interval | 2000 |
| mysql-monitor_ping_interval | 2000 |
| mysql-monitor_read_only_interval | 2000 |
| mysql-monitor_read_only_timeout | 500 |
+----------------------------------------------------------------------+----------------+
32 rows in set (0.00 sec)
Changes made to the MySQL monitor settings in the global_variables table are applied after executing the LOAD MYSQL VARIABLES TO RUNTIME statement. To persist the configuration across restarts, SAVE MYSQL VARIABLES TO DISK must also be executed:
Admin> LOAD MYSQL VARIABLES TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
Admin> SAVE MYSQL VARIABLES TO DISK;
Query OK, 140 rows affected (0.01 sec)
Step 9: Add the Created Galera Cluster
ProxySQL uses host groups to categorize the backend nodes. A host group is a set of nodes identified by a positive number, e.g. 1 or 2. Host groups allow ProxySQL to route queries to different sets of hosts using its query routing rules.
ProxySQL has the following logical host groups:
- Writers – MySQL nodes that accept queries that write/change data – primary nodes.
- Readers – nodes that only accept read queries; they can be seen as slave nodes.
We will assign the following host group IDs to the above hostgroups:
Writers: 1
Readers: 2
Writers are also readers by default.
Configure the "mysql_replication_hostgroups" table in the main database, specifying the reader and writer host groups.
Admin> SHOW CREATE TABLE main.mysql_replication_hostgroups\G
*************************** 1. row ***************************
table: mysql_replication_hostgroups
Create Table: CREATE TABLE mysql_replication_hostgroups (
writer_hostgroup INT CHECK (writer_hostgroup>=0) NOT NULL PRIMARY KEY,
reader_hostgroup INT NOT NULL CHECK (reader_hostgroup<>writer_hostgroup AND reader_hostgroup>=0),
check_type VARCHAR CHECK (LOWER(check_type) IN ('read_only','innodb_read_only','super_read_only','read_only|innodb_read_only','read_only&innodb_read_only')) NOT NULL DEFAULT 'read_only',
comment VARCHAR NOT NULL DEFAULT '', UNIQUE (reader_hostgroup))
1 row in set (0.00 sec)
Then add the host groups to the table:
Admin> INSERT INTO main.mysql_replication_hostgroups (writer_hostgroup,reader_hostgroup,comment) VALUES (1,2,'galera_cluster');
After that, add the Galera cluster nodes. Replace the IP addresses below with those of your own nodes (192.168.20.11-13 in our earlier setup):
Admin> INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.79.184',3306);
Admin> INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.78.69',3306);
Admin> INSERT INTO main.mysql_servers(hostgroup_id,hostname,port) VALUES (1,'192.168.77.127',3306);
Admin> LOAD MYSQL SERVERS TO RUNTIME
Query OK, 0 rows affected (0.01 sec)
Admin> SELECT * FROM mysql_servers;
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.168.79.184 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
| 1 | 192.168.78.69 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
| 1 | 192.168.77.127 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
Admin> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.14 sec)
Admin> SAVE MYSQL VARIABLES TO DISK;
Query OK, 147 rows affected (0.04 sec)
To enable the replication host group, load mysql_replication_hostgroups to runtime using the same LOAD command used for the MySQL servers; LOAD MYSQL SERVERS TO RUNTIME processes both the mysql_servers and mysql_replication_hostgroups tables.
Admin> LOAD MYSQL SERVERS TO RUNTIME;
Query OK, 0 rows affected (0.00 sec)
Now confirm that the servers are reachable by running the monitoring queries:
Admin> SELECT * FROM monitor.mysql_server_connect_log ORDER BY time_start_us DESC LIMIT 3;
+----------------+------+------------------+-------------------------+---------------+
| hostname | port | time_start_us | connect_success_time_us | connect_error |
+----------------+------+------------------+-------------------------+---------------+
| 192.168.79.184 | 3306 | 1634852419531867 | 1252 | NULL |
| 192.168.78.69 | 3306 | 1634852419509819 | 2931 | NULL |
| 192.168.77.127 | 3306 | 1634852419487624 | 1821 | NULL |
+----------------+------+------------------+-------------------------+---------------+
3 rows in set (0.00 sec)
Admin> SELECT * FROM monitor.mysql_server_ping_log ORDER BY time_start_us DESC LIMIT 3;
+----------------+------+------------------+----------------------+------------+
| hostname | port | time_start_us | ping_success_time_us | ping_error |
+----------------+------+------------------+----------------------+------------+
| 192.168.77.127 | 3306 | 1634852441520867 | 476 | NULL |
| 192.168.79.184 | 3306 | 1634852441500600 | 768 | NULL |
| 192.168.78.69 | 3306 | 1634852441480454 | 1834 | NULL |
+----------------+------+------------------+----------------------+------------+
3 rows in set (0.00 sec)
Admin> SELECT * FROM mysql_servers;
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| hostgroup_id | hostname | port | gtid_port | status | weight | compression | max_connections | max_replication_lag | use_ssl | max_latency_ms | comment |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
| 1 | 192.168.79.184 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
| 1 | 192.168.78.69 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
| 1 | 192.168.77.127 | 3306 | 0 | ONLINE | 1 | 0 | 1000 | 0 | 0 | 0 | |
+--------------+----------------+------+-----------+--------+--------+-------------+-----------------+---------------------+---------+----------------+---------+
3 rows in set (0.00 sec)
As a final step, persist the configuration to disk.
Admin> SAVE MYSQL SERVERS TO DISK;
Query OK, 0 rows affected (0.14 sec)
Admin> SAVE MYSQL VARIABLES TO DISK;
Query OK, 147 rows affected (0.04 sec)
Step 10: Create MySQL users to connect via ProxySQL
The last step is to create MySQL users that will be connecting to the cluster through the ProxySQL instance.
Create remote user on Galera cluster
To achieve this, we create a MySQL user on one of the Galera cluster nodes that client applications will use to connect. Log in to one of the hosts and create a database user as follows:
MariaDB [(none)]> create user 'demouser'@'%' identified by 'StrongPassword';
Query OK, 0 rows affected (0.079 sec)
Assign the necessary privileges to the user, for example access to a certain database:
MariaDB [(none)]> create database demodb;
MariaDB [(none)]> grant all privileges on demodb.* to 'demouser'@'%' with grant option;
MariaDB [(none)]> flush privileges;
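Because Galera replicates these statements cluster-wide, the new user and database should already exist on the other nodes. A quick check you can run on db2 or db3 to confirm:
# Both the demouser account and the demodb database should be visible here too
mysql -u root -p -e "SELECT User, Host FROM mysql.user WHERE User='demouser'; SHOW DATABASES LIKE 'demodb';"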
Create remote user on ProxySQL Admin
This is done by adding entries in the “mysql_users” table in the main database.
Admin> SHOW CREATE TABLE mysql_users\G
*************************** 1. row ***************************
table: mysql_users
Create Table: CREATE TABLE mysql_users (
username VARCHAR NOT NULL,
password VARCHAR,
active INT CHECK (active IN (0,1)) NOT NULL DEFAULT 1,
use_ssl INT CHECK (use_ssl IN (0,1)) NOT NULL DEFAULT 0,
default_hostgroup INT NOT NULL DEFAULT 0,
default_schema VARCHAR,
schema_locked INT CHECK (schema_locked IN (0,1)) NOT NULL DEFAULT 0,
transaction_persistent INT CHECK (transaction_persistent IN (0,1)) NOT NULL DEFAULT 1,
fast_forward INT CHECK (fast_forward IN (0,1)) NOT NULL DEFAULT 0,
backend INT CHECK (backend IN (0,1)) NOT NULL DEFAULT 1,
frontend INT CHECK (frontend IN (0,1)) NOT NULL DEFAULT 1,
max_connections INT CHECK (max_connections >=0) NOT NULL DEFAULT 10000,
attributes VARCHAR CHECK (JSON_VALID(attributes) OR attributes = '') NOT NULL DEFAULT '',
comment VARCHAR NOT NULL DEFAULT '',
PRIMARY KEY (username, backend),
UNIQUE (username, frontend))
1 row in set (0.00 sec)
The table is initially empty; users are added by inserting rows that specify the username, password and default host group.
Admin> INSERT INTO mysql_users(username,password,default_hostgroup) VALUES ('demouser','StrongPassword',1);
You can check that the user was added:
Admin> SELECT * FROM mysql_users;
+----------+----------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| username | password | active | use_ssl | default_hostgroup | default_schema | schema_locked | transaction_persistent | fast_forward | backend | frontend | max_connections | attributes | comment |
+----------+----------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
| demouser | StrongPassword | 1 | 0 | 1 | NULL | 0 | 1 | 0 | 1 | 1 | 10000 | | |
+----------+----------------+--------+---------+-------------------+----------------+---------------+------------------------+--------------+---------+----------+-----------------+------------+---------+
1 row in set (0.00 sec)
If everything is okay, save changes:
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
Step 11: Test that the user can connect
The ProxySQL client interface listens on port 6033. We can try connecting through it using the user we created on both Galera and ProxySQL.
root@proxy:~# mysql -udemouser -h 127.0.0.1 -P6033 -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 17
Server version: 5.5.30 (ProxySQL)
Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
We can now try running queries on the cluster.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| demodb |
| information_schema |
+--------------------+
2 rows in set (0.00 sec)
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| db3 |
+------------+
1 row in set (0.00 sec)
mysql>
Other queries
$ mysql -u demouser -p demodb -h 127.0.0.1 -P6033 -e"SELECT @@port"
Enter password:
+--------+
| @@port |
+--------+
| 3306 |
+--------+
As you can attest, we now have visibility of demodb, the database we granted rights on during the Galera setup. We can also confirm that we are getting responses from db3 of the Galera cluster.
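As a final sanity check, we can push a write through the proxy and read it back on a backend node, proving the full round trip works. A short sketch using a throwaway table name of our choosing:
# Create a table and insert a row through ProxySQL (client port 6033)
mysql -u demouser -pStrongPassword -h 127.0.0.1 -P6033 demodb -e "CREATE TABLE IF NOT EXISTS proxy_test (id INT PRIMARY KEY, note VARCHAR(50)); INSERT INTO proxy_test VALUES (1, 'written via proxysql');"
# Then read it back directly on any Galera node to confirm replication
mysql -u root -p -e "SELECT * FROM demodb.proxy_test;"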
Celebratory Remarks
Our highly available cluster is now set up and ready for your applications to start pouring in torrents of data. You can add more nodes to the cluster as you deem fit, and we hope it works well for you and affords you time to focus on other things.
We hope the guide was detailed and clear and that it helped you set up your environment. We appreciate your visit, your readership and your awesome support. Keep at it!! In the meantime, check out other guides below.