Once you are in the cloud, the opportunities for automation are endless. Creating resources and managing them manually is no longer viable because it does not scale once there are more users and the requests for cloud resources increase. The only way to reduce the work, and to keep the process as easy and scalable as possible, is to leverage tools that lighten this burden.
Having said that, this is a showcase of how you can reduce the work involved in giving users access to databases in CloudSQL. After much thought and deliberation, the tool that won the tender to eliminate the boring exertion of granting users database access manually is Terraform. This guide elucidates how it was done:
In this setup, we will be using three providers:
- Vault (hashicorp/vault)
- Google Cloud (hashicorp/google)
- cyrilgdn/postgresql
All three are available on the Terraform Registry.
Also note that we will need the following before everything works:
- A Google Cloud account with CloudSQL instances
- A Vault server up and running
Vault will store the secrets that the “cyrilgdn/postgresql” provider needs. The Google provider will create the IAM roles, and “cyrilgdn/postgresql” will handle granting rights to them.
The following steps explain how the entire project was created and tested.
Step 1: Prepare your CloudSQL Modules
Since you may have multiple instances across CloudSQL zones, it makes sense to organise them into separate modules. This makes it easier and clearer to add new records in future, and it gives a good representation of what lies in your environment. We will consolidate our CloudSQL Terraform files into the following folders:
cd ~
mkdir -p databases/tf_cloudsql/modules
cd databases/tf_cloudsql/modules
mkdir {geekssales-db-production,geeksfinance-db-production,geeksusers-db-production}
Inside each database module, we will create sub-modules that accommodate every department such as “Engineering”, “Finance”, “Sales” and so forth.
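Still inside the “modules” directory, the “engineering” sub-directories can be created in one sweep, as shown in this convenience sketch (repeat or extend the pattern for other departments):
# Creates an "engineering" sub-directory inside each database module
mkdir -p {geekssales-db-production,geeksfinance-db-production,geeksusers-db-production}/engineering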
Finally, we should have something like this with the “engineering” sub-modules added.
$ tree -d
└── tf_cloudsql
    └── modules
        ├── geekssales-db-production
        │   └── engineering
        ├── geeksfinance-db-production
        │   └── engineering
        └── geeksusers-db-production
            └── engineering
Step 2: Adding Fresh Condiments
In the “tf_cloudsql” directory, which is the root directory, create a “main.tf” file and populate it as follows:
########################################################################
provider "vault" {
  token   = var.geeks_token
  alias   = "geeks_vault"
  address = var.geeks_vault_address
}

provider "google" {
  project = var.project_id
  region  = var.region
  alias   = "geeks_gcp"
}

#1. geekssales-db-production instance Module
module "geekssales-db-production" {
  source = "./modules/geekssales-db-production"
  providers = {
    vault  = vault.geeks_vault
    google = google.geeks_gcp
  }
}

#2. geeksfinance-db-production instance Module
module "geeksfinance-db-production" {
  source = "./modules/geeksfinance-db-production"
  providers = {
    vault  = vault.geeks_vault
    google = google.geeks_gcp
  }
}

#3. geeksusers-db-production instance Module
module "geeksusers-db-production" {
  source = "./modules/geeksusers-db-production"
  providers = {
    vault  = vault.geeks_vault
    google = google.geeks_gcp
  }
}

## This is the bucket where tfstate for CloudSQL will be stored in GCP
terraform {
  backend "gcs" {
    bucket = "geeks-terraform-state-bucket"
    prefix = "terraform/cloudsql_roles_state"
  }
}
Here we have added the two providers, Vault and Google, together with all of the modules we want. The reason we did not include postgresql is that it does not work well with aliases the way the other two do, so we will have to add it in each of the modules, as we shall see later.
Note also that we are using a GCP bucket backend to store Terraform's state. Ensure that a bucket with the name given already exists, or create it if it is not there yet. You will find the state under the “terraform/cloudsql_roles_state” prefix in that bucket.
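If the bucket does not exist yet, one quick way to create it is shown below. This is a sketch that assumes the “neveropen” project and “us-central1” region we use in our variables; only the bucket name has to match the backend block:
# Create the state bucket (the name must match the backend "gcs" block)
$ gsutil mb -p neveropen -l us-central1 gs://geeks-terraform-state-bucket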
Create a “vars.tf” file in the same directory and fill it in as follows:
$ vim vars.tf
variable "geeks_token" {
type = string
default = "s.6fwP0qPixIajVlD3b0oH6bhy"
description = “Dummy Token”
}
variable "geeks_vault_address" {
type = string
default = "//vault.geeksforgeeks.org:8200"
}
variable "project_id" {
type = string
default = "neveropen"
}
variable "region" {
type = string
default = "us-central1"
}
The variables declared in “vars.tf” are the credentials and details that Terraform will use to connect to Vault and perform the operations we desire.
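As a side note, since the Vault token is sensitive, you may prefer not to keep the real value in “vars.tf” at all. A minimal sketch of supplying it at runtime instead, using Terraform's standard TF_VAR_ environment variables (the values below are placeholders):
# Terraform picks these up as var.geeks_token and var.geeks_vault_address
$ export TF_VAR_geeks_token="s.xxxxxxxxxxxxxxxxxxxxxxxx"
$ export TF_VAR_geeks_vault_address="https://vault.geeksforgeeks.org:8200"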
After we have completed that, in each module, create another “main.tf” file that has this configuration:
$ cd databases/tf_cloudsql/modules/geekssales-db-production
$ vim main.tf
########################################################################
# Engineering SubModule
########################################################################
terraform {
  required_providers {
    postgresql = {
      source  = "cyrilgdn/postgresql"
      version = "1.15.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "~> 3.0"
    }
    google = {
      source  = "hashicorp/google"
      version = "4.13.0"
    }
  }
}

#1. Engineering Files SubModule
module "engineering" {
  source = "./engineering"
}
This is another file that calls the sub-modules within each module. As you can see, we are calling the “engineering” sub-module, and other sub-modules such as “Finance”, “Sales” and so forth can be added here in the same way, as shown next.
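For instance, a hypothetical “finance” sub-module (assuming a ./finance directory exists alongside ./engineering) would be wired in like this:
#2. Finance Files SubModule (hypothetical example)
module "finance" {
  source = "./finance"
}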
Step 3: Create the files we need for the sub-modules
Once the “main.tf” file in each module is in good shape, let us now get into the “engineering” sub-module and create the files of interest there. First, we shall add the “postgresql” provider which, as mentioned earlier, does not work well with aliases, and we shall also add a Vault data source that retrieves the secrets the “postgresql” provider needs. The secret here is a user that can connect to the CloudSQL databases and is therefore able to grant permissions to the other users we will be creating. So we should already have the username and password for this user stored in Vault under the “cloudsql/credentials” path. This is what the data "vault_generic_secret" "cloudsql" block will be fetching.
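For reference, here is a minimal sketch of seeding that secret in Vault. It assumes a KV version 1 secrets engine mounted at “cloudsql” and the key names “apps_user” and “apps_pass” that “provider.tf” below expects; adjust the commands if your Vault setup differs:
# Placeholder values shown; use the real credentials of the granting user
$ vault secrets enable -path=cloudsql kv
$ vault kv put cloudsql/credentials apps_user="<db-admin-username>" apps_pass="<db-admin-password>"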
With that secret in place, we shall proceed to create the files of interest for each sub-module. Let us begin with “engineering”.
$ cd databases/tf_cloudsql/modules/geekssales-db-production/engineering
$ vim provider.tf
data "vault_generic_secret" "cloudsql" {
path = "cloudsql/credentials"
}
provider "postgresql" {
scheme = "gcppostgres"
host = "neveropen:us-central1:geekssales-db-production" # The CloudSQL Instance
port = 5432
username = data.vault_generic_secret.cloudsql.data["apps_user"] # Username from Vault
password = data.vault_generic_secret.cloudsql.data["apps_pass"] # Password from Vault
superuser = false
sslmode = "disable"
}
terraform {
required_providers {
postgresql = {
source = "cyrilgdn/postgresql"
version = "1.15.0"
}
vault = {
source = "vault"
version = "~> 3.0"
}
google = {
source = "hashicorp/google"
version = "4.13.0"
}
}
}
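One assumption worth calling out: the “gcppostgres” scheme connects through the Cloud SQL connector, which authenticates with Google Application Default Credentials, so those must be available wherever Terraform runs. One way to set them up locally, assuming the gcloud CLI is installed:
# Make Application Default Credentials available to the provider
$ gcloud auth application-default login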
Next, we shall create a “roles.tf” file that will hold the new users we will be creating. We will be using the Google Provider resources here so we will have to specify the instances we are connecting to explicitly. Proceed to create them as shown below within the “engineering” directory:
$ cd databases/tf_cloudsql/modules/geekssales-db-production/engineering
$ vim roles.tf
######################################################
# Engineering Block
######################################################
## Junior Engineering Team Members IAM DB Roles Creation
resource "google_sql_user" "jnr_eng_member_roles" {
  count    = length(var.jnr_eng_member_roles)
  name     = var.jnr_eng_member_roles[count.index]
  instance = "geekssales-db-production"
  type     = "CLOUD_IAM_USER"
}

## Engineering Team Members IAM DB Roles Creation
resource "google_sql_user" "eng_member_roles" {
  count    = length(var.eng_member_roles)
  name     = var.eng_member_roles[count.index]
  instance = "geekssales-db-production"
  type     = "CLOUD_IAM_USER"
}

## Senior Engineering Team Members IAM DB Roles Creation
resource "google_sql_user" "snr_eng_member_roles" {
  count    = length(var.snr_eng_member_roles)
  name     = var.snr_eng_member_roles[count.index]
  instance = "geekssales-db-production"
  type     = "CLOUD_IAM_USER"
}
You will immediately notice that we have separated the users into junior, mid-level and senior engineers. In conjunction with this, we will have to create a variable file that holds the names of these users in their respective categories. Each variable is a list, and we loop through its users both when we create them and when we grant them database privileges. Create the variable file and add the meat as shown below for better clarity:
$ vim vars.tf
######################################################
# Engineering Block
######################################################
## Junior Engineering Member Roles
variable "jnr_eng_member_roles" {
  description = "CloudSQL IAM Users for Engineering Members"
  type        = list(string)
  default     = []
}

## Mid-level Engineering Member Roles
variable "eng_member_roles" {
  description = "CloudSQL IAM Users for Engineering Members"
  type        = list(string)
  default     = ["[email protected]", "[email protected]"]
}

## Senior Engineering Member Roles
variable "snr_eng_member_roles" {
  description = "CloudSQL IAM Users for Engineering Members"
  type        = list(string)
  default     = ["[email protected]", "[email protected]"]
}
Step 4: Let us create the privileges files
Within the same “engineering” directory, let us create a privileges file for each of the categories (junior, mid-level and senior) that grants users specific privileges on specific databases. The juniors' file looks like below (truncated).
$ vim privileges-jnr-eng.tf
######################################################
# Junior Engineering DB Privileges File
######################################################
## Finance Database
resource "postgresql_grant" "grant_jnr_eng_db_privileges_on_courier_db" {
  count = length(var.jnr_eng_member_roles)
  #for_each   = toset( ["table", "sequence"] )
  database    = "finance"
  role        = var.jnr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["SELECT"]
  depends_on = [
    google_sql_user.jnr_eng_member_roles,
  ]
}

## Users Database
resource "postgresql_grant" "grant_jnr_eng_db_privileges_on_datasync_db" {
  count       = length(var.jnr_eng_member_roles)
  database    = "users"
  role        = var.jnr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["SELECT"]
  depends_on = [
    google_sql_user.jnr_eng_member_roles,
  ]
}

## Sales Database
resource "postgresql_grant" "grant_jnr_eng_db_privileges_on_dispatch_db" {
  count       = length(var.jnr_eng_member_roles)
  database    = "sales"
  role        = var.jnr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["SELECT"]
  depends_on = [
    google_sql_user.jnr_eng_member_roles,
  ]
}
The seniors' file looks like below (truncated).
$ vim privileges-snr-eng.tf
######################################################
# Senior Engineering DB Privileges File
######################################################
## Finance Database
resource "postgresql_grant" "grant_snr_eng_db_privileges_on_courier_db" {
  count = length(var.snr_eng_member_roles)
  #for_each   = toset( ["table", "sequence"] )
  database    = "finance"
  role        = var.snr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["INSERT", "SELECT", "UPDATE", "DELETE"]
  depends_on = [
    google_sql_user.snr_eng_member_roles,
  ]
}

## Sales Database
resource "postgresql_grant" "grant_snr_eng_db_privileges_on_datasync_db" {
  count       = length(var.snr_eng_member_roles)
  database    = "sales"
  role        = var.snr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["INSERT", "SELECT", "UPDATE", "DELETE"]
  depends_on = [
    google_sql_user.snr_eng_member_roles,
  ]
}

## Users Database
resource "postgresql_grant" "grant_snr_eng_db_privileges_on_dispatch_db" {
  count       = length(var.snr_eng_member_roles)
  database    = "users"
  role        = var.snr_eng_member_roles[count.index]
  schema      = "public"
  object_type = "table"
  objects     = []
  privileges  = ["INSERT", "SELECT", "UPDATE", "DELETE"]
  depends_on = [
    google_sql_user.snr_eng_member_roles,
  ]
}
You can easily see how it goes for the remaining categories.
To explain a bit, each resource points to a particular database in the CloudSQL “geekssales-db-production” instance. We have also ensured that the privileges files “wait” for the roles to be created before they run; this is done via the “depends_on” block in each of the resources. You will also notice that the “privileges” vary depending on the level of seniority.
Step 5: Add users to other roles
We realised that on PostgreSQL version 9.6, a user must be a member of the role that owns the database before the privileges granted above actually take effect. Because of that, we have to add another file that adds the new Cloud IAM users to the owners of the various databases. We shall create this file as follows within the same “engineering” directory.
$ vim privileges-grant-role-eng.tf
######################################################
# Engineering DB Grant users to various roles
######################################################
## Add eng members to cloudsqlsuperuser role
resource "postgresql_grant_role" "grant_eng_cloudsqlsuperuser_role_on_all" {
  count      = length(var.eng_member_roles)
  role       = var.eng_member_roles[count.index]
  grant_role = "cloudsqlsuperuser"
  depends_on = [
    google_sql_user.eng_member_roles,
  ]
}

## Add eng members to geeks role
resource "postgresql_grant_role" "grant_eng_geeks_role_on_all" {
  count      = length(var.eng_member_roles)
  role       = var.eng_member_roles[count.index]
  grant_role = "geeks"
  depends_on = [
    google_sql_user.eng_member_roles,
  ]
}

## Add senior eng members to geeks role
resource "postgresql_grant_role" "grant_snr_eng_db_privileges_on_all" {
  count      = length(var.snr_eng_member_roles)
  role       = var.snr_eng_member_roles[count.index]
  grant_role = "geeks"
  depends_on = [
    google_sql_user.snr_eng_member_roles,
  ]
}
In the end, we should have a general structure (truncated) that looks like below:
$ tree
└── tf_cloudsql
    ├── main.tf
    └── modules
        ├── geekssales-db-production
        │   ├── main.tf
        │   └── engineering
        │       ├── privileges-grant-role-eng.tf
        │       ├── privileges-jnr-eng.tf
        │       ├── privileges-snr-eng.tf
        │       ├── privileges-eng.tf
        │       ├── provider.tf
        │       ├── roles.tf
        │       └── vars.tf
        └── geeksusers-db-production
            ├── main.tf
            └── engineering
                ├── privileges-grant-role-eng.tf
                ├── privileges-jnr-eng.tf
                ├── privileges-snr-eng.tf
                ├── privileges-eng.tf
                ├── provider.tf
                ├── roles.tf
                └── vars.tf
By this time, we should be ready to create the resources we have declared. Let us initialize Terraform, then execute a plan, and if we are happy with the results, we can comfortably apply the changes. The commands are as follows:
Initialize Terraform
$ terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/vault from the dependency lock file
- Reusing previous version of cyrilgdn/postgresql from the dependency lock file
- Reusing previous version of hashicorp/google from the dependency lock file
- Using previously-installed hashicorp/google v4.13.0
- Using previously-installed hashicorp/vault v3.3.1
- Using previously-installed cyrilgdn/postgresql v1.15.0
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Terraform Plan
$ terraform plan -out create_roles
Then apply if you are good with the results
$ terraform apply create_roles
Closing Words
There we have it. The entire project has its own challenges and we hope we shall continue to improve on it as time goes by. In the end, the goal is to be able to focus on code while Terraform or other tools do the bulk of the boring work. We hope the guide was as informative as we intended 🙂.