PostgreSQL is one of the most popular relational database management systems (RDBMS) in the world and gives developers access to a wide range of advanced capabilities. One of its most useful features is support for JSON data types alongside ordinary SQL. PostgreSQL can perform the usual CRUD operations on JSONB data (a binary JSON format). SQLAlchemy, one of the most popular Python ORM libraries, is a great fit for working with relational databases: it lets us interact with the database from Python and work with the JSONB data format in PostgreSQL.
JSONB
JSONB is a PostgreSQL data type that stores JSON (JavaScript Object Notation) data in a binary format, which uses less storage space and can be processed more quickly and efficiently than JSON stored as text. SQLAlchemy is a popular Python library for working with relational databases such as PostgreSQL.
By combining JSONB with SQLAlchemy, you get the power and flexibility of JSON data inside the widely used open-source Postgres database while still using Python to query and update it. We will show how to use JSONB with SQLAlchemy by going over a few examples, covering the basics of creating a JSONB column, querying JSON data, and editing JSON data.
JSONB is the binary format in which Postgres stores JSON data; it supports indexing and allows searching nested JSON items directly inside Postgres. We will use an ORM layer, SQLAlchemy, which makes it much easier to interact with JSONB data in PostgreSQL. SQLAlchemy provides a JSONB type that we use to define JSONB columns in the database schema, and we can then retrieve data from those columns and run SQL operations on it.
In Python, we use the PostgreSQL dialect of SQLAlchemy together with psycopg2, a PostgreSQL adapter (driver) for Python that provides the functionality needed for PostgreSQL-specific data types such as JSONB. The examples below connect to the database through a SQLAlchemy engine, created as shown after the install command.
Required Modules:
pip install sqlalchemy psycopg2
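All of the snippets below assume that a SQLAlchemy engine connected to the target PostgreSQL database already exists. Here is a minimal sketch; the connection URL (user, password, host, port, and database name) is only a placeholder and must be replaced with your own credentials.

Python3
from sqlalchemy import create_engine

# Placeholder URL: postgresql+psycopg2://<user>:<password>@<host>:<port>/<database>
engine = create_engine(
    "postgresql+psycopg2://user:password@localhost:5432/mydb",
    echo=False,  # set to True to log the generated SQL
)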
Stepwise Implementation:
For the purpose of this article, we will use a simple users table with an integer primary-key column id and a JSONB column data.
Step 1: Create a table in the existing Postgres database and define one of its columns as JSONB.
Python3
from sqlalchemy import create_engine, Table, Column, Integer, MetaData
from sqlalchemy.dialects.postgresql import JSONB, insert

# engine created earlier with create_engine(), as shown above
metadata = MetaData()

# reflect the existing database schema into the metadata
metadata.reflect(bind=engine)

users = Table(
    'users', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB),
    extend_existing=True
)
metadata.create_all(engine)

# get the table object
table_name = 'users'
table = metadata.tables[table_name]

# print the table schema
print(table.__repr__())
Here, we have created a table named users with two columns: id and data. The data column is of type JSONB, which means it can store JSON data.
Step 2: Now, let’s insert some data into this table:
Python3
data = {
    'name': 'Vik',
    'age': 30,
    'job_title': 'Data Scientist'
}

insert_stmt = insert(users).values(data=data)
conn = engine.connect()
conn.execute(insert_stmt)
conn.commit()
In this example, we have inserted a dictionary of JSON data into the data column of the users table. We used the insert() function from the sqlalchemy.dialects.postgresql module to construct the SQL statement.
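One reason to prefer the dialect-specific insert() over the generic sqlalchemy.insert() is that it supports PostgreSQL's ON CONFLICT clause. The sketch below is an optional extension of the example above, not part of the original code: it assumes an explicit id of 1 and upserts the row, replacing the JSON document if that id already exists.

Python3
# Hypothetical upsert: insert the row, or replace its JSON document on id conflict
upsert_stmt = insert(users).values(id=1, data=data)
upsert_stmt = upsert_stmt.on_conflict_do_update(
    index_elements=['id'],
    set_={'data': upsert_stmt.excluded.data}
)
conn.execute(upsert_stmt)
conn.commit()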
Step 3: To query this data, we can use the select function:
Python3
# Fetch the data
select_stmt = users.select()
result = conn.execute(select_stmt)
row = result.fetchone()
print(row)
Here, we have selected all rows from the users table and fetched the first one. We can access the JSON data in the data column using square-bracket notation, just like accessing a dictionary, as shown below.
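For example, on the Python side the fetched data value is an ordinary dictionary, and on the SQL side SQLAlchemy's JSONB comparator lets us index into the document and filter on individual keys. A small sketch, reusing the conn, users, and row objects from above:

Python3
from sqlalchemy import select

# Python side: the JSONB value comes back as a regular dict
print(row.data['name'])        # 'Vik'

# SQL side: filter on a key inside the JSONB column
# data['name'].astext renders the Postgres ->> operator
stmt = select(users).where(users.c.data['name'].astext == 'Vik')
for matched in conn.execute(stmt):
    print(matched.id, matched.data)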
Step 4: We can also update the JSON data in the data column:
Python3
# Update data
update_stmt = users.update().where(users.c.id == 1).values(
    data={'job_title': 'Software Engineer'}
)
conn.execute(update_stmt)
conn.commit()
In this example, we have updated the data column for the row whose id equals 1. Note that .values() replaces the entire JSON document, so after this statement the row's data contains only the job_title key, set to Software Engineer.
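If you instead want to change a single key while keeping the rest of the document intact, you can call PostgreSQL's jsonb_set() function through SQLAlchemy's generic func namespace. This is a sketch rather than part of the original example; the path '{job_title}' is passed as a string for Postgres to interpret as text[], and the new value is cast to JSONB so it arrives with the type jsonb_set() expects.

Python3
from sqlalchemy import func, cast
from sqlalchemy.dialects.postgresql import JSONB

# Sketch: update only the job_title key inside the existing JSON document
partial_update = users.update().where(users.c.id == 1).values(
    data=func.jsonb_set(
        users.c.data,                       # existing document
        '{job_title}',                      # path, read by Postgres as text[]
        cast('Software Engineer', JSONB)    # new value, sent as a JSONB string
    )
)
conn.execute(partial_update)
conn.commit()

jsonb_set() creates the key if it does not already exist, since PostgreSQL's optional create_missing argument defaults to true.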
SQLAlchemy also lets us insert JSONB data through its ORM. We start by building an ordinary Python dictionary and then let SQLAlchemy insert it into a JSONB column, this time using a mapped class instead of a Table object.
Example 2:
Python3
from sqlalchemy import Column, Integer
from sqlalchemy.orm import sessionmaker, declarative_base
from sqlalchemy.dialects.postgresql import JSONB

# reuse the engine created earlier with create_engine()
Session = sessionmaker(bind=engine)
session = Session()

Base = declarative_base()

class Employee(Base):
    __tablename__ = 'employees'
    id = Column(Integer, primary_key=True)
    info = Column(JSONB)

# create the employees table if it does not already exist
Base.metadata.create_all(engine)

employee_info = {
    'name': 'Vik Singh',
    'age': 35,
    'job_title': 'ML Engineer'
}
employee = Employee(id=15, info=employee_info)
session.add(employee)
session.commit()
In this demonstration, we define an Employee class that maps to the employees table in the database, with the info column declared as JSONB, and create the table if it does not already exist. We then build a Python dictionary called employee_info with the relevant employee data, construct an Employee instance from it, add the instance to the session, and commit the changes so the database is up to date.
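To round the ORM example off, the JSONB column can be queried the same way as in the Core examples above: the comparator on Employee.info supports key access and .astext. A small sketch, reusing the session and Employee class defined above:

Python3
from sqlalchemy import select

# fetch the row we just inserted and read a key from its JSONB document
emp = session.get(Employee, 15)
print(emp.info['job_title'])    # 'ML Engineer'

# filter on a key inside the JSONB column (rendered with ->> in SQL)
stmt = select(Employee).where(Employee.info['job_title'].astext == 'ML Engineer')
for employee in session.scalars(stmt):
    print(employee.id, employee.info)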