Why is Redis so Fast and Efficient?


    despite being single-threaded


    Ashish Pratap Singh
    May 21, 2025

    Redis (Remote Dictionary Server) is a blazing-fast, open-source, in-memory key-value store that’s become a go-to choice for building real-time, high-performance applications.

    Despite being single-threaded, a single Redis server can handle over 100,000 requests per second.

    But how does Redis achieve such incredible performance with a single-threaded architecture?

    In this article, we’ll break down the 5 key design choices and architectural optimizations that make Redis so fast and efficient:

    • In-Memory Storage: Data lives entirely in RAM, which is orders of magnitude faster than disk.

    • Single-Threaded Event Loop: Eliminates concurrency overhead for consistent, low-latency performance.

    • Optimized Data Structures: Built-in structures like hashes, lists, and sorted sets are implemented with speed and memory in mind.

    • I/O Efficiency: Event-driven networking, pipelining, and I/O threads help Redis scale to thousands of connections.

    • Server-Side Scripting: Lua scripts allow complex operations to run atomically, without round trips.

    Let’s get started!



    1. In-Memory Storage

    The single most important reason Redis is so fast comes down to one design decision:

    All data in Redis lives in RAM.

    Unlike traditional databases that store their data on disk and read it into memory when needed, Redis keeps the entire dataset in memory at all times.

    Even with a fast SSD, reading from disk is thousands of times slower than reading from RAM.

    So when Redis performs a GET, it doesn’t wait for disk I/O. It simply follows a pointer in memory—an operation that completes in nanoseconds, not milliseconds.

    Redis doesn’t just store data in RAM; it stores it efficiently (see the quick check below):

    • Small values are packed into compact memory formats (ziplist, intset, listpack)

    • These formats improve CPU cache locality, letting Redis touch fewer memory locations per command
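
    A quick way to see these compact encodings from redis-cli (the key names are just examples, and the exact encoding names vary slightly across Redis versions):

    SADD ids 1 2 3
    OBJECT ENCODING ids        → "intset" (small, all-integer set)

    RPUSH tags "a" "b" "c"
    OBJECT ENCODING tags       → "listpack" (or "ziplist" on older versions)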

    But There’s a Trade-Off…

    While in-memory storage gives Redis its speed, it also introduces two important limitations:

    1. Memory-Bound Capacity

    Your dataset size is limited by how much RAM your machine has. For example:

    • On a 32 GB server, Redis can only store up to 32 GB of data (minus overhead)

    • If you exceed this, Redis starts evicting keys or rejecting writes unless you scale horizontally

    To deal with this, Redis offers key eviction policies like:

    • Least Recently Used (LRU)

    • Least Frequently Used (LFU)

    • Random

    • Volatile TTL-based eviction

    You can also shard your dataset across a Redis Cluster.
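
    Eviction is driven by two settings, maxmemory and maxmemory-policy, which can live in redis.conf or be changed at runtime; a minimal sketch, with an illustrative 2 GB limit:

    CONFIG SET maxmemory 2gb
    CONFIG SET maxmemory-policy allkeys-lru

    Other valid policies include allkeys-lfu, allkeys-random, volatile-lru, volatile-ttl, and noeviction (the default, which rejects writes once the limit is hit).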

    2. Volatility & Durability

    RAM is volatile. It loses data when the server shuts down or crashes. That’s risky if you’re storing anything you care about long term.

    Redis solves this with optional persistence mechanisms, allowing you to write data to disk periodically or in real time.

    Redis provides two main persistence models to give you durability without compromising performance:

    • RDB (Redis Database Snapshot)

      • Takes point-in-time snapshots of your data

      • Runs in a forked child process, so the main thread keeps serving traffic

      • Good for backups or systems that can tolerate some data loss

    • AOF (Append-Only File)

      • Logs every write operation to disk

      • Offers configurable fsync options:

        • Every write (safe but slow)

        • Every second (balanced)

        • Never (fast but risky)

      • Supports AOF rewriting in the background to reduce file size

    These persistence methods are designed to run asynchronously, so the main thread never blocks.
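
    Persistence is controlled by a handful of settings; a minimal sketch of the relevant knobs (values here are illustrative, and the same directives can live in redis.conf):

    CONFIG SET appendonly yes            → enable the AOF
    CONFIG SET appendfsync everysec      → fsync once per second (other options: always, no)
    BGSAVE                               → fork a child and write an RDB snapshot now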


    2. Single-Threaded Event Loop

    One of Redis’s most surprising design choices is this:

    All commands in Redis are executed by a single thread.

    In a world where most high-performance systems lean on multi-core CPUs, parallel processing, and thread pools, this seems almost counterintuitive.

    Shouldn’t more threads mean more performance?

    Not necessarily. Redis proves that, with the right architecture, one well-utilized thread can outperform many.

    But How Does One Thread Handle Thousands of Clients?

    The answer lies in Redis’s event-driven I/O model, powered by I/O multiplexing.

    What is I/O Multiplexing?

    I/O Multiplexing allows a single thread to monitor multiple I/O channels (like network sockets, pipes, files) simultaneously.

    Instead of spinning up a new thread for each client, Redis tells the OS:

    “Watch these client sockets for me and let me know when any of them have data to read or are ready to write.”

    The implementation relies on highly optimized system calls specifically designed for this purpose:

    • epoll (Linux): High-performance I/O event notification system. Designed for scalability, it can handle thousands of concurrent connections efficiently.

    • kqueue (macOS): BSD-style I/O event notification system. Monitors a wide range of events: file descriptors, sockets, signals, and more.

    • select (fallback): Oldest and most portable I/O multiplexing method, supported on almost all platforms.

    These interfaces allow Redis to remain dormant, consuming no CPU cycles, until the moment data arrives or a socket becomes writable.

    The Redis Event Loop

    Redis’s event loop is a lightweight cycle that efficiently juggles thousands of connections without blocking.

    When a client sends a request, the operating system notifies Redis, which then:

    1. Reads the command

    2. Processes it

    3. Sends the response

    4. Moves to the next ready client

    This loop is tight, predictable, and fast. Redis cycles through ready connections, executes commands one at a time, and responds quickly without ever waiting on a slow client or thread switch.

    Internal Flow of a GET Command

    To understand the simplicity and speed of this model, let’s walk through how Redis handles a simple GET command:

    1. Client sends: GET user:42
    2. I/O multiplexer wakes the Redis event loop
    3. Redis reads the command from the socket buffer
    4. Parses the command
    5. Looks up the key in an in-memory hash table (O(1))
    6. Formats the response
    7. Writes the response to the socket buffer
    8. Returns to listening for more events

    All of this happens on a single thread, without any locking or waiting.
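
    One way to get a feel for how quickly this loop turns requests around is redis-cli’s built-in latency mode, which keeps sending PING to the server and reports the observed round-trip times in milliseconds:

    redis-cli --latency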

    Why Single-Threaded Works So Well

    By sticking to a single-threaded execution model, Redis avoids the typical overhead that comes with multithreaded systems:

    • No context switching

    • No thread scheduling

    • No locks, mutexes, or semaphores

    • No race conditions or deadlocks

    This means Redis spends almost all its CPU time doing actual work rather than wasting cycles coordinating between threads.

    Inherent Atomicity

    Since only one thread is modifying Redis’s in-memory data at a time, operations are inherently atomic:

    • No two clients can update the same key at the same time

    • You don’t need locks to ensure safety

    • You don’t get partial updates due to concurrency bugs

    This dramatically simplifies the internal logic and improves predictability and latency consistency.
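
    A common illustration is a shared counter: because the single thread serializes the two INCR calls below, neither increment is lost and no client-side locking is needed (the key name is just an example):

    Client A:  INCR page:views      → (integer) 1
    Client B:  INCR page:views      → (integer) 2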


    3. Optimized Data Structures

    Redis isn’t just fast because it stores everything in memory. It’s also fast because it stores data intelligently.

    It doesn’t use generic one-size-fits-all containers. It picks the right data structure for each use case and implements it in high-performance C code, with a focus on speed, memory efficiency, and predictable performance.

    Adaptive Internal Representations

    Each data type in Redis has multiple internal representations, and Redis automatically switches between them based on size and access pattern.

    Examples:

    • Hashes and Lists

      • Small collections → Stored as compact ziplist or listpack (memory-efficient and fast)

      • Larger collections → Converted to hashtable or linked list for scalability

    • Sets

      • If elements are integers and set is small → Stored as intset

      • Grows large → Upgraded to a standard hashtable

    • Sorted Sets

      • Backed by a hybrid of a skiplist and a hashtable, allowing fast score-based queries and O(log N) operations

    This design makes Redis fast and memory-efficient at every scale.
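
    You can watch these conversions happen with the OBJECT ENCODING command. The key below is made up, and both the encoding names and the exact thresholds vary by version (older releases report "ziplist" and use hash-max-ziplist-entries instead of hash-max-listpack-entries):

    HSET user:42 name "Ada" city "London"
    OBJECT ENCODING user:42          → "listpack" (small hash, compact encoding)

    ... add more fields than hash-max-listpack-entries allows ...

    OBJECT ENCODING user:42          → "hashtable" (converted for scalability)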

    Built for Big-O Performance

    Redis carefully picks and implements data structures to ensure excellent time complexity. A few representative, documented examples:

    • GET, SET, HGET, SADD, and SISMEMBER run in O(1)

    • LPUSH, RPUSH, LPOP, and RPOP run in O(1)

    • ZADD runs in O(log N), and ZSCORE in O(1)

    These operations stay fast even as the dataset grows, thanks to efficient internal representations and fine-tuned implementations in C.

    Redis also takes advantage of low-level programming techniques to squeeze out every last bit of performance.


    4. I/O Efficiency

    Redis isn’t just fast at executing commands; it’s also extremely efficient at handling network I/O.

    Whether you’re serving a single API call or managing tens of thousands of concurrent clients, Redis keeps up with minimal latency and maximum throughput.

    So, what exactly makes Redis’s I/O so efficient?

    A Lightweight, Fast Protocol

    Redis uses a custom protocol called RESP (REdis Serialization Protocol), which is:

    • Text-based but easy to parse

    • Extremely lightweight (much simpler than HTTP or SQL)

    • Designed for high-speed communication

    Example of a RESP-formatted command:

    *2
    $3
    GET
    $5
    hello

    Each part of the message clearly defines the number of elements and their sizes. This structure allows Redis to read and parse commands with minimal CPU cycles, unlike parsing full SQL queries or nested JSON structures.
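
    Replies use the same framing: a successful SET is acknowledged with the simple string +OK, and the GET above returns the stored value as a bulk string. Assuming the key hello holds the five-character value world, the reply is:

    $5
    world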

    Pipelining: Batching to Boost Throughput

    One of Redis’s most effective I/O optimization features is command pipelining.

    Normally, a client sends one command, waits for a response, then sends the next. This is fine for a few requests but inefficient when thousands of commands are involved.

    With pipelining, the client sends multiple commands in a single request without waiting for intermediate responses.

    Example:

    SET user:1 "Alice"
    GET user:1
    INCR counter

    These three commands can be sent in a single TCP packet. Redis reads and queues them, executes them in order, and returns all responses at once.

    Benefits of pipelining:

    • Fewer round-trips → reduced latency

    • Less back-and-forth → higher throughput

    • Less context switching → lower CPU overhead

    In real-world benchmarks, pipelining can help Redis achieve 1 million+ requests per second.
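
    You can reproduce the effect on your own hardware with the bundled redis-benchmark tool; the -P flag controls how many commands are pipelined per request (results will vary with your machine and network):

    redis-benchmark -t set,get -n 1000000 -q
    redis-benchmark -t set,get -n 1000000 -q -P 16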

    Redis 6+: Optional I/O Threads

    While Redis has traditionally used a single thread for both command execution and I/O, Redis 6 introduced optional I/O threads to further improve performance—especially in network-heavy scenarios.

    When enabled, I/O threads handle:

    • Reading client requests from sockets

    • Writing responses back to clients

    Command execution still happens on the main thread, preserving Redis’s atomicity and simplicity.

    This hybrid model brings the best of both worlds:

    • Multi-core network processing

    • Single-threaded command execution

    In workloads where clients send or receive large payloads (e.g., big JSON blobs, long lists), I/O threads can double the throughput.
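
    I/O threading is configured at startup rather than toggled at runtime; a minimal redis.conf sketch, assuming a machine with spare cores (threaded reads are off by default):

    io-threads 4
    io-threads-do-reads yes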

    Persistent Connections: Avoiding the Handshake Overhead

    Redis client libraries typically use persistent TCP connections, which means:

    • No repeated handshakes or reconnects

    • Lower latency for every command

    • More predictable performance under load

    Persistent connections also reduce CPU and memory usage on the server, since Redis doesn’t have to reallocate resources for new connections frequently.


    5. Server-side Scripting

    Redis also offers the ability to execute server-side scripts using Lua. This allows you to run complex logic directly inside Redis without bouncing back and forth between the client and server.

    Let’s say you want to perform this logic:

    1. Check if a user exists

    2. If they do, increment their score

    3. Add them to a leaderboard

    4. Return the new score

    Doing this using multiple client-server requests would involve:

    • Multiple round trips over the network

    • Race conditions if multiple clients do this concurrently

    • More code on the client to handle logic

    With Lua scripting, you can do all of this in one atomic operation, executed entirely on the Redis server.

    -- Lua script to increment score and update leaderboard
    local key = "user:" .. ARGV[1]
    local new_score = redis.call("INCRBY", key, tonumber(ARGV[2]))
    redis.call("ZADD", "leaderboard", new_score, ARGV[1])
    return new_score

    Run this script using the EVAL command:

    EVAL "<script>" 0 user123 50

    This increments the user’s score and updates the leaderboard in one atomic server-side operation.
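
    To avoid resending the script body on every call, you can register it once and invoke it by its SHA1 digest (placeholders are kept here, since the full script is shown above):

    SCRIPT LOAD "<script>"           → returns the script’s SHA1 digest
    EVALSHA <sha1> 0 user123 50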

    Scripting is Powerful, But Use Responsibly

    While Lua scripting is fast and atomic, there are a few things to watch out for:

    • Scripts run on the main thread: If your script is slow or CPU-heavy, it can block Redis from serving other requests.

    • Avoid unbounded loops or expensive computations

    • Keep scripts short and predictable


    Thank you for reading!

    If you found it valuable, hit a like ❤️ and consider subscribing for more such content every week.

    This post is public so feel free to share it.



    P.S. If you’re enjoying this newsletter and want to get even more value, consider becoming a paid subscriber.

    As a paid subscriber, you’ll receive an exclusive deep dive every Thursday, access to a structured system design resource, and other premium perks.


    There are group discounts, gift options, and referral bonuses available.


    Check out my YouTube channel for more in-depth content.

    Follow me on LinkedIn and X to stay updated.

    Check out my GitHub repositories for free interview preparation resources.

    I hope you have a lovely day!

    See you soon,

    Ashish
