The Distributed Locking Trap
I remember the first time I implemented distributed locking using Redis and the Redlock algorithm. It felt like I was reaching the peak of engineering sophistication. I had multiple nodes, a consensus-based locking mechanism, and microsecond latency. I felt invincible—until a network partition and a slight clock drift in our staging environment turned my 'sophisticated' system into a race-condition nightmare that corrupted our ledger. That was the day I realized that in our industry, we often mistake complexity for robustness.
If you are building a system that needs to ensure two processes don't perform the same task simultaneously—whether that's processing a payment or generating a report—the default impulse is to reach for Redis. But unless you are operating at the scale of Netflix or Uber, adding a separate, complex distributed coordination layer is often a mistake. For most of us, the ACID-compliant database we already own is not just a 'good enough' alternative; it is fundamentally better.
The Redlock Illusion: Speed vs. Safety
Redis is incredibly fast, but speed is a poor substitute for correctness when data integrity is on the line. The Redlock algorithm, once the gold standard for Redis-based locking, relies on a set of timing assumptions that rarely hold in the messy reality of distributed systems. In his 2016 critique, "How to do distributed locking," Martin Kleppmann argued that Redlock is unsafe for systems that require correctness. Because it depends on bounded clock drift and provides no 'fencing tokens,' a process can pause (due to a GC cycle or network lag), lose its lock to expiry, and then wake up and perform a write, unaware that another process has already taken its place.
This 'zombie process' problem is the silent killer of distributed locking. When your locking mechanism doesn't integrate with your storage layer, the database has no way of knowing that the process trying to write data is no longer the legitimate lock holder. This is where database-level pessimistic locking strategies shine.
The Elegance of PostgreSQL Advisory Locks
If you’re already using PostgreSQL, you have a capable distributed lock manager built right into your stack. PostgreSQL 'advisory locks' are a feature designed specifically for application-level locking. They don't lock table rows; they lock an application-defined key, a single 64-bit integer or a pair of 32-bit integers. Lock state lives in shared memory rather than on disk, making acquisition fast, frequently sub-millisecond.
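Since the advisory lock functions take an integer key rather than a string, applications typically hash a human-readable lock name into the 64-bit keyspace. A minimal sketch in Python (the `lock_key` helper name is my own; any stable hash into a signed 64-bit range works):

```python
import hashlib
import struct

def lock_key(name: str) -> int:
    """Map a human-readable lock name to a signed 64-bit integer,
    the key type expected by pg_advisory_lock()/pg_advisory_xact_lock().
    (Hypothetical helper: any stable hash into int64 would do.)"""
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    # ">q" unpacks the first 8 bytes as a big-endian signed 64-bit int
    return struct.unpack(">q", digest[:8])[0]
```

The same name always maps to the same key, so any node in the cluster can contend for the same lock without coordinating key assignments; for a handful of lock names, hash collisions are a non-issue.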
Automatic Cleanup and Transaction Safety
One of the biggest headaches with Redis-based distributed locking is the 'dangling lock.' If your application node crashes after acquiring a lock in Redis, you have to wait for a TTL to expire before any other node can proceed. If your TTL is too short, you risk the lock expiring while the work is still happening. If it's too long, your system hangs during a crash.
PostgreSQL solves this with transaction-level advisory locks. When you call pg_advisory_xact_lock(), the lock is tied to the lifecycle of your database transaction. If the process crashes or the connection drops, PostgreSQL automatically rolls back the transaction and releases the lock instantly. There is no 'cleanup' logic to write and no risk of a lock being held by a dead process. As noted by industry practitioners, this drastically reduces the 'moving parts' in your architecture.
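In application code, the pattern is a few lines. A hedged sketch, assuming a DB-API-style PostgreSQL connection (e.g. from psycopg2; the `run_exclusively` wrapper is my own invention):

```python
def run_exclusively(conn, key: int, work):
    """Run `work` while holding a transaction-scoped advisory lock.

    `conn` is any DB-API PostgreSQL connection (psycopg2 assumed here).
    The lock is tied to the transaction: COMMIT, ROLLBACK, or a dropped
    connection releases it, so there are no TTLs to tune and no cleanup
    code to write.
    """
    with conn:  # psycopg2-style: opens a transaction, commits/rolls back on exit
        with conn.cursor() as cur:
            # pg_advisory_xact_lock blocks until the lock is granted;
            # pg_try_advisory_xact_lock is the non-blocking variant.
            cur.execute("SELECT pg_advisory_xact_lock(%s)", (key,))
            work(cur)
```

Note there is no explicit unlock call anywhere: the lock's lifetime is the transaction's lifetime, which is exactly the property the TTL dance in Redis is trying (and failing) to approximate.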
Why Infrastructure Simplicity Wins
Every piece of infrastructure you add is another thing that can fail, another thing to monitor, and another thing to keep secure. By moving your distributed locking logic to your database, you eliminate 'infrastructure sprawl.' You no longer need to worry about the 'dual-write' problem—where you successfully update Redis but fail to update your database—because your lock and your data reside in the same ACID-compliant environment.
Furthermore, PostgreSQL provides the 'fencing' capabilities that Redis lacks. By using serializable transactions or simple sequence-based tokens, you can ensure that even if a process experiences a long pause, it cannot commit its work if its lock has been superseded. This level of safety is nearly impossible to achieve with Redlock-style Redis locks without significant custom code.
Addressing the Bottleneck Myth
The most common argument against using a database for locking is scalability. 'The database will become a bottleneck!' critics cry. While it's true that a dedicated Redis instance can handle millions of operations per second, ask yourself: does your application actually need that? Most business-critical operations—like inventory management or user payouts—don't happen 100,000 times per second on a single resource. PostgreSQL can comfortably handle thousands of lock requests per second without breaking a sweat.
If you reach the point where your database lock contention is actually the primary bottleneck of your entire multi-million dollar business, congratulations! You have a high-class problem, and only then should you consider the architectural complexity of a dedicated coordination service like etcd or ZooKeeper. For the 99% of us, using the pessimistic locking database features we already have is the more professional choice.
Modern Patterns: SKIP LOCKED
Beyond advisory locks, modern PostgreSQL (version 9.5+) introduced the SKIP LOCKED clause. This feature allows you to build high-performance job queues directly in your relational tables. Instead of reaching for Redis and Sidekiq or BullMQ, you can select a row and lock it in one atomic operation, while other workers simply skip over it. This pattern provides the performance of a queue with the safety of a database, further proving that the 'need' for Redis in modern backends is often overstated.
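The claim-a-job query is compact. A sketch assuming a hypothetical `jobs` table with `(id, status, payload)` columns (the schema and the `claim_job` helper are illustrations, not a prescribed design):

```python
# Claim one job atomically: workers skip rows that other workers have
# already locked instead of blocking on them.
CLAIM_JOB_SQL = """
    UPDATE jobs
       SET status = 'running'
     WHERE id = (
           SELECT id
             FROM jobs
            WHERE status = 'queued'
            ORDER BY id
            LIMIT 1
              FOR UPDATE SKIP LOCKED
           )
 RETURNING id, payload;
"""

def claim_job(cur):
    """Lock and claim the next queued job in one round trip.
    `cur` is any DB-API cursor on a PostgreSQL connection."""
    cur.execute(CLAIM_JOB_SQL)
    return cur.fetchone()  # None when the queue is empty
```

Because the row lock and the status update happen in one statement, a crashed worker's transaction rolls back and its job silently returns to the queue, with no visibility timeouts or requeue daemons involved.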
Practical Advice for Architects
Choosing your tools isn't about picking the 'coolest' tech; it's about managing risk and complexity. When you use your database for distributed locking, you are choosing a system with 30 years of research into concurrency, crash recovery, and data integrity. You are choosing a system that simplifies your deployment pipeline and reduces the cognitive load on your team.
Next time you're tempted to spin up a Redis cluster just to handle a few race conditions, take a look at your pg_advisory_lock options first. Your future self—the one who doesn't have to debug a silent data corruption at 3:00 AM—will thank you. Are you still using Redis for your locks, or have you made the switch to 'boring' technology? Let's discuss in the comments.


