Distributed Systems · April 15, 2016

Distributed Locking: etcd vs. Consul (2016)

Aunimeda

It's 2016, and we're all moving towards microservices. One of the biggest challenges is coordination: how do you ensure that only one instance of a service performs a specific task (like a database migration or a scheduled report)? You need a Distributed Lock.

The Raft Consensus Algorithm

Both etcd (the backbone of Kubernetes) and Consul (from HashiCorp) are built on the Raft consensus algorithm. As long as a majority of nodes are reachable, the cluster can still agree on who holds the lock, even when individual nodes fail.
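The fault-tolerance math here is simple majority voting: a cluster of n nodes needs n/2 + 1 votes to commit anything, so it tolerates just under half its nodes failing. A quick sketch (the `quorum` helper is mine, not from either project):

```go
package main

import "fmt"

// quorum is the number of votes a Raft cluster of n nodes needs
// to commit an entry: a strict majority.
func quorum(n int) int { return n/2 + 1 }

func main() {
	for _, n := range []int{3, 5, 7} {
		fmt.Printf("%d nodes: quorum=%d, tolerates %d failures\n",
			n, quorum(n), n-quorum(n))
	}
}
```

This is why production clusters run 3 or 5 nodes: going from 3 to 4 adds a machine without adding any failure tolerance.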

Locking with etcd v3

The newly released etcd v3 uses a gRPC API and the concept of "Leases": a lease is a TTL attached to a set of keys, so if the client stops sending keepalives, the keys (and any lock they back) disappear automatically.

// Go example using etcd v3 (packages github.com/coreos/etcd/clientv3
// and github.com/coreos/etcd/clientv3/concurrency)
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
if err != nil {
    log.Fatal(err)
}
defer cli.Close()

// The session keeps a lease alive in the background; if this process
// dies, the lock is released after the 10-second TTL.
s, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
if err != nil {
    log.Fatal(err)
}
defer s.Close()

m := concurrency.NewMutex(s, "/my-service-lock")

// Acquire the lock (blocks until it is available)
if err := m.Lock(context.TODO()); err != nil {
    log.Fatal(err)
}

// Perform critical section...
fmt.Println("I have the lock!")

if err := m.Unlock(context.TODO()); err != nil {
    log.Fatal(err)
}

Locking with Consul

Consul uses a simpler HTTP API and "Sessions." A session is tied to a health check; if the node becomes unhealthy, the lock is automatically released.

# Acquire a lock via curl (the URL must be quoted, or the shell will
# interpret the ? and <...> characters)
curl -X PUT -d 'my-lock-data' \
  'http://localhost:8500/v1/kv/my-service/lock?acquire=<session_id>'
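The <session_id> comes from Consul's session API. A minimal end-to-end flow might look like this (assuming a local agent on the default port 8500; the session name and 15-second TTL are illustrative choices):

```shell
# 1. Create a session; Consul responds with JSON like {"ID":"..."}
SESSION_ID=$(curl -s -X PUT -d '{"Name": "my-service", "TTL": "15s"}' \
  http://localhost:8500/v1/session/create | sed 's/.*"ID":"\([^"]*\)".*/\1/')

# 2. Try to acquire the lock; prints "true" on success,
#    "false" if another session already holds it
curl -s -X PUT -d 'my-lock-data' \
  "http://localhost:8500/v1/kv/my-service/lock?acquire=$SESSION_ID"

# ... perform critical section, renewing the session before the TTL expires:
curl -s -X PUT "http://localhost:8500/v1/session/renew/$SESSION_ID"

# 3. Release the lock explicitly when done
curl -s -X PUT "http://localhost:8500/v1/kv/my-service/lock?release=$SESSION_ID"
```

Note the acquire call is atomic: Consul compares sessions server-side, so two instances racing on the same key cannot both see "true".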

Comparison: Which one to choose?

  • etcd is the better choice if you are already in the Kubernetes ecosystem. Its v3 API is extremely powerful but has a steeper learning curve.
  • Consul is more than just a KV store; it's a full service-discovery solution. Its DNS interface and built-in health checks make it a more "all-in-one" tool for traditional deployments.

In 2016, the choice between them often comes down to your existing infrastructure. But regardless of the tool, the goal is the same: strong consistency over an unreliable network.
