Distributed Locking: etcd vs. Consul
It's 2016, and we're all moving towards microservices. One of the biggest challenges is coordination: how do you ensure that only one instance of a service performs a specific task (like a database migration or a scheduled report)? You need a Distributed Lock.
The Raft Consensus Algorithm
Both etcd (the backbone of Kubernetes) and Consul (from HashiCorp) are built on the Raft consensus algorithm. Raft replicates every write to a majority of nodes before acknowledging it, so the cluster still agrees on who holds the lock as long as a quorum survives: a five-node cluster, for example, tolerates two node failures, because the remaining three still form a majority.
Locking with etcd v3
The newly released etcd v3 exposes a gRPC API and introduces "Leases": a lease is a time-to-live that the client must keep refreshing, and any key attached to it is deleted automatically once it expires. The clientv3/concurrency package builds a distributed mutex on top of this.
// Go example using etcd v3 (import paths in 2016:
// github.com/coreos/etcd/clientv3 and github.com/coreos/etcd/clientv3/concurrency)
cli, err := clientv3.New(clientv3.Config{Endpoints: []string{"localhost:2379"}})
if err != nil {
	log.Fatal(err)
}
defer cli.Close()
// A session wraps a lease with a 10-second TTL and keeps it alive in the background
s, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
if err != nil {
	log.Fatal(err)
}
defer s.Close()
m := concurrency.NewMutex(s, "/my-service-lock")
// Acquire the lock (blocks until it is available)
if err := m.Lock(context.TODO()); err != nil {
	log.Fatal(err)
}
// Perform critical section...
fmt.Println("I have the lock!")
if err := m.Unlock(context.TODO()); err != nil {
	log.Fatal(err)
}
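The mutex above is built on a lease. To make the concept concrete, here is a minimal sketch against the lease API directly, reusing the cli client from the snippet above (the key name /my-service/heartbeat is just an illustration):
// Grant a lease with a 10-second TTL
lease, err := cli.Grant(context.TODO(), 10)
if err != nil {
	log.Fatal(err)
}
// Attach a key to the lease; etcd deletes the key when the lease expires
if _, err := cli.Put(context.TODO(), "/my-service/heartbeat", "alive", clientv3.WithLease(lease.ID)); err != nil {
	log.Fatal(err)
}
// Refresh the lease in the background for as long as this process lives
if _, err := cli.KeepAlive(context.TODO(), lease.ID); err != nil {
	log.Fatal(err)
}
If the process crashes, the keep-alives stop, the lease expires, and the key disappears on its own. This is exactly how the mutex guarantees that a dead lock holder cannot hold the lock forever.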
Locking with Consul
Consul uses a simpler HTTP API and "Sessions." A session is tied to the node's health checks (and optionally a TTL); if the node becomes unhealthy or the session expires, the lock is released automatically. Acquiring a lock is therefore a two-step process: create a session, then PUT the key with that session's ID.
# Create a session first; the response body contains its ID
curl -X PUT http://localhost:8500/v1/session/create
# Acquire the lock with that session ID (Consul returns true on success)
curl -X PUT -d 'my-lock-data' "http://localhost:8500/v1/kv/my-service/lock?acquire=<session_id>"
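For Go services, the official client library (github.com/hashicorp/consul/api) wraps this create-session-then-acquire dance in a Lock helper. A minimal sketch, assuming a local agent on the default port:
// Go example using the official Consul client (github.com/hashicorp/consul/api)
client, err := api.NewClient(api.DefaultConfig())
if err != nil {
	log.Fatal(err)
}
lock, err := client.LockKey("my-service/lock")
if err != nil {
	log.Fatal(err)
}
// Lock blocks until the lock is acquired; it returns a channel that
// closes if the lock is later lost, which real code should watch
if _, err := lock.Lock(nil); err != nil {
	log.Fatal(err)
}
// Perform critical section...
fmt.Println("I have the lock!")
if err := lock.Unlock(); err != nil {
	log.Fatal(err)
}
Under the hood this creates the session, acquires the key just like the curl call above, and renews the session for you.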
Comparison: Which one to choose?
- etcd is the better choice if you are already in the Kubernetes ecosystem. Its v3 API is extremely powerful but has a steeper learning curve.
- Consul is more than just a KV store; it's a full service-discovery solution. Its DNS interface and built-in health checks make it a more "all-in-one" tool for traditional deployments.
In 2016, the choice between them often comes down to your existing infrastructure. But whichever tool you pick, the goal is the same: one consistent answer to "who holds the lock?", even over an unreliable network.