
Leader Election and Sharding Practices at Wix Microservices

Photo by Glen Carrie on Unsplash

Wix’s distributed system of 2000 clustered microservices has to process billions of business events every day, at very high speed and with a high degree of concurrency.

There is a need to balance the load between the various cluster nodes, such that no bottlenecks are created. For serving HTTP requests, this can be done by load balancers such as NGINX or Amazon’s ELB — this is out of scope for this article.

A service acting as a client may also need to load-balance its calls in certain cases, for example when it initializes an internal cache with data retrieved from a different service.

There are also many cases where events and actions have to be processed in an atomic manner so that the stored data remains valid, e.g. changing an account balance or updating inventory.

In this blog post, we will explore various practices used by Wix microservices that ensure atomic operation for updating the state of some resource (e.g. a cache or a DB entry), thus keeping the data valid but without compromising on high throughput and low latency.

The following practices are divided by their operation “granularity”:

  1. Selecting a single leader service node to run a task or a bunch of tasks
  2. Sharding the retrieval of a large dataset by multiple “leader” nodes
  3. Processing events sequentially for a single domain entity by any random service node

1. Selecting a Leader for scheduling tasks using ZooKeeper

Motivation
There are many services at Wix that are required to perform scheduled tasks.

Let’s consider, for example, the Contacts Importer Service, which imports Wix site owners’ contacts from external sources such as Gmail.

The Importer service’s DB accumulates metadata for many import jobs. This metadata becomes stale and can be deleted or archived once the import process completes; otherwise the DB will keep growing and its response times will get slower.

Scheduling a cron job
A periodic cleaning job needs to be scheduled to perform the DB deletion operations.

Note that when cron jobs are scheduled by a clustered microservice comprising multiple nodes, only one node must be in charge of scheduling any given task.

Otherwise, the cleaning task could potentially run more than once at the same time, causing unintended race conditions (such as ending up with an incorrect import job state) and putting extra load on the DB.

Wix has a Cron Scheduler service called Cronulla that makes sure jobs are triggered on just one of the client service’s nodes. It accepts requests to schedule a REST call to the client service, together with a cron expression string.

e.g.: "0 7 * * * *"
This cron expression means: run once every hour, at the 7th minute.
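Reading the six fields left to right, with a leading seconds field (as the explanation above implies):

```
0 7 * * * *
│ │ │ │ │ └── day of week   (any)
│ │ │ │ └──── month         (every month)
│ │ │ └────── day of month  (every day)
│ │ └──────── hour          (every hour)
│ └────────── minute        (the 7th minute)
└──────────── second        (second 0)
```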

ZooKeeper and Curator
In order to make sure the requested job is sent only once per hour to the Contacts Importer service, Cronulla enlists the help of Apache ZooKeeper. ZooKeeper is a centralized service used to coordinate distributed systems.

Cronulla uses the Apache Curator library, which is a high-level, robust ZooKeeper client. It offers built-in recipes, including shared counters and locks. The relevant recipe for Cronulla’s use case is leader election.

Following are the steps to take in order to configure Curator to execute a task on a single leader (a minimal code sketch follows the steps):

  1. First, the Curator client is built, including the ZooKeeper connection string.

  2. Then a LeaderSelector (the leader election recipe abstraction) is created. It is provided with the following parameters:

  • The Curator client itself
  • The path to a unique ZooKeeper ZNode representing this leadership group
  • A LeaderSelectorListenerAdapter, which defines the action to take once this node becomes leader (more details in step 4)

  3. The LeaderSelector is then set to autoRequeue() so that it puts itself back in the election pool after it has relinquished leadership.

  4. The LeaderSelectorListenerAdapter defines a takeLeadership callback, where actions can be performed now that this node is the leader. In our case, the performed actions are the scheduled cron tasks.

It is important to periodically check whether this thread has been interrupted: this is an indicator that leadership must be relinquished and cron jobs should no longer be executed on this node.
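Here is a minimal sketch of these steps in Java, assuming a standard Apache Curator setup; the ZooKeeper connection string, the ZNode path, and the runScheduledCronTasks task runner are hypothetical placeholders:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderSelector;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class CronLeaderElection {

    public static void main(String[] args) {
        // Step 1: build the Curator client with the ZooKeeper connection string.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "zk1:2181,zk2:2181,zk3:2181",          // hypothetical connection string
                new ExponentialBackoffRetry(1000, 3));
        client.start();

        // Step 2: create the LeaderSelector with the client, a unique ZNode path
        // for this leadership group, and a listener defining the leader's action.
        LeaderSelector selector = new LeaderSelector(
                client,
                "/leaders/contacts-importer-cleanup",  // hypothetical ZNode path
                new LeaderSelectorListenerAdapter() {
                    @Override
                    public void takeLeadership(CuratorFramework client) throws Exception {
                        // Step 4: this node is now the leader; keep running the
                        // scheduled cron tasks until the thread is interrupted,
                        // which signals that leadership must be relinquished.
                        while (!Thread.currentThread().isInterrupted()) {
                            runScheduledCronTasks();
                            Thread.sleep(60_000);
                        }
                        // Returning from this method relinquishes leadership.
                    }
                });

        // Step 3: re-enter the election pool after relinquishing leadership.
        selector.autoRequeue();
        selector.start();
    }

    private static void runScheduledCronTasks() {
        // Hypothetical placeholder for the actual work, e.g. the stale
        // import jobs cleanup described above.
    }
}
```

When takeLeadership returns or throws (e.g. on interruption), leadership is relinquished, and thanks to autoRequeue() the node rejoins the election pool so it can be elected again later.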

For more information about the Curator API and its usage, visit this detailed blog post.

Analysis
ZooKeeper Server together with the Curator client provides a powerful and relatively simple way to coordinate distributed microservices, especially for leader election and for guaranteeing atomic processing of scheduled tasks.

In reality, Wix’s Cron Scheduler service is more complex: it also uses Apache Kafka and Greyhound (Wix’s Kafka client) in order to guarantee eventual successful processing of tasks, using consumer-side retries.

Having a single leader eliminates concurrency issues such as race conditions and the need to heal corrupted state, but on the other hand it introduces a single point of failure and limits the ability to scale.
