Competing Consumers Pattern Explained

In the Competing Consumers pattern, multiple consumers compete for messages on the same message channel so that messages can be processed concurrently.

This pattern is useful when you want to process a discrete set of tasks asynchronously by distributing them among parallel consumers. In return, you’ll get a scalable, reliable, and resilient message processing system.

Let’s explore that with an example.

Suppose a component P asks a component C to perform a task that takes 5 minutes to complete on average.

P to C synchronous invocation

Synchronous communication between P and C is frowned upon for several reasons. Most importantly, P shouldn’t be blocked while C completes the task. Also, a 5-minute task is far too long to handle within a typical HTTP request timeout.

As a solution, we can make this communication asynchronous by placing a message queue between P and C. P encapsulates each task as a message and sends it to the message queue. C polls the queue to pick up tasks and processes them asynchronously. Thus, P is not blocked while C is processing a task.

P encapsulates the task as a message and sends it to the queue. C polls the queue and processes the task.
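To make this concrete, here is a minimal in-process sketch in Python. The `queue.Queue` stands in for a real message broker (RabbitMQ, SQS, and the like), and the `producer_p` / `consumer_c` names are purely illustrative.

```python
import queue
import threading
import time

# queue.Queue stands in for a real message broker; P and C are plain functions.
task_queue = queue.Queue()

def producer_p():
    # P encapsulates each task as a message and returns immediately;
    # it never blocks waiting for C to finish.
    for task_id in range(5):
        task_queue.put({"task_id": task_id, "payload": f"work item {task_id}"})
        print(f"P: enqueued task {task_id}")

def consumer_c():
    # C polls the queue and processes tasks at its own pace.
    while True:
        message = task_queue.get()   # blocks until a message is available
        print(f"C: processing task {message['task_id']}")
        time.sleep(0.1)              # stand-in for the 5-minute task
        task_queue.task_done()

threading.Thread(target=consumer_c, daemon=True).start()
producer_p()
task_queue.join()  # wait until every enqueued task has been processed
```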

However, having a single instance of C is not scalable. If C goes down, there’s no consumer to replace it and pick up its workload. A single C also has to keep up with the rate at which P puts messages into the queue. Just imagine: if C needs 5 minutes to complete a task, what happens when 100,000 tasks are waiting in the queue? Processed one at a time, that backlog would take nearly a year to clear.

How can we scale this up to gain better throughput, scalability, and availability?

The Competing Consumers pattern enables multiple concurrent consumers to process messages received on the same messaging channel.

In our example, we can have multiple instances of C, competing for messages on the same queue. They will concurrently process more messages to drain the queue faster.

When a message is available on the message queue, any of the consumers could potentially receive it. The messaging system’s implementation determines which consumer receives the message, but in effect, the consumers compete with each other to be the receiver.

The figure illustrates work items distributed among a pool of consumers via a message queue.

The Competing Consumers Pattern
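The same idea, sketched with a consumer pool: several identical consumer threads call `get()` on one shared queue, so whichever is free takes the next message. Again, `queue.Queue` is only a stand-in for a real broker, and the pool size is arbitrary.

```python
import queue
import threading
import time

task_queue = queue.Queue()
CONSUMER_COUNT = 4  # size of the consumer pool (arbitrary for this sketch)

def consumer(worker_id: int):
    # Every consumer runs the same loop; whichever one calls get() first
    # receives the next message, so the instances compete for work.
    while True:
        task_id = task_queue.get()
        print(f"consumer {worker_id} picked up task {task_id}")
        time.sleep(0.1)  # simulated processing
        task_queue.task_done()

# Start a pool of identical, interchangeable consumers.
for worker_id in range(CONSUMER_COUNT):
    threading.Thread(target=consumer, args=(worker_id,), daemon=True).start()

# The producer neither knows nor cares how many consumers exist.
for task_id in range(20):
    task_queue.put(task_id)

task_queue.join()
```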

Distributing asynchronous work items across a consumer pool improves throughput, reliability, and scalability.

The consumer pool can be scaled up or down based on the length of the queue. If each consumer runs in a VM, a container, or as a serverless function, appropriate auto-scaling policies can keep up with demand while optimising cost.
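As an illustration of such a scaling rule, here is a hypothetical policy that sizes the pool from the queue length. The thresholds, limits, and target ratio are assumptions for the sketch, not any particular platform’s API.

```python
# Hypothetical auto-scaling rule driven by queue depth.
def desired_consumer_count(queue_length: int,
                           tasks_per_consumer: int = 10,
                           min_consumers: int = 1,
                           max_consumers: int = 50) -> int:
    # Aim for roughly `tasks_per_consumer` queued tasks per consumer,
    # clamped to the allowed pool size.
    wanted = -(-queue_length // tasks_per_consumer)   # ceiling division
    return max(min_consumers, min(max_consumers, wanted))

print(desired_consumer_count(0))        # 1  (never scale below the minimum)
print(desired_consumer_count(100))      # 10
print(desired_consumer_count(10_000))   # 50 (capped at max_consumers)
```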

If the consumer pool is exhausted (all consumers are occupied or unresponsive), message producers can still put messages in the queue, keeping the system at least partially functional.

The message queue acts as a buffer, absorbing messages until consumers become available to process them. That prevents message loss and supports an at-least-once delivery guarantee.

If a consumer fails while processing a message, the message is returned to the queue (immediately, or once its acknowledgement deadline or visibility timeout expires) to be picked up by another consumer.
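A rough sketch of that retry behaviour, continuing the in-process examples above: if processing raises, the consumer puts the message back (with a hypothetical `attempts` counter) so another consumer can pick it up. With a real broker, acknowledgements or visibility timeouts handle this for you.

```python
import queue

def consume_with_retry(task_queue: queue.Queue, process, max_attempts: int = 3):
    # Assumes dict-shaped messages like the earlier sketches.
    while True:
        message = task_queue.get()
        try:
            process(message)
        except Exception:
            attempts = message.get("attempts", 0) + 1
            if attempts < max_attempts:
                # Return the message to the queue for another consumer to retry.
                task_queue.put({**message, "attempts": attempts})
            else:
                print(f"giving up on task {message['task_id']} after {attempts} attempts")
        finally:
            task_queue.task_done()
```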

The Competing Consumers pattern is not a silver bullet for every solution that requires multiple consumers to process messages concurrently on the same message queue. The reason is the nature of the workload: not all tasks are made equal.

Let’s explore several use cases where this pattern is an ideal fit.

1. The application workload is divided into tasks that can run asynchronously

This pattern works well if the task producer and task consumer communicate asynchronously. That is, the task-producing logic doesn’t have to wait for a task to complete before continuing.

If the task producer expects a response from the task consumer in a synchronous manner, this pattern is not a good option.

2. Tasks are independent and can run in parallel

The tasks should be discrete and self-contained. There shouldn’t be a high degree of dependence between tasks.

3. The volume of work is highly variable, requiring a scalable solution

4. The solution must provide high availability and must be resilient if processing a task fails

This makes the pattern ideal for reliable message-processing use cases.

