
Differences between Synchronous and Non-Blocking Processing Strategies

Posted on: November 5, 2018
Jose Jurado

Mule offers three main processing strategies: Synchronous, Queued Asynchronous and Non-Blocking (the latter introduced in Mule 3.7). Before Mule 3.7, any flow that started with an inbound endpoint whose exchange pattern was set to request-response had its processing strategy set to Synchronous.

As shown in the diagram below, the Synchronous Processing Strategy implies that the same thread (taken from the connector of the inbound endpoint) is used for the entire lifetime of the message being processed. Hence, when a request is made to an endpoint that takes a long time to respond, the thread is tied up waiting for the response to come back.

Synchronous Processing Strategy Diagram (Image extracted from the Mulesoft documentation)

On the other hand, the Non-Blocking Processing Strategy behaves similarly to the synchronous one up to a point. However, when an outbound endpoint that supports the non-blocking exchange pattern is encountered, the thread is placed back in the pool, and when the response comes back from the outbound endpoint it is processed by a thread borrowed from the flow's thread pool.

In this blog post, we demonstrate through experiments the behaviour of the Synchronous and Non-Blocking Processing Strategies in Mule 3. The flow below is used throughout the examples to compare the two processing strategies:

Synchronous PS flow used in this example

This flow's single purpose is to show how these two processing strategies behave when they encounter HTTP elements. Mainly, we focus on how threads are managed when receiving several concurrent calls.

Regarding the two HTTP outbound endpoints, the only thing we need to know is that they consume some time before returning a response, e.g. because they call a REST API. The ‘Logger’ elements show the names of the threads being used at every point of the flow execution, and the ‘Variable’ element stores certain values used in the loggers.
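The original post shows the flow only as a Studio screenshot, but it can be sketched as Mule 3 XML configuration along the following lines (all names, hosts and paths below are hypothetical placeholders, not taken from the original project):

```xml
<!-- Sketch of the example flow: listener, loggers, a variable and two
     slow HTTP outbound endpoints. Names/hosts/paths are placeholders. -->
<http:listener-config name="HTTP_Listener_Configuration"
                      host="0.0.0.0" port="8081"/>
<http:request-config name="HTTP_Request_Configuration"
                     host="example.org" port="80"/>

<flow name="processing-strategy-demo" processingStrategy="synchronous">
    <http:listener config-ref="HTTP_Listener_Configuration" path="/test"/>
    <logger level="INFO"
            message="Start - thread: #[Thread.currentThread().getName()]"/>
    <set-variable variableName="requestId" value="#[message.id]"/>
    <http:request config-ref="HTTP_Request_Configuration"
                  path="/slow-api" method="GET"/>
    <logger level="INFO"
            message="After first request - thread: #[Thread.currentThread().getName()]"/>
    <http:request config-ref="HTTP_Request_Configuration"
                  path="/another-slow-api" method="GET"/>
    <logger level="INFO"
            message="End - thread: #[Thread.currentThread().getName()]"/>
</flow>
```

The MEL expression `#[Thread.currentThread().getName()]` is what lets the loggers print which thread is executing each step of the flow.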

First, we run this flow using the Synchronous Processing Strategy. We make three calls from three different clients; in this case, three different browsers launching three consecutive calls, like this:

We use three different client-browsers to make the calls for the first example


Representation of the threads used in the execution for Synchronous PS

Here we can see that each execution uses a different thread. The thread is used from the beginning to the end of the flow execution, and all of them are ‘HTTP Listener Threads’.

Now, we execute the same flow with the same calls, but this time the flow has a Non-Blocking processing strategy:

Modifying the Processing Strategy in a flow

Non-Blocking PS flow used in this example
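In Mule 3 XML, this change amounts to setting the flow's `processingStrategy` attribute (the flow name below is a hypothetical placeholder):

```xml
<flow name="processing-strategy-demo" processingStrategy="non-blocking">
    <!-- same message processors as in the synchronous version -->
</flow>
```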

Results:

Representation of the threads used in the execution for Non-Blocking PS

Here we can see that the three executions share more or less the same threads. This is because, in this case, a thread is freed up whenever it is not being used.

Note: Naturally, for different runs of this example the order of the threads used will not always be the same, but the number of threads used will be the same or very similar. In this particular execution, we forced ‘Logger-2’ to take more time so as to give a visually clearer example.

For this example, there seems to be no major difference between the results of the two executions, but we can already point out the most important difference in how the processing strategies behave: in the flow with the Non-Blocking PS, when a call to an HTTP element is made, the thread being used is released. The Synchronous processing strategy is different: the thread waits for the HTTP request to return a response before it can continue.

This is why each execution in the Synchronous PS always uses the same thread, whereas in the Non-Blocking PS a single execution can use several threads (and the same thread can be shared between different executions).

Although the behaviour of Non-Blocking PS does not entail any appreciable improvement in this first example, we can see how it could be useful in other cases, as shown in the following example.

Before seeing the next example, we can summarize the differences between the two processing strategies in the following table:

Summary table of the differences between both Processing Strategies

Therefore Non-Blocking processing strategy behaviour can be represented in a diagram like this:

Non-Blocking Processing Strategy Diagram (Image extracted from the Mulesoft documentation)

As both cases use three threads, only in a different order, we might conclude that there is not a big difference between them. Well, let's look at the following example.

Here we make 20 concurrent requests for each of the two processing strategies we are analysing. This time we do it from a bash console on Windows by executing a command like this:

Note: You can use any other tool which allows you to make multiple concurrent requests to the same URL.
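The command itself appears only as a screenshot in the original post; as a sketch, 20 concurrent requests could be fired with curl from a bash console like this (the URL is a hypothetical placeholder for the flow's HTTP listener address):

```shell
# Fire 20 GET requests in parallel at the flow's HTTP listener.
# http://localhost:8081/test is a placeholder; adjust host, port and path
# to match your own http:listener configuration.
seq 1 20 | xargs -n1 -P20 -I{} curl -s -o /dev/null "http://localhost:8081/test"
```

The `-P20` flag tells xargs to run up to 20 curl processes at once, which is what makes the requests concurrent rather than sequential.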

For a Synchronous PS:

And this is the output shown in the console. As we can see, 20 threads are used for this case.

Note: Regarding the maximum number of active threads: for this example, we have not modified the default values. For the Synchronous-PS flow, the maximum is defined by the HTTP Listener Connector, which uses the ‘default worker threading profile’ that sets maxThreadsActive to 128. The Non-Blocking-PS flow's thread pool also has a default maximum of 128 threads (which differs from the default maximum in the other processing strategies, but happens to match the default used by the HTTP listener), so we have a maximum of 128 threads available in both scenarios shown in this example.

Now we run the corresponding command for the Non-Blocking Processing Strategy:

Output: Non-Blocking PS output console. In this case, only 5 threads are used.

Clearly, the Non-Blocking PS consumes fewer threads than the Synchronous PS, and this can affect the performance of our application, because otherwise the number of threads available at a given point of the execution could be insufficient.

Imagine a scenario where our outbound endpoints call APIs that take a long time to reply and our application is receiving a high number of messages.

Note: For these examples too, the number of threads used can vary depending on several factors, such as the response time of each HTTP outbound endpoint, but, as a general rule, the Non-Blocking PS will require fewer threads than the Synchronous PS.

It is true that one way to achieve this with the Synchronous processing strategy is to increase the ‘listener thread pool size’, but this is not a good solution: it involves modifying the pool size every time the concurrency changes, or setting a very large number of threads, which is not a good idea either, since threads consume resources and could eventually overload the OS.
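For reference, in Mule 3 the listener's worker pool is tuned through a worker threading profile on the listener configuration; a hypothetical sketch (the value 256 is only an illustration, and the config name is a placeholder):

```xml
<http:listener-config name="HTTP_Listener_Configuration"
                      host="0.0.0.0" port="8081">
    <!-- Raise the listener's worker pool above the 128-thread default.
         256 is an arbitrary illustrative value. -->
    <http:worker-threading-profile maxThreadsActive="256"/>
</http:listener-config>
```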

To be clear, we are talking about the number of threads used; in scenarios where all the needed threads are available, the Non-Blocking PS does not imply an improvement in execution time with respect to the Synchronous PS.

In cases where components that do not support non-blocking I/O are used, the Non-Blocking processing strategy is not recommended (in those cases the processing model reverts to synchronous execution from that point in the flow onwards).

Note: It is not the purpose of this post to give a detailed explanation of when to use the Non-Blocking PS and when not to (for that, you can follow the Mulesoft documentation link at the end of this post). But, as general rules, we can mention:

  • Only HTTP is really non-blocking.
  • Other connectors can work in (but not take advantage of) a non-blocking flow.
  • Some message processors revert the processing strategy because they cannot work in non-blocking flows.


As we observed from the logs produced through our sample flows, the major difference between the Synchronous and Non-blocking processing strategies is that the non-blocking processing strategy relieves threads from the burden of waiting for a response.

By reducing the amount of time threads spend waiting for responses, we can achieve a higher throughput of messages, mainly because we relieve the thread pools of waiting threads. Note that the performance advantage is more significant when the outbound endpoint (that supports the non-blocking processing strategy) has higher latency.

> More information on Mulesoft website


