Decomposing modular monoliths into Microservices

Saharsh Sinha
5 min read · Apr 6, 2021

Part III of the “Monolithing in the era of Microservices” series

Now that we’ve seen how a Component is structured and consumed, let’s look at how to pull a micro-component out of a monolith into a microservice. Let’s take the eShopOn.Ordering service as the example, assuming that it now needs to scale independently of the monolith because of the large number of orders that have started coming in. The first thing to do is move the Ordering component into a separate solution folder called Services:

Next up, we add a Web API project that routes incoming HTTP calls to the underlying Core implementation via the declaration library interfaces using Autofac. The MonolithWebApp will need to start communicating with this Web API over HTTPS for Order-related functionality. With the addition of this public interface, the eShopOn.Ordering Component has “grown” into a Service.
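As a rough sketch (the controller, route and GetOrderAsync method below are illustrative assumptions rather than the actual sample code), the Web API layer is little more than a thin HTTP shim over the existing interface:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Hypothetical controller in the eShopOn.Ordering.API project. IOrderProvider
// comes from the Ordering declaration library and is resolved by Autofac to
// the Core implementation, so this layer only translates HTTP into method calls.
[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    private readonly IOrderProvider _orderProvider;

    public OrdersController(IOrderProvider orderProvider)
    {
        _orderProvider = orderProvider;
    }

    // GET api/orders/{orderId} — GetOrderAsync is an assumed interface method.
    [HttpGet("{orderId}")]
    public async Task<IActionResult> GetOrder(int orderId)
    {
        var order = await _orderProvider.GetOrderAsync(orderId);
        if (order is null)
        {
            return NotFound();
        }
        return Ok(order);
    }
}
```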

Now that we have an Ordering service, the MonolithWebApp needs to communicate with this service instead of the in-process implementation. To enable service communication with minimal disruption to the consuming code within the monolith, let’s write a client library that implements the same IOrderProvider interface that the MonolithWebApp currently depends on. So now we have two classes that implement IOrderProvider. One is the original OrderProvider class within the Definition library (the one that provides the actual Order functionality). The other is the new OrderClient class, which is basically a client proxy whose only job is to relay in-process method calls over HTTP to the Ordering web service and then relay the response back to the calling code.
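Here is a minimal sketch of what OrderClient could look like, assuming the same illustrative GetOrderAsync method and an Order model from the declaration library; System.Net.Http.Json handles the serialization in this sketch:

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Hypothetical OrderClient in eShopOn.Ordering.API.Client. It satisfies the
// same IOrderProvider contract as the in-process OrderProvider, but relays
// each call over HTTP to the Ordering service and hands the response back
// to the caller as if nothing had changed.
public class OrderClient : IOrderProvider
{
    private readonly HttpClient _httpClient;

    // The HttpClient's BaseAddress is expected to point at the Ordering Web API.
    public OrderClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    // Assumed interface method, mirroring the controller sketch above.
    public Task<Order> GetOrderAsync(int orderId)
    {
        return _httpClient.GetFromJsonAsync<Order>($"api/orders/{orderId}");
    }
}
```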

Cool. Now we need to configure the MonolithWebApp to use the OrderClient instead of the OrderProvider. This can be achieved simply by making use of the Autofac module feature that we saw in Part II of this series. The Autofac control flow in the MonolithWebApp worked as below; a sketch of what this chain might look like in code follows the list.

1. Startup.cs →
2. Call WebApp.DependencyConfig.Load() →
3. Call eShopOn.Ordering.DependencyConfig.Load() →
4. Load eShopOn.Ordering.Config.Dependency.json file →
5. Process eShopOn.Ordering.Core.DependencyConfig module →
6. Call eShopOn.Ordering.Core.DependencyConfig.Load()
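Here is a rough sketch of what this chain might look like in code, assuming the series wires the JSON file through Autofac’s configuration support (the Autofac.Configuration package). The actual DependencyConfig classes in the repo may differ:

```csharp
using Autofac;
using Autofac.Configuration;
using Microsoft.Extensions.Configuration;

namespace eShopOn.Ordering
{
    // One plausible shape for the Load() called at step 3. Step 4 reads the
    // JSON file; at steps 5-6, Autofac instantiates whichever module the file
    // names (e.g. eShopOn.Ordering.Core.DependencyConfig) and runs its Load().
    public static class DependencyConfig
    {
        public static void Load(ContainerBuilder builder)
        {
            var config = new ConfigurationBuilder()
                .AddJsonFile("eShopOn.Ordering.Config.Dependency.json")
                .Build();

            builder.RegisterModule(new ConfigurationModule(config));
        }
    }
}
```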

At step 4 (loading the eShopOn.Ordering.Config.Dependency.json file), Autofac loads modules based on the configuration within the JSON. As we already saw in Part II, changing the module here does the trick for us. By swapping the module “eShopOn.Ordering.Core.DependencyConfig, eShopOn.Ordering.Core” with “eShopOn.Ordering.API.Client.DependencyConfig, eShopOn.Ordering.API.Client”, we’re configuring Autofac to inject OrderClient into dependent classes (a sketch of what this client-side module might contain follows the next list). The new Autofac control flow goes as follows:

1. Startup.cs →
2. Call WebApp.DependencyConfig.Load() →
3. Call eShopOn.Ordering.DependencyConfig.Load() →
4. Load eShopOn.Ordering.Config.Dependency.json file →
5. Process eShopOn.Ordering.API.Client.DependencyConfig module →
6. Call eShopOn.Ordering.API.Client.DependencyConfig.Load()
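For completeness, here is roughly what the eShopOn.Ordering.API.Client.DependencyConfig module might contain. The registration details (the HttpClient setup, lifetimes and the base address) are illustrative assumptions; the point is simply that IOrderProvider now resolves to OrderClient:

```csharp
using System;
using System.Net.Http;
using Autofac;

namespace eShopOn.Ordering.API.Client
{
    // Hypothetical client-side module named in the Dependency.json. When Autofac
    // processes it (step 5), its Load() binds IOrderProvider to the HTTP proxy,
    // so dependent classes in the monolith receive OrderClient instead of the
    // in-process OrderProvider.
    public class DependencyConfig : Module
    {
        protected override void Load(ContainerBuilder builder)
        {
            // The Ordering service's base address would normally come from
            // configuration; a placeholder URL is used here.
            builder.Register(_ => new HttpClient
            {
                BaseAddress = new Uri("https://ordering.example.local/")
            }).SingleInstance();

            builder.RegisterType<OrderClient>()
                   .As<IOrderProvider>()
                   .InstancePerLifetimeScope();
        }
    }
}
```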

This summarizes the high-level steps involved in pulling a component out of the monolith and turning it into a microservice.

Web API calls over HTTP are synchronous. Now let’s look at an example of creating an asynchronous interface. Message/event queues are the most common medium for async communication between microservices. For our example, we’ll pull the Basket service out of the MonolithWebApp and create a PubSub Interface (PSI) on top of it. So here’s our Basket component in its original form:

Moving it to Services

Next, we add the PubSub interface within this project to listen for the DeleteBasketById event. Every time there’s a message on the DeleteBasket queue, the PSI library dequeues the message, forwards the request to BasketProvider and then acknowledges the message after successful processing.
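Here is a minimal sketch of what the PSI listener could look like. The article doesn’t dictate a broker, so RabbitMQ is assumed purely for illustration, and the IBasketProvider interface and DeleteBasketById(string) signature are assumptions as well:

```csharp
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// Hypothetical PubSub Interface (PSI) listener for the Basket service.
// RabbitMQ is an assumption here; any broker with ack support works similarly.
public class DeleteBasketListener
{
    private readonly IBasketProvider _basketProvider;

    public DeleteBasketListener(IBasketProvider basketProvider)
    {
        _basketProvider = basketProvider;
    }

    public void Start(IModel channel)
    {
        // Make sure the DeleteBasket queue exists.
        channel.QueueDeclare(queue: "DeleteBasket", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            // Dequeue the message and forward the request to BasketProvider.
            var basketId = Encoding.UTF8.GetString(ea.Body.ToArray());
            _basketProvider.DeleteBasketById(basketId);

            // Acknowledge only after successful processing, so an unprocessed
            // message is redelivered rather than lost.
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        };

        channel.BasicConsume(queue: "DeleteBasket", autoAck: false, consumer: consumer);
    }
}
```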

The example microservice source code can be found here.

So there we have it: the monolith is incrementally being decomposed into individual microservices. Except that it’s really not that straightforward, is it? Beyond migrating the code and hosting the service, here’s a look at just some of the additional considerations involved with microservices:

  1. In-process method calls have practically zero latency. When a call changes from in-process to a remote HTTP call, the network introduces some non-zero latency. While the transition from in-process to remote call is transparent to the client code, the added latency may become visible to the end user if the client code isn’t adept at handling it. Take, for example, three successive in-process calls that get converted into HTTP calls: if the user interaction is blocked on those calls, the increased execution time from the newly introduced network latency might be noticeable to end users.
  2. Fundamental change in nature: moving a component from monolith to microservice is as much a paradigm change as it is a code change. Robust microservices need additional abilities to be successful, such as fault tolerance, availability, reliability, statelessness and resilience. These attributes are not at the forefront when designing Components for a monolith, since the larger Monolith App accounts for them and they automatically trickle down to the components within it. But they need to be accounted for when converting a component into an individual, self-hosted microservice.
  3. Because of the nature of some message and event queues, when employing an async message/event bus, other aspects such as “at least once delivery” or “at most once delivery” start coming into play. For example, at-least-once delivery means that some messages might arrive twice. Does the microservice already know how to handle the second message properly? It’s unlikely that the original, in-process component was written to account for duplicate message delivery. A minimal sketch of one way to handle this follows the list.
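To make the third point concrete, here is a minimal sketch of one way to tolerate at-least-once delivery: track which message IDs have already been handled so that redeliveries become no-ops. IBasketProvider and the in-memory store are illustrative assumptions; a real service would persist the processed IDs.

```csharp
using System.Collections.Concurrent;

// Sketch of an idempotent handler for duplicate deliveries. The processed-ID
// set is kept in memory only for illustration; a production service would
// persist it (and typically expire old entries).
public class IdempotentDeleteBasketHandler
{
    private readonly IBasketProvider _basketProvider;
    private readonly ConcurrentDictionary<string, bool> _processed =
        new ConcurrentDictionary<string, bool>();

    public IdempotentDeleteBasketHandler(IBasketProvider basketProvider)
    {
        _basketProvider = basketProvider;
    }

    public void Handle(string messageId, string basketId)
    {
        // TryAdd returns false when this message ID was already seen,
        // so the second delivery of the same message becomes a no-op.
        if (!_processed.TryAdd(messageId, true))
        {
            return;
        }

        _basketProvider.DeleteBasketById(basketId);
    }
}
```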

I hope this was helpful. If you have questions or feedback, please leave a comment. Thanks for reading.
