Asynchronous microservices


In this article I will share some insights on asynchronous microservices and how to handle asynchronous processing between them.

As I mentioned before, each service has its own data ("each service has its own private database") so that services stay loosely coupled. Because of that, if you want to apply microservices to your application, the first thing you need to do is solve the distributed data management problem (link to the previous post).

First, let's look at the benefits and limitations of the old monolithic model with its single database (after reading this part, you may reconsider and decide to keep that model unchanged) [3].

1. The Problem of Distributed Data Management

With the old monolithic model using only one database, the first benefit it brings is that your application can use ACID transactions, the classic properties a database guarantees in order to preserve integrity when processing any transaction (even in the face of an error or power outage):

  • Atomicity – changes are applied atomically: a single transaction may contain many operations, and either all of them are performed or none of them are.
  • Consistency – the database always moves from one valid state to another (valid -> commit, error -> rollback).
  • Isolation – transactions that run concurrently still appear to execute independently of each other.
  • Durability – once a transaction is committed, its changes are permanent and cannot be lost.

-> Easy to understand, easy to implement: just start a transaction, change (insert, update, delete) some records, and then commit the transaction.
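As a minimal sketch of what that looks like in practice (using Python's built-in sqlite3 module and a made-up CUSTOMER/ORDER schema, not anything from the article itself), a single local transaction covers both writes:

```python
import sqlite3

# One database, one connection, one ACID transaction (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, credit_limit REAL)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.execute("INSERT INTO customer VALUES (1, 1000.0)")
conn.commit()  # commit the setup data before the example transaction

try:
    with conn:  # BEGIN ... COMMIT, or ROLLBACK if an exception is raised
        conn.execute("INSERT INTO orders (customer_id, total) VALUES (?, ?)", (1, 250.0))
        conn.execute("UPDATE customer SET credit_limit = credit_limit - ? WHERE id = ?", (250.0, 1))
except sqlite3.Error:
    # Any failure rolls back both statements together: atomicity in action.
    pass
```

If either statement fails, neither write becomes visible, which is exactly the atomicity guarantee described above.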

The other great thing about using a single database is SQL, a rich, standardized query language. This is probably familiar to you: writing a query that joins tables is really easy, and there is no need to worry about how to reach data owned by another module, because all of your application's data lives in one database, so it is very easy to query.
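Continuing the hypothetical schema and `conn` connection from the sketch above, a query that combines customers and their orders is just one join:

```python
# Continuing the `conn` connection and schema from the previous sketch.
rows = conn.execute(
    """
    SELECT c.id, c.credit_limit, o.id, o.total
    FROM customer AS c
    JOIN orders AS o ON o.customer_id = c.id
    WHERE c.id = ?
    """,
    (1,),
).fetchall()
print(rows)  # e.g. [(1, 750.0, 1, 250.0)]
```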

When applying microservices, we have to solve two problems:

  1. How do we implement transactions that maintain consistency across multiple services?
  • For example, suppose we have two services, Customer Service and Order Service. The Order Service manages orders and must verify that a new order does not exceed the customer's credit limit. In a monolithic application we would simply use an ACID transaction to check the available credit and create the new order. But with microservices the ORDER and CUSTOMER tables belong to two different services: the Order Service cannot query the CUSTOMER table directly and can only use the API provided by the Customer Service.
  2. How do we execute queries that retrieve data from multiple services?
  • For example, you want to display a customer together with their recent orders. In theory, you could use the Order Service's API to retrieve all of that customer's orders and then filter out the data you want. In practice, however, the Order Service may only support looking up orders by primary key, so the required data simply cannot be obtained this way. (A sketch of both problems follows after this list.)
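To make the two problems concrete, here is a rough sketch of what the Order Service has to do once the data is split. The service URLs, endpoints, and JSON shapes are all hypothetical, and the third-party `requests` library is assumed to be installed:

```python
import requests

CUSTOMER_SERVICE = "http://customer-service:8080"  # hypothetical base URL
ORDER_SERVICE = "http://order-service:8080"        # hypothetical base URL

def create_order(customer_id: int, total: float) -> dict:
    # Problem 1: there is no ACID transaction spanning services. The Order Service
    # can only ask the Customer Service about credit through its API, and the
    # answer may already be stale by the time the order row is written.
    resp = requests.get(f"{CUSTOMER_SERVICE}/customers/{customer_id}/credit")
    resp.raise_for_status()
    if resp.json()["available_credit"] < total:
        raise ValueError("credit limit exceeded")
    created = requests.post(f"{ORDER_SERVICE}/orders",
                            json={"customerId": customer_id, "total": total})
    created.raise_for_status()
    return created.json()

def customer_with_recent_orders(customer_id: int) -> dict:
    # Problem 2: "customer + recent orders" is no longer one SQL join. The caller
    # must compose data from two APIs, limited to whatever queries each API exposes.
    customer = requests.get(f"{CUSTOMER_SERVICE}/customers/{customer_id}").json()
    orders = requests.get(f"{ORDER_SERVICE}/orders",
                          params={"customerId": customer_id}).json()
    return {**customer, "recentOrders": orders[:10]}
```

Note that between the credit check and the order insert another request may consume the customer's credit: without a distributed transaction, consistency has to be handled some other way, which is where the event-driven approach below comes in.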

So how do we solve these 2 problems? Let’s find out.

2. Event-Driven Architecture

Event-driven architecture has been applied in many practical systems. But how does it solve both problems? The idea is that a microservice publishes an event whenever something notable happens, for example when it updates a record. Other microservices subscribe to these events. When a microservice receives an event, it updates its own records, which in turn may cause further events to be published.

You can also use events to implement transactions that span multiple services. Such a transaction consists of many steps; in each step, one service updates a record and publishes an event that triggers the next step. The microservices exchange these events via a Message Broker. Let's look at an example to better understand the mechanism.

  1. The Order Service creates a new Order record in the NEW state and publishes an Order Created event
  2. The Customer Service receives the Order Created event, reserves credit for that order, and publishes a Credit Reserved event
  3. The Order Service receives the Credit Reserved event and changes the status of the Order to OPEN (a code sketch of this flow follows below)
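Below is a minimal in-process sketch of this choreography. A real system would use a message broker such as Kafka or RabbitMQ and persistent storage; all class and event names here are illustrative, and the failure path (for example, a credit-limit-exceeded event) is left out for brevity.

```python
from collections import defaultdict

class Broker:
    """Toy in-process "message broker": topic name -> list of handler callbacks."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.handlers[topic]:
            handler(event)

class OrderService:
    def __init__(self, broker):
        self.broker = broker
        self.orders = {}
        broker.subscribe("CreditReserved", self.on_credit_reserved)

    def create_order(self, order_id, customer_id, total):
        # Step 1: create the order in state NEW and publish Order Created.
        self.orders[order_id] = {"customer_id": customer_id, "total": total, "state": "NEW"}
        self.broker.publish("OrderCreated",
                            {"order_id": order_id, "customer_id": customer_id, "total": total})

    def on_credit_reserved(self, event):
        # Step 3: credit was reserved, so move the order to OPEN.
        self.orders[event["order_id"]]["state"] = "OPEN"

class CustomerService:
    def __init__(self, broker, credit_limits):
        self.broker = broker
        self.credit = dict(credit_limits)
        broker.subscribe("OrderCreated", self.on_order_created)

    def on_order_created(self, event):
        # Step 2: reserve credit for the order and publish Credit Reserved.
        self.credit[event["customer_id"]] -= event["total"]
        self.broker.publish("CreditReserved", {"order_id": event["order_id"]})

broker = Broker()
orders = OrderService(broker)
customers = CustomerService(broker, {1: 1000.0})
orders.create_order(order_id=42, customer_id=1, total=250.0)
print(orders.orders[42]["state"])  # -> "OPEN"
```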

Advantages:

  1. It makes it possible to perform transactions that span multiple services while guaranteeing eventual consistency between them.
  2. It also allows an application to maintain materialized views that combine data owned by multiple services.

Drawbacks:

  1. The programming model is more complex than when using ACID transactions.
  2. Subscribers must detect and ignore duplicate events.

Let's look for a solution to problem 1 first.

Using Local Transactions

One way to achieve atomicity is to publish events using a multi-step process involving only local transactions. The trick is to add an EVENT table to the same database that stores the state of the business entities. The application begins a database transaction, updates the state of the entities, inserts an event into the EVENT table, and commits the transaction. A separate thread or process then queries the EVENT table, publishes the events to the Message Broker, and uses another local transaction to mark the events as published.

For example, the Order Service inserts a row into the ORDER table and inserts an Order Created event into the EVENT table. An Event Publisher thread or process queries the EVENT table to identify events that have not yet been published, publishes them, and then updates the EVENT table to mark those events as published.
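Here is a minimal sketch of the EVENT-table idea using sqlite3. The table layout and the stand-in `message_broker_publish` function are assumptions for illustration, not the article's actual schema:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, state TEXT)")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, type TEXT, payload TEXT, published INTEGER DEFAULT 0)")

def create_order(order_id, total):
    # One local transaction updates the business table AND inserts the event row,
    # so either both happen or neither does: no 2PC required.
    with conn:
        conn.execute("INSERT INTO orders VALUES (?, ?, 'NEW')", (order_id, total))
        conn.execute("INSERT INTO event (type, payload) VALUES (?, ?)",
                     ("OrderCreated", json.dumps({"order_id": order_id, "total": total})))

def message_broker_publish(event_type, payload):
    print("published", event_type, payload)  # stand-in for a real broker client

def event_publisher_poll():
    # A separate thread/process: find unpublished events, publish them,
    # then mark them as published in another local transaction.
    rows = conn.execute("SELECT id, type, payload FROM event WHERE published = 0").fetchall()
    for event_id, event_type, payload in rows:
        message_broker_publish(event_type, json.loads(payload))
        with conn:
            conn.execute("UPDATE event SET published = 1 WHERE id = ?", (event_id,))

create_order(42, 250.0)
event_publisher_poll()
```

Because the publisher marks an event only after sending it, a crash between publishing and marking can produce a duplicate, which is exactly why subscribers must deduplicate, as noted earlier.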

This approach has some advantages as well as drawbacks.

Advantages:

  1. It ensures an event is published for every update, without the need for 2PC (two-phase commit).
  2. The application publishes business-level events, so there is no need to infer them from low-level changes.

Drawbacks:

  1. It is error-prone, because the developer must remember to publish an event together with each update.
  2. It can be difficult to apply when using a NoSQL database, because of NoSQL's limited transaction and query capabilities.

Again, with this method we do not need 2PC to update state and publish events; local transactions are enough. If you are still not satisfied, move on to the next method.

Mining a Database Transaction Log

Every update the application makes to the database leaves a record of the change in the database's transaction log, and that is exactly what this method exploits: a Transaction Log Miner reads the transaction log and publishes an event to the Message Broker for each change.

A well-known example is the LinkedIn Databus project. Databus mines the Oracle transaction log and publishes an event corresponding to each change (updating a record, creating a new one, and so on). LinkedIn uses Databus to keep its various data stores consistent with each other.
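Conceptually, a log miner is just a loop that tails the database's change log and turns each low-level change into an event. The sketch below only shows that shape; `read_changes_since` is a purely hypothetical stand-in for a database-specific log reader of the kind Databus or similar tools implement:

```python
import time

def read_changes_since(offset):
    """Hypothetical stand-in for a database-specific transaction-log reader.
    A real miner would decode the binlog/WAL/redo log; here we just return
    (new_offset, list_of_change_records)."""
    return offset, []

def publish(event):
    print("published", event)  # stand-in for a real broker client

def transaction_log_miner(iterations=3, poll_interval=1.0):
    # A real miner loops forever; this demo only polls a few times.
    offset = 0
    for _ in range(iterations):
        offset, changes = read_changes_since(offset)
        for change in changes:
            # Translate a low-level row change into an event and publish it.
            publish({"table": change["table"], "op": change["op"], "row": change["row"]})
        time.sleep(poll_interval)

transaction_log_miner()
```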

As above, this method also has advantages and disadvantages.

Advantages:

  1. It ensures an event is published for every update, without the need for 2PC (two-phase commit).
  2. It simplifies the application by separating event publishing from the business logic.

Drawbacks:

  1. The transaction log format is proprietary to each database and can even change between database versions, so if each service uses a different kind of database we have to handle each log format separately, which quickly becomes a headache.
  2. It is difficult to reverse-engineer high-level business events from the low-level changes recorded in the transaction log.

With this solution, to avoid 2PC we only need to update the database; the events are derived from the transaction log.

Using Event Sourcing

Let's restate the problem: how do we update the database and publish events in the most reliable, atomic way, without being allowed to use 2PC?

Our answer is Event Sourcing. Instead of storing the current state of entities such as Order or Customer, we store them as a sequence of state-changing events. Whenever the state of an entity changes, a new event is appended to its list of events. Since saving a single event is one operation, atomicity is guaranteed.

Events live in the Event Store, a database of events. The store provides an API for appending an entity's events and retrieving them. The Event Store also acts like a Message Broker: it provides an API that lets services subscribe to events and delivers every event to all of its subscribers. The Event Store is the backbone of an event-driven architecture.
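A minimal event-sourcing sketch (in-memory event store, illustrative event names): the Order's current state is never stored directly, it is rebuilt by replaying the entity's events, and the store also notifies subscribers like a message broker.

```python
# Toy in-memory event store: entity id -> append-only list of events.
event_store = {}
subscribers = []

def append_event(entity_id, event):
    # Appending a single event is one operation, so it is atomic by itself.
    event_store.setdefault(entity_id, []).append(event)
    for handler in subscribers:      # the store also acts as a message broker
        handler(entity_id, event)

class Order:
    def __init__(self):
        self.state = None
        self.total = 0.0

    def apply(self, event):
        # Each event type mutates the in-memory state in a well-defined way.
        if event["type"] == "OrderCreated":
            self.state, self.total = "NEW", event["total"]
        elif event["type"] == "CreditReserved":
            self.state = "OPEN"

def load_order(order_id):
    # Rebuild the current state by replaying the entity's events in order.
    order = Order()
    for event in event_store.get(order_id, []):
        order.apply(event)
    return order

append_event(42, {"type": "OrderCreated", "total": 250.0})
append_event(42, {"type": "CreditReserved"})
print(load_order(42).state)  # -> "OPEN"
```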

Like the other methods, event sourcing has its own advantages and drawbacks.

Advantages:

  1. It solves the core problem of an event-driven architecture: an event is published reliably whenever state changes, which solves the data consistency problem in microservices.
  2. It provides a completely reliable audit log of the changes made to an entity.
  3. It makes it possible to run temporal queries that determine the state of an entity at any point in time.
  4. Business logic consists of loosely coupled entities that exchange events, which makes migrating from a monolith to microservices much easier.

Drawbacks:

  1. It is a different and unfamiliar style of programming, so there is a learning curve.
  2. The event store is awkward to query directly, because reconstructing an entity's state requires replaying its events, so the application typically has to use CQRS to implement queries (see the sketch after this list).
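As a sketch of how CQRS complements event sourcing, a subscriber can maintain a separate, query-friendly view so that reads never have to replay events. This reuses the `subscribers` list and `append_event` function from the event store sketch above; the view structure is illustrative.

```python
# CQRS read side: a denormalized view maintained by consuming events,
# so queries read this dict instead of replaying the event store.
open_orders_view = {}

def update_open_orders_view(entity_id, event):
    if event["type"] == "OrderCreated":
        open_orders_view[entity_id] = {"total": event["total"], "state": "NEW"}
    elif event["type"] == "CreditReserved":
        open_orders_view[entity_id]["state"] = "OPEN"

# Register with the event store from the sketch above; from now on every
# appended event also updates this read model.
subscribers.append(update_open_orders_view)
append_event(43, {"type": "OrderCreated", "total": 99.0})
append_event(43, {"type": "CreditReserved"})
print(open_orders_view[43])  # -> {'total': 99.0, 'state': 'OPEN'}
```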

3. Summary

Each microservice has its own database, and different microservices may use different SQL or NoSQL databases. Letting each service use the database best suited to its needs is necessary, but it also creates new difficulties, because distributed data management is not easy to implement. First, how do we perform transactions while ensuring consistency across services? Second, how do we run queries that retrieve data from multiple services?

Event-driven architecture is the widely applied answer. The remaining problem is how to reliably update state and publish events atomically, and of course there are solutions: using a local EVENT table as a message queue, mining the transaction log, and event sourcing.

Leave a comment!

Source:

  1. https://microservices.io/patterns/data/database-per-service.html
  2. https://eventuate.io/whyeventdriven.html
  3. Microservices Designing Deploying Book

Source: Viblo