5 common application scenarios of message queues

1. Introduction

Message queue middleware is an important component of distributed systems. It mainly solves problems such as application coupling, asynchronous processing, and traffic peak shaving, helping to achieve a high-performance, highly available, scalable, and eventually consistent architecture. The most widely used message queues include ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.

2. Message queue application scenarios

The following introduces common usage scenarios of message queues in practice: asynchronous processing, application decoupling, traffic peak shaving, log processing, and message communication.

1. Asynchronous processing

Scenario description: after a user registers, the system needs to send a registration email and a registration SMS. There are two traditional approaches: serial and parallel.

Serial mode: after the registration information is successfully written to the database, the registration email is sent, and then the registration SMS is sent. Only after all three tasks are completed is the response returned to the client.

Parallel mode: after the registration information is successfully written to the database, the registration email and the registration SMS are sent at the same time. Once the three tasks are completed, the response is returned to the client. The difference from serial is that the parallel approach reduces the processing time.

Assuming each of the three steps takes 50 milliseconds and ignoring network and other overhead, the serial mode takes 150 milliseconds and the parallel mode takes about 100 milliseconds.

Because the number of requests the CPU can process per unit time is fixed, suppose the CPU throughput is 100 operations per second. Then in serial mode the system can handle about 7 requests per second (1000/150), while in parallel mode it can handle about 10 requests per second (1000/100).

Summary: as the above case shows, the traditional approaches create a bottleneck in system performance (concurrency, throughput, response time). How can this be solved?

Introduce a message queue and handle the non-essential business logic asynchronously. The restructured architecture is as follows:

With this architecture, the user's response time is roughly the time it takes to write the registration information to the database, about 50 milliseconds. The registration email and SMS tasks are written to the message queue and the call returns immediately; writing to the message queue is very fast and can basically be ignored, so the user's response time is about 50 milliseconds. After the change, the system throughput rises to 20 QPS: about 3 times that of the serial mode and twice that of the parallel mode.
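As a minimal sketch of this asynchronous pattern, assuming RabbitMQ with the pika client (the queue name user_registered and the save_user database helper are illustrative assumptions, not from the original):

```python
import json
import pika

def register_user(username, email, phone):
    save_user(username, email, phone)  # hypothetical DB write (~50 ms)

    # Publish a registration event instead of sending the email/SMS inline.
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="user_registered", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="user_registered",
        body=json.dumps({"email": email, "phone": phone}),
    )
    connection.close()
    return "registration successful"  # respond without waiting for email/SMS
```

The email and SMS services then consume the user_registered queue at their own pace.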

2. Application decoupling

Scenario description: After the user places an order, the order system needs to notify the inventory system. The traditional approach is that the order system calls the interface of the inventory system. As shown below:

Disadvantages of the traditional model:

If the inventory system is inaccessible, the inventory deduction fails and the order fails with it; the order system and the inventory system are tightly coupled.

How to solve this problem? Introduce a message queue, as shown in the figure below:

Order system: after the user places an order, the order system completes the persistence processing, writes a message to the message queue, and returns "order placed successfully" to the user.

Inventory system: subscribes to the order messages, obtains them via pull or push, and performs the inventory operations based on the order information.

Even if the inventory system is unavailable when an order is placed, the order itself is not affected: after the order is placed, the order system writes to the message queue and no longer cares about follow-up operations. This decouples the order system from the inventory system.
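A minimal sketch of the two sides, again assuming RabbitMQ with pika; the queue name order_events and the deduct_stock helper are illustrative assumptions:

```python
import json
import pika

# Order system: persist the order, then publish an event and return immediately.
def place_order(order):
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="order_events", durable=True)
    channel.basic_publish(exchange="", routing_key="order_events",
                          body=json.dumps(order))
    connection.close()

# Inventory system: subscribe to order events and deduct stock.
def run_inventory_consumer():
    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="order_events", durable=True)

    def on_order(ch, method, properties, body):
        order = json.loads(body)
        deduct_stock(order)  # hypothetical inventory operation
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="order_events", on_message_callback=on_order)
    channel.start_consuming()
```

If the inventory consumer is down, the order events simply accumulate in the queue and are processed once it comes back.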

3. Traffic peak shaving

Traffic peak shaving is also a common message queue scenario, widely used in flash-sale or group-buying activities.

Application scenario: a flash sale causes a surge in traffic, and the excessive load can bring the application down. To solve this, a message queue is usually added in front of the application.

It can control the number of participants and relieve the application from high traffic bursts in a short period of time.

After receiving a user request, the server first writes it to the message queue. If the queue length exceeds the maximum allowed, the request is discarded directly or the user is redirected to an error page, as sketched after these steps.

The flash-sale service then performs follow-up processing based on the requests in the message queue.
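A minimal sketch of the front-end write described above, assuming RabbitMQ with pika: a bounded queue (x-max-length with reject-publish overflow) plus publisher confirms lets the server detect a full queue and show an error page; the queue name and limit are assumptions:

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Bounded queue: once it holds 10000 messages, further publishes are rejected.
channel.queue_declare(
    queue="seckill_requests",
    durable=True,
    arguments={"x-max-length": 10000, "x-overflow": "reject-publish"},
)
channel.confirm_delivery()  # publisher confirms: a rejected publish raises

def accept_request(request):
    try:
        channel.basic_publish(exchange="", routing_key="seckill_requests",
                              body=json.dumps(request))
        return "queued"       # the flash-sale service will process it later
    except pika.exceptions.NackError:
        return "error_page"   # broker rejected the message (queue full)
```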

4. Log processing

Log processing refers to using message queues in the log pipeline, for example using Kafka to handle the transmission of large volumes of logs. The simplified architecture is as follows:

Log collection client: responsible for collecting log data and writing it to the Kafka queue in batches at regular intervals;

Kafka message queue: responsible for receiving, storing, and forwarding log data;

Log processing application: subscribes to and consumes the log data in the Kafka queue.
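A minimal sketch of the log collection client, assuming the kafka-python package; the topic name app_logs and the log fields are assumptions:

```python
import json
from kafka import KafkaProducer

# Log collection client: push log records to a Kafka topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda record: json.dumps(record).encode("utf-8"),
)

def ship_log(level, message):
    producer.send("app_logs", {"level": level, "message": message})

ship_log("INFO", "user login succeeded")
producer.flush()  # make sure buffered records are actually sent
```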

The following is an application case of Kafka log processing at Sina:

Kafka: the message queue that receives the user logs;

Logstash: parses the logs and unifies them into JSON documents output to Elasticsearch;

Elasticsearch: the core of the real-time log analysis service, a schemaless real-time data store that organizes data by index and provides powerful search and aggregation functions;

Kibana: a data visualization component built on Elasticsearch; its strong visualization capability is an important reason why many companies choose the ELK stack.
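As a rough stand-in for the Logstash step, a kafka-python consumer could read the raw records from the same app_logs topic and normalize them into JSON documents ready to be indexed into Elasticsearch (topic name and field layout are assumptions):

```python
import json
from kafka import KafkaConsumer

# Consume raw log records from Kafka and normalize them into JSON documents.
consumer = KafkaConsumer(
    "app_logs",
    bootstrap_servers="localhost:9092",
    group_id="log-normalizer",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for record in consumer:
    doc = {
        "level": record.value.get("level", "INFO"),
        "message": record.value.get("message", ""),
        "offset": record.offset,
    }
    # In the real pipeline this document would be indexed into Elasticsearch.
    print(json.dumps(doc))
```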

5. Message communication

Message queues generally have efficient communication mechanisms built in, so they can also be used for pure message communication, such as implementing point-to-point messaging or chat rooms.

Point-to-point communication:

Client A and client B use the same queue for message communication.

Chat room communication:

Client A, client B, ..., and client N subscribe to the same topic to publish and receive messages, achieving a chat-room-like effect.

These are in fact the two messaging modes of a message queue: point-to-point and publish-subscribe. The diagrams above are schematic and for reference only.
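A minimal sketch of the two modes, assuming RabbitMQ with pika: point-to-point uses a shared queue, while the chat room uses a fanout exchange so every subscriber receives each message; queue and exchange names are illustrative:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Point-to-point: A and B share one queue; each message goes to a single consumer.
channel.queue_declare(queue="p2p_chat")
channel.basic_publish(exchange="", routing_key="p2p_chat", body=b"hello from A")

# Publish-subscribe (chat room): a fanout exchange copies each message
# to every subscriber's own queue.
channel.exchange_declare(exchange="chat_room", exchange_type="fanout")
my_queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="chat_room", queue=my_queue)
channel.basic_publish(exchange="chat_room", routing_key="", body=b"hi everyone")

# Drain whatever arrived on this subscriber's chat-room queue.
for method, properties, body in channel.consume(queue=my_queue, auto_ack=True,
                                                inactivity_timeout=1):
    if body is None:
        break
    print(body.decode())
```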

3. Message middleware examples

1. E-commerce system

The message queue uses highly available, persistent message middleware, such as ActiveMQ, RabbitMQ, or RocketMQ.

After the application completes the main business logic, it writes a message to the message queue. To know whether the message was sent successfully, the message confirmation mode can be enabled (the application returns only after the message queue acknowledges receipt, ensuring that no message is lost; a sketch of this confirmation mode follows these steps);

The extended processes (SMS sending, delivery processing) subscribe to the queue messages, obtaining them via push or pull and processing them;

While messaging decouples the applications, it also introduces data consistency problems, which can be solved with eventual consistency: for example, the main data is written to the database, and the extended applications perform the subsequent processing based on the message queue, combined with the database where needed.
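A minimal sketch of the confirmation mode mentioned in the first step, assuming RabbitMQ with pika: with confirm_delivery enabled, basic_publish blocks until the broker acknowledges the message and raises if it is rejected (the queue name is illustrative):

```python
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="order_followup", durable=True)
channel.confirm_delivery()  # enable publisher confirms on this channel

def publish_after_main_logic(order):
    try:
        channel.basic_publish(
            exchange="",
            routing_key="order_followup",
            body=json.dumps(order),
            properties=pika.BasicProperties(delivery_mode=2),  # persist the message
        )
        return True    # broker confirmed receipt; safe to return success
    except pika.exceptions.NackError:
        return False   # broker refused the message; retry or fall back
```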

2. Log collection system

It is divided into four parts: Zookeeper registration center, log collection client, Kafka cluster and Storm cluster (OtherApp).

Zookeeper registration center: provides load balancing and address lookup services;

Log collection client: collects the logs of the application systems and pushes the data to the Kafka queue;

Kafka cluster: receives, routes, stores, and forwards messages;

Storm cluster: at the same level as OtherApp, consumes the data in the queue by pulling.
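To illustrate the pull model used by the Storm cluster (or OtherApp), here is a minimal kafka-python consumer that explicitly polls the queue; the topic name and the process helper are assumptions:

```python
from kafka import KafkaConsumer

# Pull model: the consumer explicitly polls Kafka for new log records.
consumer = KafkaConsumer(
    "app_logs",
    bootstrap_servers="localhost:9092",
    group_id="storm-like-consumer",
    auto_offset_reset="earliest",
)

while True:
    batch = consumer.poll(timeout_ms=1000)  # pull whatever is available
    for topic_partition, records in batch.items():
        for record in records:
            process(record.value)  # hypothetical downstream processing
```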

Original: http://www.fx114.net/qa-36-149204.aspx
