I'm using AMQP in a reliability pattern and my use-case is to put messages in a queue, then consume them and insert the information into a web service. My web service is slow, and my queue can have many, many messages and I would like to ensure that the consumer doesn't kill my database.
Is there a built-in way to perform throttling in RabbitMQ, either time-based (only X messages per minute/second/hour) or via some other mechanism?
There is per-connection flow control, so if there are too many messages on the server, publishers will be blocked. RabbitMQ is a very reliable system, so you don't need to worry about that side.
If you are asking how to limit consumption, you will probably have to take care of it yourself. You can look at the channel.flow (deprecated as of RabbitMQ 3.3.0) and basic.qos methods, or you can even temporarily disconnect the consumer(s) and reconnect them once your services can take the load.
Update:
I suggest you consume messages with basic.consume and feed them to your web service. Based on how long your web service takes to process a payload, you can estimate its load and do some kind of sleep(N). While your consumer is sleeping it will not consume anything, so the web service will not be fed.
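For illustration, here is a minimal sketch of that idea with the RabbitMQ Java client; the queue name, prefetch count, sleep interval and the web-service call are placeholders rather than anything from the original question. basic.qos caps how many unacknowledged messages the broker pushes at once, and the sleep adds a crude time-based throttle on top of it.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DefaultConsumer;
import com.rabbitmq.client.Envelope;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class ThrottledConsumer {

    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");                          // assumption: local broker
        Connection connection = factory.newConnection();
        final Channel channel = connection.createChannel();

        // basic.qos: the broker delivers at most 10 unacknowledged messages to this consumer
        channel.basicQos(10);

        channel.basicConsume("work-queue", false, new DefaultConsumer(channel) {
            @Override
            public void handleDelivery(String consumerTag, Envelope envelope,
                                       AMQP.BasicProperties properties, byte[] body) throws IOException {
                callSlowWebService(new String(body, StandardCharsets.UTF_8)); // placeholder call
                channel.basicAck(envelope.getDeliveryTag(), false);
                try {
                    Thread.sleep(200);                         // crude time-based throttle between messages
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    private static void callSlowWebService(String payload) {
        // placeholder for the real (slow) web service / database insert
    }
}
```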
I'm wondering whether "Per-Connection Flow Control" is related to channel.flow().
Basically you can call channel.flow(false); to inform the broker to stop sending messages.
Calling channel.flow(true); makes the flow active again. Here's the javadoc.
I have an application that uses Java on the back end and Angular on the front end, and I'm trying to use STOMP messaging between the two to exchange state data.
What I would like to do is have my services, on startup, publish their states and have that data stay in the queue for any client that later connects to the server.
(edit)
For clarification, I don't mean I want the messages to survive a server reboot. What I want is for certain message queues to retain all messages until the server reboots.
How do I tell Spring Boot's STOMP implementation to not delete the contents of a /queue?
You can configure ActiveMQ Artemis as an "external broker" and use a "non-destructive" queue. When a STOMP client receives and acknowledges a message from a non-destructive queue the broker will not remove it. You can define a special "initialization" queue which all clients connect to initially to receive the state data which you care about and then they can connect to whatever other queues they need to complete their normal work.
In this kind of use-case the queue is typically configured as non-destructive and as a "last value" queue. This way each client can use its own "last value" and can keep their state data up-to-date without the complication of stale state data on the queue.
I realize your question was asking how to do this with Spring's built-in broker, but all my research indicates that Spring's simple in-memory broker supports neither last-value queue semantics nor non-destructive queue semantics, nor even persistent messages. From what I understand, Spring's broker is only meant for the most basic use-cases, which is why they enable integration with 3rd-party brokers that can support more advanced use-cases (e.g. like yours).
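As a rough sketch of what the relay configuration might look like in a Spring Boot application (the host, port and credentials are assumptions; the non-destructive and last-value settings themselves live in the Artemis broker configuration, e.g. broker.xml, and are not shown here):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        // Relay /queue and /topic destinations to an external ActiveMQ Artemis broker
        registry.enableStompBrokerRelay("/queue", "/topic")
                .setRelayHost("localhost")      // assumption: broker location
                .setRelayPort(61613)            // assumption: Artemis STOMP acceptor port
                .setClientLogin("guest")        // assumption: credentials
                .setClientPasscode("guest");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");            // WebSocket handshake endpoint for the Angular client
    }
}
```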
I was wondering whether there is a way to connect to the Business Activity Monitor (BAM) located on a WebLogic server from client applications. I want to replace the log statements in my JMS producer/consumer clients with BAM statements so that BAM is updated with message progress/errors. I am hoping there is an API for this, but I have not been able to locate one.
Oracle BAM is able to consume XML messages from a queue or a topic as input for a data item. Here is the BAM configuration documentation describing how to do so.
As your client applications are already JMS-based, it should be easy to make them send additional JMS messages to a dedicated queue for progress monitoring in BAM.
Beware of transaction demarcation: you have to decide whether your BAM message should be part of the same transaction as your regular business message or sent in a separate one, so that a rollback caused by a business failure does not silently discard (or preserve) the monitoring message in a way you did not intend.
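For illustration, a hedged JMS sketch that sends a progress message to a dedicated monitoring queue from its own non-transacted session, so it is not tied to the business transaction; the JNDI names and XML payload are assumptions:

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class BamProgressSender {

    public void sendProgress(String messageId, String status) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
        Queue bamQueue = (Queue) ctx.lookup("jms/BamMonitoringQueue");                  // assumed JNDI name

        Connection connection = cf.createConnection();
        try {
            // Non-transacted session: the monitoring message is independent of the business transaction
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(bamQueue);
            TextMessage message = session.createTextMessage(
                    "<progress><id>" + messageId + "</id><status>" + status + "</status></progress>");
            producer.send(message);
        } finally {
            connection.close();
        }
    }
}
```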
I'm looking for opinions from you all. I have a web application that needs to record data into another web application's database. I would prefer not to use HTTP GET requests against the 2nd application because of latency issues. I'm looking for a fast way to save records on the 2nd application, and came across the idea of "fire and forget". Would JMS suit this scenario? From my understanding JMS guarantees message delivery, but 100% guaranteed delivery is not important here, as long as I can serve as many requests as possible. Say I need to make at least 1000 requests per second to the 2nd application: should I use JMS, HTTP requests, or XMPP instead?
I think you're misunderstanding networking in general. There's positively no reason an HTTP GET has to be any slower than anything else, and if HTTP takes advantage of keep-alives it's faster than most options.
JMS isn't a protocol; it's a specification that wraps many other protocols including, possibly, HTTP or XMPP.
In the end, at the levels where Java operates, there's either UDP or TCP. TCP has more overhead but guarantees delivery (via retransmission) and ordering. UDP offers neither guaranteed delivery nor in-order delivery. If you can deal with UDP's limitations you'll find it "faster", and if you can't, then any lightweight TCP wrapper (of which HTTP is one) is just about the same.
Your requirements seem to be:
one client and one server (inferred from your first sentence),
HTTP is mandatory (inferred from your talking about a web application database),
1000 or more record updates per second, and
individual updates do not need to be acknowledged synchronously (you are willing to use a "fire and forget" approach).
The way I would approach this is to have the client threads queue the updates internally, and implement a client thread that periodically assembles queued updates into one HTTP request and sends it to the server. If necessary, the server can send a response that indicates the status for individual updates.
Batching eliminates the impact of latency on the client, and potentially allows the server to process the updates more efficiently.
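A rough sketch of such client-side batching, assuming Java 11+ and a hypothetical batch endpoint on the second application that accepts a JSON array of updates:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class BatchingClient {

    private final LinkedBlockingQueue<String> pending = new LinkedBlockingQueue<>();
    private final HttpClient http = HttpClient.newHttpClient();
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public BatchingClient() {
        // Flush queued updates to the second application once per second
        scheduler.scheduleAtFixedRate(this::flush, 1, 1, TimeUnit.SECONDS);
    }

    /** Called by request-handling threads; returns immediately ("fire and forget"). */
    public void record(String updateJson) {
        pending.offer(updateJson);
    }

    private void flush() {
        List<String> batch = new ArrayList<>();
        pending.drainTo(batch, 1000);           // send at most 1000 updates per request
        if (batch.isEmpty()) {
            return;
        }
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://second-app.example/records/batch"))   // assumed endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("[" + String.join(",", batch) + "]"))
                .build();
        try {
            http.send(request, HttpResponse.BodyHandlers.discarding());
        } catch (Exception e) {
            // In a real system you would retry or re-queue the batch here
        }
    }
}
```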
The big difference between HTTP and JMS or XMPP is that JMS and XMPP allow asynchronous fire-and-forget messaging (where the client does not really know when or if a message will reach its destination, and does not expect a response or an acknowledgment from the receiver). This would allow the first app to respond quickly regardless of the second application's processing time.
Asynchronous messaging is usually preferred for high-volume distributed messaging where the message consumers are slower than the producers. I can't say if this is exactly your case here.
If you have full control and the two web applications run in the same web container and hence in the same JVM, I would suggest using JNDI to allow both web applications to get access to a common data structure (a list?) which allows concurrent modification, namely to allow application A to add new entries and application B to consume the oldest entries simultaneously.
This is most likely the fastest way possible.
Note that you should keep the information you put in the list to classes found in the JRE, or you will most likely run into ClassCastExceptions. These can be circumvented, but the easiest approach is most likely to just transfer strings in the common data structure.
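A minimal sketch of the idea, assuming the container actually lets you bind a writable JNDI entry (how and where you can do that is container-specific, so treat the binding calls as illustrative only); only Strings are placed in the shared structure to sidestep classloader issues:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.naming.InitialContext;

// Application A: bind the shared structure once at startup, then add entries to it
public class ProducerApp {

    public static void init() throws Exception {
        InitialContext ctx = new InitialContext();
        ctx.rebind("sharedRecordQueue", new ConcurrentLinkedQueue<String>()); // assumed writable JNDI context
    }

    @SuppressWarnings("unchecked")
    public static void publish(String record) throws Exception {
        Queue<String> queue = (Queue<String>) new InitialContext().lookup("sharedRecordQueue");
        queue.offer(record);            // e.g. a JSON string, to stay with JRE classes
    }
}

// Application B: poll the same structure
class ConsumerApp {

    @SuppressWarnings("unchecked")
    public static String takeNext() throws Exception {
        Queue<String> queue = (Queue<String>) new InitialContext().lookup("sharedRecordQueue");
        return queue.poll();            // null if nothing is pending
    }
}
```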
I was just reading a bit about JMS and Apache ActiveMQ.
I was wondering what real-world uses people here have found for JMS or similar message queue technologies.
JMS (ActiveMQ is a JMS broker implementation) can be used as a mechanism to allow asynchronous request processing. You may wish to do this because the request takes a long time to complete or because several parties may be interested in the actual request. Another reason for using it is to allow multiple clients (potentially written in different languages) to access information via JMS. ActiveMQ is a good example here because you can use the STOMP protocol to allow access from C#/Java/Ruby clients.
A real-world example is that of a web application that is used to place an order for a particular customer. As part of placing that order (and storing it in a database) you may wish to carry out a number of additional tasks:
Store the order in some sort of third party back-end system (such as SAP)
Send an email to the customer to inform them their order has been placed
To do this, your application code would publish a message onto a JMS queue which includes an order id. One part of your application listening to the queue may respond to the event by taking the orderId, looking the order up in the database and then placing that order with another third-party system. Another part of your application may be responsible for taking the orderId and sending a confirmation email to the customer.
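As a rough JMS sketch of that flow (the connection factory, queue and helper methods are hypothetical):

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;

public class OrderEvents {

    // Publisher side: called after the order has been stored in the database
    public static void publishOrderPlaced(ConnectionFactory cf, Queue orderQueue, String orderId)
            throws JMSException {
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(orderQueue);
            producer.send(session.createTextMessage(orderId));
        } finally {
            connection.close();
        }
    }

    // Consumer side: e.g. registered via session.createConsumer(orderQueue).setMessageListener(...)
    public static class OrderPlacedListener implements MessageListener {
        @Override
        public void onMessage(Message message) {
            try {
                String orderId = ((TextMessage) message).getText();
                // Look the order up and act on it (hypothetical helpers)
                forwardToBackendSystem(orderId);
                sendConfirmationEmail(orderId);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }

        private void forwardToBackendSystem(String orderId) { /* placeholder */ }
        private void sendConfirmationEmail(String orderId) { /* placeholder */ }
    }
}
```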
Use them all the time to process long-running operations asynchronously. A web user won't want to wait for more than 5 seconds for a request to process. If you have one that runs longer than that, one design is to submit the request to a queue and immediately send back a URL that the user can check to see when the job is finished.
Publish/subscribe is another good technique for decoupling senders from many receivers. It's a flexible architecture, because subscribers can come and go as needed.
I've had so many amazing uses for JMS:
Web chat communication for customer service.
Debug logging on the backend. All app servers broadcast debug messages at various levels. A JMS client could then be launched to watch for debug messages. Sure, I could've used something like syslog, but this gave me all sorts of ways to filter the output based on contextual information (e.g. by app server name, API call, log level, user id, message type, etc...). I also colorized the output.
Debug logging to file. Same as above, only specific pieces were pulled out using filters, and logged to file for general logging.
Alerting. Again, a similar setup to the above logging, watching for specific errors, and alerting people via various means (email, text message, IM, Growl pop-up...)
Dynamically configuring and controlling software clusters. Each app server would broadcast a "configure me" message, and a configuration daemon would respond with a message containing all kinds of config info. Later, if all the app servers needed their configurations changed at once, it could be done from the config daemon.
And the usual - queued transactions for delayed activity such as billing, order processing, provisioning, email generation...
It's great anywhere you want to guarantee delivery of messages asynchronously.
Distributed (a)synchronous computing.
A real-world example could be an application-wide notification framework, which sends mails to the stakeholders at various points during the course of application usage. The application would act as a Producer by creating a Message object, putting it on a particular Queue, and moving on.
There would be a set of Consumers subscribed to the Queue in question, who would take care of handling the Message sent across. Note that during the course of this transaction, the Producers are decoupled from the logic of how a given Message is handled.
Messaging frameworks (ActiveMQ and the like) act as a backbone to facilitate such Message transactions by providing MessageBrokers.
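A compact sketch of that decoupling using the JMS 2.0 simplified API (the connection factory and topic are assumed to be provided by whatever broker you use):

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Topic;

public class NotificationExample {

    // Producer: publishes a notification event and moves on
    public static void publish(ConnectionFactory cf, Topic notifications, String event) {
        try (JMSContext context = cf.createContext()) {
            context.createProducer().send(notifications, event);
        }
    }

    // Consumer: registered independently; the producer knows nothing about it
    public static void subscribe(ConnectionFactory cf, Topic notifications) {
        JMSContext context = cf.createContext();   // intentionally left open so the listener keeps receiving
        context.createConsumer(notifications).setMessageListener(message -> {
            // e.g. extract stakeholder details here and send the mail
            System.out.println("Handling notification: " + message);
        });
    }
}
```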
I've used it to send intraday trades between different fund management systems. If you want to learn more about what a great technology messaging is, I can thoroughly recommend the book "Enterprise Integration Patterns". There are some JMS examples for things like request/reply and publish/subscribe.
Messaging is an excellent tool for integration.
We use it to initiate asynchronous processing that we don't want to interrupt or conflict with an existing transaction.
For example, say you've got an expensive and very important piece of logic like "buy stuff"; an important part of buy stuff would be "notify stuff store". We make the notify call asynchronous so that whatever logic/processing is involved in the notify call doesn't block or contend for resources with the buy business logic. End result: buy completes, the user is happy, we get our money, and because the queue guarantees delivery the store gets notified as soon as it opens or as soon as there's a new item in the queue.
I have used it for my academic project, which was an online retail website similar to Amazon.
JMS was used to handle the following features:
Updating the position of orders placed by customers as the shipment travels from one location to another. This was done by continuously sending messages to a JMS queue.
Alerting about any unusual events like a shipment getting delayed, and then sending an email to the customer.
When the shipment reached its destination, sending a delivery event.
We had also implemented multiple remote clients connected to the main server. If a connection is available, they access the main database; if not, they use their own local database. To handle data consistency, we implemented a 2PC mechanism.
For this, we used JMS to exchange messages between these systems, i.e. one system acting as the coordinator that initiates the process by sending a message on the queue, and the others responding by sending a message back on the queue.
As others have already mentioned, this was similar to pub/sub model.
I have seen JMS used in different commercial and academic projects. JMS can easily come into the picture whenever you want a totally decoupled distributed system. Generally speaking, that is when you need to send a request from one node and have some other node in your network take care of it, with or without giving the sender any information about the receiver.
In my case, I used JMS in developing a message-oriented middleware (MOM) for my thesis, where specific types of object-oriented objects are generated on one side as the request, and compiled and executed on the other side as the response.
Apache Camel used in conjunction with ActiveMQ is a great way to implement Enterprise Integration Patterns.
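For example, a small Camel route sketch; the "activemq" component, queue names and the orderService bean are assumptions that depend on how your Camel context and registry are configured:

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoute extends RouteBuilder {

    @Override
    public void configure() {
        // Content-based router: take orders from a queue and fan them out by type
        from("activemq:queue:orders")                        // assumes an "activemq" component is registered
            .choice()
                .when(header("orderType").isEqualTo("priority"))
                    .to("activemq:queue:orders.priority")
                .otherwise()
                    .to("bean:orderService?method=process"); // hypothetical bean in the registry
    }
}
```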
We have used messaging to generate online Quotes
We are using JMS for communication with systems at a huge number of remote sites over unreliable networks. The loose coupling in combination with reliable messaging produces a stable system landscape: each message will be sent as soon as it is technically possible, and bigger network problems will not affect the whole system landscape...
I have a web application (with pure Java servlets) that has some heavy computational work, with database access, that can be done asynchronously.
I'm planning to use a dedicated server to execute these batch jobs and I'm wondering which tools/techniques/protocols to use for communication between the servlets on the web server and the batch jobs on the new dedicated server.
I'm looking at JMS. Is it the right choice?
Are there industry-standard and/or widely adopted techniques?
I also need queue and priority handling for multiple simultaneous jobs.
JMS is a pretty standard solution. The high-end platforms (Sun's JCAPS, for example) make heavy use of JMS to partition and manage the workload of web services.
There are many advantages to buying a high-end JMS implementation from Sun (or IBM or Microsoft). First, you get things like reliable message queues that are backed by the file system, so no message can get lost. Second, you get monitoring and management tools.
One cool thing is to have a JMS queue with (potentially) multiple subscribers to do workload balancing.
Another cool thing is to have a JMS topic with a logging process subscribed as well as the real work process. The logging process picks off the messages and simply records the essential stages of the job being started and stopped.
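A rough sketch of the competing-consumers side in JMS 2.0 terms (connection factory and queue are assumptions); every consumer attached to the same queue gets a share of the jobs, whereas a topic would deliver each message to both the worker and the logger:

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class JobWorkers {

    // Start N competing consumers on the same queue; the broker balances jobs between them
    public static void startWorkers(ConnectionFactory cf, Queue jobQueue, int workers) {
        for (int i = 0; i < workers; i++) {
            final int workerId = i;
            JMSContext context = cf.createContext();   // kept open for the lifetime of the worker
            context.createConsumer(jobQueue).setMessageListener(message ->
                    System.out.println("Worker " + workerId + " processing job " + message));
        }
    }
}
```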
Messaging is one of the best options.
Make the messaging framework very generic so that it can handle any type of batch jobs.
One approach is to have an event/task manager where you put an event on the queue, and the queue consumer processes the event and converts it into a set of tasks. The tasks can then be executed by separate task handlers. A task can also generate further events that are put back on the queues to provide a feedback loop. This way you can add workflow-like features to the framework and allow your batch jobs to have dependencies on each other.
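A hedged sketch of that event-to-task conversion (the event format, the tasks and the publish call are purely illustrative):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class TaskManager {

    private final ExecutorService taskPool = Executors.newFixedThreadPool(4);

    /** Called by the queue consumer for each incoming event. */
    public void onEvent(String event) {
        for (Runnable task : toTasks(event)) {
            taskPool.submit(task);
        }
    }

    /** Convert one event into its set of tasks; a task may emit follow-up events. */
    private List<Runnable> toTasks(String event) {
        return List.of(
                () -> System.out.println("step 1 for " + event),
                () -> publishEvent("step1-done:" + event)   // feedback loop back onto the queue
        );
    }

    private void publishEvent(String event) {
        // placeholder: put the follow-up event back on the message queue
    }
}
```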
JMS would be the appropriate solution for sending your batch jobs from the servlet. It may not be the best solution for the batch server to communicate back to the servlet, though, as a servlet cannot be a listener for messages.
As I don't know what the communication from the batch server back to the servlet is supposed to entail, I can only say that there are probably several options you can use (yes, JMS is one of them). They all basically rely on polling calls to the servlet, which then checks in some way whether there is anything waiting from the batch server. This could simply be a servlet on the batch server, or receive calls against a JMS response queue. Other solutions are available, but the point is that it is not asynchronous unless you can push from the batch server all the way to your client end (a browser, I am guessing) via something like AJAX.
Anyway, just something to keep in mind.
Another alternative for asynchronous processing is to have the web application store the request in the database and have the batch process poll the database for new batch jobs to process. Since your application appears to be smaller (pure Java servlets), this may be a simpler and lower-cost solution.
Hope it helps.
We use JMS with web services:
Client requests computation via web service
The server writes a JMS message and creates an ID value which is stored in a database along with a status (initially "Pending"). The server returns the ID to the client.
A server (which can be a separate server) reads the JMS message, does the computation, and when finished updates the status to "Completed" in the database.
While the computation is ongoing, the client polls the server to determine the status using another web service (along with the ID). The server returns the status retrieved from the database. Once the server-side computation is complete, the client sees the "Completed" status and knows that the computation is done (a condensed sketch of the request side follows below).
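A condensed sketch of the request side of that pattern, loosely in Spring style; the endpoint paths, queue name and the in-memory status map (standing in for the database) are assumptions:

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ComputationController {

    private final JmsTemplate jmsTemplate;
    // Stand-in for the database table holding job statuses
    private final Map<String, String> statusStore = new ConcurrentHashMap<>();

    public ComputationController(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    // Steps 1-2: accept the request, mark it Pending, hand it to JMS, return the ID immediately
    @PostMapping("/computations")
    public String submit(@RequestBody String payload) {
        String id = UUID.randomUUID().toString();
        statusStore.put(id, "Pending");
        jmsTemplate.convertAndSend("computation.requests", id + ":" + payload); // assumed queue name
        return id;
    }

    // Step 4: the client polls this until the worker has set the status to "Completed"
    @GetMapping("/computations/{id}/status")
    public String status(@PathVariable String id) {
        return statusStore.getOrDefault(id, "Unknown");
    }
}
```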