I need to implement a demo system as a proof of concept.
Basically, the system description can be reduced to 2 modules:
Module 1 sends requests
Module 2 picks them up, processes and sends the response back
(Note: the modules reside in the same intranet, so I probably want the protocol to be faster than HTTP.)
I thought of the following options:
Message Queue
ESB
Protobuf
Ideally, the system would be (but is not limited to being) Java-based, run on Red Hat Linux and be able to scale linearly. However, performance is out of scope for the POC.
I was looking at ServiceMix and ActiveMQ.
My idea was to implement these modules in Java. The architecture will be message-driven, with the modules communicating over a message queue or a service bus.
The 'consumer' sends the requests as messages to the message queue; the 'producer' picks them up on a certain subscription topic, processes the requests and posts the responses back to the same queue. The 'consumer', which is subscribed to the 'response' topic, picks the results up from the queue.
My questions are:
What are other good options (protocols, architecture, existing libraries) to be considered in order to implement the above functionality?
In order to achieve the above I tried to look at the ServiceMix ESB User Guide, but it seems that to get something like this running I have to learn a bunch of things I am not familiar with (JBI, NMR, Karaf, Camel, etc.) and I do not have time to do that. So I wonder: is there any quick-start guide or Java sample code for an ESB/message queue 'Hello World' application that could help set everything in motion?
ActiveMQ with XML messages should be enough, unless your messages are big and there are lots of them, in which case I would go for protobuf (disclaimer: I used it on my last project).
As a matter of fact I would probably go for some AMQP implementation, like Apache Qpid (disclaimer: used that too some time ago), over ActiveMQ. But that is more of a personal preference.
The downside of protobuf is that you need some knowledge of it; there are hello worlds all around the web, but once you face 'real problems' it does not stay that easy.
You will also need a Maven plugin to build and compile the protobuf files, unless you want to do it manually.
ActiveMQ is simply a JMS provider, and I am sure you already looked at these examples:
Hello World ActiveMQ
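If you just want something minimal to get moving before digging into those, a plain JMS 'Hello World' against a local ActiveMQ broker looks roughly like this (the broker URL and queue name are placeholders for your own setup):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsHelloWorld {
    public static void main(String[] args) throws JMSException {
        // Assumes a broker is running locally on the default port.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("demo.requests"); // placeholder queue name

        // Producer side: send a text message.
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("Hello World"));

        // Consumer side: receive it (blocking up to one second).
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(1000);
        System.out.println("Received: " + received.getText());

        connection.close();
    }
}
```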
On the implementation side, when module1 sends the request you want to be sure that the response will be read by the same module. Temporary queues are what I would suggest. Send the request to some queue (along with, for example, the name of the temporary queue the response is expected to come back on); module2 processes the message and sends the response to the temporary queue, where it is read by module1 with a message listener.
Now, you have to delete these temporary queues quickly so they don't pile up, and also check that ActiveMQ gives them unique names.
In Qpid, with the simple parameter auto-delete=true, the queue is deleted once there are no active listeners. I have no idea how that is handled in ActiveMQ, but there should be a way.
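A rough sketch of that request/reply flow in plain JMS, carrying the temporary queue in the JMSReplyTo header (the queue name is a placeholder):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RequestReplySketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // module1 (requester): create a temporary queue for the reply
        // and advertise it via the JMSReplyTo header.
        Queue requests = session.createQueue("demo.requests"); // placeholder
        TemporaryQueue replyQueue = session.createTemporaryQueue();
        TextMessage request = session.createTextMessage("do some work");
        request.setJMSReplyTo(replyQueue);
        session.createProducer(requests).send(request);

        // module2 (processor) would do something like:
        //   Destination replyTo = receivedRequest.getJMSReplyTo();
        //   session.createProducer(replyTo).send(session.createTextMessage("done"));

        // module1 listens on its temporary queue for the response.
        session.createConsumer(replyQueue).setMessageListener(
                msg -> System.out.println("got response"));

        // Temporary queues live only as long as the connection that created them,
        // so closing the connection cleans them up.
    }
}
```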
Just my 0.02$
Related
I am working on a project that makes a REST call to another service to save data in the DB. The data is very important, so we can't afford to lose anything.
If there is a problem in the network this message will be lost, which can't happen. I have already looked at Spring Retry and saw that it is designed to handle temporary network glitches, which is not what I need.
I need a way to put the REST calls into some kind of queue (like ActiveMQ) and preserve their order (this is very important because I receive Save, Delete and Update REST calls).
Any ideas? Thanks.
If a standalone installation of ActiveMQ is not desirable, you can use an embedded in-memory broker.
http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html
Not sure if you can configure the in-memory broker to use KahaDB. Of course, the scope of persistence will be limited to your application process, i.e. messages in the queue will not be available if the application is restarted. This is probably the most important reason why in-memory or plain code-based approaches are no good.
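For completeness, a minimal sketch of the embedded approach, assuming the ActiveMQ broker jar is on the classpath; the vm:// transport starts an in-process broker on first use, with persistence switched off here:

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class EmbeddedBrokerSketch {
    public static void main(String[] args) throws JMSException {
        // broker.persistent=false keeps everything in memory; the broker
        // is created inside this JVM the first time a connection is made.
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("rest.calls"); // placeholder queue name
        session.createProducer(queue).send(session.createTextMessage("PUT /orders/42"));

        TextMessage msg = (TextMessage) session.createConsumer(queue).receive(1000);
        System.out.println(msg.getText());
        connection.close();
    }
}
```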
If you must reinvent the wheel, then have a look at this topic, which talks about using Executors and BlockingQueue to implement your own pseudo-MQ.
Producer/Consumer threads using a Queue.
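A bare-bones version of that idea, with a single worker thread draining a BlockingQueue so the Save/Delete/Update order is preserved (the REST-call handling is stubbed out and would be your own code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class PseudoMq {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public void start() {
        // A single consumer thread keeps the ordering of Save/Delete/Update intact.
        worker.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String call = queue.take();   // blocks until something is queued
                    sendWithRetry(call);          // keep retrying so nothing is lost
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    public void enqueue(String restCall) {
        queue.offer(restCall);                    // producer side: non-blocking add
    }

    private void sendWithRetry(String call) throws InterruptedException {
        // Placeholder: issue the real REST call here and retry until it succeeds.
        while (!tryRestCall(call)) {
            Thread.sleep(5_000);
        }
    }

    private boolean tryRestCall(String call) {
        return true;                              // stub: replace with the actual HTTP client call
    }
}
```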
On a side note, a retry mechanism is not something provided by the MQ broker; it is the client that implements it, be it ActiveMQ's bundled client library or other messaging libraries such as Camel.
You can also review your current tech stack to see if any of the existing components have JMS capabilities. For example, Oracle Database bundles an MQ called Oracle AQ.
Have your service keep its own internal queue of jobs, and only move on to the next REST call once the previous one returns a success code.
There are many better ways to do this but the limitation will come down to what your company will let you do.
I have Apache ActiveMQ embedded in my Java 8 server-side project. It's working fine, and I am able to send and consume messages from pre-configured queues. I now need to be able to programmatically remove messages from a queue upon request. After reading some docs I found that Apache ActiveMQ has a sub-project called Artemis that seems to provide the required functionality, but I am a bit confused about how to do it. Is Artemis a sort of plugin on top of ActiveMQ, so that I just need to add the required dependencies and use its tools, or is it a separate product that does not work with ActiveMQ but stands on its own? If the latter, how do I manage individual messages (in particular, delete a requested message) in ActiveMQ?
First off, 'ActiveMQ Artemis' is a sub-project within the ActiveMQ project that represents an entirely new broker with a radically different underlying architecture than the main ActiveMQ broker. You would run one or the other.
To manage messages in the ActiveMQ broker you would use the JMX Management API and the Queue#remove methods it exposes to remove specific messages. This can be done using the message ID, or more broadly using a message selector to capture more than one message if need be. The JMX API is also exposed via Jolokia, so you can manage the broker via simple REST calls instead of the JMX way if you prefer.
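Roughly, removing a single message over JMX could look like the sketch below; the JMX URL, broker name, queue name, message ID and selector are all assumptions that depend on your broker configuration:

```java
import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

public class RemoveMessageSketch {
    public static void main(String[] args) throws Exception {
        // Connect to the broker's JMX endpoint (port and path depend on your setup).
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();

            // Object name pattern for a queue on an ActiveMQ 5.x broker named "localhost".
            ObjectName queueName = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                            + "destinationType=Queue,destinationName=MY.QUEUE");
            QueueViewMBean queue = JMX.newMBeanProxy(mbs, queueName, QueueViewMBean.class);

            // Remove one message by its JMS message ID, or several via a selector.
            queue.removeMessage("ID:example-12345");          // hypothetical message ID
            queue.removeMatchingMessages("orderId = '42'");   // hypothetical selector
        }
    }
}
```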
In any case this sort of message level management on the broker is a bit of an anti-pattern in the messaging world. If you find yourself needing to treat the broker as a database then you should ask yourself why you aren't using a database since a broker is not a database. Often you will run into many more issues trying to manage your messages this way as opposed to just putting them into a database.
Can someone point me to a tutorial or similar code where JMS is used by a web app to execute a long-running background process (instead of using threads)? I'm fairly familiar with the concepts of JMS messaging, but I have never used any JMS API or broker (I'm looking at learning Apache ActiveMQ).
I'd like to be able to:
submit a message to the queue to run a process
check the status (progress) on that process at arbitrary times
Thanks!
The real point of using JMS in your context is to start tasks asynchronously. This is called fire and forget in middleware lingo. JMS has guaranteed delivery semantics, meaning that once the message has been put on the queue it is guaranteed to get there ... eventually.
The idea is that you do whatever tasks you have to do now, and if any tasks in the process can be done at a later time, you put a message on a queue and they get executed later. This allows you to cut down significantly on the processing done while somebody is waiting for a response.
Another benefit of JMS is that the different parts of the system do not need to be running at the same time. The part that consumes messages can be down for maintenance while your front end still works.
The previous post is accurate in terms of a model to put orders or requests into a queue asynchronously and then have them be picked up later. However, it doesn't really address the question of long running processes.
In terms of queues and topics, the benefit of persistent queues is that if there are no consumers on the queue, messages will wait for consumption until a subscriber appears. With a topic, you need to create a durable subscription to make sure a consumer that is not connected will, once it reconnects, receive the messages sent in its absence.
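For reference, a durable topic subscription in plain JMS looks roughly like this (the client ID, topic name and subscription name are placeholders):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class DurableSubscriberSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        // A durable subscription is identified by client ID + subscription name.
        connection.setClientID("status-consumer-1");
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic topic = session.createTopic("process.status");   // placeholder topic

        // Messages published while this subscriber is offline are kept by the
        // broker and delivered when it reconnects with the same IDs.
        TopicSubscriber subscriber =
                session.createDurableSubscriber(topic, "status-subscription");
        subscriber.setMessageListener(msg -> System.out.println("status update received"));
    }
}
```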
So, how are you defining a long-running process? For a multi-step process you would typically use something like a workflow engine. There are options such as a BPM tool or something like "OS Workflow". You can also build a home-grown solution that could look like the example below:
1) There would need to be some sort of workflow definition that defines the steps in the process. This could be a properties file or an XML file.
2) Web App puts a message on a queue or topic (pub/sub) with an indication of the process to be executed (or you can have specific destinations for different processes)
3) A Dispatcher MDB picks the 'order' up off the queue with a status of 'NEW' and starts processing the first step.
4) Once the step is complete, the MDB puts a new message on the queue indicating the process being executed and either the next step to be executed, or the last step that was executed (depending on how deterministic you want the process to be)
5) The MDB picks up the message and sees that the process is 'IN_PROGRESS'. It either determines the next step to be executed or reads the step to be executed next from the message (either a JMS header value or within the message, perhaps in an XML format)
6) Steps 4 & 5 are repeated until the process instance is complete
In this case you will need an external representation of the order and process instance information. This will allow you to check the status of a request from your WebApp. Your order would need to be read and persisted with an updated status after each step in the process such that the WebApp could access the status information.
The key component of this architecture is the dispatcher MDB that listens for messages and executes the next step of the process. When I worked with OS Workflow, that was one key piece that was missing. In this manner, you can control the number of threads executing process steps by controlling the number of MDBs in the pool and consumers on the queue. In this architecture I would recommend a queue over a topic for the workflow steps. However, after each process step you could publish a message to a topic for subscribers to get updated status information.
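The dispatcher's core logic, sketched here as a plain JMS MessageListener rather than a full MDB; the property names and the step-lookup helpers are invented for illustration:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.MessageProducer;
import javax.jms.Session;

public class StepDispatcher implements MessageListener {
    private final Session session;
    private final MessageProducer workflowQueueProducer;

    public StepDispatcher(Session session, MessageProducer workflowQueueProducer) {
        this.session = session;
        this.workflowQueueProducer = workflowQueueProducer;
    }

    @Override
    public void onMessage(Message message) {
        try {
            String orderId = message.getStringProperty("orderId");   // illustrative header
            String step = message.getStringProperty("nextStep");     // e.g. "VALIDATE"

            executeStep(orderId, step);   // run the step, persist updated status for the web app

            String nextStep = nextStepAfter(step);                   // read from the workflow definition
            if (nextStep != null) {
                Message next = session.createMessage();
                next.setStringProperty("orderId", orderId);
                next.setStringProperty("nextStep", nextStep);
                workflowQueueProducer.send(next);                    // hand the process back to the queue
            }
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }

    private void executeStep(String orderId, String step) { /* placeholder */ }

    private String nextStepAfter(String step) { return null; }       // placeholder workflow lookup
}
```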
With the Java EE 6 technologies, including JPA, you could easily create an XSD, generate domain data model POJOs with JAXB, and use JPA for persistence. We did a webcast earlier this year that covered the Java EE 6 technologies currently supported in WebLogic. Here are the replays: http://www.oracle.com/technetwork/middleware/weblogic/learnmore/weblogic-javaee6-webcasts-358613.html.
I'm also still interested to speak with you about your JBoss migration :) jeffrey.west#oracle.com
I was just reading a bit about JMS and Apache ActiveMQ.
I was wondering what real-world uses people here have put JMS or similar message queue technologies to.
JMS (ActiveMQ is a JMS broker implementation) can be used as a mechanism to allow asynchronous request processing. You may wish to do this because the request takes a long time to complete or because several parties may be interested in the actual request. Another reason for using it is to allow multiple clients (potentially written in different languages) to access information via JMS. ActiveMQ is a good example here because you can use the STOMP protocol to allow access from C#/Java/Ruby clients.
A real-world example is that of a web application used to place an order for a particular customer. As part of placing that order (and storing it in a database) you may wish to carry out a number of additional tasks:
Store the order in some sort of third party back-end system (such as SAP)
Send an email to the customer to inform them their order has been placed
To do this, your application code would publish a message onto a JMS queue which includes an order ID. One part of your application listening to the queue may respond to the event by taking the orderId, looking the order up in the database and then placing that order with another third-party system. Another part of your application may be responsible for taking the orderId and sending a confirmation email to the customer.
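As a rough illustration of that flow (the queue name and the orderId property are invented for the example):

```java
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class OrderEvents {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue orderQueue = session.createQueue("orders.placed");   // placeholder queue

        // Publisher side: the web app announces the new order by ID only.
        Message event = session.createMessage();
        event.setStringProperty("orderId", "42");
        session.createProducer(orderQueue).send(event);

        // Listener side: e.g. the email component reacts to the event.
        session.createConsumer(orderQueue).setMessageListener(msg -> {
            try {
                String orderId = msg.getStringProperty("orderId");
                // look the order up in the database and send the confirmation email
                System.out.println("sending confirmation email for order " + orderId);
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        });
    }
}
```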
Use them all the time to process long-running operations asynchronously. A web user won't want to wait for more than 5 seconds for a request to process. If you have one that runs longer than that, one design is to submit the request to a queue and immediately send back a URL that the user can check to see when the job is finished.
Publish/subscribe is another good technique for decoupling senders from many receivers. It's a flexible architecture, because subscribers can come and go as needed.
I've had so many amazing uses for JMS:
Web chat communication for customer service.
Debug logging on the backend. All app servers broadcast debug messages at various levels. A JMS client could then be launched to watch for debug messages. Sure, I could have used something like syslog, but this gave me all sorts of ways to filter the output based on contextual information (e.g. by app server name, API call, log level, user ID, message type, etc.). I also colorized the output.
Debug logging to file. Same as above, only specific pieces were pulled out using filters, and logged to file for general logging.
Alerting. Again, a similar setup to the above logging, watching for specific errors, and alerting people via various means (email, text message, IM, Growl pop-up...)
Dynamically configuring and controlling software clusters. Each app server would broadcast a "configure me" message, then a configuration daemon that would respond with a message containing all kinds of config info. Later, if all the app servers needed their configurations changed at once, it could be done from the config daemon.
And the usual - queued transactions for delayed activity such as billing, order processing, provisioning, email generation...
It's great anywhere you want to guarantee delivery of messages asynchronously.
Distributed (a)synchronous computing.
A real-world example could be an application-wide notification framework, which sends mails to the stakeholders at various points during the course of application usage. The application would act as a producer by creating a Message object, putting it on a particular queue, and moving on.
There would be a set of consumers subscribed to the queue in question, who would take care of handling the messages sent across. Note that during the course of this transaction, the producers are decoupled from the logic of how a given message is handled.
Messaging frameworks (ActiveMQ and the likes) act as a backbone to facilitate such Message transactions by providing MessageBrokers.
I've used it to send intraday trades between different fund management systems. If you want to learn more about what a great technology messaging is, I can thoroughly recommend the book "Enterprise Integration Patterns". There are some JMS examples for things like request/reply and publish/subscribe.
Messaging is an excellent tool for integration.
We use it to initiate asynchronous processing that we don't want to interrupt or conflict with an existing transaction.
For example, say you've got an expensive and very important piece of logic like "buy stuff"; an important part of buying stuff would be "notify stuff store". We make the notify call asynchronous so that whatever logic/processing is involved in the notify call doesn't block or contend for resources with the buy business logic. End result: the buy completes, the user is happy, we get our money, and because the queue is guaranteed delivery, the store gets notified as soon as it opens or as soon as there's a new item in the queue.
I have used it for my academic project, which was an online retail website similar to Amazon.
JMS was used to handle the following features:
Updating the position of the orders placed by customers as the shipment travels from one location to another. This was done by continuously sending messages to a JMS queue.
Alerting about any unusual events, like a shipment getting delayed, and then sending an email to the customer.
When the delivery reached its destination, sending a delivery event.
We also had multiple remote clients connected to the main server. If the connection was available they would access the main database; if not, they would use their own database. In order to handle data consistency, we implemented a 2PC (two-phase commit) mechanism.
For this, we used JMS to exchange messages between these systems, i.e. one acting as a coordinator that initiates the process by sending a message on the queue, and the others responding by sending a message back on the queue.
As others have already mentioned, this was similar to pub/sub model.
I have seen JMS used in different commercial and academic projects. JMS can easily come into the picture whenever you want a totally decoupled distributed system. Generally speaking, it fits when you need to send a request from one node and have someone in your network take care of it, with or without giving the sender any information about the receiver.
In my case, I used JMS to develop a message-oriented middleware (MOM) for my thesis, where specific types of objects are generated on one side as the request, and compiled and executed on the other side as the response.
Apache Camel used in conjunction with ActiveMQ is a great way to implement Enterprise Integration Patterns.
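For instance, a tiny Camel route consuming from an ActiveMQ queue might look like the sketch below; it assumes the activemq-camel component is available, and the endpoint names are placeholders:

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelQuoteRoute {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        // Wire the "activemq" component to a broker (URL is a placeholder).
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://localhost:61616"));

        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Consume quote requests, log them, and publish the result onward.
                from("activemq:queue:quote.requests")
                        .log("processing quote request ${body}")
                        .to("activemq:queue:quote.responses");
            }
        });

        context.start();
        Thread.sleep(60_000);   // keep the route running for a minute in this sketch
        context.stop();
    }
}
```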
We have used messaging to generate online Quotes
We are using JMS for communication with systems at a huge number of remote sites over unreliable networks. The loose coupling in combination with reliable messaging produces a stable system landscape: each message will be sent as soon as it is technically possible, and bigger problems in the network will not affect the whole system landscape...
I'm having a hard time figuring out how to architect the final piece of my system. Currently I'm running a Tomcat server that has a servlet that responds to client requests. Each request in turn adds a processing message to an asynchronous queue (I'll probably be using JMS via Spring or more likely Amazon SQS).
The sequence of events is this:
Sending side:
1. Take a client request
2. Add some data into a DB related to this request with a unique ID
3. Add a message object representing this request to the message queue
Receiving side:
1. Pull a new message object from the queue
2. Unwrap the object and grab some information from a web site based on information contained in the msg object.
3. Send an email alert
4. update my DB row (same unique ID) with the information that operation was completed for this request.
I'm having a hard time figuring out how to properly deal with the receiving side. On one hand I can probably create a simple java program that I kick off from the command line that picks each item in the queue and processes it. Is that safe? Does it make more sense to have that program running as another thread inside the Tomcat container? I do not want to do this serially; the receiving end should be able to process several objects at a time, using multiple threads. I want this to be always running, 24 hours a day.
What are some options for building the receiving side?
"On one hand I can probably create a simple java program that I kick off from the command line that picks each item in the queue and processes it. Is that safe?"
What's unsafe about it? It works great.
"Does it make more sense to have that program running as another thread inside the Tomcat container?"
Only if Tomcat has a lot of free time to handle background processing. Often, this is the case -- you have free time to do this kind of processing.
However, threads aren't optimal. Threads share common I/O resources, and your background thread may slow down the front-end.
Better is to have a JMS queue between the "port 80" front-end, and a separate backend process. The back-end process starts, connects to the queue, fetches and executes the requests. The backend process can (if necessary) be multi-threaded.
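A sketch of such a standalone back-end process, using one connection with a few concurrent consumer sessions (broker URL, queue name and the handler body are placeholders):

```java
import javax.jms.*;
import java.util.concurrent.CountDownLatch;
import org.apache.activemq.ActiveMQConnectionFactory;

public class BackendWorker {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        int concurrency = 4;   // number of parallel consumers, tune as needed
        for (int i = 0; i < concurrency; i++) {
            // Each session/consumer pair gets its own dispatch thread from the client library.
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("work.requests");   // placeholder queue name
            session.createConsumer(queue).setMessageListener(BackendWorker::handle);
        }

        // Block the main thread; messages are handled on the JMS listener threads.
        try {
            new CountDownLatch(1).await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private static void handle(Message message) {
        // fetch the web-site data, send the email alert, update the DB row here
    }
}
```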
If you are using JMS, why are you placing the tasks into a DB?
You can use a durable Queue in JMS. This would keep tasks, even if the JMS broker dies, until they have been acknowledged. You can have redundant brokers so that if one broker dies, the second automatically takes over. This could be more reliable than using a single DB.
If you are already using Spring, check out DefaultMessageListenerContainer. It allows you to create a POJO message driven bean. This can be used from within an existing application container (your WAR file) or as a separate process.
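A minimal programmatic setup of DefaultMessageListenerContainer might look like this; the connection factory URL, queue name and listener body are placeholders, and in a WAR you would typically declare the same thing as Spring beans instead:

```java
import javax.jms.Message;
import javax.jms.MessageListener;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class ListenerContainerSketch {
    public static void main(String[] args) throws Exception {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(new ActiveMQConnectionFactory("tcp://localhost:61616"));
        container.setDestinationName("work.requests");      // placeholder queue name
        container.setConcurrentConsumers(4);                // process several messages in parallel
        container.setMessageListener((MessageListener) message -> {
            // your POJO handling logic goes here
            System.out.println("handling " + message);
        });
        container.afterPropertiesSet();                     // initialize when used outside a Spring context
        container.start();
    }
}
```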
I've done this sort of thing by hosting the receiver in an app server, WebLogic in my case, but Tomcat works fine too. Don't poll the queue; use an event-based model. This could be hand-coded or it could be a message-driven web service. If the database update is idempotent, you could update the database and send the email, then issue the commit on the queue. It's not a problem to have several threads that all read from the same queue.
I've used various JMS solutions, including TIBCO, ActiveMQ (before Apache subsumed it) and JORAM. JORAM was the more reliable open-source solution, but that may have changed now that it's part of Apache.