I recently watched a nice presentation about how RabbitMQ works, and it intrigued me as to how the whole AMQP implementation works.
I was considering using it for a project, but I would like answers to the following questions:
1) Is it possible to have a broker and a producer of a message in the same place? I understand that RabbitMQ allows the use of virtual hosts, so something like this should be possible, right?
2) Can RabbitMQ transmit its messages across two different subnets? I know it can transmit over a LAN or WAN, but how easy is it to do this across two subnets? (One answer here would be to have them bridged.)
3) Regarding question 1, how hard would it be to fail over the broker functionality to another place in case the original broker goes down?
4) I understand that RabbitMQ provides different types of message transmission. One of those is the fanout type, which is more or less similar to a broadcast. Would it be possible, though, to have the inverse of that, meaning multiple producers with multiple queues that all transmit to a single consumer?
1) It doesn't matter where the consumers/producers are, as long as they can reach (access IP:port) the broker. Virtual hosts have nothing to do with that.
2) More or less the same as the answer to the first one: RabbitMQ uses the network and has no knowledge of what kind of network it is in; it doesn't need to know or care.
3) Failover is easy; look into RabbitMQ clustering and high availability. For the clients you'd have to take care of it on your own (e.g. how to reconnect).
4) Yes, broadcast is possible; have a look at the tutorials and the different exchange types that exist. EDIT: As zapl pointed out in the comments, you can also do the inverse of broadcast.
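The inverse of fanout needs no special exchange type: many producers simply publish to the same queue, and a single consumer drains it. Here is a minimal in-process sketch of that shape (using a plain BlockingQueue rather than the RabbitMQ Java client, purely to illustrate the message flow):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class ManyToOneDemo {
    // Several producers share one queue; a single consumer drains it.
    public static List<String> run(int producers, int messagesEach) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        List<Thread> threads = new ArrayList<>();
        for (int p = 0; p < producers; p++) {
            final int id = p;
            Thread t = new Thread(() -> {
                for (int m = 0; m < messagesEach; m++) {
                    queue.add("producer-" + id + "-msg-" + m);
                }
            });
            threads.add(t);
            t.start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        // The single consumer receives every message, regardless of origin.
        List<String> received = new ArrayList<>();
        queue.drainTo(received);
        return received;
    }

    public static void main(String[] args) {
        System.out.println(run(3, 5).size()); // 3 producers x 5 messages each
    }
}
```

In real RabbitMQ terms, each producer would publish to a direct exchange with the same routing key, bound to one queue with one consumer.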
I'm doing a microservice that produces messages for an ActiveMQ broker.
My possible messages are:
1) Logs for my application.
2) The business messages I need.
Later I'll develop a microservice that consumes those messages, and I thought that it could be better to have two different queues at ActiveMQ.
My question is, should I use 2 queues, or should I use 1 queue with a flag to differentiate messages?
When we talk about microservices, it's about segregation of responsibilities and a loosely coupled architecture that can be extended later on.
If you identify messages based on a flag:
It will be hardcoded even when the messages are not related
Highly coupled architecture
Queue maintenance and scaling would be affected later on
and so on...
I would recommend using different queues for different types of messages that each serve a unique purpose.
With CQRS architecture, in write-intensive real-time applications like trading systems, the traditional approach of loading aggregates from database + distributed cache + distributed lock does not perform well.
The actor model (Akka) fits well here, but I am looking for an alternative solution. What I have in mind is to use Kafka for sending commands and make use of topic partitioning to make sure commands for the same aggregate always arrive on the same node, then use database + local cache + local pessimistic lock to load aggregate roots and handle commands. This brings 3 main benefits:
aggregates are distributed across multiple nodes
no network traffic for looking up a central cache or distributed locks
no serialization & deserialization when saving and loading aggregates
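The "same aggregate, same node" property rests on a deterministic key-to-partition mapping. A minimal stand-in sketch (note: Kafka's default partitioner actually applies murmur2 to the serialized key bytes, not String.hashCode; the code below only illustrates the invariant):

```java
public class PartitionRouting {
    // Simplified stand-in for Kafka's default partitioner. The property
    // that matters for the design above is that equal keys always map to
    // the same partition, so the same consumer handles them in order.
    public static int partitionFor(String aggregateId, int numPartitions) {
        // Mask off the sign bit so the result is a valid partition index.
        return (aggregateId.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Commands for the same aggregate always land on the same partition.
        System.out.println(partitionFor("order-42", 12) == partitionFor("order-42", 12));
    }
}
```

In practice you get this behavior simply by producing with the aggregate id as the record key.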
One problem with this approach is that when consumer groups rebalance, the local cache may hold stale aggregate state; setting short cache timeouts should work most of the time.
Has anyone used this approach in real projects?
Is it a good design and what are the down sides?
Please share your thoughts and experiences. Thank you.
IMHO Kafka will do the job for you. You need to ensure that the network is fast enough.
In our project we react in soft real time to customer needs and purchases, and we send information over Kafka to different services that perform the business logic. This works well.
Confirmations at the network level are handled well within the Kafka broker.
For example, when one of the broker nodes crashes, we do not lose messages.
Another matter is if you need some kind of very strong transactional confirmation for all actions; then you need to be careful in your design - perhaps you need more topics, to send the information and all the required logical confirmations.
If you need to implement more logic, like confirmation once a message has been processed by another external service, you will perhaps also need to disable auto commits.
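As a sketch of that last point, assuming a plain Kafka consumer, the relevant configuration would look something like this (the group id is a hypothetical example):

```properties
# Hypothetical consumer settings: turn off auto commit so offsets are
# committed only after downstream processing has been confirmed.
group.id=order-processing
enable.auto.commit=false
```

With auto commit off, the application calls commitSync() (or commitAsync()) on the consumer itself, once the external confirmation has arrived; an unconfirmed message is then re-delivered after a restart.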
I do not know if this is a complete answer to your question.
We use TIBCO EMS as our messaging system and have used Apache Camel to write our application. In our application, messages are written to a queue. A component, with concurrentConsumers set to 8, reads from the queue, processes the message, then writes to another queue. Another component, again with concurrentConsumers set to 8, then reads from this new queue, and so on. Up until now, maintaining message order has not been important, but a new requirement means that it now is. Looking at the Camel documentation, it is suggested that JMSXGroupID be used to maintain ordering. Unfortunately, this functionality is not available with TIBCO EMS. Are there any other ways of maintaining ordering in Camel in a multithreaded application? I have looked at sticky load balancing, but this seems to be applicable to endpoint load balancing only.
Thanks
Bruce
In the enterprise integration world, we generally use the Resequencer design pattern to solve this kind of problem, where you need to ensure ordering of messages.
Apache Camel covers a broad range of the Enterprise Integration Patterns, including the Resequencer, and has out-of-the-box implementations of those patterns. So what you are looking for should be this:
http://camel.apache.org/resequencer.html
In your specific case, all you need to do is add a custom message header, e.g. myMessageNo, carrying a sequential number that specifies the ordering, to outgoing messages to TIBCO EMS. Then, on the consumer side, use the Resequencer EIP to restore the ordering of incoming messages from TIBCO EMS.
As you can see, however, it's not as easy as just putting the Resequencer EIP into your Camel routes. (Any asynchronous solution is always hard to build correctly.) For the resequencer, you need to consider what happens on sad paths, e.g. when some messages get lost and never arrive. To make sure your routes work fine even in those exceptional cases, you need to choose between two options: maximum batch size or timeout. Depending on the condition chosen, the resequencer will flush messages when the batch reaches the maximum size or when it times out waiting for a missing message.
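The core of the resequencer can be sketched independently of Camel: buffer out-of-order messages and release them only when the next expected sequence number (the hypothetical myMessageNo header from above) has arrived. A minimal sketch, without the batch-size/timeout flushing a production resequencer also needs:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class Resequencer {
    // Out-of-order messages wait here until their turn comes.
    private final PriorityQueue<Integer> buffer = new PriorityQueue<>();
    private int nextExpected = 1;

    // Accept a message number and return every message that can now be
    // released in order (possibly none, possibly several).
    public List<Integer> accept(int messageNo) {
        buffer.add(messageNo);
        List<Integer> released = new ArrayList<>();
        while (!buffer.isEmpty() && buffer.peek() == nextExpected) {
            released.add(buffer.poll());
            nextExpected++;
        }
        return released;
    }

    public static void main(String[] args) {
        Resequencer r = new Resequencer();
        System.out.println(r.accept(2)); // [] -- still waiting for 1
        System.out.println(r.accept(1)); // [1, 2] -- gap filled, both released
        System.out.println(r.accept(3)); // [3]
    }
}
```

In Camel itself this logic is what the resequencer component gives you, keyed on an expression such as header("myMessageNo").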
I have two applications.
First one creates typical files.
Second application uses these files.
When the first application changes some file, the second application should be notified about this.
I tried to do this with ServerSocket and it does work.
First application is a client (java.net.Socket) and second is a server (java.net.ServerSocket).
But it should also work for multiple instances of the applications.
In case we have multiple instances of application two, the first should alert each one.
Both applications are desktop applications running on the same machine, without any databases. The question is about how to implement this, not about the actual code. The actual code runs OK; it just doesn't fit the specifications.
To understand the problem, let's take an example.
There is one application producing something (let's call it prodApp), and there are many other applications that should get notified (let's call them consApp1, consApp2, ... consAppN).
A solution to this problem can be designed using JMS (Java Message Service).
JMS provides a way for multiple consApps to register in one place (called a TOPIC in JMS) and get notified as soon as something has been put on the TOPIC (which in this case will be done by prodApp).
So it will work like this: prodApp does its processing and writes its status to the JMS TOPIC; as a result, all the consApps get notified and start their own processing.
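In miniature, the topic semantics described above look like this (an in-process stand-in purely for illustration; a real implementation would use a JMS provider such as ActiveMQ with its Topic and MessageListener API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class MiniTopic {
    private final List<Consumer<String>> subscribers = new ArrayList<>();

    // Each consApp registers once, like a durable subscriber on a JMS topic.
    public void subscribe(Consumer<String> consApp) {
        subscribers.add(consApp);
    }

    // Publishing delivers the message to every subscriber (topic semantics),
    // unlike a queue, where only one consumer would receive each message.
    public void publish(String message) {
        for (Consumer<String> s : subscribers) {
            s.accept(message);
        }
    }

    public static void main(String[] args) {
        MiniTopic topic = new MiniTopic();
        List<String> seen = new ArrayList<>();
        topic.subscribe(m -> seen.add("consApp1:" + m));
        topic.subscribe(m -> seen.add("consApp2:" + m));
        topic.publish("file-updated");
        System.out.println(seen); // both consumers were notified
    }
}
```

The queue-vs-topic distinction is exactly why a topic fits here: every running instance of application two gets its own copy of the notification.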
In case the number of files is small and they are known to be saved in a single place, the second application(s) could check the files periodically (e.g. every minute) for the last modification time of each file.
This could be even faster than sockets, RMI, or other network communication.
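A minimal sketch of that polling approach, tracking each file's last-modified timestamp between checks (on a single machine, java.nio.file.WatchService would be the non-polling alternative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.HashMap;
import java.util.Map;

public class FilePoller {
    // Remembers the last-modified time seen per file on the previous check.
    private final Map<Path, Long> lastSeen = new HashMap<>();

    // Returns true if the file's timestamp changed since the previous check;
    // the first check only records the timestamp and returns false.
    public boolean hasChanged(Path file) {
        try {
            long modified = Files.getLastModifiedTime(file).toMillis();
            Long previous = lastSeen.put(file, modified);
            return previous != null && previous != modified;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("watched", ".txt");
        FilePoller poller = new FilePoller();
        poller.hasChanged(tmp); // baseline check
        // Simulate a later write by the first application.
        Files.setLastModifiedTime(tmp, FileTime.fromMillis(System.currentTimeMillis() + 5000));
        System.out.println(poller.hasChanged(tmp)); // true
    }
}
```

Each instance of the second application would run such a check on a timer, which also neatly sidesteps the one-server-socket-per-port limitation of the socket approach.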
In my environment I need to schedule long-running tasks. I have application A, which just shows the client the list of currently running tasks and allows scheduling new ones. There is also application B, which does the actual hard work.
So app A needs to schedule a task in app B. The only thing they have in common is the database. The simplest thing to do seems to be adding a table with a list of tasks and having app B query that table every once in a while and execute newly scheduled tasks.
Yet, it doesn't seem to be the proper way of doing it. At first glance it seems that the tool for the job in an enterprise environment is a message queue. App A sends a message with task description to the queue, app B reads a message from the queue and executes the task. Is it possible in such case for app A to get the status of all the tasks scheduled (persistent queue?) without creating a table like the one mentioned above to which app B would write the status of completed tasks? Note also that there may be multiple instances of app A and each of them needs to know about all tasks of all instances.
The disadvantage of the 'table approach' is that I need to have DB polling.
The disadvantage of the 'message queue approach' is that I'm introducing a new communication channel into the infrastructure (yet another thing that can fail).
What do you think? Any other ideas?
Thank you in advance for any advice :)
========== UPDATE ==========
Eventually I decided on the following approach: there are two sides of this problem: one is communication between A and B. The other is getting information about the tasks.
For communication the right tool for the job is JMS. For getting data the right tool is the database.
So I'll have app A add a new row to the 'tasks' table describing a task (I can query this table later on to get a list of all tasks). Then A will send a message to B via JMS just to say 'you have work to do'. B will do the work and update the task status in the table.
Thank you for all responses!
You need to think about your deployment environment both now and likely changes in the future.
You're effectively looking at two problems, both of which can be solved in several ways, depending on how much infrastructure you are able to obtain and are willing to introduce; it's also important to "right-size" your design for your problems.
Whilst you're correct to think about the use of both databases and messaging, you need to consider whether these items are overkill for your domain and only you and others who know your domain can really answer that.
My advice would be to look at what is already in use in your area. If you already have database infrastructure that you can build on, then monitoring task activity and scheduling jobs in a database is not a bad idea. However, if you would have to run your own database, get new hardware, or don't have sufficient support resources, then introducing a database may not be a sensible option, and you could look at a simpler, but potentially more fragile, approach of having your processes write files to schedule jobs and report tasks.
At the same time, don't look at the introduction of a DB or JMS as inherently error prone. Correctly implemented they are stable and proven technologies that will make your system scalable and manageable.
As #kan says, exposing a web service interface is also a useful option.
Another option is to make B a service, e.g. expose control and status interfaces as REST or SOAP interfaces. In this case A will just be a client application of B. B stores its state in the database. A is a stateless application which just communicates with B.
BTW, using Spring Remoting you could expose an interface and use any of JMS, REST, SOAP, or RMI as the transport layer, which could be changed later if necessary.
You have messaging (JMS) in enterprise architecture. Use it; it is available in Java EE containers like GlassFish. Messages can be made persistent to be sure they will be delivered even if the server reboots while they are in the queue. And you do not even need to care how all this is implemented.
There can be a couple of approaches here. First, as #kan suggested, have app B expose a web service for the interactions. This will allow heterogeneous clients to communicate with app B. It seems a good approach. App B can internally use whatever persistent store it deems fit.
Alternatively, you can have app B expose a management interface via JMX and have applications like app A talk to app B through this management interface. Implementing task submission and retrieving statistics etc. would be simpler. Additionally, you can leverage JMX notifications for real-time updates on task submissions and completions. The downside is that this would be a Java-specific solution, so supporting heterogeneous clients would be a distant dream.