I have two instances of a Tomcat server running the same web application for durability.
These web applications consume some queues/topics from ActiveMQ using the Apache Camel library.
My issue is how to synchronize these two consumers so that only one consumer gets a particular message, i.e. so that ActiveMQ sends a different message to each node.
If you have two consumers subscribed to the same queue/topic, you can use a selector to make sure only one consumer gets a particular message. You can find some explanations here.
The Camel JMS component has a selector option that can be used for this.
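For example, a rough sketch of such a route (the queue name, the "node" header and its values are placeholders, and the producer is assumed to set that header) could look like this:

import org.apache.camel.builder.RouteBuilder;

// Rough sketch: each Tomcat instance subscribes with a different selector so that
// a given message is only delivered to one of them. Queue name, header and values
// are placeholders; the producer is assumed to set the "node" header.
public class SelectiveConsumerRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:orders?selector=node='A'") // the second instance would use node='B'
            .log("Consumed on this node: ${body}");
    }
}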
I need to write tests for an application which is integrated with Kafka and sends event messages to a remote Kafka server. My goal is to ensure, as a consumer, that those messages are created and to check their content, if that is possible.
I looked through the Kafka documentation and found that the consumer API is where I'm supposed to start, but I'm unsure how to begin implementing it.
If you want to stay away from much programming, you can configure Mockintosh's "Validating Consumer" (https://mockintosh.io/Async.html#validating-consumer).
Then you can verify the fact that a message was created via simple HTTP calls to the Mockintosh API. There is also a UI for inspecting those messages ad hoc.
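If you would rather start with the Kafka consumer API directly, as the question mentions, a minimal verification sketch (the bootstrap address, group id and topic name are placeholders) could look like this:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Minimal sketch: subscribe to the topic the application writes to, poll for a
// while and check the payloads. Address, group id and topic are placeholders.
public class EventMessageChecker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "remote-kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-verifier");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("event-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                // Assert on key/value here (e.g. with JUnit assertions) instead of printing
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}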
I have an application that uses Java on the backend, Angular on the frontend, and I'm trying to use STOMP messaging between the two to exchange state data.
What I would like to do is have my services, on startup, publish their states and have that data stay in the queue for any client that later connects to the server.
(edit)
For clarification, I don't mean I want messages to survive a server reboot. What I want is for certain message queues to retain all messages until the server reboots.
How do I tell Spring Boot's STOMP implementation to not delete the contents of a /queue?
You can configure ActiveMQ Artemis as an "external broker" and use a "non-destructive" queue. When a STOMP client receives and acknowledges a message from a non-destructive queue the broker will not remove it. You can define a special "initialization" queue which all clients connect to initially to receive the state data which you care about and then they can connect to whatever other queues they need to complete their normal work.
In this kind of use-case the queue is typically configured as non-destructive and as a "last value" queue. This way each client can use its own "last value" and keep its state data up to date without the complication of stale state data on the queue.
I realize your question was asking about how to do this with Spring's built-in broker, but all my research indicates that Spring's simple in-memory broker supports neither last-value queue semantics, nor non-destructive queue semantics, nor even persistent messages. From what I understand Spring's broker is only meant for the most basic use-cases, which is why they enable integration with 3rd party brokers that can support more advanced use-cases (e.g. like yours).
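If you go the external-broker route, the Spring side mostly comes down to switching from the simple broker to a STOMP broker relay. A rough sketch (host, port and credentials are assumptions) could be:

import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

// Rough sketch: relay /queue and /topic destinations to an external ActiveMQ Artemis
// broker instead of Spring's in-memory simple broker. Host, port and credentials
// are placeholders.
@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableStompBrokerRelay("/queue", "/topic")
                .setRelayHost("localhost")
                .setRelayPort(61613)        // Artemis STOMP acceptor port
                .setClientLogin("guest")
                .setClientPasscode("guest");
        registry.setApplicationDestinationPrefixes("/app");
    }

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");
    }
}

The non-destructive and last-value behaviour itself would then be configured on the Artemis side (e.g. via queue attributes in broker.xml), not in Spring.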
I've got a Spring Boot app working with a Kafka cluster (let's say kafka#1).
Now I have a case where I need to connect to the Kafka server of an external service (kafka#2), and tomorrow another external service's Kafka (kafka#3) should be added.
Each of kafka#1, kafka#2, kafka#3 has separate topics. I've managed to find a topic where a simple thing is advised: add all servers to the bootstrap.servers property, separated by commas.
I'm a little worried about the server-to-topic mappings; I don't think it's right that Kafka can "ask" all servers about all topics...
What is the right approach for this?
From the app's point of view, IMHO, it would be better to have multiple configs, for example:
kafka1.properties, kafka2.properties, kafka3.properties. Then I could create Kafka beans with the appropriate settings (consumer factories and listener container factories) and reference the required factory in each @KafkaListener. That way I could avoid any unnecessary server-to-topic mapping problems...
Or maybe that's odd and I just need to add all the bootstrap.servers in a single config file, kafka.properties, and not worry? I couldn't find any information about that...
If all your Kafka servers belong to the same cluster, it is sufficient to have a single configuration for your clients. Kafka servers communicate internally with each other and share the latest metadata on all topics across the cluster, even if a topic is not located on a particular server.
When defining bootstrap.servers, it is therefore enough to mention only one of the servers. This is explained in more detail in the description of that config:
"A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down)."
It is recommended to list more than one server in bootstrap.servers, in case one of the servers is currently not available.
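For example (broker host names are placeholders), a client configuration for a single cluster just lists a couple of its brokers:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Two brokers of the *same* cluster are listed so the client can still bootstrap
// if one of them is down; it discovers the rest of the cluster's brokers from
// whichever one it reaches first. Host names are placeholders.
public class SingleClusterConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");
        return props;
    }
}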
The intercommunication between Kafka servers is also explained in the book "Kafka: The Definitive Guide", which can be downloaded here:
"How do the clients know where to send the requests? Kafka clients use another
request type called a metadata request, which includes a list of topics the client is interested in. The server response specifies which partitions exist in the topics, the replicas for each partition, and which replica is the leader. Metadata requests can be sent to any broker because all brokers have a metadata cache that contains this information."
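If kafka#1, kafka#2 and kafka#3 turn out to be separate clusters rather than one cluster, then the multiple-factory approach described in the question is reasonable. A rough Spring Kafka sketch (bean names, addresses and the group id are assumptions) could look like this:

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

// Rough sketch: one listener container factory per external Kafka cluster.
// Bootstrap addresses and the group id are placeholders.
@Configuration
public class MultiClusterKafkaConfig {

    private Map<String, Object> baseProps(String bootstrapServers) {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return props;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafka1Factory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(baseProps("kafka1:9092")));
        return factory;
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafka2Factory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(baseProps("kafka2:9092")));
        return factory;
    }
}

// A listener then picks the factory for the cluster it needs, e.g.:
// @KafkaListener(topics = "external-topic", containerFactory = "kafka2Factory")
// public void onMessage(String payload) { ... }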
I am trying to do event sourcing with Apache Camel.
For the message bus I am using ActiveMQ.
Use cases
I want to audit each message that is pushed to ActiveMQ, using MongoDB as persistent storage. I have tried mirrored queues in ActiveMQ; this pushes each message to a topic with the same name as the queue.
But I have to implement a worker-based (load-balancing) approach, which is not possible with a topic (message duplication is not allowed).
So I planned to use ActiveMQ with Camel by using the wiretap pattern.
Desired output:
Can I pull the message from the wire tap destination and insert it into MongoDB, or is there a way for Camel to insert it into MongoDB directly?
One possible way to tackle this on the broker side is with Composite Destinations. You can instruct the broker to forward messages sent to a queue on to another queue. Some care needs to be taken when doing this, as by default the forwarding only happens when the queue exists (static configuration of destinations can get around this). There is an option to always forward, and you also have the option of applying selectors to reduce what gets sent. The thing to keep in mind is that unless you have something periodically purging the audit queue, you will eventually run out of space.
You can configure the forwarding in the broker's activemq.xml (as part of the virtual destination interceptor configuration) as follows:
<compositeQueue name="myQueue" forwardOnly="false">
<forwardTo>
<queue physicalName="myAuditQueue" />
</forwardTo>
</compositeQueue>
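On the Camel side you can then consume from the audit queue and insert each message into MongoDB with the camel-mongodb component. A rough sketch (the queue, the Mongo client bean, database and collection names are placeholders) might be:

import org.apache.camel.builder.RouteBuilder;

// Rough sketch: consume from the forwarded audit queue and insert each message body
// into MongoDB. "myMongoClient" refers to a MongoClient bean registered in the Camel
// registry; all names are placeholders.
public class AuditRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("activemq:queue:myAuditQueue")
            .convertBodyTo(String.class)
            .to("mongodb:myMongoClient?database=audit&collection=messages&operation=insert");
    }
}

Alternatively, a wireTap() in the consuming Camel route could feed the same MongoDB endpoint without touching the broker configuration.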
I have Apache ActiveMQ embedded into my Java 8 server-side project. It's working fine, and I am able to send and consume messages from pre-configured queues. I now need to be able to programmatically remove messages from a queue upon request. After reading some docs I found that Apache ActiveMQ has a sub-project called Artemis that seems to provide the required functionality. But I am a bit confused about how to do it. Is Artemis a sort of plugin on top of ActiveMQ where I just need to add the required dependencies and use the tools, or is it a separate product that doesn't work with ActiveMQ but only as an independent product? If so, how do I manage individual messages (in particular, delete a requested message) in ActiveMQ?
First off, 'ActiveMQ Artemis' is a sub-project within the ActiveMQ project that represents an entirely new broker with a radically different underlying architecture than the main ActiveMQ broker. You would run one or the other.
To manage messages in the ActiveMQ broker you would use the JMX Management API and the Queue#remove methods it exposes to remove specific messages. This can be done using the message ID or, more broadly, using a message selector to capture more than one message if need be. The JMX API is also exposed via Jolokia so that you can manage the broker via simple REST calls instead of the JMX way if you prefer.
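For example, a rough sketch using the ActiveMQ QueueViewMBean over JMX (broker URL, broker name, queue name and message ID are placeholders) could be:

import javax.management.JMX;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import org.apache.activemq.broker.jmx.QueueViewMBean;

// Rough sketch: connect to the broker's JMX endpoint and remove a single message
// by its JMS message ID. URL, broker name, queue name and ID are placeholders.
public class RemoveMessageExample {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection connection = connector.getMBeanServerConnection();
            ObjectName queueName = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=MY.QUEUE");
            QueueViewMBean queue =
                    JMX.newMBeanProxy(connection, queueName, QueueViewMBean.class, true);
            queue.removeMessage("ID:example-message-id");             // remove by message ID
            // queue.removeMatchingMessages("JMSCorrelationID = 'x'"); // or by selector
        }
    }
}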
In any case this sort of message level management on the broker is a bit of an anti-pattern in the messaging world. If you find yourself needing to treat the broker as a database then you should ask yourself why you aren't using a database since a broker is not a database. Often you will run into many more issues trying to manage your messages this way as opposed to just putting them into a database.