How to manage messages in Apache ActiveMQ - Java

I have Apache ActiveMQ embedded into my Java 8 server-side project. It's working fine, and I am able to send and consume messages from pre-configured queues. I now need to be able to programmatically remove messages from a queue upon request. After reading some docs I found that Apache ActiveMQ has a sub-project called Artemis that seems to provide the required functionality, but I am a bit confused about how to use it. Is Artemis a sort of plugin on top of ActiveMQ, so that I just need to add the required dependencies and use its tools, or is it a separate product that doesn't work with ActiveMQ but stands on its own? If so, how do I manage individual messages (in particular, delete a requested message) in ActiveMQ?

First off, 'ActiveMQ Artemis' is a sub-project within the ActiveMQ project that represents an entirely new broker with a radically different underlying architecture from that of the main ActiveMQ broker. You would run one or the other.
To manage messages in the ActiveMQ broker you would use the JMX Management API and the remove methods exposed on the queue MBeans to remove specific messages. This can be done using the message ID, or more broadly using a message selector to capture more than one message if need be. The JMX API is also exposed via Jolokia, so you can manage the broker via simple REST calls instead of JMX if you prefer.
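For illustration, a minimal sketch of removing messages over JMX, assuming an ActiveMQ 5.x broker with its JMX connector enabled on port 1099; the broker name, queue name, message ID, and selector are placeholders:

    import javax.management.MBeanServerConnection;
    import javax.management.MBeanServerInvocationHandler;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import org.apache.activemq.broker.jmx.QueueViewMBean;

    public class RemoveMessageExample {
        public static void main(String[] args) throws Exception {
            JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
            JMXConnector connector = JMXConnectorFactory.connect(url);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // MBean name of the target queue (ActiveMQ 5.8+ naming scheme).
                ObjectName queueName = new ObjectName(
                    "org.apache.activemq:type=Broker,brokerName=localhost,"
                    + "destinationType=Queue,destinationName=MY.QUEUE");
                QueueViewMBean queue = MBeanServerInvocationHandler.newProxyInstance(
                    mbs, queueName, QueueViewMBean.class, true);
                // Remove a single message by its JMS message ID...
                queue.removeMessage("ID:host-1234-0-1-1-1-1-1");
                // ...or remove every message matching a selector.
                queue.removeMatchingMessages("orderId = '42'");
            } finally {
                connector.close();
            }
        }
    }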
In any case, this sort of message-level management on the broker is a bit of an anti-pattern in the messaging world. If you find yourself needing to treat the broker as a database, you should ask yourself why you aren't using a database, since a broker is not one. You will often run into many more issues trying to manage your messages this way than you would by just putting them into a database.

Related

When using Spring Boot and STOMP, is there a way to make queues not delete messages?

I have an application that uses Java on the backend and Angular on the frontend, and I'm trying to use STOMP messaging between the two to exchange state data.
What I would like to do is have my services, on startup, publish their states and have that data stay in the queue for any client that later connects to the server.
(edit)
For clarification, I don't mean I want the messages to survive a server reboot. What I want is for certain message queues to retain all messages until the server reboots.
How do I tell Spring Boot's STOMP implementation to not delete the contents of a /queue?
You can configure ActiveMQ Artemis as an "external broker" and use a "non-destructive" queue. When a STOMP client receives and acknowledges a message from a non-destructive queue the broker will not remove it. You can define a special "initialization" queue which all clients connect to initially to receive the state data which you care about and then they can connect to whatever other queues they need to complete their normal work.
In this kind of use-case the queue is typically configured both as non-destructive and as a "last value" queue. This way each client can use its own "last value" and keep its state data up-to-date without the complication of stale state data on the queue.
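As a rough sketch of the queue setup with an embedded Artemis broker (the queue name is hypothetical; the same non-destructive and last-value flags can also be set on a queue element in broker.xml):

    import org.apache.activemq.artemis.api.core.QueueConfiguration;
    import org.apache.activemq.artemis.api.core.RoutingType;
    import org.apache.activemq.artemis.core.server.embedded.EmbeddedActiveMQ;

    public class InitQueueExample {
        public static void main(String[] args) throws Exception {
            EmbeddedActiveMQ embedded = new EmbeddedActiveMQ();
            embedded.start(); // picks up broker.xml from the classpath
            // An "initialization" queue that clients can read without
            // consuming, holding only the latest message per last-value key.
            embedded.getActiveMQServer().createQueue(
                new QueueConfiguration("client.init")
                    .setAddress("client.init")
                    .setRoutingType(RoutingType.ANYCAST)
                    .setNonDestructive(true)
                    .setLastValue(true));
        }
    }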
I realize your question was asking how to do this with Spring's built-in broker, but all my research indicates that Spring's simple in-memory broker supports neither last-value queue semantics, nor non-destructive queue semantics, nor even persistent messages. From what I understand, Spring's broker is only meant for the most basic use-cases, which is why Spring enables integration with third-party brokers that can support more advanced use-cases (e.g. like yours).
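Switching Spring to an external broker is a small configuration change; a minimal sketch, assuming Artemis is listening for STOMP on its default port 61613 (the /ws endpoint path is made up):

    import org.springframework.context.annotation.Configuration;
    import org.springframework.messaging.simp.config.MessageBrokerRegistry;
    import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
    import org.springframework.web.socket.config.annotation.StompEndpointRegistry;
    import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

    @Configuration
    @EnableWebSocketMessageBroker
    public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

        @Override
        public void registerStompEndpoints(StompEndpointRegistry registry) {
            registry.addEndpoint("/ws");
        }

        @Override
        public void configureMessageBroker(MessageBrokerRegistry registry) {
            // Replace the in-memory simple broker with a relay to the
            // external Artemis instance.
            registry.enableStompBrokerRelay("/queue", "/topic")
                    .setRelayHost("localhost")
                    .setRelayPort(61613);
            registry.setApplicationDestinationPrefixes("/app");
        }
    }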

What is an approach to using multiple different Kafka servers in one Spring Boot app?

I've got a Spring Boot app working with a Kafka server (let's say kafka#1).
Now I have a case where I need to connect to the Kafka server of an external service (kafka#2), and tomorrow another external service's Kafka (kafka#3) should be added.
Each of kafka#1, kafka#2, and kafka#3 has separate topics. I've managed to find a topic where a simple thing is advised: add all servers to the bootstrap.servers property, separated by commas.
I'm a little worried about server-to-topic mappings; I don't think it's right that Kafka can "ask" all servers about all topics...
What is the right approach for this?
From the app's point of view, imho, it would be better to have multiple configs, for example kafka1.properties, kafka2.properties, and kafka3.properties. Then I could create Kafka beans with the appropriate settings (consumer container factories and consumer factories) and specify the required factory on each @KafkaListener, as in the sketch below. That way I could avoid any unnecessary server-topic mapping problems...
Or maybe that's odd, and I just need to add bootstrap.servers in a single config file, kafka.properties, and not worry? I couldn't find any information about that...
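For illustration, the multi-config approach I have in mind would look roughly like this with Spring for Apache Kafka (bean names, host, and topic are made up):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
    import org.springframework.kafka.core.ConsumerFactory;
    import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
    import org.springframework.stereotype.Component;

    @Configuration
    class Kafka2Config {

        @Bean
        ConsumerFactory<String, String> kafka2ConsumerFactory() {
            Map<String, Object> props = new HashMap<>();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka2-host:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            return new DefaultKafkaConsumerFactory<>(props);
        }

        @Bean
        ConcurrentKafkaListenerContainerFactory<String, String> kafka2ListenerFactory() {
            ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
            factory.setConsumerFactory(kafka2ConsumerFactory());
            return factory;
        }
    }

    @Component
    class ExternalTopicListener {

        // Listen to a topic on kafka#2 using the dedicated factory.
        @KafkaListener(topics = "external-topic", containerFactory = "kafka2ListenerFactory")
        public void onMessage(String message) {
            System.out.println("from kafka#2: " + message);
        }
    }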
If all your Kafka servers belong to the same cluster, it is sufficient to have a single configuration for your clients. Kafka servers communicate internally with each other and share the latest metadata on all topics across the cluster, even if a topic is not located on a particular server.
When defining bootstrap.servers, it is therefore enough to mention only one of the servers. This is explained in more detail in the description of that config:
"A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down)."
It is recommended to list more than one server in bootstrap.servers, in case one of the servers is currently not available.
The intercommunication between Kafka servers is also explained in the book "Kafka: The Definitive Guide":
"How do the clients know where to send the requests? Kafka clients use another
request type called a metadata request, which includes a list of topics the client is interested in. The server response specifies which partitions exist in the topics, the replicas for each partition, and which replica is the leader. Metadata requests can be sent to any broker because all brokers have a metadata cache that contains this information."
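A minimal consumer configuration for a single cluster might then look like this; the host names and topic are placeholders, and one reachable bootstrap server would be enough to discover the rest of the cluster, though listing two or three guards against one being down:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SingleClusterConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Two entries for redundancy; the client discovers the full
            // cluster membership from whichever it reaches first.
            props.put("bootstrap.servers", "broker1:9092,broker2:9092");
            props.put("group.id", "my-app");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("some-topic"));
                consumer.poll(Duration.ofSeconds(1))
                        .forEach(r -> System.out.println(r.value()));
            }
        }
    }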

Benefits of using the ActiveMQ resource adapter

I am creating a Java application in Eclipse to let different devices communicate with each other using a publish/subscribe protocol.
I am using JBoss and ActiveMQ, and I want to know whether I should use an ActiveMQ resource adapter to integrate the broker into JBoss in standalone mode, or whether I should just add dependencies to my pom.xml file and use explicit Java code as indicated here: http://activemq.apache.org/how-do-i-embed-a-broker-inside-a-connection.html.
Here is the documentation I found on integrating ActiveMQ with JBoss in standalone mode: https://access.redhat.com/documentation/en-US/Red_Hat_JBoss_A-MQ/6.1/html/Integrating_with_JBoss_Enterprise_Application_Platform/DeployRar-InstallRar.html
Could someone tell me what the difference is between the two approaches?
Here is the answer to my question:
The first approach starts a broker within your webapp itself. You can use a normal consumer (not a message-driven bean, or MDB), but only your webapp can access it, via the VM transport (vm://).
The second approach lets the app server manage both the connection to the broker and the creation of the broker, so it's probably also within the JVM that runs your webapp and probably only accessible to your webapp, but those details are hidden from you by the app server. You can only consume messages via an MDB, but this provides a uniform interface that doesn't need to change if you switch to another JMS provider in the future.
Since the standard way to integrate a JEE webapp with a JMS broker is via the RA, I'd recommend using that approach simply for consistency and standardization. That should also allow you to switch to a standalone ActiveMQ broker (or another JMS product) in the future with minimal effort.
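For reference, a minimal sketch of the first (embedded) approach, assuming the activemq-broker and JMS API dependencies are on the classpath; the queue name is made up:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Session;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class EmbeddedBrokerExample {
        public static void main(String[] args) throws Exception {
            // The vm:// transport starts an in-JVM broker on first use;
            // broker.persistent=false keeps it memory-only.
            ConnectionFactory factory =
                new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer =
                    session.createProducer(session.createQueue("demo.queue"));
                producer.send(session.createTextMessage("hello"));
            } finally {
                connection.close();
            }
        }
    }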

How to read jms queue statistics programmatically

I found the following link about reading messages from a JMS queue, and it's working:
https://blogs.oracle.com/soaproactive/entry/jms_step_3_using_the
Now I want to read JMS queue statistics programmatically, like the number of messages, the number of pending messages, message in/out times, etc. Is this possible in WebLogic, or does WebLogic provide any API for this purpose?
Please help.
Statistics are part of a message broker implementation and thus vendor-specific. One popular implementation is ActiveMQ. It can be run in WebLogic Server or WebLogic Express.
Note: there are obviously many other JMS implementations around, and you should carefully evaluate for yourself which implementation suits your needs. Nevertheless, I shall use ActiveMQ as an example to point out the relevant features for your case:
Beginning with version 5.3, ActiveMQ ships with a statistics plugin that can be used to retrieve statistics from the broker or its destinations. You should be able to actively poll statistics from within your code by sending messages to specific destinations within the broker; see the linked documentation for details.
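A sketch of that polling pattern, assuming the statisticsBrokerPlugin is enabled in the broker's activemq.xml; the broker URL and queue name are placeholders:

    import javax.jms.*;
    import org.apache.activemq.ActiveMQConnectionFactory;

    public class QueueStatsExample {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                // Replies arrive on a temporary queue set as JMSReplyTo.
                TemporaryQueue replyTo = session.createTemporaryQueue();
                MessageConsumer consumer = session.createConsumer(replyTo);

                // Sending any message to this special destination triggers a stats reply.
                Queue statsQueue = session.createQueue("ActiveMQ.Statistics.Destination.MY.QUEUE");
                MessageProducer producer = session.createProducer(statsQueue);
                Message request = session.createMessage();
                request.setJMSReplyTo(replyTo);
                producer.send(request);

                MapMessage reply = (MapMessage) consumer.receive(2000);
                System.out.println("size = " + reply.getLong("size"));
                System.out.println("enqueueCount = " + reply.getLong("enqueueCount"));
            } finally {
                connection.close();
            }
        }
    }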
Another feature of ActiveMQ is advisory messages. Enable them in your broker's configuration and they allow you to watch the system using regular JMS messages. In this way, you can passively react to certain events in the messaging system, e.g. when a queue exceeds some threshold.
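For instance, a fragment along these lines (reusing a JMS session like the one in the previous sketch) would subscribe to consumer advisories for a queue; the queue name is again a placeholder:

    import javax.jms.MessageConsumer;
    import org.apache.activemq.advisory.AdvisorySupport;
    import org.apache.activemq.command.ActiveMQQueue;

    // Advisory topics deliver broker events (consumers appearing,
    // messages delivered, etc.) as ordinary JMS messages.
    ActiveMQQueue monitored = new ActiveMQQueue("MY.QUEUE");
    MessageConsumer advisories =
        session.createConsumer(AdvisorySupport.getConsumerAdvisoryTopic(monitored));
    advisories.setMessageListener(msg -> System.out.println("advisory: " + msg));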
There is no API for statistics in the JMS spec. However, you can use JMX to monitor the statistics.
From the WebLogic docs:
Monitoring JMS Servers
You can monitor statistics on active JMS servers defined in your domain via the Administration Console or through the JMSServerRuntimeMBean. JMS servers act as management containers for JMS queue and topic resources within JMS modules that are specifically targeted to JMS servers.
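A hedged sketch of reading destination counters from the domain runtime MBean server over JMX; it assumes a WebLogic t3 client jar on the classpath, the host, port, and credentials are placeholders, and you should verify the MBean and attribute names against your WebLogic version:

    import java.util.Hashtable;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import javax.naming.Context;

    public class WebLogicQueueStats {
        public static void main(String[] args) throws Exception {
            // Connect to the domain runtime MBean server.
            JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                "/jndi/weblogic.management.mbeanservers.domainruntime");
            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(Context.SECURITY_CREDENTIALS, "password");
            env.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES,
                "weblogic.management.remote");
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            try {
                MBeanServerConnection mbs = connector.getMBeanServerConnection();
                // Query every JMS destination runtime MBean and print its counters.
                for (ObjectName dest : mbs.queryNames(
                        new ObjectName("com.bea:Type=JMSDestinationRuntime,*"), null)) {
                    System.out.println(dest.getKeyProperty("Name")
                        + " current=" + mbs.getAttribute(dest, "MessagesCurrentCount")
                        + " pending=" + mbs.getAttribute(dest, "MessagesPendingCount")
                        + " received=" + mbs.getAttribute(dest, "MessagesReceivedCount"));
                }
            } finally {
                connector.close();
            }
        }
    }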
The JMS API doesn't provide such information. It serves to receive and send messages, but not to grab statistics from the underlying middleware.
Check the native API of the underlying MQ that you use. For instance, IBM WebSphere MQ has such an API.

Why is Mule required if we have ActiveMQ?

I am working as a software engineer on a project that uses ActiveMQ and Mule for the Java Message Service (JMS). But I have one question: since ActiveMQ transfers all messages from one queue to another, why is Mule required?
[Image: Mule ESB architecture diagram from the official Mule 3 User Manual]
The following is a very simplistic overview to give you a general idea. Without knowing the specifics of your application, it's difficult to tell how everything works together.
Mule is not a message broker; it is a service bus that provides integration and communication services. In a basic form it can act like a message broker, but that's just a side effect of any integration layer.
The real power of Mule is in the various integration points across different applications, systems, and services, providing for security, reporting, etc.
ActiveMQ is just a message broker; its whole job is to provide an effective messaging bus. Mule takes different requests, transforms/translates them, logs them, and may then post them on to ActiveMQ as part of its defined flow.
It could also be that ActiveMQ is acting as the queue for messages that later need to be processed by Mule (as in the image referenced above).
Mule can use ActiveMQ as both a message source and a destination. Using a message queue in this way provides a guarantee that messages will be processed and none are lost.
