Vert.x EventBus not receiving messages in AWS context - java

I have a Java service running on 3 different ec2 instances. They form a cluster using Hazelcast. Here's part of my cluster.xml configuration:
<join>
    <multicast enabled="false"></multicast>
    <tcp-ip enabled="false"></tcp-ip>
    <aws enabled="${AWS_ENABLED}">
        <iam-role>DEFAULT</iam-role>
        <region>us-east-1</region>
        <security-group-name>sec-group-name</security-group-name>
        <hz-port>6100-6110</hz-port>
    </aws>
</join>
Here's the log message showing that discovery is successful ([3.12.2] is the Hazelcast version):
Members {size:3, ver:31} [
    Member [10.0.3.117]:6100 - f5a9d579-ae9c-4c3d-8126-0e8d3a1ecdb9
    Member [10.0.1.32]:6100 - 5799f451-f122-4886-92de-e351704e6980
    Member [10.0.1.193]:6100 - 626de40a-197a-446e-a44f-ac456a52d118 this
]
vertxInstance.sharedData() is working fine, meaning we can cache data between the instances.
However, the issue is when publishing messages to the instances using the vertx eventbus:
this.vertx.eventBus().publish(EventBusService.TOPIC, memberId);
and having this listener:
eventBus.consumer(TOPIC, event -> {
    logger.warn("Captured message: {}", event.body());
});
This configuration works locally (the consumer gets the messages), but once deployed to AWS it doesn't work.
I have tried setting the host explicitly, just as a test, but this does not work either:
VertxOptions options = new VertxOptions();
options.setHAEnabled(true);
options.getEventBusOptions().setClustered(true);
options.getEventBusOptions().setHost("10.0.1.0");
What am I doing wrong and what are my options to debug this issue further?

eventbus communication does not use the cluster manager, but rather direct tcp connections
Quote from this conversation: https://groups.google.com/g/vertx/c/fCiJpQh66fk
The solution was to explicitly set the public host and port options for the eventbus:
vertxOptions.getEventBusOptions().setClusterPublicHost(privateIpAddress);
vertxOptions.getEventBusOptions().setClusterPublicPort(5702);
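Putting the fix together, here is a minimal startup sketch (Vert.x 3.x API; the private IP and port below are placeholders that each instance would resolve for itself, e.g. from the EC2 metadata service):

import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

String privateIpAddress = "10.0.1.193"; // placeholder: this node's EC2 private IP

VertxOptions options = new VertxOptions();
options.setHAEnabled(true);
options.getEventBusOptions()
    .setClustered(true)
    .setHost(privateIpAddress)               // interface the event bus binds to
    .setClusterPublicHost(privateIpAddress)  // address advertised to the other members
    .setClusterPublicPort(5702);             // must be reachable per the security group

Vertx.clusteredVertx(options, res -> {
    if (res.succeeded()) {
        Vertx vertx = res.result();
        // register consumers / deploy verticles here
    } else {
        res.cause().printStackTrace();
    }
});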

Related

server failover with Quarkus Reactive MySQL Clients / io.vertx.mysqlclient

Does io.vertx.mysqlclient support server failover as it can be set up with MySQL Connector/J?
My application is based on Quarkus, using io.vertx.mutiny.mysqlclient.MySQLPool, which in turn is based on io.vertx.mysqlclient. If there is support for server failover in that stack, how can it be set up? I did not find any hints in the documentation or code.
No, it doesn't support failover.
You could create two clients and then use Mutiny failover methods to get the same effect:
MySQLPool client1 = ...
MySQLPool client2 = ...

private Uni<List<Data>> query(MySQLPool client) {
    // Use client param to send queries to the database
}

Uni<List<Data>> results = query(client1)
    .onFailure().recoverWithUni(() -> query(client2));
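For completeness, a sketch of how the two pools might be created (the host names, credentials, and pool size are hypothetical):

import io.vertx.mutiny.mysqlclient.MySQLPool;
import io.vertx.mysqlclient.MySQLConnectOptions;
import io.vertx.sqlclient.PoolOptions;

// Hypothetical endpoints: a primary and a backup MySQL server
MySQLConnectOptions primary = new MySQLConnectOptions()
    .setHost("db-primary.example.com").setPort(3306)
    .setDatabase("mydb").setUser("app").setPassword("secret");
MySQLConnectOptions backup = new MySQLConnectOptions()
    .setHost("db-backup.example.com").setPort(3306)
    .setDatabase("mydb").setUser("app").setPassword("secret");

MySQLPool client1 = MySQLPool.pool(primary, new PoolOptions().setMaxSize(5));
MySQLPool client2 = MySQLPool.pool(backup, new PoolOptions().setMaxSize(5));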

Apache Camel + RabbitMQ - Camel defines its own queues and won't read from already defined queues

I have a problem where I want to let Camel-RabbitMQ consume from my own predefined queues.
Writing to and reading from queues via Camel routes works, but only via Camel's own defined queues. I cannot seem to point Camel to my predefined queues on RabbitMQ.
Essential information
I'm running Camel and camel-rabbitmq v3.3.0 via Spring Boot v2.3.0.RELEASE.
I have 2 services running on my localhost:
on localhost:5672 a RabbitMq v3.8.3 instance
on localhost:15672 a RabbitMq management instance
I run these instances via a simple docker-compose file:
version: '3'
services:
  rabbitmq:
    image: "rabbitmq:3.8.3"
    ports:
      - "5672:5672"
  rabbitmq-management:
    image: "rabbitmq:3-management"
    ports:
      - "15672:15672"
There I have created one exchange and one queue via the admin panel:
main_exchange
in_queue
main_exchange and in_queue are bound to each other via the routing key "in_queue_routing_key".
Problem
Now when I try to connect to read from this in_queue via a camel route:
from("rabbitmq:main_exchange?addresses=localhost:5672" +
"&passive=true"+
"&autoDelete=false" +
"&declare=false" +
"&queue=in_queue" +
"&routingKey=in_queue_routing_key")
.log("received from queue")
.to("file:done");
When I publish a message to the in_queue via the main exchange, nothing happens. The Camel route does not pick up the message.
I tried the following possible solutions:
Setting passive to true, so RabbitMQ doesn't create the queue itself:
"Passive queues depend on the queue already being available at RabbitMQ."
Setting declare to false, so Camel does not declare the exchange and queue itself:
"If the option is true, camel declares the exchange and queue name and binds them together. If the option is false, camel won't declare the exchange and queue name on the server."
Writing to the queue worked, but the message didn't show up in the self-defined in_queue via the admin console:
Code example:
from("file:test")
.log("add to route")
.to("rabbitmq:main_exchange?addresses=localhost:5672" +
"&passive=true"+
"&autoDelete=false" +
"&declare=false" +
"&queue=in_queue" +
"&routingKey=in_queue_routing_key");
But the consumer route (the one above this code example) did pick the messages up after being restarted.
So it looks like the Camel-RabbitMQ route defines its queue elsewhere. How can I make the Camel route consume from my own predefined queues and not from its own?
Sources:
https://camel.apache.org/components/latest/rabbitmq-component.html
It looks like I found the mistake: RabbitMQ provides a combined broker + management image and NOT a standalone management image. This resulted in me running 2 instances of RabbitMQ, one that I was polling and looking at, and a second one where the operations actually happened, so I found nothing even though the application kept working.
This is my docker-compose file now:
version: '3'
services:
  rabbitmq-with-management:
    image: "rabbitmq:3-management"
    ports:
      - "5672:5672"
      - "15672:15672"
Everything works as expected now.
This answer had a similar problem, and an exact match of the queue properties turned out to be the cause.
If your connection string does not exactly match the properties of the predefined queue, Camel "does not find" it and creates its own instead. Differences can be hidden in the default values of the Camel consumer.
In the mentioned answer the difference was the autoDelete flag. It is true by default in Camel, so when it is false on your RabbitMQ queue there is no match.
They had to add &autoDelete=false to the connection string to match the predefined queue.
Perhaps you also have a "property matching problem" with the predefined queue.
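For illustration, here is a consumer URI whose options are aligned with a queue declared as durable and non-auto-delete (whether in_queue was declared durable is an assumption here):

from("rabbitmq:main_exchange?addresses=localhost:5672"
        + "&queue=in_queue"
        + "&routingKey=in_queue_routing_key"
        + "&declare=false"
        + "&autoDelete=false" // must match the broker-side queue definition
        + "&durable=true")    // likewise; a differing default means "no match"
    .log("received from queue")
    .to("file:done");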

RabbitMQ Listening to a queue from multiple servers

I need to listen to a queue on two servers. The queue name is the same. The first server is the primary, the second is the backup.
When the main server is down, work should continue with the backup server's queue.
My class:
@RabbitListener(queues = "to_client")
public class ClientRabbitService {
Now I use RoutingConnectionFactory:
@Bean
@Primary
public ConnectionFactory routingConnectionFactory() {
    SimpleRoutingConnectionFactory rcf = new SimpleRoutingConnectionFactory();
    Map<Object, ConnectionFactory> map = new HashMap<>();
    map.put("[to_kernel]", mainConnectionFactory());
    map.put("[to_kernel_reserve]", reserveConnectionFactory());
    map.put("[to_client]", mainConnectionFactory());
    rcf.setTargetConnectionFactories(map);
    return rcf;
}
[to_kernel] and [to_kernel_reserve] are the queues used for sending messages only; [to_client] is used to receive them.
Any ideas please?
Is the queue on the backup server populated only when the primary server is down? If yes, you may simply listen to both queues (the queue on the secondary server will be empty while the primary is up), as in the sketch below.
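Here is a sketch of that approach with Spring AMQP, assuming the mainConnectionFactory/reserveConnectionFactory beans from the question; the listener methods and the factory bean name are illustrative:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.config.SimpleRabbitListenerContainerFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

// Container factory bound to the backup broker's connection factory
@Bean
public SimpleRabbitListenerContainerFactory reserveFactory(ConnectionFactory reserveConnectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(reserveConnectionFactory);
    return factory;
}

// Default container factory: consumes "to_client" from the primary broker
@RabbitListener(queues = "to_client")
public void onPrimary(String message) { /* ... */ }

// Same queue name, consumed from the backup broker
@RabbitListener(queues = "to_client", containerFactory = "reserveFactory")
public void onBackup(String message) { /* ... */ }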
Note that your solution would be more reliable if you used RabbitMQ clustering. Then you connect to the cluster, specifying the addresses of all machines in the cluster.
It is explained in the official documentation: https://docs.spring.io/spring-amqp/reference/htmlsingle/#connections
Alternatively, if running in a clustered environment, use the addresses attribute.
<rabbit:connection-factory id="connectionFactory" addresses="host1:5672,host2:5672"/>
When using a cluster you will have a single queue (replicated across the cluster). Note that RabbitMQ suffers a significant performance hit when using replication; be sure to read the official documentation on how to configure clustering: https://www.rabbitmq.com/clustering.html
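The Java-config equivalent would look roughly like this (the host names are placeholders):

import org.springframework.amqp.rabbit.connection.CachingConnectionFactory;
import org.springframework.amqp.rabbit.connection.ConnectionFactory;
import org.springframework.context.annotation.Bean;

@Bean
public ConnectionFactory connectionFactory() {
    CachingConnectionFactory cf = new CachingConnectionFactory();
    // the client connects to the first reachable node and fails over on outage
    cf.setAddresses("host1:5672,host2:5672"); // placeholder host names
    return cf;
}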

Azure Service Bus with AMQP - how to specify the session ID

I am trying to send messages to Service Bus using the AMQP Qpid Java library.
I am getting this error:
"SessionId needs to be set for all brokered messages to a Partitioned Topic that supports Ordering"
My topic has "Enforce Message ordering" turned on (this is why I get this error, I guess).
When using the Azure Service Bus Java library (and not AMQP) I have this method:
this.entity.setSessionId(...);
When using the AMQP library I do not see an option to set the session ID on the message I want to send.
Note that if I un-check the option "Enforce Message ordering" the message is sent successfully.
This is my code:
private boolean sendServiceBusMsg(MessageProducer sender, Session sendSession) {
    try {
        // generate the message
        BytesMessage createBytesMessage = (BytesMessage) sendSession.createBytesMessage();
        createBytesMessage.setStringProperty(CAMPAIGN_ID, campaignKey);
        createBytesMessage.setJMSMessageID("ID:" + bm.getMessageId());
        createBytesMessage.setContentType(Symbol.getSymbol("application/octet-stream"));
        /* message is the actual data I send / not seen here */
        createBytesMessage.writeBytes(message.toByteArray());
        sender.send(createBytesMessage);
        return true;
    } catch (JMSException e) {
        return false; // exception handling omitted in the original snippet
    }
}
The SessionId property is mapped to the AMQP message property group-id. The Qpid JMS client should map it to the JMSXGroupID property, so try the following:
createBytesMessage.setStringProperty("JMSXGroupID", "session-1");
As you guessed, a similar SO thread (Azure Service Bus topics partitioning) verified that disabling the Enforce Message Ordering feature by setting SupportOrdering to false can solve the issue, but this can't be done via the Azure Service Bus Java library because the property supportsOrdering is private now.
And you can try to set the Group property as @XinChen said, using AMQP, per the quote below (from here):
"Service Bus Sessions, also called 'Groups' in the AMQP 1.0 protocol, are unbounded sequences of related messages. ServiceBus guarantees ordering of messages in a session."
Hope it helps.

How to programmatically stop JMS listeners from consuming?

How to stop JMS consumers in WebLogic?
I need to programmatically notify WebLogic to stop JMS listeners (MDBs). What I mean by that is stopping consumption of messages off the queue, and starting it again later, also from code.
The Admin Console might have an option for that (maybe stopping the factory?), but we need to do it in code.
Something equivalent to Spring's DefaultMessageListenerContainer.stop().
The equivalent in WebLogic is pauseConsumption() on the destination MBean. Pausing affects anything the destination is "targeted" to, which could be a cluster or a single server.
You can do it through the Admin Console, via WLST scripting, or programmatically via Java JMX. The following links show all 3 methods:
Via the admin console:
JMS Modules -> <Module Name> -> Destination Name -> Pause Consumption
http://docs.oracle.com/cd/E17904_01/apirefs.1111/e13952/pagehelp/JMSjmsdestinationsjmsqueuepauseconsumptiontitle.html
Via WLST scripting:
cd('JMSRuntime/' + serverName + '.jms/JMSServers/' + jmsServerName + '/Destinations/' + jmsModName + '!' + queueName)
cmo.pauseConsumption()
http://middlewaremagic.com/weblogic/?p=6687
Via Java JMX:
for (String queueName : queueNames) {
    try {
        // look up the destination runtime MBean for this queue
        // (findQueueObjectName is a helper shown in the linked post)
        ObjectName destination = findQueueObjectName(con, jmsModuleName, queueName);
        con.invoke(destination, "pauseConsumption", null, null);
    } catch (Exception e) {
        // handle/log the failure for this queue
    }
}
http://khylo.blogspot.com/2011/02/weblogic-jms-pause-and-resume.html
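To start consumption again later from code, the destination MBean also exposes a matching resume operation (a one-line sketch mirroring the invoke above):

con.invoke(destination, "resumeConsumption", null, null);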
