Cannot delete a specific message from an ActiveMQ queue - Java

I'm trying to get a list of all the messages in an ActiveMQ queue using Java, and then delete one of them based on its ID. My code looks like the following:
Connection connection = connectionFactory.createConnection("username", "password");
connection.start();
Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
Destination topicDestination = session.createQueue(queue_name);
QueueBrowser browser = session.createBrowser((Queue) topicDestination);
Enumeration<?> messages = browser.getEnumeration();
int count = 0;
while (messages.hasMoreElements()) {
    count++;
    TextMessage messageInTheQueue = (TextMessage) messages.nextElement();
    System.out.println("Message " + count + " in the queue:");
    System.out.println(messageInTheQueue.getJMSMessageID());
    System.out.println(messageInTheQueue.getText());
    System.out.println("===============================================");
    System.out.println(" ");
}
browser.close();
session.close();
connection.close();
When I run it, I get the following output:
Message 1 in the queue:
ID:message1-server-42764-1483561148119-0:0:1:1:1
Today is warm
===============================================
Message 2 in the queue:
ID:message1-server-42764-1483561148119-0:0:1:1:2
Today is dry
===============================================
I then use one of the IDs, for example the second one, message1-server-42764-1483561148119-0:0:1:1:2, to consume and thereby delete the message, like this:
Connection connection = connectionFactory.createConnection("username", "password");
Session session = connection.createSession(true, Session.AUTO_ACKNOWLEDGE);
Destination topicDestination = session.createQueue(queue_name);
MessageConsumer consumer = session.createConsumer(topicDestination, "JMSMessageID=" + message_id);
connection.start();
consumer.receive();
consumer.close();
session.commit();
session.close();
connection.stop();
but I keep getting a JMS exception:
javax.jms.InvalidSelectorException: JMSMessageID=message1-server-42764-1483561148119-0:0:1:1:2
at org.apache.activemq.selector.SelectorParser.parse(SelectorParser.java:47)
at org.apache.activemq.ActiveMQMessageConsumer.<init>(ActiveMQMessageConsumer.java:186)
at org.apache.activemq.ActiveMQSession.createConsumer(ActiveMQSession.java:840)
at activeMQ.DeleteSingleMessage.run(DeleteSingleMessage.java:30)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.activemq.selector.TokenMgrError: Lexical error at line 1, column 51. Encountered: ":" (58), after : ""
at org.apache.activemq.selector.SelectorParserTokenManager.getNextToken(SelectorParserTokenManager.java:1057)
at org.apache.activemq.selector.SelectorParser.jj_scan_token(SelectorParser.java:1133)
at org.apache.activemq.selector.SelectorParser.jj_3R_18(SelectorParser.java:849)
at org.apache.activemq.selector.SelectorParser.jj_3R_11(SelectorParser.java:857)
at org.apache.activemq.selector.SelectorParser.jj_3R_9(SelectorParser.java:883)
at org.apache.activemq.selector.SelectorParser.jj_3_5(SelectorParser.java:916)
at org.apache.activemq.selector.SelectorParser.jj_2_5(SelectorParser.java:563)
at org.apache.activemq.selector.SelectorParser.addExpression(SelectorParser.java:323)
at org.apache.activemq.selector.SelectorParser.comparisonExpression(SelectorParser.java:172)
at org.apache.activemq.selector.SelectorParser.equalityExpression(SelectorParser.java:132)
at org.apache.activemq.selector.SelectorParser.andExpression(SelectorParser.java:96)
at org.apache.activemq.selector.SelectorParser.orExpression(SelectorParser.java:75)
at org.apache.activemq.selector.SelectorParser.JmsSelector(SelectorParser.java:67)
at org.apache.activemq.selector.SelectorParser.parse(SelectorParser.java:44)
... 4 more
I tried following this post, but I'm not sure what I'm missing.

javax.jms.InvalidSelectorException: JMSMessageID=message1-server-42764-1483561148119-0:0:1:1:2
You forgot the ID: prefix of the JMSMessageID. Add it, and wrap the selector value in single quotes so it parses as a string literal:
String message_id = "'ID:message1-server-42764-1483561148119-0:0:1:1:2'";
MessageConsumer consumer = session.createConsumer(topicDestination, "JMSMessageID=" + message_id);
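For completeness, here is a minimal sketch of the whole delete step with the corrected selector, assuming the same connectionFactory, queue_name, and credentials as above (the receive timeout is an added safeguard so the call returns null instead of blocking forever if no message matches):
String message_id = "'ID:message1-server-42764-1483561148119-0:0:1:1:2'";
Connection connection = connectionFactory.createConnection("username", "password");
// transacted session: the consumed message is removed when the session commits
Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
Destination queue = session.createQueue(queue_name);
// the selector value is a string literal, so it must stay inside single quotes
MessageConsumer consumer = session.createConsumer(queue, "JMSMessageID=" + message_id);
connection.start();
Message removed = consumer.receive(5000); // wait up to 5 s; null if nothing matched
consumer.close();
session.commit(); // the commit is what actually removes the consumed message
session.close();
connection.close();
Note that when the transacted flag is true, the acknowledge-mode argument is ignored; it is the commit that removes the message from the queue.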

Related

How is concurrency handled when two JMS listeners share the same listener container factory in Spring

I'm supposed to be listening on two queues and processing the messages concurrently. At any given moment I should not be processing more than 10 messages. To test this, I configured my DefaultJmsListenerContainerFactory with a concurrency of 5-5, as below:
@Bean
public ActiveMQConnectionFactory activeMQConnectionFactory() {
    ActiveMQConnectionFactory activeMQConnectionFactory = new ActiveMQConnectionFactory(BROKER_URL);
    return activeMQConnectionFactory;
}

@Bean
public DefaultJmsListenerContainerFactory jmsFactory() {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(activeMQConnectionFactory());
    factory.setSessionAcknowledgeMode(ActiveMQSession.INDIVIDUAL_ACKNOWLEDGE);
    factory.setConcurrency("5-5");
    return factory;
}
And the listeners as below:
#JmsListener(id = "queue1", destination = "QUEUE1", containerFactory = "jmsFactory")
#JmsListener(id = "queue2", destination = "QUEUE2", containerFactory = "jmsFactory")
public void test(ActiveMQTextMessage message) throws InterruptedException, JMSException {
log.info("Received Task: " + message.getText());
long randomLong = (long)(Math.random() * 500);
Thread.sleep(randomLong);
log.info("Slept for " + randomLong + "ms for "+ message.getText());
message.acknowledge();
}
Is each listener assigned 5 consumers, or are the 5 consumers shared between the two listeners? If the former is true, is there any way to configure it such that the 5 consumers are shared?
I sent 10 requests to both queues using two for loops:
for (int i = 0; i < 10; i++) {
    Queue1Sender.sendMessage("Queue1 Request: " + (i + 1));
}
for (int i = 0; i < 10; i++) {
    Queue2Sender.sendMessage("Queue2 Request: " + (i + 1));
}
This is what the logs printed:
Received Task: Queue1 Request: 2
Received Task: Queue1 Request: 3
Received Task: Queue1 Request: 1
Received Task: Queue1 Request: 4
Received Task: Queue1 Request: 5
Received Task: Queue2 Request: 1
Received Task: Queue2 Request: 2
Received Task: Queue2 Request: 3
Received Task: Queue2 Request: 4
Received Task: Queue2 Request: 5
Received Task: Queue2 Request: 6
Received Task: Queue1 Request: 6
Received Task: Queue2 Request: 7
Received Task: Queue2 Request: 8
Received Task: Queue1 Request: 7
Received Task: Queue1 Request: 8
Received Task: Queue1 Request: 9
Received Task: Queue1 Request: 10
Received Task: Queue2 Request: 9
Received Task: Queue2 Request: 10
I can't tell whether the consumers are being shared. Is there a better testing strategy?
You will get two complete listener containers with that configuration; each with 5 consumers.
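If the goal is a single pool of 5 consumers shared across both queues, one option worth exploring is a single listener on a composite destination. This is only a sketch: it relies on ActiveMQ's comma-separated composite destinations (not on plain JMS), and the listener id "sharedQueues" is made up here:
// One listener registration; ActiveMQ treats "QUEUE1,QUEUE2" as a composite
// destination, so this container's 5 consumers drain both queues
@JmsListener(id = "sharedQueues", destination = "QUEUE1,QUEUE2", containerFactory = "jmsFactory")
public void testShared(ActiveMQTextMessage message) throws JMSException {
    log.info("Received Task: " + message.getText());
    message.acknowledge();
}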

Apache Kafka Java consumer does not receive message for topic with replication factor more than one

I'm starting out on Apache Kafka with a simple producer/consumer app in Java. I'm using kafka-clients version 0.10.0.1 and running it on a Mac.
I created a topic named replicated_topic_partitioned with 3 partitions and with replication factor as 3.
I started the zookeeper at port 2181. I started three brokers with id 1, 2 and 3 on ports 9092, 9093 and 9094 respectively.
Here's the output of the describe command
kafka_2.12-2.3.0/bin/kafka-topics.sh --describe --topic replicated_topic_partitioned --bootstrap-server localhost:9092
Topic:replicated_topic_partitioned PartitionCount:3 ReplicationFactor:3 Configs:segment.bytes=1073741824
Topic: replicated_topic_partitioned Partition: 0 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: replicated_topic_partitioned Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: replicated_topic_partitioned Partition: 2 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
I wrote a simple producer and a simple consumer. The producer ran successfully and published the messages, but when I start the consumer, the poll call just waits indefinitely. On debugging, I found that it keeps looping in the awaitMetadataUpdate method of the ConsumerNetworkClient.
Here is the code for the producer and the consumer.
Producer.java
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
KafkaProducer<String, String> myProducer = new KafkaProducer<>(properties);
DateFormat dtFormat = new SimpleDateFormat("yyyy/MM/dd HH:mm:ss:SSS");
String topic = "replicated_topic_partitioned";
int numberOfRecords = 10;
try {
    for (int i = 0; i < numberOfRecords; i++) {
        String message = String.format("Message: %s sent at %s", Integer.toString(i), dtFormat.format(new Date()));
        System.out.println("Sending " + message);
        myProducer.send(new ProducerRecord<String, String>(topic, message));
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    myProducer.close();
}
Consumer.java
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
properties.put("group.id", UUID.randomUUID().toString());
properties.put("auto.offset.reset", "earliest");
KafkaConsumer<String, String> myConsumer = new KafkaConsumer<>(properties);
String topic = "replicated_topic_partitioned";
myConsumer.subscribe(Collections.singletonList(topic));
try {
    while (true) {
        ConsumerRecords<String, String> records = myConsumer.poll(1000);
        printRecords(records);
    }
} finally {
    myConsumer.close();
}
Here are some key fields from server.properties:
broker.id=1
host.name=localhost
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs-1
num.partitions=1
num.recovery.threads.per.data.dir=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
The server.properties files for the other two brokers were replicas of the above, with broker.id, the port, and the log.dirs changed.
This did not work for me:
Kafka 0.9.0.1 Java Consumer stuck in awaitMetadataUpdate()
However, if I start the console consumer from the command line and pass a partition, it successfully reads the messages for that partition, but it does not receive any messages when just the topic is specified.
Works:
kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --topic replicated_topic_partitioned --bootstrap-server localhost:9092
--from-beginning --partition 1
Does not work:
kafka_2.12-2.3.0/bin/kafka-console-consumer.sh --topic replicated_topic_partitioned --bootstrap-server localhost:9092
--from-beginning
NOTE: The above consumer works perfectly for a topic with a replication factor of 1.
Questions:
Why does the Java consumer not read any messages for a topic with a replication factor greater than one, even when assigned to a specific partition (like myConsumer.assign(Collections.singletonList(new TopicPartition(topic, 2))))?
Why does the console consumer read messages only when passed a partition (again, it works for a topic with a replication factor of one)?
So, you're sending 10 records, but all 10 records have the SAME key:
for (int i = 0; i < numberOfRecords; i++) {
    String message = String.format("Message: %s sent at %s", Integer.toString(i), dtFormat.format(new Date()));
    System.out.println("Sending " + message);
    myProducer.send(new ProducerRecord<String, String>(topic, message)); // <--- KEY=topic
}
Unless told otherwise (by setting a partition directly on the ProducerRecord), the partition a record is delivered to is determined by something like:
partition = murmur2(serialize(key)) % numPartitions
So the same key means the same partition.
Have you tried searching for your 10 records on partitions 0 and 2, maybe?
If you want a better "spread" of records amongst partitions, either use a null key (you'd get round-robin) or a variable key.
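To illustrate the variable-key option, here is a sketch based on the producer loop above; the three-argument ProducerRecord constructor takes topic, key, and value:
for (int i = 0; i < numberOfRecords; i++) {
    String key = Integer.toString(i); // a varying key, so records hash to different partitions
    String message = String.format("Message: %s sent at %s", key, dtFormat.format(new Date()));
    // ProducerRecord(topic, key, value): the key is what the default partitioner hashes
    myProducer.send(new ProducerRecord<String, String>(topic, key, message));
}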
Disclaimer: This is not an answer.
The Java consumer is now working as expected. I did not make any change to the code or the configuration. The only thing I did was restart my Mac, which caused the kafka-logs folder (and, I guess, the zookeeper folder too) to be deleted.
I re-created the topic (with the same command: 3 partitions, replication factor of 3), then re-started the brokers with the same configuration, with no advertised.host.name or advertised.port config.
So, recreating the kafka-logs folders and topics remediated something that was causing the issue earlier.
My only suspect is an improperly terminated consumer. I initially ran the consumer code without the close call on the consumer in the finally block, and I also had the same group.id. Maybe all 3 partitions were assigned to consumers that weren't properly terminated or closed. This is just a guess.
But even calling myConsumer.position(new TopicPartition(topic, 2)) did not return a response earlier when I assigned the consumer to a partition; it was looping in the same awaitMetadataUpdate method.

Reactive Programming with Reactor and RabbitMQ

Recently I wrote a demo program to explore reactive programming with a combination of Reactor and RabbitMQ. This is my demo code:
public class FluxWithRabbitMQDemo {
    private static final String QUEUE = "demo_thong";
    private final reactor.rabbitmq.Sender sender;
    private final Receiver receiver;

    public FluxWithRabbitMQDemo() {
        this.sender = ReactorRabbitMq.createSender();
        this.receiver = ReactorRabbitMq.createReceiver();
    }

    public void run(int count) {
        ConnectionFactory connectionFactory = new ConnectionFactory();
        connectionFactory.useNio();
        SenderOptions senderOptions = new SenderOptions()
                .connectionFactory(connectionFactory)
                .resourceCreationScheduler(Schedulers.elastic());
        reactor.rabbitmq.Sender sender = ReactorRabbitMq.createSender(senderOptions);
        Mono<AMQP.Queue.DeclareOk> queueDeclaration = sender.declareQueue(QueueSpecification.queue(QUEUE));
        Flux<Delivery> messages = receiver.consumeAutoAck(QUEUE);
        queueDeclaration.thenMany(messages).subscribe(m -> System.out.println("Get message " + new String(m.getBody())));
        Flux<OutboundMessageResult> dataStream = sender.sendWithPublishConfirms(Flux.range(1, count)
                .filter(m -> !m.equals(10))
                .parallel()
                .runOn(Schedulers.parallel())
                .doOnNext(i -> System.out.println("Message " + i + " run on thread " + Thread.currentThread().getId()))
                .map(i -> new OutboundMessage("", QUEUE, ("Message " + i).getBytes())));
        sender.declareQueue(QueueSpecification.queue(QUEUE))
                .thenMany(dataStream)
                .doOnError(e -> System.out.println("Send failed" + e))
                .subscribe(m -> {
                    if (m != null) {
                        System.out.println("Sent successfully message " + new String(m.getOutboundMessage().getBody()));
                    }
                });
        try {
            Thread.sleep(20000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws Exception {
        int count = 20;
        FluxWithRabbitMQDemo sender = new FluxWithRabbitMQDemo();
        sender.run(count);
    }
}
I expected that as soon as the Flux emits an item, the Sender sends it to RabbitMQ, and once RabbitMQ has it, the Receiver picks it up, so sends and receives would interleave. But everything happened in sequential phases, and this is the result I got:
Message 3 run on thread 25
Message 4 run on thread 26
Message 8 run on thread 26
Message 13 run on thread 26
Message 17 run on thread 26
Message 2 run on thread 24
Message 1 run on thread 23
Message 6 run on thread 24
Message 5 run on thread 23
Message 9 run on thread 23
Message 14 run on thread 23
Message 18 run on thread 23
Message 11 run on thread 24
Message 15 run on thread 24
Message 19 run on thread 24
Message 7 run on thread 25
Message 12 run on thread 25
Message 16 run on thread 25
Message 20 run on thread 25
Sent successfully message Message 3
Sent successfully message Message 1
Sent successfully message Message 2
Sent successfully message Message 4
Sent successfully message Message 5
Sent successfully message Message 6
Sent successfully message Message 8
Sent successfully message Message 9
Sent successfully message Message 11
Sent successfully message Message 13
Sent successfully message Message 14
Sent successfully message Message 15
Sent successfully message Message 17
Sent successfully message Message 18
Sent successfully message Message 19
Sent successfully message Message 7
Sent successfully message Message 12
Sent successfully message Message 16
Sent successfully message Message 20
Get message Message 3
Get message Message 1
Get message Message 2
Get message Message 4
Get message Message 5
Get message Message 6
Get message Message 8
Get message Message 9
Get message Message 11
Get message Message 13
Get message Message 14
Get message Message 15
Get message Message 17
Get message Message 18
Get message Message 19
Get message Message 7
Get message Message 12
Get message Message 16
Get message Message 20
I do not know what to change in my code to achieve the expected result. Can someone help me? Thanks in advance!
Messages are generated too fast to observe any interleaving. To see it, slow the producer down by adding a pause to dataStream, for example (Thread.sleep throws a checked exception, so it needs to be wrapped):
.doOnNext(i -> { try { Thread.sleep(10); } catch (InterruptedException e) { Thread.currentThread().interrupt(); } })
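A non-blocking alternative is a sketch along these lines: pace the source with delayElements, which needs java.time.Duration and must be applied to the sequential Flux, before .parallel():
// Pace the source non-blockingly instead of sleeping inside doOnNext
Flux<OutboundMessage> paced = Flux.range(1, count)
        .filter(m -> !m.equals(10))
        .delayElements(Duration.ofMillis(10)) // emit roughly one item every 10 ms
        .map(i -> new OutboundMessage("", QUEUE, ("Message " + i).getBytes()));
// then pass it to the sender: sender.sendWithPublishConfirms(paced)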

Can produce to Kafka but cannot consume

I'm using the Kafka JDK client version 0.10.2.1. I am able to produce simple messages to Kafka for a "heartbeat" test, but I cannot consume a message from that same topic using the SDK. I am able to consume that message from the Kafka CLI, so I have confirmed the message is there. Here's the function I'm using to consume from my Kafka server, with the props. I pass in the message I produced to the topic only after I have confirmed the produce() was successful; I can post that function later if requested:
private def consumeFromKafka(topic: String, expectedMessage: String): Boolean = {
  val props: Properties = initProps("consumer")
  val consumer = new KafkaConsumer[String, String](props)
  consumer.subscribe(List(topic).asJava)
  var readExpectedRecord = false
  try {
    val records = {
      val firstPollRecs = consumer.poll(MAX_POLLTIME_MS)
      // increase timeout and try again if nothing comes back the first time in case system is busy
      if (firstPollRecs.count() == 0) firstPollRecs else {
        logger.info("KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to "
          + (MAX_POLLTIME_MS * 2) / 1000 + " sec.")
        consumer.poll(MAX_POLLTIME_MS * 2)
      }
    }
    records.forEach(rec => {
      if (rec.value() == expectedMessage) readExpectedRecord = true
    })
  } catch {
    case e: Throwable => // log error
  } finally {
    consumer.close()
  }
  readExpectedRecord
}
private def initProps(propsType: String): Properties = {
  val prop = new Properties()
  prop.put("bootstrap.servers", kafkaServer + ":" + kafkaPort)
  propsType match {
    case "producer" => {
      prop.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      prop.put("acks", "1")
      prop.put("producer.type", "sync")
      prop.put("retries", "3")
      prop.put("linger.ms", "5")
    }
    case "consumer" => {
      prop.put("group.id", groupId)
      prop.put("enable.auto.commit", "false")
      prop.put("auto.commit.interval.ms", "1000")
      prop.put("session.timeout.ms", "30000")
      prop.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
      prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      // poll just once, should only be one record for the heartbeat
      prop.put("max.poll.records", "1")
    }
  }
  prop
}
Now when I run the code, here's what it outputs in the console:
13:04:21 - Discovered coordinator serverName:9092 (id: 2147483647 rack: null) for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e.
13:04:23 INFO o.a.k.c.c.i.ConsumerCoordinator - Revoking previously assigned partitions [] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:24 INFO o.a.k.c.c.i.AbstractCoordinator - (Re-)joining group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:25 INFO o.a.k.c.c.i.AbstractCoordinator - Successfully joined group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e with generation 1
13:04:26 INFO o.a.k.c.c.i.ConsumerCoordinator - Setting newly assigned partitions [HeartBeat_Topic.Service_5.2018-08-03.13_04_10.377-0] for group 0b8947e1-eb68-4af3-ac7b-be3f7c02e76e
13:04:27 INFO c.p.p.l.util.KafkaHeartBeatUtil - KafkaHeartBeat: First poll had 0 records- trying again - doubling timeout to 60 sec.
And then nothing else; no errors are thrown, so no records are polled. Does anyone have any idea what's preventing the consume from happening? The subscribe seems to be successful, as I'm able to call listTopics and list partitions without any problem.
Your code has a bug. It seems your line:
if (firstPollRecs.count() == 0)
should instead say:
if (firstPollRecs.count() > 0)
Otherwise, you're passing on the empty firstPollRecs and then iterating over that, which obviously yields nothing.
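For readers following along in Java, the intended retry-once pattern with the corrected condition looks roughly like this (a sketch; it assumes a KafkaConsumer<String, String> consumer as above, and maxPollTimeMs is a made-up stand-in for MAX_POLLTIME_MS):
long maxPollTimeMs = 30000L; // assumed stand-in for MAX_POLLTIME_MS
ConsumerRecords<String, String> firstPollRecs = consumer.poll(maxPollTimeMs);
ConsumerRecords<String, String> records;
if (firstPollRecs.count() > 0) {
    records = firstPollRecs; // got data on the first try, use it
} else {
    // nothing came back; double the timeout and poll once more
    records = consumer.poll(maxPollTimeMs * 2);
}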

Operation timed out using CouchbaseClient

I am getting Timeout exceptions even though there is not much load on the Couchbase server.
net.spy.memcached.OperationTimeoutException: Timeout waiting for value
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1003)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:1018)
at com.eos.cache.CacheClient.get(CacheClient.java:280)
at com.eos.cache.GenericCacheAccessObject.get(GenericCacheAccessObject.java:55)
...
...
Caused by: net.spy.memcached.internal.CheckedOperationTimeoutException: Timed out waiting for operation - failing node: /192.168.4.12:11210
at net.spy.memcached.internal.OperationFuture.get(OperationFuture.java:157)
at net.spy.memcached.internal.GetFuture.get(GetFuture.java:62)
at net.spy.memcached.MemcachedClient.get(MemcachedClient.java:997)
...30 more
This is how I am creating the client:
List<URI> uris = new ArrayList<URI>();
String[] serverTokens = getServers().split(" ");
for (int index = 0; index < serverTokens.length; index++) {
    uris.add(new URI(serverTokens[index]));
}
CouchbaseConnectionFactoryBuilder ccfb = new CouchbaseConnectionFactoryBuilder();
ccfb.setProtocol(Protocol.BINARY);
ccfb.setOpTimeout(10000); // wait up to 10 seconds for an operation to succeed
ccfb.setOpQueueMaxBlockTime(5000); // wait up to 5 seconds when trying to enqueue an operation
ccfb.setMaxReconnectDelay(1500);
CouchbaseConnectionFactory cf = ccfb.buildCouchbaseConnection(uris, bucket, "");
CouchbaseClient client = new CouchbaseClient(cf);
I am maintaining a pool of persistent clients in our web server, and we are not even touching the max connection limit, which is set to only 15.
Please help me solve this.
