I am writing a transport adapter for messages (I receive a message from Java native methods and send it to a RabbitMQ queue), and I must not lose any messages (for example, when the connection to the RabbitMQ server is unavailable). I need persistent storage for my messages in case processing or sending fails. Right now I use the MapDB library and its queue implementation:
public void send(byte[] message) {
    queue.add(message);   // persist the message before trying to deliver it
    db.commit();
    // process and send the message to RabbitMQ here
    queue.poll();         // message sent successfully, remove it from storage
    db.commit();
}
But this implementation is very slow. Please advise a better implementation for this case.
The messages must stay in the right order.
I'm new to RabbitMQ and want to implement asynchronous SAGA messaging with RabbitMQ, so I used the RPC example from RabbitMQ to do the task. I have one orchestrator (RPCClient) and multiple microservices (RPCServer). The orchestrator uses unique queues to command the microservices, and each microservice uses a common queue (Reply_Queue) to reply to the orchestrator. To keep a log, I want the orchestrator to be notified when any microservice is down for a configurable time.
I read about consumer cancellation, but it only works when I delete the queue. How can I get notifications in Java while keeping the queue's messages? And is this the correct way to implement asynchronous SAGA messaging?
Implementing a reliable RPC is hard, and I can't give a detailed guide on how to do it. But if we ignore some special failure situations, I can give a simple workaround:
First, we assume that the RPCClient never fails and that the RPCServer may fail at any time.
The RPCClient needs to know which requests have timed out, so it should send each request message with a TTL. After the RPCServer receives a request message and sends the response, it should ACK the request message.
If the RPCServer has failed before consuming the request message, or has failed before sending the response message, the request message will be republished to a Dead Letter Exchange. The RPCClient can consume from a queue bound to that exchange, and thereby know which requests have timed out.
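To make that concrete, here is a rough sketch using the RabbitMQ Java client of how the request queue could be declared with a dead-letter exchange and how a request could be published with a TTL. The exchange and queue names ("rpc.dlx", "rpc.timeout", "rpc.request") and the 30-second TTL are placeholders, not taken from the original setup:

import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import java.util.HashMap;
import java.util.Map;

public class RpcTimeoutSetup {

    public static void declare(Channel channel) throws Exception {
        // Dead-letter exchange plus the queue the RPCClient consumes to detect timeouts.
        channel.exchangeDeclare("rpc.dlx", "fanout", true);
        channel.queueDeclare("rpc.timeout", true, false, false, null);
        channel.queueBind("rpc.timeout", "rpc.dlx", "");

        // Request queue: messages that expire (or are rejected) go to the dead-letter exchange.
        Map<String, Object> args = new HashMap<>();
        args.put("x-dead-letter-exchange", "rpc.dlx");
        channel.queueDeclare("rpc.request", true, false, false, args);
    }

    public static void sendRequest(Channel channel, byte[] body) throws Exception {
        // Per-message TTL (placeholder value): if no RPCServer has consumed and acked
        // the request within 30 seconds, it is dead-lettered and the RPCClient sees it
        // on the "rpc.timeout" queue.
        AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                .expiration("30000")
                .build();
        channel.basicPublish("", "rpc.request", props, body);
    }
}

The RPCServer side would consume from "rpc.request" with autoAck=false and call channel.basicAck(deliveryTag, false) only after publishing its response, so a crash before that point leaves the request to expire into the dead-letter exchange.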
I have a problem, and I don't know exactly what to search for.
I have a Spring Boot app which broadcasts messages via WebSocket with a STOMP JavaScript client. The question is whether I can put a lock on the message while it is being sent, because I want nobody to be able to send another message at the same time. The system I want to build works like a traffic light.
It would help if you could give me an example or tell me what to look for.
You should use the synchronized keyword and wait for the client's response. The synchronized keyword ensures that only one thread can execute the method at a time. And you need the client's response because you could send two messages sequentially, say two seconds apart, but your client might receive them at the same time. The response can be some dummy ok-message.
public class Traffic {
    synchronized void send() {
        // write the message to the websocket
        // read the client's response from the websocket before returning
    }
}
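In a Spring Boot + STOMP setup this could look roughly like the sketch below, using SimpMessagingTemplate for the broadcast. The destination "/topic/traffic", the 5-second timeout, and the ack handling are assumptions for illustration, not part of the original app:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class TrafficBroadcaster {

    private final SimpMessagingTemplate template;
    // A @MessageMapping controller (not shown) drops the client's dummy ok-response here.
    private final BlockingQueue<String> acks = new LinkedBlockingQueue<>();

    public TrafficBroadcaster(SimpMessagingTemplate template) {
        this.template = template;
    }

    // synchronized: only one thread can broadcast at a time.
    public synchronized boolean send(String message) throws InterruptedException {
        template.convertAndSend("/topic/traffic", message); // placeholder destination
        // Block until the client's ok-response arrives, or give up after 5 seconds.
        return acks.poll(5, TimeUnit.SECONDS) != null;
    }

    public void onClientAck(String ack) {
        acks.offer(ack);
    }
}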
I have an application using MQTT implemented with paho-mqtt-1.0.2, and I am using ActiveMQ as the broker. I have a class implementing MqttCallback, and what I am wondering is why the client hangs here:
@Override
public void messageArrived(String topic, MqttMessage message) throws Exception {
    // do work
    mqtt.publish(TOPIC, PAYLOAD, 2, false); // <- the client hangs here
}
I want to send a "response" message to the broker for the next step of the work to be done. Related to this, I read in the docs for that callback function:
It is possible to send a new message within an implementation of this callback (for example, a response to this message), but the implementation must not disconnect the client, as it will be impossible to send an acknowledgment for the message being processed, and a deadlock will occur.
Has anyone out there tried doing the above and gotten it to work?
I also tried using the MqttAsyncClient, and that ended up with "Error too many publishes in progress", leading to undelivered messages.
I know how to get around this issue; I'm not looking for a workaround. I'm looking to receive and publish on the thread where messageArrived() gets executed.
Happy Hunting!
I'm running Kafka 0.8 and built a producer using the provided Java API.
The API functions for sending a message (or messages) return void.
Is there a way to get the status of a sent message, i.e. whether it was sent or failed?
This is extremely important to us since we read the messages from a file and want to delete the file once all the messages have been sent. But if there were errors and some messages weren't sent, deleting the file would cause the loss of very important data.
You can configure your producer to wait until it gets n acks from the Kafka cluster (request.required.acks), so that you have some guarantee that the data has been committed properly before deleting your source file.
If you really need to be sure that a message was sent successfully, you might want to consider making the producer synchronous (producer.type=sync). This way, you can catch any exception thrown by the blocking invocation and act accordingly. The exception thrown by send() is kafka.common.FailedToSendMessageException.
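As a rough sketch against the old 0.8 producer API (the broker address, topic name, and message are placeholders):

import java.util.Properties;
import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncFileProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "-1");   // wait for all in-sync replicas
        props.put("producer.type", "sync");         // send() blocks and can throw

        Producer<String, String> producer = new Producer<String, String>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<String, String>("my-topic", "line from file"));
            // No exception for any line: it should be safe to delete the source file.
        } catch (FailedToSendMessageException e) {
            // The message was not committed; keep the file and retry later.
        } finally {
            producer.close();
        }
    }
}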
Kafka's Java API is not ideal, hope this helps you.
I have made a client-server example using Netty.
I have defined handlers for the server and the client.
Basically I connect with the client to the server and send some messages.
Every received message gets sent back (with the content of the message converted to upper case).
All the work on the received messages, on server and client-side, is done by the defined handlers.
But I would like to use, or rather receive/accept, some of the messages directly in the client,
not (just) in the handler. So my question is: is it possible to have some listener that receives messages directly in the client program and not in its handlers? The thing is, I would like to access the received messages within the (executable) program (basically the class with a main method) that created the client object, using something like a timer (or a loop) that periodically checks for new messages.
I would appreciate it if someone could help me with this issue, or at least tell me whether it's even possible with Netty.
You're looking to translate netty's event-based model into a polling model. A simple way to do that is to create a message queue:
//import java.util.concurrent.BlockingQueue;
//import java.util.concurrent.LinkedBlockingQueue;
BlockingQueue<String> queue = new LinkedBlockingQueue<>(); // element type depends on what your decoder produces
You need to make the queue available to your handler as a constructor argument, and when a message arrives you put it into the queue:
// Your netty handler:
queue.put(message);
On the client end, you can poll the queue for messages:
// The polling loop in your program:
message = queue.poll(5, TimeUnit.SECONDS);
The BlockingQueue offers you the choice between waiting for a message to arrive (take()), waiting a certain amount of time for a message to arrive (poll(long, TimeUnit)), or merely checking whether any message is available right now (poll()).
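Putting the pieces together, a minimal sketch of such a handler might look like this (assuming Netty 4 with a StringDecoder in the pipeline; the class name is made up):

import java.util.concurrent.BlockingQueue;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;

public class QueueingClientHandler extends SimpleChannelInboundHandler<String> {

    private final BlockingQueue<String> queue;

    public QueueingClientHandler(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
        queue.put(msg); // hand the message over to whoever polls the queue in main()
    }
}

Your main method then creates the queue, passes it to the handler when it builds the client's pipeline, and loops on queue.poll(...) as shown above.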
From a design perspective, this approach kills the non-blocking IO advantage netty is supposed to give you. You could have used a normal Socket connection for the same net result.