I have a Kafka producer which sends messages to Kafka, and I log the message in the database in both onSuccess and onFailure with the help of a stored procedure. As shown in the code, I am sending asynchronously.
Should I mark my callStoredProcedure method in the repository as synchronized to avoid deadlocks? I believe synchronized is not needed, as the callbacks will be executed sequentially in a single thread.
From the KafkaProducer Javadoc:
https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/producer/KafkaProducer.html
Note that callbacks will generally execute in the I/O thread of the
producer and so should be reasonably fast or they will delay the
sending of messages from other threads. If you want to execute
blocking or computationally expensive callbacks it is recommended to
use your own Executor in the callback body to parallelize processing.
Should I execute the callbacks in another thread?
And can you share a code snippet showing how to execute a callback in another thread, e.g. parallelizing the callbacks across 3 threads?
My code snippet:
@Autowired
private Myrepository myrepository;

public void sendMessageToKafka(List<String> messages) {
    for (String message : messages) {
        ListenableFuture<SendResult<String, String>> future =
                kafkaTemplate.send(topicName, message);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {

            @Override
            public void onSuccess(SendResult<String, String> result) {
                System.out.println("Message Sent " + result.getRecordMetadata().timestamp());
                myrepository.callStoredProcedure(result, "SUCCESS");
            }

            @Override
            public void onFailure(Throwable ex) {
                System.out.println("Sending failed");
                // note: 'result' is not in scope here; log the failed message instead
                myrepository.callStoredProcedure(message, "FAILED");
            }
        });
    }
}
private final ExecutorService exec = Executors.newSingleThreadExecutor();
...
this.exec.submit(() -> myrepository.callStoredProcedure(result,"SUCCESS"));
The tasks will still be run on a single thread (but not the Kafka IO thread).
If it can't keep up with your publishing rate, you might need to use a different executor such as a cached thread pool executor or Spring's ThreadPoolTaskExecutor.
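To parallelize the callbacks across 3 threads, as asked, you could swap in a fixed-size pool. A minimal sketch, reusing the repository call from the question:

private final ExecutorService exec = Executors.newFixedThreadPool(3);

@Override
public void onSuccess(SendResult<String, String> result) {
    // the Kafka I/O thread only enqueues the task and returns,
    // so the stored procedure never delays message sending
    exec.submit(() -> myrepository.callStoredProcedure(result, "SUCCESS"));
}

Note that with more than one thread the stored procedure calls may no longer run in send order; if per-message ordering matters, stay with the single-threaded executor.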
Related
I'm using Hazelcast IMDG for my app. I have used queues for internal communication. I added an item listener to a queue and it works great: whenever the queue gets a message, the listener wakes up and the needed processing is done.
The problem is that it's single-threaded. Sometimes a message takes 30 seconds to process, and the messages in the queue just have to wait until the previous message is done processing. I've been told to use the Java executor service to have a pool of threads and add an item listener to every thread, so that multiple messages can be processed at the same time.
Is there any better way to do it? Maybe configure some kind of MDB, or make the processing asynchronous so that my listener can process the messages faster?
@PostConstruct
public void init() {
    logger.info(LogFormatter.format(BG_GUID, "Starting up GridMapper Queue reader"));
    HazelcastInstance hazelcastInstance = dc.getInstance();
    queue = hazelcastInstance.getQueue(FactoryConstants.QUEUE_GRIDMAPPER);
    queue.addItemListener(new Listener(), true);
}

class Listener implements ItemListener<QueueMessage> {

    @Override
    public void itemAdded(ItemEvent<QueueMessage> item) {
        try {
            QueueMessage message = queue.take();
            processor.process(message.getJobId());
        } catch (Exception ex) {
            logger.error(LogFormatter.format(BG_GUID, ex));
        }
    }

    @Override
    public void itemRemoved(ItemEvent<QueueMessage> item) {
        logger.info("Item removed: " + item.getItem().getJobId());
    }
}
Hazelcast IQueue does not support an asynchronous interface, and asynchronous access would not be faster anyway. An MDB requires JMS, which is pure overhead.
What you really need is a multithreaded executor. You can use the default executor:
private final ExecutorService execService = ForkJoinPool.commonPool();
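With that pool in place, the listener from the question only needs to hand the dequeued message off instead of processing it inline. A minimal sketch, reusing the question's queue, processor and logger fields:

class Listener implements ItemListener<QueueMessage> {

    @Override
    public void itemAdded(ItemEvent<QueueMessage> item) {
        try {
            QueueMessage message = queue.take();
            // hand off to the pool: the listener thread returns immediately,
            // so a 30-second job no longer blocks the next message
            execService.submit(() -> processor.process(message.getJobId()));
        } catch (Exception ex) {
            logger.error(LogFormatter.format(BG_GUID, ex));
        }
    }

    @Override
    public void itemRemoved(ItemEvent<QueueMessage> item) {
        logger.info("Item removed: " + item.getItem().getJobId());
    }
}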
I have n worker threads that retrieve records from a Kinesis stream (this is not important for this problem), which are then pushed onto an executor service where the records are processed and persisted to a backend database. The same executor service instance is used for all worker threads.
Now there is a scenario where any given worker loop stops processing records and blocks until all records that were submitted by it have been processed completely. This essentially means that there should be no pending/running tasks in the executor service for a record from that particular worker thread.
A very trivial example of the implementation is like this:
Worker class
public class Worker {

    private final Listener listener;

    Worker(Listener listener) {
        this.listener = listener;
    }

    // called periodically to fetch records from a Kinesis stream
    public void processRecords(List<Record> records) {
        for (Record record : records) {
            listener.handleRecord(record);
        }
        // if 15 minutes have elapsed, run the code below. This is blocking.
        listener.blockTillAllRecordsAreProcessed();
    }
}
Listener class
public class Listener {

    private final ExecutorService es;

    // the same executor service is shared across all listeners
    Listener(ExecutorService es) {
        this.es = es;
    }

    public void handleRecord(Record record) {
        // submit the record to es and return
        // non-blocking
    }

    public void blockTillAllRecordsAreProcessed() {
        // this should block until all records are processed;
        // no clue how to implement this with a shared es
    }
}
The only approach I could think of is to have a local executor service for each worker and do something like invokeAll for each batch, which would change the implementation slightly but get the job done. But I feel like there should be a better approach to tackle this problem.
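For reference, that per-worker variant might look like the following sketch, where processRecord is a hypothetical stand-in for the actual processing and persisting step:

// a private pool per worker; invokeAll blocks until every task in the
// batch has completed, which provides the required barrier
private final ExecutorService workerPool = Executors.newFixedThreadPool(4);

public void processRecords(List<Record> records) throws InterruptedException {
    List<Callable<Void>> tasks = new ArrayList<>();
    for (Record record : records) {
        tasks.add(() -> {
            processRecord(record); // process and persist synchronously
            return null;
        });
    }
    workerPool.invokeAll(tasks); // returns only when all tasks have finished
}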
You could use the CountDownLatch class to block as follows:
public void processRecords(List<Record> records) {
    CountDownLatch latch = new CountDownLatch(records.size());
    for (Record record : records) {
        listener.handleRecord(record, latch);
    }
    // if 15 minutes have elapsed, run the code below. This is blocking.
    listener.blockTillAllRecordsAreProcessed(latch);
}
public class Listener {

    private final ExecutorService es;
    ...

    public void handleRecord(Record record, CountDownLatch latch) {
        // submit the record to es and return; non-blocking
        es.submit(() -> {
            someSyncTask(record);
            latch.countDown();
        });
    }

    public void blockTillAllRecordsAreProcessed(CountDownLatch latch) {
        System.out.println("waiting for processes to complete....");
        try {
            // the current thread resumes once every task has counted down
            latch.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
Read more here: https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/CountDownLatch.html
I want to create a server to handle socket connections from users, and inside my server I want to have a connection to RabbitMQ, one per connection. But in the examples provided on their webpage I only see "while" loops used to wait for messages, in which case I would need to create a thread per connection just to process the messages from RabbitMQ.
Is there a way to do this in Java, using Spring or any other framework, where I just create a callback for RabbitMQ instead of using while loops?
I was using node.js, where it is pretty straightforward to do this, and I want to know some proposals for Java.
You should take a look at Channel.basicConsume and the DefaultConsumer abstract class: https://www.rabbitmq.com/api-guide.html#consuming
Java concurrency will require a thread for the callback to handle each message, but you can use a thread pool to reuse threads.
static final ExecutorService threadPool = Executors.newCachedThreadPool();
Now you need to create a consumer that will handle each delivery by creating a Runnable instance that will be passed to the thread pool to execute.
channel.basicConsume(queueName, false, new DefaultConsumer(channel) {
    @Override
    public void handleDelivery(String consumerTag, Envelope envelope,
                               AMQP.BasicProperties properties, byte[] body) throws IOException {
        final byte[] msgBody = body; // a 'final' copy of the body that you can pass to the runnable
        final long msgTag = envelope.getDeliveryTag();
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                // handle the message here
                doStuff(msgBody);
                try {
                    channel.basicAck(msgTag, false);
                } catch (IOException e) {
                    // the ack failed (e.g. channel closed); log it here
                }
            }
        };
        threadPool.submit(runnable);
    }
});
This shows how you can handle concurrent deliveries on a single connection and channel, without a single-threaded while loop blocking on each delivery. For your sanity, you will probably want to factor your Runnable implementation into its own class that accepts the channel, msgBody, msgTag and any other data as parameters, so they are accessible when the run() method is called.
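That factored-out class might look like the sketch below (DeliveryHandler is an illustrative name, and doStuff is the same placeholder as above); handleDelivery then reduces to threadPool.submit(new DeliveryHandler(getChannel(), body, envelope.getDeliveryTag())).

class DeliveryHandler implements Runnable {

    private final Channel channel;
    private final byte[] msgBody;
    private final long msgTag;

    DeliveryHandler(Channel channel, byte[] msgBody, long msgTag) {
        this.channel = channel;
        this.msgBody = msgBody;
        this.msgTag = msgTag;
    }

    @Override
    public void run() {
        doStuff(msgBody); // your actual message handling
        try {
            channel.basicAck(msgTag, false); // ack only after processing succeeds
        } catch (IOException e) {
            // the channel may be closed; log and let the broker redeliver
        }
    }
}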
I have to write a server that manages several datagram sockets.
I know about the selector capability of java.nio that allows async management of different sockets.
However, after having parsed a message I need to fire an event to another thread (optionally with some parameters).
Is there a way to let other threads "register" with the one that manages the sockets and make them aware that data is ready for them?
Thanks in advance
You can have an ExecutorService back the async network call. The Future returned at that point can act as the message passer upon completion.
public class SocketWork<T> {

    private final ExecutorService service = Executors.newSingleThreadExecutor();
    private final Future<T> future;

    public SocketWork() {
        future = service.submit(new SocketWorkCallable<T>());
    }

    public T register() throws InterruptedException, ExecutionException {
        // all threads entering register() will block until
        // the SocketWorkCallable completes and returns
        return future.get();
    }
}
An alternative is to use a CountDownLatch.

public class SocketWork<T> {

    private final CountDownLatch latch = new CountDownLatch(1);
    private T message;

    public void executeSocketWork() {
        // execute the socket work and obtain the message
        this.message = returnedMessage; // pseudocode: the parsed result
        latch.countDown();
    }

    public T register() throws InterruptedException {
        latch.await();
        return message;
    }
}
You could do something similar with a ReadWriteLock as well.
As you can imagine, this will only work for a single task per instance of SocketWork.
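A hypothetical usage of the latch-based version, with the socket-managing thread producing and any other thread consuming:

SocketWork<String> work = new SocketWork<>();

// the thread that manages the sockets publishes the parsed message once
new Thread(work::executeSocketWork).start();

// any interested thread blocks in register() until the message is ready
String message = work.register(); // may throw InterruptedException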
I'm creating a reader application. The reader identifies, based on the parameters, which file to read, does some processing and returns the result to the caller.
I am trying to make this multithreaded so that multiple requests can be processed. I thought it was simple, but later realized it has some complexity: even though I create threads using an executor service, I still need to return the results back to the caller, and this means waiting for the thread to finish.
The only way I can think of is to write to some common location or DB and let the caller pick the result up from there. Is there any better approach?
Maybe an ExecutorCompletionService can help you. The submitted tasks are placed on a queue when completed. You can use the methods take or poll, depending on whether or not you want to wait for a task to become available on the completion queue.
ExecutorCompletionService javadoc
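A minimal sketch of that pattern, assuming hypothetical ReadRequest/Result types and a readAndProcess method standing in for the actual file reading and processing:

private final ExecutorService pool = Executors.newFixedThreadPool(4);
private final ExecutorCompletionService<Result> completionService =
        new ExecutorCompletionService<>(pool);

public List<Result> collectResults(List<ReadRequest> requests)
        throws InterruptedException, ExecutionException {
    // submission never blocks
    for (ReadRequest request : requests) {
        completionService.submit(() -> readAndProcess(request));
    }
    // results come back in completion order, not submission order
    List<Result> results = new ArrayList<>();
    for (int i = 0; i < requests.size(); i++) {
        results.add(completionService.take().get()); // take() blocks until one finishes
    }
    return results;
}

The caller that submitted the requests receives its results directly from the method, so nothing has to be staged in a common location or DB.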
Use an ExecutorService with a thread pool of size > 1 and post custom FutureTask derivatives which override the done() method to signal completion of the task to the UI:
public class MyTask extends FutureTask<MyModel> {

    private final MyUI ui;

    public MyTask(MyUI toUpdateWhenDone, Callable<MyModel> taskToRun) {
        super(taskToRun);
        ui = toUpdateWhenDone;
    }

    @Override
    protected void done() {
        try {
            // retrieve the computed result
            final MyModel computed = get();
            // trigger a UI update with the new model
            java.awt.EventQueue.invokeLater(new Runnable() {
                @Override
                public void run() {
                    ui.setModel(computed); // set the new UI model
                }
            });
        } catch (CancellationException canceled) {
            // the task was canceled ... handle this case here
        } catch (InterruptedException interrupted) {
            // should not normally happen: done() runs after completion,
            // so get() does not block here
            Thread.currentThread().interrupt();
        } catch (ExecutionException error) {
            // handle exceptions thrown while computing the MyModel object:
            // this happens if the Callable passed at construction throws
            // from its call() method
        }
    }
}
EDIT: For more complex tasks which need to signal status updates, it may be a good idea to create custom SwingWorker derivatives in this manner and post those on the ExecutorService. (For the time being, you should not attempt to run multiple SwingWorkers concurrently, as the current SwingWorker implementation effectively does not permit it.)
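A sketch of that approach: SwingWorker implements RunnableFuture, so a derivative can be posted directly on the pool. MyModel is the model type from above; readAndParse is a hypothetical stand-in for the long-running work.

class MyWorker extends SwingWorker<MyModel, String> {

    @Override
    protected MyModel doInBackground() throws Exception {
        publish("reading file...");  // intermediate status update
        return readAndParse();       // the long-running work
    }

    @Override
    protected void process(List<String> statusUpdates) {
        // runs on the EDT; display the latest status message here
    }

    @Override
    protected void done() {
        // runs on the EDT after doInBackground completes; call get() here
    }
}

executorService.submit(new MyWorker());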