I want to know if there is a framework (or any way) to wait for a message inside Java code in an asynchronous way. I repeat, "asynchronous way".
For example:
int a = 5;
MyObject mio = new MyObject();
mio.setX(6);
String message = "x|1|5ABC";
MOM.enqueue(message, "myResponseQueue");     // <-- fire and forget!!
String res = MOM.dequeue("myResponseQueue"); // instance of the class sleeps (not in memory) waiting for a message
if (a == mio.getX()) { int b = 6; }          // code wakes up and continues
a = 7;
message = "x|2|6ABC";
MOM.enqueue(message, "myResponseQueue");     // <-- fire and forget again!!
res = MOM.dequeue("myResponseQueue");        // instance of the class sleeps (not in memory) again, waiting for a message
if (res != null) {                           // <-- code wakes up again
    if (a == 5) { int b = 7; }
}
I know that with a Message-Driven Bean I can create a method to wait for the message, but if I have many messages I will break the code up into many methods and lose the values of the local variables, which makes for very ugly code.
JMS is the standard Java (Java EE) way to go. Your requirements are not a problem: you could go with an MDB (and queue) per message type, or a single MDB combined with a strategy pattern to select the appropriate executor per message type.
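To make the single-MDB-plus-strategy option concrete, here is a minimal sketch. The queue name, annotation attributes, message format, and handler names are made up to match the example in the question; treat it as an outline, not a full implementation.

import java.util.Map;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

// One MDB for the whole queue; each message is dispatched to a handler
// chosen from its type prefix (the "x" in "x|1|5ABC").
@MessageDriven(mappedName = "jms/myResponseQueue")
public class ResponseQueueMdb implements MessageListener {

    interface MessageHandler { void handle(String body); }

    // register one strategy per message type
    private final Map<String, MessageHandler> handlers =
            Map.of("x", body -> { /* process an "x" message here */ });

    @Override
    public void onMessage(Message message) {
        try {
            String body = ((TextMessage) message).getText();     // e.g. "x|1|5ABC"
            String type = body.substring(0, body.indexOf('|'));  // "x"
            handlers.get(type).handle(body);                     // strategy dispatch
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}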
Otherwise, for an event-based / reactive framework you can have a look at either Akka or RxJava.
Related
How is Apache NIO HttpAsyncClient able to wait for a remote response without blocking any thread? Does it have a way to set up a callback with the OS (I doubt it)? Otherwise, does it perform some sort of polling?
EDIT - THIS ANSWER IS WRONG. PLEASE IGNORE AS IT IS INCORRECT.
You did not specify a version, so I cannot point you to the source code. But to answer your question, the way that Apache does it is by returning a Future<T>.
Take a look at this link -- https://hc.apache.org/httpcomponents-asyncclient-4.1.x/current/httpasyncclient/apidocs/org/apache/http/nio/client/HttpAsyncClient.html
Notice how the link says nio in the package. That stands for "non-blocking IO". And 9 times out of 10, that is done by doing some work with a new thread.
This operates almost exactly like a CompletableFuture<T> from your first question. Long story short, the library kicks off the process in a new thread (just like CompletableFuture<T>), stores that thread into the Future<T>, then allows you to use that Future<T> to manage that newly created thread containing your non-blocking task. By doing this, you get to decide exactly when and where the code blocks, potentially giving you the chance to make some significant performance optimizations.
To be more explicit, let's give a pseudocode example. Let's say I have a method attached to an endpoint. Whenever the endpoint is hit, the method is executed. The method takes in a single parameter --- userID. I then use that userID to perform 2 operations --- fetch the user's personal info, and fetch the user's suggested content. I need both pieces, and neither request needs to wait for the other to finish before starting. So, what I do is something like the following.
public StoreFrontPage visitStorePage(int userID)
{
    final Future<UserInfo> userInfoFuture = this.fetchUserInfo(userID);
    final Future<PageSuggestion> recommendedContentFuture = this.fetchRecommendedContent(userID);

    final UserInfo userInfo = userInfoFuture.get();
    final PageSuggestion recommendedContent = recommendedContentFuture.get();

    return new StoreFrontPage(userInfo, recommendedContent);
}
When I call this.fetchUserInfo(userID), my code creates a new thread, starts fetching user info on that new thread, but lets my main thread continue and kick off this.fetchRecommendedContent(userID) in the meantime. The two fetches occur in parallel.
However, I need both results in order to create my StoreFrontPage. So, when I decide that I cannot continue any further until I have the results from both fetches, I call Future::get on each of them. What this method does is merge the new thread back into my original one. In short, it says "wait for that thread you created to finish what it was doing, then give me the result as the return value".
And to more explicitly answer your question, no, this tool does not require you to do anything involving callbacks or polling. All it does is give you a Future<T> and lets you decide when you need to block the thread to wait on that Future<T> to finish.
EDIT - THIS ANSWER IS WRONG. PLEASE IGNORE AS IT IS INCORRECT.
I have a simple class named QueueService with some methods that wrap the methods from the AWS SQS SDK for Java. For example:
public ArrayList<Hashtable<String, String>> receiveMessages(String queueURL) {
    List<Message> messages = this.sqsClient.receiveMessage(queueURL).getMessages();
    ArrayList<Hashtable<String, String>> resultList = new ArrayList<Hashtable<String, String>>();

    for (Message message : messages) {
        Hashtable<String, String> resultItem = new Hashtable<String, String>();
        resultItem.put("MessageId", message.getMessageId());
        resultItem.put("ReceiptHandle", message.getReceiptHandle());
        resultItem.put("Body", message.getBody());
        resultList.add(resultItem);
    }

    return resultList;
}
I have another class named App that has a main method and creates an instance of QueueService.
I am looking for a "pattern" to make the main in App listen for new messages in the queue. Right now I have a while(true) loop where I call the receiveMessages method:
while (true) {
    messages = queueService.receiveMessages(queueURL);

    for (Hashtable<String, String> message : messages) {
        String receiptHandle = message.get("ReceiptHandle");
        String messageBody = message.get("Body");
        System.out.println(messageBody);
        queueService.deleteMessage(queueURL, receiptHandle);
    }
}
Is this the correct way? Should I use the async message receive method in SQS SDK?
To my knowledge, there is no way in Amazon SQS to support an active listener model where Amazon SQS would "push" messages to your listener, or would invoke your message listener when there are messages.
So you always have to poll for messages. There are two polling mechanisms: Short Polling and Long Polling. Each has its own pros and cons, but Long Polling is the one you would typically end up using in most cases, although the default is Short Polling. Long Polling is more efficient in terms of network traffic, is more cost efficient (because Amazon charges you by the number of requests made), and is also the preferred mechanism when you want your messages to be processed in a time-sensitive manner (i.e. as soon as possible).
There are more intricacies around Long Polling and Short Polling that are worth knowing, and it's somewhat difficult to paraphrase all of that here, but if you like, you can read a lot more detail through the following blog. It has a few code examples as well that should be helpful.
http://pragmaticnotes.com/2017/11/20/amazon-sqs-long-polling-versus-short-polling/
In terms of a while(true) loop, I would say it depends.
If you are using Long Polling, you can set the wait time to (at most) 20 seconds, so that you do not poll SQS more often than every 20 seconds when there are no messages. If there are messages, you can decide whether to poll frequently (to process messages as soon as they arrive) or whether to always process them at fixed intervals (say every n seconds).
Another point to note is that you can read up to 10 messages in a single receiveMessage request, which also reduces the number of calls you make to SQS, thereby reducing costs. And as the above blog explains in detail, you may request 10 messages but get fewer back, even if there are that many messages in the queue.
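As a rough sketch of a long-polling receive, using the v1 SDK classes your QueueService already uses (queueURL and sqsClient are assumed to be the same fields as in your code, and this would slot into your receiveMessages method):

import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;
import java.util.List;

// Long polling: wait up to 20 seconds for messages and fetch up to 10 per call.
ReceiveMessageRequest request = new ReceiveMessageRequest(queueURL)
        .withWaitTimeSeconds(20)
        .withMaxNumberOfMessages(10);

List<Message> messages = this.sqsClient.receiveMessage(request).getMessages();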
In general, though, I would say you need to build appropriate hooks and exception handling to turn the polling off at runtime if you wish to, in case you are using a while(true) kind of structure.
Another aspect to consider is whether you would like to poll SQS in your main application thread or spawn another thread. So another option could be to create a ScheduledThreadPoolExecutor with a single thread in main to poll SQS periodically (every few seconds), in which case you may not need a while(true) structure.
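For example, a minimal sketch that could go in main (the 5-second interval is arbitrary, and queueService/queueURL are assumed to be the same objects as in your question):

import java.util.Hashtable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Poll on a background thread every 5 seconds instead of a while(true) loop in main.
ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
poller.scheduleWithFixedDelay(() -> {
    for (Hashtable<String, String> message : queueService.receiveMessages(queueURL)) {
        System.out.println(message.get("Body"));
        queueService.deleteMessage(queueURL, message.get("ReceiptHandle"));
    }
}, 0, 5, TimeUnit.SECONDS);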
There are a few things that you're missing:
Use the receiveMessage(ReceiveMessageRequest) overload and set a wait time to enable long polling.
Wrap your AWS calls in try/catch blocks. In particular, pay attention to OverLimitException, which can be thrown from receiveMessages() if you would have too many in-flight messages.
Wrap the entire body of the while loop in its own try/catch block, logging any exceptions that are caught (there shouldn't be -- this is here to ensure that your application doesn't crash because AWS changed their API or you neglected to handle an expected exception).
See doc for more information about long polling and possible exceptions.
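Putting those pieces together, the receiver loop might look roughly like this. This is only a sketch: it uses the v1 SDK classes, assumes the sqsClient and queueURL from the question, and uses a running flag to stand in for whatever shutdown mechanism you add.

import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.OverLimitException;
import com.amazonaws.services.sqs.model.ReceiveMessageRequest;

while (running) {
    try {
        ReceiveMessageRequest request = new ReceiveMessageRequest(queueURL)
                .withWaitTimeSeconds(20);               // long polling
        for (Message message : sqsClient.receiveMessage(request).getMessages()) {
            System.out.println(message.getBody());
            sqsClient.deleteMessage(queueURL, message.getReceiptHandle());
        }
    } catch (OverLimitException e) {
        // too many in-flight messages; back off before polling again
    } catch (Exception e) {
        // log and keep going so an unexpected error doesn't kill the receiver
    }
}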
As for using the async client: do you have any particular reason to use it? If not, then don't: a single receiver thread is much easier to manage.
If you want to use SQS and then Lambda to process the requests, you can follow the steps given in the link, or you can always use Lambda instead of SQS and invoke a Lambda for every request.
As of 2019 SQS can trigger lambdas:
https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html
I found one solution for actively listening to the queue.
For Node, I used the sqs-consumer package and it resolved my issue:
https://www.npmjs.com/package/sqs-consumer
(disclaimer: I'm less than a beginner with akka)
Suppose I have an actor calling a method that never terminates (it's an extreme example; you can think of calling a method that might take a very long time to terminate, or never terminate at all).
For example (Java)
public static class InfiniteLoop {
    public static int neverReturns() {
        int x = 0;
        while (true) {
            x++;
        }
        // unreachable: the loop never exits, so the method never returns
    }
}
Now if, while processing a message, an actor calls
InfiniteLoop.neverReturns()
the actor will never terminate.
Is there a way to kill it while it is still processing a message? If yes, will the loop continue in background?
(what I'm trying to understand is whether there is a way to recover from an "infinite loop"-style fault in an Akka system)
There is no way to implement such a thing in Akka. All methods of stopping an actor rely on sending a message to the actor. If your actor is stuck processing the current message because of the loop, it will never process the message telling it to stop. This is the basis of the Akka actor model: messages in the mailbox are processed in order, and I don't think you can find a way around that. Check this article for your options on stopping/killing an actor: https://petabridge.com/blog/how-to-stop-an-actor-akkadotnet/. You will see how the semantics of stopping change for each method, but they all start by sending a message to the actor.
It would be nice to know why you need such a thing, because maybe your underlying requirement can be implemented in a more Akka-ish way. For instance, the potentially blocking action could be wrapped in a future if it's okay for the actor to move on to the next message.
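A minimal, self-contained sketch of that idea, using a plain CompletableFuture rather than any Akka-specific API (class and method names are made up):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingWorkExample {

    // Dedicated pool so a long (or never-ending) call cannot starve the caller's threads.
    private static final ExecutorService blockingPool = Executors.newFixedThreadPool(2);

    public static void main(String[] args) {
        // Wrap the potentially blocking call; the current thread (think: the actor)
        // is immediately free to handle the next message.
        CompletableFuture<Integer> result =
                CompletableFuture.supplyAsync(BlockingWorkExample::slowCall, blockingPool);

        result.thenAccept(value -> System.out.println("Got result: " + value));
        System.out.println("Caller is free to process the next message");

        result.join();            // only for this demo, so the JVM waits for the output
        blockingPool.shutdown();
    }

    private static int slowCall() {
        // Stand-in for a long-running computation (the real neverReturns() would never finish).
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return 42;
    }
}

Note that this only sidesteps the problem: if the wrapped call truly never returns, the pool thread is still burned forever; nothing can force-kill it cleanly.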
I have a method which is constantly being passed real-time data.
The method then evaluates the data:
void processMessage(String messageBeingPassed) {
//evaluate the message here and do something with it
//depending on the current state of the message
//if message.equals("test")
//call separate thread to save to database etc...
//etc...
}
My question is, is there any advantage to putting the entire method body inside a thread for better performance?
such as:
void processMessage(final String messageBeingPassed) {
    Runnable runnable = new Runnable() {
        public void run() {
            //evaluate the message here and do something
            //depending on the current state of the message
            //if message.equals("test")
            //call separate thread to save to database etc...
            //etc...
        }
    };
    new Thread(runnable).start(); //start main body thread for this current message etc...
}
Thanks for any response.
It will depend on various factors. If that method is a bottleneck for your application (i.e. you get long queues of messages waiting to be processed), then it will likely improve your performance up to a certain point, and then degrade again if you use too many threads. So you should use a thread pool and have like 4 threads responsible for that, or some other amount that works best.
However, if you don't get such queues of messages, then that's hardly going to help you.
Either way, the only way to know for sure is through testing and profiling of what performs best in your application.
The advantage is that you can process multiple messages at once, and the calling method won't need to block while the message is being processed (in other words, message processing becomes asynchronous instead of synchronous). The disadvantage is that you open yourself up to data races / deadlocks / etc. if you're not careful about designing your methods; generally, if your runnable will ONLY operate on the messageBeingPassed object (and not, e.g., on any static fields), you should be fine. In addition, threads carry some overhead, which you can reduce by using an ExecutorService instead of constructing your own Thread objects.
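For example, a minimal sketch of the ExecutorService approach (the pool size of 4 and the class name are arbitrary placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MessageProcessor {

    // Reuse a small pool of threads instead of creating a new Thread per message.
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    void processMessage(final String messageBeingPassed) {
        pool.submit(() -> {
            // evaluate the message here, operating only on messageBeingPassed
            if ("test".equals(messageBeingPassed)) {
                // e.g. hand off to the database-saving code
            }
        });
    }

    void shutdown() {
        pool.shutdown(); // stop accepting new work when the application is done
    }
}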
It depends on the rate of the data and the time taken by processMessage. If the next piece of data arrives before processMessage has finished executing on the previous one, it is a good idea to use a thread inside the processMessage method.
I have a method that will be used to send out email. I want to lock this method so only one thread can access it at a time while the rest queue up. Should I synchronize the method or use Spring @Transactional with PROPAGATION_REQUIRED?
in my service layer
//each time use a new thread to send out email
public void sendThroughSMTP(List<String> emails, String subject, String content){
    //each time this will open and send through port 25. Won't this cause too many threads to be spawned?
    BlastEmailThread blastEmailThread = new BlastEmailThread(emails, subject, content);
    blastEmailThread.start();
}
Why not make the method thread-safe by not using any instance-level state?
However, I don't see how Spring's transaction management fits here. Spring provides a few transaction managers, e.g. DataSourceTransactionManager, JtaTransactionManager, HibernateTransactionManager, but all of that is about database persistence. What would you configure for this email send-out?
I believe you should first show us why you are worried about thread safety in the first place. Most probably you would like to show us some relevant code snippet or something. Then we might be able to suggest something.
[Addendum]
When you are spawning a thread for every call to that method and not using anything from shared state, why do you want to make the method synchronized? Making the method synchronized will not limit the number of threads in any way. There is a chance that, because of the synchronization, a previous thread finishes its work before a new one is started, but the process of spawning threads will simply go slower.
However, you should go with this until you find out that there really are too many threads running and you are going out of memory. And if you really want to tackle that ahead of time, then you should choose some blocking mechanism, such as a Semaphore, as sketched below.
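A rough sketch of the Semaphore idea (the permit count, class name, and where the SMTP code goes are assumptions, not a drop-in implementation):

import java.util.List;
import java.util.concurrent.Semaphore;

public class BlastEmailService {

    // Allow at most one email-sending thread at a time; raise the permit count to allow more.
    private final Semaphore permits = new Semaphore(1);

    public void sendThroughSMTP(List<String> emails, String subject, String content) {
        new Thread(() -> {
            try {
                permits.acquire();                 // blocks until a permit is free
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            try {
                // open the connection on port 25 and send the emails here
            } finally {
                permits.release();
            }
        }).start();
    }
}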
I'm not sure if it answers your question, but instead of creating a new thread for every mail and calling start on it, you could have an Executor or ExecutorService as a member of your class; as an implementation you could use a ThreadPoolExecutor with a pool size of 1. Your sendMail method would then submit Runnables to the executor.
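Roughly like this (a sketch only; it assumes the sending logic from BlastEmailThread can be expressed as a plain Runnable):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class EmailService {

    // One worker thread: emails are sent one at a time, in submission order,
    // while callers return immediately.
    private final ExecutorService mailExecutor = Executors.newSingleThreadExecutor();

    public void sendThroughSMTP(List<String> emails, String subject, String content) {
        mailExecutor.submit(() -> {
            // open the SMTP connection and send the emails here
        });
    }
}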
Another possibility would be to use JMS queues and put the email sending code in a Message Driven Bean (or through Spring JMS). You can then use your app server to control how many concurrent instances of your MDB will be used and throttle the outgoing emails that way.
In Spring 3.0 you can use the @Async annotation for task execution, so your method is executed asynchronously and returns immediately without waiting for the email to be sent.
@Async
public void sendThroughSMTP(List<String> emails, String subject, String content){
    //Send the emails here; you can directly send lots of email
}
Then in the application context you specify <task:annotation-driven/> and don't forget to add the xmlns for the task schema.
If you want to delay the execution for a certain amount of time, you may use the @Scheduled annotation on your method.
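For example (a sketch only; @Scheduled methods take no arguments, so the pending emails would have to be queued somewhere for the scheduled method to pick up, and the interval is arbitrary):

import org.springframework.scheduling.annotation.Scheduled;

@Scheduled(fixedDelay = 60000) // runs 60 seconds after the previous run finishes
public void flushPendingEmails() {
    // pull queued emails from wherever they were stored and send them here
}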
Further tutorial about #Async and #Scheduled can be found here :
http://blog.springsource.com/2010/01/05/task-scheduling-simplifications-in-spring-3-0/
Make your service a singleton and add synchronized to your method.
Spring @Transactional is not the right tool for your case. The best bet is using a synchronized method, and adding some thread pooling if your method is called hundreds of times. But I guess you don't need a thread pool here.
If you use a thread to send the blast email, then what's the point of synchronizing the method? If one process calls your method and sends email, another process will call your method even though the first email-sending process has not yet finished.
If you intend to throttle the email-sending process, you need to consider a queue (collection) and protect the collection with a synchronized block. Create another thread to monitor that queue: if there is an item in the queue, pop it and send the blast email, wait until the sending finishes, then check the queue again. If there is no item in the queue, make the monitor thread sleep for some amount of time and then check the queue again; see the sketch below.
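A rough sketch of that monitor idea, using a BlockingQueue so the locking and the "sleep until an item arrives" part come for free (all class and field names here are made up):

import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class EmailQueueMonitor implements Runnable {

    // BlockingQueue handles the synchronization and the waiting for us.
    private final BlockingQueue<EmailJob> queue = new LinkedBlockingQueue<>();

    public void enqueue(List<String> emails, String subject, String content) {
        queue.offer(new EmailJob(emails, subject, content)); // called by the service layer
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                EmailJob job = queue.take();  // blocks until an item is available
                // open the SMTP connection and send job's emails here,
                // then loop around and wait for the next job
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Hypothetical holder for one blast-email request.
    static class EmailJob {
        final List<String> emails;
        final String subject;
        final String content;

        EmailJob(List<String> emails, String subject, String content) {
            this.emails = emails;
            this.subject = subject;
            this.content = content;
        }
    }
}

You would start the monitor once (new Thread(monitor).start()) and have sendThroughSMTP just call enqueue, so emails go out one at a time regardless of how many callers there are.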