How do I get all the messages from the SQS queue - Java

I am using SQS to read the data. But I am not sure how to read all the data from the queue.
public List<Customer> getMessage() {
    int numberOfMessages = getMessageCount();
    System.out.println(numberOfMessages);
    int count = 0;
    while (count < 10) {
        System.out.println("Messages remaining in the queue->>>" + numberOfMessages);
        System.out.println("Receiving messages from the queue: ");
        final ReceiveMessageRequest receiveMessageRequest =
                new ReceiveMessageRequest(queueURL)
                        .withMaxNumberOfMessages(10)
                        .withWaitTimeSeconds(20);
        final List<com.amazonaws.services.sqs.model.Message> customers =
                sqs.receiveMessage(receiveMessageRequest).getMessages();
        for (com.amazonaws.services.sqs.model.Message cust : customers) {
            System.out.println("Current message number->>>>>" + (count + 1));
            System.out.println(cust.getBody());
            sqs.deleteMessage(new DeleteMessageRequest(queueURL, cust.getReceiptHandle()));
            count++;
        }
        //numberOfMessages = getMessageCount();
    }
    return null;
}
public int getMessageCount() {
    Set<String> attrs = new HashSet<String>();
    attrs.add("ApproximateNumberOfMessages");
    CreateQueueRequest createQueueRequest = new CreateQueueRequest().withQueueName("sampleQueueSharma");
    GetQueueAttributesRequest a = new GetQueueAttributesRequest()
            .withQueueUrl(sqs.createQueue(createQueueRequest).getQueueUrl())
            .withAttributeNames(attrs);
    Map<String, String> result = sqs.getQueueAttributes(a).getAttributes();
    int num = Integer.parseInt(result.get("ApproximateNumberOfMessages"));
    return num;
}
I am reading the data this way, but this doesn't seem right.
I also tried replacing while (count < 10) with while (numberOfMessages > 0) and uncommenting the numberOfMessages = getMessageCount() line, but then the code runs indefinitely; it seems like getMessageCount() always returns a value greater than 1.
Can someone help me with this?

First a few notes:
Using count as you are, you're only reading about 10 messages (maybe slightly more, due to batching). You probably don't want to use this beyond the simple proof-of-concept stage.
Using while (numberOfMessages > 0), you're going to keep reading as long as SQS's approximation of the message count says it has messages. Note that this is an approximation, so you shouldn't rely on it being exact (it will be eventually consistent).
Your getMessageCount() method looks like it's trying to recreate the queue each time it's called - while that'll work, you don't need to do this. Create it once and just use it.
Based on the code I can see, getMessageCount() would return > 1 if (a) you simply have a ton of messages, (b) something else is constantly adding messages to the queue, or (c) you're not properly deleting them (but you are deleting them).
I would suggest the following modifications to your code:
Log the result of getMessageCount() each time it's called. This will give you indication if you are putting messages into your queue faster than you can process them, or have some message source that will never run out.
Log the number of messages received by your ReceiveMessageRequest. This will let you know you are indeed processing messages.
Instead of basing your control flow on the value of getMessageCount(), keep calling ReceiveMessageRequest (with waitTimeSeconds=20) until it returns 0 messages - that is your guarantee that the queue is empty at that moment (instead of an approximation). A minimal sketch of that loop follows below.
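A minimal sketch of that drain loop, reusing the sqs client and queueURL from the question (the drainQueue name and the List<String> return type are illustrative; mapping bodies back to Customer objects is left out):

// A minimal sketch: keep polling until an empty receive tells us the queue is drained.
// Assumes the same `sqs` client and `queueURL` fields used in the question (AWS SDK for Java v1).
public List<String> drainQueue() {
    final List<String> bodies = new ArrayList<>();
    while (true) {
        final ReceiveMessageRequest request = new ReceiveMessageRequest(queueURL)
                .withMaxNumberOfMessages(10)
                .withWaitTimeSeconds(20); // long polling
        final List<com.amazonaws.services.sqs.model.Message> batch =
                sqs.receiveMessage(request).getMessages();
        System.out.println("Received " + batch.size() + " messages");
        if (batch.isEmpty()) {
            break; // the long poll returned nothing: the queue is empty right now
        }
        for (com.amazonaws.services.sqs.model.Message m : batch) {
            bodies.add(m.getBody());
            sqs.deleteMessage(new DeleteMessageRequest(queueURL, m.getReceiptHandle()));
        }
    }
    return bodies;
}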

I used a ListIterator on the messages:
List<com.amazonaws.services.sqs.model.Message> messages = amazonSQS.receiveMessage(receiveMessageRequest).getMessages();
ListIterator<Message> messageListIterator = messages.listIterator();
List<String> message = new ArrayList<>();
while (messageListIterator.hasNext()) {
    Message msg = messageListIterator.next();
    message.add(msg.getBody());
}

Related

Log line taking tens of milliseconds

I am seeing very high latencies when invoking java.util.logging.Logger.log() in some instances, in the following code:
private static Object[] NETWORK_LOG_TOKEN = new Object[] {Integer.valueOf(1)};

private final TimeProbe probe_ = new TimeProbe();

public void onTextMessagesReceived(ArrayList<String> msgs_list) {
    final long start_ts = probe_.addTs(); // probe A
    // Loop through the messages
    for (String msg : msgs_list) {
        probe_.addTs(); // probe B
        log_.log(Level.INFO, "<-- " + msg, NETWORK_LOG_TOKEN);
        probe_.addTs(); // probe C
        // Do some work on the message ...
        probe_.addTs(); // probe D
    }
    final long end_ts = probe_.addTs(); // probe E
    if (end_ts - start_ts >= 50) {
        // If the run was slow (>= 50 millis) we print all the recorded timestamps
        log_.info(probe_.print("Slow run with " + msgs_list.size() + " msgs: "));
    }
    probe_.clear();
}
The probe_ is simply an instance of this very basic class:
public class TimeProbe {
    final ArrayList<Long> timestamps_ = new ArrayList<>();
    final StringBuilder builder_ = new StringBuilder();

    public long addTs() {
        final long ts = System.currentTimeMillis();
        timestamps_.add(ts);
        return ts;
    }

    public String print(String prefix) {
        builder_.setLength(0);
        builder_.append(prefix);
        for (long ts : timestamps_) {
            builder_.append(ts);
            builder_.append(", ");
        }
        builder_.append("in millis");
        return builder_.toString();
    }

    public void clear() {
        timestamps_.clear();
    }
}
And here is the handler that logs the NETWORK_LOG_TOKEN entries:
final FileHandler network_logger = new FileHandler("/home/users/dummy.logs", true);
network_logger.setFilter(record -> {
    final Object[] params = record.getParameters();
    // This filter returns true if the params suggest that the record is a network log
    // We use Integer.valueOf(1) as our "network token"
    return (params != null && params.length > 0 && params[0] == Integer.valueOf(1));
});
In some cases, I am getting the following outputs (adding labels for probes A, B, C, D, E to make things clearer):
// A B C D B C D E
slow run with 2 msgs: 1616069594883, 1616069594883, 1616069594956, 1616069594957, 1616069594957, 1616069594957, 1616069594957, 1616069594957
Everything takes less than 1 ms, except for the line of code between B and C (during the first iteration of the for loop), which takes a whopping 73 milliseconds. This does not occur every time onTextMessagesReceived() is called, but the fact that it does at all is a big problem. I would welcome any ideas explaining where this lack of predictability comes from.
As a side note, I have checked that my disk IO is super low, and no GC pause occurred around this time. I would think my NETWORK_LOG_TOKEN setup is pretty flimsy at best in terms of design, but I still cannot think of reasons why sometimes, this first log line takes forever. Any pointers or suggestions as to what could be happening would be really appreciated :)!
Things to try:
Enable JVM safepoint logs. VM pauses are not always caused by GC.
If you use JDK < 15, disable biased locking: -XX:-UseBiasedLocking. There are many synchronized places in the JUL framework; in a multithreaded application this can cause biased-lock revocation, which is a common reason for a safepoint pause.
Run async-profiler in wall-clock mode with .jfr output. Then, using JMC, you'll be able to find exactly what a thread was doing around the given moment in time.
Try putting the log file on tmpfs to exclude disk latency, or use MemoryHandler instead of FileHandler to check whether file I/O affects the pauses at all (a small sketch follows below).
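For the last point, a small sketch of putting a java.util.logging.MemoryHandler in front of the FileHandler (the 1000-record buffer and Level.WARNING push level are arbitrary choices, not values from the question):

// Sketch: records are buffered in memory and only flushed to the FileHandler when a
// record at or above the push level arrives (or when push() is called), so the hot
// path never touches the disk.
final FileHandler file_handler = new FileHandler("/home/users/dummy.logs", true);
final MemoryHandler memory_handler = new MemoryHandler(file_handler, 1000, Level.WARNING);
log_.addHandler(memory_handler);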
Everything takes less than 1ms, except for the line of code between B and C (during the first iteration of the for loop), which takes a whopping 73 milliseconds. [snip] ...but I still cannot think of reasons why sometimes, this first log line takes forever.
The first log record that is published to the root logger or its handlers will
trigger lazy loading of the root handlers.
If you don't need to publish to the root logger handlers then call log_.setUseParentHandlers(false) when you add your FileHandler. This will make it so your log records don't travel up to the root logger. It also ensures that you are not publishing to other handlers attached to the parent loggers.
You can also load the root handlers by doing Logger.getLogger("").getHandlers() before you start your loop. You'll pay the price for loading them, but at a different time. Both options are sketched below.
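A sketch of both options, assuming log_ and network_logger are the logger and FileHandler from the question:

// Option 1: stop records from travelling up to the root logger and its handlers.
log_.setUseParentHandlers(false);
log_.addHandler(network_logger);

// Option 2: force the root handlers to be loaded up front, before the hot loop runs.
Logger.getLogger("").getHandlers();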
log_.log(Level.INFO, "<-- " + msg, NETWORK_LOG_TOKEN);
The string concatenation in this line is going to do array copies and create garbage. Try to do:
log_.log(Level.INFO, msg, NETWORK_LOG_TOKEN);
The default log method will walk the current thread stack. You can avoid that walk by using the logp methods in tight loops:
public class Foo {
    private static final String CLASS_NAME = Foo.class.getName();
    private static final Logger log_ = Logger.getLogger(CLASS_NAME);

    public void onTextMessagesReceived(ArrayList<String> msgs_list) {
        String methodName = "onTextMessagesReceived";
        // Loop through the messages
        for (String msg : msgs_list) {
            probe_.addTs(); // probe B
            log_.logp(Level.INFO, CLASS_NAME, methodName, msg, NETWORK_LOG_TOKEN);
            probe_.addTs(); // probe C
            // Do some work on the message ...
            probe_.addTs(); // probe D
        }
    }
}
In your code you are attaching a filter to the FileHandler. It depends on the use case, but loggers also accept filters; sometimes it makes sense to install the filter on the logger instead of the handler if you are targeting a specific message (a small sketch follows below).
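For instance, a sketch of moving the same token check onto the logger (this mirrors the question's filter; comparing against NETWORK_LOG_TOKEN[0] by identity is an assumption about how the token is meant to be matched):

// Sketch: the same "network token" check, installed on the logger instead of the handler.
log_.setFilter(record -> {
    final Object[] params = record.getParameters();
    return params != null && params.length > 0 && params[0] == NETWORK_LOG_TOKEN[0];
});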

Android queue of user objects that overwrites duplicates instead of ignoring them like LinkedHashSet?

I'm trying to create a queue for outgoing bluetooth messages in Android. The UI can generate a bunch of messages to be sent out via bluetooth. Sometimes these are of the same message type, but with different data. If I have 10 of the same message type waiting to go out, I don't want to send all 10, I just want to send the last one. This is mainly to save bandwidth as BLE is fairly limited.
The queue contains message objects that have message_type and message_data Strings. The behaviour I'm looking for, but can't seem to figure out, is: when adding a new object to the queue, it should check the existing items in the queue to see if any are of the same message_type. If so, the new object would overwrite that object in the queue (or delete the existing object and add the new one to the end of the queue; either would work). If an object with a matching message_type isn't found, then the new object would just be added to the end of the queue.
I haven't found anything that does this. The closest I have found is a LinkedHashSet, but that just doesn't add the new element, instead of replacing the existing element with the new one. Maybe this behaviour can be modified?
The Bluetooth Message Obj:
public class BluetoothMessageObj {
    private String message_type;
    private String message_data;

    public String getMessage_type() { return message_type; }
    public String getMessage_data() { return message_data; }
}
EDIT
Here's what I ended up going with:
private LinkedList<BluetoothMessageObj> outgoingMessageQueue = new LinkedList<>();

public void addMessageToOutgoingMessageQueue(BluetoothMessageObj newObj) {
    for (int i = 0; i < outgoingMessageQueue.size(); i++) {
        BluetoothMessageObj existingObj = outgoingMessageQueue.get(i);
        if (existingObj.getMessage_type().equals(newObj.getMessage_type())) {
            outgoingMessageQueue.remove(i); // remove the existing message
            break;
        }
    }
    outgoingMessageQueue.add(newObj); // add the new message
}
For that you can create your own logic, as follows:
private void replaceValue(ArrayList<BluetoothMessageObj> list, BluetoothMessageObj newBluetoothMessage) {
    for (int i = 0; i < list.size(); i++) {
        if (list.get(i).getMessage_type().equalsIgnoreCase(newBluetoothMessage.getMessage_type())) {
            list.set(i, newBluetoothMessage);
            break;
        }
    }
}
Now, every time you add a new element, pass the existing list of elements and the new element you want to add, and this will do it for you.

ArrayList concurrency across socket connections

I have an issue. My belief is that it has something to do with concurrency or synchronization of threads, though I cannot put my finger on just what is happening.
Here is my description of the data-flow for our FriendRequestList object.
From the client side we send a friend request to another user (worker).
So we send the request and add our own username to their User.incFriendReq list. The instance is an ArrayList, just as a nice-to-have reference.
Then we ask the server for our own list of friend requests (FriendRequestList.java).
Now the problem: if I use the code below, the user won't see the friend request until he terminates his connection (logout), which closes the connection. Only when he logs in again does he see the request in his list.
Server side code:
... Worker.java ...

private Object getFriendRequests() {
    User me = Data.getUser(myUserName);
    if (me == null) {
        return new NoSuchUserException(); // Don't worry about it ;)
    }
    return new FriendRequestList(me.getFriendReq());
}

... User.java ...

private List<String> incFriendReq;

public List<String> getFriendReq() {
    return incFriendReq;
}
Client side
... Communication.java ...

public FriendRequestList getRequests() {
    sendObject(new GetRequests());
    return inputHandler.containsRequests();
}

... MessageListener.java ...

public void run() {
    ...
    FriendRequestList requests = communication.getRequests();
    update(requestList, requests);
    // Here we have the problem. requests.size is never different from 0
}
However, if I update Worker.java to do this instead:
private Object getFriendRequests() {
    User me = Data.getUser(myUserName);
    if (me == null) {
        return new NoSuchUserException();
    }
    return new FriendList(me.getFriends().stream().collect(Collectors.toList()));
}
The instant the other user requests my friendship, I see the request on my list.
What gives? This sounds to me like the underlying data structure is not being updated, a race condition, or something similar.
But the fix is how I retrieve the data on the server side, by using a stream.
Please could someone explain this, and also show how it would be done in Java 7, before streams were available to solve my (to me) curious problem.
On a side note, the users are placed inside a LinkedBlockingDeque and retrieved from the Data object, a shared resource for the workers.
It looks to me like returning the incFriendReq list directly in getFriendReq is one of the sources of your problem. When you use Java 8 and stream that list into a new list, you are just making a copy; the stream adds nothing beyond that. If this is the case, your server-side code should also work by using new ArrayList<>(me.getFriends()).
I would verify that all accesses to the list are properly synchronized and that you know where and when that list is being mutated.
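If that is the case, a Java 7 sketch of the same fix is simply a defensive copy via the ArrayList copy constructor, reusing the names from the question's Worker.java:

// Sketch: return a copy of the list instead of the live, shared instance.
private Object getFriendRequests() {
    User me = Data.getUser(myUserName);
    if (me == null) {
        return new NoSuchUserException();
    }
    // The copy decouples the response from later mutations of incFriendReq;
    // proper synchronization around reads/writes of that list is still needed.
    return new FriendRequestList(new ArrayList<String>(me.getFriendReq()));
}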

Null pointer exception in server

This is a client-server program.
For each client, the server has a method which checks whether there are any messages for that client.
Code:
while (bool) {
    for (int j = 0; j < Start.bases.size(); j++) {
        if (Start.bases.get(j).getId() == id) {
            if (!Start.bases.get(j).ifEmpty()) {
                String output = Start.bases.get(j).getMessage();
                os.println(output);
                System.out.println(output + " *FOT* " + addr.getHostName());
            }
        }
    }
}
Each thread has an id.
So everything seems to be OK, but I get a strange NullPointerException at this line:
if(Start.bases.get(j).getId() == id){
id is an int.
It is really strange, because I have run this part in the debugger and checked that "bases" and "id" are not null and that bases has the appropriate fields.
bases is not empty.
By the way, bases is static (because every thread can use it) and bases is declared before this method is used.
This line doesn't cause problems
for(int j = 0;j<Start.bases.size();j++){
Maybe it is because of the getId() method?
public int getId() {
    return id;
}
What is the problem?
Edited.
static ArrayList<Base> bases;
bases = new ArrayList<Base>();
Class Base:
public class Base {
    private ServerThread st;
    private int id;
    private String name;
    private ArrayList<String> messages;

    public Base(String n, ServerThread s_t, int i_d) {
        messages = new ArrayList<String>();
        st = s_t;
        name = n;
        id = i_d;
    }

    public String getName() {
        return name;
    }

    public int getId() {
        return id;
    }

    public ServerThread getThr() {
        return st;
    }

    public String getMessage() {
        String ret = "";
        if (!messages.isEmpty()) {
            ret = messages.get(0);
            messages.remove(messages.get(0));
        }
        return ret;
    }

    public void addMessage(String m) {
        messages.add(m);
    }

    public boolean ifEmpty() {
        return messages.isEmpty();
    }
}
Thanks.
In this line of code:
Start.bases.get(j).getId() == id
you may get such an exception in these cases:
1) bases is null - you said that's not the case.
2) bases.get(j) is null - this can happen if your collection was modified during iteration (as Gray mentioned).
3) Start.bases.get(j).getId() is null. But as you mentioned, getId() returns a primitive int, so that's not the case; in that situation you would get the null pointer during unboxing, on the line "return id;".
So you should check the second case.
Given this:
"I have run in debug this part and checked that "bases" and "id" are not null and bases have apropriate fields"
and this:
bases is static(because every thread can use it)
I think it's pretty likely that you have a race condition. In a race condition, there are two threads simultaneously accessing the same data structure (in this case, Start.bases). Most of the time, one thread's code completes faster, and everything goes the way you expect them to, but occasionally the other thread gets a head-start or goes a little faster than usual and things go "boom".
When you introduce a debugger with a break point, you pretty much guarantee that the code with the break point will execute last, because you've stopped it mid-execution while all your other threads are still going.
I'd suggest that the size of your list is probably changing as you execute. When a user leaves, is their entry removed from the "base" list? Is there some other circumstance where the list can be changed from another thread during execution?
The first thing I'll suggest is that you switch your code to use iterators rather than straight "for" loops. It won't make the problem go away (it might actually make it more visible), but it will make what's happening a lot clearer. You'll get a ConcurrentModificationException at the point where the modification happens, rather than the less helpful NullPointerException only when a certain combination of changes happens:
for (Base currentBase : Start.bases) {
    if (currentBase.getId() == id && !currentBase.ifEmpty()) {
        String output = currentBase.getMessage();
        os.println(output);
        System.out.println(output + " *FOT* " + addr.getHostName());
    }
}
If you do get a concurrent modification exception with the above code, then you're definitely dealing with a race condition. That means that you'll have to synchronize your code.
There are a couple of ways to do this, depending on how your application is structured.
Assuming that the race is only between this bit of code and one other (the part doing the removing-from-the-list), you can probably solve this scenario by wrapping both chunks of code in
synchronized (Start.bases) {
    [your for-loop/item removal code goes here]
}
This will acquire a lock on the list itself, so that those two pieces of code will not attempt to update the same list at the same time in different threads. (Note that it won't stop concurrent modification to the Base objects themselves, but I doubt that's the problem in this case).
All of that said, any time you have a variable which is read/write accessed by multiple threads it really should be synchronized. That's a fairly complicated job. It's better to keep the synchronization inside the object you're managing if you can. That way you can see all the synchronization code in one place, making you less likely to accidentally create deadlocks. (In your code above, you'd need to make the "for" loop a method inside your Start class, along with anything else which uses that list, then make "bases" private so that the rest of the application must use those methods).
Without seeing all the other places in your code where this list is accessed, I can't say exactly what changes you should make, but hopefully that's enough to get you started (a rough sketch of the encapsulation idea follows below). Remember that multi-threading in Java requires a very delicate hand!
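A rough sketch of that encapsulation, assuming the method names and the PrintWriter parameter, which are made up for illustration:

// Rough sketch: keep the list private inside Start and funnel all access through
// synchronized methods, so a reader can never iterate while a writer is mutating.
public class Start {
    private static final ArrayList<Base> bases = new ArrayList<Base>();

    public static synchronized void addBase(Base base) {
        bases.add(base);
    }

    public static synchronized void removeBase(Base base) {
        bases.remove(base);
    }

    public static synchronized void sendPendingMessages(int id, java.io.PrintWriter os, String hostName) {
        for (Base currentBase : bases) {
            if (currentBase.getId() == id && !currentBase.ifEmpty()) {
                String output = currentBase.getMessage();
                os.println(output);
                System.out.println(output + " *FOT* " + hostName);
            }
        }
    }
}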

Find messages from a certain key up to a certain key, while being able to remove stale keys

My problem
Let's say I want to hold my messages in some sort of data structure for a long-polling application:
1. "dude"
2. "where"
3. "is"
4. "my"
5. "car"
Asking for messages from index[4,5] should return:
"my","car".
Next, let's assume that after a while I would like to purge old messages because they aren't useful anymore and I want to save memory. Let's say after time x, messages [1-3] become stale. I assume that it would be most efficient to just do the deletion once every x seconds. My data structure should then contain:
4. "my"
5. "car"
My solution?
I was thinking of using a ConcurrentSkipListSet or a ConcurrentSkipListMap. I was also thinking of deleting the old messages from inside a newSingleThreadScheduledExecutor. I would like to know how you would implement this (efficiently and thread-safely), or whether there is a library for it?
The big concern, as I gather it, is how to let certain elements expire after a period. I had a similar requirement, and I created a message class that implemented the Delayed interface. This class held everything I needed for a message and (through the Delayed interface) told me when it had expired.
I used instances of this object within a concurrent collection; you could use a ConcurrentMap, because it allows you to key those objects with an integer key.
I reaped the collection once every so often, removing items whose delay has passed. We test for expiration by using the getDelay method of the Delayed interface:
message.getDelay(TimeUnit.MILLISECONDS);
I used a normal thread that would sleep for a period then reap the expired items. In my requirements it wasn't important that the items be removed as soon as their delay had expired. It seems that you have a similar flexibility.
If you needed to remove items as soon as their delay expired, then instead of sleeping a set period in your reaping thread, you would sleep for the delay of the message that will expire first.
Here's my delayed message class:
class DelayedMessage implements Delayed {

    long endOfDelay;
    Date requestTime;
    String message;

    public DelayedMessage(String m, int delay) {
        requestTime = new Date();
        endOfDelay = System.currentTimeMillis() + delay;
        this.message = m;
    }

    public long getDelay(TimeUnit unit) {
        long delay = unit.convert(
                endOfDelay - System.currentTimeMillis(),
                TimeUnit.MILLISECONDS);
        return delay;
    }

    public int compareTo(Delayed o) {
        DelayedMessage that = (DelayedMessage) o;
        if (this.endOfDelay < that.endOfDelay) {
            return -1;
        }
        if (this.endOfDelay > that.endOfDelay) {
            return 1;
        }
        return this.requestTime.compareTo(that.requestTime);
    }

    @Override
    public String toString() {
        return message;
    }
}
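And a rough sketch of the reaping thread described above, assuming the messages live in a ConcurrentMap keyed by their index (the key type and the 5-second interval are illustrative assumptions, not part of the original answer):

// Rough sketch of the reaper: sleep for a while, then drop every message whose delay
// has elapsed.
final ConcurrentMap<Integer, DelayedMessage> messages = new ConcurrentHashMap<Integer, DelayedMessage>();

Thread reaper = new Thread(new Runnable() {
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                Thread.sleep(5000); // reap once every 5 seconds
            } catch (InterruptedException e) {
                return;
            }
            for (Map.Entry<Integer, DelayedMessage> entry : messages.entrySet()) {
                if (entry.getValue().getDelay(TimeUnit.MILLISECONDS) <= 0) {
                    messages.remove(entry.getKey(), entry.getValue()); // remove only if unchanged
                }
            }
        }
    }
});
reaper.setDaemon(true);
reaper.start();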
I'm not sure if this is what you want, but it looks like you need a NavigableMap<K,V> to me.
import java.util.*;

public class NaviMap {
    public static void main(String[] args) {
        NavigableMap<Integer,String> nmap = new TreeMap<Integer,String>();
        nmap.put(1, "dude");
        nmap.put(2, "where");
        nmap.put(3, "is");
        nmap.put(4, "my");
        nmap.put(5, "car");

        System.out.println(nmap);
        // prints "{1=dude, 2=where, 3=is, 4=my, 5=car}"

        System.out.println(nmap.subMap(4, true, 5, true).values());
        // prints "[my, car]"      ^inclusive^

        nmap.subMap(1, true, 3, true).clear();
        System.out.println(nmap);
        // prints "{4=my, 5=car}"

        // wrap into synchronized SortedMap
        SortedMap<Integer,String> ssmap = Collections.synchronizedSortedMap(nmap);

        System.out.println(ssmap.subMap(4, 5));
        // prints "{4=my}"         ^exclusive upper bound!

        System.out.println(ssmap.subMap(4, 5 + 1));
        // prints "{4=my, 5=car}"  ^ugly but "works"
    }
}
Now, unfortunately, before Java 8 there's no easy way to get a synchronized view of a NavigableMap<K,V> (see the note below); SortedMap does have subMap, but only an overload whose upper bound is strictly exclusive.
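If you can use Java 8 or newer, Collections.synchronizedNavigableMap gives you a synchronized wrapper that keeps the inclusive subMap overload; a small sketch, assuming Java 8+:

// Java 8+ only: a synchronized wrapper that still exposes the NavigableMap API,
// so the inclusive-bounds subMap overload remains available.
NavigableMap<Integer,String> snmap = Collections.synchronizedNavigableMap(nmap);
System.out.println(snmap.subMap(4, true, 5, true)); // prints "{4=my, 5=car}"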
API links
SortedMap.subMap
NavigableMap.subMap
Collections.synchronizedSortedMap
