Why are threads using the same variable value? (RxJava MQTT, Java)

I'm using rxmqtt (which uses RxJava and Paho) to communicate with an MQTT broker. I'm using javax.ws.rs (JAX-RS) to accept REST requests, publish some content to the broker, and wait for a response. The code below works fine if I make one request at a time, but if I have more than one concurrent request it only returns a response for the last one, and the others fall into the timeout exception.
mqttConn.getMqttMessages() returns a Flowable which is already subscribed to all the topics I need:
public Flowable<MqttMessage> getMqttMessages() {
    return this.obsClient.subscribe("pahoRx/fa/#", 1);
}
MqttConnection is a singleton because I only want a single connection to the broker, and all publishes go through this connection.
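For reference, the singleton is along these lines (a simplified sketch: getMqttConnection() and getBrokerClient() appear in the full WS code below; everything else here, including the rxmqtt package names in the imports, is assumed):

import io.reactivex.Flowable;
// Package names assumed from the rxmqtt library.
import net.eusashead.iot.mqtt.MqttMessage;
import net.eusashead.iot.mqtt.ObservableMqttClient;

// Process-wide singleton so every request shares one broker connection.
public final class MqttConnection {

    private static MqttConnection instance;

    private final ObservableMqttClient obsClient;

    private MqttConnection(ObservableMqttClient obsClient) {
        this.obsClient = obsClient;
    }

    // Lazily creates the single shared connection.
    public static synchronized MqttConnection getMqttConnection() {
        if (instance == null) {
            instance = new MqttConnection(createClient());
        }
        return instance;
    }

    public ObservableMqttClient getBrokerClient() {
        return obsClient;
    }

    // One shared Flowable carrying every message on the wildcard topic.
    public Flowable<MqttMessage> getMqttMessages() {
        return this.obsClient.subscribe("pahoRx/fa/#", 1);
    }

    private static ObservableMqttClient createClient() {
        // Assumed: build a Paho client, wrap it, and connect; not shown in the question.
        throw new UnsupportedOperationException("connection setup elided in this sketch");
    }
}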
I've noticed that my query parameter id is different in each thread executing the web-service request (expected behavior), but when execution reaches the subscription part of the code, it only ever sees the last id value and so never passes my validation in the takeUntil predicate:
mqttConn.getMqttMessages().timeout(20, TimeUnit.SECONDS).takeUntil(msgRcv -> {
    System.out.println("received: " + new String(msgRcv.getPayload()) + " ID: " + id);
    return id.equals(new String(msgRcv.getPayload()));
}).blockingSubscribe(msgRcv -> {
    final byte[] body = msgRcv.getPayload();
    System.out.println(new String(body)); // printing... but not sending the response
    response.set("Message Received: " + new String(msgRcv.getPayload()));
}, e -> {
    if (e instanceof TimeoutException) {
        response.set("Timeout Occurred");
    } else {
        response.set("Some kind of error occurred: " + e.getLocalizedMessage());
    }
});
The thing is, why is it only considering the last id received, when each request should have its own independent thread? I've tried making mqttConn.getMqttConnection() a ThreadLocal object; that doesn't fix it.
Full WS code:
#Path("/test")
#GET
public String test(#QueryParam("id") String id) throws InterruptedException, MqttException {
String funcExec = "pahoRx/fe/";
String content = "unlock with single connection to broker";
int qos = 1;
AtomicReference<String> response = new AtomicReference<String>();
response.set("Initial Value");
MqttConnection mqttConn = MqttConnection.getMqttConnection();
ObservableMqttClient obsClient = mqttConn.getBrokerClient();
MqttMessage msg = MqttMessage.create(78, content.getBytes(), qos, false);
String topicPub = funcExec + id;
obsClient.publish(topicPub, msg).subscribe(t -> {
System.out.println("Message Published");
}, e -> {
System.out.println("Failed to publish message: " + e.getLocalizedMessage());
});
mqttConn.getMqttMessages().timeout(20, TimeUnit.SECONDS).takeUntil(msgRcv -> {
System.out.println("received: " + new String(msgRcv.getPayload()) + " ID: " + id);
return id.equals(new String(msgRcv.getPayload()));
}).blockingSubscribe(msgRcv -> {
final byte[] body = msgRcv.getPayload();
System.out.println(new String(body)); //printing... but not sending the reponse
response.set("Message Receiced: " + new String(msgRcv.getPayload()));
return;
}, e -> {
if (e instanceof TimeoutException) {
response.set("Timeout Occured");
} else {
response.set("Some kind of error occured " + e.getLocalizedMessage());
}
});
return response.get();
}
I hope the explanation is clear enough!
Thanks in advance.

Related

Asynchronous Recursive AWS Lambda call does not work

I'm trying to call an AWS Lambda function from within itself (i.e. recursively), but unless I block the thread and wait for the response using future.get(), the second invocation does not happen.
I have shown my code for the two approaches below. I implemented the Lambda functions in Java (SDK version 2).
Am I missing something? Can someone explain the reason for the difference we see here? Thanks.
Non-blocking approach - HandlerNonBlocking Lambda function (set a callback, don't wait)
public class HandlerNonBlocking {
    public void process(Map<String, Object> event, Context context) {
        context.getLogger().log("HandlerNonBlocking.process() invoked\n");
        Object isSecond = event.get("isSecond");
        if (isSecond == null) {
            context.getLogger().log("First\n");
        } else {
            context.getLogger().log("Second\n");
            return;
        }
        String json = "{\"isSecond\": \"Y\"}";
        SdkBytes payload = SdkBytes.fromUtf8String(json);
        InvokeRequest request = InvokeRequest.builder()
                .functionName("recurring-function")
                .invocationType(InvocationType.EVENT)
                .payload(payload)
                .build();
        try (LambdaAsyncClient lambdaClient = LambdaAsyncClient.create()) {
            context.getLogger().log("Calling again\n");
            CompletableFuture<InvokeResponse> future = lambdaClient.invoke(request);
            // Set a callback
            future.thenAccept(response -> context.getLogger().log("Response status code: " + response.statusCode() + "\n"));
        } catch (Exception ex) {
            context.getLogger().log("Error when invoking Lambda function. " +
                    ex.getClass().getSimpleName() + ": " + ex.getMessage() + "\n");
        }
        try {
            Thread.sleep(5_000);
        } catch (InterruptedException ex) {
            context.getLogger().log("Sleep interrupted. " + ex.getMessage() + "\n");
        }
    }
}
Log - First invocation
START RequestId: c44b0384-c159-4954-a241-8d14044f85db Version: $LATEST
HandlerNonBlocking.process() invoked
First
Calling again
END RequestId: c44b0384-c159-4954-a241-8d14044f85db
There is no second invocation in CloudWatch logs.
Blocking approach - HandlerBlocking Lambda function (block and wait for the response)
public class HandlerBlocking {
    public void process(Map<String, Object> event, Context context) {
        context.getLogger().log("HandlerBlocking.process() invoked\n");
        Object isSecond = event.get("isSecond");
        if (isSecond == null) {
            context.getLogger().log("First\n");
        } else {
            context.getLogger().log("Second\n");
            return;
        }
        String json = "{\"isSecond\": \"Y\"}";
        SdkBytes payload = SdkBytes.fromUtf8String(json);
        InvokeRequest request = InvokeRequest.builder()
                .functionName("recurring-function")
                .invocationType(InvocationType.EVENT)
                .payload(payload)
                .build();
        try (LambdaAsyncClient lambdaClient = LambdaAsyncClient.create()) {
            context.getLogger().log("Calling again\n");
            CompletableFuture<InvokeResponse> future = lambdaClient.invoke(request);
            // Wait for the response
            InvokeResponse response = future.get();
            context.getLogger().log("Response status code: " + response.statusCode() + "\n");
        } catch (Exception ex) {
            context.getLogger().log("Error when invoking Lambda function. " +
                    ex.getClass().getSimpleName() + ": " + ex.getMessage() + "\n");
        }
        try {
            Thread.sleep(5_000);
        } catch (InterruptedException ex) {
            context.getLogger().log("Sleep interrupted. " + ex.getMessage() + "\n");
        }
    }
}
Log - First invocation
START RequestId: 1a70cb3e-6752-4427-a703-69ff6f6c404b Version: $LATEST
HandlerBlocking.process() invoked
First
Calling again
Response status code: 202
END RequestId: 1a70cb3e-6752-4427-a703-69ff6f6c404b
Log - Second invocation
START RequestId: db71eb54-547a-4ee9-9e9d-0d5b689588f8 Version: $LATEST
HandlerBlocking.process() invoked
Second
END RequestId: db71eb54-547a-4ee9-9e9d-0d5b689588f8
I have tried the two approaches shown above.
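One plausible explanation (an observation, not a confirmed answer): in HandlerNonBlocking the try-with-resources closes lambdaClient at the end of the try block, which can happen before the async HTTP request has actually completed, and closing the client can abort the in-flight call. A sketch of a middle ground that keeps the callback but completes the future before the client is closed (a drop-in replacement for the try block in HandlerNonBlocking above):

try (LambdaAsyncClient lambdaClient = LambdaAsyncClient.create()) {
    context.getLogger().log("Calling again\n");
    lambdaClient.invoke(request)
            .thenAccept(response -> context.getLogger().log(
                    "Response status code: " + response.statusCode() + "\n"))
            .join(); // complete before lambdaClient.close() runs at the end of this block
} catch (Exception ex) {
    context.getLogger().log("Error when invoking Lambda function. " +
            ex.getClass().getSimpleName() + ": " + ex.getMessage() + "\n");
}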

JavaMail: Setting a timeout for folder.search

I have been using JavaMail (version 1.6.4) for a while now, mostly for event listening, mail parsing, and copying/deleting mails (from Exchange using IMAPS). Lately I was asked to use sub-folders for a business use case. When moving a mail from the Inbox folder to a sub-folder the UID changes, so I'm using folder.search() to find the new UID. This works most of the time, but sometimes the search runs indefinitely (or takes very long), which may lag the entire application.
Is there a way to set a timeout (or otherwise throw an exception if it runs too long) for folder.search()?
I understand that IMAP search is done on the server side, but I just wanted to validate that. This is an example of how I send the search to the server (we can assume the subject is unique for this discussion):
private static String findMailIdBySubject(String mailbox, String srcFolder, String subject) {
    Folder srcFolderObj = null;
    boolean wasConnectionEstablished = connectToMail(mailbox);
    if (!wasConnectionEstablished) {
        return null;
    }
    // get mailboxStore object
    MailboxStore mailboxStore = MailBoxStoreList.getMailStoreByMailBox(mailbox);
    try {
        // Open the source folder to get message metadata
        srcFolderObj = mailboxStore.getStore().getFolder(srcFolder);
        srcFolderObj.open(Folder.READ_WRITE);
        // create the search term
        SearchTerm term = new SearchTerm() {
            private static final long serialVersionUID = 7L;

            @Override
            public boolean match(Message message) {
                try {
                    String mailSubject = message.getSubject();
                    if (mailSubject == null) {
                        mailSubject = "";
                    }
                    if (mailSubject.equals(subject)) {
                        return true;
                    }
                } catch (MessagingException ex) {
                    log.error("Failed to search for mail with mailbox: " + mailbox + " in folder: " + srcFolder
                            + " subject: " + subject + " Error: " + ExceptionUtils.getStackTrace(ex));
                }
                return false;
            }
        };
        // search for the relevant message
        Message[] messages = srcFolderObj.search(term);
        UIDFolder uf = (UIDFolder) srcFolderObj;
        return String.valueOf(uf.getUID(messages[0]));
    } catch (Exception e) {
        log.error("Subject: Failed to find id of mail in mailbox " + mailbox + " in folder " + srcFolder
                + " , Error: " + ExceptionUtils.getStackTrace(e));
        return null;
    } finally {
        try {
            if (srcFolderObj != null && srcFolderObj.isOpen()) {
                srcFolderObj.close();
            }
        } catch (Exception e) {
            // ignore failures while closing the folder
        }
    }
}
I also tried replacing the SearchTerm override with the following, but performance was the same:
SearchTerm searchTerm = new SubjectTerm(subject);
Thanks in advance!
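One possible workaround, since JavaMail itself doesn't expose a search timeout: run the search on a worker thread and bound the wait with Future.get. This doesn't cancel the IMAP command on the server; it only stops the caller from hanging. Note also that, as far as I know, a custom SearchTerm subclass like the one above can't be translated into a server-side IMAP SEARCH, so JavaMail may fall back to matching messages client-side, which could explain the long runtimes. An untested sketch:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.search.SearchTerm;

public class SearchWithTimeout {

    // Bounds the wait for folder.search() without changing the IMAP call itself.
    static Message[] search(Folder folder, SearchTerm term, long seconds) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            Future<Message[]> pending = executor.submit(() -> folder.search(term));
            // Throws TimeoutException if the search runs longer than 'seconds'.
            return pending.get(seconds, TimeUnit.SECONDS);
        } finally {
            // Interrupts the worker; the server may still finish the search on its side.
            executor.shutdownNow();
        }
    }
}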

No response in SQSMessageSuccess while detecting faces inside a video uploaded on Amazon S3

I have been trying to detect faces in a video stored on Amazon S3; the faces have to be matched against a collection containing the faces to search for in the video.
I have used Amazon VideoDetect.
My piece of code goes like this:
CreateCollection createCollection = new CreateCollection(collection);
createCollection.makeCollection();
AddFacesToCollection addFacesToCollection = new AddFacesToCollection(collection, bucketName, image);
addFacesToCollection.addFaces();
VideoDetect videoDetect = new VideoDetect(video, bucketName, collection);
videoDetect.CreateTopicandQueue();
try {
    videoDetect.StartFaceSearchCollection(bucketName, video, collection);
    if (videoDetect.GetSQSMessageSuccess())
        videoDetect.GetFaceSearchCollectionResults();
} catch (Exception e) {
    e.printStackTrace();
    return false;
}
videoDetect.DeleteTopicandQueue();
return true;
Things seem to work fine up to StartFaceSearchCollection: a jobId is created, and a queue as well. But when it goes on to GetSQSMessageSuccess, it never returns any message.
The code which is trying to fetch the message is:
ReceiveMessageRequest.Builder receiveMessageRequest = ReceiveMessageRequest.builder().queueUrl(sqsQueueUrl);
messages = sqs.receiveMessage(receiveMessageRequest.build()).messages();
It has the correct sqsQueueUrl, which exists, but I am not getting anything in the message.
On timeout it gives me this exception:
software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: sqs.region.amazonaws.com
at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:97)
Caused by: java.net.UnknownHostException: sqs.region.amazonaws.com
So is there any alternative to this: instead of an SQS message, can we track/poll the jobId any other way? Or am I missing something?
For reference, here is a simple working snippet that receives SQS messages given a valid sqsQueueUrl (note that it uses the v1 SDK, while the code above uses v2):
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(sqsQueueUrl);
final List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (final Message message : messages) {
    System.out.println("Message");
    System.out.println("  MessageId:     " + message.getMessageId());
    System.out.println("  ReceiptHandle: " + message.getReceiptHandle());
    System.out.println("  MD5OfBody:     " + message.getMD5OfBody());
    System.out.println("  Body:          " + message.getBody());
    for (final Entry<String, String> entry : message.getAttributes().entrySet()) {
        System.out.println("Attribute");
        System.out.println("  Name:  " + entry.getKey());
        System.out.println("  Value: " + entry.getValue());
    }
}
System.out.println();
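As for the UnknownHostException: the endpoint sqs.region.amazonaws.com contains the literal word "region", which suggests the v2 client never resolved a real AWS region. A sketch of building the client with the region set explicitly (the region value here is a placeholder; use the one your queue actually lives in):

import java.util.List;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.Message;
import software.amazon.awssdk.services.sqs.model.ReceiveMessageRequest;

public class SqsPoller {

    // Placeholder region: use the region of the queue and the Rekognition job.
    private final SqsClient sqs = SqsClient.builder()
            .region(Region.US_EAST_1)
            .build();

    List<Message> pollOnce(String sqsQueueUrl) {
        ReceiveMessageRequest request = ReceiveMessageRequest.builder()
                .queueUrl(sqsQueueUrl)
                .waitTimeSeconds(20) // long polling, so the completion message has time to arrive
                .build();
        return sqs.receiveMessage(request).messages();
    }
}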

Could not establish socket with any provided host Openfire

I've been searching for a solution to this problem for two days now.
I have an Android chat application into which I want to implement file sending.
Here's the sending code:
public void sendFile(Uri uri) {
    FileTransferManager fileTransferManager = FileTransferManager.getInstanceFor(app.getConnection());
    OutgoingFileTransfer fileTransfer = fileTransferManager.createOutgoingFileTransfer(userId + "/Spark");
    try {
        fileTransfer.sendFile(new File(uri.getPath()), "this is the description");
        System.out.println("status is:" + fileTransfer.getStatus());
        System.out.println("sent .. just");
        while (!fileTransfer.isDone()) {
            if (fileTransfer.getStatus() == FileTransfer.Status.refused) {
                Toast.makeText(getActivity(), "File refused.", Toast.LENGTH_SHORT).show();
                return;
            }
            if (fileTransfer.getStatus() == FileTransfer.Status.error) {
                Toast.makeText(getActivity(), "Error occurred.", Toast.LENGTH_SHORT).show();
                return;
            }
        }
        System.out.println(fileTransfer.getFileName() + " has been successfully transferred.");
        System.out.println("The transfer is " + fileTransfer.isDone());
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I know this code works fine, as I sent a file from Android to Spark and it was received successfully. The problem is receiving that file on Android. Here's the code:
ProviderManager.addIQProvider("si", "http://jabber.org/protocol/si",
        new StreamInitiationProvider());
ProviderManager.addIQProvider("query", "http://jabber.org/protocol/bytestreams",
        new BytestreamsProvider());
ProviderManager.addIQProvider("open", "http://jabber.org/protocol/ibb",
        new OpenIQProvider());
ProviderManager.addIQProvider("close", "http://jabber.org/protocol/ibb",
        new CloseIQProvider());
ServiceDiscoveryManager sdm = ServiceDiscoveryManager.getInstanceFor(connection);
sdm.addFeature("http://jabber.org/protocol/disco#info");
sdm.addFeature("jabber:iq:privacy");
final FileTransferManager manager = FileTransferManager.getInstanceFor(connection);
manager.addFileTransferListener(new FileTransferListener() {
    public void fileTransferRequest(FileTransferRequest request) {
        IncomingFileTransfer transfer = request.accept();
        try {
            File file = new File(Environment.getExternalStorageDirectory() + "/" + request.getFileName());
            Log.i("Tawasol", "File Name: " + request.getFileName());
            transfer.recieveFile(file);
            while (!transfer.isDone() || (transfer.getProgress() < 1)) {
                Thread.sleep(1000);
                Log.i("Tawasol", "still receiving : " + transfer.getProgress() + " status " + transfer.getStatus());
                if (transfer.getStatus().equals(org.jivesoftware.smackx.filetransfer.FileTransfer.Status.error)) {
                    // Log.i("Error file", transfer.getError().getMessage());
                    Log.i("Tawasol",
                            "cancelling still receiving : "
                                    + transfer.getProgress()
                                    + " status "
                                    + transfer.getStatus() + ": " + transfer.getException().toString());
                    transfer.cancel();
                    break;
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
});
still receiving : 0.0 status Negotiating Stream
I get this log for about 5 seconds, then I get this:
cancelling still receiving : 0.0 status Error: org.jivesoftware.smack.SmackException: Error in execution
I think the problem is in the Openfire server I'm using. I have an Openfire 3.9.3 server installed on my Windows 7 64-bit machine. In the Smack logs I noticed this one:
<iq id="L87BF-73" to="59xrd#rightsho/Smack" type="set" from="h97qa#rightsho/Spark"><query xmlns="http://jabber.org/protocol/bytestreams" sid="jsi_4840101552711519219" mode="tcp"><streamhost jid="proxy.rightsho" host="192.168.56.1" port="7777"/></query></iq>
The host here is 192.168.56.1, which I think is a local IP that I can't access from Android, so I want to use the PC's IP to transfer files.
Excuse my lack of knowledge in this field.
From my limited knowledge of Smack, the issue may be in this piece of code:
while (!transfer.isDone() || (transfer.getProgress() < 1)) {
    Thread.sleep(1000);
    Log.i("Tawasol", "still receiving : " + transfer.getProgress() + " status " + transfer.getStatus());
    if (transfer.getStatus().equals(org.jivesoftware.smackx.filetransfer.FileTransfer.Status.error)) {
        // Log.i("Error file", transfer.getError().getMessage());
        Log.i("Tawasol",
                "cancelling still receiving : "
                        + transfer.getProgress()
                        + " status "
                        + transfer.getStatus() + ": " + transfer.getException().toString());
        transfer.cancel();
        break;
    }
}
If you move the monitoring while loop to another thread, this error suddenly goes away. I'm not sure why, but it has worked for me and my friends in the past.
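A sketch of that suggestion, using the same transfer variable as the receiving code above:

// Monitor the transfer off the caller's thread instead of blocking it.
new Thread(() -> {
    try {
        while (!transfer.isDone() || transfer.getProgress() < 1) {
            Thread.sleep(1000);
            Log.i("Tawasol", "still receiving : " + transfer.getProgress()
                    + " status " + transfer.getStatus());
            if (transfer.getStatus().equals(
                    org.jivesoftware.smackx.filetransfer.FileTransfer.Status.error)) {
                transfer.cancel();
                break;
            }
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // stop monitoring if interrupted
    }
}, "file-transfer-monitor").start();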

Put multiple items into DynamoDB from Java code

I would like to use the batchWriteItem method of the Amazon SDK to put a lot of items into a table.
I retrieve the items from Kinesis, and it has a lot of shards.
I used this method for one item:
public static void addSingleRecord(Item thingRecord) {
    // Add an item
    try {
        DynamoDB dynamo = new DynamoDB(dynamoDB);
        Table table = dynamo.getTable(dataTable);
        table.putItem(thingRecord);
    } catch (AmazonServiceException ase) {
        System.out.println("addThingsData request "
                + "to AWS was rejected with an error response for some reason.");
        System.out.println("Error Message: " + ase.getMessage());
        System.out.println("HTTP Status Code: " + ase.getStatusCode());
        System.out.println("AWS Error Code: " + ase.getErrorCode());
        System.out.println("Error Type: " + ase.getErrorType());
        System.out.println("Request ID: " + ase.getRequestId());
    } catch (AmazonClientException ace) {
        System.out.println("addThingsData - Caught an AmazonClientException, which means the client encountered "
                + "a serious internal problem while trying to communicate with AWS, "
                + "such as not being able to access the network.");
        System.out.println("Error Message: " + ace.getMessage());
    }
}

public static void addThings(String thingDatum) {
    Item itemJ2;
    itemJ2 = Item.fromJSON(thingDatum);
    addSingleRecord(itemJ2);
}
The item is passed from:
private void processSingleRecord(Record record) {
    // TODO Add your own record processing logic here
    String data = null;
    try {
        // For this app, we interpret the payload as UTF-8 chars.
        data = decoder.decode(record.getData()).toString();
        System.out.println("**processSingleRecord - data " + data);
        AmazonDynamoDBSample.addThings(data);
    } catch (NumberFormatException e) {
        LOG.info("Record does not match sample record format. Ignoring record with data; " + data);
    } catch (CharacterCodingException e) {
        LOG.error("Malformed data: " + data, e);
    }
}
Now if I want to put a lot of records, I will use:
public static void writeMultipleItemsBatchWrite(Item thingRecord) {
    try {
        dataTableWriteItems.addItemToPut(thingRecord);
        System.out.println("Making the request.");
        BatchWriteItemOutcome outcome = dynamo.batchWriteItem(dataTableWriteItems);
        do {
            // Check for unprocessed keys, which can happen if you exceed provisioned throughput
            Map<String, List<WriteRequest>> unprocessedItems = outcome.getUnprocessedItems();
            if (outcome.getUnprocessedItems().size() == 0) {
                System.out.println("No unprocessed items found");
            } else {
                System.out.println("Retrieving the unprocessed items");
                outcome = dynamo.batchWriteItemUnprocessed(unprocessedItems);
            }
        } while (outcome.getUnprocessedItems().size() > 0);
    } catch (Exception e) {
        System.err.println("Failed to retrieve items: ");
        e.printStackTrace(System.err);
    }
}
But how can I send the last group? I only send when I have 25 items, but at the end the number is lower.
You can write items to your DynamoDB table one at a time using the Document SDK in a Lambda function attached to your Kinesis stream, using PutItem or UpdateItem. This way you can react to stream records as they appear in the stream, without worrying about whether there are any more records to process. Behind the scenes, BatchWriteItem consumes the same amount of write capacity units as the corresponding PutItem calls, and a BatchWriteItem is as latent as the slowest PUT in the batch. Therefore, using BatchWriteItem you may experience higher average latency than with parallel PutItem/UpdateItem calls.
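If you do stick with batching, the usual answer to the "last group" question is an explicit flush: buffer up to 25 items (the BatchWriteItem limit), write when the buffer is full, and write whatever is left once the input is exhausted. A sketch using the same document-API types as the question (the unprocessed-items retry loop from the question is omitted here for brevity):

import java.util.ArrayList;
import java.util.List;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.TableWriteItems;

public class BatchBuffer {

    private static final int MAX_BATCH = 25; // BatchWriteItem accepts at most 25 puts

    private final DynamoDB dynamo;
    private final String tableName;
    private final List<Item> buffer = new ArrayList<>();

    public BatchBuffer(DynamoDB dynamo, String tableName) {
        this.dynamo = dynamo;
        this.tableName = tableName;
    }

    public void add(Item item) {
        buffer.add(item);
        if (buffer.size() == MAX_BATCH) {
            flush();
        }
    }

    // Call once after the last record so the short final group is written too.
    public void flush() {
        if (buffer.isEmpty()) {
            return;
        }
        TableWriteItems writeItems = new TableWriteItems(tableName);
        for (Item item : buffer) {
            writeItems.addItemToPut(item);
        }
        dynamo.batchWriteItem(writeItems); // retry of unprocessed items omitted in this sketch
        buffer.clear();
    }
}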
