Using a CommonJ Work Manager to send asynchronous HTTP calls - java

I switched from making sequential HTTP calls to 4 REST services to making 4 simultaneous calls using a CommonJ work manager task executor. I'm using WebLogic 12c. This new code works in my development environment, but in our test environment under load conditions, and occasionally while not under load, the results map is not populated with all of the results. The logging suggests that each work item did receive its results back, though. Could this be a problem with the ConcurrentHashMap?
In this example from IBM, they use their own version of Work and there's a getData() method, although it doesn't look like that method really exists in their class definition. I had followed a different example that just used the Work class but didn't demonstrate how to get the data out of those threads into the main thread. Should I be using execute() instead of schedule()? The API doesn't appear to be well documented. The stuckthreadtimeout is sufficiently high. component.processInbound() actually contains the code for the HTTP call, but I don't think the problem is there, because I can switch back to the synchronous version of the class below and not have any issues.
http://publib.boulder.ibm.com/infocenter/wsdoc400/v6r0/index.jsp?topic=/com.ibm.websphere.iseries.doc/info/ae/asyncbns/concepts/casb_workmgr.html
My code:
public class WorkManagerAsyncLinkedComponentRouter implements
        MessageDispatcher<Object, Object> {

    private List<Component<Object, Object>> components;
    protected ConcurrentHashMap<String, Object> workItemsResultsMap;
    protected ConcurrentHashMap<String, Exception> componentExceptionsInThreads;
    ...

    // components is populated at this point with one component for each REST call to be made.
    public Object route(final Object message) throws RouterException {
        ...
        try {
            workItemsResultsMap = new ConcurrentHashMap<String, Object>();
            componentExceptionsInThreads = new ConcurrentHashMap<String, Exception>();
            final String parentThreadID = Thread.currentThread().getName();
            List<WorkItem> producerWorkItems = new ArrayList<WorkItem>();

            for (final Component<Object, Object> component : this.components) {
                producerWorkItems.add(workManagerTaskExecutor.schedule(new Work() {
                    public void run() {
                        //ExecuteThread th = (ExecuteThread) Thread.currentThread();
                        //th.setName(component.getName());
                        LOG.info("Child thread " + Thread.currentThread().getName()
                                + " Parent thread: " + parentThreadID
                                + " Executing work item for: " + component.getName());
                        try {
                            Object returnObj = component.processInbound(message);
                            if (returnObj == null)
                                LOG.info("Object returned to work item is null, not adding to producer components results map, for this producer: "
                                        + component.getName());
                            else {
                                LOG.info("Added producer component thread result for: "
                                        + component.getName());
                                workItemsResultsMap.put(component.getName(), returnObj);
                            }
                            LOG.info("Finished executing work item for: " + component.getName());
                        } catch (Exception e) {
                            componentExceptionsInThreads.put(component.getName(), e);
                        }
                    }
                    ...
                }));
            } // end loop over producer components

            // Block until all items are done
            workManagerTaskExecutor.waitForAll(producerWorkItems, stuckThreadTimeout);
            LOG.info("Finished waiting for all producer component threads.");

            if (componentExceptionsInThreads != null
                    && componentExceptionsInThreads.size() > 0) {
                ...
            }

            List<Object> resultsList = new ArrayList<Object>(workItemsResultsMap.values());
            if (resultsList.size() == 0)
                throw new RouterException(
                        "The producer thread results are all empty. The threads were likely not created. In testing this was observed when either 1) the system was almost out of memory (perhaps there is not enough memory to create a new thread for each producer, for this REST request), or 2) timeouts were reached for all producers.");

            //** The problem is identified here. The results in the ConcurrentHashMap aren't the number expected.
            if (workItemsResultsMap.size() != this.components.size()) {
                StringBuilder sb = new StringBuilder();
                for (String str : workItemsResultsMap.keySet()) {
                    sb.append(str + " ");
                }
                throw new RouterException(
                        "Did not receive results from all threads within the thread timeout period. Only retrieved: "
                                + sb.toString());
            }

            LOG.info("Returning " + String.valueOf(resultsList.size()) + " results.");
            LOG.debug("List of returned feeds: " + String.valueOf(resultsList));
            return resultsList;
        }
        ...
    }
}

I ended up cloning the DOM document used as a parameter. There must be some downstream code that has side effects on the parameter.
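For reference, a minimal sketch of that clone step, assuming the message handed to processInbound is (or wraps) an org.w3c.dom.Document; the deepCopy helper below is my own naming, built on the standard JAXP identity transform rather than anything from the original code:

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMResult;
import javax.xml.transform.dom.DOMSource;
import org.w3c.dom.Document;

// Deep-copies a DOM Document so each work item operates on its own instance
// and downstream side effects cannot leak between threads.
public static Document deepCopy(Document original) throws Exception {
    Transformer transformer = TransformerFactory.newInstance().newTransformer();
    DOMResult result = new DOMResult();
    transformer.transform(new DOMSource(original), result);
    return (Document) result.getNode();
}

Each work item would then call component.processInbound(deepCopy(originalDocument)) instead of sharing the single parameter. Document.cloneNode(true) can also work, but its behaviour for Document nodes is implementation-dependent, so the identity transform is the safer route.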

Related

FirestoreException: Backend ended Listen Stream

I'm trying to use Firestore in order to set up realtime listeners for a collection. Whenever a document is added, modified, or deleted in a collection, I want the listener to be called. My code is currently working for one collection, but when I try the same code on a larger collection, it fails with the error:
Listen failed: com.google.cloud.firestore.FirestoreException: Backend ended Listen stream: The datastore operation timed out, or the data was temporarily unavailable.
Here's my actual listener code:
/**
 * Sets up a listener at the given collection reference. When changes are made in this collection, it writes a flat
 * text file for import into backend.
 * @param collectionReference The Collection Reference that we want to listen to for changes.
 */
public static void listenToCollection(CollectionReference collectionReference) {
    AtomicBoolean initialUpdate = new AtomicBoolean(true);
    System.out.println("Initializing listener for: " + collectionReference.getId());
    collectionReference.addSnapshotListener(new EventListener<QuerySnapshot>() {
        @Override
        public void onEvent(@Nullable QuerySnapshot queryDocumentSnapshots, @Nullable FirestoreException e) {
            // Error Handling
            if (e != null) {
                System.err.println("Listen failed: " + e);
                return;
            }
            // If this is the first time this function is called, it's simply reading everything in the collection.
            // We don't care about the initial value, only the updates, so we simply ignore the first call.
            if (initialUpdate.get()) {
                initialUpdate.set(false);
                System.out.println("Initial update complete...\nListener active for " + collectionReference.getId() + "...");
                return;
            }
            // A document has changed, propagate this back to backend by writing text file.
            for (DocumentChange dc : queryDocumentSnapshots.getDocumentChanges()) {
                String docId = dc.getDocument().getId();
                Map<String, Object> docData = dc.getDocument().getData();
                String folderPath = createFolderPath(collectionReference, docId, docData);
                switch (dc.getType()) {
                    case ADDED:
                        System.out.println("Document Created: " + docId);
                        writeMapToFile(docData, folderPath, "CREATE");
                        break;
                    case MODIFIED:
                        System.out.println("Document Updated: " + docId);
                        writeMapToFile(docData, folderPath, "UPDATE");
                        break;
                    case REMOVED:
                        System.out.println("Document Deleted: " + docId);
                        writeMapToFile(docData, folderPath, "DELETE");
                        break;
                    default:
                        break;
                }
            }
        }
    });
}
It seems to me that the collection is too large, and the initial download of the collection is timing out. Is there some sort of workaround I can use in order to get updates to this collection in real time?
I reached out to the Firebase team, and they're currently getting back to me on the issue. In the meantime, I was able to reduce the size of my listener by querying the collection based on a Last Updated timestamp attribute. I only looked at documents that were recently updated, and had my app change this attribute whenever a change was made.
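A hedged sketch of that workaround, assuming the documents carry a lastUpdated Timestamp field (the field name and the listenToRecentChanges method are my own illustration, not from the original code):

import com.google.cloud.Timestamp;
import com.google.cloud.firestore.CollectionReference;
import com.google.cloud.firestore.DocumentChange;

// Listens only to documents updated after the given cutoff, which keeps the
// initial snapshot small enough to avoid the backend timeout.
public static void listenToRecentChanges(CollectionReference collectionReference, Timestamp cutoff) {
    collectionReference
            .whereGreaterThan("lastUpdated", cutoff)
            .addSnapshotListener((queryDocumentSnapshots, e) -> {
                if (e != null) {
                    System.err.println("Listen failed: " + e);
                    return;
                }
                for (DocumentChange dc : queryDocumentSnapshots.getDocumentChanges()) {
                    System.out.println(dc.getType() + ": " + dc.getDocument().getId());
                }
            });
}

As described above, the application also has to set lastUpdated on every write for this query to see the change.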

Aggregate finished threads and send the response after timeout - RxJava

I have a use case where I need to aggregate the finished thread responses from multiple Observable objects and return them back to the client. My question is how to achieve this using RxJava. Here I have written a code snippet, but the issue is that it won't return anything after the timeout.
Observable<AggregateResponse> aggregateResponse = Observable
        .zip(callServiceA(endpoint), callServiceB(endpoint), callServiceC(endpoint),
                (Mashup resultA, Mashup resultB, Mashup resultC) -> {
                    AggregateResponse result = new AggregateResponse();
                    result.setResult(resultA.getName() + " " + resultB.getName() + " " + resultC.getName());
                    return result;
                })
        .timeout(5, TimeUnit.SECONDS);
Subscriber:
aggregateResponse.subscribe(new Subscriber<AggregateResponse>() {
    @Override
    public void onCompleted() {
    }

    @Override
    public void onError(Throwable throwable) {
        // Timeout executes this rather than aggregating the finished tasks
        System.out.println(throwable.getMessage());
        System.out.println(throwable.getClass());
    }

    @Override
    public void onNext(AggregateResponse response) {
        asyncResponse.resume(response);
    }
});
You need to put the timeout operator on each Observable; zip will wait for all Observables to emit a value before emitting a result, so if only one of them takes longer while the others have already emitted, the timeout will cut down the stream (with onError) before the zipped Observable has a chance to emit.
What you should do, assuming you want to ignore timed-out sources while keeping the rest, is add a timeout operator to each Observable and also add error handling such as onErrorReturn to each one. The error return can produce some kind of 'empty' result (you can't use null in RxJava 2), and when you aggregate the results you ignore those empty results:
Observable<AggregateResponse> aggregateResponse = Observable
        .zip(callServiceA(endpoint)
                        .timeout(5, TimeUnit.SECONDS)
                        .onErrorReturn(throwable -> new Mashup()),
                callServiceB(endpoint)
                        .timeout(5, TimeUnit.SECONDS)
                        .onErrorReturn(throwable -> new Mashup()),
                callServiceC(endpoint)
                        .timeout(5, TimeUnit.SECONDS)
                        .onErrorReturn(throwable -> new Mashup()),
                (Mashup resultA, Mashup resultB, Mashup resultC) -> {
                    AggregateResponse result = new AggregateResponse();
                    result.setResult(resultA.getName() + " " + resultB.getName() + " " + resultC.getName());
                    return result;
                });
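The zipper above still concatenates whatever the placeholder Mashup objects return. A minimal sketch of the "ignore empty results" step, assuming RxJava 1 (as the Subscriber-based code suggests) and that a placeholder Mashup has a null or empty name; this aggregator function is my own illustration:

import java.util.stream.Collectors;
import java.util.stream.Stream;
import rx.functions.Func3;

// Aggregator that skips the placeholder results produced by onErrorReturn.
Func3<Mashup, Mashup, Mashup, AggregateResponse> aggregator =
        (resultA, resultB, resultC) -> {
            AggregateResponse result = new AggregateResponse();
            result.setResult(Stream.of(resultA, resultB, resultC)
                    .map(Mashup::getName)
                    .filter(name -> name != null && !name.isEmpty())
                    .collect(Collectors.joining(" ")));
            return result;
        };

Passing aggregator as the last argument to Observable.zip keeps timed-out services out of the final response text.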

Apache Kafka System Error Handling

We are trying to implement Kafka as our message broker solution. We are deploying our Spring Boot microservices in IBM Bluemix, whose internal message broker implementation is Kafka version 0.10. Since my experience is more on the JMS/ActiveMQ end, I was wondering what the ideal way to handle system-level errors in the Java consumers should be.
Here is how we have implemented it currently
Consumer properties
enable.auto.commit=false
auto.offset.reset=latest
We are using the default properties for
max.partition.fetch.bytes
session.timeout.ms
Kafka Consumer
We are spinning up 3 threads per topic, all having the same groupId, i.e. one KafkaConsumer instance per thread. We have only one partition as of now. The consumer code looks like this in the constructor of the thread class:
kafkaConsumer = new KafkaConsumer<String, String>(properties);
final List<String> topicList = new ArrayList<String>();
topicList.add(properties.getTopic());
kafkaConsumer.subscribe(topicList, new ConsumerRebalanceListener() {
    @Override
    public void onPartitionsRevoked(final Collection<TopicPartition> partitions) {
    }

    @Override
    public void onPartitionsAssigned(final Collection<TopicPartition> partitions) {
        try {
            logger.info("Partitions assigned, consumer seeking to end.");
            for (final TopicPartition partition : partitions) {
                final long position = kafkaConsumer.position(partition);
                logger.info("current Position: " + position);
                logger.info("Seeking to end...");
                kafkaConsumer.seekToEnd(Arrays.asList(partition));
                logger.info("Seek from the current position: " + kafkaConsumer.position(partition));
                kafkaConsumer.seek(partition, position);
            }
            logger.info("Consumer can now begin consuming messages.");
        } catch (final Exception e) {
            logger.error("Error while seeking during partition assignment.", e);
        }
    }
});
The actual reading happens in the run method of the thread
try {
    // Poll on the Kafka consumer every second.
    final ConsumerRecords<String, String> records = kafkaConsumer.poll(1000);
    // Iterate through all the messages received and print their content.
    for (final TopicPartition partition : records.partitions()) {
        final List<ConsumerRecord<String, String>> partitionRecords = records.records(partition);
        logger.info("consumer is alive and is processing " + partitionRecords.size() + " records");
        for (final ConsumerRecord<String, String> record : partitionRecords) {
            logger.info("processing topic " + record.topic() + " for key " + record.key() + " on offset " + record.offset());
            final Class<? extends Event> resourceClass = eventProcessors.getResourceClass();
            final Object obj = converter.convertToObject(record.value(), resourceClass);
            if (obj != null) {
                logger.info("Event: " + obj + " acquired by " + Thread.currentThread().getName());
                final CommsEvent event = resourceClass.cast(converter.convertToObject(record.value(), resourceClass));
                final MessageResults results = eventProcessors.processEvent(event);
                if ("Success".equals(results.getStatus())) {
                    // Commit the processed message, which changes the offset.
                    kafkaConsumer.commitSync();
                    logger.info("Message processed successfully");
                } else {
                    kafkaConsumer.seek(new TopicPartition(record.topic(), record.partition()), record.offset());
                    logger.error("Error processing message : {} with error : {}, resetting offset to {} ", obj, results.getError().getMessage(), record.offset());
                    break;
                }
            }
        }
    }
    // TODO add return
} catch (final Exception e) {
    logger.error("Consumer has failed with exception: " + e, e);
    shutdown();
}
You will notice the EventProcessor, which is a service class that processes each record and, in most cases, commits the record to the database. If the processor throws an error (a system exception or ValidationException) we do not commit, but programmatically seek to that offset, so that a subsequent poll will return records from that offset for that group id.
The doubt now is: is this the right approach? If we get an error and we set the offset, then until that is fixed no other message is processed. This might work for system errors like not being able to connect to the DB, but if the problem is only with that one event and not the others, then because of this one record we won't be able to process any other record. We thought of the concept of an ErrorTopic: when we get an error, the consumer will publish that event to the ErrorTopic and in the meantime keep processing subsequent events. But it looks like we are trying to bring the design concepts of JMS (due to my previous experience) into Kafka, and there may be a better way to solve error handling in Kafka. Also, reprocessing from the error topic may change the sequence of messages, which we don't want in some scenarios.
Please let me know how anyone has handled this scenario in their projects following the Kafka standards.
-Tatha
if the problem is only with that one event and not the others, then because of this one record we won't be able to process any other record
That's correct, and your suggestion to use an error topic seems a possible one.
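For illustration, a minimal sketch of that error-topic pattern; the ErrorTopicPublisher class and the error topic itself are hypothetical, not part of your code. The consumer would call publish(record) in the failure branch and then commitSync() so later records keep flowing:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// Hypothetical helper: parks a failed record on an error topic so the main
// consumer can commit the offset and keep processing subsequent events.
public class ErrorTopicPublisher {
    private final KafkaProducer<String, String> producer;
    private final String errorTopic;

    public ErrorTopicPublisher(KafkaProducer<String, String> producer, String errorTopic) {
        this.producer = producer;
        this.errorTopic = errorTopic;
    }

    public void publish(ConsumerRecord<String, String> record) {
        producer.send(new ProducerRecord<>(errorTopic, record.key(), record.value()),
                (metadata, exception) -> {
                    if (exception != null) {
                        // Could not even park the record; fall back to the seek-and-retry behaviour.
                        exception.printStackTrace();
                    }
                });
    }
}

As you note yourself, anything replayed from the error topic later may arrive out of order relative to the main topic, so this only fits flows where that is acceptable.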
I also noticed that with your handling of onPartitionsAssigned you essentially do not use the consumer's committed offset, as it seems you'll always seek to the end.
If you want to restart from the last successfully committed offset, you should not perform a seek.
Finally, I'd like to point out (though it looks like you already know this) that having 3 consumers in the same group subscribed to a single partition means that 2 out of 3 will be idle.
HTH
Edo

Where is the deadlock in this example?

I am currently reading a section on concurrency in the book The Well-Grounded Java Developer, and this particular code sample demonstrating block concurrency should deadlock, but as far as I can see it does not. Here's the code:
public class MicroBlogNode implements SimpleMicroBlogNode {
    private final String ident;

    public MicroBlogNode(String ident_) {
        ident = ident_;
    }

    public String getIdent() {
        return ident;
    }

    public static Update getUpdate(String _name) {
        return new Update(_name);
    }

    public synchronized void propagateUpdate(Update upd_, MicroBlogNode backup_) {
        System.out.println(ident + ": received: " + upd_.getUpdateText() + " ; backup: " + backup_.getIdent());
        backup_.confirmUpdate(this, upd_);
    }

    public synchronized void confirmUpdate(MicroBlogNode other_, Update update_) {
        System.out.println(ident + ": received confirm: " + update_.getUpdateText() + " from " + other_.getIdent() + "\n");
    }

    public static void main(String[] args) {
        final MicroBlogNode local = new MicroBlogNode("localhost");
        final MicroBlogNode other = new MicroBlogNode("remotehost");
        final Update first = getUpdate("1");
        final Update second = getUpdate("2");

        new Thread(new Runnable() {
            public void run() {
                local.propagateUpdate(first, other);
            }
        }).start();

        new Thread(new Runnable() {
            public void run() {
                other.propagateUpdate(second, local);
            }
        }).start();
    }
}
When I run it I get the following output:
localhost: received: 1 ; backup: remotehost
remotehost: received confirm: 1 from localhost
remotehost: received: 2 ; backup: localhost
localhost: received confirm: 2 from remotehost
The book says that if you run the code, you’ll normally see an example of a deadlock—both threads will report receiving the update, but neither will confirm receiving the update for
which they’re the backup thread. The reason for this is that each thread requires the other to release the lock it holds before the confirmation method can progress.
As far as I can see this is not the case - each thread confirms receiving the update for which they are the backup thread.
Thanks in advance.
This looks like timing. Your output shows that the localhost thread completed before the remotehost (other) thread started.
Try putting a Thread.sleep(1000) in the propagateUpdate method after the System.out:
public synchronized void propagateUpdate(Update upd_, MicroBlogNode backup_) {
    System.out.println(ident + ": received: " + upd_.getUpdateText() + " ; backup: " + backup_.getIdent());
    try {
        Thread.sleep(1000); // hold this node's lock long enough for the other thread to take its own lock
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    backup_.confirmUpdate(this, upd_);
}
This should force a deadlock.
The deadlock happens when local is calling confirmUpdate on other from one thread while other is attempting the same call on local from the other thread. The order of operations is:
1. local locks itself by entering propagateUpdate, because the method is declared synchronized (see Synchronized Member Function in Java).
2. other locks itself by entering propagateUpdate.
3. local attempts to acquire the lock on other to call confirmUpdate, but can't, since other has already been locked by the other thread.
4. other attempts to do the same thing and fails for the same reason.
If it's actually working, it's probably because it's happening too fast. Run it a few more times; threading issues never show up when you want them to.
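A quick way to see it for yourself is to race the two calls in a loop, e.g. in a separate driver class. This harness is my own sketch (the 1000 iterations and 5-second join timeout are arbitrary) and assumes the MicroBlogNode and Update classes from the question:

// Repeatedly races the two propagateUpdate calls; if a pair of threads is still
// alive after the join timeout, they are almost certainly deadlocked on each other.
public static void main(String[] args) throws InterruptedException {
    for (int i = 0; i < 1000; i++) {
        final MicroBlogNode local = new MicroBlogNode("localhost");
        final MicroBlogNode other = new MicroBlogNode("remotehost");
        final Update first = MicroBlogNode.getUpdate("1");
        final Update second = MicroBlogNode.getUpdate("2");
        Thread t1 = new Thread(() -> local.propagateUpdate(first, other));
        Thread t2 = new Thread(() -> other.propagateUpdate(second, local));
        t1.start();
        t2.start();
        t1.join(5000);
        t2.join(5000);
        if (t1.isAlive() || t2.isAlive()) {
            System.out.println("Likely deadlocked on iteration " + i);
            return;
        }
    }
    System.out.println("No deadlock observed in 1000 iterations.");
}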

Retrieve multiple messages from SQS

I have multiple messages in SQS. The following code always returns only one, even if there are dozens visible (not in flight). I thought setMaxNumberOfMessages would allow multiple messages to be consumed at once... have I misunderstood this?
CreateQueueRequest createQueueRequest = new CreateQueueRequest().withQueueName(queueName);
String queueUrl = sqs.createQueue(createQueueRequest).getQueueUrl();

ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(queueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);

List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (Message message : messages) {
    // i'm a message from SQS
}
I've also tried using withMaxNumberOfMessages without any such luck:
receiveMessageRequest.withMaxNumberOfMessages(10);
How do I know there are messages in the queue? More than 1?
Set<String> attrs = new HashSet<String>();
attrs.add("ApproximateNumberOfMessages");
CreateQueueRequest createQueueRequest = new CreateQueueRequest().withQueueName(queueName);
GetQueueAttributesRequest a = new GetQueueAttributesRequest().withQueueUrl(sqs.createQueue(createQueueRequest).getQueueUrl()).withAttributeNames(attrs);
Map<String,String> result = sqs.getQueueAttributes(a).getAttributes();
int num = Integer.parseInt(result.get("ApproximateNumberOfMessages"));
The above is always run beforehand and gives me an int that is > 1.
Thanks for your input
AWS API Reference Guide: Query/QueryReceiveMessage
Due to the distributed nature of the queue, a weighted random set of machines is sampled on a ReceiveMessage call. That means only the messages on the sampled machines are returned. If the number of messages in the queue is small (less than 1000), it is likely you will get fewer messages than you requested per ReceiveMessage call. If the number of messages in the queue is extremely small, you might not receive any messages in a particular ReceiveMessage response; in which case you should repeat the request.
and
MaxNumberOfMessages: Maximum number of messages to return. SQS never returns more messages than this value but might return fewer.
There is a comprehensive explanation for this (arguably rather idiosyncratic) behaviour in the SQS reference documentation.
SQS stores copies of messages on multiple servers, and receive message requests are made to these servers with one of two possible strategies:
Short Polling: The default behaviour; only a subset of the servers (based on a weighted random distribution) are queried.
Long Polling: Enabled by setting the WaitTimeSeconds attribute to a non-zero value; all of the servers are queried.
In practice, for my limited tests, I always seem to get one message with short polling just as you did.
I had the same problem. What is the Receive Message Wait Time for your queue set to? When mine was at 0, it only returned 1 message even if there were 8 in the queue. When I increased the Receive Message Wait Time, I got all of them. Seems kind of buggy to me.
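For reference, a hedged sketch of setting that queue-level wait time with the v1 Java SDK (the 20-second value is just an example; this is the queue-wide equivalent of the per-request setWaitTimeSeconds shown below):

import java.util.HashMap;
import java.util.Map;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.model.SetQueueAttributesRequest;

// Sets the queue's default Receive Message Wait Time so every ReceiveMessage
// call long-polls for up to 20 seconds unless it overrides WaitTimeSeconds itself.
public static void enableLongPolling(AmazonSQS sqs, String queueUrl) {
    Map<String, String> attributes = new HashMap<>();
    attributes.put("ReceiveMessageWaitTimeSeconds", "20");
    sqs.setQueueAttributes(new SetQueueAttributesRequest()
            .withQueueUrl(queueUrl)
            .withAttributes(attributes));
}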
I was just trying the same thing, and with the help of these two attributes, setMaxNumberOfMessages and setWaitTimeSeconds, I was able to get 10 messages.
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(myQueueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);
receiveMessageRequest.setWaitTimeSeconds(20);
Snapshot of the output:
Receiving messages from TestQueue.
Number of messages:10
Message
MessageId: 31a7c669-1f0c-4bf1-b18b-c7fa31f4e82d
...
receiveMessageRequest.withMaxNumberOfMessages(10);
Just to be clear, the more practical use of this would be to chain it onto the constructor call, like this:
ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest(queueUrl).withMaxNumberOfMessages(10);
Otherwise, you might as well just do:
receiveMessageRequest.setMaxNumberOfMessages(10);
That being said, changing this won't help the original problem.
Thanks Caoilte!
I faced this issue also. I finally solved it by using long polling, following the configuration here:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-configure-long-polling-for-queue.html
Unfortunately, in my case I had to create my queue as a FIFO one to get long polling working; I tried a standard queue with no luck.
And when receiving, you also need to set MaxNumberOfMessages. So my code is like:
ReceiveMessageRequest receive_request = new ReceiveMessageRequest()
.withQueueUrl(QUEUE_URL)
.withWaitTimeSeconds(20)
.withMaxNumberOfMessages(10);
Although it's solved, it still feels too weird. AWS should definitely provide a neater API for this kind of basic receiving operation.
From my point of view, AWS has many, many cool features but not good APIs. It's like those guys are rushing things out all the time.
For small task lists I use a FIFO queue, as in stackoverflow.com/a/55149351/13678017; for example, this modified AWS tutorial:
// Create a queue.
System.out.println("Creating a new Amazon SQS FIFO queue called " + "MyFifoQueue.fifo.\n");
final Map<String, String> attributes = new HashMap<>();

// A FIFO queue must have the FifoQueue attribute set to true.
attributes.put("FifoQueue", "true");

/*
 * If the user doesn't provide a MessageDeduplicationId, generate a
 * MessageDeduplicationId based on the content.
 */
attributes.put("ContentBasedDeduplication", "true");

// The FIFO queue name must end with the .fifo suffix.
final CreateQueueRequest createQueueRequest = new CreateQueueRequest("MyFifoQueue4.fifo")
        .withAttributes(attributes);
final String myQueueUrl = sqs.createQueue(createQueueRequest).getQueueUrl();

// List all queues.
System.out.println("Listing all queues in your account.\n");
for (final String queueUrl : sqs.listQueues().getQueueUrls()) {
    System.out.println("  QueueUrl: " + queueUrl);
}
System.out.println();

// Send a message.
System.out.println("Sending a message to MyQueue.\n");
for (int i = 0; i < 4; i++) {
    var request = new SendMessageRequest()
            .withQueueUrl(myQueueUrl)
            .withMessageBody("message " + i)
            .withMessageGroupId("userId1");
    sqs.sendMessage(request);
}
for (int i = 0; i < 6; i++) {
    var request = new SendMessageRequest()
            .withQueueUrl(myQueueUrl)
            .withMessageBody("message " + i)
            .withMessageGroupId("userId2");
    sqs.sendMessage(request);
}

// Receive messages.
System.out.println("Receiving messages from MyQueue.\n");
var receiveMessageRequest = new ReceiveMessageRequest(myQueueUrl);
receiveMessageRequest.setMaxNumberOfMessages(10);
receiveMessageRequest.setWaitTimeSeconds(20);
// what receive?
receiveMessageRequest.withMessageAttributeNames("userId2");
final List<Message> messages = sqs.receiveMessage(receiveMessageRequest).getMessages();
for (final Message message : messages) {
    System.out.println("Message");
    System.out.println("  MessageId:     " + message.getMessageId());
    System.out.println("  ReceiptHandle: " + message.getReceiptHandle());
    System.out.println("  MD5OfBody:     " + message.getMD5OfBody());
    System.out.println("  Body:          " + message.getBody());
    for (final Entry<String, String> entry : message.getAttributes().entrySet()) {
        System.out.println("Attribute");
        System.out.println("  Name:  " + entry.getKey());
        System.out.println("  Value: " + entry.getValue());
    }
}
Here's a workaround: you can call the receiveMessageFromSQS method asynchronously (Node.js example below).
bulkReceiveFromSQS(queueUrl, totalMessages, asyncLimit, batchSize, visibilityTimeout, waitTime, callback) {
    batchSize = Math.min(batchSize, 10);
    let self = this,
        noOfIterations = Math.ceil(totalMessages / batchSize);

    async.timesLimit(noOfIterations, asyncLimit, function(n, next) {
        self.receiveMessageFromSQS(queueUrl, batchSize, visibilityTimeout, waitTime,
            function(err, result) {
                if (err) {
                    return next(err);
                }
                return next(null, _.get(result, 'Messages'));
            });
    }, function(err, listOfMessages) {
        if (err) {
            return callback(err);
        }
        listOfMessages = _.flatten(listOfMessages).filter(Boolean);
        return callback(null, listOfMessages);
    });
}
It will return an array with the given number of messages.
