Spring async REST client: orchestrate a few calls - java

I have the following problem: in my service I am building an object X, but in order to build it I need to make a few HTTP calls to collect all the required data (each REST call fills a certain part of the object). To keep performance high I thought it would be nice to make the calls asynchronously and return the object to the caller once all calls are done. It looks something like this:
ListenableFuture<ResponseEntity<String>> future1 = asyncTemplate.exchange(url, method, requestEntity, responseType);
future1.addCallback(response -> {
    // process the response and set fields
    complexObject.field1 = "PARSED RESPONSE";
}, failure -> {
    // in case of failure fill in defaults or take some other action
});
I don't know how to wait for all the futures to be done. I guess there are some standard Spring ways of solving this kind of issue. Thanks in advance for any suggestions. Spring version: 4.2.4.RELEASE.
Best regards

Adapted from Waiting for callback for multiple futures.
This example simply requests the Google and Microsoft homepages. When the response is received in the callback, and I've done my processing, I decrement a CountDownLatch. I await the CountDownLatch, "blocking" the current thread until the CountDownLatch reaches 0.
It's important that you decrement whether your call fails or succeeds, as the latch must reach 0 for the method to continue!
public static void main(String[] args) throws Exception {
    String googleUrl = "http://www.google.com";
    String microsoftUrl = "http://www.microsoft.com";

    AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();
    ListenableFuture<ResponseEntity<String>> googleFuture =
            asyncRestTemplate.exchange(googleUrl, HttpMethod.GET, null, String.class);
    ListenableFuture<ResponseEntity<String>> microsoftFuture =
            asyncRestTemplate.exchange(microsoftUrl, HttpMethod.GET, null, String.class);

    final CountDownLatch countDownLatch = new CountDownLatch(2);
    ListenableFutureCallback<ResponseEntity<String>> listenableFutureCallback =
            new ListenableFutureCallback<ResponseEntity<String>>() {

        @Override
        public void onSuccess(ResponseEntity<String> stringResponseEntity) {
            System.out.println(String.format("[Thread %d] Status Code: %d. Body size: %d",
                    Thread.currentThread().getId(),
                    stringResponseEntity.getStatusCode().value(),
                    stringResponseEntity.getBody().length()
            ));
            countDownLatch.countDown();
        }

        @Override
        public void onFailure(Throwable throwable) {
            System.err.println(throwable.getMessage());
            countDownLatch.countDown();
        }
    };

    googleFuture.addCallback(listenableFutureCallback);
    microsoftFuture.addCallback(listenableFutureCallback);

    System.out.println(String.format("[Thread %d] This line executed immediately.", Thread.currentThread().getId()));
    countDownLatch.await();
    System.out.println(String.format("[Thread %d] All responses received.", Thread.currentThread().getId()));
}
The output from my console:
[Thread 1] This line executed immediately.
[Thread 14] Status Code: 200. Body size: 112654
[Thread 13] Status Code: 200. Body size: 19087
[Thread 1] All responses received.
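If you are on Java 8+, an alternative to the CountDownLatch (my own variation, not part of the original answer) is to adapt each ListenableFuture to a CompletableFuture and wait with CompletableFuture.allOf. A minimal sketch, assuming it is acceptable to block the calling thread:

import java.util.concurrent.CompletableFuture;

import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.web.client.AsyncRestTemplate;

public class AllOfExample {

    public static void main(String[] args) {
        AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();

        CompletableFuture<ResponseEntity<String>> google = toCompletable(
                asyncRestTemplate.exchange("http://www.google.com", HttpMethod.GET, null, String.class));
        CompletableFuture<ResponseEntity<String>> microsoft = toCompletable(
                asyncRestTemplate.exchange("http://www.microsoft.com", HttpMethod.GET, null, String.class));

        // Block until both calls are done (throws CompletionException if either one failed).
        CompletableFuture.allOf(google, microsoft).join();

        System.out.println("Google body size: " + google.join().getBody().length());
        System.out.println("Microsoft body size: " + microsoft.join().getBody().length());
    }

    // Bridge Spring's ListenableFuture to a CompletableFuture.
    private static <T> CompletableFuture<T> toCompletable(ListenableFuture<T> future) {
        CompletableFuture<T> completable = new CompletableFuture<>();
        future.addCallback(completable::complete, completable::completeExceptionally);
        return completable;
    }
}

If you need default values on failure instead of an exception, attach exceptionally(...) to the individual futures before calling allOf.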

Related

How to move error message to Azure dead letter queue (Topics - Subscription) using Java?

I need to send my messages to the dead-letter queue from an Azure topic subscription in case of any error while reading and processing a message from the topic. So I tried testing by pushing a message directly to the DLQ.
My sample code looks like this:
static void sendMessage()
{
    // create a Service Bus sender client for the topic
    ServiceBusSenderClient senderClient = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .sender()
        .topicName(topicName)
        .buildClient();

    // send one message to the topic
    senderClient.sendMessage(new ServiceBusMessage("Hello, World!"));
}
static void receiveAsync() {
    ServiceBusReceiverAsyncClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .buildAsyncClient();

    // receive() operation continuously fetches messages until the subscription is disposed.
    // The stream is infinite, and completes when the subscription or receiver is closed.
    Disposable subscription = receiver.receiveMessages().subscribe(message -> {
        System.out.printf("Id: %s%n", message.getMessageId());
        System.out.printf("Contents: %s%n", message.getBody().toString());
    }, error -> {
        System.err.println("Error occurred while receiving messages: " + error);
    }, () -> {
        System.out.println("Finished receiving messages.");
    });

    // Continue application processing. When you are finished receiving messages, dispose of the subscription.
    subscription.dispose();

    // When you are done using the receiver, dispose of it.
    receiver.close();
}
I tried getting the dead-letter queue path:
String dlq = EntityNameHelper.formatDeadLetterPath(topicName);
I got a dead-letter queue path like "mytopic/$deadletterqueue".
But it's not working when I pass that path as the topic name; it throws an entity-not-found exception.
Can anyone please advise me on this?
References:
How to move error message to Azure dead letter queue using Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dead-letter-queues#moving-messages-to-the-dlq
How to push the failure messages to Azure service bus Dead Letter Queue in Spring Boot Java?
https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-java-how-to-use-topics-subscriptions-legacy#receive-messages-from-a-subscription
You probably know that a message will be automatically moved to the dead-letter queue if you throw exceptions during processing and the maximum delivery count is exceeded. If you want to explicitly move the message to the DLQ, you can do so as well. A common case for this is when you know that the message can never succeed because of its contents.
You cannot send new messages directly to the DLQ, because then you would have two messages in the system. You need to call a special operation on the parent entity. Also, <topic path>/$deadletterqueue does not work, because this would be the DLQ of all subscriptions. The correct entity path is built like this:
<queue path>/$deadletterqueue
<topic path>/Subscriptions/<subscription path>/$deadletterqueue
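For completeness, with the azure-messaging-servicebus client used in the question you can also point a receiver at a subscription's dead-letter sub-queue without building that path by hand. A rough sketch only (the variables are the same placeholders as in the question, and the exact builder/receive overloads may differ between SDK versions):

import java.time.Duration;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.SubQueue;

static void readDeadLetterQueue() {
    // Point the receiver at <topic>/Subscriptions/<subscription>/$deadletterqueue
    ServiceBusReceiverClient dlqReceiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .subQueue(SubQueue.DEAD_LETTER_QUEUE)
        .buildClient();

    // Browse what has been dead-lettered so far
    for (ServiceBusReceivedMessage message : dlqReceiver.receiveMessages(10, Duration.ofSeconds(5))) {
        System.out.printf("Dead-lettered %s: %s%n", message.getMessageId(), message.getDeadLetterReason());
        dlqReceiver.complete(message);
    }
    dlqReceiver.close();
}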
The following sample (from the azure-service-bus repository) is for queues, but you should be able to adapt it to topics quite easily:
https://github.com/Azure/azure-service-bus/blob/master/samples/Java/azure-servicebus/DeadletterQueue/src/main/java/com/microsoft/azure/servicebus/samples/deadletterqueue/DeadletterQueue.java
// register the RegisterMessageHandler callback
receiver.registerMessageHandler(
    new IMessageHandler() {
        // callback invoked when the message handler loop has obtained a message
        public CompletableFuture<Void> onMessageAsync(IMessage message) {
            // the received message is passed to the callback
            if (message.getLabel() != null &&
                message.getContentType() != null &&
                message.getLabel().contentEquals("Scientist") &&
                message.getContentType().contentEquals("application/json")) {
                // ...
            } else {
                return receiver.deadLetterAsync(message.getLockToken());
            }
            return receiver.completeAsync(message.getLockToken());
        }

        // callback invoked when the message handler has an exception to report
        public void notifyException(Throwable throwable, ExceptionPhase exceptionPhase) {
            System.out.printf(exceptionPhase + "-" + throwable.getMessage());
        }
    },
    // 1 concurrent call, auto-complete disabled (we complete/dead-letter explicitly), 1 minute auto-renew
    new MessageHandlerOptions(1, false, Duration.ofMinutes(1)),
    executorService);
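With the newer azure-messaging-servicebus client from the question, the equivalent explicit move is the receiver's deadLetter operation. Again only a sketch, assuming the default PEEK_LOCK receive mode with explicit settlement; the content-type check and the reason text are illustrative:

import java.time.Duration;

import com.azure.messaging.servicebus.ServiceBusClientBuilder;
import com.azure.messaging.servicebus.ServiceBusReceivedMessage;
import com.azure.messaging.servicebus.ServiceBusReceiverClient;
import com.azure.messaging.servicebus.models.DeadLetterOptions;

static void deadLetterUnprocessable() {
    ServiceBusReceiverClient receiver = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .receiver()
        .topicName(topicName)
        .subscriptionName(subName)
        .buildClient();                       // PEEK_LOCK is the default receive mode

    for (ServiceBusReceivedMessage message : receiver.receiveMessages(10, Duration.ofSeconds(5))) {
        if ("application/json".equals(message.getContentType())) {
            // ... process the message ...
            receiver.complete(message);
        } else {
            // Moves the message to <topic>/Subscriptions/<subscription>/$deadletterqueue
            receiver.deadLetter(message, new DeadLetterOptions().setDeadLetterReason("UnexpectedContentType"));
        }
    }
    receiver.close();
}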

RxJava: OnErrorFailedException. Identifying the correct cause

Inspired by T. Nurkiewicz's "Reactive Programming with RxJava", I tried to apply it in a project that I am working on, and here's the issue that I am facing.
I have a REST endpoint that takes an input stream and a username and either returns a link for the updated username or returns a Bad Request error. Here's how I tried to implement this using RxJava:
@PUT
@Path("{username}")
public Response updateCredential(@PathParam("username") final String username, InputStream stream) {
    CredentialCandidate candidate = new CredentialCandidate();
    Observable.just(repository.getByUsername(username))
        .subscribe(
            credential -> {
                serializeCandidate(candidate, stream);
                try {
                    repository.updateCredential(build(credential, candidate));
                } catch (Exception e) {
                    String msg = "Failed to update credential +\"" + username + "\": " + e.getMessage();
                    throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
                }
            },
            ex -> {
                String msg = "Couldn't update credential \"" + username + "\""
                        + ". A credential with such username doesn't exist: " + ex.getMessage();
                logger.error(msg);
                throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
            }); // if the Observable completes without exceptions we have a success case

    Map<String, String> map = new HashMap<>();
    map.put("path", "credential/" + username);
    return Response.ok(getJsonRepr("link", uriGenerator.apply(appsUriBuilder, map).toASCIIString())).build();
}
My issue is with the catch clause inside the onNext lambda. This is the log output, which quickly demonstrates what happens:
19:23:50.472 [http-listener(4)] ERROR com.vgorcinschi.rimmanew.rest.services.CredentialResourceService - Couldn't update credential "admin". A credential with such username doesn't exist: Failed to update credential +"admin": Password too weak!
So the exception thrown in the onNext method goes upstream and ends up in the onError method! Apparently this works as designed, but I am confused as to how I could return the correct reason for the Bad Request error. After all, in my test case a credential for the user was found by the repository; the actual error was that the suggested password was too weak. This is the helper method that generated the error:
private Credential build(Credential credential, CredentialCandidate candidate) {
    if (!isOkPsswd.test(candidate.getPassword())) {
        throw new BadRequestException("Password too weak!", Response.status(Response.Status.BAD_REQUEST).build());
    }
    ...
}
I am still fairly new to Reactive Programming so I realise I may be missing something that is obvious. Skimming through the book didn't get me to an answer, so I would appreciate any help.
Just in case, this is the full stack trace:
updateCredentialTest(com.vgorcinschi.rimmanew.services.CredentialResourceServiceTest) Time elapsed: 0.798 sec <<< ERROR!
rx.exceptions.OnErrorFailedException: Error occurred when trying to propagate error to Observer.onError
at com.vgorcinschi.rimmanew.rest.services.CredentialResourceService.lambda$updateCredential$9(CredentialResourceService.java:245)
at rx.internal.util.ActionSubscriber.onNext(ActionSubscriber.java:39)
at rx.observers.SafeSubscriber.onNext(SafeSubscriber.java:134)
at rx.internal.util.ScalarSynchronousObservable$WeakSingleProducer.request(ScalarSynchronousObservable.java:276)
at rx.Subscriber.setProducer(Subscriber.java:209)
at rx.Subscriber.setProducer(Subscriber.java:205)
at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:138)
at rx.internal.util.ScalarSynchronousObservable$JustOnSubscribe.call(ScalarSynchronousObservable.java:129)
at rx.Observable.subscribe(Observable.java:10238)
at rx.Observable.subscribe(Observable.java:10205)
at rx.Observable.subscribe(Observable.java:10045)
at com.vgorcinschi.rimmanew.rest.services.CredentialResourceService.updateCredential(CredentialResourceService.java:238)
at com.vgorcinschi.rimmanew.services.CredentialResourceServiceTest.updateCredentialTest(CredentialResourceServiceTest.java:140)
It seems you haven't quite grasped the Reactive programming principles.
The first thing is that an Observable is asynchronous by its API, while you are trying to force it into a synchronous API by returning the Response value directly from the method, instead of returning an Observable<Response> that emits this Response value over time via its onNext() notification.
That's why you are struggling with the exception: each notification lambda (onNext/onError) is wrapped by the Observable machinery in order to create a proper stream that obeys certain rules (the Observable contract). One of those expected behaviors is that errors are redirected to the onError() method, which is the exception-handling hook; you shouldn't throw there, and throwing there is considered a fatal error that is surfaced by throwing OnErrorFailedException.
Ideally it would be something like this:
public Observable<Response> updateCredential(@PathParam("username") final String username,
                                             InputStream stream) {
    return Observable.fromCallable(() -> {
        CredentialCandidate candidate = new CredentialCandidate();
        Credential credential = repository.getByUsername(username);
        serializeCandidate(candidate, stream);
        repository.updateCredential(build(credential, candidate));

        Map<String, String> map = new HashMap<>();
        map.put("path", "credential/" + username);
        return Response.ok(getJsonRepr("link", uriGenerator.apply(appsUriBuilder, map).toASCIIString())).build();
    })
    .onErrorReturn(throwable -> {
        String msg = "Failed to update credential \"" + username + "\": " + throwable.getMessage();
        throw new BadRequestException(msg, Response.status(Response.Status.BAD_REQUEST).build());
    });
}
Use fromCallable so that the request happens upon subscription (whereas Observable.just(repository.getByUsername(username)) executes synchronously while the Observable is being constructed). The success path is within the callable itself, while any error is transformed into your custom exception using the onErrorReturn operator.
With this approach you return an Observable that only acts when you subscribe to it, and you get all the benefits of the Observable/Reactive approach, such as being able to compose it with other operations and being able to decide from the outside whether it runs synchronously (on the current thread) or asynchronously on some other thread (using a Scheduler), as in the sketch below.
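For illustration, a caller could then pick the threading only at the subscription point. A minimal sketch, assuming RxJava 1.x as in the question; sendToClient and sendError are hypothetical callbacks:

import rx.schedulers.Schedulers;

// The repository call and serialization run on an I/O thread only when someone subscribes.
updateCredential(username, stream)
        .subscribeOn(Schedulers.io())
        .subscribe(
                response -> sendToClient(response),   // onNext: the built Response
                error -> sendError(error));           // onError: e.g. the BadRequestException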
For a more detailed explanation of reactive programming, I suggest starting with this great tutorial by André Staltz.

HTTP/2 priority & dependency test with Jetty

Priority & Dependency:
Here I made a simple test, but the result doesn't look right.
I tried to make 100 requests in a for loop on the same connection (the request URL is the same; I am wondering whether this influences the results).
If the index is i, then my request stream_id is i while the dependent stream_id is 100+i. If our assumption is right, the request can never get a response, because there is no stream with an id from 101 to 200.
But the results show no difference between setting the dependency and not setting it. I got the response data frames one by one without any timeout or waiting.
I also ran some other related tests, where the idea was to send the stream that depends on another stream first and the stream it depends on later. But the result is the same.
I am still trying to work out the reason for these results. Can anyone help me? Many thanks.
Code here:
public void run() throws Exception
{
    host = "google.com";
    port = 443;

    // client init
    HTTP2Client client = new HTTP2Client();
    SslContextFactory sslContextFactory = new SslContextFactory(true);
    client.addBean(sslContextFactory);
    client.start();

    // connect init
    FuturePromise<Session> sessionPromise = new FuturePromise<>();
    client.connect(sslContextFactory, new InetSocketAddress(host, port), new ServerSessionListener.Adapter(), sessionPromise);
    Session session = sessionPromise.get(10, TimeUnit.SECONDS);

    // headers init
    HttpFields requestFields = new HttpFields();
    requestFields.put("User-Agent", client.getClass().getName() + "/" + Jetty.VERSION);

    final Phaser phaser = new Phaser(2);

    // multiple requests on one connection
    for (int i = 0; i < 100; i++)
    {
        MetaData.Request metaData = new MetaData.Request("GET", new HttpURI("https://" + host + ":" + port + "/"), HttpVersion.HTTP_2, requestFields);
        PriorityFrame testPriorityFrame = new PriorityFrame(i, 100 + i, 4, true);
        HeadersFrame headersFrame = new HeadersFrame(0, metaData, testPriorityFrame, true);

        // listen for header/data/push frames
        session.newStream(headersFrame, new Promise.Adapter<Stream>(), new Stream.Listener.Adapter()
        {
            @Override
            public void onHeaders(Stream stream, HeadersFrame frame)
            {
                System.err.println(frame + "headId:" + frame.getStreamId());
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public void onData(Stream stream, DataFrame frame, Callback callback)
            {
                System.err.println(frame + "streamid:" + frame.getStreamId());
                callback.succeeded();
                if (frame.isEndStream())
                    phaser.arrive();
            }

            @Override
            public Stream.Listener onPush(Stream stream, PushPromiseFrame frame)
            {
                System.err.println(frame + "pushid:" + frame.getStreamId());
                phaser.register();
                return this;
            }
        });
    }

    phaser.awaitAdvanceInterruptibly(phaser.arrive(), 5, TimeUnit.SECONDS);
    client.stop();
}
The Jetty project has not implemented HTTP/2 request prioritization (yet).
We are discussing whether it is of any use for a server, whose concern is to write back the responses as quickly as it can.
Having one client change its mind about the priority of its requests, or make a request knowing that in reality it wanted another request served first, is a lot of work for a server that in the meantime has to serve the other 10,000 clients connected to it.
By the time the server has recomputed the priority tree for the dependent requests, it could probably have served those requests already.
By the time the client realizes that it has to change the priority of a request, the whole response for it could already be in flight.
Having said that, we are certainly interested in real world use cases where request prioritization performed by the server yields a real performance improvement. We just have not seen it yet.
I would love to hear why you are interested in request prioritization and how you are leveraging it. Your answer could be a drive for the Jetty project to implement HTTP/2 priorities.

Streaming in jersey 2?

I've been trying to get JSON streaming to work in Jersey 2. For the life of me, nothing streams until the stream is complete.
I've tried this example, trying to simulate a slow producer of data.
#Path("/foo")
#GET
public void getAsyncStream(#Suspended AsyncResponse response) {
StreamingOutput streamingOutput = output -> {
JsonGenerator jg = new ObjectMapper().getFactory().createGenerator(output, JsonEncoding.UTF8);
jg.writeStartArray();
for (int i = 0; i < 100; i++) {
jg.writeObject(i);
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
logger.error(e, "Error");
}
}
jg.writeEndArray();
jg.flush();
jg.close();
};
response.resume(Response.ok(streamingOutput).build());
}
And yet Jersey just sits there until the JSON generator is done before returning the results. I'm watching the results come through in Charles Proxy.
Do I need to enable something? I'm not sure why this won't stream out.
Edit:
This may actually be working, just not how I expected it. I don't think StreamingOutput writes things out in real time, which is what I wanted; it's more about not having to buffer responses and being able to write them out to the client immediately. If I run a loop of a million with no thread sleep, then data does get written out in chunks without having to buffer it in memory.
Your edit is correct. It is working as expected. StreamingOutput is just a wrapper that lets us write directly to the response stream, but it does not mean the response is streamed on each server-side write to the stream. Also, AsyncResponse does not produce a different response as far as the client is concerned. It simply helps increase throughput with long-running tasks. The long-running task should actually be done in another thread, so the method can return.
See more at Asynchronous Server API
What you seem to be looking for instead is Chunked Output
Jersey offers a facility for sending response to the client in multiple more-or-less independent chunks using a chunked output. Each response chunk usually takes some (longer) time to prepare before sending it to the client. The most important fact about response chunks is that you want to send them to the client immediately as they become available without waiting for the remaining chunks to become available too.
I'm not sure how it will work for your particular use case, as the JsonGenerator expects an OutputStream (which the ChunkedOutput we use is not), but here is a simpler example:
#Path("async")
public class AsyncResource {
#GET
public ChunkedOutput<String> getChunkedStream() throws Exception {
final ChunkedOutput<String> output = new ChunkedOutput<>(String.class);
new Thread(() -> {
try {
String chunk = "Message";
for (int i = 0; i < 10; i++) {
output.write(chunk + "#" + i);
Thread.sleep(1000);
}
} catch (Exception e) {
} finally {
try {
output.close();
} catch (IOException ex) {
Logger.getLogger(AsyncResource.class.getName())
.log(Level.SEVERE, null, ex);
}
}
}).start();
return output;
}
}
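For the JSON case from the question, one possible adaptation (mine, not from the original answer) is to skip the JsonGenerator and serialize each element to a String chunk with Jackson; the "numbers" path and the chunk shape are illustrative:

@GET
@Path("numbers")
@Produces(MediaType.APPLICATION_JSON)
public ChunkedOutput<String> getJsonChunks() {
    final ChunkedOutput<String> output = new ChunkedOutput<>(String.class);
    final ObjectMapper mapper = new ObjectMapper();

    new Thread(() -> {
        try {
            for (int i = 0; i < 100; i++) {
                // Each chunk is a small standalone JSON document, e.g. {"value":1}
                output.write(mapper.writeValueAsString(Collections.singletonMap("value", i)) + "\n");
                Thread.sleep(100);
            }
        } catch (Exception e) {
            // log and fall through to close the output
        } finally {
            try {
                output.close();
            } catch (IOException ignored) {
            }
        }
    }).start();

    return output;
}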
Note: I had a problem getting this to work at first. I would only get the delayed, complete result. The problem seemed to be with something completely separate from the program: it was actually my AVG antivirus causing it. A feature called "LinkScanner" was stopping the chunking process from occurring. I disabled that feature and it started working.
I haven't explored chunking much and am not sure of the security implications, so I don't know why the AVG application has a problem with it.
EDIT
It seems the real problem is that Jersey buffers the response in order to calculate the Content-Length header. See this post for how you can change this behavior.
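In short (my reading of what that boils down to, not a quote from the post): shrink or disable Jersey's outbound Content-Length buffer so the response switches to chunked transfer encoding and is flushed as soon as you write to it. A sketch:

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.ServerProperties;

public class MyApplication extends ResourceConfig {
    public MyApplication() {
        packages("com.example.resources");  // placeholder package
        // 0 = don't buffer to compute Content-Length; stream with chunked transfer encoding instead
        property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0);
    }
}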

Single-threaded Java Websocket for Testing

We are developing an application with Scala and Websockets. For the latter we use Java-Websocket. The application itself works great and we are in the middle of writing unit tests.
We use a WebSocket class as follows
class WebSocket(uri : URI) extends WebSocketClient(uri) {
  connectBlocking()

  var response = ""

  def onOpen(handshakedata : ServerHandshake) {
    println("onOpen")
  }

  def onMessage(message : String) {
    println("Received: " + message)
    response = message
  }

  def onClose(code : Int, reason : String, remote : Boolean) {
    println("onClose")
  }

  def onError(ex : Exception) {
    println("onError")
  }
}
A test might look like this (pseudo code)
websocketTest {
  ws = new WebSocket("ws://example.org")
  ws.send("foo")
  res = ws.getResponse()
  ....
}
Sending and receiving data works. However, the problem is that connecting to the websocket creates a new thread and only the new thread will have access to response using the onMessage handler. What is the best way to either make the websocket implementation single-threaded or connect the two threads so that we can access the response in the test case? Or is there another, even better way of doing it? In the end we should be able to somehow test the response of the websocket.
There are a number of ways you could try to do this. The issue is that you might get either an error or a successful response from the server. As a result, the best way is probably to use some sort of timeout. In the past I have used a pattern like the following (note: this is untested code):
...
use response in the onMessage like you did
...
long start = System.currentTimeMillis();
long timeout = 5000; // 5 seconds
while ((System.currentTimeMillis() - start) < timeout && response == null) {
    Thread.sleep(100);
}
if (response == null) {
    // timed out
} else {
    // do something with the response
}
If you want to be especially safe you can use an AtomicReference for the response.
Of course the timeout and sleep can be minimized based on your test case.
Moreover, you can wrap this in a utility method.
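For example, a variant of the same idea using a CountDownLatch plus an AtomicReference, so the test thread blocks until onMessage fires or the timeout expires. A Java sketch mirroring the Scala class above (untested):

import java.net.URI;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

import org.java_websocket.client.WebSocketClient;
import org.java_websocket.handshake.ServerHandshake;

public class BlockingWebSocket extends WebSocketClient {
    private final CountDownLatch latch = new CountDownLatch(1);
    private final AtomicReference<String> response = new AtomicReference<>();

    public BlockingWebSocket(URI uri) throws InterruptedException {
        super(uri);
        connectBlocking();
    }

    @Override
    public void onMessage(String message) {
        response.set(message);
        latch.countDown();                  // wake up the waiting test thread
    }

    @Override
    public void onError(Exception ex) {
        latch.countDown();                  // don't leave the test hanging on errors
    }

    @Override
    public void onOpen(ServerHandshake handshake) { }

    @Override
    public void onClose(int code, String reason, boolean remote) { }

    // Test helper: wait up to the given timeout for the first message.
    public String awaitResponse(long timeout, TimeUnit unit) throws InterruptedException {
        latch.await(timeout, unit);
        return response.get();
    }
}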
