Play 2.5: get response body in custom http action - java

I'm trying to create a custom http action (https://playframework.com/documentation/2.5.x/JavaActionsComposition) to log request and response bodies with Play 2.5.0 Java. This is what I've got so far:
import java.util.concurrent.CompletionStage;
import play.mvc.Http;
import play.mvc.Result;

public class Log extends play.mvc.Action.Simple {

    @Override
    public CompletionStage<Result> call(Http.Context ctx) {
        CompletionStage<Result> response = delegate.call(ctx);

        // request body is fine
        System.out.println(ctx.request().body().asText());

        // how to get the response body string here while also not sabotaging
        // the HTTP response flow of the framework?
        // my guess is it should be somehow possible to access it below?
        response.thenApply(r -> {
            // ???
            return null;
        });

        return response;
    }
}

Logging is often considered a cross-cutting feature. In such cases the preferred way to do this in Play is to use Filters:
The filter API is intended for cross cutting concerns that are applied indiscriminately to all routes. For example, here are some common use cases for filters:
Logging/metrics collection
GZIP encoding
Security headers
This works for me:
import java.util.concurrent.CompletionStage;
import java.util.function.Function;
import javax.inject.Inject;

import akka.stream.*;
import play.Logger;
import play.mvc.*;

public class LoggingFilter extends Filter {

    Materializer mat;

    @Inject
    public LoggingFilter(Materializer mat) {
        super(mat);
        this.mat = mat;
    }

    @Override
    public CompletionStage<Result> apply(
            Function<Http.RequestHeader, CompletionStage<Result>> nextFilter,
            Http.RequestHeader requestHeader) {
        long startTime = System.currentTimeMillis();
        return nextFilter.apply(requestHeader).thenApply(result -> {
            long endTime = System.currentTimeMillis();
            long requestTime = endTime - startTime;
            Logger.info("{} {} took {}ms and returned {}",
                    requestHeader.method(), requestHeader.uri(), requestTime, result.status());
            akka.util.ByteString body = play.core.j.JavaResultExtractor.getBody(result, 10000L, mat);
            Logger.info(body.decodeString("UTF-8"));
            return result.withHeader("Request-Time", "" + requestTime);
        });
    }
}
What is it doing?
First, this creates a new Filter which can be used along with any other filters you may have. To get the body of the response we apply the nextFilter; once we have the result we can then read its body.
As of Play 2.5 Akka Streams are the weapon of choice. This means that once you use the JavaResultExtractor, you will get a ByteString, which you then have to decode in order to get the real string underneath.
Please keep in mind that there should be no problem in copying this logic into the Action you are creating. I just chose the Filter option for the reason stated at the top of my post.
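To actually apply the filter you also need to register it. Here is a minimal sketch assuming Play 2.5's HttpFilters interface; the class name Filters and the default root-package wiring are assumptions, adjust to your project:
import javax.inject.Inject;
import play.http.HttpFilters;
import play.mvc.EssentialFilter;

// Assumed registration class: by default Play looks for a class called
// "Filters" in the root package (it can also be configured via play.http.filters).
public class Filters implements HttpFilters {

    private final LoggingFilter loggingFilter;

    @Inject
    public Filters(LoggingFilter loggingFilter) {
        this.loggingFilter = loggingFilter;
    }

    @Override
    public EssentialFilter[] filters() {
        return new EssentialFilter[] { loggingFilter };
    }
}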

Related

Java: use of org.springframework.retry.annotation.Retryable

I'm attempting to use org.springframework.retry.annotation.Retryable with the expectation that if a connection reset or timeout happens, the execution can retry the call. But so far I'm not getting the retry setup to work properly. The timeout error looks like this:
javax.ws.rs.ProcessingException: RESTEASY004655: Unable to invoke request:
java.net.SocketTimeoutException: Read timed out
Here is what I have for the retry:
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;

import java.util.function.Supplier;

@Service
public interface RetryService {

    @Retryable(maxAttempts = 4,
            backoff = @Backoff(delay = 1000))
    <T> T run(Supplier<T> supplier);
}
and retry implementing class:
import com.service.RetryService;
import org.springframework.stereotype.Service;

import java.util.function.Supplier;

@Service
public class RetryImpl implements RetryService {

    @Override
    public <T> T run(Supplier<T> supplier) {
        return supplier.get();
    }
}
I use it in the following way:
retryService.run(() ->
    myClient.cancel(id)
);
and MyClient.java has the following implementation for the cancel call and caching:
public Response cancel(String id) {
    String externalUrl = "https://other.external.service/rest/cancel?id=" + id;
    WebTarget target = myClient.target(externalUrl);
    return target.request()
            .accept(MediaType.APPLICATION_JSON)
            .cacheControl(cacheControl())
            .buildPut(null)
            .invoke();
}

private static CacheControl cacheControl() {
    CacheControl cacheControl = new CacheControl();
    cacheControl.setMustRevalidate(true);
    cacheControl.setNoStore(true);
    cacheControl.setMaxAge(0);
    cacheControl.setNoCache(true);
    return cacheControl;
}
The externalUrl var above is an endpoint that's a void method - it returns nothing.
My hope is that when a timeout happens, the retry implementation fires up and calls the external service another time. My setup works in Postman as long as a timeout does NOT occur. But if I let it sit for about 30 minutes and try again, I get the timeout error above. And if I issue another call via Postman right after the timeout, I do get a successful execution. So the retry setup is not working; it seems like something falls asleep.
I've tried a dazzling number of variations to make it work, but no luck so far. When the timeout happens, I see no logs in Kibana in the external service, while getting the error above on my end.
I'm hoping there is something obvious that I'm not seeing here, or maybe there is a better way to do this. Any help is very much appreciated.
You need to set @EnableRetry on one of your @Configuration classes.
See the README: https://github.com/spring-projects/spring-retry
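For reference, a minimal sketch of what that looks like; the class name AppConfig is illustrative:
import org.springframework.context.annotation.Configuration;
import org.springframework.retry.annotation.EnableRetry;

// @EnableRetry activates the proxying that makes @Retryable annotations work.
@Configuration
@EnableRetry
public class AppConfig {
}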

How should Blocking Operations in a ContainerRequestFilter be handled Quarkus/Vert.x

Background:
We are implementing a signed request mechanism for communication between services. Part of that process generates a digest on the contents of the request body. To validate the body on receipt, we re-generate the digest at the receiver and compare. It's pretty straight-forward stuff.
@PreMatching
@Priority(Priorities.ENTITY_CODER)
public class DigestValidationFilter implements ContainerRequestFilter {

    private final DigestGenerator generator;

    @Inject
    public DigestValidationFilter(DigestGenerator generator) {
        this.generator = generator;
    }

    @Override
    public void filter(ContainerRequestContext context) throws IOException {
        if (context.hasEntity() && context.getHeaderString(Headers.DIGEST) != null) {
            String digest = context.getHeaderString(Headers.DIGEST);
            ByteArrayOutputStream body = new ByteArrayOutputStream();
            try (InputStream stream = context.getEntityStream()) {
                stream.transferTo(body); // <-- This is line 36 from the provided stack-trace
            }
            String algorithm = digest.split("=", 2)[0];
            try {
                String calculated = generator.generate(algorithm, body.toByteArray());
                if (digest.equals(calculated)) {
                    context.setEntityStream(new ByteArrayInputStream(body.toByteArray()));
                } else {
                    throw new InvalidDigestException("Calculated digest does not match supplied digest. Request body may have been tampered with.");
                }
            } catch (NoSuchAlgorithmException e) {
                throw new InvalidDigestException(String.format("Unsupported hash algorithm: %s", algorithm), e);
            }
        }
    }
}
The above filter is made available to services as a java-lib. We also supply a set of RequestFilters that can be used with various Http clients, i.e., okhttp3, apache-httpclient, etc. These clients only generate digests when the body is "repeatable", i.e., not streaming.
The Issue:
In Jersey services and Spring Boot services, we do not run into issues. However, when we use Quarkus, we receive the following stack-trace:
2022-09-02 15:18:25 5.13.0 ERROR A blocking operation occurred on the IO thread. This likely means you need to use the @io.smallrye.common.annotation.Blocking annotation on the Resource method, class or javax.ws.rs.core.Application class.
2022-09-02 15:18:25 5.13.0 ERROR HTTP Request to /v1/policy/internal/policies/72575947-45ac-4358-bc40-b5c7ffbd3f35/target-resources failed, error id: c79aa557-c742-43d7-93d9-0e362b2dff79-1
org.jboss.resteasy.reactive.common.core.BlockingNotAllowedException: Attempting a blocking read on io thread
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream$VertxBlockingInput.readBlocking(VertxInputStream.java:242)
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream.readIntoBuffer(VertxInputStream.java:120)
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream.read(VertxInputStream.java:82)
at java.base/java.io.InputStream.transferTo(InputStream.java:782)
at com.###.ciam.jaxrs.DigestValidationFilter.filter(DigestValidationFilter.java:36)
at org.jboss.resteasy.reactive.server.handlers.ResourceRequestFilterHandler.handle(ResourceRequestFilterHandler.java:47)
at org.jboss.resteasy.reactive.server.handlers.ResourceRequestFilterHandler.handle(ResourceRequestFilterHandler.java:8)
at org.jboss.resteasy.reactive.common.core.AbstractResteasyReactiveContext.run(AbstractResteasyReactiveContext.java:141)
at org.jboss.resteasy.reactive.server.handlers.RestInitialHandler.beginProcessing(RestInitialHandler.java:49)
at org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveVertxHandler.handle(ResteasyReactiveVertxHandler.java:17)
at org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveVertxHandler.handle(ResteasyReactiveVertxHandler.java:7)
at io.vertx.ext.web.impl.RouteState.handleContext(RouteState.java:1212)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:163)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:141)
at io.quarkus.vertx.http.runtime.StaticResourcesRecorder$2.handle(StaticResourcesRecorder.java:67) ... elided ...
I completely understand why Vert.x would like to prevent long-running I/O operations on the request processing threads. That said, the advice provided in the exception only accounts for I/O operations at the end of the request processing, i.e., it assumes the I/O is happening in the endpoint. Although we do control the filter code, it is in an external library, making it almost like a 3rd party library.
My Question:
What is the right way to handle this?
I've been scouring documentation, but haven't stumbled on the answer yet (or haven't recognized the answer). Is there a set of recommended docs I should review?
https://quarkus.io/guides/resteasy-reactive#request-or-response-filters
https://smallrye.io/smallrye-mutiny/1.7.0/guides/framework-integration/
@RequestScoped
class Filter(
    private val vertx: Vertx
) {
    // you can run blocking code on mutiny's Infrastructure defaultWorkerPool
    @ServerRequestFilter
    fun filter(requestContext: ContainerRequestContext): Uni<RestResponse<*>> {
        return Uni.createFrom().item { work() }
            .map<RestResponse<*>> { null }
            .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
    }

    // or use vertx.executeBlocking api
    @ServerRequestFilter
    fun filter(requestContext: ContainerRequestContext): Uni<RestResponse<*>> {
        return vertx.executeBlocking(
            Uni.createFrom().item { work() }
                .map { null }
        )
    }

    private fun work() {
        Log.info("filter")
        Thread.sleep(3000)
    }
}
In the end, the advice in the exception led me to simply annotating a delegating ContainerRequestFilter:
public class DigestValidationFilterBlocking implements ContainerRequestFilter {

    private final DigestValidationFilter delegate;

    public DigestValidationFilterBlocking(DigestValidationFilter delegate) {
        this.delegate = delegate;
    }

    @Blocking // <-- This annotation allowed Vert.x to accept the I/O operation
    @Override
    public void filter(ContainerRequestContext context) throws IOException {
        delegate.filter(context);
    }
}
I had the same problem. You can try using this in your @ServerRequestFilter:
@Context
HttpServerRequest request;
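A minimal sketch of how that might look; the class name, logger, and logged fields are illustrative assumptions, and any blocking body read would still have to be moved off the IO thread as discussed above:
import io.vertx.core.http.HttpServerRequest;
import org.jboss.logging.Logger;
import org.jboss.resteasy.reactive.server.ServerRequestFilter;

import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.core.Context;

public class RequestInfoFilter {

    private static final Logger LOG = Logger.getLogger(RequestInfoFilter.class);

    // Vert.x-level view of the current request, injected as suggested above.
    @Context
    HttpServerRequest request;

    @ServerRequestFilter
    public void filter(ContainerRequestContext context) {
        // Only non-blocking work here (headers, URI, remote address, ...).
        LOG.infof("%s %s from %s", request.method(), request.uri(), request.remoteAddress());
    }
}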

Retrofit 2 RequestBody writeTo() method called twice

Retrofit 2 RequestBody writeTo() method called twice, the code which I used is given below:
ProgressRequestBody requestVideoFile = new ProgressRequestBody(videoFile, new ProgressRequestBody.UploadCallbacks() {
    VideoUploadStore store = new VideoUploadStore();

    @Override
    public void onProgressUpdate(int percentage) {
        if (!mIsCancelled) {
            Log.i("UploadServiceManager", "Read Percentage : " + percentage);
            data.setUploadPercentage(percentage);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onError() {
        if (!mIsCancelled) {
            data.setUploadPercentage(0);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onFinish() {
    }
});

MultipartBody.Part multipartVideo = MultipartBody.Part.createFormData("File", videoFile.getName(), requestVideoFile);
The solution below might help you out, although it might be too late. :p
Remove the HttpLoggingInterceptor object from your API client, which will stop writeTo() from executing twice. Basically, HttpLoggingInterceptor loads the data into a buffer first (for internal logging purposes) by calling writeTo(), and then calls writeTo() again to upload the data to the server.
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
logging.setLevel(HttpLoggingInterceptor.Level.BODY);
httpClient.addInterceptor(logging);
Decreasing the log level from BODY to HEADERS, BASIC or NONE solved this problem for me.
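For example, with the interceptor from the snippet above:
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
// HEADERS (or BASIC/NONE) does not buffer the request body, so writeTo() runs only once.
logging.setLevel(HttpLoggingInterceptor.Level.HEADERS);
httpClient.addInterceptor(logging);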
I figured out yet another case where writeTo() is called twice.
I use OkHttpClient without Retrofit and without HttpLoggingInterceptor, and I still had the double-call problem.
Solution: the problem appeared after upgrading Android Studio to 3.1.1 and enabling Advanced Profiling in the run configuration. So disable Advanced Profiling.
If you're using writeTo() to track file upload progress, you need to distinguish between the callers of writeTo(). Basically writeTo() can be called by any interceptor in the chain, e.g. any logging interceptor such as a HttpLoggingInterceptor/OkHttpProfilerInterceptor/StethoInterceptor, and this method provides no context for that.
The simplest approach (as pointed out by other answers) is to get rid of those interceptors that require access to the request body. But this may be not always feasible.
Another solution is to make use of the fact that the server call is performed by CallServerInterceptor, which is the last interceptor in the chain (according to the docs). You can inspect the stack trace prior to further handling. Yes, this is ugly. But this way you don't have to modify your interceptors or leave room for subtle bugs when someone else comes along and adds another interceptor.
override fun writeTo(sink: BufferedSink) {
    val isCalledByCallServerInterceptor = Thread.currentThread().stackTrace.any { stackTraceElement ->
        stackTraceElement.className == CallServerInterceptor::class.java.canonicalName
    }
    // TODO
}
Decreasing the level from BODY to HEADERS or removing HttpLoggingInterceptor was not a solution for me, because it provides useful insight into what is going on with the API calls. Instead, you can introduce a counter variable like
private int firstTimeCounter = 0; and then:
@Override
public void writeTo(BufferedSink sink) throws IOException {
    firstTimeCounter += 1;
    .......
    .......
    if (firstTimeCounter == 2) {
        try {
            while (total != file.length()) {
                read = source.read(sink.buffer(), SEGMENT_SIZE);
                total += read;
                Log.e("progress ", total + " %");
                sink.flush();
            }
        } finally {
            okhttp3.internal.Util.closeQuietly(source);
        }
    }
}

Fire and forget for HTTP in Java

We're implementing our own analytics. For that we've exposed a web service which needs to be invoked and which will capture the data in our DB.
The problem is that, as this is analytics, we would be making a lot of calls (for every page load, after each JS or CSS load, etc.), so there will be many, many such calls. I don't want the server to be loaded with lots of requests or, to be more precise, with requests pending a response, because the response we get back will hardly be of any use to us.
So is there any way to just fire the web service request and forget that I've fired it?
I understand that every HTTP request will have a response as well.
One thing that ticked my mind was: what if we set the request timeout to zero seconds? But I'm not sure this is the right way of doing it.
Please provide me with more suggestions.
You might find the following AsyncRequestDemo.java useful:
import java.net.URI;
import java.net.URISyntaxException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.apache.http.client.fluent.Async;
import org.apache.http.client.fluent.Content;
import org.apache.http.client.fluent.Request;
import org.apache.http.client.utils.URIBuilder;
import org.apache.http.concurrent.FutureCallback;

/**
 * Following libraries have been used:
 *
 * 1) httpcore-4.4.5.jar
 * 2) httpclient-4.5.2.jar
 * 3) commons-logging-1.2.jar
 * 4) fluent-hc-4.5.2.jar
 */
public class AsyncRequestDemo {

    public static void main(String[] args) throws Exception {
        URIBuilder urlBuilder = new URIBuilder()
                .setScheme("http")
                .setHost("stackoverflow.com")
                .setPath("/questions/38277471/fire-and-forget-for-http-in-java");

        final int nThreads = 3; // no. of threads in the pool
        final int timeout = 0;  // connection time out in milliseconds

        URI uri = null;
        try {
            uri = urlBuilder.build();
        } catch (URISyntaxException use) {
            use.printStackTrace();
        }

        ExecutorService executorService = Executors.newFixedThreadPool(nThreads);
        Async async = Async.newInstance().use(executorService);
        final Request request = Request.Get(uri).connectTimeout(timeout);

        Future<Content> future = async.execute(request, new FutureCallback<Content>() {
            public void failed(final Exception e) {
                System.out.println("Request failed: " + request);
                System.exit(1);
            }

            public void completed(final Content content) {
                System.out.println("Request completed: " + request);
                System.out.println(content.asString());
                System.exit(0);
            }

            public void cancelled() {
            }
        });

        System.out.println("Request submitted");
    }
}
I used this:
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

URL url = new URL(YOUR_URL_PATH);
ExecutorService executor = Executors.newFixedThreadPool(1);
Future<HttpResponse> response = executor.submit(new HttpRequest(url));
executor.shutdown();
For HttpRequest and HttpResponse:
import java.io.InputStream;
import java.net.URL;
import java.util.concurrent.Callable;

public class HttpRequest implements Callable<HttpResponse> {
    private URL url;

    public HttpRequest(URL url) {
        this.url = url;
    }

    @Override
    public HttpResponse call() throws Exception {
        return new HttpResponse(url.openStream());
    }
}

public class HttpResponse {
    private InputStream body;

    public HttpResponse(InputStream body) {
        this.body = body;
    }

    public InputStream getBody() {
        return body;
    }
}
That's it.
Yes, you could initiate the request and break the connection without waiting for a response... But you probably don't want to do that. The overhead of the server-side having to deal with ungracefully broken connections will far outweigh letting it proceed with returning a response.
A better approach to solving this kind of performance problem in a Java servlet would be to shove all the data from the requests into a queue, respond immediately, and have one or more worker threads pick items off the queue for processing (such as writing them into a database).
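A minimal sketch of that queue-and-worker pattern; all class, method, and field names here are illustrative assumptions:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical example: the servlet enqueues analytics events and returns
// immediately, while a single worker thread drains the queue and persists them.
public class AnalyticsQueue {

    private final BlockingQueue<String> events = new LinkedBlockingQueue<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    public AnalyticsQueue() {
        worker.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String event = events.take(); // blocks until an event arrives
                    persist(event);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }

    // Called from the request-handling thread; returns immediately.
    public void enqueue(String event) {
        events.offer(event);
    }

    private void persist(String event) {
        // placeholder for the actual database write
    }
}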

How to do custom action composition to log request and response in Play 2.3?

I'm working on a Play 2.3 (Java) application and I need a custom Action Composition to log the request and response. With what I've got so far I am able to get the body of the request, but not the response:
import play.libs.F;
import play.mvc.Action;
import play.mvc.Http;
import play.mvc.Result;

public class LogAction extends Action.Simple {

    public F.Promise<Result> call(Http.Context ctx) throws Throwable {
        // Request body
        String requestBody = ctx.request().body().asText();

        // Need to get response body here
        // String responseBody = ???

        return delegate.call(ctx);
    }
}
How do I get the response body in this scenario? If it's difficult to do in Java, it may as well be in Scala; however, it has to work with a Java controller method @With annotation.
@Override
public F.Promise<Result> call(Http.Context ctx) throws Throwable {
    F.Promise<Result> call = delegate.call(ctx);
    return call.map((r) -> {
        byte[] body = JavaResultExtractor.getBody(r, 0L);
        Logger.info(new String(body));
        return r;
    });
}
You can use play.core.j.JavaResultExtractor to extract the body from the response. Keep in mind that getBody(..) blocks until the response is ready, so consider calling onRedeem instead of map.
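For example, a sketch of the onRedeem variant for the LogAction above, assuming Play 2.3's F.Promise API:
@Override
public F.Promise<Result> call(Http.Context ctx) throws Throwable {
    F.Promise<Result> promise = delegate.call(ctx);
    // onRedeem runs a side-effecting callback when the promise completes;
    // its result is not returned to the framework, so the response flow is unchanged.
    promise.onRedeem(result -> {
        byte[] body = JavaResultExtractor.getBody(result, 0L);
        Logger.info(new String(body));
    });
    return promise;
}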
Have you tried something like this:
public class VerboseAction extends play.mvc.Action.Simple {

    public F.Promise<Result> call(Http.Context ctx) throws Throwable {
        Logger.info("Calling action for " + ctx);
        F.Promise<Result> resultPromise = delegate.call(ctx);
        resultPromise.map(result -> {
            Logger.info("Status: " + result.toScala().header().status());
            Logger.info(result.toScala().body().toString());
            return result;
        });
        return resultPromise;
    }
}
The body will be returned as a play.api.libs.iteratee.Enumerator. Now the hard part is to work with this. First you need to understand the concept of Iteratee and what role the Enumerator plays in it. Hint: think of the Enumerator as a producer of data and think of the Iteratee as the consumer of this data.
Now on this Enumerator you can run an Iteratee that will transform the data chunks into the type you want.
The bad news is that you need to implement the play.api.libs.iteratee.Iteratee trait. As you can see, it resides in the api subpackage, which means it is part of the Scala world in Play. Maybe in this case it would be much easier to use Scala for this part of your task. Unfortunately I cannot provide you with an example implementation, but I hope it would not be that hard. I think this is something really missing on the Java side of Play.
