Retrofit 2 RequestBody writeTo() method called twice

Retrofit 2's RequestBody writeTo() method is called twice; the code I used is given below:
ProgressRequestBody requestVideoFile = new ProgressRequestBody(videoFile, new ProgressRequestBody.UploadCallbacks() {
    VideoUploadStore store = new VideoUploadStore();

    @Override
    public void onProgressUpdate(int percentage) {
        if (!mIsCancelled) {
            Log.i("UploadServiceManager", "Read Percentage : " + percentage);
            data.setUploadPercentage(percentage);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onError() {
        if (!mIsCancelled) {
            data.setUploadPercentage(0);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onFinish() {
    }
});
MultipartBody.Part multipartVideo = MultipartBody.Part.createFormData("File", videoFile.getName(), requestVideoFile);

The solution below might help you out, although it might be too late. :p
Remove the HttpLoggingInterceptor object from your API client and writeTo() will no longer be executed twice. Basically, HttpLoggingInterceptor loads the data into a buffer first (for internal logging purposes) by calling writeTo(), and then calls writeTo() again to upload the data to the server.
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
logging.setLevel(HttpLoggingInterceptor.Level.BODY);
httpClient.addInterceptor(logging);

Decreasing the log level from BODY to HEADERS, BASIC, or NONE solved this problem for me.
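For example, a minimal tweak that keeps the interceptor but lowers the level (using the same httpClient builder as above):
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
// HEADERS (or BASIC/NONE) does not buffer the request body, so writeTo() is not invoked for logging
logging.setLevel(HttpLoggingInterceptor.Level.HEADERS);
httpClient.addInterceptor(logging);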

I figured out yet another case where writeTo() is called twice.
I use OkHttpClient without Retrofit and without HttpLoggingInterceptor, and I still had the twice-called problem.
Solution: the problem appeared after upgrading Android Studio to 3.1.1 and enabling Advanced Profiling in the project's run configuration, so disable Advanced Profiling.

If you're using writeTo() to track file upload progress, you need to distinguish between the callers of writeTo(). Basically, writeTo() can be called by any interceptor in the chain, e.g. any logging interceptor such as HttpLoggingInterceptor, OkHttpProfilerInterceptor, or StethoInterceptor, and the method provides no context about who is calling it.
The simplest approach (as pointed out in other answers) is to get rid of the interceptors that require access to the request body, but that may not always be feasible.
Another solution is to use the fact that the server call is performed by CallServerInterceptor, which is the last interceptor in the chain (according to the docs), so you can inspect the stack trace before doing any further handling. Yes, this is ugly, but this way you don't have to modify your interceptors or leave room for subtle bugs when someone else comes along and adds another interceptor.
override fun writeTo(sink: BufferedSink) {
    val isCalledByCallServerInterceptor = Thread.currentThread().stackTrace.any { stackTraceElement ->
        stackTraceElement.className == CallServerInterceptor::class.java.canonicalName
    }
    // TODO: write the body here; report upload progress only when isCalledByCallServerInterceptor is true
}

Decreasing the level from BODY to HEADERS and removing HttpLoggingInterceptor is not a solution for me, because it takes away the explanation of what is going on with the API call. You can instead keep a counter variable, e.g. private int firstTimeCounter = 0;, and only do the real upload on the second call:
@Override
public void writeTo(BufferedSink sink) throws IOException {
    firstTimeCounter += 1;
    // The first call comes from HttpLoggingInterceptor buffering the body for its log;
    // only the second call actually writes to the network, so upload (and report progress) then.
    if (firstTimeCounter == 2) {
        long total = 0;
        long read;
        Source source = Okio.source(file); // the original elides this setup; assuming the same file-backed Okio source as in ProgressRequestBody
        try {
            while (total != file.length()) {
                read = source.read(sink.buffer(), SEGMENT_SIZE);
                total += read;
                Log.e("progress", (int) (100 * total / file.length()) + " %");
                sink.flush();
            }
        } finally {
            okhttp3.internal.Util.closeQuietly(source);
        }
    }
}


How should Blocking Operations in a ContainerRequestFilter be handled Quarkus/Vert.x

Background:
We are implementing a signed request mechanism for communication between services. Part of that process generates a digest on the contents of the request body. To validate the body on receipt, we re-generate the digest at the receiver and compare. It's pretty straight-forward stuff.
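For context, a DigestGenerator along those lines might look like the hypothetical sketch below (the real implementation is not shown in the question); the validation filter that uses it follows:
// Hypothetical sketch only: hashes the body and returns "<algorithm>=<base64 hash>"
// so it can be compared directly with the incoming Digest header.
public class DigestGenerator {
    public String generate(String algorithm, byte[] body) throws NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance(algorithm.toUpperCase()); // java.security.MessageDigest
        String hash = Base64.getEncoder().encodeToString(md.digest(body));
        return algorithm + "=" + hash;
    }
}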
@PreMatching
@Priority(Priorities.ENTITY_CODER)
public class DigestValidationFilter implements ContainerRequestFilter {

    private final DigestGenerator generator;

    @Inject
    public DigestValidationFilter(DigestGenerator generator) {
        this.generator = generator;
    }

    @Override
    public void filter(ContainerRequestContext context) throws IOException {
        if (context.hasEntity() && context.getHeaderString(Headers.DIGEST) != null) {
            String digest = context.getHeaderString(Headers.DIGEST);
            ByteArrayOutputStream body = new ByteArrayOutputStream();
            try (InputStream stream = context.getEntityStream()) {
                stream.transferTo(body); // <-- This is line 36 from the provided stack-trace
            }
            String algorithm = digest.split("=", 2)[0];
            try {
                String calculated = generator.generate(algorithm, body.toByteArray());
                if (digest.equals(calculated)) {
                    context.setEntityStream(new ByteArrayInputStream(body.toByteArray()));
                } else {
                    throw new InvalidDigestException("Calculated digest does not match supplied digest. Request body may have been tampered with.");
                }
            } catch (NoSuchAlgorithmException e) {
                throw new InvalidDigestException(String.format("Unsupported hash algorithm: %s", algorithm), e);
            }
        }
    }
}
The above filter is made available to services as a Java library. We also supply a set of request filters that can be used with various HTTP clients, e.g. okhttp3, apache-httpclient, etc. These clients only generate digests when the body is "repeatable", i.e., not streaming.
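For illustration, a client-side okhttp3 interceptor along those lines might look like the hypothetical sketch below (class name, header name, and algorithm are illustrative; the real library code is not shown here):
public class DigestSigningInterceptor implements Interceptor {

    private final DigestGenerator generator;

    public DigestSigningInterceptor(DigestGenerator generator) {
        this.generator = generator;
    }

    @Override
    public Response intercept(Chain chain) throws IOException {
        Request request = chain.request();
        RequestBody body = request.body();
        // only sign "repeatable" bodies; skip streaming (one-shot/duplex) bodies
        if (body != null && !body.isOneShot() && !body.isDuplex()) {
            Buffer buffer = new Buffer();
            body.writeTo(buffer);
            try {
                String digest = generator.generate("sha-256", buffer.readByteArray());
                request = request.newBuilder().header("Digest", digest).build();
            } catch (NoSuchAlgorithmException e) {
                throw new IOException(e);
            }
        }
        return chain.proceed(request);
    }
}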
The Issue:
In Jersey services and Spring Boot services, we do not run into issues. However, when we use Quarkus, we receive the following stack-trace:
2022-09-02 15:18:25 5.13.0 ERROR A blocking operation occurred on the IO thread. This likely means you need to use the @io.smallrye.common.annotation.Blocking annotation on the Resource method, class or javax.ws.rs.core.Application class.
2022-09-02 15:18:25 5.13.0 ERROR HTTP Request to /v1/policy/internal/policies/72575947-45ac-4358-bc40-b5c7ffbd3f35/target-resources failed, error id: c79aa557-c742-43d7-93d9-0e362b2dff79-1
org.jboss.resteasy.reactive.common.core.BlockingNotAllowedException: Attempting a blocking read on io thread
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream$VertxBlockingInput.readBlocking(VertxInputStream.java:242)
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream.readIntoBuffer(VertxInputStream.java:120)
at org.jboss.resteasy.reactive.server.vertx.VertxInputStream.read(VertxInputStream.java:82)
at java.base/java.io.InputStream.transferTo(InputStream.java:782)
at com.###.ciam.jaxrs.DigestValidationFilter.filter(DigestValidationFilter.java:36)
at org.jboss.resteasy.reactive.server.handlers.ResourceRequestFilterHandler.handle(ResourceRequestFilterHandler.java:47)
at org.jboss.resteasy.reactive.server.handlers.ResourceRequestFilterHandler.handle(ResourceRequestFilterHandler.java:8)
at org.jboss.resteasy.reactive.common.core.AbstractResteasyReactiveContext.run(AbstractResteasyReactiveContext.java:141)
at org.jboss.resteasy.reactive.server.handlers.RestInitialHandler.beginProcessing(RestInitialHandler.java:49)
at org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveVertxHandler.handle(ResteasyReactiveVertxHandler.java:17)
at org.jboss.resteasy.reactive.server.vertx.ResteasyReactiveVertxHandler.handle(ResteasyReactiveVertxHandler.java:7)
at io.vertx.ext.web.impl.RouteState.handleContext(RouteState.java:1212)
at io.vertx.ext.web.impl.RoutingContextImplBase.iterateNext(RoutingContextImplBase.java:163)
at io.vertx.ext.web.impl.RoutingContextImpl.next(RoutingContextImpl.java:141)
at io.quarkus.vertx.http.runtime.StaticResourcesRecorder$2.handle(StaticResourcesRecorder.java:67) ... elided ...
I completely understand why Vert.x wants to prevent long-running I/O operations on the request-processing threads. That said, the advice provided in the exception only accounts for I/O operations at the end of request processing, i.e., it assumes the I/O is happening in the endpoint. Although we do control the filter code, it lives in an external library, making it almost like a third-party library.
My Question:
What is the right way to handle this?
I've been scouring documentation, but haven't stumbled on the answer yet (or haven't recognized the answer). Is there a set of recommended docs I should review?
https://quarkus.io/guides/resteasy-reactive#request-or-response-filters
https://smallrye.io/smallrye-mutiny/1.7.0/guides/framework-integration/
@RequestScoped
class Filter(
    private val vertx: Vertx
) {
    // you can run blocking code on Mutiny's Infrastructure defaultWorkerPool
    @ServerRequestFilter
    fun filter(requestContext: ContainerRequestContext): Uni<RestResponse<*>> {
        return Uni.createFrom().item { work() }
            .map<RestResponse<*>> { null }
            .runSubscriptionOn(Infrastructure.getDefaultWorkerPool())
    }

    // or use the vertx.executeBlocking api
    @ServerRequestFilter
    fun filter(requestContext: ContainerRequestContext): Uni<RestResponse<*>> {
        return vertx.executeBlocking(
            Uni.createFrom().item { work() }
                .map { null }
        )
    }

    private fun work() {
        Log.info("filter")
        Thread.sleep(3000)
    }
}
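For readers coming from the Java side of the question, a rough Java translation of the first variant above might look like this sketch (illustrative only; class and method names are not from the original post):
public class BlockingDigestFilter {

    @ServerRequestFilter
    public Uni<RestResponse<?>> filter(ContainerRequestContext requestContext) {
        return Uni.createFrom().item(() -> doBlockingWork(requestContext))
                .map(ignored -> (RestResponse<?>) null) // a null response lets the request continue down the chain
                .runSubscriptionOn(Infrastructure.getDefaultWorkerPool());
    }

    private Object doBlockingWork(ContainerRequestContext requestContext) {
        // blocking work (e.g. reading and re-setting the entity stream) is safe here,
        // because the subscription runs on Mutiny's default worker pool
        return new Object();
    }
}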
In the end, the advice in the exception led me to simply annotating a delegate ContainerRequestFilter:
public class DigestValidationFilterBlocking implements ContainerRequestFilter {

    private final DigestValidationFilter delegate;

    public DigestValidationFilterBlocking(DigestValidationFilter delegate) {
        this.delegate = delegate;
    }

    @Blocking // <-- This annotation allowed Vert.x to accept the I/O operation
    @Override
    public void filter(ContainerRequestContext context) throws IOException {
        delegate.filter(context);
    }
}
I had the same problem. You can try using this in your @ServerRequestFilter:
@Context
HttpServerRequest request;
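A minimal sketch of what that might look like (a hypothetical filter class; assumes RESTEasy Reactive's @ServerRequestFilter and the Vert.x HttpServerRequest type are available):
public class RequestInspectingFilter {

    @Context
    HttpServerRequest request; // the current Vert.x request, injected by the runtime

    @ServerRequestFilter
    public void filter(ContainerRequestContext requestContext) {
        // inspect connection-level data without touching the entity stream
        Log.info("remote address: " + request.remoteAddress());
    }
}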

Coroutines delegate exceptions

Currently I have a scenario where I have a Java callback interface that looks something like this.
Java Callback
interface Callback<T> {
    void onComplete(T result);
    void onException(HttpResponse response, Exception ex);
}
The suspending function for the above looks like this:
suspend inline fun <T> awaitCallback(crossinline block: (Callback<T>) -> Unit): T =
    suspendCancellableCoroutine { cont ->
        block(object : Callback<T> {
            override fun onComplete(result: T) = cont.resume(result)
            override fun onException(response: HttpResponse?, e: Exception?) {
                e?.let { cont.resumeWithException(it) }
            }
        })
    }
My calling function looks like this:
fun getMovies(callback: Callback<Movie>) {
    launch(UI) {
        awaitCallback<Movie> {
            // I want to delegate exceptions here.
            fetchMovies(it)
        }
    }
}
What I'm currently doing to catch exceptions is this:
fun getMovies(callback: CallbackWrapper<Movie>) {
    launch(UI) {
        try {
            val data = awaitCallback<Movie> {
                // I want to delegate exceptions here.
                fetchMovies(it)
            }
            callback.onComplete(data)
        } catch (ex: Exception) {
            callback.onFailure(ex)
        }
    }
}
// I have to make a wrapper Kotlin callback interface to achieve the above
interface CallbackWrapper<T> {
    fun onComplete(result: T)
    fun onFailure(ex: Exception)
}
Questions
The above works, but is there a better way to do this? The main issue is that I'm currently migrating this code from callbacks, so I have ~20 API calls and I don't want to add try/catch everywhere just to delegate the result along with the exception.
Also, I'm only able to get the exception from my suspending function; is there any way to get both the HttpResponse and the exception? Or is it possible to use the existing Java interface?
Is there a better way to delegate the result from getMovies without using a callback?
Is there any better way to delegate the result from getMovies without using callback?
Let me start with some assumptions:
you're using some async HTTP client library. It has some methods to send requests, for example httpGet and httpPost. They take callbacks.
you have ~20 methods like fetchMovies that send HTTP requests.
I propose to create an extension suspend fun for each HTTP client method that sends a request. For example, this turns an async client.httpGet() into a suspending client.awaitGet():
suspend fun <T> HttpClient.awaitGet(url: String) =
    suspendCancellableCoroutine<T> { cont ->
        httpGet(url, object : HttpCallback<T> {
            override fun onComplete(result: T) = cont.resume(result)
            override fun onException(response: HttpResponse?, e: Exception?) {
                e?.also {
                    cont.resumeWithException(it)
                } ?: run {
                    cont.resumeWithException(HttpException(
                        "${response!!.statusCode()}: ${response.message()}"
                    ))
                }
            }
        })
    }
Based on this you can write suspend fun fetchMovies() or any other:
suspend fun fetchMovies(): List<Movie> =
client.awaitGet("http://example.org/movies")
My reduced example is missing the parsing logic that turns the HTTP response into Movie objects, but I don't think this affects the approach.
I'm currently migrating this code from callback so I have ~20 api calls and I don't want to add try/catch everywhere to delegate the result along with the exception.
You don't need a try-catch around each individual call. Organize your code so you just let the exception propagate upwards to the caller and have a central place where you handle exceptions. If you can't do that, it means you've got a specific way to handle each exception; then the try-catch is the best and idiomatic option. It's what you would write if you had a plain blocking API. Especially note how trivial it is to wrap many HTTP calls in a single try-catch, something you can't replicate with callbacks.
I'm only able to get exception from my suspending function is there any way to get both HttpResponse as well as the exception.
This is probably not what you need. What exactly do you plan to do with the response, knowing that it's an error response? In the example above I wrote some standard logic that creates an exception from the response. If you have to, you can catch that exception and provide custom logic at the call site.
I am not so sure whether you really need that awaitCallback at all.
If you already have lots of Callbacks in place and that's why you used it, then your functions probably already have everything in place to work correctly with the Callback; e.g. I would expect methods like the following:
fun fetchMovies(callback: Callback<List<Movie>>) {
    try {
        // get some values from db or from a service...
        callback.onComplete(listOf(Movie(1), Movie(2)))
    } catch (e: Exception) {
        callback.onFailure(e)
    }
}
If you do not have something like this in place, you may not even need awaitCallback at all. So if your fetchMovies function rather has a signature as follows:
fun fetchMovies() : List<Movie>
and in getMovies you pass your Callback, then all you need is probably a simple async, e.g.:
fun getMovies(callback: Callback<List<Movie>>) {
    GlobalScope.launch { // NOTE: this is now a suspend-block, check the parameters for launch
        val job = async { fetchMovies() }
        try {
            callback.onComplete(job.await())
        } catch (e: Exception) {
            callback.onException(e)
        }
    }
}
This sample can of course be changed to many similar variants, e.g. the following will also work:
fun getMovies(callback: Callback<List<Movie>>) {
    GlobalScope.launch { // NOTE: this is now a suspend-block, check the parameters for launch
        val job = async { fetchMovies() } // you could now also cancel/await, or whatever, the job
        job.join() // we just join now as a sample
        job.getCompletionExceptionOrNull()?.also(callback::onFailure)
            ?: job.getCompleted().also(callback::onComplete)
    }
}
You could also add something like job.invokeOnCompletion. If you just wanted to pass any exception to your callback in your current code, you could just have used callback.onException(RuntimeException()) at the place where you put your comment I want to delegate exceptions here..
(Note that I am using Kotlin 1.3, which is an RC now...)

How to share memory over a long chain of method calls?

I need to figure out a way to save and log a method request and response conditionally, with the condition being the latency of the top-level method crossing the p50 latency. The call visualization is as follows:
topLevel() -> method1() -> method2() -> ... -> makeRequest()
makeRequest is where the request, and the response to that request, that I need to log are available.
But I'll only know whether I actually need to log them on the way back up the call stack, once I can tell that the topLevel method took too long.
So to me, the only option is to save the request and response in makeRequest no matter what and make them available to the topLevel method. The topLevel method will check if latency is above the p50 and conditionally log the request and response.
This all leads to the titular question: how do I share memory over a long chain of method calls?
I don't want to be passing objects back through multiple method calls, polluting function signatures.
What is the best pattern for this? Maybe using a local cache to save the request and response and then retrieving it in topLevel? Is there an aspect oriented approach to solving this?
As long as you have control of the code for the top level and down through method1 and method2, this really isn't so hard.
You just need to pass the request down through the calling chain, and pass back the response.
topLevel() -> method1(request) -> method2(request) -> ...
-> makeRequest(request) { ... return response; }
To relate this to a real code example, you can look at how the jersey framework works.
Here's an example of a method where the request is injected, and a response is returned.
@POST
@Consumes({MediaType.TEXT_XML})
@Produces({TEXT_XML_UTF_8})
public Response resource(@Context HttpServletRequest servletRequest) throws Exception {
    ExternalRequest req = makeRequest(servletRequest.getInputStream());
    ExternalResponse resp = externalGateway.doSomething(req);
    return Response.ok(wrapResponse(resp)).build();
}
Although Jersey offers some fancy annotations (@Context and so on), there isn't really a distinguishable design pattern here of any significance - you're just passing down the request object and returning a response.
Of course you can also maintain a cache and pass that up the call stack, or really just a wrapper object for a request and a response, but it's very similar to simply passing the request.
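For illustration, that wrapper-object variant might look like the following minimal sketch (names such as CallContext and p50Millis are illustrative, not from the original post):
// A context object passed down the call chain instead of separate request/response arguments.
public class CallContext {
    public Object request;
    public Object response;
}

// topLevel() creates it, hands it down, and inspects it afterwards:
CallContext ctx = new CallContext();
long start = System.nanoTime();
method1(ctx); // ... -> makeRequest(ctx) stores the request and response in ctx
long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
if (elapsedMillis > p50Millis) { // p50Millis assumed to come from your latency metrics
    log.info("slow call: request={} response={}", ctx.request, ctx.response); // assumes an SLF4J-style logger
}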
This type of functionality is best done using a ThreadLocal. Your makeRequest will add the request and response objects to the ThreadLocal, and topLevel will then remove them and log them if needed. Here is an example:
public class RequestResponseThreadLocal {
    public static ThreadLocal<Object[]> requestResponseThreadLocal = new ThreadLocal<>();
}

public class TopLevel {
    public void topLevel() {
        try {
            new Method1().method1();
            Object[] requestResponse = RequestResponseThreadLocal.requestResponseThreadLocal.get();
            System.out.println(requestResponse[0] + " : " + requestResponse[1]);
        } finally {
            // make sure to clean up what was added to the ThreadLocal, otherwise you will end up with a memory leak when using thread pools
            RequestResponseThreadLocal.requestResponseThreadLocal.remove();
        }
    }
}

public class Method1 {
    public void method1() {
        new Method2().method2();
    }
}

public class Method2 {
    public void method2() {
        new MakeRequest().makeRequest();
    }
}

public class MakeRequest {
    public void makeRequest() {
        Object request = new Object();
        Object response = new Object();
        RequestResponseThreadLocal.requestResponseThreadLocal.set(new Object[]{request, response});
    }
}

Hystrix async methods within javanica not running inside spring-boot java application

I am using spring-cloud-starter (i.e. Spring Boot with all the microservices features). When I create a Hystrix method in a component annotated with the javanica @HystrixCommand and follow the directions on the javanica GitHub site (https://github.com/Netflix/Hystrix/tree/master/hystrix-contrib/hystrix-javanica) to make that method run async, then regardless of whether I use their Future<> or reactive execution Observable<>, nothing runs/executes and I get
java.lang.ClassCastException: springbootdemo.EricComponent$1 cannot be cast to springbootdemo.Eric whenever I attempt to pull the result (in the case of Future<>) or get a callback (in the case of reactive execution; the println's don't trigger, so it really didn't run).
public class Application { ...
}
@RestController
@RequestMapping(value = "/makebunchofcalls/{num}")
class EricController { ..

    @RequestMapping(method = {RequestMethod.POST})
    ArrayList<Eric> doCalls(@PathVariable Integer num) throws IOException {
        ArrayList<Eric> ale = new ArrayList<Eric>(num);
        for (int i = 0; i < num; i++) {
            rx.Observable<Eric> oe = this.ericComponent.doRestTemplateCallAsync(i);
            oe.subscribe(new Action1<Eric>() {
                @Override
                public void call(Eric e) { // AT RUNTIME, ClassCastException
                    ale.add(e);
                }
            });
        }
        return ale;
    }
@Component
class EricComponent { ...

    // async version =========== using reactive execution via rx library from netflix ==============
    @HystrixCommand(fallbackMethod = "defaultRestTemplateCallAsync", commandKey = "dogeAsync")
    public rx.Observable<Eric> doRestTemplateCallAsync(int callNum) {
        return new ObservableResult<Eric>() {
            @Override
            public Eric invoke() { // NEVER CALLED
                try {
                    ResponseEntity<String> result = restTemplate.getForEntity("http://doges/doges/24232/photos", String.class); // actually make a call
                    System.out.println("*************** call successfull: " + new Integer(callNum).toString() + " *************");
                } catch (Exception ex) {
                    System.out.println("=============== call " + new Integer(callNum).toString() + " not successfull: " + ex.getMessage() + " =============");
                }
                return new Eric(new Integer(callNum).toString(), "ok");
            }
        };
    }

    public rx.Observable<Eric> defaultRestTemplateCallAsync(int callNum) {
        return new ObservableResult<Eric>() {
            @Override
            public Eric invoke() {
                System.out.println("!!!!!!!!!!!!! call bombed " + new Integer(callNum).toString() + "!!!!!!!!!!!!!");
                return new Eric(new Integer(callNum).toString(), "bomb");
            }
        };
    }
}
Why would I be getting back an EricComponent$1 instead of an Eric? By the way, Eric is just a simple class with 2 strings; it's omitted.
I figure that I must have to explicitly execute it, but that eludes me because: 1) doing it with Future<>, the queue() method is not available as the documentation claims, and 2) doing it with Observable<>, there really isn't a way to execute it that I can see.
Do you have the @EnableHystrix annotation on your application class?
The subscribe method is asynchronous, and you are trying to populate a list in a synchronous controller method, so there may be a problem there. Can you change the subscribe to toBlockingObservable().forEach() and see if that helps?
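A hedged sketch of that suggestion, using the RxJava API of that era as shown in the question (on RxJava 1.x the call would be toBlocking() instead of toBlockingObservable()):
for (int i = 0; i < num; i++) {
    rx.Observable<Eric> oe = this.ericComponent.doRestTemplateCallAsync(i);
    oe.toBlockingObservable().forEach(new Action1<Eric>() {
        @Override
        public void call(Eric e) {
            ale.add(e); // blocks until each Eric is emitted, so ale is filled before doCalls() returns
        }
    });
}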
Update #1
I was able to duplicate. Your default method should not return an Observable<Eric>, just an Eric.
public Eric defaultRestTemplateCallAsync(final int callNum) {
    System.out.println("!!!!!!!!!!!!! call bombed " + new Integer(callNum) + "!!!!!!!!!!!!!");
    return new Eric(new Integer(callNum).toString(), "bomb");
}
Update #2
See my code here https://github.com/spencergibb/communityanswers/tree/so26372319
Update #3
When I commented out the fallbackMethod attribute, it complained that it couldn't find a public version of EricComponent for AOP. I made EricComponent public static and it worked; a top-level class in its own file would work too. My code, linked above, works (assuming the restTemplate call works) and returns an OK.

Determine target service/method from CXF Interceptor

I'd like to write an interceptor for the Apache CXF JAX-RS implementation that inspects the target service/method for a particular annotation and does some special processing for that annotation.
I can't seem to find anything in the interceptor documentation that describes how to do this. Does anyone have any ideas?
Thanks!
If the interceptor runs fairly late in the chain (like the USER_LOGICAL phase), you should be able to do something like:
Exchange exchange = msg.getExchange();
BindingOperationInfo bop = exchange.get(BindingOperationInfo.class);
MethodDispatcher md = (MethodDispatcher) exchange.get(Service.class).get(MethodDispatcher.class.getName());
Method meth = md.getMethod(bop);
That should give you the Method that was bound in, so you can get the declaring class or the annotations, etc...
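For example, once you have the Method, checking for your annotation is plain reflection (MySpecialAnnotation is a placeholder for whatever annotation you are targeting):
MySpecialAnnotation ann = meth.getAnnotation(MySpecialAnnotation.class);
if (ann == null) {
    // fall back to the declaring class if the method itself is not annotated
    ann = meth.getDeclaringClass().getAnnotation(MySpecialAnnotation.class);
}
if (ann != null) {
    // do the special processing for the annotation here
}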
Ah. I didn't specify that I was using the JAX-RS part of CXF; not sure if that impacts Daniel Kulp's answer, but his solution didn't actually work for me. I believe that is because CXF does things differently when handling JAX-RS.
I came across the source of CXF's JAXRSInInterceptor and saw that this interceptor puts the method info into the Exchange object like so:
message.getExchange().put(OperationResourceInfo.class, ori);
...during the UNMARSHAL phase, which according to the CXF interceptor docs happens before the *_LOGICAL phases. So by writing an interceptor that handles the USER_LOGICAL phase I can do:
message.getExchange().get(OperationResourceInfo.class)
...to get access to the Method and Class<?> of the service handling the call!
Building off the original asker's answer above, I came up with this:
public UserContextInterceptor() {
    super(Phase.USER_LOGICAL);
}

@Override
public void handleMessage(Message message) {
    if (StringUtils.isEmpty(getHeader("some-header-name", message))) {
        final Method method = getTargetMethod(message);
        if (isAnnotated(method.getDeclaringClass().getAnnotations()) || isAnnotated(method.getAnnotations())) {
            final Fault fault = new Fault(new LoginException("Missing user id"));
            fault.setStatusCode(HttpServletResponse.SC_UNAUTHORIZED);
            throw fault;
        }
    }
}

private static Method getTargetMethod(Message message) {
    final Exchange exchange = message.getExchange();
    final OperationResourceInfo resource = exchange.get(OperationResourceInfo.class);
    if (resource == null || resource.getMethodToInvoke() == null) {
        throw new AccessDeniedException("Method is not available");
    }
    return resource.getMethodToInvoke();
}

private static boolean isAnnotated(Annotation[] annotations) {
    for (Annotation annotation : annotations) {
        if (UserRequired.class.equals(annotation.annotationType())) {
            return true;
        }
    }
    return false;
}
It has been quite some time since the accepted answer, but there are a few supporting abstractions provided in cxf-rt-core-2.7.3.jar.
One of them is org.apache.cxf.interceptor.security.AbstractAuthorizingInInterceptor.
This excerpt from its source might be a good reference:
protected Method getTargetMethod(Message m) {
    BindingOperationInfo bop = m.getExchange().get(BindingOperationInfo.class);
    if (bop != null) {
        MethodDispatcher md = (MethodDispatcher)
            m.getExchange().get(Service.class).get(MethodDispatcher.class.getName());
        return md.getMethod(bop);
    }
    Method method = (Method) m.get("org.apache.cxf.resource.method");
    if (method != null) {
        return method;
    }
    throw new AccessDeniedException("Method is not available : Unauthorized");
}
