I need to figure out a way to save and log a method request and response conditionally, with the condition being the latency of the top-level method crossing the p50 latency. The call visualization is as follows:
topLevel() -> method1() -> method2() -> ... -> makeRequest()
makeRequest is where the request, and the response to it, live; those are what I need to log.
But I'll only know whether I actually need to log them at some point on the way back up the call stack, once I can tell that the topLevel method is taking too long.
So to me, the only option is to save the request and response in makeRequest no matter what and make that available to the topLevel method. The topLevel method will check if latency is above p50 and conditionally log the request and response.
This all leads to the titular question: How to share memory over long chain of method calls?
I don't want to be passing objects back through multiple method calls, polluting function signatures.
What is the best pattern for this? Maybe using a local cache to save the request and response and then retrieving them in topLevel? Is there an aspect-oriented approach to solving this?
As long as you have control of the code for the top level and down through method1 and method2, this really isn't so hard.
You just need to pass the request down through the calling chain, and pass back the response.
topLevel() -> method1(request) -> method2(request) -> ...
-> makeRequest(request) { ... return response; }
To relate this to a real code example, you can look at how the Jersey framework works.
Here's an example of a method where the request is injected, and a response is returned.
@POST
@Consumes({MediaType.TEXT_XML})
@Produces({TEXT_XML_UTF_8})
public Response resource(@Context HttpServletRequest servletRequest) throws Exception {
    ExternalRequest req = makeRequest(servletRequest.getInputStream());
    ExternalResponse resp = externalGateway.doSomething(req);
    return Response.ok(wrapResponse(resp)).build();
}
Although Jersey offers some fancy annotations (@Context and so on), there isn't really a distinguishable design pattern here of any significance - you're just passing down the request object and returning a response.
Of course you can also maintain a cache and pass that up the call stack, or really just a wrapper object for a request and a response, but it's very similar to simply passing the request.
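For instance, the wrapper-object variant can be sketched as below. This is a minimal, self-contained illustration, not the asker's actual code: all class names, payloads, and the p50 threshold are stand-ins.

```java
// Sketch of the "wrapper object" approach: a small context object is passed
// down the chain, filled in by makeRequest, and inspected at the top level.
// All names and payloads here are illustrative.
public class LatencyLoggingSketch {

    public static class CallContext {
        public Object request;
        public Object response;
    }

    static final long P50_LATENCY_MS = 100; // stand-in for the real p50 value

    public static CallContext topLevel() {
        CallContext ctx = new CallContext();
        long start = System.nanoTime();
        method1(ctx);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (elapsedMs > P50_LATENCY_MS) {
            // Only log when the whole call was slower than the p50 threshold.
            System.out.println("Slow call: " + ctx.request + " -> " + ctx.response);
        }
        return ctx; // returned only so the sketch is easy to inspect
    }

    static void method1(CallContext ctx) { method2(ctx); }
    static void method2(CallContext ctx) { makeRequest(ctx); }

    static void makeRequest(CallContext ctx) {
        ctx.request = "request-payload";   // stand-in for the real request
        ctx.response = "response-payload"; // stand-in for the real response
    }
}
```

The signature pollution is limited to one extra parameter, and unlike a ThreadLocal there is nothing to clean up afterwards.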
This type of functionality is best done using a ThreadLocal. Your makeRequest will add the request and response objects to the ThreadLocal, and then topLevel will remove them and log them if needed. Here is an example:
public class RequestResponseThreadLocal {
    public static ThreadLocal<Object[]> requestResponseThreadLocal = new ThreadLocal<>();
}

public class TopLevel {
    public void topLevel() {
        try {
            new Method1().method1();
            Object[] requestResponse = RequestResponseThreadLocal.requestResponseThreadLocal.get();
            System.out.println(requestResponse[0] + " : " + requestResponse[1]);
        } finally {
            // Make sure to clean up what was added to the ThreadLocal, otherwise you
            // will end up with a memory leak when using thread pools.
            RequestResponseThreadLocal.requestResponseThreadLocal.remove();
        }
    }
}

public class Method1 {
    public void method1() {
        new Method2().method2();
    }
}

public class Method2 {
    public void method2() {
        new MakeRequest().makeRequest();
    }
}

public class MakeRequest {
    public void makeRequest() {
        Object request = new Object();
        Object response = new Object();
        RequestResponseThreadLocal.requestResponseThreadLocal.set(new Object[]{ request, response });
    }
}
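If the untyped Object[] feels fragile, the same pattern works with a small typed holder. This is a sketch with illustrative names, not part of the original answer:

```java
// Typed variant of the ThreadLocal holder: same pattern as the Object[]
// version, but the request/response pair gets a named, immutable type.
public class RequestResponseHolder {

    public static final class Entry {
        public final Object request;
        public final Object response;

        public Entry(Object request, Object response) {
            this.request = request;
            this.response = response;
        }
    }

    private static final ThreadLocal<Entry> CURRENT = new ThreadLocal<>();

    // Called from the bottom of the chain (makeRequest).
    public static void set(Object request, Object response) {
        CURRENT.set(new Entry(request, response));
    }

    // Called from the top of the chain (topLevel).
    public static Entry get() {
        return CURRENT.get();
    }

    // Always call this from a finally block; with thread pools a stale
    // entry would otherwise leak into the next task run on the same thread.
    public static void clear() {
        CURRENT.remove();
    }
}
```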
Retrofit 2 RequestBody writeTo() method called twice. The code I used is given below:
ProgressRequestBody requestVideoFile = new ProgressRequestBody(videoFile, new ProgressRequestBody.UploadCallbacks() {
    VideoUploadStore store = new VideoUploadStore();

    @Override
    public void onProgressUpdate(int percentage) {
        if (!mIsCancelled) {
            Log.i("UploadServiceManager", "Read Percentage : " + percentage);
            data.setUploadPercentage(percentage);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onError() {
        if (!mIsCancelled) {
            data.setUploadPercentage(0);
            store.updateUploadData(data);
        }
    }

    @Override
    public void onFinish() {
    }
});
MultipartBody.Part multipartVideo = MultipartBody.Part.createFormData("File", videoFile.getName(), requestVideoFile);
The solution below might help you out, although it might be too late. :p
Remove the HttpLoggingInterceptor object in your API client, which will stop writeTo() from executing twice. Basically, HttpLoggingInterceptor loads the data buffer first (for internal logging purposes) by calling writeTo(), and then calls writeTo() again to upload the data to the server.
HttpLoggingInterceptor logging = new HttpLoggingInterceptor();
logging.setLevel(HttpLoggingInterceptor.Level.BODY);
httpClient.addInterceptor(logging);
Decreasing the log level from BODY to HEADERS, BASIC or NONE solved this problem for me.
I figured out yet another case where writeTo() is called twice. I use OkHttpClient without Retrofit and without HttpLoggingInterceptor, and I still had the problem.
Solution: the problem appeared after upgrading Android Studio to 3.1.1 and enabling Advanced Profiling in the run configuration. So disable Advanced Profiling.
If you're using writeTo() to track file upload progress, you need to distinguish between the callers of writeTo(). Basically, writeTo() can be called by any interceptor in the chain, e.g. any logging interceptor such as HttpLoggingInterceptor, OkHttpProfilerInterceptor or StethoInterceptor, and this method provides no context for that.
The simplest approach (as pointed out by other answers) is to get rid of those interceptors that require access to the request body. But this may be not always feasible.
Another solution is to make use of the fact that a server call is performed by a CallServerInterceptor, which is the last interceptor in the chain (according to the docs). You can inspect the stack trace prior to further handling. Yes, this is ugly. But this way you don't have to modify your interceptors or leave room for subtle bugs when someone else comes along and adds another interceptor.
override fun writeTo(sink: BufferedSink) {
    val isCalledByCallServerInterceptor = Thread.currentThread().stackTrace.any { stackTraceElement ->
        stackTraceElement.className == CallServerInterceptor::class.java.canonicalName
    }
    // TODO
}
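The same stack-trace check can be sketched in plain Java. To keep the snippet self-contained, a stand-in nested class replaces the real CallServerInterceptor; the technique is identical, only the class name being searched for changes:

```java
// Detect whether a particular class appears anywhere on the current call
// stack. FakeInterceptor is a stand-in for okhttp3's CallServerInterceptor
// so this sketch runs without OkHttp on the classpath.
public class StackTraceCheck {

    static class FakeInterceptor {
        static boolean run() {
            // When called from inside this class, the check should find it.
            return calledFrom(FakeInterceptor.class.getName());
        }
    }

    static boolean calledFrom(String className) {
        for (StackTraceElement element : Thread.currentThread().getStackTrace()) {
            if (className.equals(element.getClassName())) {
                return true;
            }
        }
        return false;
    }
}
```

In a real RequestBody you would compare against `CallServerInterceptor.class.getName()` and only report upload progress when that frame is present.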
Decreasing the level from BODY to HEADERS, or removing HttpLoggingInterceptor, was not a solution for me, because that takes away the explanation of what is going on with the API call. You can just use a counter variable like:
private int firstTimeCounter = 0; and then:
@Override
public void writeTo(BufferedSink sink) throws IOException {
    firstTimeCounter += 1;
    .......
    .......
    if (firstTimeCounter == 2) {
        try {
            while (total != file.length()) {
                read = source.read(sink.buffer(), SEGMENT_SIZE);
                total += read;
                Log.e("progress ", total + " %");
                sink.flush();
            }
        } finally {
            okhttp3.internal.Util.closeQuietly(source);
        }
    }
}
In order to implement long polling I've tried different solutions and did not get any good results.
So I decided to look into asynchronous methods and DeferredResult.
Here is my REST controller implementation:
@Controller("sessionStateRest")
@RequestMapping("ui")
public class SessionStateRest extends BaseRestResource {

    private final Queue<DeferredResult<ModelAndView>> mavQueue = new ConcurrentLinkedQueue<>();

    /**
     * Rest to check session state.
     *
     * @return string with session state
     */
    @RequestMapping(value = "/session")
    public @ResponseBody DeferredResult<ModelAndView> sessionState() {
        final DeferredResult<ModelAndView> stateResult = new DeferredResult<>();
        this.mavQueue.add(stateResult);
        return stateResult;
    }

    @Scheduled(fixedDelay = 5000)
    public void processQueue() {
        for (DeferredResult<ModelAndView> result : mavQueue) {
            if (null == SecurityHelper.getUserLogin()) {
                result.setResult(createSuccessResponse("Invalidated session"));
                mavQueue.remove(result);
            }
        }
    }
}
The idea is that it should process the queue of requests every 5 seconds and call setResult if the condition is true.
The synchronous version would be something like this:
@RequestMapping(value = "/sync")
public ModelAndView checkState() {
    if (null == SecurityHelper.getUserLogin()) {
        createSuccessResponse("Invalidated session");
    }
    return null; // return something instead
}
But after some time I get this exception:
java.lang.IllegalStateException: Cannot forward after response has been committed
    at org.apache.catalina.core.ApplicationDispatcher.doForward(ApplicationDispatcher.java:349) ~[tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.core.ApplicationDispatcher.forward(ApplicationDispatcher.java:339) ~[tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.core.StandardHostValve.custom(StandardHostValve.java:467) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.core.StandardHostValve.status(StandardHostValve.java:338) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.core.StandardHostValve.throwable(StandardHostValve.java:428) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.core.AsyncContextImpl.setErrorState(AsyncContextImpl.java:417) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.catalina.connector.CoyoteAdapter.asyncDispatch(CoyoteAdapter.java:294) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.coyote.http11.AbstractHttp11Processor.asyncDispatch(AbstractHttp11Processor.java:1567) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:583) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312) [tomcat-embed-core-7.0.39.jar:7.0.39]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_67]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_67]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_67]
What's the problem? Should I set the timeout for DeferredResult?
I think the problem comes from the @ResponseBody annotation. It tells Spring that the controller method will directly return the body of the response. But it does not, because it returns a ModelAndView. So Spring tries to send the method's return value directly to the client (and presumably sends and commits an empty response); then the ModelAndView handler tries to forward to a view with an already committed response, causing the error.
You should at least remove the @ResponseBody annotation, since it is not what the synchronous equivalent would have.
But that's not all:
- You write final DeferredResult<... - IMHO the final should not be there, since you will modify the DeferredResult later.
- You test the logged-in user in the scheduled asynchronous thread. This should not work, since a common SecurityHelper uses thread-local storage for this info, and the actual processing occurs on another thread. The Javadoc for DeferredResult even says: "For example, one might want to associate the user used to create the DeferredResult by extending the class and adding an additional property for the user. In this way, the user could easily be accessed later without the need to use a data structure to do the mapping."
- You do not say how you configured async support. The Spring reference manual says: "The MVC Java config and the MVC namespace both provide options for configuring async request processing. WebMvcConfigurer has the method configureAsyncSupport while <mvc:annotation-driven> has an <async-support> sub-element."
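On the second point, the advice to capture the user at creation time can be illustrated framework-free. In the sketch below a plain CompletableFuture stands in for DeferredResult, and all class and method names are illustrative, not Spring API:

```java
import java.util.Queue;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentLinkedQueue;

// Framework-free sketch of the long-polling queue: each pending result
// carries the user captured on the request thread, so the scheduled sweep
// never needs to read a thread-local security context.
public class SessionPollSketch {

    public static final class PendingCheck {
        public final String user; // captured when the request arrived
        public final CompletableFuture<String> result = new CompletableFuture<>();

        public PendingCheck(String user) {
            this.user = user;
        }
    }

    private final Queue<PendingCheck> queue = new ConcurrentLinkedQueue<>();

    // Called on the request thread, where the security context is valid.
    public PendingCheck register(String currentUser) {
        PendingCheck pending = new PendingCheck(currentUser);
        queue.add(pending);
        return pending;
    }

    // Called periodically from a scheduler thread; it only looks at the
    // user snapshot stored in each PendingCheck, never at thread-locals.
    public void sweep() {
        for (PendingCheck pending : queue) {
            if (pending.user == null) { // session gone
                pending.result.complete("Invalidated session");
                queue.remove(pending);
            }
        }
    }
}
```

With Spring you would do the same thing by subclassing DeferredResult to hold the user, as the Javadoc quoted above suggests.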
I'm looking for some guidance on real unit testing for Restlet components, and specifically extractors. There is plenty of advice on running JUnit against entire endpoints, but being picky, that is not unit testing but integration testing. I really don't want to have to set up an entire routing system and Spring just to check an extractor against a mock data repository.
The extractor looks like this:
public class CaseQueryExtractor extends Extractor {
    protected int beforeHandle(Request request, Response response) {
        extractFromQuery("offset", "offset", true);
        extractFromQuery("limit", "limit", true);
        // Stuff happens...
        attributes.put("query", query);
        return CONTINUE;
    }
}
I'm thinking part of the virtue of Restlet is that its nice routing model ought to make unit testing easy, but I can't figure out what I need to do to actually exercise extractFromQuery and its friends, and all my logic that builds a query object, without mocking so much that I'm no longer testing against a realistic web request.
And yes, I am using Spring, but I don't want to have to set the whole context for this -- I'm not integration testing as I haven't actually finished the app yet. I'm happy to inject manually, once I know what I need to make to get this method called.
Here's where I'm at now:
public class CaseQueryExtractorTest {

    private class TraceRestlet extends Restlet {
        // Does nothing, but prevents warning shouts
    }

    private CaseQueryExtractor extractor;

    @Before
    public void initialize() {
        Restlet mock = new TraceRestlet();
        extractor = new CaseQueryExtractor();
        extractor.setNext(mock);
    }

    @Test
    public void testBasicExtraction() {
        Reference reference = new Reference();
        reference.addQueryParameter("offset", "5");
        reference.addQueryParameter("limit", "3");
        Request request = new Request(Method.GET, reference);
        Response response = extractor.handle(request);
        extractor.handle(request, response);
        CaseQuery query = (CaseQuery) request.getAttributes().get("query");
        assertNotNull(query);
    }
}
Which of course fails, as whatever setup I am doing isn't enough for Restlet to extract the query parameters.
Any thoughts or pointers?
There is a test module in Restlet that can provide you some hints about unit testing. See https://github.com/restlet/restlet-framework-java/tree/master/modules/org.restlet.test/src/org/restlet/test.
You can have a look at class HeaderTestCase (see https://github.com/restlet/restlet-framework-java/blob/master/modules/org.restlet.test/src/org/restlet/test/HeaderTestCase.java).
For information, if you use the attributes from the request, your unit test will pass ;-) See below:
public class CaseQueryExtractor extends Extractor {
    protected int beforeHandle(Request request, Response response) {
        extractFromQuery("offset", "offset", true);
        extractFromQuery("limit", "limit", true);
        // Stuff happens...
        CaseQuery query = new CaseQuery();
        Map<String, Object> attributes = request.getAttributes();
        attributes.put("query", query);
        return CONTINUE;
    }
}
I don't know if you want to go further...
Hope it helps you,
Thierry
I'm working on a Play 2.3 (Java) application and I need a custom Action Composition to log the request and response. With what I've got so far, I am able to get the body of the request, but not the response:
import play.libs.F;
import play.mvc.Action;
import play.mvc.Http;
import play.mvc.Result;
public class LogAction extends Action.Simple {
    public F.Promise<Result> call(Http.Context ctx) throws Throwable {
        // Request body
        String requestBody = ctx.request().body().asText();

        // Need to get response body here
        // String responseBody = ???

        return delegate.call(ctx);
    }
}
How do I get the response body in this scenario? If it's difficult to do in Java, it may as well be in Scala; however, it has to work with a Java controller method and the @With annotation.
@Override
public F.Promise<Result> call(Http.Context ctx) throws Throwable {
    F.Promise<Result> call = delegate.call(ctx);
    return call.map((r) -> {
        byte[] body = JavaResultExtractor.getBody(r, 0L);
        Logger.info(new String(body));
        return r;
    });
}
You can use play.core.j.JavaResultExtractor to extract the body from the response. Keep in mind that getBody(..) blocks until the response is ready, so consider calling onRedeem instead of map.
Have you tried something like this:
public class VerboseAction extends play.mvc.Action.Simple {
    public F.Promise<Result> call(Http.Context ctx) throws Throwable {
        Logger.info("Calling action for " + ctx);
        F.Promise<Result> resultPromise = delegate.call(ctx);
        resultPromise.map(result -> {
            Logger.info(String.valueOf(result.toScala().header().status()));
            Logger.info(result.toScala().body().toString());
            return result;
        });
        return resultPromise;
    }
}
The body will be returned as a play.api.libs.iteratee.Enumerator. Now the hard part is to work with this. First you need to understand the concept of Iteratee and what role the Enumerator plays in it. Hint: think of the Enumerator as a producer of data and think of the Iteratee as the consumer of this data.
Now on this Enumerator you can run an Iteratee that will transform the data chunks into the type you want.
The bad news is that you need to implement the play.api.libs.iteratee.Iteratee trait. As you can see, it resides in the api subpackage, which means it is part of the Scala world in Play. Maybe in this case it would be much easier to use Scala for this part of your task. Unfortunately I cannot provide you with an example implementation, but I hope it would not be that hard. I think this is something really missing on the Java side of Play.
Our application uses several back-end services and we maintain wrappers which contain the methods to make the actual service calls. If any exception occurs in any of those methods while invoking a service, we throw a custom exception encapsulating the original exception as shown below.
interface IServiceA {
    public void submit(String user, String attributes);
}

public class ServiceAWrapper implements IServiceA {

    private ActualService getActualService() {
        .....
    }

    public void submit(String user, String attributes) {
        try {
            Request request = new Request();
            request.setUser(user);
            request.setAttributes(attributes);
            getActualService().call(request);
        } catch (ServiceException1 e) {
            throw new MyException(e, reason1);
        } catch (ServiceException2 e) {
            throw new MyException(e, reason2);
        }
    }
}
I would like to know if there's any framework that would allow me to:
1. capture (and probably log) all the parameters passed to my wrapper methods at run-time, if the methods are called.
2. capture the actual exception object (the MyException instance in the above example), if any is thrown, so that I could append the passed parameters to the object at run-time.
I am currently exploring AspectJ to see if it can address my requirements, but I am not sure whether it can capture the parameters passed to methods at runtime as well as any exception objects that occur.
Thanks.
With AspectJ, you can use around advice to execute advice instead of the code at the join point. You can then execute the actual join-point from within the advice by calling proceed. This would allow you to capture the input parameters, log them, and proceed to call the actual method.
Within the same advice you could capture any exceptions thrown from the method, and inspect or log them before passing them back up to higher levels.
Matt B's answer is right. Specifically, you can do something like this:
aspect MonitorServiceCalls {
    private final Logger LOG = LoggerFactory.getLogger("ServiceCallLog");

    Object around() throws MyException: call(public * *(..) throws MyException)
                                        && target(IServiceA+) {
        MethodSignature msig = (MethodSignature) thisJoinPoint.getSignature();
        String fullMethName = msig.getMethod().toString();
        try {
            Object result = proceed();
            LOG.info("Successful call to {} with arguments {}",
                     fullMethName,
                     thisJoinPoint.getArgs());
            return result;
        } catch (MyException e) {
            LOG.warn("MyException thrown from {}: {}", msig.getMethod(), e);
            throw e;
        }
    }
}
AspectJ is the right option. You will be able to get hold of the parameters by way of a JoinPoint object passed to your advice methods. You can also get hold of the exception, either by implementing an after-throwing advice or an around advice.