So here's the situation: I'm implementing caching for our webapp using vertx-redis (we were formerly using lettuce). The mechanism is pretty simple: an annotation we use on endpoints invokes the redis client (whatever implementation we are using), and if there is cached info for the given key it should be used as the response body and the request should be finished with no further processing.
But there's a really annoying behavior with the vertx-redis implementation: ending the request doesn't stop the processing. I make the request and get the quick response since there was cached info, but I can still see in the logs that the app keeps processing, as if the request were still open. I believe it's because I'm ending the response inside the handler for the Redis client call, like this:
client.get("key", onResponse -> {
if (onResponse.succeeded() && onResponse.result() != null) {
//ending request from here
}
});
I realize that I could maybe reproduce the behavior as it was before if I could do something like this:
String cachedInfo = client.get("key").map(onResponse -> onResponse.result());
// endResponse
But as we know, vertx-redis exposes a fluent API and every method returns the same RedisClient instance, so that isn't possible. I also thought about doing something like this:
private String cachedInfo;
...
client.get("key", onResponse -> {
    if (onResponse.succeeded()) {
        this.cachedInfo = onResponse.result();
    }
});
if (cachedInfo != null) { // The value may still be unset here, since the lambda runs asynchronously
    //end request
}
I really don't know what to do. Is there a way to return the contents of the AsyncResult to a variable, or to set it to a variable synchronously somehow? I've also been searching for a way to stop the whole flow of the current request, but I haven't found any satisfactory, non-aggressive solution so far; I'm really open to that option too.
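For reference, this is roughly the shape I would expect the cache check to take if the whole flow could be stopped from inside the callback. It's only a sketch assuming a plain Vert.x Web route with a RoutingContext named ctx; our real code goes through the annotation mechanism described above, so the names are illustrative.
router.route("/some/endpoint").handler(ctx -> {
    client.get("key", ar -> {
        if (ar.succeeded() && ar.result() != null) {
            // Cache hit: write the cached body and end the response without
            // calling ctx.next(), so no later handler runs for this request.
            ctx.response().putHeader("Content-Type", "application/json").end(ar.result());
        } else {
            // Cache miss: hand the request on to the normal processing chain.
            ctx.next();
        }
    });
});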
I have a Jetty server which does some logic; when it finishes, it creates a JSON string (using Jackson) and sends it as the response. If there is an exception while creating the JSON, a JsonProcessingException is thrown. This exception bubbles up to an UncaughtErrorHandler (which extends Jetty's ErrorHandler), which logs the exception and returns a failure message with status code 500.
This is just for a backend API.
The endpoint is not idempotent (it is a POST endpoint); the state of the application (i.e. the database) changes when the endpoint is hit and the logic is applied.
Now if a JsonProcessingException occurs, the user will get the failure message and will not know that the process/logic has actually been completed.
How do I handle this?
Here are my thoughts on possible solutions:
Leave the existing behaviour; if the user complains, tech support can clarify that the process has gone through. Or the error will alert support, who will check the logs and contact the user to say it has gone through.
Make the endpoint idempotent (or similar, i.e. no repeated change to the state of the app), so that the user can send the same request (with the same body) again and get a response (when it works, i.e. no JsonProcessingException) stating that it has already been done, or cannot be done again because it has already been done.
Catch the JsonProcessingException when creating the JSON string, log it with the exception message, and create a response without JSON informing the user that the process has been done. This means the user will need to handle two different kinds of response, but it reduces the human interaction of the above (current) solution.
Or convert/wrap it in a runtime exception (or another exception) and throw that in the catch block, but with a better exception message (i.e. the process was completed). Then in the error handler I can display this exception message in the response body when it finds that specific exception. This way the user will know that the process was completed and will not send another request. But as above, the user will have to handle two different types of response. (See the sketch after the example code below.)
Do not use Jackson to create the JSON string; build it manually using String.format() and a JSON template. This is fine for simple JSON, but complex JSON would be a nightmare.
Have some logic which checks whether the previous call completed but was not confirmed in the response, and then notifies the user (via some channel, i.e. email/SMS) with the correct details. Seems like a lot of work.
Do you have any other suggestions?
Here is some example code to show where this is happening:
private String createFailedResponseBodyJson(FailedPlaneLandStatus failedPlaneLandStatus) throws JsonProcessingException {
LinkedHashMap<String, String> jsonBody = new LinkedHashMap<>();
jsonBody.put("PlaneId", failedPlaneLandStatus.planeId.value);
jsonBody.put("PlaneStatus", failedPlaneLandStatus.planeStatus.name());
jsonBody.put("AirportStatus", failedPlaneLandStatus.airportStatus.name());
jsonBody.put("LandFailureReason", failedPlaneLandStatus.failureMessage.toString());
return new ObjectMapper().setDefaultPrettyPrinter(new DefaultPrettyPrinter())
.writerWithDefaultPrettyPrinter().writeValueAsString(jsonBody);
}
It is the writeValueAsString() method which throws the JsonProcessingException.
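To illustrate the wrap-and-rethrow option from the list above, here is a rough sketch; ProcessCompletedException is a hypothetical name, not something that exists in the code base:
class ProcessCompletedException extends RuntimeException {
    ProcessCompletedException(String message, Throwable cause) {
        super(message, cause);
    }
}

private String createFailedResponseBodyJson(FailedPlaneLandStatus failedPlaneLandStatus) {
    LinkedHashMap<String, String> jsonBody = new LinkedHashMap<>();
    jsonBody.put("PlaneId", failedPlaneLandStatus.planeId.value);
    jsonBody.put("PlaneStatus", failedPlaneLandStatus.planeStatus.name());
    jsonBody.put("AirportStatus", failedPlaneLandStatus.airportStatus.name());
    jsonBody.put("LandFailureReason", failedPlaneLandStatus.failureMessage.toString());
    try {
        return new ObjectMapper().writerWithDefaultPrettyPrinter().writeValueAsString(jsonBody);
    } catch (JsonProcessingException e) {
        // The database change has already happened by this point, so signal that the
        // process itself completed even though the response body could not be built.
        throw new ProcessCompletedException(
                "Plane landing was processed, but the response could not be serialized", e);
    }
}
The error handler could then look for ProcessCompletedException and put its message in the response body.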
I think I am happy with the existing behaviour (the first bullet point), but I just want to know whether the other solutions (the other bullet points) are viable, or whether there is another solution.
Thanks
The code base I am working with uses Spring and Jersey.
I am on this new code base, and within the 500 exceptions I get back from a REST call, in response.context.entityContent, I found that the RO containing the true exception is a ByteArrayInputStream.
Right now I can extract the server exception message with ex.getResponse().readEntity(CustomRO.class).getMessage().
The 500 exception carrying the RO, however, does not contain the cause (ex.getCause() is null).
Is there a way I can apply readEntity to all exceptions on the client side, get the exception out of the RO, and set it as the cause of the 500 InternalServerErrorException?
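Something along these lines is what I'm imagining, as a rough sketch only (target and SomeResult are placeholders, and the wrapping RuntimeException stands in for whatever exception type would make sense):
try {
    return target.request().get(SomeResult.class);
} catch (InternalServerErrorException ex) {
    // Read the server-side RO out of the response entity and attach its message
    // as the cause, so callers can see the real failure via getCause().
    CustomRO ro = ex.getResponse().readEntity(CustomRO.class);
    throw new InternalServerErrorException(ro.getMessage(), ex.getResponse(),
            new RuntimeException(ro.getMessage(), ex));
}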
Please correct me if I am wrong in any of my thoughts, and let me know if there is any standard that I should follow.
Also, it is my first time asking here, so I am sorry if my wording is long or not to the point.
I have a Java application which uses Spring's RestTemplate API to write concise, readable consumers of JSON REST services:
In essence:
RestTemplate rest = new RestTemplate(clientHttpRequestFactory);
ResponseEntity<ItemList> response = rest.exchange(url,
HttpMethod.GET,
requestEntity,
ItemList.class);
for(Item item : response.getBody().getItems()) {
handler.onItem(item);
}
The JSON response contains a list of items, and as you can see, I have an event-driven design in my own code to handle each item in turn. However, the entire list sits in memory as part of the response that RestTemplate.exchange() produces.
I would like the application to be able to handle responses containing large numbers of items - say 50,000, and in this case there are two issues with the implementation as it stands:
Not a single item is handled until the entire HTTP response has been transferred - adding unwanted latency.
The huge response object sits in memory and can't be GC'd until the last item has been handled.
Is there a reasonably mature Java JSON/REST client API out there that consumes responses in an event-driven manner?
I imagine it would let you do something like:
RestStreamer rest = new RestStreamer(clientHttpRequestFactory);
// Tell the RestStreamer "when, while parsing a response, you encounter a JSON
// element matching JSONPath "$.items[*]" pass it to "handler" for processing.
rest.onJsonPath("$.items[*]").handle(handler);
// Tell the RestStreamer to make an HTTP request, parse it as a stream.
// We expect "handler" to get passed an object each time the parser encounters
// an item.
rest.execute(url, HttpMethod.GET, requestEntity);
I appreciate I could roll my own implementation of this behaviour with streaming JSON APIs from Jackson, GSON etc. -- but I'd love to be told there was something out there that does it reliably with a concise, expressive API, integrated with the HTTP aspect.
A couple of months later; back to answer my own question.
I didn't find an expressive API to do what I want, but I was able to achieve the desired behaviour by getting the HTTP body as a stream, and consuming it with a Jackson JsonParser:
ClientHttpRequest request =
clientHttpRequestFactory.createRequest(uri, HttpMethod.GET);
ClientHttpResponse response = request.execute();
return handleJsonStream(response.getBody(), handler);
... with handleJsonStream designed to handle JSON that looks like this:
{ items: [
{ field: value, ... },
{ field: value, ... },
... thousands more ...
] }
... it validates the tokens leading up to the start of the array; it creates an Item object each time it encounters an array element, and gives it to the handler.
// important that the JsonFactory comes from an ObjectMapper, or it won't be
// able to do readValueAs()
static JsonFactory jsonFactory = new ObjectMapper().getFactory();
public static int handleJsonStream(InputStream stream, ItemHandler handler) throws IOException {
JsonParser parser = jsonFactory.createJsonParser(stream);
verify(parser.nextToken(), START_OBJECT, parser);
verify(parser.nextToken(), FIELD_NAME, parser);
verify(parser.getCurrentName(), "items", parser);
verify(parser.nextToken(), START_ARRAY, parser);
int count = 0;
while(parser.nextToken() != END_ARRAY) {
verify(parser.getCurrentToken(), START_OBJECT, parser);
Item item = parser.readValueAs(Item.class);
handler.onItem(item);
count++;
}
parser.close(); // hope it's OK to ignore remaining closing tokens.
return count;
}
verify() is just a private static method which throws an exception if the first two arguments aren't equal.
The key thing about this method is that no matter how many items there are in the stream, it only ever holds a reference to one Item.
You can try JsonSurfer, which is designed to process JSON streams in an event-driven style.
JsonSurfer surfer = JsonSurfer.jackson();
Builder builder = config();
builder.bind("$.items[*]", new JsonPathListener() {
@Override
public void onValue(Object value, ParsingContext context) throws Exception {
// handle the value
}
});
surfer.surf(new InputStreamReader(response.getBody()), builder.build());
Is there no way to break up the request? It sounds like you should use paging. Make it so that you can request the first 100 results, the next 100 results, and so on. The request should take a starting index and a count. That's very common behavior for REST services and it sounds like the solution to your problem.
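For example, a rough sketch of the client side, assuming the service accepted hypothetical start and count query parameters (they are not part of your current API):
int start = 0;
final int count = 100;
while (true) {
    ResponseEntity<ItemList> page = rest.exchange(url + "?start=" + start + "&count=" + count,
            HttpMethod.GET, requestEntity, ItemList.class);
    List<Item> items = page.getBody().getItems();
    for (Item item : items) {
        handler.onItem(item);   // only one page is ever in memory at a time
    }
    if (items.size() < count) {
        break;                  // last page reached
    }
    start += count;
}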
The whole point of REST is that it is stateless; it sounds like you're trying to make it stateful. That's anathema to REST, so you're not going to find any libraries written that way.
The transactional nature of REST is very much intentional by design, so you won't get around it easily. You'll be fighting against the grain if you try.
From what I've seen, wrapping frameworks (like you are using) make things easy by deserializing the response into an object. In your case, a collection of objects.
However, to use things in a streaming fashion, you may need to get at the underlying HTTP response stream. I am most familiar with Jersey, which exposes https://jersey.java.net/nonav/apidocs/1.5/jersey/com/sun/jersey/api/client/ClientResponse.html#getEntityInputStream()
It would be used by invoking
Client client = Client.create();
WebResource webResource = client.resource("http://...");
ClientResponse response = webResource.accept("application/json")
.get(ClientResponse.class);
InputStream is = response.getEntityInputStream();
This provides you with the stream of data coming in. The next step is to write the streaming part. Given that you are using JSON, there are options at various levels, including http://wiki.fasterxml.com/JacksonStreamingApi or http://argo.sourceforge.net/documentation.html. They can consume the InputStream.
These don't really make use of the full deserialization that can be done, but you could use them to parse out an element of a JSON array and pass that item to a typical JSON object mapper (like Jackson, GSON, etc.). This becomes the event-handling logic. You could spawn new threads for this, or do whatever your use case needs.
I won't claim to know all the REST frameworks out there (or even half of them), but I'm going to go with the answer:
Probably Not
As noted by others, this is not the way REST normally thinks of its interactions. REST is a great hammer, but if you need streaming, you are (IMHO) in screwdriver territory; the hammer might still be made to work, but it is likely to make a mess. One can argue whether it is or is not consistent with REST all day long, but in the end I'd be very surprised to find a framework that implemented this feature. I'd be even more surprised if the feature were mature (even if the framework is), because with respect to REST your use case is an uncommon corner case at best.
If someone does come up with one I'll be happy to stand corrected and learn something new though :)
Perhaps it would be best to think in terms of Comet or WebSockets for this particular operation. This question may be helpful since you already have Spring. (WebSockets are not really viable if you need to support IE < 10, which most commercial apps still require... sadly, I've got one client with a key customer still on IE 7 in my personal work.)
You may consider Restlet.
http://restlet.org/discover/features
Supports asynchronous request processing, decoupled from IO operations. Unlike the Servlet API, Restlet applications don't have direct control of the output stream; they only provide an output representation to be written by the server connector.
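As a sketch of what that representation model looks like on the server side, assuming the Restlet 2.x API (the resource name and JSON layout here are illustrative, not taken from the question):
public class ItemsResource extends ServerResource {
    @Get("json")
    public Representation represent() {
        // The server connector, not the application, drives the writing of this representation.
        return new WriterRepresentation(MediaType.APPLICATION_JSON) {
            @Override
            public void write(Writer writer) throws IOException {
                writer.write("{\"items\":[");
                // ... write each item incrementally instead of building the whole document in memory ...
                writer.write("]}");
            }
        };
    }
}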
The best way to achieve this is to use another streaming runtime for the JVM that allows reading the response off WebSockets, and the one I am aware of is called Atmosphere.
This way your large dataset is sent and received in chunks on both sides and read in the same manner in real time, without waiting for the whole response.
There is a good POC of this here:
http://keaplogik.blogspot.in/2012/05/atmosphere-websockets-comet-with-spring.html
Server:
@RequestMapping(value="/twitter/concurrency")
@ResponseBody
public void twitterAsync(AtmosphereResource atmosphereResource){
final ObjectMapper mapper = new ObjectMapper();
this.suspend(atmosphereResource);
final Broadcaster bc = atmosphereResource.getBroadcaster();
logger.info("Atmo Resource Size: " + bc.getAtmosphereResources().size());
bc.scheduleFixedBroadcast(new Callable<String>() {
@Override
public String call() throws Exception {
//Auth using keaplogik application springMVC-atmosphere-comet-webso key
final TwitterTemplate twitterTemplate =
new TwitterTemplate("WnLeyhTMjysXbNUd7DLcg",
"BhtMjwcDi8noxMc6zWSTtzPqq8AFV170fn9ivNGrc",
"537308114-5ByNH4nsTqejcg5b2HNeyuBb3khaQLeNnKDgl8",
"7aRrt3MUrnARVvypaSn3ZOKbRhJ5SiFoneahEp2SE");
final SearchParameters parameters = new SearchParameters("world").count(5).sinceId(sinceId).maxId(0);
final SearchResults results = twitterTemplate.searchOperations().search(parameters);
sinceId = results.getSearchMetadata().getMax_id();
List<TwitterMessage> twitterMessages = new ArrayList<TwitterMessage>();
for (Tweet tweet : results.getTweets()) {
twitterMessages.add(new TwitterMessage(tweet.getId(),
tweet.getCreatedAt(),
tweet.getText(),
tweet.getFromUser(),
tweet.getProfileImageUrl()));
}
return mapper.writeValueAsString(twitterMessages);
}
}, 10, TimeUnit.SECONDS);
}
Client:
Atmosphere has its own JavaScript file to handle the different Comet/WebSocket transport types and requests. Using it, you can point the request at the Spring controller method's URL endpoint. Once subscribed to the controller, you will receive dispatches, which can be handled by adding a request.onMessage function. Here is an example request using the websocket transport.
var request = new $.atmosphere.AtmosphereRequest();
request.transport = 'websocket';
request.url = "<c:url value='/twitter/concurrency'/>";
request.contentType = "application/json";
request.fallbackTransport = 'streaming';
request.onMessage = function(response){
buildTemplate(response);
};
var subSocket = socket.subscribe(request);
function buildTemplate(response){
if(response.state == "messageReceived"){
var data = response.responseBody;
if (data) {
try {
var result = $.parseJSON(data);
$( "#template" ).tmpl( result ).hide().prependTo( "#twitterMessages").fadeIn();
} catch (error) {
console.log("An error ocurred: " + error);
}
} else {
console.log("response.responseBody is null - ignoring.");
}
}
}
It is supported on all major browsers and by native mobile clients, Apple having been a pioneer of this technology.
As mentioned here, it has excellent support for deployment on web and enterprise Java EE containers:
http://jfarcand.wordpress.com/2012/04/19/websockets-or-comet-or-both-whats-supported-in-the-java-ee-land/
I've already looked for an answer to this question, and I've found the following suggestions:
If you are always expecting to find a value then throw the exception if it is missing. The exception would mean that there was a problem. If the value can be missing or present and both are valid for the application logic then return a null.
Only throw an exception if it is truly an error. If it is expected behavior for the object to not exist, return the null.
But how should I interpret them in my (fairly ordinary) case:
My web app controller receives a request to show details for a user with a certain id. The controller asks the service layer to get the user, and the service returns the object if it's found. If not, a redirect to a 'default' location is issued.
What should I do when someone passes an invalid user id in the request URL? Should I consider it "expected behaviour" and return null to the controller, or should I call it a "problem or unexpected behaviour" and thus throw an exception inside the service method and catch it inside the controller?
Technically it's not a big difference after all, but I'd like to do it the right way by following standard conventions. Thanks in advance for any suggestions.
EDIT:
I assume that the URLs generated by the app are valid and existing: when one is clicked by a user, the user with the given id should be found. I want to know how to handle the situation when someone tries to access a URL with a wrong (non-existing) user id by manually typing the URL into the browser's address bar.
If I understand you correctly, the request containing the user ID is coming from a client (out of your control). Applying the rules of thumb you quoted: invalid user input is an entirely expectable case, which does not call for an exception; rather, handle the null value gracefully by returning an appropriate error message to the client.
(OTOH if the user id in the request were automatically generated by another app / coming from DB etc, an invalid user ID would be unexpected, thus an exception would be appropriate.)
My personal suggestion would be to log the error details (IP address, the invalid user ID) and re-direct the user to an error page which says that some error has occurred and the administrators have been notified. Click on so-n-so link to go back to your home page etc.
Point being, whether you throw an exception or return null, just make sure that the outermost filter or handler "logs" the details before the response is returned to the user.
What should I do when someone passes invalid user id inside the request URL?
You have two choices: show the 'default' page you mentioned or return a "Not found" / 404.
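As a rough sketch of the 404 option, assuming Spring MVC and a service whose findById returns null when the user doesn't exist (the names here are illustrative, not taken from your code):
@GetMapping("/users/{id}")
public ResponseEntity<User> userDetails(@PathVariable long id) {
    User user = userService.findById(id);
    if (user == null) {
        // An unknown id typed by hand is an expected case: answer "not found"
        // rather than treating it as an application error.
        return ResponseEntity.notFound().build();
    }
    return ResponseEntity.ok(user);
}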
Regarding null, it depends. If you consider null unacceptable for a reference, then annotate it with @NotNull and the annotation will take care of doing the correct thing upon getting a null reference: that is, throwing an (unchecked) exception (of course you need to work with the amazing @NotNull annotation for this to work).
What you do higher up the chain is up to you: to me, returning a 404 to someone trying to fake user IDs sounds really close to optimal.