I've been trying to get JSON streaming to work in Jersey 2. For the life of me, nothing streams until the stream is complete.
I've tried this example, trying to simulate a slow producer of data.
#Path("/foo")
#GET
public void getAsyncStream(#Suspended AsyncResponse response) {
StreamingOutput streamingOutput = output -> {
JsonGenerator jg = new ObjectMapper().getFactory().createGenerator(output, JsonEncoding.UTF8);
jg.writeStartArray();
for (int i = 0; i < 100; i++) {
jg.writeObject(i);
try {
Thread.sleep(100);
}
catch (InterruptedException e) {
logger.error(e, "Error");
}
}
jg.writeEndArray();
jg.flush();
jg.close();
};
response.resume(Response.ok(streamingOutput).build());
}
And yet Jersey just sits there until the JSON generator is done before returning the results. I'm watching the results come through in Charles Proxy.
Do I need to enable something? Not sure why this won't stream out.
Edit:
This may actually be working, just not how I expected it. I don't think StreamingOutput writes things out in real time, which is what I wanted; it's more for not having to buffer responses in memory before immediately writing them out to the client. If I run a loop of a million with no thread sleep, the data does get written out in chunks without having to be buffered in memory.
Your edit is correct. It is working as expected. StreamingOutput is just a wrapper that lets us write directly to the response stream, but it does not actually mean the response is streamed on each server-side write to the stream. Also, AsyncResponse does not provide any different response as far as the client is concerned; it is simply there to help increase throughput with long-running tasks. The long-running task should actually be done in another thread, so the method can return.
See more at Asynchronous Server API
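For instance, a minimal sketch of that pattern, assuming a Jersey 2 / JAX-RS 2 environment (the executor and the computeSomethingExpensive helper are hypothetical):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.Suspended;
import javax.ws.rs.core.Response;

@Path("heavy")
public class HeavyResource {

    // Hypothetical executor; a real application should manage its lifecycle.
    private static final ExecutorService EXECUTOR = Executors.newCachedThreadPool();

    @GET
    public void getHeavyResult(@Suspended final AsyncResponse response) {
        EXECUTOR.submit(() -> {
            // The long-running work happens off the request thread...
            String result = computeSomethingExpensive(); // hypothetical helper
            response.resume(Response.ok(result).build());
        });
        // ...so this method returns immediately and frees the container thread.
    }

    private String computeSomethingExpensive() {
        return "done";
    }
}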
What you seem to be looking for instead is Chunked Output
Jersey offers a facility for sending response to the client in multiple more-or-less independent chunks using a chunked output. Each response chunk usually takes some (longer) time to prepare before sending it to the client. The most important fact about response chunks is that you want to send them to the client immediately as they become available without waiting for the remaining chunks to become available too.
Not sure how it will work for your particular use case, as the JsonGenerator expects an OutputStream (which the ChunkedOutput we use is not), but here is a simpler example.
#Path("async")
public class AsyncResource {
#GET
public ChunkedOutput<String> getChunkedStream() throws Exception {
final ChunkedOutput<String> output = new ChunkedOutput<>(String.class);
new Thread(() -> {
try {
String chunk = "Message";
for (int i = 0; i < 10; i++) {
output.write(chunk + "#" + i);
Thread.sleep(1000);
}
} catch (Exception e) {
} finally {
try {
output.close();
} catch (IOException ex) {
Logger.getLogger(AsyncResource.class.getName())
.log(Level.SEVERE, null, ex);
}
}
}).start();
return output;
}
}
Note: I had a problem getting this to work at first; I would only get the delayed, complete result. The problem turned out to be something completely separate from the program: it was actually my AVG antivirus causing it. A feature called "LinkScanner" was stopping the chunking process from occurring. I disabled that feature and it started working.
I haven't explored chunking much and am not sure of its security implications, so I am not sure why the AVG application has a problem with it.
EDIT
It seems the real problem is that Jersey buffers the response in order to calculate the Content-Length header. You can see this post for how to change this behavior.
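For reference, a minimal sketch of that change, assuming Jersey 2.x: setting ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER to 0 stops Jersey from buffering the entity to compute Content-Length, so the response goes out with chunked transfer encoding instead (the package name here is hypothetical):

import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.server.ServerProperties;

public class MyApplication extends ResourceConfig {
    public MyApplication() {
        packages("com.example.resources"); // hypothetical resource package
        // 0 disables the outbound buffer; Jersey then streams without Content-Length.
        property(ServerProperties.OUTBOUND_CONTENT_LENGTH_BUFFER, 0);
    }
}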
Related
I'm having a difficult time understanding what is going wrong in my application. The code below successfully executes on the first pass (a prior HttpResponse) and then, on its second call, hangs at the dataInputStream.readFully(...) line.
void doSomething(HttpResponse response) {
    HttpEntity responseBody = response.getEntity();
    long len = responseBody.getContentLength();
    byte[] payload = new byte[(int) len]; // <-- I've confirmed this is the correct length
    DataInputStream d = null;
    try {
        InputStream bais = responseBody.getContent();
        d = new DataInputStream(bais);
        d.readFully(payload); // <-- *** HANGS HERE! ***
        EntityUtils.consume(responseBody);
        ...
    } finally {
        if (d != null) {
            IOUtils.closeQuietly(d);
        }
    }
}
After blocking/hanging for 10+ seconds, the application times out and the calling thread is torn down.
I've inspected the response object, and confirmed its Content-Length to be what is expected.
Nothing jumped out at me reading the DataInputStream Javadoc.
Going through our commit history, I noticed that the IOUtils.closeQuietly(...) call was introduced at some point. [Trust me here:] It's been hard to track down our integration system's initial failure of this application, so I cannot confirm whether this finally block introduced any unintended behavior. From what I can tell, calling closeQuietly() there is the recommended approach.
Update: For clarity, this code is part of an application performing HTTP ranged GET requests. E.g., the first response pulls down N bytes, starting at byte 0; the second call GETs the next N bytes. The file is large, so more than two ranged GETs are necessary.
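For reference, a rough sketch of the kind of ranged request involved, assuming Apache HttpClient 4.x (the URL and chunk size are hypothetical placeholders):

import java.io.IOException;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;

public class RangedGetSketch {
    public static void fetchRanges() throws IOException {
        CloseableHttpClient client = HttpClients.createDefault();
        long chunkSize = 1024 * 1024; // hypothetical: 1 MiB per request
        for (int i = 0; i < 3; i++) {
            HttpGet get = new HttpGet("http://example.org/large-file"); // hypothetical URL
            long from = i * chunkSize;
            long to = from + chunkSize - 1;
            get.setHeader("Range", "bytes=" + from + "-" + to); // standard HTTP Range header
            try (CloseableHttpResponse resp = client.execute(get)) {
                // Fully consuming the entity frees the pooled connection for the next call.
                byte[] part = EntityUtils.toByteArray(resp.getEntity());
                // ... use part ...
            }
        }
    }
}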
I am trying to parse the contents of a large number of emails in a Gmail account. My code works fine on Google App Engine for up to ~4000 emails, but I get the following error when the number is higher:
Uncaught exception from servlet com.google.apphosting.runtime.HardDeadlineExceededError
My sample space has about 4500 emails, and the code below takes a little over a minute to fetch them all. I am looking to lower the execution time for fetching the emails.
My code is:
final List<Message> messages = new ArrayList<Message>();
BatchRequest batchRequest = gmail.batch();
JsonBatchCallback<Message> callback = new JsonBatchCallback<Message>() {
    public void onSuccess(Message message, HttpHeaders responseHeaders) {
        synchronized (messages) {
            messages.add(message);
        }
    }

    @Override
    public void onFailure(GoogleJsonError e, HttpHeaders responseHeaders)
            throws IOException {
    }
};

int batchCount = 0;
if (noOfEmails > 0) {
    for (Message message : messageList) {
        gmail.users().messages().get("me", message.getId())
             .set("format", "metadata").set("fields", "payload")
             .queue(batchRequest, callback);
        batchCount++;
        if (batchCount == 1000) {
            try {
                noOfEmailsRead += batchCount;
                log.info("No of Emails Read : " + noOfEmailsRead);
                batchRequest.execute();
            } catch (Exception e) {
            }
            batchCount = 0;
        }
    }
    noOfEmailsRead += batchCount;
    log.info("No of Emails Read : " + noOfEmailsRead);
    batchRequest.execute();
}
As said here: RuntimeError, a HardDeadlineExceededError is thrown because you must finish your task within 30 seconds.
To accomplish the whole task in about 30 seconds, you can use a divide-and-conquer approach: break the task into smaller tasks, using all the parallel power of your processor. Determining the best number of tasks can be a little hard, because it depends on your OS, processor, and so on; you will have to do some tests and benchmarks.
Java has the java.util.concurrent package, which can help you accomplish this; in particular, you can use the Fork/Join framework.
You will need to break up the work into smaller tasks that can each complete in under 30 seconds; see the sketch below.
A simple google search would have revealed this to you.
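To make the idea concrete, here is a minimal Fork/Join sketch (untested; fetchBatch and the ID list are hypothetical, and note that plain thread creation is restricted on some App Engine runtimes):

import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

class FetchTask extends RecursiveAction {
    private static final int THRESHOLD = 500; // tune by benchmarking
    private final List<String> messageIds;

    FetchTask(List<String> messageIds) {
        this.messageIds = messageIds;
    }

    @Override
    protected void compute() {
        if (messageIds.size() <= THRESHOLD) {
            fetchBatch(messageIds); // hypothetical: queue these IDs on one batch request
        } else {
            int mid = messageIds.size() / 2;
            // Split the work in half and run both halves in parallel.
            invokeAll(new FetchTask(messageIds.subList(0, mid)),
                      new FetchTask(messageIds.subList(mid, messageIds.size())));
        }
    }

    private void fetchBatch(List<String> ids) {
        // ... build a BatchRequest for these IDs and execute it ...
    }
}

// Usage: new ForkJoinPool().invoke(new FetchTask(allMessageIds));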
I want to create a link that would initiate a file download which would be asynchronous to the page itself, i.e. I want the page not to be locked during the file download. Should I make it be initiated outside wicket? Or is there something inside wicket that would let me set up a resource stream which would bypass the page locks?
Things I tried:
DownloadLink - locks the page, as stated in its doc. This was my starting point.
ResourceLink - did not state the locking explicitly in the doc, so I tried this, but it also locked the page.
At this point I've investigated the code of both links a bit and noticed they both schedule the download via ResourceStreamRequestHandler. Expecting that this kind of behavior could be just handler-specific, I've attempted to schedule a custom handler I've written:
private void sendFile(final File file) throws IOException {
    IRequestHandler fileDownloadHandler = new IRequestHandler() {
        @Override
        public void respond(IRequestCycle requestCycle) {
            WebResponse response = (WebResponse) requestCycle.getResponse();
            OutputStream outStream = response.getOutputStream();
            response.setContentType("audio/x-wav");
            response.setContentLength((int) file.length());
            String fileName = "Somethingsomething.wav";
            // sets HTTP header
            response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
            byte[] byteBuffer = new byte[1024];
            DataInputStream in = null;
            try {
                in = new DataInputStream(new FileInputStream(file));
                int length = 0;
                // reads the file's bytes and writes them to the response stream
                while ((length = in.read(byteBuffer)) != -1) {
                    outStream.write(byteBuffer, 0, length);
                }
                in.close();
                outStream.close();
            } catch (IOException e) {
                throw new PortalError("IOException trying to write the response", e);
            }
        }

        @Override
        public void detach(IRequestCycle requestCycle) {
        }
    };
    getRequestCycle().scheduleRequestHandlerAfterCurrent(fileDownloadHandler);
}
This did not quite work either, so I investigated further. I noticed that, contrary to what I expected, the "scheduled" request handlers do not get executed on a separate request, but on the same one. I figured it must be that the page gets locked for the first handler and then remains locked while the second one executes as well. So I attempted to force the download handler into a separate request (via an Ajax behaviour):
public void startDownload(AjaxRequestTarget target) throws DownloadTargetNotFoundException {
    target.appendJavaScript("setTimeout(\"window.location.href='" + getCallbackUrl() + "'\", 100);");
}

@Override
public void onRequest() {
    sendFile(getFile());
    logger.debug("Download initiated");
}
I found this here and hoped it could be what I've been looking for. However, unsurprisingly, the page still gets locked (I would imagine because the behaviour still has to be retrieved from the page, for which the page lock has to be acquired).
I'm at a loss as to where I should be looking next, especially after all this time spent trying to get a simple download link working. I was considering creating another web filter one layer above Wicket, which could be signaled from within Wicket to create the download after the Wicket filter is finished with its work (and hence the page lock is already released), but that seems a bit excessive for a task like this.
Any suggestions are welcome.
You have to download from a resource, see
http://wicketinaction.com/2012/11/uploading-files-to-wicket-iresource/ and read http://wicket.apache.org/guide/guide/resources.html
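As a rough illustration, here is a minimal sketch of streaming a file from a shared resource, assuming Wicket 6.x; resources are served outside the page lock, so the download should not block the page. The file path and mount path are hypothetical:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import org.apache.wicket.request.resource.AbstractResource;
import org.apache.wicket.request.resource.ContentDisposition;

public class FileDownloadResource extends AbstractResource {
    @Override
    protected ResourceResponse newResourceResponse(Attributes attributes) {
        ResourceResponse response = new ResourceResponse();
        response.setContentType("audio/x-wav");
        response.setContentDisposition(ContentDisposition.ATTACHMENT);
        response.setFileName("Somethingsomething.wav");
        response.setWriteCallback(new WriteCallback() {
            @Override
            public void writeData(Attributes attributes) throws IOException {
                // Hypothetical file location; copy it to the response stream.
                try (InputStream in = new FileInputStream(new File("/path/to/file.wav"))) {
                    writeStream(attributes, in); // helper provided by WriteCallback
                }
            }
        });
        return response;
    }
}

// Mounted in the application's init(), for example:
// mountResource("/download", new ResourceReference("fileDownload") {
//     @Override public IResource getResource() { return new FileDownloadResource(); }
// });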
We are developing an application with Scala and WebSockets. For the latter we use Java-Websocket. The application itself works great, and we are in the middle of writing unit tests.
We use a WebSocket class as follows:
class WebSocket(uri : URI) extends WebSocketClient(uri) {
  connectBlocking()
  var response = ""

  def onOpen(handshakedata : ServerHandshake) {
    println("onOpen")
  }

  def onMessage(message : String) {
    println("Received: " + message)
    response = message
  }

  def onClose(code : Int, reason : String, remote : Boolean) {
    println("onClose")
  }

  def onError(ex : Exception) {
    println("onError")
  }
}
A test might look like this (pseudo code)
websocketTest {
  ws = new WebSocket("ws://example.org")
  ws.send("foo")
  res = ws.getResponse()
  ....
}
Sending and receiving data works. However, the problem is that connecting to the websocket creates a new thread and only the new thread will have access to response using the onMessage handler. What is the best way to either make the websocket implementation single-threaded or connect the two threads so that we can access the response in the test case? Or is there another, even better way of doing it? In the end we should be able to somehow test the response of the websocket.
There are a number of ways you could do this. The issue is that you might get either an error or a successful response from the server. As a result, the best approach is probably to use some sort of timeout. In the past I have used a pattern like the following (note: this is untested code):
...
// use response in the onMessage handler like you did
...
long start = System.currentTimeMillis();
long timeout = 5000; // 5 seconds
while ((System.currentTimeMillis() - start) < timeout && response == null) {
    Thread.sleep(100);
}
if (response == null) {
    // timed out
} else {
    // do something with the response
}
If you want to be especially safe, you can use an AtomicReference for the response.
Of course, the timeout and sleep can be tuned for your test case.
Moreover, you can wrap this in a utility method, as sketched below.
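For instance, a rough utility sketch using AtomicReference (untested; it assumes your onMessage handler stores the message via responseRef.set(message)):

import java.util.concurrent.atomic.AtomicReference;

public final class WebSocketTestUtil {
    public static String awaitResponse(AtomicReference<String> responseRef,
                                       long timeoutMillis) throws InterruptedException {
        long start = System.currentTimeMillis();
        while (System.currentTimeMillis() - start < timeoutMillis) {
            String response = responseRef.get();
            if (response != null) {
                return response; // got a message from the server
            }
            Thread.sleep(50); // poll interval; tune per test
        }
        return null; // timed out
    }
}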
I'm writing to the browser window using servletResponse.getWriter().write(String).
But how do I clear the text that was written previously by some other, similar write call?
The short answer is, you cannot; once the browser receives the response, there is no way to take it back. (Unless there is some way to abnormally stop an HTTP response to cause the client to reload the page, or something to that effect.)
Probably the last place a response can be "cleared", in a sense, is via the ServletResponse.reset method, which, according to the Servlet Specification, resets the buffer of the servlet's response.
However, this method has a catch: it only works if the buffer has not been committed (i.e., sent to the client) by the ServletOutputStream's flush method.
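For illustration, a minimal sketch of reset(), assuming the buffer has not been committed yet:

protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
    resp.getWriter().write("tentative output");
    if (!resp.isCommitted()) {
        resp.reset(); // clears the buffered body, status, and headers
        resp.getWriter().write("final output"); // start over
    }
    // once isCommitted() is true, reset() throws IllegalStateException
}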
You cannot. The best thing is to write to a buffer (StringWriter / StringBuilder); then you can replace the written data at any time. Only when you know for sure what the response is should you write the buffer's content to the response.
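A minimal sketch of that approach; nothing reaches the client until the final write:

protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
    StringBuilder body = new StringBuilder();
    body.append("first attempt at output");
    // ... business logic runs; we change our mind ...
    body.setLength(0); // "clears" everything written so far
    body.append("the output we actually want");
    resp.getWriter().write(body.toString()); // the response is only written here
}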
On the same matter, is there a reason to write the response this way rather than using some view technology for your output, such as JSP, Velocity, FreeMarker, etc.?
If you have an immediate problem that you need to solve quickly, you could work around this design problem by increasing the size of the response buffer; you'll have to read your application server's docs to see whether this is possible. However, this solution will not scale, as you'll soon run into out-of-memory issues once your site traffic peaks.
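For what it's worth, the Servlet API itself exposes this knob; a minimal sketch (the size is an arbitrary example, and the container may adjust the actual buffer it allocates):

protected void doGet(HttpServletRequest req, HttpServletResponse resp)
        throws IOException {
    // Must be called before any content is written, or it throws IllegalStateException.
    resp.setBufferSize(64 * 1024);
    // anything written that stays under the buffer can still be reset() before commit
}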
No view technology will protect you from this issue. You should design your application to figure out what you're going to show the user before you start writing the response. That means doing all your DB access and business logic ahead of time. This is a common issue I've seen with convoluted system designs that use proxy objects to lazily access the database; e.g., ORM with entity relationships is bad news if accessed from your view layer. There's not much you can do about an exception that happens three quarters of the way into a rendered page.
Thinking about it, there might be some way to inject a page redirect via AJAX. Anyone ever heard of a solution like that?
Good luck with re-architecting your design!
I know the post is pretty old, but I just thought of sharing my views on this.
I suppose you could actually use a Filter and an HttpServletResponseWrapper to wrap the response and pass it along the chain.
That is, you can have an output stream in the wrapper class and write to it instead of writing to the original response's output stream. You can clear the wrapper's output stream as and when you please, and finally write its contents to the original response's output stream when you are done with your processing (a sketch of the matching Filter follows the wrapper below).
For example,
public class MyResponseWrapper extends HttpServletResponseWrapper {
    protected ByteArrayOutputStream baos = null;
    protected ServletOutputStream stream = null;
    protected PrintWriter writer = null;
    protected HttpServletResponse origResponse = null;

    public MyResponseWrapper(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    public ServletOutputStream getOutputStream() throws IOException {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been " +
                    "called for this response");
        }
        if (stream == null) {
            baos = new ByteArrayOutputStream();
            stream = new MyServletStream(baos);
        }
        return stream;
    }

    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return writer;
        }
        if (stream != null) {
            throw new IllegalStateException("getOutputStream() has already " +
                    "been called for this response");
        }
        baos = new ByteArrayOutputStream();
        stream = new MyServletStream(baos);
        writer = new PrintWriter(stream);
        return writer;
    }

    public void commitToResponse() throws IOException {
        if (writer != null) {
            writer.flush(); // make sure buffered characters reach baos
        }
        origResponse.getOutputStream().write(baos.toByteArray());
        origResponse.flushBuffer();
    }

    private static class MyServletStream extends ServletOutputStream {
        ByteArrayOutputStream baos;

        MyServletStream(ByteArrayOutputStream baos) {
            this.baos = baos;
        }

        public void write(int param) throws IOException {
            baos.write(param);
        }
    }

    // other methods you want to implement
}
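And a rough, untested sketch of the accompanying Filter (the class name is hypothetical); the downstream servlet writes into the wrapper's in-memory buffer, and nothing reaches the client until commitToResponse() is called here:

public class BufferingFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        MyResponseWrapper wrapper =
                new MyResponseWrapper((HttpServletResponse) response);
        chain.doFilter(request, wrapper);
        // At this point the servlet's output can still be inspected or discarded;
        // only this call actually sends it to the client.
        wrapper.commitToResponse();
    }

    @Override
    public void init(FilterConfig filterConfig) {}

    @Override
    public void destroy() {}
}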