I want to create a link that initiates a file download asynchronously to the page itself, i.e. without the page being locked for the duration of the download. Should the download be initiated outside Wicket? Or is there something inside Wicket that would let me set up a resource stream that bypasses the page locks?
Things I tried:
DownloadLink - locks the page, as stated in its doc. This was my starting point.
ResourceLink - did not state the locking explicitly in the doc, so I tried this, but it also locked the page.
At this point I investigated the code of both links a bit and noticed they both schedule the download via ResourceStreamRequestHandler. Expecting that this kind of behavior could be handler-specific, I attempted to schedule a custom handler I had written:
private void sendFile(final File file) throws IOException {
    IRequestHandler fileDownloadHandler = new IRequestHandler() {
        @Override
        public void respond(IRequestCycle requestCycle) {
            WebResponse response = (WebResponse) requestCycle.getResponse();
            OutputStream outStream = response.getOutputStream();
            response.setContentType("audio/x-wav");
            response.setContentLength((int) file.length());
            String fileName = "Somethingsomething.wav";
            // sets HTTP header
            response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
            byte[] byteBuffer = new byte[1024];
            DataInputStream in = null;
            try {
                in = new DataInputStream(new FileInputStream(file));
                int length = 0;
                // reads the file's bytes and writes them to the response stream
                while ((length = in.read(byteBuffer)) != -1) {
                    outStream.write(byteBuffer, 0, length);
                }
                in.close();
                outStream.close();
            } catch (IOException e) {
                throw new PortalError("IOException trying to write the response", e);
            }
        }

        @Override
        public void detach(IRequestCycle requestCycle) {
        }
    };
    getRequestCycle().scheduleRequestHandlerAfterCurrent(fileDownloadHandler);
}
This did not quite work either, so I investigated further. I noticed that, contrary to my expectations, the "scheduled" request handlers are not executed in a separate request but in the same one. I figured the page gets locked for the first handler and then remains locked while the second one executes as well. So I attempted to force the download handler into a separate request (via an Ajax behaviour):
public void startDownload(AjaxRequestTarget target) throws DownloadTargetNotFoundException {
    target.appendJavaScript("setTimeout(\"window.location.href='" + getCallbackUrl() + "'\", 100);");
}

@Override
public void onRequest() {
    sendFile(getFile());
    logger.debug("Download initiated");
}
I found this here and hoped it could be what I was looking for. Unsurprisingly, though, the page still gets locked (I would imagine because the behaviour still has to be retrieved from the page, for which the page lock has to be acquired).
I'm at a loss where to look next, especially after all this time spent trying to get a simple download link working. I considered creating another web filter one layer above Wicket, which could be signalled from within Wicket to perform the download after the Wicket filter has finished its work (and hence the page lock has already been released), but that seems a bit excessive for a task like this.
Any suggestions are welcome.
You have to download from a resource; see http://wicketinaction.com/2012/11/uploading-files-to-wicket-iresource/ and read http://wicket.apache.org/guide/guide/resources.html
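For example, here is a minimal sketch along the lines of those articles (Wicket 6+ API; the class name and the constructor-supplied file are made up for illustration). A standalone IResource is served without acquiring the page lock, because the request is not bound to any page instance:

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;

import org.apache.wicket.request.resource.AbstractResource;
import org.apache.wicket.request.resource.ContentDisposition;
import org.apache.wicket.request.resource.IResource.Attributes;

public class FileDownloadResource extends AbstractResource {

    private final File file;   // the file to serve, supplied by the caller

    public FileDownloadResource(File file) {
        this.file = file;
    }

    @Override
    protected ResourceResponse newResourceResponse(Attributes attributes) {
        ResourceResponse response = new ResourceResponse();
        response.setContentType("audio/x-wav");
        response.setFileName("Somethingsomething.wav");
        response.setContentDisposition(ContentDisposition.ATTACHMENT);
        response.setWriteCallback(new WriteCallback() {
            @Override
            public void writeData(Attributes attributes) throws IOException {
                // streams the file into the response outside any page lock
                try (InputStream in = new FileInputStream(file)) {
                    writeStream(attributes, in);
                }
            }
        });
        return response;
    }
}

Mounted in the application's init() (e.g. mountResource("/downloads/sample", ...)), the resulting URL can be used as a plain href, so the download request never touches a page and never competes for its lock.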
Related
I have a class that processes some files which are uploaded zipped.
It has a method that unzips them, fills a HashMap, and converts it to an unmodifiable map via Collections.unmodifiableMap.
public class MyClass extends HttpServlet {
    ...
    private Map<String, String> rnaseqfiles = new HashMap<>();
    ...
    private void processZipFile(String zipfile) throws Exception {
        String fileName = zipfile;
        byte[] buffer = new byte[1024];
        try (ZipInputStream zis = new ZipInputStream(new FileInputStream(fileName))) {
            ZipEntry zipEntry = zis.getNextEntry();
            while (zipEntry != null) {
                File newFile = new File(diretorio, zipEntry.toString());
                if (zipEntry.isDirectory()) {
                    if (!newFile.isDirectory() && !newFile.mkdirs()) {
                        throw new IOException("Failed to create directory " + newFile);
                    }
                } else {
                    FileOutputStream fos = new FileOutputStream(newFile);
                    int len;
                    while ((len = zis.read(buffer)) > 0) {
                        fos.write(buffer, 0, len);
                    }
                    fos.close();
                    rnaseqfiles.put(zipEntry.toString(), newFile.getAbsolutePath());
                }
                zipEntry = zis.getNextEntry();
            }
            rnaseqfiles = Collections.unmodifiableMap(rnaseqfiles);
            zis.closeEntry();
            zis.close();
        }
    }
    ...
}
When I test with a small example it works nicely, but when I try the real case I get this kind of error:
java.lang.UnsupportedOperationException
at java.base/java.util.Collections$UnmodifiableMap.put(Collections.java:1457)
I found some hints to deal with it but I don't know exactly what to do.
Any help is appreciated.
Servlets are quite annoying. Keep in mind that any given servlet is likely to run many times, and probably many times simultaneously, as various users hit your site.
They are the worst of both worlds: the servlet spec does not guarantee that the container initializes a new object for every request (meaning it is possible that many different requests, some even simultaneous, are all using the same fields), but it also does not guarantee the opposite: the container is free to create new instances whenever it likes.
Conclusion: fields in servlets are pretty much useless. But you have one, and it's causing trouble: one 'run' overwrites your mutable HashMap with an immutable one, and then the next request tries to add stuff to this now-immutable map.
The fix is generally to just get rid of servlets. There are better ways to write web apps these days, such as Spark, DropWizard, Spring, and many others.
If you insist on servlets, then they should not have any fields. If you want fields, your servlet code should simply make a new object and invoke whatever you want on it - your doGet and friends become mostly one-liners of the form new ActualHandler(req, res).go() or similar, as sketched below. Now you actually have one instance per request.
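A minimal sketch of that pattern (ActualHandler is the hypothetical name used above; the servlet itself stays field-free):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // one fresh handler object per request; its fields are request-private
        new ActualHandler(req, res).go();
    }
}

class ActualHandler {
    private final HttpServletRequest req;
    private final HttpServletResponse res;
    // fields here are safe: this object lives for exactly one request

    ActualHandler(HttpServletRequest req, HttpServletResponse res) {
        this.req = req;
        this.res = res;
    }

    void go() throws IOException {
        // ... do the actual request handling, using fields freely ...
    }
}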
Or, just... write the code so that no fields are needed. I don't see why you need a field here, for example. Your current code does:
Receive the request and parse stuff out (you didn't paste this part)
That code evidently invokes processZipFile which returns nothing, but conveys data back using a field. (This does not work in servlets).
Your request handling code then uses that field for stuff.
Seems easy to replace that - don't have a field, have the processZipFile method return that map instead.
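For instance, a sketch of that change, keeping the question's names (diretorio is passed in as a parameter here so the method has no field dependencies):

private Map<String, String> processZipFile(String zipfile, File diretorio) throws IOException {
    Map<String, String> files = new HashMap<>();
    byte[] buffer = new byte[1024];
    try (ZipInputStream zis = new ZipInputStream(new FileInputStream(zipfile))) {
        for (ZipEntry entry = zis.getNextEntry(); entry != null; entry = zis.getNextEntry()) {
            File newFile = new File(diretorio, entry.getName());
            if (entry.isDirectory()) {
                if (!newFile.isDirectory() && !newFile.mkdirs()) {
                    throw new IOException("Failed to create directory " + newFile);
                }
            } else {
                try (FileOutputStream fos = new FileOutputStream(newFile)) {
                    int len;
                    while ((len = zis.read(buffer)) > 0) {
                        fos.write(buffer, 0, len);
                    }
                }
                files.put(entry.getName(), newFile.getAbsolutePath());
            }
        }
    }
    // the map is a local variable, so concurrent requests never share it
    return Collections.unmodifiableMap(files);
}

Each caller then does Map<String, String> rnaseqfiles = processZipFile(zipfile, diretorio); and no state outlives the request.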
I am working on a utility that zips up a number of files (for diagnostic purposes). At its core, it uses the following function:
private void write(ZipOutputStream zipStream, String entryPath, ByteSource content) throws IOException {
    try (InputStream contentStream = content.openStream()) {
        zipStream.putNextEntry(new ZipEntry(entryPath));
        ByteStreams.copy(contentStream, zipStream);
        zipStream.closeEntry();
    }
}
But one of the files I want to read is a log file that another application keeps open and locks. Because that file is locked, I get an IOException.
<ERROR>java.io.IOException: The process cannot access the file because another process has locked a portion of the file
at java.base/java.io.FileInputStream.readBytes(Native Method)
at java.base/java.io.FileInputStream.read(FileInputStream.java:257)
at com.google.common.io.ByteStreams.copy(ByteStreams.java:112)
If I am willing to accept that I might get some garbage because of conflicts between my reads and the other application's writes, what is the best/easiest way to work around this? Is there a file reader that ignores locks, or perhaps reads only the unlocked sections?
Update -- To clarify, I am looking to read a log file, or as much of it as possible. So I could just start reading the file, wait until I hit a block I can't read, catch the error, append an end marker, and move on. Notepad++ and other programs can read files that are partially locked. I'm just looking for a way to do that without re-inventing the ByteStreams.copy function to create a "copy as much as I can" function.
I should have perhaps asked "How to read all the unlocked parts of a log file" and I will update the title.
One possible answer (which I don't like) is to create a method almost identical to ByteStreams.copy(), which I call "copyUntilLock", that catches any IOException and checks whether the exception is because another process has locked a portion of the file.
If that is the case, simply stop writing and return the number of bytes read so far. If it's some other exception, go ahead and throw it. (You could also write a note to the stream like "READING FAILED DUE TO LOCK".)
Still looking for a better answer. Code included below.
private static long copyUntilLock(InputStream from, OutputStream to) throws IOException {
    checkNotNull(from);   // com.google.common.base.Preconditions.checkNotNull
    checkNotNull(to);
    byte[] buf = createBuffer();   // same 8 KiB buffer helper Guava's ByteStreams uses
    long total = 0;
    try {
        while (true) {
            int r = from.read(buf);
            if (r == -1) {
                break;
            }
            to.write(buf, 0, r);
            total += r;
        }
        return total;
    } catch (IOException iox) {
        // Windows-specific message; stop copying and keep what we got so far
        if (iox.getMessage() != null && iox.getMessage().contains("another process has locked a portion of the file")) {
            return total;
        } else {
            throw iox;
        }
    }
}
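To use it, the write(...) method above would simply call copyUntilLock(contentStream, zipStream) in place of ByteStreams.copy(contentStream, zipStream); everything else, including putNextEntry and closeEntry, stays the same.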
The legacy J2EE application details:
JSP + Servlets (2.4)
WebSphere Application Server 7.0
The view uses IE frames, plain JavaScript, and so on
The user's actions:
The user's search returns > 900 rows and takes some time to display (NO pagination)
The user then clicks the 'Download' button, which triggers another form submit.
Following is the code snippet that is executed in the action servlet:
public class DownloadFileEvent extends ActionGeneric {
    java.text.SimpleDateFormat df_file = new java.text.SimpleDateFormat("yyyyMMdd_HHmmss");

    public void run(HttpServletRequest request, HttpServletResponse response) throws Exception {
        String errormsg = null;
        StringBuffer LineBuffer = null;
        // read parameters.
        String _v = request.getParameter("view");
        // Start processing.
        try {
            // get sessions.
            ServletOutputStream out = response.getOutputStream();
            // Create title columns.
            response.setContentType("application/csv");
            response.setHeader("Content-Disposition", "inline; filename = " + getFilename(_v));
            LineBuffer = new StringBuffer();
            // Get the string response from some business method
            // String v_wrk = getOutflow(request, _v).toString();
            LineBuffer.append(v_wrk);
            LineBuffer.append("\r\n");
            out.print(LineBuffer.toString());
            out.flush();
            // end.
        } // end try
        catch (Exception e) {
            errormsg = e.getMessage();
        } finally {
            // to do.
        }
    } // end run.
} // end class
The issue:
Since the 'Download' takes some time, the user moves to another screen.
When he comes back, the 'Open/Save/Save As' prompt has already been sitting there for some time. When the user now saves/opens the file, instead of 900 rows there are fewer than 100.
Surprisingly, if the open/save is done immediately, all the rows are downloaded.
I put a log statement in the catch block, but no exception is recorded anywhere.
The issue cannot be reproduced on my local machine (Windows, WAS 7) or in the SYSTEM test environment (Linux, WAS 8.5), but it surfaces in ACCEPTANCE (Linux, WAS 7) and PRODUCTION (Linux, WAS 7). ACCEPTANCE and PRODUCTION sit behind load balancers and a web server, which are NOT present in system test or locally.
How shall I proceed?
Try a larger SendBufferSize in your web servers. If the client does not read the response while the Open/Save dialog is up, a big enough send buffer keeps the web server's writes from blocking and eventually timing out.
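For example, if the web server in front of WAS is Apache-based (such as IBM HTTP Server), the directive would look something like the sketch below; the exact value is an assumption and should be sized to comfortably hold your largest CSV export.

# httpd.conf: per-connection TCP send buffer, in bytes.
# Big enough to absorb the whole ~900-row CSV even if the client
# stalls on the Open/Save dialog instead of reading.
SendBufferSize 1048576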
I'm having issues with reading decrypted data from Conceal. It looks like I can't correctly finish streaming.
I suspect there is some issue with Conceal, because when I switch my proxy stream (just the encryption part) to not run through Conceal, everything works as expected. I'm also assuming that the writing is OK: there is no exception whatsoever, and I can find the encrypted file on disk.
I'm proxying my data through a ContentProvider to let other apps read the decrypted data when the user wants it (sharing, ...).
In my content provider I'm using the openFile method to let content resolvers read the data:
@Override
public ParcelFileDescriptor openFile(Uri uri, String mode) throws FileNotFoundException {
    try {
        ParcelFileDescriptor[] pipe = ParcelFileDescriptor.createPipe();
        String name = uri.getLastPathSegment();
        File file = new File(name);
        InputStream fileContents = mStorageProxy.getDecryptInputStream(file);
        ParcelFileDescriptor.AutoCloseOutputStream stream = new ParcelFileDescriptor.AutoCloseOutputStream(pipe[1]);
        PipeThread pipeThread = new PipeThread(fileContents, stream);
        pipeThread.start();
        return pipe[0];
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
I guess the Facebook Android team could instead be using a standard query() method with a byte array sent in MediaStore.MediaColumns, which is not suitable for me because I'm not only encrypting media files, and I also like the stream approach better.
This is how I'm reading from the InputStream. It's basically a pipe between two ParcelFileDescriptors. The input stream comes from Conceal; originally it is a FileInputStream wrapped in a BufferedInputStream.
static class PipeThread extends Thread {
    InputStream input;
    OutputStream out;

    PipeThread(InputStream inputStream, OutputStream out) {
        this.input = inputStream;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[1024];
        int len;
        try {
            while ((len = input.read(buf)) > 0) {
                out.write(buf, 0, len);
            }
            input.close();
            out.flush();
            out.close();
        } catch (IOException e) {
            Log.e(getClass().getSimpleName(),
                "Exception transferring file", e);
        }
    }
}
I've tried other ways of reading the stream, so that really shouldn't be the issue.
Finally, here's the exception I constantly end up with. Do you know what the issue could be? It points to native calls, which is where I got lost...
Exception transferring file
com.facebook.crypto.cipher.NativeGCMCipherException: decryptFinal
at com.facebook.crypto.cipher.NativeGCMCipher.decryptFinal(NativeGCMCipher.java:108)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.ensureTagValid(NativeGCMCipherInputStream.java:126)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:91)
at com.facebook.crypto.streams.NativeGCMCipherInputStream.read(NativeGCMCipherInputStream.java:76)
EDIT:
It looks like the stream is working OK, but what fails is the last iteration of reading from it. Since I'm using a buffer, it seems that the buffer being bigger than the amount of remaining data causes the issue. I've been looking into the sources of Conceal, and it seems fine in this regard. Could it be failing somewhere in the native layer?
Note: I've managed to get the decrypted file except for its final chunk of bytes, so I have, for example, an incomplete image file (with the last few thousand pixels not being displayed).
From my limited experience with Conceal, I have noticed that only the same application that encrypts a file can decrypt it successfully, irrespective of whether another app has the same package name or not. Be sure to keep this in mind.
This was resolved in https://github.com/facebook/conceal/issues/24. For posterity's sake, the problem here is that the author forgot to call close() on the output stream.
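For posterity as well, here is a hedged sketch of what that fix looks like on the writing side (the crypto, encryptedFile, and plainBytes variables are assumptions, following Conceal's published examples): with GCM, it is the close() of the cipher output stream that finalizes and appends the authentication tag which decryptFinal later verifies, so the stream must be closed on every path.

// Writing with Conceal: close() finalizes the GCM tag, so use
// try-with-resources rather than relying on a manual close() call.
OutputStream fileOut = new BufferedOutputStream(new FileOutputStream(encryptedFile));
try (OutputStream cryptoOut = crypto.getCipherOutputStream(fileOut, new Entity("entity_id"))) {
    cryptoOut.write(plainBytes);
}   // the tag is appended here; without it, decryption fails at the last read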
I'm writing to the browser window using servletResponse.getWriter().write(String).
But how do I clear the text which was written previously by some other similar write call?
The short answer is: you cannot. Once the browser receives the response, there is no way to take it back. (Unless there is some way to abnormally terminate an HTTP response and force the client to reload the page, or something to that effect.)
Probably the last place a response can be "cleared", in a sense, is the ServletResponse.reset method, which according to the Servlet Specification resets the buffer of the servlet's response.
However, this method has a catch: it only works if the buffer has not yet been committed (i.e. sent to the client), for example by the ServletOutputStream's flush method.
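A small sketch of that, guarded by isCommitted() (the replacement content is just an illustration):

// reset() discards the buffered body along with any status code and
// headers already set; it throws IllegalStateException once committed.
if (!response.isCommitted()) {
    response.reset();
    response.setContentType("text/html");
    response.getWriter().write("replacement content");
}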
You cannot. The best thing is to write to a buffer (StringWriter / StringBuilder) first; then you can replace the written data at any time, and only when you know for sure what the response should be do you write the buffer's contents to the response.
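A sketch of that approach (the strings are placeholders):

// Build the body in memory; nothing reaches the client until the final write.
StringBuilder body = new StringBuilder();
body.append("first attempt at the page");
body.setLength(0);                        // "clearing" is trivial before sending
body.append("the content we actually want");
response.getWriter().write(body.toString());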
On the same matter: any reason to write the response this way rather than using some view technology for your output, such as JSP, Velocity, FreeMarker, etc.?
If you have an immediate problem that you need to solve quickly, you could work around this design problem by increasing the size of the response buffer - you'll have to read your application server's docs to see if this is possible. However, this solution will not scale, as you'll soon run into out-of-memory issues when your site traffic peaks.
No view technology will protect you from this issue. You should design your application to figure out what you're going to show the user before you start writing the response. That means doing all your DB access and business logic ahead of time. This is a common issue I've seen with convoluted designs that use proxy objects to access the database lazily. E.g. ORM entity relationships are bad news if accessed from your view layer! There's not much you can do about an exception that happens 3/4 of the way into a rendered page.
Thinking about it, there might be some way to inject a page redirect via AJAX. Anyone ever heard of a solution like that?
Good luck with re-architecting your design!
I know the post is pretty old, but I just thought I'd share my views on this.
I suppose you could use a Filter and a ServletResponseWrapper to wrap the response and pass it along the chain.
That is, you can have an output stream in the wrapper class and write to it instead of the original response's output stream. You can clear the wrapper's output stream whenever you please, and finally write its contents to the original response's output stream when you are done with your processing.
For example,
public class MyResponseWrapper extends HttpServletResponseWrapper {
    protected ByteArrayOutputStream baos = null;
    protected ServletOutputStream stream = null;
    protected PrintWriter writer = null;
    protected HttpServletResponse origResponse = null;

    public MyResponseWrapper(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    public ServletOutputStream getOutputStream() throws IOException {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been " +
                "called for this response");
        }
        if (stream == null) {
            baos = new ByteArrayOutputStream();
            stream = new MyServletStream(baos);
        }
        return stream;
    }

    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return writer;
        }
        if (stream != null) {
            throw new IllegalStateException("getOutputStream() has already " +
                "been called for this response");
        }
        baos = new ByteArrayOutputStream();
        stream = new MyServletStream(baos);
        writer = new PrintWriter(stream);
        return writer;
    }

    public void commitToResponse() throws IOException {
        if (writer != null) {
            writer.flush();   // push any PrintWriter-buffered chars into baos
        }
        origResponse.getOutputStream().write(baos.toByteArray());
        origResponse.flushBuffer();   // HttpServletResponse has no flush()
    }

    private static class MyServletStream extends ServletOutputStream {
        ByteArrayOutputStream baos;

        MyServletStream(ByteArrayOutputStream baos) {
            this.baos = baos;
        }

        public void write(int param) throws IOException {
            baos.write(param);
        }
    }

    // other methods you want to implement
}