I need to allow for CSV file downloads on my page, and I was going to try ngCsv (an Angular module), but its browser support seems fairly limited. I've seen quite a few examples of this being done with vanilla JavaScript, and after a discussion with a colleague about "backend vs. frontend" I'm feeling more and more unsure of what to do.
Are there any true optimization/efficiency reasons why I should avoid doing this on the client side (assuming the files are no more than 100MB each download)?
If the data on the .csv would be the same for each user, and only updated every now and then, I would suggest you have your server create / update a static .csv. It wouldn't be resource-intensive, and you wouldn't have to worry about browser compatibility / user resources.
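If the backend happens to be Java, a sketch of that static approach could look like this (the schedule, paths, and placeholder rows are all assumptions):

```java
// Sketch: regenerate a shared, static CSV on a schedule; downloads then
// just serve this file, with no per-request work. Paths and rows are
// placeholders for your real data source.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class StaticCsvWriter {
    public static void main(String[] args) {
        Executors.newSingleThreadScheduledExecutor()
                .scheduleAtFixedRate(StaticCsvWriter::rebuild, 0, 1, TimeUnit.HOURS);
    }

    static void rebuild() {
        try {
            List<String> rows = List.of("id,name", "1,alpha", "2,beta"); // placeholder rows
            Path tmp = Paths.get("/var/www/static/export.csv.tmp");
            Files.write(tmp, rows, StandardCharsets.UTF_8);
            // Swap into place so readers see either the old or the new file.
            Files.move(tmp, Paths.get("/var/www/static/export.csv"),
                    StandardCopyOption.REPLACE_EXISTING);
        } catch (IOException e) {
            e.printStackTrace(); // keep the scheduler alive on failure
        }
    }
}
```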
If, however, the data you need to create a .csv for is different on a per-user basis, then you should consider creating the file client-side. If you can help it, you don't want your server having to dynamically generate 100MB .csv files each time a user clicks the link.
You could write a script that only generates the .csv client-side if the browser is not mobile and there is web-worker support. If either of those conditions is not met, you could fall back to having your server do it.
Ultimately, your answer is going to really depend on the requirements / context of this project. Try to cache the results where possible, and use common sense. Good luck :)
I am developing an API in Java. It is basically a Java servlet that returns content as JSON (application/json), using a Tomcat server. One of the fields in the response is supposed to be a link to a downloadable .txt file.
I wonder what is the best way to deliver this file:
Generating this file on every request seems like a killer to me, even with some cron job to clean up the directories of files.
Is there any way to give out a temporary link that is valid only for a period after that request, without saving anything to the file system?
Thank you.
If you say writing to the file system would kill your application, then I deduce that your I/O performance is too weak for it, right? I mean, if you didn't even have the storage capacity for it, then your infrastructure would not be suitable for your application at all. I can see four other ways of solving the problem (but maybe there are more; my list is not exhaustive):
Store the text file in a database. The database should also store timeout information. This is good if there is more than one application server with a load balancer in front of them (but all application servers share the same database).
Store the text file in RAM, maybe using a cache library which does the cleanup tasks automatically for you - but be aware that a cache library will usually not guarantee a minimum storage time for each file.
Do not store the text file at all, but create it just when it is requested (no idea if that is possible in your application); see the sketch after this list.
Do not provide a link to the text file, but directly include its content in the JSON answer (escaped as a JSON string, of course). Your server can then forget about it as soon as the answer has been sent, but the client _must_ download it whether it needs the file or not.
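A minimal sketch of the third option, assuming a plain servlet on Tomcat (the servlet name and the content-building step are placeholders):

```java
// Sketch: generate the .txt on demand and stream it straight into the
// response, so nothing is ever written to the file system.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportTextServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String id = req.getParameter("id"); // whatever identifies the content
        resp.setContentType("text/plain");
        resp.setHeader("Content-Disposition",
                "attachment; filename=\"report-" + id + ".txt\"");
        // Rebuild the content from the same data the JSON answer was built from.
        resp.getWriter().println(buildTextFor(id));
    }

    private String buildTextFor(String id) {
        return "content for " + id; // placeholder for the real generation
    }
}
```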
I am writing an applet that I eventually want to put online so that my friends/family can use it. I have the applet running now locally, but in order to work properly it needs to read a .ser file in when the applet opens, and update that same file when the applet closes. The file is quite large (~180 MB), though I am working on paring it down.
What would be the fastest/most effective way to read/write this file in Java? There is a lot of information out there on this, and I have never done anything like it before, so it's a bit overwhelming. The HttpURLConnection class seems like an option for reading it, but not for writing it. And any free web hosting that I have seen will not allow a file that big to be uploaded.
The size of the file should hopefully go down substantially; it is a list of 2.8 million musical artists, many of whom I'm sure nobody using the program will ever encounter. But if this program is to be effective, many artists will have to be stored, so the problem most likely remains the same.
Thanks in advance for any help
It sounds like it would be wise to keep this large data set, and the processing of it, on your server instead of making the applet operate on it, because that way you avoid each user downloading and processing a large file. If you had a server-side piece that the applet could call to get useful information, then only your server would have to load, write, and process the data. You could implement a Java servlet, or a PHP program, to respond to HTTP requests from your applet in a format that suits the data. I'm assuming that your server can handle either servlets or custom PHP (most can).
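For example, the applet side of such a call might look roughly like this (the endpoint URL and the one-line response format are invented for illustration):

```java
// Sketch: the applet asks the server about one artist over HTTP instead
// of downloading and deserializing the whole .ser file.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class ArtistClient {
    public static String lookUp(String artist) throws Exception {
        URL url = new URL("http://example.com/artists?name="
                + URLEncoder.encode(artist, "UTF-8")); // invented endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        try {
            return in.readLine(); // assumes a one-line answer per artist
        } finally {
            in.close();
        }
    }
}
```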
I've been tasked with implementing large (2GB+) file uploads via a web browser. After evaluating various technologies, Java applets seem to be the way forward (the only ones which provide proper access to the local disk). I was wondering if anyone can recommend a third-party file upload app we can use as a base? The requirements are:
Decent UI; ideally we want something similar to Facebook's photo uploader
Can handle large (2GB+) files
Resumable uploads
We need the source to extend it to our needs (don't mind paying extra)
You're probably looking for JUpload.
Update: I'm not sure if it has as nice a UI as you're hoping for, but unless you want to build a custom solution like I have, it's your best option.
Just a tip; maybe it is obvious, I don't know :P
It is nice to send the big file in chunks of, say, 2 MB, and on the server side you just append the bytes to the target file. The server knows which bytes it needs, so if an upload is aborted and continued later, it can simply send a message saying from which byte to start uploading again. That gives us resumability (is that a word? :P) and the safety of smaller HTTP uploads (since, in fact, we are sending many smaller uploads, and each one is checked on the server to be of the correct size).
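A rough sketch of the server side of that scheme, as a Java servlet rather than the PHP we used (all names and paths are made up):

```java
// Sketch of a resumable chunked upload endpoint:
//   GET  ?file=name        -> returns the byte offset to resume from
//   POST ?file=name (body) -> appends the posted chunk to the target file
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ChunkUploadServlet extends HttpServlet {
    private File target(HttpServletRequest req) {
        // Illustrative only: a real version must sanitize the file name.
        return new File("/var/uploads", req.getParameter("file"));
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File f = target(req);
        // Tell the client how many bytes we already have.
        resp.getWriter().print(f.exists() ? f.length() : 0L);
    }

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        InputStream in = req.getInputStream();
        OutputStream out = new FileOutputStream(target(req), true); // append mode
        try {
            byte[] buf = new byte[8192];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
        } finally {
            out.close();
        }
    }
}
```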
We wrote an implementation like this once with a Java applet as the client and PHP on the server; I'll see if I can dig it out as a reference for you :p
Not really a solution, but from experience you may bump into the following issues:
problems when uploading over HTTPS
problems uploading through proxies
Just wanted to make you aware of these two cases, for you to test when evaluating a solution.
Hope you will get a solution for your problem over here: http://jupload.sourceforge.net/
Background:
Our software generates reports for customers in the usual suspect formats (HTML, PDF, etc.), and each report can contain charts and other graphics unique to that report. For PDFs, everything is held in one place: the PDF file itself. HTML is trickier, as the report is basically the sum of more than one file. The files are available via HTTP through Tomcat.
Problem:
I really want to have a tidy environment and wrap the HTML reports into a single file. There's MHTML, data URIs, and several other formats to consider. This excellent question posits that, given the lack of cross-browser support for these formats, ZIP is a neat solution. This is attractive to me, as I can also offer the zip for download as an "HTML report you can email" option. (In the past, users have complained about losing the graphics when they set about emailing HTML reports.)
The solution seems simple. A request comes in, I locate the appropriate zip, unpack it somewhere on the webserver, point the request at the new HTML file, and after a day or so tidy everything up again.
But something doesn't quite seem right about that. I've got a gut feeling that it's not a good solution, that there's something intrinsically wrong with it, or that maybe a better way exists that I can't see at the moment.
Can anyone suggest whether this is good or bad, and offer an alternative solution?
Edit for more background information!
The reports need to persist on the server. Our customers are users at sites, and the visibility of a single report could be as wide as everyone at the site. The creation process involves the user selecting the criteria for the report and submitting it to the server for creation. Data is extracted from the database and a document is built. A placeholder record goes into the database, and the documents themselves get stored on the fileserver somewhere. It's the 'documents on the fileserver' part that I'd like to be tidier; zipping also means less disk space used! Once a report is created, it is available to anyone who can see it.
I would have thought the plan would be that the zip file ends up on the client rather than staying on the server.
Without knowing about your architecture, I would guess at an approach like this:
User requests report
Server displays report as HTML
User perhaps tweaks some parameters, repeats request
Server displays report as HTML (repeat until user is happy)
On each of the HTML reports, there's a "download as zip" link
User clicks on link
Server regenerates report, stores it in a zip file and serves it to the user
User saves zip file somewhere, emails it around etc - server isn't involved at all
This relies on being able to rerun the report to generate the zip file, of course. You could generate a zip file each time you generate some HTML, but that's wasteful if you don't need to do it, and requires clean-up etc.
Perhaps I've misunderstood you though... if this doesn't sound appropriate, could you update your question?
EDIT: Okay, having seen the update to your question, I'd be tempted to store the files for each report in a separate directory (e.g. using a GUID as the directory name). Many file systems support compression at the file system level, so "premature zipping" probably wouldn't save much disk space, and would make extracting individual files harder. Then if the user requests a zip, you just need to build the zip file at that point, probably just in memory, before serving it.
"Once a report is created, it is available to anyone who can see it."
That is quite telling: it means that the reports are shareable, and you would also like to "cache" reports so that they don't have to be regenerated.
One way to do this would be to work out a way to hash the parameters together, such that different parameter combinations (which result in different reports) hash to different values. Then you can use that hash as a key into a large cache of reports stored on disk as zips (maybe the name of the file is the hash?).
That way, every time someone requests a report, you hash the parameters and check whether that report has already been generated; if so, serve it up, either as a zip download, or unzipped and served as HTML as per normal. If the report doesn't exist, generate it and zip it, making sure you can identify it later as having been produced by these parameters (i.e., record the hash).
One thing to be careful of is that file system writes tend to be non-atomic, so if you are not careful you may regenerate the report twice, which sucks, but luckily in your case is not too harmful. To avoid it, you can use a single thread to do the work (slower), or implement some kind of lock.
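A minimal sketch of the hashing step itself, assuming SHA-256 and a simple key=value encoding of the parameters:

```java
// Sketch: derive a cache key from report parameters. Same parameters ->
// same key, so an existing "<key>.zip" can be served without regenerating.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class ReportCacheKey {
    public static String keyFor(SortedMap<String, String> params) throws Exception {
        // A sorted map gives a stable order, so equivalent requests hash alike.
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : params.entrySet()) {
            sb.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        SortedMap<String, String> p = new TreeMap<String, String>();
        p.put("site", "42");
        p.put("month", "2009-06");
        System.out.println(keyFor(p) + ".zip"); // the file name in the cache
    }
}
```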
You don't need to physically create zip files on a file system. There's nothing wrong with creating the zips in memory, streaming them to the browser, and letting the GC take care of releasing the memory taken by the temporary zip. This of course introduces problems, as it could potentially be inefficient to continually recreate the zip each time a request is made. However, judge these things according to your needs and so on.
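For instance, a sketch of zipping straight onto the servlet response (the directory layout here is invented):

```java
// Sketch: zip a report's files directly into the HTTP response, so no
// temporary zip ever touches the file system.
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ZipReportServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        File dir = new File("/reports", req.getParameter("id")); // invented layout
        File[] files = dir.listFiles();
        if (files == null) {
            resp.sendError(HttpServletResponse.SC_NOT_FOUND);
            return;
        }
        resp.setContentType("application/zip");
        resp.setHeader("Content-Disposition", "attachment; filename=\"report.zip\"");
        ZipOutputStream zip = new ZipOutputStream(resp.getOutputStream());
        for (File f : files) {
            zip.putNextEntry(new ZipEntry(f.getName()));
            InputStream in = new FileInputStream(f);
            try {
                byte[] buf = new byte[8192];
                for (int n; (n = in.read(buf)) != -1; ) {
                    zip.write(buf, 0, n);
                }
            } finally {
                in.close();
            }
            zip.closeEntry();
        }
        zip.finish(); // write the central directory before the response ends
    }
}
```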
I'm working on a web application. There is one place where the user can upload files with the HTTP protocol. There is a choice between the classic HTML file upload control and a Java applet to upload the files.
The classic HTML file upload isn't great because you can only select one file at a time, and it's quite hard to get any progress indication during the actual upload (I finally got it using a timer that refreshes a progress indicator with data fetched from the server via an AJAX call). The advantage: it always works.
With the Java applet I can do more things: select multiple files at once (even a folder), compress the files, get a real progress bar, drag'n'drop files on the applet, etc...
BUT there are a few drawbacks:
it's a nightmare to get it to work properly on Mac Safari and Mac Firefox (thanks, LiveConnect)
the UI isn't exactly the native UI and some people notice that
the applet isn't as responsive as it should be (could be my fault, but everything looks OK to me)
there are bugs in Java's URLConnection class with HTTPS, so I use the Apache Commons HTTP client to do the actual HTTP upload. It's quite a big package and slows down the download of the .jar file
the Apache Commons HTTP client sometimes has trouble going through proxies
the Java runtime is quite big
I've been maintaining this Java applet for a while, but now I'm fed up with all the drawbacks and am considering writing/buying a completely new component to upload these files.
Question
If you had the following requirements:
upload multiple files easily from a browser, through HTTP or HTTPS
compress the files to reduce the upload time
upload should work on any platform, with native UI
must be able to upload huge files, up to 2GB at least
you have carte blanche on the technology
What technology/component would you use?
Edit:
Drag'n'Drop of files on the component would be a great plus.
It looks like there are a lot of issues related to bugs in the Flash Player (see the swfupload known issues). Proper Mac support and uploads through proxies with authentication are options I cannot do without. This would probably rule out all Flash-based options :-(.
I rule out all HTML/JavaScript-only options because you can't select more than one file at a time with the classic HTML control. It's a pain to click the "Browse" button n times when you want to select multiple files in a folder.
I implemented something very recently in Silverlight.
Basically it uses HttpWebRequest to send a chunk of data to a generic handler.
On the first post, 4 KB of data is sent; on the second post, another 4 KB chunk. When the second chunk is received, I calculate the round trip between the first and second chunks, so the third chunk, when sent, can be sized up to increase speed.
Using this method I can upload files of ANY size and I can resume.
Each post I send along this info:
[PARAMETERS]
[FILEDATA]
Here, parameters contain the following:
[Chunk #]
[Filename]
[Session ID]
After each chunk is received, I send a response back to my Silverlight client saying how long it took, so that it can send a larger chunk next time.
It's hard to put my explanation across without code, but that's basically how I did it.
At some point I will put together a quick writeup on how I did this.
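The answer above is about Silverlight/.NET, but the adaptive chunk sizing itself is language-neutral; here is a toy Java sketch of the idea, with invented thresholds:

```java
// Toy sketch: grow the chunk while round trips stay fast, shrink when
// they get slow. Thresholds are invented; the transport is omitted.
public class AdaptiveChunker {
    private static final int MIN_CHUNK = 4 * 1024;        // 4 KB, as in the answer
    private static final int MAX_CHUNK = 2 * 1024 * 1024; // arbitrary cap

    private int chunkSize = MIN_CHUNK;

    /** Pick the next chunk size from how long the last post took. */
    public int nextChunkSize(long lastRoundTripMillis) {
        if (lastRoundTripMillis < 500) {
            chunkSize = Math.min(chunkSize * 2, MAX_CHUNK); // fast link: send more
        } else if (lastRoundTripMillis > 2000) {
            chunkSize = Math.max(chunkSize / 2, MIN_CHUNK); // slow link: back off
        }
        return chunkSize;
    }
}
```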
I've never used it with files of 2GB in size, but the YUI File Uploader worked pretty well on a previous project. You may also be interested in this jQuery Plugin.
That said, I still think the Java applet is the way to go. I think you'll end up with fewer portability and UI issues than you expect, and drag/drop works great. For the record, Box.net uses a Java applet for their multi-file quick uploads.
OK, this is my take on this.
I did some testing with swfupload, and I have my previous experience with Java, and my conclusion is that whatever technology is used, there is no perfect solution for doing uploads in the browser: you'll always end up with bugs when uploading huge files, going through proxies, using SSL, etc.
BUT:
a Flash uploader (à la swfupload) is really lightweight, doesn't need authorization from the user, and has a native interface which is REALLY cool, methinks
a Java uploader needs authorization, but you can do whatever you want with the files selected by the user (e.g. compression if needed), and drag and drop works well. Be prepared for some epic bug debuggin', though.
I didn't get a chance to play with Silverlight as long as I'd like; maybe that's the real answer, though the technology is still quite young, so... I'll edit this post if I get a chance to fiddle a bit with Silverlight.
Thanks for all the answers!!
There are a number of free Flash components that exist with nice multiple-file upload capability. They make use of ActionScript's FileReference class with a PHP (or whatever) receiver on the server side. Some have recently broken with the launch of FP10, but I know for certain that swfupload will work :)
Hope this helps!
What about these two?
JUpload
http://jupload.sourceforge.net/
and
JumpLoader
http://jumploader.com/
Both are Java applets, but they are also both really easy to use and implement.
What about Google Gears?
There are HTTP/HTTPS upload controls that allow multi-file upload. Here is one from Telerik, which I have found to be solid and reliable. The latest version looks to have most if not all of your feature requirements.
You can upload multiple files with HTTP forms as well, as Dave already pointed out, but if you're set on using something beyond what HTTP and JavaScript offer, I would strongly consider Flash. There are even some pre-existing solutions for it, such as MultiPowUpload, which offers many of the features you're looking for. It's also easier to obtain progress information with a Flash client than with AJAX calls from JavaScript, since you have a little more flexibility.
You may want to check out the Apache Commons FileUpload package. It allows you to upload multiple files, monitor the progress of the upload, and more. You can find more information here:
http://commons.apache.org/fileupload/
http://commons.apache.org/fileupload/using.html
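A minimal sketch of parsing a multipart request with Commons FileUpload, including a progress listener (the destination directory is illustrative):

```java
// Sketch: parse a multipart upload with Commons FileUpload and report
// progress as bytes arrive.
import java.io.File;
import java.io.IOException;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.ProgressListener;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;

public class UploadServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        ServletFileUpload upload = new ServletFileUpload(new DiskFileItemFactory());
        upload.setProgressListener(new ProgressListener() {
            public void update(long bytesRead, long contentLength, int item) {
                // contentLength is -1 when the browser didn't send a length.
                System.out.println("item " + item + ": " + bytesRead
                        + " of " + contentLength + " bytes");
            }
        });
        try {
            List<FileItem> items = upload.parseRequest(req);
            for (FileItem item : items) {
                if (!item.isFormField()) {
                    // Real code should sanitize item.getName() first.
                    item.write(new File("/var/uploads", item.getName()));
                }
            }
        } catch (Exception e) {
            throw new ServletException("Upload failed", e);
        }
    }
}
```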
Good luck