Is it possible to get the timestamp of a file without downloading it fully, if the file is fetched from an HTTPS link?
If not, is it possible to fetch only a few chunks of the file and check the timestamp before downloading the whole file?
I think what you are looking for is the HTTP HEAD request. It returns the headers without the content, so depending on the server and the headers it provides, you can experiment with Last-Modified and If-Modified-Since and pick the one matching your needs.
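For example, a minimal sketch with java.net.HttpURLConnection (the URL here is just a placeholder):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Date;

public class HeadCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/some/file.zip"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("HEAD"); // headers only, no body is transferred

        long lastModified = conn.getLastModified(); // 0 when the server omits Last-Modified
        if (lastModified != 0) {
            System.out.println("Last-Modified: " + new Date(lastModified));
        } else {
            System.out.println("Server did not send a Last-Modified header");
        }
        // For If-Modified-Since, call conn.setIfModifiedSince(millis) on a GET
        // and check for a 304 response code instead.
        conn.disconnect();
    }
}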
For my web site, I need to get some data from a URL whose response headers contain a Content-Disposition attribute, which forces my browser to download the file. I would like to know how I can read the content of the file without saving it to disk and then reading it back with file I/O.
Doing so in either Java or JavaScript would be fine.
Content-Disposition is just advisory. If you use a non-browser client (Java, curl, wget...) and do a GET request, you can just do whatever you want.
(I guess this means your question isn't sufficiently specific)
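For example, a rough Java sketch (placeholder URL) that does a plain GET and keeps the body in memory, never touching the disk:

import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class InMemoryFetch {
    public static void main(String[] args) throws Exception {
        URLConnection conn = new URL("https://example.com/report").openConnection(); // placeholder
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (InputStream in = conn.getInputStream()) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, n); // body stays in memory, never touches disk
            }
        }
        String content = buffer.toString("UTF-8"); // assumes a text response
        System.out.println(content);
    }
}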
I have a few questions related to Web technologies. From my reading and from looking at the Apache and Netty documentation, I could not figure out a few things about downloading a large file with an HTTP multipart/POST request.
Is it possible to send an HTTP request indicating that a file should be downloaded in smaller multipart chunks?
How do you download a large file in multipart?
Please correct me if I have misunderstood the term 'multipart' itself. I know a lot of people have faced this problem, where the client application downloads a file in smaller portions, so that when a network outage happens it does not need to download the whole file from the beginning again, especially when the file is not a media file.
Thanks.
Multipart refers to encoding multiple documents in one body; see this for the definition. For HTTP, a multipart upload allows the client to send multiple documents with one POST, for example uploading an image and form fields in one request.
Multipart does not refer to downloading a document in multiple chunks.
You can use HTTP range requests to restart the download if a network outage occurs.
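A rough sketch of resuming with a Range header (the URL and local filename are placeholders; it assumes the server honors range requests and answers 206 Partial Content):

import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumeDownload {
    public static void main(String[] args) throws Exception {
        File partial = new File("download.part"); // placeholder local file
        long alreadyHave = partial.exists() ? partial.length() : 0;

        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/big/file.iso").openConnection(); // placeholder
        conn.setRequestProperty("Range", "bytes=" + alreadyHave + "-"); // ask for the rest

        // 206 means the server honored the range; 200 means it sent the whole file again
        boolean resumed = conn.getResponseCode() == HttpURLConnection.HTTP_PARTIAL;
        try (InputStream in = conn.getInputStream();
             FileOutputStream out = new FileOutputStream(partial, resumed)) { // append on resume
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }
}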
I need to download a text/plain file into a folder. The URL does not end with .txt, but it has the Content-Type etc. properly set. When I use the browser, it immediately prompts me to save the file, and it automatically suggests the proper file name as well.
Using Java, how can I download that URL into a folder? Note that I don't know the filename either, but I want the file to be saved in a directory.
The code to download a file is easy... my question is that I don't know under what name I should save the file. The filename is part of the Content-Disposition header; how do I extract it?
The HTTP protocol uses the HTTP headers to define some information about the data transferred.
You have the Content-Disposition header, which can carry a filename property generated by the server. This holds the name of the file being transferred, but it is optional, so you should handle the case where it is not present. Here is the doc: http://www.w3.org/Protocols/rfc2616/rfc2616-sec19.html
Depending on how you download the file, you'll have dozens of ways to retrieve this filename from the HTTP headers.
Have a look at the Apache HTTP client, for instance.
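For example, a minimal sketch with plain HttpURLConnection (placeholder URL; the header parsing is deliberately naive, and real Content-Disposition values can be more involved):

import java.net.HttpURLConnection;
import java.net.URL;

public class FilenameFromHeader {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection)
                new URL("https://example.com/export").openConnection(); // placeholder URL

        // Typical value: attachment; filename="report.txt"
        String disposition = conn.getHeaderField("Content-Disposition");
        String filename = "download.bin"; // fallback when the header is absent
        if (disposition != null && disposition.contains("filename=")) {
            filename = disposition.substring(disposition.indexOf("filename=") + 9)
                                  .replace("\"", "")
                                  .trim();
        }
        System.out.println("Saving as: " + filename);
    }
}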
HIH
M.
I am trying to get the attributes of files on a Unix server. When I type the URL into IE, it displays the files in a file/folder directory listing.
I am planning to write a tool to automate the process of getting file attributes like the modification date, file size, etc.
Are there any methods/ways to do this?
Does this code work:
File file = new File("http://<someserver.com>:<portnumber>/logs/log.txt");
Date date = new Date(file.lastModified());
System.out.println("modified date is " + date);
If the protocol your server supports is only HTTP, I'm afraid there's no easy way to do this. You will have to:
parse the returned HTML, probably looking for <a href= tags (using some HTML parser, not regex; see the sketch after this list)
open those links with new URL(url).openConnection(), read their streams, and do the same thing recursively, until an actual file (and not a directory) is found.
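A rough sketch of the link-extraction step with the jsoup HTML parser (placeholder URL; the recursion and real directory detection are left out):

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class ListingLinks {
    public static void main(String[] args) throws Exception {
        // Placeholder URL for a server-generated directory listing page
        Document doc = Jsoup.connect("http://someserver.com:8080/logs/").get();
        for (Element link : doc.select("a[href]")) {
            String href = link.absUrl("href"); // resolve relative links
            // A trailing slash usually (not always) marks a subdirectory
            System.out.println((href.endsWith("/") ? "dir:  " : "file: ") + href);
        }
    }
}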
But this won't give you the file attributes - only the name and the file contents.
If you want to browse, you need a different protocol, like FTP or SCP.
The HTTP protocol won't help you here. HTTP does not publish any file attributes, except for the Content-Length (file size) and the Last-Modified header value, which doesn't necessarily mirror the actual file modification date. It also might not be sent by the HTTP server at all.
Your best bet would be using an FTP library, for example the one from Apache Commons Net.
If you decide to use this library, you can use the properties of the FTPFile class, for example the file size, file date, and permissions.
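A rough sketch with Commons Net (host, credentials, and path are placeholders):

import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;

public class FtpAttributes {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("someserver.com");  // placeholder host
        ftp.login("user", "password");  // placeholder credentials
        try {
            for (FTPFile f : ftp.listFiles("/logs")) { // placeholder path
                System.out.println(f.getName()
                        + "  size=" + f.getSize()
                        + "  modified=" + f.getTimestamp().getTime());
            }
        } finally {
            ftp.logout();
            ftp.disconnect();
        }
    }
}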
I'm creating a Java web application running on a Tomcat server.
One of its functions fills a StringBuffer variable with data.
At the end, I would like to offer the user a download of the generated content packaged as a text file, without having to store the file on the server.
I've been searching for a code snippet but couldn't find anything matching ...
I hope I've been clear enough on my problem.
Thanks in advance,
See Making A Download Servlet
Don't forget to add the servlet to your web.xml.
You have to send a Content-Type along with the response, so that the browser knows what to do with the body of the response.
Normal text has the content type text/plain, HTML is text/html, images are image/gif, and so on. For an unknown MIME type you normally set "application/octet-stream", which afaik every browser treats as a download. But I recommend using the proper content type, so the browser can start a matching application to handle the content (e.g. Office for documents or an XML editor for XML files).
To send a filename along, which the browser suggests for saving, use the following header (example):
Content-Disposition: attachment; filename="downloaded.pdf"
For sending custom headers, use the setHeader() method on the response object.
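Putting it together, a minimal servlet sketch for the StringBuffer case above (class, method, and file names are illustrative):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DownloadServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        StringBuffer data = buildData(); // however your application fills it

        resp.setContentType("text/plain");
        resp.setCharacterEncoding("UTF-8");
        // Suggests a save dialog and a filename; nothing is written to disk on the server
        resp.setHeader("Content-Disposition", "attachment; filename=\"generated.txt\"");
        resp.getWriter().write(data.toString());
    }

    private StringBuffer buildData() { // placeholder for the real generator
        return new StringBuffer("generated content\n");
    }
}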