In a program I have that auto-updates, I need it to download a specific file (working already) and it does so with the following code:
public static void getFile() {
    try {
        URL url = new URL("https://dl.dropboxusercontent.com/s/tc301v61zt0v5cd/texture_pack.png?dl=1");
        InputStream in = new BufferedInputStream(url.openStream());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n;
        while (-1 != (n = in.read(buf))) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
        byte[] response = out.toByteArray();
        FileOutputStream fos = new FileOutputStream(file4);
        fos.write(response);
        fos.close();
    } catch (Exception e) {
        JOptionPane.showMessageDialog(null, "Check your internet connection, then try again.\nOr re-install the program.\nError Message 7", "Could Not Download The Required Resources.", JOptionPane.NO_OPTION);
        e.printStackTrace();
        System.exit(0);
    }
}
How would I implement a way to get the current completion of the download (the percentage downloaded, for example) and store it in an integer to print to the console (just for developer testing)? Also, what sort of equation could I use to estimate the time remaining on the download? Anything would help! Thanks.
If you examine the HTTP headers sent with the file download, you'll discover the file size. From this you can calculate the percentage complete:
curl --head "https://dl.dropboxusercontent.com/s/tc301v61zt0v5cd/texture_pack.png?dl=1"
Gives you this:
HTTP/1.1 200 OK
accept-ranges: bytes
cache-control: max-age=0
content-disposition: attachment; filename="texture_pack.png"
Content-length: 29187
Content-Type: image/png
Date: Mon, 28 Apr 2014 22:38:34 GMT
etag: 121936d
pragma: public
Server: nginx
x-dropbox-request-id: 1948ddaaa2df2bdf2c4a2ce3fdbeb349
X-RequestId: 4d9ce90907637e06728713be03e6815d
x-server-response-time: 514
Connection: keep-alive
You may have to use something more capable than the basic standard-library download call to access the headers, however, such as Apache HttpClient.
There is no way to get the size of a streamed file without reading to the end of the stream, so it can't be done within that block of code alone.
If you are willing to adopt the Dropbox API, you can use it to get the size of the file before starting the download. Then you can work with that size and the number of bytes downloaded (n in your code) to achieve what you need.
The class in the Dropbox API that you need is DbxEntry.File.
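For a plain HTTP download, the standard library's HttpURLConnection can already expose Content-Length, so a heavier client isn't strictly required. Below is a rough sketch, not the asker's exact program: the class name, helper names, URL, and output file name are all placeholders. Percent complete is bytesRead * 100 / totalBytes, and a simple time-remaining estimate is the remaining bytes divided by the average speed so far.

```java
import java.io.BufferedInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadProgress {

    // Percent complete, given bytes read so far and the total from Content-Length.
    static int percentComplete(long bytesRead, long totalBytes) {
        return (int) (bytesRead * 100 / totalBytes);
    }

    // Estimated seconds left: remaining bytes divided by average speed so far.
    static double secondsRemaining(long bytesRead, long totalBytes, long elapsedMillis) {
        double bytesPerSecond = bytesRead * 1000.0 / elapsedMillis;
        return (totalBytes - bytesRead) / bytesPerSecond;
    }

    // Download urlString to outFile, printing progress to the console.
    static void download(String urlString, String outFile) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(urlString).openConnection();
        long total = conn.getContentLengthLong(); // -1 if the server sent no Content-Length
        long start = System.currentTimeMillis();
        long read = 0;
        byte[] buf = new byte[8192];
        try (InputStream in = new BufferedInputStream(conn.getInputStream());
             OutputStream out = new FileOutputStream(outFile)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                read += n;
                if (total > 0) {
                    System.out.printf("%d%% done, ~%.1f s left%n",
                            percentComplete(read, total),
                            secondsRemaining(read, total,
                                    Math.max(1, System.currentTimeMillis() - start)));
                }
            }
        }
    }
}
```

Note that if the server omits Content-Length (e.g. with chunked transfer encoding), getContentLengthLong() returns -1 and no percentage can be computed.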
I'm having an issue with the Android MediaPlayer. At first I cached the whole song in memory before playing it; then I decided to stream the song from my API written in SparkJava. It works fine if I seek to an already-loaded point; otherwise it just stops and produces this on the API server:
org.eclipse.jetty.io.EofException
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:286)
at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:393)
at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:349)
at org.eclipse.jetty.io.ChannelEndPoint$3.run(ChannelEndPoint.java:133)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:295)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.SocketDispatcher.write0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54)
at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:50)
at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:484)
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:264)
This is the code used on the API side:
public static Object postAudioResponse(Request request, Response response) {
    try (OutputStream os = response.raw().getOutputStream();
         BufferedOutputStream bos = new BufferedOutputStream(os)) {
        File mp3 = new File("C:\\FTPServer\\" + request.queryParams("dir"));
        String range = request.headers("Range");
        if (range == null) {
            response.status(200);
            byte[] bytes = Files.readAllBytes(java.nio.file.Paths.get("C:\\FTPServer\\" + request.queryParams("dir")));
            response.header("Content-Type", "audio/mpeg");
            response.header("Content-Length", String.valueOf(bytes.length));
            System.out.println(response.raw().toString());
            HttpServletResponse raw = response.raw();
            raw.getOutputStream().write(bytes);
            raw.getOutputStream().flush();
            raw.getOutputStream().close();
            return raw;
        }
        int[] fromTo = fromTo(mp3, range);
        int length = fromTo[1] - fromTo[0] + 1;
        response.status(206);
        response.raw().setContentType("audio/mpeg");
        response.header("Accept-Ranges", "bytes");
        response.header("Content-Range", contentRangeByteString(fromTo));
        response.header("Content-Length", String.valueOf(length));
        final RandomAccessFile raf = new RandomAccessFile(mp3, "r");
        raf.seek(fromTo[0]);
        writeAudioToOS(length, raf, bos);
        raf.close();
        bos.flush();
        bos.close();
        return response.raw();
    } catch (IOException e) {
        e.printStackTrace();
        response.header("Content-Type", "application/json");
        return gson.toJson(new StandardResponse(StatusResponse.ERROR, e.toString()));
    }
}
And this is my API's HTTP response (string format):
HTTP/1.1 206
Date: Wed, 04 Mar 2020 19:39:45 GMT
Content-Type: audio/mpeg
Accept-Ranges: bytes
Content-Range: bytes 4767443-4775635/8897707
Content-Length: 8193
I tried multiple things: changing the header, validating the header multiple times, trying ExoPlayer. I even checked the Android source code for the HTTP part and it seemed correct.
Android Code:
mediaPlayer = new MediaPlayer();
mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mediaPlayer.setDataSource(Api.getSongSource(songName));
mediaPlayer.prepareAsync();
Note: the exception happens before I send the new HTTP response with the content in the specific range.
Thanks.
* UPD *
I solved this issue in the first answer.
I fixed the above bug by doing the following.
*Client Side
Instead of using the MediaPlayer from the native Android SDK, I used ExoMedia (an interface over ExoPlayer with the same functionality as MediaPlayer).
*Server Side
After switching to ExoMedia, I started to notice something: when I seeked to a point that was not yet loaded, it played from that point for a very small fraction of time, maybe 50 milliseconds. So I started digging around and found the following.
When the server receives a request with a Range header, it looks for the start offset (From), then sends a response with the data [From, From + Chunk]. My chunk was 8192 bytes, and upon increasing the chunk size I noticed the song played for longer. So instead of sending [From, From + Chunk] and waiting for ExoMedia to ask for the next part, I sent the whole [From, End of song], and this fixed my bug.
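The server-side change amounts to computing the range end as the last byte of the file rather than From + 8192. A small sketch of that arithmetic follows; fromToWholeRemainder and contentRange are hypothetical helper names, not the ones in the original code:

```java
public class RangeFix {

    // Serve from the requested offset to the end of the file instead of a
    // fixed 8192-byte chunk, so the player need not keep asking for pieces.
    static long[] fromToWholeRemainder(long fileLength, long from) {
        return new long[] { from, fileLength - 1 };
    }

    // Content-Range header value, e.g. "bytes 4767443-8897706/8897707".
    static String contentRange(long fileLength, long from) {
        long[] ft = fromToWholeRemainder(fileLength, from);
        return "bytes " + ft[0] + "-" + ft[1] + "/" + fileLength;
    }
}
```

The Content-Length for the 206 response is then (fileLength - from), matching the [From, End of song] payload.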
Even though I set the content type to text/html, it ends up as application/octet-stream on S3.
ByteArrayInputStream contentsAsStream = new ByteArrayInputStream(contentAsBytes);
ObjectMetadata md = new ObjectMetadata();
md.setContentLength(contentAsBytes.length);
md.setContentType("text/html");
s3.putObject(new PutObjectRequest(ARTIST_BUCKET_NAME, artistId, contentsAsStream, md));
If, however, I name the file so that it ends with .html
s3.putObject(new PutObjectRequest(ARTIST_BUCKET_NAME, artistId + ".html", contentsAsStream, md));
then it works.
Is my md object just being ignored? How can I get around this programmatically? Over time I need to upload thousands of files, so I can't just go into the S3 UI and manually fix the content type.
You must be doing something else in your code. I just tried your code example using the 1.9.6 S3 SDK and the file gets the "text/html" content type.
Here's the exact (Groovy) code:
class S3Test {
    static void main(String[] args) {
        def s3 = new AmazonS3Client()
        def random = new Random()
        def bucketName = "raniz-playground"
        def keyName = "content-type-test"

        byte[] contentAsBytes = new byte[1024]
        random.nextBytes(contentAsBytes)

        ByteArrayInputStream contentsAsStream = new ByteArrayInputStream(contentAsBytes)
        ObjectMetadata md = new ObjectMetadata()
        md.setContentLength(contentAsBytes.length)
        md.setContentType("text/html")

        s3.putObject(new PutObjectRequest(bucketName, keyName, contentsAsStream, md))
        def object = s3.getObject(bucketName, keyName)
        println(object.objectMetadata.contentType)
        object.close()
    }
}
The program prints
text/html
And the S3 metadata says the same:
Here is the communication sent over the wire (courtesy of Apache HTTP Commons debug logging):
>> PUT /content-type-test HTTP/1.1
>> Host: raniz-playground.s3.amazonaws.com
>> Authorization: AWS <nope>
>> User-Agent: aws-sdk-java/1.9.6 Linux/3.2.0-84-generic Java_HotSpot(TM)_64-Bit_Server_VM/25.45-b02/1.8.0_45
>> Date: Fri, 12 Jun 2015 02:11:16 GMT
>> Content-Type: text/html
>> Content-Length: 1024
>> Connection: Keep-Alive
>> Expect: 100-continue
<< HTTP/1.1 200 OK
<< x-amz-id-2: mOsmhYGkW+SxipF6S2+CnmiqOhwJ62WfWUkmZk4zU3rzkWCEH9P/bT1hUz27apmO
<< x-amz-request-id: 8706AE3BE8597644
<< Date: Fri, 12 Jun 2015 02:11:23 GMT
<< ETag: "6c53debeb28f1d12f7ad388b27c9036d"
<< Content-Length: 0
<< Server: AmazonS3
>> GET /content-type-test HTTP/1.1
>> Host: raniz-playground.s3.amazonaws.com
>> Authorization: AWS <nope>
>> User-Agent: aws-sdk-java/1.9.6 Linux/3.2.0-84-generic Java_HotSpot(TM)_64-Bit_Server_VM/25.45-b02/1.8.0_45
>> Date: Fri, 12 Jun 2015 02:11:23 GMT
>> Content-Type: application/x-www-form-urlencoded; charset=utf-8
>> Connection: Keep-Alive
<< HTTP/1.1 200 OK
<< x-amz-id-2: 9U1CQ8yIYBKYyadKi4syaAsr+7BV76Q+5UAGj2w1zDiPC2qZN0NzUCQNv6pWGu7n
<< x-amz-request-id: 6777433366DB6436
<< Date: Fri, 12 Jun 2015 02:11:24 GMT
<< Last-Modified: Fri, 12 Jun 2015 02:11:23 GMT
<< ETag: "6c53debeb28f1d12f7ad388b27c9036d"
<< Accept-Ranges: bytes
<< Content-Type: text/html
<< Content-Length: 1024
<< Server: AmazonS3
And this is also the behaviour the source code shows us: if you set the content type, the SDK won't override it.
You have to set the content type just before sending, when calling the putObject method:
ObjectMetadata md = new ObjectMetadata();
InputStream myInputStream = new ByteArrayInputStream(bFile);
md.setContentLength(bFile.length);
md.setContentType("text/html");
md.setContentEncoding("UTF-8");
s3client.putObject(new PutObjectRequest(bucketName, keyName, myInputStream, md));
After the upload, the content type is set to "text/html".
Here is a working dummy example; I've just tried it and it works:
public class TestAWS {
    //TEST
    private static String bucketName = "whateverBucket";

    public static void main(String[] args) throws Exception {
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("whatever", "whatever");
        AmazonS3 s3client = new AmazonS3Client(awsCreds);
        try {
            String uploadFileName = "D:\\try.txt";
            String keyName = "newFile.txt";

            System.out.println("Uploading a new object to S3 from a file\n");
            File file = new File(uploadFileName);
            //bFile will be the placeholder of file bytes
            byte[] bFile = new byte[(int) file.length()];
            //convert file into array of bytes
            FileInputStream fileInputStream = new FileInputStream(file);
            fileInputStream.read(bFile);
            fileInputStream.close();

            ObjectMetadata md = new ObjectMetadata();
            InputStream myInputStream = new ByteArrayInputStream(bFile);
            md.setContentLength(bFile.length);
            md.setContentType("text/html");
            md.setContentEncoding("UTF-8");
            s3client.putObject(new PutObjectRequest(bucketName, keyName, myInputStream, md));
        } catch (AmazonServiceException ase) {
            System.out.println("Caught an AmazonServiceException, which "
                    + "means your request made it "
                    + "to Amazon S3, but was rejected with an error response"
                    + " for some reason.");
            System.out.println("Error Message: " + ase.getMessage());
            System.out.println("HTTP Status Code: " + ase.getStatusCode());
            System.out.println("AWS Error Code: " + ase.getErrorCode());
            System.out.println("Error Type: " + ase.getErrorType());
            System.out.println("Request ID: " + ase.getRequestId());
        } catch (AmazonClientException ace) {
            System.out.println("Caught an AmazonClientException, which "
                    + "means the client encountered "
                    + "an internal error while trying to "
                    + "communicate with S3, "
                    + "such as not being able to access the network.");
            System.out.println("Error Message: " + ace.getMessage());
        }
    }
}
Hope that helps.
It seems that
When uploading files, the AWS S3 Java client will attempt to determine the correct content type if one hasn't been set yet. Users are responsible for ensuring a suitable content type is set when uploading streams. If no content type is provided and cannot be determined by the filename, the default content type, "application/octet-stream", will be used.
Giving the file a .html extension provides a way to set the correct type.
According to the examples I've been looking at, the code you show should be doing what you want to do. :/
I fixed this issue easily from the command line. I faced a similar issue while uploading HTML files through the AWS command line, even though the file names had the correct extension.
As mentioned in earlier comments, adding the --content-type parameter fixes the issue.
Executing the command below and refreshing the page returned an octet stream:
aws s3api put-object --bucket [BUCKETNAME] --body index.html --key index.html --profile [PROFILE] --acl public-read
Fix: add --content-type text/html
aws s3api put-object --bucket [BUCKETNAME] --body index.html --key index.html --profile [PROFILE] --acl public-read --content-type text/html
If you are using the AWS SDK for Java 2.x, it is possible to add the content type within the builder pattern.
For example, uploading a Base64-encoded image as a JPEG object to S3 (assuming you have already instantiated an S3 client):
byte[] stringAsByteArray = java.util.Base64.getDecoder().decode(base64EncodedString);
s3Client.putObject(
PutObjectRequest.builder().bucket("my-bucket").key("my-key").contentType("image/jpg").build(),
RequestBody.fromBytes(stringAsByteArray)
);
Do you have any override of the default MIME content types on your S3 account? Look at this link to see how to check: How to override default Content Types.
Anyway, it looks like your S3 client fails to determine the correct MIME type from the content of the file, so it falls back to the extension. application/octet-stream is the widely used default content MIME type when a browser/servlet can't determine the MIME type: Is there any default mime type?
I've got a bit of code I've been using for a while to fetch data from a web server, and a few months ago I added compression support, which seems to work well for "regular" HTTP responses where the whole document is contained in the response. It does not seem to work when I use a Range header, though.
Here is the code doing the real work:
InputStream in = null;
int bufferSize = 4096;
int responseCode = conn.getResponseCode();
boolean error = 5 == responseCode / 100
             || 4 == responseCode / 100;
int bytesRead = 0;

try {
    if (error)
        in = conn.getErrorStream();
    else
        in = conn.getInputStream();

    // Buffer the input
    in = new BufferedInputStream(in);

    // Handle compressed responses
    if ("gzip".equalsIgnoreCase(conn.getHeaderField("Content-Encoding")))
        in = new GZIPInputStream(in);
    else if ("deflate".equalsIgnoreCase(conn.getHeaderField("Content-Encoding")))
        in = new InflaterInputStream(in, new Inflater(true));

    int n;
    byte[] buffer = new byte[bufferSize];

    // Now, just write out all the bytes
    while (-1 != (n = in.read(buffer))) {
        bytesRead += n;
        out.write(buffer, 0, n);
    }
} catch (IOException ioe) {
    System.err.println("Got IOException after reading " + bytesRead + " bytes");
    throw ioe;
} finally {
    if (null != in) try {
        in.close();
    } catch (IOException ioe) {
        System.err.println("Could not close InputStream");
        ioe.printStackTrace();
    }
}
Hitting a URL with the header Accept-Encoding: gzip,deflate,identity works just great: I can see that the data is returned by the server in compressed format, and the above code decompressed it nicely.
If I then add a Range: bytes=0-50 header, I get the following exception:
Got IOException after reading 0 bytes
Exception in thread "main" java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at [my code]([my code]:511)
Line 511 in my code is the line containing the in.read() call. The response includes the following headers:
Content-Type: text/html
Content-Encoding: gzip
Content-Range: bytes 0-50/751
Content-Length: 51
I have verified that, if I don't attempt to decompress the response, I actually get 51 bytes in the response... it's not a server failure (at least that I can tell). My server (Apache httpd) does not support "deflate", so I can't test another compression scheme (at least not right now).
I've also tried to request much more data (like 700 bytes of the total 751 bytes in the target resource) and I get the same kind of error.
Is there something I'm missing?
Update
Sorry, I forgot to include that I'm hitting Apache/2.2.22 on Linux. There aren't any server bugs I'm aware of. I'll have a bit of trouble verifying the compressed bytes that I retrieve from the server, as the "gzip" Content-Encoding is quite bare... e.g. I believe I can't just use "gunzip" on the command-line to decompress those bytes. I'll give it a try, though.
You can use 'gunzip' to decompress it; just keep in mind that the first 50 bytes probably aren't enough for gzip to decompress anything (headers, dictionaries, etc.). Try this: wget -O- -q <URL> | head -c 50 | zcat with your URL to see whether normal gzip works where your code fails.
Sigh... switching to another server (which happens to be running Apache/2.2.25) shows that my code does in fact work. The original target server appears to be affected by AWS's current outage in the US-EAST availability zone. I'm going to chalk this up to network errors and close this question. Thanks to those who offered suggestions.
Hello, fellow Java developers. I receive a response with headers and body as shown below, but when I try to decompress it using the code below, it fails with this exception:
java.io.IOException: Not in GZIP format
Response:
HTTP/1.1 200 OK
Content-Type: text/xml; charset=utf-8
Content-Encoding: gzip
Server: Jetty(6.1.x)
▼ ═UMs¢0►=7┐ép?╙6-C╚$╢gΩ↓╟±╪₧∟zS╨╓╓♦$FÆ╒÷▀G┬╚╞8N≤╤Cf°►╦█╖╗o↨æJÄ+`:↓2
♣»└√S▬L&?∙┬_)U╔|♣%ûíyk_à\,æ] hⁿ?▀xΓ∟o╜4♫ù\#MAHG?┤(Q¶╞⌡▌Ç?▼ô[7Fí¼↔φ☻I%╓╣Z♂?¿↨F;x|♦o/A╬♣╘≡∞─≤╝╘U∙♥0☺æ?|J%à{(éUmHµ %σl┴▼Ç9♣┌Ç?♫╡5╠yë~├╜♦íi♫╥╧
╬û?▓ε?╞┼→RtGqè₧ójWë♫╩∞j05├╞┘|>┘º∙↑j╪2┐|= ÷²
eY\╛P?#5wÑqc╙τ♦▓½Θt£6q∩?┌4┼t♠↕=7æƒ╙?╟|♂;║)∩÷≈═^╛{v⌂┌∞◄>6ä╝|
Code:
byte[] b = IOUtils.toByteArray(sock.getInputStream());
ByteArrayInputStream bais = new ByteArrayInputStream(b);
GZIPInputStream gzis = new GZIPInputStream(bais);
InputStreamReader reader = new InputStreamReader(gzis);
BufferedReader in = new BufferedReader(reader);

String readed;
while ((readed = in.readLine()) != null) {
    System.out.println("read: " + readed);
}
Please advise.
Thanks,
Pradeep
The MIME header is NOT in GZIP format; it's plain text. You have to read it first before you can decompress the stream.
Also, why not just use this:
InputStream in = sock.getInputStream();
readHeader(in);
InputStream zin = new GZIPInputStream(in);
There are libraries for all of this. You can use, for example, Apache HTTP Components, or you can read its open source to see what it does. At very least, read the relevant specification.
I second bmarguiles' answer.
Only the body (response-body in the RFC) is compressed, so you only need to decompress the part that is after the \r\n\r\n.
Generally speaking, you can cut the response in half by that double CRLF, and only decompress the second half.
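A minimal sketch of that split: find the first \r\n\r\n, then feed only the bytes after it to GZIPInputStream. The class and method names (GzipBody, bodyOffset, decompressBody) are illustrative, not from the original code.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;

public class GzipBody {

    // Index of the first body byte: just past the blank line ending the headers,
    // or -1 if no header/body separator is found.
    static int bodyOffset(byte[] response) {
        for (int i = 0; i + 3 < response.length; i++) {
            if (response[i] == '\r' && response[i + 1] == '\n'
                    && response[i + 2] == '\r' && response[i + 3] == '\n') {
                return i + 4;
            }
        }
        return -1;
    }

    // Decompress only the part after the headers and return it as text.
    static String decompressBody(byte[] response) throws IOException {
        int offset = bodyOffset(response);
        InputStream body = new ByteArrayInputStream(response, offset, response.length - offset);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new GZIPInputStream(body), StandardCharsets.UTF_8))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }
    }
}
```

This assumes the whole response fits in memory and that the body is not chunked; a real HTTP client handles both, which is why the libraries mentioned above are the safer route.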
This is what I have so far,
Socket socket = new Socket(HOST, PORT);
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
InputStream is = socket.getInputStream();
byte[] byteChunk = new byte[1024];

int c = is.read(byteChunk);
while (c != -1) {
    buffer.write(byteChunk, 0, c);
    c = is.read(byteChunk);
}

BufferedImage bufferedImage = ImageIO.read(new ByteArrayInputStream(buffer.toByteArray()));
My problem is that ImageIO.read() returns null.
When I print the content of the ByteArrayOutputStream object, what I get is the header part:
HTTP/1.1 200 OK
Date: Fri, 30 Dec 2011 11:34:19 GMT
Server: Apache/2.2.3 (Debian) ...........
Last-Modified: Tue, 20 Dec 2011 19:12:23 GMT
ETag: "502812-490e-4b48ad8d273c0"
Accept-Ranges: bytes
Content-Length: 18702
Connection: close
Content-Type: image/jpeg
followed by an empty line plus many lines of binary data such as Àã$sU,e6‡Í~áŸP;Öã….
Again, my problem is that ImageIO.read() returns null.
Thanks in advance.
Why don't you just use a simple HTTP URL to get the image from the host?
I mean:
URL imageURL = new URL("http://host:port/address");
BufferedImage bufferedImage = ImageIO.read(imageURL);
If you want to use a plain socket, you have to parse the HTTP response and extract the data manually: read/skip the headers, then read the binary data and pass it to ImageIO.read (or advance the stream to the correct position and pass the stream itself to ImageIO.read).
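A rough sketch of the plain-socket route: consume everything up to and including the blank line that terminates the headers, then hand the rest of the stream to ImageIO.read. The class name and the fetch method are hypothetical, and this assumes the server does not use chunked transfer encoding (Connection: close keeps the body a plain byte stream).

```java
import java.awt.image.BufferedImage;
import java.io.BufferedInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import javax.imageio.ImageIO;

public class HttpImageFetch {

    // Consume bytes up to and including the \r\n\r\n that ends the headers,
    // leaving the stream positioned at the first byte of the body.
    static void skipHeaders(InputStream in) throws IOException {
        int state = 0; // progress through the \r\n\r\n sequence
        int b;
        while (state < 4 && (b = in.read()) != -1) {
            if (b == '\r' && (state == 0 || state == 2)) state++;
            else if (b == '\n' && (state == 1 || state == 3)) state++;
            else state = 0;
        }
    }

    // Fetch an image over a plain socket; host, port, and path are caller-supplied.
    static BufferedImage fetch(String host, int port, String path) throws IOException {
        try (Socket socket = new Socket(host, port);
             InputStream in = new BufferedInputStream(socket.getInputStream())) {
            OutputStream os = socket.getOutputStream();
            os.write(("GET " + path + " HTTP/1.1\r\nHost: " + host
                    + "\r\nConnection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            os.flush();
            skipHeaders(in);          // discard status line and headers
            return ImageIO.read(in);  // the rest of the stream is the raw image
        }
    }
}
```

With the headers consumed, ImageIO sees the JPEG magic bytes first and can decode the body, which is exactly what fails when the raw response (headers included) is passed in.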