I'm having an issue with Android MediaPlayer. I was originally caching the whole song in memory before playing it; then I decided to stream the song from my API, which is written in SparkJava. Seeking now works fine if I seek to a point that has already been loaded; otherwise playback just stops, and the API server produces this:
org.eclipse.jetty.io.EofException
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:286)
at org.eclipse.jetty.io.WriteFlusher.flush(WriteFlusher.java:393)
at org.eclipse.jetty.io.WriteFlusher.completeWrite(WriteFlusher.java:349)
at org.eclipse.jetty.io.ChannelEndPoint$3.run(ChannelEndPoint.java:133)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:295)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:765)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:683)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.io.IOException: Connection reset by peer
at java.base/sun.nio.ch.SocketDispatcher.write0(Native Method)
at java.base/sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54)
at java.base/sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:79)
at java.base/sun.nio.ch.IOUtil.write(IOUtil.java:50)
at java.base/sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:484)
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:264)
This is the code used on the API side:
public static Object postAudioResponse(Request request, Response response) {
    try (OutputStream os = response.raw().getOutputStream();
         BufferedOutputStream bos = new BufferedOutputStream(os)) {
        File mp3 = new File("C:\\FTPServer\\" + request.queryParams("dir"));
        String range = request.headers("Range");
        if (range == null) {
            response.status(200);
            byte[] bytes = Files.readAllBytes(java.nio.file.Paths.get("C:\\FTPServer\\" + request.queryParams("dir")));
            response.header("Content-Type", "audio/mpeg");
            response.header("Content-Length", String.valueOf(bytes.length));
            System.out.println(response.raw().toString());
            HttpServletResponse raw = response.raw();
            raw.getOutputStream().write(bytes);
            raw.getOutputStream().flush();
            raw.getOutputStream().close();
            return raw;
        }
        int[] fromTo = fromTo(mp3, range);
        int length = fromTo[1] - fromTo[0] + 1;
        response.status(206);
        response.raw().setContentType("audio/mpeg");
        response.header("Accept-Ranges", "bytes");
        response.header("Content-Range", contentRangeByteString(fromTo));
        response.header("Content-Length", String.valueOf(length));
        final RandomAccessFile raf = new RandomAccessFile(mp3, "r");
        raf.seek(fromTo[0]);
        writeAudioToOS(length, raf, bos);
        raf.close();
        bos.flush();
        bos.close();
        return response.raw();
    } catch (IOException e) {
        e.printStackTrace();
        response.header("Content-Type", "application/json");
        return gson.toJson(new StandardResponse(StatusResponse.ERROR, e.toString()));
    }
}
And this is my API's HTTP response (as a string):
HTTP/1.1 206
Date: Wed, 04 Mar 2020 19:39:45 GMT
Content-Type: audio/mpeg
Accept-Ranges: bytes
Content-Range: bytes 4767443-4775635/8897707
Content-Length: 8193
I tried multiple things: changing the header, validating the header several times, switching to ExoPlayer. I even checked the Android source code for the HTTP part, and it seemed correct.
Android Code:
mediaPlayer = new MediaPlayer();
mediaPlayer.setAudioStreamType(AudioManager.STREAM_MUSIC);
mediaPlayer.setDataSource(Api.getSongSource(songName));
mediaPlayer.prepareAsync();
Note: the exception happens before I send the new HTTP response with the content for the requested range.
Thanks.
* UPD *
I solved this issue; here is the fix.
*Client side
Instead of using MediaPlayer from the native Android SDK, I used ExoMedia (a wrapper around ExoPlayer that exposes the same functionality as MediaPlayer).
*Server side
After switching to ExoMedia, I noticed something: when I seeked to a point that was not yet loaded, it played from that point for a very small fraction of time, maybe 50 milliseconds. So I started digging around and found the following.
When the server receives a request with a Range header, it looks up the "from" offset and then sends a response containing the data [from, from + chunk]. My chunk was 8192 bytes; upon increasing it, the song played for longer. So instead of sending [from, from + chunk] and waiting for ExoMedia to ask for the next part, I now send the whole [from, end of song], and that fixed my bug.
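For reference, a minimal sketch of that server-side change (simplified and illustrative, not my exact handler: the class name, the AUDIO_ROOT constant, and the inline range parsing are only for illustration; the fromTo and contentRangeByteString helpers from the code above are not reproduced here):

import java.io.File;
import java.io.IOException;
import java.io.OutputStream;
import java.io.RandomAccessFile;

import spark.Request;
import spark.Response;

// Sketch only: for a Range request, serve [from, end of file] instead of a fixed 8192-byte chunk.
public class AudioRangeHandler {

    // Illustrative constant; the original code hard-codes this path inline.
    private static final String AUDIO_ROOT = "C:\\FTPServer\\";

    public static Object getAudio(Request request, Response response) {
        File mp3 = new File(AUDIO_ROOT + request.queryParams("dir"));
        String rangeHeader = request.headers("Range");   // e.g. "bytes=4767443-"
        long fileLength = mp3.length();

        long from = 0;
        if (rangeHeader != null && rangeHeader.startsWith("bytes=")) {
            from = Long.parseLong(rangeHeader.substring("bytes=".length()).split("-")[0]);
        }
        long to = fileLength - 1;                         // always answer with the rest of the file
        long length = to - from + 1;

        response.status(rangeHeader == null ? 200 : 206);
        response.header("Content-Type", "audio/mpeg");
        response.header("Accept-Ranges", "bytes");
        if (rangeHeader != null) {
            response.header("Content-Range", "bytes " + from + "-" + to + "/" + fileLength);
        }
        response.header("Content-Length", String.valueOf(length));

        try (RandomAccessFile raf = new RandomAccessFile(mp3, "r");
             OutputStream os = response.raw().getOutputStream()) {
            raf.seek(from);
            byte[] buffer = new byte[8192];
            long remaining = length;
            int read;
            while (remaining > 0
                    && (read = raf.read(buffer, 0, (int) Math.min(buffer.length, remaining))) != -1) {
                os.write(buffer, 0, read);
                remaining -= read;
            }
            os.flush();
        } catch (IOException e) {
            // The player may drop the connection mid-stream (for example on another seek); that
            // surfaces on the server as EofException / "Connection reset by peer" and can be ignored.
            e.printStackTrace();
        }
        return response.raw();
    }
}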
Related
I am running into an issue with some code in my Android application which downloads a file from a URL; here is a code snippet:
int bytesRead = 0;
final byte[] buffer = new byte[32 * 1024];
InputStream stream = httpUrlConnection.getInputStream();
try {
    while ((bytesRead = stream.read(buffer)) > 0) {
        Log.i("TAG", "Read from stream, Bytes Read: " + bytesRead);
    }
} catch (IOException ex) {
    // Recover from lost WIFI connection.
} finally {
    stream.close();
}
My application relies on InputStream.read() to throw an IOException if WiFi connectivity is lost. As stated in the Java 8 documentation, this method should throw an IOException "if the input stream has been closed, or if some other I/O error occurs". In Android M, this occurs immediately and my code can process and recover from the exception. In Android N, this exception is not thrown, which causes my app to simply hang in the read() method; it never breaks out of it. Has anyone else run into this problem and worked around it in a way that doesn't break backwards compatibility? Is this a new Android N bug?
Reading from a socket can block forever if the connection goes down. You need to use a read timeout.
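For example (a minimal sketch against the HttpURLConnection from the question; the 10-second value is arbitrary):

// Must be set before the blocking read; a dead connection then surfaces as a
// java.net.SocketTimeoutException instead of read() hanging forever.
httpUrlConnection.setReadTimeout(10000);  // milliseconds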
To avoid reinventing the wheel and having to figure out this and other common scenarios yourself, I would use a high-level library such as Volley.
see: Transmitting Network Data Using Volley
Volley gives you, in a very straightforward way, useful things such as:
timeout control
Ease of customization, for example, for retry and backoff
control of several kind of errors
request cancellation API. You can cancel a single request, or you can cancel blocks or scopes of requests.
Debugging and tracing tools.
etc
Sending a request is as easy as something like this (from the docs):
final TextView mTextView = (TextView) findViewById(R.id.text);
...
// Instantiate the RequestQueue.
RequestQueue queue = Volley.newRequestQueue(this);
String url = "http://www.google.com";

// Request a string response from the provided URL.
StringRequest stringRequest = new StringRequest(Request.Method.GET, url,
        new Response.Listener<String>() {
            @Override
            public void onResponse(String response) {
                // Display the first 500 characters of the response string.
                mTextView.setText("Response is: " + response.substring(0, 500));
            }
        }, new Response.ErrorListener() {
            @Override
            public void onErrorResponse(VolleyError error) {
                mTextView.setText("That didn't work!");
            }
        });

// Add the request to the RequestQueue.
queue.add(stringRequest);
Setting a timeout/retry policy is as easy as:
stringRequest.setRetryPolicy(new DefaultRetryPolicy(20 * 1000, 1, 1.0f));
where the parameters are:
Timeout: Specifies Socket Timeout in millis per every retry attempt.
Number Of Retries: Number of times retry is attempted.
Back Off Multiplier: A multiplier which is used to determine exponential time set to socket for every retry attempt.
Regarding error handling
As you can see, you pass an error listener (new Response.ErrorListener()) to the request. When an error occurs while performing the request, Volley invokes the onErrorResponse callback and passes it an instance of VolleyError.
The following is the list of exceptions in Volley, taken from this post:
AuthFailureError — If you are trying to do HTTP Basic authentication, this is the error you are most likely to see.
NetworkError — Socket disconnection, server down, DNS issues might result in this error.
NoConnectionError — Similar to NetworkError, but fires when the device has no internet connection; your error handling logic can group NetworkError and NoConnectionError together and treat them the same way.
ParseError — While using JsonObjectRequest or JsonArrayRequest if the received JSON is malformed then this exception will be generated. If you get this error then it is a problem that should be fixed instead of being handled.
ServerError — The server responded with an error, most likely with 4xx or 5xx HTTP status codes.
TimeoutError — Socket timeout, either the server is too busy to handle the request or there is some network latency issue. By default Volley times out the request after 2.5 seconds; use a RetryPolicy if you are consistently getting this error.
So you can do things like
if ((error instanceof NetworkError) || (error instanceof NoConnectionError)) {
    // then it was a network error
}
As @EJP says, "Reading from a socket can block forever if the connection goes down", so set a read timeout on the connection (before opening the stream) and also catch java.net.SocketTimeoutException:
int bytesRead = 0;
final byte[] buffer = new byte[32 * 1024];
// Set the read timeout before reading; a dead connection then throws
// java.net.SocketTimeoutException instead of blocking in read() forever.
httpUrlConnection.setReadTimeout(5000);
InputStream stream = httpUrlConnection.getInputStream();
try {
    while ((bytesRead = stream.read(buffer)) > 0) {
        Log.i("TAG", "Read from stream, Bytes Read: " + bytesRead);
    }
} catch (java.net.SocketTimeoutException ex) {
    // Recover from lost WIFI connection.
} finally {
    stream.close();
}
In a program I have that auto-updates, I need it to download a specific file (which already works), and it does so with the following code:
public static void getFile() {
    try {
        URL url = new URL("https://dl.dropboxusercontent.com/s/tc301v61zt0v5cd/texture_pack.png?dl=1");
        InputStream in = new BufferedInputStream(url.openStream());
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        int n = 0;
        while (-1 != (n = in.read(buf))) {
            out.write(buf, 0, n);
        }
        out.close();
        in.close();
        byte[] response = out.toByteArray();
        FileOutputStream fos = new FileOutputStream(file4);
        fos.write(response);
        fos.close();
    } catch (Exception e) {
        JOptionPane.showMessageDialog(null,
                "Check your internet connection, then try again.\nOr re-install the program.\nError Message 7",
                "Could Not Download The Required Resources.", JOptionPane.NO_OPTION);
        e.printStackTrace();
        System.exit(0);
    }
}
How would I implement a way to get the current completion of the download (the percentage downloaded) and put it into an integer to be printed to the console (just for developer testing)? Also, what sort of equation could I use to estimate the amount of time left on the download? Anything would help! Thanks.
If you examine the HTTP headers sent in the file download, you'll discover the file size. From this you can calculate percentage complete:
curl --head "https://dl.dropboxusercontent.com/s/tc301v61zt0v5cd/texture_pack.png?dl=1"
Gives you this:
HTTP/1.1 200 OK
accept-ranges: bytes
cache-control: max-age=0
content-disposition: attachment; filename="texture_pack.png"
Content-length: 29187
Content-Type: image/png
Date: Mon, 28 Apr 2014 22:38:34 GMT
etag: 121936d
pragma: public
Server: nginx
x-dropbox-request-id: 1948ddaaa2df2bdf2c4a2ce3fdbeb349
X-RequestId: 4d9ce90907637e06728713be03e6815d
x-server-response-time: 514
Connection: keep-alive
To access the headers, however, you may have to use something more advanced than the standard Java library classes for your download, something like Apache HttpClient.
There is no way to get the size of a streamed file without streaming to the end of the file, so it can't be done within that block of code.
If you are willing to adopt the Dropbox API, you can use that to get the size of the file before starting the download. Then you can work with that size and the number of bytes downloaded (n in your code) to achieve what you need.
The class in the Dropbox API that you need is DbxEntry.File.
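However you obtain the total size (from the Content-Length header as in the first answer, or from the Dropbox API), the percentage and remaining-time arithmetic is the same. A minimal sketch (the class and method names here are illustrative, not from the question):

import java.io.BufferedInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class DownloadProgress {

    // Same read loop as in the question, plus progress and ETA reporting.
    public static byte[] downloadWithProgress(String fileUrl) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(fileUrl).openConnection();
        long totalBytes = conn.getContentLengthLong(); // -1 if the server sent no Content-Length
        long startMillis = System.currentTimeMillis();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream in = new BufferedInputStream(conn.getInputStream())) {
            byte[] buf = new byte[1024];
            long downloaded = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
                downloaded += n;
                if (totalBytes > 0) {
                    // percent complete = bytes so far / total bytes
                    int percent = (int) (downloaded * 100 / totalBytes);
                    // ETA = elapsed time scaled by (bytes remaining / bytes received)
                    long elapsedMillis = System.currentTimeMillis() - startMillis;
                    long etaMillis = elapsedMillis * (totalBytes - downloaded) / downloaded;
                    System.out.println(percent + "% done, roughly " + (etaMillis / 1000) + "s left");
                }
            }
        }
        return out.toByteArray();
    }
}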
I've got a bit of code I've been using for a while to fetch data from a web server, and a few months ago I added compression support, which seems to be working well for "regular" HTTP responses where the whole document is contained in the response. It does not seem to be working when I use a Range header, though.
Here is the code doing the real work:
InputStream in = null;
int bufferSize = 4096;
int responseCode = conn.getResponseCode();
boolean error = 5 == responseCode / 100
             || 4 == responseCode / 100;
int bytesRead = 0;
try
{
    if(error)
        in = conn.getErrorStream();
    else
        in = conn.getInputStream();

    // Buffer the input
    in = new BufferedInputStream(in);

    // Handle compressed responses
    if("gzip".equalsIgnoreCase(conn.getHeaderField("Content-Encoding")))
        in = new GZIPInputStream(in);
    else if("deflate".equalsIgnoreCase(conn.getHeaderField("Content-Encoding")))
        in = new InflaterInputStream(in, new Inflater(true));

    int n;
    byte[] buffer = new byte[bufferSize];

    // Now, just write out all the bytes
    while(-1 != (n = in.read(buffer)))
    {
        bytesRead += n;
        out.write(buffer, 0, n);
    }
}
catch (IOException ioe)
{
    System.err.println("Got IOException after reading " + bytesRead + " bytes");
    throw ioe;
}
finally
{
    if(null != in) try { in.close(); }
    catch (IOException ioe)
    {
        System.err.println("Could not close InputStream");
        ioe.printStackTrace();
    }
}
Hitting a URL with the header Accept-Encoding: gzip,deflate,identity works just great: I can see that the data is returned by the server in compressed format, and the above code decompresses it nicely.
If I then add a Range: bytes=0-50 header, I get the following exception:
Got IOException after reading 0 bytes
Exception in thread "main" java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.GZIPInputStream.read(GZIPInputStream.java:116)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at [my code]([my code]:511)
Line 511 in my code is the line containing the in.read() call. The response includes the following headers:
Content-Type: text/html
Content-Encoding: gzip
Content-Range: bytes 0-50/751
Content-Length: 51
I have verified that, if I don't attempt to decompress the response, I actually get 51 bytes in the response... it's not a server failure (at least that I can tell). My server (Apache httpd) does not support "deflate", so I can't test another compression scheme (at least not right now).
I've also tried to request much more data (like 700 bytes of the total 751 bytes in the target resource) and I get the same kind of error.
Is there something I'm missing?
Update
Sorry, I forgot to include that I'm hitting Apache/2.2.22 on Linux. There aren't any server bugs I'm aware of. I'll have a bit of trouble verifying the compressed bytes that I retrieve from the server, as the "gzip" Content-Encoding is quite bare... e.g. I believe I can't just use "gunzip" on the command-line to decompress those bytes. I'll give it a try, though.
You can use 'gunzip' to decompress it, just keep in mind that the first 50 bytes probably aren't enough for gzip to decompress anything (headers, dictionaries etc). Try this: wget -O- -q <URL> | head -c 50 | zcat with your URL to see whether normal gzip works where your code fails.
Sigh... switching to another server (which happens to be running Apache/2.2.25) shows that my code does in fact work. The original target server appears to be affected by AWS's current outage in the US-EAST availability zone. I'm going to chalk this up to network errors and close this question. Thanks to those who offered suggestions.
I have an IP camera that becomes too slow whenever multiple users connect to it.
I was thinking about pulling the stream from the camera with my server, so that multiple clients could stream from the server instead of from the poor IP camera.
I set up a quick and dirty servlet just to see if it works:
@RequestMapping(value = "/", method = RequestMethod.GET, produces = "application/x-shockwave-flash")
public String getVideoStream(Locale locale, Model model, HttpServletRequest request,
        HttpServletResponse response) throws IOException {
    logger.info("Start");
    // An IP camera stream example
    URL url = new URL("http://www.earthcam.com/swf/ads5.swf");
    URLConnection yc = url.openConnection();
    OutputStream out = response.getOutputStream();
    InputStream in = yc.getInputStream();
    String mimeType = "application/x-shockwave-flash";
    byte[] bytes = new byte[100000];
    int bytesRead;
    response.setContentType(mimeType);
    while ((bytesRead = in.read(bytes)) != -1) {
        out.write(bytes, 0, bytesRead);
    }
    logger.info("End");
I believe this might work. My problem right now is that
bytesRead = in.read(bytes)
reads only 61894 bytes and that's it :( Why is that happening? Am I fetching the stream the wrong way?
By the way, I tried to do this with Xuggler, but I got an error saying compressed SWF is not supported.
Thanks.
Your code is working perfectly. I just fetched ads5.swf from your server and it is, indeed, 61894 bytes in length. The problem you're facing is that the SWF file is just the movie player. After being downloaded, the player then fetches the video stream from the server. By default (if this is some kind of turn-key streaming solution), it's probably trying to get the stream from the same server where the SWF comes from.
I have a servlet which returns a CSV file that is 'working' over HTTP in both Internet Explorer and Firefox. When I execute the same servlet over HTTPS, only Firefox continues to download the CSV file. I don't think this is necessarily the Internet Explorer 6 or 7 issue described on MSDN:
The message is:
Internet Explorer cannot download
data.csv from mydomain.com Internet
Explorer was not able to open this
Internet site. The requested site is
either unavailable or cannot be found.
Please try again later.
Please note that the site is still 'up' after this message and you can continue to browse it; it's just the download of the CSV that prompts this message. I have been able to access similar files over HTTPS in IE from other J2EE applications, so I believe it is our code. Should we not be closing the bufferedOutputStream?
UPDATE
On whether or not to close the output stream:
I asked this question on the Java Posse forums and the discussion there is also insightful. In the end it seems that no container should rely on the 'client' (your servlet code in this case) to close this output stream. So if failing to close the stream in your servlet causes a problem, that is more a reflection of a poor servlet container implementation than of your code. I cited the behavior of the IDEs and tutorials from Sun, Oracle and BEA, and how they are also inconsistent in whether they close the stream or not.
About IE-specific behavior: in our case a separate product, 'Oracle Web Cache', was introducing additional header values, which impacts Internet Explorer only because of the way IE implements the 'No Cache' requirement (see the MSDN article).
The code is:
public class DownloadServlet extends HttpServlet {

    public void doGet(HttpServletRequest request,
                      HttpServletResponse response) throws ServletException, IOException {
        ServletOutputStream out = null;
        ByteArrayInputStream byteArrayInputStream = null;
        BufferedOutputStream bufferedOutputStream = null;
        try {
            response.setContentType("text/csv");
            String disposition = "attachment; fileName=data.csv";
            response.setHeader("Content-Disposition", disposition);

            out = response.getOutputStream();
            byte[] blobData = dao.getCSV();

            // setup the input as the blob to write out to the client
            byteArrayInputStream = new ByteArrayInputStream(blobData);
            bufferedOutputStream = new BufferedOutputStream(out);

            int length = blobData.length;
            response.setContentLength(length);
            //byte[] buff = new byte[length];
            byte[] buff = new byte[(1024 * 1024) * 2];

            // now lets shove the data down
            int bytesRead;
            // Simple read/write loop.
            while (-1 != (bytesRead = byteArrayInputStream.read(buff, 0, buff.length))) {
                bufferedOutputStream.write(buff, 0, bytesRead);
            }
            out.flush();
            out.close();
        } catch (Exception e) {
            System.err.println(e);
            throw e;
        } finally {
            if (out != null)
                out.close();
            if (byteArrayInputStream != null) {
                byteArrayInputStream.close();
            }
            if (bufferedOutputStream != null) {
                bufferedOutputStream.close();
            }
        }
    }
}
I am really confused by your roundabout ("from back through the breast into the head") write mechanism. Why not keep it simple (the servlet output stream will be buffered; that's the container's job):
byte[] csv = dao.getCSV();
response.setContentType("text/csv");
response.setHeader("Content-Disposition", "attachment; filename=data.csv");
response.setContentLength(csv.length);
ServletOutputStream out = response.getOutputStream();
out.write(csv);
There should also be no need to flush the output stream, nor to close it.
The header content should not be parsed case-sensitively by IE, but who knows: do not camel-case fileName. The next question is the encoding. CSV is text, so you should use getWriter() instead of getOutputStream() and set the content type to "text/csv; charset=UTF-8", for example. But then the DAO should provide the CSV as a String instead of a byte[], as sketched below.
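A minimal sketch of that character-based variant (dao.getCSVAsString() is a hypothetical method returning the CSV as a String; it is not in the code above):

// Sketch only: write the CSV as text with an explicit charset.
protected void doGet(HttpServletRequest request, HttpServletResponse response)
        throws ServletException, IOException {
    String csv = dao.getCSVAsString();  // hypothetical String-returning variant of dao.getCSV()
    byte[] body = csv.getBytes(java.nio.charset.StandardCharsets.UTF_8);

    response.setContentType("text/csv; charset=UTF-8");
    response.setHeader("Content-Disposition", "attachment; filename=data.csv");
    // Content-Length must be the encoded byte count, not the character count.
    response.setContentLength(body.length);

    java.io.PrintWriter out = response.getWriter();
    out.write(csv);
    // No explicit flush/close needed; the container handles that.
}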
The servlet code has nothing to do with HTTPS, so the protocol does not matter from the server's side. You can test the servlet from localhost over HTTP, I hope.
What about filters in your application? A filter may also set an HTTP header (or a footer), with cache-control for example, as in the sketch below.
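For illustration, a minimal sketch of such a filter (CacheControlFilter is a hypothetical example, not the filter from your application); headers like these are exactly the kind of thing that trips up IE downloads over SSL, as the MSDN article describes:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter that adds no-cache headers to every response.
public class CacheControlFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        httpResponse.setHeader("Cache-Control", "no-cache, no-store");
        httpResponse.setHeader("Pragma", "no-cache");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }
}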