How to stream a URL from a .pls file with Java?

I want to stream a radio station with Java. My approach is to download the playlist file (.pls), extract one of the URLs given in that file, and finally stream it with Java. However, I cannot find a way to do it. I tried JMF, but I get java.io.IOException: Invalid Http response every time I run the code.
Here is what I tried:
Player player = Manager.createPlayer(new URL("http://50.7.98.106:8398"));
player.start();
The .pls file:
[playlist]
NumberOfEntries=1
File1=http://50.7.98.106:8398/
In the piece of code above I'm setting the URL by hand, just for testing, but I've successfully written the .pls downloading code and it works. From this comes another question: would it be a better approach to simply play the .pls file locally? Can that be done?
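Since a .pls file is just an INI-style text file, extracting the stream URLs locally is straightforward. A minimal sketch, assuming the simple format shown above (the helper name is hypothetical):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: collect the FileN entries from the .pls text.
static List<String> extractUrls(String plsContent) throws IOException {
    List<String> urls = new ArrayList<>();
    BufferedReader reader = new BufferedReader(new StringReader(plsContent));
    String line;
    while ((line = reader.readLine()) != null) {
        line = line.trim();
        // Entries look like "File1=http://..."; skip NumberOfEntries etc.
        if (line.startsWith("File") && line.contains("=")) {
            urls.add(line.substring(line.indexOf('=') + 1).trim());
        }
    }
    return urls;
}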

You are connecting to an Icecast server, not a web server. That address/port is not sending back HTTP responses, it's sending back Icecast responses.
The HTTP specification states that the response line must start with the HTTP version of the response. Icecast responses don't do that, so they are not valid HTTP responses.
I don't know anything about implementing an Icecast client, but I suspect such clients interpret an http: URL in a .pls file as being just a host and port specification, rather than a true HTTP URL.
You can't use the URL class to download your stream, because it (rightly) rejects invalid HTTP responses, so you'll need to read the data yourself. Fortunately, that part is fairly easy:
// Assumes java.io.*, java.net.Socket and java.nio.charset.StandardCharsets imports.
Socket connection = new Socket("50.7.98.106", 8398);
// Use CRLF line endings and send a Host header; HTTP/1.1 requires it,
// and stricter servers reject requests without it.
String request = "GET / HTTP/1.1\r\nHost: 50.7.98.106\r\n\r\n";
OutputStream out = connection.getOutputStream();
out.write(request.getBytes(StandardCharsets.US_ASCII));
out.flush();
InputStream response = connection.getInputStream();
// Skip the status line and headers until we read a blank line.
// '\r' is ignored so CRLF-terminated headers produce a zero-length blank line.
int lineLength;
do {
    lineLength = 0;
    for (int b = response.read(); b >= 0 && b != '\n'; b = response.read()) {
        if (b != '\r') {
            lineLength++;
        }
    }
} while (lineLength > 0);
// The rest of the stream is audio data.
// ...
You will still need something to play the audio. Java Sound can't play MP3s (without a plugin), and JMF and JavaFX require a URL, not just an InputStream.
I see a lot of recommendations on Stack Overflow for JLayer, whose Player class accepts an InputStream. Using that, the rest of the code is:
Player player = new Player(response);  // javazoom.jl.player.Player from JLayer
player.play();
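Putting the two snippets together, a runnable sketch only (assuming the JLayer jar is on the classpath; the address is the one from the question):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;
import javazoom.jl.player.Player;

public class RadioPlayer {
    public static void main(String[] args) throws Exception {
        Socket connection = new Socket("50.7.98.106", 8398);
        OutputStream out = connection.getOutputStream();
        out.write("GET / HTTP/1.1\r\nHost: 50.7.98.106\r\n\r\n"
                .getBytes(StandardCharsets.US_ASCII));
        out.flush();
        InputStream response = connection.getInputStream();
        skipHeaders(response);              // the blank-line loop shown above
        new Player(response).play();        // blocks until the stream ends
    }

    private static void skipHeaders(InputStream response) throws IOException {
        int lineLength;
        do {
            lineLength = 0;
            for (int b = response.read(); b >= 0 && b != '\n'; b = response.read()) {
                if (b != '\r') lineLength++;
            }
        } while (lineLength > 0);
    }
}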

Related

How to properly send an HTTP response to a client in Java

I'm in the process of making a server to display an HTML page as a college assessment. All the files are stored locally. I'm using Firefox to connect to the server (Chrome seems to block the images).
The code below works fine if I type an HTTP response line in the HTML file itself that's being transferred (i.e. 'HTTP/1.1 200 OK' at the start of the HTML file):
{
byte[] pageToBytes = Files.readAllBytes(webContent.toPath());
os.write(pageToBytes);
os.flush();
os.close();
}
But if I try to send the HTTP response first, then the HTML after, it refuses to load the images specified in my HTML code.
Here is the code I'm trying to figure out the problem with:
{
byte[] pageToBytes = Files.readAllBytes(webContent.toPath());
String HttpOK = "HTTP/1.1 200 OK\n\r";
os.write(HttpOK.getBytes());
os.write(pageToBytes);
os.flush();
os.close();
}
Any insights would be much appreciated :)
You should read about how HTTP requests work. When the browser makes a request, it opens a channel of communication between the server and the client; that channel is the stream you are writing to, and it closes once the client has received a response.
In your code you are responding once, but by the second write the stream is already closed, which is why the response body never reaches the client. Also, a typical server framework sends a 200 status code automatically when there is no error or the code says otherwise.
Since you are trying to make an HTTP server, it is a good idea to look at a reference that explains how to handle an HTTP request & response.
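Note also that the question's status line ends with "\n\r": HTTP lines must end with "\r\n", and a blank line must separate the headers from the body. A minimal sketch of a well-formed response, reusing os and webContent from the question (assuming java.nio.file.Files and java.nio.charset.StandardCharsets are imported; the header set is illustrative):
byte[] pageToBytes = Files.readAllBytes(webContent.toPath());
String headers = "HTTP/1.1 200 OK\r\n"               // status line, CRLF-terminated
        + "Content-Type: text/html\r\n"              // lets the browser render HTML
        + "Content-Length: " + pageToBytes.length + "\r\n"
        + "\r\n";                                    // blank line ends the headers
os.write(headers.getBytes(StandardCharsets.US_ASCII));
os.write(pageToBytes);
os.flush();
os.close();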

Socket versus URL website access

I have a Java application which opens an existing company's website using the Socket class:
Socket sockSite;
InputStream inFile = null;
BufferedWriter out = null;
try
{
sockSite = new Socket( presetSite, 80 );
inFile = sockSite.getInputStream();
out = new BufferedWriter( new OutputStreamWriter(sockSite.getOutputStream()) );
}
catch ( IOException e )
{
...
}
out.write( "GET " + presetPath + " HTTP/1.1\r\n\r\n" );
out.flush();
I would read the website with the stream inFile and life is good.
Recently this started to fail. I was getting an HTTP 301 "site has moved" response, but with no moved-to link. The site still exists and responds to the same original HTTP reference in any web browser, but the above code comes back with the HTTP 301.
I changed the code to this:
URL url;
InputStream inFile = null;
try
{
url = new URL( presetSite + presetPath );
inFile = url.openStream();
}
catch ( IOException e )
{
...
}
I read the site from the inFile stream with the original code, and it now works again.
This difference doesn't just occur in Java; it also occurs in Perl (opening the website on port 80 with IO::Socket::INET and then issuing a GET fails, but using LWP::Simple's get just works). In other words, I get a failure if I open the web page on port 80 first and then do a GET, but it works fine if I use a class that does it "all at once" (one that just says, "get me the web page with such-and-such an HTTP address").
I thought I'd try the different approaches on http://www.microsoft.com and got an interesting result. With the "port 80" open followed by GET /, I received an HTTP 200 response with a page that said, "Your current user agent appears to be from an automated process...". But if I use the second method (the URL class in Java, or LWP in Perl), I simply get their web page.
So my question is: what do the URL class (in Java) and the LWP module (in Perl) do under the hood that makes them different from opening the website on port 80 and issuing a GET?
Most servers require the Host: header to allow virtual hosting (multiple domains on one IP address).
If you use packet-capturing software to see what's being sent when URL is used, you'll see that a lot more than just "GET /" goes over the wire; all sorts of additional header information is included. If a server gets just a bare "GET /", it's easy to deduce that there can't be a very sophisticated client on the other end. Also, HTTP 1.0 is outdated; the current version is 1.1.
The Java URL implementation delegates to HttpURLConnection when the URL starts with "http:".
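To see the difference in practice, here is a sketch of a hand-rolled request that adds the headers discussed above, reusing presetSite and presetPath from the question (the User-Agent value is illustrative, and StandardCharsets is assumed imported):
Socket sockSite = new Socket( presetSite, 80 );
BufferedWriter out = new BufferedWriter(
        new OutputStreamWriter( sockSite.getOutputStream(), StandardCharsets.US_ASCII ) );
out.write( "GET " + presetPath + " HTTP/1.1\r\n" );
out.write( "Host: " + presetSite + "\r\n" );              // required for virtual hosting
out.write( "User-Agent: Mozilla/5.0 (compatible)\r\n" );  // some sites check this
out.write( "Connection: close\r\n" );                     // close after one response
out.write( "\r\n" );                                      // blank line ends the headers
out.flush();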

reading bytes from web site

I am trying to create a proxy server.
I want to read websites byte by byte so that I can display images and everything else. I tried readLine, but I can't display images. Do you have any suggestions on how I can change my code and send all the data to the browser with a DataOutputStream object?
try{
Socket s = new Socket(InetAddress.getByName(req.hostname), 80);
String file = parcala(req.url);
DataOutputStream out = new DataOutputStream(clientSocket.getOutputStream());
BufferedReader dis = new BufferedReader(new InputStreamReader(s.getInputStream()));
PrintWriter socketOut = new PrintWriter(s.getOutputStream());
socketOut.print("GET "+ req.url + "\n\n");
//socketOut.print("Host: "+req.hostname);
socketOut.flush();
String line;
while ((line = dis.readLine()) != null){
System.out.println(line);
}
}
catch (Exception e){}
}
Edit:
This is what I am supposed to do. I can block banned web sites but can't allow other web sites in my program:
In the filter program, you will open a TCP socket at the specified port and wait for connections. If a request comes (i.e. the client types a URL to access a web site), the application will process it to decide whether access is allowed or not and then, using the same socket, it will send the reply back to the client. After the client has opened her connection to WebPolice (and her request has been checked and is allowed), the real web page needs to be shown to the client. Therefore, since the user already gave her request, it is now WebPolice's turn to forward the request so that the user can get the web page. Thus, WebPolice acts as a client and requests the web page. This means you need to open a connection to the web server (without closing the connection to the user), forward the request over this connection, get the reply and forward it back to the client. You will use threads to handle multiple connections (at the same time and/or at different times).
I don't know exactly what you're trying to do, but crafting an HTTP request and reading its response involves somewhat more than you have done here. readLine won't work on binary data anyway.
You can take a look at the URLConnection class (stolen here):
URL oracle = new URL("http://www.oracle.com/");
URLConnection yc = oracle.openConnection();
BufferedReader in = new BufferedReader(new InputStreamReader(yc.getInputStream()));
Then you can read textual data from the in object; for binary data, read from yc.getInputStream() directly instead of wrapping it in a Reader.
readLine will treat each line read as a String, so unless you want to mess around with conversions back to bytes, I wouldn't recommend it.
I would just read bytes until you can't read any more, then write them out. This lets you grab the images and keeps file headers intact, which can be important when dealing with files other than text.
Hope this helps.
Instead of using a BufferedReader, you can use the InputStream directly. It has several methods for reading bytes:
http://docs.oracle.com/javase/6/docs/api/java/io/InputStream.html
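As a sketch of what that looks like in the proxy, reusing s and the DataOutputStream out from the question's code: copy the raw bytes straight through instead of decoding lines.
InputStream fromServer = s.getInputStream();   // no Reader: keep the bytes raw
byte[] buffer = new byte[8192];
int n;
while ((n = fromServer.read(buffer)) != -1) {
    out.write(buffer, 0, n);                   // forward unchanged; works for images too
}
out.flush();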

Problem occurred when using OutputStream & (DataOutputStream or PrintWriter)

I wrote a simple server using Java socket programming that was intended to offer 2 files for download and display an HTML response when the download finished. What I did was use PrintWriter.print or DataOutputStream.writeBytes to send the string including HTML tags and the response string to the browser, then use OutputStream.write to send the requested file. The URL I typed in the browser was like 127.0.0.1/test1.zip. Relevant code fragments follow:
pout.print("<html>");
pout.print("<head>");
pout.print("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=ISO-8859-1/\">");
pout.print("<title>Response</title>");
pout.print("</head>");
pout.print("<body>");
pout.print(createResponseHeader(200, fileTypeCode));
pout.print("</body>");
pout.print("</html>");
pout.print(createResponseHeader(200, fileTypeCode));
pout.flush();
byte[] buffer = new byte[client.getSendBufferSize()];
int bytesRead = 0;
System.out.println("Sending...");
while((bytesRead = requestedFile.read(buffer))>-1)
{
out.write(buffer,0,bytesRead);
}
pout is a PrintWriter, while out is an OutputStream.
The problem is that when I try to use 127.0.0.1/test2.zip to download the file, it doesn't let me download; instead, it prints the response string and a lot of nonsense characters in the browser, e.g.
HTTP/1.0 200 OK
Connection: close
Server: COMP5116 Assignment Server v0
Content-Type: application/x-zip-compressed
PK‹â:Lmá^ЛàÍ test2.wmvì[y<”Ûÿ?3ÃØ—Ab¸eeË’5K"»±f_B*à Å*YÛ•¥M5h±¯u[(\·(-÷F)ß3ÏɽݺÝ×ýýñ{Íg^ÏûyžóYÏçœçyÎç¼P’>™îÝ+½Žö6A€;;ýmüH»êt©k]R#*€.G‰µÅRÏøÍLÔóZ; ´£åÑvP¹æª#õó”æÇ„‹&‡ëî9q‰Ú>LkÇÈyÖ2qãÌÆ(ãDŸã©ïÍš]Ð4iIJ0Àª3]B€ðÀ¸CôÁ`ä è1ü½¤Ã¬$ pBi
I believe it simply displays the zip file as a string, together with the response header. It seems that once the PrintWriter is used before the code that sends the file, the whole output stream is used for sending strings instead of bytes. However, if I put the code that sends the response AFTER the code that sends the file, the download works properly but no response message prints in the browser, just a blank page.
You have to remove your HTML code from here and send only the binary data; you can't mix the two in a single response.
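A minimal sketch of that, reusing out, client and requestedFile from the question (StandardCharsets assumed imported; Content-Disposition is one conventional way to ask the browser to download rather than display, and the header values are illustrative):
String header = "HTTP/1.0 200 OK\r\n"
        + "Content-Type: application/x-zip-compressed\r\n"
        + "Content-Disposition: attachment; filename=\"test2.zip\"\r\n"
        + "\r\n";                                       // blank line before the body
out.write(header.getBytes(StandardCharsets.US_ASCII)); // headers and body on one stream
byte[] buffer = new byte[client.getSendBufferSize()];
int bytesRead;
while ((bytesRead = requestedFile.read(buffer)) > -1) {
    out.write(buffer, 0, bytesRead);                    // raw zip bytes, no PrintWriter
}
out.flush();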
Achieving what you want is not easy.
I would start the download with some JavaScript code in the page, then have the page poll with Ajax a server-side servlet that knows whether the download has completed for that particular session; there is no "download completed" event in JavaScript.
To have this information, the download servlet updates the session with a flag when the download is complete.
When your Ajax call reports that the download has completed, you can change the text in the page or redirect to a new page.
Edit: Alternatively, if you can change your requirements, it would be much easier to show all the messages you have to show just before the download, and put target="_blank" in the download link so your page is not lost by clicking on the link.

No images displayed when website is called from a self-written webserver

I have a Java webserver (no standard software ... self-written). Everything seems to work fine, but when I try to call a page that contains pictures, those pictures are not displayed. Do I have to send the images with the output stream to the client? Am I missing an extra step?
As there is too much code to post it here, here is a little outline what happens or is supposed to happen:
1. client logs in
2. client gets a session id and so on
3. the client is connected with an output stream
4. we build the response with the HTML code for a certain 'GET' request
5. look what the GET-request is all about
6. send html response || file || image (not working yet)
So much for the basic outline ...
It sends css-files and stuff, but I still have a problem with images!
Does anybody have an idea? How can I send images from a server to a browser?
Thanks.
I checked requests from the client and responses from the server with Charles. It sends the files (like CSS or JS) fine, but not the images: though the status is "200 OK", the Transfer-Encoding is chunked ... I have no idea what that means!? Does anybody know?
EDIT:
Here is the file-reading code:
try{
File requestedFile = new File( file );
PrintStream out = new PrintStream( this.getHttpExchange().getResponseBody() );
// Send the file:
InputStream in = new FileInputStream( requestedFile );
byte content[] = new byte[(int)requestedFile.length()];
in.read( content );
try{
// some header stuff
out.write( content );
}
catch( Exception e ){
e.printStackTrace();
}
in.close();
if(out!=null){
out.close();
System.out.println( "FILE " + uri + " SEND!" );
}
}
catch ( /*all exceptions*/ ) {
// catch it ...
}
Your browser will send separate "GET image.png HTTP/1.1" requests to your server, so you have to handle those file requests too. There is no good browser-independent way to embed an image directly in HTML; only the <img src="data:base64codedimage"> scheme is available, and only in some browsers.
As you create your HTML response, you can include the contents of the external js/css files directly between <script></script> and <style></style> tags.
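For the data: URI option, a hedged sketch of building such a tag in Java (using java.util.Base64; the file name is illustrative):
byte[] img = Files.readAllBytes(Paths.get("header.jpg"));   // illustrative file name
String tag = "<img src=\"data:image/jpeg;base64,"
        + Base64.getEncoder().encodeToString(img) + "\">";  // inline, no extra GET needed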
Edit: I advise to use Firebug for further diagnostics.
Are you certain that you send out the correct MIME type for the files?
If you need a tiny OpenSource webserver to be inspired by, then have a look at http://www.acme.com/java/software/Acme.Serve.Serve.html which serves us well for ad-hoc server needs.
Do I have to send those external files or images with the output stream to the client?
The client will make separate requests for those files, which your server will have to serve. However, those requests can arrive over the same persistent connection (a.k.a. keep-alive). The two most likely reasons for your problem:
The client tries to send multiple requests over a persistent connection (which is the default with HTTP 1.1) and your server is not handling this correctly. The easiest way to avoid this is to send a Connection: close header with the response.
The client tries to open a separate connection and your server isn't handling it correctly.
Edit:
There's a problem with this line:
in.read( content );
This method is not guaranteed to fill the array; it reads an arbitrary number of bytes and returns that number. You have to call it in a loop to make sure everything is read. Since you need a loop anyway, it's a good idea to use a smaller array as a buffer, to avoid keeping the whole file in memory and running into an OutOfMemoryError with large files.
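Applied to the question's code (reusing requestedFile and out), a sketch of that loop:
InputStream in = new FileInputStream(requestedFile);
byte[] buffer = new byte[8192];              // small buffer instead of the whole file
int n;
while ((n = in.read(buffer)) != -1) {        // read() may return fewer bytes than asked
    out.write(buffer, 0, n);
}
out.flush();
in.close();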
Probably step #4 is where you are going wrong:
// 4. we built the response with the HTML-Code for a certain 'GET'-request
Some of the requests will be a 'GET /css/styles.css' or 'GET /js/main.js' or 'GET /images/header.jpg'. Make sure you stream those files in those circumstances - try loading those URLs directly.
Images (and CSS/JS files) are requested by the browser as completely separate GET requests, so there's definitely no need to "send those ... with the output stream" along with the page. So if you're getting pages served up OK but images aren't being loaded, my first guess would be that you're not setting your response headers appropriately (for example, setting the Content-Type of the response to text/html), so the browser isn't interpreting it as a proper page and therefore isn't loading the images.
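If you need a quick way to choose a Content-Type, a hypothetical helper mapping extensions to MIME types (the name and extension list are illustrative, not exhaustive):
static String contentTypeFor(String path) {
    if (path.endsWith(".html")) return "text/html";
    if (path.endsWith(".css"))  return "text/css";
    if (path.endsWith(".js"))   return "application/javascript";
    if (path.endsWith(".png"))  return "image/png";
    if (path.endsWith(".jpg") || path.endsWith(".jpeg")) return "image/jpeg";
    return "application/octet-stream";       // safe default for unknown types
}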
Some other things to try if that doesn't work:
Check if you can access an image directly
Use something like firebug or fiddler to check whether the browser is actually requesting the image/css/js files & that all your request/response headers look ok
Use an existing web server!
