Tomcat - Servlet response blocking - problems with flush - java

I'm using Tomcat 6.0.36 and JRE 1.5.0, and I'm doing development work on Windows 7.
As a proof of concept for some work I'm doing, I'm HTTP posting some XML from Java code over a socket to a servlet. The
servlet then echoes the XML back. In my first implementation, I handed the input stream at both ends to an XML
document factory to extract the XML that was sent over the wire. This worked without a hitch in the servlet but failed
on the client side. It turned out that it failed on the client side because the reading of the response was blocking
for so long that the document factory timed out and threw an exception before the entire response had arrived.
(The behaviour of the document factory is now moot because, as I describe below, I get the same blocking issue
without the use of the document factory.)
To try to work through this blocking issue, I then came up with a simpler version of the client side code and the
servlet. In this simpler version, I eliminated the document builder from the equation. The code on both sides now
simply reads the text from their respective input streams.
Unfortunately, I still have this blocking issue with the response and, as I describe below, it has not been resolved by
simply calling response.flushBuffer(). Google searches retrieved only one relevant topic that I could find
(Tomcat does not flush the response buffer) but this was not the exact same
issue.
I have included my code and explained the exact issues below.
Here's my servlet code (remember, this is bare-bones proof-of-concept code, not production code),
import java.io.InputStreamReader;
import java.io.LineNumberReader;

import javax.servlet.ServletConfig;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public final class EchoXmlServlet extends HttpServlet {

    public void init(ServletConfig config) throws ServletException {
        System.out.println("EchoXmlServlet loaded.");
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException {
    }

    public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException {
        try {
            processRequest(request, response);
        }
        catch(Exception e) {
            e.printStackTrace();
            throw new ServletException(e);
        }
        System.out.println("Response sent.");
        return;
    }

    private final void processRequest(HttpServletRequest request, final HttpServletResponse response) throws Exception {
        String line = null;
        StringBuilder sb = new StringBuilder();
        LineNumberReader lineReader = new LineNumberReader(new InputStreamReader(request.getInputStream(), "UTF-8"));
        while((line = lineReader.readLine()) != null) {
            System.out.println("line: " + line);
            sb.append(line);
            sb.append("\n");
        }
        sb.append("An additional line to see when it turns up on the client.");
        System.out.println(sb);
        response.setHeader("Content-Type", "text/xml;charset=UTF-8");
        response.getOutputStream().write(sb.toString().getBytes("UTF-8"));

        // Some things that were tried.
        //response.getOutputStream().print(sb.toString());
        //response.getOutputStream().print("\r\n");
        //response.getOutputStream().flush();
        //response.flushBuffer();
    }

    public void destroy() {
    }
}
Here's my client side code,
import java.io.BufferedOutputStream;
import java.io.InputStreamReader;
import java.io.LineNumberReader;
import java.io.OutputStream;
import java.net.Socket;

public final class SimpleSender {

    private String host;
    private String path;
    private int port;

    public SimpleSender(String host, String path, int port) {
        this.host = host;
        this.path = path;
        this.port = port;
    }

    public void execute() {
        Socket connection = null;
        String line;
        try {
            byte[] xmlBytes = getXmlBytes();
            byte[] headerBytes = getHeaderBytes(xmlBytes.length);
            connection = new Socket(this.host, this.port);
            OutputStream outputStream = new BufferedOutputStream(connection.getOutputStream());
            outputStream.write(headerBytes);
            outputStream.write(xmlBytes);
            outputStream.flush();
            LineNumberReader lineReader
                = new LineNumberReader(new InputStreamReader(connection.getInputStream(), "UTF-8"));
            while((line = lineReader.readLine()) != null) {
                System.out.println("line: " + line);
            }
            System.out.println("The response is read.");
        }
        catch(Exception e) {
            e.printStackTrace();
        }
        finally {
            try {
                connection.close();
            }
            catch(Exception e) {}
        }
    }

    private byte[] getXmlBytes() throws Exception {
        StringBuffer sb = null;
        sb = new StringBuffer()
            .append("<my-xml>\n")
            .append("Hello to myself.\n")
            .append("</my-xml>\n");
        return sb.toString().getBytes("UTF-8");
    }

    private byte[] getHeaderBytes(int contentLength) throws Exception {
        StringBuffer sb = null;
        sb = new StringBuffer()
            .append("POST ")
            .append(this.path)
            .append(" HTTP/1.1\r\n")
            .append("Host: ")
            .append(this.host)
            .append("\r\n")
            .append("Content-Type: text/xml;charset=UTF-8\r\n")
            .append("Content-Length: ")
            .append(contentLength)
            .append("\r\n")
            .append("\r\n");
        return sb.toString().getBytes("UTF-8");
    }
}
When a request is sent to the servlet via a call to SimpleSender.execute(), the servlet code that receives the
request reads the XML without a hitch. My servlet code also exits from processRequest() and doPost() without a
hitch. This is the output on the server, which appears immediately (i.e. there is no blocking between any of the output lines):
line: <my-xml>
line: Hello to myself.
line: </my-xml>
<my-xml>
Hello to myself.
</my-xml>
An additional line to see when it turns up on the client.
Response sent.
The output above is exactly as expected.
On the client side, however, the code outputs the following then blocks:
HELLO FROM MAIN
line: HTTP/1.1 200 OK
line: Server: Apache-Coyote/1.1
line: Content-Type: text/xml;charset=UTF-8
line: Content-Length: 74
line: Date: Sun, 18 Nov 2012 23:58:43 GMT
line:
line: <my-xml>
line: Hello to myself.
line: </my-xml>
After about 20 seconds of blocking (I timed it), the following lines are output on the client side,
line: An additional line to see when it turns up on the client.
The response is read.
GOODBYE FROM MAIN
Note that the entire output on the server side is fully visible while the blocking is occurring on the client side.
From there, I tried to flush on the server side to try to fix this issue. I independently tried two methods of flushing:
response.flushBuffer() and response.getOutputStream().flush(). With both methods of flushing, I still had blocking on
the client side (but in a different part of the response), but I had other issues as well. Here's where the client
blocked,
HELLO FROM MAIN
line: HTTP/1.1 200 OK
line: Server: Apache-Coyote/1.1
line: Content-Type: text/xml;charset=UTF-8
line: Transfer-Encoding: chunked
line: Date: Mon, 19 Nov 2012 00:21:53 GMT
line:
line: 4a
line: <my-xml>
line: Hello to myself.
line: </my-xml>
line: An additional line to see when it turns up on the client.
line: 0
line:
After blocking for about 20 seconds, the following is output on the client side,
The response is read.
GOODBYE FROM MAIN
There are three problems with this output on the client side. Firstly, the reading of the response still blocks; it
just blocks after a different part of the response. Secondly, unanticipated characters are returned ("4a", "0").
Finally, the headers have changed: I've lost the Content-Length header and gained a
"Transfer-Encoding: chunked" header.
So, without a flush, my response is blocking prior to sending the final line and a termination to the response. However,
with a flush, the response is still blocking but now I'm getting characters I don't want and a change to the headers I
don't want.
In Tomcat, my connector has the default definition,
<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
The connectionTimeout is set for 20 seconds. When I changed this to 10 seconds, my client side code blocked for 10
seconds instead of 20. So it appears that it is the connection timeout, as managed by Tomcat, that is causing my
response to be fully flushed and terminated.
Is there something additional I should be doing in my servlet code to indicate that my response is finished?
Has anyone got suggestions as to why my response is blocking prior to sending the final line and termination indicator?
Has anyone got suggestions as to why flush is sending unwanted characters and why the response is still blocking after
a flush?
If someone has the time, could you tell me if you get the same issues if you try running the code included in this post?
EDIT - In response to Guido's first reply
Guido,
Thanks very much for your reply.
Your client is blocking because you are using readLine to read the
body of the message. readLine hangs because the body does not end with
a line feed
No, I don't think this is true. Firstly, in my original version of my code, I was not using line readers on either the client or server side. On both sides, I was handing the stream to the xml document factory and letting it read from the stream. On the server, this worked fine. On the client, it timed out. (On the client, I was reading to the end of the headers prior to passing the stream to the document factory.)
Secondly, when I change my client code to not use a line reader, the blocking still occurs. Here's a version of SimpleSender.execute() that does not use a line reader,
public void execute() {
    Socket connection = null;
    int byteCount = 0;
    try {
        byte[] xmlBytes = getXmlBytes();
        byte[] headerBytes = getHeaderBytes(xmlBytes.length);
        connection = new Socket(this.host, this.port);
        OutputStream outputStream = new BufferedOutputStream(connection.getOutputStream());
        outputStream.write(headerBytes);
        outputStream.write(xmlBytes);
        outputStream.flush();
        while(connection.getInputStream().read(new byte[1]) >= 0) {
            ++byteCount;
        }
        System.out.println("The response is read: " + byteCount);
    }
    catch(Exception e) {
        e.printStackTrace();
    }
    finally {
        try {
            connection.close();
        }
        catch(Exception e) {}
    }
    return;
}
The above code blocks at,
HELLO FROM MAIN
then 20 seconds later, finishes with,
The response is read: 235
GOODBYE FROM MAIN
I think the above shows conclusively the problem is not with the use of a line reader on the client side.
sb.append("An additional line to see when it turns up on the client.\n");
The addition of the newline in the above line just defers the block until one line later. I had tested this prior to my OP and I just tested again.
If you want to do your own HTTP parser, you have to read through the headers until you get two blank lines.
Yes, I do know that, but in this contrived simple example, it is a moot point. On the client, I am simply outputting the returned HTTP message, headers and all.
Then you need to scan the headers to see if you had a Content-Length header. If there is no Content-Length then you are done. If there is a Content-Length you need to parse it for the length, then read exactly that number of additional bytes from the stream. This allows HTTP to transport both text data and also binary data which has no line feeds.
Yup, all true but not relevant in this contrived simple example.
I recommend you replace the guts of your client code HTTP writer/parse with a pre-written client library that handles these details for you.
I agree entirely. I was actually hoping to pass off the handling of the streams to the xml document factories. As a way of dealing with my blocking issues, I also looked into Apache commons-httpclient. The new version (httpcomponents) still leaves it to the developer to handle the stream of a post return (from what I can tell), so that was of no use. If you can suggest another library, I'd be interested for sure.
I've disagreed with your points but I thank you for your reply and I mean no offense or any negative intimations by my disagreement. I'm obviously doing something wrong or not doing something I should, but I don't think the line reader is the issue. Additionally, where are those funky characters coming from if I flush? Why does the blocking occur when a line reader is not in use on the client side?
Also, I have replicated the issue on Jetty. Hence, this is definitely not a Tomcat issue and very much a 'me' issue. I'm doing something wrong but I don't know what it is.

Your server code looks fine. The problem is with your client code. It does not obey the HTTP protocol and is treating the response like a bunch of lines.
Quick fix on the server. Change to:
sb.append("An additional line to see when it turns up on the client.\n");
Your client is blocking because you are using readLine to read the body of the message. readLine hangs because the body does not end with a line feed. Finally Tomcat times out, closes the connection, your buffered reader detects this and returns the remaining data.
If you make the change above (to the server), your client will appear to work as you expect, even though it is still wrong.
If you want to do your own HTTP parser, you have to read through the headers until you get two blank lines. Then you need to scan the headers to see if you had a Content-Length header. If there is no Content-Length then you are done. If there is a Content-Length you need to parse it for the length, then read exactly that number of additional bytes from the stream. This allows HTTP to transport both text data and also binary data which has no line feeds.
I recommend you replace the guts of your client code HTTP writer/parse with a pre-written client library that handles these details for you.
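If you do want to hand-roll it, here is a minimal sketch of that approach (the class and method names are made up for illustration, and it ignores chunked transfer encoding and character-set subtleties): read the status line and headers up to the blank line, pick out Content-Length if present, then read exactly that many bytes for the body.
// Illustrative helper only, not production code.
import java.io.DataInputStream;
import java.io.IOException;
import java.io.InputStream;

final class SimpleHttpResponseReader {

    // Reads one HTTP response from the stream, honouring Content-Length if present.
    static String readResponse(InputStream in) throws IOException {
        DataInputStream din = new DataInputStream(in);
        StringBuilder headers = new StringBuilder();
        int contentLength = -1;
        String line;
        // Headers end at the first empty line.
        while ((line = readLine(din)).length() > 0) {
            headers.append(line).append("\n");
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring(15).trim());
            }
        }
        if (contentLength < 0) {
            return headers.toString(); // no Content-Length: body absent or delimited some other way (e.g. chunked)
        }
        byte[] body = new byte[contentLength];
        din.readFully(body); // read exactly Content-Length bytes, no more, no less
        return headers + "\n" + new String(body, "UTF-8");
    }

    // Reads one CRLF-terminated line byte-by-byte; good enough for headers.
    private static String readLine(DataInputStream din) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = din.read()) != -1 && b != '\n') {
            if (b != '\r') {
                sb.append((char) b);
            }
        }
        return sb.toString();
    }
}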

LOL, OK, I was doing something wrong (by omission). The solution to my issue? Add the following header to my HTTP request,
Connection: close
That simple. Without that, the connection was staying alive. My code was relying on the server signifying that it was finished, but the server was still listening on the open connection rather than closing it.
The header causes the server to close the connection when it finishes writing the response (which I guess is signified when its call to doPost(...) returns).
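For completeness, here is the client's getHeaderBytes(...) from above with that header added; the Connection: close line is the only change:
private byte[] getHeaderBytes(int contentLength) throws Exception {
    StringBuffer sb = new StringBuffer()
        .append("POST ")
        .append(this.path)
        .append(" HTTP/1.1\r\n")
        .append("Host: ")
        .append(this.host)
        .append("\r\n")
        .append("Connection: close\r\n")   // the missing piece: ask the server to close the connection when the response is done
        .append("Content-Type: text/xml;charset=UTF-8\r\n")
        .append("Content-Length: ")
        .append(contentLength)
        .append("\r\n")
        .append("\r\n");
    return sb.toString().getBytes("UTF-8");
}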
Addendum
With regard to the funky characters when flush() is called...
My server code, now that I'm using Connection: close, does not call flush(). However, if the content to be sent back is large enough (larger than the Tomcat connector's buffer size I suspect), I still get the funky characters sent back and the header 'Transfer-Encoding: chunked' appears in the response.
To fix this, I explicitly call, on the server side, response.setContentLength(...) prior to writing my response. When I do this, the Content-Length header is in the response instead of Transfer-Encoding: chunked, and I don't get the funky characters.
I'm not willing to burn any more time on this as my code is now working, but I do wonder if the funky characters were chunk delimiters that were no longer necessary once I explicitly set the content length.
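So the tail end of processRequest(...) in the servlet now looks something like this (a sketch; only the setContentLength(...) call is new relative to the servlet code above):
byte[] body = sb.toString().getBytes("UTF-8");
response.setHeader("Content-Type", "text/xml;charset=UTF-8");
response.setContentLength(body.length);   // keeps Tomcat from switching to Transfer-Encoding: chunked
response.getOutputStream().write(body);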

Related

Reading subsequent response from a socket

I am trying to read bytes into chars from a server which is not maintained by me; here I am the client. My issue is that I am not getting the required response from the request sent to the server. From my understanding, there are three common ways to detect the end of a message:
*Closing the connection at the end of the message.
*Putting the length of the message before the data itself
*Using a separator; some value which will never occur in the normal data
So this is what I have done so far. I am using sockets to write to the server like this:
Socket outgoing = new Socket(Host, Port);
String request = "GET http://www.firtRequest.com/ HTTP/1.1\r\nHost: www.firstRequest.com\r\n\r\n" + "GET http://www.secondRequest.com/ HTTP/1.1\r\nHost: www.secondRequest.com\r\n\r\n";
outgoing.getOutputStream().write(request.getBytes());
outgoing.getOutputStream().flush();
Using getInputStream() to read from the socket, I should get two responses back, but the second response carries an HTML tag which, from my understanding, isn't part of the response. So I'm guessing I'm not reading to the end of the stream for the first request sent to the server, or I'm not reading the responses properly altogether.
Output1:
HTTP/1.1 200 OK
Date: mon,24 Aug 2015 09:02:30 GMT
Content-Type: application/json
Content-Length: 42
Output2:
<!DOCTYPE html PUBLIC "-//W3C"
....
<head>
....
</head>
<body>
...
</body>
HTTP/1.1 200 OK
Here is my read method, in which I'm trying to detect the end of the stream using the "\r\n\r\n" sequence in the response or when the stream hits -1.
public static String ReadStream(InputStream inputStream) throws IOException {
    StringBuilder builder = new StringBuilder();
    while (true) {
        int rdL = inputStream.read();
        if (rdL == -1) {
            break;
        }
        // Convert the bytes read into characters
        builder.append((char) rdL);
        if (builder.indexOf("\r\n\r\n") != -1) {
            // EOS detected
            break;
        }
    }
    return builder.toString();
}
Any pointers to what I'm doing wrong to be getting that HTML tag? Thanks
there are three common ways:
There's only one 'common way' with HTTP, and that is to implement RFC 2616 correctly. You haven't made the slightest attempt here. Look it up. But there's no good reason to try to implement HTTP yourself when HttpURLConnection already exists, not to mention numerous third-party HTTP APIs.
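For instance, a minimal GET with HttpURLConnection looks roughly like this (a sketch only; the class name and URL are placeholders and error handling is omitted):
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public final class SimpleGet {
    public static void main(String[] args) throws IOException {
        URL url = new URL("http://www.example.com/");   // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode();            // request line and header parsing handled for you
        System.out.println("Status: " + status);
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = in.readLine()) != null) {        // body framing (Content-Length, chunked) handled for you
            System.out.println(line);
        }
        in.close();
        conn.disconnect();
    }
}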

Multiple HttpURLConnection calls for get throwing Premature end of file exception with InputStream

I'm trying to make multiple calls to a REST API using HttpURLConnection with GET method and retrieving the response through InputStream.
It worked fine previously for 3 months but now it's throwing below exception:
SAXException Occurred during getArtifactsUrl method:: org.xml.sax.SAXParseException; Premature end of file.
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source) [xercesImpl.jar:6.1.0.Final]
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source) [xercesImpl.jar:6.1.0.Final]
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:121) [:1.7.0_03]
Below is the line of code where I'm making the second call to parse the response:
request = (HttpURLConnection) endpointUrl.openConnection();
inputstream = request.getInputStream();
doc = dBuilder.parse(inputstream);
The first call is working fine using the request and inputstream objects but the second call is failing. I tried all possible answers I found on Google but no luck:
after every call:
inputstream.close();
request.disconnect();
Remember that request is an HttpURLConnection object.
I would greatly appreciate it if you could help solve this, as this is a high-priority production issue now!
First you should check for error cases and not assume it's always working.
Try this:
request = (HttpURLConnection) endpointUrl.openConnection();
request.connect(); // not really necessary (done automatically)
int statusCode = request.getResponseCode();
if (statusCode == 200) { // or maybe other 2xx codes?
    // Success - should work if server gives good response
    inputstream = request.getInputStream();
    // if you get status code 200 and still have the same error, you should
    // consider logging the stream to see what document you get from server.
    // (see below *)
    doc = dBuilder.parse(inputstream);
} else {
    // Something happened
    // handle error, try again if it makes sense, ...
    if (statusCode == 404) ... // resource not found
    if (statusCode == 500) ... // internal server error
    // maybe there is something interesting in the body
    inputstream = request.getErrorStream();
    // read and parse errorStream (but probably this is not the body
    // you expected)
}
Have a look at the List of HTTP status codes.
And in some nasty cases, there are other problems which are not easy to detect if you just sit behind HttpURLConnection. Then you could enable logging or snoop the TCP/IP traffic with an appropriate tool (depends on your infrastructure, rights, OS, ...). This SO post might help you.
*) In your case I suppose that you're getting a non-error status code from the server but unparseable XML. If logging the traffic is not your thing, you could read the InputStream, write it to a file, and then process the stream like before. Of course you can write the stream to a ByteArrayOutputStream, get the byte[], write those bytes to a file, and then convert them to a ByteArrayInputStream and give this to your XML parser. Or you could use Commons IO's TeeInputStream to handle that for you.
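For example, a simple way to capture the body for inspection while still parsing it is to buffer it first. This sketch reuses the request, dBuilder and doc variables from your code, writes the raw bytes to a hypothetical last-response.xml file, and needs the usual java.io imports (ByteArrayInputStream, ByteArrayOutputStream, FileOutputStream, InputStream):
// Buffer the raw response so it can be both logged to a file and parsed.
InputStream raw = request.getInputStream();
ByteArrayOutputStream buffer = new ByteArrayOutputStream();
byte[] chunk = new byte[4096];
int n;
while ((n = raw.read(chunk)) != -1) {
    buffer.write(chunk, 0, n);
}
byte[] bytes = buffer.toByteArray();

FileOutputStream log = new FileOutputStream("last-response.xml"); // hypothetical file name; inspect it if parsing fails
log.write(bytes);
log.close();

doc = dBuilder.parse(new ByteArrayInputStream(bytes)); // parse from the buffered copy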
There are cases where connection.getResponseCode() throws an exception. Then it was not possible to parse the HTTP header. This should only happen if there are strange errors in server software, hardware or perhaps a firewall, proxy or load balancer not behaving well.
One more thing: You might consider choosing an HTTP Client library and not directly use HttpURLConnection.

Http Post not posting data

I'm trying to post some data from a Java client using sockets. It talks to localhost running PHP code that simply spits out the POST params sent to it.
Here is Java Client:
public static void main(String[] args) throws Exception {
    Socket socket = new Socket("localhost", 8888);
    String reqStr = "testString";
    String urlParameters = URLEncoder.encode("myparam="+reqStr, "UTF-8");
    System.out.println("Params: " + urlParameters);
    try {
        Writer out = new OutputStreamWriter(socket.getOutputStream(), "UTF-8");
        out.write("POST /post3.php HTTP/1.1\r\n");
        out.write("Host: localhost:8888\r\n");
        out.write("Content-Length: " + Integer.toString(urlParameters.getBytes().length) + "\r\n");
        out.write("Content-Type: text/html\r\n\n");
        out.write(urlParameters);
        out.write("\r\n");
        out.flush();

        InputStream inputstream = socket.getInputStream();
        InputStreamReader inputstreamreader = new InputStreamReader(inputstream);
        BufferedReader bufferedreader = new BufferedReader(inputstreamreader);
        String string = null;
        while ((string = bufferedreader.readLine()) != null) {
            System.out.println("Received " + string);
        }
    } catch(Exception e) {
        e.printStackTrace();
    } finally {
        socket.close();
    }
}
This is how post3.php looks like:
<?php
$post = $_REQUEST;
echo print_r($post, true);
?>
I expect to see an array (myparams => "testString") as the response. But it's not passing the POST args to the server.
Here is output:
Received HTTP/1.1 200 OK
Received Date: Thu, 25 Aug 2011 20:25:56 GMT
Received Server: Apache/2.2.17 (Unix) mod_ssl/2.2.17 OpenSSL/0.9.8r DAV/2 PHP/5.3.6
Received X-Powered-By: PHP/5.3.6
Received Content-Length: 10
Received Content-Type: text/html
Received
Received Array
Received (
Received )
Just an FYI, this setup works for GET requests.
Any idea what's going on here?
As Jochen and chesles rightly point out, you are using the wrong Content-Type: header - it should indeed be application/x-www-form-urlencoded. However there are several other issues as well...
The last header should be separated from the body by a blank line. This should be a complete CRLF (\r\n); in your code it is just a newline (\n). This is an outright protocol violation and I'm a little surprised you haven't just got a 400 Bad Request back from the server, although Apache can be quite forgiving in this respect.
You should specify Connection: close to ensure that you are not left hanging around with open sockets, the server will close the connection as soon as the request is complete.
The final CRLF sequence is not required. PHP is intelligent enough to sort this out by itself, but other server languages and implementations may not be...
If you are working with any standardised protocol in its raw state, you should always start by at least scanning over the RFC.
Also, please learn to secure your Apache installs...
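Putting these points together, the request-writing part of your client might look roughly like this (a sketch based on your code above; the reading part stays the same):
Writer out = new OutputStreamWriter(socket.getOutputStream(), "UTF-8");
out.write("POST /post3.php HTTP/1.1\r\n");
out.write("Host: localhost:8888\r\n");
out.write("Connection: close\r\n");                                  // let the server close the connection when done
out.write("Content-Length: " + urlParameters.getBytes("UTF-8").length + "\r\n");
out.write("Content-Type: application/x-www-form-urlencoded\r\n");    // correct type for key=value form data
out.write("\r\n");                                                   // full CRLF blank line ends the headers
out.write(urlParameters);                                             // body: exactly Content-Length bytes, no trailing CRLF needed
out.flush();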
It looks like you are trying to send data in application/x-www-form-urlencoded format, but you are setting the Content-Type to text/html.
Use
out.write("Content-Type: application/x-www-form-urlencoded\n\n");
instead. As this page states:
The Content-Length and Content-Type headers are critical because they tell the web server how many bytes of data to expect, and what kind, identified by a MIME type.
For sending form data, i.e. data in the format key=value&key2=value2 use application/x-www-form-urlencoded. It doesn't matter if the value contains HTML, XML, or other data; the server will interpret it for you and you'll be able to retrieve the data as usual in the $_POST or $_REQUEST arrays on the PHP end.
Alternatively, you can send your data as raw HTML, XML, etc. using the appropriate Content-Type header, but you then have to retrieve the data manually in PHP by reading the special file php://input:
<?php
echo file_get_contents("php://input");
?>
As an aside, if you're using this for anything sufficiently complex, I would strongly recommend the use of an HTTP client library like HTTPClient.

Problem gzipping HttpPost body using Apache Client 4.1.1

I need to send an HttpPost with the body gzipped. The server accepts non-gzipped data as well but would prefer it gzipped, so I'm trying to convert some existing working code to use gzip.
The data is currently set with
httpMethod.setEntity(new UrlEncodedFormEntity(nameValuePairs));
I've tried subclassing HttpEntityWrapper
static class GzipWrapper extends HttpEntityWrapper
{
    public GzipWrapper(HttpEntity wrapped)
    {
        super(wrapped);
    }

    public void writeTo(OutputStream outstream)
        throws IOException
    {
        GZIPOutputStream gzip = new GZIPOutputStream(outstream);
        super.writeTo(gzip);
    }
}
and changed to
httpMethod.setEntity(new GzipWrapper(
    new UrlEncodedFormEntity(nameValuePairs)));
and added
if (!httpMethod.containsHeader("Accept-Encoding"))
{
    httpMethod.addHeader("Accept-Encoding", "gzip");
}
but now my request just times out. I think there must be something wrong with my GzipWrapper but I'm not sure what.
On another note, I looked at the http://hc.apache.org/httpcomponents-client-ga/httpclient/examples/org/apache/http/examples/client/ClientGZipContentCompression.java example. Aside from the fact that I don't like interceptors (because they make it difficult to follow program flow), it doesn't make sense to me: the request header is set to tell the server to accept gzip data, but nowhere does it actually gzip-encode any data; it only unzips the response.
(1) The GzipWrapper implementation is wrong. It transforms the entity content when writing it out to the output stream, but it still returns the Content-Length of the wrapped entity, causing the server to expect more input than is actually transmitted by the client.
(2) You completely misunderstand the purpose of the Accept-Encoding header
http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
(3) ClientGZipContentCompression sample is correct. It does not compress outgoing request entity because it is not meant to do so. See point (2)
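As a sketch of one way to address (1): compress the form entity into a byte array up front so the entity can report the correct (compressed) length, and label the body with Content-Encoding: gzip. This assumes your server actually accepts gzip-compressed request bodies, and reuses the nameValuePairs and httpMethod objects from your code (imports needed: org.apache.http.client.entity.UrlEncodedFormEntity, org.apache.http.entity.ByteArrayEntity, java.io.ByteArrayOutputStream, java.util.zip.GZIPOutputStream):
// Compress the url-encoded form body into memory first, so its length is known.
UrlEncodedFormEntity formEntity = new UrlEncodedFormEntity(nameValuePairs);
ByteArrayOutputStream buf = new ByteArrayOutputStream();
GZIPOutputStream gzip = new GZIPOutputStream(buf);
formEntity.writeTo(gzip);
gzip.close();                               // finishes the gzip stream; without this the data is truncated

ByteArrayEntity gzippedEntity = new ByteArrayEntity(buf.toByteArray());
gzippedEntity.setContentType("application/x-www-form-urlencoded");
gzippedEntity.setContentEncoding("gzip");   // tells the server how the body is encoded
httpMethod.setEntity(gzippedEntity);        // Content-Length now reflects the compressed size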

How can I send an HTTP Response using only standard network libraries?

I'm working on my first homework project in a web programming class, which is to write a simple web server in Java. I'm at the point where I have data being transmitted back and forth, and to the untrained eye, my baby server seems to be working fine. However, I can't find a way to send appropriate responses. (In other words, an invalid page request would show a 404-ish HTML page, but it still returns a 200 OK status when I view response headers).
I'm limited to being able to use standard network libraries for socket management and standard I/O libraries to read and write bytes and strings from an input stream. Here's some pertinent code:
From my main...
ServerSocket servSocket = new ServerSocket(port, 10); // Bind the socket to the port
System.out.println("Opened port " + port + " successfully!");
while(true) {
    //Accept the incoming socket, which means that the server process will
    //wait until the client connects, then prepare to handle client commands
    Socket newDataSocket = servSocket.accept();
    System.out.println("Client socket created and connected to server socket...");
    handleClient(newDataSocket); //Call handleClient method
}
From the handleClient method...(inside a loop that parses the request method and path)
if(checkURL.compareTo("/status") == 0) { // Check to see if status page has been requested
    System.out.println("STATUS PAGE"); // TEMPORARY. JUST TO MAKE SURE WE ARE PROPERLY ACCESSING STATUS PAGE
    sendFile("/status.html", dataStream);
}
else {
    sendFile(checkURL, dataStream); // If not status, just try the input as a file name
}
From sendFile method...
File f = new File(where); // Create the file object
if(f.exists() == true) { // Test if the file even exists so we can handle a 404 if not.
    DataInputStream din;
    try {
        din = new DataInputStream(new FileInputStream(f));
        int len = (int) f.length(); // Gets length of file in bytes
        byte[] buf = new byte[len];
        din.readFully(buf);
        writer.write("HTTP/1.1 200 OK\r\n"); // Return status code for OK (200)
        writer.write("Content-Length: " + len + "\r\n"); // WAS WRITING TO THE WRONG STREAM BEFORE!
        writer.write("Content-Type: "+type+"\r\n\r\n\r\n"); // TODO VERIFY NEW CONTENT-TYPE CODE
        out.write(buf); // Writes the FILE contents to the client
        out.flush();
        out.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace(); // Not really handled since that's not part of project spec, strictly for debug.
    }
}
else {
    writer.write("HTTP/1.1 404 Not Found\r\n"); // Attempting to handle 404 as simple as possible.
    writer.write("Content-Type: text/html\r\n\r\n\r\n");
    sendFile("/404.html", sock);
}
Can anybody explain how, in the conditional from sendFile, I can change the response in the 404 block (Like I said before, the response headers still show 200 OK)? This is bugging the crap out of me, and I just want to use the HTTPResponse class but I can't. (Also, content length and type aren't displayed if f.exists == true.)
Thanks!
Edit It looks to me like in the 404 situation, you're sending something like this:
HTTP/1.1 404 Not Found
Content-Type: text/html
HTTP/1.1 200 OK
Content-Length: 1234
Content-Type: text/html
...followed by the 404 page. Note the 200 line following the 404. This is because your 404 handling is calling sendFile, which is outputting the 200 response status code. This is probably confusing the receiver.
Old answer that missed that:
An HTTP response starts with a status line followed (optionally) by a series of headers, and then (optionally) includes a response body. The status line and headers are just lines in a defined format, like (to pick a random example):
HTTP/1.0 404 Not Found
To implement your small HTTP server, I'd recommend having a read through the spec and seeing what the responses should look like. It's a bit of a conceptual leap, but they really are just lines of text returned according to an agreed format. (Well, it was a conceptual leap for me some years back, anyway. I was used to environments that over-complicated things.)
It can also be helpful to do things like this from your favorite command line:
telnet www.google.com 80
GET /thispagewontbefound
...and press Enter. You'll get something like this:
HTTP/1.0 404 Not Found
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Date: Sun, 12 Sep 2010 23:01:14 GMT
Server: sffe
Content-Length: 1361
X-XSS-Protection: 1; mode=block
...followed by some HTML to provide a friendly 404 page. The first line above is the status line, the rest are headers. There's a blank line between the status line/headers and the first line of content (e.g., the page).
The problem you are seeing is most likely related to a missing flush() on your writer. Depending on which type of Writer you use the bytes are first written to a buffer that needs to be flushed to the stream. This would explain why Content-Length and Content-Type are missing in the output. Just flush it before you write additional data to the stream.
Further, you call sendFile("/404.html", sock). You did not post the full method here, but I suppose that this calls sendFile recursively and thus sends the 200 OK status line for your file /404.html.
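For example, the 404 branch could write its own status line, headers and body instead of recursing into sendFile, roughly like this (a sketch reusing the writer, out and DataInputStream pattern from your code; paths and error handling are simplified):
else {
    // Send the 404 status, headers and error page ourselves instead of recursing into sendFile(),
    // so no second "HTTP/1.1 200 OK" line sneaks into the response.
    File errorPage = new File("/404.html");
    byte[] buf = new byte[(int) errorPage.length()];
    DataInputStream din = new DataInputStream(new FileInputStream(errorPage));
    din.readFully(buf);
    din.close();
    writer.write("HTTP/1.1 404 Not Found\r\n");
    writer.write("Content-Length: " + buf.length + "\r\n");
    writer.write("Content-Type: text/html\r\n\r\n");
    writer.flush();              // flush the header writer before writing raw bytes to the underlying stream
    out.write(buf);              // the friendly 404 page body
    out.flush();
    out.close();
}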
Based on your reported symptoms, I think the real problem is that you are not actually talking to your server at all! The evidence is that 1) you cannot get a 404 response, and 2) a 200 response does not have the content length and type. Neither of these should be possible ... if you are really talking to the code listed above.
Maybe:
you are talking to an older version of your code; i.e. something is going wrong in your build / deploy cycle,
you are (mistakenly) trying to deploy / run your code in a web container (Jetty, Tomcat, etc), or
your client code / browser is actually talking to a different server due to proxying, an incorrect URL, or something like that.
I suggest that you add some trace printing / logging at appropriate points of your code to confirm that it is actually being invoked.
