I am writing a Java HTTP server. I thought the entire server was working (it uses threading), but I'm realizing that the piece of code that reads the request into a BufferedReader is not working consistently.
Here is the code that reads an incoming request:
private String receive(WebSocket webSocket) throws IOException {
    int chr;
    System.out.println("Receiving!");
    StringBuffer buffer = new StringBuffer();
    while ((chr = webSocket.in().read()) != -1) {
        buffer.append((char) chr);
        if (!webSocket.in().ready())
            break;
    }
    return buffer.toString();
}
My WebSocket interface just wraps the Socket and provides an in and an out. I did this so that I could mock out the socket and test my server. The implementing class looks like this:
package http.server.socket;

import java.io.*;
import java.net.Socket;

public class SystemSocket implements WebSocket {
    private Socket theConnection;
    private BufferedReader in;
    private OutputStream out;

    public SystemSocket(Socket theConnection) throws IOException {
        this.theConnection = theConnection;
        in = new BufferedReader(new InputStreamReader(theConnection.getInputStream()));
        out = new BufferedOutputStream(theConnection.getOutputStream());
    }

    public BufferedReader in() throws IOException {
        return in;
    }

    public OutputStream out() throws IOException {
        return out;
    }

    public void close() throws IOException {
        in.close();
        out.close();
        theConnection.close();
    }
}
The problem is that with each URL the user enters in a browser, two requests are made: one for the page requested and one for the favicon. Sometimes, it seems, the favicon request does not come in and the thread hangs.
Here's some debugging information I have printing to the console when things go right:
Receiving!
Receiving!
REQUEST STRING = GET /color_picker.html HT
[20130821 20:29:23] REQUEST: http://localhost:5000/color_picker.html
[20130821 20:29:23] PAGE RENDERED
REQUEST STRING = GET /favicon.ico HTTP/1.1
[20130821 20:29:23] REQUEST: http://localhost:5000/favicon.ico
[20130821 20:29:23] PAGE RENDERED
The "Receiving" message is getting printed whenever the request is getting read. So, in this case, the "Receiving" message got printed twice, two requests came in and two things were rendered. But then, the same page (but at a different time) will do this (after about 10 seconds):
Receiving!
Receiving!
REQUEST STRING = GET /color_picker.html HTTP/1.1
[20130821 20:41:25] REQUEST: http://localhost:5000/color_picker.html
[20130821 20:41:25] PAGE RENDERED
REQUEST STRING =
Exception in thread "ServerThread" java.lang.ArrayIndexOutOfBoundsException: 1
at http.request.Parser.setRequestLineData(Parser.java:42)
at http.request.Parser.setRequestHash(Parser.java:27)
at http.request.Parser.parse(Parser.java:13)
at http.request.Request.get(Request.java:18)
at http.server.ServerThread.run(ServerThread.java:39)
All the subsequent errors are because the request string is null, but I can't figure out why the request string is null. I can't even figure out how to debug it.
Can anyone help??
Also important to note: if the second request string doesn't come in right away, the user can request a new URL, which causes the second hung process to complete (so then the fourth requested URL is the one that hangs). It's only when the user stops requesting things that, on the last request, after about 10 seconds, I get the error. Sometimes I can request 20 different pages, and it's only after I stop requesting pages and wait a few seconds that I see an error. I think this is what is happening.
UPDATE:
Per the request, here is the setRequestLineData() method:
private void setRequestLineData() {
    requestHash = new HashMap<String, String>();
    if (requestLineParts.length == 3) {
        requestHash.put("httpMethod", requestLineParts[0]);
        requestHash.put("url", requestLineParts[1]); //line 42
        requestHash.put("httpProtocol", requestLineParts[2]);
    }
    else {
        requestHash.put("httpMethod", requestLineParts[0]);
        requestHash.put("url", requestLineParts[1]);
        requestHash.put("queryString", requestLineParts[2]);
        requestHash.put("httpProtocol", requestLineParts[3]);
    }
}
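The trace above suggests requestLineParts can end up with fewer parts than either branch expects: an empty request line splits into a single empty element, so indexing part 1 throws. As a hedged, standalone sketch of a defensive variant (the class name is illustrative, the query-string case is omitted for brevity, and this is not the asker's actual code):

```java
import java.util.HashMap;
import java.util.Map;

public class RequestLineParser {
    // Returns an empty map for an empty or malformed request line
    // instead of throwing ArrayIndexOutOfBoundsException.
    public static Map<String, String> parseRequestLine(String requestLine) {
        Map<String, String> requestHash = new HashMap<>();
        if (requestLine == null || requestLine.trim().isEmpty()) {
            // An empty read (e.g. a speculative connection the browser
            // opened but never used) produces no parts at all.
            return requestHash;
        }
        String[] parts = requestLine.trim().split("\\s+");
        if (parts.length < 3) {
            return requestHash; // malformed; caller can treat as a bad request
        }
        requestHash.put("httpMethod", parts[0]);
        requestHash.put("url", parts[1]);
        requestHash.put("httpProtocol", parts[parts.length - 1]);
        return requestHash;
    }
}
```

The caller can then check for an empty map and drop the connection rather than parsing further.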
UPDATE:
I think I figured out more about what is going on here with my mentor's help. His thought is that once a request is received, the browser starts another request right away to reduce load time for the next request. This sounds plausible to me, since I can load page after page after page, but it's only about 10 seconds after the last page is requested that I get an error. Currently, I'm handling this with a custom exception, but I am working on a better solution. Thanks for all the help, guys!
ready() isn't a valid test for end of message. It only tells you whether there is data available to be read without blocking. TCP isn't a message-oriented protocol, it is a byte-stream protocol. If you want messages you must implement them yourself, e.g. as lines, length-value tuples, type-length-value tuples, serialized objects, XML documents, ...
There are few if any correct uses of ready() (or available()), and this isn't one of them.
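To make the framing concrete, here is a minimal line-based sketch that reads an HTTP request head up to the blank line terminating the headers, instead of consulting ready(). It is a standalone illustration, not a full HTTP parser; a request with a body would additionally need a byte-counted read of Content-Length bytes after the head.

```java
import java.io.BufferedReader;
import java.io.IOException;

public class HeadReader {
    // Reads the request line plus headers, stopping at the blank line
    // that terminates the HTTP head. Returns null if the stream ends
    // before any data arrives.
    public static String readHead(BufferedReader in) throws IOException {
        StringBuilder head = new StringBuilder();
        String line;
        while ((line = in.readLine()) != null) {
            if (line.isEmpty()) {
                break; // blank line: end of headers
            }
            head.append(line).append("\r\n");
        }
        return head.length() == 0 ? null : head.toString();
    }
}
```

Because the loop stops at a structural marker of the message rather than at a momentarily empty buffer, a slow or chunk-delayed request no longer truncates mid-line the way the ready() check does.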
Before writing something like "why don't you use a Java HTTP client such as Apache's, etc.", I need you to know that the reason is SSL. I wish I could; they are very convenient, but I can't.
None of the available HTTP clients support the GOST cipher suite, and I get a handshake exception every time. The ones that do support the suite don't support SNI (they are also proprietary), so I'm returned the wrong cert and get a handshake exception over and over again.
The only solution was to configure openssl (with the gost engine) and curl, and finally execute the command from Java.
Having said that, I wrote a simple snippet for executing a command and getting input stream response:
public static InputStream executeCurlCommand(String finalCurlCommand) throws IOException {
    return Runtime.getRuntime().exec(finalCurlCommand).getInputStream();
}
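One hedged refinement of the exec snippet above: noise such as "GOST engine already loaded" is typically printed to stderr, so separating (or discarding) the error stream keeps stdout clean. This standalone sketch uses ProcessBuilder (Redirect.DISCARD requires Java 9+); the method name is illustrative:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Arrays;

public class CommandRunner {
    // Runs a command and returns stdout only; stderr (where engine
    // banners and progress noise usually go) is discarded.
    public static String runForStdout(String... command)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(Arrays.asList(command));
        pb.redirectError(ProcessBuilder.Redirect.DISCARD);
        Process p = pb.start();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (InputStream is = p.getInputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = is.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        p.waitFor(); // reap the process so it doesn't linger as a zombie
        return out.toString("UTF-8");
    }
}
```

Passing the command as separate arguments also avoids the whitespace-splitting surprises of the single-string Runtime.exec overload.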
Additionally, I can convert the returned IS to a string like that:
public static String convertResponseToString(InputStream isToConvertToString) throws IOException {
    StringWriter writer = new StringWriter();
    IOUtils.copy(isToConvertToString, writer, "UTF-8");
    return writer.toString();
}
However, I can't see a pattern that would reliably let me extract a good response, or a desired response header. Here's what I mean: after executing a command (with the -i flag), there can be lots and lots of information in the output (screenshot not included).
At first, I thought I could just split it on '\n', but a required response header, or the response itself, may not satisfy that criterion (prettified JSON or a long redirect URL breaks the rule).
Also, the static line "GOST engine already loaded" is a bit annoying (but I hope I'll be able to get rid of it, and that no unrelated info like that will show up).
I do believe that there's a pattern which I can use.
For now I can only do that:
public static String getLocationRedirectHeaderValue(String curlResponse) {
    String locationHeaderValue = curlResponse.substring(curlResponse.indexOf("Location: "));
    locationHeaderValue = locationHeaderValue.substring(0, locationHeaderValue.indexOf("\n")).replace("Location: ", "");
    return locationHeaderValue;
}
Which is not nice, obviously.
Thanks in advance.
Instead of reading the whole result as a single string you might want to consider reading it line by line using a scanner.
Then keep a few status variables around. The main task would be to separate header from body. In the body you might have a payload you want to treat differently (e.g. use GSON to make a JSON object).
The nice thing: Header and Body are separated by an empty line. So your code would be along these lines:
boolean inHeader = true;
StringBuilder b = new StringBuilder();
String lastLine = "";
// Technically you would need a Multimap
Map<String, String> headers = new HashMap<>();
Scanner scanner = new Scanner(yourInputStream);
while (scanner.hasNextLine()) {
    String line = scanner.nextLine();
    if (line.length() == 0) {
        inHeader = false;
    } else {
        if (inHeader) {
            // if the line starts with a space it is a
            // continuation of the previous header
            treatHeader(line, lastLine);
            lastLine = line;
        } else {
            b.append(line);
            b.append(System.lineSeparator());
        }
    }
}
String body = b.toString();
I'm having a difficult time understanding what is going wrong in my application. Below is code that successfully executes on the first pass (a prior HttpResponse), and then, on its second call, hangs at the dataInputStream.readFully(...) line.
doSomething(HttpResponse response) {
    HttpEntity responseBody = response.getEntity();
    long len = responseBody.getContentLength();
    byte[] payload = new byte[(int) len]; // <-- I've confirmed this is the correct length
    DataInputStream d = null;
    try {
        InputStream bais = responseBody.getContent();
        d = new DataInputStream(bais);
        d.readFully(payload); // <-- *** HANGS HERE! ***
        EntityUtils.consume(responseBody);
        ...
    } finally {
        if (d != null) {
            IOUtils.closeQuietly(d);
        }
    }
}
After blocking/hanging for 10+ seconds, the application times out and the calling thread is torn down.
I've inspected the response object, and confirmed its Content-Length to be what is expected.
Nothing jumped out at me reading the DataInputStream Javadoc.
Going through our commit history, I noticed that the IOUtils.closeQuietly(...) call was introduced. [Trust me here:] It's been hard to track our integration system's initial failure of this application, so I cannot confirm whether this finally block introduced any unintended behavior. From what I can tell, calling closeQuietly() there is the recommended approach.
Update: For clarity, this code is part of an application performing HTTP ranged GET requests. E.g., the first response pulls down N bytes, starting at byte 0. The second call GETs the next N bytes. The file is large, so more than two ranged GETs are required.
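readFully() blocks until either the requested byte count arrives or the stream signals EOF; a hang therefore means the connection delivers neither, which on persistent (keep-alive) connections often points to the previous entity not having been fully consumed, leaving the stream mispositioned. A socket read timeout (SO_TIMEOUT) is the usual guard so a stalled read fails instead of hanging. As a standalone sketch (plain streams, no HttpClient), this loop behaves like readFully() but reports how far it got when the stream ends early, which helps diagnose short deliveries:

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class BoundedReader {
    // Reads exactly len bytes. On premature end-of-stream it throws an
    // EOFException that says how many bytes actually arrived, instead of
    // the bare EOFException readFully() gives you.
    public static byte[] readExactly(InputStream in, int len) throws IOException {
        byte[] payload = new byte[len];
        int off = 0;
        while (off < len) {
            int n = in.read(payload, off, len - off);
            if (n == -1) {
                throw new EOFException("expected " + len + " bytes, got " + off);
            }
            off += n;
        }
        return payload;
    }
}
```

This is a diagnostic aid under the stated assumptions, not a fix by itself; the underlying issue is still making sure each response body is fully consumed before the connection is reused.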
Good evening. I have a little problem here. I'm trying to connect two clients to a server. I made two queues, where I put client1 and client2. I have this method to read from the queue, but I'm only able to read from one of the queues.
NimMessage receiveMessage(Clientconnection client) throws NimServerException {
    NimMessage request = null;
    while (request == null) {
        request = (NimMessage) client1.toserver.pollLast(); // read from queue
    }
    //log("\n" + request.toString());
    return request;
}
I call the method with this:
NimMessage request = receiveMessage(client1);
But when I want the second client to read the second queue with
request = receiveMessage(client2);
the receiveMessage method still reads from the client1 queue.
I can't figure out how to make the receiveMessage method use the second queue.
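One thing the snippet above shows: the method accepts a Clientconnection parameter named client, but the body polls client1.toserver, a field, so the argument is ignored. A minimal standalone sketch of the parameter-based version (the class and field names are simplified stand-ins for the question's types, not the actual code):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class QueueDemo {
    // Simplified stand-in for the question's Clientconnection.
    static class Client {
        final Deque<String> toserver = new ArrayDeque<>();
    }

    // Polls the queue of whichever client is passed in, instead of
    // always reading the client1 field.
    static String receiveMessage(Client client) {
        String request = null;
        while (request == null) {
            request = client.toserver.pollLast();
        }
        return request;
    }
}
```

The busy-wait loop mirrors the original; a java.util.concurrent.BlockingDeque with takeLast() would block without spinning and is usually preferable for a server.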
I want to create a link that would initiate a file download which would be asynchronous to the page itself, i.e. I want the page not to be locked during the file download. Should I make it be initiated outside wicket? Or is there something inside wicket that would let me set up a resource stream which would bypass the page locks?
Things I tried:
DownloadLink - locks the page, as stated in its doc. This was my starting point.
ResourceLink - did not state the locking explicitly in the doc, so I tried this, but it also locked the page.
At this point I've investigated the code of both links a bit and noticed they both schedule the download via ResourceStreamRequestHandler. Expecting that this kind of behavior could be just handler-specific, I attempted to schedule a custom handler I had written:
private void sendFile(final File file) throws IOException {
    IRequestHandler fileDownloadHandler = new IRequestHandler() {
        @Override
        public void respond(IRequestCycle requestCycle) {
            WebResponse response = (WebResponse) requestCycle.getResponse();
            OutputStream outStream = response.getOutputStream();
            response.setContentType("audio/x-wav");
            response.setContentLength((int) file.length());
            String fileName = "Somethingsomething.wav";
            // sets HTTP header
            response.setHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
            byte[] byteBuffer = new byte[1024];
            DataInputStream in = null;
            try {
                in = new DataInputStream(new FileInputStream(file));
                int length = 0;
                // reads the file's bytes and writes them to the response stream
                while ((in != null) && ((length = in.read(byteBuffer)) != -1)) {
                    outStream.write(byteBuffer, 0, length);
                }
                in.close();
                outStream.close();
            } catch (IOException e) {
                throw new PortalError("IOException trying to write the response", e);
            }
        }

        @Override
        public void detach(IRequestCycle requestCycle) {
        }
    };
    getRequestCycle().scheduleRequestHandlerAfterCurrent(fileDownloadHandler);
}
This did not quite work either, so I investigated further. I noticed that, contrary to my expectation, the "scheduled" request handlers were not executed on a separate request but on the same one. I figured the page must get locked for the first handler and then remain locked while the second one executes as well. So I attempted to force the download handler into a separate request (via an Ajax behavior):
public void startDownload(AjaxRequestTarget target) throws DownloadTargetNotFoundException {
    target.appendJavaScript("setTimeout(\"window.location.href='" + getCallbackUrl() + "'\", 100);");
}

@Override
public void onRequest() {
    sendFile(getFile());
    logger.debug("Download initiated");
}
I found this here and hoped it could be what I'd been looking for. However, unsurprisingly, the page still gets locked (I imagine because the behavior still has to be retrieved from the page, for which the page lock has to be acquired).
I'm at a loss where I should be looking next, especially after all this time trying to get a simple download link working. I was considering creating another web filter one layer above wicket, which could be signaled from within wicket to create the download after the wicket filter is finished with its work (and hence the page lock is already released), but that seems a bit excessive for a task like this.
Any suggestions are welcome.
You have to download from a resource; see
http://wicketinaction.com/2012/11/uploading-files-to-wicket-iresource/ and read http://wicket.apache.org/guide/guide/resources.html
I'm writing to the browser window using servletResponse.getWriter().write(String).
But how do I clear the text which was written previously by some other similar write call?
The short answer is, you cannot -- once the browser receives the response, there is no way to take it back. (Unless there is some way to abnormally stop a HTTP response to cause the client to reload the page, or something to that extent.)
Probably the last place a response can be "cleared" in a sense, is using the ServletResponse.reset method, which according to the Servlet Specification, will reset the buffer of the servlet's response.
However, this method also seems to have a catch, as it will only work if the buffer has not been committed (i.e. sent to the client) by the ServletOutputStream's flush method.
You cannot. The best thing is to write to a buffer (StringWriter / StringBuilder); then you can replace the written data at any time. Only when you know for sure what the response is should you write the buffer's content to the response.
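The buffer-then-commit pattern can be sketched in plain Java (servlet types omitted so the sketch is standalone; the class and method names are illustrative): build the output in a StringBuilder, overwrite it freely, and only hand the final text to the real writer at the end.

```java
import java.io.PrintWriter;

public class BufferedPage {
    private final StringBuilder buffer = new StringBuilder();

    public void write(String s) {
        buffer.append(s);
    }

    // "Clearing" is trivial while the content is still in our buffer.
    public void clear() {
        buffer.setLength(0);
    }

    // Only this step touches the real response writer; after it runs,
    // the content can no longer be taken back.
    public void commit(PrintWriter out) {
        out.write(buffer.toString());
        out.flush();
    }
}
```

In a servlet, commit would be the last thing the request handler does, after all business logic and possible rewrites have finished.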
On the same subject: is there a reason to write the response this way rather than using a view technology for your output, such as JSP, Velocity, FreeMarker, etc.?
If you have an immediate problem that you need to solve quickly, you could work around this design problem by increasing the size of the response buffer - you'll have to read your application server's docs to see if this is possible. However, this solution will not scale as you'll soon run into out-of-memory issues if you site traffic peaks.
No view technology will protect you from this issue. You should design your application to figure out what you're going to show the user before you start writing the response. That means doing all your DB access and business logic ahead of time. This is a common issue I've seen with convoluted system designs that use proxy objects that lazily access the database. E.g. ORM with Entity relationships are bad news if accessed from your view layer! There's not much you can do about an exception that happens 3/4 of the way into a rendered page.
Thinking about it, there might be some way to inject a page redirect via AJAX. Anyone ever heard of a solution like that?
Good luck with re-architecting your design!
I know the post is pretty old, but just thought of sharing my views on this.
I suppose you could actually use a Filter and a ServletResponseWrapper to wrap the response and pass it along the chain.
That is, you can have an output stream in the wrapper class and write to it instead of writing into the original response's output stream. You can clear the wrapper's output stream as and when you please, and finally write to the original response's output stream when you are done with your processing.
For example,
public class MyResponseWrapper extends HttpServletResponseWrapper {
    protected ByteArrayOutputStream baos = null;
    protected ServletOutputStream stream = null;
    protected PrintWriter writer = null;
    protected HttpServletResponse origResponse = null;

    public MyResponseWrapper(HttpServletResponse response) {
        super(response);
        origResponse = response;
    }

    public ServletOutputStream getOutputStream() throws IOException {
        if (writer != null) {
            throw new IllegalStateException("getWriter() has already been " +
                    "called for this response");
        }
        if (stream == null) {
            baos = new ByteArrayOutputStream();
            stream = new MyServletStream(baos);
        }
        return stream;
    }

    public PrintWriter getWriter() throws IOException {
        if (writer != null) {
            return writer;
        }
        if (stream != null) {
            throw new IllegalStateException("getOutputStream() has already " +
                    "been called for this response");
        }
        baos = new ByteArrayOutputStream();
        stream = new MyServletStream(baos);
        writer = new PrintWriter(stream);
        return writer;
    }

    public void commitToResponse() throws IOException {
        if (writer != null) {
            writer.flush(); // push any buffered writer output into baos first
        }
        origResponse.getOutputStream().write(baos.toByteArray());
        origResponse.flushBuffer();
    }

    private static class MyServletStream extends ServletOutputStream {
        ByteArrayOutputStream baos;

        MyServletStream(ByteArrayOutputStream baos) {
            this.baos = baos;
        }

        public void write(int param) throws IOException {
            baos.write(param);
        }
    }

    // other methods you want to implement
}