My goal is to connect to a server and then maintain the connection, since the server pushes data to me whenever it has any. I wrote the following, but it only works the first time. From the second iteration onwards, it throws an exception saying that get.getResponseBodyAsStream() is null. My understanding is that Apache's HttpClient keeps the connection alive by default, so I need a blocking call somewhere. Can someone help me out here?
GetMethod get = new GetMethod(url);
String nextLine;
String responseBody = "";
BufferedReader input;
try {
    httpClient.executeMethod(get);
    while (true) {
        try {
            input = new BufferedReader(new InputStreamReader(get.getResponseBodyAsStream()));
            while ((nextLine = input.readLine()) != null)
                responseBody += nextLine;
            System.out.println(responseBody);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
At the end of the day, I am trying to get a persistent connection to the server (I will handle possible errors later) so that I can keep receiving updates from it. Any pointers on this would be great.
I haven't looked in great detail or tested the code, but I think repeatedly opening a reader on the response is probably a bad idea. I'd move the input = line outside the loop, for starters.
In my opinion, the HttpClient library is meant for client-pull situations. I recommend you look at Comet, which supports server push.
You cannot do it like this. When you have read the "body" of the response, that is it. To get more information, the client has to send a new request. That is the way that the HTTP protocol works.
If you want to stream multiple chunks of data in a single HTTP response, then you are going to need to do the chunking and unchunking yourself. There a variety of approaches you could use, depending on the nature of the data. For example:
If the data is XML or JSON, send a stream of XML documents / JSON objects and have the receiver separate the stream into documents / objects before sending them to the parser.
Invent your own light-weight "packetization" where you precede each chunk with a start marker and a byte count.
The other alternative is to use multiple GET requests, but try to configure things so that the underlying TCP/IP connection stays open between requests; see HTTP Persistent Connections.
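A minimal sketch of the second bullet (length-prefixed packetization), assuming a text protocol where each chunk is preceded by its byte count on its own line. The class and method names are illustrative, not from any library:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Hypothetical framing: each message is "<byteCount>\n" followed by
// exactly that many bytes of payload.
public class Framing {
    public static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        out.writeBytes(payload.length + "\n"); // header: the byte count
        out.write(payload);                    // body: exactly that many bytes
        out.flush();
    }

    public static String readMessage(DataInputStream in) throws IOException {
        StringBuilder header = new StringBuilder();
        int c;
        while ((c = in.read()) != '\n') {      // read the header up to the newline
            if (c == -1) return null;          // end of stream: no more messages
            header.append((char) c);
        }
        byte[] payload = new byte[Integer.parseInt(header.toString())];
        in.readFully(payload);                 // blocks until the full chunk arrives
        return new String(payload, StandardCharsets.UTF_8);
    }
}
```

The reader blocks in readFully() between server pushes, which is exactly the "blocking call" the question was looking for.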
EDIT
Actually, I need to send only one GET request and keep waiting for status messages from the server.
The HTTP status code is transmitted in the first line of the HTTP response message. There can be only one per HTTP response, and (obviously) there can be only one response per HTTP request. Therefore what you are trying to do is impossible using normal HTTP status codes and request/reply messages.
Please review the alternatives that I suggested above. The bullet-pointed alternatives can be tweaked to allow you to include some kind of status in each chunk of data. And the last one (sending multiple requests) solves the problem already.
EDIT 2
To be more particular, it seems that keeping the connection alive is done transparently
That is correct.
... so all I need is a way to get notified when there is some data present that can be consumed.
Assuming that you are not prepared to send multiple GET requests (which is clearly the simplest solution!!!), your code might look like this:
while (true) {
    String header = input.readLine(); // format "status:linecount"
    if (header == null) {
        break;
    }
    String[] parts = header.split(":");
    String status = parts[0];
    StringBuilder sb = new StringBuilder();
    int lineCount = Integer.parseInt(parts[1]);
    for (int i = 0; i < lineCount; i++) {
        String line = input.readLine();
        if (line == null) {
            throw new Exception("Ooops!");
        }
        sb.append(line).append('\n');
    }
    System.out.println("Got status = " + status + " body = " + sb);
}
But if you are only sending status codes or if the rest of each data chunk can be shoe-horned onto the same line, you can simplify this further.
If you are trying to implement this so that your main thread doesn't have to wait (block) on reading from the input stream, then either use NIO, or use a separate thread to read from the input stream.
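A sketch of the separate-thread option, assuming the server sends line-delimited updates; StreamListener and the callback are illustrative names, not part of any library:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.function.Consumer;

// A minimal reader thread: it blocks on readLine() so the main thread
// doesn't have to, and hands each received line to a callback.
public class StreamListener implements Runnable {
    private final BufferedReader reader;
    private final Consumer<String> onLine;

    public StreamListener(InputStream in, Consumer<String> onLine) {
        this.reader = new BufferedReader(new InputStreamReader(in));
        this.onLine = onLine;
    }

    @Override
    public void run() {
        try {
            String line;
            while ((line = reader.readLine()) != null) { // blocks until data or EOF
                onLine.accept(line);                     // notify on each update
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```

You would start it with something like `new Thread(new StreamListener(get.getResponseBodyAsStream(), myHandler)).start();` and get one callback per pushed line.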
Related
I'm currently working on a project where I have to host a server which fetches an input stream, parses the data and sends it to the database. Every client that connects to my server sends an input stream which never stops once it is connected. Every client is assigned a socket and its own parser thread object, so the server can deal with the data stream coming from that client. The parser object just deals with the incoming data and sends it to the database.
Server / parser generator:
public void generateParsers() {
    while (keepRunning) {
        try {
            Socket socket = s.accept();
            // new connection
            t = new Thread(new Parser(socket));
            t.start();
        } catch (IOException e) {
            appLog.severe(e.getMessage());
        }
    }
}
Parser thread:
@Override
public void run() {
    while (!socket.isClosed() && socket.isConnected()) {
        try {
            BufferedReader bufReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));
            String line = bufReader.readLine();
            String data = "";
            if (line == null) {
                socket.close();
            } else if (Objects.equals(line, "<DATA")) {
                while (!Objects.equals(line, "</DATA>")) {
                    data += line;
                    line = bufReader.readLine();
                }
                /*
                Send the string that was built
                from the client's data stream to the database
                using the parse() function.
                */
                parse(data);
            }
        } catch (IOException e) {
            System.out.println("ERROR : " + e);
        }
    }
}
My setup is functional, but the problem is that it puts too much stress on my server when too many clients are connected and too many threads are parsing data concurrently. The parsing of the incoming data and the sending of the data to the database hardly affect performance at all. The bottleneck is mostly the concurrent reading of the data streams from the connected clients.
Is there any way I can optimize my current setup? I was thinking of capping the number of connections and, once a full data file is received, parsing it and moving on to the next client in the connection queue, or something similar.
The bottleneck is mostly the concurrent reading
No. The bottleneck is string concatenation. Use a StringBuffer or StringBuilder.
And probably improper behaviour when a client disconnects. It's hard to believe this works at all. It shouldn't:
You should use the same BufferedReader for the life of the socket, otherwise you can lose data.
Socket.isClosed() and Socket.isConnected() don't do what you think they do: the correct loop termination condition is readLine() returning null, or throwing an IOException:
while ((line = bufReader.readLine()) != null)
Capping the number of concurrent connections can't possibly achieve anything if the clients never disconnect. All you'll accomplish is never listening to clients beyond the first N to connect, which can't possibly be what you want. 'Move to the next client' will never happen.
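Putting those fixes together, a sketch of the corrected read loop: one BufferedReader for the life of the socket, a StringBuilder instead of string concatenation, and readLine() returning null as the termination condition. The block-collecting method is illustrative; the real code would call parse() where the result is collected:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DataReader {
    // Reads <DATA ... </DATA> blocks until end of stream and returns the
    // concatenated body of each block. In the question's code, parse(data)
    // would be called where we add to the result list.
    public static List<String> readBlocks(BufferedReader bufReader) throws IOException {
        List<String> blocks = new ArrayList<>();
        String line;
        while ((line = bufReader.readLine()) != null) {   // null = client disconnected
            if (line.equals("<DATA")) {
                StringBuilder data = new StringBuilder();  // O(n), not O(n^2)
                while ((line = bufReader.readLine()) != null
                        && !line.equals("</DATA>")) {
                    data.append(line);
                }
                blocks.add(data.toString());               // stand-in for parse(data)
            }
        }
        return blocks;
    }
}
```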
If your problem is indeed that whatever you are doing while a client is connected is expensive, you will have to use a client queue. The simplest way to do this is to use an ExecutorService with a maximum of N threads.
For example
private ExecutorService pool = Executors.newFixedThreadPool(N);
...
and then
Socket socket = s.accept();
pool.submit(new Parser(socket));
This will limit concurrent client handling to N at a time and queue any additional clients beyond N.
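A hypothetical, self-contained demonstration of that queueing behaviour: a fixed pool of nThreads workers runs nTasks tasks, queueing the overflow instead of spawning a thread per task. The trivial Runnable here stands in for the question's Parser(socket):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    public static int runTasks(int nThreads, int nTasks) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < nTasks; i++) {
            pool.execute(completed::incrementAndGet); // queued if all workers are busy
        }
        pool.shutdown();                              // stop accepting, let the queue drain
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return completed.get();
    }
}
```

All tasks eventually run, but never more than nThreads of them at once; the rest wait in the executor's internal queue.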
Also, depending on what you are doing with the data, you could split the process into phases, for example:
Read the raw data from the client and enqueue it for processing; close the socket etc. so you can save resources
Process the data in a separate thread (possibly a thread pool) and enqueue the result
Do something with the result (check its validity, persist it into the DB etc.) in another pool.
This is especially helpful if you have blocking operations like network I/O, expensive ones, etc.
It looks like in your case the client does not have to wait for the whole backend process to complete. It only needs to deliver the data, so splitting data reading and parsing/persisting into separate phases (subtasks) sounds like a reasonable approach.
How to implement http streaming client on android?
I have a comet streaming server - which gets an http request, keep it open and flush data once in a while.
How can I make an HTTP POST request from Android and keep handling the flushes the server sends me (over the currently open connection)? Note that the response headers contain: Transfer-Encoding: chunked
I've tried to work with HttpClient, HttpPost and ChunkedInputStream but couldn't handle this in the right way. Is there a way to handle this in a callback-based way, i.e. to get some event on each flush and then handle the current content?
Edited:
Currently there are two solutions which I've thought of :
1) Read byte by byte and search for an end delimiter; once I get the end delimiter I can process the last message and continue to block the reading thread on the read call until the next message arrives.
2) Send the length of the message and after that the message itself (from the server); then in my Android app I'll try to get the last message by reading x bytes (according to the length prefix) and after that let the reading thread block on the read call until the next message arrives.
So the solution is:
Use a standard HTTP client like DefaultHttpClient or AndroidHttpClient, and while processing the response just use the read function of the Reader.
For example:
BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream, "UTF-8"));
char[] msgsReadArray = new char[streamingArrayMaxCapacity];
int currentCharsCount = 0;
int currOffset = 0;
while ((currentCharsCount = reader.read(msgsReadArray, currOffset, streamingArrayMaxCapacity - currOffset)) != -1) {
    String lastStreamingMsg = new String(msgsReadArray, currOffset, currentCharsCount);
    currOffset += currentCharsCount;
}
When the connection is closed you'll get -1 as the result of the read.
Of course you'll have to handle exceptions and problematic situations, and you might also want to decide on a protocol for the streaming messages.
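As a concrete sketch of the first delimiter-based idea from the question: scan the stream character by character and cut a message whenever the delimiter appears. The delimiter choice is an assumption; use whatever your server actually sends between flushes:

```java
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class DelimitedReader {
    // Collects complete messages separated by the given delimiter.
    // read() blocks between server flushes and returns -1 on close.
    public static List<String> readMessages(Reader reader, char delimiter)
            throws IOException {
        List<String> messages = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        int c;
        while ((c = reader.read()) != -1) {
            if (c == delimiter) {
                messages.add(current.toString()); // one complete message
                current.setLength(0);             // reset for the next one
            } else {
                current.append((char) c);
            }
        }
        return messages;
    }
}
```

In a real streaming client you would fire a callback where the message is added to the list, instead of accumulating everything before returning.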
I am currently implementing a web proxy, but I have run into a problem. I can parse my request from the browser and make a new request quite alright, but I seem to have a problem with the response. It keeps hanging inside my response loop:
serveroutput.write(request.getFullRequest());
// serveroutput.newLine();
serveroutput.flush();
//serveroutput.
//serveroutput.close();
} catch (IOException e) {
    System.out.println("Writing to the server was unsuccessful");
    e.printStackTrace();
}
System.out.println("Write was successful...");
System.out.println("flushed.");
try {
    System.out.println("Getting a response...");
    response = new HttpResponse(serversocket.getInputStream());
} catch (IOException e) {
    System.out.println("tried to read response from server but failed");
    e.printStackTrace();
}
System.out.println("Response was successful");
// response code
public HttpResponse(InputStream input) {
    busy = true;
    reader = new BufferedReader(new InputStreamReader(input));
    try {
        while (!reader.ready()); // wait for initialization.
        String line;
        while ((line = reader.readLine()) != null) {
            fullResponse += "\r\n" + line;
        }
        reader.close();
        fullResponse = "\r\n" + fullResponse.trim() + "\r\n\r\n";
    } catch (IOException e) {
        e.printStackTrace();
    }
    busy = false;
}
You're doing a blocking, synchronous read on a socket. Web servers don't close their connections after sending you a page (if HTTP/1.1 is specified) so it's going to sit there and block until the webserver times out the connection. To do this properly you would need to be looking for the Content-Length header and reading the appropriate amount of data when it gets to the body.
You really shouldn't be trying to re-invent the wheel; instead use either the core Java HttpURLConnection or the Apache HttpClient to make your requests.
while (!reader.ready());
This line spins in a tight loop, thrashing the CPU until the stream is ready to read. Generally not a good idea.
You are making numerous mistakes here.
Using a spin loop calling ready() instead of just blocking in the subsequent read.
Using a Reader when you don't know that the data is text.
Not implementing the HTTP 1.1 protocol even slightly.
Instead of reviewing your code I suggest you review the HTTP 1.1 RFC. All you need to do to implement a naive proxy for HTTP 1.1 is the following:
Read one line from the client. This should be a CONNECT command naming the host you are to connect to. Read this with a DataInputStream, not a BufferedReader, and yes, I know its readLine() method is deprecated.
Connect to the target. If that succeeded, send an HTTP 200 back to the client. If it didn't, send whatever HTTP status is appropriate and close the client.
If you succeeded at (2), start two threads, one to copy all the data from the client to the target, as bytes, and the other to do the opposite.
When you get EOS reading one of those sockets, call shutdownOutput() on the other one.
If shutdownOutput() hasn't already been called on the input socket of this thread, just exit the thread.
If it has been called already, close both sockets and exit the thread.
Note that you don't have to parse anything except the CONNECT command; you don't have to worry about Content-length; you just have to transfer bytes and then EOS correctly.
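One direction of step 3 (copying raw bytes until EOS) might be sketched like this. The class name is made up, and for simplicity it works on plain streams; the real proxy would run two of these on socket streams and call shutdownOutput() on the destination socket when the copy ends:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// One direction of the tunnel: copy bytes verbatim until end of stream.
// Two Pumps (client->target and target->client) make up the proxy.
public class Pump implements Runnable {
    private final InputStream in;
    private final OutputStream out;

    public Pump(InputStream in, OutputStream out) {
        this.in = in;
        this.out = out;
    }

    @Override
    public void run() {
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) { // EOS ends this direction
                out.write(buf, 0, n);
            }
            out.flush();
            // on sockets: destination.shutdownOutput() would go here
        } catch (IOException ignored) {
            // peer closed the connection; just let the thread exit
        }
    }
}
```

Because the copy is byte-for-byte, there is no parsing, no Content-Length handling and no charset issue, which is exactly why the CONNECT-based proxy is so simple.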
I want to recognize the end of a data stream in Java sockets. When I run the code below, it just gets stuck and keeps running (it gets stuck at value 10).
I also want the program to download binary files, but the last byte is always distinct, so I don't know how to stop the while loop (programmatically).
String host = "example.com";
String path = "/";
Socket connection = new Socket(host, 80);
PrintWriter out = new PrintWriter(connection.getOutputStream());
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
System.out.println(dataBuffer);
out.close();
Thanks for any hints.
Actually, your code is not correct.
In HTTP 1.0, each connection is closed after the response, and as a result the client can detect when the input has ended.
In HTTP 1.1, with persistent connections, the underlying TCP connection remains open, so a client can detect when the input ends in one of the following two ways:
1) The HTTP server puts a Content-Length header in the response, indicating its size. The client can use this to tell when the response has been fully read.
2) The response is sent in chunked encoding, meaning that it comes in chunks prefixed with the size of each chunk. Using this information, the client can reconstruct the response from the chunks received from the server.
You should be using an HTTP client library, since implementing a generic HTTP client is not trivial (at all, I may say).
To be specific: in the code you posted, you should have followed one of the above approaches.
Additionally, you should read in lines, since HTTP is a line-terminated protocol.
I.e. something like:
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String s = null;
while ((s = in.readLine()) != null) {
    // Read HTTP header
    if (s.isEmpty()) break; // No more headers
}
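A sketch of the first approach (Content-Length): parse the headers line by line, then read exactly that many characters of body. This assumes a single-byte charset so the character count matches the byte count; chunked responses would need separate handling. The class name is illustrative:

```java
import java.io.BufferedReader;
import java.io.EOFException;
import java.io.IOException;

public class LengthRead {
    public static String readBody(BufferedReader in) throws IOException {
        int contentLength = -1;
        String s;
        while ((s = in.readLine()) != null) {
            if (s.isEmpty()) break;                 // blank line ends the headers
            if (s.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(s.substring(15).trim());
            }
        }
        if (contentLength < 0) return null;         // no length: can't tell where the body ends
        char[] body = new char[contentLength];
        int read = 0;
        while (read < contentLength) {              // read() may return short counts
            int n = in.read(body, read, contentLength - read);
            if (n == -1) throw new EOFException("stream ended mid-body");
            read += n;
        }
        return new String(body);
    }
}
```

Because the loop stops after exactly contentLength characters, it never blocks waiting for a close that a persistent connection will not send, which was the original problem.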
Sending a Connection: close header, as suggested by khachik, gets the job done (since closing the connection makes the end of input detectable), but performance gets worse because you start a new connection for each request.
It depends, of course, on what you are trying to do (whether you care or not).
You should use existing libraries for HTTP. See here.
Your code works as expected. The server doesn't close the connection, so dataBuffer never becomes -1. This happens because connections are kept alive in HTTP 1.1 by default. Use HTTP 1.0, or put a Connection: close header in your request.
For example:
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\nConnection: close\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
System.out.print((char)dataBuffer);
out.close();
I am looking at sending objects over HTTP from an Android client to my server, which is running Java servlets. The object can hold a bitmap image, and I am just wondering if you could show me an example of sending such an object from the client to the server.
I read on the forums that people say to use JSON, but it seems to me that JSON works only with textual data. If that is the case, could someone show me how to use it with objects that contain images?
To send binary data between a Java client and a Java server connected over HTTP, you have basically two options.
Serialize it, i.e. let the object implement Serializable, have an exact copy of the .class file on both sides, write it with an ObjectOutputStream and read it with an ObjectInputStream. Advantage: ridiculously easy. Disadvantage: poor backwards compatibility (when you change the object to add a new field, you have to write a lot of extra code and checks to ensure backwards compatibility) and bad reusability (not reusable by clients/servers other than Java ones).
Use HTTP multipart/form-data. Advantage: very compatible (a web standard) and very good reusability (the server is reusable by other clients and the client is reusable with other servers). Disadvantage: harder to implement (fortunately there are APIs and libraries for this). On Android you can use the builtin HttpClient API to send it. In a servlet you can use Apache Commons FileUpload to parse it.
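A minimal round trip for the first option, using ObjectOutputStream/ObjectInputStream over a byte array that stands in for the HTTP request body; the helper names are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class Wire {
    // Client side: turn the object into bytes (these would be the POST body).
    public static byte[] serialize(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);   // the same .class file is needed on both sides
        }
        return bos.toByteArray();
    }

    // Server side: rebuild the object from the request's bytes.
    public static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }
}
```

For a bitmap you would serialize an envelope object holding the image's byte array, exactly as the answer below suggests.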
I recommend you use XStream
XStream for your servlet side:
http://x-stream.github.io/tutorial.html
XStream code optimized for Android:
http://jars.de/java/android-xml-serialization-with-xstream
If you are sending images and such, wrap them in an 'envelope' class that contains a byte array, like the one here: Serializing and De-Serializing android.graphics.Bitmap in Java
Then use HttpClient in your Android app to send the data to your servlet ^^ Also make sure that both the app and the servlet have the same classes ^^
The Socket API is also an option.
Creating a socket on both sides will allow raw data to be transmitted from the client Android application to the server.
Here is code for hitting a servlet and sending data to the server.
boolean hitServlet(final Context context, final String data1, final String data2) {
    String serverUrl = SERVER_URL + "/YourServletName";
    Map<String, String> params = new HashMap<String, String>();
    params.put("data1", data1);
    params.put("data2", data2);
    long backoff = BACKOFF_MILLI_SECONDS + random.nextInt(1000);
    // As the server might be down, we will retry it a couple
    // times.
    for (int i = 1; i <= MAX_ATTEMPTS; i++) {
        try {
            post(serverUrl, params);
            return true;
        } catch (IOException e) {
            // Here we are simplifying and retrying on any error; in a real
            // application, it should retry only on unrecoverable errors
            // (like HTTP error code 503).
            Log.e(TAG, "Failed " + i, e);
            if (i == MAX_ATTEMPTS) {
                break;
            }
            try {
                Log.d(TAG, "Sleeping for " + backoff + " ms before retry");
                Thread.sleep(backoff);
            } catch (InterruptedException e1) {
                // Activity finished before we complete - exit.
                Log.d(TAG, "Thread interrupted: abort remaining retries!");
                Thread.currentThread().interrupt();
                return false;
            }
            // increase backoff exponentially
            backoff *= 2;
        }
    }
    return false;
}