I am looking at sending objects over HTTP from an Android client to my server, which runs Java servlets. The object can hold a bitmap image, and I am just wondering if you could show me an example of sending an object from the client to the server.
I read on the forums that people say to use JSON, but it seems to me that JSON works only with textual data. If that is the case, could someone show me how to use it with objects that contain images?
To send binary data between a Java client and a Java server connected over HTTP, you basically have two options.
Serialize it, i.e. let the object implement Serializable, keep an exact copy of the .class file on both sides, and send it with an ObjectOutputStream and read it with an ObjectInputStream. Advantage: ridiculously easy. Disadvantages: poor backwards compatibility (when you change the object to add a new field, you have to write a lot of extra code and checks to keep it backwards compatible) and poor reusability (not reusable by clients/servers that aren't Java).
Use HTTP multipart/form-data. Advantages: very compatible (it's a web standard) and very reusable (the server is reusable with other clients, and the client is reusable with other servers). Disadvantage: harder to implement (fortunately there are APIs and libraries for this). In Android you can use the builtin HttpClient API to send it. In a servlet you can use Apache Commons FileUpload to parse it.
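The first option can be sketched in plain Java. `ImageEnvelope` below is a hypothetical class standing in for your own object, and byte-array streams stand in for the HTTP request/response streams; the round trip is otherwise exactly what `ObjectOutputStream`/`ObjectInputStream` would do over a real connection:

```java
import java.io.*;

// Hypothetical envelope class; the same .class must exist on client and server.
class ImageEnvelope implements Serializable {
    private static final long serialVersionUID = 1L;
    byte[] imageBytes;   // e.g. the bitmap compressed to PNG/JPEG
    String description;

    ImageEnvelope(byte[] imageBytes, String description) {
        this.imageBytes = imageBytes;
        this.description = description;
    }
}

public class SerializationSketch {
    public static void main(String[] args) throws Exception {
        ImageEnvelope sent = new ImageEnvelope(new byte[] {1, 2, 3}, "thumbnail");

        // Client side: write the object to the HTTP request body.
        // (A byte array stands in for the connection's OutputStream.)
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(wire)) {
            out.writeObject(sent);
        }

        // Server side: read it back from the request's InputStream.
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(wire.toByteArray()))) {
            ImageEnvelope received = (ImageEnvelope) in.readObject();
            System.out.println(received.description + ":" + received.imageBytes.length);
        }
    }
}
```

Note that this only works while both sides agree on the class; the backwards-compatibility caveat above applies as soon as you change `ImageEnvelope`.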
I recommend you use XStream.
XStream for your servlet side:
http://x-stream.github.io/tutorial.html
XStream code optimized for Android:
http://jars.de/java/android-xml-serialization-with-xstream
If you are sending images and such, wrap them in an 'envelope' class that contains a byte array, like the one here: Serializing and De-Serializing android.graphics.Bitmap in Java
Then use HttpClient in your Android app to send the data to your servlet. Also make sure that both the app and the servlet have the same classes.
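Since XStream (and JSON) produce text, the image bytes in such an envelope need a text-safe encoding; Base64 is the usual choice. A minimal sketch, assuming Java 8+ (`java.util.Base64`; on pre-API-26 Android you would use `android.util.Base64` instead):

```java
import java.util.Base64;

public class Base64EnvelopeSketch {
    public static void main(String[] args) {
        // Pretend these are the first bytes of a compressed bitmap (PNG magic).
        byte[] imageBytes = {(byte) 0x89, 'P', 'N', 'G'};

        // Encode to text so it can travel inside the XML/JSON envelope.
        String encoded = Base64.getEncoder().encodeToString(imageBytes);

        // The other side decodes back to the original bytes.
        byte[] decoded = Base64.getDecoder().decode(encoded);
        System.out.println(encoded + " " + (decoded.length == imageBytes.length));
    }
}
```

The encoded string is what you would store in the envelope's field before handing the object to XStream.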
The Socket API is also an option.
Creating a socket on both sides allows raw data to be transmitted from the Android client application to the server.
Here is code for hitting a servlet and sending data to the server.
boolean hitServlet(final Context context, final String data1, final String data2) {
    String serverUrl = SERVER_URL + "/YourServletName";
    Map<String, String> params = new HashMap<String, String>();
    params.put("data1", data1);
    params.put("data2", data2);
    long backoff = BACKOFF_MILLI_SECONDS + random.nextInt(1000);
    // As the server might be down, we will retry it a couple
    // of times.
    for (int i = 1; i <= MAX_ATTEMPTS; i++) {
        try {
            post(serverUrl, params);
            return true;
        } catch (IOException e) {
            // Here we are simplifying and retrying on any error; in a real
            // application, it should retry only on unrecoverable errors
            // (like HTTP error code 503).
            Log.e(TAG, "Failed " + i, e);
            if (i == MAX_ATTEMPTS) {
                break;
            }
            try {
                Log.d(TAG, "Sleeping for " + backoff + " ms before retry");
                Thread.sleep(backoff);
            } catch (InterruptedException e1) {
                // Activity finished before we completed - exit.
                Log.d(TAG, "Thread interrupted: abort remaining retries!");
                Thread.currentThread().interrupt();
                return false;
            }
            // Increase backoff exponentially.
            backoff *= 2;
        }
    }
    return false;
}
I need to implement a heartbeat system in my Java project (3-5 clients with one server for them), but I have some questions.
1) Do I need two sockets per client? One for the heartbeat and one to receive normal messages for my software.
2) I saw that in a specific case when a client is lagging, the client doesn't receive a message. How do I avoid this?
3) If a client disconnects, how do I re-establish the connection with it, without creating a new socket?
So, you have a "central server" which needs to provide a heartbeat mechanism to the clients. Well, part of the solution is simple, since you have only one server: that simplifies things a LOT, because you don't need to deal with data replication, data synchronization mechanisms, server failure, and so on. You just expect that your server never fails, and if it does, it's a fatal error.
My suggestion is to implement a system based on notifications (polling is bad and ugly): instead of having the server poll the clients, you have the clients report their state to the server every X seconds. This reduces the overall load on your system and is based on the design principle of "Tell, don't ask". It also allows you to use a different report interval for each individual client.
There is one more question, which is: what data do you want to transmit? Simply whether the client is alive? Runtime data of the client, for example the % of its job done if the client is downloading a file? Environment status, such as CPU load, memory usage, network status? Define that; that's the first step.
Talking about the Java implementation, you should run a thread on each of your clients (implementing the Runnable interface). It should look something like this code (simplified for the sake of brevity):
public class HeartbeatAgent implements Runnable {
    private int DEFAULT_SAMPLING_PERIOD = 5; // seconds
    private String DEFAULT_NAME = "HeartbeatAgent";
    private HashMap<Integer, Object> values; // <id, value>

    public HeartbeatAgent() {
        values = new HashMap<Integer, Object>();
    }

    private void collect() {
        /* Here you should collect the data you want to send
           and store it in the hash map. */
    }

    public void sendData() {
        /* Here you should send the data to the server. Use REST/SOAP/multicast
           messages, whatever you want/need/are forced to. */
    }

    public void run() {
        System.out.println("Running " + DEFAULT_NAME);
        try {
            while ( /* condition you want to stop on */ true) {
                System.out.println("Thread: " + DEFAULT_NAME + ", I'm alive");
                this.collect();
                this.sendData();
                // Let the thread sleep for a while.
                Thread.sleep(DEFAULT_SAMPLING_PERIOD * 1000);
            }
        } catch (InterruptedException e) {
            System.out.println("Thread " + DEFAULT_NAME + " interrupted.");
        }
        System.out.println("Thread " + DEFAULT_NAME + " exiting.");
    }
}
You should write a server that handles the requests made and is "smart" enough to call a time-out after X seconds without "news" from client Y.
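The "smart" server side can be sketched as a last-seen registry; the class and method names here are hypothetical, and in a real server `isAlive` would be checked periodically from something like a `ScheduledExecutorService`:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical registry the server updates on every heartbeat it receives.
public class HeartbeatRegistry {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long timeoutMillis;

    public HeartbeatRegistry(long timeoutMillis) {
        this.timeoutMillis = timeoutMillis;
    }

    // Called whenever a heartbeat arrives from client `id`.
    public void beat(String id) {
        lastSeen.put(id, System.currentTimeMillis());
    }

    // Called periodically to detect clients that have gone quiet.
    public boolean isAlive(String id) {
        Long t = lastSeen.get(id);
        return t != null && System.currentTimeMillis() - t < timeoutMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        HeartbeatRegistry registry = new HeartbeatRegistry(100);
        registry.beat("client-1");
        System.out.println(registry.isAlive("client-1")); // just reported in
        Thread.sleep(150);                                // miss the deadline
        System.out.println(registry.isAlive("client-1")); // timed out
    }
}
```

What the server does on a timeout (mark the client dead, alert an operator, try to reconnect) is up to your design.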
This is still not ideal, since you collect data and send it with the same sampling period, but usually you want to collect data at very short intervals (collecting CPU usage every 5 seconds, for instance) and only report it every 30 seconds.
If you want to look at good code in a good library that does this (it's what we've been using for our project at my company), take a look at the JCatascopia framework code (just look at the Agent and Server folders; ignore the others).
There's a lot to say about this topic, this is the basic. Feel free to ask!
You could take a look at this small framework I made for a project I worked on last year. It's focused on a simple implementation while still giving strong feedback about your clients' status.
It's based on the UDP protocol and sends a payload containing an id (which can be the MAC address of a NIC, an id chosen and set by you, or something else) that confirms the client is safe and sound.
I think it's kind of cool because it's based on listeners, which receive various kinds of events based on what the heartbeat protocol computes about a client's status.
You can find more about it here
I think it's handy to use alongside TCP sockets to determine whether or not you are able to send data over your TCP stream. Having continuous feedback on your clients' status puts you in a position where you can easily achieve that, for example by saving your client status in some sort of map and checking it before sending any kind of data.
I have a Java application which uses Spring's RestTemplate API to write concise, readable consumers of JSON REST services:
In essence:
RestTemplate rest = new RestTemplate(clientHttpRequestFactory);
ResponseEntity<ItemList> response = rest.exchange(url,
HttpMethod.GET,
requestEntity,
ItemList.class);
for (Item item : response.getBody().getItems()) {
    handler.onItem(item);
}
The JSON response contains a list of items, and as you can see, I have an event-driven design in my own code to handle each item in turn. However, the entire list is in memory as part of response, which RestTemplate.exchange() produces.
I would like the application to be able to handle responses containing large numbers of items - say 50,000, and in this case there are two issues with the implementation as it stands:
Not a single item is handled until the entire HTTP response has been transferred - adding unwanted latency.
The huge response object sits in memory and can't be GC'd until the last item has been handled.
Is there a reasonably mature Java JSON/REST client API out there that consumes responses in an event-driven manner?
I imagine it would let you do something like:
RestStreamer rest = new RestStreamer(clientHttpRequestFactory);
// Tell the RestStreamer "when, while parsing a response, you encounter a JSON
// element matching JSONPath "$.items[*]" pass it to "handler" for processing.
rest.onJsonPath("$.items[*]").handle(handler);
// Tell the RestStreamer to make an HTTP request, parse it as a stream.
// We expect "handler" to get passed an object each time the parser encounters
// an item.
rest.execute(url, HttpMethod.GET, requestEntity);
I appreciate I could roll my own implementation of this behaviour with streaming JSON APIs from Jackson, GSON etc. -- but I'd love to be told there was something out there that does it reliably with a concise, expressive API, integrated with the HTTP aspect.
A couple of months later; back to answer my own question.
I didn't find an expressive API to do what I want, but I was able to achieve the desired behaviour by getting the HTTP body as a stream, and consuming it with a Jackson JsonParser:
ClientHttpRequest request =
clientHttpRequestFactory.createRequest(uri, HttpMethod.GET);
ClientHttpResponse response = request.execute();
return handleJsonStream(response.getBody(), handler);
... with handleJsonStream designed to handle JSON that looks like this:
{ items: [
{ field: value, ... },
{ field: value, ... },
... thousands more ...
] }
... it validates the tokens leading up to the start of the array; it creates an Item object each time it encounters an array element, and gives it to the handler.
// important that the JsonFactory comes from an ObjectMapper, or it won't be
// able to do readValueAs()
static JsonFactory jsonFactory = new ObjectMapper().getFactory();
public static int handleJsonStream(InputStream stream, ItemHandler handler) throws IOException {
    JsonParser parser = jsonFactory.createJsonParser(stream);
    verify(parser.nextToken(), START_OBJECT, parser);
    verify(parser.nextToken(), FIELD_NAME, parser);
    verify(parser.getCurrentName(), "items", parser);
    verify(parser.nextToken(), START_ARRAY, parser);
    int count = 0;
    while (parser.nextToken() != END_ARRAY) {
        verify(parser.getCurrentToken(), START_OBJECT, parser);
        Item item = parser.readValueAs(Item.class);
        handler.onItem(item);
        count++;
    }
    parser.close(); // hope it's OK to ignore remaining closing tokens.
    return count;
}
verify() is just a private static method which throws an exception if the first two arguments aren't equal.
The key thing about this method is that no matter how many items there are in the stream, it only ever holds a reference to one Item.
You can try JsonSurfer, which is designed to process JSON streams in an event-driven style.
JsonSurfer surfer = JsonSurfer.jackson();
Builder builder = config();
builder.bind("$.items[*]", new JsonPathListener() {
    @Override
    public void onValue(Object value, ParsingContext context) throws Exception {
        // handle the value
    }
});
surfer.surf(new InputStreamReader(response.getBody()), builder.build());
Is there no way to break up the request? It sounds like you should use paging. Make it so that you can request the first 100 results, then the next 100 results, and so on. The request should take a starting index and a count. That's very common behavior for REST services, and it sounds like the solution to your problem.
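A paged client loop might look like the sketch below. `fetchPage` is a stand-in: in a real client it would issue something like `GET /items?start=<start>&count=<count>` and deserialize the JSON page; here it just fabricates data so the loop structure is visible:

```java
import java.util.List;

public class PagingSketch {
    // Hypothetical stand-in for an HTTP call that requests a page of results.
    static List<Integer> fetchPage(int start, int count) {
        int total = 250; // pretend the server holds 250 items
        List<Integer> page = new java.util.ArrayList<>();
        for (int i = start; i < Math.min(start + count, total); i++) {
            page.add(i);
        }
        return page;
    }

    public static void main(String[] args) {
        int pageSize = 100;
        int handled = 0;
        // Keep requesting pages until the server returns an empty one.
        for (int start = 0; ; start += pageSize) {
            List<Integer> page = fetchPage(start, pageSize);
            if (page.isEmpty()) break;  // no more results
            handled += page.size();     // handle each page as it arrives
        }
        System.out.println(handled);
    }
}
```

Each iteration holds only one page in memory, which addresses both the latency and the GC concerns from the question.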
The whole point of REST is that it is stateless; it sounds like you're trying to make it stateful. That's anathema to REST, so you're not going to find many libraries written that way.
The transactional nature of REST is very much intentional by design, so you won't get around it easily. You'll be fighting against the grain if you try.
From what I've seen, wrapping frameworks (like you are using) make things easy by deserializing the response into an object. In your case, a collection of objects.
However, to use things in a streaming fashion, you may need to get at the underlying HTTP response stream. I am most familiar with Jersey, which exposes https://jersey.java.net/nonav/apidocs/1.5/jersey/com/sun/jersey/api/client/ClientResponse.html#getEntityInputStream()
It would be used by invoking
Client client = Client.create();
WebResource webResource = client.resource("http://...");
ClientResponse response = webResource.accept("application/json")
.get(ClientResponse.class);
InputStream is = response.getEntityInputStream();
This provides you with the stream of data coming in. The next step is to write the streaming part. Given that you are using JSON, there are options at various levels, including http://wiki.fasterxml.com/JacksonStreamingApi or http://argo.sourceforge.net/documentation.html. They can consume the InputStream.
These don't really make good use of the full deserialization that can be done, but you could use them to parse out an element of a json array, and pass that item to a typical JSON object mapper, (like Jackson, GSON, etc). This becomes the event handling logic. You could spawn new threads for this, or do whatever your use case needs.
I won't claim to know all the REST frameworks out there (or even half), but I'm going to go with the answer
Probably Not
As noted by others, this is not the way REST normally thinks of its interactions. REST is a great hammer, but if you need streaming, you are (IMHO) in screwdriver territory; the hammer might still be made to work, but it is likely to make a mess. One can argue whether or not it is consistent with REST all day long, but in the end I'd be very surprised to find a framework that implemented this feature. I'd be even more surprised if the feature were mature (even if the framework is), because with respect to REST your use case is an uncommon corner case at best.
If someone does come up with one I'll be happy to stand corrected and learn something new though :)
Perhaps it would be best to think in terms of Comet or WebSockets for this particular operation. This question may be helpful since you already have Spring. (WebSockets are not really viable if you need to support IE < 10, which most commercial apps still require... sadly, in my own work I've got one client with a key customer still on IE 7.)
You may consider Restlet.
http://restlet.org/discover/features
It supports asynchronous request processing, decoupled from IO operations. Unlike the Servlet API, Restlet applications don't have direct control over the output stream; they only provide an output representation to be written by the server connector.
The best way to achieve this is to use another streaming runtime for the JVM that allows reading responses off WebSockets; one I am aware of is called Atmosphere.
This way your large dataset is both sent and received in chunks on both sides, and read in the same manner in real time, without waiting for the whole response.
There is a good POC of this here:
http://keaplogik.blogspot.in/2012/05/atmosphere-websockets-comet-with-spring.html
Server:
@RequestMapping(value="/twitter/concurrency")
@ResponseBody
public void twitterAsync(AtmosphereResource atmosphereResource) {
    final ObjectMapper mapper = new ObjectMapper();
    this.suspend(atmosphereResource);
    final Broadcaster bc = atmosphereResource.getBroadcaster();
    logger.info("Atmo Resource Size: " + bc.getAtmosphereResources().size());
    bc.scheduleFixedBroadcast(new Callable<String>() {
        @Override
        public String call() throws Exception {
            // Auth using keaplogik application springMVC-atmosphere-comet-webso key
            final TwitterTemplate twitterTemplate =
                    new TwitterTemplate("WnLeyhTMjysXbNUd7DLcg",
                            "BhtMjwcDi8noxMc6zWSTtzPqq8AFV170fn9ivNGrc",
                            "537308114-5ByNH4nsTqejcg5b2HNeyuBb3khaQLeNnKDgl8",
                            "7aRrt3MUrnARVvypaSn3ZOKbRhJ5SiFoneahEp2SE");
            final SearchParameters parameters = new SearchParameters("world").count(5).sinceId(sinceId).maxId(0);
            final SearchResults results = twitterTemplate.searchOperations().search(parameters);
            sinceId = results.getSearchMetadata().getMax_id();
            List<TwitterMessage> twitterMessages = new ArrayList<TwitterMessage>();
            for (Tweet tweet : results.getTweets()) {
                twitterMessages.add(new TwitterMessage(tweet.getId(),
                        tweet.getCreatedAt(),
                        tweet.getText(),
                        tweet.getFromUser(),
                        tweet.getProfileImageUrl()));
            }
            return mapper.writeValueAsString(twitterMessages);
        }
    }, 10, TimeUnit.SECONDS);
}
Client:
Atmosphere has its own JavaScript file to handle the different Comet/WebSocket transport types and requests. Using it, you can point the request at the Spring controller method's URL endpoint. Once subscribed to the controller, you will receive dispatches, which can be handled by adding a request.onMessage method. Here is an example request using the websocket transport.
var request = new $.atmosphere.AtmosphereRequest();
request.transport = 'websocket';
request.url = "<c:url value='/twitter/concurrency'/>";
request.contentType = "application/json";
request.fallbackTransport = 'streaming';
request.onMessage = function(response) {
    buildTemplate(response);
};
var subSocket = socket.subscribe(request);

function buildTemplate(response) {
    if (response.state == "messageReceived") {
        var data = response.responseBody;
        if (data) {
            try {
                var result = $.parseJSON(data);
                $("#template").tmpl(result).hide().prependTo("#twitterMessages").fadeIn();
            } catch (error) {
                console.log("An error occurred: " + error);
            }
        } else {
            console.log("response.responseBody is null - ignoring.");
        }
    }
}
It has support in all major browsers and native mobile clients, with Apple being a pioneer of this technology.
As mentioned here, there is excellent support for deployment environments on web and enterprise JEE containers:
http://jfarcand.wordpress.com/2012/04/19/websockets-or-comet-or-both-whats-supported-in-the-java-ee-land/
In my Java application I am using two network connections to a web server. I ask for a range of data from a file on each interface with a GET message, and when I get the data, I calculate how much time it took and the bps for each link.
This part works fine.
(I haven't closed the sockets yet)
I determine which link is faster, then "attempt" to send another HTTP GET request for the rest of the file on the faster link. This is where my problem is: the second cOut.write(GET) doesn't send anything, and hence I get no data back.
Do I have to close the socket and re-establish my connection before I can write to it again?
edit
OK, to answer some questions:
Yes, TCP.
The following code (used on the low-speed link) is used first to grab the first block of data using this GET request:
GET /test-downloads/10mb.txt HTTP/1.1
HOST: www.cse.usf.edu
Range: bytes=0-999999
Connection: Close
Is the Connection: Close what is doing it? When I use Keep-Alive instead, I get a 5-second delay and still do not send/receive data on a subsequent write/read.
// This try block is followed by another for the other link, which uses [1]
try {
    skSocket[0] = new Socket(ipServer, 80, InetAddress.getByName(ipLowLink), 0);
    skSocket[0].setTcpNoDelay(true);
    skSocket[0].setKeepAlive(true);
    cOut[0] = new PrintWriter(skSocket[0].getOutputStream(), true);
    cIn[0] = new InputStreamReader(skSocket[0].getInputStream());
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

/* -----------------------------------
   The following code is what is called
   once for each link (i.e. cOut[0], cOut[1]);
   then, after determining which has the better bps
   (using a GET to grab the rest of the file),
   this code is repeated on the designated link
   ----------------------------------- */

// Make GET header
GET.append(o.strGetRange(ipServer, File, startRange, endRange - 1));
// Send GET for a specific range of data
cOut[0].print(GET);
cOut[0].flush();
try {
    char[] buffer = new char[4 * 1024];
    int n = 0;
    while (n >= 0) {
        try {
            n = cIn[0].read(buffer, 0, buffer.length);
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (n > 0) {
            raw[0].append(buffer, 0, n); // raw is a StringBuffer
        }
    }
} finally {
    //if (b != null) b.close();
}
I use the same code as above for my 2nd link (just shift the start/end range over a block) and after I determine the bps, I request the remaining data on the best link (don't worry about how big the file is, etc, I have all that logic done and not the point of the problem)
Now for my subsequent request for the rest of the data, I use the same code as above minus the socket/in/out creation. But the write doesn't send anything. (I have done the socket checks isClosed(), isConnected(), isBound(), isInputShutdown(), and isOutputShutdown(), and all show that the socket is still open/usable.)
According to the HTTP 1.1 spec, section 8:
An HTTP/1.1 client MAY expect a connection to remain open, but would decide to keep it open based on whether the response from a server contains a Connection header with the connection-token close. In case the client does not want to maintain a connection for more than that request, it SHOULD send a Connection header including the connection-token close.
So, you should ensure that you are using HTTP 1.1, and if you are, you should also check that your webserver supports persistent connections (it probably does). However, bear in mind that the spec says SHOULD and not MUST, and so this functionality could be considered optional by some servers.
As per the excerpt above, check for a connection header with the connection-close token.
Without a SSCCE, it's going to be difficult to give any concrete recommendations.
My goal is to connect to a server and then maintain the connection. The server keeps pushing me data whenever it has any. I wrote the following, but it works only the first time. From the second time onwards, it gives me an exception saying that get.getResponseBodyAsStream() is null. I was thinking that Apache's HttpClient keeps the connection alive by default, so what I understand is that I need a blocking call somewhere. Can someone help me out here?
GetMethod get = new GetMethod(url);
String nextLine;
String responseBody = "";
BufferedReader input;
try {
    httpClient.executeMethod(get);
    while (true) {
        try {
            input = new BufferedReader(new InputStreamReader(get.getResponseBodyAsStream()));
            while ((nextLine = input.readLine()) != null)
                responseBody += nextLine;
            System.out.println(responseBody);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
Actually at the end of the day, I am trying to get a persistent connection to the server (I will handle possible errors later) so that I can keep receiving updates from my server. Any pointers on this would be great.
I haven't looked in great detail or tested the code, but I think repeatedly opening up a reader on the response is probably a bad idea. I'd move the input = line up outside the loop, for starters.
In my opinion, the HttpClient library is meant for client-pull situations. I recommend you look at Comet, which supports server push.
You cannot do it like this. When you have read the "body" of the response, that is it. To get more information, the client has to send a new request. That is the way that the HTTP protocol works.
If you want to stream multiple chunks of data in a single HTTP response, then you are going to need to do the chunking and unchunking yourself. There a variety of approaches you could use, depending on the nature of the data. For example:
If the data is XML or JSON, send a stream of XML documents / JSON objects and have the receiver separate the stream into documents / objects before sending them to the parser.
Invent your own light-weight "packetization" where you precede each chunk with a start marker and a byte count.
The other alternative is to use multiple GET requests, but try to configure things so that the underlying TCP/IP connection stays open between requests; see HTTP Persistent Connections.
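The second alternative (light-weight "packetization") might look like the sketch below. The framing format (a 4-byte length prefix, with -1 as an end marker) is made up for illustration, and byte-array streams stand in for the HTTP response stream:

```java
import java.io.*;

public class ChunkingSketch {
    // Writer side: precede each chunk with a byte count.
    static void writeChunk(DataOutputStream out, byte[] chunk) throws IOException {
        out.writeInt(chunk.length);
        out.write(chunk);
    }

    // Reader side: a length of -1 marks the end of the stream.
    static byte[] readChunk(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0) return null;
        byte[] chunk = new byte[len];
        in.readFully(chunk);
        return chunk;
    }

    public static void main(String[] args) throws IOException {
        // "Server" writes two status chunks, then the end marker.
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(wire);
        writeChunk(out, "status=OK".getBytes("UTF-8"));
        writeChunk(out, "progress=42%".getBytes("UTF-8"));
        out.writeInt(-1);

        // "Client" unchunks them one at a time as they arrive.
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(wire.toByteArray()));
        byte[] chunk;
        while ((chunk = readChunk(in)) != null) {
            System.out.println(new String(chunk, "UTF-8"));
        }
    }
}
```

The point is that each chunk can be handled as soon as its bytes arrive, without waiting for the response to end.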
EDIT
Actually, I need to send only one GET request and keep waiting for status messages from the server.
The HTTP status code is transmitted in the first line of the HTTP response message. There can be only one per HTTP response, and (obviously) there can be only one response per HTTP request. Therefore what you are trying to do is impossible using normal HTTP status codes and request/reply messages.
Please review the alternatives that I suggested above. The bullet-pointed alternatives can be tweaked to allow you to include some kind of status in each chunk of data. And the last one (sending multiple requests) solves the problem already.
EDIT 2
To be more particular, it seems that keeping the connection alive is done transparently
That is correct.
... so all I need is a way to get notified when there is some data present that can be consumed.
Assuming that you are not prepared to send multiple GET requests (which is clearly the simplest solution!!!), then your code might look like this:
while (true) {
    String header = input.readLine(); // format "status:linecount"
    if (header == null) {
        break;
    }
    String[] parts = header.split(":");
    String status = parts[0];
    StringBuilder sb = new StringBuilder();
    int lineCount = Integer.parseInt(parts[1]);
    for (int i = 0; i < lineCount; i++) {
        String line = input.readLine();
        if (line == null) {
            throw new Exception("Ooops!");
        }
        sb.append(line).append('\n');
    }
    System.out.println("Got status = " + status + " body = " + sb);
}
But if you are only sending status codes or if the rest of each data chunk can be shoe-horned onto the same line, you can simplify this further.
If you are trying to implement this so that your main thread doesn't have to wait (block) on reading from the input stream, then either use NIO, or use a separate thread to read from the input stream.
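The separate-thread approach could be sketched like this; piped streams stand in for the HTTP response stream, and printing stands in for whatever notification mechanism (a queue, an Android Handler) the real app would use:

```java
import java.io.*;

public class ReaderThreadSketch {
    public static void main(String[] args) throws Exception {
        // PipedStreams stand in for the HTTP response stream.
        PipedOutputStream serverSide = new PipedOutputStream();
        BufferedReader input = new BufferedReader(
                new InputStreamReader(new PipedInputStream(serverSide)));

        // The background thread blocks on readLine() so the main
        // thread doesn't have to; it "notifies" by printing here.
        Thread reader = new Thread(() -> {
            try {
                String line;
                while ((line = input.readLine()) != null) {
                    System.out.println("got: " + line);
                }
            } catch (IOException ignored) {
                // Stream closed - stop reading.
            }
        });
        reader.start();

        // Simulate the server pushing two status lines, then closing.
        serverSide.write("status-1\nstatus-2\n".getBytes("UTF-8"));
        serverSide.close();
        reader.join();
    }
}
```

With NIO you would instead register the channel with a Selector and react to readiness events, but the dedicated-thread version is usually simpler for a single connection.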
I'm writing an Android app and I'm looking for the fastest (in terms of setup) way to send data to a server and receive information back on request.
We're talking basic stuff. I have a log file which tells me how a user is using my application (in beta; I wouldn't normally ruin the user experience by constantly logging) and I want to communicate that to my server (which I haven't set up yet).
I don't need security, I don't need high throughput or concurrent connections (I have 3 phones to play with) but I do need to set it up fast!
I remember back in the day that setting up XAMPP was particularly brainless; maybe I could then use PHP to send the file from the phone to the server?
The Server would ideally be able to respond to a GET which would allow me to send back some SQL statements which ultimately affect the UI. (It's meant to adapt the presented options depending on those most commonly used).
So there you have it. I used PHP about 4 years ago and will go down that route if it's the best, but if there's some kind of new-fangled port-opening-and-closing, binary-streaming, singing-and-dancing method that has superseded that option, I would love to know.
This tutorial seems useful but I don't really need object serialization, just text files back and forth, compressed naturally.
Android comes with the Apache HTTP Client 4.0 built in, as well as java.net.URL and java.net.HttpURLConnection; I'd rather not add too much bulk to my app with third-party libraries.
Please remember that I'm setting up the server side as well so I'm looking for an overall minimum lines of code!
private void sendData(ProfileVO pvo) {
    Log.i(getClass().getSimpleName(), "send task - start");
    HttpParams p = new BasicHttpParams();
    p.setParameter("name", pvo.getName());
    // Instantiate an HttpClient
    HttpClient client = new DefaultHttpClient(p);
    // Instantiate a GET HTTP method
    try {
        HttpResponse response = client.execute(new HttpGet("http://www.itortv.com/android/sendName.php"));
        InputStream is = response.getEntity().getContent();
        // You can convert the InputStream to a String with: http://senior.ceng.metu.edu.tr/2009/praeda/2009/01/11/a-simple-restful-client-at-android/
    } catch (ClientProtocolException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    Log.i(getClass().getSimpleName(), "send task - end");
}
The easiest solution for you would be to use the Apache HTTP client to GET and POST JSON requests to a PHP server.
Android already has a JSON builder/parser built in, as does PHP5, so integration is trivial and a lot easier than using its XML/socket counterparts.
If you want an example of how to do this on the Android side of things, here is a Twitter API that basically communicates with Twitter via their JSON REST API.
http://java.sun.com/docs/books/tutorial/networking/sockets/
This is a good background tutorial to Java socket communication.
Check out WWW.viewstreet.com, which is developing an Apache plugin specifically for Android server-side development for Java programmers.