I am currently working on a simple Java program that tests ping, upload speed, and download speed. It is for a class project, so rather than using Java's HttpURLConnection class, I'm doing it manually with Java sockets. I found it quite easy to find the format of a GET request, but cannot find the format of a PUT.
Here, for example, is how my GET request looks:
//Create GET request
outToServer.writeBytes("GET / HTTP/1.1\r\n"
        + "Host: " + server + "\r\n"
        + "Connection: close\r\n\r\n");
where outToServer is a DataOutputStream using the outputStream of a Socket. I am looking for a similar way to perform a PUT to upload a file to a server and measure the time.
Thank you very much for any help!
In short, the format is the same; it has "PUT" instead of "GET", and it is usually used to "add" or "update" entities on the server. After the headers comes an empty line (\r\n\r\n) and then the body, whose content corresponds to the "Content-Type" header (say, text/html or application/json); the body's length goes in a "Content-Length" header. An example of what a PUT request would look like:
PUT /boo/foo.txt HTTP/1.1
Host: www.foo.com
Content-Type: text/plain
Content-Length: 51

This is a testing content for the text file foo.txt
Note that the server must support the PUT method (a server is not required to support it).
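Since you're already writing the request by hand over a Socket, a minimal sketch of a PUT that uploads a file and times it could look like this (the file name, request path, and Content-Type are placeholders; outToServer and server are the same variables as in your GET example; exception handling omitted):

// Sketch: hand-rolled PUT over a raw Socket, timing the upload.
byte[] body = java.nio.file.Files.readAllBytes(java.nio.file.Paths.get("upload.txt")); // placeholder file
long start = System.currentTimeMillis();
outToServer.writeBytes("PUT /upload.txt HTTP/1.1\r\n"
        + "Host: " + server + "\r\n"
        + "Content-Type: text/plain\r\n"
        + "Content-Length: " + body.length + "\r\n"
        + "Connection: close\r\n\r\n");
outToServer.write(body);   // body bytes follow the blank line
outToServer.flush();
long elapsed = System.currentTimeMillis() - start;
System.out.println("Upload took " + elapsed + " ms for " + body.length + " bytes");

Strictly speaking this measures how long it takes to push the bytes out of the JVM; reading the server's response status line before stopping the clock gives a more honest upload time.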
In order to send string data to the server once, I do the following:
Make an "HttpURLConnection" to my URL address and open it
Set the required headers
Set setDoOutput to true for the connection
Create a new DataOutputStream from my connection and finally write my string data to it.
HttpURLConnection myConn = (HttpURLConnection) myUrl.openConnection();
myConn.setRequestProperty("Accept", "application/json, text/plain, */*");
myConn.setDoOutput(true);
DataOutputStream my_output = new DataOutputStream(myConn.getOutputStream());
my_output.write(myData.getBytes("UTF-8"));
But what should I do if I want to send exactly the same data, with the same URL and headers, multiple times?
Can I write to it multiple times (that is, can I use the last line of code multiple times)? Or should I repeat the steps above with a new connection?
And if so, should I wait a few seconds or milliseconds before sending the next one?
I also looked at other alternatives such as the "HttpClient" API and making synchronous HTTP requests, which as far as I can tell only help with setting the headers once.
I appreciate your help and support; any other alternatives would be welcome.
Thanks a million.
I understand that the question has been answered in the comments, but I am leaving this here so that future viewers can see it.
An HTTP request contains 3 main parts:
Request Line: Method, Path, Protocol
Headers: Key-value pairs
Body: Data
Running my_output.write() will just add bytes to the body until my_output.flush() has been executed. Flushing the stream will write the data to the server.
Because HTTP connections are usually closed by the server once all data has been sent/received, whether you create a new connection or just add to the body depends on your intentions. Typically, clients create a new connection for each request because each response should be handled individually, rather than sending one long, repetitive body. This varies, though, because some servers choose to hold a connection open (such as for WebSockets).
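For example, a rough sketch of sending the same payload several times, opening a fresh connection per request (the URL is a placeholder, myData is the string from your snippet, and exception handling is omitted):

URL myUrl = new URL("https://example.com/api"); // placeholder URL
byte[] payload = myData.getBytes("UTF-8");
for (int i = 0; i < 5; i++) {
    HttpURLConnection conn = (HttpURLConnection) myUrl.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Accept", "application/json, text/plain, */*");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
        out.write(payload); // write the whole body for this request
    }
    int status = conn.getResponseCode(); // forces the request out and reads the status line
    conn.disconnect();
}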
If you are open to external libraries, AsyncHttpClient would be a good fit for your intentions.
Alternatively, you can use cURL by running a terminal command with Runtime.getRuntime().exec(). More information about using cURL with POST requests can be found here. While cURL is efficient, you are depending on your OS having the command available (though most systems that can run Java do).
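A rough sketch of that route (URL and payload are placeholders; exception handling omitted):

String[] cmd = {
    "curl", "-X", "POST",
    "-H", "Content-Type: application/json",
    "-d", "{\"key\":\"value\"}",   // placeholder payload
    "https://example.com/api"      // placeholder URL
};
Process p = Runtime.getRuntime().exec(cmd); // requires the curl binary on the PATH
int exitCode = p.waitFor();                 // 0 means curl itself exited cleanly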
I've been banging my head against the wall with coding up a JAX-RS client for uploading a file to a Red Hat Satellite Six server. The REST API for the server specifies what appears to be a somewhat unusual method of uploading the file. The calling pattern specifies that I create an upload request and then use a PUT to a URL incorporating the id for the upload request (as part of the URL) and two parameters in the body: bytes from the file and an offset for where in the file those bytes belong. The intention is that callers would read in a file and send chunks of data to the server, which it will then re-assemble when a final call is invoked to commit the upload.
I've found a working Ruby client that implements the basic algorithm and I've confirmed it manipulates the API properly to get the file uploaded. The sequence of steps is basically what I have in my Java code: issue an API call to get an upload request id, then enter a loop to read in some bytes and PUT them to the upload URL. I've tcpdumped the client and see the following request (slightly truncated and cleaned up for readability):
PUT /katello/api/repositories/90/content_uploads/ee9028cf-bed6-40fb8561f91a86b95bdc HTTP/1.1
Accept: application/json;version=2
Accept-Encoding: gzip, deflate
Content-Type: application/x-www-form-urlencoded
Accept-Language: en
Multipart: true
Content-Length: 8643
User-Agent: Ruby
Authorization: Basic XXXXXXXX==
Host: mysathost.example.com
offset=0&content=%ED%AB%EE%DB%03%00%00%00%00%FFhelloworld-0.0.1-SNAPSHOT20150319153318%00%00%00%00% ...snip...
My JAX-RS Client's request looks like this:
PUT /katello/api/v2/repositories/90/content_uploads/ff4a4273-0b3f-49f0-8e61-223259e09f01 HTTP/1.1
Accept: application/json
Content-Type: application/x-www-form-urlencoded
Authorization: Basic XXXXXXXXX==
User-Agent: Jersey/2.13 (HttpUrlConnection 1.8.0_40)
Host: mysathost.example.com
Connection: keep-alive
Content-Length: 13567
offset=0&content=%EF%BF%BD%EF%BF%BD%EF%BF%BD%03%00%00%00%00%EF%BF%BDhelloworld-0.0.1-SNAPSHOT20150319153318%00%00%00%00 ...snip...
There are some obvious differences between the two. The most obvious is that the functioning Ruby client has a "Multipart: true" header that my client doesn't have. The Content-Length is different, possibly because the Ruby client is gzipping the request. Finally, it is obvious that the bytes for my file are being displayed differently between the two even though I'm using the same test file.
I have tried using the Multipart form and provider for Jersey but the raw requests don't seem like Multipart requests and I can't seem to find an Entity that has a way to natively send byte arrays (e.g. with a method that has byte[] in its signature) but still keep the right Content-Type.
As for the byte array encoding, the Ruby code appears to do a file.read of 4K worth of data at a time and dump the results into a variable and then passes that variable into its REST client machinery where it gets digested in a way that I can't trace. I thought it might be Base64 encoding the bytes (with URL escaping), but when I tried that with Commons Codec, the output of my tcpdumps didn't look remotely like the Ruby client. On the assumption that it is just treating the bytes as a Unicode string, I tried to do the same thing in my Java code. That looks closer to what the Ruby client does, but obviously the bytes don't seem to match exactly in the output and the Satellite Server complains that the file is corrupted when I commit the request. Currently my JAX-RS call looks like this:
WebTarget contentUpload = satServer.path("repositories").path(repoId).path("content_uploads").path(uploadRequestId);
Form uploadForm = new Form();
uploadForm.param("offset", Integer.toString(offset));
// data is a byte[]
uploadForm.param("content", new String(data));
Response response = contentUpload.request(MediaType.APPLICATION_JSON).put(Entity.form(uploadForm));
if (response.getStatus() != Response.Status.OK.getStatusCode()) {
    StatusType statusInfo = response.getStatusInfo();
    response.close();
    throw new SatelliteException(
            "Encountered error while uploading offset: " + offset
            + " for " + uploadRequestId + " : "
            + statusInfo.getStatusCode() + ": "
            + statusInfo.getReasonPhrase());
}
I have tried new String(data,"UTF-8") as well with no luck. I have also tried Commons Codec Base64 encoding with URL safety enabled. I have also tried a multipart/form, but the working request doesn't really seem to follow that pattern.
I'm looking for some idea of how to encode my bytes to match the working client, or maybe some sort of Jersey media processing that can handle sending a byte array in a regular form, or another suggestion about what I should be looking at. I'm not afraid to do some more digging if I can be pointed in a direction, but I feel I've exhausted what I can learn from the implementation of the Ruby client and server for the moment. It appears that the receiving server is using Apipie and Ruby for its implementation, if that helps.
I am trying to "spoof" a Firefox HTTP POST request in Java using java.net.HttpURLConnection.
I use Wireshark to check the HTTP headers being sent, so I have (hopefully) reliable source of information, why the Java result doesn't match the ideal situation (using Firefox).
I have set all header fields to exactly the values that Firefox sends via HTTP and noticed that the sequence of the header fields is not the same.
The output for Firefox is like:
POST ...
**Host**
User-Agent
Accept
Accept-Language
Accept-Encoding
Referer
Connection
Content-Type
Content-Length
When I let wireshark tap off my implementation in Java, it gives me a slightly different sequence of fields:
POST...
**User-Agent**
Accept
Accept-Language
Accept-Encoding
Referer
Content-Type
Host
Connection
Content-Length
So basically, I have all the fields, just in a different order.
I have also noticed that the Host field is sent with a different value:
www.thewebsite.com (Firefox) <---> thewebsite.com (Java HttpURLConnection), although I pass the String with the "www." to httpUrlConnection.setRequestProperty.
I have not yet analyzed the byte output of Wireshark, but I know that the server is not returning the same Location in the header fields of my response.
My questions are:
(1) Is it possible to control the sequence of the header fields in the request, and if so, is it possible to do so using HttpURLConnection? If not, is it possible to directly control the bytes in the HTTP header using Java? [I don't own the server, so my only hope of getting the POST method to work is for my application to pretend to be Firefox; the server is not very verbose, and my only info is: Apache with PHP]
(2) Is there a way to fix the setRequestProperty() problem ("www") as described above?
(3) What else could matter? (Do I need to be concerned with the underlying layers, TCP, ...?)
Thanks for any comments.
PS: I am trying to model a situation without cookies being sent, so that I can ignore their effect.
First, the order of the headers is irrelevant.
Second, in order to manually override the host header you need to set sun.net.http.allowRestrictedHeaders=true either in code
System.setProperty("sun.net.http.allowRestrictedHeaders", "true");
or at JVM start
-Dsun.net.http.allowRestrictedHeaders=true
This is a security precaution introduced by Oracle a while ago. That's because, according to the RFC:
The Host request-header field specifies the Internet host and port
number of the resource being requested, as obtained from the original
URI given by the user or referring resource (generally an HTTP URL).
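With the property in place, the Host header becomes just another request property. A minimal sketch (the URL is a placeholder); note that the property is read when the HTTP handler class is initialized, so set it before the first connection is opened:

System.setProperty("sun.net.http.allowRestrictedHeaders", "true"); // before any HTTP traffic
HttpURLConnection conn = (HttpURLConnection) new URL("http://thewebsite.com/").openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Host", "www.thewebsite.com"); // now honored instead of silently dropped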
The header order is not important; the headers received by the server can also arrive out of order, and you cannot control HttpURLConnection's header order. But if you write your own TCP client, you can control the header order yourself, like:
clientSocket = new Socket(serverHost, serverPort);
OutputStream os = clientSocket.getOutputStream();
String send = "GET /?id=y2y HTTP/1.1\r\nConnection: keep-alive\r\nKeep-Alive: timeout=15, max=200\r\nHost: chillyc.info\r\n\r\nGET /?id=y2y HTTP/1.1\r\nConnection: keep-alive\r\nKeep-Alive: timeout=15, max=200\r\nHost: chillyc.info\r\n\r\n";
os.write(send.getBytes());
The second question is answered by Marcel Stör in the first answer.
I got lucky with Apache HttpComponents. My guess is that the "Host" header's missing "www." made the difference; it can be set exactly as intended using Apache's HttpPost:
httpPost.setHeader("Host", "www.thewebsite.com");
The Wireshark output confirmed my suspicion. Also, this time the TCP communication prior to my HTTP POST looks different: (client ---> server, server ---> client, client ---> server) instead of (client ---> server, server ---> client, client ---> server, client ---> server).
Now I get the desired Location header value and the server is also setting the cookies. :)
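For reference, a rough sketch of what that HttpComponents call can look like (the URL, form fields, and extra headers are placeholders, not my actual values):

CloseableHttpClient client = HttpClients.createDefault();
HttpPost httpPost = new HttpPost("http://www.thewebsite.com/form.php"); // placeholder URL
httpPost.setHeader("Host", "www.thewebsite.com");
httpPost.setHeader("User-Agent", "Mozilla/5.0 ..."); // copy the Firefox value here
List<NameValuePair> params = new ArrayList<>();
params.add(new BasicNameValuePair("field", "value")); // placeholder form data
httpPost.setEntity(new UrlEncodedFormEntity(params, StandardCharsets.UTF_8));
try (CloseableHttpResponse response = client.execute(httpPost)) {
    System.out.println(response.getFirstHeader("Location")); // the header I was after
}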
For the most part, this question is resolved.
Actually, I wanted to use the lightweight HttpURLConnection because that's what the Android Developers blog suggests. The System.setProperty("sun.net.http.allowRestrictedHeaders", "true") approach might work as well, if it allows the "www." in the Host value.
I'm working on my first homework project in a web programming class, which is to write a simple web server in Java. I'm at the point where I have data being transmitted back and forth, and to the untrained eye, my baby server seems to be working fine. However, I can't find a way to send appropriate responses. (In other words, an invalid page request would show a 404-ish HTML page, but it still returns a 200 OK status when I view response headers).
I'm limited to being able to use standard network libraries for socket management and standard I/O libraries to read and write bytes and strings from an input stream. Here's some pertinent code:
From my main...
ServerSocket servSocket = new ServerSocket(port, 10); // Bind the socket to the port
System.out.println("Opened port " + port + " successfully!");
while(true) {
    //Accept the incoming socket, which means that the server process will
    //wait until the client connects, then prepare to handle client commands
    Socket newDataSocket = servSocket.accept();
    System.out.println("Client socket created and connected to server socket...");
    handleClient(newDataSocket); //Call handleClient method
}
From the handleClient method...(inside a loop that parses the request method and path)
if(checkURL.compareTo("/status") == 0) { // Check to see if status page has been requested
    System.out.println("STATUS PAGE"); // TEMPORARY. JUST TO MAKE SURE WE ARE PROPERLY ACCESSING STATUS PAGE
    sendFile("/status.html", dataStream);
}
else {
    sendFile(checkURL, dataStream); // If not status, just try the input as a file name
}
From sendFile method...
File f = new File(where); // Create the file object
if(f.exists() == true) { // Test if the file even exists so we can handle a 404 if not.
    DataInputStream din;
    try {
        din = new DataInputStream(new FileInputStream(f));
        int len = (int) f.length(); // Gets length of file in bytes
        byte[] buf = new byte[len];
        din.readFully(buf);
        writer.write("HTTP/1.1 200 OK\r\n"); // Return status code for OK (200)
        writer.write("Content-Length: " + len + "\r\n"); // WAS WRITING TO THE WRONG STREAM BEFORE!
        writer.write("Content-Type: "+type+"\r\n\r\n\r\n"); // TODO VERIFY NEW CONTENT-TYPE CODE
        out.write(buf); // Writes the FILE contents to the client
        out.flush();
        out.close();
    } catch (FileNotFoundException e) {
        e.printStackTrace(); // Not really handled since that's not part of project spec, strictly for debug.
    }
}
else {
    writer.write("HTTP/1.1 404 Not Found\r\n"); // Attempting to handle 404 as simple as possible.
    writer.write("Content-Type: text/html\r\n\r\n\r\n");
    sendFile("/404.html", sock);
}
Can anybody explain how, in the conditional from sendFile, I can change the response in the 404 block (Like I said before, the response headers still show 200 OK)? This is bugging the crap out of me, and I just want to use the HTTPResponse class but I can't. (Also, content length and type aren't displayed if f.exists == true.)
Thanks!
Edit: It looks to me like in the 404 situation, you're sending something like this:
HTTP/1.1 404 Not Found
Content-Type: text/html
HTTP/1.1 200 OK
Content-Length: 1234
Content-Type: text/html
...followed by the 404 page. Note the 200 line following the 404. This is because your 404 handling is calling sendFile, which is outputting the 200 response status code. This is probably confusing the receiver.
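A sketch of how that else branch could look instead, using the same writer/out pair from your code (load404Page here is a hypothetical helper that just reads /404.html into a byte array without writing any headers):

byte[] body = load404Page(); // hypothetical helper: read /404.html into a byte[]
writer.write("HTTP/1.1 404 Not Found\r\n");
writer.write("Content-Type: text/html\r\n");
writer.write("Content-Length: " + body.length + "\r\n\r\n");
writer.flush();   // push the status line and headers before the raw bytes
out.write(body);  // page contents, no second status line
out.flush();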
Old answer that missed that:
An HTTP response starts with a status line followed (optionally) by a series of headers, and then (optionally) includes a response body. The status line and headers are just lines in a defined format, like (to pick a random example):
HTTP/1.0 404 Not Found
To implement your small HTTP server, I'd recommend having a read through the spec and seeing what the responses should look like. It's a bit of a conceptual leap, but they really are just lines of text returned according to an agreed format. (Well, it was a conceptual leap for me some years back, anyway. I was used to environments that over-complicated things.)
It can also be helpful to do things like this from your favorite command line:
telnet www.google.com 80
GET /thispagewontbefound
...and press Enter. You'll get something like this:
HTTP/1.0 404 Not Found
Content-Type: text/html; charset=UTF-8
X-Content-Type-Options: nosniff
Date: Sun, 12 Sep 2010 23:01:14 GMT
Server: sffe
Content-Length: 1361
X-XSS-Protection: 1; mode=block
...followed by some HTML to provide a friendly 404 page. The first line above is the status line, the rest are headers. There's a blank line between the status line/headers and the first line of content (e.g., the page).
The problem you are seeing is most likely related to a missing flush() on your writer. Depending on which type of Writer you use, the bytes are first written to a buffer that needs to be flushed to the stream. This would explain why Content-Length and Content-Type are missing in the output. Just flush it before you write additional data to the stream.
Further, you call sendFile("/404.html", sock);. You did not post the full method here, but I suppose you call it recursively inside sendFile and thus send the 200 OK status for your file /404.html.
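A minimal sketch of the ordering for the 200 case, assuming writer and out wrap the same socket stream:

writer.write("HTTP/1.1 200 OK\r\n");
writer.write("Content-Length: " + len + "\r\n");
writer.write("Content-Type: " + type + "\r\n\r\n");
writer.flush(); // make sure the status line and headers reach the socket first
out.write(buf); // then the raw file bytes
out.flush();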
Based on your reported symptoms, I think the real problem is that you are not actually talking to your server at all! The evidence is that 1) you cannot get a 404 response, and 2) a 200 response does not have the content length and type. Neither of these should be possible ... if you are really talking to the code listed above.
Maybe:
you are talking to an older version of your code; i.e. something is going wrong in your build / deploy cycle,
you are (mistakenly) trying to deploy / run your code in a web container (Jetty, Tomcat, etc), or
your client code / browser is actually talking to a different server due to proxying, an incorrect URL, or something like that.
I suggest that you add some trace printing / logging at appropriate points of your code to confirm that it is actually being invoked.
I have a Java web server (no standard software; self-written). Everything seems to work fine, but when I try to call a page that contains pictures, those pictures are not displayed. Do I have to send the images with the output stream to the client? Am I missing an extra step?
As there is too much code to post here, here is a little outline of what happens, or is supposed to happen:
1. client logs in
2. client gets a session id and so on
3. the client is connected with an output stream
4. we build the response with the HTML code for a certain 'GET' request
5. look what the GET-request is all about
6. send html response || file || image (not working yet)
So much for the basic outline ...
It sends CSS files and the like, but I still have a problem with images!
Does anybody have an idea? How can I send images from a server to a browser?
Thanks.
I check requests from the client and responses from the server with Charles. It sends the files (like CSS or JS) fine, but not the images: though the status is "200 OK", the Transfer-Encoding is chunked... I have no idea what that means. Does anybody know?
EDIT:
Here is the file-reading code:
try{
    File requestedFile = new File( file );
    PrintStream out = new PrintStream( this.getHttpExchange().getResponseBody() );
    // the file is sent:
    InputStream in = new FileInputStream( requestedFile );
    byte content[] = new byte[(int) requestedFile.length()];
    in.read( content );
    try{
        // some header stuff
        out.write( content );
    }
    catch( Exception e ){
        e.printStackTrace();
    }
    in.close();
    if( out != null ){
        out.close();
        System.out.println( "FILE " + uri + " SEND!" );
    }
}
catch ( /*all exceptions*/ ) {
    // catch it ...
}
Your browser will send a separate GET image.png HTTP/1.1 request to your server, so you should handle those file GETs too. There is no good browser-independent way to embed an image in the HTML itself; only the <img src="data:base64codedimage"> data URI is available in some browsers.
As you create your HTML response, you can include the contents of the external js/css files directly between <script></script> and <style></style> tags.
Edit: I advise using Firebug for further diagnostics.
Are you certain that you send out the correct MIME type for the files?
If you need a tiny OpenSource webserver to be inspired by, then have a look at http://www.acme.com/java/software/Acme.Serve.Serve.html which serves us well for ad-hoc server needs.
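On the MIME type point, a minimal sketch of deriving the Content-Type from the file extension (the mappings below are just the common ones):

// Sketch: pick a Content-Type from the file extension before writing the headers.
static String guessContentType(String path) {
    if (path.endsWith(".html")) return "text/html";
    if (path.endsWith(".css"))  return "text/css";
    if (path.endsWith(".js"))   return "application/javascript";
    if (path.endsWith(".png"))  return "image/png";
    if (path.endsWith(".jpg") || path.endsWith(".jpeg")) return "image/jpeg";
    if (path.endsWith(".gif"))  return "image/gif";
    return "application/octet-stream";  // safe fallback for unknown types
}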
Do I have to send those external files or images with the output stream to the client?
The client will make separate requests for those files, which your server will have to serve. However, those requests can arrive over the same persistent connection (a.k.a. keep-alive). The two most likely reasons for your problem:
The client tries to send multiple requests over a persistent connection (which is the default with HTTP 1.1) and your server is not handling this correctly. The easiest way to avoid this is to send a Connection: close header with the response.
The client tries to open a separate connection and your server isn't handling it correctly.
Edit:
There's a problem with this line:
in.read( content );
This method is not guaranteed to fill the array; it will read an arbitrary number of bytes and return that number. You have to use it in a loop to make sure everything is read. Since you have to do a loop anyway, it's a good idea to use a smaller array as a buffer to avoid keeping the whole file in memory and running into an OutOfMemoryError with large files.
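A minimal sketch of that loop, using the same in/out streams as in your code (the buffer size is arbitrary):

byte[] buffer = new byte[8192];
int n;
while ((n = in.read(buffer)) != -1) { // read returns -1 at end of file
    out.write(buffer, 0, n);          // write only the bytes actually read
}
out.flush();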
Probably step #4 is where you are going wrong:
// 4. we built the response with the HTML-Code for a certain 'GET'-request
Some of the requests will be a 'GET /css/styles.css' or 'GET /js/main.js' or 'GET /images/header.jpg'. Make sure you stream those files in those circumstances - try loading those URLs directly.
Images (and CSS/JS files) are requested by the browser as completely separate GET requests, so there's definitely no need to "send those ... with the output stream" as part of the page. So if you're getting pages served up OK but images aren't being loaded, my first guess would be that you're not setting your response headers appropriately (for example, setting the Content-Type of the response to text/html), so the browser isn't interpreting it as a proper page and therefore isn't loading the images.
Some other things to try if that doesn't work:
Check if you can access an image directly
Use something like firebug or fiddler to check whether the browser is actually requesting the image/css/js files & that all your request/response headers look ok
Use an existing web server!