Java 7 URL Connection Fail

The following code used to work fine under Java 6 (and earlier) but it stopped working after updating to JRE 7 (Java 7).
The URL is an FTP file:
ftp://ftp-private.ncbi.nlm.nih.gov/pubchem/.fetch/96/4133257873201306969.sdf.gz
Here is the output I get:
application/octet-stream
-1
[Ljava.lang.StackTraceElement;@5419f97c
And here is my code:
public static void store(URL url, File targetFile) {
    try {
        System.out.println(url);
        URLConnection uc = url.openConnection();
        String contentType = uc.getContentType();
        System.out.println(contentType);
        int contentLength = uc.getContentLength();
        System.out.println(contentLength);
        Settings.setDownloadSize(contentLength);
        if (contentType.startsWith("text/") || contentLength == -1) {
            throw new IOException("This is not a binary file.");
        }
        InputStream raw = uc.getInputStream();
        InputStream in = new BufferedInputStream(raw);
        byte[] data = new byte[contentLength];
        int bytesRead = 0;
        StatusPanel.updateProgrssBar(bytesRead);
        int offset = 0;
        while (offset < contentLength) {
            bytesRead = in.read(data, offset, data.length - offset);
            if (bytesRead == -1) {
                break;
            }
            offset += bytesRead;
            StatusPanel.updateProgrssBar(offset);
        }
        in.close();
        if (offset != contentLength) {
            throw new IOException("Only read " + offset + " bytes; Expected " + contentLength + " bytes");
        }
        FileOutputStream out = new FileOutputStream(targetFile);
        out.write(data);
        out.flush();
        out.close();
        //StatusPanel.setStatus("File has been stored at " + targetFile.toString());
        //System.out.println("file has been stored at " + targetFile.toString());
    } catch (IOException e) {
        System.out.println(e.getStackTrace()); // produces the array reference seen in the output above
    }
}
The content length comes back as -1. How do I make this code compatible with Java 7?
The Java 7 compatibility notes describe the relevant change:
Area: API: Networking
Synopsis: Server Connection Shuts Down when Attempting to Read Data When HTTP Response Code is -1
Description: As a result of the bug fix for CR 6886436, the HTTP protocol handler will close the connection to a server that sends a response without a valid HTTP status line. When this occurs, any attempt to read data on that connection results in an IOException.
For example, the following code is problematic:
public static void test() throws Exception {
    .....
    HttpURLConnection urlc = (HttpURLConnection) url.openConnection();
    ....
    System.out.println("Response code: " + urlc.getResponseCode());
    /* The following line throws java.io.IOException: Invalid Http response
     * when the response code returned was -1 */
    InputStream is = urlc.getInputStream(); // PROBLEMATIC CODE
To work around this problem, check the return value from the getResponseCode method and deal with a -1 value appropriately, perhaps by opening a new connection or invoking getErrorStream on the connection.
Nature of incompatibility: behavioral
RFE: 7055058
The problem is definitely with getContentLength() method.
With JRE6, this method returns a value, but with JRE7 I get -1.

Based on Java 7's Javadoc for URLConnection, there are two possible reasons this is happening.
The first possible cause is that the content length is greater than Integer.MAX_VALUE. To determine whether this is the issue, I would use getContentLengthLong(), because it returns a long instead of an int; if the content length is greater than Integer.MAX_VALUE, getContentLength() will return -1. Also, since Java 7 it is preferred to use getContentLengthLong() over getContentLength(), as stated in Java 7's URLConnection Javadoc: "it returns a long instead and is therefore more portable." If you need to support both JRE 6 and 7, I would create Java 6 and Java 7 wrapper classes exposing one set of methods that your application uses to interact with URLs. Then, in your application's start script, check whether the host has JRE 6 or 7 and load the proper wrapper class for that JRE version. This is generally a good design because it prevents your application from being dependent on one specific JRE, third-party library, application, etc.
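To see the overflow behaviour in isolation, here is a minimal sketch; the stub connection and the example URL are invented for illustration, but the two accessors are the real java.net.URLConnection methods:

```java
import java.net.URL;
import java.net.URLConnection;

public class LengthCheck {
    // A stub connection that advertises an arbitrary Content-Length header,
    // so the int and long accessors can be compared without network traffic.
    static URLConnection stub(final long length) throws Exception {
        return new URLConnection(new URL("http://example.com/")) {
            @Override public void connect() { }
            @Override public String getHeaderField(String name) {
                return "content-length".equalsIgnoreCase(name) ? Long.toString(length) : null;
            }
        };
    }

    public static void main(String[] args) throws Exception {
        URLConnection uc = stub(3221225472L); // a hypothetical 3 GB response
        System.out.println(uc.getContentLength());     // prints -1: overflows an int
        System.out.println(uc.getContentLengthLong()); // prints 3221225472
    }
}
```

A JRE 6 wrapper class would fall back to getContentLength(), since getContentLengthLong() does not exist there.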
The second possibility is that the content-length header field is not known by the server, so getContentLength() or getContentLengthLong() returns -1. This is why I suggest trying getContentLengthLong() before anything else: it will probably be the quickest fix. If both methods return -1, I would suggest using an application like Apache JMeter to inspect the header information. A quick way of doing this is to run JMeter's HTTP Proxy Server and point your browser's proxy settings at localhost with the port you set for the proxy server. The recorded requests appear as individual elements, and if you expand them there should be an HTTP Header Manager that contains the name of each header with its value next to it.
Lastly, you may want to do some analysis on the server itself to see if there are any issues. Verify that the logs look OK, that all the correct processes are up, that configurations are set correctly, that the file still exists and is in the correct location, etc. Maybe the server is no longer set to respond to content-length requests. Also, verify whether your code functions with JRE 7 on another host.
I hope these suggestions are of value and that you are able to solve this issue. I would also note that you should consider using wrapper classes and following the release notes of each version of any third-party class you use; practices like reducing external dependencies through wrapper classes are easier to maintain.
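If the server genuinely does not report a length, a more robust variant of the store() method in the question is to stop pre-sizing the buffer from getContentLength() and simply copy until EOF. This is only a sketch under that assumption; the class and helper names are mine, and the progress-bar and Settings calls are omitted:

```java
import java.io.*;
import java.net.URL;

public class StreamStore {
    // Copies until EOF and returns the byte count; works whether or not
    // the server reported a Content-Length.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    // Stores the URL's content without ever consulting getContentLength().
    static void store(URL url, File targetFile) throws IOException {
        try (InputStream in = new BufferedInputStream(url.openConnection().getInputStream());
             OutputStream out = new FileOutputStream(targetFile)) {
            copy(in, out);
        }
    }

    public static void main(String[] args) throws IOException {
        // Offline demonstration of the copy loop with an in-memory stream.
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        System.out.println(copy(new ByteArrayInputStream(new byte[100000]), sink)); // prints 100000
    }
}
```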

Related

Odd behavior reading SSL socket Java

I am trying to write a simple echo server using SSL. The first line that goes to the server is echoed exactly. When I send a second line, only the first character is echoed. The client works off a BufferedReader's readLine from stdin. If I hit CR again, the rest of the message comes through. The server seems to be sending all of the data. Here is the output from client and server:
CLIENT:
Sending to server at 192.168.0.161
on port 9999
4 seasoNS
echo:4 seasoNS
are really good
echo:a
echo:re really good
SERVER:
server listening on 9999
has cr/lf
4 seasoNS
size to send: 10
has cr/lf
are really good
size to send: 16
exiting...
Here is the client loop:
try {
    BufferedReader consoleBufferedReader = getConsoleReader();
    sslsocket = getSecSocket(strAddress, port);
    BufferedWriter sslBufferedWriter = getSslBufferedWriter(sslsocket);
    InputStream srvrStream = sslsocket.getInputStream();
    String outMsg;
    while ((outMsg = consoleBufferedReader.readLine()) != null) {
        byte[] srvrData = new byte[1024];
        sslBufferedWriter.write(outMsg);
        sslBufferedWriter.newLine();
        sslBufferedWriter.flush();
        int sz = srvrStream.read(srvrData);
        String echoStr = new String(srvrData, 0, sz);
        System.out.println("echo:" + echoStr);
    }
} catch (Exception exception) {
    exception.printStackTrace();
}
This problem seemed so odd that I was hoping there was something obvious that I was missing.
What you're seeing is perfectly normal.
The assumption you're making that you're going to read the whole buffer in one go is wrong:
int sz = srvrStream.read(srvrData);
Instead, you need to keep looping until you get the delimiter of your choice (possibly a new line in your case).
This applies to plain TCP connections as well as SSL/TLS connections in general. This is why application protocols must have delimiters or content length (for example, HTTP has a double new line to end its headers and uses Content-Length or chunked transfer encoding to tell the other party when the entity ends).
In practice, with such a small example, you might not notice when your assumption breaks down.
However, the JSSE splits the records it sends into 1/n-1 on purpose to mitigate the BEAST attack. (OpenSSL would send 0/n.)
Hence, the problem is more immediately noticeable in this case.
Again, this is not an SSL/TLS or Java problem, the way to fix this is to treat the input you read as a stream and not to assume the size of buffers you read on one end will match the size of the buffers used to send that data from the other end.
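A sketch of that fix for the echo client above: accumulate bytes until the newline delimiter instead of trusting one read(). The class name and the simulated 1/n-1 split are my own illustration:

```java
import java.io.*;

public class DelimitedRead {
    // Reads one newline-terminated message, looping until the delimiter
    // arrives, instead of assuming a single read() returns the whole message.
    static String readMessage(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\n') break; // delimiter: end of one message
            buf.write(b);
        }
        return buf.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        // Simulates the TLS peer delivering "are really good\n" as a 1-byte
        // record followed by the remainder (the 1/n-1 split described above).
        InputStream chunked = new SequenceInputStream(
                new ByteArrayInputStream("a".getBytes("UTF-8")),
                new ByteArrayInputStream("re really good\n".getBytes("UTF-8")));
        System.out.println(readMessage(chunked)); // prints the whole line
    }
}
```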

Java socket connection

I have a target machine (*nix based) which is connected to my machine (local to my PC). I want to execute some commands on this target, so I'm using a Java socket to do it.
socket = new Socket("100.200.400.300", 23, null, 0); // remote address/port, local address/port
Here, after the above line, socket.isConnected() returns true.
Now, I've I/O streams of this Socket object and I'm doing -
while ((string = bufferedReader.readLine()) != null) {
    outputStream.write(string.getBytes());
    outputStream.flush();
}
The BufferedReader reads commands from a local file and writes them to the socket for execution on the target machine. Below is the code that reads from the socket:
while ((myIntVar = is.read()) != -1) {
    // The line below prints some junk data: hashes, an upward arrow, and spaces;
    // then the loop hangs until a socket I/O exception is raised.
    System.out.println((char) myIntVar);
    stringBuffer.append((char) myIntVar);
}
Here, my understanding is that, as I already have the socket connection established, I can just pass my commands and they should get executed on the other side (correct me if I'm wrong). But this is not working: I'm getting junk characters, as mentioned above. One more thing: I'm not passing a username and password when establishing the socket connection. Do I have to pass them as we do for telnet (and how? I am lost here)?
And, just for info, the above code is all I have (no server or client code as mentioned in various other threads).
Telnet does not quite use raw sockets as you have. Telnet has special ways of ending lines and ending messages. You will need to work out the correct protocol to use; there are actually several varying implementations of telnet. It may be easier to use a library.
An easy workaround would be to filter out any character that does not fall in the printable ASCII range we want.
private static String cleanMessage(String in) {
    StringBuilder sb = new StringBuilder();
    for (Character i : in.toCharArray()) {
        int charInt = i.hashCode();
        if (charInt >= 32 && charInt <= 126) {
            sb.append(i);
        }
    }
    return sb.toString();
}
The Apache Commons Net library has an implementation of telnet handling, with an example in its documentation.
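For what it's worth, the filter above can be exercised offline; the byte sequence in main() is a made-up imitation of a telnet IAC negotiation, not captured traffic:

```java
public class CleanDemo {
    // Same filter as above: keep only printable ASCII (32..126), dropping
    // telnet negotiation bytes and other control characters.
    static String cleanMessage(String in) {
        StringBuilder sb = new StringBuilder();
        for (char c : in.toCharArray()) {
            if (c >= 32 && c <= 126) {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // \u00FF\u00FB\u0001 mimics a telnet IAC WILL ECHO sequence.
        System.out.println(cleanMessage("\u00FF\u00FB\u0001login: ")); // prints "login: "
    }
}
```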

Failure in writing to a Java socket more then once

In my Java application I am using two network connections to a webserver. I ask for a range of data for a file from each interface with a GET message, and when I get the data I calculate how much time it took and the bps for each link.
This part works fine.
(I haven't closed the sockets yet)
I determine which link is faster, then "attempt" to send another HTTP GET request for the rest of the file on the faster link. This is where my problem is: the second cOut.write(GET) doesn't send anything, and hence I get no data back.
Do I have to close the socket and re-establish my connection before I can write to it again?
edit
OK to answer some Qs:
Yes TCP
The following code (used on the low speed link) is used first to grab the first block of data using the GET request:
GET /test-downloads/10mb.txt HTTP/1.1
HOST: www.cse.usf.edu
Range: bytes=0-999999
Connection: Close
Is the Connection: Close what is doing it? When I use Keep-Alive instead, I get a 5-second delay and still do not send/receive data on a subsequent write/read.
// This try block has another for the other link, but that one uses [1]
try {
    skSocket[0] = new Socket(ipServer, 80, InetAddress.getByName(ipLowLink), 0);
    skSocket[0].setTcpNoDelay(true);
    skSocket[0].setKeepAlive(true);
    cOut[0] = new PrintWriter(skSocket[0].getOutputStream(), true);
    cIn[0] = new InputStreamReader(skSocket[0].getInputStream());
} catch (IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
/* -----------------------------------
   The following code is what is called
   once for each link (i.e. cOut[0], cOut[1]);
   then, after determining which has the better bps
   (using a GET to grab the rest of the file),
   this code is repeated on the designated link.
   ----------------------------------- */
// Make GET header
GET.append(o.strGetRange(ipServer, File, startRange, endRange - 1));
// Send GET for a specific range of data
cOut[0].print(GET);
cOut[0].flush();
try {
    char[] buffer = new char[4 * 1024];
    int n = 0;
    while (n >= 0) {
        try {
            n = cIn[0].read(buffer, 0, buffer.length);
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (n > 0) {
            raw[0].append(buffer, 0, n); // raw is a StringBuffer
        }
    }
} finally {
    // if (b != null) b.close();
}
I use the same code as above for my 2nd link (just shift the start/end range over a block) and after I determine the bps, I request the remaining data on the best link (don't worry about how big the file is, etc, I have all that logic done and not the point of the problem)
Now for my subsequent request for the rest of the data, I use the same code as above minus the socket/in/out creation. But the write doesn't send anything. (I have done the socket checks isClosed(), isConnected(), isBound(), isInputShutdown(), isOutputShutdown(), and all prove that the socket is still open/usable.)
According to the HTTP 1.1 spec, section 8:
An HTTP/1.1 client MAY expect a connection to remain open, but would decide to keep it open based on whether the response from a server contains a Connection header with the connection-token close. In case the client does not want to maintain a connection for more than that request, it SHOULD send a Connection header including the connection-token close.
So, you should ensure that you are using HTTP 1.1, and if you are, you should also check that your webserver supports persistent connections (it probably does). However, bear in mind that the spec says SHOULD and not MUST and so this functionality could be considered optional for some servers.
As per the excerpt above, check for a connection header with the connection-close token.
Without a SSCCE, it's going to be difficult to give any concrete recommendations.
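As a concrete illustration of the header change, the request builder below emits keep-alive instead of close; the host and path are taken from the question, but the class and method names are mine, and this sketches only the request side, not the response handling:

```java
public class RangeRequest {
    // Builds an HTTP/1.1 GET for a byte range. HTTP/1.1 connections are
    // persistent by default, but an explicit "Connection: close" tells the
    // server to drop the connection after one response, which is why a
    // second write on the same socket then goes nowhere.
    static String getRange(String host, String file, long start, long end) {
        return "GET " + file + " HTTP/1.1\r\n"
                + "Host: " + host + "\r\n"
                + "Range: bytes=" + start + "-" + end + "\r\n"
                + "Connection: keep-alive\r\n"
                + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(getRange("www.cse.usf.edu", "/test-downloads/10mb.txt", 0, 999999));
    }
}
```

With keep-alive, the response must then be read by Content-Length (or chunk sizes) rather than until EOF, since the server no longer closes the stream to mark the end.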

How do I recognize EOF in Java Sockets?

I want to recognize the end of the data stream in Java sockets. When I run the code below, it just gets stuck and keeps running (it gets stuck at value 10).
I also want the program to download binary files, but the last byte is always different, so I don't know how to stop the while loop (programmatically).
String host = "example.com";
String path = "/";
Socket connection = new Socket(host, 80);
PrintWriter out = new PrintWriter(connection.getOutputStream());
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
System.out.println(dataBuffer);
out.close();
Thanks for any hints.
Actually your code is not correct.
In HTTP 1.0 each connection is closed after the response, and as a result the client can detect when the input has ended.
In HTTP 1.1 with persistent connections, the underlying TCP connection remains open, so a client can detect when the input ends in one of the following two ways:
1) The HTTP server puts a Content-Length header indicating the size of the response. The client can use this to understand when the response has been fully read.
2) The response is sent in chunked encoding, meaning it comes in chunks prefixed with the size of each chunk. Using this information, the client can construct the response from the chunks received from the server.
You should be using an HTTP client library, since implementing a generic HTTP client is not trivial (at all, I might say).
To be specific, in the code you posted you should have followed one of the above approaches.
Additionally, you should read in lines, since HTTP is a line-terminated protocol.
I.e. something like:
BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
String s = null;
while ((s = in.readLine()) != null) {
    // Read HTTP header
    if (s.isEmpty()) break; // No more headers
}
Sending a Connection: close header, as suggested by khachik, gets the job done (since closing the connection helps detect the end of input), but performance gets worse because you start a new connection for each request.
It depends, of course, on what you are trying to do (whether you care or not).
You should use existing libraries for HTTP. See here.
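A minimal sketch of the Content-Length approach (option 1 above), runnable against a canned response; the class name and the sample response bytes are illustrative only, and chunked encoding is not handled:

```java
import java.io.*;

public class ContentLengthRead {
    // Reads HTTP headers line by line, extracts Content-Length, then reads
    // exactly that many body characters -- so EOF detection does not depend
    // on the server closing the connection. ISO-8859-1 maps bytes 1:1 to
    // chars, which keeps this demo simple; real binary bodies need the raw
    // InputStream instead of a Reader.
    static String readBody(InputStream in) throws IOException {
        BufferedReader r = new BufferedReader(new InputStreamReader(in, "ISO-8859-1"));
        int contentLength = -1;
        String line;
        while ((line = r.readLine()) != null && !line.isEmpty()) {
            if (line.toLowerCase().startsWith("content-length:")) {
                contentLength = Integer.parseInt(line.substring("content-length:".length()).trim());
            }
        }
        if (contentLength < 0) throw new IOException("No Content-Length header");
        char[] body = new char[contentLength];
        int off = 0;
        while (off < contentLength) {
            int n = r.read(body, off, contentLength - off);
            if (n == -1) break;
            off += n;
        }
        return new String(body, 0, off);
    }

    public static void main(String[] args) throws IOException {
        String resp = "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello";
        System.out.println(readBody(new ByteArrayInputStream(resp.getBytes("ISO-8859-1")))); // prints hello
    }
}
```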
Your code works as expected. The server doesn't close the connection, and dataBuffer never becomes -1. This happens because connections are kept alive in HTTP 1.1 by default. Use HTTP 1.0, or put Connection: close header in your request.
For example:
out.write("GET "+ path +" HTTP/1.1\r\nHost: "+ host +"\r\nConnection: close\r\n\r\n");
out.flush();
int dataBuffer;
while ((dataBuffer = connection.getInputStream().read()) != -1)
System.out.print((char)dataBuffer);
out.close();

Downloading files in Java and common errors

I wrote a simple downloader as a Java applet. During some tests I discovered that my way of downloading files is not even half as robust as, e.g., Firefox's way of doing it.
My code:
InputStream is = null;
FileOutputStream os = null;
os = new FileOutputStream(...);
URL u = new URL(...);
URLConnection uc = u.openConnection();
is = uc.getInputStream();
final byte[] buf = new byte[1024];
for (int count = is.read(buf); count != -1; count = is.read(buf)) {
    os.write(buf, 0, count);
}
Sometimes my applet works fine; sometimes unexpected things happen. E.g., from time to time, in the middle of downloading, the applet throws an IOException or just loses the connection for a while, with no possibility of returning to the current download and finishing it.
I know that the really advanced way is too complicated for a single inexperienced Java programmer, but maybe you know some techniques to minimize the risk of these problems appearing.
So you want to resume your download.
If you get an IOException while reading from the URL, there was a problem with the connection.
This happens. Now you must note how much you have already downloaded, and open a new connection which starts from there.
To do this, use setRequestProperty() on the second connection, and send the right header field for "I want only the range of the resource starting with ...". See section 14.35.2, Range Retrieval Requests, in the HTTP 1.1 specification. You should check the header fields on the response to see if you really got back a range, though.
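A sketch of that resume flow, assuming an HTTP server that honors Range requests; the class and method names are mine, and only the header construction is exercised offline in main():

```java
import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResumeDownload {
    // Value for the Range header when resuming after `alreadyDownloaded` bytes.
    static String rangeValue(long alreadyDownloaded) {
        return "bytes=" + alreadyDownloaded + "-";
    }

    // Re-opens the connection asking only for the missing tail, then appends
    // it to the partially written file.
    static void resume(URL url, File target) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Range", rangeValue(target.length()));
        if (conn.getResponseCode() != HttpURLConnection.HTTP_PARTIAL) {
            throw new IOException("Server did not return a range (206)");
        }
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(target, true)) { // append mode
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(rangeValue(1048576)); // prints bytes=1048576-
    }
}
```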
