Context: I'm receiving a 400 error when attempting to get a crumb from a Jenkins CI server via Java's HttpsURLConnection class. A Python utility that I wrote makes the same call with no problems, as does wget. Here's the Java code:
String crumb_url = JENKINS_URL + "crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)";
String userpass = config.getProperty("USERNAME") + ":" + config.getProperty("API_TOKEN");
String basicAuth = "Basic " + javax.xml.bind.DatatypeConverter.printBase64Binary(userpass.getBytes());
URL url = new URL(crumb_url);
HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
conn.setSSLSocketFactory(sslFactory);
conn.setRequestMethod("GET");
conn.setRequestProperty("Authorization", basicAuth);
conn.setRequestProperty("User-Agent", "XXXXXXXXXX/1.0");
BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
The call to create the BufferedReader is where I get an IOException indicating that I got a 400 from the server. Since I'm getting a 400 instead of a certificate-related exception, I'm pretty sure the SSL stuff is working properly. I turned on debug info to see exactly what was being sent, and this is what it's telling me:
[Screenshot: the HTTPS GET request from the debug output, with 16 highlighted bytes before the request line]
Sorry about redacting some of the info, but it shouldn't be relevant. My main concern is the 16 bytes highlighted at the beginning of the request, and that the extra data may be why the Jenkins server is unhappy. Otherwise, the request looks practically identical to what wget sends, with the exception of "Keep-Alive" in wget vs. "keep-alive" in Java. I also attempted to generate the request by hand in case the capitalization difference was the problem, but I still get the 16 byte prefix before the GET. I'm also somewhat curious about the trailing data after the request, but I suspect as long as I have the two CR/LFs at the end it shouldn't matter.
If anyone has any ideas on how to resolve this, I'm all ears. Thanks.
I can address your 'main concern' but not your problem :-(
'Padded plaintext before ENCRYPTION' strongly suggests this was captured inside your TLS stack; since you're using Java, probably via javax.net.debug. When TLS sends application data (which for HTTPS is the HTTP request or response), it adds several things depending on the protocol and ciphersuite in use. For an AES (or, much less commonly, Camellia, SEED, or ARIA) CBC cipher in TLS 1.1 or 1.2, it adds a 16-byte IV at the beginning, plus an HMAC and padding at the end. The data at the end of your screenshot after the double CRLF is valid for a TLS CBC 'GenericBlock' record if the selected HMAC is SHA384, which it might be, since you didn't say which ciphersuite was used.
However, that means the request you are actually sending at the app level looks valid, which doesn't help with your 400.
Although: the /, :, and , in your query part are in the RFC 2396 reserved set, and " is in the excluded set; all of these are supposed to be percent-encoded. Web servers and apps vary wildly in how they handle this, and I have no idea if Jenkins cares.
The percent-encoding of the URL was the problem. Thanks to all that answered!
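For anyone hitting the same thing, a minimal sketch of the kind of fix this implies, assuming it is the unencoded xpath value that the server rejects (URLEncoder does form-encoding, which works here because the value contains no spaces):
// Percent-encode the xpath expression instead of embedding it raw in the URL.
String xpath = "concat(//crumbRequestField,\":\",//crumb)";
String crumb_url = JENKINS_URL + "crumbIssuer/api/xml?xpath="
        + java.net.URLEncoder.encode(xpath, "UTF-8");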
Related
I'm working on some code which would send HTTP GET/POST requests to a target server. The problem arises when the application attempts to establish a connection to a secure server (HTTPS). I used HttpsURLConnection for that purpose, as follows:
URL url = new URL("https://www.sendspace.com/");
String input, htmlResponse = "";
HttpsURLConnection con;
con = (HttpsURLConnection) url.openConnection();
BufferedReader br = new BufferedReader(new InputStreamReader(con.getInputStream()));
while ((input = br.readLine()) != null) {
    htmlResponse += input;
}
br.close();
However, it showed the following error:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
Therefore, I changed the above code like this:
URL url = new URL("https://www.sendspace.com/");
String input, htmlResponse = "";
SSLUtilities.trustAllHostnames();
SSLUtilities.trustAllHttpsCertificates();
HttpsURLConnection con;
// code below has not been changed
which used SSLUtilities.java.
However, I noticed that doing so had no effect whatsoever; I still received the exact same error:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
Similarly, I also tried a different approach, using SSLCertificateValidation.java, by adding this line before opening the HttpsURLConnection:
SSLCertificateValidation.disable();
and even that returned the exact same error as before. I would like to know exactly why this approach is not working, and how I could circumvent this error.
Please note the following:
It goes without saying that I have searched extensively on this issue and have already spent a considerable amount of time trying different suggestions and code examples, but to no avail. I know this error comes up a lot in questions, but I failed to find an existing answer that relates to my case.
I am aware of and understand the security issues that arise as a result of disabling certificate validation. However, since this application would connect to 100 to 200 different servers (as and when required), managing the certificate on a per-server basis is not feasible. Therefore, unless someone can state a method of (a) retrieving, (b) storing, and (c) using the correct certificate on a per-server basis (at runtime), all of which must be done programmatically without any manual work, suggesting the "clean" way to do it is not a solution to this scenario.
Thanks.
A handshake failure is not a certificate validation error, so you cannot fix it by ignoring certificate errors (which is a bad idea anyway). The SSLLabs page more clearly states the real cause of the problem:
Java 8u31 Protocol or cipher suite mismatch
If you look at the ciphers supported by the server (again in the SSLLabs report), you can see that all of them use AES with a key size of 256. But the documentation about supported key sizes states that, because of export restrictions, the maximum supported key size for AES is 128. This means your client offers only AES128 ciphers whereas the server accepts only AES256 ciphers; that is, there are no shared ciphers.
To fix the issue you must do as described in the referenced documentation:
If stronger algorithms are needed (for example, AES with 256-bit keys), the JCE Unlimited Strength Jurisdiction Policy Files must be obtained and installed in the JDK/JRE.
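As a quick diagnostic, a small sketch that lists the cipher suites this JVM offers by default; if no AES_256 suites appear in the output, the policy files are the limiting factor:
import javax.net.ssl.SSLSocketFactory;

public class ListCiphers {
    public static void main(String[] args) {
        // Print every cipher suite the default SSL socket factory enables.
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        for (String suite : factory.getDefaultCipherSuites()) {
            System.out.println(suite);
        }
    }
}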
I admit there is a possibility that I am not well informed about the subject, but I've done loads of reading and I still can't get an answer to my question.
From what I have learnt, to make communication secure with HTTPS I need to be using some sort of public key (it reminds me of PGP encryption).
My goal is to make a secure POST request from my Java application (which, the moment it starts working, I will rewrite as an Android app, if that matters) to a PHP application accessible via an HTTPS address.
Naturally I did some Google research on the topic and got a lot of results about how to make an SSL connection. None of those results used any sort of certificate or hash fingerprints; they just use HttpsURLConnection instead of HttpURLConnection, and everything else is almost identical.
Right now I have this, almost a copy-paste of something I found:
String httpsURL = "https://xx.yyyy.zzz/requestHandler.php?getParam1=value1&getParam2=value2";
String query = "email=" + URLEncoder.encode("abc@xyz.com", "UTF-8");
query+="&";
query+="password="+URLEncoder.encode("tramtarie","UTF-8");
URL myurl = new URL(httpsURL);
HttpsURLConnection con = (HttpsURLConnection) myurl.openConnection();
con.setRequestMethod("POST");
con.setRequestProperty("Content-length",String.valueOf(query.length()));
con.setRequestProperty("Content-Type","application/x-www-form-urlencoded");
con.setRequestProperty("User-Agent","Mozilla/4.0 (compatible; MSIE 5.0;Windows98;DigExt)");
con.setDoOutput(true);
con.setDoInput(true);
DataOutputStream output = new DataOutputStream(con.getOutputStream());
output.writeBytes(query);
output.close();
DataInputStream input = new DataInputStream(con.getInputStream());
for (int c = input.read(); c != -1; c = input.read()) {
    System.out.print((char) c);
}
input.close();
System.out.println("Resp Code:"+con.getResponseCode());
System.out.println("Resp Message:"+con.getResponseMessage());
This sadly does not work, and ends with the following exception:
Exception in thread "main" javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No subject alternative DNS name matching app.elessy.cz found
This probably means that Java checks the certificate and finds that it does not match the domain name for which it is registered (it is the web host's certificate, registered for the hosting domain, not the domain I own; the only reason I am using HTTPS is to secure data for internal purposes, and I do not want this site to be visited by outside users, so this certificate should be OK).
There are two things that I just don't get about the code and everything.
No code I have been able to find uses MD5/SHA-1 fingerprints (supposedly the public keys for message encryption?) or a certificate; they just somehow automatically connect to the HTTPS website, and it should work. It doesn't work for me, though.
Do I really need those MD5/SHA-1 fingerprints that were provided to me? Or at least, what do those fingerprints mean in this context?
Edit:
Following the given answer and duplicate mark, I managed to get it working, in the sense that I can communicate with the application behind HTTPS.
But I didn't have to use any sort of MD5/SHA-1 fingerprint. How do I know now that it is safe? Does the protocol handle this on its own? That is, is the communication secured either way when I use the built-in Java classes to connect to an app behind HTTPS?
I am probably not looking for a precise technical explanation, but more for an assurance that yes, the communication is safe even though I do not (knowingly) use a certificate or the server's public key to encrypt my messages; that it does the SSL connection for me.
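The linked duplicate is not reproduced here, but the usual workaround for this specific exception is to override hostname verification for the known-mismatched host. A minimal sketch of that idea, assuming the name mismatch is the only problem (con is the HttpsURLConnection from the snippet above; note this weakens security for that host):
import javax.net.ssl.HostnameVerifier;
import javax.net.ssl.SSLSession;

// Accept the one host whose certificate name does not match;
// all other connections still go through normal verification.
HostnameVerifier allowMismatch = new HostnameVerifier() {
    @Override
    public boolean verify(String hostname, SSLSession session) {
        return "app.elessy.cz".equals(hostname);
    }
};
con.setHostnameVerifier(allowMismatch);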
I am trying to "spoof" a Firefox HTTP POST request in Java using java.net.HttpURLConnection.
I use Wireshark to check the HTTP headers being sent, so I have a (hopefully) reliable source of information about why the Java result doesn't match the ideal situation (using Firefox).
I have set all header fields exactly to the values that Firefox sends via HTTP and noticed that the sequence of the header fields is not the same.
The output for Firefox is like:
POST ...
**Host**
User-Agent
Accept
Accept-Language
Accept-Encoding
Referer
Connection
Content-Type
Content-Length
When I let wireshark tap off my implementation in Java, it gives me a slightly different sequence of fields:
POST...
**User-Agent**
Accept
Accept-Language
Accept-Encoding
Referer
Content-Type
Host
Connection
Content-Length
So basically, I have all the fields, just in a different order.
I have also noticed that the Host field is sent with a different value:
www.thewebsite.com (Firefox) <---> thewebsite.com (Java HttpURLConnection), although I pass the String to httpUrlConnection.setRequestProperty() with the "www.".
I have not yet analyzed the byte output of Wireshark, but I know that the server is not returning the same Location in the header fields of my response.
My questions are:
(1) Is it possible to control the sequence of the header fields in the request, and if so, is it possible using HttpURLConnection? If not, is it possible to directly control the bytes in the HTTP header using Java? [I don't own the server, so my only hope of getting the POST method working is for my application to pretend to be Firefox; the server is not very verbose, and my only info is: Apache with PHP]
(2) Is there a way to fix the setRequestProperty() problem ("www") as described above?
(3) What else could matter? (Do I need to be concerned with the underlying layers, TCP, ...?)
Thanks for any comments.
PS. I am trying to model a situation without cookies being sent, so that I can ignore the effect.
First, the order of the headers is irrelevant.
Second, in order to manually override the host header you need to set sun.net.http.allowRestrictedHeaders=true either in code
System.setProperty("sun.net.http.allowRestrictedHeaders", "true")
or at JVM start
-Dsun.net.http.allowRestrictedHeaders=true
This is a security precaution introduced by Oracle a while ago. That's because, according to the RFC:
The Host request-header field specifies the Internet host and port
number of the resource being requested, as obtained from the original
URI given by the user or referring resource (generally an HTTP URL).
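A minimal sketch putting those two pieces together (the URL is a placeholder; the property is read once at class initialization, so set it before the first connection is opened):
// Allow restricted headers such as Host; must run before any
// HttpURLConnection is created in this JVM.
System.setProperty("sun.net.http.allowRestrictedHeaders", "true");

URL url = new URL("http://thewebsite.com/");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
// Without the property above, this call is silently ignored.
conn.setRequestProperty("Host", "www.thewebsite.com");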
The header order is not important; the headers received by the server are also unordered. You cannot control the header order with HttpURLConnection, but if you write your own TCP client you can control the order, like:
Socket clientSocket = new Socket(serverHost, serverPort);
OutputStream os = clientSocket.getOutputStream();
// Two identical GET requests pipelined on one keep-alive connection.
String request = "GET /?id=y2y HTTP/1.1\r\n"
        + "Connection: keep-alive\r\n"
        + "Keep-Alive: timeout=15, max=200\r\n"
        + "Host: chillyc.info\r\n\r\n";
os.write((request + request).getBytes());
The second question is answered by Marcel Stör in the first answer.
I got lucky with Apache HttpComponents; my guess is that the "Host" header's missing "www." made the difference. It can be set exactly as intended using Apache's HttpPost:
httpPost.setHeader("Host", "www.thewebsite.com");
The Wireshark output confirmed my suspicion. Also, this time the TCP communication prior to my HTTP POST looks different: (client ---> server, server ---> client, client ---> server) instead of (client ---> server, server ---> client, client ---> server, client ---> server).
Now I get the desired Location header value and the server is also setting the cookies. :)
For the most part, this question is resolved.
Actually, I wanted to use the lightweight HttpURLConnection because that's what the Android Developers blog suggests. The System.setProperty("sun.net.http.allowRestrictedHeaders", "true") approach might work as well, if it allows setting the "www." in the Host value.
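For reference, a minimal sketch of the HttpComponents approach described above (HttpClient 4.x; the URL and form field are hypothetical placeholders):
import java.util.ArrayList;
import java.util.List;
import org.apache.http.NameValuePair;
import org.apache.http.client.entity.UrlEncodedFormEntity;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.message.BasicNameValuePair;

CloseableHttpClient client = HttpClients.createDefault();
HttpPost httpPost = new HttpPost("http://www.thewebsite.com/form"); // hypothetical URL
// Unlike HttpURLConnection, HttpComponents honors a Host header set directly.
httpPost.setHeader("Host", "www.thewebsite.com");

List<NameValuePair> params = new ArrayList<NameValuePair>();
params.add(new BasicNameValuePair("field", "value")); // hypothetical form field
httpPost.setEntity(new UrlEncodedFormEntity(params, "UTF-8"));
client.execute(httpPost);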
I'm getting a "too many redirects" error from URLConnection when trying to fetch www.palringo.com:
URL url = new URL("http://www.palringo.com/");
HttpURLConnection.setFollowRedirects(true);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
System.out.println("Response code = " + connection.getResponseCode());
outputs the dreaded:
Exception in thread "main" java.net.ProtocolException: Server redirected too many times (20)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
According to wget there is just one redirect, from www.palringo.com to www.palringo.com/en/gb/
Any ideas why my request using URLConnection for /en/gb results in another 302 response for the same resource?
The problem is exemplified by:
URL url = new URL("http://www.palringo.com/en/gb/");
HttpURLConnection.setFollowRedirects(false);
HttpURLConnection connection = (HttpURLConnection) url.openConnection();
// Just for testing, use Chrome header, to eliminate "anti-crawler" response!
connection.setRequestProperty("User-Agent", "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.30 (KHTML, like Gecko) Ubuntu/11.04 Chromium/12.0.742.112 Chrome/12.0.742.112 Safari/534.30");
System.out.println("Response code = " + connection.getResponseCode());
This outputs:
Response code = 302
Redirected to /en/gb/
hence an infinite redirect loop.
Interestingly although browsers and wget handle it, curl does not:
joel#bohr:/tmp$ curl http://www.palringo.com/en/gb/
curl: (7) couldn't connect to host
A request for /en/gb/ is redirected to /en/gb/ precisely once.
The problem is that your HttpURLConnection (or whatever code you use -- sorry, I'm NOT familiar with Java) does not use cookies.
Disable cookies in browser and observe exactly the same behaviour -- infinite redirect.
The reason: the server checks whether a cookie is set. If it is not, the server sets it and redirects. Because cookies are not supported (or are disabled), the server-side script redirects over and over again.
Solution: Enable/add cookie support to your code and try again.
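A minimal sketch of adding cookie support with the JDK's built-in CookieManager (java.net, available since Java 6):
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;

// Install an in-memory cookie store for all HttpURLConnection use in this JVM.
CookieManager cookieManager = new CookieManager();
cookieManager.setCookiePolicy(CookiePolicy.ACCEPT_ALL);
CookieHandler.setDefault(cookieManager);
// Subsequent requests keep Set-Cookie responses and send them back,
// which should break the set-cookie-then-redirect loop.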
I think the redirect is defined with a pattern like /* -> /en/gb. So when you arrive at /en/gb, the redirect rule fires again.
Check your redirect rules. Where are they defined? In apache web server or in other place? Check all. Verify that this is (or is not) a case and fix the rules accordingly.
The problem is on the server side. It might be a broken Apache httpd rewrite rule that is sending redirects that loop back to the same place. It might be something else. Whatever it is, you are unlikely to be able to fix it on the client side.
I'm basically running a crawler and just noticed this issue.
Ah.
It is possible that it is an anti-crawler defence measure. "Hmmm ... looks like one of those pesky crawlers who ignore my robots.txt file, waste all of my bandwidth and steal my precious content. Lets cause him some pain with a redirect loop!!".
Check that your crawler is obeying the "robots.txt" protocol. Check the ToS for the site you are crawling to see if what you are doing is allowed.
You could be right, but if so how come wget and browsers handle this with just the one redirect?
Maybe because the server is looking at the request headers, or at your pattern of requests.
The Terms of Service (that I see) say this:
"You agree to not use the Service to: ... xiii - Run any automated systems, processes, scripts or bots for any purpose without the express written permission of Palringo."
Arguably, crawling their site is in violation of that.
You will also get this error if you're trying to connect to a service that requires authentication and you provide the wrong username and password.
I have HTML-based queries in my code, and one specific kind seems to give rise to IOExceptions upon receiving a 505 response from the server. I have looked up the 505 response code, along with posts from other people who seemed to have similar problems. Apparently 505 stands for an HTTP version mismatch, but when I copy the same query URL into any browser (I tried Firefox, SeaMonkey, and Opera) there seems to be no problem. One of the posts I read suggested that browsers might automatically handle the version mismatch problem.
I have tried to dig deeper using the nice developer tool that comes with Opera, and it looks like there is no version mismatch (I believe Java uses HTTP 1.1) and a nice 200 OK response is received. Why do I experience problems when the same query goes through my Java code?
private InputStream openURL(String urlName) throws IOException{
URL url = new URL(urlName);
URLConnection urlConnection = url.openConnection();
return urlConnection.getInputStream();
}
sample link: http://www.uniprot.org/uniprot/?query=mnemonic%3aNUGM_HUMAN&format=tab&columns=id,entry%20name,reviewed,organism,length
There have been some issues in Tomcat with URLs containing spaces. To fix the problem, you need to encode your URL with URLEncoder.
Example (notice the space):
String url="http://example.org/test test2/index.html";
String encodedURL=java.net.URLEncoder.encode(url,"UTF-8");
System.out.println(encodedURL); //outputs http%3A%2F%2Fexample.org%2Ftest+test2%2Findex.html
As a developer at www.uniprot.org I have the advantage of being able to look in the request logs. According to the logs, we have not sent a single 505 response code in the last year. In any case, our servers do understand HTTP 1.0 requests as well as the default HTTP 1.1 (though you might not get the results that you expect).
That makes me suspect there was either some kind of data corruption on the way, or you were affected by a hardware failure (lately we have had some trouble with a switch and a whole datacentre ;). In any case, if you ever have questions or problems with uniprot.org, please contact help@uniprot.org and we will see if we can help fix the problem.
Your code snippet seems normal and should work.
Regards,
Jerven Bolleman
Are you behind a proxy? This code works for me and prints out the same text I see through a browser.
final URL url = new URL("http://www.uniprot.org/uniprot/?query=mnemonic%3aNUGM_HUMAN&format=tab&columns=id,entry%20name,reviewed,organism,length");
final URLConnection conn = url.openConnection();
final InputStream is = conn.getInputStream();
System.out.println(IOUtils.toString(is));
conn is an instance of HttpURLConnection
from the API documentation for the URL class:
The URL class does not itself encode or decode any URL components
[...]. It is the responsibility of the caller to encode any fields,
which need to be escaped prior to calling URL, and also to decode any
escaped fields, that are returned from URL.
So if you have any spaces in your URL string, encode them before calling new URL(urlStr).
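A minimal sketch of that, assuming only the query values need escaping (URLEncoder form-encodes, so a space becomes '+', which servers generally accept in query strings):
import java.net.URL;
import java.net.URLEncoder;

// Encode only the query values; the scheme, host, and path stay intact.
String base = "http://www.uniprot.org/uniprot/";
String query = "query=" + URLEncoder.encode("mnemonic:NUGM_HUMAN", "UTF-8")
        + "&format=tab"
        + "&columns=" + URLEncoder.encode("id,entry name,reviewed,organism,length", "UTF-8");
URL url = new URL(base + "?" + query);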
@posdef I was having the same HTTP 505 problem. When I pasted the URL from my Java code into Firefox or Chrome, it worked, but through code it was giving an IOException. At last I found that the URL string contained brackets '(' and ')'; after removing them it worked, so it seems I needed URL encoding just like the browsers do.