Suppress NTLM Auth in Java

I am currently pen-testing a web application and came across an interesting phenomenon. During my testing sessions, I gathered URLs using a proxy. Now I wanted to test my URL list for anonymous access, so I wrote this little tool:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public static void main(String[] args) {
    try {
        TrustAllCerts.disableCertChecks(); // custom helper that disables certificate validation
        FileReader fr = new FileReader(new File("urls.txt"));
        BufferedReader br = new BufferedReader(fr);
        String urlStr = br.readLine();
        while (urlStr != null) {
            if (urlStr.trim().length() > 0) {
                URL url = new URL(urlStr);
                HttpsURLConnection urlc = (HttpsURLConnection) url.openConnection();
                urlc.connect();
                if (urlc.getResponseCode() == HttpURLConnection.HTTP_OK) {
                    System.out.println(urlStr);
                } else {
                    System.out.println("[" + urlc.getResponseCode() + "] " + urlStr);
                }
                urlc.disconnect();
            }
            urlStr = br.readLine();
        }
        br.close();
    } catch (Exception e) {
        e.printStackTrace();
    }
}
It does basically nothing but open a URL connection on each given URL and test the HTTP response code (actually I implemented a few more tests, e.g. whether I'm getting redirected to a login page). However, the problem is that this specific application (some custom MS SQL Server Reporting Services) is configured to use NTLM WWW authentication. If I try to access some of the URLs using Firefox, I get a 401 Unauthorized plus a login dialog. Internet Exploder performs NTLM auth in the background and grants access. It seems that the Java URLConnection (or URL) class does the same, because I am getting no 401. Is there a way to disable implicit NTLM authentication in Java? This is a bad pitfall for me.

I think the Java Networking documentation is the best resource. Setting http.auth.preference="basic" should get you what you want, assuming you don't need Digest or something else. I'm not sure if you can go beyond that to disable NTLM entirely.
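For example, a minimal sketch of that property in use (the class name and URL are mine, purely for illustration; the property comes from the networking docs mentioned above):

import java.net.HttpURLConnection;
import java.net.URL;

public class BasicOnlyCheck {
    public static void main(String[] args) throws Exception {
        // Prefer Basic authentication; with this set, the HTTP stack should not
        // volunteer an NTLM handshake on its own. Must be set before the first connection.
        System.setProperty("http.auth.preference", "basic");

        URL url = new URL("https://reports.example.com/some/report"); // hypothetical URL
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println("[" + conn.getResponseCode() + "] " + url); // the 401 should now surface
        conn.disconnect();
    }
}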
Another thing to consider is other Java HTTP client implementations, like Apache's or Google's.
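If you do reach for Apache HttpClient (4.x), one way to express the same preference is to restrict the preferred auth schemes for the target host. A sketch, with a hypothetical URL; note that stock HttpClient only answers an NTLM challenge if you explicitly configure NTLM credentials, so a plain client may already surface the raw 401:

import java.util.Collections;
import org.apache.http.client.config.AuthSchemes;
import org.apache.http.client.config.RequestConfig;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class NoNtlmClient {
    public static void main(String[] args) throws Exception {
        // Only consider Basic for the target host, so the client never
        // participates in an NTLM exchange.
        RequestConfig config = RequestConfig.custom()
                .setTargetPreferredAuthSchemes(Collections.singletonList(AuthSchemes.BASIC))
                .build();
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultRequestConfig(config)
                .build();
             CloseableHttpResponse response =
                     client.execute(new HttpGet("https://reports.example.com/some/report"))) { // hypothetical URL
            System.out.println(response.getStatusLine());
        }
    }
}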

I'm not sure that this will help, but I've been stumped by the opposite problem.
I wanted NTLM auth to take place, so on my local machine I use a free app called CNTLM. It's a local proxy server that forwards (and NTLM-authenticates) incoming requests. Good for apps that can't use NTLM proxies.
I'm sorry, I know this isn't answering your question, but maybe it proves helpful to someone out there! :)

Related

HTTP request gets authorized and unauthorized on different environments with as it seems same setup

We have a funny situation where a basic GET HTTP request doesn't pass Windows NTLM authorization at an IIS server. At the same time, the same code runs on another environment and executes successfully.
When we repeat the request via a browser, it executes successfully.
It seems that somehow the Java code doesn't send the correct authorization information with the request. We are trying to figure out how this can be. The classes used are from the java.net package.
We tried switching the account under which Tomcat is running to Local System and back, with no success.
Code is as simple as it can be:
public static String sendHttpRequest(String urlString, String method, boolean disconnect) {
    HttpURLConnection urlConnection = null;
    String result = null;
    try {
        URL url = new URL(urlString);
        urlConnection = (HttpURLConnection) url.openConnection();
        urlConnection.addRequestProperty("Content-Length", "0");
        urlConnection.setRequestMethod(method);
        urlConnection.setUseCaches(false);
        StringBuilder sb = new StringBuilder();
        try (InputStream is = urlConnection.getInputStream()) {
            InputStream buffer = new BufferedInputStream(is);
            Reader reader = new InputStreamReader(buffer, "UTF-8");
            int c;
            while ((c = reader.read()) != -1) {
                sb.append((char) c);
            }
        }
        int statusCode = urlConnection.getResponseCode();
        if (statusCode >= 200 && statusCode < 300) {
            result = sb.toString();
        }
    } catch (IOException e) {
        LOGGER.warning(e.getMessage());
    } finally {
        if (disconnect && urlConnection != null) {
            urlConnection.disconnect();
        }
    }
    return result;
}
Explicit questions to answer:
How to log/trace/debug information used for authentication purpose on the client side? Any hint would be appreciated :)
Apache HttpClient in its newer versions supports native Windows Negotiate, Kerberos and NTLM via SSPI through JNA when running on a Windows OS. So if you have the option to use a newer version (from 4.4, I believe), this is a non-issue.
For example:
http://hc.apache.org/httpcomponents-client-4.4.x/httpclient-win/examples/org/apache/http/examples/client/win/ClientWinAuth.java
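For reference, a minimal sketch along the lines of that example (Windows only; requires the httpclient-win module and JNA on the classpath; the URL is hypothetical):

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.WinHttpClients;

public class WinAuthCheck {
    public static void main(String[] args) throws Exception {
        if (!WinHttpClients.isWinAuthAvailable()) {
            System.out.println("Integrated Windows auth is not available on this platform");
            return;
        }
        // Uses the credentials of the logged-in Windows user via SSPI,
        // much like Internet Explorer's transparent NTLM.
        try (CloseableHttpClient client = WinHttpClients.createDefault();
             CloseableHttpResponse response =
                     client.execute(new HttpGet("http://myserver/myapp/"))) { // hypothetical URL
            System.out.println(response.getStatusLine());
        }
    }
}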
As far as I am aware, vanilla JDKs themselves do not have built-in NTLM support. This would then mean that you have to manually wire the steps of the protocol yourself.
I am writing this based on my experience: I had to roll a whole multi-round SPNEGO implementation myself, supporting both the Oracle and IBM JREs. It was (not) fun. Although that was just the server side of the fun, I remember stumbling over this missing feature on the Java client side too (because it was SPNEGO and not plain NTLM, and the client side was a browser, I could skip that part).
This may have changed with the new HTTP Client in Java 11, I do not have any experience with that yet.
In order to debug the TLS part of your exchange, you can add a command line parameter:
-Djavax.net.debug=ssl,handshake
This way, you'll see the handshake process step by step on the client side. (Note that this traces SSL/TLS, not the HTTP authentication headers themselves, but it is often a useful first hint.)

Java - Retrieving a web page with authorization

I'm trying to retrieve a GitHub web page using Java code, for which I used the following:
String startingUrl = "https://github.com/xxxxxx";
URL url = new URL(startingUrl);
HttpURLConnection uc = (HttpURLConnection) url.openConnection();
uc.connect();
String line = null;
StringBuffer tmp = new StringBuffer();
try {
    BufferedReader in = new BufferedReader(new InputStreamReader(uc.getInputStream(), "UTF-8"));
    while ((line = in.readLine()) != null) {
        tmp.append(line);
    }
} catch (FileNotFoundException e) {
}
However, the page I receive here is different from what I observe in the browser after logging in to GitHub. I tried sending an authorization header as follows, but it didn't work either.
uc.setRequestProperty("Authorization", "Basic encodexxx");
How can I retrieve the same page that I see when I am logged in?
I can't tell you more on this because I don't know what you are getting, but the most common issue for web crawlers is that website owners generally don't like them. Thus, you should behave like a regular user, your browser for instance. Open your browser's developer tools (press F12) when you visit the website, see what your browser sends in the request, and then try to mimic it: for example, add Host, Referer, etc. to your headers. You will need to experiment with this.
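A minimal sketch of that header-mimicking idea with plain HttpURLConnection (the header values are examples; copy the real ones from your own browser's network tab):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BrowserLikeFetch {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://github.com/xxxxxx");
        HttpURLConnection uc = (HttpURLConnection) url.openConnection();
        // Pretend to be a regular browser; copy these from the F12 network tab.
        uc.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
        uc.setRequestProperty("Accept", "text/html,application/xhtml+xml");
        uc.setRequestProperty("Accept-Language", "en-US,en;q=0.9");
        uc.setRequestProperty("Referer", "https://github.com/");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(uc.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}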
Also good to know: some website owners use advanced techniques to block you from accessing their site, some won't stop you from crawling, and some will let you do whatever you want. The fairest option is to check www.somedomain.com/robots.txt, which lists the endpoints that are allowed for scraping and those that are off-limits.

Getting "java.net.ProtocolException: Server redirected too many times" Error

I'm making a simple URL request with code like this:
URL url = new URL(webpage);
URLConnection urlConnection = url.openConnection();
InputStream is = urlConnection.getInputStream();
But on that last line, I'm getting the "redirected too many times" error. If my "webpage" var is, say, google.com, then it works fine, but when I try to use my servlet's URL, it fails. It seems I can adjust the number of times it follows redirects (the default is 20) with this:
System.setProperty("http.maxRedirects", "100");
But when I crank it up to, say, 100, it definitely takes longer to throw the error, so I know it is trying. However, the URL to my servlet works fine in (any) browser, and using the "persist" option in Firebug it seems to redirect only once.
A bit more info on my servlet: it is running in Tomcat and fronted by Apache using mod_proxy_ajp. Also of note, it is using form authentication, so any URL you enter should redirect you to the login page. As I said, this works correctly in all browsers, but for some reason the redirect isn't working with URLConnection in Java 6.
Thanks for reading ... ideas?
It's apparently redirecting in an infinite loop because you don't maintain the user session. The session is usually backed by a cookie. You need to create a CookieManager before you use URLConnection.
// First set the default cookie manager.
CookieHandler.setDefault(new CookieManager(null, CookiePolicy.ACCEPT_ALL));
// All the following subsequent URLConnections will use the same cookie manager.
URLConnection connection = new URL(url).openConnection();
// ...
connection = new URL(url).openConnection();
// ...
connection = new URL(url).openConnection();
// ...
See also:
Using java.net.URLConnection to fire and handle HTTP requests
Dude, I added these lines:
java.net.CookieManager cm = new java.net.CookieManager();
java.net.CookieHandler.setDefault(cm);
See this example:
java.net.CookieManager cm = new java.net.CookieManager();
java.net.CookieHandler.setDefault(cm);
String buf = "";
dk = new DAKABrowser(input.getText());
try {
    URL url = new URL(dk.toURL(input.getText()));
    DataInputStream dis = new DataInputStream(url.openStream());
    String inputLine;
    while ((inputLine = dis.readLine()) != null) {
        buf += inputLine;
        output.append(inputLine + "\n");
    }
    dis.close();
} catch (MalformedURLException me) {
    System.out.println("MalformedURLException: " + me);
} catch (IOException ioe) {
    System.out.println("IOException: " + ioe);
}
titulo.setText(dk.getTitle(buf));
I was using Jenkins on Tomcat 6 in a Unix environment and got this bug. For some reason, upgrading to Java 7 solved it. I'd be interested to know exactly why that fixed it.
I faced the same problem, and it took a considerable amount of time to understand it.
To summarize, the problem was a mismatch of headers.
Consider the resource below:
@GET
@Path("booksMasterData")
@Produces(Array(core.MediaType.APPLICATION_JSON))
def booksMasterData(@QueryParam("stockStatus") stockStatus: String): Response = {
    // some logic here to get the books and send them back
}
And here is the client code, which was trying to connect to the resource above:
ClientResponse clientResponse = restClient.resource("http://localhost:8080/booksService")
    .path("rest").path("catalogue").path("booksMasterData")
    .accept("application/boks-master-data+json")
    .get(ClientResponse.class);
And the error was thrown on exactly the line above.
What was the problem?
My resource was producing "application/json" (via the @Produces annotation), while my client was requesting accept("application/boks-master-data+json"), and this mismatch was the problem.
It took me a long time to find this out, as the error was in no way obviously related. The breakthrough came when I tried to access my resource in Postman: with an Accept: "application/json" header it worked fine, but with an Accept: "application/boks-master-data+json" header it didn't.
And again, even Postman was not giving me a proper error; the error was too generic.
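Assuming restClient above is a Jersey 1.x client, the fix is presumably just aligning the Accept header with the resource's @Produces value. A self-contained sketch:

import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.ClientResponse;

public class BooksClient {
    public static void main(String[] args) {
        Client client = Client.create();
        ClientResponse clientResponse = client.resource("http://localhost:8080/booksService")
                .path("rest").path("catalogue").path("booksMasterData")
                .accept("application/json") // must match the resource's @Produces media type
                .get(ClientResponse.class);
        System.out.println(clientResponse.getStatus());
    }
}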

Connect to a site using proxy code in java

I want to connect to a site through a proxy in Java. This is the code I have written:
public class ConnectThroughProxy
{
    Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy ip", 8080));

    public static void main(String[] args)
    {
        try
        {
            URL url = new URL("http://www.rgagnon.com/javadetails/java-0085.html");
            URLConnection connection = url.openConnection();
            String encoded = new String(Base64.encode(new String("user_name:pass_word").getBytes()));
            connection.setDoOutput(true);
            connection.setRequestProperty("Proxy-Authorization", "Basic " + encoded);
            String page = "";
            String line;
            StringBuffer tmp = new StringBuffer();
            BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            while ((line = in.readLine()) != null)
            {
                page.concat(line + "\n");
            }
            System.out.println(page);
        }
        catch (Exception ex)
        {
            ex.printStackTrace();
        }
    }
}
While trying to run this code it throws the following error:
java.lang.IllegalArgumentException: Illegal character(s) in message header value: Basic dXNlcl9uYW1lOnBhc3Nfd29yZA==
at sun.net.www.protocol.http.HttpURLConnection.checkMessageHeader(HttpURLConnection.java:323)
at sun.net.www.protocol.http.HttpURLConnection.setRequestProperty(HttpURLConnection.java:2054)
at test.ConnectThroughProxy.main(ConnectThroughProxy.java:30)
Any idea how to do it?
If you're just trying to make HTTP requests through an HTTP proxy server, you shouldn't need to go to this much effort. There's a writeup here: http://java.sun.com/javase/6/docs/technotes/guides/net/proxies.html
But it basically boils down to just setting the http.proxyHost and http.proxyPort system properties, either on the command line or in code:
// Set the http proxy to webcache.mydomain.com:8080
System.setProperty("http.proxyHost", "webcache.mydomain.com");
System.setProperty("http.proxyPort", "8080");
// Next connection will be through proxy.
URL url = new URL("http://java.sun.com/");
InputStream in = url.openStream();
// Now, let's 'unset' the proxy.
System.clearProperty("http.proxyHost");
// From now on HTTP connections will be done directly.
It seems to me that you are not using your Proxy instance at all. I think you should pass it when you are creating the URLConnection instance:
URLConnection connection=url.openConnection(proxy);
Setting the http.proxy system properties is easier, and when using some third-party libraries that don't support passing a Proxy instance it is the only possible solution, but its drawback is that it applies globally to the whole process.
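For completeness, a sketch combining the Proxy instance with java.util.Base64, whose encoder never inserts line breaks into its output; a stray line break in the encoded credentials is one plausible cause of the "Illegal character(s) in message header value" error above (my assumption, not confirmed):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.InetSocketAddress;
import java.net.Proxy;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class ConnectThroughProxyFixed {
    public static void main(String[] args) throws Exception {
        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("proxy ip", 8080));
        URL url = new URL("http://www.rgagnon.com/javadetails/java-0085.html");
        URLConnection connection = url.openConnection(proxy); // actually use the Proxy instance
        // java.util.Base64 produces no line breaks, so the header value stays legal.
        String encoded = Base64.getEncoder()
                .encodeToString("user_name:pass_word".getBytes(StandardCharsets.UTF_8));
        connection.setRequestProperty("Proxy-Authorization", "Basic " + encoded);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(connection.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}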
I was using the Google Data APIs, and the only way I got the proxy settings to work was to provide ALL the proxy-related parameters, even though they are set to be empty:
/usr/java/jdk1.7.0_04/bin/java -Dhttp.proxyHost=10.128.128.13
-Dhttp.proxyPassword -Dhttp.proxyPort=80 -Dhttp.proxyUserName
-Dhttps.proxyHost=10.128.128.13 -Dhttps.proxyPassword -Dhttps.proxyPort=80
-Dhttps.proxyUserName com.stackoverflow.Runner
Where the username and password are NOT required, and the same proxy host and port are used for both HTTP and HTTPS (reference: https://code.google.com/p/syncnotes2google/issues/detail?id=2#c16).
If your Java code has an instance of the URL class, it should pick those settings up...

Cookies turned off with Java URLConnection

I am trying to make a request to a webpage that requires cookies. I'm using HttpURLConnection, but the response always comes back saying:
<div class="body"><p>Your browser's cookie functionality is turned off. Please turn it on.
How can I make the request such that the queried server thinks I have cookies turned on? My code goes something like this:
private String readPage(String page) throws MalformedURLException {
    StringBuilder sb = new StringBuilder(); // was undeclared in the original snippet
    try {
        URL url = new URL(page);
        HttpURLConnection uc = (HttpURLConnection) url.openConnection();
        uc.connect();
        InputStream in = uc.getInputStream();
        int v;
        while ((v = in.read()) != -1) {
            sb.append((char) v);
        }
        in.close();
        uc.disconnect();
    } catch (IOException e) {
        e.printStackTrace();
    }
    return sb.toString();
}
You need to add a CookieHandler to the system for it to handle cookies. Before Java 6, there was no CookieHandler implementation in the JRE; you had to write your own. If you are on Java 6, you can do this:
CookieHandler.setDefault(new CookieManager());
URLConnection's cookie handling is really weak. It barely works and doesn't handle all the cookie rules correctly. You should use Apache HttpClient if you are dealing with sensitive cookies like authentication, for example as sketched below.
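A minimal sketch of HttpClient 4.x with a shared cookie store (the URLs are hypothetical; the point is that session cookies received on the first request are sent back automatically on the next):

import org.apache.http.client.CookieStore;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.BasicCookieStore;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class CookieAwareFetch {
    public static void main(String[] args) throws Exception {
        CookieStore cookieStore = new BasicCookieStore();
        try (CloseableHttpClient client = HttpClients.custom()
                .setDefaultCookieStore(cookieStore)
                .build()) {
            // First request receives Set-Cookie; the store keeps it.
            try (CloseableHttpResponse r1 = client.execute(new HttpGet("http://example.com/login"))) {
                System.out.println(r1.getStatusLine());
            }
            // Second request automatically sends the stored cookies back.
            try (CloseableHttpResponse r2 = client.execute(new HttpGet("http://example.com/page"))) {
                System.out.println(r2.getStatusLine());
            }
        }
    }
}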
I think the server can't determine from the first request alone that a client does not support cookies, so it probably sends a redirect and checks whether the cookie comes back. Try disabling redirects:
uc.setInstanceFollowRedirects(false);
Then you will be able to read the cookies from the response and send them back (if you need to) on the next request:
uc.getHeaderFields() // the Set-Cookie headers are in here
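A sketch of that manual round-trip with plain java.net (the URL is hypothetical; a real client would also honor cookie attributes like Path and Expires):

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class ManualCookieRoundTrip {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://example.com/page"); // hypothetical URL
        HttpURLConnection first = (HttpURLConnection) url.openConnection();
        first.setInstanceFollowRedirects(false); // stop the redirect loop
        List<String> setCookies = first.getHeaderFields().get("Set-Cookie");
        first.disconnect();

        List<String> pairs = new ArrayList<>();
        if (setCookies != null) {
            for (String c : setCookies) {
                pairs.add(c.split(";", 2)[0]); // keep only the name=value part
            }
        }

        HttpURLConnection second = (HttpURLConnection) url.openConnection();
        if (!pairs.isEmpty()) {
            second.setRequestProperty("Cookie", String.join("; ", pairs));
        }
        System.out.println("[" + second.getResponseCode() + "] sent " + pairs.size() + " cookie(s)");
        second.disconnect();
    }
}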
URLConnection conn = url.openConnection();
conn.setRequestProperty("User-Agent", "Mozilla/5.0 (Windows; U; Windows NT 6.0; pl; rv:1.9.1.2) Gecko/20090729 Firefox/3.5.2");
conn.addRequestProperty("Referer", "http://xxxx");
conn.addRequestProperty("Cookie", "...");
If you're trying to scrape large volumes of data after a login, you may be better off with a scripted web scraper like WebHarvest (http://web-harvest.sourceforge.net/). I've used it to great success in some of my own projects.
