I've been experimenting with the HttpClient stuff in the Java 9/10 incubator, and have the following trivial code (virtually stolen from the project home page!):
HttpClient client = HttpClient.newHttpClient(); // client construction, omitted from the original snippet
URI uri = URI.create("http://192.168.1.102:8080/");
HttpRequest getRequest = HttpRequest.newBuilder()
        .uri(uri)
        .GET()
        .build();
HttpResponse<String> response = client.send(getRequest,
        HttpResponse.BodyHandler.asString());
System.out.println("response to get: " + response.body());
I find it works fine if it's pointed at a URL that is not the local host, but fails if I ask for the local host (whether by the name "localhost", by 127.0.0.1, or by the actual IP address of the local host). The error is very strange, and the entire stack trace does not mention any of my code.
WARNING: Using incubator modules: jdk.incubator.httpclient
Exception in thread "main" java.io.EOFException: EOF reached while reading
at jdk.incubator.httpclient/jdk.incubator.http.Http1AsyncReceiver$Http1TubeSubscriber.onComplete(Http1AsyncReceiver.java:507)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$InternalReadPublisher$ReadSubscription.signalCompletion(SocketTube.java:551)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$InternalReadPublisher$InternalReadSubscription.read(SocketTube.java:728)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$SocketFlowTask.run(SocketTube.java:171)
at jdk.incubator.httpclient/jdk.incubator.http.internal.common.SequentialScheduler$SchedulableTask.run(SequentialScheduler.java:198)
at jdk.incubator.httpclient/jdk.incubator.http.internal.common.SequentialScheduler.runOrSchedule(SequentialScheduler.java:271)
at jdk.incubator.httpclient/jdk.incubator.http.internal.common.SequentialScheduler.runOrSchedule(SequentialScheduler.java:224)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$InternalReadPublisher$InternalReadSubscription.signalReadable(SocketTube.java:675)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$InternalReadPublisher$ReadEvent.signalEvent(SocketTube.java:829)
at jdk.incubator.httpclient/jdk.incubator.http.SocketTube$SocketFlowEvent.handle(SocketTube.java:243)
at jdk.incubator.httpclient/jdk.incubator.http.HttpClientImpl$SelectorManager.handleEvent(HttpClientImpl.java:769)
at jdk.incubator.httpclient/jdk.incubator.http.HttpClientImpl$SelectorManager.run(HttpClientImpl.java:731)
There is a server running locally, and I can connect to it just fine using a simple request from a web browser.
Any thoughts?
[EDIT] I found, I believe, the mailing list for this project. It's "obfuscated" (which fooled me completely!) but shown as: net dash dev at openjdk dot java dot net. I'll post there too and see if they have any input.
[EDIT 2] I'm pretty sure this has nothing to do with localhost (per the original title) but is something in the protocol negotiation with node.js/express (the server I'm using because it's easy to experiment with). Node occasionally (e.g. when the last line of text is not LF-terminated) seems to report the wrong Content-Length, but this isn't the problem, as the failure still occurs with the correct length. I think it's possibly a bug in the attempt to upgrade the connection to HTTP/2.0, but I don't know yet...
[EDIT 3] After wasting way too much of my life experimenting, I'm fairly sure that something in the way node.js 8.11.1 (with express 4.13.4 and body-parser 1.15.1) handles a request to upgrade to HTTP/2.0 is causing the problem. But I have no idea what. I'm giving up, and will continue the learning process for HttpClient using a different server.
Updated: I finally got curl built with HTTP/2.0 support, and the blame is entirely on node/express. When this server (node 8.something) sees an upgrade request, it simply fails to create any output. Consequently, the client correctly fails with an EOF error.
As a side note, node/express also sets the Content-Length header "off by one" on occasion (not always!?).
Try this, which pins the request to HTTP/1.1:
HttpRequest request = HttpRequest.newBuilder()
        .uri(new URI("http://localhost:3000"))
        .POST(BodyPublisher.fromString("hello"))
        .version(Version.HTTP_1_1) // never attempt the HTTP/2 upgrade
        .build();
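Pinning the version to HTTP_1_1 means the client never sends the HTTP/2 cleartext upgrade request in the first place, which sidesteps the node/express behavior described above (the server producing no output when it sees the upgrade, leaving the client to fail with EOF).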
I have an endpoint to be tested using RestAssured. The same endpoint works fine when opened in a browser or in Postman, but when I try to test it using RestAssured, I get an "Operation Timed Out" error.
I had to connect through a proxy to make the endpoint work in the browser, and I used the same proxy in RestAssured as well.
Sample code below:
given().proxy("My_Proxy_URL_HERE", 8080)
        .when().get("My_API_URL_Here")
        .then().log().all();
I am getting the response "Operation Timed Out" with status code 503.
What could the possible issue be, and how should I go about debugging it? Any suggestion is appreciated. Thanks in advance.
There can be many reasons for this behavior:
The address is simply wrong, and the load balancer/proxy in front of the server is configured to wait for a certain period of time and then respond with a 503 status code.
Note that 503 is not "request timed out" but "Service Unavailable".
The request URL is good, but the request lacks some headers, so the load balancer/proxy cannot route it to the required server.
How do you check this? A couple of tools come in handy in this situation:
Check the access logs of the load balancer/proxy, and of your server if possible, and look at the request.
If that doesn't help, compare the request coming from rest-assured with the regular browser request. You can use an intercepting tool like Burp, for example (there are others, or you can even roll your own); a sketch follows the steps below.
The idea is simple:
Start the "interceptor" on some port of your local computer (say, 9999).
Configure the interceptor to forward all requests to the proxy of your choice (identified by My_Proxy_URL_HERE and port 8080).
Point rest-assured at localhost:9999 so the request is intercepted by the tool. You'll be able to inspect its contents: headers, body, HTTP method, everything.
Do the same for the browser request and compare.
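On the rest-assured side, a minimal sketch of that setup (the placeholder URL is the one from the question; the interceptor port 9999 is the example value above):

import static io.restassured.RestAssured.given;

public class ProxyDebugSketch {
    public static void main(String[] args) {
        // Route through the local interceptor (e.g. Burp on localhost:9999),
        // which forwards on to the real proxy on port 8080.
        given()
                .proxy("localhost", 9999)
        .when()
                .get("My_API_URL_Here")
        .then()
                .log().all();
    }
}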
One of our devs reported the following error.
HttpGet foo = new HttpGet("http://www.example.com/path/to/file.xml");
works fine.
However, if the port is specified,
HttpGet foo = new HttpGet("http://www.example.com:80/path/to/file.xml");
the server returns a HTTP 500 error.
I've already verified that the website runs on the standard HTTP port 80. What could be the reason for this behavior? It looks like a server-side issue, as both lines of code work fine against other websites.
A look into the server's log should bring up more information about what exactly is going wrong there (status code 500 means the server ran into a problem). My guess is that some kind of script configured behind the URL processes the value of the HTTP request header Host, doesn't expect the port specification, and runs into an error because of it.
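If that theory holds, a quick client-side check is to keep the port in the URL but override the Host header yourself; Apache HttpClient 4.x only generates a Host header when the request doesn't already carry one, so a sketch like this isolates the variable (a diagnostic, not a fix):

HttpGet foo = new HttpGet("http://www.example.com:80/path/to/file.xml");
// Send the Host header without the port, even though the URI specifies :80.
foo.setHeader("Host", "www.example.com");

If this variant succeeds where the original fails, the script's handling of Host: www.example.com:80 is confirmed as the culprit.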
Another reason might be a proxy between you and the server that runs into an error, but I find that harder to believe than the above theory.
Please provide the server's error log in order to be able to say more about this.
This is a question regarding an exception that occurs in my code, which makes a call to an HTTPS server.
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
It internally uses an instance of CloseableHttpClient to execute the PUT call.
Also, this code is a functional test that runs on a remote machine as a CI job. I have seen some solutions for this SSL certificate error that describe disabling SSL certificate validation in Java or adding the certificate to the local JVM, one of them being here:
'peer not authenticated' SSL certificate error using DefaultHttpClient
Unfortunately, that doesn't seem workable, as this is a remote machine and we cannot import certificates into it.
String endPoint = "https://" + hostName + ":" + port + "/v1/service/data/put";
endPoint is set in code that is called from a jar, so there is no scope for us to change it either.
If I run the code that makes the PUT call to the endPoint from a standalone class (through the main method), it runs fine, returning a 200/OK. The exception only occurs when it is run as a TestNG class from the .xml file.
The code added as a Github gist is here.
Let me know if you need more details.
There's a lot going on there, and most of it isn't really related to the problem (the caching, for example, or the other boilerplate code to set up the call).
What I usually do in this kind of situation is reduce the problem to a smaller and smaller chunk of code that can still reproduce it. For example, using these HttpClient components, can you make any SSL call at all? Try this code, which requires HttpClient 4.4 and will work against sites that don't have valid certificates:
// Trust-all SSLContext: accept any certificate chain (diagnostics only).
SSLContext sslContext = SSLContexts.custom()
        .loadTrustMaterial(null, (chain, authType) -> true)
        .build();
CloseableHttpClient client = HttpClients.custom()
        .setSSLContext(sslContext)
        .setSSLHostnameVerifier(new NoopHostnameVerifier())
        .build();
HttpGet httpGet = new HttpGet(<your https URL here>);
httpGet.setHeader("Accept", <whatever appropriate for URL above>);
HttpResponse response = client.execute(httpGet);
System.out.println(response.getStatusLine().getStatusCode());
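(A word of caution: the trust-all strategy plus NoopHostnameVerifier disables certificate and hostname validation entirely, so treat this purely as a diagnostic step, never as production code.)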
As mentioned in the question, the code works fine when run from a standalone class through the main method.
I was able to resolve the issue by placing my code in a static block, so that it runs at class-load time; my guess is that the certificate setup was otherwise being disabled during class loading. It works fine now.
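For what it's worth, a minimal sketch of that workaround (the class and field names here are made up, and the real client setup from the gist would replace createDefault()):

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

public class DataPutTest {
    // Build the client in a static initializer so it runs once at class-load
    // time, before TestNG invokes any test methods.
    private static final CloseableHttpClient CLIENT;
    static {
        CLIENT = HttpClients.createDefault();
    }
}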
While searching for a solution to this issue, I read somewhere that the max size of a GET request is 8 KB. However, when I execute a GET request with a content length of only 248 bytes and a total URL length of only 282 characters through Apache HttpClient's execute method, HttpClient gives me the error: org.apache.http.HttpException: HTTP/1.1 413 Request Entity Too Large.
However, the same GET request (the same URL) gives the expected response in a browser (and NOT "413 Request Entity Too Large").
Apache HttpClient's execute method works fine for another GET request that is slightly shorter and has fewer query params.
I also tried sending a POST request but still got the same error.
Please help me resolve this issue. Any help will be appreciated.
The other seemingly similar questions didn't solve my problem.
The "Request Entity Too Large" error comes from the server receiving a request larger than it is configured to process. It should not be a client-side problem; you need to modify your server settings to allow a larger request size. The relevant parameter differs from server to server (for example, nginx's client_max_body_size directive governs the request body limit).
Some of them are listed here.
Sorry, the issue was not on the client side after all. The internal API I am using was sending an incorrect 413 response code instead of the (almost) correct response code 507.
I have a bit of Java code to download URL data that is plagued by the error in the title. Sometimes it works; most of the time it fails. Has anyone come across this?
URLConnection urlConnection = url2search.openConnection();
urlConnection.setRequestProperty("User-Agent", "Mozilla/5.0 ( compatible ) ");
urlConnection.setRequestProperty("Accept", "*/*");
urlConnection.setDoInput(true);
urlConnection.setDoOutput(false);
try {
    reader = new BufferedReader(new InputStreamReader(urlConnection.getInputStream()));
} catch (Exception r) {
    // swallowing the exception here hides the failure; log it at minimum
    r.printStackTrace();
}
Now it fails consistently at the reader line with:
java.io.IOException: Server returned HTTP response code: 520 for URL:
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
I can copy the URL into the browser's address bar and it works fine. As yet, my web research on this topic has proved fruitless. Any suggestions?
An HTTP error code between 500 and 599 indicates a server failure. It could be at the requested document's origin server, or at a proxy server between the client and the origin server.
Code 520 itself is not documented by any of the HTTP specifications, so its specific meaning is unclear. If that code is being generated by a CloudFlare reverse proxy between your client and the origin server, however, then it signals a generic, unspecified connection error between the proxy and the origin server.
Either way, the problem is basically external to your client. It may be that something about your request properties tends to make the server chain fail as you observe, but to debug it you need either to analyze the server's logs and software, or else to reverse-engineer its behavior. If the problem is not exhibited in conjunction with your browser, then you could capture the request/response involving your browser to see how it differs from the request/response involving your Java client.
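One low-tech way to capture the Java client's side is a throwaway listener: point the URLConnection at http://localhost:9999/ instead of the real URL, and the raw request line and headers it sends are printed for comparison with the browser's (a minimal sketch; the port is arbitrary):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;

public class RequestDumper {
    public static void main(String[] args) throws Exception {
        // Accept one connection and print the request line and headers.
        try (ServerSocket server = new ServerSocket(9999);
             Socket socket = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null && !line.isEmpty()) {
                System.out.println(line);
            }
        }
    }
}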
Try bringing your user agent string up to date with the latest.
See here: https://www.whatismybrowser.com/guides/the-latest-user-agent
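For example (a sketch only; the exact version numbers below are illustrative, not the current latest):

urlConnection.setRequestProperty("User-Agent",
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.0.0 Safari/537.36");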