Linux network setting: POST failing after exactly 2 hours - Java

I have the following test code, which runs fine for a very long POST request (more than 2 hours):
URL postURL = new URL(url);
con = (HttpURLConnection) postURL.openConnection();
con.setUseCaches(false);
con.setDoOutput(true);
con.setDoInput(true);
con.setRequestMethod("POST");

OutputStream out = con.getOutputStream();
OutputStreamWriter wout = new OutputStreamWriter(out, "UTF-8");
wout.write(xmlRequest);
wout.flush();
out.close();

// getOutputStream() has already opened the connection, so this is a no-op
con.connect();

// default to 500; HTTP_INTERNAL_ERROR replaces the deprecated HTTP_SERVER_ERROR
int httpResponseCode = HttpURLConnection.HTTP_INTERNAL_ERROR;
try {
    httpResponseCode = con.getResponseCode();
} catch (IOException e) {
    log(e.toString() + " Error retrieving the status code");
}
log("Failure status code " + httpResponseCode + " received.");
I run this POST from one host to another, and it runs fine in all environments except one particular Red Hat Linux host. When I run the code from that host, I get this exception:
java.net.SocketException: recv() failed, errno = 104 Connection reset by peer Error retrieving the status code
Failure status code 500 received.
The target server host is the same in all tests, so the only difference is the client (caller) host.
So I'm trying to understand which TCP setting on this Linux machine causes the receive to fail after exactly 2 hours.
I'll take any blame here for such "incorrect" use of POST, sending and then waiting more than 2 hours for the response ;) but the question is what is causing this.

Googling for "7200 seconds tcp redhat" turns up, among other things, this entry from the tcp(7) man page:
tcp_keepalive_time (integer; default: 7200; since Linux 2.2)
    The number of seconds a connection needs to be idle before TCP
    begins sending out keep-alive probes. Keep-alives are only sent
    when the SO_KEEPALIVE socket option is enabled. The default
    value is 7200 seconds (2 hours). An idle connection is terminated
    after approximately an additional 11 minutes (9 probes at an
    interval of 75 seconds apart) when keep-alive is enabled.
    Note that underlying connection tracking mechanisms and
    application timeouts may be much shorter.

The difference was that the failing client host was on a different network segment, and a router in the path was killing idle sessions after a timeout.
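One mitigation is to enable SO_KEEPALIVE on the client socket, so probes flow before the router's idle timeout kills the session. A minimal Java sketch (the host and port are placeholders; note the kernel's tcp_keepalive_time would still need to be lowered below the router's timeout for this to help):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws IOException {
        Socket socket = new Socket();
        // The kernel only sends keep-alive probes (after tcp_keepalive_time
        // seconds of idleness, 7200 by default) when SO_KEEPALIVE is enabled.
        socket.setKeepAlive(true);
        socket.connect(new InetSocketAddress("target-host", 80)); // placeholder
        System.out.println("SO_KEEPALIVE enabled: " + socket.getKeepAlive());
        socket.close();
    }
}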

Related

Apache HttpClient Keep-Alive Strategy for active connections

In an Apache HttpClient with a PoolingHttpClientConnectionManager, does the Keep-Alive strategy change the amount of time that an active connection will stay alive before it is removed from the connection pool? Or will it only close idle connections?
For example, if I set my Keep-Alive strategy to return 5 seconds for every request, and I use the same connection to hit a single URL/route once every 2 seconds, will my keep-alive strategy cause this connection to leave the pool? Or will it stay in the pool, because the connection is not idle?
I just tested this and confirmed that the Keep-Alive strategy only removes idle connections from the HttpClient's connection pool after the Keep-Alive duration has passed. In fact, the Keep-Alive duration determines whether the connection counts as idle: if the Keep-Alive strategy says to keep connections alive for 10 seconds, and we receive responses from the server every 2 seconds, the connection will be kept alive for 10 seconds after the last successful response.
The test that I ran was as follows:
I set up an Apache HttpClient (using a PoolingHttpClientConnectionManager) with the following ConnectionKeepAliveStrategy:
return (httpResponse, httpContext) -> {
    // Honor a 'timeout' parameter in the server's Keep-Alive header, if present
    HeaderElementIterator it = new BasicHeaderElementIterator(
            httpResponse.headerIterator(HTTP.CONN_KEEP_ALIVE));
    while (it.hasNext()) {
        HeaderElement he = it.nextElement();
        String param = he.getName();
        String value = he.getValue();
        if (value != null && param.equalsIgnoreCase("timeout")) {
            try {
                return Long.parseLong(value) * 1000;
            } catch (NumberFormatException ignore) {
            }
        }
    }
    if (keepAliveDuration <= 0) {
        return -1; // the connection will stay alive indefinitely
    }
    return keepAliveDuration * 1000; // keepAliveDuration is configured elsewhere, in seconds
};
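For reference, a minimal sketch of wiring such a strategy into a pooled client (HttpClient 4.x; the class and method names around the strategy are assumptions):

import org.apache.http.conn.ConnectionKeepAliveStrategy;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

public class KeepAliveClient {
    // Builds a pooled client that applies the strategy shown above.
    public static CloseableHttpClient build(ConnectionKeepAliveStrategy strategy) {
        return HttpClients.custom()
                .setConnectionManager(new PoolingHttpClientConnectionManager())
                .setKeepAliveStrategy(strategy) // e.g. the lambda shown above
                .build();
    }
}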
I created an endpoint in my application which used the HttpClient to make a GET request to a URL behind a DNS name.
I wrote a program to hit that endpoint every 1 second.
I changed my local DNS for the address that the HttpClient was sending GET requests to, pointing it at a dummy URL that would not respond to requests. (This was done by editing my /etc/hosts file.)
When I set the keepAliveDuration to -1 seconds, even after changing the DNS to point to the dummy URL, the HttpClient continued to send requests to the old IP address, despite the DNS change. I kept the test running for an hour and it kept sending requests to the old IP address from the stale DNS entry. This would happen indefinitely, because my ConnectionKeepAliveStrategy had been configured to keep the connection to the old address alive indefinitely.
When I set the keepAliveDuration to 10, after changing my DNS I still sent successful requests continuously for about an hour. It wasn't until I turned off my load test and waited 10 seconds that a new connection was created. This means that the ConnectionKeepAliveStrategy removed the connection from the HttpClient's connection pool 10 seconds after the last successful response from the server.
Conclusion
By default, if an HttpClient does not receive a Keep-Alive header in a response from a server, it assumes the connection to that server can be kept alive indefinitely, and will keep that connection in its PoolingHttpClientConnectionManager indefinitely.
If you set a ConnectionKeepAliveStrategy like I did, it assigns a Keep-Alive duration to each response from the server. That duration causes the connection to leave the connection pool once it has elapsed since the last successful response. This means that only idle connections are affected by the Keep-Alive duration, where "idle connections" are connections that haven't been used since the Keep-Alive duration passed.

NodeMCU: Available heap size falls until out-of-memory error

I am facing some trouble with my NodeMCU, which runs a simple server in access-point mode, programmed in Lua using ESPlorer.
Here is the Lua script:
local SSID = "NodeMCU"
local SSID_PASSWORD = "12345678"

function connect (conn)
    print ("Hello connect")
    conn:on ("receive",
        function (cn, req_data)
            print(req_data)
            print("Available memory :")
            print(node.heap())
            --local query_data = get_http_req (req_data)
            local query_data = {}
            cn:send("HTTP/1.1 200 OK\n\n",
                function()
                    cn:close()
                    --collectgarbage()
                end)
        end)
end

-- Configure the ESP as an access point
wifi.setmode (wifi.SOFTAP)
cfg = {}
cfg.ssid = SSID
cfg.pwd = SSID_PASSWORD
wifi.ap.config(cfg)

cfg = {}
cfg.ip = "192.168.1.1"
cfg.netmask = "255.255.255.0"
cfg.gateway = "192.168.1.1"
wifi.ap.setip(cfg)

print("Set up UART config")
uart.setup(1, 921600, 8, uart.PARITY_NONE, uart.STOPBITS_1, 1)

-- Create the httpd server
svr = net.createServer (net.TCP, 30)
-- Server listening on port 80; call the connect function when a request is received
svr:listen (80, connect)
Once this program is running on the NodeMCU, I connect my PC to the NodeMCU over WiFi, and I send it some HTTP POST requests with this piece of Java code:
public void sendPost(RGBWPixel[][] LedMatrix) throws Exception {
    HttpURLConnection con = (HttpURLConnection) obj.openConnection();
    System.out.println("Sending post");
    // add request header
    con.setRequestMethod("POST");
    con.setRequestProperty("matrixValues", new String(convertTo1DCharArray(LedMatrix)));
    con.setDoOutput(true);
    DataOutputStream wr = new DataOutputStream(con.getOutputStream());
    wr.flush();
    wr.close();
    con.getResponseCode();
}
obj is the java.net.URL corresponding to the NodeMCU's IP address.
The String corresponding to matrixValues always has length 2050.
Please note that I reduced the Lua script to the minimal functions that make the problem happen. More precisely, it happens when I add the cn:send() part, but I don't know if it is possible to receive and process the request without sending a response, because the request is not sent when I don't call con.getResponseCode() from the Java program. I am a beginner with HTTP, so I don't understand all the protocols yet.
Here is what the output looks like on the NodeMCU side, in ESPlorer:
> dofile("init.lua");
Set up UART config
> Hello connect
POST / HTTP/1.1
matrixValues:
Available memory :
38088
Available memory :
37032
Hello connect
POST / HTTP/1.1
matrixValues:
Available memory :
37688
Available memory :
36664
Hello connect
POST / HTTP/1.1
matrixValues:
Available memory :
37440
Available memory :
36264
And after a few dozen iterations, this happens and the NodeMCU restarts:
Hello connect
POST / HTTP/1.1
matrixValues:
Available memory :
4680
Available memory :
3600
E:M 1584
PANIC: unprotected error in call to Lua API (init.lua:19: out of memory)
ets Jan 8 2013,rst cause:2, boot mode:(3,6)
load 0x40100000, len 26704, room 16
tail 0
chksum 0x0c
load 0x3ffe8000, len 2184, room 8
tail 0
chksum 0x9a
load 0x3ffe8888, len 136, room 8
tail 0
chksum 0x44
csum 0x44
Line 19 corresponds to the line with cn:send().
I guess I did something wrong declaring some variables, functions or callback functions in the Lua script, which accumulate until there is no more memory...
Also, I don't understand why there are 2 calls to the conn:on callback function (where node.heap() is printed) for only 1 "Hello connect". It is as if a second empty HTTP request were always sent...
Thank you very much for your precious time and your potential help, if you came to the end of this post!
Paul
sck:send(data, fn) is equivalent to sck:send(data); sck:on("sent", fn).
A closure passed to :on() must not reference the connection object directly (the NodeMCU documentation explains why).
Use the first argument of the callback function instead of referencing the upvalue:
cn:send(
"HTTP/1.1 200 OK\n\n",
function(s) -- s here has the same value as cn
s:close()
--collectgarbage()
end
)
Send the header Connection: close with both requests and responses. The default for modern HTTP servers and client apps is keep-alive; when this header is present, the other side will explicitly close the connection once all data has been sent. When the connection closes, its memory is freed.
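On the Java side, a minimal sketch of sending that header with the POST (assuming the NodeMCU address from the script above; the request body is a placeholder):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ClosePost {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://192.168.1.1/"); // the NodeMCU's AP address
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("POST");
        // Ask the NodeMCU to close the socket after responding,
        // so its memory is freed promptly.
        con.setRequestProperty("Connection", "close");
        con.setDoOutput(true);
        try (OutputStream out = con.getOutputStream()) {
            out.write("placeholder".getBytes("UTF-8")); // placeholder body
        }
        System.out.println("Response: " + con.getResponseCode());
        con.disconnect();
    }
}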

How to send requests periodically from the server to the client over an HTTP persistent connection

I am new to HTTP connections. What I want to achieve is for the server to send some data (notifications) to the client periodically over a persistent connection.
I wrote server-side code in PHP like this:
<?php
set_time_limit(0);
header('Connection: keep-alive');
$i = 0;
while ($i < 10) {
    echo "Hello$i<br/>";
    sleep(5);
    $i++;
}
?>
and tried to connect to the server by java:
public static void main(String[] args) throws Exception {
    URL oracle = new URL("http://localhost/connection.php");
    URLConnection yc = oracle.openConnection();
    BufferedReader in = new BufferedReader(new InputStreamReader(
            yc.getInputStream()));
    String inputLine;
    while ((inputLine = in.readLine()) != null)
        System.out.println(inputLine);
    in.close();
}
I expected to get content from the server every five seconds, like the following:
Hello0<br/>
Hello1<br/>
...
but instead the Java client waits 50 seconds and then prints:
Hello0<br/>Hello1<br/>Hello2<br/>Hello3<br/>Hello4<br/>Hello5<br/>Hello6<br/>Hello7<br/>Hello8<br/>Hello9<br/>
I want the server to push notifications itself, instead of the client connecting to the server every five seconds.
It's really unnecessary to add a Connection: keep-alive header to the response of an HTTP/1.1 server, except for backward compatibility.
No matter how long or how many times you sleep in that loop, it is still seen as ONE response to ONE request by the client.
That being said, your client snippet in fact only makes ONE request to http://localhost/connection.php, and it is impossible to reuse a URLConnection to dispatch another request (to achieve persistence).
to sum up:
Persistent-connection behaviour is handled at the transport layer (TCP); more specifically, you are required to reuse a client socket for multiple requests to the same host, plus meet some other requirements specified in HTTP/1.1, as the sketch below illustrates.
Go and find some projects that suit your needs; don't reinvent the wheel.
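To illustrate what "reusing a client socket" means (not a production approach), here is a bare-bones sketch of two HTTP/1.1 requests written to one connection, using the host and path from the question:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;

public class PersistentSocket {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("localhost", 80)) {
            OutputStream out = socket.getOutputStream();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            String request = "GET /connection.php HTTP/1.1\r\n"
                    + "Host: localhost\r\n\r\n";
            out.write(request.getBytes("US-ASCII"));
            out.flush();
            System.out.println(in.readLine()); // status line of response #1
            // A real client must consume the whole first response (honoring
            // Content-Length or chunked framing) before reusing the stream;
            // this sketch simply issues a second request on the same socket.
            out.write(request.getBytes("US-ASCII"));
            out.flush();
        }
    }
}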
Flushing the connection seemed like a really good idea, but I think I found a better solution. Instead of keeping a connection open with an unlimited timeout, it is better to make a persistent connection with a 5-minute (N-minute) timeout. This is better because when a user goes offline unexpectedly, the server would otherwise keep the connection alive anyway, which is not good. So I am going to keep 5 (this number is optional) connections open for notifications: the server uses the first one for a notification and closes it after sending, while the remaining 4 connections stay on duty. When the client receives the notification, it opens a new connection to replace the used one, or reconnects when a connection times out, as sketched below.
This way the client will be notified immediately every time (provided it is connected to the internet).
If someone has a better solution I will be happy to see it.
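A minimal sketch of the client side of that scheme (long polling with a bounded read timeout; the URL and the 5-minute window come from the description above, the rest are assumptions):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class NotificationPoller {
    public static void main(String[] args) throws Exception {
        while (true) {
            HttpURLConnection con = (HttpURLConnection)
                    new URL("http://localhost/connection.php").openConnection();
            con.setReadTimeout(5 * 60 * 1000); // the 5-minute window described above
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(con.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println("notification: " + line);
                }
            } catch (SocketTimeoutException e) {
                // No notification inside the window; loop around and reconnect.
            }
        }
    }
}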

Catching connection issues (and network I/O issues) in java BufferedReader?

Note: code is expressed in Scala. I'm using a BufferedReader to process a gzipped HTTP stream, iterating through each line to read the incoming data. The problem is that if the connection is ever reset due to a network I/O issue (the provider does weird things sometimes), I can see the connection staying open for up to 15 seconds before it times out, something I'd like to get down to 1 second. For some reason our office provider resets connections every 11 hours.
Here's how I'm handling the connection:
val connection = getConnection(URL, USER, PASSWORD)
val inputStream = connection.getInputStream()
val reader = new BufferedReader(
  new InputStreamReader(new StreamingGZIPInputStream(inputStream), GNIP_CHARSET))
var line = reader.readLine()
while (line != null) {
  parseSupervisor ! ParseThis(line)
  line = reader.readLine()
}
throw new ParseStreamCollapseException
and here is getConnection defined:
private def getConnection(urlString: String, user: String, password: String): HttpURLConnection = {
  val url = new URL(urlString)
  val connection = url.openConnection().asInstanceOf[HttpURLConnection]
  connection.setReadTimeout(1000 * KEEPALIVE_TIMEOUT)
  connection.setConnectTimeout(1000 * 1)
  connection.setRequestProperty("Authorization", createAuthHeader(user, password))
  connection.setRequestProperty("Accept-Encoding", "gzip")
  connection
}
To summarize: I'm reading an HTTP stream line-by-line via java.io.BufferedReader. The keep-alive on the stream is 16 seconds, but to prevent further data loss I'd like to narrow that down to 1-2 seconds, basically to distinguish a momentarily quiet stream from broken network I/O. Some device in the middle terminates the connection every 11 hours, and it would be nice to have a meaningful workaround to minimize data loss. The HttpURLConnection does not receive any "termination signal" on the connection.
Thanks!
Unfortunately, unless the network device that's killing the connection closes it cleanly, you're not going to get any notification that the connection is dead. The reason is that there is no way to tell the difference between a remote host that is just taking a long time to respond and a broken connection; either way the socket is silent.
So, assuming that the connection is just being severed, your only option for detecting the broken connection more quickly is to decrease your read timeout, as in the sketch below.
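A minimal sketch of that in Java (the 1-second value and the URL are assumptions; note that a short timeout will also fire on a stream that is merely quiet):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.SocketTimeoutException;
import java.net.URL;

public class ShortTimeoutReader {
    public static void main(String[] args) throws Exception {
        HttpURLConnection con = (HttpURLConnection)
                new URL("http://example.com/stream").openConnection();
        con.setReadTimeout(1000); // treat >1 s of silence as a dead connection
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // hand each line to the parser here
            }
        } catch (SocketTimeoutException e) {
            // Stream silent past the timeout: tear down and reconnect.
        }
    }
}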

Java HTTP Request Occasionally Hangs

For the majority of the time, my HTTP requests work with no problem. However, occasionally they hang.
The code that I am using is set up so that if the request succeeds (with a response code of 200 or 201), then call screen.requestSucceeded(). If the request fails, then call screen.requestFailed().
When the request hangs, however, it does so before either of the above methods is called. Is there something wrong with my code? Is there some best practice I should follow to prevent hanging?
The following is my code. I would appreciate any help. Thanks!
try {
    HttpConnection connection = (HttpConnection) Connector.open(url
            + connectionParameters);
    connection.setRequestMethod(method);
    connection.setRequestProperty("WWW-Authenticate",
            "OAuth realm=api.netflix.com");
    if (method.equals("POST") && postData != null) {
        connection.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        connection.setRequestProperty("Content-Length",
                Integer.toString(postData.length));
        OutputStream requestOutput = connection.openOutputStream();
        requestOutput.write(postData);
        requestOutput.close();
    }
    int responseCode = connection.getResponseCode();
    System.out.println("RESPONSE CODE: " + responseCode);
    if (connection instanceof HttpsConnection) {
        HttpsConnection secureConnection = (HttpsConnection) connection;
        String issuer = secureConnection.getSecurityInfo()
                .getServerCertificate().getIssuer();
        UiApplication.getUiApplication().invokeLater(
                new DialogRunner(
                        "Secure Connection! Certificate issued by: " + issuer));
    }
    if (responseCode != 200 && responseCode != 201) {
        screen.requestFailed("Unexpected response code: " + responseCode);
        connection.close();
        return;
    }
    String contentType = connection.getHeaderField("Content-type");
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    InputStream responseData = connection.openInputStream();
    byte[] buffer = new byte[20000];
    int bytesRead = 0;
    while ((bytesRead = responseData.read(buffer)) > 0) {
        baos.write(buffer, 0, bytesRead);
    }
    baos.close();
    connection.close();
    screen.requestSucceeded(baos.toByteArray(), contentType);
} catch (IOException ex) {
    screen.requestFailed(ex.toString());
}
Without any trace, I am just shooting in the dark.
Try adding these 2 calls:
System.setProperty("http.keepAlive", "false");
connection.setRequestProperty("Connection", "close");
Keep-alive is a common cause of stale connections; these calls will disable it.
I don't see any issues with the code. It could be that your platform has an intermittent bug, or that the website is causing the connection to hang. Changing connection parameters, such as keep alive, may help.
But, even with a timeout set, Sockets can hang indefinitely - a friend aptly demonstrated this to me some years ago by pulling out the network cable - my program just hung there forever, even with a SO_TIMEOUT set to 30 seconds.
As a "best practice", you can avoid hanging your application by moving all network communication to a separate thread. If you wrap up each request as a Runnable and queue these for exeuction, you maintain control over timeouts (synchronization is still in java, rather than a blocking native I/O call). You can interrupt your waiting thread after (say) 30s to avoid stalling your app. You could then either inform the user, or retry the request. Because the request is a Runnable, you can remove it from the stalled thread's queue and schedule it to execute on another thread.
I see you have code to handle sending a "POST" type request; make sure the POST data actually gets written. If the connection type is "POST", then you should do the following BEFORE the connection.getResponseCode() call (see the sketch after this list):
set the "Content-Length" header
set the "Content-Type" header (which you're doing)
get an OutputStream from the connection using connection.openOutputStream()
write the POST (form) data to the OutputStream
close the OutputStream
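Those steps, sketched against the same javax.microedition.io API the question uses (essentially what the question's POST branch already does; the payload is a placeholder):

import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

public final class PostSketch {
    public static int post(String url) throws Exception {
        byte[] postData = "key=value".getBytes(); // placeholder form data
        HttpConnection connection = (HttpConnection) Connector.open(url);
        connection.setRequestMethod(HttpConnection.POST);
        connection.setRequestProperty("Content-Type",
                "application/x-www-form-urlencoded");
        connection.setRequestProperty("Content-Length",
                Integer.toString(postData.length));
        OutputStream out = connection.openOutputStream();
        out.write(postData); // body must be written before getResponseCode()
        out.close();
        int rc = connection.getResponseCode(); // request is fully sent by now
        connection.close();
        return rc;
    }
}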
I noticed this problem too, on BlackBerry OS 5.0. There is no way to reproduce it reliably. We ended up using an additional thread with wait/notify, along with a TimerTask.
