I have the following code to start a Netty server:
Application application = ApplicationTypeFactory.getApplication(type);
resteasyDeployment = new ResteasyDeployment();
// Explicitly setting the Application should prevent scanning
resteasyDeployment.setApplication(application);
// Need to set the provider factory to the default, otherwise the
// providers we need won't be registered, such as JSON mapping
resteasyDeployment.setProviderFactory(ResteasyProviderFactory.getInstance());
netty = new NettyJaxrsServer();
netty.setHostname(HOST);
netty.setPort(port);
netty.setDeployment(resteasyDeployment);
// Some optional extra configuration
netty.setKeepAlive(true);
netty.setRootResourcePath("/");
netty.setSecurityDomain(null);
netty.setIoWorkerCount(16);
netty.setExecutorThreadCount(16);
LOGGER.info("Starting REST server on: " + System.getenv("HOSTNAME") + ", port:" + port);
// Start the server
//("Starting REST server on " + System.getenv("HOSTNAME"));
netty.start();
LOGGER.info("Started!");
When I do:
netty.stop()
It doesn't appear to close the connection:
[john#dub-001948-VM01:~/workspace/utf/atf/src/test/resources ]$ netstat -anp | grep 8888
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 ::ffff:10.0.21.88:8888 ::ffff:10.0.21.88:60654 TIME_WAIT -
tcp 0 0 ::ffff:10.0.21.88:8888 ::ffff:10.0.21.88:60630 TIME_WAIT -
tcp 0 0 ::ffff:10.0.21.88:8888 ::ffff:10.0.21.88:60629 TIME_WAIT -
tcp 0 0 ::ffff:10.0.21.88:8888 ::ffff:10.0.21.88:60637 TIME_WAIT -
tcp 0 0 ::ffff:10.0.21.88:8888 ::ffff:10.0.21.88:60640 TIME_WAIT -
even after the program exits. In other posts I have read that Netty does not close client connections on a stop. How do I shut it down cleanly?
TIME_WAIT is one of the normal TCP socket states; the application cannot do anything about it. More information here: http://www.isi.edu/touch/pubs/infocomm99/infocomm99-web/. For a busy server, you could tune some TCP/IP parameters to alleviate its impact.
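That said, it is still worth shutting the pieces down in an orderly fashion so the listening socket and worker threads are released promptly. A minimal sketch, assuming the RESTEasy NettyJaxrsServer API shown in the question; the shutdown hook and the stop ordering are my additions, not part of the original code:

// Hypothetical shutdown hook: stop the JAX-RS deployment first, then the
// Netty transport, so in-flight requests are not cut off mid-write.
Runtime.getRuntime().addShutdownHook(new Thread(() -> {
    try {
        resteasyDeployment.stop(); // release JAX-RS resources and providers
    } finally {
        netty.stop();              // close the server channel and event loops
    }
}));

Even with this in place, sockets from recently closed connections will linger in TIME_WAIT for a couple of minutes; that is the TCP stack's doing, not Netty's.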
Related
In my Java 8 application (RHEL 6.x, Wildfly 10.1.0.Final), the first time a user prints a document the application gets stuck while getting the list of printers from the system.
Here is the stack trace of the blocking thread:
"Thread-211" #799 daemon prio=5 os_prio=0 tid=0x00007fca543a6800 nid=0x10755 runnable [0x00007fca02820000]
java.lang.Thread.State: RUNNABLE
at sun.print.CUPSPrinter.canConnect(Native Method)
at sun.print.CUPSPrinter.isCupsRunning(CUPSPrinter.java:444)
at sun.print.UnixPrintServiceLookup.getDefaultPrintService(UnixPrintServiceLookup.java:650)
- locked <0x00000006d2c7fff8> (a sun.print.UnixPrintServiceLookup)
at sun.print.UnixPrintServiceLookup.refreshServices(UnixPrintServiceLookup.java:277)
- locked <0x00000006d2c7fff8> (a sun.print.UnixPrintServiceLookup)
at sun.print.UnixPrintServiceLookup$PrinterChangeListener.run(UnixPrintServiceLookup.java:947)
Other users trying to print documents, and the related threads, are blocked by this one.
I looked at the source code of CUPSPrinter.canConnect() (native code); at this point we try to connect to the CUPS server:
/*
 * Checks if connection can be made to the server.
 */
JNIEXPORT jboolean JNICALL
Java_sun_print_CUPSPrinter_canConnect(JNIEnv *env,
                                      jobject printObj,
                                      jstring server,
                                      jint port)
{
    const char *serverName;
    serverName = (*env)->GetStringUTFChars(env, server, NULL);
    if (serverName != NULL) {
        http_t *http = j2d_httpConnect(serverName, (int)port);
        (*env)->ReleaseStringUTFChars(env, server, serverName);
        if (http != NULL) {
            j2d_httpClose(http);
            return JNI_TRUE;
        }
    }
    return JNI_FALSE;
}
In my case CUPS is on the same host listening on port 631.
I checked the logs and everything seems to be fine.
I also checked the active connections for CUPS with netstat:
tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN 76107/cupsd
tcp 0 0 127.0.0.1:45652 127.0.0.1:631 TIME_WAIT -
tcp 0 0 :::631 :::* LISTEN 76107/cupsd
tcp 0 0 ::1:35982 ::1:631 TIME_WAIT -
tcp 0 0 ::1:35981 ::1:631 TIME_WAIT -
tcp 0 0 ::1:35978 ::1:631 TIME_WAIT -
tcp 0 0 ::1:35979 ::1:631 TIME_WAIT -
udp 0 0 0.0.0.0:631 0.0.0.0:* 76107/cupsd
Important notes:
If I restart the CUPS service, the thread is not unblocked. It seems to live endlessly until the application restarts.
I found a similar bug in OpenJDK: https://bugs.openjdk.java.net/browse/JDK-6290446. But the workaround of setting -Dsun.java2d.print.polling=false does not work for me (the property seems to be cleared at some point for an obscure reason, so PrinterChangeListener gets instantiated even though polling should be disabled).
I can't reproduce the problem with a test application (a clone of production) on the same server.
Please help!
-Djava.awt.printerjob=sun.print.PSPrinterJob
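If the polling property keeps getting cleared, one option (my suggestion, not part of the original answer) is to set both properties programmatically, as early as possible, before any printing class is loaded. A sketch of a hypothetical early-init holder:

public final class PrintConfig {
    static {
        // Disable the CUPS polling thread (see JDK-6290446).
        System.setProperty("sun.java2d.print.polling", "false");
        // Force the PostScript PrinterJob implementation, as suggested above.
        System.setProperty("java.awt.printerjob", "sun.print.PSPrinterJob");
    }

    private PrintConfig() {}

    // Call once at application startup, before any print API is used,
    // just to force this class (and its static block) to load.
    public static void init() {}
}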
I'm using the Microsoft JDBC driver mssql-jdbc-7.0.0.jre8.jar to connect to an 11.00.2100 MS SQL Server. At this point I'm able to connect using DBeaver, but I can't connect using Tomcat or even a simple Java+JDBC class; these alternatives fail with a timeout.
My JDBC URL is like this:
jdbc:sqlserver://thundercat.md.pt:1433;database=MYDBNAME;instanceName=MYINSTANCENAME;user=myusername;password=mypassword;loginTimeout=50;
My findings up to now are:
there must be something in the server config, because I have a similar server on the same network and I am able to connect to it, but I can't find out what (I'm not very savvy with MS SQL);
although it uses the same JDBC driver, DBeaver connects in a different way than Tomcat or my simple class: using tcpdump I found that DBeaver starts with a UDP exchange on port 1434 and then proceeds with a TCP connection between two random ports, while my simple class always starts with a TCP connection to port 1433 and fails with a timeout.
My code is very simple, and it works if I just point it to a different server, so it must have something to do with the network setup on this server and/or with this JDBC connection.
String connectionUrl =
    "jdbc:sqlserver://" + host + ":1433;" +
    "database=" + database + ";" +
    "instanceName=" + instance + ";" +
    "user=" + user + ";" +
    "password=" + password + ";" +
    "loginTimeout=50;";
Connection connection = DriverManager.getConnection(connectionUrl);
Statement statement = connection.createStatement();
Although these parameters work in DBeaver, they fail with a timeout from Tomcat or my simple class.
This is the traffic for a successful connection with DBeaver:
10:01:13.649901 IP 10.10.1.5.50031 > 10.10.3.50.1434: UDP, length 16
10:01:13.650674 IP 10.10.3.50.1434 > 10.10.1.5.50031: UDP, length 101
10:01:13.652401 IP 10.10.1.5.55025 > 10.10.3.50.49831: tcp 0
10:01:13.652823 IP 10.10.3.50.49831 > 10.10.1.5.55025: tcp 0
10:01:13.653400 IP 10.10.1.5.55025 > 10.10.3.50.49831: tcp 0
10:01:13.653452 IP 10.10.1.5.55025 > 10.10.3.50.49831: tcp 67
10:01:13.654168 IP 10.10.3.50.49831 > 10.10.1.5.55025: tcp 31
10:01:13.656148 IP 10.10.1.5.55025 > 10.10.3.50.49831: tcp 273
...
And this is a failed connection from my jdbc class:
10:02:26.273941 IP 10.10.1.5.55074 > 10.10.3.50.1433: tcp 0
10:02:26.874860 IP 10.10.1.5.55076 > 10.10.3.50.1433: tcp 0
10:02:29.877173 IP 10.10.1.5.55076 > 10.10.3.50.1433: tcp 0
10:02:35.875310 IP 10.10.1.5.55076 > 10.10.3.50.1433: tcp 0
10:02:39.575984 IP 10.10.1.5.55086 > 10.10.3.50.1433: tcp 0
10:02:42.575554 IP 10.10.1.5.55086 > 10.10.3.50.1433: tcp 0
10:02:48.576195 IP 10.10.1.5.55086 > 10.10.3.50.1433: tcp 0
...
I'd like to understand how to configure the JDBC connection to make it work like DBeaver does.
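One explanation consistent with the two captures (my reading of the driver documentation, not something confirmed in the post): when the URL contains instanceName but no explicit port, the Microsoft driver first asks the SQL Server Browser service on UDP 1434 for the instance's actual TCP port, which is exactly the pattern DBeaver shows. When a port is given, it takes precedence and the browser lookup is skipped, so the connection goes straight to 1433, where the named instance may not be listening. A sketch that drops the port (host, instance, and credentials are the question's placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class BrowserLookupTest {
    public static void main(String[] args) throws Exception {
        // No explicit port: the driver resolves the named instance via the
        // SQL Server Browser (UDP 1434), mirroring DBeaver's traffic.
        String url = "jdbc:sqlserver://thundercat.md.pt;"
                   + "instanceName=MYINSTANCENAME;"
                   + "database=MYDBNAME;"
                   + "user=myusername;password=mypassword;"
                   + "loginTimeout=50;";
        try (Connection conn = DriverManager.getConnection(url)) {
            System.out.println("Connected to: " + conn.getMetaData().getURL());
        }
    }
}

This also requires UDP 1434 to be reachable from the client, which the DBeaver capture suggests it is.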
I use Logstash to collect logs from other components in my project. The logs are divided into two types, app_log and sys_log; app_log is sent to TCP port 5000 and sys_log to port 5001.
The following is my logstash input config:
input {
  tcp {
    port => 5000
    type => app_log
  }
  tcp {
    port => 5001
    type => sys_log
  }
}
After I start Logstash, ports 5000 and 5001 are both listening:
tcp6 0 0 :::5000 :::* LISTEN 7650/java
tcp6 0 0 :::5001 :::* LISTEN 7650/java
But I can only receive logs from port 5000. When sending logs to port 5001, they are not collected. Is there anything I configured wrongly?
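To rule out the log shipper, it can help to push a line straight at the port and watch whether Logstash emits an event. A throwaway Java sender, assuming the ports from the config above (the tcp input's default line codec treats each newline-terminated chunk as one event):

import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class TcpLogProbe {
    public static void main(String[] args) throws Exception {
        // Send one newline-terminated event to the sys_log input on 5001.
        try (Socket s = new Socket("localhost", 5001);
             Writer w = new OutputStreamWriter(s.getOutputStream(), StandardCharsets.UTF_8)) {
            w.write("test sys_log event\n");
            w.flush();
        }
    }
}

If this event shows up but the real sys_log traffic does not, the problem is likely on the sender's side (wrong port, IPv4 vs IPv6 binding, or a firewall) rather than in the Logstash config.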
I'm new to Scala, so the question may be quite simple, though I have spent some time trying to resolve it. I have a simple Scala TCP server (no actors, single-threaded):
import java.io._
import java.net._

object Application {

  def readSocket(socket: Socket): String = {
    val bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream))
    var request = ""
    var line = ""
    do {
      line = bufferedReader.readLine()
      if (line == null) {
        println("Stream terminated")
        return request
      }
      request += line + "\n"
    } while (line != "")
    request
  }

  def writeSocket(socket: Socket, string: String) {
    val out: PrintWriter = new PrintWriter(new OutputStreamWriter(socket.getOutputStream))
    out.println(string)
    out.flush()
  }

  def main(args: Array[String]) {
    val port = 8000
    val serverSocket = new ServerSocket(port)
    while (true) {
      val socket = serverSocket.accept()
      readSocket(socket)
      writeSocket(socket, "HTTP/1.1 200 OK\r\n\r\nOK")
      socket.close()
    }
  }
}
The server listens on localhost:8000 for incoming requests and sends an HTTP response with a single OK word in the body. Then I run Apache Benchmark like this:
ab -c 1000 -n 10000 http://localhost:8000/
which works nicely the first time. The second time I start ab, it hangs, producing the following output from netstat -a | grep 8000:
....
tcp 0 0 localhost.localdo:43709 localhost.localdom:8000 FIN_WAIT2
tcp 0 0 localhost.localdo:43711 localhost.localdom:8000 FIN_WAIT2
tcp 0 0 localhost.localdo:43717 localhost.localdom:8000 FIN_WAIT2
tcp 0 0 localhost.localdo:43777 localhost.localdom:8000 FIN_WAIT2
tcp 0 0 localhost.localdo:43722 localhost.localdom:8000 FIN_WAIT2
tcp 0 0 localhost.localdo:43725 localhost.localdom:8000 FIN_WAIT2
tcp6 0 0 [::]:8000 [::]:* LISTEN
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43724 CLOSE_WAIT
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43786 CLOSE_WAIT
tcp6 1 0 localhost.localdom:8000 localhost.localdo:43679 CLOSE_WAIT
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43735 CLOSE_WAIT
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43757 CLOSE_WAIT
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43754 CLOSE_WAIT
tcp6 83 0 localhost.localdom:8000 localhost.localdo:43723 CLOSE_WAIT
....
From that point on, no more requests are served. One more detail: the same ab command with the same parameters works smoothly against a simple Node.js server on the same machine. So this issue is not related to the number of open TCP connections, which I have set to be reusable with
sudo sysctl -w net.ipv4.tcp_tw_recycle=1
sudo sysctl -w net.ipv4.tcp_tw_reuse=1
Could anyone give me a clue on what I'm missing?
Edit: end-of-stream handling has been added to the code above:
if (line == null) {
  println("Stream terminated")
  return request
}
I'm posting the (partial) answer to my own question for those who stumble upon the same issue one day. First, the nature of the problem lies not in the source code but in the system itself, which restricts numerous connections. The problem is that the socket passed to the readSocket function appears corrupted under some conditions, i.e. it cannot be read and bufferedReader.readLine() either returns null on the first call or hangs indefinitely. The following two steps make the code work on some machines:
Increase the number of concurrent connections to a socket with
sysctl -w net.core.somaxconn=65535
Provide a second parameter to the ServerSocket constructor, which explicitly sets the length of the connection queue:
val maxQueue = 50000
val serverSocket = new ServerSocket(port, maxQueue)
The steps above solve the problem on EC2 m1.large instances; however, I'm still getting issues on my local machine. A better way would be to use Akka for this kind of task:
import akka.actor._
import java.net.InetSocketAddress
import akka.util.ByteString

class TCPServer(port: Int) extends Actor {

  override def preStart {
    IOManager(context.system).listen(new InetSocketAddress(port))
  }

  def receive = {
    case IO.NewClient(server) =>
      server.accept()
    case IO.Read(rHandle, bytes) => {
      val byteString = ByteString("HTTP/1.1 200 OK\r\n\r\nOK")
      rHandle.asSocket.write(byteString)
      rHandle.close()
    }
  }
}

object Application {
  def main(args: Array[String]) {
    val port = 8000
    ActorSystem().actorOf(Props(new TCPServer(port)))
  }
}
First, I'd suggest trying this without ab. You can do something like:
echo "I'm\nHappy\n" | nc -vv localhost 8000
Second, I'd suggest handling end-of-stream, which is where BufferedReader.readLine() returns null; the code above only checks for an empty String. After you fix this, I'd try again, and then test with ab once everything looks good. Let us know if the problem persists.
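In case netcat isn't available, here is a rough Java equivalent of that probe (my sketch; localhost and port 8000 are from the question):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class NcProbe {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 8000);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("I'm");
            out.println("Happy");
            out.println();      // blank line ends the server's read loop
            s.shutdownOutput(); // send FIN so readLine() can see end-of-stream
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
        }
    }
}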
I have a Java application which uses RMI for client/server communication.
To secure this communication the traffic is tunneled through an ssh connection.
Everything works well, except that the connection keeps getting closed automatically after a few seconds.
I have set the keep-alive property to true on:
SSHD connection
SSH client connection
ServerSocket server side
ClientSocket client side
A common connection routine of connecting to the registry (port 4000) and invoking a method on an object (port 4005) outputs the following log:
INFO org.apache.sshd.server.session.ServerSession - Authentication succeeded
INFO org.apache.sshd.server.session.ServerSession - Received SSH_MSG_CHANNEL_OPEN direct-tcpip
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Receiving request for direct tcpip: hostToConnect=ThinkPad, portToConnect=4000, originatorIpAddress=127.0.0.1, originatorPort=64539
INFO org.apache.sshd.server.session.ServerSession - Received SSH_MSG_CHANNEL_OPEN direct-tcpip
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Receiving request for direct tcpip: hostToConnect=ThinkPad, portToConnect=4005, originatorIpAddress=127.0.0.1, originatorPort=64540
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Received SSH_MSG_CHANNEL_EOF on channel 1
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Send SSH_MSG_CHANNEL_CLOSE on channel 1
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Received SSH_MSG_CHANNEL_CLOSE on channel 1
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Closing channel 1 immediately
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Closing channel 1 immediately
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Send SSH_MSG_CHANNEL_EOF on channel 1
INFO org.apache.sshd.server.session.ServerSession - Closing session
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Closing channel 0 immediately
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Closing channel 0 immediately
INFO org.apache.sshd.server.channel.ChannelDirectTcpip - Send SSH_MSG_CHANNEL_EOF on channel 0
The line "Received SSH_MSG_CHANNEL_EOF on channel 1" suggests that the method invoked on the object has generated an EOF message. This then causes the session to close...
Possible solutions I can think of:
Intercept or prevent the EOF message (but where and how?)
Try to configure the server side sessionfactory to ignore the EOF messages (feels wrong...)
RMI connections are pooled at the client and closed if not reused within 15 seconds. You can adjust this behaviour via system properties: see the Sun system properties page linked from the RMI home page.
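For example, the idle timeout of pooled client-side connections is controlled by sun.rmi.transport.connectionTimeout (in milliseconds, 15000 by default). A sketch of raising it so the tunnel isn't torn down between closely spaced calls; the 60-second value is arbitrary:

// Equivalent JVM flag: -Dsun.rmi.transport.connectionTimeout=60000
public class RmiClientBootstrap {
    public static void main(String[] args) {
        // Must be set before the RMI runtime initializes.
        System.setProperty("sun.rmi.transport.connectionTimeout", "60000");
        // ... connect to the registry through the SSH tunnel as before ...
    }
}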