How can I use Apache Commons Pool to pool TCP connections and reuse them?
To implement a TCP connection pool, I am trying to use Apache Commons Pool (1.6). I based my implementation on the object pool example posted at https://javaarchitectforum.com/tag/apache-common-object-pool-example/.
I expect the TCP connection to the remote server to persist after it is first established, and to be reused for subsequent connection requests.
The issue is that I cannot see any persistent connection to the server (netstat -an). Borrowing an object establishes a new connection, and returning the object disconnects the socket. No pooling!
Am I using the correct approach to create a TCP pool?
The issue is resolved.
Two amendments were needed:
When returning an object to the pool, the DataOutputStream should not be closed.
clientSocket should be kept alive in makeObject() [clientSocket.setKeepAlive(true)].
As a result, connections persist and are reused for subsequent requests, as in the sketch below.
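For reference, here is a minimal sketch of a factory with those two amendments applied, assuming Commons Pool 1.6; the host name and port are placeholders:

import java.net.Socket;
import org.apache.commons.pool.BasePoolableObjectFactory;
import org.apache.commons.pool.impl.GenericObjectPool;

public class TcpPoolExample {

    // Hypothetical target: "example.com" and 9000 stand in for the real server.
    static class TcpConnectionFactory extends BasePoolableObjectFactory<Socket> {
        @Override
        public Socket makeObject() throws Exception {
            Socket socket = new Socket("example.com", 9000);
            socket.setKeepAlive(true); // keep the connection alive between borrows
            return socket;
        }

        @Override
        public void destroyObject(Socket socket) throws Exception {
            socket.close(); // close only when the pool really evicts the object
        }

        @Override
        public boolean validateObject(Socket socket) {
            return socket.isConnected() && !socket.isClosed();
        }
    }

    public static void main(String[] args) throws Exception {
        GenericObjectPool<Socket> pool =
                new GenericObjectPool<Socket>(new TcpConnectionFactory());

        Socket socket = pool.borrowObject();
        try {
            socket.getOutputStream().write("ping\n".getBytes());
            // do NOT close the output stream here; closing it closes the socket
        } finally {
            pool.returnObject(socket); // socket stays open, ready for reuse
        }
    }
}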
I am using the Table interface ( https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Table.html ) and the Connection interface ( https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/client/Connection.html ) to get the Table object. As mentioned in the Connection interface documentation, "Connection creation is a heavy-weight operation. Connection implementations are thread-safe, so that the client can create a connection once, and share it with different threads."
So if I create a single Connection object for all threads (creating it in a static block), what happens if there is a network issue and the client loses its connection to the HBase cluster for some time? Will the Connection object still work after that?
If the connection is lost and comes back within a certain period (the TCP timeout), everything will keep working.
There is a TCP connection established between the client and HBase, and as the documentation says, "The individual connections to servers, meta cache, zookeeper connection, etc are all shared by the Table and Admin instances obtained from this connection". If data was sent while the network was unreachable, it stays in the send buffer, the client keeps retransmitting the segment, and HBase receives it once the network comes back.
But if the network does not become reachable within that period, TCP finally gives up and closes the socket. For this situation you need a catch block that handles the failure (or you have to restart the application); see the sketch below.
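As a sketch of the setup described in the question (one shared Connection created in a static block) together with such a catch block; the table name is a placeholder:

import java.io.IOException;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;

public final class HBaseConnectionHolder {

    // One heavy-weight Connection, created once and shared by all threads.
    private static final Connection CONNECTION;

    static {
        try {
            CONNECTION = ConnectionFactory.createConnection(HBaseConfiguration.create());
        } catch (IOException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    // Table instances are lightweight; get one per operation and close it.
    public static void readRow(byte[] rowKey) {
        try (Table table = CONNECTION.getTable(TableName.valueOf("my_table"))) {
            table.get(new Get(rowKey));
        } catch (IOException e) {
            // Lands here once TCP has given up (or on any other I/O failure):
            // log, retry, or recreate the Connection instead of restarting the jar.
        }
    }
}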
My system encounters a connection leak in its connection pool. I would like to log some statistics of the connection pool regularly; how can I do that? For example: current capacity, active connections high count, total connections count, leaked connection count, etc.
I am using javax.sql.DataSource to retrieve connections from the pool, but I couldn't find any interface that exposes this connection pool information. Any ideas?
I am using Oracle DB with Java EE on the server side.
javax.sql.DataSource is just an interface that abstracts a data source; it says nothing about providing pooled connections.
A connection pool is responsible for providing pooled, reusable connections to a database (data source).
First you need to find out which connection pool you're using. Connection pool implementations usually provide a way to query things like the number of active connections.
For example, Apache DBCP has a BasicDataSource class which is a connection pool, and it has methods for this:
BasicDataSource.getMaxTotal();
BasicDataSource.getNumActive();
BasicDataSource.getNumIdle();
BasicDataSource.getMinIdle();
BasicDataSource.getMaxIdle();
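For instance, assuming commons-dbcp2 (where getMaxTotal() lives), you could log these counters on a schedule; the one-minute interval is arbitrary:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import org.apache.commons.dbcp2.BasicDataSource;

public class PoolStatsLogger {
    // Log the pool counters once a minute (an arbitrary interval for illustration).
    public static void startLogging(final BasicDataSource ds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.printf("pool: maxTotal=%d active=%d idle=%d minIdle=%d maxIdle=%d%n",
                        ds.getMaxTotal(), ds.getNumActive(), ds.getNumIdle(),
                        ds.getMinIdle(), ds.getMaxIdle());
            }
        }, 0, 1, TimeUnit.MINUTES);
    }
}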
Since you mentioned you're using Oracle DB, most likely your connection pool is OracleOCIConnectionPool (part of Oracle JDBC driver) which provides:
OracleOCIConnectionPool.getMaxLimit();
OracleOCIConnectionPool.getPoolSize();
OracleOCIConnectionPool.getActiveSize();
OracleOCIConnectionPool.getMinLimit();
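The same periodic-logging idea applies here; a brief sketch, assuming your DataSource really is an OracleOCIConnectionPool:

import java.sql.SQLException;
import oracle.jdbc.pool.OracleOCIConnectionPool;

public class OraclePoolStats {
    // Assumes the DataSource was created or looked up as an OracleOCIConnectionPool.
    static void logStats(OracleOCIConnectionPool pool) throws SQLException {
        System.out.printf("pool: maxLimit=%d poolSize=%d active=%d minLimit=%d%n",
                pool.getMaxLimit(), pool.getPoolSize(),
                pool.getActiveSize(), pool.getMinLimit());
    }
}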
In our project we are maintaining our own DB connection pool.
For resolving the issue 'java.sql.SQLRecoverableException: Io exception', most people have suggested using a standard connection pool like Apache DBCP.
I am wondering what logic those standard pooling mechanisms perform during a connection reset?
How does the connection pool know that a DB connection has timed out, since we know conn.isClosed() won't help here?
Does each DB connection have its own TCP client socket to the DB server?
Finally, is it advisable that, whenever I return a connection to the pool, the pool should close it if the connection has existed for more than ~10 mins?
[~10 mins is the server-side connection timeout variable]
Kindly answer all my questions.
I am answering this question assuming that you used Apache DBCP for connection pooling, via the org.apache.commons.pool.impl.GenericObjectPool, org.apache.commons.dbcp.DataSourceConnectionFactory, org.apache.commons.dbcp.PoolableConnectionFactory and org.apache.commons.dbcp.PoolingDataSource classes.

I am wondering what logic those standard pooling mechanisms perform during a connection reset?

If GenericObjectPool.testOnBorrow and GenericObjectPool.testOnReturn are set to true, the connection is validated using the validationQuery set on the PoolableConnectionFactory. If validation fails, the Connection object is dropped and a new one is created and added to the pool.

How does the connection pool know that a DB connection has timed out, since we know conn.isClosed() won't help here?

Same mechanism as above.

Does each DB connection have its own TCP client socket to the DB server?

Yes.

Finally, is it advisable that, whenever I return a connection to the pool, the pool should close it if the connection has existed for more than ~10 mins? [~10 mins server-side connection timeout variable]

You can do it if you have a special reason to and it will not create unnecessary network traffic. To evict connections based on idle time, set minEvictableIdleTimeMillis together with timeBetweenEvictionRunsMillis on the GenericObjectPool, as in the sketch below.
I am getting the error below when trying to connect to a TCP server. My program tries to open around 300-400 connections using different threads, and this happens around the 250th thread. Each thread uses its own connection to send and receive data.
java.net.SocketException: Connection timed out:could be due to invalid address
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:372)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:233)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:220)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:385)
Here is the code each thread uses to get a socket:
socket = new Socket(my_hostName, my_port);
Is there any default limit on the number of connections a TCP server can have at one time? If not, how do I solve this type of problem?
You could be getting a connection timeout if the server has a ServerSocket bound to the port you are connecting to, but is not accepting the connection.
If it always happens with the 250th connection, maybe the server is set up to only accept 250 connections. Someone has to disconnect so you can connect. Or you can increase the timeout; instead of creating the socket like that, create the socket with the empty constructor and then use the connect() method:
Socket s = new Socket(); // unconnected socket
s.connect(new InetSocketAddress(my_hostName, my_port), 90000); // wait up to 90 s for the connection
The default connect timeout is platform-dependent (often around 30 seconds); the code above waits up to 90 seconds to connect, then throws the exception if the connection cannot be established.
You could also set a lower connection timeout and do something else when you catch that exception...
Why all the connections? Is this a test program? If so, be aware that opening large numbers of connections from a single client stresses the client in ways that aren't exercised by real systems, where the load comes from many different client hosts, so test results from that kind of client aren't all that valid. You could be running out of client ports, or some other client resource.
If it isn't a test program, same question: why all the connections? You'd be better off running a connection pool and reusing a much smaller number of connections serially. The network only has so much bandwidth, after all; dividing it by 400 isn't very useful.
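A minimal sketch of that pooling idea, using only java.util.concurrent; the host, port, and pool size of 10 are placeholders:

import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SimpleSocketPool {
    private final BlockingQueue<Socket> pool;

    // Open a fixed, small number of connections up front and share them.
    public SimpleSocketPool(String host, int port, int size) throws IOException {
        pool = new ArrayBlockingQueue<Socket>(size);
        for (int i = 0; i < size; i++) {
            pool.add(new Socket(host, port));
        }
    }

    // Threads block here instead of each opening their own connection.
    public Socket borrow() throws InterruptedException {
        return pool.take();
    }

    public void giveBack(Socket socket) {
        pool.add(socket);
    }
}

Each of your 300-400 worker threads would then borrow() a socket, use it, and giveBack() it, so only a handful of connections ever exist at once.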
I'm implementing a Java TCP/IP server that uses ServerSocket to accept messages from clients via network sockets.
It works fine, except for clients on PDAs (a Wi-Fi barcode scanner).
If I have a connection between the server and the PDA, and the PDA goes into suspend (standby) after some idle time, then there are problems with the connection.
When the PDA wakes up again, I can observe in a TCP monitor that a second connection from a different port is established, but the old one remains established too:
localhost:2000 remotehost:4899 ESTABLISHED (first connection)
localhost:2000 remotehost:4890 ESTABLISHED (connection after wakeup)
Now communication doesn't work: the client uses the new connection, but the server still reads from the old one, so the server doesn't receive the messages. Only when the server sends a message to the client does it notice the problem (it receives a SocketException: Connection reset). The server then uses the new connection, and all the messages the client sent in the meantime arrive in a single burst!
So I only notice the network problem when the server tries to send a message; until then there are no exceptions or anything. How can I properly react to this, so that the new connection is used as soon as it is established (and the old one closed)?
From your description I guess that the server is structured like this:
while (true) {
    Socket clientSocket = serverSocket.accept();
    talkToClientUntilConnectionCloses(clientSocket); // blocks until this client disconnects
}
I'd change it to process incoming connections and established connections in parallel. The simplest approach (from the implementation point of view) is to start a new thread for each client. It is not a good approach in general (it has poor scalability), but if you don't expect a lot of clients and can afford it, just change the server like this:
while (true) {
    Socket clientSocket = serverSocket.accept();
    startClientThread(clientSocket); // spawn a thread for this client, then accept the next one
}
As a bonus, you get the ability to handle multiple clients simultaneously (and all the troubles attached, too).
It sounds like the major issue is that you want the server to realize and drop the old connections as they become stale.
Have you considered setting a timeout on the connection on the server-side socket (the connection Socket, not the ServerSocket) so you can close/drop it after a certain period? Perhaps after the SO_TIMEOUT expires on the Socket, you could test it with an echo/keepalive command to verify that the connection is still good.
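A rough sketch of that suggestion, assuming a per-client read loop that owns the socket; the 60-second timeout and the PING probe are placeholders for whatever your protocol supports:

import java.io.IOException;
import java.io.InputStream;
import java.net.Socket;
import java.net.SocketTimeoutException;

public class StaleConnectionHandler {

    // Per-client read loop: drop the connection once it looks stale.
    static void serveClient(Socket clientSocket) throws IOException {
        clientSocket.setSoTimeout(60 * 1000); // read() now times out after 60 s of silence
        InputStream in = clientSocket.getInputStream();
        byte[] buffer = new byte[1024];
        try {
            while (true) {
                try {
                    int n = in.read(buffer);
                    if (n < 0) {
                        break; // client closed the connection normally
                    }
                    // ... process n bytes of message data ...
                } catch (SocketTimeoutException idle) {
                    // No data for 60 s: send a keepalive probe. If the old
                    // connection is dead, this write eventually throws
                    // (e.g. Connection reset) and the finally block closes it.
                    clientSocket.getOutputStream().write("PING\n".getBytes());
                }
            }
        } finally {
            clientSocket.close();
        }
    }
}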