I am using the following code to register for and listen to Oracle database change notifications. This code works fine when I run it as a standalone Java program: it receives notifications from the database and prints them as expected.
public class DBChangeNotification {
static final String USERNAME = "XXX";
static final String PASSWORD = "YYY";
static String URL = "jdbc:oracle:thin:@xxxx:xxxx:xxxx";
public static void main(String[] args) {
DBChangeNotification demo = new DBChangeNotification();
try {
demo.run();
} catch (SQLException mainSQLException) {
mainSQLException.printStackTrace();
}
}
public void run() throws SQLException {
OracleConnection conn = connect();
Properties prop = new Properties();
prop.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS, "true");
prop.setProperty(OracleConnection.DCN_QUERY_CHANGE_NOTIFICATION, "true");
prop.setProperty(OracleConnection.DCN_BEST_EFFORT, "true");
DatabaseChangeRegistration dcr = conn.registerDatabaseChangeNotification(prop);
try {
// add the listener:
DCNDemoListener list = new DCNDemoListener(this);
dcr.addListener(list);
// second step: add objects in the registration:
Statement stmt = conn.createStatement();
// associate the statement with the registration:
((OracleStatement) stmt).setDatabaseChangeRegistration(dcr);
ResultSet rs = stmt.executeQuery("select * from xxxxxxxx where yyyy='zzzzz'");
while (rs.next()) {
}
String[] tableNames = dcr.getTables();
for (int i = 0; i < tableNames.length; i++) {
System.out.println(tableNames[i] + " is part of the registration.");
}
rs.close();
stmt.close();
} catch (SQLException ex) {
// if an exception occurs, we need to close the registration in order
// to interrupt the thread otherwise it will be hanging around.
if (conn != null) {
conn.unregisterDatabaseChangeNotification(dcr);
}
ex.printStackTrace();
throw ex;
} finally {
try {
// Note that we close the connection!
conn.close();
} catch (Exception innerex) {
innerex.printStackTrace();
}
}
}
/**
* Creates a connection to the database.
*/
OracleConnection connect() throws SQLException {
OracleDriver dr = new OracleDriver();
Properties prop = new Properties();
prop.setProperty("user", DBChangeNotification.USERNAME);
prop.setProperty("password", DBChangeNotification.PASSWORD);
return (OracleConnection) dr.connect(DBChangeNotification.URL, prop);
}
}
/**
* DCN listener: it prints out the event details in stdout.
*/
class DCNDemoListener implements DatabaseChangeListener {
DBChangeNotification demo;
DCNDemoListener(DBChangeNotification dem) {
System.out.println("DCNDemoListener");
demo = dem;
}
@Override
public void onDatabaseChangeNotification(DatabaseChangeEvent e) {
Thread t = Thread.currentThread();
System.out.println("DCNDemoListener: got an event (" + this + " running on thread " + t + ")");
System.out.println(e.toString());
synchronized (demo) {
demo.notify();
}
}
}
My requirement is to use this feature in a web application. When the web application starts on the server, it has to listen for data change notifications (possibly on a separate thread) and notify the application through a WebSocket client. I have added the following code in the contextInitialized method of a ServletContextListener so that it starts as soon as the application starts.
public class MyServletContextListener implements ServletContextListener {
DBChangeNotification demo;
@Override
public void contextDestroyed(ServletContextEvent arg0) {
//Notification that the servlet context is about to be shut down.
}
@Override
public void contextInitialized(ServletContextEvent arg0) {
demo = new DBChangeNotification();
try {
demo.run();
} catch (SQLException mainSQLException) {
mainSQLException.printStackTrace();
}
}
}
The web application does not receive any notifications when a database change event occurs on the registered table. Please help me resolve this issue. I do not know whether this is the correct approach or not; please suggest any alternative other than continuous polling. I need to start something on the server as soon as I receive a notification from the database. Thank you.
It might be that you're running your code on an Oracle instance that doesn't have the Notification API available.
Check this SO question for more info.
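If the instance does support change notification, another thing worth trying is to move the registration off the container's startup thread so that contextInitialized returns promptly. A minimal sketch (assuming the DBChangeNotification class shown above; the single-thread executor is just one way to do it):

import java.sql.SQLException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class MyServletContextListener implements ServletContextListener {

    private ExecutorService executor;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Register for notifications on a background thread so deployment is not blocked.
        executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            try {
                new DBChangeNotification().run();
            } catch (SQLException e) {
                e.printStackTrace();
            }
        });
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop the background thread when the application is undeployed.
        if (executor != null) {
            executor.shutdownNow();
        }
    }
}

This only changes where the registration runs; whether notifications then arrive still depends on the database instance actually supporting the feature.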
I am trying to find a bug in some RabbitMQ client code that was developed six or seven years ago. The code was modified to allow for delayed messages. It seems that connections are created to the RabbitMQ server and then never destroyed. Each exists in a separate thread, so I end up with thousands of threads. I am sure the problem is very obvious/simple, but I am having trouble seeing it. I have been looking at the exchangeDeclare method (the commented-out version is from the original code, which seemed to work), but I have been unable to find the default values for autoDelete and durable that are being set in the modified code. The method below is within a Spring service class. Any help, advice, guidance and pointing out of huge obvious errors appreciated!
private void send(String routingKey, String message) throws Exception {
String exchange = applicationConfiguration.getAMQPExchange();
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "fanout");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 10000); //delay in miliseconds i.e 10secs
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
Connection connection = null;
Channel channel = null;
try {
connection = myConnection.getConnection();
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get connection.");
e.printStackTrace();
return;
}
try {
if (connection != null) {
log.debug(" [CORE: AMQP] Sending message with key {} : {}",routingKey, message);
channel = connection.createChannel();
// channel.exchangeDeclare(exchange, exchangeType);
channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
// channel.basicPublish(exchange, routingKey, null, message.getBytes());
channel.basicPublish(exchange, routingKey, props.build(), message.getBytes());
}
else {
log.error("Total AMQP melt down. This should never happen!");
}
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get send.");
e.printStackTrace();
}
finally {
channel.close();
}
}
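For reference, the two-argument exchangeDeclare(exchange, type) overload in the RabbitMQ Java client declares a non-durable, non-auto-delete exchange, so the modified call mainly changes durable to true. Separately, the finally block above calls channel.close() even when the channel was never created. A minimal sketch of the send method using try-with-resources (assuming a client version where Channel implements AutoCloseable, i.e. 4.x or later; same field names as above, untested against your setup):

private void send(String routingKey, String message) throws Exception {
    String exchange = applicationConfiguration.getAMQPExchange();

    Map<String, Object> args = new HashMap<String, Object>();
    args.put("x-delayed-type", "fanout");

    Map<String, Object> headers = new HashMap<String, Object>();
    headers.put("x-delay", 10000); // delay in milliseconds, i.e. 10 secs
    AMQP.BasicProperties props = new AMQP.BasicProperties.Builder().headers(headers).build();

    Connection connection = myConnection.getConnection();
    if (connection == null) {
        log.error("AMQP send method: unable to get connection.");
        return;
    }

    // try-with-resources closes the channel even if exchangeDeclare or basicPublish throws,
    // and close() is never called on a null channel.
    try (Channel channel = connection.createChannel()) {
        log.debug(" [CORE: AMQP] Sending message with key {} : {}", routingKey, message);
        channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
        channel.basicPublish(exchange, routingKey, props, message.getBytes());
    }
}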
This is the connection class
@Service
public class PersistentConnection {
private static final Logger log = LoggerFactory.getLogger(PersistentConnection.class);
private static Connection myConnection = null;
private Boolean blocked = false;
@Autowired ApplicationConfiguration applicationConfiguration;
@PreDestroy
private void destroy() {
try {
myConnection.close();
} catch (IOException e) {
log.error("Unable to close AMQP Connection.");
e.printStackTrace();
}
}
public Connection getConnection( ) {
if (myConnection == null) {
start();
}
return myConnection;
}
private void start() {
log.debug("Building AMQP Connection");
ConnectionFactory factory = new ConnectionFactory();
String ipAddress = applicationConfiguration.getAMQPHost();
String user = applicationConfiguration.getAMQPUser();
String password = applicationConfiguration.getAMQPPassword();
String virtualHost = applicationConfiguration.getAMQPVirtualHost();
String port = applicationConfiguration.getAMQPPort();
try {
factory.setUsername(user);
factory.setPassword(password);
factory.setVirtualHost(virtualHost);
factory.setPort(Integer.parseInt(port));
factory.setHost(ipAddress);
myConnection = factory.newConnection();
}
catch (Exception e) {
log.error("Unable to initialise AMQP Connection.");
e.printStackTrace();
}
myConnection.addBlockedListener(new BlockedListener() {
public void handleBlocked(String reason) throws IOException {
// Connection is now blocked
log.warn("Message Server has blocked. It may be resource limitted.");
blocked = true;
}
public void handleUnblocked() throws IOException {
// Connection is now unblocked
log.warn("Message server is unblocked.");
blocked = false;
}
});
}
public Boolean isBlocked() {
return blocked;
}
}
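One thing that stands out in this class (an observation, possibly unrelated to the thread build-up): getConnection() is not synchronized, so two threads calling it at the same time can both see myConnection == null and each build a connection, one of which is then never closed. A minimal sketch of a synchronized accessor that also rebuilds a dead connection (same fields as above; isOpen() is part of the RabbitMQ client's Connection API):

    public synchronized Connection getConnection() {
        // Only one thread at a time can observe a missing/closed connection and rebuild it,
        // so concurrent callers cannot create and leak extra connections.
        if (myConnection == null || !myConnection.isOpen()) {
            start();
        }
        return myConnection;
    }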
I have my client trying to look up a JMS server. Here is my class JmsTest.java:
public static void main(String[] aInArgs)
{
boolean bContinue = true;
try
{
// determine JmsTest configuration based on command line arguments.
JmsTest jmsTest = parseCommandLine(aInArgs);
// connect to the server.
//jmsTest.initializeConnection();
Thread jmsFaultClientThread = null;
jmsFaultClientThread = new Thread("RUN") {
@Override
public void run() {
try
{
System.out.println("jmsFaultClient starting...");
jmsTest.initializeConnection();
}
catch (Exception e)
{
System.out.println("Exception: " + e.toString());
}
System.out.println("jmsFaultClient started.");
}
};
jmsFaultClientThread.start();
And my method initializeConnection():
public void initializeConnection() throws Exception
{
try
{
Hashtable env = new Hashtable();
env.put(Context.SECURITY_PRINCIPAL, user );
env.put(Context.SECURITY_CREDENTIALS, password);
jndiContext = new InitialContext(env);
System.out.println("Initializing Topic (" + strName + ")...");
try
{
topicConnectionFactory = (TopicConnectionFactory) jndiContext.lookup(CONNECTION_FACTORY);
}
catch (Exception e)
{
topicConnectionFactory = getExternalFactory(jndiContext);
}
When I call jmsTest.initializeConnection() directly like this, everything works and the lookup succeeds. However, when it is run inside the thread it gets stuck, without any exception or error. It is just stuck.
In my logs I'm seeing:
System.out.println("Initializing Topic (" + strName + ")...");
which is a log statement inside my try/catch, and nothing else.
In my dependencies I have two jars containing javax\jms. With the first one it works inside the thread, and with the second one it doesn't. But I don't know why a jar can "block" the thread.
UPDATE 1:
@AnotherJavaprogrammer asked me to print the error.
Here is my lookup with prints:
try
{
getLogger().debug("TRY context");
Context lInitialContext = (Context) jndiContext.lookup(JMS_CONTEXT);
lInitialContext.lookup("SAMConnectionFactory");
getLogger().debug("END trying context");
}
catch (Exception e)
{
getLogger().debug("Catch");
getLogger().debug("Exception", e);
}
The output from getLogger().debug("END trying context") never comes, and I don't see the getLogger().debug("Catch") one either. So it appears I'm really "stuck" inside the lookup(): I can't go further, and it doesn't throw an exception.
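One way to see exactly where that thread is blocked (purely a diagnostic sketch, not a fix) is to dump its stack from a watchdog thread after a delay. The 30-second delay and the watchdog name are arbitrary; "RUN" matches the thread name created in JmsTest above, and java.util.Map needs to be imported:

    // Start this just before jmsFaultClientThread.start() in JmsTest.
    Thread watchdog = new Thread(() -> {
        try {
            Thread.sleep(30_000); // give the lookup time to get stuck
        } catch (InterruptedException ie) {
            return;
        }
        for (Map.Entry<Thread, StackTraceElement[]> entry : Thread.getAllStackTraces().entrySet()) {
            if ("RUN".equals(entry.getKey().getName())) {
                System.out.println("Stack of stuck thread " + entry.getKey().getName() + ":");
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }, "lookup-watchdog");
    watchdog.setDaemon(true);
    watchdog.start();

The same information can be obtained without code changes by running jstack against the JVM's process id.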
I'm currently learning JDBC, and I'm trying to update the product information and insert a log at the same time.
private void testTransaction() {
try {
// Get Connection
Connection connection = ConnectionUtils.getConnection();
connection.setAutoCommit(false);
// Execute SQL
Product product = new Product(1, 4000d);
productService.updateProduct(connection, product);
Log log = new Log(true, "None");
logService.insertLog(connection, log);
// Commit transaction
connection.commit();
} catch (Exception e) {
e.printStackTrace();
} finally {
ConnectionUtils.closeConnection();
}
}
When using a single thread, it works fine.
@Test
public void testMultiThread() {
testTransaction();
}
But when I use multiple threads, even starting just one thread, the process terminates automatically.
@Test
public void testMultiThread() {
for (int i = 0; i < 1; i++) {
new Thread(this::testTransaction).start();
}
}
After debugging, I found that it was the Class.forName() call in ConnectionUtils that caused this.
public class ConnectionUtils {
static private String url;
static private String driver;
static private String username;
static private String password;
private static Connection connection = null;
private static ThreadLocal<Connection> t = new ThreadLocal<>();
static {
try {
Properties properties = new Properties();
properties.load(new FileReader("src/main/resources/jdbcConnection.properties"));
driver = properties.getProperty("driver");
url = properties.getProperty("url");
username = properties.getProperty("username");
password = properties.getProperty("password");
Class.forName(driver);
} catch (IOException | ClassNotFoundException e) {
e.printStackTrace();
}
}
public static Connection getConnection() {
try {
connection = DriverManager.getConnection(url, username, password);
} catch (Exception e) {
e.printStackTrace();
} finally {
t.set(connection);
}
return connection;
}
}
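Incidentally (unrelated to the termination problem), this class sets the ThreadLocal t but never reads it, and getConnection() returns the shared static connection field, so every thread ends up seeing whichever connection was created last. A sketch of a per-thread variant using the same fields, in case that is what the ThreadLocal was meant for:

    public static Connection getConnection() {
        Connection current = t.get();
        try {
            // Create a connection for this thread only if it does not already have a usable one.
            if (current == null || current.isClosed()) {
                current = DriverManager.getConnection(url, username, password);
                t.set(current);
            }
        } catch (SQLException e) {
            e.printStackTrace();
        }
        return current;
    }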
The process terminates at Class.forName(). I found this by adding two print statements, one before and one after that line, and only the former prints.
System.out.println("Before");
Class.forName(driver);
System.out.println("After");
The console only prints "Before" and doesn't show any exception information.
I want to know why using multiple threads in Java causes this and how to solve the problem.
Most likely your test method completes before your other threads do, and the test framework (JUnit?) does not wait for them. You need to wait until the threads have completed. You could also use an ExecutorService, which is more convenient.
@Test
public void testMultiThread() throws InterruptedException {
Thread[] threads = new Thread[1];
for (int i = 0; i < threads.length; i++) {
threads[i] = new Thread(this::testTransaction);
threads[i].start();
}
// wait thread completion
for (Thread th : threads) {
th.join();
}
}
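Since an ExecutorService was mentioned, here is a minimal sketch of the same test written that way (imports from java.util.concurrent assumed; the 30-second timeout is arbitrary):

    @Test
    public void testMultiThread() throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(1);
        for (int i = 0; i < 1; i++) {
            executor.submit(this::testTransaction);
        }
        // Refuse new tasks, then block until the submitted ones finish,
        // so JUnit does not kill them when the test method returns.
        executor.shutdown();
        executor.awaitTermination(30, TimeUnit.SECONDS);
    }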
JUnit will terminate all your threads as soon as the test method finishes.
In your case, the test finishes when the loop ends; it does not care whether
testTransaction has finished. It has nothing to do with Class.forName(); maybe that method simply takes longer to execute.
You can check this answer.
I have a class which looks like this:
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
public class ConnectionPool {
private HikariDataSource hds;
private final String propertyFileName;
public ConnectionPool(String propertyFileName) {
if (propertyFileName == null) {
throw new IllegalArgumentException("propertyFileName can't be null");
}
this.propertyFileName = propertyFileName;
reloadFile();
}
public void reloadFile() {
if (hds != null) {
hds.close();
}
hds = new HikariDataSource(new HikariConfig(propertyFileName));
}
public HikariDataSource getHikariDataSource() {
return hds;
}
public String getPropertyFileName() {
return propertyFileName;
}
public void executeQuery(final String sql, final CallBack<ResultSet, SQLException> callBack) {
new Thread(new Runnable() {
@Override
public void run() {
Connection connection = null;
PreparedStatement preparedStatement = null;
ResultSet resultSet = null;
try {
connection = hds.getConnection();
preparedStatement = connection.prepareStatement(sql);
resultSet = preparedStatement.executeQuery();
callBack.call(resultSet, null);
} catch (SQLException e) {
callBack.call(null, e);
} finally {
if (resultSet != null) {
try {
resultSet.close();
} catch (SQLException ignored) {}
}
if (preparedStatement != null) {
try {
preparedStatement.close();
} catch (SQLException ignored) {}
}
if (connection != null) {
try {
connection.close();
} catch (SQLException ignored) {}
}
}
}
}).start();
}
public void executeUpdate(final String sql, final CallBack<Integer, SQLException> callBack) {
//TODO
}
public void execute(final String sql, final CallBack<Boolean, SQLException> callBack) {
//TODO
}
public void connection(final String sql, final CallBack<Connection, SQLException> callBack) {
//TODO
}
}
The problem is that the reloadFile() method can be called from a different thread while hds is in use, so it's possible that hds is closed while another thread is still using one of its connection objects. What's the best way to solve this problem? Should I wait a few seconds after creating the new HikariDataSource object before closing the old one (until the queries are finished)?
Edit: Another question: Should hds be volatile, so that changes to hds are visible to all threads?
I have had a very quick look at the HikariDataSource source code. Its close() calls the internal HikariPool's shutdown() method, which tries to properly close the pooled connections.
If you want to avoid any chance of an in-progress connection being force-closed, one way is to make use of a ReadWriteLock:
public class ConnectionPool {
private HikariDataSource hds;
private ReentrantReadWriteLock dsLock = ....;
//....
public void reloadFile() {
dsLock.writeLock().lock();
try {
if (hds != null) {
hds.close();
}
hds = new HikariDataSource(new HikariConfig(propertyFileName));
} finally {
dsLock.writeLock().unlock();
}
}
public void executeQuery(final String sql, final CallBack<ResultSet, SQLException> callBack) {
new Thread(new Runnable() {
@Override
public void run() {
Connection connection = null;
PreparedStatement preparedStatement = null;
ResultSet resultSet = null;
dsLock.readLock().lock();
try {
connection = hds.getConnection();
// ....
} catch (SQLException e) {
callBack.call(null, e);
} finally {
// your other cleanups
dsLock.readLock().unlock();
}
}
}).start();
}
//....
}
This will make sure that:
multiple threads can access your data source (to get connections, etc.);
a reload of the data source waits until threads currently using it have completed;
no thread can use the data source to get a connection while it is reloading.
Why exactly are you trying to cause HikariCP to reload? Many of the important pool parameters (minimumIdle, maximumPoolSize, connectionTimeout, etc.) are controllable at runtime through the JMX bean without restarting the pool.
Restarting the pool is a good way to "hang" your application for several seconds while connections are closed and rebuilt. If you can't do what you need through the JMX interface, Adrian's suggestion seems like quite a reasonable solution.
Other solutions are possible, but have more complexity.
EDIT: Just for my own entertainment, here is the more complex solution...
public class ConnectionPool {
private AtomicReference<HikariDataSource> hds;
public ConnectionPool(String propertyFileName) {
hds = new AtomicReference<>();
...
}
public void reloadFile() {
final HikariDataSource ds = hds.getAndSet(new HikariDataSource(new HikariConfig(propertyFileName)));
if (ds != null) {
new Thread(new Runnable() {
public void run() {
try {
ObjectName poolName = new ObjectName("com.zaxxer.hikari:type=Pool (" + ds.getPoolName() + ")");
MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
HikariPoolMXBean poolProxy = JMX.newMXBeanProxy(mBeanServer, poolName, HikariPoolMXBean.class);
poolProxy.softEvictConnections();
do {
Thread.sleep(500);
} while (poolProxy.getActiveConnections() > 0);
} catch (MalformedObjectNameException | InterruptedException e) {
// give up waiting and close the old pool anyway
}
ds.close();
}
}).start();
}
}
public HikariDataSource getHikariDataSource() {
return hds.get();
}
public void executeQuery(final String sql, final CallBack<ResultSet, SQLException> callBack) {
new Thread(new Runnable() {
@Override
public void run() {
...
try {
connection = getHikariDataSource().getConnection();
...
}
}
}).start();
}
}
This will swap out the pool (atomically) and will start a thread that waits until all active connections have returned before shutting down the orphaned pool instance.
This assumes that you let HikariCP generate unique pool names, i.e. do not set poolName in your properties, and that registerMbeans=true.
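For reference, a minimal sketch of the property file such a setup might use (property names as documented by HikariCP; the datasource values are placeholders):

    # hikari.properties passed to new HikariConfig(propertyFileName)
    registerMbeans=true
    jdbcUrl=jdbc:mysql://localhost:3306/mydb
    username=user
    password=secret
    # do not set poolName, so HikariCP generates a unique name per pool instance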
A few options:
Synchronize all access to the data source so that only one thread can ever be touching it. Not scalable, but workable (see the sketch after this list).
Roll your own connection pooling, for example with Apache Commons Pool, so that each access, regardless of thread, requests a data source and the pool creates one as necessary. This can affect data consistency (ACID); it depends on whether dirty data is acceptable, when data is flushed, transactionality, etc.
Each thread could also have its own data source using ThreadLocal so that the threads are totally independent of each other. Again, data quality might be an issue, and resources might be an issue if you've got "lots" of threads (depends on your definition), since too many open connections cause resource problems on either the client or the server.
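For completeness, a minimal sketch of the first option applied to the class in the question: reloadFile() and the query path are both synchronized on the pool object, so only one thread at a time touches hds, which is exactly why it does not scale:

    public synchronized void reloadFile() {
        if (hds != null) {
            hds.close();
        }
        hds = new HikariDataSource(new HikariConfig(propertyFileName));
    }

    // Fully serialized: while one query runs, no other query and no reload can touch hds,
    // so the pool can never be closed underneath an in-flight connection.
    public synchronized void executeQuery(String sql, CallBack<ResultSet, SQLException> callBack) {
        try (Connection connection = hds.getConnection();
             PreparedStatement preparedStatement = connection.prepareStatement(sql);
             ResultSet resultSet = preparedStatement.executeQuery()) {
            callBack.call(resultSet, null);
        } catch (SQLException e) {
            callBack.call(null, e);
        }
    }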
Disclaimer: I am not a Java programmer. Odds are I'll need to do my homework on any advice given, but I will gladly do so :)
That said, I wrote a complete database-backed socket server which is working just fine in my small tests, and now I'm getting ready for an initial release. Since I do not know Java/Netty/BoneCP well, I have no idea if I made a fundamental mistake somewhere that will hurt my server before it even gets out the door.
For example, I have no idea what an executor group does exactly or what number I should use, whether it's okay to implement BoneCP as a singleton, whether it's really necessary to have all those try/catch blocks for each database query, etc.
I've tried to reduce my entire server to a basic example which operates the same way as the real thing (I stripped it all down as text and did not test it in Java itself, so excuse any syntax errors due to that).
The basic idea is that clients can connect, exchange messages with the server, disconnect other clients, and stay connected indefinitely until they choose or are forced to disconnect. (the client will send ping messages every minute to keep the connection alive)
The only major difference, besides this example being untested, is how the clientID is set (safely assume it is truly unique per connected client) and that there is some more business logic checking values, etc.
Bottom line: can anything be done to improve this so it can handle as many concurrent users as possible? Thanks!
//MAIN
public class MainServer {
public static void main(String[] args) {
EdgeController edgeController = new EdgeController();
edgeController.connect();
}
}
//EdgeController
public class EdgeController {
public void connect() throws Exception {
ServerBootstrap b = new ServerBootstrap();
ChannelFuture f;
try {
b.group(new NioEventLoopGroup(), new NioEventLoopGroup())
.channel(NioServerSocketChannel.class)
.localAddress(9100)
.childOption(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_KEEPALIVE, true)
.childHandler(new EdgeInitializer(new DefaultEventExecutorGroup(10)));
// Start the server.
f = b.bind().sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally { //Not quite sure how to get here yet... but no matter
// Shut down all event loops to terminate all threads.
b.shutdown();
}
}
}
//EdgeInitializer
public class EdgeInitializer extends ChannelInitializer<SocketChannel> {
private EventExecutorGroup executorGroup;
public EdgeInitializer(EventExecutorGroup _executorGroup) {
executorGroup = _executorGroup;
}
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("idleStateHandler", new IdleStateHandler(200,0,0));
pipeline.addLast("idleStateEventHandler", new EdgeIdleHandler());
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.nulDelimiter()));
pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
pipeline.addLast(this.executorGroup, "handler", new EdgeHandler());
}
}
//EdgeIdleHandler
public class EdgeIdleHandler extends ChannelHandlerAdapter {
private static final Logger logger = Logger.getLogger( EdgeIdleHandler.class.getName());
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception{
if(evt instanceof IdleStateEvent) {
ctx.close();
}
}
private void trace(String msg) {
logger.log(Level.INFO, msg);
}
}
//DBController
public enum DBController {
INSTANCE;
private BoneCP connectionPool = null;
private BoneCPConfig connectionPoolConfig = null;
public boolean setupPool() {
boolean ret = true;
try {
Class.forName("com.mysql.jdbc.Driver");
connectionPoolConfig = new BoneCPConfig();
connectionPoolConfig.setJdbcUrl("jdbc:mysql://" + DB_HOST + ":" + DB_PORT + "/" + DB_NAME);
connectionPoolConfig.setUsername(DB_USER);
connectionPoolConfig.setPassword(DB_PASS);
try {
connectionPool = new BoneCP(connectionPoolConfig);
} catch(SQLException ex) {
ret = false;
}
} catch(ClassNotFoundException ex) {
ret = false;
}
return(ret);
}
public Connection getConnection() {
Connection ret;
try {
ret = connectionPool.getConnection();
} catch(SQLException ex) {
ret = null;
}
return(ret);
}
}
//EdgeHandler
public class EdgeHandler extends ChannelInboundMessageHandlerAdapter<String> {
private final Charset CHARSET_UTF8 = Charset.forName("UTF-8");
private Long clientID; // nullable until the client has identified itself
static final ChannelGroup channels = new DefaultChannelGroup();
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
Connection dbConnection = null;
Statement statement = null;
ResultSet resultSet = null;
String query;
Boolean okToPlay = false;
//Check if status for ID #1 is true
try {
query = "SELECT `Status` FROM `ServerTable` WHERE `ID` = 1";
dbConnection = DBController.INSTANCE.getConnection();
statement = dbConnection.createStatement();
resultSet = statement.executeQuery(query);
if (resultSet.first()) {
if (resultSet.getInt("Status") > 0) {
okToPlay = true;
}
}
} catch (SQLException ex) {
okToPlay = false;
} finally {
if (resultSet != null) {
try {
resultSet.close();
} catch (SQLException logOrIgnore) {
}
}
if (statement != null) {
try {
statement.close();
} catch (SQLException logOrIgnore) {
}
}
if (dbConnection != null) {
try {
dbConnection.close();
} catch (SQLException logOrIgnore) {
}
}
}
if (okToPlay) {
//clientID = setClientID();
sendCommand(ctx, "HELLO", "WORLD");
} else {
sendErrorAndClose(ctx, "CLOSED");
}
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
channels.remove(ctx.channel());
}
@Override
public void messageReceived(ChannelHandlerContext ctx, String request) throws Exception {
// Generate and write a response.
String[] segments_whitespace;
String command, command_args;
if (request.length() > 0) {
segments_whitespace = request.split("\\s+");
if (segments_whitespace.length > 1) {
command = segments_whitespace[0];
command_args = segments_whitespace[1];
if (command.length() > 0 && command_args.length() > 0) {
switch (command) {
case "HOWDY": processHowdy(ctx, command_args); break;
default: break;
}
}
}
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
TraceUtils.severe("Unexpected exception from downstream - " + cause.toString());
ctx.close();
}
/* */
/* STATES - / CLIENT SETUP */
/* */
private void processHowdy(ChannelHandlerContext ctx, String howdyTo) {
Connection dbConnection = null;
Statement statement = null;
ResultSet resultSet = null;
String replyBack = null;
try {
dbConnection = DBController.INSTANCE.getConnection();
statement = dbConnection.createStatement();
resultSet = statement.executeQuery("SELECT `to` FROM `ServerTable` WHERE `To`='" + howdyTo + "'");
if (resultSet.first()) {
replyBack = "you!";
}
} catch (SQLException ex) {
} finally {
if (resultSet != null) {
try {
resultSet.close();
} catch (SQLException logOrIgnore) {
}
}
if (statement != null) {
try {
statement.close();
} catch (SQLException logOrIgnore) {
}
}
if (dbConnection != null) {
try {
dbConnection.close();
} catch (SQLException logOrIgnore) {
}
}
}
if (replyBack != null) {
sendCommand(ctx, "HOWDY", replyBack);
} else {
sendErrorAndClose(ctx, "ERROR");
}
}
private boolean closePeer(ChannelHandlerContext ctx, long peerClientID) {
boolean success = false;
ChannelFuture future;
for (Channel c : channels) {
if (c != ctx.channel()) {
if (c.pipeline().get(EdgeHandler.class).receiveClose(c, peerClientID)) {
success = true;
break;
}
}
}
return (success);
}
public boolean receiveClose(Channel thisChannel, long remoteClientID) {
ChannelFuture future;
boolean didclose = false;
long thisClientID = (clientID == null ? 0 : clientID);
if (remoteClientID == thisClientID) {
future = thisChannel.write("CLOSED BY PEER" + '\n');
future.addListener(ChannelFutureListener.CLOSE);
didclose = true;
}
return (didclose);
}
private ChannelFuture sendCommand(ChannelHandlerContext ctx, String cmd, String outgoingCommandArgs) {
return (ctx.write(cmd + " " + outgoingCommandArgs + '\n'));
}
private ChannelFuture sendErrorAndClose(ChannelHandlerContext ctx, String error_args) {
ChannelFuture future = sendCommand(ctx, "ERROR", error_args);
future.addListener(ChannelFutureListener.CLOSE);
return (future);
}
}
When a network message arrives at the server, it is decoded and fires a messageReceived event.
If you look at your pipeline, the last thing added to it is the executor group, so that group receives the decoded message and fires the messageReceived event.
Executors are the processors of events; the server tells them which events are happening, so how executors are used is an important subject. If there is only one executor, all clients share that same executor and events queue up waiting for it.
When there are many executors, the processing time of events decreases, because there is no waiting for a free executor.
In your code
new DefaultEventExecutorGroup(10)
means this ServerBootstrap will use only 10 executor threads for its whole lifetime.
While initializing new channels, the same executor group is used:
pipeline.addLast(this.executorGroup, "handler", new EdgeHandler());
So each new client channel will use the same executor group (10 executor threads).
That is efficient and sufficient if 10 threads are able to process the incoming events quickly enough. But if you can see messages being decoded/encoded yet not handled as events quickly, that means you need to increase their number.
We can increase number of executors from 10 to 100 like that:
new DefaultEventExecutorGroup(100)
That will process the event queue faster, provided there is enough CPU power.
What should not be done is creating a new executor group for each new channel:
pipeline.addLast(new DefaultEventExecutorGroup(10), "handler", new EdgeHandler());
The line above creates a new executor group for each new channel, which will slow things down greatly; for example, if there are 3000 clients, there will be 3000 executor groups (threads). That removes the main advantage of NIO: the ability to work with a low number of threads.
Instead of creating one executor group per channel, we can create 3000 executors at startup, and at least they will not be deleted and recreated each time a client connects/disconnects.
.childHandler(new EdgeInitializer(new DefaultEventExecutorGroup(3000)));
The line above is more acceptable than creating one executor per client, because all clients are connected to the same ExecutorGroup, and when a client disconnects the executors are still there even though the client's data is removed.
As for database requests: some database queries can take a long time to complete, so if there are 10 executors and 10 jobs being processed, an 11th job has to wait until one of the others completes. This is a bottleneck if the server receives more than 10 very time-consuming database jobs at the same time. Increasing the number of executors will ease the bottleneck to some degree.
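If you prefer not to hard-code that number, one sketch is to derive it from the machine (b is the ServerBootstrap from EdgeController; the multiplier is arbitrary and something to tune under load):

    // Size the shared executor group relative to the hardware instead of a fixed constant.
    int executorThreads = Runtime.getRuntime().availableProcessors() * 8;
    b.childHandler(new EdgeInitializer(new DefaultEventExecutorGroup(executorThreads)));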