Disclaimer: I am not a Java programmer. Odds are I'll need to do my homework on any advice given, but I will gladly do so :)
That said, I wrote a complete database-backed socket server which is working just fine for my small tests, and now I'm getting ready for the initial release. Since I do not know Java/Netty/BoneCP well, I have no idea if I made a fundamental mistake somewhere that will hurt my server before it even gets out the door.
For example, I have no idea what an executor group does exactly or what number I should use, whether it's okay to implement BoneCP as a singleton, whether it's really necessary to have all those try/catch blocks for each database query, etc.
I've tried to reduce my entire server to a basic example which operates the same way as the real thing (I stripped this down as text and did not test it in Java itself, so excuse any syntax errors due to that).
The basic idea is that clients can connect, exchange messages with the server, disconnect other clients, and stay connected indefinitely until they choose or are forced to disconnect. (the client will send ping messages every minute to keep the connection alive)
The only major differences, besides this example being untested, are how the clientID is set (safely assume it is truly unique per connected client) and that there is some more business logic checking values, etc.
Bottom line: can anything be done to improve this so it can handle as many concurrent users as possible? Thanks!
//MAIN
public class MainServer {
public static void main(String[] args) {
EdgeController edgeController = new EdgeController();
edgeController.connect();
}
}
//EdgeController
public class EdgeController {
public void connect() throws Exception {
ServerBootstrap b = new ServerBootstrap();
ChannelFuture f;
try {
b.group(new NioEventLoopGroup(), new NioEventLoopGroup())
.channel(NioServerSocketChannel.class)
.localAddress(9100)
.childOption(ChannelOption.TCP_NODELAY, true)
.childOption(ChannelOption.SO_KEEPALIVE, true)
.childHandler(new EdgeInitializer(new DefaultEventExecutorGroup(10)));
// Start the server.
f = b.bind().sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally { //Not quite sure how to get here yet... but no matter
// Shut down all event loops to terminate all threads.
b.shutdown();
}
}
}
//EdgeInitializer
public class EdgeInitializer extends ChannelInitializer<SocketChannel> {
private EventExecutorGroup executorGroup;
public EdgeInitializer(EventExecutorGroup _executorGroup) {
executorGroup = _executorGroup;
}
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
pipeline.addLast("idleStateHandler", new IdleStateHandler(200,0,0));
pipeline.addLast("idleStateEventHandler", new EdgeIdleHandler());
pipeline.addLast("framer", new DelimiterBasedFrameDecoder(8192, Delimiters.nulDelimiter()));
pipeline.addLast("decoder", new StringDecoder(CharsetUtil.UTF_8));
pipeline.addLast("encoder", new StringEncoder(CharsetUtil.UTF_8));
pipeline.addLast(this.executorGroup, "handler", new EdgeHandler());
}
}
//EdgeIdleHandler
public class EdgeIdleHandler extends ChannelHandlerAdapter {
private static final Logger logger = Logger.getLogger( EdgeIdleHandler.class.getName());
@Override
public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception{
if(evt instanceof IdleStateEvent) {
ctx.close();
}
}
private void trace(String msg) {
logger.log(Level.INFO, msg);
}
}
//DBController
public enum DBController {
INSTANCE;
private BoneCP connectionPool = null;
private BoneCPConfig connectionPoolConfig = null;
public boolean setupPool() {
boolean ret = true;
try {
Class.forName("com.mysql.jdbc.Driver");
connectionPoolConfig = new BoneCPConfig();
connectionPoolConfig.setJdbcUrl("jdbc:mysql://" + DB_HOST + ":" + DB_PORT + "/" + DB_NAME);
connectionPoolConfig.setUsername(DB_USER);
connectionPoolConfig.setPassword(DB_PASS);
try {
connectionPool = new BoneCP(connectionPoolConfig);
} catch(SQLException ex) {
ret = false;
}
} catch(ClassNotFoundException ex) {
ret = false;
}
return(ret);
}
public Connection getConnection() {
Connection ret;
try {
ret = connectionPool.getConnection();
} catch(SQLException ex) {
ret = null;
}
return(ret);
}
}
//EdgeHandler
public class EdgeHandler extends ChannelInboundMessageHandlerAdapter<String> {
private final Charset CHARSET_UTF8 = Charset.forName("UTF-8");
private Long clientID; // boxed so the null check in receiveClose is valid
static final ChannelGroup channels = new DefaultChannelGroup();
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
Connection dbConnection = null;
Statement statement = null;
ResultSet resultSet = null;
String query;
Boolean okToPlay = false;
//Check if status for ID #1 is true
try {
query = "SELECT `Status` FROM `ServerTable` WHERE `ID` = 1";
dbConnection = DBController.INSTANCE.getConnection();
statement = dbConnection.createStatement();
resultSet = statement.executeQuery(query);
if (resultSet.first()) {
if (resultSet.getInt("Status") > 0) {
okToPlay = true;
}
}
} catch (SQLException ex) {
okToPlay = false;
} finally {
if (resultSet != null) {
try {
resultSet.close();
} catch (SQLException logOrIgnore) {
}
}
if (statement != null) {
try {
statement.close();
} catch (SQLException logOrIgnore) {
}
}
if (dbConnection != null) {
try {
dbConnection.close();
} catch (SQLException logOrIgnore) {
}
}
}
if (okToPlay) {
//clientID = setClientID();
channels.add(ctx.channel()); // register the channel so channelInactive/closePeer can find it
sendCommand(ctx, "HELLO", "WORLD");
} else {
sendErrorAndClose(ctx, "CLOSED");
}
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
channels.remove(ctx.channel());
}
@Override
public void messageReceived(ChannelHandlerContext ctx, String request) throws Exception {
// Generate and write a response.
String[] segments_whitespace;
String command, command_args;
if (request.length() > 0) {
segments_whitespace = request.split("\\s+");
if (segments_whitespace.length > 1) {
command = segments_whitespace[0];
command_args = segments_whitespace[1];
if (command.length() > 0 && command_args.length() > 0) {
switch (command) {
case "HOWDY": processHowdy(ctx, command_args); break;
default: break;
}
}
}
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
TraceUtils.severe("Unexpected exception from downstream - " + cause.toString());
ctx.close();
}
/* */
/* STATES - / CLIENT SETUP */
/* */
private void processHowdy(ChannelHandlerContext ctx, String howdyTo) {
Connection dbConnection = null;
Statement statement = null;
ResultSet resultSet = null;
String replyBack = null;
try {
dbConnection = DBController.INSTANCE.getConnection();
statement = dbConnection.createStatement();
resultSet = statement.executeQuery("SELECT `to` FROM `ServerTable` WHERE `To`='" + howdyTo + "'");
if (resultSet.first()) {
replyBack = "you!";
}
} catch (SQLException ex) {
} finally {
if (resultSet != null) {
try {
resultSet.close();
} catch (SQLException logOrIgnore) {
}
}
if (statement != null) {
try {
statement.close();
} catch (SQLException logOrIgnore) {
}
}
if (dbConnection != null) {
try {
dbConnection.close();
} catch (SQLException logOrIgnore) {
}
}
}
if (replyBack != null) {
sendCommand(ctx, "HOWDY", replyBack);
} else {
sendErrorAndClose(ctx, "ERROR");
}
}
private boolean closePeer(ChannelHandlerContext ctx, long peerClientID) {
boolean success = false;
ChannelFuture future;
for (Channel c : channels) {
if (c != ctx.channel()) {
if (c.pipeline().get(EdgeHandler.class).receiveClose(c, peerClientID)) {
success = true;
break;
}
}
}
return (success);
}
public boolean receiveClose(Channel thisChannel, long remoteClientID) {
ChannelFuture future;
boolean didclose = false;
long thisClientID = (clientID == null ? 0 : clientID);
if (remoteClientID == thisClientID) {
future = thisChannel.write("CLOSED BY PEER" + '\n');
future.addListener(ChannelFutureListener.CLOSE);
didclose = true;
}
return (didclose);
}
private ChannelFuture sendCommand(ChannelHandlerContext ctx, String cmd, String outgoingCommandArgs) {
return (ctx.write(cmd + " " + outgoingCommandArgs + '\n'));
}
private ChannelFuture sendErrorAndClose(ChannelHandlerContext ctx, String error_args) {
ChannelFuture future = sendCommand(ctx, "ERROR", error_args);
future.addListener(ChannelFutureListener.CLOSE);
return (future);
}
}
When a network message arrives at the server, it is decoded and triggers a messageReceived event.
If you look at your pipeline, the last thing added to it is the executor group. Because of that, an executor will receive what has been decoded and will fire the messageReceived event.
Executors are the processors of events; the server dispatches events through them, so how executors are used is an important subject. If there is only one executor, all clients share that same executor and events queue up waiting for it.
When there are many executors, event processing time decreases, because events do not have to wait for a free executor.
In your code
new DefaultEventExecutorGroup(10)
means this ServerBootstrap will use only 10 executor threads for its entire lifetime.
While initializing new channels, the same executor group is used:
pipeline.addLast(this.executorGroup, "handler", new EdgeHandler());
So each new client channel will use the same executor group (10 executor threads).
That is efficient and sufficient if 10 threads are able to process incoming events in time. But if messages are being decoded/encoded yet not handled quickly as events, that is a sign the number of executors needs to be increased.
We can increase the number of executors from 10 to 100 like this:
new DefaultEventExecutorGroup(100)
That will drain the event queue faster, provided there is enough CPU power.
What should not be done is creating a new executor group for each new channel:
pipeline.addLast(new DefaultEventExecutorGroup(10), "handler", new EdgeHandler());
The above line creates a new executor group for each new channel, which will slow things down greatly; for example, with 3000 clients there will be 3000 executor groups (threads). That removes the main advantage of NIO: the ability to serve many connections with a small number of threads.
Instead of creating one executor per channel, we could create 3000 executors at startup; at least then they will not be created and destroyed every time a client connects or disconnects.
.childHandler(new EdgeInitializer(new DefaultEventExecutorGroup(3000)));
The above line is more acceptable than creating one executor per client, because all clients are attached to the same executor group, and when a client disconnects the executors remain even though the client's data is removed.
As for database requests: some database queries can take a long time to complete, so if there are 10 executors and 10 jobs already being processed, the 11th job has to wait until one of the others completes. This is a bottleneck if the server receives more than 10 very time-consuming database jobs at the same time. Increasing the executor count relieves the bottleneck to some degree.
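Putting these recommendations together, here is a minimal sketch (assuming the final Netty 4 package layout and reusing the EdgeInitializer from the question; the group size of 100 is purely illustrative) that creates the executor group exactly once and hands the same instance to every child channel, so slow database work never runs on the NIO event loops:
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;
public class SharedExecutorServer {
    public static void main(String[] args) throws Exception {
        // One shared group for all database-heavy handlers, created once at startup.
        EventExecutorGroup dbExecutors = new DefaultEventExecutorGroup(100);
        ServerBootstrap b = new ServerBootstrap();
        b.group(new NioEventLoopGroup(), new NioEventLoopGroup())
         .channel(NioServerSocketChannel.class)
         .localAddress(9100)
         .childHandler(new EdgeInitializer(dbExecutors)); // same group for every client channel
        // Bind and block until the server socket is closed.
        b.bind().sync().channel().closeFuture().sync();
    }
}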
Related
I am trying to find a bug in some RabbitMQ client code that was developed six or seven years ago. The code was modified to allow for delayed messages. It seems that connections are created to the RabbitMQ server and then never destroyed. Each exists in a separate thread, so I end up with thousands of threads. I am sure the problem is very obvious/simple, but I am having trouble seeing it. I have been looking at the exchangeDeclare method (the commented-out version is from the original code, which seemed to work), but I have been unable to find the default values for autoDelete and durable which are being set in the modified code. The method below is within a Spring service class. Any help, advice, guidance, and pointing out of huge obvious errors is appreciated!
private void send(String routingKey, String message) throws Exception {
String exchange = applicationConfiguration.getAMQPExchange();
Map<String, Object> args = new HashMap<String, Object>();
args.put("x-delayed-type", "fanout");
Map<String, Object> headers = new HashMap<String, Object>();
headers.put("x-delay", 10000); //delay in miliseconds i.e 10secs
AMQP.BasicProperties.Builder props = new AMQP.BasicProperties.Builder().headers(headers);
Connection connection = null;
Channel channel = null;
try {
connection = myConnection.getConnection();
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get connection.");
e.printStackTrace();
return;
}
try {
if (connection != null) {
log.debug(" [CORE: AMQP] Sending message with key {} : {}",routingKey, message);
channel = connection.createChannel();
// channel.exchangeDeclare(exchange, exchangeType);
channel.exchangeDeclare(exchange, "x-delayed-message", true, false, args);
// channel.basicPublish(exchange, routingKey, null, message.getBytes());
channel.basicPublish(exchange, routingKey, props.build(), message.getBytes());
}
else {
log.error("Total AMQP melt down. This should never happen!");
}
}
catch(Exception e) {
log.error("AMQP send method Exception. Unable to get send.");
e.printStackTrace();
}
finally {
channel.close();
}
}
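As a side note on the cleanup above (a sketch only, not the original code): if createChannel() throws, channel is still null when the finally block runs, and channel.close() then throws a NullPointerException of its own. A null-safe helper inside the same service class could look like this:
// Hypothetical helper: close the channel only if it was actually created,
// and never let the cleanup hide the original failure.
private void closeQuietly(com.rabbitmq.client.Channel channel) {
    if (channel == null) {
        return;
    }
    try {
        channel.close();
    } catch (Exception e) {
        log.warn("Unable to close AMQP channel.", e);
    }
}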
This is the connection class
@Service
public class PersistentConnection {
private static final Logger log = LoggerFactory.getLogger(PersistentConnection.class);
private static Connection myConnection = null;
private Boolean blocked = false;
@Autowired ApplicationConfiguration applicationConfiguration;
@PreDestroy
private void destroy() {
try {
myConnection.close();
} catch (IOException e) {
log.error("Unable to close AMQP Connection.");
e.printStackTrace();
}
}
public Connection getConnection( ) {
if (myConnection == null) {
start();
}
return myConnection;
}
private void start() {
log.debug("Building AMQP Connection");
ConnectionFactory factory = new ConnectionFactory();
String ipAddress = applicationConfiguration.getAMQPHost();
String user = applicationConfiguration.getAMQPUser();
String password = applicationConfiguration.getAMQPPassword();
String virtualHost = applicationConfiguration.getAMQPVirtualHost();
String port = applicationConfiguration.getAMQPPort();
try {
factory.setUsername(user);
factory.setPassword(password);
factory.setVirtualHost(virtualHost);
factory.setPort(Integer.parseInt(port));
factory.setHost(ipAddress);
myConnection = factory.newConnection();
}
catch (Exception e) {
log.error("Unable to initialise AMQP Connection.");
e.printStackTrace();
}
myConnection.addBlockedListener(new BlockedListener() {
public void handleBlocked(String reason) throws IOException {
// Connection is now blocked
log.warn("Message Server has blocked. It may be resource limitted.");
blocked = true;
}
public void handleUnblocked() throws IOException {
// Connection is now unblocked
log.warn("Message server is unblocked.");
blocked = false;
}
});
}
public Boolean isBlocked() {
return blocked;
}
}
I have 1.5 million records in my MySQL table. I'm trying to read all the records in a batch process, i.e., planning to read 1000 records per batch and print those records to the console.
For this I'm planning to use multithreading in Java. How can I implement this?
In MySQL you get all records at once or you get them one by one in a streaming fashion (see this answer). Alternatively, you can use the limit keyword for chunking (see this answer).
Whether you use streaming results or chunking, you can use multi-threading to process (or print) data while you read data. This is typically done using a producer-consumer pattern where, in this case, the producer retrieves data from the database, puts it on a queue and the consumer takes the data from the queue and processes it (e.g. print to the console).
There is a bit of administration overhead though: both producer and consumer can freeze or trip over an error and both need to be aware of this so that they do not hang forever (potentially freezing your application). This is where "reasonable" timeouts come in ("reasonable" depends entirely on what is appropriate in your situation).
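For the chunking alternative mentioned above, here is a minimal sketch that reuses the fetchrecords table and connection settings from the full example below. It pages on the auto-increment id column (keyset pagination) rather than OFFSET, so later chunks stay cheap:
import java.sql.*;
import java.util.Properties;
public class FetchInChunks {
    public static void main(String[] args) throws SQLException {
        Properties dbProps = new Properties();
        dbProps.setProperty("user", "test");
        dbProps.setProperty("password", "test");
        int batchSize = 1_000;
        long lastId = 0L;
        boolean more = true;
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", dbProps);
             PreparedStatement ps = conn.prepareStatement(
                     "select id, created from fetchrecords where id > ? order by id limit " + batchSize)) {
            while (more) {
                ps.setLong(1, lastId);
                more = false; // stays false if the chunk comes back empty
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        lastId = rs.getLong("id");
                        System.out.println(lastId + " " + rs.getTimestamp("created"));
                        more = true; // got at least one row, so try the next chunk
                    }
                }
            }
        }
    }
}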
I have tried to put this in a minimal running example, but it is still a lot of code (see below). There are two commented lines that can be used to test the timeout-case. There is also a refreshTestData variable that can be used to re-use inserted records (inserting records can take a long time).
To keep it clean, a lot of keywords like private/public are omitted (i.e. these need to be added in non-demo code).
import java.sql.*;
import java.util.*;
import java.util.concurrent.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class FetchRows {
private static final Logger log = LoggerFactory.getLogger(FetchRows.class);
public static void main(String[] args) {
try {
new FetchRows().print();
} catch (Exception e) {
e.printStackTrace();
}
}
void print() throws Exception {
Class.forName("com.mysql.jdbc.Driver").newInstance();
Properties dbProps = new Properties();
dbProps.setProperty("user", "test");
dbProps.setProperty("password", "test");
try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/test", dbProps)) {
try (Statement st = conn.createStatement()) {
prepareTestData(st);
}
// https://stackoverflow.com/a/2448019/3080094
try (Statement st = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
java.sql.ResultSet.CONCUR_READ_ONLY)) {
st.setFetchSize(Integer.MIN_VALUE);
fetchAndPrintTestData(st);
}
}
}
boolean refreshTestData = true;
int maxRecords = 5_555;
void prepareTestData(Statement st) throws SQLException {
int recordCount = 0;
if (refreshTestData) {
st.execute("drop table if exists fetchrecords");
st.execute("create table fetchrecords (id mediumint not null auto_increment primary key, created timestamp default current_timestamp)");
for (int i = 0; i < maxRecords; i++) {
st.addBatch("insert into fetchrecords () values ()");
if (i % 500 == 0) {
st.executeBatch();
log.debug("{} records available.", i);
}
}
st.executeBatch();
recordCount = maxRecords;
} else {
try (ResultSet rs = st.executeQuery("select count(*) from fetchrecords")) {
rs.next();
recordCount = rs.getInt(1);
}
}
log.info("{} records available for testing.", recordCount);
}
int batchSize = 1_000;
int maxBatchesInMem = 3;
int printFinishTimeoutS = 5;
void fetchAndPrintTestData(Statement st) throws SQLException, InterruptedException {
final BlockingQueue<List<FetchRecordBean>> printQueue = new LinkedBlockingQueue<List<FetchRecordBean>>(maxBatchesInMem);
final PrintToConsole printTask = new PrintToConsole(printQueue);
new Thread(printTask).start();
try (ResultSet rs = st.executeQuery("select * from fetchrecords")) {
List<FetchRecordBean> l = new LinkedList<>();
while (rs.next()) {
FetchRecordBean bean = new FetchRecordBean();
bean.setId(rs.getInt("id"));
bean.setCreated(new java.util.Date(rs.getTimestamp("created").getTime()));
l.add(bean);
if (l.size() % batchSize == 0) {
/*
* The printTask can stop itself when this producer is too slow to put records on the print-queue.
* Therefor, also check printTask.isStopping() to break the while-loop.
*/
if (printTask.isStopping()) {
throw new TimeoutException("Print task has stopped.");
}
enqueue(printQueue, l);
l = new LinkedList<>();
}
}
if (l.size() > 0) {
enqueue(printQueue, l);
}
} catch (TimeoutException | InterruptedException e) {
log.error("Unable to finish printing records to console: {}", e.getMessage());
printTask.stop();
} finally {
log.info("Reading records finished.");
if (!printTask.isStopping()) {
try {
enqueue(printQueue, Collections.<FetchRecordBean> emptyList());
} catch (Exception e) {
log.error("Unable to signal last record to print.", e);
printTask.stop();
}
}
if (!printTask.await(printFinishTimeoutS, TimeUnit.SECONDS)) {
log.error("Print to console task did not finish.");
}
}
}
int enqueueTimeoutS = 5;
// To test a slow printer, see also Thread.sleep statement in PrintToConsole.print.
// int enqueueTimeoutS = 1;
void enqueue(BlockingQueue<List<FetchRecordBean>> printQueue, List<FetchRecordBean> l) throws InterruptedException, TimeoutException {
log.debug("Adding {} records to print-queue.", l.size());
if (!printQueue.offer(l, enqueueTimeoutS, TimeUnit.SECONDS)) {
throw new TimeoutException("Unable to put print data on queue within " + enqueueTimeoutS + " seconds.");
}
}
int dequeueTimeoutS = 5;
class PrintToConsole implements Runnable {
private final BlockingQueue<List<FetchRecordBean>> q;
private final CountDownLatch finishedLock = new CountDownLatch(1);
private volatile boolean stop;
public PrintToConsole(BlockingQueue<List<FetchRecordBean>> q) {
this.q = q;
}
@Override
public void run() {
try {
while (!stop) {
List<FetchRecordBean> l = q.poll(dequeueTimeoutS, TimeUnit.SECONDS);
if (l == null) {
log.error("Unable to get print data from queue within {} seconds.", dequeueTimeoutS);
break;
}
if (l.isEmpty()) {
break;
}
print(l);
}
if (stop) {
log.error("Printing to console was stopped.");
}
} catch (Exception e) {
log.error("Unable to print records to console.", e);
} finally {
if (!stop) {
stop = true;
log.info("Printing to console finished.");
}
finishedLock.countDown();
}
}
void print(List<FetchRecordBean> l) {
log.info("Got list with {} records from print-queue.", l.size());
// To test a slow printer, see also enqueueTimeoutS.
// try { Thread.sleep(1500L); } catch (Exception ignored) {}
}
public void stop() {
stop = true;
}
public boolean isStopping() {
return stop;
}
public void await() throws InterruptedException {
finishedLock.await();
}
public boolean await(long timeout, TimeUnit tunit) throws InterruptedException {
return finishedLock.await(timeout, tunit);
}
}
class FetchRecordBean {
private int id;
private java.util.Date created;
public int getId() {
return id;
}
public void setId(int id) {
this.id = id;
}
public java.util.Date getCreated() {
return created;
}
public void setCreated(java.util.Date created) {
this.created = created;
}
}
}
Dependencies:
mysql:mysql-connector-java:5.1.38
org.slf4j:slf4j-api:1.7.20 (and to get logging shown in console: ch.qos.logback:logback-classic:1.1.7 with ch.qos.logback:logback-core:1.1.7)
I have browsed and searched... and nothing comes to mind!
I am running a chat type service between a server and an Android app. The client connects, the server registers the socket, and every 10 minutes the server sends to all connected devices a message.
My problem is that I randomly get a connection reset exception. I cannot trace back to when the problem occurs.
My server side code is:
final public class ChatRoomService {
private final static String AUTHENTICATE = "AUTHENTICATE";
private final static String BROADCAST = "BROADCAST";
private final static String DISCONNECT = "DISCONNECT";
private final static String OK = "OK";
private final static String NOK = "NK";
private final static Logger LOGGER = Logger.getLogger(ChatRoomService.class);
private ServerSocket listener = null;
@Inject
private EntityManager entityManager;
public EntityManager getEntityManager() {
return entityManager;
}
@Inject
private PlayerManager playerManager;
PlayerManager getPlayerManager() {
return playerManager;
}
private static HashSet<ChatRoomConnection> connections = new HashSet<ChatRoomConnection>();
public void addConnection(ChatRoomConnection c) {
synchronized(connections) {
connections.add(c);
}
}
public void removeConnection(ChatRoomConnection c) {
synchronized(connections) {
connections.remove(c);
}
}
public void startListeningToChatRoomConnection() throws IOException {
listener = new ServerSocket(9010);
try {
LOGGER.infof("startListening - Start listening on port %s", 9010);
while (true) {
ChatRoomConnection connection = new ChatRoomConnection(listener.accept(), this);
addConnection(connection);
connection.start();
}
} catch (IOException e) {
if (!listener.isClosed())
LOGGER.errorf("listenToChatRoomConnection - Connection lost during connection: %s", e.getMessage());
} finally {
if (listener != null && !listener.isClosed()) {
LOGGER.infof("listenToChatRoomConnection - Stop listening");
listener.close();
}
}
}
public void stopListeningToChatRoomConnection() throws IOException {
if (!listener.isClosed()) {
LOGGER.infof("stopListeningToChatRoomConnection - Stop listening");
listener.close();
listener = null;
// Closing all sockets
for (ChatRoomConnection connection : connections) {
connection.close();
}
// Clear up the connections list
synchronized (connections) {
connections.clear();
}
}
}
public void broadcastToChatRoomClients(Object message) {
synchronized (connections) {
// Log
LOGGER.debugf("Broadcast ChatRoom: %s - %s",
connections.size(),
message.toString());
for (ChatRoomConnection connection : connections) {
LOGGER.debugf("Broadcast ChatRoom to %s", connection.userName);
connection.publish(message);
}
}
}
private ChatRoomService() {
}
private static class ChatRoomConnection extends Thread {
private Socket socket;
private BufferedReader readerFromClient;
private PrintWriter writerToClient;
public String userName;
private ChatRoomService chatRoomService;
ChatRoomConnection(Socket socket, ChatRoomService chatRoomService) {
super("ChatRoomConnection");
this.socket = socket;
this.chatRoomService = chatRoomService;
}
public void run() {
try {
readerFromClient = new BufferedReader(new InputStreamReader(socket.getInputStream()));
writerToClient = new PrintWriter(socket.getOutputStream(), true);
// 1- Authenticate the Device/ Player
writerToClient.println(ChatRoomService.AUTHENTICATE);
writerToClient.flush();
Gson gson = new Gson();
Request request = gson.fromJson(readerFromClient.readLine(), Request.class);
if (chatRoomService.getPlayerManager().isPlayerSignedIn(request.getPlayerId(), request.getSignedInOn())) {
Player player = (Player) chatRoomService.getEntityManager().find(Player.class, request.getPlayerId());
userName = player.getUsername();
LOGGER.infof("listenToChatRoomConnection - Connection established with %s", userName);
writerToClient.println(ChatRoomService.OK);
writerToClient.flush();
while (true)
if ((readerFromClient.readLine() == null) ||
(readerFromClient.readLine().startsWith(ChatRoomService.DISCONNECT)))
break;
} else {
writerToClient.println(ChatRoomService.NOK);
writerToClient.flush();
}
} catch (Exception e) {
LOGGER.errorf("listenToChatRoomConnection - Error with %s: %s", userName, e.getMessage());
e.printStackTrace();
} finally {
try {
if (!socket.isClosed()) {
LOGGER.infof("listenToChatRoomConnection - Connection closed by the client for %s", userName);
socket.close();
}
} catch (IOException e) {
LOGGER.errorf("listenToChatRoomConnection - Can not close socket: %s", e.getMessage());
e.printStackTrace();
} finally {
chatRoomService.removeConnection(this);
}
}
}
public void publish(Object message) {
if (!socket.isClosed()) {
writerToClient.println(ChatRoomService.BROADCAST);
Gson gson = new Gson();
writerToClient.println(gson.toJson(message));
}
}
public void close() {
writerToClient.println(ChatRoomService.DISCONNECT);
try {
LOGGER.infof("listenToChatRoomConnection - Connection closed by the server for %s", userName);
socket.close();
} catch (IOException e) {
LOGGER.errorf("Error when trying to close a socket: %s", e.getMessage());
e.printStackTrace();
}
}
};
}
The device code is:
public class ServerBroadcastManager {
private static final String TAG = ServerBroadcastManager.class.getName();
// Type of messages from the server
static public String AUTHENTICATE = "AUTHENTICATE";
static public String DISCONNECT = "DISCONNECT";
static public String BROADCAST = "BROADCAST";
static public String OK = "OK";
static public String NOK = "NK";
private int networkPort;
private ServerBroadcastListener broadcastListener;
private Socket networkSocket;
BufferedReader in;
PrintWriter out;
public ServerBroadcastManager(Context context, ServerBroadcastListener listener, int port) {
this.networkPort = port;
this.broadcastListener = listener;
}
public void startListening(final Context context) {
Runnable run = new Runnable() {
@Override
public void run() {
// Make connection and initialize streams
try {
networkSocket = new Socket();
networkSocket.connect(new InetSocketAddress(mydomain, networkPort), 30*1000);
in = new BufferedReader(new InputStreamReader(
networkSocket.getInputStream()));
out = new PrintWriter(networkSocket.getOutputStream(), true);
// Process all messages from server, according to the protocol.
while (true) {
String line = in.readLine();
if (line.startsWith(ServerBroadcastManager.AUTHENTICATE)) {
Request request = formatAuthenticateRequest(context);
Gson requestGson = new Gson();
out.println(requestGson.toJson(request));
out.flush();
// Waiting for confirmation back
line = in.readLine();
if (line.startsWith(ServerBroadcastManager.OK)) {
} else if (line.startsWith(ServerBroadcastManager.NOK)) {
}
} else if (line.startsWith(ServerBroadcastManager.BROADCAST)) {
Gson gson = new Gson();
@SuppressWarnings("unchecked")
LinkedHashMap<String,String> broadcast = gson.fromJson(in.readLine(), LinkedHashMap.class);
broadcastListener.processBroadcast(broadcast);
} else if (line.startsWith(ServerBroadcastManager.DISCONNECT)) {
break;
}
}
} catch (UnknownHostException e) {
Log.i(TAG, "Can not resolve hostname");
} catch (SocketTimeoutException e) {
Log.i(TAG, "Connection Timed-out");
broadcastListener.connectionFailed();
} catch (IOException e) {
Log.i(TAG, "Connection raised on exception: " + e.getMessage());
if (!networkSocket.isClosed()) {
broadcastListener.connectionLost();
}
}
}
};
Thread thread = new Thread(run);
thread.start();
}
public void stopListening() {
try {
if (networkSocket != null)
networkSocket.close();
} catch (IOException e) {
Log.i(TAG, "Exception in stopListening: " + e.getMessage());
}
}
private Request formatAuthenticateRequest(Context context) {
Request request = new Request();
SharedPreferences settings = context.getApplicationContext().getSharedPreferences(Constants.USER_DETAILS, 0);
request.setPlayerId(BigInteger.valueOf((settings.getLong(Constants.USER_DETAILS_PLAYERID, 0))));
request.setSignedInOn(settings.getLong(Constants.USER_DETAILS_SIGNEDINON, 0));
return request;
}
}
My last resort might be to move my server to another location and see whether this could be related to my broadband router. I have noticed that some of my HTTP calls do not reach the server either, though port forwarding is properly in place.
Thanks.
David.
I can't find where in your source code the server sends a message every 10 minutes to all connected clients, but I have experienced connection reset exceptions while using long-lasting WebSocket connections. I solved that problem by making sure some data (a ping-pong message) was sent from the client every minute.
At the time I traced the problem to my home router, which simply closed all idle connections after 5 minutes, but firewalls can exhibit the same kind of behavior. Neither server nor client will notice a closed connection until data is transmitted. This is especially nasty for the client if the client is expecting data from the server - that data will simply never arrive. Therefore, make it the responsibility of the client to check whether a connection is still valid (and reconnect when needed).
Since the introduction of the ping-pong message from the client every minute, I have not seen connection reset exceptions.
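For illustration only (this is not taken from the code above; the PING token and the one-minute interval are assumptions, and the server would have to ignore or tolerate the extra line), a client-side keep-alive could be scheduled like this:
import java.io.PrintWriter;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
// Writes a PING line every minute so that idle routers/firewalls between the
// client and the server do not silently drop the TCP connection.
public class KeepAlive {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    public void start(final PrintWriter writerToServer) {
        scheduler.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                writerToServer.println("PING");
                writerToServer.flush();
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
    public void stop() {
        scheduler.shutdownNow();
    }
}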
I am writing code to send a UDP multicast over WiFi from my mobile device. There is server code running on other devices in the network. The servers will listen for the multicast and respond with their IP address and the type of the system (Computer, Mobile Device, Raspberry Pi, Flyport, etc.).
On the mobile device which has sent the UDP Multicast, I need to get the list of the devices responding to the UDP Multicast.
For this I have created a class which will work as the structure of the device details.
DeviceDetails.class
public class DeviceDetails
{
String DeviceType;
String IPAddr;
public DeviceDetails(String type, String IP)
{
this.DeviceType=type;
this.IPAddr=IP;
}
}
I am sending the UDP multicast packet to the group address 225.4.5.6 and port number 5432.
I have made a class which starts a thread to send the UDP packets. On the other side, I have made a receiver thread which implements the Callable interface to return the list of responding devices.
Here is the code:
MulticastReceiver.java
public class MulticastReceiver implements Callable<DeviceDetails>
{
DatagramSocket socket = null;
DatagramPacket inPacket = null;
boolean check = true;
public MulticastReceiver()
{
try
{
socket = new DatagramSocket(5500);
}
catch(Exception ioe)
{
System.out.println(ioe);
}
}
@Override
public DeviceDetails call() throws Exception
{
// TODO Auto-generated method stub
try
{
byte[] inBuf = new byte[WifiConstants.DGRAM_LEN];
//System.out.println("Listening");
inPacket = new DatagramPacket(inBuf, inBuf.length);
if(check)
{
socket.receive(inPacket);
}
String msg = new String(inBuf, 0, inPacket.getLength());
Log.v("Received: ","From :" + inPacket.getAddress() + " Msg : " + msg);
DeviceDetails device = getDeviceFromString(msg);
Thread.sleep(100);
return device;
}
catch(Exception e)
{
Log.v("Receiving Error: ",e.toString());
return null;
}
}
public DeviceDetails getDeviceFromString(String str)
{
String type;
String IP;
type=str.substring(0,str.indexOf('`'));
str = str.substring(str.indexOf('`')+1);
IP=str;
DeviceDetails device = new DeviceDetails(type,IP);
return device;
}
}
The following code is of the activity which calls the Receiver Thread:
public class DeviceManagerWindow extends Activity
{
public void searchDevice(View view)
{
sendMulticast = new Thread(new MultiCastThread());
sendMulticast.start();
ExecutorService executorService = Executors.newFixedThreadPool(1);
List<Future<DeviceDetails>> deviceList = new ArrayList<Future<DeviceDetails>>();
Callable<DeviceDetails> device = new MulticastReceiver();
Future<DeviceDetails> submit = executorService.submit(device);
deviceList.add(submit);
DeviceDetails[] devices = new DeviceDetails[deviceList.size()];
int i=0;
for(Future<DeviceDetails> future :deviceList)
{
try
{
devices[i] = future.get();
}
catch(Exception e)
{
Log.v("future Exception: ",e.toString());
}
}
}
}
Now the standard way of receiving packets is to call the receive method in an infinite loop. But I want to listen for incoming responses only for the first 30 seconds and then stop looking.
This is similar to Bluetooth discovery, which stops after 1 minute of searching.
The problem is, I could use a counter, but Thread.stop() is now deprecated. Besides that, if I put the receive method in an infinite loop it will never return the value.
What should I do? I want to search for, say, 30 seconds, then stop the search and return the list of responding devices.
Instead of calling stop(), you should call interrupt(). This causes an InterruptedException to be thrown at interruptible spots in your code, e.g. when calling Thread.sleep() or when blocked by an I/O operation. Unfortunately, DatagramSocket does not implement InterruptibleChannel, so the call to receive cannot be interrupted.
So either you use DatagramChannel instead of DatagramSocket, so that receive() will throw a ClosedByInterruptException if Thread.interrupt() is called, or you set a timeout by calling DatagramSocket.setSoTimeout(), causing receive() to throw a SocketTimeoutException after the specified interval - in that case, you won't need to interrupt the thread at all.
Simple approach
The easiest way would be to simply set a socket timeout:
public MulticastReceiver() {
try {
socket = new DatagramSocket(5500);
socket.setSoTimeout(30 * 1000);
} catch (Exception ioe) {
throw new RuntimeException(ioe);
}
}
This will cause socket.receive(inPacket); to throw a SocketTimeoutException after 30 seconds. As you already catch Exception, that's all you need to do.
Making MulticastReceiver interruptible
This is a more radical refactoring.
public class MulticastReceiver implements Callable<DeviceDetails> {
private DatagramChannel channel;
public MulticastReceiver() {
try {
channel = DatagramChannel.open();
channel.socket().bind(new InetSocketAddress(5500));
} catch (IOException ioe) {
throw new RuntimeException(ioe);
}
}
public DeviceDetails call() throws Exception {
ByteBuffer inBuf = ByteBuffer.allocate(WifiConstants.DGRAM_LEN);
SocketAddress socketAddress = channel.receive(inBuf);
String msg = new String(inBuf.array(), 0, inBuf.capacity());
Log.v("Received: ","From :" + socketAddress + " Msg : " + msg);
return getDeviceFromString(msg);
}
}
The DeviceManagerWindow looks a bit different; I'm not sure what you intend to do there, as you juggle around with lists and arrays, but you only have one future... So I assume you want to listen for 30 secs and fetch as many devices as possible.
ExecutorService executorService = Executors.newFixedThreadPool(1);
MulticastReceiver receiver = new MulticastReceiver();
List<DeviceDetails> devices = new ArrayList<DeviceDetails>();
long runUntil = System.currentTimeMillis() + 30 * 1000;
while (System.currentTimeMillis() < runUntil) {
Future<DeviceDetails> future = executorService.submit(receiver);
try {
// wait no longer than the original 30s for a result
long timeout = runUntil - System.currentTimeMillis();
devices.add(future.get(timeout, TimeUnit.MILLISECONDS));
} catch (Exception e) {
Log.v("future Exception: ",e.toString());
}
}
// shutdown the executor service, interrupting the executed tasks
executorService.shutdownNow();
That's about it. No matter which solution you choose, don't forget to close the socket/channel.
I have solved it. You can run your code in the following fashion:
DeviceManagerWindow.java
public class DeviceManagerWindow extends Activity
{
public static Context con;
public static int rowCounter=0;
Thread sendMulticast;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_device_manager_window);
WifiManager wifi = (WifiManager)getSystemService( Context.WIFI_SERVICE );
if(wifi != null)
{
WifiManager.MulticastLock lock = wifi.createMulticastLock("WifiDevices");
lock.acquire();
}
TableLayout tb = (TableLayout) findViewById(R.id.DeviceList);
tb.removeAllViews();
con = getApplicationContext();
}
public void searchDevice(View view) throws IOException, InterruptedException
{
try
{
sendMulticast = new Thread(new MultiCastThread());
sendMulticast.start();
sendMulticast.join();
}
catch(Exception e)
{
Log.v("Exception in Sending:",e.toString());
}
// Here is the time-bound search; you can quit your thread using Thread.join():
//Device Will only search for 1 minute
for(long stop=System.nanoTime()+TimeUnit.SECONDS.toNanos(1); stop>System.nanoTime();)
{
Thread recv = new Thread(new MulticastReceiver());
recv.start();
recv.join();
}
}
public static synchronized void addDevice(DeviceDetails device) throws InterruptedException
{
....
Prepare your desired list here.
....
}
}
Don't add any loop on the listening side; simply use socket.receive().
MulticastReceiver.java
public class MulticastReceiver implements Runnable
{
DatagramSocket socket = null;
DatagramPacket inPacket = null;
public MulticastReceiver()
{
try
{
socket = new DatagramSocket(WifiConstants.PORT_NO_RECV);
}
catch(Exception ioe)
{
System.out.println(ioe);
}
}
@Override
public void run()
{
byte[] inBuf = new byte[WifiConstants.DGRAM_LEN];
//System.out.println("Listening");
inPacket = new DatagramPacket(inBuf, inBuf.length);
try
{
socket.setSoTimeout(3000); // listen for at most 3 seconds before giving up
socket.receive(inPacket);
String msg = new String(inBuf, 0, inPacket.getLength());
Log.v("Received: ","From :" + inPacket.getAddress() + " Msg : " + msg);
DeviceDetails device = getDeviceFromString(msg);
DeviceManagerWindow.addDevice(device);
// setSoTimeout(3000) limits listening on this socket to 3 seconds; if no packet
// arrives, execution simply moves on. DeviceManagerWindow.addDevice(device) calls
// the addDevice method in the calling class, where you can prepare your list.
}
catch(Exception e)
{
Log.v("Receiving Error: ",e.toString());
}
finally
{
socket.close();
}
}
public DeviceDetails getDeviceFromString(String str)
{
String type;
String IP;
type=str.substring(0,str.indexOf('`'));
str = str.substring(str.indexOf('`')+1);
IP=str;
DeviceDetails device = new DeviceDetails(type,IP);
return device;
}
}
Hope that works - well, it will work.
All the best. Let me know if there is any problem.
I have a single thread trying to connect to a database using JdbcTemplate as follows:
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
try{
jdbcTemplate.execute(new CallableStatementCreator() {
@Override
public CallableStatement createCallableStatement(Connection con)
throws SQLException {
return con.prepareCall(query);
}
}, new CallableStatementCallback() {
@Override
public Object doInCallableStatement(CallableStatement cs)
throws SQLException {
cs.setString(1, subscriberID);
cs.execute();
return null;
}
});
} catch (DataAccessException dae) {
throw new CougarFrameworkException(
"Problem removing subscriber from events queue: "
+ subscriberID, dae);
}
I want to make sure that if the above code throws DataAccessException or SQLException, the thread waits a few seconds and tries to reconnect, say 5 more times, and then gives up. How can I achieve this? Also, if the database goes down during execution and comes up again, how can I ensure that my program recovers from this and continues running instead of throwing an exception and exiting?
Thanks in advance.
Try this. My approach is: run a loop until the statement executes successfully. If there is a failure, tolerate up to 5 failures, waiting 2 seconds before each retry.
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
boolean successfullyExecuted = false;
int failedCount = 0;
while (!successfullyExecuted){
try{
jdbcTemplate.execute(new CallableStatementCreator() {
@Override
public CallableStatement createCallableStatement(Connection con)
throws SQLException {
return con.prepareCall(query);
}
}, new CallableStatementCallback() {
@Override
public Object doInCallableStatement(CallableStatement cs)
throws SQLException {
cs.setString(1, subscriberID);
cs.execute();
return null;
}
});
successfullyExecuted = true;
} catch (DataAccessException dae) {
    if (failedCount < 5) {
        failedCount++;
        try {
            java.lang.Thread.sleep(2 * 1000L); // Wait for 2 seconds before retrying
        } catch (java.lang.Exception e) {
        }
    } else {
        throw new CougarFrameworkException(
                "Problem removing subscriber from events queue: "
                        + subscriberID, dae);
    }
} catch (java.sql.SQLException sqle) {
    if (failedCount < 5) {
        failedCount++;
        try {
            java.lang.Thread.sleep(2 * 1000L); // Wait for 2 seconds before retrying
        } catch (java.lang.Exception e) {
        }
    } else {
        throw new CougarFrameworkException(
                "Problem removing subscriber from events queue: "
                        + subscriberID, sqle);
    }
}
}
It might be worthwhile for you to look into Spring's Aspect support. What you're describing is retry with (constant) backoff, and chances are you'll eventually need it somewhere else, be it talking to a web service, an email server, or any other complicated system susceptible to transient failures.
For instance, this simple method invokes the underlying method up to maxAttempts times whenever an exception is thrown, unless it is a subclass of a Throwable listed in noRetryFor.
private Object doRetryWithExponentialBackoff(ProceedingJoinPoint pjp, int maxAttempts,
Class<? extends Throwable>[] noRetryFor) throws Throwable {
Throwable lastThrowable = null;
for (int attempts = 0; attempts < maxAttempts; attempts++) {
try {
pauseExponentially(attempts, lastThrowable);
return pjp.proceed();
} catch (Throwable t) {
lastThrowable = t;
for (Class<? extends Throwable> noRetryThrowable : noRetryFor) {
if (noRetryThrowable.isAssignableFrom(t.getClass())) {
throw t;
}
}
}
}
throw lastThrowable;
}
private void pauseExponentially(int attempts, Throwable lastThrowable) {
if (attempts == 0)
return;
long delay = (long) (Math.random() * (Math.pow(4, attempts) * 100L));
log.warn("Retriable error detected, will retry in " + delay + "ms, attempts thus far: "
+ attempts, lastThrowable);
try {
Thread.sleep(delay);
} catch (InterruptedException e) {
// Nothing we need to do here
}
}
This advice could be applied to any bean you wish using Spring's Aspect support. See http://static.springsource.org/spring/docs/2.5.x/reference/aop.html for more details.
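As a rough sketch of how such advice could be wired up with Spring AOP (the pointcut expression, package name and retry constants are assumptions, not from the original code), here is a variant that applies the constant 2-second backoff asked about in the question:
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.dao.DataAccessException;
import org.springframework.stereotype.Component;
// Hypothetical aspect: retries any method in an assumed repository package up to
// five times on DataAccessException, sleeping 2 seconds between attempts.
// Requires @EnableAspectJAutoProxy (or <aop:aspectj-autoproxy/>) to take effect.
@Aspect
@Component
public class DbRetryAspect {
    private static final int MAX_ATTEMPTS = 5;
    @Around("execution(* com.example.repository..*(..))") // pointcut is an assumption
    public Object retry(ProceedingJoinPoint pjp) throws Throwable {
        DataAccessException lastException = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                return pjp.proceed();
            } catch (DataAccessException e) {
                lastException = e;
                if (attempt < MAX_ATTEMPTS) {
                    Thread.sleep(2000L); // constant back-off between attempts
                }
            }
        }
        throw lastException;
    }
}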
Something like this:
private int retries;
/**
* Make this configurable.
*/
public void setRetries(final int retries) {
Assert.isTrue(retries > 0);
this.retries = retries;
}
public Object yourMethod() throws Exception {
    Exception lastException = null;
for (int i = 0; i < this.retries; i++) {
try {
return jdbcTemplate.execute ... (your code here);
} catch (final SQLException e) {
lastException = e;
} catch (final DataAccessException e) {
lastException = e;
}
}
throw lastException;
}
How about writing an aspect (DBRetryAspect) over it? It would be more transparent.