HBase RegionServer abort - java

When I start HBase on my cluster, the HMaster and HQuorumPeer processes start on the master node, while only the HQuorumPeer process starts on the slaves.
In the GUI console, in the tasks section, I can see the master (node0) in state RUNNING with the status "Waiting for region servers count to settle; currently checked in 0, slept for 250920 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms".
In the software attributes section I can find all my nodes in the ZooKeeper quorum, with the description "Addresses of all registered ZK servers".
So it seems that ZooKeeper is working, but the log files below suggest otherwise. (The Italian message "Impossibile trovare una configurazione di login" that appears in them means "Unable to find a login configuration".)
Log hbase-clusterhadoop-master:
2016-09-08 12:26:14,875 INFO [main-SendThread(node0:2181)] zookeeper.ClientCnxn: Opening socket connection to server node0/192.168.1.113:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Impossibile trovare una configurazione di login)
2016-09-08 12:26:14,882 WARN [main-SendThread(node0:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2016-09-08 12:26:14,994 WARN [main] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node3:2181,node2:2181,node1:2181,node0:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
........
2016-09-08 12:32:53,063 INFO [master:node0:60000] zookeeper.ZooKeeper: Initiating client connection, connectString=node3:2181,node2:2181,node1:2181,node0:2181 sessionTimeout=90000 watcher=replicationLogCleaner0x0, quorum=node3:2181,node2:2181,node1:2181,node0:2181, baseZNode=/hbase
2016-09-08 12:32:53,064 INFO [master:node0:60000-SendThread(node3:2181)] zookeeper.ClientCnxn: Opening socket connection to server node3/192.168.1.112:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Impossibile trovare una configurazione di login)
2016-09-08 12:32:53,065 INFO [master:node0:60000-SendThread(node3:2181)] zookeeper.ClientCnxn: Socket connection established to node3/192.168.1.112:2181, initiating session
2016-09-08 12:32:53,069 INFO [master:node0:60000-SendThread(node3:2181)] zookeeper.ClientCnxn: Session establishment complete on server node3/192.168.1.112:2181, sessionid = 0x357095a4b940001, negotiated timeout = 90000
2016-09-08 12:32:53,072 INFO [master:node0:60000] zookeeper.RecoverableZooKeeper: Node /hbase/replication/rs already exists and this is not a retry
2016-09-08 12:32:53,072 DEBUG [master:node0:60000] cleaner.CleanerChore: initialize cleaner=org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner
2016-09-08 12:32:53,075 DEBUG [master:node0:60000] cleaner.CleanerChore: initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotLogCleaner
2016-09-08 12:32:53,076 DEBUG [master:node0:60000] cleaner.CleanerChore: initialize cleaner=org.apache.hadoop.hbase.master.cleaner.HFileLinkCleaner
2016-09-08 12:32:53,077 DEBUG [master:node0:60000] cleaner.CleanerChore: initialize cleaner=org.apache.hadoop.hbase.master.snapshot.SnapshotHFileCleaner
2016-09-08 12:32:53,078 DEBUG [master:node0:60000] cleaner.CleanerChore: initialize cleaner=org.apache.hadoop.hbase.master.cleaner.TimeToLiveHFileCleaner
2016-09-08 12:32:53,078 INFO [master:node0:60000] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 0 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2016-09-08 12:32:54,607 INFO [master:node0:60000] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 1529 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
2016-09-08 12:32:56,137 INFO [master:node0:60000] master.ServerManager: Waiting for region servers count to settle; currently checked in 0, slept for 3059 ms, expecting minimum of 1, maximum of 2147483647, timeout of 4500 ms, interval of 1500 ms.
Log hbase-clusterhadoop-zookeeper-node0 (master):
2016-09-08 12:26:18,315 WARN [WorkerSender[myid=0]] quorum.QuorumCnxManager: Cannot open channel to 1 at election address node1/192.168.1.156:3888
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)
at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)
at java.lang.Thread.run(Thread.java:695)
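The "Connection refused" on node1's election address (port 3888 above) means nothing was listening there at that moment. As a quick diagnostic, which is my addition rather than part of the original question, a small reachability probe can be run from any node against the standard ZooKeeper ports (2181 client, 2888 quorum, 3888 election; node names taken from the logs above):

```java
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean reachable(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (Exception e) {
            // Covers ConnectException (refused), UnknownHostException, timeouts
            return false;
        }
    }

    public static void main(String[] args) {
        for (String host : new String[]{"node0", "node1", "node2", "node3"}) {
            for (int port : new int[]{2181, 2888, 3888}) {
                System.out.println(host + ":" + port
                        + " reachable=" + reachable(host, port, 500));
            }
        }
    }
}
```

Note that only the current ZooKeeper leader listens on 2888, so a closed 2888 on followers is normal; 3888, however, should be open on every quorum peer.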
Log hbase-clusterhadoop-regionserver-node1 (one of the slaves):
2016-09-08 12:33:32,690 INFO [regionserver60020-SendThread(node3:2181)] zookeeper.ClientCnxn: Opening socket connection to server node3/192.168.1.112:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Impossibile trovare una configurazione di login)
2016-09-08 12:33:32,691 INFO [regionserver60020-SendThread(node3:2181)] zookeeper.ClientCnxn: Socket connection established to node3/192.168.1.112:2181, initiating session
2016-09-08 12:33:32,692 INFO [regionserver60020-SendThread(node3:2181)] zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect
2016-09-08 12:33:32,793 WARN [regionserver60020] zookeeper.RecoverableZooKeeper: Possibly transient ZooKeeper, quorum=node3:2181,node2:2181,node1:2181,node0:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
2016-09-08 12:33:32,794 ERROR [regionserver60020] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
2016-09-08 12:33:32,794 WARN [regionserver60020] zookeeper.ZKUtil: regionserver:600200x0, quorum=node3:2181,node2:2181,node1:2181,node0:2181, baseZNode=/hbase Unable to set watcher on znode /hbase/master
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:427)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:778)
at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:751)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:884)
at java.lang.Thread.run(Thread.java:695)
2016-09-08 12:33:32,794 ERROR [regionserver60020] zookeeper.ZooKeeperWatcher: regionserver:600200x0, quorum=node3:2181,node2:2181,node1:2181,node0:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:427)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:778)
at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:751)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:884)
at java.lang.Thread.run(Thread.java:695)
2016-09-08 12:33:32,795 FATAL [regionserver60020] regionserver.HRegionServer: ABORTING region server node1,60020,1473330794709: Unexpected exception during initialization, aborting
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:222)
at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:427)
at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:77)
at org.apache.hadoop.hbase.regionserver.HRegionServer.initializeZooKeeper(HRegionServer.java:778)
at org.apache.hadoop.hbase.regionserver.HRegionServer.preRegistrationInitialization(HRegionServer.java:751)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:884)
at java.lang.Thread.run(Thread.java:695)
2016-09-08 12:33:32,798 FATAL [regionserver60020] regionserver.HRegionServer: RegionServer abort: loaded coprocessors are: []
2016-09-08 12:33:32,798 INFO [regionserver60020] regionserver.HRegionServer: STOPPED: Unexpected exception during initialization, aborting
2016-09-08 12:33:32,867 INFO [regionserver60020-SendThread(node0:2181)] zookeeper.ClientCnxn: Opening socket connection to server node0/192.168.1.113:2181. Will not attempt to authenticate using SASL (java.lang.SecurityException: Impossibile trovare una configurazione di login)
Log hbase-clusterhadoop-zookeeper-node1:
2016-09-08 12:33:32,075 WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0%0:2181] quorum.Learner: Unexpected exception, tries=0, connecting to node3/192.168.1.112:2888
java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:382)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:241)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:228)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:431)
at java.net.Socket.connect(Socket.java:527)
at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:225)
at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:71)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:786)
2016-09-08 12:33:32,227 INFO [node1/192.168.1.156:3888] quorum.QuorumCnxManager: Received connection request /192.168.1.113:49844
2016-09-08 12:33:32,233 INFO [WorkerReceiver[myid=1]] quorum.FastLeaderElection: Notification: 1 (message format version), 0 (n.leader), 0x10000002d (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEpoch) FOLLOWING (my state)
2016-09-08 12:33:32,239 INFO [WorkerReceiver[myid=1]] quorum.FastLeaderElection: Notification: 1 (message format version), 3 (n.leader), 0x10000002d (n.zxid), 0x1 (n.round), LOOKING (n.state), 0 (n.sid), 0x1 (n.peerEpoch) FOLLOWING (my state)
2016-09-08 12:33:32,725 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxnFactory: Accepted socket connection from /192.168.1.111:49534
2016-09-08 12:33:32,725 WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2016-09-08 12:33:32,725 INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] server.NIOServerCnxn: Closed socket connection for client /192.168.1.111:49534 (no session established for client)
The conf file hbase-site.xml:
<configuration>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://node0:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node0,node1,node2,node3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/Users/clusterhadoop/usr/local/zookeeper</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>/Users/clusterhadoop/usr/local/hbtmp</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>node0:60000</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.maxClientCnxns</name>
    <value>1000</value>
  </property>
</configuration>
Hosts file:
127.0.0.1 localhost
127.0.0.1 node3
192.168.1.112 node3
192.168.1.156 node1
192.168.1.111 node2
192.168.1.113 node0
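One thing worth checking with this hosts file: on a machine that uses it, `node3` will typically resolve to 127.0.0.1 rather than 192.168.1.112, because the loopback entry precedes the LAN entry and the first match usually wins. A tiny sketch (my addition; the node names come from the file above) prints what each cluster hostname actually resolves to on the current machine:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ResolveCheck {
    // Resolves a hostname to its first IP address, or "unresolvable".
    static String resolve(String host) {
        try {
            return InetAddress.getByName(host).getHostAddress();
        } catch (UnknownHostException e) {
            return "unresolvable";
        }
    }

    public static void main(String[] args) {
        for (String h : new String[]{"node0", "node1", "node2", "node3"}) {
            System.out.println(h + " -> " + resolve(h));
        }
    }
}
```

If a node prints a loopback address for a peer's hostname, ZooKeeper and HBase on that machine can end up advertising or contacting the wrong address, which would be consistent with the refused connections in the logs.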
Any idea on what is the problem and how to solve it?

Related

Storm: Could not update the blob with keyposTopology-1-1501640527-stormconf.ser

I'm building a Storm topology that includes JDBC and Redis.
After it runs for a few hours, it shows the message below:
[2017-08-02 17:37:59.917] ERROR [org.apache.storm.blobstore.BlobStoreUtils:197] - Could not update the blob with keypositionTopology-1-1501640527-stormconf.ser
[2017-08-02 17:37:59.930] ERROR [org.apache.storm.blobstore.BlobStoreUtils:197] - Could not update the blob with keypositionTopology-1-1501640527-stormcode.ser
and then it shows me this:
[2017-08-02 20:39:01.610] INFO [org.apache.storm.shade.org.apache.zookeeper.ClientCnxn:1096] - Client session timed out, have not heard from server in 13374ms for sessionid 0x15da0c008310049, closing socket connection and attempting reconnect
[2017-08-02 20:39:01.611] WARN [org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn:357] - caught end of stream exception:
org.apache.storm.shade.org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15da0c008310049, likely client has closed socket
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [storm-core-1.0.2.jar:1.0.2]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-08-02 20:39:01.614] INFO [org.apache.storm.shade.org.apache.zookeeper.server.NIOServerCnxn:1007] - Closed socket connection for client /0:0:0:0:0:0:0:1:62631 which had sessionid 0x15da0c008310048
[2017-08-02 20:39:01.711] INFO [org.apache.storm.shade.org.apache.curator.framework.state.ConnectionStateManager:228] - State change: SUSPENDED
[2017-08-02 20:39:01.711] WARN [org.apache.storm.cluster-state.zookeeper-state-factory: 0] - Received event :disconnected::none: with disconnected Writer Zookeeper.
[2017-08-02 20:39:01.713] INFO [org.apache.storm.shade.org.apache.curator.framework.state.ConnectionStateManager:228] - State change: SUSPENDED
[2017-08-02 20:39:01.713] WARN [org.apache.storm.cluster-state.zookeeper-state-factory: 0] - Received event :disconnected::none: with disconnected Writer Zookeeper.
[2017-08-02 20:39:01.713] INFO [org.apache.storm.zookeeper: 0] - Zookeeper state update: :disconnected:none
[2017-08-02 20:39:01.713] INFO [org.apache.storm.zookeeper: 0] - debian lost leadership.
and then this...
[2017-08-02 20:59:30.930] WARN [org.apache.storm.kafka.PartitionManager:218] - Removing the failed offsets for Partition{host=10.2.5.207:9092, topic=grih_pos, partition=0} that are out of range: [1786800768, 1786800769, 1786800770, 1786800771, 1786800772, ... (long run of consecutive offsets truncated)
finally OOM:
[2017-08-02 22:30:16.311] INFO [org.apache.storm.shade.org.apache.zookeeper.ClientCnxn:512] - EventThread shut down
[2017-08-02 22:30:16.311] INFO [org.apache.storm.shade.org.apache.zookeeper.ZooKeeper:438] - Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#58f254b1
[2017-08-02 22:30:16.309] INFO [org.apache.storm.shade.org.apache.zookeeper.ZooKeeper:438] - Initiating client connection, connectString=localhost:2000/storm sessionTimeout=20000 watcher=org.apache.storm.shade.org.apache.curator.ConnectionState#437ed416
[2017-08-02 22:30:21.338] WARN [org.apache.storm.shade.org.apache.curator.ConnectionState:191] - Connection attempt unsuccessful after 25460 (greater than max timeout of 20000). Resetting connection and trying again with a new connection.
[2017-08-02 22:30:21.341] INFO [org.apache.storm.shade.org.apache.zookeeper.ClientCnxn:975] - Opening socket connection to server 127.0.0.1/127.0.0.1:2000. Will not attempt to authenticate using SASL (unknown error)
[2017-08-02 22:30:21.338] ERROR [org.apache.storm.shade.org.apache.curator.ConnectionState:200] - Connection timed out for connection string (localhost:2000) and timeout (15000) / elapsed (15162)
org.apache.storm.shade.org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.storm.shade.org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:197) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:116) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:835) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64) [storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267) [storm-core-1.0.2.jar:1.0.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) [?:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
[2017-08-02 22:30:21.342] INFO [org.apache.storm.shade.org.apache.zookeeper.ClientCnxn:852] - Socket connection established to 127.0.0.1/127.0.0.1:2000, initiating session
[2017-08-02 22:30:28.864] WARN [org.apache.storm.shade.org.apache.curator.ConnectionState:191] - Connection attempt unsuccessful after 32985 (greater than max timeout of 20000). Resetting connection and trying again with a new connection.
[2017-08-02 22:30:41.625] INFO [org.apache.storm.shade.org.apache.zookeeper.ClientCnxn:975] - Opening socket connection to server 127.0.0.1/127.0.0.1:2000. Will not attempt to authenticate using SASL (unknown error)
[2017-08-02 22:30:41.631] ERROR [org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl:566] - Background exception was not retry-able or retry gave up
java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.lang.String.substring(String.java:1969) ~[?:1.8.0_141]
at java.lang.Package.getPackage(Package.java:331) ~[?:1.8.0_141]
at java.lang.Class.getPackage(Class.java:796) ~[?:1.8.0_141]
at org.apache.logging.log4j.core.impl.ThrowableProxy.toCacheEntry(ThrowableProxy.java:495) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.impl.ThrowableProxy.toExtendedStackTrace(ThrowableProxy.java:547) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.impl.ThrowableProxy.<init>(ThrowableProxy.java:113) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.impl.Log4jLogEvent.getThrownProxy(Log4jLogEvent.java:323) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.pattern.ExtendedThrowablePatternConverter.format(ExtendedThrowablePatternConverter.java:64) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:36) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:197) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.layout.PatternLayout.toSerializable(PatternLayout.java:55) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.layout.AbstractStringLayout.toByteArray(AbstractStringLayout.java:67) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:108) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.appender.RollingFileAppender.append(RollingFileAppender.java:88) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:99) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:430) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:409) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:367) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:112) ~[log4j-core-2.1.jar:2.1]
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:727) ~[log4j-api-2.1.jar:2.1]
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:716) ~[log4j-api-2.1.jar:2.1]
at org.apache.logging.slf4j.Log4jLogger.error(Log4jLogger.java:318) ~[log4j-slf4j-impl-2.1.jar:2.1]
at org.apache.storm.shade.org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:200) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:116) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:835) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:809) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl.access$300(CuratorFrameworkImpl.java:64) ~[storm-core-1.0.2.jar:1.0.2]
at org.apache.storm.shade.org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:267) ~[storm-core-1.0.2.jar:1.0.2]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[?:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) ~[?:1.8.0_141]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) ~[?:1.8.0_141]
I have just two bolts in this topology, one for JDBC and another for Redis,
and the spout is a KafkaSpout consuming data from Kafka.
This issue has nagged at me for a long time;
I have Googled it over and over and got nowhere.
Hoping for your help!

Hbase NoServerForRegionException: Unable to find region for helloTable

I'm trying to connect to standalone HBase (0.94.27) on EC2 from a Java client on my PC (Windows Server 2008).
Here are my code snippets:
HBaseChatConfiguration conf = HBaseChatConfiguration.getInstance();
conf.set("hbase.zookeeper.quorum", "my ec2 IP Address");
HTablePool tablePool = null;
try {
    tablePool = new HTablePool(conf.getConfiguration(), 10);
    HTableInterface table = tablePool.getTable(Bytes.toBytes("helloTable"));
    System.out.println("table initiated");
    Put p = new Put(Bytes.toBytes("testRow1"));
    p.add(Bytes.toBytes("cf1"), Bytes.toBytes("cfs"), Bytes.toBytes("test"));
    table.put(p);
But the Java application printed out an exception:
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for helloTable,,99999999999999 after 10 tries.
at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:38)
at org.apache.hadoop.hbase.client.HTablePool.createHTable(HTablePool.java:265)
at org.apache.hadoop.hbase.client.HTablePool.findOrCreateTable(HTablePool.java:195)
at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:174)
at org.apache.hadoop.hbase.client.HTablePool.getTable(HTablePool.java:213)
at kr.stocktalk.chat.ChatServer.main(ChatServer.java:42)
Caused by: org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for helloTable,,99999999999999 after 10 tries.
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:955)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:860)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:962)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:864)
at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:821)
at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:234)
at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:174)
at org.apache.hadoop.hbase.client.HTableFactory.createHTableInterface(HTableFactory.java:36)
... 5 more
But I can access the table from the local shell:
hbase(main):001:0> describe 'helloTable'
DESCRIPTION ENABLED
'helloTable', {NAME => 'cf1', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'NONE', REPLICATION_SCO true
PE => '0', VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIONS => '0', TTL => '2147483647', KEEP_D
ELETED_CELLS => 'false', BLOCKSIZE => '65536', IN_MEMORY => 'false', ENCODE_ON_DISK => 'true', BLOC
KCACHE => 'true'}
1 row(s) in 0.9700 seconds
What can I do to access it from the Java client application?
Edit:
Here is the HBase log:
2016-03-26 06:37:11,607 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=1 regions=1 average=1.0 mostloaded=1 leastloaded=1
2016-03-26 06:37:11,612 DEBUG org.apache.hadoop.hbase.client.MetaScanner: Scanning .META. starting at row= for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation#6b171477
2016-03-26 06:37:11,615 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 1 catalog row(s) and gc'd 0 unreferenced parent region(s)
2016-03-26 06:38:12,490 INFO org.apache.zookeeper.server.NIOServerCnxnFactory: Accepted socket connection from /106.249.235.42:64629
2016-03-26 06:38:12,490 INFO org.apache.zookeeper.server.ZooKeeperServer: Client attempting to establish new session at /106.249.235.42:64629
2016-03-26 06:38:12,495 INFO org.apache.zookeeper.server.ZooKeeperServer: Established session 0x153b183d8e6000c with negotiated timeout 40000 for client /106.249.235.42:64629
2016-03-26 06:42:02,390 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=2.04 MB, free=245.89 MB, max=247.92 MB, blocks=1, accesses=30, hits=29, hitRatio=96.66%, , cachingAccesses=30, cachingHits=29, cachingHitsRatio=96.66%, , evictions=0, evicted=0, evictedPerRun=NaN
2016-03-26 06:42:11,608 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=1 regions=1 average=1.0 mostloaded=1 leastloaded=1
2016-03-26 06:42:11,613 DEBUG org.apache.hadoop.hbase.client.MetaScanner: Scanning .META. starting at row= for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation#6b171477
2016-03-26 06:42:11,616 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 1 catalog row(s) and gc'd 0 unreferenced parent region(s)
2016-03-26 06:47:02,391 DEBUG org.apache.hadoop.hbase.io.hfile.LruBlockCache: Stats: total=2.04 MB, free=245.89 MB, max=247.92 MB, blocks=1, accesses=30, hits=29, hitRatio=96.66%, , cachingAccesses=30, cachingHits=29, cachingHitsRatio=96.66%, , evictions=0, evicted=0, evictedPerRun=NaN
2016-03-26 06:47:11,608 INFO org.apache.hadoop.hbase.master.LoadBalancer: Skipping load balancing because balanced cluster; servers=1 regions=1 average=1.0 mostloaded=1 leastloaded=1
2016-03-26 06:47:11,614 DEBUG org.apache.hadoop.hbase.client.MetaScanner: Scanning .META. starting at row= for max=2147483647 rows using org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation#6b171477
2016-03-26 06:47:11,617 DEBUG org.apache.hadoop.hbase.master.CatalogJanitor: Scanned 1 catalog row(s) and gc'd 0 unreferenced parent region(s)
2016-03-26 06:49:06,196 WARN org.apache.zookeeper.server.NIOServerCnxn: Exception causing close of session 0x153b183d8e6000c due to java.io.IOException: Connection reset by peer
2016-03-26 06:49:06,196 INFO org.apache.zookeeper.server.NIOServerCnxn: Closed socket connection for client /106.249.235.42:64629 which had sessionid 0x153b183d8e6000c
2016-03-26 06:49:46,000 INFO org.apache.zookeeper.server.ZooKeeperServer: Expiring session 0x153b183d8e6000c, timeout of 40000ms exceeded
2016-03-26 06:49:46,000 INFO org.apache.zookeeper.server.PrepRequestProcessor: Processed session termination for sessionid: 0x153b183d8e6000c

Error: unable to get the table list from a remote HBase database

I have added the IP address for quickstart.cloudera to the hosts file under C:\Windows\System32\drivers\etc, and I'm using the name quickstart.cloudera in the hbase-site.xml file that I pasted into my Eclipse project.
The same code works when connecting on the local system, but when I try to run this program remotely, I hit the issues below.
HBaseConfiguration hc = new HBaseConfiguration(new Configuration());
hc.set("hbase.master", "quickstart.cloudera:60000");
hc.set("hbase.zookeeper.quorum", "quickstart.cloudera");
hc.set("hbase.zookeeper.property.clientPort", "2181");
HBaseAdmin admin = new HBaseAdmin(hc);
HTableDescriptor[] tableDescriptor = admin.listTables();
for (int i = 0; i < tableDescriptor.length; i++) {
    System.out.println(tableDescriptor[i].getNameAsString());
}
My Output:
15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Opening socket connection to server quickstart.cloudera/192.168.0.106:2181. Will not attempt to authenticate using SASL (unknown error)
15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.0.105:62868, server: quickstart.cloudera/192.168.0.106:2181
15/10/01 15:30:55 INFO zookeeper.ClientCnxn: Session establishment complete on server quickstart.cloudera/192.168.0.106:2181, sessionid = 0x150220e6706002c, negotiated timeout = 60000
15/10/01 15:30:55 WARN util.DynamicClassLoader: Failed to identify the fs of dir hdfs://192.168.0.106:8020/hbase/lib, ignored
java.io.IOException: No FileSystem for scheme: hdfs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2138)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2145)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:80)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2184)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2166)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:302)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:194)
at org.apache.hadoop.hbase.util.DynamicClassLoader.<init>(DynamicClassLoader.java:104)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.<clinit>(ProtobufUtil.java:242)
at org.apache.hadoop.hbase.ClusterId.parseFrom(ClusterId.java:64)
at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:75)
at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:86)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:850)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:635)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:414)
at org.apache.hadoop.hbase.client.ConnectionManager.createConnection(ConnectionManager.java:407)
at org.apache.hadoop.hbase.client.ConnectionManager.getConnectionInternal(ConnectionManager.java:285)
at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:207)
at HbaseList.main(HbaseList.java:22)
15/10/01 15:31:53 INFO client.RpcRetryingCaller: Call exception, tries=10, retries=35, started=57299 ms ago, cancelled=false, msg=
15/10/01 15:32:14 INFO client.RpcRetryingCaller: Call exception, tries=11, retries=35, started=78658 ms ago, cancelled=false, msg=
15/10/01 15:32:35 INFO client.RpcRetryingCaller: Call exception, tries=12, retries=35, started=99756 ms ago, cancelled=false, msg=
Try setting these configuration properties:
Configuration conf = new Configuration();
conf.set("fs.defaultFS", "hdfs://" + host + ":" + port);
conf.set("fs.hdfs.impl",
        org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());
conf.set("fs.file.impl",
        org.apache.hadoop.fs.LocalFileSystem.class.getName());
HBaseConfiguration hc = new HBaseConfiguration(conf);
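As an aside, the "No FileSystem for scheme: hdfs" error can also appear when the client code is packaged as a shaded/fat jar, because each Hadoop jar ships a META-INF/services/org.apache.hadoop.fs.FileSystem file and naive shading keeps only one of them. A sketch of the Maven-side fix (assuming the project already builds with maven-shade-plugin, which is not stated in the question) is to merge those service files:

```xml
<!-- Sketch: merge META-INF/services files so the hdfs FileSystem
     implementation stays registered in the shaded jar. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <transformers>
      <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
    </transformers>
  </configuration>
</plugin>
```

Setting fs.hdfs.impl in code, as above, achieves the same registration without touching the build.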

Connection issue with a Cassandra database

I have problems connecting Cassandra to Spark. I can connect to Cassandra via cqlsh, but when I launch my program:
public static void main(String[] args) {
    Cluster cluster;
    Session session;
    cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
    session = cluster.connect();
    SparkConf conf = new SparkConf().setAppName("CassandraExamples").setMaster("local[1]")
            .set("spark.cassandra.connection.host", "9.168.86.84");
    JavaSparkContext sc = new JavaSparkContext("spark://9.168.86.84:9042", "CassandraExample", conf);
    CassandraJavaPairRDD<String, String> rdd1 = javaFunctions(sc).cassandraTable("keyspace", "table",
            mapColumnTo(String.class), mapColumnTo(String.class)).select("row1", "row2");
    System.out.println("Data fetched: \n" + StringUtils.join(rdd1.toArray(), "\n"));
}
I'm getting this error:
15/06/11 11:41:15 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@9.168.86.84:9042]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: no further information: /9.168.86.84:9042
15/06/11 11:41:34 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@9.168.86.84:9042/user/Master...
15/06/11 11:41:35 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@9.168.86.84:9042: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@9.168.86.84:9042
15/06/11 11:41:35 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@9.168.86.84:9042]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: no further information: /9.168.86.84:9042
15/06/11 11:41:54 INFO AppClient$ClientActor: Connecting to master akka.tcp://sparkMaster@9.168.86.84:9042/user/Master...
15/06/11 11:41:55 WARN AppClient$ClientActor: Could not connect to akka.tcp://sparkMaster@9.168.86.84:9042: akka.remote.InvalidAssociation: Invalid address: akka.tcp://sparkMaster@9.168.86.84:9042
15/06/11 11:41:55 WARN Remoting: Tried to associate with unreachable remote address [akka.tcp://sparkMaster@9.168.86.84:9042]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: Connection refused: no further information: /9.168.86.84:9042
15/06/11 11:42:14 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
15/06/11 11:42:14 WARN SparkDeploySchedulerBackend: Application ID is not initialized yet.
15/06/11 11:42:14 ERROR TaskSchedulerImpl: Exiting due to error from cluster scheduler: All masters are unresponsive! Giving up.
cassandra.yaml has these properties:
listen_address: 9.168.86.84
start_native_transport: true
rpc_address: 0.0.0.0
native_transport_port: 9042
rpc_port: 9160
Can someone tell me what's wrong?

Voldemort issue and error

Today I tried Voldemort on my Linux machine, but I can't get it fully working. I got the following error while trying to store a value using the "PUT" keyword.
Please look at this information and suggest a solution.
server.properties
bdb.sync.transactions=false
bdb.cache.size=1000MB
max.threads=5000
http.enable=true
socket.enable=true
node.id=0
kumaran#mohandoss-Vostro-1550:/home/kumaran/voldemort-1.3.0$ ./bin/voldemort-admin-tool.sh --get-metadata --url tcp://localhost:5555
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-3]
[19:05:55,374 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-8]
[19:05:55,374 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-7]
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-6]
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-5]
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-4]
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-1]
[19:05:55,373 voldemort.store.socket.clientrequest.ClientRequestExecutorFactory$ClientRequestSelectorManager] INFO Closed, exiting [voldemort-niosocket-client-2]
[19:05:55,376 voldemort.store.socket.clientrequest.ClientRequestExecutor] WARN No client associated with Socket[unconnected] [main]
[19:05:55,376 voldemort.store.socket.clientrequest.ClientRequestExecutor] INFO Closing remote connection from Socket[unconnected] [main]
localhost:0
Key - cluster.xml
version() ts:1379597320733
: <cluster>
<name>Kumaran</name>
<server>
<id>0</id>
<host>localhost</host>
<http-port>8081</http-port>
<socket-port>5554</socket-port>
<admin-port>5555</admin-port>
<partitions>0, 1</partitions>
</server>
<server>
<id>1</id>
<host>localhost</host>
<http-port>8082</http-port>
<socket-port>5556</socket-port>
<admin-port>5557</admin-port>
<partitions>2, 3</partitions>
</server>
</cluster>
Key - stores.xml
version() ts:1379597320847
: <stores>
<store>
<name>test1</name>
<persistence>bdb</persistence>
<routing-strategy>consistent-routing</routing-strategy>
<routing>client</routing>
<replication-factor>2</replication-factor>
<required-reads>2</required-reads>
<required-writes>2</required-writes>
<key-serializer>
<type>string</type>
<schema-info version="0">UTF-8</schema-info>
</key-serializer>
<value-serializer>
<type>string</type>
<schema-info version="0">UTF-8</schema-info>
</value-serializer>
</store>
</stores>
Key - server.state
version() ts:1379597320870
: NORMAL_SERVER
Key - node.id
version() ts:1379597320865
: 0
Key - rebalancing.steal.info.key
version() ts:1379597320869
: []
localhost:1
Key - cluster.xml
Error in retrieving Failure while checking out socket for localhost:5557(ad1):
Key - stores.xml
Error in retrieving Failure while checking out socket for localhost:5557(ad1):
Key - server.state
Error in retrieving Failure while checking out socket for localhost:5557(ad1):
Key - node.id
Error in retrieving Failure while checking out socket for localhost:5557(ad1):
Key - rebalancing.steal.info.key
Error in retrieving Failure while checking out socket for localhost:5557(ad1):
Kumaran
For the unreachable-store exception, try caching your client connection; you probably have too many open connections, and new ones are now being refused.
Make sure you have enough client-side threads to handle your server connection threads.
I can't give much more help, as your question is not very detailed.
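The "cache your client connection" advice above can be sketched in plain Java. FakeClient below is a hypothetical stand-in for a real networked store client (e.g. Voldemort's StoreClient); only the reuse pattern matters, and the connection counter just makes the reuse observable:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CachedClientDemo {
    // Hypothetical stand-in for a real networked client, for illustration only.
    static final class FakeClient {
        static final AtomicInteger CONNECTIONS = new AtomicInteger();
        FakeClient() { CONNECTIONS.incrementAndGet(); } // "opens" a connection
        String get(String key) { return "value-for-" + key; }
    }

    // Initialization-on-demand holder: the JVM guarantees CLIENT is created
    // exactly once, lazily, and thread-safely.
    private static final class Holder {
        static final FakeClient CLIENT = new FakeClient();
    }

    public static FakeClient client() {
        return Holder.CLIENT;
    }

    public static void main(String[] args) {
        // Many "requests" share one connection instead of opening 1000.
        for (int i = 0; i < 1000; i++) {
            client().get("key-" + i);
        }
        System.out.println("connections opened: " + FakeClient.CONNECTIONS.get());
        // prints "connections opened: 1"
    }
}
```

With a real client you would hold the factory/client in such a singleton instead of constructing one per request, which is what exhausts the server's socket backlog.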
