ThreadPool executor and a huge number of clients connected simultaneously - java

I'm building a client-server application in Java with sockets. As far as I understand, creating a thread for every connected client is too expensive; instead we can use a ThreadPoolExecutor. As the java.util.concurrent documentation says, we can create a thread pool with a fixed size.
class NetworkService implements Runnable {
    private final ServerSocket serverSocket;
    private final ExecutorService pool;

    public NetworkService(int port, int poolSize) throws IOException {
        serverSocket = new ServerSocket(port);
        pool = Executors.newFixedThreadPool(poolSize);
    }

    public void run() { // run the service
        try {
            for (;;) {
                pool.execute(new Handler(serverSocket.accept()));
            }
        } catch (IOException ex) {
            pool.shutdown();
        }
    }
}
So it seems we have at most poolSize threads running at any point in time. But what if we need to maintain more connections than poolSize? How is that going to work?

If you are going to have a really huge number of clients, you should consider NIO, because creating a thread for each client will be too expensive.
NIO uses selectors and channels and doesn't require creating a new thread for each connection.
Have you heard about Netty? I don't know what you are going to implement, but it seems like it would be useful.
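To illustrate the idea (this is not the asker's code), here is a minimal sketch of a single-threaded selector loop using plain java.nio; the port number, buffer size, and echo behaviour are arbitrary choices for the example:
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080)); // arbitrary port for the example
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select(); // blocks until at least one channel is ready
            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        client.close();       // peer closed the connection
                    } else {
                        buffer.flip();
                        client.write(buffer); // echo back what was received
                    }
                }
            }
        }
    }
}
One thread services every connection here; the number of clients is limited by memory and file descriptors rather than by a thread pool size.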

Related

Concurrency with a thread pool in Java

I am facing this problem in Java.
I have a server class named MyServer and I want to implement a thread pool where each thread runs a method of MyServer when a request comes in. I have created another class, named MultiThreadedSocketServer, that implements the server pool. The class is this:
public class MultiThreadedSocketServer {
    public void startServer(MyServer s, int localport, int threadPoolSize) {
        final ExecutorService clientProcessingPool = Executors.newFixedThreadPool(threadPoolSize);
        Runnable serverTask = new Runnable() {
            @Override
            public void run() {
                try {
                    ServerSocket serverSocket = new ServerSocket(localport);
                    System.out.println("Waiting for clients to connect...");
                    while (true) {
                        Socket clientSocket = serverSocket.accept();
                        clientProcessingPool.submit(new ClientTask(clientSocket, s));
                    }
                } catch (IOException e) {
                    System.err.println("Unable to process client request");
                    e.printStackTrace();
                }
            }
        };
        Thread serverThread = new Thread(serverTask);
        serverThread.start();
    }
}
The MultiThreadedSocketServer class takes an argument MyServer s, which it passes to the ClientTask class for which a thread is created. The ClientTask class is this:
class ClientTask implements Runnable {
    private final Socket clientSocket;
    private MyServer s;

    public ClientTask(Socket clientSocket, MyServer s) {
        this.s = s;
        this.clientSocket = clientSocket;
    }

    @Override
    public void run() {
        System.out.println("Got a client !");
        String inputLine = null;
        try {
            BufferedReader in = new BufferedReader(new InputStreamReader(clientSocket.getInputStream()));
            // Do whatever required to process the client's request
            inputLine = in.readLine();
            if (inputLine.equals("Bye")) {
                System.out.println("Bye");
                System.exit(0);
            }
            s.handleRequest(inputLine);
            clientSocket.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
As you can see, when a request comes in, the handleRequest method of the MyServer class is invoked. I want this method to run synchronized, meaning that only one thread at a time should be able to run it. Adding synchronized to the method declaration does not achieve anything.
Can anybody show me the proper way to do this?
Thanks in advance for your time.
PS: I added the whole code
MyServer Class
http://pastebin.com/6i2bn5jj
Multithreaded server Class
http://pastebin.com/hzfLJbCS
As is evident in main, I create three requests with handleRequest, with arguments Task, task2 and Bye.
The correct output would be
Waiting for clients to connect...
Got a client !
This is an input Task
Request for Task
Got a client !
This is an input task2
Request for task2
Got a client !
This is an input
Bye
But instead the order is mixed. Sometimes Bye, which shuts down the server, is executed first. I want to ensure that the order is the one in which the requests are created in main.
You say that you want the server to handle requests in order. This is hard to ensure because you are opening up 3 sockets and writing to them without waiting for any response. This is implementation dependent, but I'm not sure there is any guarantee that, when the client returns from a write to the socket's OutputStream, the server has actually received the bytes. This means that, from the client side, there is no guarantee that the IO completes in the order that you want.
To see if this is the problem, I would remove the System.exit(0) and check whether the other lines make it through just after the "Bye" string does. Or you could put a Thread.sleep(5000); before the exit(0).
A simple sort-of fix would be to make sure your PrintStream has auto-flush turned on. That at least will call flush on the socket but even then there are race conditions between the client and the server. If the auto-flush doesn't work then I'd have your client wait for a response from the server. So then the first client would write the first command and wait for the acknowledgement before going to the 2nd command.
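A rough sketch of that last idea, not taken from the question's code: the host, the port, and the assumption that the server is changed to reply with one acknowledgement line per request are all made up for illustration.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class AckClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 4444);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true); // auto-flush on
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            String[] commands = {"Task", "task2", "Bye"};
            for (String command : commands) {
                out.println(command);
                String ack = in.readLine(); // block until the server acknowledges this command
                System.out.println("Server acknowledged: " + ack);
            }
        }
    }
}
Because the client does not send the next command until the previous one has been acknowledged, the server necessarily sees the requests in order.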
In terms of your original question, locking on the server wouldn't help because of the race conditions. The "Bye" might make it first and lock the server fine.
These sorts of questions around how to synchronize the threads in a multi-threaded program really make no sense to me. The whole point of threads is that they run asynchronously in parallel and don't have to operate in any particular order. The more that you force your program to spit out the output in a particular order, the more you are arguing for writing this without any threads.
Hope this helps.
If the problem is that the Bye message kills the server before the other requests can be handled, one solution could be not to call System.exit(0); on Bye.
The Bye message could instead set a flag to block further requests from being handled, and also notify some other mechanism to call System.exit(0); once the thread pool is idle with no requests left to handle.
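A minimal sketch of that idea, under the assumption that ClientTask is given a reference to the pool and to a shared volatile boolean acceptingRequests (both names are made up here), and that the accept loop checks the flag before submitting new tasks:
// In ClientTask.run(), instead of System.exit(0) on "Bye":
if (inputLine.equals("Bye")) {
    acceptingRequests = false;        // hypothetical shared volatile flag: stop handling new requests
    clientProcessingPool.shutdown();  // no new tasks accepted; already-queued tasks still run
    new Thread(new Runnable() {
        public void run() {
            try {
                // exit only once every already-submitted request has finished
                clientProcessingPool.awaitTermination(30, java.util.concurrent.TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.exit(0);
        }
    }).start();
}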

Benefits of Netty Client over I/O Client?

I want to know if I can save application threads by implementing a Netty client.
I wrote a demo client; please find the code below. I expected that a single thread could connect to different ports and handle them efficiently, but I was wrong: Netty creates a connection per thread.
public class NettyClient {
    public static void main(String[] args) {
        Runnable runA = new Runnable() {
            public void run() {
                Connect(5544);
            }
        };
        Thread threadA = new Thread(runA, "threadA");
        threadA.start();
        try {
            Thread.sleep(1000);
        } catch (InterruptedException x) {
        }
        Runnable runB = new Runnable() {
            public void run() {
                Connect(5544);
            }
        };
        Thread threadB = new Thread(runB, "threadB");
        threadB.start();
    }

    static ClientBootstrap bootstrap = null;
    static NettyClient ins = new NettyClient();

    public NettyClient() {
        bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
                Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
        /*
         * ClientBootstrap: a helper class which creates a new client-side
         * Channel and makes a connection attempt.
         *
         * NioClientSocketChannelFactory: a ClientSocketChannelFactory which
         * creates a client-side NIO-based SocketChannel. It utilizes the
         * non-blocking I/O mode which was introduced with NIO to serve many
         * concurrent connections efficiently.
         *
         * There are two types of threads: boss threads and worker threads.
         * The boss thread passes control to the worker threads.
         */
        // Configure the client.
        ChannelGroup channelGroup = new DefaultChannelGroup(NettyClient.class.getName());
        // Only 1 thread configured, but it still accepts both threadA's and
        // threadB's connections
        OrderedMemoryAwareThreadPoolExecutor pipelineExecutor = new OrderedMemoryAwareThreadPoolExecutor(
                1, 1048576, 1073741824, 1, TimeUnit.MILLISECONDS,
                new NioDataSizeEstimator(), new NioThreadFactory("NioPipeline"));
        bootstrap.setPipelineFactory(new NioCommPipelineFactory(channelGroup,
                pipelineExecutor));
        // bootstrap.setPipelineFactory(new
        // BackfillClientSocketChannelFactory());
        bootstrap.setOption("child.tcpNoDelay", true);
        bootstrap.setOption("child.keepAlive", true);
        bootstrap.setOption("child.reuseAddress", true);
        bootstrap.setOption("readWriteFair", true);
    }

    public static NettyClient getins() {
        return ins;
    }

    public static void Connect(int port) {
        ChannelFuture future = bootstrap
                .connect(new InetSocketAddress("localhost", port));
        Channel channel = future.awaitUninterruptibly().getChannel();
        System.out.println(channel.getId());
        channel.getCloseFuture().awaitUninterruptibly();
    }
}
Now I want to know: what are the benefits of using a Netty client? Does it save threads?
Netty saves threads. Your NettyClient wastes threads when waiting synchronously for opening and closing of the connections (calling awaitUninterruptibly()).
BTW, how many connections will your client have? Maybe using the classic synchronous one-thread-per-connection approach would suffice? Usually we have to save threads on the server side.
Netty allows you to handle thousands of connections with a handful of threads.
When used in a client application, it allows a handful of threads to make thousands of concurrent connections to server.
You have put sleep() in your thread. We must never block the Netty worker/boss threads. Even if there is a need to perform any one-off blocking operation, it must be off-loaded to another executor. Netty uses NIO, and the same thread can be used for creating a new connection, while the earlier connection gets some data in its input buffer.
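As an illustration of that hand-off, here is a hedged sketch for a Netty 3.x pipeline handler (as used in the question); the handler class, the blockingLookup() method, and the pool size are all made up for the example, not part of Netty's API:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.jboss.netty.channel.ChannelHandlerContext;
import org.jboss.netty.channel.MessageEvent;
import org.jboss.netty.channel.SimpleChannelUpstreamHandler;

public class BlockingCallHandler extends SimpleChannelUpstreamHandler {
    // separate pool for blocking work, so the boss/worker threads stay free for I/O
    private static final ExecutorService blockingOps = Executors.newFixedThreadPool(4);

    @Override
    public void messageReceived(final ChannelHandlerContext ctx, final MessageEvent e) {
        blockingOps.submit(new Runnable() {
            public void run() {
                Object result = blockingLookup(e.getMessage()); // hypothetical blocking call
                ctx.getChannel().write(result);                 // write the result back asynchronously
            }
        });
    }

    private Object blockingLookup(Object request) {
        // placeholder for a slow, blocking operation (database, file, remote call, ...)
        return request;
    }
}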

Multi Thread Java Server

I am currently working on a project where I have to build a multi-threaded server. I have only just started to work with threads, so please bear with me.
So far I have a class that implements the Runnable interface; below you can see the code I have for the run method required by Runnable.
public void run() {
    while (true) {
        try {
            clientSocket = serversocket.accept();
            for (int i = 0; i < 100; i++) {
                DataOutputStream respond = new DataOutputStream(clientSocket.getOutputStream());
                respond.writeUTF("Hello World! " + i);
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    //
                }
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
Below is the main method, which creates a new object of the server class and starts a thread with it.
public static void main(String args[]) {
    new Thread(new Server(1234, "", false)).start();
}
I know this creates a new thread, but it does not serve multiple clients at once. The first client needs to close the connection before the second can be served. How can I make a multi-threaded server that serves different client sockets at once? Do I create the thread at clientSocket = serverSocket.accept();?
Yes.
from the docs:
Supporting Multiple Clients
To keep the KnockKnockServer example simple, we designed it to listen for and handle a single connection request. However, multiple client requests can come into the same port and, consequently, into the same ServerSocket. Client connection requests are queued at the port, so the server must accept the connections sequentially. However, the server can service them simultaneously through the use of threads—one thread per each client connection.
The basic flow of logic in such a server is this:
while (true) {
    accept a connection;
    create a thread to deal with the client;
}
The thread reads from and writes to the client connection as necessary.
https://docs.oracle.com/javase/tutorial/networking/sockets/clientServer.html
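Applied to the run() method above, a rough sketch of that flow (ClientHandler is a made-up name; the serversocket field and the "Hello World!" loop come from the question's class):
public void run() {
    while (true) {
        try {
            // accept() blocks; each accepted socket is handed to its own thread
            Socket clientSocket = serversocket.accept();
            new Thread(new ClientHandler(clientSocket)).start();
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}

private static class ClientHandler implements Runnable {
    private final Socket clientSocket;

    ClientHandler(Socket clientSocket) {
        this.clientSocket = clientSocket;
    }

    public void run() {
        try (DataOutputStream respond = new DataOutputStream(clientSocket.getOutputStream())) {
            for (int i = 0; i < 100; i++) {
                respond.writeUTF("Hello World! " + i);
                Thread.sleep(1000); // keep the per-client pacing from the original loop
            }
        } catch (IOException | InterruptedException e) {
            System.out.println(e.getMessage());
        }
    }
}
Because each client runs in its own thread, the accept loop returns to accept() immediately and the next client no longer has to wait for the previous one to disconnect.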

Pre-initializing a pool of worker threads to reuse connection objects (sockets)

I need to build a pool of workers in Java where each worker has its own connected socket; when the worker thread runs, it uses the socket but keeps it open to reuse later. We decided on this approach because creating, connecting, and destroying sockets on an ad-hoc basis carried too much overhead, so we need a way to pre-initialize a pool of workers with their socket connections, ready to take on work, while keeping the socket resources safe from other threads (sockets are not thread safe). So we need something along these lines...
public class SocketTask implements Runnable {
    Socket socket;

    public SocketTask() {
        // create + connect socket here
    }

    public void run() {
        // use socket here
    }
}
On application startup, we want to initialize the workers and, hopefully, the socket connections somehow too...
MyWorkerPool pool = new MyWorkerPool();
for (int i = 0; i < 100; i++)
    pool.addWorker(new WorkerThread());
As work is requested by the application, we send tasks to the worker pool for immediate execution...
pool.queueWork( new SocketTask(..));
Updated with Working Code
Based on helpful comments from Gray and jontejj, I've got the following code working...
SocketTask
public class SocketTask implements Runnable {
    private String workDetails;

    private static final ThreadLocal<Socket> threadLocal =
        new ThreadLocal<Socket>() {
            @Override
            protected Socket initialValue() {
                return new Socket();
            }
        };

    public SocketTask(String details) {
        this.workDetails = details;
    }

    public void run() {
        Socket s = getSocket(); // gets from threadlocal
        // send data on socket based on workDetails, etc.
    }

    public static Socket getSocket() {
        return threadLocal.get();
    }
}
ExecutorService
ExecutorService threadPool =
    Executors.newFixedThreadPool(5, Executors.defaultThreadFactory());

int tasks = 15;
for (int i = 1; i <= tasks; i++) {
    threadPool.execute(new SocketTask("foobar-" + i));
}
I like this approach for several reasons...
Sockets are local objects (via ThreadLocal) available to the running tasks, eliminating concurrency issues.
Sockets are created once and kept open, reused when new tasks get queued, eliminating socket object create/destroy overhead.
One idea would be to put the Sockets in a BlockingQueue. Then whenever you need a Socket your threads can take() from the queue and when they are done with the Socket they put() it back on the queue.
public void run() {
    try {
        Socket socket = socketQueue.take();
        try {
            // use the socket ...
        } finally {
            socketQueue.put(socket); // return the socket so another task can reuse it
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // take()/put() may be interrupted
    }
}
This has the added benefits:
You can go back to using the ExecutorService code.
You can separate the socket communication from the processing of the results.
You don't need a 1-to-1 correspondence between processing threads and sockets. (But the socket communication may be 98% of the work, so maybe there's no gain.)
When you are done and your ExecutorService completes, you can shutdown your sockets by just dequeueing them and closing them.
This does add the additional overhead of another BlockingQueue but if you are doing Socket communications, you won't notice it.
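A minimal sketch of wiring that up; the host, port, and pool size are placeholders, not from the question:
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class SocketQueueSetup {
    public static BlockingQueue<Socket> buildSocketQueue() throws IOException {
        BlockingQueue<Socket> socketQueue = new LinkedBlockingQueue<Socket>();
        for (int i = 0; i < 10; i++) {
            // connect up front so tasks never pay the connection cost
            socketQueue.add(new Socket("backend.example.com", 9000));
        }
        return socketQueue;
    }

    public static void shutdown(BlockingQueue<Socket> socketQueue) throws IOException {
        // after the ExecutorService has terminated, drain the queue and close the sockets
        Socket socket;
        while ((socket = socketQueue.poll()) != null) {
            socket.close();
        }
    }
}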
we don't believe ThreadFactory addresses our needs ...
I think you could make this work if you used thread-locals. Your thread factory would create a thread that first opens the socket, stores it in a thread-local, then calls the Runnable arg which does all of the work with the socket, dequeuing jobs from the ExecutorService internal queue. Once it is done the arg.run() method would finish and you could get the socket from the thread-local and close it.
Something like the following. It's a bit messy but you should get the idea.
ExecutorService threadPool =
    Executors.newFixedThreadPool(10,
        new ThreadFactory() {
            public Thread newThread(final Runnable r) {
                Thread thread = new Thread(new Runnable() {
                    public void run() {
                        openSocketAndStoreInThreadLocal();
                        // our tasks would then get the socket from the thread-local
                        r.run();
                        getSocketFromThreadLocalAndCloseIt();
                    }
                });
                return thread;
            }
        });
So your tasks would implement Runnable and look like:
public class SocketWorker implements Runnable {
    private final ThreadLocal<Socket> threadLocal;

    public SocketWorker(ThreadLocal<Socket> threadLocal) {
        this.threadLocal = threadLocal;
    }

    public void run() {
        Socket socket = threadLocal.get();
        // use the socket ...
    }
}
I think you should use a ThreadLocal
package com.stackoverflow.q16680096;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Main
{
    public static void main(String[] args)
    {
        ExecutorService pool = Executors.newCachedThreadPool();
        int nrOfConcurrentUsers = 100;
        for (int i = 0; i < nrOfConcurrentUsers; i++)
        {
            pool.submit(new InitSocketTask());
        }
        // do stuff...
        pool.submit(new Task());
    }
}
package com.stackoverflow.q16680096;

import java.net.Socket;

public class InitSocketTask implements Runnable
{
    public void run()
    {
        Socket socket = SocketPool.get();
        // Do initial setup here
    }
}
package com.stackoverflow.q16680096;

import java.net.Socket;

public final class SocketPool
{
    private static final ThreadLocal<Socket> SOCKETS = new ThreadLocal<Socket>()
    {
        @Override
        protected Socket initialValue()
        {
            return new Socket(); // Pass in suitable arguments here...
        }
    };

    public static Socket get()
    {
        return SOCKETS.get();
    }
}
package com.stackoverflow.q16680096;

import java.net.Socket;

public class Task implements Runnable
{
    public void run()
    {
        Socket socket = SocketPool.get();
        // Do stuff with socket...
    }
}
Where each thread gets its own socket.

DatagramChannel packet listener taking high CPU

I have a packet listener thread in Java for UDP packets, along with 2-3 other threads.
It was running fine until today, but now the javaw.exe process has started using a constant 50% CPU.
Here is my code.
public class PacketListenerThread implements Runnable {
    private SocketAddress receivedSocketAddress;
    private DatagramChannel channel;
    private ExecutorService pool;

    public PacketListenerThread(DatagramChannel channel, ExecutorService pool) {
        this.channel = channel;
        this.pool = pool;
    }

    @Override
    public void run() {
        while (true) {
            receivedSocketAddress = null;
            ByteBuffer recvbuf = ByteBuffer.allocate(1400);
            recvbuf.clear();
            try {
                receivedSocketAddress = channel.receive(recvbuf);
            } catch (IOException e) {
                e.printStackTrace();
            }
            if (receivedSocketAddress != null) {
                pool.submit(new PacketHandlerRunnable(new TaskObject(receivedSocketAddress, recvbuf)));
            }
        }
    }
}
I have stopped all other threads, but this thread still uses a constant 50% CPU.
See Javadoc:
If a datagram is immediately available, or if this channel is in blocking mode and one eventually becomes available, then the datagram is copied into the given byte buffer and its source address is returned. If this channel is in non-blocking mode and a datagram is not immediately available then this method immediately returns null.
Maybe your call to channel.receive(recvbuf) does not block, so you are looping at infinite speed, which explains your CPU load.
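A quick way to check and fix this, sketched under the assumption that the channel was left in non-blocking mode and that this thread does not need a Selector (the port is a placeholder):
// Before starting the listener thread, make sure the channel blocks in receive():
DatagramChannel channel = DatagramChannel.open();
channel.socket().bind(new InetSocketAddress(9876)); // placeholder port
channel.configureBlocking(true); // receive() now waits for a datagram instead of returning null
With blocking mode enabled, receive(recvbuf) parks the thread until a packet arrives, so the while (true) loop no longer spins. (This snippet needs java.net.InetSocketAddress and java.nio.channels.DatagramChannel imports.)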
