I want to know if I can save my application threads by implementing a Netty client.
I wrote a demo client; please find the code below. I expected that a single thread could connect to different ports and handle them efficiently, but I was wrong: Netty creates a connection per thread.
public class NettyClient {
public static void main(String[] args) {
Runnable runA = new Runnable() {
public void run() {
Connect(5544);
}
};
Thread threadA = new Thread(runA, "threadA");
threadA.start();
try {
Thread.sleep(1000);
} catch (InterruptedException x) {
}
Runnable runB = new Runnable() {
public void run() {
Connect(5544);
}
};
Thread threadB = new Thread(runB, "threadB");
threadB.start();
}
static ClientBootstrap bootstrap = null;
static NettyClient ins = new NettyClient();
public NettyClient() {
bootstrap = new ClientBootstrap(new NioClientSocketChannelFactory(
Executors.newCachedThreadPool(), Executors.newCachedThreadPool()));
/*
* ClientBootstrap A helper class which creates a new client-side
* Channel and makes a connection attempt.
*
* NioClientSocketChannelFactory A ClientSocketChannelFactory which
* creates a client-side NIO-based SocketChannel. It utilizes the
* non-blocking I/O mode which was introduced with NIO to serve many
* number of concurrent connections efficiently
*
* There are two types of threads: boss threads and worker threads. The
* boss thread passes control to a worker thread.
*/
// Configure the client.
ChannelGroup channelGroup = new DefaultChannelGroup(NettyClient.class.getName());
// Only 1 thread configured, but it still accepts both threadA's and
// threadB's connections
OrderedMemoryAwareThreadPoolExecutor pipelineExecutor = new OrderedMemoryAwareThreadPoolExecutor(
1, 1048576, 1073741824, 1, TimeUnit.MILLISECONDS,
new NioDataSizeEstimator(), new NioThreadFactory("NioPipeline"));
bootstrap.setPipelineFactory(new NioCommPipelineFactory(channelGroup,
pipelineExecutor));
// bootstrap.setPipelineFactory(new
// BackfillClientSocketChannelFactory());
bootstrap.setOption("child.tcpNoDelay", true);
bootstrap.setOption("child.keepAlive", true);
bootstrap.setOption("child.reuseAddress", true);
bootstrap.setOption("readWriteFair", true);
}
public static NettyClient getins() {
return ins;
}
public static void Connect(int port) {
ChannelFuture future = bootstrap
.connect(new InetSocketAddress("localhost", port));
Channel channel = future.awaitUninterruptibly().getChannel();
System.out.println(channel.getId());
channel.getCloseFuture().awaitUninterruptibly();
}
}
Now I want to know what are the benefits of using Netty client? Does it save Threads?
Netty saves threads. Your NettyClient wastes threads when it waits synchronously for connections to open and close (the awaitUninterruptibly() calls).
By the way, how many connections will your client have? Maybe the classic synchronous one-thread-per-connection approach would suffice? Usually it is the server side where we have to save threads.
Netty allows you to handle thousands of connections with a handful of threads.
When used in a client application, it allows a handful of threads to make thousands of concurrent connections to servers.
You have put sleep() in your thread. We must never block the Netty worker/boss threads. Even if there is a need to perform any one-off blocking operation, it must be off-loaded to another executor. Netty uses NIO, and the same thread can be used for creating a new connection, while the earlier connection gets some data in its input buffer.
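For illustration, here is a minimal sketch of a non-blocking connect using the same Netty 3.x ClientBootstrap API as in the question; what the listener does on success is an assumption about how the rest of the application would react:

// Sketch: connect asynchronously and react in a listener instead of
// parking the calling thread with awaitUninterruptibly().
ChannelFuture future = bootstrap.connect(new InetSocketAddress("localhost", port));
future.addListener(new ChannelFutureListener() {
    public void operationComplete(ChannelFuture f) {
        if (f.isSuccess()) {
            Channel channel = f.getChannel();
            System.out.println("Connected, channel id: " + channel.getId());
            // write to the channel or register a close listener here
        } else {
            f.getCause().printStackTrace();
        }
    }
});
// The calling thread is free immediately; no thread is blocked per connection.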
Related
I am familiar with Netty basics and have used it to build a typical application server running on TCP, designed to serve many clients/connections. However, I recently got a requirement to build a server which is designed to handle a handful of clients, or only one client most of the time. But that client is the gateway to many devices and therefore generates substantial traffic to the server I am trying to design.
My questions are:
Is it possible / recommended at all to use Netty for this use case? I have seen the discussion here.
Is it possible to use a multithreaded EventExecutor for the channel handlers in the pipeline, so that instead of the channel's EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Is there any example implementation available?
According to the documentation of io.netty.channel.oio, you can use it if you don't have lots of clients. In this case, every connection will be handled in a separate thread and uses Java's old blocking I/O under the hood. Take a look at OioByteStreamChannel::activate:
/**
* Activate this instance. After this call {@link #isActive()} will return {@code true}.
*/
protected final void activate(InputStream is, OutputStream os) {
if (this.is != null) {
throw new IllegalStateException("input was set already");
}
if (this.os != null) {
throw new IllegalStateException("output was set already");
}
if (is == null) {
throw new NullPointerException("is");
}
if (os == null) {
throw new NullPointerException("os");
}
this.is = is;
this.os = os;
}
As you can see, the oio Streams will be used there.
Regarding your comment: you can specify an EventExecutorGroup when adding a handler to the pipeline, for example with an EventExecutorGroup group = new DefaultEventExecutorGroup(4) created beforehand:
new ChannelInitializer<Channel>() {
    @Override
    public void initChannel(Channel ch) {
        ch.pipeline().addLast(group, new YourHandler());
    }
}
Let's take a look at the AbstractChannelHandlerContext:
@Override
public EventExecutor executor() {
if (executor == null) {
return channel().eventLoop();
} else {
return executor;
}
}
Here we see that if you don't specify an EventExecutor for the handler, it falls back to the channel's event loop, i.e. the child event loop group you specified while creating the ServerBootstrap.
new ServerBootstrap()
.group(new OioEventLoopGroup(), new OioEventLoopGroup())
//acceptor group //child group
Here is how reading from channel is invoked AbstractChannelHandlerContext::invokeChannelRead:
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeChannelRead(m);
} else {
executor.execute(new Runnable() { //Invoked by the EventExecutor you specified
@Override
public void run() {
next.invokeChannelRead(m);
}
});
}
}
Even for a few connections I would go with NioEventLoopGroup.
Regarding your question:
Is it possible to use multithreaded EventExecutor to the channel
handlers in the pipeline so that instead of channel EventLoop, the
concurrency is achieved by the EventExecutor thread pool? Will it
ensure that one message from the client will be handled by one thread
through all handlers, while the next message by another thread?
Netty's Channel guarantees that all processing of an inbound or outbound message for a given channel happens on the same thread. You don't have to hack together an EventExecutor of your own to handle this. If serving inbound messages doesn't require long-running processing, your code will look like basic usage of ServerBootstrap. You might find it useful to tune the number of threads in the pool.
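A minimal sketch of what that basic usage might look like (Netty 4.x; the thread counts and YourHandler are placeholders, not something prescribed above):

// Sketch: a boss group for accepting and a worker group whose size you can tune.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections
EventLoopGroup workerGroup = new NioEventLoopGroup(4); // tune this thread count

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)
 .channel(NioServerSocketChannel.class)
 .childHandler(new ChannelInitializer<SocketChannel>() {
     @Override
     public void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new YourHandler());
     }
 });
b.bind(8080).sync();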
I have an application that is communicating with a UDP server. My application listens on one port (say 1234) and sends on another (say 5678). The UDP server I am communicating with also requires a "heartbeat" every 5 seconds, for which I create another thread. When my application first starts up, I create the listen thread, then the heartbeat thread, and then I start sending the UDP server message packets. The only thing, however, is that it seems like all the packets I send out finish before the heartbeat thread starts.
Here is what I have for my listener:
public class MyListener implements Runnable {
private volatile boolean run = true;
private DatagramSocket myDatagramSocket;
private DatagramPacket myDatagramPacket;
private byte[] receiveBuffer;
private int receiveBufferSize;
@Override
public void run(){
while(run){
try {
myDatagramSocket = new DatagramSocket(null);
InetSocketAddress myInetSocketAddress = new InetSocketAddress(1234);
myDatagramSocket.bind(myInetSocketAddress);
receiveBuffer = new byte[2047];
myDatagramPacket = new DatagramPacket(receiveBuffer, 2047);
myDatagramSocket.receive(myDatagramPacket);
byte[] data = myDatagramPacket.getData();
receiveBufferSize = myDatagramPacket.getLength();
switch(messageID){
...
}
} catch (Exception e){
}
}
}
}
Here is what I have for my heartbeat:
public class MyHeartbeat implements Runnable {
private volatile boolean run = true;
private HeartbeatSenderClass heartbeatSender;
@Override
public void run(){
while(run){
try {
TimeUnit.SECONDS.sleep(5);
heartbeatSender.sendHeartbeat();
} catch(Exception e){
}
}
}
}
Here is what I have for my main class:
public class MyApp {
public static void main(String[] args){
MyListener listener = new MyListener();
Thread listenerThread = new Thread(listener);
listenerThread.setName("Listener Thread");
listenerThread.start();
MyHeartbeat heartbeat = new MyHeartbeat();
Thread heartbeatThread = new Thread(heartbeat);
heartbeatThread.setName("Heartbeat Thread");
heartbeatThread.start();
MySender sender = new MySender();
Thread senderThread = new Thread(sender);
senderThread.setName("Sender Thread");
senderThread.start();
}
}
All of my packets are making it to the UDP server, but not as smoothly as I would have thought. I expected that while I am sending packets to the server, my heartbeat would go out every 5 seconds. However, it seems like my heartbeats only go out after my packets are done sending. Also, I believe I am not getting all of the messages from the UDP server. I say this because I have sniffed the UDP packets on my machine and I see data coming from the server that my receiver is not receiving/processing. Any suggestions?
In your heartbeat you have this:
TimeUnit.SECONDS.sleep(5);
heartbeatSender.sendHeartbeat();
So before sending the very first beat, you wait for 5 seconds; no wonder the other threads do their work in the meantime.
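A simple fix, sketched here under the assumption that heartbeatSender is already initialized, is to schedule the heartbeat at a fixed rate with an initial delay of zero instead of sleeping before the first beat:

// Sketch: send the first heartbeat immediately, then one every 5 seconds.
ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
scheduler.scheduleAtFixedRate(new Runnable() {
    public void run() {
        heartbeatSender.sendHeartbeat();
    }
}, 0, 5, TimeUnit.SECONDS); // initial delay 0, period 5 seconds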
The DatagramSocket you use to send the packets is a shared resource that is contended between threads, and if one thread consumes too much of that resource, another one may starve. See: Thread starvation
Also, if you are losing packets, it is because you cannot read them as fast as you should. If UDP packets arrive faster than they can be read, the receive buffer fills up and the remaining packets are discarded.
Under linux, for example you can control the receive buffer with:
sudo sysctl -w net.core.rmem_default=26214400
sudo sysctl -w net.ipv4.udp_mem='26214400 26214400 26214400'
sudo sysctl -w net.ipv4.udp_rmem_min=26214400
But anyway, if we are talking about sustained loss, you should consider having one thread that only reads from the socket into a queue and another thread that processes the data that was read, as sketched below.
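A rough sketch of that reader/queue/processor split (the port, buffer size, and the processing step are placeholders):

// Sketch only: one thread drains the socket as fast as possible into a queue,
// a second thread takes packets off the queue and does the (slower) processing.
final BlockingQueue<DatagramPacket> queue = new LinkedBlockingQueue<DatagramPacket>();
final DatagramSocket socket = new DatagramSocket(1234); // may throw SocketException

Thread reader = new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                byte[] buf = new byte[2047];
                DatagramPacket packet = new DatagramPacket(buf, buf.length);
                socket.receive(packet); // only receive here, nothing else
                queue.put(packet);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}, "Reader Thread");

Thread processor = new Thread(new Runnable() {
    public void run() {
        try {
            while (true) {
                DatagramPacket packet = queue.take();
                // parse packet.getData() / packet.getLength() here
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}, "Processor Thread");

reader.start();
processor.start();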
I am currently working on a project where I have to build a multi-threaded server. I have only just started to work with threads, so please bear with me.
So far I have a class that implements the Runnable interface; below you can see the code I have for its run method.
public void run() {
while(true) {
try {
clientSocket = serversocket.accept();
for (int i = 0; i < 100; i++) {
DataOutputStream respond = new DataOutputStream(clientSocket.getOutputStream());
respond.writeUTF("Hello World! " + i);
try {
Thread.sleep(1000);
} catch(InterruptedException e) {
//
}
}
} catch(IOException e) {
System.out.println(e.getMessage());
}
}
}
Below is the main method, which creates a new object of the server class and starts it on a new thread.
public static void main(String args[]) {
new Thread(new Server(1234, "", false)).start();
}
I know this creates a new thread, but it does not serve multiple clients at once; the first client needs to close the connection before the second one is served. How can I make a multi-threaded server that will serve different client sockets at once? Do I create the thread at clientSocket = serverSocket.accept();?
Yes.
From the docs:
Supporting Multiple Clients
To keep the KnockKnockServer example simple, we designed it to listen for and handle a single connection request. However, multiple client requests can come into the same port and, consequently, into the same ServerSocket. Client connection requests are queued at the port, so the server must accept the connections sequentially. However, the server can service them simultaneously through the use of threads—one thread per each client connection.
The basic flow of logic in such a server is this:
while (true) {
accept a connection;
create a thread to deal with the client;
}
The thread reads from and writes to the client connection as necessary.
https://docs.oracle.com/javase/tutorial/networking/sockets/clientServer.html
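A minimal sketch of that flow, reusing the response loop from the question (MultiClientServer and ClientHandler are names made up for this example):

import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiClientServer {
    public static void main(String[] args) throws IOException {
        ServerSocket serverSocket = new ServerSocket(1234);
        while (true) {
            Socket clientSocket = serverSocket.accept(); // blocks until a client connects
            new Thread(new ClientHandler(clientSocket)).start();
        }
    }
}

class ClientHandler implements Runnable {
    private final Socket socket;

    ClientHandler(Socket socket) {
        this.socket = socket;
    }

    public void run() {
        try (DataOutputStream respond = new DataOutputStream(socket.getOutputStream())) {
            for (int i = 0; i < 100; i++) {
                respond.writeUTF("Hello World! " + i);
                Thread.sleep(1000);
            }
        } catch (IOException | InterruptedException e) {
            System.out.println(e.getMessage());
        }
    }
}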
I'm building a client-server application in Java with sockets. As far as I've understood, creating a thread for every connected client is too expensive. Instead we can use a thread pool executor. As said in the concurrency documentation, we can create a thread pool with a fixed size.
class NetworkService implements Runnable {
private final ServerSocket serverSocket;
private final ExecutorService pool;
public NetworkService(int port, int poolSize)
throws IOException {
serverSocket = new ServerSocket(port);
pool = Executors.newFixedThreadPool(poolSize);
}
public void run() { // run the service
try {
for (;;) {
pool.execute(new Handler(serverSocket.accept()));
}
} catch (IOException ex) {
pool.shutdown();
}
}
}
And it seems we have at most poolSize threads running at any point in time. But what if we need to maintain a number of connections greater than poolSize? How is that going to work?
If you are going to have a really huge number of clients, you should consider NIO for it, because creating a thread for each client will be too expensive.
NIO uses selectors and channels and doesn't require creating a new thread for each connection.
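Roughly, the selector-based approach looks like this (a bare-bones sketch; the port and the buffer handling are placeholders):

// One thread multiplexes accept and read events for many connections.
Selector selector = Selector.open();
ServerSocketChannel server = ServerSocketChannel.open();
server.bind(new InetSocketAddress(1234));
server.configureBlocking(false);
server.register(selector, SelectionKey.OP_ACCEPT);

while (true) {
    selector.select(); // blocks until at least one channel is ready
    Iterator<SelectionKey> it = selector.selectedKeys().iterator();
    while (it.hasNext()) {
        SelectionKey key = it.next();
        it.remove();
        if (key.isAcceptable()) {
            SocketChannel client = server.accept();
            client.configureBlocking(false);
            client.register(selector, SelectionKey.OP_READ);
        } else if (key.isReadable()) {
            ByteBuffer buf = ByteBuffer.allocate(1024);
            ((SocketChannel) key.channel()).read(buf);
            // hand buf off for processing
        }
    }
}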
Have you heard about Netty? I don't know what you are going to implement, but it seems like it would be useful.
I need to build a pool of workers in Java where each worker has its own connected socket; when the worker thread runs, it uses the socket but keeps it open to reuse later. We decided on this approach because the overhead of creating, connecting, and destroying sockets on an ad-hoc basis was too high, so we need a way to pre-initialize a pool of workers with their socket connections, ready to take on work, while keeping the socket resources safe from other threads (sockets are not thread safe). So we need something along these lines...
public class SocketTask implements Runnable {
Socket socket;
public SocketTask(){
//create + connect socket here
}
public void run(){
//use socket here
}
}
On application startup, we want to initialize the workers and, hopefully, the socket connections somehow too...
MyWorkerPool pool = new MyWorkerPool();
for( int i = 0; i < 100; i++)
pool.addWorker( new WorkerThread());
As work is requested by the application, we send tasks to the worker pool for immediate execution...
pool.queueWork( new SocketTask(..));
Updated with Working Code
Based on helpful comments from Gray and jontejj, I've got the following code working...
SocketTask
public class SocketTask implements Runnable {
private String workDetails;
private static final ThreadLocal<Socket> threadLocal =
new ThreadLocal<Socket>(){
@Override
protected Socket initialValue(){
return new Socket();
}
};
public SocketTask(String details){
this.workDetails = details;
}
public void run(){
Socket s = getSocket(); //gets from threadlocal
//send data on socket based on workDetails, etc.
}
public static Socket getSocket(){
return threadLocal.get();
}
}
ExecutorService
ExecutorService threadPool =
Executors.newFixedThreadPool(5, Executors.defaultThreadFactory());
int tasks = 15;
for( int i = 1; i <= tasks; i++){
threadPool.execute(new SocketTask("foobar-" + i));
}
I like this approach for several reasons...
Sockets are local objects (via ThreadLocal) available to the running tasks, eliminating concurrency issues.
Sockets are created once and kept open, reused when new tasks get queued, eliminating socket object create/destroy overhead.
One idea would be to put the Sockets in a BlockingQueue. Then whenever you need a Socket your threads can take() from the queue and when they are done with the Socket they put() it back on the queue.
public void run() {
Socket socket = socketQueue.take();
try {
// use the socket ...
} finally {
socketQueue.put(socket);
}
}
This has the added benefits:
You can go back to using the ExecutorService code.
You can separate the socket communication from the processing of the results.
You don't need a 1-to-1 correspondence between processing threads and sockets. But the socket communication may be 98% of the work, so maybe there's no gain.
When you are done and your ExecutorService completes, you can shutdown your sockets by just dequeueing them and closing them.
This does add the additional overhead of another BlockingQueue but if you are doing Socket communications, you won't notice it.
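To complement the run() method above, the setup and shutdown sides might look roughly like this (pool size, host, and port are placeholders):

// Pre-connect the sockets and put them in the shared queue.
BlockingQueue<Socket> socketQueue = new ArrayBlockingQueue<Socket>(100);
for (int i = 0; i < 100; i++) {
    socketQueue.put(new Socket("localhost", 5544));
}

// ... submit SocketTasks to the ExecutorService as usual ...

// After the ExecutorService has been shut down and has terminated,
// drain the queue and close the pooled sockets.
Socket s;
while ((s = socketQueue.poll()) != null) {
    s.close();
}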
we don't believe ThreadFactory addresses our needs ...
I think you could make this work if you used thread-locals. Your thread factory would create a thread that first opens the socket, stores it in a thread-local, then calls the Runnable arg which does all of the work with the socket, dequeuing jobs from the ExecutorService internal queue. Once it is done the arg.run() method would finish and you could get the socket from the thread-local and close it.
Something like the following. It's a bit messy but you should get the idea.
ExecutorService threadPool =
    Executors.newFixedThreadPool(10,
        new ThreadFactory() {
            public Thread newThread(final Runnable r) {
                Thread thread = new Thread(new Runnable() {
                    public void run() {
                        openSocketAndStoreInThreadLocal();
                        // our tasks would then get the socket from the thread-local
                        r.run();
                        getSocketFromThreadLocalAndCloseIt();
                    }
                });
                return thread;
            }
        });
So your tasks would implement Runnable and look like:
public class SocketWorker implements Runnable {
private final ThreadLocal<Socket> threadLocal;
public SocketWorker(ThreadLocal<Socket> threadLocal) {
this.threadLocal = threadLocal;
}
public void run() {
Socket socket = threadLocal.get();
// use the socket ...
}
}
I think you should use a ThreadLocal
package com.stackoverflow.q16680096;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
public class Main
{
public static void main(String[] args)
{
ExecutorService pool = Executors.newCachedThreadPool();
int nrOfConcurrentUsers = 100;
for(int i = 0; i < nrOfConcurrentUsers; i++)
{
pool.submit(new InitSocketTask());
}
// do stuff...
pool.submit(new Task());
}
}
package com.stackoverflow.q16680096;
import java.net.Socket;
public class InitSocketTask implements Runnable
{
public void run()
{
Socket socket = SocketPool.get();
// Do initial setup here
}
}
package com.stackoverflow.q16680096;
import java.net.Socket;
public final class SocketPool
{
private static final ThreadLocal<Socket> SOCKETS = new ThreadLocal<Socket>(){
@Override
protected Socket initialValue()
{
return new Socket(); // Pass in suitable arguments here...
}
};
public static Socket get()
{
return SOCKETS.get();
}
}
package com.stackoverflow.q16680096;
import java.net.Socket;
public class Task implements Runnable
{
public void run()
{
Socket socket = SocketPool.get();
// Do stuff with socket...
}
}
Where each thread gets its own socket.