Why can Netty not continuously send messages? - java

I wrote a server that sends a large number of messages to all clients after they connect.
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    while (true) {
        content = arrayblockqueue.poll();
        ctx.writeAndFlush(content + "\r\n");
    }
}
After sending thousands of messages, the channel does not send anything anymore. Through debugging, I found that AbstractNioByteChannel.incompleteWrite was invoked and SelectionKey.OP_WRITE was added to the selectionKey when the network was congested. After OP_WRITE is set, AbstractNioUnsafe.isFlushPending() returns true, so the flush() is never actually performed. How can I let Netty recover from this situation? Or am I using Netty in a wrong way?

Your handler method is invoked directly from an I/O thread. Until your handler method returns, the I/O thread which called the handler method cannot perform any I/O, and that's why you are not seeing anything being written.
Looking at your code, what you want is to get a message from a blocking queue and write it to a channel. Instead of using a blocking queue, you can just write to the channel directly. Almost all operations in Netty are thread safe. For example:
public static void main(String[] args) throws Exception {
    ...
    Channel ch = ...;
    for (int i = 0; i < 1000000; i++) {
        ch.writeAndFlush(String.valueOf(i) + "\r\n");
    }
    ...
}
// And your handler doesn't need an ArrayBlockingQueue.
However, the code above will probably make the event queue of Netty grow infinitely, resulting in an OutOfMemoryError. To prevent the write requests from being queued infinitely, you have to use the future returned by the writeAndFlush() operation.
for (int i = 0; i < 1000000; i++) {
    ChannelFuture f = ch.writeAndFlush(String.valueOf(i) + "\r\n");
    if ((i + 1) % 100 == 0) {
        // Wait until the write request is actually finished
        // so that the event queue becomes empty.
        f.sync();
    }
}
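If you prefer not to block with sync(), a non-blocking variant (a sketch, not part of the original answer) is to respect the channel's writability flag, which Netty toggles when the outbound buffer crosses its write water marks:
// Hedged sketch: throttle on Channel.isWritable() instead of blocking with sync().
// In a real handler you would react to channelWritabilityChanged() rather than poll.
for (int i = 0; i < 1000000; i++) {
    while (!ch.isWritable()) {
        Thread.sleep(1); // crude back-off, for illustration only
    }
    ch.writeAndFlush(String.valueOf(i) + "\r\n");
}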

Related

How to run a task for a specific amount of time

I'm implementing some sort of chat application and I need some help. This is the simplified code:
//...
Boolean stop = false;
while (!stop) {
    ServerRequest message = (ServerRequest) ois.readObject();
    broadcastMessage((String) message.getData()); // this method sends the client's message to all the other clients on the server
    stop = (System.nanoTime() - start >= handUpTime); // I want to let the client send his messages for no more than handUpTime seconds
} //...
I want to let a client send messages to the server for a certain amount of time (handUpTime) and then "block" him, but I don't know how to do this in an "elegant" manner. Of course, my code stumbles on the ois.readObject() part, as the system waits to receive a message and keeps running for more than handUpTime seconds. How can I solve this problem? I'm open to other approaches too.
You can try:
ExecutorService executorService = Executors.newSingleThreadExecutor();
Callable<Object> callable = () -> {
    // Perform some blocking computation
    return someObject;
};
Future<Object> future = executorService.submit(callable);
Object result = future.get(YOUR_TIMEOUT, TimeUnit.SECONDS);
If the future.get() doesn't return in a certain amount of time, it throws a TimeoutException, so you should handle the exception. See this post.
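Applied to the read loop from the question, a hedged sketch might look like this (ois, broadcastMessage, ServerRequest and handUpTime are the names from the question; note that cancelling the future does not actually unblock ois.readObject(), it only stops the loop):
ExecutorService executor = Executors.newSingleThreadExecutor();
Callable<ServerRequest> readTask = () -> (ServerRequest) ois.readObject();
long deadline = System.nanoTime() + handUpTime;
while (System.nanoTime() < deadline) {
    Future<ServerRequest> future = executor.submit(readTask);
    try {
        ServerRequest message = future.get(deadline - System.nanoTime(), TimeUnit.NANOSECONDS);
        broadcastMessage((String) message.getData());
    } catch (TimeoutException e) {
        future.cancel(true); // best effort: the blocked read itself is not interrupted
        break;
    } catch (InterruptedException | ExecutionException e) {
        break;
    }
}
executor.shutdownNow();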

Jetty - Possible memory leak when using websockets and ByteBuffer

I'm using Jetty 9.3.5.v20151012 to deliver a large number of events to clients using websockets. The events consist of 3 parts: a number, an event type and a timestamp, and each event is serialized as a byte[] and sent using a ByteBuffer.
After a certain number of hours/days, depending on the number of clients, I notice an increase in heap memory that the GC cannot recover.
When the heap (set to 512MB) is almost full, the memory used by the JVM is about 700-800 MB and the CPU is at 100% (it seems like the GC is trying very often to clean up). At the beginning, when I start Jetty, the memory is at about 30MB after calling the GC, but after some time this number increases more and more. Eventually the process is killed.
I'm using jvisualvm as a profiler for memory leak debugging and I've attached some screenshots of the heap dump:
Here is the main code that handles the message sending using ByteBuffer:
I basically have a method that creates a byte[] (fullbytes) for all events that need to be sent in one message:
byte[] numberBytes = ByteBuffer.allocate(4).putFloat(number).array();
byte[] eventBytes = ByteBuffer.allocate(2).putShort(event).array();
byte[] timestampBytes = ByteBuffer.allocate(8).putDouble(timestamp).array();

for (int i = 0; i < eventBytes.length; i++) {
    fullbytes[i + scount * eventLength] = eventBytes[i];
}
for (int i = 0; i < numberBytes.length; i++) {
    fullbytes[eventBytes.length + i + scount * eventLength] = numberBytes[i];
}
for (int i = 0; i < timestampBytes.length; i++) {
    fullbytes[numberBytes.length + eventBytes.length + i + scount * eventLength] = timestampBytes[i];
}
And then another method (called in a separate thread) that sends the bytes over the websocket:
ByteBuffer bb = ByteBuffer.wrap(fullbytes);
wsSession.getRemote().sendBytesByFuture(bb);
bb.clear();
As I've read in a few places (in the documentation or here and here), this issue should not appear, since I'm not using direct ByteBuffers. Could this be a bug related to Jetty / websockets?
Please advise!
EDIT:
I've made some more tests and I have noticed that the problem appears when sending messages to a client that is no longer connected but for which Jetty has not received the onClose event (for example, a user puts his laptop in standby). Because the onClose event is not triggered, the server code doesn't unregister the client and keeps trying to send messages to it. I don't know why, but the close event is received after 1 or 2 hours. Also, sometimes (I don't know the context yet), although the event is received and the client (socket) is unregistered, a reference to a WebSocketSession object (for that client) is still retained. I haven't found out why this happens yet.
Until then, I have 2 possible workarounds, but I have no idea how to achieve them (they have other good uses as well):
Always detect when a connection is not open (or closed temporarily, for example when a user puts a laptop in standby). I tried using sendPing() and implementing onFrame(), but I couldn't find a solution. Is there a way to do this?
Periodically "flush" the buffer. How can I discard the messages that were not sent to the client so they don't keep on queuing?
EDIT 2
This may be pointing the topic to another direction so I made another post here.
EDIT 3
I've done some more tests regarding the large number of messages/bytes sent and I found out why "it seemed" that the memory leak only appeared sometimes: when sending bytes asynchronously on a different thread than the one on which servlet.configure() is called, after a large build-up the memory is not released after the client disconnects. Also, I couldn't reproduce the memory leak when using sendBytes(ByteBuffer), only with sendBytesByFuture(ByteBuffer) and sendBytes(ByteBuffer, WriteCallback).
This seems very strange, but I don't believe I'm doing something "wrong" in the tests.
Code:
@Override
public void configure(WebSocketServletFactory factory) {
    factory.getPolicy().setIdleTimeout(1000 * 0);
    factory.setCreator(new WebSocketCreator() {
        @Override
        public Object createWebSocket(ServletUpgradeRequest req,
                ServletUpgradeResponse resp) {
            return new WSTestMemHandler();
        }
    });
}

@WebSocket
public class WSTestMemHandler {
    private boolean connected = false;
    private int n = 0;

    public WSTestMemHandler() {
    }

    @OnWebSocketClose
    public void onClose(int statusCode, String reason) {
        connected = false;
        connections--;
        //print debug
    }

    @OnWebSocketError
    public void onError(Throwable t) {
        //print debug
    }

    @OnWebSocketConnect
    public void onConnect(final Session session) throws InterruptedException {
        connected = true;
        connections++;
        //print debug

        //the code running in another thread will trigger the memory leak
        //when the client endpoint is down and messages are still sent,
        //because the GC will not clean up after onClose is received and
        //the client disconnects.
        //if the "while" loop is run in the same thread, the memory
        //can be released when onClose is received, but that would
        //mean holding up the onConnect() method and not returning. I think
        //this would be bad practice.
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (connected) {
                    testBytesSend(session);
                    try {
                        Thread.sleep(4);
                    } catch (InterruptedException e) {
                    }
                }
                //print debug
            }
        }).start();
    }

    private void testBytesSend(Session session) {
        try {
            int noEntries = 200;
            ByteBuffer bb = ByteBuffer.allocate(noEntries * 14);
            for (int i = 0; i < noEntries; i++) {
                n += 1.0f;
                bb.putFloat(n);
                bb.putShort((short) 1);
                bb.putDouble(123456789123.0);
            }
            bb.flip();
            session.getRemote().sendBytes(bb, new WriteCallback() {
                @Override
                public void writeSuccess() {
                }

                @Override
                public void writeFailed(Throwable arg0) {
                }
            });
            //print debug
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Your ByteBuffer use is incredibly inefficient.
Don't create all of those tiny ByteBuffers just to get a byte array and then toss them out. Ick.
Note: you don't even use the .array() call correctly, as not all ByteBuffer allocations have a backing array you can access like that.
The byte arrays numberBytes, eventBytes, timestampBytes, and fullbytes should not exist.
Create a single ByteBuffer representing the entire message you intend to send, and allocate it to be either the size you need, or larger.
Then put the individual bytes you want into it, flip it, and give the Jetty implementation that single ByteBuffer.
Jetty will use the standard ByteBuffer information (such as position and limit) to determine what part of that ByteBuffer should actually be sent.
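A minimal sketch of that approach, assuming the field names from the question (number, event, timestamp, noEntries) and a Jetty Session:
// Hedged sketch only: build the whole message in one ByteBuffer and hand it to Jetty.
ByteBuffer message = ByteBuffer.allocate(noEntries * 14); // 4 + 2 + 8 bytes per event
for (int i = 0; i < noEntries; i++) {
    message.putFloat(number);
    message.putShort(event);
    message.putDouble(timestamp);
}
message.flip(); // position = 0, limit = bytes written, ready to be drained by Jetty
session.getRemote().sendBytes(message);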

Handle Java socket concurrency

I am building a server that sends data via a single TCP socket for each user every 2 seconds, on a separate thread. There are also special events occasionally sent alongside the regular data. Sometimes data from multiple packets would get mixed up, so I created a queue to make sure that does not happen. However, the issue is still there. Is my approach incorrect, or is there something wrong with my code?
protected void sendData(byte[] data) {
    if (isSendingData) {
        dataQueue.add(data);
        return;
    }
    isSendingData = true;

    Thread sendThread = new Thread() {
        public void run() {
            try {
                BufferedOutputStream outStream = new BufferedOutputStream(connectionSocket.getOutputStream());
                outStream.write(data);
                outStream.flush();

                // check queue, if there are data, send
                byte[] moreData = null;
                if (dataQueue.size() > 0) {
                    moreData = dataQueue.remove(0);
                }
                isSendingData = false;
                if (moreData != null) {
                    sendData(moreData);
                }
            }
            catch (Exception e) {
                System.out.println("Error sending data to peripheral: " + e);
                isSendingData = false;
            }
        }
    };
    sendThread.start();
}
The proper idiom to remove concurrency issues using a queue is to have a long-lived thread run an infinite loop which takes elements from the queue and processes them. Typically you'll use a blocking queue so that on each iteration the thread goes to sleep until there is an item to process.
Your solution deviates from the above in many aspects. For example:
if (isSendingData) {
    dataQueue.add(data);
    return;
}
isSendingData = true;
—if this method is called concurrently, this will result in a race condition: both threads can read isSendingData as false, then concurrently proceed to sending data over the network. If isSendingData isn't volatile, you've also got a data race on it (entirely separate from the race condition explained above).
if (dataQueue.size() > 0) {
    moreData = dataQueue.remove(0);
}
—this is another race condition: after you read size as zero, the other thread can add an item to the queue. Now that item will possibly never be processed. It will linger in the queue until another such thread is started.
The more obvious way your solution is not correct is that the thread you start has no loops and is intended to just process one message, plus possibly one extra message in the queue. This should be reworked so that there are no special cases and sendData always, unconditionally, submits to a queue and never does any sending on its own.
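A minimal sketch of that single-writer idiom, assuming the connectionSocket field from the question (names here are illustrative, not a definitive implementation):
// Hedged sketch: one long-lived sender thread drains a blocking queue; sendData()
// only ever enqueues, so writes can never interleave.
private final BlockingQueue<byte[]> dataQueue = new LinkedBlockingQueue<>();

protected void sendData(byte[] data) {
    dataQueue.add(data); // never write from the caller's thread
}

private void startSenderThread() throws IOException {
    final BufferedOutputStream out = new BufferedOutputStream(connectionSocket.getOutputStream());
    Thread sender = new Thread(() -> {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                byte[] next = dataQueue.take(); // sleeps until something is queued
                out.write(next);
                out.flush();
            }
        } catch (InterruptedException | IOException e) {
            // connection broken or shutdown requested; let the thread exit
        }
    });
    sender.setDaemon(true);
    sender.start();
}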
I would do this completely differently. You don't want arbitrarily long queues in your application.
Have your heartbeat thread synchronize on the socket when sending the heartbeat.
Don't have it sending anything else.
Get rid of the queue, isSendingData, etc.
Have your main application synchronize on the socket when it wants to send, and just send whenever it needs to.
Use the same BufferedOutputStream or BufferedWriter for all sending, and flush it after each send.
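A sketch of that alternative, assuming a single shared BufferedOutputStream (outStream) per connection:
// Hedged sketch: every writer synchronizes on the same lock (here the socket),
// so heartbeats and application data can never interleave mid-message.
void send(byte[] data) throws IOException {
    synchronized (connectionSocket) {
        outStream.write(data);
        outStream.flush();
    }
}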

How to make a thread wait until a variable reaches a specific value (Multi-threaded Java)

I have a server program which accepts client connections. These client connections can belong to many streams; for example, two or more clients can belong to the same stream. For each stream I have to pass on one message, but I have to wait until all of the stream's connections are established. For this I maintain the following data structure.
ConcurrentHashMap<Integer, AtomicLong> conhasmap = new ConcurrentHashMap<Integer, AtomicLong>();
The Integer is the stream ID and the AtomicLong is the number of client connections still to wait for. To make the thread for a given stream wait until the AtomicLong reaches a specific value, I used the following loop. The first packet of a stream carries its stream ID and the number of connections to wait for; with each new connection I decrease the number of connections to wait for.
while (conhasmap.get(conectionID) != new AtomicLong(0)) {
    // Do nothing
}
However, this loop blocks the other threads. According to this answer, it does a volatile read. How can I modify the code so that the thread for a given stream waits until its counter reaches a specific value?
If you're using Java 8, CompletableFuture could be a good fit. Here's a complete, contrived example which waits for 5 clients to connect and send a message to a server (simulated using a BlockingQueue with offer/poll).
In this example, when the expected client connected message count is reached, a CompletableFuture hook is completed, which then runs arbitrary code on any thread of your choice.
In this example, you don't have any complex thread wait/notify setups or busy wait loops.
package so.thread.state;

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

public class Main {

    public static String CONNECTED_MSG = "CONNECTED";
    public static Long EXPECTED_CONN_COUNT = 5L;

    public static ExecutorService executor = Executors.newCachedThreadPool();
    public static BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    public static AtomicBoolean done = new AtomicBoolean(false);

    public static void main(String[] args) throws Exception {
        // add a "server" thread
        executor.submit(() -> server());

        // add 5 "client" threads
        for (int i = 0; i < EXPECTED_CONN_COUNT; i++) {
            executor.submit(() -> client());
        }

        // clean shut down
        Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        done.set(true);
        Thread.sleep(TimeUnit.SECONDS.toMillis(1));
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.SECONDS);
    }

    public static void server() {
        print("Server started up");

        // track # of client connections established
        AtomicLong connectionCount = new AtomicLong(0L);

        // at startup, create my "hook"
        CompletableFuture<Long> hook = new CompletableFuture<>();
        hook.thenAcceptAsync(Main::allClientsConnected, executor);

        // consume messages
        while (!done.get()) {
            try {
                String msg = queue.poll(5, TimeUnit.MILLISECONDS);
                if (null != msg) {
                    print("Server received client message");
                    if (CONNECTED_MSG.equals(msg)) {
                        long count = connectionCount.incrementAndGet();
                        if (count >= EXPECTED_CONN_COUNT) {
                            hook.complete(count);
                        }
                    }
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        print("Server shut down");
    }

    public static void client() {
        queue.offer(CONNECTED_MSG);
        print("Client sent message");
    }

    public static void allClientsConnected(Long count) {
        print("All clients connected, count: " + count);
    }

    public static void print(String msg) {
        System.out.println(String.format("[%s] %s", Thread.currentThread().getName(), msg));
    }
}
You get output like this
[pool-1-thread-1] Server started up
[pool-1-thread-5] Client sent message
[pool-1-thread-3] Client sent message
[pool-1-thread-2] Client sent message
[pool-1-thread-6] Client sent message
[pool-1-thread-4] Client sent message
[pool-1-thread-1] Server received client message
[pool-1-thread-1] Server received client message
[pool-1-thread-1] Server received client message
[pool-1-thread-1] Server received client message
[pool-1-thread-1] Server received client message
[pool-1-thread-4] All clients connected, count: 5
[pool-1-thread-1] Server shut down
Your expression:
conhasmap.get(conectionID) != new AtomicLong(0)
will always be true because you are comparing the object references, which will never be equal, instead of the values. The better expression would be:
conhasmap.get(conectionID).longValue() != 0L
However, looping like this without wait/notify logic within the loop is not good practice because it constantly consumes CPU time. Instead, each thread should call .wait() on the AtomicLong instance, and whenever the counter is decremented or incremented you should call .notifyAll() on the AtomicLong instance to wake up each waiting thread so it can re-check the expression. Note that AtomicLong itself does not call notifyAll() when it is modified, so you have to do this yourself.
AtomicLong al = conhasmap.get(conectionID);
synchronized (al) {
    while (al.longValue() != 0L) {
        al.wait(100); // wait up to 100 millis to be notified
    }
}
In the code that increments/decrements, it will look like:
AtomicLong al = conhasmap.get(conectionID);
synchronized (al) {
    if (al.decrementAndGet() == 0L) {
        al.notifyAll();
    }
}
I personally would not use an AtomicLong for this counter because you are not benefiting from its lock-free behavior. Just use a plain long field guarded by a dedicated lock object instead, because you will need to synchronize on something for the wait()/notify() logic anyway.
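A hedged sketch of that suggestion (the class and method names are illustrative, not from the question):
// Plain counter guarded by its own monitor; wait()/notifyAll() require the lock anyway.
class StreamLatch {
    private long remaining;

    StreamLatch(long expectedConnections) {
        this.remaining = expectedConnections;
    }

    // Called by each client connection belonging to the stream.
    synchronized void connectionEstablished() {
        if (--remaining == 0) {
            notifyAll();
        }
    }

    // Called by the thread that must wait until the whole stream is established.
    synchronized void awaitAllConnected() throws InterruptedException {
        while (remaining != 0) {
            wait();
        }
    }
}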

Sequential Channel Writes Send Corrupted Data in Java NIO

I have a server that uses non-blocking sockets (NIO). The server runs in a separate thread, and there is another thread called Game. The Game thread holds the server object and uses server.sendMessage; the Server thread only reads data. When I call sendMessage two times sequentially for 2 packets in a while loop, after a moment I get a "java.io.StreamCorruptedException: invalid stream header: 6B6574B4" error in the client.
Part of the server code:
public void write(SelectionKey channelKey, byte[] buffer) {
    if (buffer != null) {
        int bytesWritten;
        try {
            SocketChannel channel = (SocketChannel) channelKey.channel();
            synchronized (channel) {
                bytesWritten = channel.write(ByteBuffer.wrap(buffer));
            }
            if (bytesWritten == -1) {
                resetKey(channelKey);
                disconnected(channelKey);
            }
        } catch (Exception e) {
            resetKey(channelKey);
            disconnected(channelKey);
        }
    }
}
public void broadcast(byte[] buf, SelectionKey fr) {
    synchronized (clientList) {
        Iterator<SelectionKey> i = clientList.iterator();
        while (i.hasNext()) {
            SelectionKey key = i.next();
            if (fr != key)
                write(key, buf);
        }
    }
}

public synchronized void sendMessage(Packets pk) {
    broadcast(pk.toByteArray(), null);
}
My guess (from the small amount of code you have included) is that you are not framing your messages at all. Even though you send 2 messages separately, the I/O layer may split or combine them in various ways, so the receiver can get part of one message attached to a previous message. You should use some sort of "message" protocol that tells the receiver exactly how many bytes to consume, so that it can correctly parse each incoming message (e.g. write the message byte length first, then the message bytes).
As a side note, the write() method is not guaranteed to write all the bytes in one call, so you should be handling the return value and writing the remaining bytes as necessary.
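A minimal sketch of both points, assuming the write() method from the question (in non-blocking mode a write that cannot make progress should register OP_WRITE instead of looping; see the next answer):
// Hedged sketch: length-prefix each message and keep writing until the frame is fully sent.
ByteBuffer frame = ByteBuffer.allocate(4 + buffer.length);
frame.putInt(buffer.length); // the receiver reads this first and then consumes exactly that many bytes
frame.put(buffer);
frame.flip();
while (frame.hasRemaining()) { // a single write() may send only part of the frame
    channel.write(frame);
}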
You need to flip() before writing and compact() afterwards, and you need to stop assuming that one write() writes the entire buffer. It returns a value for a reason. You need to loop, or if you're in non-blocking mode you need to proceed as follows:
Write.
If the write didn't complete fully, register the channel for OP_WRITE and return to the select loop.
When the channel becomes writable, try the write again, and if it still doesn't complete just keep looping.
Otherwise deregister OP_WRITE, as in the sketch below.
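A rough sketch of that select-loop pattern, under the assumption (not from the question) that the pending buffer is stored in the key's attachment:
// Hedged sketch: called when the key is writable, or right after queueing new data.
ByteBuffer pending = (ByteBuffer) key.attachment();
SocketChannel channel = (SocketChannel) key.channel();
channel.write(pending);
if (pending.hasRemaining()) {
    // the socket buffer is full: let the selector wake us when it drains
    key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
} else {
    // everything was written: stop watching for writability
    key.interestOps(key.interestOps() & ~SelectionKey.OP_WRITE);
}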
