I'm using Jetty 9.3.5.v20151012 to deliver a large number of events to clients over websockets. Each event consists of 3 parts: a number, an event type and a timestamp; each event is serialized as a byte[] and sent using a ByteBuffer.
After a certain number of hours/days, depending on the number of clients, I notice an increase in heap memory that the GC never manages to recover.
When the heap (set to 512MB) is almost full, the memory used by the JVM is about 700-800 MB and the CPU is at 100% (it seems like the GC is running very often, trying to clean up). At the beginning, when I start Jetty, the memory is at about 30MB after a GC, but over time this number increases more and more. Eventually the process is killed.
I'm using jvisualvm as a profiler to debug the memory leak, and I've attached some screenshots of the heap dump:
Here is the main code that handles the message sending using ByteBuffer:
I basically have a method that creates a byte[] (fullbytes) for all events that need to be sent in one message:
byte[] numberBytes = ByteBuffer.allocate(4).putFloat(number).array();
byte[] eventBytes = ByteBuffer.allocate(2).putShort(event).array();
byte[] timestampBytes = ByteBuffer.allocate(8).putDouble(timestamp).array();
for (int i = 0; i < eventBytes.length; i++) {
    fullbytes[i + scount * eventLength] = eventBytes[i];
}
for (int i = 0; i < numberBytes.length; i++) {
    fullbytes[eventBytes.length + i + scount * eventLength] = numberBytes[i];
}
for (int i = 0; i < timestampBytes.length; i++) {
    fullbytes[numberBytes.length + eventBytes.length + i + scount * eventLength] = timestampBytes[i];
}
And then another method (called in a separate thread) that sends the bytes over the websocket:
ByteBuffer bb = ByteBuffer.wrap(fullbytes);
wsSession.getRemote().sendBytesByFuture(bb);
bb.clear();
As I've read in a few places (in the documentation or here and here), this issue should not appear, since I'm not using direct ByteBuffers. Could this be a bug related to Jetty / websockets?
Please advise!
EDIT:
I've run some more tests and I've noticed that the problem appears when sending messages to a client that is no longer connected, but for which Jetty has not received the onClose event (for ex. a user puts his laptop in standby). Because the onClose event is not triggered, the server code doesn't unregister the client and keeps trying to send messages to it. I don't know why, but the close event is only received after 1 or 2 hours. Also, sometimes (I don't know the context yet), although the event is received and the client (socket) is unregistered, a reference to a WebSocketSession object (for that client) still hangs around. I haven't found out why this happens yet.
Until then, I have 2 possible workarounds (which would have other good uses as well), but I have no idea how to achieve them:
Always detect when a connection is not open (or temporarily unreachable, for ex. the user puts the laptop in standby). I tried using sendPing() and implementing onFrame(), but I couldn't find a solution. Is there a way to do this?
Periodically "flush" the buffer. How can I discard the messages that were not sent to a client so they don't keep queuing up?
EDIT 2
This may be pointing the topic to another direction so I made another post here.
EDIT 3
I've done some more tests regarding the large number of messages/bytes sent, and I found out why "it seemed" that the memory leak only appeared sometimes: when sending bytes asynchronously on a different thread than the one on which servlet.configure() is called, after a large build-up the memory is not released after the client disconnects. Also, I couldn't reproduce the memory leak when using sendBytes(ByteBuffer), only with sendBytesByFuture(ByteBuffer) and sendBytes(ByteBuffer, WriteCallback).
This seems very strange, but I don't believe I'm doing something "wrong" in the tests.
Code:
@Override
public void configure(WebSocketServletFactory factory) {
    factory.getPolicy().setIdleTimeout(1000 * 0);
    factory.setCreator(new WebSocketCreator() {
        @Override
        public Object createWebSocket(ServletUpgradeRequest req,
                ServletUpgradeResponse resp) {
            return new WSTestMemHandler();
        }
    });
}
@WebSocket
public class WSTestMemHandler {

    private boolean connected = false;
    private int n = 0;

    public WSTestMemHandler() {
    }

    @OnWebSocketClose
    public void onClose(int statusCode, String reason) {
        connected = false;
        connections--; // 'connections' is a counter defined elsewhere (not shown)
        //print debug
    }

    @OnWebSocketError
    public void onError(Throwable t) {
        //print debug
    }

    @OnWebSocketConnect
    public void onConnect(final Session session) throws InterruptedException {
        connected = true;
        connections++;
        //print debug

        // The code running in another thread will trigger the memory leak
        // when the client endpoint is down and messages are still sent,
        // because the GC will not clean up after onClose is received and
        // the client disconnects. If the "while" loop is run in the same
        // thread, the memory can be released when onClose is received, but
        // that would mean holding the onConnect() method and not returning.
        // I think this would be bad practice.
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (connected) {
                    testBytesSend(session);
                    try {
                        Thread.sleep(4);
                    } catch (InterruptedException e) {
                    }
                }
                //print debug
            }
        }).start();
    }

    private void testBytesSend(Session session) {
        try {
            int noEntries = 200;
            ByteBuffer bb = ByteBuffer.allocate(noEntries * 14);
            for (int i = 0; i < noEntries; i++) {
                n += 1.0f;
                bb.putFloat(n);
                bb.putShort((short) 1);
                bb.putDouble(123456789123.0);
            }
            bb.flip();
            session.getRemote().sendBytes(bb, new WriteCallback() {
                @Override
                public void writeSuccess() {
                }

                @Override
                public void writeFailed(Throwable arg0) {
                }
            });
            //print debug
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Your ByteBuffer use is incredibly inefficient.
Don't create all of those minor/tiny ByteBuffers just to get a byte array, and then toss it out. ick.
Note: you don't even use the .array() call correctly, as not all ByteBuffer allocations have a backing array you can access like that.
The byte arrays numberBytes, eventBytes, timestampBytes, and fullbytes should not exist.
Create a single ByteBuffer representing the entire message you intend to send, and allocate it to be either the size you need, or larger.
Then put the individual bytes you want into it, flip it, and give the Jetty implementation that single ByteBuffer.
Jetty will use the standard ByteBuffer information (such as position and limit) to determine what part of that ByteBuffer should actually be sent.
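As a minimal sketch of that approach (the helper name and parameters are illustrative; the 2 + 4 + 8 byte layout matches the question's code):
import java.nio.ByteBuffer;

// One buffer for the entire message: 14 bytes per event.
static ByteBuffer encodeEvents(short[] events, float[] numbers, double[] timestamps) {
    ByteBuffer buf = ByteBuffer.allocate(events.length * 14);
    for (int i = 0; i < events.length; i++) {
        buf.putShort(events[i]);      // event type: 2 bytes
        buf.putFloat(numbers[i]);     // number: 4 bytes
        buf.putDouble(timestamps[i]); // timestamp: 8 bytes
    }
    buf.flip(); // position = 0, limit = number of bytes written
    return buf;
}
The result can be handed straight to Jetty, e.g. wsSession.getRemote().sendBytes(encodeEvents(events, numbers, timestamps));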
I'm trying to use a Flux to stream events to subscribers using RSocket. There can be a huge backlog of events (in the database), and they must be sent out in order, without any gaps, and without flooding either the publisher (out of memory) or the consumer. None of the OverflowStrategy values seem suitable:
IGNORE: I'd like to block (or get a callback when there's more demand), not get an error
ERROR: I'd like to block (or get a callback when there's more demand), not get an error
DROP: bad, because events cannot be skipped (no gaps)
LATEST: bad, because events cannot be skipped (no gaps)
BUFFER: leads to out of memory on publisher
I have everything working, but if I don't limit my rate in the subscribers, the publisher side goes out of memory -- that's bad, as one bad subscriber could kill my service. Apparently I'm misunderstanding how back pressure works. Everywhere I look there is talk of limitRate. This works, but it only works for me on the subscriber side. Using limitRate on the publisher side has no effect at all.
I've used Flux.generate and Flux.create to create the events I want on the publisher side, but they don't seem to respond to back pressure at all. So I must be missing something, as the whole back pressure mechanism in Reactor is described as very transparent and easy to use...
Here's my publisher:
@MessageMapping("events")
public Flux<String> events(String data) {
    Flux<String> flux = Flux.generate(new Consumer<SynchronousSink<String>>() {
        long offset = 0;

        @Override
        public void accept(SynchronousSink<String> emitter) {
            emitter.next("" + offset++);
        }
    });
    return flux.limitRate(100); // limitRate doesn't do anything
}
And my consumer:
@Autowired RSocketRequester requester;

@EventListener(ApplicationReadyEvent.class)
public void run() throws InterruptedException {
    requester.route("events")
            .data("Just Go")
            .retrieveFlux(String.class)
            //.limitRate(1000) // commenting this line makes publisher go OOM
            .bufferTimeout(20000, Duration.ofMillis(10))
            .subscribe(new Consumer<List<String>>() {
                long totalReceived = 0;
                long totalBytes = 0;

                @Override
                public void accept(List<String> s) {
                    totalReceived += s.size();
                    totalBytes += s.stream().mapToInt(String::length).sum();
                    // 'time' is a start-timestamp field defined elsewhere (not shown)
                    System.out.printf("So we received: %4d messages @ %8.1f msg/sec (%d kB/sec)\n",
                            s.size(),
                            ((double) totalReceived / (System.currentTimeMillis() - time)) * 1000,
                            totalBytes / (System.currentTimeMillis() - time));
                    try {
                        Thread.sleep(200); // Delay consumer so publisher has to slow down
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            });

    Thread.sleep(100000); // leave Spring running for a bit (dirty)
}
What I don't understand is why this doesn't work. Flux.generate uses a callback, but it keeps getting called as fast as possible, leading to huge memory allocations in the JVM until it goes OOM. Why does it keep calling generate?
What am I missing?
I'm using a 3rd party library, which has a method:
secureSend(int channel, byte[] data);
This method sends my binary data to the library, and if the data is larger than 64K, the method splits it to 64K chunks and sends them in order.
This method is marked as blocking, so it won't return immediately. It is therefore advised to spawn a thread for each usage of this function:
new Thread(new Runnable() {
    public void run() {
        library.secureSend(channel, mydata);
    }
}).start();
If I'm trying to send larger data (>1Mb), it will take about 30 seconds. This is fine.
However sometimes I need to interrupt the sending because there is a higher priority data to send.
Currently, if I spawn a new thread calling secureSend, it will have to wait, as the library operates in a FIFO manner, i.e. it will first finish the previous sends.
I decompiled the library's class files, and secureSend has the following pseudo algorithm:
public synchronized void secureSend(int c, byte[] data) {
    try {
        local_data = data;
        HAS_MORE_DATA_TO_SEND = (local_data.length > 0);
        while (HAS_MORE_DATA_TO_SEND) {
            // calculates offset and length, and returns whether there is
            // more to send; operates on local_data
            HAS_MORE_DATA_TO_SEND = sendChunk(...);
        }
    } catch (IOException ex) {}
}
I've tried interrupting the thread (I stored a reference to it), but it didn't help.
The library spends a lot of time in that while loop; note, however, that it also catches IOException.
My question: can I somehow interrupt/kill/abort this function call? Maybe by somehow throwing an IOException into the thread? Is this at all possible?
A Producer-Consumer blog post states that:
"2) Producer doesn't need to know about who is consumer or how many consumers are there. Same is true with Consumer."
My problem is that I have an array of data that I need to get from the webserver to clients as soon as possible. Clients can appear mid-calculation, and multiple clients can request the array of data at different times. Once the calculation is complete, it is cached and can then simply be read.
Example use case: while the calculation is occurring, I want to serve each and every datum of the array as soon as possible. I can't use a BlockingQueue because, say, a second client starts to request the array while the first one has already used .take() on the first half of the array; then the second client misses half the data! I need a BlockingQueue where you don't have to take(), but could instead just read(int index).
Solution? I have a good amount of writes on my array, so I wouldn't want to use a CopyOnWriteArrayList? The Vector class should work but would be inefficient?
Is it preferable to use a ThreadSafeList like this and just add a waitForElement() function? I just don't want to reinvent the wheel, and I prefer crowd-tested solutions for multi-threaded problems...
As far as I understand you need to broadcast data to subscribers/clients.
Here are some ways that I know of to approach it.
Pure Java solution: every client has a BlockingQueue, and every time you broadcast a message, you put it in every queue.
for (BlockingQueue client : clients) {
    client.put(msg);
}
RxJava provides a reactive approach. Clients act as subscribers; every time you emit a message, subscribers are notified, and they can choose to cancel their subscription.
Observable<String> observable = Observable.create(sub -> {
    String[] msgs = {"msg1", "msg2", "msg3"};
    for (String msg : msgs) {
        if (!sub.isUnsubscribed()) {
            sub.onNext(msg);
        }
    }
    if (!sub.isUnsubscribed()) { // completes
        sub.onCompleted();
    }
});
Now multiple subscribers can choose to receive messages.
observable.subscribe(System.out::println);
observable.subscribe(System.out::println);
Observables are also a bit functional; subscribers can choose what they need.
observable.filter(msg -> msg.equals("msg2"))
        .map(String::length)
        .subscribe(msgLength -> {
            System.out.println(msgLength); // or do something useful
        });
Akka provides broadcast routers
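For example, a minimal sketch with a classic-actor BroadcastPool (all names here are illustrative, and this assumes the classic akka-actor API):
import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.routing.BroadcastPool;

public class BroadcastDemo {
    public static class Worker extends AbstractActor {
        @Override
        public Receive createReceive() {
            return receiveBuilder()
                    .match(String.class, msg ->
                            System.out.println(self().path().name() + " got: " + msg))
                    .build();
        }
    }

    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("broadcast-demo");
        // A BroadcastPool router forwards each message to every routee.
        ActorRef router = system.actorOf(
                new BroadcastPool(3).props(Props.create(Worker.class)), "router");
        router.tell("msg1", ActorRef.noSender()); // all 3 workers receive "msg1"
        system.terminate();
    }
}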
This is not exactly a trivial problem; but not too hard to solve either.
Assuming your producer is an imperative program, it generates data chunk by chunk, adding each chunk to the cache; the process terminates either successfully or with an error.
The cache should have this interface for the producer to push data into it:
public class Cache
public void add(byte[] bytes)
public void finish(boolean error)
Each consumer obtains a new view from the cache; the view is a blocking data source
public class Cache
public View newView()
public class View
// return null for EOF
public byte[] read() throws Exception
Here's a straightforward implementation
public class Cache
{
    final Object lock = new Object();
    int state = INIT;
    static final int INIT = 0, DONE = 1, ERROR = 2;
    ArrayList<byte[]> list = new ArrayList<>();

    public void add(byte[] bytes)
    {
        synchronized (lock)
        {
            list.add(bytes);
            lock.notifyAll();
        }
    }

    public void finish(boolean error)
    {
        synchronized (lock)
        {
            state = error ? ERROR : DONE;
            lock.notifyAll();
        }
    }

    public View newView()
    {
        return new View();
    }

    public class View
    {
        int index;

        // return null for EOF
        public byte[] read() throws Exception
        {
            synchronized (lock)
            {
                while (state == INIT && index == list.size())
                    lock.wait();

                if (state == ERROR)
                    throw new Exception();

                if (index < list.size())
                    return list.get(index++);

                assert state == DONE && index == list.size();
                return null;
            }
        }
    }
}
It can be optimized a little; most importantly, after state becomes DONE, consumers should not need synchronized; a simple volatile read is enough, which can be achieved by making state volatile.
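For example, a hypothetical producer and two consumers (names illustrative) using the Cache above:
public class CacheDemo {
    public static void main(String[] args) {
        Cache cache = new Cache();

        // Producer: push chunks, then signal successful completion.
        new Thread(() -> {
            for (int i = 0; i < 100; i++)
                cache.add(("chunk-" + i).getBytes());
            cache.finish(false); // consumers see EOF after the last chunk
        }).start();

        // Each consumer reads the full sequence at its own pace via its own View.
        Runnable consumer = () -> {
            Cache.View view = cache.newView();
            try {
                for (byte[] chunk; (chunk = view.read()) != null; ) // blocks as needed
                    System.out.println(new String(chunk));
            } catch (Exception e) {
                e.printStackTrace(); // thrown if the producer called finish(true)
            }
        };
        new Thread(consumer).start();
        new Thread(consumer).start();
    }
}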
I am building a server that sends data via a single TCP socket for each user every 2 seconds, on a separate thread. There are also special events occasionally sent alongside the regular data. Sometimes, data from multiple packets would get mixed up, so I created a queue to make sure that does not happen. However, the issue is still there. Is my approach incorrect, or is there something wrong with my code?
protected void sendData(byte[] data) {
    if (isSendingData) {
        dataQueue.add(data);
        return;
    }

    isSendingData = true;

    Thread sendThread = new Thread() {
        public void run() {
            try {
                BufferedOutputStream outStream =
                        new BufferedOutputStream(connectionSocket.getOutputStream());
                outStream.write(data);
                outStream.flush();

                // check queue, if there are data, send
                byte[] moreData = null;
                if (dataQueue.size() > 0) {
                    moreData = dataQueue.remove(0);
                }
                isSendingData = false;
                if (moreData != null) {
                    sendData(moreData);
                }
            } catch (Exception e) {
                System.out.println("Error sending data to peripheral: " + e);
                isSendingData = false;
            }
        }
    };
    sendThread.start();
}
The proper idiom to remove concurrency issues using a queue is to have a long-lived thread run an infinite loop which takes elements from the queue and processes them. Typically you'll use a blocking queue so that on each iteration the thread goes to sleep until there is an item to process.
Your solution deviates from the above in many aspects. For example:
if (isSendingData) {
dataQueue.add(data);
return;
}
isSendingData = true;
—if this method is called concurrently, this will result in a race condition: both threads can read isSendingData as false, then concurrently proceed to sending data over the network. If isSendingData isn't volatile, you've also got a data race on it (entirely separate from the race condition explained above).
if (dataQueue.size() > 0) {
moreData = dataQueue.remove(0);
}
—this is another race condition: after you read size as zero, the other thread can add an item to the queue. Now that item will possibly never be processed. It will linger in the queue until another such thread is started.
The more obvious way your solution is not correct is that the thread you start has no loop and is intended to process just one message, plus possibly one extra message from the queue. This should be reworked so that there are no special cases and sendData always, unconditionally, submits to a queue and never does any sending on its own, as sketched below.
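A minimal sketch of that idiom; connectionSocket and the error message come from the question, while the class name and daemon setting are illustrative:
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// sendData only enqueues; a single long-lived thread drains the queue.
class SocketSender {
    private final BlockingQueue<byte[]> dataQueue = new LinkedBlockingQueue<>();

    SocketSender(Socket connectionSocket) throws IOException {
        BufferedOutputStream outStream =
                new BufferedOutputStream(connectionSocket.getOutputStream());
        Thread sender = new Thread(() -> {
            try {
                while (true) {
                    byte[] data = dataQueue.take(); // sleeps until an item arrives
                    outStream.write(data);
                    outStream.flush();
                }
            } catch (Exception e) {
                System.out.println("Error sending data to peripheral: " + e);
            }
        });
        sender.setDaemon(true);
        sender.start();
    }

    // Always enqueues, never sends on the caller's thread: no race conditions.
    void sendData(byte[] data) {
        dataQueue.add(data);
    }
}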
I would do this completely differently. You don't want arbitrarily long queues in your application.
Have your heartbeat thread synchronize on the socket when sending the heartbeat.
Don't have it sending anything else.
Get rid of the queue, isSendingData, etc.
Have your main application synchronize on the socket when it wants to send, and just send whenever it needs to.
Use the same BufferedOutputStream or BufferedWriter for all sending, and flush it after each send (see the sketch below).
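A minimal sketch of this approach (class and method names are illustrative; the point is the single shared stream and the synchronization):
import java.io.BufferedOutputStream;
import java.io.IOException;
import java.net.Socket;

class Connection {
    private final BufferedOutputStream out; // the one stream used for all sending

    Connection(Socket socket) throws IOException {
        this.out = new BufferedOutputStream(socket.getOutputStream());
    }

    // Called by the heartbeat thread every 2 seconds, and by the main
    // application whenever it has data or special events to send.
    void send(byte[] data) throws IOException {
        synchronized (out) { // one writer at a time; no queue needed
            out.write(data);
            out.flush();     // flush after each send
        }
    }
}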
I wrote a server that sends a large number of messages to all clients after they connect.
@Override
public void channelActive(ChannelHandlerContext ctx) throws Exception {
    while (true) {
        content = arrayblockqueue.poll();
        ctx.writeAndFlush(content + "\r\n");
    }
}
After sending thousands of messages, the channel does not send messages anymore. Through debugging, I found that AbstractNioByteChannel.incompleteWrite was invoked and SelectionKey.OP_WRITE is added to the selection key when the network is congested. Once OP_WRITE is set, AbstractNioUnsafe.isFlushPending() returns true, so the flush() cannot actually be performed. How can I make Netty recover from this situation? Or am I using Netty in a wrong way?
Your handler method is invoked directly from an I/O thread. Until your handler method returns, the I/O thread which called it cannot perform any I/O, and that's why you are not seeing anything written.
Looking at your code, what you want is to take a message from a blocking queue and write it to a channel. Instead of using a blocking queue, you can just write to the channel; almost all operations in Netty are thread-safe. For example:
public static void main(String[] args) throws Exception {
    ...
    Channel ch = ...;
    for (int i = 0; i < 1000000; i++) {
        ch.writeAndFlush(String.valueOf(i) + "\r\n");
    }
    ...
}

// And your handler doesn't need an arrayblockingqueue.
However, the code above will probably make Netty's event queue grow infinitely, resulting in an OutOfMemoryError. To prevent the write requests from being queued infinitely, you have to use the future returned by the writeAndFlush() operation.
for (int i = 0; i < 1000000; i++) {
    ChannelFuture f = ch.writeAndFlush(String.valueOf(i) + "\r\n");
    if ((i + 1) % 100 == 0) {
        // Wait until the write request is actually finished
        // so that the event queue becomes empty.
        f.sync();
    }
}