Background
I have implemented an adapter interface using the RPC protocol, but recently I have been tasked with implementing the interface using a WebSocket listener. With RPC, I was easily able to start an RPC listener thread to receive events on a separate thread, but I'm not finding it so simple with JSR 356.
The Question
I'm attempting to implement a Java WebSocket ClientEndpoint that connects to a subscription URI, but I want to do so in a manner that utilizes multi-threading. I've been having a hard time finding any examples where multi-threading is needed from a client endpoint perspective. Is this even possible?
I need the WebSocket message handler to handle messages without blocking the main thread. I have not implemented the message handler yet because I'm not sure how to go about creating it in a way to accomplish what I want. Can anyone help point me in a better direction? Here's what I have so far:
EventHandler.java
@ClientEndpoint
public class EventHandler {
private URI subscriptionURI;
private Session clientSession;
public EventHandler(URI subscriptionURI) throws URISyntaxException {
this.subscriptionURI = subscriptionURI;
}
/**
* Attempts to connect to the CADI WebSocket server.
* @throws Exception
*/
public void connect() throws Exception {
// Grab the WebSocket container and attempt to connect to the subscription URI
WebSocketContainer container = ContainerProvider.getWebSocketContainer();
container.connectToServer(this, subscriptionURI);
}
/**
* Closes the CADI WebSocket client session.
*/
public void close() {
try {
// Close the client session if it is open
if(clientSession != null && clientSession.isOpen())
clientSession.close();
}
catch(Exception e) {
LogMaster.getErrorLogger().error("Could not close the WebSocket client session. It may have been closed already.", e);
}
}
@OnOpen
public void socketOpened(Session session) {
this.clientSession = session;
}
}
Here is how I start a new thread to connect to the WebSocket. What are the implications of this, though? Are subsequent messages received on the WebSocket going to block the main thread still?
EventHandler eventHandler = new EventHandler(new URI("wss://localhost/Example"));
new Thread()
{
@Override
public void run() {
try {
eventHandler.connect();
}
catch (Exception e) {
LogMaster.getErrorLogger().error("Could not start EventHandler.", e);
}
}
}.start();
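For context, here is roughly the kind of message handler I have in mind: an @OnMessage method that hands each message off to its own executor so processing never ties up the container's delivery thread (the workers pool and the handleEvent method are placeholders of mine, not working code):
private final ExecutorService workers = Executors.newFixedThreadPool(4); // placeholder pool size

@OnMessage
public void onMessage(String message) {
    // Hand the message off so processing never blocks the container's delivery thread
    workers.submit(() -> handleEvent(message)); // handleEvent is a hypothetical processing method
}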
Related
I am familiar with Netty basics and have used it to build a typical application server running on TCP, designed to serve many clients/connections. However, I recently got a requirement to build a server designed to handle a handful of clients, or only one client, most of the time. But the client is the gateway to many devices and therefore generates substantial traffic towards the server I am trying to design.
My questions are:
Is it possible / recommended at all to use Netty for this use case? I have seen the discussion here.
Is it possible to use a multithreaded EventExecutor for the channel handlers in the pipeline so that, instead of the channel's EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Is there any example implementation available?
According to the documentation of io.netty.channel.oio, you can use it if you don't have lots of clients. In this case, every connection will be handled in a separate thread, using Java's old blocking IO under the hood. Take a look at OioByteStreamChannel::activate:
/**
* Activate this instance. After this call {@link #isActive()} will return {@code true}.
*/
protected final void activate(InputStream is, OutputStream os) {
if (this.is != null) {
throw new IllegalStateException("input was set already");
}
if (this.os != null) {
throw new IllegalStateException("output was set already");
}
if (is == null) {
throw new NullPointerException("is");
}
if (os == null) {
throw new NullPointerException("os");
}
this.is = is;
this.os = os;
}
As you can see, the oio Streams will be used there.
According to your comment: you can specify an EventExecutorGroup while adding a handler to the pipeline, like this:
new ChannelInitializer<Channel>() {
    @Override
    public void initChannel(Channel ch) {
        // Passing your EventExecutorGroup here moves this handler's callbacks off the channel's event loop
        ch.pipeline().addLast(yourEventExecutorGroup, new YourHandler());
    }
}
Let's take a look at the AbstractChannelHandlerContext:
@Override
public EventExecutor executor() {
if (executor == null) {
return channel().eventLoop();
} else {
return executor;
}
}
Here we see that if you don't register your own EventExecutor, it will use the channel's event loop, which comes from the child event loop group you specified while creating the ServerBootstrap.
new ServerBootstrap()
.group(new OioEventLoopGroup(), new OioEventLoopGroup())
//acceptor group //child group
Here is how reading from the channel is invoked; see AbstractChannelHandlerContext::invokeChannelRead:
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
EventExecutor executor = next.executor();
if (executor.inEventLoop()) {
next.invokeChannelRead(m);
} else {
executor.execute(new Runnable() { //Invoked by the EventExecutor you specified
@Override
public void run() {
next.invokeChannelRead(m);
}
});
}
}
Even for a few connections I would go with NioEventLoopGroup.
Regarding your question:
Is it possible to use a multithreaded EventExecutor for the channel handlers in the pipeline so that, instead of the channel's EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Netty's Channel guarantees that all processing of an inbound or outbound message happens on the same thread. You don't have to hack together an EventExecutor of your own to get this. If serving inbound messages doesn't require long-running processing, your code will look like basic usage of ServerBootstrap. You might find it useful to tune the number of threads in the pool.
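For illustration, a minimal sketch of that basic usage: a NioEventLoopGroup-based bootstrap with a separate DefaultEventExecutorGroup for handlers that might block (YourHandler, the pool size, and the port are placeholders):
EventLoopGroup acceptorGroup = new NioEventLoopGroup(1);
EventLoopGroup childGroup = new NioEventLoopGroup();
// A separate executor group keeps potentially blocking handler work off the event loop threads
EventExecutorGroup handlerGroup = new DefaultEventExecutorGroup(16);

new ServerBootstrap()
    .group(acceptorGroup, childGroup)
    .channel(NioServerSocketChannel.class)
    .childHandler(new ChannelInitializer<SocketChannel>() {
        @Override
        public void initChannel(SocketChannel ch) {
            // Netty pins each channel to one thread of handlerGroup, so per-channel ordering is preserved
            ch.pipeline().addLast(handlerGroup, new YourHandler());
        }
    })
    .bind(8080);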
I am using gRPC-Java 1.1.2. In an active gRPC session, I have a few bidirectional streams open. Is there a way to clean them up from the client end when the client is disconnecting? When I try to disconnect, I run the following loop a fixed number of times and then disconnect, but I can see the following error on the server side (not sure if it's caused by another issue, though):
disconnect from client
while (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
// check for upper bound and break if so
}
channel.shutdown().awaitTermination(3, TimeUnit.SECONDS);
error on server
E0414 11:26:48.787276000 140735121084416 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0414 11:26:48.787345000 140735121084416 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
If you want to close gRPC (server-side or bi-di) streams from the client end, you will have to attach the RPC call to a Context.CancellableContext, found in package io.grpc.
Suppose you have an rpc:
service Messaging {
rpc Listen (ListenRequest) returns (stream Message) {}
}
On the client side, you will handle it like this:
public class Messaging {
private Context.CancellableContext mListenContext;
private MessagingGrpc.MessagingStub getMessagingAsyncStub() {
/* return your async stub */
}
public void listen(final ListenRequest listenRequest, final StreamObserver<Message> messageStream) {
Runnable listenRunnable = new Runnable() {
    @Override
    public void run() {
        Messaging.this.getMessagingAsyncStub().listen(listenRequest, messageStream);
    }
};
if (mListenContext != null && !mListenContext.isCancelled()) {
Log.d(TAG, "listen: already listening");
return;
}
mListenContext = Context.current().withCancellation();
mListenContext.run(listenRunnable);
}
public void cancelListen() {
if (mListenContext != null) {
mListenContext.cancel(null);
mListenContext = null;
}
}
}
Calling cancelListen() will cancel the context: the connection will be closed, and onError of your StreamObserver<Message> messageStream will be invoked with a throwable whose message is 'CANCELLED'.
If you use shutdownNow(), it will shut down the RPC streams you have more aggressively. Also, you need to call shutdown() or shutdownNow() before calling awaitTermination().
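For example, that ordering might look like this (a sketch; the timeouts are arbitrary and InterruptedException handling is elided):
channel.shutdown(); // initiate shutdown first; new calls are rejected, in-flight ones may finish
if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
    channel.shutdownNow(); // force-cancel anything still running
    channel.awaitTermination(5, TimeUnit.SECONDS);
}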
That said, a better solution would be to end all your RPCs gracefully before closing the channel.
I have a scenario where I am establishing a TCP connection using Netty NIO. Suppose the server goes down; how can I automatically reconnect to the server when it comes up again?
Or is there any way to attach an availability listener to the server?
You can have a DisconnectionHandler, as the first thing on your client pipeline, that reacts to channelInactive by immediately trying to reconnect or by scheduling a reconnection task.
For example,
public class DisconnectionHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelInactive(final ChannelHandlerContext ctx) throws Exception {
Channel channel = ctx.channel();
/* If shutdown is on going, ignore */
if (channel.eventLoop().isShuttingDown()) return;
ReconnectionTask reconnect = new ReconnectionTask(channel);
reconnect.run();
}
}
The ReconnectionTask would be something like this:
public class ReconnectionTask implements Runnable, ChannelFutureListener {
Channel previous;
public ReconnectionTask(Channel c) {
this.previous = c;
}
@Override
public void run() {
Bootstrap b = createBootstrap();
b.remoteAddress(previous.remoteAddress())
.connect()
.addListener(this);
}
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
// Will try to connect again in 100 ms.
// Here you should probably use exponential backoff or some sort of randomization to define the retry period.
previous.eventLoop()
        .schedule(this, 100, TimeUnit.MILLISECONDS);
return;
}
// Do something else when success if needed.
}
}
Check here for an example of an exponential backoff library.
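If you prefer not to pull in a library, a variant of the task above with simple exponential backoff could look like this (a sketch; createBootstrap() is the same factory method assumed above):
public class BackoffReconnectionTask implements Runnable, ChannelFutureListener {
    private final Channel previous;
    private int attempt;

    public BackoffReconnectionTask(Channel c) {
        this.previous = c;
    }

    @Override
    public void run() {
        createBootstrap()
            .remoteAddress(previous.remoteAddress())
            .connect()
            .addListener(this);
    }

    @Override
    public void operationComplete(ChannelFuture future) {
        if (!future.isSuccess()) {
            // Double the delay on each failed attempt, capped at 30 seconds
            long delay = Math.min(100L << Math.min(attempt++, 8), 30_000L);
            previous.eventLoop().schedule(this, delay, TimeUnit.MILLISECONDS);
        }
    }
}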
I am implementing WebSockets using Vert.x 3.
The scenario is simple: the client opens a socket, some 'blocking' work is done in a Vert.x worker verticle, and when it finishes, the answer is sent back to the client (via the open socket).
Please tell me if I am doing it right:
I created VertxWebsocketServerVerticle. As soon as the websocket is opened and a request comes in from the client, I use the eventBus to pass the message to
EventBusReceiverVerticle, where I do the blocking operation.
How do I actually send the response back to VertxWebsocketServerVerticle and from there back to the client?
code:
Main class:
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new EventBusReceiverVerticle("R1"),new DeploymentOptions().setWorker(true));
vertx.deployVerticle(new VertxWebsocketServerVerticle());
}
VertxWebsocketServerVerticle:
public class VertxWebsocketServerVerticle extends AbstractVerticle {
public void start() {
vertx.createHttpServer().websocketHandler(webSocketHandler -> {
System.out.println("Connected!");
Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
webSocketHandler.writeFinalBinaryFrame(buff);
webSocketHandler.handler(buffer -> {
String inputString = buffer.getString(0, buffer.length());
System.out.println("inputString=" + inputString);
vertx.executeBlocking(future -> {
vertx.eventBus().send("anAddress", inputString, event -> System.out.printf("got back from reply"));
future.complete();
}, res -> {
if (res.succeeded()) {
webSocketHandler.writeFinalTextFrame("output=" + inputString + "_result");
}
});
});
}).listen(8080);
}
@Override
public void stop() throws Exception {
super.stop();
}
}
EventBusReceiverVerticle :
public class EventBusReceiverVerticle extends AbstractVerticle {
private String name = null;
public EventBusReceiverVerticle(String name) {
this.name = name;
}
public void start(Future<Void> startFuture) {
vertx.eventBus().consumer("anAddress", message -> {
System.out.println(this.name +
" received message: " +
message.body());
try {
//doing some looong work..
Thread.sleep(10000);
System.out.printf("finished waiting\n");
startFuture.complete();
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
}
I always get:
WARNING: Message reply handler timed out as no reply was received - it will be removed
github project at: https://github.com/IdanFridman/VertxAndWebSockets
thank you,
ray.
Since you are blocking your websocket handler until it receives a reply for the message sent to the EventBus, a reply that will not, in fact, arrive before the 10 s delay elapses, you will certainly get that warning: the event bus reply handler times out because the message was sent but no reply was received within the timeout.
Actually, I don't know whether you are just experimenting with the Vert.x toolkit or trying to fulfill a real requirement, but you certainly have to adapt your code to the Vert.x spirit:
First, you should not block in your websocket handler while waiting for a reply; keep in mind that everything is asynchronous when it comes to Vert.x.
To sleep for some time, use the Vert.x way, vertx.setTimer(...), rather than Thread.sleep(delay).
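Concretely, the consumer could reply without blocking anything, for example like this (a sketch; the 10 s timer stands in for your long-running work):
vertx.eventBus().consumer("anAddress", message -> {
    // Use a timer instead of Thread.sleep so no event-loop or worker thread is blocked
    vertx.setTimer(10000, timerId -> {
        // Replying is what feeds the sender's reply handler and prevents the timeout warning
        message.reply(message.body() + "_result");
    });
});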
I am using Jersey to implement an SSE scenario.
The server keeps connections alive and pushes data to clients periodically.
In my scenario, there is a connection limit, only a certain number of clients can subscribe to the server at the same time.
So when a new client tries to subscribe, I do a check (EventOutput.isClosed) to see whether any old connections are no longer active, so they can make room for new connections.
But the result of EventOutput.isClosed is always false, unless the client explicitly calls close of EventSource. This means that if a client drops accidentally(power outage or internet cutoff), it's still hogging the connection, and new clients can not subscribe.
Is there a work around for this?
@CuiPengFei,
So, in my travels trying to find an answer to this myself, I stumbled upon a repository that shows how to gracefully clean up connections from disconnected clients.
They encapsulate all of the SSE EventOutput logic in a service/manager. In it they spin up a thread that checks whether each EventOutput has been closed by the client. If so, they formally close the connection (EventOutput#close()). If not, they try to write to the stream; if that throws an exception, the client has disconnected without closing, and the manager closes it. If the write succeeds, the EventOutput is returned to the pool as a still-active connection.
The repo (and the actual class) are available here. I've also included the class without imports below in case the repo is ever removed.
Note that they bind this to a Singleton. The store should be globally unique.
public class SseWriteManager {
private final ConcurrentHashMap<String, EventOutput> connectionMap = new ConcurrentHashMap<>();
private final ScheduledExecutorService messageExecutorService;
private final Logger logger = LoggerFactory.getLogger(SseWriteManager.class);
public SseWriteManager() {
messageExecutorService = Executors.newScheduledThreadPool(1);
messageExecutorService.scheduleWithFixedDelay(new MessageProcessor(), 0, 5, TimeUnit.SECONDS);
}
public void addSseConnection(String id, EventOutput eventOutput) {
logger.info("adding connection for id={}.", id);
connectionMap.put(id, eventOutput);
}
private class MessageProcessor implements Runnable {
@Override
public void run() {
try {
Iterator<Map.Entry<String, EventOutput>> iterator = connectionMap.entrySet().iterator();
while (iterator.hasNext()) {
boolean remove = false;
Map.Entry<String, EventOutput> entry = iterator.next();
EventOutput eventOutput = entry.getValue();
if (eventOutput != null) {
if (eventOutput.isClosed()) {
remove = true;
} else {
try {
logger.info("writing to id={}.", entry.getKey());
eventOutput.write(new OutboundEvent.Builder().name("custom-message").data(String.class, "EOM").build());
} catch (Exception ex) {
logger.info(String.format("write failed to id=%s.", entry.getKey()), ex);
remove = true;
}
}
}
if (remove) {
// we are removing the eventOutput. Close it if it is not already closed.
if (!eventOutput.isClosed()) {
try {
eventOutput.close();
} catch (Exception ex) {
// do nothing.
}
}
iterator.remove();
}
}
} catch (Exception ex) {
logger.error("messageProcessor.run threw exception.", ex);
}
}
}
public void shutdown() {
if (messageExecutorService != null && !messageExecutorService.isShutdown()) {
logger.info("SseWriteManager.shutdown: calling messageExecutorService.shutdown.");
messageExecutorService.shutdown();
} else {
logger.info("SseWriteManager.shutdown: messageExecutorService == null || messageExecutorService.isShutdown().");
}
    }
}
Wanted to provide an update on this:
What was happening is that the eventSource on the client side (js) never got into readyState 1 unless we did a broadcast as soon as a new subscription was added. Even in that state the client could receive data pushed from the server, but adding a call to broadcast a simple "OK" message helped kick the eventSource into readyState 1.
On closing the connection from the client side: to be proactive in cleaning up resources, just closing the eventSource on the client side doesn't help. We must make another ajax call to the server to force the server to do a broadcast. When the broadcast is forced, Jersey will clean up the connections that are no longer alive and will in turn release their resources (connections in CLOSE_WAIT). Otherwise, a connection will linger in CLOSE_WAIT until the next broadcast happens.
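For reference, a minimal sketch of what that broadcast can look like on the server side with Jersey's SseBroadcaster (the subscribe method and event name are made up for illustration); broadcast() writes to every registered EventOutput and closes the ones whose write fails:
SseBroadcaster broadcaster = new SseBroadcaster();

public EventOutput subscribe() {
    EventOutput output = new EventOutput();
    broadcaster.add(output);
    // Broadcasting immediately kicks the new eventSource into readyState 1
    // and prunes any stale EventOutputs whose write now fails
    broadcaster.broadcast(new OutboundEvent.Builder()
        .name("handshake")
        .data(String.class, "OK")
        .build());
    return output;
}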