StreamObserver<RouteSummary> responseObserver = new StreamObserver<RouteSummary>() {
@Override
public void onNext(RouteSummary summary) {
LOGGER.info("Action.");
}
@Override
public void onError(Throwable t) {
LOGGER.error("Error.");
}
@Override
public void onCompleted() {
LOGGER.info("Completed.");
}
};
There is a gRPC connection with client-side streaming. It has started, but if the gRPC client is restarted,
how can I keep getting responses from where they left off?
Client-side streaming means request streaming. Assuming you meant server-side streaming: if the client is restarted, it has essentially lost the messages from the previous response stream.
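If you need to pick up where a previous stream left off, that has to be modelled in your own request messages; gRPC will not replay a lost response stream for you. A rough sketch under that assumption, where RouteSummaryRequest.last_received_id, the persistence helpers, and the stub method name are all hypothetical and would need matching server support:
// Hypothetical resume logic: remember the last summary we processed, and after a
// restart ask the server (which must support this) to continue from that point.
long lastReceivedId = loadLastReceivedId(); // hypothetical: read the checkpoint from disk
StreamObserver<RouteSummary> responseObserver = new StreamObserver<RouteSummary>() {
@Override
public void onNext(RouteSummary summary) {
saveLastReceivedId(summary.getId()); // hypothetical: persist progress per message
LOGGER.info("Action.");
}
@Override
public void onError(Throwable t) {
LOGGER.error("Error.", t);
}
@Override
public void onCompleted() {
LOGGER.info("Completed.");
}
};
routeGuideStub.listSummaries( // hypothetical server-streaming method
RouteSummaryRequest.newBuilder().setLastReceivedId(lastReceivedId).build(),
responseObserver);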
1. Current problem
1 Netty client, 1 Netty server.
In the Netty client, 3 different Kafka threads send messages to the Netty server through 1 ChannelHandler.
On the Netty client side, the channel handler appears to send each thread's message one at a time.
But on the Netty server side, Netty appears to read multiple messages from the 3 Kafka threads in a single read.
I checked this by logging:
(client-side log screenshot, server-side log screenshot)
2. Code
Netty client code
public class ClientHandler extends ChannelInboundHandlerAdapter {
...
@Override
public void channelActive(ChannelHandlerContext ctx) {
log.debug("channelActive");
this.ctx = ctx;
}
public void sendMessage(String message) {
log.info("Sent message: {}", message);
ByteBuf messageBuffer = Unpooled.buffer();
messageBuffer.writeBytes(message.getBytes());
ctx.writeAndFlush(messageBuffer);
}
Netty server code
public class NettyServerHandler extends ChannelInboundHandlerAdapter {
...
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
NettyServer.ServerStatus.increaseAndGetTotalReadMsg();
ByteBuf byteBuf = (ByteBuf)msg;
log.info(byteBuf.toString(Charset.defaultCharset()));
byteBuf.release();
}
}
Here it says that writeAndFlush is thread-safe, but it doesn't seem to be. Is this normal behavior or not?
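For what it's worth, writeAndFlush can be invoked from multiple threads; the coalescing you see is typically just TCP behaving as a byte stream, so message boundaries are not preserved without explicit framing. A minimal sketch of length-prefix framing with Netty's built-in codecs (the initializer class names are only illustrative):
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;
import io.netty.handler.codec.LengthFieldPrepender;
// Server-side initializer: reassemble the byte stream into the original frames
// before they reach NettyServerHandler.
public class ServerInitializer extends ChannelInitializer<SocketChannel> {
@Override
protected void initChannel(SocketChannel ch) {
// max frame 1 MiB, 4-byte length field at offset 0, strip the length field
ch.pipeline().addLast(new LengthFieldBasedFrameDecoder(1_048_576, 0, 4, 0, 4));
ch.pipeline().addLast(new NettyServerHandler());
}
}
// Client-side initializer: prepend a 4-byte length to every outgoing buffer.
public class ClientInitializer extends ChannelInitializer<SocketChannel> {
@Override
protected void initChannel(SocketChannel ch) {
ch.pipeline().addLast(new LengthFieldPrepender(4));
ch.pipeline().addLast(new ClientHandler());
}
}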
In our app, we use an MQTT client to communicate with the server. We use the socket only when the app is in foreground mode; we may receive a lot of messages over the socket, so we don't need to put it in a Service.
How can I run the socket client on a background thread to send and receive messages without freezing the UI?
This is the code of my Paho MQTT socket client:
mqttAndroidClient = new MqttAndroidClient(getApplicationContext(), serverUri, clientId);
mqttAndroidClient.setCallback(new MqttCallbackExtended() {
@Override
public void connectComplete(boolean reconnect, String serverURI) {
if (reconnect) {
addToHistory("Reconnected to : " + serverURI);
// Because Clean Session is true, we need to re-subscribe
subscribeToTopic();
} else {
addToHistory("Connected to: " + serverURI);
}
}
@Override
public void connectionLost(Throwable cause) {
addToHistory("The Connection was lost.");
}
@Override
public void messageArrived(String topic, MqttMessage message) throws Exception {
addToHistory("Incoming message: " + new String(message.getPayload()));
}
@Override
public void deliveryComplete(IMqttDeliveryToken token) {
}
});
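One option, if the callbacks arrive on the main thread, is to hand heavy processing off to a background executor and only touch the UI back on the main thread. A rough sketch under that assumption (mqttWorker, processPayload, and the runOnUiThread wiring are assumptions of this sketch, not part of the Paho API):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
// Assumption: a single-threaded executor owned by the enclosing Activity.
private final ExecutorService mqttWorker = Executors.newSingleThreadExecutor();
@Override
public void messageArrived(String topic, MqttMessage message) {
final String payload = new String(message.getPayload());
mqttWorker.execute(() -> {
String result = processPayload(payload);       // hypothetical heavy work
runOnUiThread(() -> addToHistory(result));     // update the UI on the main thread only
});
}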
I am using gRPC-Java 1.1.2. In an active gRPC session, I have a few bidirectional streams open. Is there a way to clean them up from the client end when the client is disconnecting? When I try to disconnect, I run the following loop a fixed number of times and then disconnect, but I can see the following error on the server side (not sure if it's caused by another issue, though):
disconnect from client
while (!channel.awaitTermination(3, TimeUnit.SECONDS)) {
// check for upper bound and break if so
}
channel.shutdown().awaitTermination(3, TimeUnit.SECONDS);
error on server
E0414 11:26:48.787276000 140735121084416 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0414 11:26:48.787345000 140735121084416 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
If you want to close gRPC (server-side or bidirectional) streams from the client end, you will have to attach the RPC call to a Context.CancellableContext, found in the io.grpc package.
Suppose you have an RPC:
service Messaging {
rpc Listen (ListenRequest) returns (stream Message) {}
}
On the client side, you will handle it like this:
public class Messaging {
private Context.CancellableContext mListenContext;
private MessagingGrpc.MessagingStub getMessagingAsyncStub() {
/* return your async stub */
}
public void listen(final ListenRequest listenRequest, final StreamObserver<Message> messageStream) {
Runnable listenRunnable = new Runnable() {
@Override
public void run() {
Messaging.this.getMessagingAsyncStub().listen(listenRequest, messageStream);
}
};
if (mListenContext != null && !mListenContext.isCancelled()) {
Log.d(TAG, "listen: already listening");
return;
}
mListenContext = Context.current().withCancellation();
mListenContext.run(listenRunnable);
}
public void cancelListen() {
if (mListenContext != null) {
mListenContext.cancel(null);
mListenContext = null;
}
}
}
Calling cancelListen() will emulate the 'CANCELLED' error: the connection will be closed, and onError of your StreamObserver<Message> messageStream will be invoked with the throwable message 'CANCELLED'.
If you use shutdownNow(), it will shut down the RPC streams you have more aggressively. Also, you need to call shutdown() or shutdownNow() before calling awaitTermination().
That said, a better solution would be to end all your RPCs gracefully before closing the channel.
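A minimal sketch of that ordering (a standalone helper for illustration, not the original code): finish or cancel the outstanding RPCs first, then shut the channel down, and only fall back to shutdownNow() if termination does not complete in time.
import java.util.concurrent.TimeUnit;
import io.grpc.ManagedChannel;
void closeChannel(ManagedChannel channel) throws InterruptedException {
// 1. Finish or cancel outstanding RPCs first (e.g. via a CancellableContext as above).
// 2. Then initiate an orderly shutdown; awaitTermination() only makes sense after this.
channel.shutdown();
if (!channel.awaitTermination(5, TimeUnit.SECONDS)) {
// 3. As a last resort, cancel whatever is still in flight.
channel.shutdownNow();
channel.awaitTermination(5, TimeUnit.SECONDS);
}
}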
I have a scenario where I am establishing a TCP connection using Netty NIO. Suppose the server goes down; how can I automatically reconnect to the server when it comes up again?
Or is there any way to attach an availability listener to the server?
You can have a DisconnectionHandler, as the first thing on your client pipeline, that reacts on channelInactive by immediately trying to reconnect or scheduling a reconnection task.
For example,
public class DisconnectionHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelInactive(final ChannelHandlerContext ctx) throws Exception {
Channel channel = ctx.channel();
/* If shutdown is on going, ignore */
if (channel.eventLoop().isShuttingDown()) return;
ReconnectionTask reconnect = new ReconnectionTask(channel);
reconnect.run();
}
}
The ReconnectionTask would be something like this:
public class ReconnectionTask implements Runnable, ChannelFutureListener {
Channel previous;
public ReconnectionTask(Channel c) {
this.previous = c;
}
@Override
public void run() {
Bootstrap b = createBootstrap();
b.remoteAddress(previous.remoteAddress())
.connect()
.addListener(this);
}
@Override
public void operationComplete(ChannelFuture future) throws Exception {
if (!future.isSuccess()) {
// Will try to connect again in 100 ms.
// Here you should probably use exponential backoff or some sort of randomization to define the retry period.
previous.eventLoop()
.schedule(this, 100, TimeUnit.MILLISECONDS);
return;
}
// Do something else when success if needed.
}
}
Check here for an example of an exponential backoff library.
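If you prefer to hand-roll it, a small sketch of how the fixed 100 ms delay in operationComplete() could be replaced with exponential backoff plus jitter (the attempts field and scheduleReconnect() are additions to ReconnectionTask for illustration, not part of the original answer):
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
// Additional members for ReconnectionTask: call scheduleReconnect() instead of
// scheduling with a fixed 100 ms delay.
private int attempts = 0;
private void scheduleReconnect() {
// 100 ms, 200 ms, 400 ms, ... capped at 30 s, with some randomization.
long cap = 30_000L;
long base = Math.min(cap, 100L << Math.min(attempts++, 18));
long delay = base / 2 + ThreadLocalRandom.current().nextLong(base / 2 + 1);
previous.eventLoop().schedule(this, delay, TimeUnit.MILLISECONDS);
}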
I am implementing websockets using Vert.x 3.
The scenario is simple: a socket is opened from the client, some 'blocking' work is done in a Vert.x worker verticle, and when it finishes, the answer is sent back to the client (via the open socket).
Please tell me if I am doing it right:
I created VertxWebsocketServerVerticle. As soon as the websocket is opened and a request comes from the client, I use the event bus and pass the message to
EventBusReceiverVerticle, where I do the blocking operation.
How do I actually send the response back to VertxWebsocketServerVerticle and then back to the client?
code:
Main class:
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new EventBusReceiverVerticle("R1"),new DeploymentOptions().setWorker(true));
vertx.deployVerticle(new VertxWebsocketServerVerticle());
}
VertxWebsocketServerVerticle:
public class VertxWebsocketServerVerticle extends AbstractVerticle {
public void start() {
vertx.createHttpServer().websocketHandler(webSocketHandler -> {
System.out.println("Connected!");
Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
webSocketHandler.writeFinalBinaryFrame(buff);
webSocketHandler.handler(buffer -> {
String inputString = buffer.getString(0, buffer.length());
System.out.println("inputString=" + inputString);
vertx.executeBlocking(future -> {
vertx.eventBus().send("anAddress", inputString, event -> System.out.printf("got back from reply"));
future.complete();
}, res -> {
if (res.succeeded()) {
webSocketHandler.writeFinalTextFrame("output=" + inputString + "_result");
}
});
});
}).listen(8080);
}
@Override
public void stop() throws Exception {
super.stop();
}
}
EventBusReceiverVerticle :
public class EventBusReceiverVerticle extends AbstractVerticle {
private String name = null;
public EventBusReceiverVerticle(String name) {
this.name = name;
}
public void start(Future<Void> startFuture) {
vertx.eventBus().consumer("anAddress", message -> {
System.out.println(this.name +
" received message: " +
message.body());
try {
//doing some looong work..
Thread.sleep(10000);
System.out.printf("finished waiting\n");
startFuture.complete();
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
}
I always get:
WARNING: Message reply handler timed out as no reply was received - it will be removed
github project at: https://github.com/IdanFridman/VertxAndWebSockets
thank you,
ray.
Since you are blocking your websocket handler until it receives a reply to the message sent on the EventBus, and that reply will not, in fact, be received before the configured 10 s delay elapses, you will certainly get that warning: the event bus reply handler times out because a message was sent but no response was received within the timeout delay.
I don't know whether you are just experimenting with the Vert.x toolkit or trying to fulfil an actual requirement, but you certainly have to adapt your code to the Vert.x spirit:
First, you had better not block in your websocket handler until a message is received; keep in mind that everything is asynchronous when it comes to Vert.x.
To wait for some time, use the Vert.x way rather than Thread.sleep(delay), i.e. vertx.setTimer(...).
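Putting both points together, a rough sketch of how EventBusReceiverVerticle could look, with the long wait replaced by a timer and an explicit reply so the sender's reply handler actually fires:
import io.vertx.core.AbstractVerticle;
public class EventBusReceiverVerticle extends AbstractVerticle {
private final String name;
public EventBusReceiverVerticle(String name) {
this.name = name;
}
@Override
public void start() {
vertx.eventBus().consumer("anAddress", message -> {
System.out.println(name + " received message: " + message.body());
// Non-blocking "long work": let the event loop keep running and come back in 10 s.
vertx.setTimer(10000, timerId -> {
System.out.println("finished waiting");
// Reply to the sender; this is what the send(...) reply handler is waiting for.
message.reply(message.body() + "_result");
});
});
}
}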