Vert.x send NetSocket between Verticles - java

I am creating a simple TCP server using Vert.x and data is sent between a client and a server in the form of compressed packets.
I would like to use Verticles to create something of this nature (where [Something] is a Verticle and arrows show flow of data):
Buffer In -> [Decompress Packet] -> [Parse Packet] -> Reply to NetSocket
The problem is that I am unsure how I can carry the NetSocket from one Verticle (the result from Decompress Packet) to the next. I can of course send the result from the Decompress Packet Verticle to the Parse Packet Verticle, but when the Parse Packet Verticle receives this data it will have no handle on the NetSocket with which to reply to the sender.
Essentially, I need to carry the NetSocket through the event bus so that once the final Verticle is reached, it can then reply to the data.

As has been said in the comments, you probably want a set of handlers instead of Verticles. Look, for example, at how vertx-web handlers work. A handler is a simple lambda that performs one small task and can either pass the work on to the next handler or abort the execution by calling a failure method.
A very basic implementation is simply to keep a List of lambdas (Java functional interfaces) that you register up front; once a socket is received, you iterate the list.
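For illustration, a minimal sketch of such a handler chain (the SocketPipeline name is purely illustrative, and Vert.x 3 style imports are assumed; adjust for Vert.x 2 if needed):
import io.vertx.core.Handler;
import io.vertx.core.net.NetSocket;
import java.util.ArrayList;
import java.util.List;

// Illustrative handler chain: each step does one small job with the socket.
class SocketPipeline implements Handler<NetSocket> {

    private final List<Handler<NetSocket>> steps = new ArrayList<>();

    SocketPipeline add(Handler<NetSocket> step) {
        steps.add(step);
        return this;
    }

    // Runs every registered step in order on the same socket (synchronous steps only).
    @Override
    public void handle(NetSocket socket) {
        for (Handler<NetSocket> step : steps) {
            step.handle(socket);
        }
    }
}
// Usage: server.connectHandler(new SocketPipeline().add(decompressStep).add(parseStep));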
If you need to perform async I/O in your handlers, then you cannot use a simple iterator; you need to iterate asynchronously. A basic async iterator wrapper could be:
abstract class AsyncIterator<T> implements Handler<T> {

    private final Iterator<T> iterator;
    private boolean end = false;

    public AsyncIterator(Iterable<T> iterable) {
        this(iterable.iterator());
    }

    public AsyncIterator(Iterator<T> iterator) {
        this.iterator = iterator;
        next();
    }

    public final boolean hasNext() {
        return !end;
    }

    public final void next() {
        if (iterator.hasNext()) {
            handle(iterator.next());
        } else {
            end = true;
            handle(null);
        }
    }

    public final void remove() {
        iterator.remove();
    }
}
and you just need to use it like:
new AsyncIterator<Object>(keys) {
    @Override
    public void handle(Object key) {
        if (hasNext()) {
            // here your handler code...
            // once it is complete your handler needs to call:
            next();
        } else {
            // no more entries to iterate...
            // close your socket?
        }
    }
};

Actually, you don't have to pass NetSockets between verticles.
In Vert.x, every socket automatically registers a handler on the event bus, and you can use that for your scenario. From the documentation:
Every socket automatically registers a handler on the event bus, and when any buffers are received in this handler, it writes them to itself.
This enables you to write data to a socket which is potentially in a completely different verticle or even in a different Vert.x instance by sending the buffer to the address of that handler.
The address of the handler is given by writeHandlerID
Since writeHandlerID is a normal string, it is not a big deal to send it from verticle1 to verticle2. In verticle2, call eventBus.send(writeHandlerID, [whatever you want to reply]). That's it.
We have applied this tip in our Application.
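For illustration, a minimal sketch of that flow, placed inside the two verticles' start() methods (Vert.x 3 style API assumed; the "packet.parse" address is an illustrative name):
// Verticle 1: owns the NetSocket; forwards the (decompressed) payload together with the
// socket's writeHandlerID so a later verticle can reply straight to the socket.
server.connectHandler(sock -> sock.handler(buffer -> {
    JsonObject msg = new JsonObject()
            .put("payload", buffer.getBytes())
            .put("writeHandlerID", sock.writeHandlerID());
    vertx.eventBus().send("packet.parse", msg);
}));

// Verticle 2: parses the packet and replies to the socket by sending a Buffer to its write handler.
vertx.eventBus().<JsonObject>consumer("packet.parse", message -> {
    JsonObject msg = message.body();
    byte[] payload = msg.getBinary("payload");
    Buffer reply = Buffer.buffer("parsed " + payload.length + " bytes\n");
    vertx.eventBus().send(msg.getString("writeHandlerID"), reply);
});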

Related

Using Netty to build a server with only a few clients

I am familiar with Netty basics and have used it to build a typical application server running on TCP designed to serve many clients/connections. However, I recently have a requirement to build a server which is designed to handle a handful of clients, or only one client most of the time. But the client is the gateway to many devices and therefore generates substantial traffic to the server I am trying to design.
My questions are:
Is it possible / recommended at all to use Netty for this use case? I have seen the discussion here.
Is it possible to attach a multithreaded EventExecutor to the channel handlers in the pipeline so that, instead of the channel's EventLoop, concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client is handled by one thread through all handlers, while the next message is handled by another thread?
Is there any example implementation available?
According to the documentation of io.netty.channel.oio, you can use it if you don't have lots of clients. In this case, every connection will be handled in a separate thread and will use Java's old blocking I/O under the hood. Take a look at OioByteStreamChannel::activate:
/**
 * Activate this instance. After this call {@link #isActive()} will return {@code true}.
 */
protected final void activate(InputStream is, OutputStream os) {
    if (this.is != null) {
        throw new IllegalStateException("input was set already");
    }
    if (this.os != null) {
        throw new IllegalStateException("output was set already");
    }
    if (is == null) {
        throw new NullPointerException("is");
    }
    if (os == null) {
        throw new NullPointerException("os");
    }
    this.is = is;
    this.os = os;
}
As you can see, the old blocking streams are used there.
Regarding your comment: you can specify an EventExecutorGroup while adding a handler to the pipeline, like this:
new ChannelInitializer<Channel>() {
    @Override
    public void initChannel(Channel ch) {
        // eventExecutorGroup is an EventExecutorGroup you create yourself,
        // e.g. new DefaultEventExecutorGroup(16)
        ch.pipeline().addLast(eventExecutorGroup, new YourHandler());
    }
}
Let's take a look at the AbstractChannelHandlerContext:
@Override
public EventExecutor executor() {
    if (executor == null) {
        return channel().eventLoop();
    } else {
        return executor;
    }
}
Here we see that if you don't register your own EventExecutor, it will use the child event loop group you specified while creating the ServerBootstrap:
new ServerBootstrap()
    .group(new OioEventLoopGroup(),   // acceptor group
           new OioEventLoopGroup());  // child group
Here is how reading from the channel is invoked, in AbstractChannelHandlerContext::invokeChannelRead:
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    if (executor.inEventLoop()) {
        next.invokeChannelRead(m);
    } else {
        executor.execute(new Runnable() { // invoked by the EventExecutor you specified
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}
Even for a few connections I would go with NioEventLoopGroup.
Regarding your question:
Is it possible to use multithreaded EventExecutor to the channel handlers in the pipeline so that instead of channel EventLoop, the concurrency is achieved by the EventExecutor thread pool? Will it ensure that one message from the client will be handled by one thread through all handlers, while the next message by another thread?
A Netty Channel guarantees that all processing of an inbound or outbound message happens on the same thread. You don't have to hack in an EventExecutor of your own to get this. If serving inbound messages doesn't require long-running processing, your code will look like basic ServerBootstrap usage. You might find it useful to tune the number of threads in the pool.
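For reference, a minimal sketch of that basic ServerBootstrap usage (the port, thread count and YourHandler are illustrative assumptions; YourHandler is the handler from the snippet above):
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public final class FewClientsServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);    // accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(4);  // handles I/O; tune the thread count here
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new YourHandler());
                        }
                    });
            b.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}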

How to write/read string data from two different threads in Android/Java

I have an Android application that receives ASCII strings (so every character in the string corresponds to exactly one byte) from a BLE device in thread A.
These strings come in chunks with a maximum length. For example, let's say that the max length is 4, and we receive the following strings:
"ABCD" (4), "EFGH" (4), "I\r\n" (3)
On the other hand, I have another thread B that needs to read these strings, but as a complete line. In the example, after receiving all three packets, this thread should read one line:
"ABCDEFGHI"
My first bet was to implement a custom InputStream and OutputStream over a common underlying BlockingQueue, then use an OutputStreamWriter to write incoming strings in thread A and an InputStreamReader wrapped in a BufferedReader to use the readLine() function from thread B, but it is not working.
I can see that bytes (chunks) are added to the queue when using the custom OutputStream on thread A, but when I call readLine() from thread B it blocks and never returns a string, even when I know a full line has been added to the underlying queue.
I'm pretty sure I'm reinventing the wheel here and I've been unable to find a definitive answer searching the Web. There must be a better way to do this in Java/Android. It sounds like a very common pattern.
I mostly do things in C# so there might be some class(es) I'm missing. I took a look at ByteBuffer also but it seems that going this way forces me to implement my own readLine() function because there is no InputStream to be used by BufferedReader, etc.
You can easily send data between threads with Greenrobot's EventBus.
Greenrobot's EventBus is a library that allows communication between components (Activities, Fragments, Services and background threads).
build.gradle
dependencies {
    compile 'org.greenrobot:eventbus:3.0.0'
}
1. LISTENER (Thread A)
public class BleListener {

    private static Context _context;
    private static BleListener _instance;
    private static ListenerThread _listenerThread;
    private static boolean _isListenerThreadEnable = false;

    private BleListener(Context context) {
        _context = context;
        // set ble config and open ble port in here
        // ....
        // enable listener thread
        if (!_isListenerThreadEnable) {
            _listenerThread = new ListenerThread();
            _listenerThread.start();
            _isListenerThreadEnable = true;
        }
    }

    // call this function from outer class
    public static BleListener getInstance(Context context) {
        if (_instance == null) {
            _instance = new BleListener(context);
        }
        return _instance;
    }

    private class ListenerThread extends Thread {

        ListenerThread() {
            // set up your receive buffer and thread priority in here
        }

        @Override
        public void run() {
            while (_isListenerThreadEnable) {
                synchronized (_bleDevice) { // _bleDevice is your BLE port wrapper, opened in the constructor
                    int _receivedCount = _bleDevice.getQueueStatus();
                    while (_receivedCount > 0) {
                        // append your received data in here with ByteBuffer or StringBuffer
                        // ..
                        // parse the data to extract valid messages
                        // ..
                        // post the valid data out with EventBus when you receive the
                        // end-of-message character or a receive timeout occurs
                        EventBus.getDefault().post(validModal); // validModal is your parsed message object
                        _receivedCount = _bleDevice.getQueueStatus(); // refresh the count as you drain the queue
                    }
                }
                Thread.yield();
            }
        }
    }
}
2. MAIN (Thread B - Read data from Thread A)
Subscribers also need to register themselves with and unregister from the bus. They will only receive events while they are registered. In Android, activities and fragments should usually register according to their life cycle. For most cases, onStart/onStop works fine:
@Override
public void onStart() {
    super.onStart();
    EventBus.getDefault().register(this);
}

@Override
public void onStop() {
    EventBus.getDefault().unregister(this);
    super.onStop();
}
Subscribers implement event handling methods (also called "subscriber methods") that are called when an event is posted. These are defined with the @Subscribe annotation.
@Subscribe(threadMode = ThreadMode.MAIN)
public void onMessage(ValidModal validModal) {
    // You will get valid data from thread A here.
    //..
}
As recommended by Ted Hopp, I finally used a PipedInputStream and PipedOutputStream (wrapped inside an OutputStreamWriter and a BufferedReader).
It works like a charm and does exactly what I needed. Thank you!
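For anyone landing here, a minimal sketch of that arrangement (the pipe buffer size and the example chunks are illustrative assumptions):
import java.io.*;

public class PipedLineDemo {
    public static void main(String[] args) throws IOException {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out, 1024); // 1 KiB pipe buffer (assumed)
        Writer writer = new OutputStreamWriter(out, "US-ASCII");
        BufferedReader reader = new BufferedReader(new InputStreamReader(in, "US-ASCII"));

        // Thread A: writes chunks as they arrive from the BLE device.
        Thread producer = new Thread(() -> {
            try {
                for (String chunk : new String[]{"ABCD", "EFGH", "I\r\n"}) {
                    writer.write(chunk);
                    writer.flush(); // flush each chunk so the reader side sees it
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        producer.start();

        // Thread B (here, the main thread): blocks until a full line is available.
        String line = reader.readLine();
        System.out.println(line); // prints "ABCDEFGHI"
    }
}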

Clearing resources on unsubscribe

I am having some trouble with executing some logic when a subscription has been unsubscribed. I've been at this for hours and I have made little progress so far. This is a simplified version of my code:
public class Command<E> {

    public CommandActionObservable execute() {
        final CommandAction<E> command = createCommand();

        final OnSubscribe<CommandAction<E>> onSubscribe = (subscriber) -> {
            /* Create a listener that handles notifications and register it.
             * The idea here is to push the command downstream so it can be re-executed.
             */
            final Listener listener = (event) -> {
                subscriber.onNext(command);
            };
            registerListener(listener);

            /* This is where I'm having trouble. The unregister method
             * should be executed when the subscriber unsubscribes,
             * but it never happens.
             */
            subscriber.add(Subscriptions.create(() -> {
                unregisterListener(listener);
            }));

            // pass the initial command downstream
            subscriber.onNext(command);
            kickOffBackgroundAction();
        };

        final Observable<CommandAction<E>> actionObservable = Observable.create(onSubscribe)
                .onBackpressureLatest()
                .observeOn(Schedulers.io())
                .onBackpressureLatest();

        return new CommandActionObservable((subscriber) -> {
            actionObservable.unsafeSubscribe(subscriber);
        });
    }

    public class CommandActionObservable extends Observable<CommandAction<E>> {
        // default constructor omitted

        public Observable<E> toResult() {
            return lift((Operator) (subscriber) -> {
                return new Subscriber<CommandAction<E>>() {
                    // delegate onCompleted and onError to subscriber
                    public void onNext(CommandAction<E> action) {
                        // execute the action and pass the result downstream
                        final E result = action.execute();
                        subscriber.onNext(result);
                    }
                };
            });
        }
    }
}
I am using the Command in the usual way, adding the resulting subscription to a CompositeSubscription and unsubscribing from it in onDestroy(). Here is an example:
final Observable<SomeType> obs = new Command<SomeType>()
        .execute()
        .toResult();

subscription.add(obs.subscribe(/* impl here */));

public void onDestroy() {
    super.onDestroy();
    subscription.unsubscribe();
}
As mentioned, I can't get the unsubscription logic to work and unregister the listener, which causes memory leaks in the app. If I call doOnUnsubscribe() on obs it gets called, so I am unsubscribing correctly, but maybe the nesting of the observables and the lifting causes some issues.
I'd be glad to hear opinions on this one.
Turns out it was way easier than I anticipated.
After a bit of digging around I was able to come up with the answer on my own. Just posting this for people that may end up in the same situation as me.
So, as I mentioned in my question, if I added a doOnUnsubscribe() action to the observable I was getting in my Activity, it got notified. Next I tried adding the same action to the inner Observables I create in the execute() method; they were not getting called. So I came to the conclusion that the chain was getting broken somewhere between the observable in my Activity and the observables I was creating in execute().
The only thing happening to the stream was the application of my custom Operator implemented in toResult(). After a Google search, I came across this excellent article - Pitfalls of Operator Implementation. I was indeed breaking the chain in my operator, and the upstream observables were not notified of the unsubscription.
After I did what the author advises, all is good. Here is what I needed to do:
lift((Operator) (subscriber) -> {
    // connect the upstream and downstream subscribers to keep the chain intact
    return new Subscriber<CommandAction<E>>(subscriber) {
        // the implementation is the same
    };
});

BlockingQueue to block and return object till the object with specified id becomes available on the queue

I have some legacy code in a middle application where the request/response methods are synchronous.
Now the new interface to the back end is async, but I have to simulate the sync request in the middle application.
I need to make a request call and wait for the async response to come. It is a multithreaded application where there will be multiple simultaneous requests and responses.
I am wondering if there is some kind of blocking queue in Java where I can put the request object and wait till the response object with a specified id (by the equals method) is put back on the queue.
So there will be two methods that put objects on the queue; let's say they are requestSender and responseReceiver. requestSender will put a request and wait, and then responseReceiver will put all the responses on the queue; only when a response object matches will the associated requestSender get that object and return it to the front end.
In short, is there something like a take(object) method on a queue that will only return when that specific object becomes available on the queue?
If that is not possible with a blocking queue, then what other approach should I use to simulate sync request/response in the middle application for the front end, when the back end to middle application interface is async?
Thank you so much in advance.
This is how I implemented the solution for now; please let me know if there is a better way, thanks.
public class OrderFixResponseBlockingQueue {

    // note: for concurrent access this should really be a ConcurrentHashMap
    private Map<OrderFixKey, ArrayBlockingQueue<FxOrder>> responseQueueMap =
            new HashMap<OrderFixKey, ArrayBlockingQueue<FxOrder>>();

    public FxOrder get(FxOrder order, int timeoutInSecs) {
        try {
            OrderFixKey key = new OrderFixKey(order);
            ArrayBlockingQueue<FxOrder> orderQueue = new ArrayBlockingQueue<FxOrder>(1);
            responseQueueMap.put(key, orderQueue);
            FxOrder responseOrder = orderQueue.poll(timeoutInSecs, TimeUnit.SECONDS);
            responseQueueMap.remove(key);
            return responseOrder;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return null;
    }

    public void put(FxOrder order) {
        OrderFixKey key = new OrderFixKey(order);
        ArrayBlockingQueue<FxOrder> queue = responseQueueMap.get(key);
        if (queue != null) {
            queue.add(order);
        } else {
            System.out.println("Put queue not available, should be a request timeout");
        }
    }

    private class OrderFixKey {
        private final String orderId;

        OrderFixKey(FxOrder order) {
            this.orderId = order.getOrderId();
        }

        @Override
        public boolean equals(Object obj) {
            if (obj instanceof OrderFixKey) { // compare keys against keys: HashMap calls equals between keys
                OrderFixKey that = (OrderFixKey) obj;
                return this.orderId.equals(that.orderId);
            }
            return false;
        }

        @Override
        public int hashCode() {
            return this.orderId.hashCode();
        }
    }
}
Look at ArrayBlockingQueue or LinkedBlockingQueue
So you will have a producer producing elements, which can just offer them to one of these queues, while your consumer thread blocks on the queue's take method until an element arrives from the producer; then you can consume that element in your consumer thread.
There is no such queue that will block until an element with the same id is pushed onto the queue. I would suggest you create your own customized structure if that's the case; one common approach is sketched below.
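For reference, a minimal sketch of one common approach: a map of per-request futures keyed by id, so the requester blocks on its own future while the response receiver completes it (all type names here are illustrative, not from the question):
import java.util.concurrent.*;

interface Request { String getId(); }
interface Response { String getId(); }
interface AsyncBackend { void send(Request request); }

class SyncOverAsyncBridge {

    private final ConcurrentMap<String, CompletableFuture<Response>> pending = new ConcurrentHashMap<>();

    // Called by the request sender: registers interest, sends, then blocks for the matching reply.
    Response sendAndWait(Request request, AsyncBackend backend, long timeoutSecs)
            throws InterruptedException, ExecutionException, TimeoutException {
        CompletableFuture<Response> future = new CompletableFuture<>();
        pending.put(request.getId(), future);
        try {
            backend.send(request);                             // fire the async request
            return future.get(timeoutSecs, TimeUnit.SECONDS);  // block until the matching response arrives
        } finally {
            pending.remove(request.getId());                   // clean up on success or timeout
        }
    }

    // Called by the response receiver thread for every incoming async response.
    void onResponse(Response response) {
        CompletableFuture<Response> future = pending.get(response.getId());
        if (future != null) {
            future.complete(response);
        } // else: the response arrived after the requester timed out
    }
}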

multithreading within vertx

I am a newbie to vert.x. I was trying out the vert.x "NetServer" capability (http://vertx.io/core_manual_java.html#writing-tcp-servers-and-clients) and it works like a charm.
However , I also read that "A verticle instance is strictly single threaded.
If you create a simple TCP server and deploy a single instance of it then all the handlers for that server are always executed on the same event loop (thread)."
Currently, for my implementation, I want to receive the TCP stream of bytes and then trigger another component. But this should not be a blocking call within the "start" method of the Verticle. So, is it good practice to create an executor within the start method, or does vert.x automatically handle such cases?
Here is a snippet
public class TCPListener extends Verticle {

    public void start() {
        NetServer server = vertx.createNetServer();
        server.connectHandler(new Handler<NetSocket>() {
            public void handle(NetSocket sock) {
                container.logger().info("A client has connected");
                sock.dataHandler(new Handler<Buffer>() {
                    public void handle(Buffer buffer) {
                        container.logger().info("I received " + buffer.length() + " bytes of data");
                        container.logger().info("I received " + new String(buffer.getBytes()));
                        // Trigger another component here. Should be done in a separate thread;
                        // this call should return without waiting for the component's response.
                    }
                });
            }
        }).listen(1234, "host");
    }
}
What should be the mechanism to make this a non-blocking call?
I don't think this is the way to go for vert.x.
A better way would be to use the event bus properly instead of Executor. Have a worker respond to the event on the bus, do the processing, and signal the bus when it's completed.
Creating threads defeats the purpose of going with vert.x.
The most flexible way is to create an ExecutorService and process requests with it. This gives fine-grained control over the threading model of the workers (fixed or variable number of threads, which work should be performed serially on a single thread, etc.).
The modified sample might look like this:
public class TCPListener extends Verticle {

    private final ExecutorService executor = Executors.newFixedThreadPool(10);

    public void start() {
        NetServer server = vertx.createNetServer();
        server.connectHandler(new Handler<NetSocket>() {
            public void handle(final NetSocket sock) { // <-- Note 'final' here
                container.logger().info("A client has connected");
                sock.dataHandler(new Handler<Buffer>() {
                    public void handle(final Buffer buffer) { // <-- Note 'final' here
                        // Trigger another component here in a separate thread;
                        // this handler returns without waiting for the component's response.
                        executor.submit(new Runnable() {
                            public void run() {
                                // It's okay to read buffer data here
                                // and use sock.write() if necessary
                                container.logger().info("I received " + buffer.length() + " bytes of data");
                                container.logger().info("I received " + new String(buffer.getBytes()));
                            }
                        });
                    }
                });
            }
        }).listen(1234, "host");
    }
}
As duffymo mentioned, creating threads defeats the purpose of using vert.x. The best way would be to write a message to the event bus and create a new handler listening for messages from the event bus. I have updated the code to showcase this: the messages are written to the "next.topic" address, and a handler is registered to read messages from "next.topic".
public class TCPListener extends Verticle {

    public void start() {
        NetServer server = vertx.createNetServer();
        server.connectHandler(new Handler<NetSocket>() {
            public void handle(NetSocket sock) {
                container.logger().info("A client has connected");
                sock.dataHandler(new Handler<Buffer>() {
                    public void handle(Buffer buffer) {
                        String recvMesg = new String(buffer.getBytes());
                        container.logger().info("I received " + buffer.length() + " bytes of data");
                        container.logger().info("I received " + recvMesg);
                        // Writing the received message to the event bus
                        vertx.eventBus().send("next.topic", recvMesg);
                    }
                });
            }
        }).listen(1234, "host");

        // Registering a new handler listening on the "next.topic" address on the event bus
        vertx.eventBus().registerHandler("next.topic", new Handler<Message<String>>() {
            public void handle(Message<String> mesg) {
                container.logger().info("Received message: " + mesg.body());
            }
        });
    }
}
