I am a newbie to vert.x. I was trying out the vert.x NetServer capability (http://vertx.io/core_manual_java.html#writing-tcp-servers-and-clients) and it works like a charm.
However, I also read that "A verticle instance is strictly single threaded.
If you create a simple TCP server and deploy a single instance of it then all the handlers for that server are always executed on the same event loop (thread)."
For my implementation, I want to receive the TCP stream of bytes and then trigger another component, but this should not be a blocking call within the start method of the Verticle. So, is it good practice to create an executor within the start method, or does Vert.x handle such cases automatically?
Here is a snippet
public class TCPListener extends Verticle {
public void start(){
NetServer server = vertx.createNetServer();
server.connectHandler(new Handler<NetSocket>() {
public void handle(NetSocket sock) {
container.logger().info("A client has connected");
sock.dataHandler(new Handler<Buffer>() {
public void handle(Buffer buffer) {
container.logger().info("I received " + buffer.length() + " bytes of data");
container.logger().info("I received " + new String(buffer.getBytes()));
//Trigger another component here. Should be done in a separate thread.
//The previous call should return. No need to wait for the component's response.
}
});
}
}).listen(1234, "host");
}
}
What should the mechanism be to make this a non-blocking call?
I don't think this is the way to go for vert.x.
A better way would be to use the event bus properly instead of an Executor. Have a worker respond to the event on the bus, do the processing, and signal the bus when it's completed.
Creating threads defeats the purpose of going with vert.x.
The most flexible way is to create an ExecutorService and process requests with it. This gives fine-grained control over the threading model of the workers (a fixed or variable number of threads, which work should be performed serially on a single thread, etc.).
The modified sample might look like this:
public class TCPListener extends Verticle {
private final ExecutorService executor = Executors.newFixedThreadPool(10);
public void start(){
NetServer server = vertx.createNetServer();
server.connectHandler(new Handler<NetSocket>() {
public void handle(final NetSocket sock) { // <-- Note 'final' here
container.logger().info("A client has connected");
sock.dataHandler(new Handler<Buffer>() {
public void handle(final Buffer buffer) { // <-- Note 'final' here
//Trigger another component here. Should be done in a separate thread.
//The previous call should return. No need to wait for the component's response.
executor.submit(new Runnable() {
public void run() {
//It's okay to read buffer data here
//and use sock.write() if necessary
container.logger().info("I received " + buffer.length() + " bytes of data");
container.logger().info("I received " + new String(buffer.getBytes()));
}
});
}
});
}
}).listen(1234, "host");
}
}
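One caveat with this approach: the pool should be shut down when the verticle is undeployed, otherwise its threads keep running after undeployment. A minimal sketch, assuming the executor field from the sample above:
@Override
public void stop() {
    //Reject new tasks; tasks already submitted still run to completion
    executor.shutdown();
}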
As duffymo mentioned, creating threads defeats the purpose of using Vert.x. The best way would be to write a message to the event bus and register a handler listening for messages from it. I updated the code to showcase this: the received data is written to the "next.topic" address, and a handler is registered to read messages from that address.
public class TCPListener extends Verticle {
public void start(){
NetServer server = vertx.createNetServer();
server.connectHandler(new Handler<NetSocket>() {
public void handle(NetSocket sock) {
container.logger().info("A client has connected");
sock.dataHandler(new Handler<Buffer>() {
public void handle(Buffer buffer) {
String recvMesg = new String(buffer.getBytes());
container.logger().info("I received " + buffer.length() + " bytes of data");
container.logger().info("I received " + recvMesg);
//Writing received message to event bus
vertx.eventBus().send("next.topic", recvMesg);
}
});
}
}).listen(1234, "host");
//Registering new handler listening to "next.topic" topic on event bus
vertx.eventBus().registerHandler("next.topic", new Handler<Message<String>>() {
public void handle(Message<String> mesg) {
container.logger().info("Received message: " + mesg.body());
}
});
}
}
Related
In gRPC, what is the most efficient way of calling another gRPC service before my service can answer any request?
My code here looks a bit of a mess: in the constructor of GreetingServiceImpl, I am starting a Thread just to get some sort of greetings list from a GreetingServiceRepository service running on a different port.
The use case is something like this: there is a gRPC service, GreetingsRepository, which contains a list of greetings, and a GreetingServiceImpl which calls the GreetingsRepository. I want to customize the response so that I can return a custom response for every request.
public class MyGrpcServer {
static public void main(String [] args) throws IOException, InterruptedException {
Server server = ServerBuilder.forPort(8080)
.addService(new GreetingServiceImpl()).build();
System.out.println("Starting server...");
server.start();
System.out.println("Server started!");
server.awaitTermination();
}
public static class GreetingServiceImpl extends GreetingServiceGrpc.GreetingServiceImplBase {
public GreetingServiceImpl(){
init();
}
public void init(){
//Do initial long running task
//Like running a thread that will call another service from a repository
Thread t1 = new Thread(){
public void run(){
//Call another grpc service
ManagedChannel channel = ManagedChannelBuilder.forAddress("localhost", 8081)
.usePlaintext(true)
.build();
GreetingServiceRepositoryGrpc.GreetingServiceRepositoryBlockingStub stub =
GreetingServiceRepositoryGrpc.newBlockingStub(channel);
//Do something with the response
}
}
t1.start();
}
@Override
public void greeting(HelloRequest request, StreamObserver<HelloResponse> responseObserver) {
System.out.println(request);
//USE THE LIST OF GREETINGS FROM THE REPOSITORY and customize it per user
//String greeting = "Hello there, " + request.getName();
//String greeting = "Holla, " + request.getName();
String greeting = "Good Morning, " + request.getName();
HelloResponse response = HelloResponse.newBuilder().setGreeting(greeting).build();
responseObserver.onNext(response);
responseObserver.onCompleted();
}
}
}
Is there a way in gRPC to initialize the service before it can respond to any request?
I am not sure the constructor is a good idea, nor is firing up another thread just to call another service.
There are two major ways: 1) delay starting the server until dependent services are ready, and 2) delay clients sending requests to this server until dependent services are ready.
Delay starting the server until ready:
GreetingServiceImpl gsi = new GreetingServiceImpl();
Server server = ServerBuilder.forPort(8080)
.addService(gsi).build();
System.out.println("Starting server...");
gsi.init();
server.start();
Delaying clients sending requests to this server depends on how clients learn of the server's address. For example, if using a load balancing proxy that uses the Health service, wait until ready and then call:
healthStatusManager.setStatus("", ServingStatus.SERVING);
The proxy will then learn this server is healthy and inform clients about the backend.
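For illustration, here is a rough sketch of that wiring using the HealthStatusManager helper from grpc-java's grpc-services artifact (gsi and init() are reused from the snippet above; treat the exact class and package names as an assumption about your grpc-java version):
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.health.v1.HealthCheckResponse.ServingStatus;
import io.grpc.services.HealthStatusManager;

public class MyGrpcServer {
    public static void main(String[] args) throws Exception {
        HealthStatusManager healthStatusManager = new HealthStatusManager();
        GreetingServiceImpl gsi = new GreetingServiceImpl();
        Server server = ServerBuilder.forPort(8080)
                .addService(gsi)
                //Expose the standard grpc.health.v1.Health service
                .addService(healthStatusManager.getHealthService())
                .build();
        //Explicitly mark the server as not ready yet
        healthStatusManager.setStatus("", ServingStatus.NOT_SERVING);
        server.start();
        gsi.init(); //Fetch the greetings from the repository service
        //Only now advertise this server as healthy; the proxy starts routing to it
        healthStatusManager.setStatus("", ServingStatus.SERVING);
        server.awaitTermination();
    }
}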
I am implementing websockets using Vert.x 3.
The scenario is simple: a socket is opened from the client, some 'blocking' work is done in a Vert.x worker verticle, and when it finishes, the answer is sent back to the client (via the open socket).
Please tell me if I am doing it right:
I created VertxWebsocketServerVerticle. As soon as the websocket is opened and a request comes from the client, I use the event bus and pass the message to EventBusReceiverVerticle, where I do the blocking operation.
How do I actually send the response back to VertxWebsocketServerVerticle and then on to the client?
code:
Main class:
public static void main(String[] args) throws InterruptedException {
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new EventBusReceiverVerticle("R1"),new DeploymentOptions().setWorker(true));
vertx.deployVerticle(new VertxWebsocketServerVerticle());
}
VertxWebsocketServerVerticle:
public class VertxWebsocketServerVerticle extends AbstractVerticle {
public void start() {
vertx.createHttpServer().websocketHandler(webSocketHandler -> {
System.out.println("Connected!");
Buffer buff = Buffer.buffer().appendInt(12).appendString("foo");
webSocketHandler.writeFinalBinaryFrame(buff);
webSocketHandler.handler(buffer -> {
String inputString = buffer.getString(0, buffer.length());
System.out.println("inputString=" + inputString);
vertx.executeBlocking(future -> {
vertx.eventBus().send("anAddress", inputString, event -> System.out.printf("got back from reply"));
future.complete();
}, res -> {
if (res.succeeded()) {
webSocketHandler.writeFinalTextFrame("output=" + inputString + "_result");
}
});
});
}).listen(8080);
}
@Override
public void stop() throws Exception {
super.stop();
}
}
EventBusReceiverVerticle :
public class EventBusReceiverVerticle extends AbstractVerticle {
private String name = null;
public EventBusReceiverVerticle(String name) {
this.name = name;
}
public void start(Future<Void> startFuture) {
vertx.eventBus().consumer("anAddress", message -> {
System.out.println(this.name +
" received message: " +
message.body());
try {
//doing some looong work..
Thread.sleep(10000);
System.out.printf("finished waiting\n");
startFuture.complete();
} catch (InterruptedException e) {
e.printStackTrace();
}
});
}
}
I always get:
WARNING: Message reply handler timed out as no reply was received - it will be removed
github project at: https://github.com/IdanFridman/VertxAndWebSockets
thank you,
ray.
Since you are blocking your websocket handler until it receives a reply for the message sent to the event bus, and that reply will not, in fact, arrive before the set-up delay of 10 s elapses, you will certainly get that warning: the reply handler of the event bus times out because a message was sent but no response was received before the timeout delay.
Actually, I don't know whether you are just experimenting with the Vert.x toolkit or trying to fulfill some requirement, but you certainly have to adapt your code to the Vert.x spirit:
First, you had better not block until a message is received in your websocket handler; keep in mind that everything is asynchronous when it comes to Vert.x.
In order to sleep for some time, use the Vert.x way and not Thread.sleep(delay), i.e. vertx.setTimer(...).
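To illustrate both points, here is a minimal sketch (one possible way to wire it up, using the same "anAddress" address as in your code): the consumer simulates the long work with vertx.setTimer() and replies to the message, and the websocket handler just sends with a reply handler, so no executeBlocking is needed:
//EventBusReceiverVerticle: reply when the (simulated) work is done
public void start() {
    vertx.eventBus().consumer("anAddress", message -> {
        System.out.println(name + " received message: " + message.body());
        vertx.setTimer(10000, timerId -> {
            System.out.println("finished waiting");
            message.reply(message.body() + "_result");
        });
    });
}

//VertxWebsocketServerVerticle: write the frame once the reply arrives
webSocketHandler.handler(buffer -> {
    String inputString = buffer.getString(0, buffer.length());
    vertx.eventBus().send("anAddress", inputString, reply -> {
        if (reply.succeeded()) {
            webSocketHandler.writeFinalTextFrame("output=" + reply.result().body());
        }
    });
});
Since the default event-bus reply timeout (30 s) is longer than the 10 s timer, the reply now arrives in time and the warning disappears.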
I'm trying to make a chat application for the network in my college. It's actually two programs: one for the server and the other for the clients. All client messages will be sent to the server with their sender's name and intended target prepended to them. The server, using this information, sends the message to the target.
I wrote a program which simulates the server side of things with 4 classes: Model, MessageCentre, Receiver and Sender.
Receiver, on an independent thread, generates strings and adds them to the queue in MessageCentre with random time-outs. Sender checks if the queue is empty, and if not, it 'sends' the message (just prints it). The Model class simply contains the main method which starts the Receiver and Sender threads.
This is the code of the simulation:
Model class->
package model;
public class Model {
public static void main(String[] args) {
Receiver receiver = new Receiver();
Sender sender = new Sender();
receiver.start();
sender.start();
}
}
MessageCentre class->
package model;
import java.util.LinkedList;
import java.util.Queue;
public class MessageCentre {
private static Queue<String> pendingMessages = new LinkedList<>();
public static synchronized boolean centreIsEmpty() {
return pendingMessages.isEmpty();
}
public static synchronized String readNextAndRemove() {
return pendingMessages.remove();
}
public static synchronized boolean addToQueue(String message) {
return pendingMessages.add(message);
}
}
Receiver class->
package model;
import java.util.Random;
public class Receiver extends Thread {
private int instance;
public Receiver() {
instance = 0; //gets incremented after each message
}
@Override
public void run() {
while (true) {
boolean added = MessageCentre.addToQueue(getMessage());
if (!added) {
System.out.println("Message " + instance + " failed to send");
}
try {
//don't send for another 0 to 10 seconds
Thread.sleep(new Random().nextInt(10_000));
} catch (InterruptedException ex) {
ex.printStackTrace();
}
}
}
private String getMessage() {
int copyInstance = instance;
instance++;
return "Message " + copyInstance;
}
}
Sender class->
package model;
public class Sender extends Thread {
@Override
public void run() {
while(true) {
if(!MessageCentre.centreIsEmpty()) {
System.out.println(MessageCentre.readNextAndRemove());
}
}
}
}
Question: If the getMessage() method of the Receiver class were to be replaced by a method which accepts messages from a socket input stream, is there a chance that some messages would be lost?
It is crucial that all received messages be written to the queue so that no messages are lost. This simulation seems to run fine, but I have no way to test a scenario where a large influx of messages is being received through a socket.
The case which I fear might occur is the following:
Receiver gets a message and attempts to write it to the queue.
Sender has a hold of the queue to read and remove items from it, thereby preventing Receiver from writing the newest message. The Receiver finally gets the opportunity to write the current message to the queue, but a new message simultaneously enters the socket input stream to be lost forever.
Is this scenario possible? If so, can it be prevented by setting the priority level of Receiver to be higher than Sender?
a new message simultaneously enters the socket input stream to be lost forever
is always possible, but large numbers of dropped requests are unlikely unless you're under heavy load or have unnecessarily large critical sections. Dropped requests also happen due to factors outside your control, so your system needs to be robust in the face of them anyway.
Use a queue implementation from java.util.concurrent instead of manually synchronizing on a LinkedList and the queue portion of your code should be fine.
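For instance, here is a minimal sketch of MessageCentre rewritten on top of a LinkedBlockingQueue; take() blocks until a message is available, so Sender no longer has to spin on centreIsEmpty():
package model;

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageCentre {
    private static final BlockingQueue<String> pendingMessages = new LinkedBlockingQueue<>();

    //Blocks until a message is available
    public static String takeNext() throws InterruptedException {
        return pendingMessages.take();
    }

    //Never fails on an unbounded queue, but keeps the old boolean contract
    public static boolean addToQueue(String message) {
        return pendingMessages.offer(message);
    }
}
Sender's loop then becomes System.out.println(MessageCentre.takeNext()) inside a try/catch for InterruptedException, and no synchronized methods are needed because the queue handles thread safety internally.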
I have a Set of clients and an Event I want to broadcast to them. To be more exact, I'm using CopyOnWriteArraySet to avoid ConcurrentModificationException.
It's all working nicely, but I'm beginning to hit a performance issue with a large number of clients.
Could you suggest a way to serve the clients in parallel?
The broadcast loop now looks basically like this:
for (Client client : clients) {
sendTo(client, event);
}
With Java 8, you can replace the loop with:
clients.parallelStream().forEach(client -> sendTo(client, event));
With Java 7, you will need to manually write the code. A first simple version would look like:
private final ExecutorService executor = Executors.newFixedThreadPool(N_THREADS);
private void send(Set<Client> clients, final Event event) {
for (Client client : clients) {
final Client c = client;
executor.submit(new Runnable() { public void run() { sendTo(c, event); }});
}
}
You can use threads via an ExecutorService. Here's a kickoff example:
int MAX_THREADS = 10;
ExecutorService executor = Executors.newFixedThreadPool(MAX_THREADS);
for (final Client client : clients) {
executor.execute(new Runnable() {
@Override
public void run() {
//if event is a variable or a parameter in the bigger method
//mark it as final
sendTo(client, event);
}
});
}
executor.shutdown();
Motivation
I want extra eyes to confirm that I am able to call the method XMPPConnection.sendPacket(Packet) concurrently. In my current code, I am invoking a List of Callables (max 3) in a serial fashion. Each Callable sends/receives XMPP packets on the one shared XMPPConnection. I plan to parallelize these Callables by spinning off multiple threads, and each Callable will invoke sendPacket on the shared XMPPConnection without synchronization.
XMPPConnection
class XMPPConnection
{
private boolean connected = false;
public boolean isConnected()
{
return connected;
}
PacketWriter packetWriter;
public void sendPacket( Packet packet )
{
if (!isConnected())
throw new IllegalStateException("Not connected to server.");
if (packet == null)
throw new NullPointerException("Packet is null.");
packetWriter.sendPacket(packet);
}
}
PacketWriter
class PacketWriter
{
public void sendPacket(Packet packet)
{
if (!done) {
// Invoke interceptors for the new packet
// that is about to be sent. Interceptors
// may modify the content of the packet.
processInterceptors(packet);
try {
queue.put(packet);
}
catch (InterruptedException ie) {
ie.printStackTrace();
return;
}
synchronized (queue) {
queue.notifyAll();
}
// Process packet writer listeners. Note that we're
// using the sending thread so it's expected that
// listeners are fast.
processListeners(packet);
}
}
protected PacketWriter( XMPPConnection connection )
{
this.queue = new ArrayBlockingQueue<Packet>(500, true);
this.connection = connection;
init();
}
}
What I conclude
Since the PacketWriter is using a BlockingQueue, there is no problem with my intention to invoke sendPacket from multiple threads. Am I correct?
Yes, you can send packets from different threads without any problems.
The Smack blocking queue exists because what you can't do is let the different threads write to the output stream at the same time. Smack takes responsibility for synchronizing the output stream by writing to it with per-packet granularity.
The pattern implemented by Smack is simply a typical producer/consumer concurrency pattern. You may have several producers (your threads) and only one consumer (Smack's PacketWriter, running in its own thread).
Regards.
You haven't provided enough information here.
We don't know how the following are implemented:
processInterceptors
processListeners
Who reads / writes the 'done' variable? If one thread sets it to true, then all the other threads will silently fail.
From a quick glance, this doesn't look thread safe, but there's no way to tell for sure from what you've posted.
Other issues:
Why is PacketWriter a class member of XMPPConnection when it's only used in one method?
Why does PacketWriter have a XMPPConnection member var and not use it?
You might consider using a BlockingQueue if you can restrict to Java 5+.
From the Java API docs, with a minor change to use ArrayBlockingQueue:
class Producer implements Runnable {
private final BlockingQueue queue;
Producer(BlockingQueue q) { queue = q; }
public void run() {
try {
while(true) { queue.put(produce()); }
} catch (InterruptedException ex) { ... handle ...}
}
Object produce() { ... }
}
class Consumer implements Runnable {
private final BlockingQueue queue;
Consumer(BlockingQueue q) { queue = q; }
public void run() {
try {
while(true) { consume(queue.take()); }
} catch (InterruptedException ex) { ... handle ...}
}
void consume(Object x) { ... }
}
class Setup {
void main() {
BlockingQueue q = new ArrayBlockingQueue(1024); // ArrayBlockingQueue requires an explicit capacity
Producer p = new Producer(q);
Consumer c1 = new Consumer(q);
Consumer c2 = new Consumer(q);
new Thread(p).start();
new Thread(c1).start();
new Thread(c2).start();
}
}
For your usage you'd have your real sender (holder of the actual connection) be the Consumer, and packet preparers/senders be the producers.
An interesting additional thought is that you could use a PriorityBlockingQueue to allow flash override XMPP packets that are sent before any other waiting packets.
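Sketching that idea (Smack's Packet has no priority field, so the PrioritizedPacket wrapper and the numeric levels below are purely hypothetical):
import org.jivesoftware.smack.packet.Packet;

class PrioritizedPacket implements Comparable<PrioritizedPacket> {
    final Packet packet;
    final int priority; //e.g. 0 = flash override, 10 = routine

    PrioritizedPacket(Packet packet, int priority) {
        this.packet = packet;
        this.priority = priority;
    }

    public int compareTo(PrioritizedPacket other) {
        //Lower numbers are taken from the queue first
        return Integer.compare(this.priority, other.priority);
    }
}
The writer thread would then drain a PriorityBlockingQueue<PrioritizedPacket> instead of the ArrayBlockingQueue, so a packet queued at priority 0 jumps ahead of any routine packets already waiting.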
Also, Glen's points on the design are good points. You might want to take a look at the Smack API (http://www.igniterealtime.org/projects/smack/) rather than creating your own.