Using RxJava for request response layer with WebSockets - java

I'm trying to implement a request -> response layer on top of WebSockets in Java. I recently stumbled across RxJava, which seems like a nice library to use for this. Below is my current approach for handling the request/response flow (unimportant code omitted for readability):
public class SimpleServer extends WebSocketServer {

    Gson gson = new Gson();
    Map<String, Function<JsonObject, Void>> requests = new HashMap<>();
    private static int count = 0;

    public SimpleServer(InetSocketAddress address) {
        super(address);
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        String type = ...;
        String requestId = ...;
        JsonObject payload = ...;
        if (type.equals("response")) {
            requests.get(requestId).apply(payload);
        }
    }

    public Single<JsonObject> request(String action) {
        return Single.create(source -> {
            requests.put(Integer.toString(count++), response -> {
                source.onSuccess(response);
                return null;
            });
            broadcast(...);
        });
    }
}
Is this a viable solution, or is there a better way? I was wondering whether there is a way to use RxJava in both directions, i.e. the request would listen to an "onMessage" observable or something along those lines.
All help will be greatly appreciated.

You can use RxJava for communication in both directions. Let's start with the simpler one – receiving messages. I recommend you use BehaviorRelay, which behaves both as an Observable and a Consumer: you can both listen for emitted values and produce values – messages in our case. A simple implementation might look like this:
public class SimpleServer extends WebSocketServer {

    private BehaviorRelay<String> receivedMessages = BehaviorRelay.create();

    public SimpleServer(InetSocketAddress address) {
        super(address);
    }

    @Override
    public void onMessage(WebSocket conn, String message) {
        receivedMessages.accept(message); // "sends" value to the relay
    }

    public Observable<String> getReceivedMessagesRx() {
        return receivedMessages.hide(); // expose the Relay as a plain Observable
    }

    //...
You can now call getReceivedMessagesRx() and subscribe to incoming messages.
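For example, a caller could wire it up like this (a minimal usage sketch; the port, scheduler and handlers are just placeholders):
SimpleServer server = new SimpleServer(new InetSocketAddress(8080));
server.start();

Disposable subscription = server.getReceivedMessagesRx()
        .observeOn(Schedulers.computation()) // move processing off the WebSocket thread
        .subscribe(
                message -> System.out.println("Received: " + message),
                throwable -> throwable.printStackTrace());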
Now the more interesting part – sending messages. Let's assume you have some Observable that produces the messages you want to send:
// ...

    private Disposable senderDisposable = Disposables.disposed(); // (1)

    public void setMessagesSender(Observable<String> messagesToSend) { // (2)
        senderDisposable = messagesToSend.subscribe(message -> {
            broadcast(message);
        }, throwable -> {
            // handle broadcast error
        });
    }

    public void clear() { // (3)
        senderDisposable.dispose();
    }
}
What happens here:
1. Create a Disposable that holds a reference to the running observer of the messages to be sent.
2. Subscribe to the passed Observable, which emits every time you want to send a message. This function is meant to be called only once. If you want to call it multiple times, handle the disposal of the previous sender or use a CompositeDisposable to store multiple disposables.
3. When you are done working with your server, do not forget to dispose of the message sender.
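A minimal usage sketch, assuming you drive outgoing traffic from a PublishSubject (any Observable<String> works):
PublishSubject<String> outgoing = PublishSubject.create();
server.setMessagesSender(outgoing);

// anywhere in your code, push a message to be broadcast to all clients
outgoing.onNext("{\"type\":\"notification\",\"payload\":{}}");

// when shutting the server down
server.clear();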

Related

Debounce similar requests with reactor-grpc

In order to offload my database, I would like to debounce similar requests in a gRPC service (say, for instance, that they share the same id part of the request) that serves an API which does not have strong latency requirements. I know how to do that with vanilla gRPC, but I am not sure what kind of Mono API I can use.
The API calling the db directly looks like this:
public Mono<Blob> getBlob(Mono<MyRequest> request) {
    return request
        .map(req -> reader.getBlob(req.getId()));
}
I have a feeling I should use delaySubscription, but then it does not seem that groupBy is part of the Mono API that gRPC services handle.
It's perfectly OK to detect duplicates without using reactive operators:
// Guava cache as example.
private final Cache<String, Boolean> duplicatesCache = CacheBuilder.newBuilder()
        .expireAfterWrite(Duration.ofMinutes(1))
        .build();

public Mono<Blob> getBlob(Mono<MyRequest> request) {
    return request.map(req -> {
        var id = req.getId();
        var cacheKey = extractSharedIdPart(id);
        if (duplicatesCache.getIfPresent(cacheKey) == null) {
            duplicatesCache.put(cacheKey, true);
            return reader.getBlob(id);
        } else {
            return POISON_PILL; // Any object that represents a debounce hit.
            // Or use flatMap() + Mono.error() instead.
        }
    });
}
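If you prefer the flatMap() + Mono.error() variant mentioned in the comment above, a sketch could look like this (DuplicateRequestException is a hypothetical exception type, and reader.getBlob(id) is assumed to return a Mono<Blob> here):
public Mono<Blob> getBlob(Mono<MyRequest> request) {
    return request.flatMap(req -> {
        var cacheKey = extractSharedIdPart(req.getId());
        if (duplicatesCache.getIfPresent(cacheKey) != null) {
            // debounce hit: signal an error instead of returning a poison pill
            return Mono.error(new DuplicateRequestException(cacheKey));
        }
        duplicatesCache.put(cacheKey, true);
        return reader.getBlob(req.getId());
    });
}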
If for some reason you absolutely want to use reactive operators, then first you need to convert incoming gRPC requests into a Flux. This can be achieved using third-party libs like salesforce/reactive-grpc or directly:
class MyService extends MyServiceGrpc.MyServiceImplBase {

    private FluxSink<Tuple2<MyRequest, StreamObserver<MyResponse>>> sink;
    private Flux<Tuple2<MyRequest, StreamObserver<MyResponse>>> flux;

    MyService() {
        flux = Flux.create(sink -> this.sink = sink);
    }

    @Override
    public void handleRequest(MyRequest request, StreamObserver<MyResponse> responseObserver) {
        sink.next(Tuples.of(request, responseObserver));
    }

    Flux<Tuple2<MyRequest, StreamObserver<MyResponse>>> getFlux() {
        return flux;
    }
}
Next you subscribe to this flux and use operators you like:
public static void main(String[] args) {
    var mySvc = new MyService();
    var server = ServerBuilder.forPort(DEFAULT_PORT)
            .addService(mySvc)
            .build();
    server.start();

    mySvc.getFlux()
            .groupBy(...your grouping logic...)
            .flatMap(group -> {
                return group.sampleTimeout(...your debounce logic...);
            })
            .flatMap(...your handling logic...)
            .subscribe();
}
But beware of using groupBy with lots of distinct shared id parts:
The groups need to be drained and consumed downstream for groupBy to work correctly. Notably when the criteria produces a large amount of groups, it can lead to hanging if the groups are not suitably consumed downstream (eg. due to a flatMap with a maxConcurrency parameter that is set too low).
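One way to mitigate that, if you expect many distinct shared id parts, is to give flatMap an explicit concurrency high enough to drain all live groups. A sketch of that idea – the grouping key, debounce window and the value 256 below are illustrative placeholders, not recommendations:
mySvc.getFlux()
        .groupBy(tuple -> extractSharedIdPart(tuple.getT1().getId()))
        .flatMap(group -> group.sampleTimeout(t -> Mono.delay(Duration.ofMillis(200))),
                256) // maxConcurrency: must comfortably exceed the number of live groups
        .subscribe();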

How to access the payload of a message arriving in the callback method (messageArrived) from the main method in Eclipse Paho?

Problem statement:- I am trying to automate an MQTT flow; for that I need to publish and subscribe to multiple topics, but in sequential order. The tricky part is that the message received from the first publish has some value which will be passed to the next sub/pub commands.
For eg.
Sub to topicA/abc
Pub to topicA/abc
Message received on topicA/abc is xyz
sub to topic topicA/xyz
pub to topic topicA/xyz
I am able to receive the message on the first topic, but I do not get how to access the payload of the received message in the main method and attach it to the next topic for the next sub.
Is there a way to get the received message payload from the messageArrived callback method to the main method where the client instance is created?
Note:- I am using a single client for publish and subscribe.
Kindly help me out, as I have run out of options and methods to do so.
Edited:-
Code snippet
Main class
public class MqttOverSSL {

    String deviceId;
    MqttClient client = null;

    public MqttOverSSL() {
    }

    public MqttOverSSL(String deviceId) throws MqttException, InterruptedException {
        this.deviceId = deviceId;
        MqttConnection mqttConObj = new MqttConnection();
        this.client = mqttConObj.mqttConnection();
    }

    public void getLinkCodeMethod() throws MqttException, InterruptedException {
        client.subscribe("abc/multi/" + deviceId + "/linkcode", 0);
        publish(client, "abc/multi/" + deviceId + "/getlinkcode", 0, "".getBytes());
    }
}
MQTT callback impl:-
public class SimpleMqttCallBack implements MqttCallback {

    String arrivedMessage;

    @Override
    public void connectionLost(Throwable throwable) {
        System.out.println("Connection to MQTT broker lost!");
    }

    @Override
    public void messageArrived(String s, MqttMessage mqttMessage) throws Exception {
        arrivedMessage = mqttMessage.toString();
        System.out.println("Message received:\t" + arrivedMessage);
        linkCode(arrivedMessage);
    }

    @Override
    public void deliveryComplete(IMqttDeliveryToken iMqttDeliveryToken) {
        System.out.println("Delivery complete callback: Publish Completed " + Arrays.toString(iMqttDeliveryToken.getTopics()));
    }

    public void linkCode(String arrivedMessage) throws MqttException {
        System.out.println("String is " + arrivedMessage);
        Gson g = new Gson();
        GetCode code = g.fromJson(arrivedMessage, GetCode.class);
        System.out.println(code.getLinkCode());
    }
}
Publisher class:-
public class Publisher {

    public static void publish(MqttClient client, String topicName, int qos, byte[] payload) throws MqttException {
        String time = new Timestamp(System.currentTimeMillis()).toString();
        log("Publishing at: " + time + " to topic \"" + topicName + "\" qos " + qos);

        // Create and configure a message
        MqttMessage message = new MqttMessage(payload);
        message.setQos(qos);

        // Send the message to the server, control is not returned until
        // it has been delivered to the server meeting the specified
        // quality of service.
        client.publish(topicName, message);
    }

    static private void log(String message) {
        boolean quietMode = false;
        if (!quietMode) {
            System.out.println(message);
        }
    }
}
OK, it's a little clearer what you are trying to do now.
Short answer: no, you cannot pass values back to the "main method". MQTT is asynchronous, which means you have no idea when a message will arrive for a topic you subscribe to.
You need to update your code to check what the incoming message topic is and then do whatever action you wanted to do with that response in the messageArrived() handler. If you have a sequence of tasks to do, then you may need to implement what is known as a state machine in order to keep track of where you are in the sequence.
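A minimal sketch of that idea, reusing the topics and GetCode class from the snippets above (the state names and the second step are illustrative, not a complete implementation):
public class SequentialFlowCallback implements MqttCallback {

    private enum State { WAITING_FOR_LINK_CODE, WAITING_FOR_NEXT_REPLY, DONE }

    private final MqttClient client;
    private final String deviceId;
    private State state = State.WAITING_FOR_LINK_CODE;

    public SequentialFlowCallback(MqttClient client, String deviceId) {
        this.client = client;
        this.deviceId = deviceId;
    }

    @Override
    public void messageArrived(String topic, MqttMessage mqttMessage) throws Exception {
        String payload = mqttMessage.toString();
        switch (state) {
            case WAITING_FOR_LINK_CODE:
                GetCode code = new Gson().fromJson(payload, GetCode.class);
                String linkCode = code.getLinkCode();
                // use the value from the first reply to drive the next sub/pub
                client.subscribe("abc/multi/" + deviceId + "/" + linkCode, 0);
                client.publish("abc/multi/" + deviceId + "/get" + linkCode, new MqttMessage("".getBytes()));
                state = State.WAITING_FOR_NEXT_REPLY;
                break;
            case WAITING_FOR_NEXT_REPLY:
                // handle the second reply here, then mark the sequence as finished
                state = State.DONE;
                break;
            default:
                break;
        }
    }

    @Override
    public void connectionLost(Throwable throwable) { }

    @Override
    public void deliveryComplete(IMqttDeliveryToken token) { }
}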

Apache Camel creating Consumer component

I'm a newbie to Apache Camel. On HP NonStop there is a Receiver that receives events generated by the event manager, assume like a stream. My goal is to set up a consumer endpoint which receives the incoming message and processes it through Camel.
At the other endpoint I simply need to write it to the logs. From my study I understood that for the consumer endpoint I need to create my own component, and the configuration would be like
from("myComp:receive").to("log:net.javaforge.blog.camel?level=INFO")
Here is my code snippet which receives messages from the event system.
Receive receive = com.tandem.ext.guardian.Receive.getInstance();
byte[] maxMsg = new byte[500]; // holds largest possible request
short errorReturn = 0;
int countRead = 0;
boolean moreOpeners = true;
do { // read messages from $receive until last close
    try {
        countRead = receive.read(maxMsg, maxMsg.length);
        String receivedMessage = new String(maxMsg, "UTF-8");
        // Here I need to hand receivedMessage over to Camel
    } catch (ReceiveNoOpeners ex) {
        moreOpeners = false;
    } catch (Exception e) {
        moreOpeners = false;
    }
} while (moreOpeners);
Can someone guide me with some hints on how to make this a Consumer?
The 10,000-foot view is this:
You need to start out with implementing a component. The easiest way to get started is to extend org.apache.camel.impl.DefaultComponent. The only thing you have to do is override DefaultComponent::createEndpoint(..). Quite obviously, what it does is create your endpoint.
So the next thing you need is to implement your endpoint. Extend org.apache.camel.impl.DefaultEndpoint for this. Override at the minimum DefaultEndpoint::createConsumer(Processor) to create your own consumer.
Last but not least, you need to implement the consumer. Again, best is to extend org.apache.camel.impl.DefaultConsumer. The consumer is where the code that generates your messages has to go. Through the constructor you receive a reference to your endpoint. Use the endpoint reference to create a new Exchange, populate it and send it on its way along the route. Something along the lines of
Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
setMyMessageHeaders(ex.getIn(), myMessagemetaData);
setMyMessageBody(ex.getIn(), myMessage);
getAsyncProcessor().process(ex, new AsyncCallback() {
    @Override
    public void done(boolean doneSync) {
        LOG.debug("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
    }
});
I recommend you pick a simple component (DirectComponent ?) as an example to follow.
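For completeness, here is a bare-bones sketch of the component and endpoint classes described above; the names MessageComponent and MessageEndpoint are assumptions chosen to match the consumer posted below:
public class MessageComponent extends DefaultComponent {
    @Override
    protected Endpoint createEndpoint(String uri, String remaining, Map<String, Object> parameters) throws Exception {
        return new MessageEndpoint(uri, this);
    }
}

public class MessageEndpoint extends DefaultEndpoint {

    public MessageEndpoint(String uri, Component component) {
        super(uri, component);
    }

    @Override
    public Producer createProducer() throws Exception {
        throw new UnsupportedOperationException("This endpoint only supports consumers");
    }

    @Override
    public Consumer createConsumer(Processor processor) throws Exception {
        return new MessageConsumer(this, processor);
    }

    @Override
    public boolean isSingleton() {
        return true;
    }
}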
Herewith I'm adding my own consumer component; it may help someone.
public class MessageConsumer extends DefaultConsumer {

    private final MessageEndpoint endpoint;
    private boolean moreOpeners = true;

    public MessageConsumer(MessageEndpoint endpoint, Processor processor) {
        super(endpoint, processor);
        this.endpoint = endpoint;
    }

    @Override
    protected void doStart() throws Exception {
        int countRead = 0; // number of bytes read
        do {
            countRead++;
            String msg = String.valueOf(countRead) + " " + System.currentTimeMillis();
            Exchange ex = endpoint.createExchange(ExchangePattern.InOnly);
            ex.getIn().setBody(msg);
            getAsyncProcessor().process(ex, new AsyncCallback() {
                @Override
                public void done(boolean doneSync) {
                    log.info("Message was processed " + (doneSync ? "synchronously" : "asynchronously"));
                }
            });
            // This is an echo server so echo request back to requester
        } while (moreOpeners);
    }

    @Override
    protected void doStop() throws Exception {
        moreOpeners = false;
        log.debug("Message processor is shutdown");
    }
}

How to get an existing websocket instance

I'm working on an application that uses WebSockets (Java EE 7) to send messages to all the connected clients asynchronously. The server (WebSocket endpoint) should send these messages whenever a new article (an engagement modal in my app) is created.
Every time a connection is established to the WebSocket endpoint, I'm adding the corresponding session to a list, which I can access from outside.
But the problem I have is that when I access this WebSocket endpoint (to which all the clients are connected) from outside (any other business class), I need to get the existing instance (like a singleton).
So, can you please suggest a way I can get an existing instance of the WebSocket endpoint? I can't create it with new MyWebsocketEndPoint() because it will be created by the WebSocket internal mechanism whenever a request from a client is received.
For a ref:
private static WebSocketEndPoint INSTANCE = null;

public static WebSocketEndPoint getInstance() {
    if (INSTANCE == null) {
        // Instead of creating a new instance, I need an existing one
        INSTANCE = new WebSocketEndPoint();
    }
    return INSTANCE;
}
Thanks in advance.
The container creates a separate instance of the endpoint for every client connection, so you can't do what you're trying to do. But I think what you're trying to do is send a message to all the active client connections when an event occurs, which is fairly straightforward.
The javax.websocket.Session class has the getBasicRemote method to retrieve a RemoteEndpoint.Basic instance that represents the endpoint associated with that session.
You can retrieve all the open sessions by calling Session.getOpenSessions(), then iterate through them. The loop will send each client connection a message. Here's a simple example:
#ServerEndpoint("/myendpoint")
public class MyEndpoint {
#OnMessage
public void onMessage(Session session, String message) {
try {
for (Session s : session.getOpenSessions()) {
if (s.isOpen()) {
s.getBasicRemote().sendText(message);
}
} catch (IOException ex) { ... }
}
}
But in your case, you probably want to use CDI events to trigger the update to all the clients. In that case, you'd create a CDI event that a method in your Websocket endpoint class observes:
#ServerEndpoint("/myendpoint")
public class MyEndpoint {
// EJB that fires an event when a new article appears
#EJB
ArticleBean articleBean;
// a collection containing all the sessions
private static final Set<Session> sessions =
Collections.synchronizedSet(new HashSet<Session>());
#OnOpen
public void onOpen(final Session session) {
// add the new session to the set
sessions.add(session);
...
}
#OnClose
public void onClose(final Session session) {
// remove the session from the set
sessions.remove(session);
}
public void broadcastArticle(#Observes #NewArticleEvent ArticleEvent articleEvent) {
synchronized(sessions) {
for (Session s : sessions) {
if (s.isOpen()) {
try {
// send the article summary to all the connected clients
s.getBasicRemote().sendText("New article up:" + articleEvent.getArticle().getSummary());
} catch (IOException ex) { ... }
}
}
}
}
}
The EJB in the above example would do something like:
...
@Inject
Event<ArticleEvent> newArticleEvent;

public void publishArticle(Article article) {
    ...
    newArticleEvent.fire(new ArticleEvent(article));
    ...
}
See the Java EE 7 Tutorial chapters on WebSockets and CDI Events.
Edit: Modified the @Observes method to use an event as a parameter.
Edit 2: wrapped the loop in broadcastArticle in synchronized, per @gcvt.
Edit 3: Updated links to Java EE 7 Tutorial. Nice job, Oracle. Sheesh.
Actually, WebSocket API provides a way how you can control endpoint instantiation. See https://tyrus.java.net/apidocs/1.2.1/javax/websocket/server/ServerEndpointConfig.Configurator.html
simple sample (taken from Tyrus - WebSocket RI test):
public static class MyServerConfigurator extends ServerEndpointConfig.Configurator {

    public static final MyEndpointAnnotated testEndpoint1 = new MyEndpointAnnotated();
    public static final MyEndpointProgrammatic testEndpoint2 = new MyEndpointProgrammatic();

    @Override
    public <T> T getEndpointInstance(Class<T> endpointClass) throws InstantiationException {
        if (endpointClass.equals(MyEndpointAnnotated.class)) {
            return (T) testEndpoint1;
        } else if (endpointClass.equals(MyEndpointProgrammatic.class)) {
            return (T) testEndpoint2;
        }
        throw new InstantiationException();
    }
}
You need to register this to an endpoint:
@ServerEndpoint(value = "/echoAnnotated", configurator = MyServerConfigurator.class)
public static class MyEndpointAnnotated {

    @OnMessage
    public String onMessage(String message) {
        assertEquals(MyServerConfigurator.testEndpoint1, this);
        return message;
    }
}
or you can use it with programmatic endpoints as well:
public static class MyApplication implements ServerApplicationConfig {

    @Override
    public Set<ServerEndpointConfig> getEndpointConfigs(Set<Class<? extends Endpoint>> endpointClasses) {
        return new HashSet<ServerEndpointConfig>(
                Arrays.asList(ServerEndpointConfig.Builder
                        .create(MyEndpointProgrammatic.class, "/echoProgrammatic")
                        .configurator(new MyServerConfigurator())
                        .build()));
    }

    @Override
    public Set<Class<?>> getAnnotatedEndpointClasses(Set<Class<?>> scanned) {
        return new HashSet<Class<?>>(Arrays.asList(MyEndpointAnnotated.class));
    }
}
Of course it is up to you whether you will have one configurator used for all endpoints (ugly ifs as in the presented snippet) or whether you'll create a separate configurator for each endpoint.
Please do not copy the presented code as it is – this is only part of the Tyrus tests and it violates some basic OOP principles.
See https://github.com/tyrus-project/tyrus/blob/1.2.1/tests/e2e/src/test/java/org/glassfish/tyrus/test/e2e/GetEndpointInstanceTest.java for complete test.

Socket-based Message Factory

I'm looking for some ideas on implementing a basic message factory that reads a header from an input stream and creates the appropriate message type based on the type defined in the message header.
So I have something like (roughly.. and I'm willing to change the design if a better paradigm is presented here)
class MessageHeader {
    public String type;
}

class MessageA extends Message {
    public static final String MESSAGE_TYPE = "MSGA";

    public MessageA(DataInputStream din) {
        var1 = din.readInt();
        var2 = din.readInt();
        // etc
    }
}
and I essentially want to do something like this:
MessageHeader header = ... read in from stream.
if (header.type == MessageA.MESSAGE_TYPE) {
    return new MessageA(din);
} else if (header.type == MessageB.MESSAGE_TYPE) {
    return new MessageB(din);
}
Although this scheme works I feel like there could be a better method using a Map and an Interface somehow...
public interface MessageCreator {
    public Message create(DataInputStream din);
}

Map<String, MessageCreator> factory = new HashMap<String, MessageCreator>();
factory.put(MessageA.MESSAGE_TYPE, new MessageCreator() {
    public Message create(DataInputStream din) {
        return new MessageA(din);
    }
});
...
// Read message header
Message createdMessage = factory.get(header.type).create(din);
But then whenever I want to use the message I have to use instanceof and cast to the correct subclass.
Is there a 3rd (better?) option? Maybe there's a way to accomplish this using templates. Any help is appreciated. Thanks
Edit: I guess it's important to note I want to "dispatch" the message to a function. So essentially I really want to do this:
MessageHeader header = ... read in from stream.
if (header.type == MessageA.MESSAGE_TYPE) {
    handleMessageA(new MessageA(din));
} else if (header.type == MessageB.MESSAGE_TYPE) {
    handleMessageB(new MessageB(din));
}
So a pattern that incorporates the factory and a dispatch would be perfect
How about letting the guy who creates the messages actually dispatch to a handler.
So you'd add a handler interface like this:
public interface MessageHandler {
    void handleTypeA(MessageA message);
    void handleTypeB(MessageB message);
}
Then you'd have a dispatcher which is basically the same thing as your MessageCreator, except it calls the correct method on the handler instead of returning the message object.
public interface MessageDispatcher {
    void createAndDispatch(DataInputStream input, MessageHandler handler);
}
The implementation is then almost identical to the first code snippet you posted:
public void createAndDispatch(DataInputStream input, MessageHandler handler) {
    MessageHeader header = ... read in from stream.
    // compare the type strings with equals(), not ==
    if (MessageA.MESSAGE_TYPE.equals(header.type)) {
        handler.handleTypeA(new MessageA(input));
    } else if (MessageB.MESSAGE_TYPE.equals(header.type)) {
        handler.handleTypeB(new MessageB(input));
    }
}
Now you only have the one spot in the code where you have to do a switch or if/else if and after that everything is specifically typed and there's no more casting.
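A short usage sketch to tie it together (MyMessageDispatcher is a hypothetical class implementing createAndDispatch as above, and socket is assumed to be an already-connected java.net.Socket):
MessageHandler handler = new MessageHandler() {
    @Override
    public void handleTypeA(MessageA message) {
        System.out.println("Got a type A message");
    }

    @Override
    public void handleTypeB(MessageB message) {
        System.out.println("Got a type B message");
    }
};

MessageDispatcher dispatcher = new MyMessageDispatcher();
DataInputStream din = new DataInputStream(socket.getInputStream());
while (true) {
    dispatcher.createAndDispatch(din, handler);
}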
