Let's say I have a Load Balancer (LB) in front of 1..n VertX (V) instances, each VertX instance is connected to a queue (Q), and I have 1..m Backends (BE).
A user clicks a button, which makes a POST request or even opens a WebSocket. The load balancer forwards the request to one of the VertX instances, which fires a request to the queue, and one of the Backends consumes the message and sends a response back. If the correct VertX instance consumes the response, it can look up the response handler and write a response to the user; if the wrong VertX instance consumes it, there is no response handler to write to, and the user will wait indefinitely for a response.
See this sketch:
Alternatively, V2 dies and the load balancer reconnects the user to V1. So even if I could send the response back to the exact instance that made the request, that instance is not guaranteed to still be there once the response comes back, while the user might still be waiting for a response via another VertX instance.
What I'm currently doing is generating a GUID for each new connection. As soon as the websocket connects, I store the websocket handler in a hashmap against the GUID. When the BE wants to respond, it does a fanout to all 1..n VertX instances; the one that currently has the GUID in its hashmap can then write a response to the user.
I handle POST / GET in the same manner.
Pseudocode:
queue.handler { q ->
    q.handler {
        val handler = someMap.get(q.guid)
        // only respond if a handler exists on this instance
        if (handler != null) {
            handler.writeResponse(someResponseMessageHere)
        }
    }
}

vertx.createHttpServer().websocketHandler { ws ->
    val guid = generateGUID()
    someMap.put(guid, ws)
    ws.writeFinalTextFrame("guid=${guid}")
    ws.handler {
        val guid = extractGuid(it)
        // send request to BE, including the generated GUID
        sendMessageToBE(guid, "blahblah")
    }
}.requestHandler { router.accept(it) }.listen(port)
This does however mean that if I have 1000 VertX applications running, the backend needs to fan out its message to all 1000 frontend instances, of which only one will make use of it.
VertX already seems to take care of async operations very well. Is there a way in VertX to identify each websocket connection, instead of having to maintain a map of GUIDs mapped to websocket handlers / post handlers?
Also, referring to the picture, is there a way for V3 to consume the message, but still be able to write a response back to the websocket handler that's currently connected to V2?
What you're missing from your diagram is the Vertx EventBus.
Basically you can assume that your V1...Vn are interconnected:
V1<->V2<->...<->Vn
Let's assume that Va receives your outbound Q message (the red line) that is intended for Vb.
It should then send it to Vb using the EventBus:
eventBus.send("Vb UUID", "Message for Vb, also containing the WebSocket UUID", ar -> {
    if (ar.succeeded()) {
        // All went well
    } else {
        // Vb died or has other problems. Your choice how to handle this
    }
});
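If it helps to see the addressing idea outside Vert.x: the point-to-point send replaces the fanout, because only the instance that owns the WebSocket's UUID receives the message. Here is a minimal plain-Java model of that idea (the class and its names are illustrative stand-ins for the EventBus, not Vert.x API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class AddressedBus {
    // address -> handler, like EventBus consumers keyed by instance UUID
    private final Map<String, Consumer<String>> consumers = new ConcurrentHashMap<>();

    // each instance registers itself under its own address
    public void consumer(String address, Consumer<String> handler) {
        consumers.put(address, handler);
    }

    // deliver to exactly one registered consumer instead of fanning out to all
    public boolean send(String address, String message) {
        Consumer<String> handler = consumers.get(address);
        if (handler == null) {
            return false; // the target instance died; the caller decides how to handle this
        }
        handler.accept(message);
        return true;
    }
}
```

The failed-send case here corresponds to the `ar.failed()` branch above: the sender learns the target is gone and can, for example, retry or drop the message.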
I have a service where a couple requests can be long running actions. Occasionally we have timeouts for these requests, and that causes bad state because steps of the flux stop executing after the cancel is called when the client disconnects. Ideally we want this action to continue processing to completion.
I've seen the question "WebFlux - ignore 'cancel' signal" recommend using the cache method... Are there any better solutions, and/or drawbacks to using cache to achieve this?
There are some solutions for that.
One could be to make it asynchronous: when you get the request from the customer, you can put it into a processor
Sinks.Many<QueueTask<T>> queue = Sinks.many().multicast().onBackpressureBuffer()
When the request comes in from the customer, you just push it to the queue, and the queue processes the items in the background.
But in this case the customer will not get any response with the progress of the item, unless you send it over a socket or they make another request after some time.
Another option is to use a chunked HTTP response:
@GetMapping(value = "/sms-stream/{s}", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
Flux<String> streamResponse(@PathVariable("s") String s) {
    return service.streamResponse(s);
}
In this case the connection stays open, and you can close it on the server automatically when processing is done.
I have a Java process which is listening for messages from a queue hosted by ActiveMQ and calls webservices if received status is COMPLETE.
I was also thinking of handling the webservice calls using ActiveMQ as well. Is there a way I could make better use of ActiveMQ, maybe with another queue?
This might help me handle the scenario where one or more webservice calls fail on the first attempt. But I'm trying to think what I could do to achieve something like this.
Do I need to forward the webservice parameters to an ActiveMQ queue and then listen for something?
@Autowired
private JavaMailSender javaMailSender;

// Working code with JMS 2.0
@JmsListener(destination = "MessageProducer")
public void processBrokerQueues(String message) throws DaoException {
    ...
    if (receivedStatus.equals("COMPLETE")) {
        // Do something inside COMPLETE
        // Need to call 8-10 webservices (only 2 shown below for brevity)
        ProcessBroker obj = new ProcessBroker();
        ProcessBroker obj1 = new ProcessBroker();
        // Calling webservice 1
        try {
            System.out.println("Testing 1 - Send HTTP POST request #1");
            obj.sendHTTPPOST1();
        } finally {
            obj.close();
        }
        // Calling webservice 2
        try {
            System.out.println("Testing 2 - Send HTTP POST request #2");
            obj1.sendHTTPPOST2();
        } finally {
            obj1.close();
        }
    } else {
        // Do something
    }
}
What I'm looking for:
Basically, if it were possible to create an additional message queue, I could submit 8 messages to it, one per webservice endpoint, and have a queue listener dedicated to that queue. If a message could not be delivered successfully, it could be put back on the queue for later delivery.
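The re-queue-on-failure idea described above can be sketched in plain Java. This is only an in-memory stand-in for an ActiveMQ destination (a real setup would use a second JMS queue and the broker's redelivery policy); the class and names are hypothetical:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Predicate;

public class RetryQueue {
    private final Queue<String> queue = new ArrayDeque<>();
    private final List<String> delivered = new ArrayList<>();

    // one message per webservice endpoint
    public void submit(String message) {
        queue.add(message);
    }

    // drain the queue once; messages whose call fails go back for a later attempt
    public void drain(Predicate<String> webServiceCall) {
        int pending = queue.size();
        for (int i = 0; i < pending; i++) {
            String msg = queue.poll();
            if (webServiceCall.test(msg)) {
                delivered.add(msg);
            } else {
                queue.add(msg); // put back on the queue for later delivery
            }
        }
    }

    public List<String> delivered() { return delivered; }

    public int pending() { return queue.size(); }
}
```

With a real broker, the "put back" step would instead be the listener throwing (or not acknowledging), letting ActiveMQ redeliver according to its policy.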
I have a problem, and I don't know exactly what to search for.
I have a Spring Boot app which broadcasts messages via WebSocket with a STOMP JavaScript client. The question is whether I can put a lock on a message when it is sent, because I want no one to send another message at the same time. The system that I want to make is like a traffic light.
Could you give me an example, or tell me what to look for?
You should use the synchronized keyword and wait for the client's response. The synchronized keyword ensures that only one thread can execute the method at a time. And you need the client response because you can send two messages sequentially, say two seconds apart, but your client may still receive them at the same time. The response can be some dummy ok-message.
public class Traffic {
    synchronized void send() {
        // write message to websocket
        // read response from websocket
    }
}
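A small runnable check of the claim above: with synchronized, concurrent callers are serialized, so the two steps of a send can never interleave with another send. The list here is an illustrative stand-in for the WebSocket write/read pair:

```java
import java.util.ArrayList;
import java.util.List;

public class SerializedSender {
    private final List<String> log = new ArrayList<>();

    // synchronized: the write and the wait-for-ok happen as one unit
    synchronized void send(String msg) {
        log.add("begin:" + msg);
        // simulate writing to the websocket and waiting for the client's ok-response
        Thread.yield();
        log.add("end:" + msg);
    }

    synchronized List<String> log() {
        return log;
    }
}
```

If you drop the synchronized keyword, two threads can interleave their begin/end entries, which is exactly the "two messages at the same time" situation the question wants to prevent.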
I have two services hosted under the same context file in Spring + Apache CXF (services A and B). There is a third-party service C that I have to call from service A, and that will send the response to service B (by means of addressing). I have managed to perform the communication between services A -> C -> B. Everything OK there. The problem is I would like to perform some logic in service A according to the response sent to service B. That means I would like to do something like this in service A:
ServiceC_Client clientC = ....
....
clientC.callOperation();
// somehow wait for a signal from service B, or until a timeout has been reached.
// The response will be correlated to this particular thread by means of the
// WS-Addressing MessageId field.
// Read the relevant response data sent to B (B stores the relevant data in a database)
....
// continue operation of method of A
In service B, I would have something like this
public void callBackResponse(ResponseData response) {
    // Perform operations with the response and store relevant data in the database.
    // Service A will know the data belongs to a particular run of A thanks to the
    // WS-Addressing MessageID and RelatesTo fields.
    // Notify service A that a response was received
}
Is this possible? Can I achieve this in Java? Maybe Java Message Queues? I don't really know if it is possible.
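The wait-with-timeout that service A needs can indeed be done in plain Java, for example with a map of CompletableFutures keyed by the WS-Addressing MessageId. The sketch below only illustrates the correlation pattern (all names are hypothetical): A registers a future before calling C, and B's callback completes it.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

public class ResponseCorrelator {
    // MessageId -> future that service A is blocked on
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // service A: register interest before calling service C
    public CompletableFuture<String> expect(String messageId) {
        CompletableFuture<String> future = new CompletableFuture<>();
        pending.put(messageId, future);
        return future;
    }

    // service B's callback: correlate via the RelatesTo MessageId and wake A up
    public void complete(String messageId, String responseData) {
        CompletableFuture<String> future = pending.remove(messageId);
        if (future != null) {
            future.complete(responseData);
        }
    }
}
```

Service A would then block with a timeout, e.g. `correlator.expect(msgId)` followed by `future.get(30, TimeUnit.SECONDS)`, and handle the TimeoutException if B never signals. Note this only works while A and B live in the same JVM; across processes you would need a shared channel such as a message queue.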
I'm using Atmosphere in my Spring MVC app to facilitate push, using a streaming transport.
Throughout the lifecycle of my app, the client will subscribe and unsubscribe for many different topics.
Atmosphere seems to use a single HTTP connection per subscription, i.e., every call to $.atmosphere.subscribe(request) creates a new connection. This quickly exhausts the number of connections allowed from the browser to the Atmosphere server.
Instead of creating a new resource each time, I'd like to be able to add and remove the AtmosphereResource to broadcasters after its initial creation.
However, as the AtmosphereResource is a one-to-one representation of the inbound request, each time the client sends a request to the server, it arrives on a new AtmosphereResource, meaning I have no way to reference the original resource and append it to the topic's Broadcaster.
I've tried using both $.atmosphere.subscribe(request) and calling atmosphereResource.push(request) on the resource returned from the original subscribe() call. However, this made no difference.
What is the correct way to approach this?
Here's how I got it working:
First, when the client does their initial connect, ensure that the atmosphere-specific headers are accepted by the browser before calling suspend():
@RequestMapping("/subscribe")
public ResponseEntity<HttpStatus> connect(AtmosphereResource resource)
{
    resource.getResponse().setHeader("Access-Control-Expose-Headers", ATMOSPHERE_TRACKING_ID + "," + X_CACHE_DATE);
    resource.suspend();
    return getOkResponse();
}
Then, when the client sends additional subscribe requests, although they come in on a different resource, they contain the ATMOSPHERE_TRACKING_ID of the original resource. This allows you to look it up via the resourceFactory:
@RequestMapping(value="/subscribe", method=RequestMethod.POST)
public ResponseEntity<HttpStatus> addSubscription(AtmosphereResource resource, @RequestParam("topic") String topic)
{
    String atmosphereId = resource.getResponse().getHeader(ATMOSPHERE_TRACKING_ID);
    if (atmosphereId == null || atmosphereId.isEmpty())
    {
        log.error("Cannot add subscription, as the atmosphere tracking ID was not found");
        return new ResponseEntity<HttpStatus>(HttpStatus.BAD_REQUEST);
    }

    AtmosphereResource originalResource = resourceFactory.find(atmosphereId);
    if (originalResource == null)
    {
        log.error("The provided Atmosphere tracking ID is not associated to a known resource");
        return new ResponseEntity<HttpStatus>(HttpStatus.BAD_REQUEST);
    }

    Broadcaster broadcaster = broadcasterFactory.lookup(topic, true);
    broadcaster.addAtmosphereResource(originalResource);
    log.info("Added subscription to {} for atmosphere resource {}", topic, atmosphereId);

    return getOkResponse();
}