I use Angular with Spring for internal API communication. Now I need to call an external API to get some data. That external API provides a callback feature, so I trigger the call from Angular, which calls a Spring REST method that, in turn, calls the external API.
The external API then delivers its data to my callback method (also a Spring REST endpoint), but I don't know how to get that data back to Angular.
Is a WebSocket the only option?
If internal API calls are longer than the allowable client timeout, then you will need to find a WebSocket or WebSocket-like alternative. The alternative to WebSockets is to use long-polling (good example with source code here), which is essentially having the client repeatedly make requests until the original task is completed and the data can be sent. Either way, you'll need to use a pub/sub mechanism of some sort to handle multiple users, which is where things can get complicated.
Pub/sub can be complicated and I don't have an example on-hand, but essentially you must (1) have the client subscribe to a channel using a unique identifier (you can do this with CometD via a service channel), (2) evaluate the response, (3) publish the response to the client's subscribed channel, and finally (4) close the channel when it's no longer in use.
I've had some luck with CometD as a library to simplify pub/sub channel management, and it provides a good abstraction for asynchronous communication, although I haven't tried it with Spring and it might be heavy for what you want to do.
Another option that I'm less familiar with is to use Spring with STOMP. This seems to come recommended by others. Spring does provide ways to send messages to single users using STOMP: Sending message to specific user on Spring Websocket
I'd recommend following the above STOMP example.
Additional STOMP resources:
Getting Started with WebSockets (Spring)
Enable STOMP Over WebSocket
Angular2 with Stomp.js
As a side note, throttling may be necessary here too, as it is any time clients can spawn long-running threads on the server.
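For reference, a rough sketch of the STOMP route with Spring: the REST call that triggers the external API registers a correlation token for the current user, and the callback endpoint pushes the result to that user's queue. The endpoint paths, the token map, and the /queue/external-results destination are assumptions for illustration, and they presume a STOMP broker is already configured with user destinations enabled.

import java.security.Principal;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ExternalApiController {

    private final SimpMessagingTemplate messagingTemplate;
    // maps the token handed to the external API back to the STOMP user name
    private final Map<String, String> tokenToUser = new ConcurrentHashMap<>();

    public ExternalApiController(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @PostMapping("/external/trigger")
    public void trigger(Principal principal) {
        String token = UUID.randomUUID().toString();
        tokenToUser.put(token, principal.getName());
        // call the external API here, passing the token so it comes back in the callback
    }

    @PostMapping("/external/callback/{token}")
    public void callback(@PathVariable String token, @RequestBody String payload) {
        String user = tokenToUser.remove(token);
        if (user != null) {
            // delivered to whatever that user has subscribed on /user/queue/external-results
            messagingTemplate.convertAndSendToUser(user, "/queue/external-results", payload);
        }
    }
}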
If you choose not to use STOMP or CometD, then a lighter solution could be to roll your own pub/sub for this single case using Spring's DeferredResult (as in Roger Hughes' example), tying the subscription request to the long-poll request via a UUID token, which might also be the session ID if you choose to disallow concurrent requests per user. On subscription, the system can associate the request with a UUID and return this UUID to the client. The client can then make long-poll requests (again, as in Roger Hughes' example) but with the UUID attached. The server can wait until the request for the given UUID has completed and then return the result to the client via the client's active long-poll request.
Channel management (i.e., request/UUID-tracking) can be done by clearing the channel on DeferredResult result retrieval and removing orphan channels with a separate thread acting as a GC -- or, perhaps better yet, automatically clearing orphan channels by removing them on DeferredResult completion/timeout if no active listeners exist. If you opt for the latter option, you will want to make sure that the client won't have any delay between its long-poll requests so that the DeferredResult doesn't unintentionally complete with no listeners.
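A bare-bones sketch of that DeferredResult variant, with placeholder endpoint names and Object standing in for the real payload type:

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class LongPollController {

    // the "channels": one DeferredResult per outstanding token
    private final Map<String, DeferredResult<Object>> channels = new ConcurrentHashMap<>();

    // (1) subscribe: hand out the token that ties everything together
    @PostMapping("/subscriptions")
    public String subscribe() {
        return UUID.randomUUID().toString();
    }

    // (2) long-poll: park the request until a result arrives or the poll times out
    @GetMapping("/poll/{token}")
    public DeferredResult<Object> poll(@PathVariable String token) {
        DeferredResult<Object> result = new DeferredResult<>(30_000L);
        channels.put(token, result);
        // clear the channel when the poll completes or times out so it can't leak
        result.onCompletion(() -> channels.remove(token, result));
        result.onTimeout(() -> channels.remove(token, result));
        return result;
    }

    // (3) publish: called when the external callback for this token comes in
    @PostMapping("/callback/{token}")
    public void publish(@PathVariable String token, @RequestBody Object payload) {
        DeferredResult<Object> waiting = channels.remove(token);
        if (waiting != null) {
            waiting.setResult(payload);
        }
    }
}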
We are in the process of designing the migration of our monolithic Java application to microservices to meet various client requirements such as scalability, high availability, etc. The core function of our application is data processing, i.e. retrieve data from a source, put it through 0 or more transformations, and finally push the result to a destination. For this reason, we are looking at Spring Cloud Data Flow running on Kubernetes and Kafka to do the heavy lifting for us, with a few custom built stream applications to handle our business logic.
One thing we have not yet figured out though is how it could handle synchronous responses to requests being sent in via an HTTP source - specifically when some processing is required before responding. For example, let us say that a request is received containing two different amounts in a JSON packet. We then pass this on to a custom "addition" transformer that outputs the sum of these amounts, and needs to return the result back to the calling party. However, because the transformer is a completely separate process that received the data by consuming it from a Kafka topic, it has no access to the original HTTP connection to respond.
Is this something that is possible with Spring Cloud Data Flow, perhaps by combining it with something like Spring Cloud Gateway to manage the HTTP connection? Or are we barking up the wrong tree completely?
It is not easy to combine an async flow (Spring Cloud Data Flow) with a synchronous HTTP flow (HTTP requests have a timeout, and the async flow's processing time is not known upfront). What you can do is return an id as the response from your HTTP source and provide another HTTP endpoint that checks the status of the initial request and returns the result once it is available. This is a polling approach.
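As a minimal sketch of that polling contract, here is a plain Spring controller standing in for whatever sits in front of the stream; the paths, the in-memory result map, and how the result gets back out of the stream are all assumptions.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class AdditionRequestController {

    // filled in by a sink at the end of the stream (or a listener on the output topic)
    private final Map<String, Object> results = new ConcurrentHashMap<>();

    @PostMapping("/additions")
    public ResponseEntity<String> submit(@RequestBody Map<String, Object> amounts) {
        String id = UUID.randomUUID().toString();
        // forward the payload (tagged with the id) into the stream, e.g. via the http source
        return ResponseEntity.accepted().body(id);
    }

    @GetMapping("/additions/{id}")
    public ResponseEntity<Object> status(@PathVariable String id) {
        Object result = results.get(id);
        return result == null
                ? ResponseEntity.status(HttpStatus.ACCEPTED).body("PENDING")
                : ResponseEntity.ok(result);
    }
}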
What is the best architecture to handle multiple HTTP requests, each sending a list of more than 1000 ids, where each request kicks off a heavy, time-consuming process that communicates with other systems (over SOAP or REST) and saves data into an RDBMS?
Should I send the ids in an HTTP POST request?
Should I use Spring's async support/futures (with no return value) in a REST resource/controller to handle the time-consuming process and return HTTP 202 Accepted (and maybe an ID to query the status of the process or the resource)?
Or would it be better if my REST resources/controllers only put messages on a JMS queue (maybe a persistent queue per REST method) and message-driven beans (one per method) consume the messages and process the requests?
Is it a good idea for my REST resources/controllers, or a class acting as a proxy, to save my system's requests and the other systems' responses in a database for logging and troubleshooting? Or should I use the retry feature in JMS, for example?
I agree with dunni; however, your general requirements sound like JMS would be a good place to start. That would allow clients to drop off requests and allow back end components to work on the requests as they are able. Of course, you need enough back end power to get all the requests completed in a reasonable time frame.
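For illustration, a rough sketch of that drop-off pattern using Spring JMS; the same idea works with plain JMS and message-driven beans. The queue name and class names are invented here.

import java.util.List;
import org.springframework.http.ResponseEntity;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
class IdBatchController {

    private final JmsTemplate jmsTemplate;

    IdBatchController(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    @PostMapping("/id-batches")
    ResponseEntity<Void> submit(@RequestBody List<Long> ids) {
        // drop the request on the queue and return immediately with 202 Accepted
        jmsTemplate.convertAndSend("id-processing", ids);
        return ResponseEntity.accepted().build();
    }
}

@Component
class IdBatchConsumer {

    @JmsListener(destination = "id-processing")
    void process(List<Long> ids) {
        // call the SOAP/REST systems and save the results to the RDBMS here,
        // at whatever rate the back end can sustain
    }
}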
I am working on a project which has many heterogeneous legacy systems. We are planning to connect them using JMS/MOM/an ESB, but we need synchronous web service calls from the client.
I.e. a request/response architecture over web services is a requirement.
The client will make calls and wait for the response.
My question is: how can we implement a request/response system which internally works over JMS/MOM to connect disparate systems?
Second question: do any existing JMS/MOM or ESB implementations support such a synchronous architecture?
First, about the second question: all of them support such an architecture.
Second, about the first :-)
Just a brief idea:
When you send a message to the target JMS queue, you attach a unique message ID (in the headers, for example), and the target system must send its answer (response) to the "replyTo" queue. You then have a listener on that replyTo queue that filters on that unique ID, and your flow waits for the response from that listener.
Something like that...
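A minimal sketch of that request/reply pattern using the JMS 2.0 API, assuming a ConnectionFactory is available; the queue names are made up for illustration. The caller blocks on the reply queue, filtered by correlation ID, until the target system answers or the timeout expires.

import java.util.UUID;
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.TextMessage;

public class SyncOverJmsClient {

    private final ConnectionFactory connectionFactory;

    public SyncOverJmsClient(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // sends the request and blocks until the correlated reply arrives (null on timeout)
    public Message call(String payload, long timeoutMillis) throws JMSException {
        try (JMSContext ctx = connectionFactory.createContext()) {
            Queue requestQueue = ctx.createQueue("legacy.requests");
            Queue replyQueue = ctx.createQueue("legacy.replies");
            String correlationId = UUID.randomUUID().toString();

            TextMessage request = ctx.createTextMessage(payload);
            request.setJMSCorrelationID(correlationId);
            request.setJMSReplyTo(replyQueue);
            ctx.createProducer().send(requestQueue, request);

            // listener on the replyTo queue, filtered by the unique ID
            JMSConsumer consumer = ctx.createConsumer(replyQueue,
                    "JMSCorrelationID = '" + correlationId + "'");
            return consumer.receive(timeoutMillis); // closed together with the context
        }
    }
}

On the target system's side, the responder copies the correlation ID onto its reply and sends it to the message's JMSReplyTo destination.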
I'd like to synchronize the state to all the clients interested in particular entity changes. So I'd like to achieve something like:
exposing CRUD API on entity (via HTTP/REST and websockets)
and routing the response (of the modifying calls) to websockets topic
So technically, I'd be interested in ideas on mixing spring-data-rest with the Spring WebSocket implementation to achieve something like spring-data-websocket.
There are two solutions that come to mind, and in fact both would combine:
spring-data-rest to expose my entities via REST/HTTP API
websocket controllers (used for the modification calls on entities)
The websocket controllers would look like this:
import org.springframework.messaging.handler.annotation.MessageMapping;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Controller;

@Controller
public class EntityAWebSocketController {

    @MessageMapping("/EntityA/update")
    @SendTo("/topic/EntityA/update")
    public EntityA update(EntityA entityA) throws Exception {
        // persist the change, then the returned entity is broadcast to the topic
        return entityA;
    }
}
Scenario 1: Websocket API called from REST/HTTP API
Rules:
client request is always REST/HTTP API
response is REST/HTTP API for all the operations
moreover for modifying operations the websocket message comes as well
Technically, this could be achieved by:
calling the websocket controllers from the spring-data-rest events (namely the AfterCreateEvent, AfterSaveEvent, AfterLinkSaveEvent, AfterDeleteEvent), as sketched below
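For illustration, a rough sketch of such an event hook; instead of going through the WebSocket controller it publishes straight to the topic via SimpMessagingTemplate, assuming Spring's STOMP support is configured. The topic names mirror the controller above; the handler class itself is made up.

import org.springframework.data.rest.core.annotation.HandleAfterCreate;
import org.springframework.data.rest.core.annotation.HandleAfterDelete;
import org.springframework.data.rest.core.annotation.HandleAfterSave;
import org.springframework.data.rest.core.annotation.RepositoryEventHandler;
import org.springframework.messaging.simp.SimpMessagingTemplate;
import org.springframework.stereotype.Component;

@Component
@RepositoryEventHandler
public class EntityAEventHandler {

    private final SimpMessagingTemplate messagingTemplate;

    public EntityAEventHandler(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    @HandleAfterCreate
    public void afterCreate(EntityA entityA) {
        // broadcast the new entity to every client subscribed to the topic
        messagingTemplate.convertAndSend("/topic/EntityA/update", entityA);
    }

    @HandleAfterSave
    public void afterSave(EntityA entityA) {
        messagingTemplate.convertAndSend("/topic/EntityA/update", entityA);
    }

    @HandleAfterDelete
    public void afterDelete(EntityA entityA) {
        messagingTemplate.convertAndSend("/topic/EntityA/delete", entityA);
    }
}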
Still the solution seems quite sick to me, as I'd need to go for:
client A --HTTP request--> Server (spring-data-rest controller)
Server (AfterXXXEvent in the spring-data-rest controller) --websocket message--> Spring websocket controller
Spring websocket controller --websocket message via topic--> all Clients interested in the topic
Server (spring-data-rest controller) --HTTP response--> client A
Scenario 2: Websocket API independent from REST API
Rules:
client request is REST/HTTP API for non-modifying operations only
response is REST/HTTP API for non-modifying operations only
client sends websocket message for all the modifying operations
websocket message is sent to client for all the modifying operations only
Well, if no other ideas come up, I'd go for the latter one, but still, it would be great if I could somehow have generated C(R)UD methods exposed via WebSockets as well, something like spring-data-websockets, and handle only the routes in my implementation.
As it stands, I feel like I'd have to manually expose (via *WebSocketController classes) all the CUD methods for all my entities, and I might be too lazy for that.
Ideas?
Scenario 2 talks about, in the last step, a single client. But I thought your requirement was for a topic, since you wanted multiple clients.
If I wanted to complete scenario 2 for your stated requirement, you might maintain a list of clients and implement your own queue, or use a ForkJoinPool, to message all the clients listening on your WebSockets. Having said that, a topic is definitely more elegant here, but overall this looks too complicated with the different interfaces.
For all messages from client to server, just go with a simple wire protocol and use a collection to parameterize it; it could be something like
RParam1.......
At the server, you need a controller to map these to the different requests (and operations). It doesn't look like too much work.
Hope this helps.
The same architecture has been on my mind for a while now, and it would be a long story to go through all its drawbacks and advantages, so let me jump into the implementation.
The second scenario is valid, but as you mentioned, it's better to perform the CRUD actions over the same WebSocket session. This removes the need for an HTTP handshake on every request and reduces the message body size, so you get better latency. Besides, you already have a persistent connection to the server, so why not make good use of it?
I searched around for a while and, six years after your question, I still couldn't find any WebSocket protocol that makes this happen, so I decided to work on it myself because I needed it for another dummy project.
Another advantage of such a protocol could be that it doesn't require many changes to your already-written controllers. It should be able to support Spring Framework annotations (for example) and turn them into WebSocket endpoints.
The hard part about implementing such a protocol in a framework like Spring is that, since it's not nice to fabricate ServletRequest and ServletResponse objects and convert them into your own WebSocket protocol, you lose some advantages. For example, any HTTP filter you have written in your application up to that point becomes meaningless, because it's not really easy to pass your WebSocket messages through those filters.
About the protocol itself: I decided everything would be passed as JSON, along with a unique id for each request, so callbacks on the client side can be mapped to the request id. And of course there is a filter chain you can add your own filters to.
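Roughly, such an envelope could look like this on the Java side; the field names here are made up for illustration and are not the library's actual wire format.

import java.util.Map;

public class WsRequestEnvelope {

    private String requestId;            // unique id used to match the response to the client-side callback
    private String method;               // logical operation, e.g. "GET", "POST", "DELETE"
    private String path;                 // the endpoint the message targets, e.g. "/entities/42"
    private Map<String, String> headers; // metadata such as an auth token
    private Object body;                 // the JSON payload of the request

    // getters and setters omitted
}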
Another hard thing to deal with here is Spring Security, as that too works through HTTP filters in some cases. In my own lib I could finally handle annotations like @PreAuthorize, but if you are using antMatchers in your HTTP security config, that would be a problem.
Therefore, creating a WebSocket adapter to call HTTP controllers will have many drawbacks.
You can check out the project here: Rest Over Websocket. It's written for Spring Boot.
I'm looking for a way of implementing a simple API to receive log messages (for various events across various systems).
We've concluded that an HTTP GET request is the most open option (lowest barrier to entry) for the different code bases and systems that would need to post to this server.
The server itself needs to provide an HTTP GET API where I would send a message, e.g. logging.internal/?system=email&message=An email failed
However, we'd like this to be non-blocking, so any application can throw information at this server and never have to wait (not slowing down any production systems).
Does anyone know of any framework to implement this in Java, or an appropriate approach?
In Java, for the server, you can use any implementation of JAX-RS for the RESTful part, and when processing the message, just call an asynchronous EJB method (http://docs.oracle.com/javaee/6/tutorial/doc/gkkqg.html), which will do the longer processing.
This will allow the RESTful request to return as fast as possible.
In this case, the only blocking part will be the http request/response.
If you want to make it even less blocking, issue the RESTful request from the client in an async method as well (or from a message-driven bean if you are on Java EE 5).
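A minimal sketch of that setup on Java EE; the /log path, the query parameters, and the LogWriter bean are invented for illustration, and the two classes would live in separate files.

import javax.ejb.Asynchronous;
import javax.ejb.EJB;
import javax.ejb.Stateless;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Stateless
@Path("/log")
public class LogResource {

    @EJB
    private LogWriter logWriter;

    @GET
    public Response log(@QueryParam("system") String system,
                        @QueryParam("message") String message) {
        // fire-and-forget: the container runs handle(...) on its own thread
        logWriter.handle(system, message);
        return Response.status(Response.Status.ACCEPTED).build();
    }
}

// LogWriter.java (separate file): the bean that does the longer processing
@Stateless
public class LogWriter {

    @Asynchronous
    public void handle(String system, String message) {
        // persist the log entry, notify other systems, etc.
    }
}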