In my current situation, the frontend client is making an API call to a backend endpoint (Java) at a 15-second interval to see if a resource exists. The resource will be created through some business logic. Once the resource exists, the client will get the data from the API and process it.
However, calling an API every 15 seconds seems costly performance-wise and not scalable. I was wondering what the best practice is for this: the client waiting for a resource to exist before executing some logic.
Is there a way / best practice to send/push data from the server to the client rather than the other way around, while keeping it unidirectional (server -> client)?
Thank you in advance.
To solve this properly you will need to implement a WebSocket.
The request from the client will be a GET and the server will approve it with a 200 status code to confirm.
Then, when the server has finished processing your request, it will broadcast the data via the WebSocket directly to your web application.
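A minimal sketch of such a server-side endpoint using the Java WebSocket API (JSR 356); the endpoint path and the notifyResourceReady method are made-up names for illustration, to be called by whatever business logic creates the resource:

```java
import java.io.IOException;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

// Clients connect here once instead of polling every 15 seconds.
@ServerEndpoint("/notifications")
public class ResourceNotificationEndpoint {

    private static final Set<Session> SESSIONS = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) {
        SESSIONS.add(session);
    }

    @OnClose
    public void onClose(Session session) {
        SESSIONS.remove(session);
    }

    // Called by the business logic once the resource has been created.
    public static void notifyResourceReady(String resourceJson) {
        for (Session session : SESSIONS) {
            if (session.isOpen()) {
                try {
                    session.getBasicRemote().sendText(resourceJson);
                } catch (IOException e) {
                    SESSIONS.remove(session);
                }
            }
        }
    }
}
```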
Is there a way / best practice to send/push data from the server to the client rather than the other way around, while keeping it unidirectional (server -> client)?
What you've just described here is known as the observer pattern. The whole idea of it is to have a list of observers attached to observables and push notifications each time the state of the observable changes.
You could implement this pattern in your Java back-end by exposing a subscription endpoint in which you'd specify what you want to observe, along with what URI to call back in case there's a state change, or some other mechanism for pushing server notifications. However, you might have to solve another problem which is having your "client" act as a server, permanently or temporarily, for these notifications, if you want to avoid periodic API queries.
Obviously, you want to have an 'unsubscribe' endpoint to free resources. You might also have to consider what to do if the client unexpectedly loses its connection or stops engaging for some other reason (a time-to-live for the subscription sounds like a good idea here).
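For illustration, a minimal in-memory sketch of such a subscription registry. The class and method names are assumptions, the subscribe/unsubscribe HTTP endpoints themselves (JAX-RS, Spring MVC, ...) are omitted, and Java 11's HttpClient is used here just as one way to deliver the callback:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SubscriptionRegistry {

    // subscriptionId -> callback URI to notify on state changes
    private final Map<String, URI> subscribers = new ConcurrentHashMap<>();
    private final HttpClient http = HttpClient.newHttpClient();

    public void subscribe(String subscriptionId, URI callbackUri) {
        subscribers.put(subscriptionId, callbackUri);
    }

    public void unsubscribe(String subscriptionId) {
        subscribers.remove(subscriptionId);
    }

    // Invoked by the observable (the business logic) whenever its state changes.
    public void notifySubscribers(String eventJson) {
        for (Map.Entry<String, URI> entry : subscribers.entrySet()) {
            HttpRequest request = HttpRequest.newBuilder(entry.getValue())
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                    .build();
            // Fire and forget; a real implementation would retry or drop dead subscribers.
            http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```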
Is making a REST-based web service (POST) asynchronous the best way to handle thousands of requests at one time (keeping in mind that I have only a single instance of the server serving the requests)?
For example: I have a REST-based web service, which is supposed to be consumed by 100 thousand clients within a very short span of time (~60 seconds). I understand that if I were allowed to deploy multiple instances of the server, I could use a load balancer to handle all the incoming requests and delegate them accordingly. But I am restricted to a single instance. What design could I opt for within this restriction?
I could think of making the request asynchronous (which will not respond to the client immediately) in order to free the server from this load and let it handle the requests at its own pace.
For now we can ignore memory limitations.
Please let me know if this clarifies the question.
The term asynchronous can have different meanings in different contexts. In web application code, it could refer to a non-blocking I/O server such as Node or Netty/Akka, which lets HTTP requests time-multiplex on the same worker threads. If you're writing callbacks or using async or future constructs, it is probably non-blocking I/O, which people sometimes refer to as asynchronous.
However, I could have a REST API running on Node that uses non-blocking I/O while the API, or the overall architecture, is still fully synchronous. For example, say I have an endpoint POST /photos which takes in a photo, creates image thumbnails, stores the URLs of the photo in a SQL DB, and then stores the images in S3. The REST API could still block from the initial POST until after the image is processed and stored.
A second way is for the server to accept the photo-processing job and return immediately. The server could then put the photo on an in-memory or network-based queue to be processed later by some other worker thread. In fact, I could implement this async architecture even with a blocking server, like good old Java 7 and Jetty.
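A rough sketch of that second approach in plain Java; PhotoJob, the pool size, and processPhoto() are placeholders for whatever thumbnailing, SQL, and S3 work the real worker would do:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class PhotoJobQueue {

    private final BlockingQueue<PhotoJob> queue = new LinkedBlockingQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public PhotoJobQueue() {
        // A few worker threads drain the queue independently of request threads.
        for (int i = 0; i < 4; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        processPhoto(queue.take());
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }
    }

    // Called from the request thread: enqueue and return immediately,
    // so the endpoint can answer right away without waiting for processing.
    public void enqueue(PhotoJob job) {
        queue.add(job);
    }

    private void processPhoto(PhotoJob job) {
        // create thumbnails, store metadata in SQL, upload to S3, ...
    }

    public static class PhotoJob { /* photo bytes + metadata */ }
}
```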
This question might sound a bit abstract, already answered (though my search didn't stumble on a convenient answer), or not specific at all, but I will try to provide as much information as I can.
I am building a mobile application which will gather and send sensory data to a remote server. The remote server will collect all these data in a MySQL database, and a separate process/program (not the MySQL database itself) will make computations. What I want to know is:
After some updates in the database, is it doable to send a response from a RESTful server to a certain client (probably the one who made the last update), using something like "a background thread"? Or should this be done via a socket connection through a server-client response?
Some remarks:
I am using Java EE, Spring MVC with Hibernate, and Tomcat (because I am familiar with the environment, though here in a more asynchronous manner).
I thought this would be a convenient way because the SQL schema is not very complicated and security and authentication are not needed (it's a prototype).
Also there is a front-end webpage that will have to visualize these data, so such a back-end system would look like a good option for getting the job done fast.
Lastly, I saw this solution:
Is there a way to 'listen' for a database event and update a page in real time?
My issue is that, besides the page, I want to update the client side with messages from the RESTful server.
If all of the above is unnecessary and a simpler client-server application would prove better and less complex, please feel free to tell me.
Thank you in advance.
Generally you should upload your data to a resource on the server (e.g. POST /widgets) and the server should immediately return either a 201 Created or (if creation is too slow and needs to happen later) a 202 Accepted status. There are several approaches after that happens, each with its merits:
Polling - The server's response includes a Location header which the client can then proceed to poll until a change happens (e.g. check for an update every second). This is the easiest approach and quite efficient if you use HTTP caching effectively and the average number of checks is relatively low (see the sketch after this list).
Push notification - The server sends a push notification when the change happens, the report is generated, etc. Obviously this requires you to store the client's details and their notification requirements. This is probably the cleanest approach and also easy to scale. In the case of Android (and also iOS) you have free push notifications available via Google Cloud Messaging.
Persistent connection - Set up a persistent connection between client and server, e.g. using a WebSocket or a low-level TCP connection. This should yield the fastest response times, but will probably drain the phone battery, be harder to scale on the server, and be more complex to code.
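The sketch referred to in the polling item, as a rough JAX-RS resource; the path, the in-memory "store", and using an empty string to mean "still processing" are all assumptions made for the example:

```java
import java.net.URI;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/widgets")
public class WidgetResource {

    // id -> finished result; an empty string means "accepted but not processed yet".
    private static final Map<String, String> RESULTS = new ConcurrentHashMap<>();

    @POST
    public Response create(String payload) {
        String id = UUID.randomUUID().toString();
        // Hand the payload off to background processing here; the sketch only records the id.
        RESULTS.put(id, "");
        return Response.accepted()                       // 202 Accepted
                .location(URI.create("/widgets/" + id))  // tells the client where to poll
                .build();
    }

    @GET
    @Path("/{id}")
    public Response poll(@PathParam("id") String id) {
        String result = RESULTS.get(id);
        if (result == null) {
            return Response.status(Response.Status.NOT_FOUND).build();
        }
        if (result.isEmpty()) {
            // Not ready yet: the client should poll again later.
            return Response.status(Response.Status.ACCEPTED).build();
        }
        return Response.ok(result).build();              // 200 with the finished resource
    }
}
```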
I am developing Restful services where we will be inserting/updating new records into database.
Since REST uses HTTP for its communication and HTTP is not reliable, I am worried that the request may not reach the server in case of a connection failure.
One of the suggestions I found in the link was to just retry from the client side if the connection fails. But we don't have any control over the client applications.
Another solution was to implement a messaging system like RabbitMQ/JMS to ensure reliability.
I also found in the following link that adding session state improves reliability. I am not able to understand how this happens, and more importantly, isn't a good RESTful service always stateless?
So to summarize my questions:
To achieve reliability, are messaging systems the best possible approach?
How does session management help me in achieving reliability?
Messaging can help, as long as you don't do any processing when you receive a command to insert or update information: you immediately put the command in a queue. This solution usually adds quite a bit of complexity, as you need to notify your client asynchronously when you finish processing the command (was it successful, did it fail... or did I fail to send the outcome?).
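If you go the messaging route, the accept-and-enqueue step could look roughly like this with the RabbitMQ Java client; the broker address and the "commands" queue name are assumptions for the example:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class CommandPublisher {

    public void publish(String commandJson) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue + persistent messages so accepted commands survive a broker restart.
            channel.queueDeclare("commands", true, false, false, null);
            channel.basicPublish("", "commands",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    commandJson.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

The REST endpoint can then acknowledge the client as soon as the publish succeeds, and a separate consumer processes the command later.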
Session management? For reliability? Never heard of that :). RESTful services are usually stateless... so no sessions here!
Another option (depending on how your clients integrate with you) is to allow your clients to generate the IDs of the items you will be storing/updating. In that case, if they get an error back but you actually processed the command successfully, the client can retry and the same update will happen. You can pair this with versioning to prevent stale updates arriving late.
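A small sketch of that client-generated-id idea as a JAX-RS PUT; the resource path, the in-memory store, and passing the version as a query parameter are all illustrative assumptions:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;

@Path("/items")
public class ItemResource {

    private static final Map<String, Long> VERSIONS = new ConcurrentHashMap<>();
    private static final Map<String, String> ITEMS = new ConcurrentHashMap<>();

    // PUT to a client-generated id is idempotent: retrying after a lost
    // response repeats the same write instead of creating a duplicate.
    @PUT
    @Path("/{id}")
    public Response upsert(@PathParam("id") String id,
                           @QueryParam("version") long version,
                           String body) {
        Long current = VERSIONS.get(id);
        if (current != null && current >= version) {
            // Duplicate or stale update arriving late: skip the write but still answer successfully.
            return Response.ok(ITEMS.get(id)).build();
        }
        ITEMS.put(id, body);
        VERSIONS.put(id, version);
        return Response.ok(body).build();
    }
}
```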
I'm creating a monitor application that monitors the activities of a user. There are four elements in my system:
EventCatcher: The EventCatcher is responsible for catching all the events that happen in a subsystem and pushes the data to the EventHandler. Based on observation, there is an average of 10 events per second being pushed to the EventHandler. Some events are UserLogin and UserLogout.
EventHandler: The EventHandler is a singleton class that handles all the incoming events from the EventCatcher. It also keeps track of all the logged-in users in the system. So, whenever the EventHandler receives a UserLogin event, the User object is extracted from the event and stored in a HashMap. When a UserLogout event is received, that User object is removed from the HashMap. This class also maintains a Set of all active WebSocket sessions, because every time an event occurs I want to inform all the open sessions that a particular event happened (see the sketch after this list).
WebSocket Endpoint: This is just a simple Java class annotated with @ServerEndpoint.
Clients: The system I will be building is for internal (company) use only. In production there will be, at most, around 5-10 clients. All the clients will receive the same information every time an event occurs.
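The sketch referred to above: a rough shape of the EventHandler just described. User is the question's own class; getId(), the registration methods, and the JSON payloads are assumptions made for illustration:

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.websocket.Session;

public final class EventHandler {

    private static final EventHandler INSTANCE = new EventHandler();

    private final Map<String, User> loggedInUsers = new ConcurrentHashMap<>();
    private final Set<Session> sessions = new CopyOnWriteArraySet<>();

    private EventHandler() { }

    public static EventHandler getInstance() { return INSTANCE; }

    // Called by the WebSocket endpoint's @OnOpen / @OnClose methods.
    public void register(Session session)   { sessions.add(session); }
    public void unregister(Session session) { sessions.remove(session); }

    public void onUserLogin(User user) {
        loggedInUsers.put(user.getId(), user);
        broadcast("{\"event\":\"UserLogin\",\"user\":\"" + user.getId() + "\"}");
    }

    public void onUserLogout(User user) {
        loggedInUsers.remove(user.getId());
        broadcast("{\"event\":\"UserLogout\",\"user\":\"" + user.getId() + "\"}");
    }

    // Push only the small event delta to every open session, not a full JSON dump.
    private void broadcast(String json) {
        for (Session session : sessions) {
            if (session.isOpen()) {
                session.getAsyncRemote().sendText(json);
            }
        }
    }
}
```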
So right now I am trying to convince my supervisor that WebSockets are the way to go; however, my supervisor finds it really unnecessary because a simple polling solution would do the trick.
His points are:
We don't really need up-to-date information by the millisecond. We can poll every second.
If I were to maintain a list of open WebSocket sessions, how would that work in a clustered environment (we use a load balancer)?
If I plan to send information to the client every time an event (UserLogin, UserLogout) occurs, I should be able to just send small updates to all WebSocket sessions - meaning I can't be sending a whole JSON dump of everything. So for every WebSocket instance I would have to maintain another Set of Users and keep it in sync with the Set contained in the EventHandler.
What my supervisor suggests is that I lose the WebSocket and just convert it to a simple Servlet and let the clients poll every second to receive the entire JSON dump.
In this scenario, should I stick with WebSockets? Or should I just poll?
The main advantage, as far as I've read, of Websockets vs. polling is that by using Websockets, you will have a persistent connection from client to server. HTTP is not really meant for real-time data.
Also, polling requires sending an HTTP request every time, and every request comes with HTTP headers. If an HTTP request header contains 800 bytes, then polling every second means 48 KB sent per minute per client. With a WebSocket, this isn't a problem.
But then again, we won't really have a lot of active clients. We're not concerned about third parties sniffing our requests because this system is for company use only - internal use! And I believe my supervisor wants something simple and reliable.
I am fine with either way. I just want to be sure whether I'm using the right tool for the job.
Additional question: If WebSockets is the way to go, is there any reason why I should consider polling?
The entire purpose of WebSocket is to efficiently support continuing connections between client and server.
I’m not clear on how you are implementing your app. If this is a web app running in a Servlet environment leveraging WebSocket support in the web server, be aware that you need to use recent versions of the Servlet container. For example, with Tomcat you must use either version 8 or the latest updates to version 7.
And of course the web browser must have support for WebSocket.
Be aware that WebSocket is still a new technology that has been changing and evolving in both the specs and the implementations.
Atmosphere
You may want to consider using the Atmosphere framework. Atmosphere supports multiple techniques of Push including WebSocket & Comet.
The Vaadin web-app framework leverages Atmosphere to provide automatic support for Push in your app. By default, WebSocket is automatically attempted first. If WebSocket is not available, Vaadin+Atmosphere falls back automatically to the other techniques including polling.
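As a rough illustration of the Vaadin approach (assuming Vaadin 7/8, where the @Push annotation enables Atmosphere-based push), a background thread can update the browser like this; the UI class and the label are made up for the sketch:

```java
import com.vaadin.annotations.Push;
import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Label;
import com.vaadin.ui.UI;

// @Push lets the framework pick the transport (WebSocket first,
// falling back to other techniques such as polling) automatically.
@Push
public class MonitorUI extends UI {

    private final Label status = new Label("Waiting for events...");

    @Override
    protected void init(VaadinRequest request) {
        setContent(status);
    }

    // Called from a background thread when an event arrives;
    // UI.access() safely schedules the update and pushes it to the browser.
    public void showEvent(String message) {
        access(() -> status.setValue(message));
    }
}
```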
I am trying to create a framework/library/API for creating small multiuser games, in which the goal is to achieve 'decoupling' between the server, the client, and the business logic.
The server in my case registers the clients and sends that list to the business logic; the clients register with the server; and the business logic does the game-logic work and updates the clients by getting the list of clients from the server.
But currently I have only one class, so it's trivial; however, this could consist of several game objects (and what would be the role of serialized/remote classes like the game engine, player, score, move, board?).
I decided to use RMI for this, and it will definitely use a callback mechanism. Can somebody tell me how I could achieve this, incorporating the requirement of the server updating the clients (callbacks)?
PS: I am currently working on the design, which has one remote/serialized object for handling the game logic, but I wanted to use other classes as I mentioned, for the sake of making a multiuser game library and to show the use of its important classes in an example.
Thanks a lot,
jibby
If you are intending this framework to work for real-time games then I would advise against using RMI - it isn't really designed for that sort of thing. Also be aware that two-way RMI between machines on different subnets is very hard to get working.
It seems as if you need the clients to be informed by the server when events occur. When your client connects it can look up a Remote object from the server's RMI registry and call a method on that to pass a Remote object it has created (hosted on the client) to the server. The server will have to maintain a collection of these client objects and iterate through them to send events. This is a tricky architecture to get right, as if the network goes down or a client goes offline you will have to deal with all sorts of nasty error handling and freeze-ups. I would recommend you keep the majority of communication in one direction - from client to server. Also keep it as simple as possible - simply a Remote object on the server with various methods that take Serializables as parameters and return Serializables.
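A bare-bones sketch of that callback arrangement with plain RMI; the interface and method names are invented for illustration:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Remote object the client exports and hands to the server.
interface GameClient extends Remote {
    void onGameEvent(String event) throws RemoteException;
}

// Remote object the server publishes in its registry.
interface GameServer extends Remote {
    void register(GameClient client) throws RemoteException;
}

class GameServerImpl implements GameServer {

    private final List<GameClient> clients = new CopyOnWriteArrayList<>();

    @Override
    public void register(GameClient client) {
        clients.add(client);
    }

    // Called by the game logic; drops clients that have gone away.
    void broadcast(String event) {
        for (GameClient client : clients) {
            try {
                client.onGameEvent(event);
            } catch (RemoteException e) {
                clients.remove(client);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        GameServerImpl server = new GameServerImpl();
        GameServer stub = (GameServer) UnicastRemoteObject.exportObject(server, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("GameServer", stub);
    }
}
```

The client would export its own GameClient implementation with UnicastRemoteObject and pass it to register(); the server then calls back into it, which is a remote implementation of the observer pattern mentioned below.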
Whether or not this is MVC depends on your interpretation. You could see the clients as views with the model and controller on the server in which case it is MVC with the event mechanism being a remote implementation of the observer pattern.
The trickiest part of the task will definitely be getting the code on the server that notifies the clients correct as it will need to be multi-threaded and handle errors gracefully - good luck!