Clustered, Stateless web application - how to handle concurrent requests? - java

We have a Spring MVC web application that serves many users performing various actions.
Our data source is a MySQL DB.
The web app is deployed on a cluster of Tomcat servers.
Users do not have a session while interacting with the web application.
The protocol is a simple REST API between users and the web app.
We would like to reject a user's request if another request from that user is still being handled. For example:
user1 requests action1...
action1 handling begins in the web app.
user1 requests action1 again (before the handling is completed).
The server declines the 2nd request since action1 handling is still in progress.
action1 handling completes.
The result for action1 is returned to user1's client.
(Now further actions are accepted by the web app.)
How is it possible to achieve this? If we used a single web app node we could manage it simply in memory, but since we use a cluster with no shared memory/cache, that won't work.
Another alternative we thought about is locking in the DB:
for example, create a unique constraint on the userID in a dedicated table (called userLock), insert into this table before each handling, and remove the entry when finalizing. If a concurrent request is made, a constraint violation exception is thrown and the request is declined.
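A rough sketch of what we have in mind (table, column and method names are just illustrative; the UNIQUE constraint on userId is what actually enforces the exclusion):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.SQLIntegrityConstraintViolationException;

public class UserLockDao {

    // Returns true if the lock was acquired; a second concurrent request for the
    // same user violates the UNIQUE constraint and is declined.
    public boolean tryAcquire(Connection conn, long userId) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("INSERT INTO userLock (userId) VALUES (?)")) {
            ps.setLong(1, userId);
            ps.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException e) {
            return false; // another request for this user is still in progress
        }
    }

    // Called in a finally block once the action handling has completed.
    public void release(Connection conn, long userId) throws SQLException {
        try (PreparedStatement ps =
                 conn.prepareStatement("DELETE FROM userLock WHERE userId = ?")) {
            ps.setLong(1, userId);
            ps.executeUpdate();
        }
    }
}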
Are there any other alternatives to this "semaphore" behavior?

Hazelcast provides a distributed implementation of java.util.concurrent.locks.Lock, which could be of use to you.
import java.util.concurrent.locks.Lock;

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getLock("myLock");
lock.lock();
try {
    // do something here
} finally {
    lock.unlock();
}
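In your scenario you would presumably key the lock per user, e.g. hazelcastInstance.getLock("action-" + userId), and use tryLock() instead of lock() so that a second concurrent request from the same user is declined immediately rather than queued.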
More options are listed in this answer.

You could use some queuing system, such as JMS:
http://en.m.wikipedia.org/wiki/Java_Message_Service
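If you went the queuing route, the idea would be to push incoming actions onto a queue and process them sequentially rather than declining them. A minimal sketch using the JMS 2.0 simplified API (the ConnectionFactory wiring and queue name are provider-specific and purely illustrative):

import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.Queue;

public class ActionDispatcher {

    private final ConnectionFactory connectionFactory; // provided by your JMS broker

    public ActionDispatcher(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // Enqueue the action; with a single consumer per queue the messages are
    // processed one at a time, so two actions for the same user never run concurrently.
    public void enqueue(String userId, String action) {
        try (JMSContext context = connectionFactory.createContext()) {
            Queue queue = context.createQueue("user-actions");
            context.createProducer()
                   .setProperty("userId", userId)
                   .send(queue, action);
        }
    }
}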


Tomcat Websockets Across Multiple Servers

Current Solution
I have a Java server (Tomcat) setup issue that I'm hoping someone can provide some guidance on. Currently my web application is single-server, with a Java backend running on Tomcat 8.5. To handle WebSocket connections, I keep a Map of all the javax.websocket.Session objects passed to the onOpen() method.
@ServerEndpoint("/status")
public class StatusMessenger
{
    private static ConcurrentHashMap<String, Session> sessions = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session) throws Exception
    {
        String sessionId = session.getRequestParameterMap().get("sessionId").get(0);
        sessions.put(session.getId(), session);
    }
My application only broadcasts messages to all users, so the broadcast() method in my code simply loops through sessions.values() and sends the message through each javax.websocket.Session.
public static void broadcast(String event, String message)
{
    for (Session session : sessions.values())
    {
        // send the message, e.g. session.getAsyncRemote().sendText(message);
    }
}
I'm not even sure that's the correct way to handle Websockets in Tomcat, but it's worked for me for years, so I assume it's acceptable.
The Problem
I want to now horizontally scale out my application on AWS to multiple servers. For the most part my application is stateless and I store the regular HTTP session information in the database. My problem is this static Map of javax.websocket.Session - it's not stateless, and there's a different Map on each server, each with its own set of javax.websocket.Sessions.
In my application, the server code in certain situations will need to broadcast out a message to all the users. These events may happen on any server in this multi-server setup. The event will trigger the broadcast() method, which loops through the javax.websocket.Sessions. However, it will only loop through the sessions in its own Map.
How do I get the multi-server application to broadcast this message to all websocket connections stored across all the servers in the setup? The application works fine on a single-server (obviously) because there's only 1 list of websocket sessions. In other words, how do I write a stateless application that needs to store the websocket connections so it can communicate with them later?
I found 2 alternative solutions for this:
1. In my load balancer I put a rule to route all paths with /{my websocket server path} to 1 server so that all the Sessions were on the same server.
2. Use a 3rd party web push library like Pusher (http://pusher.com).

Stateful RSocket Application

In my project I want to have multiple clients connecting to a service. I am using the Java RSocket implementation.
The service should maintain state for each client. At this point I could manage the clients by some identifier; that option I have already implemented. But I do not want to manage the session manually using strings.
So another idea is to identify the clients by the RSocket connection. Is there a way to use the RSocket channel to identify a specific client?
Imagine an example service and a couple of clients. Each client has an RSocket channel to the service up and running. Is there a way to identify these clients on the server side using the RSocket channel? It would be amazing if you could show a programmatic example of such behavior.
Thank you!
EDIT (describing the case in more detail)
Here is my example.
We currently have three CORBA objects that are used as demonstrated in the diagram:
LoginObject (to which a reference is retrieved via the NamingService). Clients can call a login() method to obtain a Session.
The Session object has various methods to query details about the current service context and, most importantly, to obtain a Transaction object.
The Transaction object can be used to execute various commands via a generic method that takes a commandName and a list of key-value pairs as parameters.
After the client has executed n commands, it can commit or roll back the transaction (also via methods on the Transaction object).
So here we use the Session object to execute transactions on our service.
Now we have decided to move away from CORBA to RSocket. Thus we need the RSocket microservice to be able to store the session's state, otherwise we can't know what's going to be committed or rolled back. Can this be done with just an individual Publisher for each client?
Here's an example I made the other day that will create a stateful RSocket using Netifi's broker:
https://github.com/netifi/netifi-stateful-socket
Unfortunately you'd need to build our develop branch locally to try it out (https://github.com/netifi/netifi-java) - there should be a release with the code by the end of the week if you don't want to build it locally.
I'm working on a pure RSocket example too, but if you want to see how it would work, take a look at the StatefulSocket found in the example. It should give you a clue about how to deal with the session in pure RSocket.
Regarding your other question about a transaction manager: you would need to tie your transaction to the Reactive Streams signals that are being emitted - if you receive a cancel or an onError you'd roll back, and if you receive an onComplete you'd commit the transaction. There are side-effect methods on Flux/Mono that should make this easy to deal with. Depending on what you are doing you could also use BaseSubscriber, as it has hooks to deal with the different Reactive Streams signals.
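A minimal sketch of that wiring (the Transaction type here is hypothetical, only to show where the Reactive Streams signals hook in):

import org.reactivestreams.Publisher;
import reactor.core.publisher.Flux;
import io.rsocket.Payload;

// Hypothetical transaction abstraction, just to illustrate the signal hooks.
interface Transaction {
    void execute(Payload command);
    void commit();
    void rollback();
}

class TransactionalChannel {
    Flux<Payload> handle(Publisher<Payload> payloads, Transaction tx) {
        return Flux.from(payloads)
                .doOnNext(tx::execute)             // apply each incoming command
                .doOnError(error -> tx.rollback()) // roll back on error
                .doOnCancel(tx::rollback)          // roll back if the client cancels
                .doOnComplete(tx::commit);         // commit when the stream completes
    }
}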
Thanks,
Robert
An example of resuming connections, i.e. maintaining the state on the server, has landed in the rsocket-java repo:
https://github.com/rsocket/rsocket-java/commit/d47629147dd1a4d41c7c8d5af3d80838e01d3ba5
This resumes a whole connection, including whatever state is associated with each individual channel, etc.
There is an rsocket-cli project that lets you try this out. Start and stop the socat process and observe the client and server progress.
$ socat -d TCP-LISTEN:5001,fork,reuseaddr TCP:localhost:5000
$ ./rsocket-cli --debug --resume --server -i cli:time tcp://localhost:5000
$ ./rsocket-cli -i client --stream --resume tcp://localhost:5001
From your description it looks like a channel will work best. I haven't used channels before so I can't really guarantee it (sorry), but I'd recommend you try something like this:
A transaction controller:
public class TransactionController implements Publisher<Payload> {

    List<Transaction> transactions = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super Payload> subscriber) {
    }

    public void processPayload(Payload payload) {
        // handle transactions...
    }
}
And in your RSocket implementation override the requestChannel:
@Override
public Flux<Payload> requestChannel(Publisher<Payload> payloads) {
    // Create a new controller for each channel
    TransactionController cntrl = new TransactionController();
    Flux.from(payloads)
        .subscribe(cntrl::processPayload);
    return Flux.from(cntrl);
}
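Since requestChannel is invoked once per client channel, each TransactionController instance effectively becomes that client's session state, so there is no need to correlate clients via string identifiers.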

How do I ensure my Java Play application only accepts HTTP requests from a particular host?

I'm using the Play framework for Java for an app. I am attempting to make it distributed by deploying copies of the application across several servers and having a configurable list of nodes in the database, which must be able to communicate with each other. In MongoDB, the list is stored in JSON like so:
{
"master": "host1.com:2678",
"nodes": ["host2.com:2678", "host3.com:2678", "host4.com:2678"]
}
The code deployed on each server is identical, but the scheduler is enabled only on the master node and will schedule particular work to nodes depending on how busy they are. The code is not provided here as the specifics of the scheduler's operation aren't important for my question.
In order to know how busy they are, to schedule things and for other status updates, the nodes need to be able to communicate with each other.
Play Framework's Web Service client allows me to do this by making HTTP requests from one node to another like so:
HttpResponse res = WS.url("http://host2.com").get();
But the idea is for specific HTTP requests (such as those used for scheduling) to be allowed only if they come from another one of the nodes (be it the master or a slave node), but not from a web browser, curl, etc. How do I do that securely? I can check the host of the incoming request or particular headers, but surely those are easy to forge?
If you want this to be enforced on all controllers, check out the Play allowed hosts filter.
If you want to enforce this filter on a specific controller/method you can try this:
class MyController @Inject()(filter: AllowedHostsFilter) extends Controller {
  def get = filter.apply(Action.async { implicit request =>
    Future.successful(Ok)
  })
}
You could have a look at pac4j.org; they have a lot of options to implement security features with Play.
You could maybe filter by IP address:
http://www.pac4j.org/1.9.x/docs/authenticators/ip.html

Callback function to do background jobs after completing the action in Java Spring

I am new to the Java Spring Framework; I am a Rails developer. I have a requirement in Java Spring where I need to run background jobs after the response is sent to the end user. The response should not wait for the jobs to complete, but the jobs should run every time the action completes.
It is a web service app. We have Service, BO and DAO layers, and we log any exceptions that occur while processing the user data to the database before the response is sent to the user. Now we want to move the exception handling to after the response is sent, to improve performance.
I remember in Rails we have callbacks/filters: after the action has executed, they call the methods we want to execute. Is the same available in Java Spring?
Thanks,
Senthil
I assume the use case is something like a user requests a long-running task, and you want to return a response immediately and then launch the task in the background.
Spring can help with this. See
http://docs.spring.io/spring/docs/3.2.x/spring-framework-reference/html/scheduling.html
In particular, see the @Async annotation.
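A minimal sketch of how that could look (class and method names are illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.EnableAsync;
import org.springframework.stereotype.Service;

@Configuration
@EnableAsync // turns on @Async processing
class AsyncConfig {
}

@Service
class ExceptionAuditService {

    // Runs on a separate thread; the controller can return its response
    // to the user without waiting for this method to finish.
    @Async
    public void logExceptionInBackground(Exception e) {
        // ... persist the exception details to the database ...
    }
}

Your service layer would then call exceptionAuditService.logExceptionInBackground(e) and return immediately.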
With respect to the client getting a response back following the async processing (exception or otherwise), you can do it, but it's extra work.
Normally the immediate response would include some kind of ID that the client could come back with after some period of time. (For example, when you run a search against the Splunk API, it gives you a job ID, and you come back later with that job ID to check on the result). If this works, do that. The client has to poll but the implementation is the simplest.
If not, then you have to have some way for the client to listen for the response. This could be a "reply-to" web service endpoint on the client (perhaps passed in with the original request as a custom X-Reply-To HTTP header), or it could be a message queue, etc.

Can Servlets have multi-step interactions?

Is there any way to start executing Java Servlet code (specifically, in WebSphere Application Server) (one session, one thread on the Servlet) and then pause to get more information from the calling client at various points? I require that the current session, and the ongoing Servlet thread, not die until specified, and instead keep waiting (open) for information from the client.
Is this kind of ongoing conversation possible? Or can the Servlet call to "doPost" only be started - and then the Servlet ignores the client until it finishes?
As suggested, I would use an object stored in the session to maintain the needed state. You can also modify the session on a servlet-by-servlet basis if you need certain actions to extend the session timeout beyond the webapp defaults, using the following method in the HttpSession API:
public void setMaxInactiveInterval(int interval) - Specifies the time, in seconds, between client requests before the servlet container will invalidate this session. A negative time indicates the session should never timeout.
You just need to establish your logic for your object setting/retrieval from session. Typically something like this:
HttpSession session = req.getSession();
MyBeanClass bean;
Object temp = session.getAttribute("myBean");
if (temp != null) {
    bean = (MyBeanClass) temp;
} else {
    bean = new MyBeanClass();
}
// Logic
session.setAttribute("myBean", bean);
You can save/update your session state between requests and when the next request comes, you can restore and continue whatever you were doing.
I have not dealt with this directly, but the underlying support is somewhat related to Jetty's continuation model and Servlet 3.0 suspend/resume support.
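For reference, a rough sketch of the Servlet 3.0 asynchronous style (this suspends the response on the server rather than holding a blocking conversation; the servlet and URL names are illustrative):

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/work", asyncSupported = true)
public class AsyncWorkServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        // Detach the request from the container thread; the response stays
        // open until asyncContext.complete() is called.
        AsyncContext asyncContext = req.startAsync();
        asyncContext.setTimeout(60_000); // keep the exchange open for up to a minute

        asyncContext.start(() -> {
            // ... long-running work; further client input would still arrive as
            // separate requests sharing the same HttpSession ...
            asyncContext.complete();
        });
    }
}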
Web frameworks that work the way the post describes (actually, they are resumed across different connections) are sometimes called continuation-based frameworks. I am not aware of any such frameworks in Java (as the Java language is not conducive to such models), but there are two rather well-known examples of the general principle:
Seaside (for Smalltalk) and;
Lift (for Scala).
Hope this was somewhat useful.
