Current Solution
I have a Java server (Tomcat) setup issue that I'm hoping someone can provide some guidance on. Currently my web application runs on a single server, with a Java backend on Tomcat 8.5. To handle WebSocket connections, I keep a Map of all the javax.websocket.Session objects passed to the onOpen() method.
@ServerEndpoint("/status")
public class StatusMessenger
{
    private static final ConcurrentHashMap<String, Session> sessions = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session) throws Exception
    {
        String sessionId = session.getRequestParameterMap().get("sessionId").get(0);
        sessions.put(session.getId(), session);
    }
My application only broadcasts messages to all users, so the broadcast() in my code simply loops through sessions.values() and sends the message through each javax.websocket.Session.
public static void broadcast(String event, String message)
{
    for (Session session : sessions.values())
    {
        try
        {
            // send the message
            session.getBasicRemote().sendText(message);
        }
        catch (IOException e)
        {
            // log and skip sessions that can no longer be written to
        }
    }
}
I'm not even sure that's the correct way to handle WebSockets in Tomcat, but it's worked for me for years, so I assume it's acceptable.
The Problem
I want to now horizontally scale out my application on AWS to multiple servers. For the most part my application is stateless and I store the regular HTTP session information in the database. My problem is this static Map of javax.websocket.Session - it's not stateless, and there's a different Map on each server, each with their own list of javax.websocket.Sessions.
In my application, the server code in certain situations will need to broadcast a message to all the users. These events may happen on any server in this multi-server setup. The event will trigger the broadcast() method, which loops through the javax.websocket.Sessions. However, it will only loop through the sessions in its own Map.
How do I get the multi-server application to broadcast this message to all websocket connections stored across all the servers in the setup? The application works fine on a single-server (obviously) because there's only 1 list of websocket sessions. In other words, how do I write a stateless application that needs to store the websocket connections so it can communicate with them later?
I found 2 alternative solutions for this...
In my load balancer I put a rule to route all paths with /{my websocket server path} to one server so that all the Sessions are on the same server.
Use a 3rd party web push library like Pusher (http://pusher.com)
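For completeness, the pattern most multi-server WebSocket setups converge on is to publish each broadcast to a channel shared by all servers (Redis pub/sub, a JMS topic, etc.) and let every server relay the message to the sessions in its own local Map. A minimal in-process sketch of the idea - the Bus class below is a hypothetical stand-in for the real broker, and all names are made up for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical stand-in for a shared channel (Redis pub/sub, JMS topic, ...).
class Bus {
    private final List<Node> subscribers = new ArrayList<>();
    void subscribe(Node n) { subscribers.add(n); }
    void publish(String message) {
        // every server receives the event, regardless of where it originated
        for (Node n : subscribers) n.broadcastLocal(message);
    }
}

// Each server keeps only its own session map and relays bus messages to it.
class Node {
    // sessionId -> messages delivered (stand-in for javax.websocket.Session)
    final Map<String, List<String>> sessions = new HashMap<>();
    void addSession(String sessionId) { sessions.put(sessionId, new ArrayList<>()); }
    void broadcastLocal(String message) {
        for (List<String> delivered : sessions.values()) delivered.add(message);
    }
}
```

With this shape, broadcast() becomes a bus.publish(...) call instead of a direct loop, and the existing per-server loop moves into the subscription callback.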
Related
In my project I want to have multiple clients connecting to a service. I am using the Java RSocket implementation.
The service should maintain a state for each client. At this point I can manage the clients by some identifier, and I have already implemented that option. But I do not want to manage the sessions manually using strings.
So another idea is to identify the clients by the RSocket connection. Is there a way to use an RSocket channel for identification of a specific client?
Imagine an example service and a couple of clients. Each client has an RSocket channel up and running with the service. Is there a way to identify these clients on the server side using the RSocket channel? It would be amazing if you could show a programmatic example of such behavior.
Thank you!
EDIT (describing the case in more detail)
Here is my example.
We currently have three CORBA objects that are used as demonstrated in the diagram:
LoginObject (to which a reference is retrieved via the NamingService). Clients can call a login() method to obtain a Session
The Session object has various methods to query details about the current service context and, most importantly, to obtain a Transaction object
The Transaction object can be used to execute various commands via a generic method that takes a commandName and a list of key-value pairs as parameters
After the client has executed n commands, it can commit or roll back the transaction (also via methods on the Transaction object)
So here we use the Session object to execute transactions on our service.
Now we decided to move away from CORBA to RSocket. Thus we need the RSocket microservice to be able to store the session's state; otherwise we can't know what's going to be committed or rolled back. Can this be done with just an individual Publisher for each client?
Here's an example I made the other day that will create a stateful RSocket using Netifi's broker:
https://github.com/netifi/netifi-stateful-socket
Unfortunately you'd need to build our develop branch locally to try it out (https://github.com/netifi/netifi-java) - there should be a release with the code by the end of the week if you don't want to build it locally.
I'm working on a pure RSocket example too, but if you want to see how it would work, take a look at the StatefulSocket found in the example. It should give you a clue how to deal with the session with pure RSocket.
Regarding your other question about a transaction manager - you would need to tie your transaction to the Reactive Streams signals that are being emitted: if you receive a cancel or an onError you'd roll back, and if you receive an onComplete you'd commit the transaction. There are side-effect methods on Flux/Mono that should make this easy to deal with. Depending on what you are doing, you could also use the BaseSubscriber, as it has hooks to deal with the different Reactive Streams signals.
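The commit-on-complete / rollback-on-error idea can be sketched without Reactor using the JDK's java.util.concurrent.Flow API (the Transaction class here is hypothetical; with Reactor you would hang the same logic on doOnComplete/doOnError or on BaseSubscriber's hooks, and handle cancel on the producer side the same way):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

// Hypothetical transaction: buffers commands, then commits or rolls back.
class Transaction {
    final List<String> commands = new ArrayList<>();
    boolean committed, rolledBack;
    void execute(String command) { commands.add(command); }
    void commit()   { committed = true; }
    void rollback() { rolledBack = true; }
}

// Ties the transaction outcome to the stream's terminal signal:
// onComplete -> commit, onError -> rollback.
class TransactionSubscriber implements Flow.Subscriber<String> {
    private final Transaction tx;
    TransactionSubscriber(Transaction tx) { this.tx = tx; }

    @Override public void onSubscribe(Flow.Subscription s) { s.request(Long.MAX_VALUE); }
    @Override public void onNext(String command) { tx.execute(command); }
    @Override public void onError(Throwable t) { tx.rollback(); }
    @Override public void onComplete() { tx.commit(); }
}
```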
Thanks,
Robert
An example of resuming connections, i.e. maintaining the state on the server, has landed in the rsocket-java repo:
https://github.com/rsocket/rsocket-java/commit/d47629147dd1a4d41c7c8d5af3d80838e01d3ba5
This resumes a whole connection, including whatever state is associated with each individual channel.
There is an rsocket-cli project that lets you try this out. Start and stop the socat process and observe the client and server progress.
$ socat -d TCP-LISTEN:5001,fork,reuseaddr TCP:localhost:5000
$ ./rsocket-cli --debug --resume --server -i cli:time tcp://localhost:5000
$ ./rsocket-cli -i client --stream --resume tcp://localhost:5001
From your description it looks like a channel will work best. I haven't used channels before, so I can't really guarantee it (sorry), but I'd recommend you try something like this:
A transaction controller:
public class TransactionController implements Publisher<Payload> {
    List<Transaction> transactions = new ArrayList<>();

    @Override
    public void subscribe(Subscriber<? super Payload> subscriber) {
        // keep a reference to the subscriber so the controller
        // can emit response payloads to it later
    }

    public void processPayload(Payload payload) {
        // handle transactions...
    }
}
And in your RSocket implementation, override requestChannel:
@Override
public Flux<Payload> requestChannel(Publisher<Payload> payloads) {
    // Create a new controller for each channel
    TransactionController cntrl = new TransactionController();
    Flux.from(payloads)
        .subscribe(cntrl::processPayload);
    return Flux.from(cntrl);
}
I'm using the Play Framework for Java for an app. I am attempting to make it distributed by deploying copies of the application across several servers and keeping a configurable list of nodes in the database; the nodes must be able to communicate with each other. In MongoDB, the list is stored as JSON like so:
{
"master": "host1.com:2678",
"nodes": ["host2.com:2678", "host3.com:2678", "host4.com:2678"]
}
The code deployed on each server is identical, but the scheduler is enabled only on the master node and will schedule particular work to nodes depending on how busy they are. The code is not provided here, as the specifics of the scheduler's operation aren't important for my question.
In order to know how busy they are, to schedule things and for other status updates, the nodes need to be able to communicate with each other.
Play Framework's Web Service client allows me to do this by making HTTP requests from one node to another, like so:
HttpResponse res = WS.url("http://host2.com").get();
But the idea is for specific HTTP requests (such as those used for scheduling) to be allowed only if they come from another one of the nodes (be it the master or a slave node), but not from a web browser, curl, etc. How do I do that securely? I can check the host of the incoming request or particular headers, but surely those are easy to forge?
If you want this to be enforced on all controllers, check out Play's allowed hosts filter.
If you want to enforce this filter on a specific controller/method, you can try this:
class MyController @Inject()(filter: AllowedHostsFilter) extends Controller {
  def get = filter.apply(Action.async { implicit request =>
    Future.successful(Ok)
  })
}
You could have a look at pac4j.org; they have a lot of options for implementing security features in Play.
You could maybe filter by IP address:
http://www.pac4j.org/1.9.x/docs/authenticators/ip.html
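If you go the IP-filtering route, the check itself is small; here is a minimal sketch (the class name and addresses are made up for illustration, and in production you would also need to account for proxies rewriting the remote address, e.g. via X-Forwarded-For):

```java
import java.util.Set;

// Hypothetical allowlist check: permit a request only when the remote
// address belongs to one of the known cluster nodes.
class NodeIpFilter {
    private final Set<String> allowedNodes;

    NodeIpFilter(Set<String> allowedNodes) {
        this.allowedNodes = allowedNodes;
    }

    boolean isPermitted(String remoteAddr) {
        return allowedNodes.contains(remoteAddr);
    }
}
```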
We have a Spring MVC web application that serves many users.
Our data source is a MySQL DB.
The web app is deployed on a cluster of Tomcat servers.
Users do not have a session while interacting with the web application.
The protocol is a simple REST API between users and the web app.
We would like to reject a user's request if the handling of another request from that user is still in progress. For example:
user1 requests for action1...
action1 handling begins in webapp.
user1 requests for action1 again (before the handling is completed)
server declines the 2nd request since action1 handling is still in progress..
action1 handling completed.
result for action1 is returned to user1's client.
(now further actions are accepted by the web app)
How is it possible to achieve this? If we used a single web-app node we could manage it simply in memory, but since we use a cluster with no shared memory or cache, it won't work.
Another alternative we thought about is locking in the DB:
for example, create a constraint on the userID in a dedicated table (called userLock), insert into this table before each handling, and finish by removing the entry. If an illegal request is made, a constraint exception is thrown and the request is not handled.
Are there any other alternatives to this "semaphore" behavior?
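The userLock-table idea boils down to an atomic try-acquire/release per user. A single-node sketch of those semantics (in the clustered version the map below becomes the dedicated DB table, and the unique constraint on userID plays the role of the atomic putIfAbsent - a duplicate-key exception means "decline the request"):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Single-node stand-in for the userLock table described above.
class UserLock {
    private final ConcurrentMap<String, Boolean> locks = new ConcurrentHashMap<>();

    // Returns true if the lock was acquired; false means a request
    // for this user is already in progress and should be declined.
    boolean tryAcquire(String userId) {
        return locks.putIfAbsent(userId, Boolean.TRUE) == null;
    }

    void release(String userId) {
        locks.remove(userId);
    }
}
```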
Hazelcast provides a distributed implementation of java.util.concurrent.locks.Lock which could be of use to you.
HazelcastInstance hazelcastInstance = Hazelcast.newHazelcastInstance();
Lock lock = hazelcastInstance.getLock("myLock");
lock.lock();
try {
    // do something here
} finally {
    lock.unlock();
}
More options are listed in this answer.
You could use some queuing system, JMS or other:
http://en.m.wikipedia.org/wiki/Java_Message_Service
My application must open a TCP socket connection to a server and listen for periodically incoming messages.
What are the best practices to implement this in a JEE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() function is called from a @Startup bean with a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (JBoss 8 / WildFly) hangs and no longer reacts to shutdown or redeployment commands. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple when working in a cluster:
The EJB instances will inherently block each other while waiting on I/O connection timeouts and all other I/O-related timeouts - that is, if the connection does not block for an unspecified amount of time, in which case you will have to create another thread which scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale: no matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you already ran into problems by doing that :). JBoss 8 seems not to be able to properly create and destroy the bean.
Now, I know your bean is a @Singleton, so your architecture does not rely on transactionality, clustering and distribution of reading from that socket. So you might be OK with that.
However :D, you are asking for a Java EE-compliant way of solving this. Here is what should be done:
Redesign your solution to go with JMS. It 'smells' like you are trying to provide async messaging functionality (send a message & wait for a reply) over a synchronous protocol. Just give it a thought.
Create a JCA-compliant adapter which will be injected into your EJB as a @Resource
You will have a connection pool configurable at the AS level (so you can have different values for different environments)
You will have transactionality and rollback. Of course the rollback behavior will have to be coded by you
You can inject it via a @Resource annotation
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter
We have a Java EE application deployed on a Glassfish 3.1.2 cluster which provides a REST API using JAX-RS. We regularly deploy new versions of the application by deploying an EAR to a duplicate cluster instance, then update the HTTP load balancer to send traffic to the updated instance instead of the old one.
This allows us to upgrade with no loss of availability as described here: http://docs.oracle.com/cd/E18930_01/html/821-2426/abdio.html#abdip. We are frequently making significant changes to the application, which makes the new versions "incompatible" (which is why we use two clusters).
We now have to provide a message-queue interface to the application for some high throughput internal messaging (from C++ producers). However, using Message Driven Beans I cannot see how it is possible to upgrade an application without any service disruption?
The options I have investigated are:
Single remote JMS queue (openMQ)
Producers send messages to a single message queue; messages are handled by an MDB. When we start a second cluster instance, messages should be load-balanced to the upgraded cluster, but when we stop the "old" cluster, outstanding transactions will be lost.
I considered using JMX to disable producers/consumers on that message queue during the upgrade, but that only pauses message delivery. Outstanding messages will still be lost when we disable the old cluster (I think?).
I also considered ditching the @MessageDriven annotation and creating a MessageConsumer manually. This does seem to work, but the MessageConsumer cannot then access other EJBs using the @EJB annotation (as far as I know):
// Singleton bean with start()/stop() functions that
// enable/disable message consumption
@Singleton
@Startup
public class ServerControl {

    private boolean running = false;

    @Resource(lookup = "jms/TopicConnectionFactory")
    private TopicConnectionFactory topicConnectionFactory;

    @Resource(lookup = "jms/MyTopic")
    private Topic topic;

    private Connection connection;
    private Session session;
    private MessageConsumer consumer;

    public ServerControl()
    {
        this.running = false;
    }

    public void start() throws JMSException {
        if( this.running ) return;

        connection = topicConnectionFactory.createConnection();
        session = connection.createSession(false, Session.DUPS_OK_ACKNOWLEDGE);
        consumer = session.createConsumer(topic);
        consumer.setMessageListener(new MessageHandler());

        // Start the message queue handlers
        connection.start();
        this.running = true;
    }

    public void stop() throws JMSException {
        if( this.running == false ) return;

        // Stop the message queue handlers
        consumer.close();
        this.running = false;
    }
}
// MessageListener has to invoke functions defined in other EJBs
@Stateless
public class MessageHandler implements MessageListener {

    @EJB
    SomeEjb someEjb; // This is null

    public MessageHandler() {
    }

    @Override
    public void onMessage(Message message) {
        // This works but someEjb is null unless I
        // use the @MessageDriven annotation, but then I
        // can't gracefully disconnect from the queue
    }
}
Local/Embedded JMS queue for each cluster
Clients would have to connect to two different message queue brokers (one for each cluster).
Clients would have to be notified that a cluster instance is going down and stop sending messages to the queues on that broker.
Generally much less convenient and tidy than the existing http solution.
Alternative message queue providers
Connect Glassfish up to a different type of message queue or different vendor (e.g. Apache OpenMQ), perhaps one of these has the ability to balance traffic away from a particular set of consumers?
I have assumed that disabling the application will simply "kill" any outstanding transactions. If disabling the application allows existing transactions to complete, then I could just do that after bringing the second cluster up.
Any help would be appreciated! Thanks in advance.
If you use high availability then all of the messages for the cluster will be stored in a single data store, as opposed to the local data store on each instance. You could then configure both clusters to use the same store; when shutting down the old cluster and spinning up the new one, you have access to all the messages.
This is a good video that helps to explain high availability jms for glassfish.
I don't understand your assumption that when we stop the "old" cluster, outstanding transactions will be lost. The MDBs will be allowed to finish their message processing before the application stops and any unacknowledged messages will be handled by the "new" cluster.
If the load balancing between the old and new versions is an issue, I would put the MDBs into a separate .ear and stop the old MDBs as soon as the new MDBs are online - or even before that, if your use case allows for a delay in message processing until the new version is deployed.