interacting with JavaFX network service in a GUI - java

I am transitioning from writing a Java Swing application to JavaFX in order to write a modern Java-based GUI application.
I would like to know the best approach to creating a reusable, threaded network service. The way I coded up the network service was to use a controller class (generated from the FXML via the NetBeans GUI). I put the threading logic there via a private Service member named 'transmitter' and I wired up the start/stop logic via the Start/Stop button's event callback.
The network thread is implemented as a JavaFX Service - I did this since I would like to restart the service/thread whenever the destination address changes. This seems to be the recommended approach in place of a standalone Task.
The network service is very simple right now: all it does is use some GUI widgets to configure a packet to transmit to a host/port once a second. I need to restart the service only if the host/port widget changes; however, if the network service is running, I would like to modify the packet without interrupting/restarting the DatagramSocket. The places where I have questions and require some guidance are:
What is the recommended approach to threading a network thread in an FXML based application? An example would be greatly appreciated.
How do I safely communicate changes from GUI widgets (via their action-performed callbacks) to the running service class?
Shown below are the most relevant parts of my controller class:
/**
 * FXML Controller class
 *
 * @author johnc
 */
public class OpMessageServerController implements Initializable {

    @FXML
    private Text mCurrentDateTimeText;
    @FXML
    private Label mApplicationStatus;
    @FXML
    private ComboBox<DiscreteStatus> mPofDS;
    @FXML
    private ComboBox<PhaseOfFlightFMS> mPofFMS;
    @FXML
    private ComboBox<DiscreteStatus> mTailNumberDS;
    @FXML
    private ComboBox<DiscreteStatus> mConfigTableDS;
    @FXML
    private ComboBox<DiscreteStatus> mDateTimeDS;
    @FXML
    private TextField mEpicPN;
    @FXML
    private TextField mConfigTablePNHash;
    @FXML
    private TextField mTailNumber;
    @FXML
    private ComboBox<DiscreteStatus> mTopLevelPNDS;
    @FXML
    private Button mStartStopButton;
    @FXML
    private ComboBox<String> mDLMUHostSpec;
    @FXML
    private CheckBox connectionStatusC1;
    @FXML
    private CheckBox wsuConnectionStatus;
    @FXML
    private CheckBox connectionStatusC4;
    @FXML
    private CheckBox connectionStatusC3;
    @FXML
    private CheckBox connectionStatusC2;
    @FXML
    private CheckBox dlmuwConnectionStatus;

    private Service<Void> transmitter;
    /**
     * Initializes the controller class.
     * @param url
     * @param rb
     */
    @Override
    public void initialize(URL url, ResourceBundle rb) {
        mPofDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mPofDS.getSelectionModel().selectFirst();
        mPofFMS.setItems(FXCollections.observableArrayList(PhaseOfFlightFMS.values()));
        mPofFMS.getSelectionModel().selectFirst();
        mTailNumberDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mTailNumberDS.getSelectionModel().selectFirst();
        mConfigTableDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mConfigTableDS.getSelectionModel().selectFirst();
        mDateTimeDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mDateTimeDS.getSelectionModel().selectFirst();
        mTopLevelPNDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
        mTopLevelPNDS.getSelectionModel().selectFirst();
        // mDLMUHostSpec.setItems(FXCollections.observableArrayList(
        //     FXCollections.observableArrayList("localhost:1234", "192.168.200.2:1234")));

        // add event handler here to update the current date/time label
        // this should also update the transmit datastructure
        final Timeline timeline = new Timeline(new KeyFrame(
                Duration.seconds(1), (ActionEvent event) -> {
                    LocalDateTime currentDateTime = LocalDateTime.now();
                    mCurrentDateTimeText.setText(currentDateTime.format(
                            DateTimeFormatter.ofPattern("kk:mm:ss uuuu")));
                }));
        timeline.setCycleCount(Animation.INDEFINITE);
        timeline.play();

        // create a service.
        transmitter = new Service<Void>() {
            @Override
            protected Task<Void> createTask() {
                return new Task<Void>() {
                    @Override
                    protected Void call() throws InterruptedException {
                        updateMessage("Running...");
                        updateProgress(0, 10);
                        DatagramSocket sock = null;
                        while (!isCancelled()) {
                            try {
                                if (sock == null) {
                                    // assign to the existing local; don't shadow it
                                    sock = new DatagramSocket();
                                }
                            } catch (SocketException ex) {
                                Logger.getLogger(OpMessageServerController.class.getName())
                                        .log(Level.SEVERE, null, ex);
                            }
                            // Block the thread for a short time, but be sure
                            // to check the InterruptedException for cancellation
                            OpSupportMessage opSupportMessage = new OpSupportMessage(
                                    DiscreteStatus.NormalOperation,
                                    PhaseOfFlightFMS.Cruise,
                                    DiscreteStatus.NormalOperation,
                                    "TAILNUM",
                                    DiscreteStatus.NormalOperation);
                            ByteArrayOutputStream bos = new ByteArrayOutputStream();
                            String[] specParts = mDLMUHostSpec.getValue().split(":");
                            if (specParts.length == 2) {
                                try {
                                    opSupportMessage.write(bos);
                                    byte[] buff = bos.toByteArray();
                                    DatagramPacket packet = new DatagramPacket(
                                            buff, buff.length, InetAddress.getByName(
                                            specParts[0]), Integer.parseInt(specParts[1]));
                                    sock.send(packet); // was mSocket, an undefined field
                                    Thread.sleep(1000);
                                } catch (IOException ex) {
                                    // at minimum, log transmission failures here
                                } catch (InterruptedException interrupted) {
                                    if (isCancelled()) {
                                        updateMessage("Cancelled");
                                        break;
                                    }
                                }
                            }
                        }
                        updateMessage("Cancelled");
                        return null;
                    }

                    @Override
                    protected void succeeded() {
                        System.out.println("Scanning completed.");
                    }

                    @Override
                    protected void failed() {
                        System.out.println("Scanning failed.");
                    }

                    @Override
                    protected void running() {
                        System.out.println("Scanning started.");
                    }

                    @Override
                    protected void cancelled() {
                        System.out.println("Scanning cancelled.");
                    }
                };
            }
        };
        mApplicationStatus.textProperty().bind(transmitter.messageProperty());
    }
    @FXML
    private void startStopButtonAction(ActionEvent event) {
        if (!transmitter.isRunning()) {
            transmitter.reset();
            transmitter.start();
        }
    }
    …
}

Background
This answer is based upon a collection of comments on the question; it rambles a bit, does not provide a solution targeted at the code in the question, and does not address some of the concepts in the question, such as a low-level UDP socket based communication system. Apologies for that.
Sample Solution Project
I did a proof of concept of a JavaFX app using web socket based communication: javafx-websocket-test. Perhaps some of the concepts from there might help you, in particular the client JavaFX Task and Service code and the sample client application and controller that uses it.
The project does demonstrate, in an executable implementation, some of the communication principles outlined in Adam Bien's article on JavaFX Integration Strategies that James_D linked, for example:
Setting up a web socket endpoint within a JavaFX service.
Wrapping each communication interaction in an async JavaFX Task.
Using async event callbacks to shunt success and failure results back to the UI.
Additionally the example shows interaction between the network service and the JavaFX UI, with the JavaFX UI making async requests to the service and processing async responses from it.
I do recall the seemingly simple Java web socket API did contain a few gotchas. It is just a proof of concept, so be careful of using it as the basis for a robust network service.
Commentary and thoughts
This is actually a tricky question to answer IMO, for these reasons:
There are many forms of network communication, some of which are suited to different applications.
There is (currently) no standard or best practice of integrating network services with JavaFX applications.
Providing a robust network connection with UI status monitoring and exception handling is often not as straightforward as it might seem, and is easy to get wrong.
There are many subtleties to be dealt with, such as:
What to do in the event of a communication failure?
What to do if the application issues requests at a faster rate than the network or server can process?
What happens if the user shuts down the application while messages are outstanding?
How to ensure that the UI is not frozen while lengthy communication processes occur?
How to provide UI feedback that lengthy network processing is on-going?
What underlying communication technology is being used?
Is the underlying communication stateful or stateless?
Is the communication non-blocking and event driven or blocking?
How to serialize and deserialize data for transmission?
Even though a one-size-fits-all communication model would be difficult, a "standard" communication model could be adopted which fits many needs: for example, something similar to HTTP Ajax calls in the browser-based network model, or NetConnections for Flash. Those seem to function well enough for a wide variety of needs, though of course they aren't optimal for everything; otherwise alternate systems such as web sockets or HTTP live streaming would not have been created.
Ideally, there would be a single, standardized API like jQuery.ajax() for JavaFX client => server communication, but I haven't yet seen anybody create a JavaFX equivalent of that kind of API.
Unlike the rest of the core JavaFX APIs, such standardized high-level interfaces for network communication don't exist in an off-the-shelf form at the moment. However, there are plenty of libraries and functions available to act as the basic building blocks for developing your own service; perhaps even too many to reasonably process.
Note that most higher-level network protocol libraries, such as the Tyrus web socket implementation or the Apache HTTP components underlying a JAX-RS provider, have their own internal thread pools for communication. Systems like Netty are based upon NIO and are event driven rather than thread managed. Your JavaFX network client service is doing one of two things:
For non-blocking I/O, it is issuing async calls, hooking into the response events and relaying them back to JavaFX via Platform.runLater.
For blocking I/O, it is spawning a thread with a Task or Service, with either an implicit or explicit executor service pool, to manage the UI interaction but not the actual network comms.
A key and confusing thing is that the JavaFX application code should always perform the network communication in an async manner. For non-blocking I/O the call is already async, so no wrapper task is necessarily required. For blocking I/O, you don't want to block the UI thread, so the Task wrapper running in its own thread prevents that from occurring.
One would think this would make the non-blocking I/O calls simpler, but it doesn't really, as the JDK's non-blocking I/O API is quite low level and is pretty tricky to code to. It isn't really appropriate for high level application code.
Generally, application code is better off using a higher level library such as JAX-RS, web sockets or akka (or, preferably, a layer on top of them) which internally manage the details of the communication in either a blocking or non-blocking fashion AND provide an event driven API for sending and receiving messages. The individual message events can be wrapped in a JavaFX Task for async processing. So, from the JavaFX application point of view, everything is event driven, nothing is blocking, and the same application API works regardless of the underlying communication protocol and blocking/non-blocking communication infrastructure.
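To make that event-driven shape concrete without pulling in any particular network library, here is a minimal plain-JDK sketch using CompletableFuture in place of a JavaFX Task (all names are illustrative; in a real JavaFX app the uiThread executor would be Platform::runLater):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.function.Consumer;
import java.util.function.Supplier;

public class AsyncCall {
    // Runs blockingCall off the caller's thread on the given pool, then hands
    // the result to onSuccess on the UI executor (Platform::runLater in JavaFX).
    public static CompletableFuture<Void> send(Supplier<String> blockingCall,
                                               Executor pool,
                                               Executor uiThread,
                                               Consumer<String> onSuccess) {
        return CompletableFuture.supplyAsync(blockingCall, pool)
                .thenAcceptAsync(onSuccess, uiThread);
    }
}
```

The shape is identical whether the underlying call is a blocking socket read wrapped in a pool thread or a callback from a non-blocking library: the application only ever sees an async completion delivered to the UI thread.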
Thanks for the proof of concept application, this will be quite useful. However, one thing that is a bit obscure is how one can safely communicate GUI changes to the running service thread. It appears that the HelloService uses a 'name' simple string property to communicate changes from the GUI to the service before it is started. I wonder how one might communicate UI changes to a running background service in a thread-safe manner. Via some sort of message API, perhaps?
A BlockingQueue with a fixed max-size which rejects additional requests when the queue is full can be used for communication from JavaFX thread based code to a consumer service. It is a reasonably elegant solution to the classic producer-consumer problem.
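A minimal sketch of that bounded hand-off (class and method names are illustrative): the JavaFX-thread producer calls offer(), which returns false instead of blocking when the queue is full, so the UI thread is never stalled:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class RequestQueue {
    private final BlockingQueue<String> queue;

    public RequestQueue(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Called from the JavaFX thread: never blocks, reports rejection instead.
    public boolean submit(String request) {
        return queue.offer(request);
    }

    // Called from the consumer/service thread: blocks until work arrives.
    public String take() throws InterruptedException {
        return queue.take();
    }
}
```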
Of course, you could just skip the blocking queue and keep creating async tasks ad nauseam, which is fine for low-volume communication, but could lead to starvation of limited thread resources for high-volume communication. One standard way to handle that is to use an ExecutorService from Executors which manages a thread pool. The thread pool for the executor service can be defined to be bounded to a maximum number of threads, and internally use an unbounded queue where messages pile up if all threads are busy. That way you don't need to define your own blocking queue; you just issue async service requests and they are immediately handled in threads if they can be, or the requests pile up in the internal queue if they cannot.
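Spelled out with plain java.util.concurrent (the pool size of 4 is an arbitrary choice for illustration), that arrangement looks like:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPool {
    // At most 4 worker threads; excess tasks pile up in the unbounded
    // internal queue instead of spawning more threads or being rejected.
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(4, 4, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>());
    }
}
```

A pool built this way can also be handed to a JavaFX Service via setExecutor() so the Service's tasks share the same bounded threads.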
This is actually the way that a JavaFX Service works:
The Service by default uses a thread pool Executor with some unspecified default or maximum thread pool size. This is done so that naive code will not completely swamp the system by creating thousands of Threads.
and:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed. If you wish to create non-daemon threads, then specify a custom Executor (for example, you could use a ThreadPoolExecutor with a custom ThreadFactory).
More sophisticated solutions, for which simple BlockingQueue messaging is not appropriate, would use a topic-based message queue style solution, e.g., a Java based STOMP client such as this kaazing example.
Getting the message info to the service is just part of the requirement, that is essentially doing an async message send. You also need to process the response that comes back. To do that, there are two alternatives:
You model each request as a separate Task, and the onSucceeded and onFailed handlers process the task response. Running the task within a service ensures that it is handled by an executor with a fixed thread pool backed by an internal queue for overflow.
You write your own long-running service interface with its own API and encapsulate a blocking queue for requests, using Platform.runLater to communicate results back to the UI.
To make the response handler logic dynamic and adjustable by the caller, you could pass the handler function as a lambda function to be executed on success for the original call using Platform.runLater.
If you wrap the call in a Task or Service, and use the onSucceeded function, you don't need the runLater call, because the implementation will ensure that the onSucceeded handler is called on the JavaFX thread once the task completes.
Note that the network request and response often require some marshaling and unmarshaling of data to and from a serializable stream. Some of the higher-level network APIs such as the JAX-RS or web socket providers provide interfaces and utilities to do some of this work for you, often using specific libraries for different types of conversion, such as JAXB for XML serialization or Jackson for JSON serialization.
Slightly related info and further thoughts
This next bit is probably slightly off-topic, but here is an example of BlockingQueue and Task interaction. It is not a network service, but it does demonstrate the use of queues within a producer/consumer situation, with a reactive UI and progress monitoring.
One other thing that would be interesting to see (at least for me), is an Akka based solution for JavaFX client->server communication. That seems like a nice alternative to traditional http/rest/soap/rmi calls or message queue based processing. Akka is inherently an event based solution for fault-tolerant asynchronous concurrent communication, so it would seem a good match-up for a UI based framework such as JavaFX, allowing a developer to process at an appropriate layer of abstraction. But I have yet to see a JavaFX based messaging client that relies on Akka.

I would like to know the best approach to create a network based reusable threading service. The way I coded up the network service was to use a controller class (generated from the FXML via the Net-beans GUI). I put the threading logic here via a private Service member named 'transmitter' and I wired up the start/stop logic via the Start/Stop button's event callback.
I humbly suggest that you develop your network service and your GUI controller as separate projects.
I would have the network service running in its own container or virtual machine as a daemon/background thread. The advantage of this organization is that it keeps your server away from the vagaries of the JavaFX event loop and application thread. You'll want to design your service to recognize administration commands and/or interrupt requests from your controller. You can develop your network services as REST or whatever you want without wondering how to roll this into the JavaFX application thread.
I would then have the GUI controller running as a separate GUI application either in the same process or, if remote administration is desired, in a separate JVM (and use IPC to send/receive administration messages).
TL;DR: if it were me, I would resist the temptation to program the network service as a JavaFX application.

Related

Are vertx-web route handlers a singleton or does it instantiate a new one every time?

I am new to Java and vertx. In creating a vertx-web PoC, I chose instance properties on my handlers as an easy way to share data in a futures chain because of callbacks. Looking at my code someone asked if my implementation was multi-threaded. I started reading more docs and came across event-loop/worker threads and standard/worker/multi-thread verticles. This then mutated into wondering about concurrent calls.
In providing a handler to Route.handler(), does a new TestHandler get instantiated for every request, or does handler() register a singleton on the Route, with TestHandler's bar shared across concurrent requests?
If it is a singleton, how could I handle concurrency? More instances in the Vert.x deployment? Stateless handler classes?
Java examples and articles seem to demonstrate things in a very simple context and leave any real implementation to be coupled with a working knowledge of advanced concepts. This has been making learning Java very difficult. Any help/tips/advice for a new Java dev would be much appreciated.
Application.java
public class Application {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle("com.foo.TestServer");
    }
}
TestServer.java
public class TestServer extends AbstractVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);
        router.post("/test").handler(new TestHandler());
        vertx.createHttpServer().requestHandler(router).listen(8000);
    }
}
TestHandler.java
public class TestHandler implements Handler<RoutingContext> {
    private String bar;
    // ... other future methods

    public void handle(@NotNull RoutingContext ctx) {
        bar = ctx.getBodyAsJson().getString("bar");
        // ... future method calls
        ctx.response().setStatusCode(200).end(bar);
    }
}
The purpose of Vert.x is to be able to handle a lot of concurrent requests with very few threads, using non-blocking APIs.
Indeed, when you deploy a verticle, an event loop is assigned to the instance and a single thread will handle the events.
This is by design and it relieves the developer from thinking about complex multi-threading issues.
Then of course modern hardware often provides multiple cores. In this case, you can deploy multiple instances of your verticle. You will still benefit from the single-threaded development model and also take advantage of all the CPU cores.
More on this in the Reactor and Multi-Reactor section of the Vert.x core docs.
Usually in Vert.x, instances of Handler, Promise and Future are singletons, instantiated on a once-per-verticle-instance basis.
As long as they are thread-safe (purely functional, no shared state), they can be used in multi-threaded environments by default.
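To illustrate why the shared bar field in TestHandler above is a problem and what "stateless" buys you, here is a framework-free sketch (the sleep stands in for the async work between callbacks; all names are illustrative):

```java
public class StatelessHandlerDemo {
    // BROKEN shape (like TestHandler above): one shared field, concurrently
    // overwritten by parallel requests, so a response may carry another
    // request's data.
    static String sharedBar;

    static String handleWithField(String body) throws Exception {
        sharedBar = body;
        Thread.sleep(5);   // simulated async work between write and read
        return sharedBar;  // may observe a concurrent request's value
    }

    // STATELESS shape: request data lives in locals/parameters, so
    // concurrent invocations cannot interfere with each other.
    static String handleStateless(String body) throws Exception {
        String bar = body;
        Thread.sleep(5);
        return bar;
    }
}
```

Keeping per-request state in locals (or in the RoutingContext itself) is what makes a singleton handler safe under concurrent use.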

jersey ws 2.0 @Suspended AsyncResponse, what does it do?

I am analyzing some Jersey 2.0 code and I have a question on how the following method works:
@Stateless
@Path("/mycoolstuff")
public class MyEjbResource {
    …
    @GET
    @Asynchronous // does this mean the method executes on a child thread?
    public void longRunningOperation(@Suspended AsyncResponse ar) {
        final String result = executeLongRunningOperation();
        ar.resume(result);
    }

    private String executeLongRunningOperation() { … }
}
Let's say I'm at a web browser and I type in www.mysite/mycoolstuff.
This will execute the method, but I'm not understanding what the AsyncResponse is used for, nor the @Asynchronous annotation. From the browser, how would I notice it's asynchronous? What would be the difference if I removed the annotation? Also, having read the documentation, I'm not clear on the purpose of the @Suspended annotation.
Is the @Asynchronous annotation simply telling the program to execute this method on a new thread? Is it a convenience for doing "new Thread(.....)"?
Update: this annotation relieves the server of hanging onto the request-processing thread, so throughput can be better. Anyway, from the official docs:
Request processing on the server works by default in a synchronous processing mode, which means that a client connection of a request is processed in a single I/O container thread. Once the thread processing the request returns to the I/O container, the container can safely assume that the request processing is finished and that the client connection can be safely released including all the resources associated with the connection. This model is typically sufficient for processing of requests for which the processing resource method execution takes a relatively short time. However, in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used. In this model, the association between a request processing thread and client connection is broken. I/O container that handles incoming request may no longer assume that a client connection can be safely closed when a request processing thread returns. Instead a facility for explicitly suspending, resuming and closing client connections needs to be exposed. Note that the use of server-side asynchronous processing model will not improve the request processing time perceived by the client. It will however increase the throughput of the server, by releasing the initial request processing thread back to the I/O container while the request may still be waiting in a queue for processing or the processing may still be running on another dedicated thread. The released I/O container thread can be used to accept and process new incoming request connections.
@Suspended has a more definite effect if you use it fully; otherwise it makes little difference.
Let's talk about its benefits:
@Suspended will suspend/pause the current request (not the thread) until it gets a response; by default no suspend timeout is set (AsyncResponse.NO_TIMEOUT). So on its own it doesn't mean your request-processing (I/O) thread gets freed up and becomes available for other requests.
Now assume you want your service to respond within some specific time, but the method you are calling from the resource doesn't guarantee a response time. How will you manage your service's response time? You can set a suspend timeout for your service using @Suspended, and even provide a fallback response when the time is exceeded.
Below is a sample of code for setting a suspend/pause timeout (here the timeout handler resumes with a fallback response):
public void longRunningOperation(@Suspended AsyncResponse ar) {
    ar.setTimeoutHandler(r -> r.resume(
            Response.status(Response.Status.SERVICE_UNAVAILABLE)
                    .entity("Operation timed out").build()));
    ar.setTimeout(10, TimeUnit.SECONDS);
    final String result = executeLongRunningOperation();
    ar.resume(result);
}
For more details, refer to this.
The #Suspended annotation is added before an AsyncResponse parameter on the resource method to tell the underlying web server not to expect this thread to return a response for the remote caller:
@POST
public void asyncPost(@Suspended final AsyncResponse ar, ... <args>) {
    someAsyncMethodInYourServer(<args>, new AsyncMethodCallback() {
        @Override
        void completed(<results>) {
            ar.resume(Response.ok(<results>).build());
        }
        @Override
        void failed(Throwable t) {
            ar.resume(t);
        }
    });
}
Rather, the AsyncResponse object is used by the thread that calls completed or failed on the callback object to return an 'ok' or throw an error to the client.
Consider using such asynchronous resources in conjunction with an async jersey client. If you're trying to implement a ReST service that exposes a fundamentally async api, these patterns allow you to project the async api through the ReST interface.
We don't create async interfaces because we have a process that takes a long time (minutes or hours) to run, but rather because we don't want our threads to ever sleep - we send the request and register a callback handler to be called later when the result is ready - from milliseconds to seconds later - in a synchronous interface, the calling thread would be sleeping during that time, rather than doing something useful. One of the fastest web servers ever written is single threaded and completely asynchronous. That thread never sleeps, and because there is only one thread, there's no context switching going on under the covers (at least within that process).
The @Suspended annotation makes the caller actually wait until your work is done. Let's say you have a lot of work to do on another thread. When you use Jersey @Suspended, the caller just sits there and waits (so on a web browser they just see a spinner) until your AsyncResponse object returns data to it.
Imagine you had a really long operation to perform and you want to do it on another thread (or multiple threads). Now we can have the user wait until we are done. Don't forget that in Jersey you'll need to add <async-supported>true</async-supported> to the Jersey servlet definition in web.xml to get it to work.

Business Logic in Netty?

I'm developing a server based on the Netty library and I'm having a problem with how to structure the application with regard to business logic.
Currently I have the business logic in the last handler, and that's where I access the database. The thing I can't wrap my head around is the latency of accessing the database (blocking code). Is it advisable to do it in the handler, or is there an alternative? Code below:
public void channelRead(ChannelHandlerContext ctx, Object msg)
        throws Exception {
    super.channelRead(ctx, msg);
    Msg message = (Msg) msg;
    switch (message.messageType) {
        case MType.SIGN_UP:
            userReg.signUp(message.user); // blocking database access
            break;
    }
}
You should execute the blocking calls in a DefaultEventExecutorGroup or a custom thread pool. An executor group can be specified when the handler is added:
pipeline.addLast(new DefaultEventExecutorGroup(50),
        "BUSINESS_LOGIC_HANDLER", new BHandler());
Alternatively, submit the blocking work to the handler's executor:
ctx.executor().execute(new Runnable() {
    @Override
    public void run() {
        // blocking call
    }
});
Your custom handler is instantiated by Netty every time the server accepts a new connection, hence one instance of the handler is responsible for handling one client.
So it is perfectly fine to issue blocking calls in your handler. It will not affect other clients, as long as you don't block indefinitely (or at least not for a very long time), thereby not tying up a Netty thread for long, and as long as you don't put too much load on your server instance.
However, if you want to go for an asynchronous design, there are more than a few design patterns you can use.
For example with Netty, if you implement WebSockets, then perhaps you can make the blocking calls in a separate thread, and when the results are available, you can push them to the client through the WebSocket already established.

Best practice for socket client within EJB or CDI bean

My application must open a TCP socket connection to a server and listen to periodically incoming messages.
What are the best practices to implement this in a Java EE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() method is called from a @Startup bean with a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (JBoss WildFly 8) hangs and does not react to shutdown or re-deployment commands any more. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple. When working in a cluster:
The EJBs will inherently block each other while waiting on I/O connection timeouts and all other I/O related waiting timeouts. That is, if the connection does not block for an unspecified amount of time; in which case you will have to create another thread which scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale: no matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you already ran into problems by doing that :). JBoss 8 seems not to be able to properly create and destroy the bean.
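One way to avoid that hang on undeploy, regardless of the larger redesign discussed below, is to make the read loop cooperatively stoppable: keep a volatile flag and close the socket from a @PreDestroy method, which unblocks the pending readLine(). A plain-Java sketch of the shape (names illustrative; the EJB wiring is omitted):

```java
import java.io.BufferedReader;
import java.io.Closeable;
import java.io.IOException;

public class StoppableReader {
    private volatile boolean running = true;

    // Loop body of startChecking(): exits when stop() is called
    // or the stream ends.
    public int readAll(BufferedReader reader) throws IOException {
        int lines = 0;
        String line;
        while (running && (line = reader.readLine()) != null) {
            lines++; // handle the message here
        }
        return lines;
    }

    // Call this from a @PreDestroy method: clearing the flag and closing
    // the socket unblocks the pending readLine() so the bean can shut down.
    public void stop(Closeable socket) throws IOException {
        running = false;
        socket.close();
    }
}
```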
Now, I know your bean is a #Singleton so your architecture does not rely on transactionality, clustering and distribution of reading from that socket. So you might be ok with that.
However :D, you are asking for a Java EE compliant way of solving this. Here is what should be done:
Redesign your solution to go with JMS. It 'smells' like you are trying to provide async messaging functionality (send a message and wait for a reply). You might be using a synchronous protocol to do async messaging. Just give it a thought.
Or create a JCA compliant adapter which will be injected into your EJB as a @Resource:
You will have a connection pool configurable at AS level (so you can have different values for different environments).
You will have transactionality and rollback. Of course the rollback behavior will have to be coded by you.
You can inject it via a @Resource annotation.
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter

ArrayBlockingQueue synchronization in multi node deployment

In simple terms, I have a servlet whose response time is long, so I decided to divide it into two parts: one just composes a response to the client, and the second, let's say, performs some business logic and stores the result in the DB. To decrease response time I execute the business logic asynchronously using a ThreadPoolExecutor in combination with an ArrayBlockingQueue. Using the ArrayBlockingQueue I can ensure original FIFO ordering if requests were sequential for the same client. This is an important prerequisite.
Here is a snippet:
Servlet
public class HelloServlet extends HttpServlet {
    AsyncExecutor exe = new AsyncExecutor();

    protected void doGet(HttpServletRequest req,
            HttpServletResponse resp) throws ServletException, IOException {
        PrintWriter w = resp.getWriter();
        exe.executeAsync(exe.new Task(this));
        w.print("HELLO TO CLIENT");
    }

    protected void someBusinessMethod() {
        // long time execution here
    }
}
and Executor
public class AsyncExecutor {

    static final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(10, true);
    static final Executor executor = new ThreadPoolExecutor(3, 5, 20L, TimeUnit.SECONDS, queue);

    public void executeAsync(Task t) {
        boolean isTaskAccepted = false;
        while (!isTaskAccepted) {
            try {
                executor.execute(t);
                isTaskAccepted = true;
            } catch (RejectedExecutionException e) {
                // queue is full: busy-wait and retry until the task is accepted
            }
        }
    }

    class Task implements Runnable {

        private final HelloServlet servlet;

        Task(HelloServlet servlet) {
            this.servlet = servlet;
        }

        @Override
        public void run() {
            // just call back to the servlet's business method
            servlet.someBusinessMethod();
        }
    }
}
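As a side note, the fair ArrayBlockingQueue only fixes the order in which worker threads *take* tasks; with a pool of 3-5 threads, tasks start in FIFO order but may still run concurrently and finish out of order. A minimal standalone sketch of the strict single-worker case (class and variable names here are illustrative, not from the code above):

```java
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FifoDemo {
    public static void main(String[] args) throws InterruptedException {
        // A single worker draining a fair ArrayBlockingQueue preserves
        // strict FIFO *execution* order, not just FIFO hand-off order.
        BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(10, true);
        ExecutorService executor =
                new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, queue);

        List<Integer> executed = new CopyOnWriteArrayList<>();
        for (int i = 0; i < 5; i++) {
            final int id = i;
            executor.execute(() -> executed.add(id));
        }
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(executed); // prints [0, 1, 2, 3, 4]
    }
}
```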
This implementation works fine if I deploy it to only one Tomcat node, since I have only one ArrayBlockingQueue in the application. But if I have several nodes and a load balancer in front, then I cannot guarantee FIFO ordering of requests for async execution for the same client, since I already have several queues.
My question is: how is it possible to guarantee the same order of asynchronous execution of requests for the same client in a clustered (multi-node) deployment? I think ActiveMQ is probably a solution (not preferable for me), or maybe a load balancer configuration, or can it be implemented in code?
Thanks Sam for your prompt suggestions.
In the first post I described the problem in a very simplified way, so to clarify it better: let's say I have a legacy web app deployed to Tomcat which serves some licensing model (the old one). Then we got a new licensing model (this is a GlassFish app) and we need to use it alongside the old one and keep them in sync. For the end user, such integration must be transparent and non-intrusive. So a user request is served like this:
the caller sends a request (create a subscription, for example)
execute the business logic of the new licensing model
execute the business logic of the old licensing model
regardless of the result of p.3, return the response of p.2, in the format of the old licensing model, back to the caller
(optional) handle failure of p.3, if any
This was implemented with an Aspect which intercepts the requests of p.1 and executes the rest of the steps sequentially. And as I said in the previous post, the execution time of p.3 can be long; that's why I want to make it asynchronous. Let's have a look at a snippet of the Aspect (instead of the Servlet from the first post).
@Aspect
@Component
public class MyAspect {

    @Autowired
    private ApplicationContext ctx;

    @Autowired
    private AsyncExecutor asyncExecutor;

    @Around("@annotation(executeOpi)")
    public Object around(ProceedingJoinPoint jp, ExecuteOpi executeOpi) throws Throwable {
        LegacyMapper newModelExecutor = ctx.getBean(executeOpi.legacyMapper());
        // executes the new model, then returns the result in the format of the old model
        Object result = newModelExecutor.executeNewModelLogic(jp.getArgs());
        // executes the old model logic asynchronously
        asyncExecutor.executeAsync(asyncExecutor.new Task(this, jp));
        return result;
    }

    public void executeOldModelLogic(ProceedingJoinPoint jp) throws Throwable {
        // long-running execution here
        jp.proceed();
    }
}
With this implementation, as in the first post, I can guarantee the FIFO order of executeOldModelLogic calls if the requests come to the same Tomcat node. But with a multi-node deployment and a round-robin LB in front, I can end up with a case where, for the same caller, "update subscription in old model" arrives at the ArrayBlockingQueue before "create subscription in old model", which is of course a bad logical bug.
And as for points you suggested:
p1, p2 and p4: I probably can't use these as a solution, since I don't have object state as such. You see that I pass references to the Aspect and the JoinPoint into the Runnable task, to call back from the Runnable to executeOldModelLogic on the Aspect
p3: I don't know about this one; it might be worthwhile to investigate
p5: This is the direction I want to take for further investigation; I have a gut feeling it is the only way to solve my problem under the given conditions.
There are some solutions that come to mind offhand.
Use the database: post the jobs to be run into a database table, have a secondary server process run the jobs, and place the results in an output table. Then when users call back to the web page, it can pick up any results waiting for them from the output table.
Use a JMS service: this is a pretty lightweight messaging service which would integrate with your Tomcat application reasonably well. The downside here is that you have to run another server-side component and build the integration layer with your app. But that's not a big disadvantage.
Switch to a full J2EE container (Java App Server) and use an EJB Singleton. I have to admit I don't have any experience with running a Singleton across separate server instances, but I believe some of them may be able to handle it.
Use EHCache or some other distributed cache: I built a Queue wrapper around EHCache to enable it to be used like a FIFO queue, and it also has RMI (or JMS) replication, so multiple nodes will see the same data.
Use the load balancer: if your load balancer supports session-level balancing, then all requests for a single user session can be directed to the same node. In a big web environment where I worked, we were unable to share user session state across multiple servers, so we set up load balancing to ensure that a user's session was always directed to the same web server.
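If the load balancer pins each client to one node, per-client FIFO ordering can then be enforced inside that node by hashing the client key onto a dedicated single-threaded "lane". A minimal sketch of that idea, with all names illustrative (not from the original code):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.stream.IntStream;

public class KeyedExecutor {
    private final ExecutorService[] lanes;

    KeyedExecutor(int n) {
        // n independent single-threaded executors ("lanes")
        lanes = IntStream.range(0, n)
                .mapToObj(i -> Executors.newSingleThreadExecutor())
                .toArray(ExecutorService[]::new);
    }

    // Tasks sharing a key always land on the same single-threaded lane,
    // so for one client they run strictly in submission (FIFO) order.
    void execute(String clientKey, Runnable task) {
        lanes[Math.floorMod(clientKey.hashCode(), lanes.length)].execute(task);
    }

    void shutdown() throws InterruptedException {
        for (ExecutorService lane : lanes) {
            lane.shutdown();
            lane.awaitTermination(5, TimeUnit.SECONDS);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        KeyedExecutor exec = new KeyedExecutor(4);
        List<String> log = new CopyOnWriteArrayList<>();
        // "create" is submitted before "update" for the same client,
        // so it is guaranteed to run first.
        exec.execute("client-42", () -> log.add("create subscription"));
        exec.execute("client-42", () -> log.add("update subscription"));
        exec.shutdown();
        System.out.println(log); // prints [create subscription, update subscription]
    }
}
```

Different clients may still interleave across lanes, which is fine: the ordering requirement here is only per client.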
Hope some of these ideas help.