I will clarify my question.
I have a task to integrate two systems: a frontend serving HTML, and a backend which provides data to the frontend.
The backend has a very large REST API, so I have to use multiple routes.
I planned to use a single Camel context and wrap all routes in it.
<camelContext xmlns="http://activemq.apache.org/camel/schema/spring">
    <route>
        <from uri="direct:data"/>
        <to uri="ahc:http://localhost/data"/>
    </route>
    <!-- And so on. More than 70 routes -->
</camelContext>
Then I planned to invoke the route using the @Produce annotation on a service method, as advised in the Hiding Middleware article:
public interface Service {
    String data();
}

public class MyBean {
    @Produce(uri = "direct:data")
    protected Service producer;

    public void doSomething() {
        // lets send a message
        String response = producer.data();
    }
}
As I understand the information taken from here and here, I'll end up with 70 additional threads in my app (one for each route). I fear that this can cause a serious performance hit, and as the backend API grows, the thread count will grow with it. Is that correct? How can I avoid it if it's true? As I understand it, I can't employ an ExecutorService thread pool in this case.
Thanks in advance for any answer.
No, you will not end up with a thread per route. The threading model is usually tied to the threading model of the consumer (i.e. the route input).
For example, a route that uses a timer component will use a scheduled thread pool (1 thread), and the JMS component will use 1 or more threads, depending on whether you set concurrentConsumers=N, etc.
The direct component is like a direct method invocation; it uses the caller's thread, so there are 0 new threads in that threading model.
If all your 70 routes use AHC in the <to>, then you may want to re-use the same endpoint so you reuse the thread pool of the AHC library, or alternatively configure a shared thread pool to be used by all AHC endpoints.
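The pool-sharing idea can be sketched in plain Java. This is not the Camel or AHC API, just an illustration of why one shared pool bounds the thread count no matter how many routes submit work to it:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SharedPoolDemo {
    public static void main(String[] args) throws Exception {
        // One shared pool serving all 70 "routes" instead of a pool per route
        ExecutorService shared = Executors.newFixedThreadPool(4);
        Set<String> threadsUsed = ConcurrentHashMap.newKeySet();
        CountDownLatch done = new CountDownLatch(70);
        for (int i = 0; i < 70; i++) {
            shared.submit(() -> {
                threadsUsed.add(Thread.currentThread().getName());
                done.countDown();
            });
        }
        done.await();
        shared.shutdown();
        // All 70 tasks were served by at most 4 threads
        System.out.println("bounded: " + (threadsUsed.size() <= 4));
    }
}
```

The thread count stays fixed at the pool size regardless of how many submitters there are, which is exactly the effect of sharing one AHC thread pool across endpoints.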
And btw this question was also posted on the Camel user forum / mailinglist: http://camel.465427.n5.nabble.com/Can-multiple-camel-routes-cause-a-very-large-number-of-threads-tp5736620.html
Related
I have one piece of functionality in an online application: I need to mail a receipt to the customer after it is generated. My problem is that the mail function takes nearly 20 to 30 seconds, and the customer cannot wait that long during an online transaction.
So I used the Java ExecutorService to run the mail service [sendMail] independently and return the response page to the customer whether or not the mail was sent.
Is it right to use an ExecutorService in an online application [HTTP request & response]? Below is my code. Kindly advise.
@RequestMapping(value = "/generateReceipt", method = RequestMethod.GET)
public @ResponseBody ReceiptBean generateReceipt(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {
    // Other code here
    ...
    ...
    // I need to run the line below independently, since it takes more time,
    // so I commented it out and wrote an executor service instead.
    //mailService.sendMail(httpRequest, httpResponse, receiptBean);
    java.util.concurrent.ExecutorService executorService = java.util.concurrent.Executors.newFixedThreadPool(10);
    executorService.execute(new Runnable() {
        ReceiptBean receiptBean1;
        public void run() {
            mailService.sendMail(httpRequest, httpResponse, receiptBean1);
        }
        public Runnable init(ReceiptBean receiptBean) {
            this.receiptBean1 = receiptBean;
            return this;
        }
    }.init(receiptBean));
    executorService.shutdown();
    return receiptBean;
}
You can do that, although I wouldn't expect this code in a controller class but in a separate one (Separation of Concerns and all).
However, since you seem to be using Spring, you might as well use their scheduling framework.
It is fine to use an ExecutorService to make an asynchronous mail-sending request, but you should try to follow SOLID principles in your design. Let the service layer take care of running the executor task.
https://en.wikipedia.org/wiki/SOLID
I agree with both @daniu and @Ankur regarding the separation of concerns you should follow. So just create a dedicated service like "EmailService" and inject it where needed.
Moreover, since you are already leveraging the Spring framework, you can take advantage of its @Async feature.
If you prefer to write your own async code, then I'd suggest using a CompletableFuture instead of the raw ExecutorService for better failure handling (maybe you want to store unsent messages in a queue to implement a retry feature or some other behaviour).
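A minimal sketch of that CompletableFuture approach, assuming a hypothetical sendMail method standing in for the real mailService call (names are illustrative, not from the question):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class MailExample {
    // Shared pool, created once for the application, not per request
    private static final ExecutorService MAIL_POOL = Executors.newFixedThreadPool(4);

    // Hypothetical slow mail call standing in for mailService.sendMail(...)
    static void sendMail(String receiptId) {
        if (receiptId == null) throw new IllegalArgumentException("no receipt");
        System.out.println("mail sent for " + receiptId);
    }

    public static CompletableFuture<Void> sendMailAsync(String receiptId) {
        return CompletableFuture
                .runAsync(() -> sendMail(receiptId), MAIL_POOL)
                .exceptionally(t -> {
                    // Failure handling in one place: e.g. enqueue for retry
                    System.err.println("mail failed, queueing for retry: " + t.getMessage());
                    return null;
                });
    }

    public static void main(String[] args) {
        // In a controller you would NOT join(); the request returns immediately.
        // join() here is only so the demo waits before exiting.
        sendMailAsync("R-1001").join();
        MAIL_POOL.shutdown();
    }
}
```

The exceptionally stage is what the raw ExecutorService version lacks: a failed send can be routed to a retry queue instead of being silently lost inside a Runnable.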
I am transitioning from writing Java Swing applications to JavaFX in order to write a modern Java-based GUI application.
I would like to know the best approach to create a network-based reusable threading service. The way I coded the network service was to use a controller class (generated from the FXML via the NetBeans GUI). I put the threading logic there via a private Service member named 'transmitter', and I wired up the start/stop logic via the Start/Stop button's event callback.
The network-based thread is implemented as a JavaFX Service. I did this since I would like to restart the service/thread whenever the destination address changes; this seems to be the recommended approach in place of a standalone Task.
The network service is very simple right now: all it does is use some GUI widgets to configure a packet to transmit to a host/port once a second. I need to restart the service only if the host/port widget changes; however, if the network service is running, I would like to modify the packet without interrupting/restarting the DatagramSocket. The places where I have questions and require some guidance are:
What is the recommended approach to threading a network thread in an FXML-based application? An example would be greatly appreciated.
How do I safely communicate changes from GUI widgets (via their action-performed callbacks) to the running service class?
Shown below are the most relevant parts of my controller class:
/**
* FXML Controller class
*
* @author johnc
*/
public class OpMessageServerController implements Initializable {
@FXML
private Text mCurrentDateTimeText;
@FXML
private Label mApplicationStatus;
@FXML
private ComboBox<DiscreteStatus> mPofDS;
@FXML
private ComboBox<PhaseOfFlightFMS> mPofFMS;
@FXML
private ComboBox<DiscreteStatus> mTailNumberDS;
@FXML
private ComboBox<DiscreteStatus> mConfigTableDS;
@FXML
private ComboBox<DiscreteStatus> mDateTimeDS;
@FXML
private TextField mEpicPN;
@FXML
private TextField mConfigTablePNHash;
@FXML
private TextField mTailNumber;
@FXML
private ComboBox<DiscreteStatus> mTopLevelPNDS;
@FXML
private Button mStartStopButton;
@FXML
private ComboBox<String> mDLMUHostSpec;
@FXML
private CheckBox connectionStatusC1;
@FXML
private CheckBox wsuConnectionStatus;
@FXML
private CheckBox connectionStatusC4;
@FXML
private CheckBox connectionStatusC3;
@FXML
private CheckBox connectionStatusC2;
@FXML
private CheckBox dlmuwConnectionStatus;
private Service<Void> transmitter;
/**
* Initializes the controller class.
* @param url
* @param rb
*/
@Override
public void initialize(URL url, ResourceBundle rb) {
mPofDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
mPofDS.getSelectionModel().selectFirst();
mPofFMS.setItems(FXCollections.observableArrayList(PhaseOfFlightFMS.values()));
mPofFMS.getSelectionModel().selectFirst();
mTailNumberDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
mTailNumberDS.getSelectionModel().selectFirst();
mConfigTableDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
mConfigTableDS.getSelectionModel().selectFirst();
mDateTimeDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
mDateTimeDS.getSelectionModel().selectFirst();
mTopLevelPNDS.setItems(FXCollections.observableArrayList(DiscreteStatus.values()));
mTopLevelPNDS.getSelectionModel().selectFirst();
// mDLMUHostSpec.setItems(FXCollections.observableArrayList(
// FXCollections.observableArrayList("localhost:1234", "192.168.200.2:1234")));
// add event handler here to update the current date/time label
// this should also update the transmit datastructure
final Timeline timeline = new Timeline(new KeyFrame(
Duration.seconds(1), (ActionEvent event) -> {
LocalDateTime currentDateTime = LocalDateTime.now();
mCurrentDateTimeText.setText(currentDateTime.format(
DateTimeFormatter.ofPattern("kk:mm:ss uuuu")));
}));
timeline.setCycleCount(Animation.INDEFINITE);
timeline.play();
// create a service.
transmitter = new Service<Void>() {
@Override
protected Task<Void> createTask() {
return new Task<Void>() {
@Override
protected Void call() throws InterruptedException {
updateMessage("Running...");
updateProgress(0, 10);
DatagramSocket sock = null;
while (!isCancelled()) {
try {
if (sock == null) {
sock = new DatagramSocket();
}
} catch (SocketException ex) {
Logger.getLogger(OpMessageServerController.class.getName()).log(Level.SEVERE, null, ex);
}
//Block the thread for a short time, but be sure
//to check the InterruptedException for cancellation
OpSupportMessage opSupportMessage = new OpSupportMessage(
DiscreteStatus.NormalOperation,
PhaseOfFlightFMS.Cruise,
DiscreteStatus.NormalOperation,
"TAILNUM",
DiscreteStatus.NormalOperation);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
String[] specParts = mDLMUHostSpec.getValue().split(":");
if (specParts.length == 2) {
try {
opSupportMessage.write(bos);
byte[] buff = bos.toByteArray();
DatagramPacket packet = new DatagramPacket(
buff, buff.length, InetAddress.getByName(
specParts[0]), Integer.parseInt(specParts[1]));
sock.send(packet);
Thread.sleep(1000);
} catch (IOException ex) {
} catch (InterruptedException interrupted) {
if (isCancelled()) {
updateMessage("Cancelled");
break;
}
}
}
}
updateMessage("Cancelled");
return null;
}
@Override
protected void succeeded() {
System.out.println("Scanning completed.");
}
@Override
protected void failed() {
System.out.println("Scanning failed.");
}
@Override
protected void running() {
System.out.println("Scanning started.");
}
@Override
protected void cancelled() {
System.out.println("Scanning cancelled.");
}
};
}
};
mApplicationStatus.textProperty().bind(transmitter.messageProperty());
};
@FXML
private void startStopButtonAction(ActionEvent event) {
if (!transmitter.isRunning()) {
transmitter.reset();
transmitter.start();
}
}
…
}
Background
This answer is based upon a collection of comments on the question. It rambles a bit, does not provide a solution targeted at the code in the question, and does not address some of the concepts in the question, such as a low-level UDP socket based communication system; apologies for that.
Sample Solution Project
I did a proof of concept of a JavaFX app using web socket based communication: javafx-websocket-test. Perhaps some of the concepts from there might help you, in particular the client JavaFX Task and Service code and the sample client application and controller that uses it.
The project does demonstrate, in an executable implementation, some of the communication principles outlined in Adam Bien's article on JavaFX Integration Strategies that James_D linked, for example:
Setting up a web socket endpoint within a JavaFX service.
Wrapping each communication interaction in an async JavaFX Task.
Using async event callbacks to shunt success and failure results back to the UI.
Additionally the example shows interaction between the network service and the JavaFX UI, with the JavaFX UI making async requests to the service and processing async responses from it.
I do recall the seemingly simple Java web socket API did contain a few gotchas. It is just a proof of concept, so be careful of using it as the basis for a robust network service.
Commentary and thoughts
This is actually a genuinely tricky question to answer, IMO, for these reasons:
There are many forms of network communication, some of which are suited to different applications.
There is (currently) no standard or best practice of integrating network services with JavaFX applications.
Providing a robust network connection with UI status monitoring and exception handling is often not as straight-forward as it might seem and is easy to get wrong.
There are many subtleties to be dealt with, such as:
What to do in the event of a communication failure?
What to do if the application issues requests at a faster rate than the network or server can process?
What happens if the user shuts down the application while messages are outstanding?
How to ensure that the UI is not frozen while lengthy communication processes occur?
How to provide UI feedback that lengthy network processing is on-going?
What underlying communication technology is being used?
Is the underlying communication stateful or stateless?
Is the communication non-blocking and event driven or blocking?
How to serialize and deserialize data for transmission?
Even though a one-size-fits-all communication model would be difficult, a "standard" communication model could be adopted that fits many needs: for example, something similar to HTTP ajax calls in the browser-based network model, or NetConnections for Flash. Those seem to function well enough for a wide variety of needs. Though of course they aren't optimal for everything, otherwise alternate systems such as web sockets or HTTP live streaming would not have been created.
Ideally, there would be a single, standardized API like jQuery.ajax() for JavaFX client => server communication, but I haven't yet seen anybody create a JavaFX equivalent of that kind of API.
Unlike the rest of the core JavaFX APIs, such standardized high-level interfaces for network communication don't exist in an off-the-shelf form at the moment. However, there are plenty of libraries and functions available to act as the basic building blocks for developing your own service; perhaps even too many to reasonably process.
Note that most higher-level network protocol libraries, such as the Tyrus web socket implementation or the Apache HTTP components underlying a JAX-RS provider, have their own internal thread pools for communication. Systems like Netty are based upon NIO and are event driven rather than thread managed. Your JavaFX network client service does one of two things:
For non-blocking I/O, it issues async calls, hooking into the response events and relaying them back to JavaFX via Platform.runLater.
For blocking I/O, it spawns a thread with a Task or Service, with either an implicit or explicit executor service pool, to manage the UI interaction but not the actual network comms.
A key and confusing thing is that the JavaFX application code should always perform the network communication in an async manner. For non-blocking I/O the call is already async, so no wrapper task is necessarily required. For blocking I/O, you don't want to block the UI thread, so the Task wrapper running in its own thread prevents that from occurring.
One would think this would make the non-blocking I/O calls simpler, but it doesn't really, as the JDK's non-blocking I/O API is quite low level and is pretty tricky to code to. It isn't really appropriate for high level application code.
Generally, application code is better off using a higher level library such as JAX-RS, web sockets or akka (or, preferably, a layer on top of them) which internally manage the details of the communication in either a blocking or non-blocking fashion AND provide an event driven API for sending and receiving messages. The individual message events can be wrapped in a JavaFX Task for async processing. So, from the JavaFX application point of view, everything is event driven, nothing is blocking, and the same application API works regardless of the underlying communication protocol and blocking/non-blocking communication infrastructure.
Thanks for the proof-of-concept application; this will be quite useful. However, one thing that is a bit obscure is how one can safely communicate GUI changes to the running service thread. It appears that the HelloService uses a simple 'name' string property to communicate changes from the GUI to the service before it is started. I wonder how one might communicate UI changes to a running background service in a thread-safe manner. Via some sort of message API, perhaps?
A BlockingQueue with a fixed max-size which rejects additional requests when the queue is full can be used for communication from JavaFX thread based code to a consumer service. It is a reasonably elegant solution to the classic producer-consumer problem.
Of course, you could just skip the blocking queue and keep creating async tasks ad nauseam, which is fine for low-volume communication but could lead to starvation of limited thread resources for high-volume communication. One standard way to handle that is to use an ExecutorService from Executors, which manages a thread pool. The thread pool for the executor service can be bounded to a maximum number of threads and internally use an unbounded queue where messages pile up if all threads are busy. That way you don't need to define your own blocking queue; you just issue async service requests, and they are either handled immediately in threads or pile up in the internal queue.
This is actually the way that a JavaFX Service works:
The Service by default uses a thread pool Executor with some unspecified default or maximum thread pool size. This is done so that naive code will not completely swamp the system by creating thousands of Threads.
and:
If an Executor is specified on the Service, then it will be used to actually execute the service. Otherwise, a daemon thread will be created and executed. If you wish to create non-daemon threads, then specify a custom Executor (for example, you could use a ThreadPoolExecutor with a custom ThreadFactory).
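The bounded-pool idea described above can be sketched in plain Java, here with a bounded queue as well, so that overflow is rejected rather than piling up without limit (the sizes are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) throws Exception {
        // 2 worker threads, at most 8 queued requests; anything beyond is rejected
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(8),
                new ThreadPoolExecutor.AbortPolicy());
        CountDownLatch release = new CountDownLatch(1);
        int rejected = 0;
        for (int i = 0; i < 20; i++) {
            try {
                // Each task blocks until released, so the pool and queue fill up
                pool.execute(() -> {
                    try { release.await(); } catch (InterruptedException ignored) {}
                });
            } catch (RejectedExecutionException e) {
                rejected++;
            }
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        // 2 running + 8 queued = 10 accepted, so 10 of 20 are rejected
        System.out.println("rejected: " + rejected);
    }
}
```

Swapping AbortPolicy for CallerRunsPolicy would instead apply back-pressure by running overflow tasks on the submitting thread.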
More sophisticated solutions for which a simple BlockedQueue messaging is not appropriate would use a topic based message queue style solution, e.g., a Java based STOMP client such as this kaazing example.
Getting the message info to the service is just part of the requirement, that is essentially doing an async message send. You also need to process the response that comes back. To do that, there are two alternatives:
You model each request as a separate Task, and the onSucceeded and onFailed handlers process the task response. Running the task within a service ensures that it is handled by an executor with a fixed thread pool, backed by an internal queue for overflow.
You write your own long-running service interface with its own API and encapsulate a blocking queue for requests, using Platform.runLater to communicate results back to the UI.
To make the response handler logic dynamic and adjustable by the caller, you could pass the handler function as a lambda function to be executed on success for the original call using Platform.runLater.
If you wrap the call in a Task or Service, and use the onSucceeded function, you don't need the runLater call, because the implementation will ensure that the onSucceeded handler is called on the JavaFX thread once the task completes.
Note that the network request and response often require some marshaling and unmarshaling of data to and from a serializable stream. Some of the higher-level network APIs, such as the JAX-RS or web socket providers, provide interfaces and utilities to do some of this work for you, often using specific libraries for different types of conversion, such as JAXB for XML serialization or Jackson for JSON serialization.
Slightly related info and further thoughts
This next bit is probably somewhat off-topic, but here is an example of BlockingQueue and Task interaction. It is not a network service, but it does demonstrate the use of queues in a producer/consumer situation, with a reactive UI and progress monitoring.
One other thing that would be interesting to see (at least for me), is an Akka based solution for JavaFX client->server communication. That seems like a nice alternative to traditional http/rest/soap/rmi calls or message queue based processing. Akka is inherently an event based solution for fault-tolerant asynchronous concurrent communication, so it would seem a good match-up for a UI based framework such as JavaFX, allowing a developer to process at an appropriate layer of abstraction. But I have yet to see a JavaFX based messaging client that relies on Akka.
I would like to know the best approach to create a network based reusable threading service. The way I coded up the network service was to use a controller class (generated from the FXML via the Net-beans GUI). I put the threading logic here via a private Service member named 'transmitter' and I wired up the start/stop logic via the Start/Stop button's event callback.
I humbly suggest that you develop your network service and your GUI controller as separate projects.
I would have the network service running in its own container or virtual machine as a daemon/background thread. The advantage of this organization is that it keeps your server away from the vagaries of the JavaFX event loop and application thread. You'll want to design your service to recognize administration commands and/or interrupt requests from your controller. You can develop your network services as REST or whatever you want without wondering how to roll this into the JavaFX application thread.
I would then have the GUI controller running as a separate GUI application either in the same process or, if remote administration is desired, in a separate JVM (and use IPC to send/receive administration messages).
TL;DR: if it were me, I would resist the temptation to program the network service as a JavaFX application.
I have a JAX-RS/Jersey REST API which receives a request and needs to do an additional job in a separate thread, but I am not sure whether it would be advisable to use a thread pool or not. I expect a lot of requests to this API (a few thousand a day), but I only have a single additional job to run in the background.
Would it be bad to just create a new Thread each time, like this? Any advice would be appreciated. I have not used a thread pool before.
@GET
@Path("/myAPI")
public Response myCall() {
    // call load in the background
    load();
    ...
    // do main job here
    mainJob();
    ...
}

private void load() {
    new Thread(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    }).start();
}
Edit:
Just to clarify: I only need a single additional job to run in the background. This job will call another API to log some info, and that's it. But it has to do this for every request, and I do not need to wait for a response. That's why I thought of just doing this in a new background thread.
Edit2:
So this is what I came up with. Could anyone please tell me if this seems OK (it works locally), and whether I need to shut down the executor (see my comment in the code)?
// Configuration class
@Bean(name = "executorService")
public ExecutorService executorService() {
    return Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);
}

// Some other class
@Qualifier("executorService")
@Autowired
private ExecutorService executorService;
....
private void load() {
    executorService.submit(new Runnable() {
        @Override
        public void run() {
            doSomethingInTheBackground();
        }
    });
    // If I enable this I will get a RejectedExecutionException
    // for the next request.
    // executorService.shutdown();
}
A thread pool is a good way of dealing with this, for two reasons:
1) You will reuse existing threads in the pool, which means less overhead.
2) More importantly, your system will not bog down if it comes under attack and some party tries to start zillions of sessions at once, because the size of the pool is preset.
Using thread pools is not complicated at all. See here for more about thread pools, and also take a look at the Oracle documentation.
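Regarding the shutdown question in Edit2: a shared pool should be shut down once, when the application stops, never inside load(). A plain-Java sketch of that lifecycle (in Spring this logic would live in a @PreDestroy method, or in destroyMethod = "shutdown" on the @Bean; the names here are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorLifecycle {
    // Created once for the whole application (what the @Bean method does in Spring)
    private static final ExecutorService POOL =
            Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors() + 1);

    static void load(String requestId) {
        // Per-request code only submits; it never shuts the shared pool down
        POOL.submit(() -> System.out.println("background job for " + requestId));
    }

    // In Spring this would be the @PreDestroy hook
    static void onApplicationShutdown() throws InterruptedException {
        POOL.shutdown();
        POOL.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        load("req-1");
        load("req-2");
        onApplicationShutdown();
    }
}
```

Calling shutdown() inside load() is exactly what produced the RejectedExecutionException in the question: a shut-down executor refuses all further submissions.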
It sounds to me like you don't need to create multiple threads at all (although I might be wrong; I don't know the specifics of your task).
Could you perhaps create exactly one thread that does the background work, and give that thread a LinkedBlockingQueue to store the parameters of the doSomethingInTheBackground calls?
This solution wouldn't work if it is of the utmost importance that the background task starts right away, even when the server is under heavy load. But, for example, for my most recent task (retrieve text externally, return it to the API caller, then delayed-add the text to the SOLR layer) this was a perfectly fine solution.
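A minimal sketch of that single-consumer design, with a stand-in for doSomethingInTheBackground and a poison-pill value (illustrative, not from the question) to stop the worker cleanly:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundLogger {
    private static final BlockingQueue<String> JOBS = new LinkedBlockingQueue<>();
    private static final String POISON = "__STOP__";

    public static void main(String[] args) throws Exception {
        // Exactly one consumer thread drains the queue in the background
        Thread worker = new Thread(() -> {
            try {
                String job;
                while (!(job = JOBS.take()).equals(POISON)) {
                    // Stand-in for the real doSomethingInTheBackground(job) call
                    System.out.println("logged: " + job);
                }
            } catch (InterruptedException ignored) {
            }
        });
        worker.start();

        // Request threads only enqueue and return immediately
        JOBS.put("request-1");
        JOBS.put("request-2");
        JOBS.put(POISON);
        worker.join();
    }
}
```

Because there is a single consumer reading a FIFO queue, jobs are processed in submission order, and request threads never block on the slow logging call.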
I suggest using neither of the approaches you mention, but a JMS queue instead. You can easily embed an ActiveMQ instance in your application. First, create one or more consumer threads in the background to pick up jobs from the queue.
Then, when a request is received, just push a message with the job details onto the JMS queue. This is a much better architecture, and more scalable, than fiddling with low-level threads or thread pools.
See also: this answer and the ActiveMQ site.
I am analyzing some Jersey 2.0 code and I have a question on how the following method works:
@Stateless
@Path("/mycoolstuff")
public class MyEjbResource {
    …
    @GET
    @Asynchronous //does this mean the method executes on a child thread?
    public void longRunningOperation(@Suspended AsyncResponse ar) {
        final String result = executeLongRunningOperation();
        ar.resume(result);
    }

    private String executeLongRunningOperation() { … }
}
Let's say I'm at a web browser and I type in www.mysite/mycoolstuff.
This will execute the method, but I'm not understanding what the AsyncResponse is used for, nor the @Asynchronous annotation. From the browser, how would I notice it's asynchronous? What would be the difference if the annotation were removed? Also, after reading the documentation, I'm still not clear on the purpose of the @Suspended annotation.
Is the @Asynchronous annotation simply telling the program to execute this method on a new thread? Is it a convenience for doing "new Thread(...)"?
Update: this annotation relieves the server of hanging onto the request processing thread. Throughput can be better. Anyway from the official docs:
Request processing on the server works by default in a synchronous processing mode, which means that a client connection of a request is processed in a single I/O container thread. Once the thread processing the request returns to the I/O container, the container can safely assume that the request processing is finished and that the client connection can be safely released including all the resources associated with the connection. This model is typically sufficient for processing of requests for which the processing resource method execution takes a relatively short time. However, in cases where a resource method execution is known to take a long time to compute the result, server-side asynchronous processing model should be used. In this model, the association between a request processing thread and client connection is broken. I/O container that handles incoming request may no longer assume that a client connection can be safely closed when a request processing thread returns. Instead a facility for explicitly suspending, resuming and closing client connections needs to be exposed. Note that the use of server-side asynchronous processing model will not improve the request processing time perceived by the client. It will however increase the throughput of the server, by releasing the initial request processing thread back to the I/O container while the request may still be waiting in a queue for processing or the processing may still be running on another dedicated thread. The released I/O container thread can be used to accept and process new incoming request connections.
@Suspended has a more definite effect if you use it; otherwise it makes no difference.
Let's talk about its benefits:
@Suspended will pause/suspend the current thread until it gets a response; by default no suspend timeout is set (@NO_TIMEOUT). So it doesn't mean your request/response (I/O) thread will be freed and become available for another request.
Now assume you want your service to respond within some specific time, but the method you are calling from the resource does not guarantee a response time. How will you manage your service's response time? That is when you can set a suspend timeout for your service using @Suspended, and even provide a fallback response when the time is exceeded.
Below is some sample code for setting the suspend/pause timeout:
public void longRunningOperation(@Suspended AsyncResponse ar) {
    ar.setTimeoutHandler(customHandler);
    ar.setTimeout(10, TimeUnit.SECONDS);
    final String result = executeLongRunningOperation();
    ar.resume(result);
}
For more details, refer to this.
The @Suspended annotation is added before an AsyncResponse parameter on the resource method to tell the underlying web server not to expect this thread to return a response for the remote caller:
@POST
public void asyncPost(@Suspended final AsyncResponse ar, ... <args>) {
    someAsyncMethodInYourServer(<args>, new AsyncMethodCallback() {
        @Override
        void completed(<results>) {
            ar.complete(Response.ok(<results>).build());
        }
        @Override
        void failed(Throwable t) {
            ar.failed(t);
        }
    });
}
Rather, the AsyncResponse object is used by the thread that calls completed or failed on the callback object to return an 'ok' or throw an error to the client.
Consider using such asynchronous resources in conjunction with an async jersey client. If you're trying to implement a ReST service that exposes a fundamentally async api, these patterns allow you to project the async api through the ReST interface.
We don't create async interfaces because we have a process that takes a long time (minutes or hours) to run, but rather because we don't want our threads to ever sleep - we send the request and register a callback handler to be called later when the result is ready - from milliseconds to seconds later - in a synchronous interface, the calling thread would be sleeping during that time, rather than doing something useful. One of the fastest web servers ever written is single threaded and completely asynchronous. That thread never sleeps, and because there is only one thread, there's no context switching going on under the covers (at least within that process).
The @Suspended annotation makes the caller actually wait until your work is done. Let's say you have a lot of work to do on another thread. When you use Jersey @Suspended, the caller just sits there and waits (so in a web browser they just see a spinner) until your AsyncResponse object returns data to it.
Imagine you had a really long operation to do and you want to do it on another thread (or multiple threads). Now we can have the user wait until we are done. Don't forget that in Jersey you'll need to add <async-supported>true</async-supported> to the Jersey servlet definition in web.xml to get it to work.
My application must open a TCP socket connection to a server and listen for periodically incoming messages.
What are the best practices for implementing this in a Java EE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]
        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() method is called from a @Startup bean's @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually, it is not working well. The application server (JBoss WildFly 8) hangs and no longer reacts to shutdown or re-deployment commands. I have the feeling that it gets stuck in the while(...) loop.
Cheers,
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple. When working in a cluster:
The EJBs will inherently block each other while waiting on I/O connection timeouts and all other I/O-related waits; that is, assuming the connection does not block for an unspecified amount of time, in which case you would have to create another thread that scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale. No matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you have already run into problems by doing that :). JBoss 8 seems unable to properly create and destroy the bean.
Now, I know your bean is a @Singleton, so your architecture does not rely on transactionality, clustering, or distribution of reading from that socket. So you might be OK with that.
However :D, you are asking for a Java EE compliant way of solving this. Here is what should be done:
Redesign your solution to use JMS. It 'smells' like you are trying to provide async messaging functionality (send a message & wait for a reply). You might be using a synchronous protocol to do async messaging. Just give it a thought.
Create a JCA-compliant adapter which will be injected into your EJB as a @Resource.
You will have a connection pool configurable at AS level (so you can have different values for different environments).
You will have transactionality and rollback. Of course, the rollback behavior will have to be coded by you.
You can inject it via a @Resource annotation.
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter