You'll have to excuse me if I'm describing this incorrectly, but essentially I'm trying to get a service-like class to be instantiated just once at server start and to sort of "exist" in the background until it is killed off at server stop. At least from what I can tell, this is not exactly the same as a typical servlet (though I may be wrong about this). What's even more important is that I need to also be able to access this service/object later down the line.
As an example, in another project I've worked on, we used the Spring Framework to accomplish something similar. Essentially, we used the configuration XML file along with the built-in annotations to let Spring know to instantiate instances of some of our services. Later down the line, we used the @Autowired annotation to sort of "grab" the object reference of this pre-instantiated service/object.
So, though it may seem against some of the major concepts of Java itself, I'm just trying to figure out how to reinvent this wheel here. I guess sometimes I feel like these big app frameworks do too much "black-box magic" behind the scenes that I'd really like to be able to fine-tune.
Thanks for any help and/or suggestions!
Oh, and I'm trying to run all of this on JBoss 6.
Here's one way to do it. Add a servlet context listener to your web.xml, e.g.:
<listener>
    <listener-class>com.example.BackgroundServletContextListener</listener-class>
</listener>
Then create that class to manage your background service. In this example I use a single-threaded ScheduledExecutorService to schedule it to run every 5 minutes:
public class BackgroundServletContextListener implements ServletContextListener {

    private ScheduledExecutorService executor;
    private BackgroundService service;

    public void contextInitialized(ServletContextEvent sce) {
        service = new BackgroundService();

        // setup single thread to run background service every 5 minutes
        executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(service, 0, 5, TimeUnit.MINUTES);

        // make the background service available to the servlet context
        sce.getServletContext().setAttribute("service", service);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdown();
    }
}
public class BackgroundService implements Runnable {

    public void run() {
        // do your background processing here
    }
}
If you need to access the BackgroundService from web requests, you can access it through the ServletContext. E.g.:
ServletContext context = request.getSession().getServletContext();
BackgroundService service = (BackgroundService) context.getAttribute("service");
Have you considered using an EJB 3.1 Session bean? These can be deployed in a war file, and can be annotated with @Singleton and @Startup.
A number of the annotations available with EJB 3.1 are designed to bring Spring goodies into the Java EE framework. It may be that the reinvention you're considering has already been done for you.
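For example, a minimal sketch of such a bean might look like this (class and method names are placeholders; the 5-minute schedule just mirrors the listener example above):

import javax.annotation.PostConstruct;
import javax.annotation.PreDestroy;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class BackgroundServiceBean {

    @PostConstruct
    public void init() {
        // runs once when the application is deployed/started
    }

    @Schedule(hour = "*", minute = "*/5", persistent = false)
    public void doWork() {
        // the container calls this every 5 minutes
    }

    @PreDestroy
    public void shutdown() {
        // runs once when the application is undeployed/stopped
    }
}

Other beans can then obtain it with @EJB or @Inject wherever the reference is needed, which is roughly the @Autowired experience you described.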
If you must roll your own, you can create a servlet and configure it to start up when the application does, using load-on-startup. I built a system like that a few years ago. We then used the new(ish) java.util.concurrent stuff like ExecutorService to have it process work from other servlets.
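If you go that route, the startup servlet itself can be little more than a lifecycle hook. A rough sketch (the class name is a placeholder, and the servlet's web.xml entry needs a <load-on-startup>1</load-on-startup> element):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;

public class BackgroundServlet extends HttpServlet {

    private ExecutorService executor;

    @Override
    public void init() throws ServletException {
        // called once when the container starts the app (thanks to load-on-startup)
        executor = Executors.newSingleThreadExecutor();
        getServletContext().setAttribute("executor", executor);
    }

    @Override
    public void destroy() {
        executor.shutdown();
    }
}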
More information about what you're trying to do, and why the existing ways of doing things are insufficient, would be helpful.
You can use messaging for that. Just send message to the queue, and let the message listener do the processing asynchronously in the background.
You can use JMS for the implementation, and ActiveMQ for the message broker.
Spring has the JmsTemplate and JmsGatewaySupport APIs to make the JMS implementation simple:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jms.html
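A rough sketch of the sending side with Spring's JmsTemplate (the queue name and the bean wiring are assumptions; the actual work then happens in whatever listener you attach to that queue):

import org.springframework.jms.core.JmsTemplate;

public class BackgroundJobSender {

    private final JmsTemplate jmsTemplate; // assumed to be configured with an ActiveMQ ConnectionFactory

    public BackgroundJobSender(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }

    public void submit(String payload) {
        // the message listener on this queue processes the payload asynchronously
        jmsTemplate.convertAndSend("background.jobs", payload);
    }
}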
I have a piece of functionality in an online application: I need to mail a receipt to the customer after the receipt is generated. My problem is that the mail function takes quite a long time, nearly 20 to 30 seconds, and the customer cannot be kept waiting that long during an online transaction.
So I have used the Java ExecutorService to run the mail service [sendMail] independently and return the response page to the customer whether or not the mail has been sent.
Is it right to use an ExecutorService in an online application [HTTP request & response]? Below is my code. Kindly advise.
@RequestMapping(value = "/generateReceipt", method = RequestMethod.GET)
public @ResponseBody ReceiptBean generateReceipt(HttpServletRequest httpRequest, HttpServletResponse httpResponse) {

    // Other code here
    ...
    ...

    // I need to run the line below independently, since it takes a long time,
    // so I commented it out and wrote the executor service instead:
    //mailService.sendMail(httpRequest, httpResponse, receiptBean);

    java.util.concurrent.ExecutorService executorService = java.util.concurrent.Executors.newFixedThreadPool(10);
    executorService.execute(new Runnable() {
        ReceiptBean receiptBean1;

        public void run() {
            mailService.sendMail(httpRequest, httpResponse, receiptBean1);
        }

        public Runnable init(ReceiptBean receiptBean) {
            this.receiptBean1 = receiptBean;
            return this;
        }
    }.init(receiptBean));
    executorService.shutdown();

    return receiptBean;
}
You can do that, although I wouldn't expect this code in a controller class but in a separate one (Separation of Concerns and all).
However, since you seem to be using Spring, you might as well use their scheduling framework.
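For example, Spring's task execution support lets you mark the mail method @Async. A sketch, assuming @EnableAsync is declared on a configuration class and that the mail service only needs the receipt data rather than the request/response objects:

import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class MailService {

    @Async
    public void sendReceipt(ReceiptBean receipt) {
        // runs on Spring's task executor, off the request thread
    }
}

The controller then simply calls mailService.sendReceipt(receiptBean) and returns the response right away.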
It is fine to use an ExecutorService to make an asynchronous mail-sending request, but you should try to follow the SOLID principles in your design. Let the service layer take care of running the executor task.
https://en.wikipedia.org/wiki/SOLID
I agree with both @daniu and @Ankur regarding the separation of concerns you should follow. So just create a dedicated service like "EmailService" and inject it where needed.
Moreover, you are already leveraging the Spring framework, so you can take advantage of its Async feature.
If you prefer to write your own async code, then I'd suggest using a CompletableFuture instead of the ExecutorService for better failure handling (maybe you want to store unsent messages in a queue to implement a retry feature or some other behaviour).
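A minimal sketch of that idea (assuming sendMail has been refactored to take only the receipt data, and that mailExecutor is a shared, preconfigured Executor; the retry part is only hinted at in the comment):

import java.util.concurrent.CompletableFuture;

CompletableFuture
        .runAsync(() -> mailService.sendMail(receiptBean), mailExecutor)
        .exceptionally(ex -> {
            // e.g. log the failure or store the message for a later retry
            return null;
        });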
My application must open a TCP socket connection to a server and listen to periodically incoming messages.
What are the best practices to implement this in a JEE 7 application?
Right now I have something like this:
@javax.ejb.Singleton
public class MessageChecker {

    @Asynchronous
    public void startChecking() {
        // set up things
        Socket client = new Socket(...);
        [...]

        // start a loop to retrieve the incoming messages
        while ((line = reader.readLine()) != null) {
            LOG.debug("Message from socket server: " + line);
        }
    }
}
The MessageChecker.startChecking() function is called from a @Startup bean with a @PostConstruct method.
@javax.ejb.Singleton
@Startup
public class Starter {

    @Inject
    private MessageChecker checker;

    @PostConstruct
    public void startup() {
        checker.startChecking();
    }
}
Do you think this is the correct approach?
Actually it is not working well. The application server (JBoss 8 WildFly) hangs and does not react to shutdown or redeployment commands any more. I have the feeling that it gets stuck in the while(...) loop.
Cheers
Frank
Frank, it is bad practice to do any I/O operations while you're in an EJB context. The reason behind this is simple. When working in a cluster:
The EJB instances will inherently block each other while waiting on I/O connection timeouts and all other I/O-related timeouts. That is, if the connection does not block for an unspecified amount of time; in which case you will have to create another thread which scans for dead connections.
Only one of the EJBs will be able to connect and send/receive information; the others will just wait in line. This way your system will not scale. No matter how many EJBs you have in your cluster, only one will actually do its work.
Apparently you have already run into problems by doing that :). JBoss 8 (WildFly) seems not to be able to properly create and destroy the bean.
Now, I know your bean is a @Singleton, so your architecture does not rely on transactionality, clustering and distribution of reading from that socket. So you might be ok with that.
However :D, you are asking for a Java EE compliant way of solving this. Here is what should be done:
Redesign your solution to go with JMS. It 'smells' like you are trying to provide an async messaging functionality (Send a message & wait for reply). You might be using a synchronous protocol to do async messaging. Just give it a thought.
Create a JCA-compliant adapter which will be injected into your EJB as a @Resource
You will have a connection pool configurable at AS level (so you can have different values for different environments)
You will have transactionality and rollback. Of course, the rollback behavior will have to be coded by you.
You can inject it via a @Resource annotation
There are some adapters out there, some might fit like a glove, some might be a bit overdesigned.
Oracle JCA Adapter
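If you do go the JMS route, the receiving side in a Java EE app is typically a message-driven bean, roughly like this (the queue name is a placeholder):

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/incomingMessages")
})
public class IncomingMessageListener implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // the container delivers each message here, pooled and (by default) transactional
    }
}

The socket-facing piece (or the JCA adapter) then only has to convert what it reads into messages on that queue.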
On Google App Engine (GAE) it is possible for frontend instances to create up to 10 threads to maximize throughput. According to this page, such multi-threading can be accomplished as follows:
Runnable myTask = new Runnable() {
    @Override
    public void run() {
        // Do whatever
    }
};

ThreadFactory threadFactory = ThreadManager.currentRequestThreadFactory();
// GAE caps frontend instances to 10 worker threads per instance.
threadFactory.newThread(myTask).start();
To hit my GAE server-side, I'll expose many servlets mapped to certain URLs, such as the FizzServlet mapped to http://myapp.com/fizz:
public class FizzServlet extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest request,
            HttpServletResponse response) throws IOException {
        // Handle the request here. Somehow send it to an available
        // worker thread.
    }
}
I guess I'm choking on how to connect these two ideas. As far as I see it, you have 3 different mechanisms/items here:
The App Engine instance itself, whose lifecycle I can "hook" by implementing a ServletContextListener to run custom code when GAE fires up the instance;
This ThreadFactory/ThreadManager stuff (above)
The servlets/listeners
I guess I'm wondering how to implement code such that every time a new request comes into, say, FizzServlet#doGet, how to make sure that request gets sent to an available thread (if there is one available). That way, if FizzServlet was the only servlet I was exposing, it could get called up to 10 times before it would cause a new (11th) incoming request to hang while a previous request was processing.
I'm looking for the glue code between the servlet and this thread-creating code. Thanks in advance.
I guess I'm wondering how to implement code such that every time a new request comes into, say, FizzServlet#doGet, how to make sure that request gets sent to an available thread (if there is one available). That way, if FizzServlet was the only servlet I was exposing, it could get called up to 10 times before it would cause a new (11th) incoming request to hang while a previous request was processing.
That's what the GAE servlet engine does for you. You deploy an app containing a servlet, and when a request comes in, the servlet engine uses a thread to process the request and calls your servlet. You don't have anything to do.
If your servlet's doGet() or doPost() method, invoked by GAE, needs to perform several tasks in parallel (like contacting several other web sites for example), then you'll start threads by yourself as explained in the page you linked to.
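So the only place where you would reach for the thread factory yourself is inside a single request, to parallelize sub-tasks of that request. A rough sketch (fetchPartA/fetchPartB are made-up placeholders):

import java.io.IOException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import com.google.appengine.api.ThreadManager;

public class FizzServlet extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // request-scoped threads; they must finish before the request completes
        ExecutorService pool = Executors.newFixedThreadPool(2, ThreadManager.currentRequestThreadFactory());
        try {
            Future<String> a = pool.submit(this::fetchPartA);
            Future<String> b = pool.submit(this::fetchPartB);
            response.getWriter().print(a.get() + b.get());
        } catch (InterruptedException | ExecutionException e) {
            throw new IOException(e);
        } finally {
            pool.shutdown();
        }
    }

    private String fetchPartA() { return "..."; } // placeholder sub-task
    private String fetchPartB() { return "..."; } // placeholder sub-task
}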
I am trying to get the FacesContext by calling FacesContext.getCurrentInstance() in the run() method of a Runnable class, but it returns null.
public class Task implements Runnable {

    @Override
    public void run() {
        FacesContext context = FacesContext.getCurrentInstance(); // null!
        // ...
    }
}
How is this caused and how can I solve it?
The FacesContext is stored as a ThreadLocal variable in the thread responsible for the HTTP request which invoked the FacesServlet, the one responsible for creating the FacesContext. This thread usually goes through the JSF managed bean methods only. The FacesContext is not available in other threads spawned by that thread.
You should actually also not have the need for it in other threads. Moreover, when your thread starts and runs independently, the underlying HTTP request will immediately continue processing the HTTP response and then disappear. You won't be able to do something with the HTTP response anyway.
You need to solve your problem differently. Ask yourself: what do you need it for? To obtain some information? Just pass that information to the Runnable during its construction instead.
The below example assumes that you'd like to access some session scoped object in the thread.
public class Task implements Runnable {

    private Work work;

    public Task(Work work) {
        this.work = work;
    }

    @Override
    public void run() {
        // Just use work.
    }
}
Work work = (Work) FacesContext.getCurrentInstance().getExternalContext().getSessionMap().get("work");
Task task = new Task(work);
// ...
If you however ultimately need to notify the client e.g. that the thread's work is finished, then you should be looking for a different solution than e.g. adding a faces message or so. The answer is to use "push". This can be achieved with SSE or websockets. A concrete websockets example can be found in this related question: Real time updates from database using JSF/Java EE. In case you happen to use PrimeFaces, look at
<p:push>. In case you happen to use OmniFaces, look at <o:socket>.
Unrelated to the concrete problem, manually creating Runnables and manually spawning threads in a Java EE web application is alarming. Head to the following Q&A to learn about all caveats and how it should actually be done:
Spawning threads in a JSF managed bean for scheduled tasks using a timer
Is it safe to start a new thread in a JSF managed bean?
In simple terms: I have a servlet whose response time is long, so I decided to divide the work into two parts. One just composes a response to the client, and the second, let's say, performs some business logic and stores the result in the DB. To decrease response time I execute the business logic asynchronously using a ThreadPoolExecutor in combination with an ArrayBlockingQueue. Using the ArrayBlockingQueue I can ensure the original FIFO ordering if requests were sequential for the same client. This is an important prerequisite.
Here is a snippet:
Servlet
public class HelloServlet extends HttpServlet {

    AsyncExecutor exe = new AsyncExecutor();

    protected void doGet(HttpServletRequest req,
            HttpServletResponse resp) throws ServletException, IOException {
        PrintWriter w = resp.getWriter();
        exe.executeAsync(exe.new Task(this));
        w.print("HELLO TO CLIENT");
    }

    protected void someBusinessMethod() {
        // long time execution here
    }
}
and Executor
public class AsyncExecutor {

    static final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(10, true);
    static final Executor executor = new ThreadPoolExecutor(3, 5, 20L, TimeUnit.SECONDS, queue);

    public void executeAsync(Task t) {
        boolean isTaskAccepted = false;
        while (!isTaskAccepted) {
            try {
                executor.execute(t);
                isTaskAccepted = true;
            } catch (RejectedExecutionException e) {
            }
        }
    }

    class Task implements Runnable {

        private HelloServlet servlet;

        Task(HelloServlet servlet) {
            this.servlet = servlet;
        }

        @Override
        public void run() {
            // just call back to servlet's business method
            servlet.someBusinessMethod();
        }
    }
}
This implementation works fine if I deploy it only to one Tomcat node, since I have only one ArrayBlockingQueue in application. But if I have several nodes and load balancer in front then I can not guarantee FIFO ordering of requests for async execution for the same client since I have already several Queues.
My question is: how is it possible to guarantee the same order of asynchronous execution of requests for the same client in a clustered (multi-node) deployment? I think ActiveMQ is probably a solution (not preferable for me), or perhaps load balancer configuration, or can it be implemented in code?
Thanks, Sam, for your prompt suggestions.
In the first post I described the problem in a very simplified way, so to clarify it better: let's say I have a legacy web app deployed to Tomcat which serves some licensing model (the old one). Then we got a new licensing model (a GlassFish app) and we need to use it alongside the old one and keep the two in sync. For the end user such integration must be transparent and non-intrusive. So a user request is served like this:
the caller sends a request (create subscription, for example)
execute the business logic of the new licensing model
execute the business logic of the old licensing model
regardless of the result of p.3, return the response of p.2 in the format of the old licensing model back to the caller
(optional) handle failure of p.3 if any
This was implemented with an Aspect which intercepts the requests from p.1 and executes the rest of the steps sequentially. And as I said in the previous post, the execution time of p.3 can be long, which is why I want to make it asynchronous. Let's have a look at a snippet of the Aspect (instead of the Servlet from the first post).
@Aspect
@Component
public class MyAspect {

    @Autowired
    private ApplicationContext ctx;

    @Autowired
    private AsyncExecutor asyncExecutor;

    @Around("@annotation(executeOpi)")
    public Object around(ProceedingJoinPoint jp, ExecuteOpi executeOpi) throws Throwable {
        LegacyMapper newModelExecutor = ctx.getBean(executeOpi.legacyMapper());

        // executes the new model and then returns the result in the format of the old model
        Object result = newModelExecutor.executeNewModelLogic(jp.getArgs());

        // executes the old model logic asynchronously
        asyncExecutor.executeAsync(asyncExecutor.new Task(this, jp));

        return result;
    }

    public void executeOldModelLogic(ProceedingJoinPoint jp) throws Throwable {
        // long time execution here
        jp.proceed();
    }
}
With this implementation, as in the first post, I can guarantee a FIFO order of executeOldModelLogic calls if requests come to the same Tomcat node. But with a multi-node deployment and a round-robin LB in front, I can end up with a case where, for the same caller, "update subscription in old model" reaches the ArrayBlockingQueue before "create subscription in old model", which is of course a bad logical bug.
And as for the points you suggested:
p1, p2 and p4: I probably can't use these as a solution, since I don't have object state as such. You see that I pass references to the Aspect and the JoinPoint into the Runnable task, to call back executeOldModelLogic from the Runnable to the Aspect.
p3: I don't know about this one; it might be worthwhile to investigate.
p5: This is the direction I want to go for further investigation; I have a gut feeling it is the only way to solve my problem under the given conditions.
There are some solutions that come to mind offhand.
Use the database: post the jobs to be run in a database table, have a secondary server process run the jobs, and place the results in an output table (see the rough sketch after this list). Then when the user calls back to the web page, it can pick up any results waiting for them from the output table.
Use a JMS service: this is a pretty lightweight messaging service, which would integrate with your Tomcat application reasonably well. The downside here is that you have to run another server side component, and build the integration layer with your app. But that's not a big disadvantage.
Switch to a full J2EE container (Java app server) and use an EJB Singleton. I have to admit, I don't have any experience with running a Singleton across separate server instances, but I believe that some of them may be able to handle it.
Use EHCache or some other distributed cache: I built a Queue wrapper around EHCache to enable it to be used like a FIFO queue, and it also has RMI (or JMS) replication, so multiple nodes will see the same data.
Use the Load Balancer: if your load balancer supports session-level balancing, then all requests for a single user session can be directed to the same node. In a big web environment where I worked, we were unable to share user session state across multiple servers, so we set up load balancing to ensure that the user's session was always directed to the same web server.
Hope some of these ideas help.
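As a rough illustration of the database option above (table and column names are made up): each node inserts a row per request, and a single background worker drains the rows in insertion order, so jobs for the same client keep their FIFO order no matter which node accepted the request.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.sql.DataSource;

public class OldModelJobWorker implements Runnable {

    private final DataSource dataSource; // assumed to point at the shared database

    public OldModelJobWorker(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void run() {
        String select = "SELECT id, payload FROM pending_jobs WHERE processed = FALSE ORDER BY id";
        String update = "UPDATE pending_jobs SET processed = TRUE WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(select);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                executeOldModelLogic(rs.getString("payload")); // the long-running legacy call
                try (PreparedStatement done = con.prepareStatement(update)) {
                    done.setLong(1, rs.getLong("id"));
                    done.executeUpdate();
                }
            }
        } catch (SQLException e) {
            // log and pick the jobs up again on the next scheduled run
        }
    }

    private void executeOldModelLogic(String payload) {
        // call the old licensing model here
    }
}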