Managing Cron4J Scheduler within J2EE Environment - java

My team has created a web application which we want to pair with a task scheduler that sends out e-mails once a day if any of the projects being run through the web application are behind schedule.
The scheduler library we're using, Cron4j, is simple enough, and at the present time we're attempting to manage it through a set of web service calls, something like this (the idea being we start the scheduler and it runs for the specified time or until we stop it):
private Scheduler s;

public SchedulerResource() {
    s = new Scheduler();
}

@GET
@Path("start")
@Produces(MediaType.APPLICATION_JSON)
public Response invoke_workback_communicator() throws MalformedURLException {
    s.schedule("* * * * *", new Runnable() {
        public void run() {
            WorkbackCommunicator communicator = new WorkbackCommunicator();
            communicator.run();
        }
    });
    s.start();
    try {
        // let the scheduler run for two minutes before stopping it
        Thread.sleep(1000L * 60L * 2L);
    } catch (InterruptedException e) {
        // ignore and fall through to stopping the scheduler
    }
    s.stop();
    return Response.noContent().build();
}
@GET
@Path("stop")
@Produces(MediaType.APPLICATION_JSON)
public Response stop_workback_communicator() throws MalformedURLException {
    s.stop();
    return Response.noContent().build();
}
At the present time the 'invoke_workback_communicator' method works fine, but the 'stop_workback_communicator' method does not.
What I'm curious about is:
is invoking a scheduler in a web service like this bad practice?
if so, how would one effectively accomplish this functionality?
if not, how could I create a service interface which manages my scheduler instance? Is this even necessary?
EDIT:
I have looked further into the problem and noticed that there is something called a 'ServletContextListener' which can be used to start or stop the scheduler on server start-up and shutdown. So what I'm thinking at this point is to test out that functionality, and keep our service, which will allow us to manage the scheduler while the app is running.

I'm still interested in some outside perspectives on this topic; however, we did find a solution which looks like it will work reasonably well. We implemented a 'ServletContextListener' which starts our scheduler on context initialization, when the server is started, and stops the scheduler on context destruction, when the server is stopped.
We just had to add the listener as an entry in our web.xml file in the WEB-INF folder of our project:
<listener>
<listener-class>com.mmm.marketing.utils.SchedulerServletContextListener</listener-class>
</listener>
We also wrote a web service which will allow us to stop and start the scheduler singleton manually if we like.
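For illustration, here is a stripped-down sketch of roughly what such a listener can look like; the cron pattern, the singleton accessor, and the other details are illustrative rather than our exact code:

    package com.mmm.marketing.utils;

    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;
    import it.sauronsoftware.cron4j.Scheduler;

    public class SchedulerServletContextListener implements ServletContextListener {

        // single shared Cron4j scheduler; the management web service can reuse it via getScheduler()
        private static final Scheduler SCHEDULER = new Scheduler();

        public static Scheduler getScheduler() {
            return SCHEDULER;
        }

        public void contextInitialized(ServletContextEvent sce) {
            // check for late projects once a day at 06:00
            SCHEDULER.schedule("0 6 * * *", new Runnable() {
                public void run() {
                    new WorkbackCommunicator().run();
                }
            });
            SCHEDULER.start();
        }

        public void contextDestroyed(ServletContextEvent sce) {
            if (SCHEDULER.isStarted()) {
                SCHEDULER.stop();
            }
        }
    }

The start/stop web service can then call getScheduler() on this class instead of creating its own Scheduler, so there is only ever one scheduler instance per web application.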

Related

Vert.x Unit Test a Verticle that does not implement the start method with future

I'm new to Vert.x and just stumbled upon a problem.
I have the following Verticle:
public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        String greetingName = config().getString("greetingName", "Welt");
        String greetingNameEnv = System.getenv("GREETING_NAME");
        String greetingNameProp = System.getProperty("greetingName");

        Router router = Router.router(vertx);
        router.get("/hska").handler(routingContext -> {
            routingContext.response().end(String.format("Hallo %s!", greetingName));
        });
        router.get().handler(routingContext -> {
            routingContext.response().end("Hallo Welt");
        });

        vertx
            .createHttpServer()
            .requestHandler(router::accept)
            .listen(8080);
    }
}
I want to unit test this verticle, but I don't know how to wait for the verticle to be deployed.
@Before
public void setup(TestContext context) throws InterruptedException {
    vertx = Vertx.vertx();
    JsonObject config = new JsonObject().put("greetingName", "Unit Test");
    vertx.deployVerticle(HelloVerticle.class.getName(), new DeploymentOptions().setConfig(config));
}
When I set up my test like this, I have to add a Thread.sleep after the deploy call to make the tests execute only after some time of waiting for the verticle.
I heard about Awaitility and that it should be possible to wait for the verticle to be deployed with this library. But I didn't find any examples of how to use Awaitility with vertx-unit and the deployVerticle method.
Could anyone bring some light into this?
Or do I really have to hardcode a sleep timer after calling the deployVerticle method in my tests?
Have a look into the comments of the accepted answer
First of all you need to implement start(Future future) instead of just start(). Then you need to add a callback handler (Handler<AsyncResult<HttpServer>> listenHandler) to the listen(...) call, which then resolves the Future you got via start(Future future).
Vert.x is highly asynchronous, and so is the start of a Vert.x HTTP server. In your case, the Verticle is only fully functional once the HTTP server has successfully started. Therefore, you need to implement the approach mentioned above.
Second, you need to tell the TestContext that the asynchronous deployment of your Verticle is done. This can be done via another callback handler (Handler<AsyncResult<String>> completionHandler). There are blog posts showing how to do that.
The deployment of a Verticle is always asynchronous, even if you implemented the plain start() method. So you should always use a completionHandler if you want to be sure that your Verticle was successfully deployed before the test runs.
So no, you don't need to, and you definitely shouldn't, hardcode a sleep timer in any of your Vert.x applications. Mind the Golden Rule: Don't Block the Event Loop.
Edit:
If the initialisation of your Verticle is synchronous, you should override the plain start() method, as mentioned in the docs:
If your verticle does a simple, synchronous start-up then override this method and put your start-up code in there.
If the initialisation of your Verticle is asynchronous (e.g. starting a Vert.x HTTP server), you should override start(Future future) and complete the Future when your asynchronous initialisation is finished.
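Here is a rough sketch of how that could look with Vert.x 3.x and vertx-unit; the test class name and the file split are just examples:

    // HelloVerticle.java
    import io.vertx.core.AbstractVerticle;
    import io.vertx.core.Future;
    import io.vertx.ext.web.Router;

    public class HelloVerticle extends AbstractVerticle {

        @Override
        public void start(Future<Void> startFuture) throws Exception {
            String greetingName = config().getString("greetingName", "Welt");

            Router router = Router.router(vertx);
            router.get("/hska").handler(ctx ->
                    ctx.response().end(String.format("Hallo %s!", greetingName)));
            router.get().handler(ctx -> ctx.response().end("Hallo Welt"));

            vertx.createHttpServer()
                 .requestHandler(router::accept)
                 // resolve the start Future only when the HTTP server is actually listening
                 .listen(8080, ar -> {
                     if (ar.succeeded()) {
                         startFuture.complete();
                     } else {
                         startFuture.fail(ar.cause());
                     }
                 });
        }
    }

    // HelloVerticleTest.java
    import io.vertx.core.DeploymentOptions;
    import io.vertx.core.Vertx;
    import io.vertx.core.json.JsonObject;
    import io.vertx.ext.unit.TestContext;
    import io.vertx.ext.unit.junit.VertxUnitRunner;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.RunWith;

    @RunWith(VertxUnitRunner.class)
    public class HelloVerticleTest {

        private Vertx vertx;

        @Before
        public void setup(TestContext context) {
            vertx = Vertx.vertx();
            JsonObject config = new JsonObject().put("greetingName", "Unit Test");
            // asyncAssertSuccess() acts as the completionHandler: the tests only run
            // once deployment (and therefore start(Future)) has finished
            vertx.deployVerticle(HelloVerticle.class.getName(),
                    new DeploymentOptions().setConfig(config),
                    context.asyncAssertSuccess());
        }

        @After
        public void tearDown(TestContext context) {
            vertx.close(context.asyncAssertSuccess());
        }
    }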

How to add job with trigger for running Quartz scheduler instance without restarting server

I want to create one scheduler instance, then add jobs and triggers to this running scheduler for future use through a web UI, without restarting the server.
(I use Quartz 2.x version)
Can anybody help me please?
Thanks
You can dynamically add jobs to a Quartz scheduler instance, but the jobs (i.e. the job classes) must typically be present on the Quartz scheduler's classpath. Alternatively, you could use the Quartz scheduler's JobFactory API to load job classes through a custom class loader, which would allow you to add jobs truly dynamically.
With triggers, there is no problem at all - these can be added/updated/deleted dynamically using the standard Quartz API.
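As a rough illustration of that standard API in Quartz 2.x, scheduling a new cron trigger (with per-trigger parameters in its JobDataMap) against a job that is already stored in the running scheduler could look like the sketch below; the job, group, trigger, and parameter names are made up:

    import org.quartz.CronScheduleBuilder;
    import org.quartz.JobKey;
    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.Trigger;
    import org.quartz.TriggerBuilder;

    public class TriggerAdmin {

        // called from the web UI layer with a reference to the running Scheduler
        public void addSmsTrigger(Scheduler scheduler) throws SchedulerException {
            JobKey jobKey = new JobKey("SMSJob", "Groupe1");

            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("smsTrigger1", "Groupe1")
                    .forJob(jobKey)                                         // attach to the already-stored job
                    .withSchedule(CronScheduleBuilder.cronSchedule("0 0 8 * * ?"))
                    .usingJobData("param1", "someValue")                    // per-trigger parameters
                    .usingJobData("param2", "anotherValue")
                    .build();

            scheduler.scheduleJob(trigger);                                 // takes effect without a restart
        }
    }

Inside the job, those values can then be read with jec.getMergedJobDataMap().getString("param1"), which merges the job's and the trigger's data maps.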
As for a GUI that allows you to add jobs/triggers, there are a couple of them and you can easily find them by searching for "quartz scheduler gui" on Google.
I happen to be a principal developer of QuartzDesk, which is one of those products. If you have any questions regarding this product, then please use our contacts.
Thank you for your answer; let me rephrase my question.
I want to create ONE SCHEDULER instance and add FIVE JOBS with PARAMETERS.
Then I want to dynamically add TRIGGERS to these jobs for future use through a web UI, without restarting the server.
And with each trigger, I want to send parameters to the JOB to perform a specific processing.
Example:
public class SendSMS implements Job {
    public void execute(JobExecutionContext jec) throws JobExecutionException {
        try {
            // param1..param3 are expected to come from the trigger's JobDataMap
            SendMessage(param1, param2, param3);
        } catch (Exception e) {
            throw new UnsupportedOperationException("Erreur : " + e.getMessage());
        }
    }
}
public class CronTriggers {
    public static void main(String[] args) throws Exception {
        JobKey jobKeySMS = new JobKey("SMSJob", "Groupe1");
        JobDetail jobDetailSMS = JobBuilder.newJob(SendSMS.class).withIdentity(jobKeySMS).build();

        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        scheduler.clear();
        scheduler.start();
        scheduler.scheduleJob(jobDetailSMS, DYNAMIC_TRIGGER); // DYNAMIC_TRIGGER recovered from the web UI
    }
}
Thanks

ArrayBlockingQueue synchronization in multi node deployment

In simple terms, I have a servlet whose response time is long, so I decided to divide it into two parts: one just composes a response to the client, and the second, let's say, performs some business logic and stores the result in the DB. To decrease the response time I execute the business logic asynchronously using a ThreadPoolExecutor in combination with an ArrayBlockingQueue. Using the ArrayBlockingQueue I can ensure the original FIFO ordering when requests from the same client arrive sequentially. This is an important prerequisite.
Here is a snippet:
Servlet
public class HelloServlet extends HttpServlet {

    AsyncExecutor exe = new AsyncExecutor();

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        PrintWriter w = resp.getWriter();
        exe.executeAsync(exe.new Task(this));
        w.print("HELLO TO CLIENT");
    }

    protected void someBusinessMethod() {
        // long time execution here
    }
}
and Executor
public class AsyncExecutor {

    static final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<Runnable>(10, true);
    static final Executor executor = new ThreadPoolExecutor(3, 5, 20L, TimeUnit.SECONDS, queue);

    public void executeAsync(Task t) {
        boolean isTaskAccepted = false;
        while (!isTaskAccepted) {
            try {
                executor.execute(t);
                isTaskAccepted = true;
            } catch (RejectedExecutionException e) {
                // the bounded queue is full; keep retrying until the task is accepted
            }
        }
    }

    class Task implements Runnable {

        private HelloServlet servlet;

        Task(HelloServlet servlet) {
            this.servlet = servlet;
        }

        @Override
        public void run() {
            // just call back to servlet's business method
            servlet.someBusinessMethod();
        }
    }
}
This implementation works fine if I deploy it to only one Tomcat node, since I have only one ArrayBlockingQueue in the application. But if I have several nodes and a load balancer in front of them, then I cannot guarantee FIFO ordering of requests for async execution for the same client, since there are now several queues.
My question is: how is it possible to guarantee the same order of asynchronous execution of requests for the same client in a clustered (multi-node) deployment? I think ActiveMQ is probably a solution (not preferable for me), or a load balancer configuration, or can it be implemented in code?
Thanks Sam for your prompt suggestions.
In the first post I described the problem in a very simplified way, so to clarify it better: let's say I have a legacy web app deployed to Tomcat which serves some licensing model (the old one). Then we got a new licensing model (a GlassFish app) and we need to use it alongside the old one and keep the two in sync. For the end user this integration must be transparent and non-intrusive. So a user request is served like this:
1. the caller sends a request (create subscription, for example)
2. execute the business logic of the new licensing model
3. execute the business logic of the old licensing model
4. regardless of the result of step 3, return the response of step 2, in the format of the old licensing model, back to the caller
5. (optional) handle a failure of step 3, if any
This was implemented with an Aspect which intercepts the requests of step 1 and executes the rest sequentially. And as I said in the previous post, step 3's execution time can be long, which is why I want to make it asynchronous. Let's have a look at a snippet of the Aspect (instead of the Servlet from the first post).
@Aspect
@Component
public class MyAspect {

    @Autowired
    private ApplicationContext ctx;

    @Autowired
    private AsyncExecutor asyncExecutor;

    @Around("@annotation(executeOpi)")
    public Object around(ProceedingJoinPoint jp, ExecuteOpi executeOpi) throws Throwable {
        LegacyMapper newModelExecutor = ctx.getBean(executeOpi.legacyMapper());
        // executes the new model and then returns the result in the format of the old model
        Object result = newModelExecutor.executeNewModelLogic(jp.getArgs());
        // executes the old model logic asynchronously
        asyncExecutor.executeAsync(asyncExecutor.new Task(this, jp));
        return result;
    }

    public void executeOldModelLogic(ProceedingJoinPoint jp) throws Throwable {
        // long time execution here
        jp.proceed();
    }
}
With this implementation, as in the first post, I can guarantee FIFO order of the executeOldModelLogic calls if requests come to the same Tomcat node. But with a multi-node deployment and a round-robin LB in front, I can end up with a case where, for the same caller, "update subscription in old model" reaches the ArrayBlockingQueue before "create subscription in old model", which is of course a bad logical bug.
And as for the points you suggested:
Points 1, 2 and 4: I probably can't use these as a solution, since I don't have object state as such. You can see that I pass references to the Aspect and the JoinPoint into the Runnable task so that it can call executeOldModelLogic back on the Aspect.
Point 3: I don't know about this one; it might be worthwhile to investigate.
Point 5: This is the direction I want to go for further investigation; I have a gut feeling it is the only way to solve my problem under the given conditions.
There are some solutions that come to mind off hand.
Use the database: post the jobs to be run in a database table, have a secondary server process run the jobs, and place the results in an output table. Then when users call back to the web page, it can pick up any results waiting for them from the output table (a rough sketch of this idea follows after the list).
Use a JMS service: this is a pretty lightweight messaging service, which would integrate with your Tomcat application reasonably well. The downside here is that you have to run another server side component, and build the integration layer with your app. But that's not a big disadvantage.
Switch to a full J2EE container (Java app server) and use an EJB Singleton. I have to admit, I don't have any experience with running a Singleton across separate server instances, but I believe that some of them may be able to handle it.
Use EHCache or some other distributed cache: I built a Queue wrapper around EHCache to enable it to be used like a FIFO queue, and it also has RMI (or JMS) replication, so multiple nodes will see the same data.
Use the Load Balancer: if your load balancer supports session-level balancing, then all requests for a single user session can be directed to the same node. In a big web environment where I worked, we were unable to share user session state across multiple servers, so we set up load balancing to ensure that the user's session was always directed to the same web server.
Hope some of these ideas help.
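Here is the rough sketch of the database idea from option 1; the pending_jobs table, its columns, and the LIMIT syntax are made up and would need adapting to your schema and database:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class DbJobQueue {

        private final DataSource ds;

        public DbJobQueue(DataSource ds) {
            this.ds = ds;
        }

        // called from the servlet on any node: just record the work and return immediately
        public void enqueue(String clientId, String payload) throws SQLException {
            try (Connection c = ds.getConnection();
                 PreparedStatement ps = c.prepareStatement(
                         "INSERT INTO pending_jobs (client_id, payload, status) VALUES (?, ?, 'NEW')")) {
                ps.setString(1, clientId);
                ps.setString(2, payload);
                ps.executeUpdate();
            }
        }

        // called by a single background worker: taking the oldest row first preserves FIFO across all nodes
        public void processNext() throws SQLException {
            try (Connection c = ds.getConnection();
                 PreparedStatement select = c.prepareStatement(
                         "SELECT id, payload FROM pending_jobs WHERE status = 'NEW' ORDER BY id LIMIT 1");
                 ResultSet rs = select.executeQuery()) {
                if (rs.next()) {
                    long id = rs.getLong("id");
                    // ... run the equivalent of someBusinessMethod() for rs.getString("payload") ...
                    try (PreparedStatement done = c.prepareStatement(
                            "UPDATE pending_jobs SET status = 'DONE' WHERE id = ?")) {
                        done.setLong(1, id);
                        done.executeUpdate();
                    }
                }
            }
        }
    }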

java ee background service

You'll have to excuse me if I'm describing this incorrectly, but essentially I'm trying to get a service-like class to be instantiated just once at server start and to sort of "exist" in the background until it is killed off at server stop. At least from what I can tell, this is not exactly the same as a typical servlet (though I may be wrong about this). What's even more important is that I need to also be able to access this service/object later down the line.
As an example, in another project I've worked on, we used the Spring Framework to accomplish something similar. Essentially, we used the configuration XML file along with the built-in annotations to let Spring know to instantiate instances of some of our services. Later down the line, we used the annotation #Autowired to sort of "grab" the object reference of this pre-instantiated service/object.
So, though it may seem against some of the major concepts of Java itself, I'm just trying to figure out how to reinvent this wheel here. I guess sometimes I feel like these big app frameworks do too much "black-box magic" behind the scenes that I'd really like to be able to fine-tune.
Thanks for any help and/or suggestions!
Oh, and I'm trying to run this all on JBoss 6.
Here's one way to do it. Add a servlet context listener to your web.xml, e.g.:
<listener>
<listener-class>com.example.BackgroundServletContextListener</listener-class>
</listener>
Then create that class to manage your background service. In this example I use a single-threaded ScheduledExecutorService to schedule it to run every 5 minutes:
public class BackgroundServletContextListener implements ServletContextListener {

    private ScheduledExecutorService executor;
    private BackgroundService service;

    public void contextInitialized(ServletContextEvent sce) {
        service = new BackgroundService();

        // set up a single thread to run the background service every 5 minutes
        executor = Executors.newSingleThreadScheduledExecutor();
        executor.scheduleAtFixedRate(service, 0, 5, TimeUnit.MINUTES);

        // make the background service available to the servlet context
        sce.getServletContext().setAttribute("service", service);
    }

    public void contextDestroyed(ServletContextEvent sce) {
        executor.shutdown();
    }
}

public class BackgroundService implements Runnable {
    public void run() {
        // do your background processing here
    }
}
If you need to access the BackgroundService from web requests, you can access it through the ServletContext. E.g.:
ServletContext context = request.getSession().getServletContext();
BackgroundService service = (BackgroundService) context.getAttribute("service");
Have you considered using an EJB 3.1 Session bean? These can be deployed in a war file, and can be annotated with #Singleton and #Startup.
A number of annotations available with EJB 3.1 are designed to bring Spring goodies into the Java EE framework. It may be the re-invention you're considering has been done for you.
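For illustration, a minimal sketch of such a bean, assuming an EJB 3.1 container such as JBoss 6 (the class name is an example):

    import javax.annotation.PostConstruct;
    import javax.annotation.PreDestroy;
    import javax.ejb.Singleton;
    import javax.ejb.Startup;

    @Singleton
    @Startup
    public class BackgroundServiceBean {

        @PostConstruct
        void start() {
            // runs once when the application is deployed / the server starts
        }

        @PreDestroy
        void stop() {
            // runs once when the application is undeployed / the server stops
        }
    }

Other components can then get hold of the same instance through @EJB injection, and the EJB @Schedule annotation covers periodic work without hand-rolled threads.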
If you must roll your own, you can create a servlet and configure it to start up when the application does, using load-on-startup. I built a system like that a few years ago. We then used the new(ish) java.util.concurrent stuff like ExecutorService to have it process work from other servlets.
More information about what you're trying to do, and why the existing ways of doing things is insufficient, would be helpful.
You can use messaging for that. Just send a message to the queue, and let the message listener do the processing asynchronously in the background.
You can use JMS for the implementation, and ActiveMQ as the message broker.
Spring has the JmsTemplate and JmsGatewaySupport APIs to make the JMS implementation simple:
http://static.springsource.org/spring/docs/3.0.x/spring-framework-reference/html/jms.html
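A rough sketch of that approach, with Spring's JmsTemplate on the sending side and a plain JMS MessageListener on the receiving side; the queue name and classes are examples, and the listener would normally be wired to the broker through something like a DefaultMessageListenerContainer:

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;
    import org.springframework.jms.core.JmsTemplate;

    public class JobProducer {

        private final JmsTemplate jmsTemplate;

        public JobProducer(JmsTemplate jmsTemplate) {
            this.jmsTemplate = jmsTemplate;
        }

        public void submit(String payload) {
            // the web request returns immediately; the broker queues the work
            jmsTemplate.convertAndSend("background.jobs", payload);
        }
    }

    public class JobConsumer implements MessageListener {

        @Override
        public void onMessage(Message message) {
            try {
                String payload = ((TextMessage) message).getText();
                // do the long-running processing for 'payload' here, off the request thread
            } catch (JMSException e) {
                throw new RuntimeException(e);
            }
        }
    }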

file listener process on tomcat

I need a very simple process that listens on a directory and
does some operation when a new file is created in that directory.
I guess I need a thread pool that does that.
This is very easy to implement using the Spring Framework, which I normally use, but I can't use it now.
I can only use Tomcat. How can I implement it? What is the entry point that "starts" that thread?
Does it have to be a servlet?
thanks
Since you refined the question, here comes another answer: how to start a daemon in Tomcat.
First, register your Daemons listener in web.xml:
<listener>
<listener-class>my.package.servlet.Daemons</listener-class>
</listener>
Then implement the Daemons class as an implementation of ServletContextListener, like this.
The loop body will run every 5 seconds, and Tomcat will call contextDestroyed when your app shuts down. Note that the active flag is volatile; otherwise you may have trouble shutting down cleanly on multi-core systems.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class Daemons implements ServletContextListener {

    private volatile boolean active = true;

    Runnable myDaemon = new Runnable() {
        public void run() {
            while (active) {
                try {
                    System.out.println("checking changed files...");
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    };

    public void contextInitialized(ServletContextEvent servletContextEvent) {
        new Thread(myDaemon).start();
    }

    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        active = false;
    }
}
You could create a listener to start the thread, however this isn't a good idea. When you are running inside a Web container, you shouldn't start your own threads. There are a couple of questions in Stack Overflow for why is this so. You could use Quartz (a scheduler framework), but I guess you couldn't achieve an acceptable resolution.
Anyway, what you are describing isn't a Web application, but rather a daemon service. You could implement this independently from your web application and create a means for them to communicate with each other.
True Java-only file notification will be added in Java 7. Here is a part of the javadoc that describes it roughly:
The implementation that observes events from the file system is intended to map directly on to the native file event notification facility where available
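For reference, a rough sketch of that Java 7 API (java.nio.file.WatchService) for the "new file in a directory" case; the directory path is made up:

    import java.nio.file.FileSystems;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardWatchEventKinds;
    import java.nio.file.WatchEvent;
    import java.nio.file.WatchKey;
    import java.nio.file.WatchService;

    public class DirectoryWatcher {
        public static void main(String[] args) throws Exception {
            Path dir = Paths.get("/var/incoming");
            WatchService watcher = FileSystems.getDefault().newWatchService();
            dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

            while (true) {
                WatchKey key = watcher.take();            // blocks until an event is available
                for (WatchEvent<?> event : key.pollEvents()) {
                    // the context of an ENTRY_CREATE event is the path of the new file, relative to dir
                    System.out.println("new file: " + dir.resolve((Path) event.context()));
                }
                if (!key.reset()) {                       // the directory is no longer accessible
                    break;
                }
            }
        }
    }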
Right now you will have to either create a native, platform-dependent program that does that for you,
or alternatively implement some kind of polling, which lists the directory every so often to detect changes.
There is a notification library over at SourceForge that you can use right now: on Linux it uses a C program to detect changes, and on Windows it uses polling. I did not try it out to see if it works.
