HK2 factory for Quartz jobs not destroying services after execution - java

I want to use Quartz Scheduler in my server application that uses HK2 for dependency injection. In order for Quartz jobs to have access to DI, they need to be DI-managed themselves. As a result, I wrote a super simple HK2-aware job factory and registered it with the scheduler.
It works fine with instantiation of services, observing the requested @Singleton or @PerLookup scope. However, it's failing to destroy() non-singleton services (= jobs) after they are finished.
Question: how do I get HK2 to manage jobs properly, including tearing them down again?
Do I need to go down the path of creating the service via serviceLocator.getServiceHandle() and later manually destroy the service, maybe from a JobListener (but how would I get the ServiceHandle there)?
Hk2JobFactory.java
@Service
public class Hk2JobFactory implements JobFactory {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Inject
    ServiceLocator serviceLocator;

    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        JobDetail jobDetail = bundle.getJobDetail();
        Class<? extends Job> jobClass = jobDetail.getJobClass();
        try {
            log.debug("Producing instance of Job '" + jobDetail.getKey() + "', class=" + jobClass.getName());
            Job job = serviceLocator.getService(jobClass);
            if (job == null) {
                log.debug("Unable to instantiate job via ServiceLocator, returning unmanaged instance.");
                return jobClass.newInstance();
            }
            return job;
        } catch (Exception e) {
            throw new SchedulerException(
                    "Problem instantiating class '" + jobDetail.getJobClass().getName() + "'", e);
        }
    }
}
HelloWorldJob.java
@Service
@PerLookup
public class HelloWorldJob implements Job {

    private final Logger log = LoggerFactory.getLogger(this.getClass());

    @PostConstruct
    public void setup() {
        log.info("I'm born!");
    }

    @PreDestroy
    public void shutdown() {
        // it's never called... :-(
        log.info("And I'm dead again");
    }

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        log.info("Hello, world!");
    }
}

Similar to @jwells131313's suggestion, I have implemented a JobListener that destroy()s instances of jobs where appropriate. To facilitate that, I pass along the ServiceHandle in the job's DataMap.
The only difference is that I'm quite happy with the @PerLookup scope.
Hk2JobFactory.java:
@Service
public class Hk2JobFactory implements JobFactory {

    private final Logger log = LoggerFactory.getLogger(getClass());

    @Inject
    ServiceLocator serviceLocator;

    @Override
    public Job newJob(TriggerFiredBundle bundle, Scheduler scheduler) throws SchedulerException {
        JobDetail jobDetail = bundle.getJobDetail();
        Class<? extends Job> jobClass = jobDetail.getJobClass();
        try {
            log.debug("Producing instance of job {} (class {})", jobDetail.getKey(), jobClass.getName());
            ServiceHandle<? extends Job> sh = serviceLocator.getServiceHandle(jobClass);
            if (sh != null) {
                Class<? extends Annotation> scopeAnnotation = sh.getActiveDescriptor().getScopeAnnotation();
                if (log.isTraceEnabled()) log.trace("Service scope is {}", scopeAnnotation.getName());
                if (scopeAnnotation == PerLookup.class) {
                    // @PerLookup scope means: needs to be destroyed after execution
                    // (SERVICE_HANDLE_KEY is the constant defined in Hk2CleanupJobListener below)
                    jobDetail.getJobDataMap().put(SERVICE_HANDLE_KEY, sh);
                }
                return jobClass.cast(sh.getService());
            }
            log.debug("Unable to instantiate job via ServiceLocator, returning unmanaged instance");
            return jobClass.newInstance();
        } catch (Exception e) {
            throw new SchedulerException(
                    "Problem instantiating class '" + jobDetail.getJobClass().getName() + "'", e);
        }
    }
}
Hk2CleanupJobListener.java:
public class Hk2CleanupJobListener extends JobListenerSupport {

    public static final String SERVICE_HANDLE_KEY = "hk2_serviceHandle";

    private final Map<String, String> mdcCopy = MDC.getCopyOfContextMap();

    @Override
    public String getName() {
        return getClass().getSimpleName();
    }

    @Override
    public void jobWasExecuted(JobExecutionContext context, JobExecutionException jobException) {
        JobDetail jobDetail = context.getJobDetail();
        ServiceHandle sh = (ServiceHandle) jobDetail.getJobDataMap().get(SERVICE_HANDLE_KEY);
        if (sh == null) {
            if (getLog().isTraceEnabled()) getLog().trace("No serviceHandle found");
            return;
        }
        Class scopeAnnotation = sh.getActiveDescriptor().getScopeAnnotation();
        if (scopeAnnotation == PerLookup.class) {
            if (getLog().isTraceEnabled()) getLog().trace("Destroying job {} after it was executed (class {})",
                    jobDetail.getKey(),
                    jobDetail.getJobClass().getName());
            sh.destroy();
        }
    }
}
Both are registered with the Scheduler.
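For reference, wiring both pieces into a plain Quartz scheduler looks roughly like this (a sketch; obtaining the factory from the ServiceLocator is an assumption of this setup):

Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
// let HK2 produce job instances
scheduler.setJobFactory(serviceLocator.getService(Hk2JobFactory.class));
// clean up @PerLookup jobs after execution
scheduler.getListenerManager().addJobListener(new Hk2CleanupJobListener());
scheduler.start();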

For Singletons:
Seems like a Singleton service would NOT be destroyed when the job is finished, because it is a Singleton, right? If you are expecting the Singleton to be destroyed at the end of the Job then it seems like the service is more of a "JobScope" and not really a Singleton scope.
JobScope:
If "Jobs" follow certain rules then it might be an good candidate for an "Operation" scope (please see Operation Example). In particular jobs can be in an "Operation" scope if:
There can be many parallel jobs going at once
There can only be one job active on a thread at a time
Note that the above rules also mean that Jobs can exist on multiple threads at the same or at different times. The most important rule is that on a single thread only one Job can be active at a time.
If those two rules apply then I highly recommend writing an Operation scope that's something like "JobScope".
This is how you could define a JobScope if Jobs follow the rules above:
@Scope
@Proxiable(proxyForSameScope = false)
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
public @interface JobScope {
}
And this would be the entire implementation of the corresponding Context:
@Singleton
public class JobScopeContext extends OperationContext<JobScope> {

    public Class<? extends Annotation> getScope() {
        return JobScope.class;
    }
}
You would then use the OperationManager service to start and stop Jobs when, you know, Jobs start and stop.
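As a hedged sketch of that lifecycle (assuming the OperationManager and OperationHandle API from HK2's extras module and the usual AnnotationLiteral pattern; JobScopeImpl is a name invented here):

// Hypothetical literal so the scope annotation can be passed around as an instance
public class JobScopeImpl extends AnnotationLiteral<JobScope> implements JobScope {
    public static final JobScope INSTANCE = new JobScopeImpl();
}

// Around each job execution, e.g. from a JobListener or the JobFactory:
OperationManager operationManager = serviceLocator.getService(OperationManager.class);
OperationHandle<JobScope> jobOperation = operationManager.createAndStartOperation(JobScopeImpl.INSTANCE);
try {
    // ... the job executes; JobScope'd services live inside this operation ...
} finally {
    jobOperation.closeOperation(); // destroys all services created in this operation
}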
Even if Jobs do not follow the rules for an "Operation" you still might want to use a "JobScope" scope that would know to destroy its services when a "Job" comes to its end.
PerLookup:
So if your question is about PerLookup scope objects, you could run into some trouble, because you probably need the original ServiceHandle, which it sounds like you wouldn't have. In that case, and if you can at least find out that the original service WAS in fact in PerLookup scope, you can use ServiceLocator.preDestroy to destroy the object.
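In code, that fallback could look roughly like this (a sketch from within a JobListener, assuming the job instance is all you have):

// jobWasExecuted(...) in a JobListener: no ServiceHandle available here
Job job = context.getJobInstance();
// preDestroy runs the @PreDestroy lifecycle on an object the locator did not retain
serviceLocator.preDestroy(job);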

Related

Catch Return Value of An Interceptor

I would like to retrieve the return value of this interceptor:
https://arjan-tijms.omnifaces.org/2012/01/cdi-based-asynchronous-alternative.html
@Interceptor
@Asynchronous
@Priority(PLATFORM_BEFORE)
public class AsynchronousInterceptor implements Serializable {

    private static final long serialVersionUID = 1L;

    @Resource
    private ManagedExecutorService managedExecutorService;

    private static final ThreadLocal<Boolean> asyncInvocation = new ThreadLocal<Boolean>();

    @AroundInvoke
    public synchronized Object submitAsync(InvocationContext ctx) throws Exception {
        if (TRUE.equals(asyncInvocation.get())) {
            return ctx.proceed();
        }
        return new FutureDelegator(managedExecutorService.submit(() -> {
            try {
                asyncInvocation.set(TRUE);
                return ctx.proceed();
            } finally {
                asyncInvocation.remove();
            }
        }));
    }
}
Here is a CDI bean of mine profiting from the AsynchronousInterceptor by letting data be loaded asynchronously:
public class SomeCDI {
    @Asynchronous
    public void loadDataAsync() { .... }
}
This is how I use the CDI bean later in code:
@Inject
SomeCDI dataLoader;
...
dataLoader.loadDataAsync(); // the loading starts asynchronously, but I never find out when the Future is done
So my question is: how do I retrieve the return value (in my example, from FutureDelegator)?
You won't. Asynchronous invocations on EJB, and in the model suggested by Tijms, are "fire and forget": you invoke them and let them do their job. Eventually, you can make the async method fire some event when it ends to "return" the result, observing this event to give the user some response (websockets, maybe?).
Ideally, the asynchronous method should be void and hand its result to some callback.
Note that the CDI 2.0 event model has the fireAsync method, which should be used instead of your own implementation, as it already has the proper contexts and can be enriched by transaction markers and custom options (when using the NotificationOptions method signature).
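For illustration, a minimal sketch of that async event model (ResultEvent and its payload are hypothetical names):

// Producer: fireAsync returns a CompletionStage that completes after all async observers ran
@Inject
Event<ResultEvent> resultEvent;

public void loadDataAsync() {
    ResultEvent payload = new ResultEvent(/* result data */);
    resultEvent.fireAsync(payload)
               .thenAccept(e -> System.out.println("async observers finished"));
}

// Consumer: observes asynchronously, on a separate thread
public void onResult(@ObservesAsync ResultEvent event) {
    // push the result to the user, e.g. over a websocket
}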

How to use blocking queue in Spring Boot?

I am trying to use BlockingQueue inside Spring Boot. My design was like this: a user submits a request via a controller, and the controller in turn puts some objects onto a blocking queue. After that, the consumer should be able to take the objects and process them further.
I have used Async, ThreadPool and EventListener. However, with my code below I found the consumer class is not consuming objects. Could you please help point out how to improve it?
Queue Configuration
@Bean
public BlockingQueue<MyObject> myQueue() {
    // note: a bare PriorityBlockingQueue requires MyObject to implement Comparable
    return new PriorityBlockingQueue<>();
}

@Bean
public Executor getAsyncExecutor() {
    ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
    executor.setCorePoolSize(3);
    executor.setMaxPoolSize(3);
    executor.setQueueCapacity(10);
    executor.setThreadNamePrefix("Test-");
    executor.initialize();
    return executor;
}
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() throws InterruptedException {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
}
Consumer Class
@Autowired
private BlockingQueue<MyObject> myQueue;

@EventListener
public void onApplicationEvent(ContextRefreshedEvent event) {
    consume();
}

@Async
public void consume() {
    while (true) {
        try {
            MyObject myObject = myQueue.take();
            // process myObject
        } catch (Exception e) {
            // note: swallowing every exception here also hides thread interruption
        }
    }
}
Your idea is to use a queue to store messages while the consumer listens for Spring events and consumes them. However, I don't see your code actually publishing an event; it only stores objects in the queue.
If you want to use Spring Events, producers could look like this:
@Autowired
private ApplicationEventPublisher applicationEventPublisher;

public void doStuffAndPublishAnEvent(final String message) {
    System.out.println("Publishing custom event. ");
    CustomSpringEvent customSpringEvent = new CustomSpringEvent(this, message);
    applicationEventPublisher.publishEvent(customSpringEvent);
}
check this doc
If you still want to use BlockingQueue, your consumer should be a running thread, continuously waiting for tasks in the queue, like:
public class NumbersConsumer implements Runnable {

    private BlockingQueue<Integer> queue;
    private final int poisonPill;

    public NumbersConsumer(BlockingQueue<Integer> queue, int poisonPill) {
        this.queue = queue;
        this.poisonPill = poisonPill;
    }

    public void run() {
        try {
            while (true) {
                Integer number = queue.take(); // always waiting
                if (number.equals(poisonPill)) {
                    return;
                }
                System.out.println(Thread.currentThread().getName() + " result: " + number);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
You could check this code example.
@Async doesn't actually start a new thread if the target method is called from within the same object instance; this could be the problem in your case.
Also note that you need to put @EnableAsync on a config class to enable the @Async annotation.
See Spring documentation: https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#scheduling-annotation-support
The default advice mode for processing #Async annotations is proxy which allows for interception of calls through the proxy only. Local calls within the same class cannot get intercepted that way. For a more advanced mode of interception, consider switching to aspectj mode in combination with compile-time or load-time weaving.
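To make that concrete, here is a minimal sketch (class names assumed) that moves consume() into a separate bean, so the call crosses a proxy boundary and @Async takes effect:

@Configuration
@EnableAsync
public class AsyncConfig {
}

@Component
public class QueueConsumer {

    @Autowired
    private BlockingQueue<MyObject> myQueue;

    @Async
    public void consume() {
        while (true) {
            try {
                MyObject myObject = myQueue.take();
                // process myObject
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}

@Component
public class ConsumerStarter {

    @Autowired
    private QueueConsumer queueConsumer;

    @EventListener
    public void onApplicationEvent(ContextRefreshedEvent event) {
        queueConsumer.consume(); // crosses the bean boundary, so the @Async proxy intercepts it
    }
}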
In the end I came up with this solution.
Rest Controller
@Autowired
BlockingQueue<MyObject> myQueue;

@RequestMapping(path = "/api/produce")
public void produce() throws InterruptedException {
    /* Do something */
    MyObject myObject = new MyObject();
    myQueue.put(myObject);
    Consumer.consume();
}
It is a little bit weird because you have to first put the object on the queue yourself and then consume that object yourself. Any suggestions for improvement are highly appreciated.

How to create multiple instances of osgi service without using DS annotations

I have created an OSGi service. I want a new instance of my service to be created each time a service request comes in.
The code looks like this:
@Component(immediate = true)
@Service(serviceFactory = true)
@Property(name = EventConstants.EVENT_TOPIC, value = { DEPLOY, UNDEPLOY })
public class XyzHandler implements EventHandler {

    private Consumer consumer;

    public void setConsumer(Consumer consumer) {
        this.consumer = consumer;
    }

    @Override
    public void handleEvent(final Event event) {
        consumer.notifyConsumer();
    }
}
public class Consumer {

    private DataSourceCache cache;

    // a no-arg notify() would clash with the final Object.notify(), hence the name
    public void notifyConsumer() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}
In my Consumer class, I want to access the service instance of XyzHandler and set its consumer attribute. I would also like a new service instance of XyzHandler to be created for each request.
I found a few articles mentioning that this can be achieved using OSGi declarative service annotations:
OSGi how to run mutliple instances of one service
But I want to achieve this without using DS 1.3.
How can I do this without using annotations or how can it be done using DS 1.2?
To me this looks like a case of having asked a question based on what you think the answer is rather than describing what you're trying to achieve. If we take a few steps back then a more elegant solution exists.
In general injecting objects into stateful services is a bad pattern in OSGi. It forces you to be really careful about the lifecycle, and risks memory leaks. From the example code it appears as though what you really want is for your Consumer to get notified when an event occurs on an Event Admin topic. The easiest way to do this would be to remove the XyzHandler from the equation and make the Consumer an Event Handler like this:
@Component(property = { EventConstants.EVENT_TOPIC + "=" + DEPLOY,
                        EventConstants.EVENT_TOPIC + "=" + UNDEPLOY })
public class Consumer implements EventHandler {

    private DataSourceCache cache;

    @Override
    public void handleEvent(final Event event) {
        notifyConsumer();
    }

    public void notifyConsumer() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}
If you really don't want to make your Consumer an EventHandler then it would still be easier to register the Consumer as a service and use the whiteboard pattern to get it picked up by a single XyzHandler:
@Component(service = Consumer.class)
public class Consumer {

    private DataSourceCache cache;

    public void notifyConsumer() {
        updateCache(cache);
        System.out.println("cache updated");
    }

    public void updateCache(DataSourceCache cache) {
        cache = null;
    }
}
@Component(property = { EventConstants.EVENT_TOPIC + "=" + DEPLOY,
                        EventConstants.EVENT_TOPIC + "=" + UNDEPLOY })
public class XyzHandler implements EventHandler {

    // Use a thread-safe list for dynamic references!
    private List<Consumer> consumers = new CopyOnWriteArrayList<>();

    @Reference(cardinality = MULTIPLE, policy = DYNAMIC)
    void addConsumer(Consumer consumer) {
        consumers.add(consumer);
    }

    void removeConsumer(Consumer consumer) {
        consumers.remove(consumer);
    }

    @Override
    public void handleEvent(final Event event) {
        consumers.forEach(this::notify);
    }

    private void notify(Consumer consumer) {
        try {
            consumer.notifyConsumer();
        } catch (Exception e) {
            // TODO log this?
        }
    }
}
Using the whiteboard pattern in this way avoids you needing to track which XyzHandler needs to be created/destroyed when a bundle is started or stopped, and will keep your code much cleaner.
It sounds like your service needs to be a prototype scope service. This was introduced in Core R6. DS 1.3, from Compendium R6, includes support for components to be prototype scope services.
But DS 1.2 predates Core R6 and thus has no knowledge or support for prototype scope services.
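For completeness, a minimal DS 1.3 sketch of a prototype scope component (the class name is made up):

// DS 1.3 (Compendium R6): every caller can obtain its own instance
@Component(scope = ServiceScope.PROTOTYPE)
public class XyzHandlerImpl implements EventHandler {

    @Override
    public void handleEvent(Event event) {
        // handle the event with per-instance state
    }
}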

Java EJB Modify Schedule Property Values At Runtime

I have a Singleton class in Java with a timer using the @Schedule annotation. I wish to change the property of the Schedule at runtime. Below is the code:
@Startup
@Singleton
public class Listener {

    public void setProperty() {
        Method[] methods = this.getClass().getDeclaredMethods();
        Method method = methods[0];
        Annotation[] annotations = method.getDeclaredAnnotations();
        Annotation annotation = annotations[0];
        if (annotation instanceof Schedule) {
            Schedule schedule = (Schedule) annotation;
            System.out.println(schedule.second());
        }
    }

    @PostConstruct
    public void runAtStartUp() {
        setProperty();
    }

    @Schedule(second = "3")
    public void run() {
        // do something
    }
}
I wish to change the value of Schedule's second attribute at runtime, based on the information from a property file (which contains the configuration). Is this actually possible? I tried to do @Schedule(second = SOME_VARIABLE) where private static String SOME_VARIABLE = readFromConfigFile(); this does not work, because the annotation expects a compile-time constant (i.e. a final field), and I don't want to make it final.
I also saw this post: Modifying annotation attribute value at runtime in java
It shows this is not possible to do.
Any ideas?
EDIT:
@Startup
@Singleton
public class Listener {

    @javax.annotation.Resource // the issue is this
    private javax.ejb.TimerService timerService;

    private static String SOME_VARIABLE = null;

    @PostConstruct
    public void runAtStartUp() {
        SOME_VARIABLE = readFromFile();
        timerService.createTimer(new Date(), TimeUnit.SECONDS.toMillis(Long.parseLong(SOME_VARIABLE)), null);
    }

    @Timeout
    public void check(Timer timer) {
        // some code runs every SOME_VARIABLE seconds
    }
}
The issue is injecting using @Resource. How can this be fixed?
The Exception is shown below:
No EJBContainer provider available The following providers: org.glassfish.ejb.embedded.EJBContainerProviderImpl Returned null from createEJBContainer call
javax.ejb.EJBException
org.glassfish.ejb.embedded.EJBContainerProviderImpl
at javax.ejb.embeddable.EJBContainer.reportError(EJBContainer.java:186)
at javax.ejb.embeddable.EJBContainer.createEJBContainer(EJBContainer.java:121)
at javax.ejb.embeddable.EJBContainer.createEJBContainer(EJBContainer.java:78)
@BeforeClass
public void setUpClass() throws Exception {
    EJBContainer container = EJBContainer.createEJBContainer();
}
This occurs during unit testing using the Embeddable EJB Container. Some of the Apache Maven code is located on this post: Java EJB JNDI Beans Lookup Failed
I think the solution you are looking for was discussed here.
TomasZ is right: you should use programmatic timers with the TimerService for situations where you want to change the schedule dynamically at runtime.
Maybe you could use the TimerService. I have written some code, but on my WildFly 8 it seems to run multiple times even though it's a Singleton.
Documentation http://docs.oracle.com/javaee/6/tutorial/doc/bnboy.html
Hope this helps:
@javax.ejb.Singleton
@javax.ejb.Startup
public class VariableEjbTimer {

    @javax.annotation.Resource
    javax.ejb.TimerService timerService;

    @javax.annotation.PostConstruct
    public void runAtStartUp() {
        createTimer(2000L);
    }

    private void createTimer(long millis) {
        //timerService.createSingleActionTimer(millis, new javax.ejb.TimerConfig());
        timerService.createTimer(millis, millis, null);
    }

    @javax.ejb.Timeout
    public void run(javax.ejb.Timer timer) {
        long timeout = readFromConfigFile();
        System.out.println("Timeout in " + timeout);
        createTimer(timeout);
    }

    private long readFromConfigFile() {
        return new java.util.Random().nextInt(5) * 1000L;
    }
}

How to pass instance variables into Quartz job?

I wonder how to pass an instance variable externally into a Quartz job.
Below is pseudo code I would like to write. How can I pass externalInstance into this Job?
public class SimpleJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        float avg = externalInstance.calculateAvg();
    }
}
You can put your instance in the SchedulerContext. Just before you schedule the job, do the following:
getScheduler().getContext().put("externalInstance", externalInstance);
Your job class would then look like this:
public class SimpleJob implements Job {

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        SchedulerContext schedulerContext = null;
        try {
            schedulerContext = context.getScheduler().getContext();
        } catch (SchedulerException e1) {
            e1.printStackTrace();
        }
        ExternalInstance externalInstance =
                (ExternalInstance) schedulerContext.get("externalInstance");
        float avg = externalInstance.calculateAvg();
    }
}
If you are using Spring, you can actually use Spring's support to inject the whole ApplicationContext, as answered in the link.
While scheduling the job using a trigger, you would have defined a JobDataMap that is added to the JobDetail. That JobDetail object will be present in the JobExecutionContext passed to the execute() method of your Job. So, you should figure out a way to pass your externalInstance through the JobDataMap. HTH.
Add the object to the JobDataMap:
JobDetail job = JobBuilder.newJob(MyJobClass.class)
        .withIdentity("MyIdentity", "MyGroup")
        .build();
job.getJobDataMap().put("MyObject", myObject);
Access the data from the JobDataMap:
var myObject = (MyObjectClass) context.getJobDetail()
        .getJobDataMap()
        .get("MyObject"); // the key must match the one used in put()
You can solve this problem by creating an interface with a HashMap holding the required information.
Implement this interface in your Quartz Job class, and the information will be accessible there.
In the interface:
Map<JobKey, Object> map = new HashMap<>();
In the Job:
map.get(context.getJobDetail().getKey()); // gives you the Object
Quartz has a simple way to grab params from the JobDataMap using setters.
I am using Quartz 2.3 and I simply used a setter to fetch passed instance objects.
For example, I created this class:
public class Data implements Serializable {

    @JsonProperty("text")
    private String text;

    @JsonCreator
    public Data(@JsonProperty("text") String text) { this.text = text; }

    public String getText() { return text; }
}
Then I created an instance of this class and put it inside the JobDataMap:
JobDataMap jobDataMap = new JobDataMap();
jobDataMap.put("data", new Data("One!"));
JobDetail job = newJob(HelloJob.class)
        .withIdentity("myJob", "group")
        .withDescription("bla bla bla")
        .usingJobData(jobDataMap) // <!!!
        .build();
And my job class looks like this:
public class HelloJob implements Job {

    Data data;

    public HelloJob() {}

    public void execute(JobExecutionContext context)
            throws JobExecutionException {
        String text = data.getText();
        System.out.println(text);
    }

    public void setData(Data data) { this.data = data; }
}
Note: it is mandatory that the field and the setter match the key.
This code will print One! when you schedule the job.
That's it, clean and efficient
This is the responsibility of the JobFactory. The default PropertySettingJobFactory implementation will invoke any bean-setter methods, based on properties found in the scheduler context, the trigger, and the job detail. So as long as you have implemented an appropriate setContext() setter method, you should be able to do any of the following:
scheduler.getContext().put("context", context);
Or
Trigger trigger = TriggerBuilder.newTrigger()
        ...
        .usingJobData("context", context)
        .build();
Or
JobDetail job = JobBuilder.newJob(SimpleJob.class)
        ...
        .usingJobData("context", context)
        .build();
Or if that isn't enough you can provide your own JobFactory class which instantiates the Job objects however you please.
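For illustration, the receiving side could look like this sketch (assuming PropertySettingJobFactory is installed on the scheduler, and ExternalInstance is the type from the question): the data-map key context is matched against the bean setter setContext().

public class SimpleJob implements Job {

    private ExternalInstance context; // populated by PropertySettingJobFactory via setContext(...)

    public void setContext(ExternalInstance context) {
        this.context = context;
    }

    @Override
    public void execute(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        float avg = context.calculateAvg();
    }
}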
