I'm not sure how to title this issue, but hopefully the description gives a better explanation. I am looking for a way to annotate an EJB or CDI method with a custom annotation like @Duration, so that the method's execution is killed if it takes longer than the given duration. I guess some pseudo code will make everything clear:
public class MyEJBorCdiBean {

    @Duration(seconds = 5)
    public List<Data> complexTask(..., ...)
    {
        while (..) {
            // this takes more time than the given 5 seconds, so throw an exception
        }
    }
}
To sum up: when a method takes extremely long, it should throw some kind of "duration expired" error.
It's kind of a timeout mechanism. I don't know if something like this already exists; I am new to the Java EE world.
Thanks in advance guys
You are not supposed to use the threading API inside an EJB/CDI container. The EJB spec clearly states that:
The enterprise bean must not attempt to manage threads. The enterprise
bean must not attempt to start, stop, suspend, or resume a thread, or
to change a thread’s priority or name. The enterprise bean must not
attempt to manage thread groups.
Managed beans and the invocation of their business methods have to be fully controlled by the container in order to avoid corruption of their state. Depending on your use case, either offload this operation to a dedicated service (outside Java EE), or come up with a semi-hacky solution using an EJB @Singleton and @Schedule, so that you can periodically check a control flag. If you are running on WildFly/JBoss, you can misuse the @TransactionTimeout annotation for this: as EJB methods are transaction-aware by default, setting the timeout on the transaction will effectively control the invocation timeout of the bean method. I am not sure how this is supported on other application servers.
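For illustration, a minimal sketch of that WildFly/JBoss approach, assuming the vendor-specific annotation org.jboss.ejb3.annotation.TransactionTimeout (the Data type is a placeholder borrowed from the question):

import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

import javax.ejb.Stateless;

import org.jboss.ejb3.annotation.TransactionTimeout;

@Stateless
public class ComplexTaskBean {

    // The transaction is marked for rollback once 5 seconds have elapsed,
    // which effectively bounds the invocation time of this method.
    @TransactionTimeout(value = 5, unit = TimeUnit.SECONDS)
    public List<Data> complexTask() {
        // long-running, transactional work goes here
        return Collections.emptyList();
    }
}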
If async processing is an option, then EJB @Asynchronous could be of some help: see the Asynchronous tutorial - Cancelling an asynchronous operation.
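As a rough illustration of that approach (a sketch only; Data, moreWorkToDo() and doOneChunk() are placeholders), the bean can periodically check SessionContext.wasCancelCalled() while the caller holds the returned Future:

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

import javax.annotation.Resource;
import javax.ejb.AsyncResult;
import javax.ejb.Asynchronous;
import javax.ejb.SessionContext;
import javax.ejb.Stateless;

@Stateless
public class ComplexTaskBean {

    @Resource
    private SessionContext ctx;

    @Asynchronous
    public Future<List<Data>> complexTask() {
        List<Data> result = new ArrayList<>();
        while (moreWorkToDo()) {
            // cooperative cancellation: true once the client has called
            // Future.cancel(true) on the handle returned to it
            if (ctx.wasCancelCalled()) {
                break;
            }
            result.add(doOneChunk());
        }
        return new AsyncResult<>(result);
    }
}

The caller would keep the Future<List<Data>> returned by complexTask() and call future.cancel(true) once the acceptable duration has passed.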
As a general piece of advice: do not run long-running operations in EJB/CDI. Every request will consume a thread, threads are a limited resource, and your app will be much harder to scale and maintain (a long-running operation more or less implies state). What happens if your server crashes during the method invocation? How would the use case work in a clustered environment? Again, it is hard to say what the better approach is without understanding your use case, but investigate the Java EE Batch API, JMS with message-driven beans, or asynchronous processing with @Asynchronous.
Limiting a complex task to a certain execution time is a very reasonable idea. In practical web computing, many users will be unwilling to wait for a complex search task to complete when its duration exceeds a maximum acceptable amount of time.
The enterprise container controls the thread pool and the allocation of CPU resources among the active threads. In doing so it also takes retention times during time-consuming I/O tasks (typically disk access) into account.
Nevertheless, it makes sense to record a task start time and, every now and then during the complex task, check how long that particular task has been running. I advise you to program a local, runnable task which picks scheduled jobs from a job queue. I have experience with this from a Java Enterprise backend application running under GlassFish.
First, the annotation definition, Duration.java:
// Duration.java
import java.lang.annotation.*;
import javax.inject.Qualifier;

@Qualifier
@Target({ElementType.TYPE, ElementType.FIELD, ElementType.PARAMETER, ElementType.METHOD})
@Documented
@Retention(RetentionPolicy.RUNTIME)
public @interface Duration {
    public int minutes() default 0; // 0 means no time limit (default)
}
Next follows the definition of the job, TimelyJob.java:
// TimelyJob.java
import java.time.LocalDateTime;
import java.util.UUID;

@Duration(minutes = 5)
public class TimelyJob {

    private LocalDateTime localDateTime = LocalDateTime.now();
    private UUID uniqueTaskIdentifier;
    private String uniqueOwnerId;

    public TimelyJob(UUID uniqueTaskIdentifier, String uniqueOwnerId) {
        this.uniqueTaskIdentifier = uniqueTaskIdentifier;
        this.uniqueOwnerId = uniqueOwnerId;
    }

    public void processUntilMins() {
        final int minutes = this.getClass().getAnnotation(Duration.class).minutes();
        while (true) {
            // do some heavy Java task for a time unit, then pause and check the total time
            // break when finished
            if (minutes > 0 && localDateTime.plusMinutes(minutes).isBefore(LocalDateTime.now())) {
                break; // the allowed duration has expired
            }
            try {
                Thread.sleep(5);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
        // store result data in a result class, with 'synchronized' access
    }

    public LocalDateTime getLocalDateTime() {
        return localDateTime;
    }

    public UUID getUniqueTaskIdentifier() {
        return uniqueTaskIdentifier;
    }

    public String getUniqueOwnerId() {
        return uniqueOwnerId;
    }
}
The Runnable task that executes the timed jobs - TimedTask.java - is implemented as follows:
// TimedTask.java
import java.util.concurrent.LinkedBlockingQueue;

public class TimedTask implements Runnable {

    private LinkedBlockingQueue<TimelyJob> jobQueue = new LinkedBlockingQueue<TimelyJob>();

    public void setJobQueue(TimelyJob job) {
        this.jobQueue.add(job);
    }

    @Override
    public void run() {
        while (true) {
            try {
                TimelyJob nextJob = jobQueue.take();
                nextJob.processUntilMins();
                Thread.sleep(100);
            } catch (InterruptedException e) {
                System.err.print(e);
            }
        }
    }
}
and, in separate code, the starting of the TimedTask:
public void initJobQueue() {
new Thread(new TimedTask()).start();
}
This functionality actually implements a batch-job scheduler in Java, using annotations to control the end-task time limit.
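One caveat: as written, initJobQueue() does not keep a reference to the TimedTask, so nothing can enqueue jobs afterwards. A minimal usage sketch (the JobQueueBootstrap class and submitJob method are purely illustrative) that keeps that reference:

import java.util.UUID;

public class JobQueueBootstrap {

    private final TimedTask timedTask = new TimedTask();

    public void initJobQueue() {
        new Thread(timedTask).start();
    }

    // illustrative: enqueue a job whose run time is bounded by its @Duration annotation
    public void submitJob(String ownerId) {
        timedTask.setJobQueue(new TimelyJob(UUID.randomUUID(), ownerId));
    }
}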
Related
I need to schedule a task to run after 2 minutes. When the time is up, I need to check if we are still ONLINE. If we are still online, I simply don't do anything. If OFFLINE, then I will do some work.
private synchronized void schedule(ConnectionObj connectionObj)
{
if(connectionObj.getState() == ONLINE)
{
// schedule timer
}
else
{
// cancel task.
}
}
This is the code I am considering:
@Async
private synchronized void task(ConnectionObj connectionObj)
{
try
{
Thread.sleep(2000); // short time for test
}
catch (InterruptedException e)
{
e.printStackTrace();
}
if(connectionObj.getState() == ONLINE)
{
// don't do anything
}
else
{
doWork();
}
}
For scheduling this task, should I use @Async? I may still get many more calls to schedule() while I am waiting inside the task() method.
Does Spring Boot have something like a thread that I create each time schedule() gets called, so that this becomes easy?
I am looking for something similar to postDelayed() from Android: how to use postDelayed() correctly in android studio?
I'm not sure about an exclusively spring-boot solution, since it isn't something that I work with.
However, you can use ScheduledExecutorService, which is in the base Java environment. For your usage, it would look something like this:
@Async
private synchronized void task(ConnectionObj connectionObj)
{
Executors.newScheduledThreadPool(1).schedule(() -> {
if(connectionObj.getState() == ONLINE)
{
// don't do anything
}
else
{
doWork();
}
}, 2, TimeUnit.MINUTES);
}
I used lambda expressions, which are explained here.
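One caveat with the snippet above: Executors.newScheduledThreadPool(1) creates a brand-new pool on every call, so repeated calls to task() will leak threads. A hedged variant that reuses a single scheduler (ConnectionObj, ONLINE and doWork() are taken from the question's snippet):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ConnectionScheduler {

    // one shared scheduler for all scheduled checks
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void task(ConnectionObj connectionObj) {
        scheduler.schedule(() -> {
            if (connectionObj.getState() != ONLINE) {
                doWork();
            }
        }, 2, TimeUnit.MINUTES);
    }
}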
Update
Seeing as how you need to schedule them "on-demand", @Scheduled won't help, as you mentioned. I think the simplest solution is to go for something like what @Leftist proposed.
Otherwise, as I mentioned in the comments, you can look at the Spring Boot Quartz integration to create a job and schedule it with Quartz. It will then take care of running it after the two-minute mark. It's just more code for almost the same result.
Original
For Spring Boot, you can use the built-in scheduling support. It will take care of running your code on time, on a separate thread.
As the article states, you must enable scheduling with @EnableScheduling.
Then you annotate the method you want to run with @Scheduled(..), and you can set up either a fixedDelay or a cron expression, or any of the other timing options to suit your execution requirements.
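For illustration, a minimal sketch of that setup (the SchedulingConfig and ConnectionChecker classes are made up for the example):

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
    // enables detection of @Scheduled annotations
}

@Component
class ConnectionChecker {

    // runs 2 minutes after the previous execution finishes
    @Scheduled(fixedDelay = 120_000)
    public void checkConnection() {
        // check the connection state here
    }
}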
I am writing an API that receives requests on when and where to make GET requests, and will then use Quartz to schedule the appropriate times to make those requests. At the moment, I am calling getDefaultScheduler every time a request is made, in order to schedule the appropriate job and trigger. I'm storing the jobs in memory right now, but plan on storing jobs using JDBC later on.
Is this approach safe? We can assume that there may be many concurrent requests to the application, and that the application will make sure there won't be any trigger and job name conflicts.
Yes, it is thread safe. But go ahead and look at the JobStore implementation you are using. Here is the DefaultClusteredJobStore implementation for storing jobs:
public void storeJob(JobDetail newJob, boolean replaceExisting) throws ObjectAlreadyExistsException,
JobPersistenceException {
JobDetail clone = (JobDetail) newJob.clone();
lock();
try {
// wrapper construction must be done in lock since serializer is unlocked
JobWrapper jw = wrapperFactory.createJobWrapper(clone);
if (jobFacade.containsKey(jw.getKey())) {
if (!replaceExisting) { throw new ObjectAlreadyExistsException(newJob); }
} else {
// get job group
Set<String> grpSet = toolkitDSHolder.getOrCreateJobsGroupMap(newJob.getKey().getGroup());
// add to jobs by group
grpSet.add(jw.getKey().getName());
if (!jobFacade.hasGroup(jw.getKey().getGroup())) {
jobFacade.addGroup(jw.getKey().getGroup());
}
}
// add/update jobs FQN map
jobFacade.put(jw.getKey(), jw);
} finally {
unlock();
}
}
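For completeness, a small sketch of how per-request scheduling against the default scheduler might look (RequestScheduler is an invented class; HttpGetJob stands for your Job implementation). Each call can safely come from a different request thread:

import java.util.Date;

import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class RequestScheduler {

    public void scheduleGetRequest(String requestId, Date when) throws Exception {
        // getDefaultScheduler() returns the same scheduler instance on every call
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(HttpGetJob.class) // HttpGetJob: your Job implementation
                .withIdentity(requestId, "get-requests")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity(requestId + "-trigger", "get-requests")
                .startAt(when)
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}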
I am working on an application that is meant to be extensible by the customer. It is based on OSGi (Equinox) and makes heavy use of Declarative Services (DS). Customer-installed bundles provide their own service implementations which my application then makes use of. There is no limit on the number of service implementations that customer-specific bundles may provide.
Is there a way to ensure that, when the application's main function is executed, all customer-provided service implementations have been registered?
To clarify, suppose my application consists of a single DS component RunnableRunner:
public class RunnableRunner
{
private final List<Runnable> runnables = new ArrayList<Runnable>();
public void bindRunnable(Runnable runnable)
{
runnables.add(runnable);
}
public void activate()
{
System.out.println("Running runnables:");
for (Runnable runnable : runnables) {
runnable.run();
}
System.out.println("Done running runnables.");
}
}
This component is registered using a DS component.xml such as the following:
<?xml version="1.0" encoding="UTF-8"?>
<scr:component xmlns:scr="http://www.osgi.org/xmlns/scr/v1.1.0" name="RunnableRunner" activate="activate">
<implementation class="RunnableRunner"/>
<reference bind="bindRunnable" interface="java.lang.Runnable" name="Runnable"
cardinality="0..n" policy="dynamic"/>
</scr:component>
I understand that there is no guarantee that, at the time activate() is called, all Runnables have been bound. In fact, experiments I made with Eclipse/Equinox indicate that the DS runtime won't be able to bind Runnables contributed by another bundle if that bundle happens to start after the main bundle (which is a 50/50 chance unless explicit start levels are used).
So, what alternatives are there for me? How can I make sure the OSGi container tries as hard as it can to resolve all dependencies before activating the RunnableRunner?
Alternatives I already thought about:
Bundle start levels: too coarse (they work on bundle level, not on component level) and also unreliable (they're only taken as a hint by OSGi)
Resorting to Eclipse's Extension Points: too Eclipse-specific, hard to combine with Declarative Services.
Making the RunnableRunner dynamically reconfigure whenever a new Runnable is registered: not possible, at some point I have to execute all the Runnables in sequence.
Any advice on how to make sure some extensible service is "ready" before it is used?
By far the best way is not to care, and to design your system so that it flows correctly. There are many reasons a service appears and disappears, so any mirage of stability is just that: a mirage. Not handling the actual conditions creates fragile systems.
In your example, why can't the RunnableRunner execute the work for each Runnable service as it becomes available? The following code is fully aware of OSGi dynamics:
@Component
public class RunnableRunner {

    @Reference
    Executor executor;

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    void addRunnable(Runnable r) {
        executor.execute(r);
    }
}
I expect you find this wrong for a reason you did not specify. This reason is what you should try to express as a service registration.
If you have a (rare) use case where you absolutely need to know that 'all' (whatever that means) services are available, then you could count the number of instances, or use some other condition. In OSGi with DS, the approach is then to turn this condition into a service, so that others can depend on it and you get all the guarantees that services provide.
In that case just create a component that counts the number of instances. Using the configuration, you register a Ready service once you reach a certain count.
public interface Ready {}

@Component
@Designate(ocd = Config.class)
public class RunnableGuard {

    @ObjectClassDefinition
    @interface Config {
        int count();
    }

    BundleContext context;
    int count = Integer.MAX_VALUE;
    int current;
    ServiceRegistration<Ready> registration;

    @Activate
    void activate(Config c, BundleContext context) {
        this.context = context;
        this.count = c.count();
        count(0);
    }

    @Deactivate
    void deactivate() {
        if (registration != null)
            registration.unregister();
    }

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    void addRunnable(Runnable r) {
        count(1);
    }

    void removeRunnable(Runnable r) {
        count(-1);
    }

    synchronized void count(int n) {
        this.current += n;
        if (this.current >= count && registration == null)
            registration = context.registerService(
                Ready.class, new Ready() {}, null
            );
        if (this.current < count && registration != null) {
            registration.unregister();
            registration = null;
        }
    }
}
Your RunnableRunner would then look like:
@Component
public class RunnableRunner {

    @Reference
    volatile List<Runnable> runnables;

    @Reference
    Ready ready;

    @Activate
    void activate() {
        System.out.println("Running runnables:");
        runnables.forEach(Runnable::run);
        System.out.println("Done running runnables.");
    }
}
Pretty fragile code but sometimes that is the only option.
I did not know there were still people writing XML ... my heart is bleeding for you :-)
If you do not know which extensions you need to start, then you can only make your component dynamic. You then react to each extension as it is added.
If you need to make sure that your extensions have been collected before some further step may happen, then you can give your required extensions names and list them in a config.
So, for example, you could have a config property "extensions" that lists all required extension names separated by spaces. Each extension then must have a service property like "name". In your component you then compare the extensions you have found with the required extensions by name, and do your "activation" only when all required extensions are present.
This is for example used in CXF DOSGi to apply intents on a service like specified in remote service admin spec.
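A rough sketch of that idea (NamedExtensionGuard is an invented name; the "extensions" config property and the "name" service property follow the convention described above):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.component.annotations.ReferenceCardinality;
import org.osgi.service.component.annotations.ReferencePolicy;

@Component
public class NamedExtensionGuard {

    private final Set<String> required = new HashSet<>();
    private final Set<String> found = new HashSet<>();

    @Activate
    synchronized void activate(Map<String, Object> config) {
        // e.g. extensions = "audit billing reporting"
        required.addAll(Arrays.asList(((String) config.get("extensions")).split(" ")));
        check();
    }

    @Reference(cardinality = ReferenceCardinality.MULTIPLE, policy = ReferencePolicy.DYNAMIC)
    synchronized void addExtension(Runnable extension, Map<String, Object> props) {
        found.add((String) props.get("name"));
        check();
    }

    synchronized void removeExtension(Runnable extension, Map<String, Object> props) {
        found.remove((String) props.get("name"));
    }

    private void check() {
        if (!required.isEmpty() && found.containsAll(required)) {
            // all required extensions are present: do the "activation" step here,
            // e.g. register a Ready service as in the previous answer
        }
    }
}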
I have a J2EE application that receives messages (events) via a web service. The messages are of varying types (requiring different processing depending on type) and are sent in a specific sequence. I have identified a problem where some message types take longer to process than others. The result is that a message received second in a sequence may be processed before the first in the sequence. I have tried to address this problem by placing a synchronized block around the method that processes the messages. This seems to work, but I am not confident that this is the "correct" approach. Is there perhaps an alternative that may be more appropriate, or is this "acceptable"? I have included a small snippet of code to try to explain more clearly. .... Any advice / guidance appreciated.
public class EventServiceImpl implements EventService {
public String submit (String msg) {
if (msg == null)
return ("NAK");
EventQueue.getInstance().submit(msg);
return "ACK";
}
}
public class EventQueue {
private static EventQueue instance = null;
private static int QUEUE_LENGTH = 10000;
protected boolean done = false;
BlockingQueue<String> myQueue = new LinkedBlockingQueue<String>(QUEUE_LENGTH);
protected EventQueue() {
new Thread(new Consumer(myQueue)).start();
}
public static EventQueue getInstance() {
if(instance == null) {
instance = new EventQueue();
}
return instance;
}
public void submit(String event) {
try {
myQueue.put(event);
} catch (InterruptedException ex) {
}
}
class Consumer implements Runnable {
protected BlockingQueue<String> queue;
Consumer(BlockingQueue<String> theQueue) { this.queue = theQueue; }
public void run() {
try {
while (true) {
Object obj = queue.take();
process(obj);
if (done) {
return;
}
}
} catch (InterruptedException ex) {
}
}
void process(Object obj) {
Event event = new Event( (String) obj);
EventHandler handler = EventHandlerFactory.getInstance(event);
handler.execute();
}
}
// Close queue gracefully
public void close() {
this.done = true;
}
} // closes EventQueue
I am not sure which framework (EJB (MDB)/JMS) you are working with. Generally, using synchronization inside a managed environment like that of EJB/JMS should be avoided (it's not a good practice). One way to get around it is:
the client should wait for the acknowledgement from the server before it sends the next message.
This way your client itself will control the sequence of events.
Please note this won't work if there are multiple clients submitting messages.
EDIT:
You have a situation wherein the client of the web service sends messages in sequence without taking into account the message processing time. It simply dumps the messages one after another. This is a good case for a queue (first in, first out) based solution. I suggest the following three ways to accomplish this:
Use JMS. This will have the additional overhead of adding a JMS provider and writing some plumbing code; a sketch of a message-driven bean follows below.
Use some multithreading pattern like Producer-Consumer, wherein your web service handler dumps the incoming message into a queue and a single-threaded consumer consumes one message at a time. See this example using the java.util.concurrent package.
Use a database. Dump the incoming messages into a database. Use a different scheduler-based program to scan the database (based on sequence number) and process the messages accordingly.
The first and third solutions are very standard for these types of problems. The second approach would be quick and won't need any additional libraries in your code.
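To illustrate the JMS option, a rough sketch of a message-driven bean consuming from a queue (the JNDI name is a placeholder; Event, EventHandler and EventHandlerFactory are the classes from your snippet). Note that the MDB pool would have to be limited to a single instance, which is vendor-specific configuration, for strict ordering to hold:

import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "jms/EventQueue"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class EventMessageBean implements MessageListener {

    @Override
    public void onMessage(Message message) {
        try {
            // same processing as the Consumer.process() method in the question
            Event event = new Event(((TextMessage) message).getText());
            EventHandler handler = EventHandlerFactory.getInstance(event);
            handler.execute();
        } catch (JMSException e) {
            throw new RuntimeException(e);
        }
    }
}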
If the events are to be processed in a specific sequence, then why not try adding "eventID" and "orderID" fields to the messages? This way your EventServiceImpl class can sort, order, and then execute them in the proper order (regardless of the order in which they are created and/or delivered to the handler).
Synchronizing the handler.execute() block will not get the desired results, I expect. All the synchronized keyword does is prevent multiple threads from executing that block at the same time. It does nothing in the realm of properly ordering which thread goes next.
If the synchronized block does seem to make things work, then I assert you are getting very lucky, in that the messages are being created, delivered, and acted upon in the proper order. In a multithreaded environment this is not assured! I'd take steps to ensure you are controlling this, rather than relying on good fortune.
Example:
Messages are created in the order 'client01-A', 'client01-C',
'client01-B', 'client01-D'
Messages arrive at the handler in the order 'client01-D',
'client01-B', 'client01-A', 'client01-C'
EventHandler can distinguish messages from one client to another and starts to cache 'client01' 's messages.
EventHandler recv's 'client01-A' message and knows it can process this and does so.
EventHandler looks in cache for message 'client01-B', finds it and processes it.
EventHandler cannot find 'client01-C' because it hasn't arrived yet.
EventHandler recv's 'client01-C' and processes it.
EventHandler looks in cache for 'client01-D' finds it, processes it, and considers the 'client01' interaction complete.
Something along these lines would assure proper processing and would promote good use of multiple threads.
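A small sketch of that buffering idea (the OrderingEventHandler class and its field names are invented for the example): events are buffered per client and released strictly in sequence order:

import java.util.HashMap;
import java.util.Map;

public class OrderingEventHandler {

    private final Map<String, Integer> nextExpected = new HashMap<>();
    private final Map<String, Map<Integer, Object>> pending = new HashMap<>();

    public synchronized void onEvent(String clientId, int orderId, Object payload) {
        pending.computeIfAbsent(clientId, c -> new HashMap<>()).put(orderId, payload);
        int next = nextExpected.getOrDefault(clientId, 0);
        Map<Integer, Object> buffer = pending.get(clientId);
        // drain every buffered event that is now in order
        while (buffer.containsKey(next)) {
            process(buffer.remove(next));
            next++;
        }
        nextExpected.put(clientId, next);
    }

    private void process(Object payload) {
        // hand the in-order event to the normal processing chain
    }
}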
I have created a simple example with the @Singleton, @Schedule and @Timeout annotations to see whether they would solve my problem.
The scenario is this: the EJB calls a 'check' function every 5 seconds, and if certain conditions are met, it creates a single-action timer that invokes some long-running process asynchronously (it's a sort of queue implementation). It then continues to check, but while the long-running process is there it won't start another one.
Below is the code I came up with, but this solution does not work, because it looks like the asynchronous call I'm making is in fact blocking my @Schedule method.
@Singleton
@Startup
public class GenerationQueue {

    private Logger logger = Logger.getLogger(GenerationQueue.class.getName());

    private List<String> queue = new ArrayList<String>();
    private boolean available = true;

    @Resource
    TimerService timerService;

    @Schedule(persistent=true, minute="*", second="*/5", hour="*")
    public void checkQueueState() {
        logger.log(Level.INFO, "Queue state check: " + available + " size: " + queue.size() + ", " + new Date());
        if (available) {
            timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
        }
    }

    @Timeout
    private void generateReport(Timer timer) {
        logger.info("!!--timeout invoked here " + new Date());
        available = false;
        try {
            Thread.sleep(1000 * 60 * 2); // something that lasts for a bit
        } catch (Exception e) {}
        available = true;
        logger.info("New report generation complete");
    }
}
What am I missing here, or should I try a different approach? Any ideas most welcome :)
Testing with Glassfish 3.0.1 latest build - forgot to mention
The default @ConcurrencyManagement for singletons is ConcurrencyManagementType.CONTAINER with a default @Lock of LockType.WRITE. Basically, that means every method (including generateReport) is effectively marked with the synchronized keyword, which means that checkQueueState will block while generateReport is running.
Consider using @ConcurrencyManagement(ConcurrencyManagementType.BEAN) or @Lock(LockType.READ). If neither suggestion helps, I suspect you've found a Glassfish bug.
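For illustration, a minimal sketch of the @Lock(LockType.READ) suggestion applied to the bean from the question; with READ locks both methods may run concurrently, so the boolean flag is switched to an AtomicBoolean:

import java.util.Date;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.Logger;

import javax.annotation.Resource;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timeout;
import javax.ejb.Timer;
import javax.ejb.TimerConfig;
import javax.ejb.TimerService;

@Singleton
@Startup
public class GenerationQueue {

    private final Logger logger = Logger.getLogger(GenerationQueue.class.getName());
    private final AtomicBoolean available = new AtomicBoolean(true);

    @Resource
    TimerService timerService;

    @Schedule(persistent = false, minute = "*", second = "*/5", hour = "*")
    @Lock(LockType.READ)
    public void checkQueueState() {
        if (available.get()) {
            timerService.createSingleActionTimer(new Date(), new TimerConfig(null, false));
        }
    }

    @Timeout
    @Lock(LockType.READ)
    public void generateReport(Timer timer) {
        available.set(false);
        try {
            // long-running report generation goes here
        } finally {
            available.set(true);
            logger.info("New report generation complete");
        }
    }
}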
As an aside, you probably want persistent=false since you probably don't need to guarantee that the checkQueueState method fires every 5 seconds even when your server is offline. In other words, you probably don't need the container to fire "catch ups" when you bring your server back online.