I'm writing an application for a doctor which should be able to define Notifications that will show up on the patient's computer. These Notifications are scheduled by the doctor, so he/she can choose when each one appears. For example: "Remember to take your pills", shown once a week, from January to July 2010.
So it would be something like Google Calendar's event scheduler, but with much richer timing conditions. I'm wondering what's the recommended solution/tool for:
A Notification scheduler on the client side. The client application is Java based. It should have a background event scheduler that checks for new Notifications and whether their timing conditions apply.
A Notification designer/manager on the server side. The doctor's application should provide a visual tool to define the timing conditions (in Java too). The Notifications are stored in a database for remote access via a web service.
Is there an open source tool available for this kind of issue? Also, I've been reading about Drools, but it's a completely new topic to me. Any recommendation on this?
There are various open source schedulers available.
Quartz is one of them; it gives you fine-grained control over scheduling tasks.
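For example, a weekly reminder could be wired up along these lines with the Quartz 2.x API (the job/trigger names, cron expression, and message below are made up for illustration):

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class NotificationJob implements Job {

    @Override
    public void execute(JobExecutionContext context) {
        // In the real client this would pop up the reminder window.
        System.out.println("Reminder: " + context.getMergedJobDataMap().getString("message"));
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = JobBuilder.newJob(NotificationJob.class)
                .withIdentity("pillReminder", "notifications")
                .usingJobData("message", "Remember to take your pills")
                .build();

        // Fire every Monday at 09:00 (Quartz cron: sec min hour day-of-month month day-of-week).
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("weeklyTrigger", "notifications")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 9 ? * MON"))
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}
```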
It sounds like you have 3 separate but related issues:
The scheduling of one or more future events.
The persistence of the schedule and related contextual data.
A push model to [re-]deliver a scheduling event from the server to the client.
More or less right?
For scheduling and persistence, I recommend you look at Quartz. It will provide you with a clean API for scheduling (one-time or recurring) with some flexibility, including fixed period or cron. It will also persist schedule data and context (referred to as a Job) to a JDBC database.
As for #3, I am not clear on how you want this to work, but one possible approach is that when the client connects to the server, it non-persistently caches the server-provided scheduled events applicable to that client (or user etc.). When the client shuts down, these events are discarded, but they are renewed on the next connection. Once the events are loaded in the client, the client assumes responsibility for firing them with its own local scheduler (Quartz, or even a simpler ScheduledThreadPoolExecutor).
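As a rough sketch of that client-side piece (assuming the server hands the client a message plus a delay; showNotification is a hypothetical UI callback):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ClientNotificationScheduler {

    // Non-persistent: pending notifications live only for the lifetime of the client process.
    private final ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();

    /** Schedules one server-provided notification to fire after the given delay. */
    public void schedule(String message, long delayMinutes) {
        executor.schedule(() -> showNotification(message), delayMinutes, TimeUnit.MINUTES);
    }

    private void showNotification(String message) {
        // Placeholder for whatever the Swing/UI layer does with the reminder.
        System.out.println("Notification: " + message);
    }

    /** Discards everything still pending, e.g. at client shutdown. */
    public void shutdown() {
        executor.shutdownNow();
    }
}
```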
Drools is an excellent rules engine, but might be overkill for what you are trying to do.
//Nicholas
I am writing a Java application where, when the data changes, an image should change.
My colleagues are asking me to build a scheduler that calls a GET API every second.
My suggestion is to use pub-sub, so that the data is only refreshed when an event actually happens.
Are a subscriber and a scheduler one and the same?
Publish/subscribe is a nicer option, theoretically.
The differences:
Polling is a kind of busy waiting; with multiple clients it causes superfluous network traffic. The client is active.
Publish/subscribe needs an active server that pushes notifications to all subscribers. There is now sufficient support for this in HTML5/JavaScript and in Java. The server is active.
Unfortunately publish/subscribe will probably be a bit harder to realize. It would be best to make a proof of concept in a separate application. Issues like asynchronous Ajax may come up.
Also, some publish/subscribe libraries might still use polling on the client side under the hood, instead of push notifications.
So your colleagues' advice might be based on the simpler, less problematic implementation.
Depending on the leeway you are given, and in the interest of architectural research: a prototype with a load test for both implementations would be fine. Hope never dies.
It's not the same:
A scheduler is when you explicitly choose when to make the request. You can do it every second, every minute, or whatever; each time you create a new request.
Pub-sub is when you create a permanent connection to the source of events, and when an event is published you consume it. You don't have multiple requests here; it's more like a socket connection.
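A minimal in-process sketch of the difference (a real pub-sub setup would go through a broker or a socket/WebSocket connection rather than an in-memory listener list):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingVsPubSub {

    // Scheduler/polling: the client actively asks every second, whether or not anything changed.
    static void startPolling(Runnable fetchAndCompare) {
        ScheduledExecutorService poller = Executors.newSingleThreadScheduledExecutor();
        poller.scheduleAtFixedRate(fetchAndCompare, 0, 1, TimeUnit.SECONDS);
    }

    // Pub-sub: subscribers register once and are only called when an event is actually published.
    interface DataChangeListener {
        void onDataChanged(String newData);
    }

    static class DataPublisher {
        private final List<DataChangeListener> listeners = new CopyOnWriteArrayList<>();

        void subscribe(DataChangeListener listener) {
            listeners.add(listener);
        }

        void publish(String newData) {
            listeners.forEach(l -> l.onDataChanged(newData));
        }
    }
}
```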
I have a user workflow where, at a specific time, a web service is called and the results are presented to the user.
Based on the search request and the queried results, I want to perform some database updates and statistics logging.
As the workflow pauses while the web service is requested, I thought about creating some kind of background thread that performs these database actions, so the user can continue the workflow without having to wait for them to complete.
Do you think this is good practice? How could I create such one-time background threads?
If you only want the work to run in the background, then an ExecutorService is a good solution.
If you need to ensure that queued requests survive events like a server restart, then you need a persistent queue like a JMS Queue. There are some nice, free open source JMS implementations that serve this purpose.
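A minimal sketch of the fire-and-forget variant with an ExecutorService (the DAO calls are hypothetical placeholders):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SearchStatsLogger {

    // Small shared pool; the size here is arbitrary for illustration.
    private static final ExecutorService BACKGROUND = Executors.newFixedThreadPool(2);

    /** Fire-and-forget: the user's workflow continues while the DB work runs. */
    public static void logSearchAsync(String query, int resultCount) {
        BACKGROUND.submit(() -> {
            // updateStatistics(query, resultCount);   // hypothetical DAO call
            // updateSearchHistory(query);             // hypothetical DAO call
            System.out.println("Logged search: " + query + " (" + resultCount + " results)");
        });
    }
}
```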
If the service call takes little time (say 1 or 2 seconds), then it is a waste of effort to develop such a feature.
If it takes a significant amount of time, you should do it in the background.
I have a number of backend processes (java applications) which run 24/7. To monitor these backends (i.e. to check if a process is not responding and notify via SMS/EMAIL) I have written another application.
The old backends now log a heartbeat at a regular interval, and this new application checks whether they are doing so regularly and notifies if necessary.
Now, we have two options:
either run it as a scheduled task, which will run every (let's say) 15 minutes and stop after doing its job, or
run it as another backend process with a 15-minute sleep time.
The issue we can foresee right now is: what if this monitor application itself goes into a non-responding state? So my question is: is there any difference between the two cases, or are they the same? Which option would suit my case better?
Please note this is a specific case and is not the same as this or this
Environment: Java, hosted on LINUX server
By scheduled task, do you mean triggered by the system scheduler, or as a scheduled thread in the existing backend processes?
To capture unexpected termination or unresponsive states you would be best running a separate process rather than a thread. However, a scheduled thread would give you closer interaction with the owning process with less IPC overhead.
I would implement both. Maintain a record of the local state in each backend process, with a scheduled task in each process triggering a thread to update the current state of that node. This update could be fairly frequent, since it will be less expensive than communicating with a separate process.
Use your separate "monitoring app" process to routinely gather the information about all the backend processes. This should occur less frequently; whether the process runs all the time or is scheduled by a cron job is immaterial, since the state is held in each backend process. If one of the backends becomes unresponsive, this monitoring app will be able to detect the lack of response and perform some meaningful probes to determine what the problem is. It is this component that will then notify your SMS/email utility to send a report.
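A sketch of the per-process part (how the monitoring app reads the state, e.g. via JMX, a status file, or a DB row, is left open):

```java
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class HeartbeatReporter {

    // Last-known-good state held inside the backend process itself.
    private final AtomicReference<Instant> lastHeartbeat = new AtomicReference<>(Instant.now());
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Cheap local update every 30 seconds; the monitor probes it far less often.
        scheduler.scheduleAtFixedRate(() -> lastHeartbeat.set(Instant.now()), 0, 30, TimeUnit.SECONDS);
    }

    /** Exposed for the separate monitoring app to probe. */
    public Instant getLastHeartbeat() {
        return lastHeartbeat.get();
    }
}
```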
I would go for a backend process, as it can maintain state.
Have a look at the Quartz scheduler from Terracotta:
http://terracotta.org/products/quartz-scheduler
It will be resilient to transient conditions, and you only need to provide a simple wrapper, so the monitor app should be robust, provided you get the threading configuration right in the quartz.properties file.
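A minimal quartz.properties along those lines might look like this (the instance name and thread count are placeholders to tune):

```properties
org.quartz.scheduler.instanceName = MonitorScheduler
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 3
org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
```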
You can use Nagios Core as the core and Naptor to monitor your application. It's easy to set up and embed with your application.
You can check this link:
https://github.com/agunghakase/Naptor/tree/ver1.0.0
I have a Swing desktop application that is installed on many desktops within a LAN. I have a MySQL database that all of them talk to. At precisely 5 PM every day, a thread wakes up in each of these applications and tries to back up files to a remote server. I would like to prevent all the desktop applications from doing the same thing.
The way I was thinking to do this was:
After waking up at 5 PM, all the applications will try to write a row to a MySQL table. They will all write the same information. Only one will succeed; the others will get a duplicate-row exception. Whichever one succeeds then goes on to run the backup program.
My questions are:
Is this the right way of doing things? Is there a better (easier) way?
I know we could do this using sockets as well, but I don't want to go down that route: too much coding, and I would also need to ensure that all the systems can talk to each other first (ping).
Will MySQL support such a feature? My DB uses InnoDB, so I am thinking it does. Typically I will have about 20-30 users on the LAN. Will this cause a huge overhead for the DB to handle?
If you could put an intermediate class between the applications and the database that would queue up the requests and allow them to proceed in an orderly manner, you'd have it knocked.
It sounds like the applications all go directly against the database. You'll have to modify the applications to avoid this issue.
I have a lot of questions about the design:
Why are they all writing "the same row"? Aren't they writing information for their own individual instance?
Why would every one of them have exactly the same primary key? If there were an auto-increment or timestamp column, you wouldn't have this problem.
What's the isolation set to on the database connection? If it's set to SERIALIZABLE, you'll force each one to wait until the previous one is done, at the cost of performance.
Could you have them all write files to a common directory and pick them up later in an orderly way?
I'm just brainstorming now.
It seems you want to back up server data, not client data.
I recommend using a 3-tier architecture based on Java EE.
You could then use a Timer Service to trigger the backup.
Usually, though, a backup program is an independent program, e.g. started by a cron job on the server. But again: you'll need a server to do this properly, not just a shared folder.
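If you do go the Java EE route, the container's Timer Service could drive the backup with a calendar-based timer, roughly like this (the backup call is a hypothetical placeholder):

```java
import javax.ejb.Schedule;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class BackupTimer {

    // Calendar-based timer: fires every day at 17:00 on the server, not on the clients.
    @Schedule(hour = "17", minute = "0", persistent = false)
    public void runBackup() {
        // backupService.backupServerData();   // hypothetical injected service
        System.out.println("Starting nightly backup");
    }
}
```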
Here is what I would suggest. Instead of having all clients wake up at the same time and try to perform the backup, stagger the times at which they wake up.
So when a client wakes up:
- It will check some table in your MySQL DB to see if a backup job has completed or is currently running. If the job has completed, the client will go on with its normal duties. You can decide how to handle the case where the job is running.
- If the client finds that the backup job has not been run for the day, it will start the backup job and at the same time modify the row to indicate that the job has started. Once the backup has completed, the client will modify the table to indicate so.
This approach will prevent a spurt in network activity and can also provide a rudimentary form of failover: if one client fails, another client can attempt the backup at a later time. (This is a bit more involved, though; basically it comes down to what a client should do when it sees that a backup job is ongoing.) A sketch of the table check is shown below.
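A sketch of that table-based claim, assuming a backup_job table with backup_date as its primary key (names and error handling are simplified for illustration):

```java
import java.sql.Connection;
import java.sql.Date;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.time.LocalDate;

public class BackupCoordinator {

    /**
     * Tries to claim today's backup. Because backup_date is the primary key, only one
     * client's INSERT succeeds; the others get a duplicate-key error and simply skip.
     */
    public boolean tryClaimBackup(Connection conn) throws SQLException {
        String sql = "INSERT INTO backup_job (backup_date, status) VALUES (?, 'RUNNING')";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDate(1, Date.valueOf(LocalDate.now()));
            ps.executeUpdate();
            return true;                 // we own today's job
        } catch (SQLException duplicateKey) {
            // Checking the vendor error code for "duplicate entry" would be cleaner.
            return false;                // someone else already claimed it
        }
    }

    /** Marks today's job finished so the other clients know it has been done. */
    public void markDone(Connection conn) throws SQLException {
        String sql = "UPDATE backup_job SET status = 'DONE' WHERE backup_date = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setDate(1, Date.valueOf(LocalDate.now()));
            ps.executeUpdate();
        }
    }
}
```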
I'm trying to write a Spring web application on a WebLogic server that makes several independent database SELECTs (i.e. they can safely be run concurrently), one of which takes 15 minutes to execute.
Once all the results are fetched, an email containing the results will be sent to a user list.
What's a good way to get around this problem? Is there a Spring library that can help or do I go ahead and create daemon threads to do the job?
EDIT: This will have to be done at the application layer (business requirement) and the email will be sent out by the web application.
Are you sure you are doing everything optimally? 15 minutes is a really long time unless you have a gabillion rows across dozens of tables and need a heck of a lot of joins. This is your highest priority: why is it taking so long?
Do you run the email job at set intervals, or is it invoked from your web app? If at set intervals, you should do it in an outside job, possibly on another machine. You can use daemons or the Quartz scheduler.
If you need to fire this process off from the web app, you need to do it asynchronously. You could use JMS, or you could just have a table into which you insert new job requests, with a daemon process that looks for new jobs every X time period. Firing off background threads is possible, but it's error-prone and not worth the complication, especially since you have other valid options that are simpler.
If you are asking about Spring support for long-running, possibly asynchronous tasks, you have a choice between Spring JMS support and Spring Batch.
You can use Spring's Quartz integration to schedule the job. That way the job runs in the same container but does not require an HTTP request to trigger it.
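If you'd rather avoid the full Quartz dependency, Spring's own @Scheduled support (a lighter alternative to the Quartz integration) gives the same in-container triggering; a rough sketch, where the slow queries and the mailing are placeholders:

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Configuration
@EnableScheduling
class SchedulingConfig {
    // Turns on Spring's scheduled-task processing for the beans below.
}

@Component
class ReportJob {

    // Runs inside the web application's container every night at 02:00, no HTTP request needed.
    @Scheduled(cron = "0 0 2 * * *")
    public void gatherAndEmail() {
        // runTheSlowSelects();        // hypothetical service call
        // emailResultsToUserList();   // hypothetical service call
        System.out.println("Report job started");
    }
}
```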