Backend process vs. scheduled task - Java

I have a number of backend processes (Java applications) which run 24/7. To monitor these backends (i.e. to check if a process is not responding and notify via SMS/email), I have written another application.
The existing backends now log a heartbeat at regular intervals, and this new application checks that they are doing so and sends a notification if they are not.
Now we have two options:
either run it as a scheduled task, which will run every (say) 15 minutes and stop after doing its job, or
run it as another backend process that sleeps for 15 minutes between checks.
The issue we can foresee right now is: what if this monitor application itself becomes unresponsive? So my question is: is there any difference between the two cases, or are they the same? Which option would suit my case better?
Please note this is a specific case and is not the same as this or this.
Environment: Java, hosted on a Linux server

By scheduled task, do you mean triggered by the system scheduler, or as a scheduled thread in the existing backend processes?
To capture unexpected termination or unresponsive states you would be best running a separate process rather than a thread. However, a scheduled thread would give you closer interaction with the owning process with less IPC overhead.
I would implement both. Maintain a record of the local state in each backend process, with a scheduled task in each process triggering a thread to update the current state of that node. This update could be fairly frequent, since it will be less expensive than communicating with a separate process.
Use your separate "monitoring app" process to routinely gather the information about all the backend processes. This should occur less frequently; whether the process is running all the time or is scheduled by a cron job is immaterial, since the state is held in each backend process. If one of the backends becomes unresponsive, this monitoring app will be able to detect the lack of response and perform some meaningful probes to determine what the problem is. It is this component that would then notify your SMS/email utility to send a report.
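A minimal sketch of the in-process side, assuming each backend exposes its last heartbeat through a simple holder class (the class name and the 30-second interval are illustrative, not part of the original design):

import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Holds the backend's own view of its health; cheap to update frequently.
public class LocalHealthState {

    private final AtomicReference<Instant> lastHeartbeat =
            new AtomicReference<>(Instant.now());

    // Scheduled inside the backend process itself.
    public void startHeartbeatUpdates() {
        ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        // Update the local state every 30 seconds (illustrative interval).
        scheduler.scheduleAtFixedRate(
                () -> lastHeartbeat.set(Instant.now()),
                0, 30, TimeUnit.SECONDS);
    }

    // Exposed (e.g. over a socket, JMX or a log file) for the monitoring app.
    public Instant getLastHeartbeat() {
        return lastHeartbeat.get();
    }
}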

I would go for a backend process, as it can maintain state.
Have a look at the Quartz scheduler from Terracotta:
http://terracotta.org/products/quartz-scheduler
It will be resilient to transient conditions, and you only need to provide a simple wrapper, so the monitor app should be robust provided you get the threading configuration right in the quartz.properties file.
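As a rough illustration of what that wrapper could look like with the Quartz 2.x API, using the 15-minute interval from the question (the job class and the job/trigger identities are made up):

import org.quartz.Job;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.impl.StdSchedulerFactory;

import static org.quartz.JobBuilder.newJob;
import static org.quartz.SimpleScheduleBuilder.simpleSchedule;
import static org.quartz.TriggerBuilder.newTrigger;

public class MonitorMain {

    // Hypothetical job: read the backends' heartbeat logs and alert if stale.
    public static class HeartbeatCheckJob implements Job {
        public void execute(JobExecutionContext context) throws JobExecutionException {
            // check heartbeats here and trigger SMS/email notifications if needed
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        JobDetail job = newJob(HeartbeatCheckJob.class)
                .withIdentity("heartbeatCheck")
                .build();

        // Fire immediately, then every 15 minutes, forever.
        Trigger trigger = newTrigger()
                .withIdentity("every15Minutes")
                .startNow()
                .withSchedule(simpleSchedule().withIntervalInMinutes(15).repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
    }
}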

You can use Nagios Core as the core and Naptor to monitor your application. It is easy to set up and embed in your application.
You can have a look at this link:
https://github.com/agunghakase/Naptor/tree/ver1.0.0

Related

Blocking a load balanced server environment from sending two emails

I am currently working on a scheduled task that runs behind the scenes of my Spring web application. The task uses a cron scheduler to execute at midnight every night and clean up unused applications for my portal (my site allows users to create an application to fill out; if they don't access the form within 30 days, my background task deletes it from our DB and emails the user so they can create a new form if needed). Everything works great in my test environment, and I am ready to move to QA.
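For context, the scheduling described above might look roughly like this with Spring's @Scheduled support (the class and method names are only illustrative, and @EnableScheduling is assumed to be present on a configuration class):

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ApplicationCleanupTask {

    // Cron fields: second, minute, hour, day of month, month, day of week.
    @Scheduled(cron = "0 0 0 * * *")   // every night at midnight
    public void cleanUpUnusedApplications() {
        // Hypothetical steps: find forms untouched for 30 days,
        // delete them from the DB, then email the owners.
    }
}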
However, my next environment uses two load balanced servers to process requests. This is a problem, as the cron scheduler and my polling task run concurrently on both servers. While the reads/writes to the DB won't be an issue, the issue lies with sending the notification email to the application user. Without any polling locks, two emails could be generated and sent, and I would like to avoid this. Normally, we would use a SQL stored procedure and have a field in our DB for a lock, then set/release it whenever the polling code is called, so only one instance of the polling is executed. However, with my new polling task we don't have any fields available, so I am trying to work out a Spring solution. I found this resource online:
http://www.springframework.net/doc-latest/reference/html/threading.html
And I was thinking of using it as
import java.util.concurrent.Semaphore;

Semaphore _pollingLock = new Semaphore(1);

_pollingLock.acquire();   // blocks if another poll in this JVM already holds the permit
try {
    // run my polling task
} finally {
    _pollingLock.release();   // always release the permit
}
However, I'm not sure whether this will just make the second instance execute afterwards, or whether it skips the second instance so it never executes. Or is this solution not even appropriate, and is there a better one? Again, I am using the Spring Java framework, so any solution that exists there would be my best bet.
Two ways that we've handled this sort of problem in the past both start with designating one of our clustered servers as the one responsible for a specific task (say, sending email, or running a job).
In one solution, we set a JVM parameter on all clustered servers identifying the server name of the one server on which your process should run. For example -DemailSendServer=clusterMember1
In another solution, we simply provided a JVM parameter in the startup of this designated server alone. For example -DsendEmailFromMe=true
In both cases, you can add a tiny bit of code in your process to gate it based on the value or presence of the startup parameter.
I've found the second option simpler to use since the presence of the parameter is enough to allow the process to run. In the first solution, you would have to compare the current server name against the value of the parameter instead.
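A sketch of what that gate might look like, borrowing the property name from the second option above (the surrounding class is hypothetical):

public class EmailJobGate {

    // With -DsendEmailFromMe=true set only on the designated server,
    // Boolean.getBoolean returns true there and false everywhere else.
    public static boolean shouldSendEmail() {
        return Boolean.getBoolean("sendEmailFromMe");
    }
}

The polling task then wraps the email step in if (EmailJobGate.shouldSendEmail()) { ... }, so only the designated server sends mail while the DB clean-up can still run everywhere.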
We haven't done much with Spring Batch, but I would assume there is a way to configure Batch to run a job on a single server within a cluster as well.

Stateful Processes as a Service (Java)

I am building a service that allows customers to run individual worker processes "in the web". The processes are designed to run for a very long time and are fed new orders roughly every minute (event driven). The processes are intended to keep running even if there are no new orders and all orders have been processed. -> 1 process per customer.
I require the following "functionality":
Start a new process
End the process (on demand, never automatically)
Keep track of the processes / user
Receive new "order" for a process (identify process by customer id)
Inform the customer when their orders cannot be processed because their process has ended (e.g. an exception occurred, someone killed the server, ...)
I am looking for patterns or best practices that allow me to solve following problems:
- Process management within one server (e.g. using a static list or the singleton pattern, something like this to keep track of the mapping between user id and process)
- Process management over many servers (scalability): one server might run 100-200 processes; if I get more customers, how would I remember on which server each process runs?
I am sure there are others who faced these problems before and certainly there are "right" and "wrong" ways of doing this.
I would highly recommend you create a persistent centralized data store to keep your customer --> process list. Especially when you are talking about minutes between requests.
It will be pretty straightforward to have a dispatch pattern to deal with getting the request to the right server/process. You should probably run it on each machine, allowing it to route the request to the internal processor, or send it over to another machine.
This way you get pretty good failover and scaling, with any machine able to dispatch to any other. You will need one master whose job is to monitor the other machines for failure (read the centralized table and ping each process; just make the ping another kind of order).
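A very rough sketch of that centralized mapping and dispatch idea, assuming some persistent store behind a simple interface (all names here are hypothetical):

import java.util.Optional;

// Persistent customer -> server/process mapping, backed by e.g. a database.
interface ProcessRegistry {
    Optional<String> findServerFor(String customerId);
    void register(String customerId, String serverId);
}

// Runs on every machine: route an incoming order locally or forward it.
class OrderDispatcher {
    private final ProcessRegistry registry;
    private final String localServerId;

    OrderDispatcher(ProcessRegistry registry, String localServerId) {
        this.registry = registry;
        this.localServerId = localServerId;
    }

    void dispatch(String customerId, String order) {
        String target = registry.findServerFor(customerId)
                .orElseThrow(() -> new IllegalStateException(
                        "No process registered for customer " + customerId));
        if (target.equals(localServerId)) {
            // hand the order to the in-memory worker process for this customer
        } else {
            // forward the order to the other machine (RPC, queue, etc.)
        }
    }
}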

Android background jobs for synchronization with a web service

Could you please tell me the correct way to run synchronization jobs in Android (e.g. if I have about 5 jobs)?
Note! By synchronization job I mean a thread which runs in the background and sends some data (e.g. analytics) via a web service...
For more details please read a more detailed description:
I've got a task to implement some background jobs which will synchronize some data with a RESTful web service. Some of the jobs should be scheduled periodically with a specific delay. If there is no internet connection, I simply cache the data, and later, when the connection reappears, I try to run the jobs.
Taking into consideration that creating new threads is quite expensive, especially in mobile development, I'm using a cached thread pool (ExecutorService), and every time some actions are requested for processing I try to reuse the threads. I don't use AsyncTask because I've replaced it with this executor service (Executors.newCachedThreadPool); it's convenient for me because I don't need to create many AsyncTasks, as I reuse the threads from the pool. To support the scheduled jobs, I use another thread pool (ScheduledExecutorService) and use Callable because I need to see the execution result. I've got some complex logic here... So, when a particular action is performed in the app, the first thread pool works like an AsyncTask; the advantage is that I don't create new threads but reuse them, and this does not block the UI's main thread. It then delegates to the scheduled executor, which does its job.
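If I understood the setup correctly, a stripped-down version of what is described above might look like this (intervals and task bodies are placeholders):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SyncJobs {

    // Reused worker threads instead of spawning a new one per request.
    private final ExecutorService workers = Executors.newCachedThreadPool();

    // Drives the periodic jobs.
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Every 15 minutes, hand the sync work to the cached pool and
        // inspect its result through the returned Future.
        scheduler.scheduleWithFixedDelay(() -> {
            Callable<Boolean> job = () -> {
                // send cached analytics to the web service; return success flag
                return true;
            };
            try {
                boolean ok = workers.submit(job).get(30, TimeUnit.SECONDS);
                // if !ok, keep the data cached and retry on the next run
            } catch (Exception e) {
                // no connectivity or timeout: leave the data cached for later
            }
        }, 0, 15, TimeUnit.MINUTES);
    }
}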
This solution works, and it sounds fine to me as I'm coming from the server side, but I'm interested to know how this should be done correctly on Android. Is this too sophisticated for a mobile app?
Thanks,
Serge
Use a sync adapter. See http://developer.android.com/training/sync-adapters/index.html. A sync adapter runs in the background, it's managed by the system, and scheduled efficiently so that your sync doesn't waste battery power. Best of all, the system will automatically detect network connectivity and queue up your sync adapter if necessary. If you want, you can use multiple sync adapters.
Notice that although it seems that sync adapters need a content provider and an authenticator, they really don't.
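A bare-bones sync adapter is essentially one class (plus a bound service and some XML metadata, omitted here); very roughly, and only as a sketch of the API:

import android.accounts.Account;
import android.content.AbstractThreadedSyncAdapter;
import android.content.ContentProviderClient;
import android.content.Context;
import android.content.SyncResult;
import android.os.Bundle;

// The system calls onPerformSync on a background thread when a sync is due,
// so no extra thread management is needed here.
public class AnalyticsSyncAdapter extends AbstractThreadedSyncAdapter {

    public AnalyticsSyncAdapter(Context context, boolean autoInitialize) {
        super(context, autoInitialize);
    }

    @Override
    public void onPerformSync(Account account, Bundle extras, String authority,
                              ContentProviderClient provider, SyncResult syncResult) {
        // Push the cached analytics to the REST service here; on failure,
        // set fields on syncResult so the system reschedules the sync.
    }
}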

Availability of resident backend instances in Google App Engine

Our app relies extensively on backend instances. There is some logic that has to run every few seconds. The execution of this code cannot be driven solely by requests arriving at the frontend, because it needs to run regardless.
We considered using task queues to solve this, but as far as we know, task queues only guarantee that tasks will be executed within 24 hours. I have not found a reference to back this up, though.
Our app uses a fixed number of resident B1 backend instances. We assume that each instance stays alive 24/7 after the backend version is deployed and started.
Is this a valid assumption? If not, can our application be notified every time a backend instance will be shutdown?
What is the SLA on the availability of a backend instance?
Are backend instances restarted automatically after they are terminated? E.g. is an instance automatically restarted after it runs out of memory?
How quickly will instances be brought up again if they ever are terminated?
We create a fixed size thread pool on each backend instance. Is there a maximum size for thread pools that we can have on a backend instance?
Are there any other conditions under which a backend instance might die?
Thanks!
UPDATES
It turns out a couple of questions can be answered by reading the docs.
App Engine attempts to keep backends running indefinitely. However, at this time there is no guaranteed uptime for backends.
So what is the SLA for uptime? I am looking for a statement like: "The guaranteed uptime for backends is 99.99%"
The App Engine team will provide more guidance on expected backend uptime as statistics become available.
When will these statistics be available?
It's also important to recognize that the shutdown hook is not always able to run before a backend terminates. In rare cases, an outage can occur that prevents App Engine from providing 30 seconds of shutdown time.
When App Engine needs to turn down a backend instance, existing requests are given 30 seconds to complete, and new requests immediately return 404.
The following code sample demonstrates a basic shutdown hook:
LifecycleManager.getInstance().setShutdownHook(new ShutdownHook() {
    public void shutdown() {
        LifecycleManager.getInstance().interruptAllRequests();
    }
});
I am running only one instance of a resident (non-dynamic) backend, and my experience is that it is restarted at least once a day.
Your application must be able to store its state and resume after a restart.
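One way to soften those restarts is to flush in-memory state from the shutdown hook shown above and reload it at startup; a rough sketch, assuming the backends LifecycleManager API from the sample (persistState is a placeholder for whatever datastore code you use):

import com.google.appengine.api.LifecycleManager;
import com.google.appengine.api.LifecycleManager.ShutdownHook;

public class BackendState {

    public void installShutdownHook() {
        LifecycleManager.getInstance().setShutdownHook(new ShutdownHook() {
            public void shutdown() {
                persistState();  // write pending work somewhere durable
                LifecycleManager.getInstance().interruptAllRequests();
            }
        });
    }

    private void persistState() {
        // placeholder: save whatever the instance needs to resume after a restart
    }
}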

Notification scheduler and designer in java: implementation recommendations

I'm writing an application for a doctor which should be able to define Notifications that will show up on the patient's computer. These Notifications are scheduled by the doctor, so he/she can choose when they will show up. For example: "Remember to take your pills", shown once a week, from January to July 2010.
So it would be something like Google Calendar's event scheduler, but with much richer timing conditions. I'm wondering what the recommended solution/tool is for:
Notification scheduler on the client side. The client application is Java based. It should have a background event scheduler that checks for new Notifications and whether their timing conditions apply.
Notification designer/manager on the server side. The doctor's application should provide a visual tool to define the timing conditions (in Java too). The Notifications are stored in a database for remote access via a web service.
Is there an open source tool available for this kind of problem? Also, I've been reading about Drools, but it's a completely new topic to me. Any recommendation on this?
There are various open source schedulers available.
Quartz is one of them; it gives fine control for scheduling tasks.
It sounds like you have 3 separate but related issues:
The scheduling of one or more future events.
The persistence of the schedule and related contextual data.
A push model to [re-]deliver a scheduling event from the server to the client.
More or less right ?
For scheduling and persistence, I recommend you look at Quartz. It will provide you with a clean API for scheduling (one-time or recurring) with some flexibility, including fixed period or cron. It will also persist schedule data and context (referred to as a Job) to a JDBC database.
As for #3, I am not clear on how you want this to work, but one possibility is that when the client connects to the server, it non-persistently caches the server-provided scheduled events applicable to that client (or user, etc.). When the client shuts down, these events are discarded, but they are renewed on the next connection. Once the events are loaded in the client, the client assumes responsibility for firing them with its own local scheduler (Quartz or even a simpler ScheduledThreadPoolExecutor).
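For the client-side firing, a plain ScheduledThreadPoolExecutor over the cached events could be enough; a small sketch, where NotificationEvent is a made-up holder for the server-provided schedule data:

import java.util.List;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

class NotificationEvent {
    final String message;
    final long delayMillis;    // time until the notification should first fire
    final long repeatMillis;   // 0 means "show once"

    NotificationEvent(String message, long delayMillis, long repeatMillis) {
        this.message = message;
        this.delayMillis = delayMillis;
        this.repeatMillis = repeatMillis;
    }
}

class ClientNotificationScheduler {
    private final ScheduledThreadPoolExecutor scheduler =
            new ScheduledThreadPoolExecutor(1);

    // Called after connecting to the server and fetching the patient's events.
    void schedule(List<NotificationEvent> events) {
        for (NotificationEvent e : events) {
            if (e.repeatMillis > 0) {
                scheduler.scheduleAtFixedRate(() -> show(e.message),
                        e.delayMillis, e.repeatMillis, TimeUnit.MILLISECONDS);
            } else {
                scheduler.schedule(() -> show(e.message),
                        e.delayMillis, TimeUnit.MILLISECONDS);
            }
        }
    }

    private void show(String message) {
        // pop up the reminder in the patient's UI
        System.out.println(message);
    }
}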
Drools is an excellent rules engine, but might be overkill for what you are trying to do.
//Nicholas
