OSGi background thread failure - java

What should be done when a BundleActivator runs a background thread, and that background thread has an unrecoverable error?
public class Activator implements BundleActivator
{
    private Thread t;

    @Override
    public void start(BundleContext context) throws Exception
    {
        t = new Thread(new Runnable() {
            @Override
            public void run() {
                while (!Thread.interrupted()) {
                    // do something which may throw a runtime exception
                }
            }
        });
        t.start();
    }

    @Override
    public void stop(BundleContext context) throws Exception
    {
        t.interrupt();
        t.join();
    }
}
With this example, how can I notify the OSGi framework that the thread is dead and the bundle is effectively stopped and not running?

Look at how Peter Kriens performs similar actions in this article. All you would need to do with his example is invoke stop() on the activator in his catch block instead of calling printStackTrace().

Probably the best thing to do is just log the error, preferably to the OSGi Log Service. Then an administrator can detect the problem with the bundle and decide what to do. You should implement this as a Declarative Services component rather than as a BundleActivator, because that will give you much easier access to the Log Service, and you will also be able to have more than one of these things in your bundle.
I don't think the bundle should attempt to stop itself. That puts the bundle in a weird state: it's stopped but still has code running, i.e. the code that called stop(). This may only be for a brief period, but it feels wrong.
A bundle that's in the ACTIVE state doesn't necessarily have to be "doing something" all the time; it just has the potential to "do something". The fact that something failed shouldn't really affect the external state of the bundle.
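A minimal sketch of that shape, assuming DS 1.3+ annotations and the Log Service are on the classpath (the component name and the work loop are illustrative, not taken from the answer):
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

@Component
public class BackgroundWorker {

    @Reference
    private LogService log;

    private Thread t;

    @Activate
    void activate() {
        t = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // do the periodic work here (placeholder)
                } catch (RuntimeException e) {
                    // log and keep going; an administrator can react to the log entry
                    log.log(LogService.LOG_ERROR, "Background work failed", e);
                }
            }
        });
        t.start();
    }

    @Deactivate
    void deactivate() {
        t.interrupt();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
Because the component only logs and keeps running, the bundle stays ACTIVE and the decision about what to do is left to whoever reads the log.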

As far as I know, OSGi cannot directly help you in this particular situation. I usually rely on uncaught exception handlers to get notified of thread crashes, or I implement some form of software watchdog.
The point is that a bundle that spawns multiple threads and successfully completes its start method remains ACTIVE even if one of these threads crashes after some time.
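A minimal sketch of the uncaught-exception-handler approach mentioned above; the System.err call is a placeholder for whatever notification mechanism (Log Service, watchdog) you actually use:
public class WorkerWithCrashNotification {

    public static Thread startWorker(Runnable work) {
        Thread t = new Thread(work, "background-worker");
        // runs only when the thread is about to die from an uncaught exception
        t.setUncaughtExceptionHandler((thread, throwable) ->
                System.err.println("Thread " + thread.getName() + " crashed: " + throwable));
        t.start();
        return t;
    }
}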

Neil is (as usual) very right. A bundle should never stop itself, since that interferes with the management agent. Start/stop is the message from this management agent to a bundle to say that it should be active. If the bundle cannot perform its responsibility, you should log the message, wait a bit (increasingly longer), and retry.
The log is the place to notify; stopping a bundle is mixing levels badly.
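A hedged sketch of that log-and-retry-with-backoff loop; doWork(), the logging call and the backoff bounds are placeholders, not part of the original answer:
public class RetryLoop {

    // placeholders for the bundle's real work and its logging
    static void doWork() { /* the bundle's actual responsibility */ }
    static void log(Exception e) { System.err.println("work failed: " + e); }

    public static void run() {
        long delay = 1_000L;                              // start with one second
        while (!Thread.currentThread().isInterrupted()) {
            try {
                doWork();
                delay = 1_000L;                           // reset the backoff after success
            } catch (RuntimeException e) {
                log(e);                                   // e.g. via the OSGi Log Service
                try {
                    Thread.sleep(delay);                  // wait a bit ...
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    break;
                }
                delay = Math.min(delay * 2, 60_000L);     // ... increasingly longer, capped
            }
        }
    }
}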

Related

Spring Boot scheduler thread stops randomly

I have a scheduler in Spring Boot that fulfils a specific business task every X minutes. It works fine until it suddenly stops and does not engage anymore. There is no exception in the logs, or any other logging. I need to restart the program for the scheduler to work again.
Sometimes the task of the scheduler goes wrong and I throw an exception. To be able to handle those exceptions specifically, I wrote a custom ErrorHandler in Spring for the scheduler that runs a separate task for logging purposes. It is linked correctly to the scheduler and processes the task.
This issue can come up when an unhandled exception gets thrown inside an ErrorHandler. I am not sure about the specifics, but a RuntimeException thrown by an ErrorHandler (or a method it calls) that propagates out of it basically kills the scheduled thread for that task. Furthermore, NOTHING gets written to the logs (no exception message, nada).
The "easiest" way to resolve this is by wrapping the entire body of the handler in a try/catch block that catches Exception, although depending on why you have that ErrorHandler, that might be a bad idea. This does not solve the underlying issue, but it keeps the thread alive and allows you to log the problem.
Example:
import org.springframework.util.ErrorHandler;

public class MyErrorHandler implements ErrorHandler {
    @Override
    public void handleError(Throwable t) {
        try {
            // handle the intended exception (e.g. write to a database or the logs)
        } catch (Exception e) {
            // handle an exception that was thrown while handling the intended exception
        }
    }
}
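For completeness, a hedged sketch of wiring such a handler into the scheduler that runs the @Scheduled tasks; it assumes a ThreadPoolTaskScheduler is acceptable in your setup and that MyErrorHandler is the class shown above (by convention, a TaskScheduler bean named taskScheduler is the one @Scheduled picks up):
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;

@Configuration
@EnableScheduling
public class SchedulerConfig {

    @Bean
    public ThreadPoolTaskScheduler taskScheduler() {
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.setPoolSize(2);
        // exceptions thrown by scheduled tasks are routed to this handler
        scheduler.setErrorHandler(new MyErrorHandler());
        return scheduler;
    }
}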

How to prevent Spring app context shutdown until shutdown hook is fired

I have a spring-boot application.
I have implemented the SmartLifecycle interface in my bean, which starts an async SNMP server in its start method and stops it in its stop method.
Everything works fine, except that the main application context stops right after starting, so my server bean also stops right after it starts.
All I need is to make the Spring context stop only when the shutdown hook is fired.
This is not a web application, so I don't need spring-boot-starter-web, which solves this problem by starting a web server that prevents the context from stopping until the web server stops.
I can use something like a CountDownLatch and wait for it to reach zero in my main method right after the context starts. Something like this:
public static void main(String[] args) throws InterruptedException {
    ConfigurableApplicationContext ctx = SpringApplication.run(SnmpTrapRetranslatorApplication.class, args);
    CountDownLatch snmpServerCloseLatch = ctx.getBean("snmpServerCloseLatch", CountDownLatch.class);
    snmpServerCloseLatch.await();
}
And my server bean's start method will create this latch with a count of 1, while the stop method will call snmpServerCloseLatch.countDown().
This technique is described here.
But what's wrong with this is that my main method is responsible for waiting for my custom server bean to stop. That just doesn't feel right.
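(For reference, a hedged sketch of the bean side of that latch approach, assuming Spring 5+, where the remaining SmartLifecycle methods have defaults; the SNMP start/stop calls are placeholders, and exposing the latch as its own bean is an assumption made to match the getBean() lookup above.)
import java.util.concurrent.CountDownLatch;

import org.springframework.context.SmartLifecycle;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SnmpServerConfig {

    @Bean
    public CountDownLatch snmpServerCloseLatch() {
        return new CountDownLatch(1);
    }

    @Bean
    public SmartLifecycle snmpServer(CountDownLatch snmpServerCloseLatch) {
        return new SmartLifecycle() {
            private volatile boolean running;

            @Override
            public void start() {
                // start the async SNMP server here (placeholder)
                running = true;
            }

            @Override
            public void stop() {
                // stop the SNMP server here (placeholder)
                running = false;
                snmpServerCloseLatch.countDown();   // releases the await() in main()
            }

            @Override
            public boolean isRunning() {
                return running;
            }
        };
    }
}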
How does spring-boot-starter-web do this, for example? When it starts Tomcat, it keeps running until the shutdown hook is received, and it doesn't need any managing code in the main method. It stops only when the context receives a shutdown signal.
The behaviour is the same when I have a @Scheduled method in my bean, for example. Spring also doesn't stop the context automatically. Only on CTRL-C.
I want to achieve a similar effect. My main method should have only one line: start the context. The context should start and stop my async server when it starts or stops (already achieved by SmartLifecycle) and should not stop until shutdown is requested (CTRL-C, SIGINT, etc.).
My investigation led me to the core of the problem: daemon threads.
The SNMP server implementation I use (snmp4j) uses daemon threads internally. So even when the SNMP server has started, there are no live user threads left in the JVM, so it exits.
TL;DR:
Just add this method to any bean (the SNMP server bean is a good candidate for this):
@Scheduled(fixedDelay = 1000 * 60 * 60) // every hour
public void doNothing() {
    // Forces Spring's scheduling managing thread to start
}
(Do not forget to add @EnableScheduling to your Spring configuration.)
Explanation:
To prevent the Spring context from stopping while the SNMP server is still running, we need any non-daemon thread to be alive in the JVM. It does not have to be the main thread, so we can let the main method finish.
We can run a new non-daemon thread from our server bean's start method. This thread will wait on some lock in a while loop, checking a running flag, while our stop method will set running to false and call notifyAll on that lock.
This way, our non-daemon thread stays alive until the shutdown hook is triggered (and prevents the JVM from exiting).
After the shutdown hook fires, the Spring context lifecycle's close method calls every SmartLifecycle bean's stop method, which leads to the SNMP server bean's stop method being called, which sets running to false, which lets our non-daemon thread finish, which allows the JVM to stop gracefully.
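A minimal sketch of that manual keep-alive thread; the class, field and method names are illustrative, not taken from the original answer:
public class KeepAlive {

    private final Object lock = new Object();
    private boolean running;            // guarded by lock
    private Thread keepAliveThread;

    // called from the SmartLifecycle start() method
    public void start() {
        synchronized (lock) {
            running = true;
        }
        keepAliveThread = new Thread(() -> {
            synchronized (lock) {
                while (running) {
                    try {
                        lock.wait();    // parks until stop() calls notifyAll()
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
        keepAliveThread.setDaemon(false);   // a live user thread keeps the JVM up
        keepAliveThread.start();
    }

    // called from the SmartLifecycle stop() method (triggered by the shutdown hook)
    public void stop() {
        synchronized (lock) {
            running = false;
            lock.notifyAll();           // lets the keep-alive thread finish
        }
    }
}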
Or instead we can use Spring's scheduling thread in a similar way. It is also a non-daemon thread, so it will prevent the JVM from exiting. And Spring manages this thread itself, so it will automatically stop it when the shutdown hook is triggered.
To make Spring's scheduling thread start, we need any @Scheduled method in any bean.
I think the first (manual) approach is still more "correct", although it requires more async coding (which is error-prone, as we all know). Who knows how Spring will change its scheduling implementation in the future.
SpringApplication app = new SpringApplication(Main.class);
app.setRegisterShutdownHook(false);   // do not let Spring Boot register its own shutdown hook
ConfigurableApplicationContext applicationContext = app.run();
// register our own hook so we decide what happens on CTRL-C / SIGTERM
Runtime.getRuntime().addShutdownHook(new Thread(new Runnable() {
    @Override
    public void run() {
        // do your things
        applicationContext.close();
    }
}));

Global method to prevent app from crashing?

I am trying to add a method in my parent activity that all my activities inherit from. I want the method to catch any errors that have not already been handled, so the app does not crash. Instead of crashing, it will redirect to a failure screen activity.
Here is what I have at the moment, but it does not work; the app freezes and then goes black:
Thread.setDefaultUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    @Override
    public void uncaughtException(Thread paramThread, Throwable paramThrowable) {
        redirectToFailureScreen();
    }
});
The uncaught exception handler is not meant for rescuing an application. Ending up in that handler means the thread is being terminated. The handler gets notified as a courtesy for logging purposes before the thread is killed.
Implemented by objects that want to handle cases where a thread is being terminated by an uncaught exception. Upon such termination, the handler is notified of the terminating thread and causal exception. If there is no explicit handler set then the thread's group is the default handler.
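A hedged sketch of the logging-only use the answer describes, in plain Java; the System.err calls are placeholders, and chaining to the previously installed default handler is an assumption about the desired behaviour, not something stated in the answer:
public class CrashLogger {

    public static void install() {
        final Thread.UncaughtExceptionHandler previous =
                Thread.getDefaultUncaughtExceptionHandler();

        Thread.setDefaultUncaughtExceptionHandler((thread, throwable) -> {
            // record the crash for later diagnosis
            System.err.println("Uncaught exception on " + thread.getName());
            throwable.printStackTrace();
            // then delegate so the normal termination still happens
            if (previous != null) {
                previous.uncaughtException(thread, throwable);
            }
        });
    }
}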

Spring's DeferredResult setResult interaction with timeouts

I'm experimenting with Spring's DeferredResult on Tomcat, and I'm getting crazy results. Am I doing something wrong, or is there some bug in Spring or Tomcat? My code is simple enough.
@Controller
public class Test {

    private DeferredResult<String> deferred;

    static class DoSomethingUseful implements Runnable {
        public void run() {
            try { Thread.sleep(2000); } catch (InterruptedException e) { }
        }
    }

    @RequestMapping(value = "/test/start")
    @ResponseBody
    public synchronized DeferredResult<String> start() {
        deferred = new DeferredResult<>(4000L, "timeout\n");
        deferred.onTimeout(new DoSomethingUseful());
        return deferred;
    }

    @RequestMapping(value = "/test/stop")
    @ResponseBody
    public synchronized String stop() {
        deferred.setResult("stopped\n");
        return "ok\n";
    }
}
So. The start request creates a DeferredResult with a 4 second timeout. The stop request will set a result on the DeferredResult. If you send stop before or after the deferred result times out, everything works fine.
However if you send stop at the same time as start times out, things go crazy. I've added an onTimeout action to make this easy to reproduce, but that's not necessary for the problem to occur. With an APR connector, it simply deadlocks. With a NIO connector, it sometimes works, but sometimes it incorrectly sends the "timeout" message to the stop client and never answers the start client.
To test this:
curl http://localhost/test/start & sleep 5; curl http://localhost/test/stop
I don't think I'm doing anything wrong. The Spring documentation seems to say it's okay to call setResult at any time, even after the request already expired, and from any thread ("the application can produce the result from a thread of its choice").
Versions used: Tomcat 7.0.39 on Linux, Spring 3.2.2.
This is an excellent bug find!
Just adding more information about the bug (which got fixed), for a better understanding.
There was a synchronized block inside setResult() that extended up to the point of submitting a dispatch. This can cause a deadlock if a timeout occurs at the same time, since the Tomcat timeout thread has its own locking that permits only one thread to do timeout or dispatch processing.
Detailed explanation:
When you call "stop" at the same time as the request "times out", two threads are attempting to lock the DeferredResult object 'deferred'.
The thread that executes the "onTimeout" handler
Here is the excerpt from the Spring doc:
This onTimeout method is called from a container thread when an async request times out before the DeferredResult has been set. It may invoke setResult or setErrorResult to resume processing.
Another thread that executes the "stop" service.
If the dispatch processing called during the stop() service obtains the 'deferred' lock, it will wait for a tomcat lock (say TomcatLock) to finish the dispatch.
And if the other thread doing timeout handling has already acquired the TomcatLock, that thread waits to acquire a lock on 'deferred' to complete the setResult()!
So, we end up in a classic deadlock situation !
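On the application side, a hedged sketch of a more defensive stop() handler (a drop-in variant of the one in the question above), relevant once the framework-level deadlock is fixed; the key point is that setResult() returns false when the timeout has already won the race:
@RequestMapping(value = "/test/stop")
@ResponseBody
public synchronized String stop() {
    if (deferred == null) {
        return "not started\n";
    }
    // false if the request already completed or timed out
    boolean set = deferred.setResult("stopped\n");
    return set ? "ok\n" : "too late\n";
}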

file listener process on tomcat

I need a very simple process that listens on a directory and does some operation when a new file is created in that directory.
I guess I need a thread pool that does that.
This is very easy to implement using the Spring framework, which I normally use, but I can't use it now.
I can only use Tomcat. How can I implement it? What is the entry point that "starts" that thread?
Does it have to be a servlet?
thanks
Since you refined the question, here comes another answer: how to start a daemon in Tomcat.
First, register your Daemons listener in web.xml:
<listener>
    <listener-class>my.package.servlet.Daemons</listener-class>
</listener>
Then implement the Daemons class as an implementation of ServletContextListener, like this:
The loop body runs every 5 seconds, and Tomcat will call contextDestroyed when your app shuts down. Note that the flag is volatile, otherwise you may have trouble shutting down on multi-core systems.
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

public class Daemons implements ServletContextListener {

    private volatile boolean active = true;

    Runnable myDaemon = new Runnable() {
        public void run() {
            while (active) {
                try {
                    System.out.println("checking changed files...");
                    Thread.sleep(5000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }
    };

    public void contextInitialized(ServletContextEvent servletContextEvent) {
        new Thread(myDaemon).start();
    }

    public void contextDestroyed(ServletContextEvent servletContextEvent) {
        active = false;
    }
}
You could create a listener to start the thread; however, this isn't a good idea. When you are running inside a web container, you shouldn't start your own threads. There are a couple of questions on Stack Overflow about why this is so. You could use Quartz (a scheduler framework), but I guess you couldn't achieve an acceptable resolution.
Anyway, what you are describing isn't a web application, but rather a daemon service. You could implement it independently from your web application and create a means for the two to communicate with each other.
True Java-only file notification will be added in Java 7. Here is a part of the javadoc that describes it roughly:
The implementation that observes events from the file system is intended to map directly on to the native file event notification facility where available
Right now you will have to either create a native, platform-dependent program that does that for you, or alternatively implement some kind of polling, which lists the directory every so often to detect changes.
There is a notification library on SourceForge that you can use right now; on Linux it uses a C program to detect changes, and on Windows it uses polling. I did not try it out to see if it works.
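For reference, a minimal sketch of the Java 7 WatchService mentioned above; the watched directory path is an illustrative placeholder:
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardWatchEventKinds;
import java.nio.file.WatchEvent;
import java.nio.file.WatchKey;
import java.nio.file.WatchService;

public class DirectoryWatcher {
    public static void main(String[] args) throws Exception {
        Path dir = Paths.get("/tmp/incoming");                 // directory to observe
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);

        while (true) {
            WatchKey key = watcher.take();                     // blocks until an event arrives
            for (WatchEvent<?> event : key.pollEvents()) {
                // for ENTRY_CREATE the context is the relative path of the new file
                System.out.println("new file: " + dir.resolve((Path) event.context()));
            }
            if (!key.reset()) {                                // directory no longer accessible
                break;
            }
        }
    }
}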
