Singleton returns null when accessed by threads - java

As the title states, I'm trying to troubleshoot an issue where some threads reading data from a Singleton get a null value. Our logs suggest it's a concurrency issue.
The Singleton is defined as follows:
@Singleton
public class StaticDatabaseEntries {

    private static final Map<String, Thing> databaseEntries = new HashMap<>();

    @Lock(LockType.READ)
    public Thing getThing(String index) {
        return databaseEntries.get(index);
    }
}
At first I was under the impression that only one element within the data was corrupted, as access to the same item repeatedly returned null. Further debug entries show that the issue appears isolated to a specific thread: once whatever induces the null return happens on a thread, that thread keeps returning null, but only that thread.
An earlier version of this class did not apply LockType.READ, so per the specification LockType.WRITE was assumed. I deployed an update with the correct lock to enable concurrent reads. This did not improve the situation.
The data is loaded into the HashMap from a database upon deployment and remains unchanged for the duration. Since the class isn't annotated with @Startup, the application instead uses a context listener to trigger the loading of the entries from the database.
With threads primarily performing read activity, I don't believe a switch to ConcurrentHashMap is beneficial. I am considering removing the static final modifiers, as they seem unnecessary when the container manages concurrent access and the singleton lifecycle, and I have experienced side effects when the container cannot subclass/proxy things that are marked final in EJBs.
The other possibility I've considered is some manner of bug in the container software. This is running on an older stack: Java 1.7 and JBoss EAP 6. Worst case, I'll have to forgo the singleton pattern and instead load the entries from the database on demand.

In general: if you work with threads, even read-only activity can cause problems when you call methods on an object that isn't thread-safe. This is exactly what often leads to hard-to-detect errors in larger projects.
HashMap isn't thread-safe!
You should switch to ConcurrentHashMap.
For further details, have a look at this article: https://www.baeldung.com/java-concurrent-map
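For illustration, here is a minimal sketch of what that change could look like on an EJB 3.1 container, combined with eager loading so the map is fully populated before any client call; EntryDao and its loadAll() method are hypothetical stand-ins for the asker's existing loading code:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.annotation.PostConstruct;
import javax.ejb.EJB;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Startup;

@Singleton
@Startup
public class StaticDatabaseEntries {

    // ConcurrentHashMap tolerates concurrent reads and writes without external locking;
    // non-static, since the container manages the singleton's lifecycle anyway.
    private final Map<String, Thing> databaseEntries = new ConcurrentHashMap<>();

    @EJB
    private EntryDao entryDao; // hypothetical DAO

    @PostConstruct
    void load() {
        // populate once, before any client can call getThing()
        databaseEntries.putAll(entryDao.loadAll());
    }

    @Lock(LockType.READ)
    public Thing getThing(String index) {
        return databaseEntries.get(index);
    }
}

Whether ConcurrentHashMap alone fixes the reported symptom depends on whether readers can run while the map is still being populated; the sketch sidesteps that by loading in @PostConstruct before the bean serves requests.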

Related

Thread-safe caching for expensive resource that needs global clean up

Situation:
Need a cache of an expensive-to-create and non-thread-safe external resource
The resource requires explicit clean up
The termination of each thread cannot be hooked, but that of the application can
The code also runs in a Servlet container, so caches that cause a strong reference from the system class loader (e.g. ThreadLocal) cannot be directly used (see edit below).
Thus, to use a ThreadLocal, it can only hold WeakReferences to the resource, and a separate collection of strong references has to be kept. The code quickly gets very complicated and creates a memory leak (as the strong reference is never removed after thread death).
ConcurrentHashMap seems to be a good fit, but it also suffers from the memory leak.
What other alternatives are there? A synchronised WeakHashMap??
(Hopefully the solution can also be automatically initialised using a given Supplier just like ThreadLocal.withInitial())
Edit:
Just to prove the class loader leak is a thing. Create a minimal WAR project with:
public class Test {
    public static ThreadLocal<Test> test = ThreadLocal.withInitial(Test::new);
}
index.jsp:
<%= Test.test.get() %>
Visit the page and shutdown the Tomcat and you get:
Aug 21, 2015 5:56:11 PM org.apache.catalina.loader.WebappClassLoaderBase checkThreadLocalMapForLeaks
SEVERE: The web application [test] created a ThreadLocal with key of type [java.lang.ThreadLocal$SuppliedThreadLocal] (value [java.lang.ThreadLocal$SuppliedThreadLocal@54e69987]) and a value of type [test.Test] (value [test.Test@2a98020a]) but failed to remove it when the web application was stopped. Threads are going to be renewed over time to try and avoid a probable memory leak.
That seems to be the typical “weak key, strong value referencing the key” problem. If you make the value weak, it can be collected even if the key is reachable, if you make it strong, the key is strongly reachable as well. This can’t be solved without a direct support by the JVM.
Thankfully there is a class which offers that (though it’s not emphasized in its documentation):
java.lang.ClassValue:
Lazily associate a computed value with (potentially) every type. For example, if a dynamic language needs to construct a message dispatch table for each class encountered at a message send call site, it can use a ClassValue to cache information needed to perform the message send quickly, for each class encountered.
While this documentation doesn't say that the values may refer to the Class key, its intended use case of holding dispatch tables for a class implies that values with back-references are typical.
Let’s demonstrate it with a small test class:
import java.io.PrintStream;
import java.lang.invoke.LambdaMetafactory;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;
import java.lang.ref.WeakReference;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Arrays;

public class ClassValueTest extends ClassValue<Method> {
    @Override
    protected Method computeValue(Class<?> type) {
        System.out.println("computeValue");
        return Arrays.stream(type.getDeclaredMethods())
            .filter(m -> Modifier.isPublic(m.getModifiers()))
            .findFirst().orElse(null);
    }

    public static void main(String... arg) throws Throwable {
        // create a collectible class:
        MethodHandles.Lookup l = MethodHandles.lookup();
        MethodType noArg = MethodType.methodType(void.class);
        MethodHandle println = l.findVirtual(
            PrintStream.class, "println", MethodType.methodType(void.class, String.class));
        Runnable r = (Runnable) LambdaMetafactory.metafactory(l, "run",
            println.type().changeReturnType(Runnable.class), noArg, println, noArg)
            .getTarget().invokeExact(System.out, "hello world");
        r.run();

        WeakReference<Class<?>> ref = new WeakReference<>(r.getClass());
        ClassValueTest test = new ClassValueTest();
        // compute and get
        System.out.println(test.get(r.getClass()));
        // verify that the value is cached, should not compute
        System.out.println(test.get(r.getClass()));
        // allow freeing
        r = null;
        System.gc();
        if (ref.get() == null) System.out.println("collected");
        // ensure that it is not our cache instance that has been collected
        System.out.println(test.get(String.class));
    }
}
On my machine it printed:
hello world
computeValue
public void ClassValueTest$$Lambda$1/789451787.run()
public void ClassValueTest$$Lambda$1/789451787.run()
collected
computeValue
public boolean java.lang.String.equals(java.lang.Object)
To explain, this test creates an anonymous class, just like lambda expressions produce, which can be garbage collected. Then it uses the ClassValueTest instance to cache a Method object of that Class. Since Method instances have a reference to their declaring class, we have the situation of a value referring to its key here.
Still, after the class is not used anymore, it gets collected, which implies that the associated value has been collected too. So it's immune to back-references from the value to the key.
The last test using another class just ensures that we are not victims of eager garbage collection as described here, as we are still using the cache instance itself.
This class associates a single value with a class, not a value per thread, but it should be possible to combine ClassValue with ThreadLocal to get the desired result.
I'd propose to get rid of ThreadLocal and WeakReference stuff altogether, because, as you say, resources are not bound to specific threads, they just cannot be accessed from several threads simultaneously.
Instead, have a global cache, Map<Key, Collection<Resource>>. The cache contains only resources that are free for use at the moment.
Threads would first request an available resource from the cache. If one is present (this, of course, should be synchronized, as the cache is global), an arbitrary resource is removed from the collection for that key and given to the thread. Otherwise, a new one for that key is built and also given to the thread.
When a thread finishes using a resource, it should return it to the cache, i.e. add it to the collection mapped to the resource's key. From there it can be used by the same thread again, or even by a different thread (a sketch follows the lists below).
Advantages:
Cache is global, trivial to shut down all allocated resources when application quits.
Hardly any potential for memory leaks, code should be pretty concise.
Threads can share resources (provided they need the same resource at different times), potentially decreasing demand.
Disadvantages:
Requires synchronization (but likely cheap and not difficult to code).
Maybe some others, depending on what exactly you do.
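A minimal sketch of this pooling approach, under the assumption that the resource exposes its clean-up as close() (Key and Resource are placeholders for the asker's types):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Global pool of resources that are currently free; not bound to any thread.
public class ResourcePool<K, R extends AutoCloseable> {

    private final Map<K, Deque<R>> free = new HashMap<>();
    private final Function<K, R> factory; // creates a new resource for a key

    public ResourcePool(Function<K, R> factory) {
        this.factory = factory;
    }

    // Take a free resource for this key, or build a new one.
    public synchronized R acquire(K key) {
        Deque<R> pool = free.get(key);
        R r = (pool == null) ? null : pool.poll();
        return (r != null) ? r : factory.apply(key);
    }

    // Return a resource so another thread (or the same one) can reuse it.
    public synchronized void release(K key, R resource) {
        free.computeIfAbsent(key, k -> new ArrayDeque<>()).push(resource);
    }

    // Application shutdown: clean up everything that is currently pooled.
    public synchronized void shutdown() throws Exception {
        for (Deque<R> pool : free.values()) {
            for (R r : pool) {
                r.close();
            }
        }
        free.clear();
    }
}

Note that resources still checked out by a thread at shutdown are not closed by this sketch; the answer assumes threads return what they acquire.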
I am not sure about the problem you are talking about. Please take a look at: https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem
Some Questions:
How is the resource referenced?
What is the interface to the resource?
What data should be cached at all?
What is a "non-thread safe resource"
How often is the resource retrieved?
How long is the access to one resource, what level of concurrency is there?
Is one thread using the resource many times and this is the reason for the intended caching?
Are many threads using the same resource (instance)?
Can there be many instances of the same resource type, since the actual instance is not thread safe?
How many resources do you have?
Is it many resource instances of the same type or different types?
Maybe you can try to remove the words ThreadLocal, WeakReference, ConcurrentHashMap from your question?
Some (wild) guess:
From what I can read between the lines, it seems to me that it is a straightforward use case for a Java cache. E.g. you can use a Google Guava cache and add a removal listener for the explicit cleanup.
Since the resource is not thread safe you need to implement a locking mechanism. This can be done by putting a lock object into the cached object.
If you need more concurrency, create more resources of the same type and augment the cache key with the hash of the thread modulo the level of concurrency you like to have.
While researching the weak concurrent map idea, I found that it's implemented in Guava's Cache.
I used the current thread as the weak key, and a CacheLoader is supplied to automatically create the resource for each new thread.
A removal listener is also added, so that each thread's resource will be automatically cleaned up after the Thread object is GC'ed or when I call the invalidateAll() method during shut-down.
Most of the configuration above can also be done in a one liner (with lambdas).
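For illustration, a sketch of such a Guava cache; Resource here is a hypothetical stand-in for the actual expensive, non-thread-safe resource:

import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import com.google.common.cache.RemovalListener;

public class PerThreadResources {

    /** Hypothetical stand-in for the expensive, non-thread-safe resource. */
    static class Resource implements AutoCloseable {
        @Override
        public void close() { /* explicit clean-up goes here */ }
    }

    // Runs whenever an entry is evicted: after the owning Thread becomes weakly
    // reachable (weak keys) or when invalidateAll() is called during shut-down.
    private final RemovalListener<Thread, Resource> cleanup = notification -> {
        Resource r = notification.getValue();
        if (r != null) {
            r.close();
        }
    };

    private final LoadingCache<Thread, Resource> cache = CacheBuilder.newBuilder()
            .weakKeys()
            .removalListener(cleanup)
            .build(new CacheLoader<Thread, Resource>() {
                @Override
                public Resource load(Thread key) {
                    return new Resource(); // expensive creation, once per live thread
                }
            });

    public Resource get() {
        return cache.getUnchecked(Thread.currentThread());
    }

    // Application shut-down: evict everything so the removal listener closes each resource.
    public void shutdown() {
        cache.invalidateAll();
        cache.cleanUp();
    }
}

One caveat worth noting: entries whose thread has died are only evicted lazily, during later cache activity or an explicit cleanUp() call.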

Should I mark object attributes as volatile if I init them in @PostConstruct in Spring Framework?

Suppose that I do some initialization in a Spring singleton bean's @PostConstruct (simplified code):
@Service
class SomeService {

    public Data someData; // not final, not volatile

    public SomeService() { }

    @PostConstruct
    public void init() {
        someData = new Data(....);
    }
}
Should I worry about someData visibility to other beans and mark it volatile?
(suppose that I cannot initialize it in constructor)
And a second scenario: what if I overwrite the value in @PostConstruct (after, for example, explicit initialization or initialization in the constructor), so the write in @PostConstruct is not the first write to this attribute?
The Spring framework is not tied into the Java programming language; it is just a framework. Therefore, in general, you need to mark a non-final field that is accessed by different threads as volatile. At the end of the day, a Spring bean is nothing more than a Java object, and all language rules apply.
final fields receive special treatment in the Java programming language. Aleksey Shipilëv, the Oracle performance guy, wrote a great article on this matter. In short, when a constructor initializes a final field, the assembly for setting the field value adds an additional memory barrier that assures that the field is seen correctly by any thread.
For a non-final field, no such memory barrier is created. Thus, in general, it is perfectly possible that the @PostConstruct-annotated method initializes the field and this value is not seen by another thread or, even worse, is seen while the constructor has only partially executed.
Does this mean that you always need to mark non-final fields as volatile?
In short, yes. If a field can be accessed by different threads, you do. Don't make the same mistake I did when thinking about the matter for only a few seconds (thanks to Jk1 for the correction) and reason purely in terms of your Java code's execution sequence. You might think that your Spring application context is bootstrapped in a single thread, so the bootstrapping thread will not have issues with the non-volatile field, and that everything is therefore in order as long as you do not expose the application context to another thread until it is fully initialized, i.e. until the annotated method has been called. Thinking like this, you could assume the other threads have no chance to cache the wrong field value as long as you do not alter the field after this bootstrap.
In contrast, the compiled code is allowed to reorder instructions: even if the @PostConstruct-annotated method is called before the related bean is exposed to another thread in your Java code, this happens-before relationship is not necessarily retained in the compiled code at runtime. Thus, another thread might read and cache the non-volatile field while it is either not yet initialized at all or only partially initialized. This can introduce subtle bugs, and the Spring documentation unfortunately does not mention this caveat. Such details of the JMM are a reason why I personally prefer final fields and constructor injection.
Update: According to this answer in another question, there are scenarios where not marking the field as volatile would still produce valid results. I investigated this a little further, and the Spring framework does in fact guarantee a certain amount of happens-before safety out of the box. Have a look at the JLS on happens-before relationships, where it clearly states:
An unlock on a monitor happens-before every subsequent lock on that monitor.
The Spring framework makes use of this. All beans are stored in a single map, and Spring acquires a specific monitor each time a bean is registered or retrieved from this map. As a result, the same monitor is unlocked after registering the fully initialized bean and locked before retrieving that bean from another thread. This forces the other thread to respect the happens-before relationship that is reflected by the execution order of your Java code. Thus, if you bootstrap your bean once, all threads that access the fully initialized bean will see this state, as long as they access the bean in a canonical manner (i.e. explicit retrieval by querying the application context, or auto-wiring). This makes, for example, setter injection or the use of a @PostConstruct method safe even without declaring a field volatile. As a matter of fact, you should therefore avoid volatile fields, as they introduce a runtime overhead for each read, which can get painful when accessing a field in loops, and because the keyword signals a wrong intention. (By the way, to my knowledge, the Akka framework applies a similar strategy, where Akka, unlike Spring, drops a few lines on the problem.)
This guarantee is, however, only given for the retrieval of the bean after its bootstrap. If you change the non-volatile field after the bootstrap, or if you leak the bean reference during its initialization, this guarantee no longer applies.
Check out this older blog entry which describes this feature in further detail. Apparently, this feature is not documented, something even the Spring people are aware of (but have not done anything about in a long time).
Should I worry about someData write visibility to other beans and mark it volatile?
I see no reason why you should not. Spring provides no additional thread-safety guarantees when calling @PostConstruct, so the usual visibility issues may still happen. A common approach would be to declare someData final, but if you want to modify the field several times that obviously won't fit.
It should not really matter whether it's the first write to the field or not. According to the Java Memory Model, reordering/visibility issues apply in both cases. The only exception is made for final fields, which can be written safely the first time, but later assignments (e.g. via reflection) are not guaranteed to be visible.
volatile, however, can guarantee the necessary visibility from other threads. It also prevents unwanted exposure of a partly-constructed Data object: due to reordering, the someData reference may be assigned before all necessary object-creation operations are completed, including constructor operations and default value assignments.
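For illustration, a minimal sketch of the volatile variant this answer describes (Data stands in for the asker's type, and the constructor arguments are elided as in the question):

import javax.annotation.PostConstruct;
import org.springframework.stereotype.Service;

@Service
class SomeService {

    // volatile guarantees that, once init() has run, every reader thread sees
    // a fully constructed Data instance rather than null or a partial object.
    private volatile Data someData;

    @PostConstruct
    public void init() {
        someData = new Data(/* ... */);
    }

    public Data getSomeData() {
        return someData;
    }
}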
Update: According to comprehensive research by @raphw, Spring stores singleton beans in a monitor-guarded map. This is actually true, as we can see from the source code of org.springframework.beans.factory.support.DefaultSingletonBeanRegistry:
public Object getSingleton(String beanName, ObjectFactory singletonFactory) {
    Assert.notNull(beanName, "'beanName' must not be null");
    synchronized (this.singletonObjects) {
        Object singletonObject = this.singletonObjects.get(beanName);
        ...
        return (singletonObject != NULL_OBJECT ? singletonObject : null);
    }
}
This may provide you with thread-safety properties on @PostConstruct, but I would not consider it a sufficient guarantee, for a number of reasons:
It affects only singleton-scoped beans, providing no guarantees for beans of other scopes: request, session, global session, accidentally exposed prototype scope, or custom user scopes (yes, you can create one yourself).
It ensures the write to someData is protected, but it gives no guarantees to the reader thread. One can construct an equivalent but simplified example, where the data write is monitor-guarded but the reader thread is not affected by any happens-before relationship and can read outdated data:
public class Entity {

    public Object data;

    public synchronized void setData(Object data) {
        this.data = data;
    }
}
Last, but not least: this internal monitor we're talking about is an implementation detail. Being undocumented, it is not guaranteed to stay forever and may be changed without further notice.
Side note: all of the above is true for beans that are subject to multithreaded access. For prototype-scoped beans it is not really the case, unless they are exposed to several threads explicitly, e.g. by injection into a singleton-scoped bean.

How to create singleton java class for multiple jvm support?

For example, I have a DBManager.java singleton class, which I have to deploy in a clustered environment.
It is a web-based application, with the following deployment strategy:
Apache Load Balancer --> Tomcat 6 (3 servers in a cluster).
I have to maintain a single instance of DBManager across the 3 Tomcat instances.
My code is:
package com.db.util;

public class DBManager {

    private static DBManager singleInstance;

    private DBManager() {}

    public static DBManager getSingleInstance() {
        if (singleInstance == null) {
            synchronized (DBManager.class) {
                if (singleInstance == null) {
                    singleInstance = new DBManager();
                }
            }
        }
        return singleInstance;
    }
}
I have been searching a solution to this problem, and found something like JGroups API.
Can this be achieved using JGroups? Any idea how to implement that?
Java gives you a singleton in each JVM instance; you need some kind of coordination between the instances so that at any given time one of them is active, and if the active one dies a different instance becomes active.
Some app servers have built-in capabilities to control such coordinated worker instances; I don't know whether Tomcat has such a function.
Building such functionality yourself is surprisingly difficult. See this question, and note that it links to a useful library, which to me looks quite complex to use.
However, in your case you have a database, and that gives you a point of coordination. I haven't designed this in detail, but I reckon it's possible to create a reservation scheme using a dedicated row in a control table. It will be a bit tricky to do this efficiently, balancing the speed of detecting an instance's death against the overhead of polling the database to see which instance is active, but it seems doable.
The idea is that the record contains a "reservedUntil" timestamp and a "processId". Each process reads the record; if it contains its own id and the timestamp has not yet expired, it knows it can work. When the time is nearly expired, the active process updates the timestamp using an optimistic-locking style "UPDATE ... WHERE timestamp = old timestamp" to manage race conditions. Each non-active process waits until the timestamp it last read has expired and then attempts to take control by updating the record, again using an optimistic-locking UPDATE. Usually that attempt to take control will fail, but if it succeeds we have a new active instance, and due to optimistic locking we can only ever get one active instance.
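A rough JDBC sketch of that reservation scheme, assuming a single-row control table; the table and column names (control_table, process_id, reserved_until) are hypothetical:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

public class LeaseCoordinator {

    // Renew the lease we already hold; the old timestamp acts as the optimistic lock.
    static boolean renew(Connection c, String myId, Timestamp oldUntil, Timestamp newUntil)
            throws SQLException {
        String sql = "UPDATE control_table SET reserved_until = ? "
                   + "WHERE process_id = ? AND reserved_until = ?";
        try (PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setTimestamp(1, newUntil);
            ps.setString(2, myId);
            ps.setTimestamp(3, oldUntil);
            return ps.executeUpdate() == 1; // 1 row updated -> we still own the lease
        }
    }

    // Try to take over an expired lease; only one contender's UPDATE can match.
    static boolean takeOver(Connection c, String myId, Timestamp lastSeenUntil, Timestamp newUntil)
            throws SQLException {
        String sql = "UPDATE control_table SET process_id = ?, reserved_until = ? "
                   + "WHERE reserved_until = ?";
        try (PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, myId);
            ps.setTimestamp(2, newUntil);
            ps.setTimestamp(3, lastSeenUntil);
            return ps.executeUpdate() == 1; // success means this instance is now active
        }
    }
}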
Singleton ensures only one instance of the class in a given JVM.
What is the issue with multiple DBManagers, one for each JVM, in your case?

Handling a timeout in EJB3 without using threads

I have the following situation. I have a job that:
May time out after a given amount of time, and if that occurs it needs to throw an exception
If it does not time out, will return a result
If this job returns a result, it must be returned as quickly as possible, because performance is very much an issue. Asynchronous solutions are hence off the table, and naturally tying up the system by hammering isn't an option either.
Lastly, the system has to conform to the EJB standard, so AFAIK using ordinary threads is not an option, as this is strictly forbidden.
Our current solution uses a thread that will throw an exception after having existed for a certain amount of time without being interrupted by an external process, but as this clearly breaks the EJB standard, we're trying to solve it with some other means.
Any ideas?
Edited to add: Naturally, a job which has timed out needs to be removed (or interrupted) as well.
Edited to add 2:
This issue doesn't seem to have any solution, because detecting a deadlock seems to be mostly impossible sticking to pure EJB3 standards. Since Enno Shioji's comments below reflect this, I'm setting his suggestion as the correct answer.
This is more like a request for clarification, but it's too long to fit as a comment..
I'm not sure how you are doing it right now, since from what you wrote, just using the request processing thread seems to be the way to go. Like this:
// Some webservice method (synchronous)
public Result process(Blah blah) {
    try {
        return getResult(TimeUnit.SECONDS, 10);
    } catch (InterruptedException e) {
        // No result within 10 seconds!
        throw new ServiceUnavailableException("blah");
    }
}
I'm not sure why you are creating threads at all. If you are forced to use threads because the getResult method doesn't time out at all, you would have a thread leak. If it times out after a longer time and you want to "shortcut" your reply to the user, that would be the only case where I'd consider using a thread the way I imagine you are using it. This could result in threads piling up under load, and I'd strive to avoid such a situation.
Maybe you can post some code and let us know why you are creating threads in your service at all?
Also, what's your client interface? Sounds like it's a synchronous webservice or something?
In that case, if I were you I would use a HashedWheelTimer as a singleton... this mechanism should work great with your requirement (here is an implementation). However, this unfortunately seems to conflict with the ban on threading AND the ban on singletons in the EJB spec. In reality, though, there really isn't a problem if you do this. See this discussion for example. We have also used the singleton pattern in our EJB app, which used JBoss. However, if this isn't a viable choice, then I might look at isolating the processing in its own JVM by defining a new web service (and deploying it in a web container or something), and calling that service from the EJB app. This would obviously incur a performance hit, and now you would have another whole new app.
With Bean-Managed Transactions, the timeout for a specific transaction can be specified using the UserTransaction interface:
Modify the timeout value that is associated with transactions started by the current thread with the begin method.
void setTransactionTimeout(int seconds) throws SystemException
The transaction will time out after the specified number of seconds and may not be propagated further. If the exception is not thrown implicitly, it can be thrown explicitly based on the result.
It will return a result on successful completion within the specified time.
It can be used with stateless session beans, so there may not be a performance issue.
It's part of the EJB standard, so implementing it will not be an issue.
With a little workaround, it should work fine in the given scenario.
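A minimal sketch of what that could look like with bean-managed transactions; doWork() is a hypothetical stand-in for the actual job:

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class TimedJobBean {

    @Resource
    private UserTransaction userTransaction;

    public String runWithTimeout() throws Exception {
        // Must be called before begin(); applies to transactions started by this thread.
        userTransaction.setTransactionTimeout(10);
        userTransaction.begin();
        try {
            String result = doWork();
            userTransaction.commit();
            return result;
        } catch (Exception e) {
            userTransaction.rollback(); // the transaction timed out or failed
            throw e;
        }
    }

    private String doWork() {
        // stand-in for the real job; replace with the actual work
        return "done";
    }
}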
Edit: Server-specific properties can also be used to manage the transaction timeout.
JBoss: The annotation @TransactionTimeout(100) can be applied at either class or method level.
WebLogic: Specify the parameters in weblogic-ejb-jar.xml:
<transaction-descriptor>
    <trans-timeout-seconds>100</trans-timeout-seconds>
</transaction-descriptor>
GlassFish: Use the optional cmt-timeout-in-seconds element in sun-ejb-jar.xml.
Stick the process and its timeout thread into a class annotated with @WebService, put that class into a WAR, then invoke the web service from your EJB.
WARs don't have the same limitations or live under the same contract that EJBs do, so they can safely run threads.
Yes, I consider this a "hack", but it meets the letter of the requirements, and it's portable.
You can create threads using the CommonJ WorkManager. There are implementations built into WebSphere and WebLogic, as they proposed the standard, but you can also find implementations for other app servers as well.
Basically, the WorkManager allows you to create managed threads inside the container, much like using an Executor in regular Java. Your only other alternative would be to use MDBs, but that would be a 'heavier' solution.
Since I don't know your actual platform, you will have to google CommonJ with your platform yourself 8-)
Here is a non IBM or Oracle solution.
Note: This is not an actual standard, but it is widely available for different platforms and should suit your purposes nicely.
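As a rough sketch of how the CommonJ API is typically used; the JNDI name varies by server (java:comp/env/wm/default is the usual WebSphere default), so treat the lookup name and the timeout handling here as assumptions:

import java.util.Collections;
import javax.naming.InitialContext;
import commonj.work.Work;
import commonj.work.WorkItem;
import commonj.work.WorkManager;

public class TimedWork {

    public void runWithTimeout() throws Exception {
        // The container provides the WorkManager; no unmanaged threads are created.
        WorkManager wm = (WorkManager) new InitialContext().lookup("java:comp/env/wm/default");

        Work job = new Work() {
            public void run()         { /* the actual job goes here */ }
            public void release()     { /* asked to stop early */ }
            public boolean isDaemon() { return false; }
        };

        WorkItem item = wm.schedule(job);
        // Wait at most 10 seconds for the scheduled work to finish.
        boolean finished = wm.waitForAll(Collections.singletonList(item), 10000);
        if (!finished) {
            throw new IllegalStateException("job timed out"); // handle the timeout as needed
        }
    }
}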
For EJBs, there is the concept of "Container-Managed Transactions". By specifying @TransactionAttribute on your bean, or on a specific method, the container will create a transaction whenever the method(s) are invoked. If the execution of the code takes longer than the transaction threshold, the container will throw an exception. If the call finishes under the transaction threshold, it will return as usual. You can catch the exception in your calling code and handle it appropriately.
For more on container managed transactions, check out: http://java.sun.com/j2ee/tutorial/1_3-fcs/doc/Transaction3.html and http://download.oracle.com/javaee/5/tutorial/doc/bncij.html
You could use @Timeout. Something like:
@Stateless
public class TimedBean {

    @Resource
    private TimerService timerService;

    static private AtomicInteger counter = new AtomicInteger(0);
    static private Map<Integer, AtomicBoolean> canIRunStore = new ...;

    public void doSomething() {
        Integer myId = counter.getAndIncrement();
        AtomicBoolean canIRun = new AtomicBoolean(true);
        canIRunStore.put(myId, canIRun);
        timerService.createTimer(1000, 0, myId);

        while (canIRun.get() /* && some other condition */) {
            // do my work ... until timeout ...
        }
    }

    @Timeout
    @PermitAll
    public void timeout(Timer timer) {
        Integer expiredId = (Integer) timer.getInfo();
        AtomicBoolean canHeRun = canIRunStore.get(expiredId);
        canIRunStore.remove(expiredId);
        canHeRun.set(false);
    }
}

Static references are cleared--does Android unload classes at runtime if unused?

I have a question specific to how classloading / garbage collection works in Android. We have stumbled upon this issue a few times now, and as far as I can tell, Android behaves differently here than an ordinary JVM.
The problem is this: we're currently trying to cut down on singleton classes in the app in favor of a single root factory singleton whose sole purpose is to manage other manager classes. A top-level manager, if you will. This makes it easy for us to replace implementations in tests without opting for a full DI solution, since all Activities and Services share the same reference to that root factory.
Here's what it looks like:
public class RootFactory {

    private static volatile RootFactory instance;

    @SuppressWarnings("unused")
    private Context context; // I'd like to keep this for now

    private volatile LanguageSupport languageSupport;
    private volatile Preferences preferences;
    private volatile LoginManager loginManager;
    private volatile TaskManager taskManager;
    private volatile PositionProvider positionManager;
    private volatile SimpleDataStorage simpleDataStorage;

    public static RootFactory initialize(Context context) {
        instance = new RootFactory(context);
        return instance;
    }

    private RootFactory(Context context) {
        this.context = context;
    }

    public static RootFactory getInstance() {
        return instance;
    }

    public LanguageSupport getLanguageSupport() {
        return languageSupport;
    }

    public void setLanguageSupport(LanguageSupport languageSupport) {
        this.languageSupport = languageSupport;
    }

    // ...
}
initialize is called once, in Application.onCreate, i.e. before any Activity or Service is started. Now, here is the problem: the getInstance method sometimes comes back as null -- even when invoked on the same thread! That doesn't sound like a visibility problem; instead, the static singleton reference held at class level seems to actually have been cleared by the garbage collector. Maybe I'm jumping to conclusions here, but could this be because the Android garbage collector or class-loading mechanism can actually unload classes when memory gets scarce, in which case the only reference to the singleton instance would go away? I'm not really deep into Java's memory model, but I suppose that shouldn't happen, otherwise this common way of implementing singletons wouldn't work on any JVM, right?
Any idea why this is happening exactly?
PS: one can work around this by keeping "global" references on the single Application instance instead. That has proven to be reliable when one must keep an object around across the entire lifetime of an app.
UPDATE
Apparently my use of volatile here caused some confusion. My intention was to ensure that the static reference's current state is always visible to all threads accessing it. I have to do that because I am both writing and reading that reference from more than one thread: an ordinary app run uses just the main application thread, but in an instrumentation test run, where objects get replaced with mocks, I write it from the instrumentation thread and read it on the UI thread. I could just as well have synchronized the call to getInstance, but that's more expensive since it requires claiming an object lock. See What is an efficient way to implement a singleton pattern in Java? for a more detailed discussion.
Both you (@Matthias) and Mark Murphy (@CommonsWare) are correct in what you say, but the gist seems lost. (The use of volatile is correct, and classes are not unloaded.)
The crux of the question is where initialize is called from.
Here is what I think is happening:
You are calling initialize from an Activity *
Android needs more memory, kills the whole Process
Android restarts the Application and the top Activity
You call getInstance which will return null, as initialize was not called
Correct me if I'm wrong.
Update:
My assumption – that initialize is called from an Activity * – seems to have been wrong in this case. However, I'll leave this answer up because that scenario is a common source of bugs.
I have never in my life seen a static data member declared volatile. I'm not even sure what that means.
Static data members will exist until the process is terminated or until you get rid of them (e.g., null out the static reference). The process may be terminated once all activities and services are proactively closed by the user (e.g., BACK button) and your code (e.g., stopService()). The process may be terminated even with live components if Android is desperately short on RAM, but this is rather unusual. The process may be terminated with a live service if Android thinks that your service has been in the background too long, though it may restart that service depending on your return value from onStartCommand().
Classes are not unloaded, period, short of the process being terminated.
To address the other of @sergui's points: activities may be destroyed, with instance state stored (albeit in RAM, not "fixed storage"), to free up RAM. Android will tend to do this before terminating active processes, though if it destroys the last activity for a process and there are no running services, that process will be a prime candidate for termination.
The only thing significantly strange about your implementation is your use of volatile.
Static references are cleared whenever the system feels like it and your application is not top-level (i.e. the user is not running it explicitly). Whenever your app is minimized and the OS wants more memory, it will either kill your app or serialize it to fixed storage for later use, but in both cases static variables are erased.
Also, whenever your app gets a Force Close error, all statics are erased as well. In my experience it's always better to use variables in the Application object than static variables.
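For illustration, a minimal sketch of the Application-object approach this answer (and the asker's PS) describe; MyApplication is a hypothetical subclass that would also need to be registered via android:name in the manifest, and RootFactory is the asker's class:

import android.app.Application;

public class MyApplication extends Application {

    private RootFactory rootFactory;

    @Override
    public void onCreate() {
        super.onCreate();
        // Runs before any Activity or Service, and runs again if the process is
        // recreated, so the reference is always initialized when components need it.
        rootFactory = RootFactory.initialize(this);
    }

    public RootFactory getRootFactory() {
        return rootFactory;
    }
}

An Activity can then call ((MyApplication) getApplication()).getRootFactory() instead of relying on the static getInstance().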
I've seen similar strange behaviour with my own code involving disappearing static variables (I don't think this problem has anything to do with the volatile keyword). In particular, this has come up when I've initialized a logging framework (e.g. Crashlytics, log4j) and then, after some period of activity, it appears to be uninitialized. Investigation has shown this happens after the OS calls onSaveInstanceState(Bundle b).
Your static variables are held by the ClassLoader, which is contained within your app's process. According to Google:
An unusual and fundamental feature of Android is that an application process's lifetime is not directly controlled by the application itself. Instead, it is determined by the system through a combination of the parts of the application that the system knows are running, how important these things are to the user, and how much overall memory is available in the system.
http://developer.android.com/guide/topics/processes/process-lifecycle.html
What that means for a developer is that you cannot expect static variables to remain initialized indefinitely. You need to rely on a different mechanism for persistence.
One workaround I've used to keep my logging framework initialized is for all my Activities to extend a base class where I override onCreate and check for initialization and re-initialize if necessary.
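A sketch of that base-class workaround; AppInitializer and initLogging() are hypothetical stand-ins for whatever framework initialization the app needs:

import android.app.Activity;
import android.os.Bundle;

public abstract class BaseActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Statics may have been cleared if the process was killed and recreated,
        // so check and re-initialize on every Activity creation.
        if (!AppInitializer.isInitialized()) {
            AppInitializer.initLogging(getApplicationContext()); // hypothetical helper
        }
    }
}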
I think the official solution is to use the onSaveInstanceState(Bundle b) callback to persist anything that your Activity needs later, and then re-initialize in onCreate(Bundle b) when b != null.
Google explains it best:
http://developer.android.com/training/basics/activity-lifecycle/recreating.html
