I'm looking at legacy code and found the following:
private static final SimpleDateFormat sdf = new SimpleDateFormat("...");
...
void foo() {
bar(date, someMoreArgs, sdf.clone());
}
where bar() then goes ahead and uses the passed SimpleDateFormat to format the given date.
Is the above code thread-safe? If multiple threads concurrently call sdf.clone(), can one of the cloned objects end up getting corrupted?
I wouldn't write the code like that myself in the first place. I know there are better ways to do this. But I'm not looking to refactor the code unless it can be proven to be not thread-safe.
Edit:
Some more information for clarification:
The static object sdf itself is never used for formatting. The only operation it's ever used for is cloning. Thus, I'm not expecting its contents to change (unless the cloning operation writes some transient data inside the object).
The clone is never used by more than one thread.
From the JavaDoc:
Date formats are not synchronized. It is recommended to create separate format instances for each thread. If multiple threads access a format concurrently, it must be synchronized externally.
So I believe that it depends on how you use that clone; thread safety is not assured. Cloning does not make your classes thread-safe. If the cloned object is not shared between threads it should work with no problems, but I would not recommend this approach. However, if you need a thread-safe date formatter, I would suggest using Apache Commons FastDateFormat, described here.
Basically the clone() method doesn't give you thread safety. It just copies the properties of one object to another one. It doesn't lock or synchronize that object, so whether the result is thread-safe is up to the implementation. If some of the original object's properties are changed during that copy, you might end up in a strange state. And if you use the cloned object in more than one thread, you still have problems.
For your particular example I think the code is fine. The sdf object you are going to clone is probably never going to change, so you don't need a lock or anything (it seems). You just create a new SimpleDateFormat object for each thread to ensure thread safety - or at least that's the idea - and you achieve that by using clone().
Anyway, if you have spotted a problem in legacy code and you don't like it, it is usually better to spend some time refactoring it than to keep it as is. It almost always pays off in the long term with better, more maintainable code, and it avoids leaving the puzzle for the next developer to wonder about. For example, if you have upgraded to Java 8 you can use DateTimeFormatter, which is thread-safe, or you can use an external library. Or at least create a new SimpleDateFormat(SOME_CONSTANT_FORMAT) every time you need one instead of relying on clone(): if all you share is the format string constant, it is immutable and thread-safe.
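For illustration, here is a minimal sketch of both options; FORMAT_PATTERN is a hypothetical constant standing in for whatever pattern string the legacy code actually uses.
import java.text.SimpleDateFormat;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.util.Date;

class DateFormatting {
    // Hypothetical pattern constant; substitute the real format string.
    private static final String FORMAT_PATTERN = "yyyy-MM-dd";

    // Java 8+: DateTimeFormatter is immutable and thread-safe, so one shared
    // instance can be used from any number of threads.
    private static final DateTimeFormatter FORMATTER =
            DateTimeFormatter.ofPattern(FORMAT_PATTERN);

    static String formatModern(LocalDate date) {
        return FORMATTER.format(date);
    }

    // Pre-Java 8: create a fresh SimpleDateFormat per call (or per thread);
    // only the immutable pattern string is shared between threads.
    static String formatLegacy(Date date) {
        return new SimpleDateFormat(FORMAT_PATTERN).format(date);
    }
}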
This is not an answer, but here is some information you may still find interesting. I did a couple of experiments.
First I had two threads formatting dates using the same SimpleDateFormat instance. After a couple of iterations they began giving incorrect results, and after a few hundred iterations one of the threads crashed. So the thread-unsafety seems very real.
Next I had one thread format dates using the original SimpleDateFormat and the other one taking clones of it and using the clones for formatting. Both threads have run for several minutes now and are still both producing correct results.
This is by no means any guarantee that this is the behaviour you will always see. The documentation is pretty clear: SimpleDateFormat is not thread safe and all access from several threads must be synchronized. So use the information at your own risk.
EDIT: Inspecting the source code seems to reveal that the clone operation copies fields in some order, but doesn’t modify the original. If the original was doing any work in another thread, this might cause the clone to be in an inconsistent state after creation, which in turn might or might not affect its correct working. If the original is only used for cloning, I see no risk with the current implementation. As you say, the implementation may be changed in later Java versions, but I would consider the risk small, and the risk of thread-unsafe behaviour being introduced even smaller. All of this is pure speculation!
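For reference, a rough sketch of the kind of experiment described above; the pattern, the fixed instant and the iteration count are arbitrary choices, not the exact test that was run.
import java.text.SimpleDateFormat;
import java.util.Date;

public class SdfRaceDemo {
    private static final SimpleDateFormat SHARED = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

    public static void main(String[] args) {
        Runnable task = () -> {
            Date date = new Date(0); // fixed instant, so the expected output is known
            String expected = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(date);
            for (int i = 0; i < 1_000_000; i++) {
                // Sharing SHARED directly tends to produce garbled output or exceptions.
                // Using ((SimpleDateFormat) SHARED.clone()) here instead gives each
                // iteration its own instance and the problem goes away.
                String actual = SHARED.format(date);
                if (!expected.equals(actual)) {
                    System.out.println("Corrupted result: " + actual);
                }
            }
        };
        new Thread(task).start();
        new Thread(task).start();
    }
}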
Related
If I have an unsynchronized java collection in a multithreaded environment, and I don't want to force readers of the collection to synchronize[1], is a solution where I synchronize the writers and use the atomicity of reference assignment feasible? Something like:
private Collection global = new HashSet(); // start threading after this
void allUpdatesGoThroughHere(Object exampleOperand) {
    // My hypothesis is that this prevents operations in the block being re-ordered
    synchronized(global) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        // Given my hypothesis, we should have a fully constructed object here. So a
        // reader will either get the old or the new Collection, but never an
        // inconsistent one.
        global = copy;
    }
}
// Do multithreaded reads here. All reads are done through a reference copy like:
// Collection copy = global;
// for (Object elm: copy) {...
// so the global reference being updated half way through should have no impact
Rolling your own solution often seems to fail in these types of situations, so I'd be interested in knowing about other patterns, collections or libraries I could use to avoid object creation and blocking for my data consumers.
[1] The reasons being a large proportion of time spent in reads compared to writes, combined with the risk of introducing deadlocks.
Edit: A lot of good information in several of the answers and comments, some important points:
A bug was present in the code I posted. Synchronizing on global (a badly named variable) can fail to protect the synchronized block after a swap.
You could fix this by synchronizing on the class (moving the synchronized keyword to the method), but there may be other bugs. A safer and more maintainable solution is to use something from java.util.concurrent.
There is no "eventual consistency guarantee" in the code I posted; one way to make sure that readers do get to see the updates by writers is to use the volatile keyword.
On reflection the general problem that motivated this question was trying to implement lock free reads with locked writes in java, however my (solved) problem was with a collection, which may be unnecessarily confusing for future readers. So in case it is not obvious the code I posted works by allowing one writer at a time to perform edits to "some object" that is being read unprotected by multiple reader threads. Commits of the edit are done through an atomic operation so readers can only get the pre-edit or post-edit "object". When/if the reader thread gets the update, it cannot occur in the middle of a read as the read is occurring on the old copy of the "object". A simple solution that had probably been discovered and proved to be broken in some way prior to the availability of better concurrency support in java.
Rather than trying to roll out your own solution, why not use a ConcurrentHashMap as your set and just set all the values to some standard value? (A constant like Boolean.TRUE would work well.)
I think this implementation works well with the many-readers-few-writers scenario. There's even a constructor that lets you set the expected "concurrency level".
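A minimal sketch of that idea (the class and method names are just for illustration):
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ConcurrentSetSketch {
    // Keys act as the set members; the value is just a placeholder.
    // The third constructor argument is the expected concurrency level.
    private final ConcurrentMap<Object, Boolean> set =
            new ConcurrentHashMap<Object, Boolean>(16, 0.75f, 4);

    void add(Object element)         { set.put(element, Boolean.TRUE); }
    void remove(Object element)      { set.remove(element); }
    boolean contains(Object element) { return set.containsKey(element); }

    // Readers may iterate set.keySet() at any time without extra locking;
    // the iterator is weakly consistent rather than fail-fast.
}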
Update: Veer has suggested using the Collections.newSetFromMap utility method to turn the ConcurrentHashMap into a Set. Since the method takes a Map<E,Boolean> my guess is that it does the same thing with setting all the values to Boolean.TRUE behind-the-scenes.
Update: Addressing the poster's example
That is probably what I will end up going with, but I am still curious about how my minimalist solution could fail. – MilesHampson
Your minimalist solution would work just fine with a bit of tweaking. My worry is that, although it's minimal now, it might get more complicated in the future. It's hard to remember all of the conditions you assume when making something thread-safe—especially if you're coming back to the code weeks/months/years later to make a seemingly insignificant tweak. If the ConcurrentHashMap does everything you need with sufficient performance then why not use that instead? All the nasty concurrency details are encapsulated away and even 6-months-from-now you will have a hard time messing it up!
You do need at least one tweak before your current solution will work. As has already been pointed out, you should probably add the volatile modifier to global's declaration. I don't know if you have a C/C++ background, but I was very surprised when I learned that the semantics of volatile in Java are actually much more complicated than in C. If you're planning on doing a lot of concurrent programming in Java then it'd be a good idea to familiarize yourself with the basics of the Java memory model. If you don't make the reference to global a volatile reference then it's possible that no thread will ever see any changes to the value of global until they try to update it, at which point entering the synchronized block will flush the local cache and get the updated reference value.
However, even with the addition of volatile there's still a huge problem. Here's a problem scenario with two threads:
We begin with the empty set, or global={}. Threads A and B both have this value in their thread-local cached memory.
Thread A obtains the synchronized lock on global and starts the update by making a copy of global and adding the new key to the set.
While Thread A is still inside the synchronized block, Thread B reads its local value of global onto the stack and tries to enter the synchronized block. Since Thread A is currently inside the monitor Thread B blocks.
Thread A completes the update by setting the reference and exiting the monitor, resulting in global={1}.
Thread B is now able to enter the monitor and makes a copy of the global={1} set.
Thread A decides to make another update, reads in its local global reference and tries to enter the synchronized block. Since Thread B currently holds the lock on {} there is no lock on {1} and Thread A successfully enters the monitor!
Thread A also makes a copy of {1} for purposes of updating.
Now Threads A and B are both inside the synchronized block and they have identical copies of the global={1} set. This means that one of their updates will be lost! This situation is caused by the fact that you're synchronizing on an object stored in a reference that you're updating inside your synchronized block. You should always be very careful which objects you use to synchronize. You can fix this problem by adding a new variable to act as the lock:
private volatile Collection global = new HashSet(); // start threading after this
private final Object globalLock = new Object(); // final reference used for synchronization
void allUpdatesGoThroughHere(Object exampleOperand) {
    // My hypothesis is that this prevents operations in the block being re-ordered
    synchronized(globalLock) {
        Collection copy = new HashSet(global);
        copy.remove(exampleOperand);
        // Given my hypothesis, we should have a fully constructed object here. So a
        // reader will either get the old or the new Collection, but never an
        // inconsistent one.
        global = copy;
    }
}
This bug was insidious enough that none of the other answers have addressed it yet. It's these kinds of crazy concurrency details that cause me to recommend using something from the already-debugged java.util.concurrent library rather than trying to put something together yourself. I think the above solution would work—but how easy would it be to screw it up again? This would be so much easier:
private final Set<Object> global = Collections.newSetFromMap(new ConcurrentHashMap<Object,Boolean>());
Since the reference is final you don't need to worry about threads using stale references, and since the ConcurrentHashMap handles all the nasty memory model issues internally you don't have to worry about all the nasty details of monitors and memory barriers!
According to the relevant Java Tutorial,
We have already seen that an increment expression, such as c++, does not describe an atomic action. Even very simple expressions can define complex actions that can decompose into other actions. However, there are actions you can specify that are atomic:
Reads and writes are atomic for reference variables and for most primitive variables (all types except long and double).
Reads and writes are atomic for all variables declared volatile (including long and double variables).
This is reaffirmed by Section §17.7 of the Java Language Specification
Writes to and reads of references are always atomic, regardless of whether they are implemented as 32-bit or 64-bit values.
It appears that you can indeed rely on reference access being atomic; however, recognize that this does not ensure that all readers will read an updated value for global after this write -- i.e. there is no memory ordering guarantee here.
If you use an implicit lock via synchronized on all access to global, then you can force some memory consistency here... but it might be better to use an alternative approach.
You also appear to want the collection in global to remain immutable... luckily, there is Collections.unmodifiableSet which you can use to enforce this. As an example, you should likely do something like the following...
private volatile Collection global = Collections.unmodifiableSet(new HashSet());
... that, or using AtomicReference,
private AtomicReference<Collection> global = new AtomicReference<>(Collections.unmodifiableSet(new HashSet()));
You would then use Collections.unmodifiableSet for your modified copies as well.
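As a rough sketch of how the writer could look with the AtomicReference variant (generics added, names made up, and a compare-and-set retry loop used so that concurrent writers cannot lose each other's updates):
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import java.util.concurrent.atomic.AtomicReference;

class CopyOnWriteSetHolder {
    private final AtomicReference<Set<Object>> global =
            new AtomicReference<Set<Object>>(Collections.unmodifiableSet(new HashSet<Object>()));

    // Writers: copy, mutate the copy, publish an unmodifiable view atomically.
    void remove(Object exampleOperand) {
        while (true) {
            Set<Object> current = global.get();
            Set<Object> copy = new HashSet<Object>(current);
            copy.remove(exampleOperand);
            if (global.compareAndSet(current, Collections.unmodifiableSet(copy))) {
                return; // nobody published in between; our copy is now live
            }
            // another writer won the race; retry against the new value
        }
    }

    // Readers: a single atomic read, then iterate the snapshot freely.
    Set<Object> snapshot() {
        return global.get();
    }
}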
// ... All reads are done through a reference copy like:
// Collection copy = global;
// for (Object elm: copy) {...
// so the global reference being updated half way through should have no impact
You should know that making a copy here is redundant, as internally for (Object elm : global) creates an Iterator as follows...
final Iterator it = global.iterator();
while (it.hasNext()) {
    Object elm = it.next();
}
There is therefore no chance of switching to an entirely different value for global in the midst of reading.
All that aside, I agree with the sentiment expressed by DaoWen... is there any reason you're rolling your own data structure here when there may be an alternative available in java.util.concurrent? I figured maybe you're dealing with an older Java, since you use raw types, but it won't hurt to ask.
You can find copy-on-write collection semantics provided by CopyOnWriteArrayList, or its cousin CopyOnWriteArraySet (which implements a Set using the former).
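A minimal sketch of the copy-on-write variant, reusing the shape of the code in the question (class and method names are made up):
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;

class CopyOnWriteExample {
    // Every mutation copies the backing array; reads and iteration are lock-free.
    private final Set<Object> global = new CopyOnWriteArraySet<Object>();

    void update(Object exampleOperand) {
        global.remove(exampleOperand); // writers pay the copy cost
    }

    void readAll() {
        for (Object elm : global) {    // iterates a snapshot of the set
            // ... no ConcurrentModificationException can occur here
        }
    }
}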
Also suggested by DaoWen, have you considered using a ConcurrentHashMap? They guarantee that using a for loop as you've done in your example will be consistent.
Similarly, Iterators and Enumerations return elements reflecting the state of the hash table at some point at or since the creation of the iterator/enumeration.
Internally, an Iterator is used for enhanced for over an Iterable.
You can craft a Set from this by utilizing Collections.newSetFromMap like follows:
final Set<E> safeSet = Collections.newSetFromMap(new ConcurrentHashMap<E, Boolean>());
...
/* guaranteed to reflect the state of the set at read-time */
for (final E elem : safeSet) {
...
}
I think your original idea was sound, and DaoWen did a good job getting the bugs out. Unless you can find something that does everything for you, it's better to understand these things than hope some magical class will do it for you. Magical classes can make your life easier and reduce the number of mistakes, but you do want to understand what they are doing.
ConcurrentSkipListSet might do a better job for you here. It could get rid of all your multithreading problems.
However, it is slower than a HashSet (usually -- HashSets and skip lists/trees are hard to compare). If you are doing a lot of reads for every write, what you've got will be faster. More importantly, if you update more than one entry at a time, your reads could see inconsistent results. If you expect that whenever there is an entry A there is an entry B, and vice versa, the skip list could give you one without the other.
With your current solution, to the readers, the contents of the map are always internally consistent. A read can be sure there's an A for every B. It can be sure that the size() method gives the precise number of elements that will be returned by the iterator. Two iterations will return the same elements in the same order.
In other words, allUpdatesGoThroughHere and ConcurrentSkipListSet are two good solutions to two different problems.
Can you use the Collections.synchronizedSet method? From HashSet Javadoc http://docs.oracle.com/javase/6/docs/api/java/util/HashSet.html
Set s = Collections.synchronizedSet(new HashSet(...));
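One caveat from that same Javadoc: iteration over the synchronized wrapper still has to be guarded manually, so readers are not lock-free. Roughly (doSomethingWith is just a placeholder):
synchronized (s) {                 // hold the wrapper's lock for the whole iteration
    Iterator i = s.iterator();
    while (i.hasNext()) {
        doSomethingWith(i.next()); // placeholder for the read logic
    }
}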
Replace the synchronized block by making global volatile and you'll be all right as far as the copy-on-write goes.
Although the assignment is atomic, in other threads it is not ordered with the writes to the object referenced. There needs to be a happens-before relationship which you get with a volatile or synchronising both reads and writes.
The problem of multiple updates happening at once is separate - use a single thread or whatever you want to do there.
If you used synchronized for both reads and writes then it'd be correct, but the performance may not be great, since readers would have to hand the lock off to each other. A ReadWriteLock may be appropriate, but you'd still have writes blocking reads.
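A rough sketch of the ReadWriteLock variant (generics added, names made up):
import java.util.Collection;
import java.util.HashSet;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadWriteLockedSet {
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private Collection<Object> global = new HashSet<Object>();

    void update(Object exampleOperand) {
        lock.writeLock().lock();      // writers block readers and each other
        try {
            global.remove(exampleOperand);
        } finally {
            lock.writeLock().unlock();
        }
    }

    void readAll() {
        lock.readLock().lock();       // readers only block while a write is in progress
        try {
            for (Object elm : global) {
                // ...
            }
        } finally {
            lock.readLock().unlock();
        }
    }
}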
Another approach to the publication issue is to use final field semantics to create an object that is (in theory) safe to be published unsafely.
Of course, there are also concurrent collections available.
I have a long-lived server application that is designed to run with minimal downtime (e.g. 24/7 operation stopping only for maintenance). The application has to be able to handle thousands of requests a second, so performance is a concern.
To service each request, part of the application needs to know what the current date is (although not the time), and it must be stored in a java.util.Date object because of a 3rd party API.
However, Date objects are expensive to construct, so creating a new one for each request doesn't sound sensible.
Sharing a Date object between requests and updating it once a day would mean only a single object would need to be created (per server worker thread) at startup, but then how can you update it in a safe manner?
For example, using a ScheduledExecutorService that runs just after midnight could increment the Date, but introduces Thread synchronisation into the mix: the Date object is now shared between the main thread and the thread that the ScheduledExecutorService spawns to run the update task.
Synchronising the 2 threads introduces another performance headache, due to the likelihood of contention on the shared resource between the thousands of requests being serviced (the single execution of the update thread per day is less of a concern because it only happens once per day, unlike the millions of requests we will service daily).
So, my question is: what is the most efficient way to ensure the application always knows what the current date is, even when running continuously for weeks on end?
Don't bother with this optimization. It will have no measurable effect. A waste of time.
I assume the expensive constructor you're talking about is new Date(), which calls System.currentTimeMillis(). The easy way out would be to use new Date(long), using the value stored in a volatile field. An external thread can then update this field at an appropriate time, and the other threads will create their Date objects from this updated value.
Edit: while the current question may seem like a premature optimization, System.currentTimeMillis() can sometimes be the bottleneck. Check this link if you are in that situation: http://dow.ngra.de/2008/10/27/when-systemcurrenttimemillis-is-too-slow/
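A minimal sketch of that idea; the one-second refresh interval and the class name are arbitrary choices for illustration.
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class CachedClock {
    // Updated by a single background thread; volatile makes each new value
    // visible to the request threads.
    private static volatile long cachedMillis = System.currentTimeMillis();

    static {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> cachedMillis = System.currentTimeMillis(),
                1, 1, TimeUnit.SECONDS);
    }

    // Request threads build their own Date from the cached value, avoiding
    // a System.currentTimeMillis() call per request.
    static Date now() {
        return new Date(cachedMillis);
    }
}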
@chrisbunney, a couple of the answers on this thread have suggested using special-purpose concurrency classes, but if you really did need to cache a date as you originally asked, there's only one thing you need: the volatile keyword.
AtomicReference is good if you need an atomic check-and-swap operation, but in this case, you're not checking anything. You're just setting a reference to a new value, and you want that value to be visible to all threads. That's what volatile does.
If you were modifying the internal state of an existing object, then you might need locks. Or if you were going to read the value of the existing Date, do some calculations based on that, and generate a new Date object as a result, again some type of locking (or something like AtomicReference) would be necessary. But you're not doing that; again, you're only replacing a reference, with no regard to its previous value.
In general, if you have only one thread which ever replaces a value, and other threads only read the value, volatile is enough. No other concurrency control is needed. Or if you have multiple threads which can replace a value, but they only replace, not modify in-place, and they replace it with no regard for its previous value, then again, volatile is enough. Nothing else is needed.
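Applied to the original question, a sketch of that pattern might look like the following; the once-a-minute re-check is a simplification of "run just after midnight", and the class name is made up.
import java.util.Calendar;
import java.util.Date;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

class CurrentDateHolder {
    // Replaced once a day by the scheduler thread; never mutated in place.
    private static volatile Date currentDate = midnightToday();

    static {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Simplification: re-evaluate every minute instead of computing the exact
        // delay until midnight; the reference assignment is the only shared write.
        scheduler.scheduleAtFixedRate(
                () -> currentDate = midnightToday(),
                1, 1, TimeUnit.MINUTES);
    }

    static Date today() {
        return currentDate; // plain volatile read, no locking
    }

    private static Date midnightToday() {
        Calendar cal = Calendar.getInstance();
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        return cal.getTime();
    }
}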
AtomicReference (http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/atomic/AtomicReference.html) may avoid the overhead of synchronization. If there is a single thread executing the requests, you can run a task on that same thread to update the date without any synchronization overhead. If there are multiple threads, the same idea can be applied by making the date field thread-local and updating it separately in each thread.
I would highly recommend you to use the caching feature from Google's Guava library. The caching library has good concurrency support, and it provides several alternative approaches to your problem.
One approach, if having a stale date is acceptable for your use case (e.g. for the first 100 seconds of a new day, it is all right if your app still thinks it's the previous day), is to use
import java.util.Date;
import java.util.concurrent.TimeUnit;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

LoadingCache<Object, Date> tDateCache = CacheBuilder.newBuilder()
        .maximumSize(1)
        .expireAfterWrite(100, TimeUnit.SECONDS)
        .build(new CacheLoader<Object, Date>() {
            @Override
            public Date load(Object object) {
                return new Date();
            }
        });
Access the cached date with something like tDateCache.getUnchecked("today") -- any constant key will do, since the loader ignores it, and getUnchecked avoids the checked ExecutionException that get() throws. The expireAfterWrite(100, TimeUnit.SECONDS) setting causes the cached entry to expire 100 seconds after it was created, so the next access after that rebuilds the date.
You could also have a monitor thread automatically invoke Cache.invalidateAll() every time it detects the day rolling over, which will cause the cache loader's load method to be re-invoked and a new date to be created. No need to worry about concurrency; the library handles that for you.
I have an instance of an object which performs a very complex operation.
So the first time, I create an instance and save it in my own custom cache.
From then on, any thread that needs the object first checks the cache; if a ready-made object is already present, it takes it from the cache, for performance reasons.
I was worried about what happens if two threads get the same instance. Is there a chance that the two threads can corrupt each other?
Map<String, SoftReference<CacheEntry<ClassA>>> AInstances= Collections.synchronizedMap(new HashMap<String, SoftReference<CacheEntry<ClassA>>>());
There are many possible solutions:
Use an existing caching solution like EHcache
Use the Spring framework, which provides an easy way to cache the results of a method with a simple @Cacheable annotation
Use one of the concurrent maps, like ConcurrentHashMap
If you know all keys in advance, you can use a lazy init code. Note that everything in this code is there for a reason; change anything in get() and it will break eventually (eventually == "your unit tests will work and it will break after running one year in production without any problem whatsoever").
ConcurrentHashMap is the most simple to set up, but it has no simple way to say "initialize the value of a key only once".
Don't try to implement the caching by yourself; multithreading in Java has become a very complex area with Java 5 and the advent of multi-core CPUs and memory barriers.
[EDIT] yes, this might happen even though the map is synchronized. Example:
SoftReference<...> value = cache.get( key );
if( value == null ) {
    value = computeNewValue( key );
    cache.put( key, value );
}
If two threads run this code at the same time, computeNewValue() will be called twice. The method calls get() and put() are safe - several threads can try to put at the same time and nothing bad will happen, but that doesn't protect you from problems which arise when you call several methods in succession and the state of the map must not change between them.
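A hedged sketch of how ConcurrentHashMap can close that gap; the SoftReference wrapper is dropped for clarity, and the class and method names are made up. computeIfAbsent requires Java 8, while the putIfAbsent variant works on older versions.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ValueCache {
    private final ConcurrentMap<String, Object> cache = new ConcurrentHashMap<String, Object>();

    // Java 8+: computeNewValue runs at most once per key, even when several
    // threads ask for the same missing key at the same time.
    Object get(String key) {
        return cache.computeIfAbsent(key, this::computeNewValue);
    }

    // Pre-Java 8 alternative: compute optimistically, then let putIfAbsent decide
    // which result wins. computeNewValue may run more than once, but all callers
    // end up seeing the same single published value.
    Object getPreJava8(String key) {
        Object value = cache.get(key);
        if (value == null) {
            Object candidate = computeNewValue(key);
            Object previous = cache.putIfAbsent(key, candidate);
            value = (previous != null) ? previous : candidate;
        }
        return value;
    }

    private Object computeNewValue(String key) {
        return new Object(); // stand-in for the expensive computation
    }
}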
Assuming you are talking about singletons, simply use the "initialization-on-demand holder idiom" to make sure your "check" works on all JVMs. This will also make sure that all threads requesting the same object concurrently wait until the initialization is over and are given back only a valid object instance.
Here I'm assuming you want a single instance of the object. If not, you might want to post some more code.
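A minimal sketch of the idiom (ExpensiveSingleton is a made-up name standing in for the complex object):
class ExpensiveSingleton {
    private ExpensiveSingleton() {
        // very complex, expensive construction
    }

    // The holder class is not loaded (and the instance not created) until
    // getInstance() is first called; class initialization is guaranteed by the
    // JVM to happen exactly once and to be safely published to all threads.
    private static class Holder {
        static final ExpensiveSingleton INSTANCE = new ExpensiveSingleton();
    }

    static ExpensiveSingleton getInstance() {
        return Holder.INSTANCE;
    }
}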
OK, if I understand your problem correctly, you are worried that two threads changing the state of the shared object will corrupt each other.
The short answer is yes, they will.
If the object is expensive to create but is only needed in a read-only manner, I suggest you make it immutable; this way you get the benefit of fast access and, at the same time, thread safety.
If the state should be writable but you don't actually need threads to see each other's updates, you can simply load the object once into an immutable cache and just return copies to anyone who asks for the object.
Finally, if your object needs to be writable and shared (for reasons other than it just being expensive to create), then, my friend, you need to handle thread safety. I don't know your case, but you should take a look at the synchronized keyword, locks, the Java 5 concurrency features, and atomic types. I am sure one of them will satisfy your need, and I sincerely hope that your case is one of the first two :)
If you only have a single instance of the Object, have a quick look at:
Thread-safe cache of one object in java
Otherwise, I can't recommend the Google Guava library enough; in particular, look at the MapMaker class.
Is it true that if I only use immutable data types, my Java program will be thread-safe?
Any other factors will affect the thread safety?
Would appreciate it if you can provide an example. Thanks!
Thread safety is about protecting shared data, and immutable objects are protected because they are read-only. Well, apart from when you create them, but creating an object is thread-safe.
It's worth saying that designing a large application that ONLY uses immutable objects to achieve thread safety would be difficult.
It's a complicated subject, and I would recommend reading Java Concurrency in Practice,
which is a very good place to start.
It is true. The problem is that it's a pretty serious limitation to place on your application to only use immutable data types. You can't have any persistent objects with state which exist across threads.
I don't understand why you'd want to do it, but that doesn't make it any less true.
Details and example: http://www.javapractices.com/topic/TopicAction.do?Id=29
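As a small example of what such an immutable type looks like (a made-up class, not taken from that link):
// All fields are final and set once in the constructor; there are no setters,
// so instances can be freely shared between threads without synchronization.
public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Modification" returns a new instance instead of changing this one.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}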
If every single variable is immutable (never changed once assigned) you would indeed have a trivially thread-safe program.
Functional programming environments take advantage of this.
However, it is pretty difficult to do pure functional programming in a language not designed for it from the ground up.
A trivial example of something you can't do in a pure functional program is use a loop, as you can't increment a counter. You have to use recursive functions instead to achieve the same effect.
If you are just straying into the world of thread safety and concurrency, I'd heartily recommend the book Java Concurrency in Practice, by Goetz. It is written for Java, but actually the issues it talks about are relevant in other languages too, even if the solutions to those issues may be different.
Immutability allows for safety against certain things that can go wrong with multi-threaded cases. Specifically, it means that the properties of an object visible to one thread cannot be changed by another thread while that first thread is using it (since nothing can change it, then clearly another thread can't).
Of course, this only works as far as that object goes. If a mutable reference to the object is also shared, then some cross-thread bugs can still happen when something puts a new object there (though not all of them: it may not matter if a thread works on an object that has already been replaced, but then again that may be crucial).
In all, immutability should be considered one of the ways that you can ensure thread-safety, but neither the sole way nor necessarily sufficient in itself.
Although immutable objects are a help with thread safety, you may find "local variables" and "synchronize" more practical for real-world programming.
Any program where no mutable aspect of program state is accessed by more than one thread will be trivally thread-safe, as each thread may as well be its own separate program. Useful multi-threading, however, generally requires interaction between threads, which implies the existence of some mutable shared state.
The key to safe and efficient multi-threading is to incorporate mutability at the right "design level". Ideally, each aspect of program state should be representable by one immutably-rooted(*), mutable reference to an object whose observable state is immutable. Only one thread at a time may try to change the state represented by a particular mutable reference. Efficient multi-threading requires that the "mutable layer" in a program's state be low enough that different threads can use different parts of it. For example, if one has an immutable AllCustomers data structure and two threads simultaneously attempted to change different customers, each would generate a version of the AllCustomers data structure which included its own changes, but not that of the other thread. No good. If AllCustomers were a mutable array of CustomerState objects, however, it would be possible for one thread to be working on AllCustomers[4] while another was working on AllCustomers[9], without interference.
(*) The rooted path must exist when the aspect of state becomes relevant, and must not change while the access is relevant. For example, one could design an AddOnlyList<thing> which holds a thing[][] called Arr that was initialized to size 32. When the first thing is added, Arr[0] would be initialized, using CompareExchange, to an array of 16 things. The next 15 things would go in that array. When the 17th thing is added, Arr[1] would be initialized using CompareExchange to an array of size 32 (which would hold the new item and the 31 items after it). When the 49th thing is added, Arr[2] would be initialized for 64 items. Note that while Arr itself and the arrays contained therein would not be totally immutable, only the very first access to any element would be a write, and once Arr[x][y] holds a reference to something, it would continue to do so as long as Arr exists.
I just have two questions about methods used in many controllers/servlets in my app:
1- What is the difference between calling a static method in a util class and calling a non-static method (e.g. methods dealing with dates: getting the current time, converting between time zones)? Which is better?
2- What is the difference between calling a method (containing a lot of logic, like sending emails) directly in the controller and running that method in a different thread?
1)
Utils classes generally don't have any state associated with them. They just have behavior. Hence there really isn't much point in creating "instances" of them.
Even though the compiler won't ever complain, instantiating a utils class would be misleading coding.
Being stateless, utils classes are completely thread-safe. Whether a method is static or not, its parameters and local variables live on each calling thread's own stack frame, so concurrent calls don't interfere with each other. The JDK's utility classes are good examples of this.
2)
If your method is a time-consuming one, it makes sense to make its call asynchronous.
There are advantages and disadvantages to using static methods:
Advantages:
You don't have to instantiate an object to use them.
Static variables defined in the class stay the same between calls.
Disadvantages:
From a static method you can only access static variables and other static methods, unless you create an instance of an object to call them on.
Not inherently thread-safe... You must synchronize either the method or a section of code if you don't want other threads changing variables on you.
In my personal experience, static methods are great for things that don't require you to maintain state between calls. Like formatting dates.
Having said that, time operations are pretty easy.
Getting the current time is as easy as:
Date currentDate = new Date();
or
Calendar currentCal = Calendar.getInstance();
Calendar can also be used to roll Calendar.HOUR_OF_DAY (and Calendar.MINUTE if necessary) if you know the difference between the time zones.
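For example, the same instant can be rendered in two zones by changing the formatter's time zone; the zone IDs below are only examples:
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

Date now = new Date();
SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss z");

fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
String utc = fmt.format(now);            // the instant rendered as UTC

fmt.setTimeZone(TimeZone.getTimeZone("America/New_York"));
String newYork = fmt.format(now);        // the same instant in another zone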
1: So the static keyword only tells you about the accessibility of the method. If the method is static it can be accessed without instantiating an object. So it doesn't make sense to ask which is better: static or non-static.
2: Calling a method that has some time-consuming logic on a separate thread allows your main thread to continue working on some other things which are important. So if you have two time-consuming tasks that you need to execute for a client, then running those two tasks on separate thread can get the job done faster.
Note that all of this is said with the assumption that the programmer knows how to do proper threading... if the threading is not done correctly, then there could be a slew of problems: deadlocks, invalid object states, decreased performance, etc.
#1 Seems to have been answered well in other responses.
#2 Depends on the circumstance. If the controller has to wait for the other thread to finish the email-sending task before it can continue, then there is no speed improvement at all -- in fact there would be a speed loss due to the context switch and synchronization. If the controller can service another request, or if it can do something else in parallel with the email-sending thread, then there would be a gain.
Typically, if a controller needs to send email, it gives the job off to a worker thread and then continues on its way in parallel and handles the next request. This is faster but it means that there is no way to report back problems to the caller if the email sending failed.
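A rough sketch of that hand-off using an ExecutorService (the pool size and names are arbitrary):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class EmailController {
    // Shared pool; sized arbitrarily for this sketch.
    private static final ExecutorService EMAIL_POOL = Executors.newFixedThreadPool(4);

    void handleRequest(String recipient) {
        // ... do the work needed to answer the request ...

        // Hand the slow part off and return immediately; failures must be
        // handled inside the task (e.g. logged or retried), since the caller
        // has already moved on.
        EMAIL_POOL.submit(() -> sendEmail(recipient));
    }

    private void sendEmail(String recipient) {
        // stand-in for the real email-sending logic
    }
}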