I have a singleton object with one method, which is NOT synchronized. The singleton can be accessed by many clients at a time - what will happen if multiple clients access that object?
I want to write log entries to a single file using that method.
I guess by clients you mean threads. Assuming you have implemented the singleton correctly, all threads would be using the same instance. Since the method changes state (it writes to a file), it would in general require some sort of synchronization. It depends on the details, though - for example, if your method writes a single line with a single call to BufferedWriter.write(), you are fine, because BufferedWriter.write() synchronizes internally. However, if you write multiple lines or make multiple calls to BufferedWriter.write(), the calls from different threads may interleave.
Now, if by clients you mean different processes, synchronization within one JVM of course will not help. You can use FileLock to lock the file if the other processes are also JVMs (or otherwise respect OS file locks). Otherwise, you have to lock with something external, such as a separate temp file used as a lock - although it depends on the OS whether it provides atomic file creation.
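For the single-JVM case, a minimal sketch of what a synchronized logging method on a singleton might look like (the class name, file name, and enum-singleton style here are illustrative assumptions, not the asker's actual code):

import java.io.BufferedWriter;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public enum FileLogger {
    INSTANCE; // enum-based singleton: the JVM guarantees exactly one instance

    // synchronized so that lines written by different threads cannot interleave
    public synchronized void log(String message) {
        try (BufferedWriter out = Files.newBufferedWriter(
                Paths.get("app.log"), // hypothetical log file
                StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
            out.write(message);
            out.newLine();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}

Opening the writer on every call keeps the sketch short; a real logger would keep the writer open and flush as needed.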
Related
I'm implementing a quick file server using Java RMI. It has two remote methods: one for reading a file from the server and one for writing a file from the client.
I want to lock concurrent access to the same file. For example, if 10 users concurrently call the read() and write() remote methods on the same file, 'foo.txt', I need the first call to complete before the second one starts, the second to complete before the third starts, and so on.
If I make the two RMI methods "synchronized", I lose efficiency when different users concurrently call them on different files.
On the other hand, I can't use the FileLock class because:
"File locks are held on behalf of the entire Java virtual machine.
They are not suitable for controlling access to a file by multiple
threads within the same virtual machine."
and with RMI there is only one JVM process, which runs each remote call on its own thread.
You could use a ReadWriteLock per file Path - but only if you don't run multiple processes of your RMI server or expect those files to be changed outside that single RMI server JVM. Each read or write entry point needs to look up the ReentrantReadWriteLock for the Path it was given, for example with getReadWriteLock(Path) below:
private static final ConcurrentHashMap<Path, ReentrantReadWriteLock> LOCKS = new ConcurrentHashMap<>();

// lazily create and cache one lock per file path
private static ReentrantReadWriteLock getReadWriteLock(Path path) {
    return LOCKS.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
}
Then your own code can use the readLock() or writeLock() API calls to allow multiple readers when there are no writes taking place. Over time you'd need to do some housekeeping to purge entries which are no longer in use.
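For example, the read and write entry points might look roughly like this (a sketch only; the Files calls stand in for whatever your RMI methods actually do with the file):

public byte[] read(Path path) throws IOException {
    ReentrantReadWriteLock lock = getReadWriteLock(path);
    lock.readLock().lock();              // multiple readers may hold this at once
    try {
        return Files.readAllBytes(path);
    } finally {
        lock.readLock().unlock();
    }
}

public void write(Path path, byte[] data) throws IOException {
    ReentrantReadWriteLock lock = getReadWriteLock(path);
    lock.writeLock().lock();             // exclusive: waits for readers and other writers
    try {
        Files.write(path, data);
    } finally {
        lock.writeLock().unlock();
    }
}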
I have three services in my Android app that are fired by two broadcast receivers. The first two write to a file and are fired by one broadcast receiver, so I can make sure they are executed one after the other (via Context.sendOrderedBroadcast()). The third one is on its own and is fired by a separate broadcast receiver, but it reads from the same file that the first two write to.
Because the broadcast receivers may be fired at the same time, or nearly so, the file might also be accessed concurrently. How can I prevent that from happening? I want to either read first then write, or write first then read. I'm just not sure whether this problem is the same as Java concurrency in general, because Android services, if I'm not mistaken, are an entirely different beast.
One solution would be to have your writing tasks create an empty temporary file (say .lock) before accessing the shared file and delete that same temporary file once they are done.
Your reading task can then check whether the .lock file exists.
Alternatively, you can use a FileLock.
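A rough sketch of the FileLock route (the file name is just an example, and the usual java.nio imports are assumed). Keep in mind that file locks are held on behalf of the whole JVM, so they guard against other processes rather than against other threads in your own app:

try (FileChannel channel = FileChannel.open(
        Paths.get("shared.txt"),          // hypothetical shared file
        StandardOpenOption.READ, StandardOpenOption.WRITE);
     FileLock lock = channel.lock()) {    // blocks until the exclusive lock is acquired
    // read or write through the channel while holding the lock
}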
http://developer.android.com/reference/android/app/Service.html
Note that services, like other application objects, run in the main thread of their hosting process. This means that, if your service is going to do any CPU intensive (such as MP3 playback) or blocking (such as networking) operations, it should spawn its own thread in which to do that work.
I suggest reading from/writing to the file in a separate thread. You can use the "Only one thread at a time!" approach (a single worker thread) so that all file access happens on the same thread.
First of all, I shouldn't have done the file I/O on the main UI thread, which is where Services run. It should be done on another thread, for example in an AsyncTask.
Secondly, the ReentrantLock approach is much easier. While the lock is held, other threads accessing the same resource wait, and they proceed only once the lock has been released. Simply instantiate a new ReentrantLock() and share that lock among the methods that read from or write to the file. It's as easy as calling lock() and unlock() on the ReentrantLock as needed.
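A minimal sketch of that idea, assuming the usual java.nio.file and java.util imports and a lock field shared by both methods (the method names are invented):

private final ReentrantLock fileLock = new ReentrantLock(); // share this single instance

void appendLine(Path file, String line) throws IOException {
    fileLock.lock();                                        // other threads wait here
    try {
        Files.write(file, (line + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    } finally {
        fileLock.unlock();                                  // always release in finally
    }
}

List<String> readLines(Path file) throws IOException {
    fileLock.lock();
    try {
        return Files.readAllLines(file);
    } finally {
        fileLock.unlock();
    }
}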
I have a producer-consumer-like pattern where some threads create data and periodically pass chunks of that data to be consumed by other threads.
Keeping the Java Memory Model in mind, how do I ensure that the data passed to the consumer thread has full 'visibility'?
I know there are data structures in java.util.concurrent, like ConcurrentLinkedQueue, that are built specifically for this, but I want to do this as low-level as possible, without using those, and with full transparency into what is going on under the covers to ensure the memory-visibility part.
If you want "low level" then look into volatile and synchronized.
To transfer data, you need a field somewhere available to all threads. In your case it really needs to be some sort of collection to handle multiple entries. If you made the field final, referencing, say, a ConcurrentLinkedQueue, you'd pretty much be done. The field could be made public and everyone could see it, or you could make it available with a getter.
If you use an unsynchronized queue, you have more work to do, because you have to manually synchronize all access to it, which means you have to track down all usages; not easy when there's a getter method. Not only do you need to protect the queue from simultaneous access, you must make sure interdependent calls end up in the same synchronized block. For instance:
if (!queue.isEmpty()) obj = queue.remove();
If the whole thing is not synchronized, the queue is perfectly capable of telling you it is not empty and then throwing a NoSuchElementException when you try to remove the next element. (ConcurrentLinkedQueue's interface is specifically designed to let you do operations like this with one method call. Take a good look at it even if you don't want to use it.)
The simple solution is to wrap the queue in another object whose methods are carefully chosen and all synchronized. The wrapper, even if it holds a LinkedList or ArrayList, will now act (if you do it right) like CLQ, and it can be freely released to the rest of the program.
So you would have what is really a global field with an immutable (final) reference to a wrapper class, which contains a LinkedList (for example) and has synchronized methods that use the LinkedList to store and access data. The wrapper class, like CLQ, would be thread-safe.
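A bare-bones version of such a wrapper might look like this (the class name is made up):

public final class SharedQueue<T> {
    private final LinkedList<T> queue = new LinkedList<>(); // guarded by "this"

    public synchronized void put(T item) {
        queue.add(item);
    }

    // the emptiness check and the removal happen inside one synchronized method,
    // so another thread cannot sneak in between them
    public synchronized T poll() {
        return queue.isEmpty() ? null : queue.remove();
    }
}

Because every access goes through a synchronized method, the lock's happens-before guarantee also gives you the memory visibility the question asks about: whatever the producer put in is fully visible to the consumer that later takes it out.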
Some variants on this might be desirable. It might make sense to combine the wrapper with some other high-level class in your program. It might also make sense to create and make available instances of nested classes: perhaps one that only adds to the queue and one that only removes from it. (You couldn't do this with CLQ.)
A final note: having synchronized everything, the next step is to figure out how to unsynchronize (to keep threads from waiting too much) without breaking thread safety. Work really hard on this, and you'll end up rewriting ConcurrentLinkedQueue.
I am working with a third-party proprietary library (no source code) which creates instances of a non-thread-safe component. Does this mean that I shouldn't use multiple threads to run jobs in parallel? Running each job in its own JVM crossed my mind, but that seems like overkill.
Then I read the article here
http://cscarioni.blogspot.com/2011/09/alternatives-to-threading-in-java-stm.html
Is it recommended to follow that article's advice? What other alternatives exist out there?
Response to Martin James:
The vendor tells me that there is only one thread in which multiple instances of the component exist (a Factory pattern creates the component instances) and each instance is independently controllable from its API.
So does this mean that I can still use multiple threads while controlling the component instances, which all run in one big thread?
No, it does not mean this.
It means that you have to take care of data protection yourself. One possible way is to synchronize access to the library in the code that calls it (your code). Another possible way is to use immutable objects (for example, make a private copy of the non-thread-safe data structure every time you want to work with it).
Yet another way is to design your application so that the code working with a certain object always runs in the same thread. That does not mean that code working with a different object (even one of the same class) cannot run in another thread. The system is still multi-threaded, but no data clashes are created.
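One simple way to get that 'always the same thread' guarantee is a single-threaded executor per component instance (a sketch; the component calls are hypothetical):

// every interaction with this particular component is funneled through one thread
ExecutorService componentThread = Executors.newSingleThreadExecutor();

componentThread.submit(() -> component.start());   // hypothetical library calls
componentThread.submit(() -> component.update());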
'The vendor tells me that there is only one thread in which multiple instances of the component exist (a Factory pattern creates the component instances) and each instance is independently controllable from its API.'
That is not exactly 100% clear. What I think it means is:
1) Creation of components is not thread-safe. Maybe they are all stored internally in a non-thread-safe container. Presumably, destruction of the components is not thread-safe either.
2) Once created, the components are 'independently controllable' - this suggests strongly that they are thread-safe.
That's my take on it so far. Maybe your vendor could confirm it, just to be sure, before you proceed any further with a design.
It all depends on what your code actually does with the components. For example, ArrayList is not thread-safe, but Vector is. However, if you use an ArrayList within a single thread, in a thread-safe or thread-neutral way, it doesn't matter. For example, you can use ArrayLists without any issue in a JavaEE container for web services, because each web service call runs on its own thread and no one in their right mind would have web-service-handling threads communicating with each other. In fact, Vectors are best avoided in a JavaEE container, because most of their methods are synchronized, which means the container's threads will block until each operation is done.
As AlexR said, you can synchronize things, but the best approach is to really look at your code and figure out if the threads are actually going to be sharing data and state or going off and doing their own thing.
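If the threads really do end up sharing one component instance, synchronizing on your side of the fence could look roughly like this (the component API here is invented):

private final Object componentLock = new Object();  // guards the shared, non-thread-safe instance

public String process(String input) {
    synchronized (componentLock) {                  // only one caller at a time
        return sharedComponent.process(input);      // hypothetical library call
    }
}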
I have multiple client handler threads. These threads need to pass a received object to a server queue, and the server queue will pass another type of object back to the sending thread. The server queue is started when the server starts and keeps running. I am not sure which threading mechanism to use so that a client handler thread is notified when an object is sent back. I don't intend to use sockets or write to a file.
If you want to do actual message passing, take a look at SynchronousQueue. Each thread holds a reference to the queue and waits until another thread passes an object through it.
This would be thread safe and address your requirements.
If you are simply looking to have threads read and write a shared variable, you can use normalocity's suggestion, though its thread safety depends on how you access it (via synchronized or volatile).
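A small sketch of the SynchronousQueue hand-off (InterruptedException handling is omitted, and receivedObject is a placeholder):

// client handlers put objects here; the server thread takes them, one hand-off at a time
SynchronousQueue<Object> handoff = new SynchronousQueue<>();

// in a client handler thread: put() blocks until the server thread is ready to take()
handoff.put(receivedObject);

// in the server thread:
Object next = handoff.take();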
As far as making objects accessible in Java goes, there's no difference between multi-threaded and single-threaded code. You just follow the scope rules (public, private, protected), and that's it. Multiple threads all run within the same process, so there aren't any special thread-only scope rules to know about.
For example, define a method where you pass the object in, and make that method accessible from the other thread. The object you want to pass around simply needs to be accessible from the other thread's scope.
As far as thread safety goes, you can synchronize your writes, and for the most part that will take care of things. Thread safety can get a bit hairy the more complicated your code becomes, but I think this will get you started.
One approach for processing objects and producing result objects is to have a shared array or LinkedList that acts as a queue, holding the objects to be processed and the results of that processing. It's hard to go into much more detail without specifics on what exactly you're trying to do, but most shared access to objects between threads comes down to either inter-thread method calls or some shared collection/queue of objects.
Unless you are absolutely certain that it will always be only a single object at a time, use some sort of Queue.
If you are certain that it will always be only a single object at a time, use some sort of Queue anyway. :-)
Use a concurrent queue from java.util.concurrent.
why? It is almost guaranteed to provide better general performance than anything hand-rolled.
recommendation: use a bounded queue and you will get back-pressure for free.
note: the depth of the queue determines your general latency characteristics: shallower queues will have lower latencies at the cost of reduced bandwidth.
Use Future semantics
why? Futures provide a proven and standard means of getting an asynchronous result.
recommendation: create a simple Request class and expose a method #getFutureResponse(). The implementation of this method can use a variety of signaling strategies, such as a Lock, a flag (using Atomic/CAS), etc.
note: using Future's timeout semantics allows you to tie server behavior to your SLA, e.g. #getFutureResponse(sla_timeout_ms).
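Putting the two suggestions together, a rough sketch (Request is a placeholder class, exception handling is omitted, and CompletableFuture stands in for hand-rolling the signaling with a Lock or Atomic flag):

// a request carries its own future, which the server later completes with the reply
class Request {
    final String payload;
    final CompletableFuture<String> futureResponse = new CompletableFuture<>();
    Request(String payload) { this.payload = payload; }
}

// bounded queue: when it is full, producers block, which is the free back-pressure
BlockingQueue<Request> queue = new ArrayBlockingQueue<>(64);

// client handler thread:
Request req = new Request("hello");
queue.put(req);                                                     // blocks if the queue is full
String reply = req.futureResponse.get(500, TimeUnit.MILLISECONDS);  // SLA-style timeout

// server thread:
Request next = queue.take();
next.futureResponse.complete("processed: " + next.payload);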
A book tip if you want to dive a bit deeper into communication between threads (or processes, or systems): Pattern-Oriented Software Architecture, Volume 2: Patterns for Concurrent and Networked Objects.
Just use simple dependency injection.
class MyFirstThread extends Thread {
    private Object data;  // note: consider volatile or synchronization if visibility matters
    public void setData(Object o) { this.data = o; }
}

class MySecondThread extends Thread {
    private final MyFirstThread callback;
    MySecondThread(MyFirstThread callback) { this.callback = callback; }

    @Override
    public void run() { callback.setData("done"); }  // hand data back to the first thread
}
MyFirstThread t1 = new MyFirstThread();
MySecondThread t2 = new MySecondThread(t1);
t1.start();
t2.start();
You can now do callback.setData(...) in your second thread.
I find this to be the safest way. Other solutions involve using volatile or some kind of shared object, which I think is overkill.
You may also want to use a BlockingQueue and pass it to both threads. If you plan to have more threads, that is probably a better solution.