Resource sharing or file lock among concurrent Android services - java

I have three services in my Android app that are fired by two broadcast receivers. The first two write to a file and are fired by one broadcast receiver, so I can make sure they are executed one after the other (via Context.sendOrderedBroadcast()). The third one is on its own and is fired by a separate broadcast receiver, but it reads from the same file that the first two write to.
Because the broadcast receivers may fire at the same time, or nearly the same time, the file might also be accessed concurrently. How can I prevent that from happening? I want to either read first and then write, or write first and then read. I'm just not sure whether this problem is the same as Java concurrency in general, because Android services, if I'm not mistaken, are an entirely different beast.

One solution would be to have your writing tasks create an empty temporary file (say, .lock) before accessing the shared file and delete that same temporary file once they are done.
Your reading task can then check whether the .lock file exists.
Alternatively, you can use a FileLock.
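For instance, a minimal sketch of the FileLock variant (class and method names here are purely illustrative); the lock is acquired before touching the shared file and released when done:

import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class SharedFileAccess {
    // Runs the given action while holding an exclusive lock on the file.
    // Note: a FileLock guards against other processes; within one process the
    // services should additionally share something like a ReentrantLock.
    static void withFileLock(String path, Runnable action) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path, "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) { // blocks until the lock is free
            action.run();
        } // lock, channel and file are all released/closed here
    }
}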

http://developer.android.com/reference/android/app/Service.html
Note that services, like other application objects, run in the main thread of their hosting process. This means that, if your service is going to do any CPU intensive (such as MP3 playback) or blocking (such as networking) operations, it should spawn its own thread in which to do that work.
I suggest reading from/writing to the file in a separate thread. You can use the approach from Only one thread at a time! to make sure it is always done on the same worker thread.

First of all, I shouldn't have done the file I/O on the main UI thread, which is what happens by default inside a Service. It should be done on another thread, for example in an AsyncTask.
Secondly, the ReentrantLock approach is much easier. When locked, it makes other threads accessing the same resource wait, and lets them proceed only once the lock has been released. Simply instantiate a new ReentrantLock() and share that lock among the methods that read from or write to the file. It's as easy as calling lock() and unlock() on the ReentrantLock as needed.
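A minimal sketch of that approach (class and file handling are illustrative); both the writing services and the reading service would call these methods instead of touching the file directly:

import java.io.*;
import java.util.concurrent.locks.ReentrantLock;

public class LogFile {
    // One lock instance shared by every thread/service that touches the file.
    private static final ReentrantLock FILE_LOCK = new ReentrantLock();
    private final File file;

    public LogFile(File file) {
        this.file = file;
    }

    public void append(String line) throws IOException {
        FILE_LOCK.lock();
        try (Writer out = new BufferedWriter(new FileWriter(file, true))) {
            out.write(line);
            out.write('\n');
        } finally {
            FILE_LOCK.unlock(); // always release, even if the write throws
        }
    }

    public String readAll() throws IOException {
        FILE_LOCK.lock();
        try (BufferedReader in = new BufferedReader(new FileReader(file))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        } finally {
            FILE_LOCK.unlock();
        }
    }
}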

Related

Calling methods of Singleton class concurrently

I have a singleton object with one method, which is NOT synchronized. The singleton can be accessed by many clients at a time. What will happen if multiple clients access that object?
Actually, I want to write a log to a single file using that method.
I guess by clients you mean threads. Assuming you have implemented the singleton correctly, all threads would be using the same instance. Since this is a method that changes state (writing to a file), it would in general require some sort of synchronization. It does depend on some factors, though - for example, if your method writes just a single line in a single call to BufferedWriter.write(), it is fine, because BufferedWriter.write() synchronizes internally. However, if you write multiple lines or make multiple calls to BufferedWriter.write(), the different calls might interleave out of order.
Now, if by clients you mean different processes, simple synchronization of course will not help. You can use FileLock to coordinate between processes (file locks are held on behalf of the entire JVM, so they do not help between threads of the same JVM). Failing that, you can lock using something external, such as another temp file used as a lock; whether file creation is atomic depends on the OS, though.
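For the threads-in-one-JVM case, a minimal sketch (the file name is illustrative): synchronizing the log method makes sure entries from different threads cannot interleave:

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

public final class FileLogger {
    private static final FileLogger INSTANCE = new FileLogger();

    private FileLogger() {
    }

    public static FileLogger getInstance() {
        return INSTANCE;
    }

    // synchronized: only one thread at a time can be writing a log entry
    public synchronized void log(String message) {
        try (BufferedWriter out = new BufferedWriter(new FileWriter("app.log", true))) {
            out.write(message);
            out.newLine();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}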

How can I ensure that my Android app doesn't access a file simultaneously?

I am building a fitness app which continually logs activity on the device. I need to log quite often, but I also don't want to unnecessarily drain my users' batteries, which is why I am thinking about batching network calls together and transmitting them all at once as soon as the radio is active, the device is connected to WiFi, or it is charging.
I am using a filesystem based approach to implement that. I persist the data first to a File - eventually I might use Tape from Square to do that - but here is where I encounter the first issues.
I am continually writing new log data to the File, but I also need to periodically send all the logged data to my backend. When that happens I delete the contents of the File. The problem now is how can I prevent both of those operations from happening at the same time? Of course it will cause problems if I try to write log data to the File at the same time as some other process is reading from the File and trying to delete its contents.
I am thinking about using an IntentService to essentially act as a queue for all those operations. And since - at least that's what I have read - an IntentService handles Intents sequentially in a single worker Thread, it shouldn't be possible for two of those operations to happen at the same time, right?
Currently I want to schedule a PeriodicTask with the GcmNetworkManager which would take care of sending the data to the server. Is there any better way to do all this?
1) You are overthinking this whole thing!
Your approach is way more complicated than it has to be! And for some reason none of the other answers point this out, but GcmNetworkManager already does everything you are trying to implement! You don't need to implement anything yourself.
2) Optimal way to implement what you are trying to do.
You don't seem to be aware that GcmNetworkManager already batches calls in the most battery-efficient way, with automatic retries etc., and that it also persists the tasks across device boots and can ensure their execution as soon as it is battery efficient and required by your app.
Just whenever you have data to save, schedule a OneoffTask like this:
final OneoffTask task = new OneoffTask.Builder()
        // The Service which executes the task.
        .setService(MyTaskService.class)
        // A tag which identifies the task.
        .setTag(TASK_TAG)
        // Sets a time frame for the execution of this task in seconds.
        // This specifically means that the task can either be
        // executed right now, or must have executed at the latest in one hour.
        .setExecutionWindow(0L, 3600L)
        // Task is persisted on the disk, even across boots.
        .setPersisted(true)
        // Unmetered connection required for the task.
        .setRequiredNetwork(Task.NETWORK_STATE_UNMETERED)
        // Attach data to the task in the form of a Bundle.
        .setExtras(dataBundle)
        // If you set this to true and this task already exists
        // (that just depends on the tag set above), then the old task
        // will be overwritten with this one.
        .setUpdateCurrent(true)
        // Sets whether this task should only be executed when the device is charging.
        .setRequiresCharging(false)
        .build();
mGcmNetworkManager.schedule(task);
This will do everything you want:
The Task will be persisted on the disk
The Task will be executed in a batched and battery efficient way, preferably over Wifi
You will have configurable automatic retries with a battery efficient backoff pattern
The Task will be executed within a time window you can specify.
I suggest for starters you read this to learn more about the GcmNetworkManager.
So to summarize:
All you really need to do is implement your network calls in a Service extending GcmTaskService, and later, whenever you need to perform such a network call, you schedule a OneoffTask and everything else will be taken care of for you!
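For reference, here is a minimal sketch of such a service; uploadPendingData() is a hypothetical helper standing in for your actual network call, and the service additionally needs to be declared in the manifest with the com.google.android.gms.gcm.ACTION_TASK_READY intent filter:

import android.os.Bundle;

import com.google.android.gms.gcm.GcmNetworkManager;
import com.google.android.gms.gcm.GcmTaskService;
import com.google.android.gms.gcm.TaskParams;

public class MyTaskService extends GcmTaskService {
    @Override
    public int onRunTask(TaskParams taskParams) {
        // Runs on a background thread once the scheduler decides the conditions are met.
        // taskParams.getExtras() carries the Bundle attached via setExtras() above.
        boolean sent = uploadPendingData(taskParams.getExtras());
        return sent ? GcmNetworkManager.RESULT_SUCCESS
                    : GcmNetworkManager.RESULT_RESCHEDULE; // retried later with backoff
    }

    private boolean uploadPendingData(Bundle extras) {
        // ... perform the actual network call here (illustrative placeholder) ...
        return true;
    }
}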
Of course you don't need to call each and every setter of the OneoffTask.Builder like I do above - I just did that to show you all the options you have. In most cases scheduling a task would just look like this:
mGcmNetworkManager.schedule(new OneoffTask.Builder()
        .setService(MyTaskService.class)
        .setTag(TASK_TAG)
        .setExecutionWindow(0L, 300L)
        .setPersisted(true)
        .setExtras(bundle)
        .build());
And if you put that in a helper method, or even better create factory methods for all the different tasks you need to run, then everything you were trying to do should boil down to just a few lines of code!
And by the way: yes, an IntentService handles every Intent sequentially, one after another, in a single worker Thread. You can look at the relevant implementation here. It's actually very simple and quite straightforward.
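If you do go the IntentService route, a minimal sketch of that queue-like behaviour might look like this (the action names are made up for illustration); because onHandleIntent() is called for one Intent at a time, an append and a flush can never run concurrently:

import android.app.IntentService;
import android.content.Intent;

public class LogQueueService extends IntentService {
    // Hypothetical actions used only for this example.
    public static final String ACTION_APPEND = "com.example.ACTION_APPEND";
    public static final String ACTION_FLUSH = "com.example.ACTION_FLUSH";

    public LogQueueService() {
        super("LogQueueService"); // name of the single worker thread
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // Intents are delivered here strictly one after another.
        if (ACTION_APPEND.equals(intent.getAction())) {
            // ... append the log entry carried in the extras to the file ...
        } else if (ACTION_FLUSH.equals(intent.getAction())) {
            // ... upload the file contents to the backend, then truncate the file ...
        }
    }
}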
All UI and Service methods are invoked on the same main thread by default. Unless you explicitly create threads or use an AsyncTask, there is no concurrency in an Android application per se.
This means that all intents, alarms, and broadcasts are by default handled on the main thread.
Also note that doing I/O and/or network requests may be forbidden on the main thread (depending on Android version, see e.g. How to fix android.os.NetworkOnMainThreadException?).
Using AsyncTask or creating your own threads will introduce concurrency problems, but they are the same as with any multi-threaded programming; there is nothing special about Android there.
One more point to consider when doing concurrency is that background threads need to hold a WakeLock or the CPU may go to sleep.
Just an idea.
You may try to make use of a serial executor for your file operations, so that only one task can execute at a time.
http://developer.android.com/reference/android/os/AsyncTask.html#SERIAL_EXECUTOR
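A minimal sketch of that idea (class and file handling are illustrative); every Runnable submitted to AsyncTask.SERIAL_EXECUTOR runs only after the previous one has finished, so reads and writes of the shared file cannot overlap:

import android.os.AsyncTask;

import java.io.BufferedWriter;
import java.io.File;
import java.io.FileWriter;
import java.io.IOException;

public class SerialFileWriter {
    public static void appendAsync(final File file, final String line) {
        AsyncTask.SERIAL_EXECUTOR.execute(new Runnable() {
            @Override
            public void run() {
                // Runs on the shared serial worker thread, never concurrently
                // with any other task submitted to SERIAL_EXECUTOR.
                try (BufferedWriter out = new BufferedWriter(new FileWriter(file, true))) {
                    out.write(line);
                    out.newLine();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        });
    }
}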

Process Synchronization in java

Process A writes to a file XYZ when executed. There are processes B and C which, when executed, read the file XYZ. So, while process A is running, B and C should wait for A to complete. To provide synchronization, can I use the java.nio package? Or should I use something like FileLock or sockets? Can we specify how long the second process should wait?
Edit: The file is created by the first (writing) process. In that case, can I make it a shared resource?
Using the java.nio package's FileLock could be a better solution, I hope. But note that java.nio was not really full-fledged until after JDK 1.6 (the NIO.2 additions arrived in JDK 7):
http://www.withoutbook.com/DifferenceBetweenSubjects.php?subId1=7&subId2=43&d=Java%206%20vs%20Java%207
FileLock:
http://docs.oracle.com/javase/7/docs/api/java/nio/channels/FileLock.html
One way could be to use a flag: a boolean stillWriting that is readable from outside.
As soon as process A has done its job, the flag is set to false and your processes B/C can start their work on the file.
When A wants to start editing the file again, it sets the flag back to true and blocks the other two processes. (Note that a plain flag like this only coordinates threads within one JVM; separate processes would need the flag to live somewhere shared, such as a file.)
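A sketch of the flag idea for the case where A, B and C are threads in the same JVM, using an AtomicBoolean so the flag's value is visible across threads (the actual file handling is left out):

import java.util.concurrent.atomic.AtomicBoolean;

public class WriteFlag {
    // true while A is writing the file XYZ
    private static final AtomicBoolean stillWriting = new AtomicBoolean(false);

    // Thread A
    static void writeXyz() {
        stillWriting.set(true);
        try {
            // ... write the file XYZ here ...
        } finally {
            stillWriting.set(false);
        }
    }

    // Threads B and C: wait until A has finished before reading.
    static void readXyzWhenReady() throws InterruptedException {
        while (stillWriting.get()) {
            Thread.sleep(50); // crude polling; the Condition approach below avoids it
        }
        // ... read the file XYZ here ...
    }
}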
Using locks would be a good idea. You can use Conditions from the Java API.
Refer to http://docs.oracle.com/javase/7/docs/api/java/util/concurrent/locks/Condition.html#awaitNanos(long)
While A is working, the other threads should await on the condition; on completion, A can signal so that the waiting threads can proceed. This is very appropriate when a shared resource is involved.
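A minimal sketch of that, combining a ReentrantLock with a Condition and the timed awaitNanos() wait linked above (class and method names are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class WriteThenRead {
    private final Lock lock = new ReentrantLock();
    private final Condition writeDone = lock.newCondition();
    private boolean written = false;

    // Thread A: call this after the file has been written.
    public void markWritten() {
        lock.lock();
        try {
            written = true;
            writeDone.signalAll(); // wake up B and C
        } finally {
            lock.unlock();
        }
    }

    // Threads B and C: wait (up to a timeout) until A has finished writing.
    public boolean awaitWrite(long timeout, TimeUnit unit) throws InterruptedException {
        lock.lock();
        try {
            long nanos = unit.toNanos(timeout);
            while (!written) {
                if (nanos <= 0L) {
                    return false; // timed out
                }
                nanos = writeDone.awaitNanos(nanos);
            }
            return true;
        } finally {
            lock.unlock();
        }
    }
}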

Android send data other thread queue

I want to generate a text string that is going to be sent via a TCP socket. I accomplished that within a few minutes.
However, I want a producer-consumer pattern. I don't care whether a send fails or not.
Should I create a BlockingQueue in the application for this? Should I create a Service?
Note that I want a single thread to manage this job.
In the case it's a short task (like you commented), I'd recommend putting it within an AsyncTask as a background thread. You can control anything about it separately, which will also help you when debugging. Services are more intended for long-running tasks, so I'd not recommend one at this scope (it's even a bit harder to communicate with other Activities). Here you'll find the AsyncTask documentation, and here a good example.
Whether you need a blocking structure depends on your needs - but I don't think you'll need one in your case. Anyway, if you do, there are lots of thread-safe data structures you can use; you might find this helpful.
Create a LinkedBlockingQueue where your producer adds data. Create a Timer that fires every second or so. The task of the Timer would be to send the messages over the wire.
For this, both the producer (the one generating the messages) and consumer (Timer) should have access to the LinkedBlockingQueue. The Timer will remove the first element of the LinkedBlockingQueue and then send it.
Sounds good?
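A rough sketch of that suggestion (the actual socket write is left as a placeholder); the producer can call send() from any thread, and the Timer's single thread is the only consumer:

import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.LinkedBlockingQueue;

public class MessageSender {
    private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Timer timer = new Timer("tcp-sender", true); // single daemon thread

    public MessageSender() {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // Consumer: drain whatever has accumulated and send it.
                String message;
                while ((message = queue.poll()) != null) {
                    // ... write message to the TCP socket here (illustrative) ...
                }
            }
        }, 0L, 1000L); // fires every second
    }

    // Producer: enqueue a message without blocking; failures are simply dropped.
    public void send(String message) {
        queue.offer(message);
    }
}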

Asynchronous processing with a single thread

Even after reading http://krondo.com/?p=1209 or Does an asynchronous call always create/call a new thread? I am still confused about how to provide asynchronous calls on an inherently single-threaded system. I will explain my understanding so far and point out my doubts.
One of the examples I read described a TCP server providing asynchronous processing of requests - a user would call a method, e.g. get(Callback c), and the callback would be invoked some time later. Now, my first issue here: we already have two systems, one server and one client. This is not what I mean, because in fact we have at least two threads - one on the server and one on the client side.
The other example I read was JavaScript, as this is the most prominent example of a single-threaded asynchronous system with Node.js. What I cannot get through my head, maybe because I am thinking in Java terms, is this: if I execute the code below (apologies for the incorrect, probably atrocious syntax):
function foo(){
    read_file(location, callback); // asynchronous call, does not block
    // do many more things here, potentially for hours
}
the call to read_file executes (something) and returns, allowing the rest of my function to execute. Since there is only one thread, i.e. the one that is executing my function, how on earth will that same thread (the one and only one which is executing my stuff) ever get to read in the bytes from disk?
Basically, it seems to me I am missing some underlying mechanism that acts like a round-robin scheduler of some sort, which is inherently single-threaded and might split the tasks into smaller ones, or call into multithreaded components that would spawn a thread and read the file in.
Thanks in advance for all comments and pointing out my mistakes on the way.
Update: Thanks for all responses. Further good sources that helped me out with this are here:
http://www.html5rocks.com/en/tutorials/async/deferred/
http://lostechies.com/johnteague/2012/11/30/node-js-must-know-concepts-asynchrounous/
http://www.interact-sw.co.uk/iangblog/2004/09/23/threadless (.NET)
http://ejohn.org/blog/how-javascript-timers-work/ (intrinsics of timers)
http://www.mobl-lang.org/283/reducing-the-pain-synchronous-asynchronous-programming/
The real answer is that it depends on what you mean by "single thread".
There are two approaches to multitasking: cooperative and interrupt-driven. Cooperative, which is what the other StackOverflow item you cited describes, requires that routines explicitly relinquish ownership of the processor so it can do other things. Event-driven systems are often designed this way. The advantage is that it's a lot easier to administer and avoids most of the risks of conflicting access to data, since only one chunk of your code is ever executing at any one time. The disadvantage is that, because only one thing is being done at a time, everything has to either be designed to execute fairly quickly or be broken up into chunks that do so (via explicit pauses like a yield() call), or the system will appear to freeze until that event has been fully processed.
The other approach -- threads or processes -- actively takes the processor away from running chunks of code, pausing them while something else is done. This is much more complicated to implement, and requires more care in coding since you now have the risk of simultaneous access to shared data structures, but is much more powerful and -- done right -- much more robust and responsive.
Yes, there is indeed a scheduler involved in either case. In the former version the scheduler is just spinning until an event arrives (delivered from the operating system and/or runtime environment, which is implicitly another thread or process) and dispatches that event before handling the next to arrive.
The way I think of it in JavaScript is that there is a Queue which holds events. In the old Java producer/consumer parlance, there is a single consumer thread pulling stuff off this queue and executing every function registered to receive the current event. Events such as asynchronous calls (AJAX requests completing), timeouts or mouse events get pushed onto the Queue as soon as they happen. The single "consumer" thread pulls them off the queue, locates any interested functions and then executes them; it cannot get to the next event until it has finished invoking all the functions registered on the current one. Thus, if you have a handler that never completes, the Queue just fills up - it is said to be "blocked".
The system has more than one thread (there is at least one producer and one consumer), since something has to generate the events that go on the queue. But as the author of the event handlers you need to be aware that events are processed on a single thread: if you go into a tight loop, you will lock up the only consumer thread and make the system unresponsive.
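To make the model concrete in Java terms, here is a toy sketch of that single-consumer queue (not how any real JavaScript engine is implemented, just the shape of the idea): producer threads post completed events as callbacks, and one thread runs them strictly one at a time:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class TinyEventLoop {
    // Producers (timers, I/O threads, the OS) push ready callbacks here.
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<>();

    // Called from producer threads when an event (e.g. "file read finished") is ready.
    public void post(Runnable callback) {
        queue.offer(callback);
    }

    // The single consumer thread: pulls one event at a time and runs its callback.
    // A callback that loops forever blocks every later event - the loop is "blocked".
    public void run() throws InterruptedException {
        while (true) {
            Runnable callback = queue.take(); // waits until an event arrives
            callback.run();
        }
    }
}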
So in your example:
function foo(){
    read_file(location, function(fileContents) {
        // called with the fileContents when the file has been read
    });
    // do many more things here, potentially for hours
}
If you do as your comment says and execute for potentially hours, the callback which handles fileContents will not fire for hours, even though the file has been read. As soon as you hit the last } of foo(), the consumer thread is done with this event and can process the next one, where it will execute the registered callback with the file contents.
HTH
