I am having trouble waiting for fresh data on a worker thread. The data object is copied to Realm on the main thread, but almost immediately afterwards I need to access this object from a worker thread, which then reports that no such object currently exists in Realm (it's a newly opened Realm instance). I remember there was a load() method that would block execution until the next update, but it was removed in newer versions. And I can't use change notifications for this, as this is not a Looper thread.
The only way I can think of right now is to sleep the thread for some magic period of time and pray that it has updated already, but that approach is, imho, wildly nondeterministic.
Can anybody advise how I can ensure that I read the most current data at the time?
A possible hack would be to create a transaction that you cancel at the end.
realm.beginTransaction(); // blocks while there are other transactions
... // you always see the latest version of the Realm here
realm.cancelTransaction();
This works if the thread is started after the UI thread saves the object into the Realm.
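For illustration, here is a minimal sketch of that hack on a worker thread; Dog and dogId are hypothetical stand-ins for your object type and its primary key:
new Thread(() -> {
    try (Realm realm = Realm.getDefaultInstance()) {
        realm.beginTransaction(); // blocks while other transactions run, then pins us to the latest version
        Dog dog = realm.where(Dog.class).equalTo("id", dogId).findFirst();
        // ... read the fresh data here ...
        realm.cancelTransaction(); // nothing was written, so cancel instead of commit
    }
}).start();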
You can also try this workaround: https://stackoverflow.com/a/38839808/2413303 (although it doesn't really help with waiting)
Try using a QueryListener, which is triggered whenever an object satisfying certain criteria is updated.
Another option is using realm.beginTransaction() and realm.commitTransaction() instead of realm.executeTransaction(Realm.Transaction).
The problem with this is that executeTransaction() automatically handles calling realm.cancelTransaction() in case an exception is thrown, while the other alternative typically neglects the try-catch.
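For reference, a rough sketch of what executeTransaction() already does for you, and what the manual alternative would have to replicate:
realm.beginTransaction();
try {
    // mutate your objects here
    realm.commitTransaction();
} catch (RuntimeException e) {
    if (realm.isInTransaction()) {
        realm.cancelTransaction(); // roll back instead of leaving the transaction open
    }
    throw e;
}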
Yes, you’re supposed to call cancel on transactions that aren’t going to end up being committed.
For example, on background threads:
// SAY NO TO THIS
Realm realm = Realm.getDefaultInstance();
realm.beginTransaction(); // NO
realm.copyToRealm(dog);
realm.commitTransaction(); // NO NO NO NO NO
// YOU NEED TO CLOSE THE REALM
// ----------------------
// SAY YES TO THIS
Realm realm = null;
try { // I could use try-with-resources here
    realm = Realm.getDefaultInstance();
    realm.executeTransaction(new Realm.Transaction() {
        @Override
        public void execute(Realm realm) {
            realm.insertOrUpdate(dog);
        }
    });
} finally {
    if (realm != null) {
        realm.close();
    }
}
// OR IN SHORT (Retrolambda)
try (Realm realmInstance = Realm.getDefaultInstance()) {
    realmInstance.executeTransaction((realm) -> realm.insertOrUpdate(dog));
}
The problem with this is that on background threads, having an open Realm instance that you don’t close even when the thread execution is over is very costly, and can cause strange errors. As such, it’s recommended to close the Realm on your background thread when the execution is done in a finally block. This includes IntentServices.
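For example, a sketch of the same rule applied inside an IntentService (the class name and the work inside the transaction are illustrative):
public class SyncService extends IntentService {
    public SyncService() {
        super("SyncService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        // onHandleIntent runs on a background worker thread,
        // so the Realm must be opened and closed right here
        try (Realm realm = Realm.getDefaultInstance()) {
            realm.executeTransaction(r -> {
                // background writes go here
            });
        } // closed even if the transaction throws
    }
}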
Related
I am using Realm in my Firebase service, and I am closing the Realm instance in my finally block. The problem arises when I perform an asynchronous operation in my try block: the finally executes before the async operation completes, the Realm instance is closed, and the app crashes, since the async operation performs Realm-related tasks.
try {
    // perform async task that requires realm
} catch (Exception e) {
    e.printStackTrace();
} finally {
    if (realm != null && !realm.isClosed()) {
        realm.close();
    }
}
This is roughly what the code looks like. If I try closing the Realm instance anywhere else, I get an error saying that I am accessing the Realm instance from the incorrect thread. Is there a way I can wait until the async operation is completed and only then close the Realm instance?
So, you really can't perform an asynchronous task inside a try/catch block.
... and by "can't" I don't mean that it is bad practice, I mean that it is simply not possible, by the very definition of "asynchronous".
What you are doing inside the try/catch block is enqueuing the task, for later execution. Once the task is enqueued (not executed!) the try/catch block is exited.
If you want the try/catch block around the asynchronously executed code, you need to execute it as part of the asynchronous task.
Furthermore, as you will see in the documentation, you cannot pass most Realm objects between threads. You cannot open the realm on some thread and then pass it, open, to an asynchronous task.
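As a sketch of the fix, move the Realm lifecycle into the asynchronous task itself (the executor and the work inside the transaction are illustrative):
executor.execute(() -> {
    // open the Realm on the thread that actually uses it...
    try (Realm realm = Realm.getDefaultInstance()) {
        realm.executeTransaction(r -> {
            // ...the Realm-related work formerly done "asynchronously"...
        });
    } // ...and close it on that same thread, once the work is done
});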
Hello RxJava masters,
In my current Android project, I encountered some deadlock issues while playing with RxJava and SQLite. My problem is:
I start a transaction on a thread
call a web service and save some stuff in the database
concat map another observable function
try to write other stuff to the database ---> get a deadlock
Here is my code:
// define a scheduler for managing transactions on the same thread
private Scheduler mScheduler = Schedulers.from(Executors.newSingleThreadExecutor());

Observable.just(null)
    /* Go to known thread to open db transaction */
    .observeOn(mScheduler)
    .doOnNext(o -> myStore.startTransaction())
    /* Do some treatments that change thread */
    .someWebServiceCallWithRetrofit()
    /* Return to known thread to save items in db */
    .observeOn(mScheduler)
    .flatMap(items -> saveItems(items))
    .subscribe();
public Observable<Node> saveItems(List<Item> items) {
    return Observable.from(items)
        .doOnNext(item -> myStore.saveItem(item)) // write into the database: OK
        .concatMap(item -> saveSubItems(item));
}

public Observable<Node> saveSubItems(Item item) {
    return Observable.from(item.getSubItems())
        .doOnNext(subItem -> myStore.saveSubItems(subItem)); // DEADLOCK: thread is different
}
Why is RxJava suddenly changing threads, even though I specified that I want it to observe on my own scheduler? I made a dirty fix by adding another observeOn before saveSubItems, but this is probably not the right solution.
I know that when you call a web service with Retrofit, the response is forwarded to a new thread (that's why I created my own scheduler: to get back to the thread where I started my SQL transaction). But I really don't understand how RxJava is managing the threads.
Thank you very much for your help.
The side-effect operators (as does flatMap) execute synchronously on whatever thread calls them. Try something like:
Observable.just(null)
    .doOnNext(o -> myStore.startTransaction())
    .subscribeOn(mScheduler) // Go to known thread to open db transaction
    /* Do some treatments that change thread */
    .someWebServiceCallWithRetrofit()
    .flatMap(items -> saveItems(items))
    .subscribeOn(mScheduler) // Return to known thread to save items in db
    .observeOn(mScheduler) // Irrelevant since we don't observe anything
    .subscribe();
To my knowledge, the doOnNext method is called on a different thread than the code before it, because it runs asynchronously from the rest of the sequence.
Example: you can do multiple REST calls, save them to the database, and inside doOnNext(...) inform a view/presenter/controller of the progress. You could do this before and/or after saving to the database.
What I would suggest is "flatMapping" the code.
So the saveSubItems method would look like this (if myStore.saveSubItems returns a result):
public Observable<Node> saveSubItems(Item item) {
    return Observable.from(item.getSubItems())
        .flatMap(subItem -> myStore.saveSubItems(subItem));
}
Using "flatMapping" guarantees that the operation is run on the same thread as the previous sequence and the sequence continues then flaMap function ends.
I am using the new Couchbase Java Client API 2.1.1 and therefore JavaRx to access my Couchbase cluster.
When using asynchronous getAndLock on an already locked document, getAndLock fails with a TemporaryLockFailureException. In another SO question (rxjava: Can I use retry() but with delay?) I found out how to retry with delay.
Here is my adapted code:
CountDownLatchWithResultData<JsonDocument> resultCdl = new CountDownLatchWithResultData<>(1);

couchbaseBucket.async().getAndLock(key, LOCK_TIME).retryWhen((errorObserver) -> {
    return errorObserver.flatMap((Throwable t) -> {
        if (t instanceof TemporaryLockFailureException) {
            return Observable.timer(RETRY_DELAY_MS, TimeUnit.MILLISECONDS);
        }
        return Observable.error(t);
    });
}).subscribe(new Subscriber<JsonDocument>() {
    @Override
    public void onCompleted() {
        resultCdl.countDown();
    }

    @Override
    public void onError(Throwable e) {
        resultCdl.countDown();
    }

    @Override
    public void onNext(JsonDocument t) {
        resultCdl.setInformation(t);
    }
});
........
resultCdl.await();
if (resultCdl.getInformation() == null) {
    // do stuff
} else ....
(CountDownLatchWithResultData simply extends a normal CountDownLatch and adds two methods to store some information before the count has reached 0 and retrieve it afterwards)
So basically I'd like this code to
try to get the lock infinitely, once every RETRY_DELAY_MS milliseconds, if a TemporaryLockFailureException occurred, and then call onNext
or to fail completely on other exceptions
or to directly call onNext if there is no exception at all
The problem now is that when retrying, it only retries once and the JsonDocument from resultCdl.getInformation() is always null in this case even though the document exists. It seems onNext is never called.
If there is no exception, the code works fine.
So apparently I am doing something wrong here, but I have no clue as to where the problem might be. Does returning Observable.timer imply that the previously associated retryWhen is also executed again for this new Observable? Is it the CountDownLatch with count 1 getting in the way?
This one is subtle. Up to version 2.2.0, the Observables from the SDK are in the "hot" category. In effect, that means that they start emitting even if no subscription is made. They will also emit the same data to every newly arriving Subscriber, so in effect they cache the data.
So what your retry does is resubscribe to an Observable that will always emit the same thing (in this case, an error). I suspect it comes out of the retry loop just because the lock's maximum duration is LOCK_TIME...
Try wrapping the call to asyncBucket.getAndLock inside an Observable.defer (or migrate to the 2.2.x SDK if that's something you could do, see release and migration notes starting from 2.2.0).
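For illustration, a minimal sketch of the defer wrapping, reusing key, LOCK_TIME, and the retry logic from the question (the Subscriber stays the same):
Observable
    .defer(() -> couchbaseBucket.async().getAndLock(key, LOCK_TIME))
    .retryWhen((errorObserver) -> errorObserver.flatMap((Throwable t) -> {
        if (t instanceof TemporaryLockFailureException) {
            return Observable.timer(RETRY_DELAY_MS, TimeUnit.MILLISECONDS);
        }
        return Observable.error(t);
    }))
    .subscribe(subscriber); // the same Subscriber<JsonDocument> as above
Because defer builds a fresh Observable for every subscription, each retry now issues a new getAndLock call instead of replaying the cached error.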
I am having difficulty trying to correctly program my application in the way I want it to behave.
Currently, my application (as a Java Servlet) will query the database for a list of items to process. For every item in the list, it will submit an HTTP Post request. I am trying to create a way where I can stop this processing (and even terminate the HTTP Post request in progress) if the user requests. There can be simultaneous threads that are separately processing different queries. Right now, I will stop processing in all threads.
My current attempt involves implementing the database query and HTTP Post in a Callable class. Then I submit the Callable class via the Executor Service to get a Future object.
However, in order to properly stop the processing, I need to abort the HTTP Post and close the database's Connection, Statement, and ResultSet, because Future.cancel() will not do this for me. How can I do this when I call cancel() on the Future object? Do I have to store a List of Arrays that contains the Future object, HttpPost, Connection, Statement, and ResultSet? This seems overkill; surely there must be a better way?
Here is some code I have right now that only aborts the HttpPost (and not any database objects).
private static final ExecutorService pool = Executors.newFixedThreadPool(10);

public static Future<HttpClient> upload(final String url) {
    CallableTask ctask = new CallableTask();
    ctask.setFile(largeFile);
    ctask.setUrl(url);
    Future<HttpClient> f = pool.submit(ctask); // creates an HttpPost that posts 'largeFile' to the 'url'
    linklist.add(new tuple<Future<HttpClient>, HttpPost>(f, ctask.getPost())); // storing the objects for when I cancel later
    return f;
}
//This method cancels all running Future tasks and aborts any POSTs in progress
public static void cancelAll() {
    System.out.println("Checking status...");
    for (tuple<Future<HttpClient>, HttpPost> t : linklist) {
        Future<HttpClient> f = t.getFuture();
        HttpPost post = t.getPost();
        if (f.isDone()) {
            System.out.println("Task is done!");
        } else {
            if (f.isCancelled()) {
                System.out.println("Task was cancelled!");
            } else {
                while (!f.isDone()) {
                    f.cancel(true);
                    try {
                        Thread.sleep(5000);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    System.out.println("!Aborting Post!");
                    try {
                        post.abort();
                    } catch (Exception ex) {
                        System.out.println("Aborted Post, swallowing exception: ");
                        ex.printStackTrace();
                    }
                }
            }
        }
    }
}
Is there an easier way or a better design? Right now I terminate all processing threads - in the future, I would like to terminate individual threads.
I think keeping a list of all the resources to be closed is not the best approach. In your current code, it seems that the HTTP request is initiated by the CallableTask, but the closing is done by somebody else. Closing a resource is the responsibility of whoever opened it, in my opinion.
I would let the CallableTask initiate the HTTP request, connect to the database, and do its stuff, and when it is finished or aborted, it should close everything it opened. This way you only have to keep track of the Future instances representing your currently running tasks.
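A minimal sketch of that ownership model; the DataSource wiring, the query, and postItem are hypothetical:
class SelfContainedTask implements Callable<Void> {
    private final String url;
    private final DataSource dataSource;

    SelfContainedTask(String url, DataSource dataSource) {
        this.url = url;
        this.dataSource = dataSource;
    }

    @Override
    public Void call() throws Exception {
        // try-with-resources closes the JDBC objects whether the task
        // finishes, fails, or is cancelled mid-query
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT item FROM queue")) {
            while (rs.next()) {
                if (Thread.currentThread().isInterrupted()) {
                    return null; // Future.cancel(true) was called
                }
                postItem(rs.getString(1)); // one HTTP POST per item, against 'url'
            }
        }
        return null;
    }

    private void postItem(String item) throws Exception {
        // issue the HTTP POST for a single item; abort it on interruption
    }
}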
I think your approach is correct. You would need to handle the rollback yourself when you are canceling the thread.
cancel() just calls interrupt() for an already executing thread. Have a look at
http://docs.oracle.com/javase/tutorial/essential/concurrency/interrupt.html, which says:
An interrupt is an indication to a thread that it should stop what it is doing and do something else. It's up to the programmer to decide exactly how a thread responds to an interrupt, but it is very common for the thread to terminate.
An interrupted thread will throw an InterruptedException when it is waiting, sleeping, or otherwise paused for a long time and another thread interrupts it using the interrupt() method in class Thread.
So you need to explicitly code for scenarios such as the one you mentioned, where the executing thread can be interrupted.
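A minimal sketch of coding for interruption inside the task; doOneUnitOfWork and cleanup are hypothetical:
Callable<Void> task = () -> {
    try {
        while (!Thread.currentThread().isInterrupted()) {
            doOneUnitOfWork(); // e.g. post one item
            Thread.sleep(100); // throws InterruptedException once cancel(true) is called
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt(); // restore the interrupt flag
    } finally {
        cleanup(); // abort the post, close Connection/Statement/ResultSet
    }
    return null;
};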
I am trying to write a mechanism that will manage my save-to-DB operation.
I send the server a list of objects; it iterates over them and saves each one.
Now, if any fail for some strange reason (an exception), they are saved to another list, which a timer processes every 5 seconds to try to re-save them.
I then have a locking problem, which I can solve with another boolean.
My function that saves the lost deals is:
private void saveLostDeals() {
    synchronized (unsavedDeals) {
        // use an explicit Iterator so removal during iteration is safe
        Iterator<DealBean> iterator = unsavedDeals.iterator();
        while (iterator.hasNext()) {
            DealBean unsavedDeal = iterator.next();
            boolean successfullySaved = reportDeal(unsavedDeal, false);
            if (successfullySaved) {
                iterator.remove();
            }
        }
    }
}
And here is my reportDeal() method, which is called both for regular reports and for lost-deal reports:
try {
    ...
} catch (HibernateException e) {
    ...
    if (fallback) {
        synchronized (unsavedDeals) {
            unsavedDeals.add(deal);
        }
    }
    session.getTransaction().rollback();
} finally {
    ....
}
Now, when a lost deal is saved, if an exception occurs, the synchronized block will stop it.
What do you have to say about this save fallback mechanism? Are there better design patterns to deal with this common issue?
I would suggest using either a proxy or aspects to handle the rollback/retry mechanism. The proxy could use something like the strategy pattern for advice on what action to take.
If you however don't want to retry immediately, but say in 5 seconds as you propose, I would consider building that into the contract of your database layer by providing asynchronous routines to begin with. Something like dao.scheduleStore(o); or dao.asyncStore(o);.
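A minimal sketch of that contract, reusing reportDeal() from the question; the DAO wiring is an assumption, and the single-threaded scheduler replaces both the synchronized block and the extra boolean:
class DealDao {
    // one thread owns all retries, so no explicit locking is needed
    private final ScheduledExecutorService retryPool =
            Executors.newSingleThreadScheduledExecutor();

    public void asyncStore(DealBean deal) {
        retryPool.execute(() -> storeWithRetry(deal));
    }

    public void scheduleStore(DealBean deal) {
        retryPool.schedule(() -> storeWithRetry(deal), 5, TimeUnit.SECONDS);
    }

    private void storeWithRetry(DealBean deal) {
        boolean saved = reportDeal(deal, false); // from the question
        if (!saved) {
            scheduleStore(deal); // try again in 5 seconds
        }
    }
}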
It depends. For example:
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to a fallback DB ---> return the response to the request
Request to save entity ---> exception occurs ---> DB connection problem ---> in the exception block, retry saving the entity to the application's in-memory store ---> return the response to the request
Request to save entity ---> exception occurs ---> unknown exception ---> in the exception block, save the entity to an XML file store [serialize it to XML] ---> return a response noting that the entity was temporarily saved and will be updated later
Timer ---> checks the file store for any serialized XML ---> updates the DB
Points to watch out for:
Async calls are better in such scenarios, rather than making the requesting client wait.
In the case of in-memory saving, watch out for the amount of data held in memory during a prolonged DB failure; that might bring down the whole application.
For transactions, decide whether you want to roll back or save their intermediate state.
Consistency of the data needs to be watched.