Read starts before the insert completes - java

@Transactional
public void save(String myIds) {
    synchronized (this) {
        List<mydata> data = getDataToSaveOrUpdate(myIds); // returns the new data list and updates old data
        repository.saveAll(data);
        logger.info("request processed");
    }
    logger.debug("exiting the method");
}
In this method, if I send two identical requests roughly 0.5 seconds apart, the getDataToSaveOrUpdate method of the second request starts reading data from the repository before the first request's saveAll has finished its job.
Note: one thing I noticed is that it works properly once I remove @Transactional.

Maybe what you need is LockModeType.PESSIMISTIC_WRITE.
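For example, a minimal sketch of a Spring Data JPA repository method with a pessimistic lock; MyData, MyDataRepository and findAllByIdIn are illustrative names, not taken from the question:

import java.util.List;
import javax.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;

// Hypothetical repository: MyData and findAllByIdIn stand in for the question's own types.
public interface MyDataRepository extends JpaRepository<MyData, Long> {

    // PESSIMISTIC_WRITE asks the database to lock the selected rows until the
    // surrounding transaction commits, so a concurrent transaction blocks on
    // this read instead of working with stale data.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    List<MyData> findAllByIdIn(List<Long> ids);
}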

Processing of the second request starts as soon as the first thread exits the synchronized block, but the transaction might not be committed by then; it is only committed after the method execution completes.
One possible solution is to add the synchronized keyword to the method itself.
@Transactional
public synchronized void save(String myIds) {
    List<mydata> data = getDataToSaveOrUpdate(myIds); // returns the new data list and updates old data
    repository.saveAll(data);
    logger.info("request processed");
    logger.debug("exiting the method");
}
One needs to be very careful when using the synchronized keyword. I don't know your exact requirements; maybe it is a valid usage for your scenario.
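If the real requirement is that a second request must not read until the first request's data is committed, another option, not mentioned in the answer above and offered only as a hedged sketch, is to manage the transaction programmatically with Spring's TransactionTemplate so the commit completes while the lock is still held:

import java.util.Collections;
import java.util.List;
import org.springframework.stereotype.Service;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.support.TransactionTemplate;

// Sketch only: SaveService, MyRepository, MyData and getDataToSaveOrUpdate are
// placeholder names standing in for the code in the question.
@Service
public class SaveService {

    private final TransactionTemplate transactionTemplate;
    private final MyRepository repository;

    public SaveService(PlatformTransactionManager txManager, MyRepository repository) {
        this.transactionTemplate = new TransactionTemplate(txManager);
        this.repository = repository;
    }

    public synchronized void save(String myIds) {
        // The whole transaction runs and commits before the lock is released,
        // so the next request cannot start reading until the data is visible.
        transactionTemplate.execute(status -> {
            List<MyData> data = getDataToSaveOrUpdate(myIds);
            repository.saveAll(data);
            return null;
        });
    }

    private List<MyData> getDataToSaveOrUpdate(String myIds) {
        // Placeholder for the lookup/merge logic from the question.
        return Collections.emptyList();
    }
}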

Event driven to continue request thread execution in Spring MVC

There is a method foo() in a controller that has to wait until another method, bar(), is triggered before it can continue execution.
#GetMapping("/foo")
public void foo(){
doSomething();
// wait until method bar() triggered
doAnotherSomething();
}
#GetMapping("/bar")
public void bar(){
// make foo() continue execute after being called
}
My current solution is to save a status flag in a database/cache; while foo() is waiting, its thread loops, checking whether the status has changed.
However, this solution blocks the request thread for seconds.
Is there any way to make the foo() method run asynchronously, so it won't block the request thread?
This question is too broad. Yes, you can use DeferredResult to finish a web request later, but doAnotherSomething() should actually do its work asynchronously; otherwise you still end up occupying a thread, just not one from the app server's pool. That would be a waste, since you could simply increase the app server's pool size and be done with it. "Offloading" work from it to another pool is a wild goose chase.
You achieve truly asynchronous execution when you can wait on more than one action in a single thread. For example, with asynchronous file or socket channels you can read from multiple files/sockets at once. If you're using a database, the database driver must support asynchronous execution.
Here's an example of how to use the MongoDB async driver:
#GetMapping("/foo")
public DeferredResult<ResponseEntity<?>> foo() {
DeferredResult<ResponseEntity<?>> res = new DeferredResult<>();
doSomething();
doAnotherSomething(res);
return res;
}
void doAnotherSomething(DeferredResult<ResponseEntity<?>> res) {
collection.find().first(new SingleResultCallback<Document>() {
public void onResult(final Document document, final Throwable t) {
// process (document)
res.setResult(ResponseEntity.ok("OK")); // finish the request
}
});
}
You can use a CountDownLatch to wait until the dependent method has executed. For the sake of simplicity, I have used a static field; make sure both methods have access to the same CountDownLatch object. A ThreadLocal<CountDownLatch> could also be considered for this use case.
private static CountDownLatch latch = new CountDownLatch(1);

@GetMapping("/foo")
public void foo() throws InterruptedException { // latch.await() can throw InterruptedException
    doSomething();
    // wait until method bar() is triggered
    latch.await();
    doAnotherSomething();
}

@GetMapping("/bar")
public void bar() {
    // make foo() continue execution after bar() is called
    latch.countDown();
}

Java Threading Unexpected Behavior

We have been looking at a threading error for a while and are not sure how this is possible. Below is a minimized example from our code. There is a cache holding data retrieved from a database (or: "a lengthy synchronous operation", as far as this example is concerned). There is a thread for reloading the cache, while other threads try to query the cache. There is a period of time when the cache is null, waiting to be reloaded. It should not be queryable during this time, and we tried to enforce this by synchronizing the methods that access the cache - both for reading and writing. Yet if you run this class for a while, you will get NPEs in search(). How is this possible?
Java docs state that "it is not possible for two invocations of synchronized methods on the same object to interleave. When one thread is executing a synchronized method for an object, all other threads that invoke synchronized methods for the same object block (suspend execution) until the first thread is done with the object".
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CacheMultithreading01 {
    private long dt = 1000L;

    public static void main(String[] args) {
        CacheMultithreading01 cm = new CacheMultithreading01();
        cm.demonstrateProblem();
    }

    void demonstrateProblem() {
        QueryableCache cache = new QueryableCache();
        runInLoop("Reload", new Runnable() {
            @Override
            public void run() {
                cache.reload();
            }
        });
        runInLoop("Search", new Runnable() {
            @Override
            public void run() {
                cache.search(2);
            }
        });
        // If the third "runInLoop" is commented out, no NPEs
        runInLoop("_Clear", new Runnable() {
            @Override
            public void run() {
                cache.clear();
            }
        });
    }

    void runInLoop(String threadName, Runnable r) {
        new Thread(new Runnable() {
            @Override
            public synchronized void run() {
                while (true) {
                    try {
                        r.run();
                    } catch (Exception e) {
                        log("Error");
                        e.printStackTrace();
                    }
                }
            }
        }, threadName).start();
    }

    void log(String s) {
        System.out.format("%d %s %s\n", System.currentTimeMillis(),
                Thread.currentThread().getName(), s);
    }

    class QueryableCache {
        private List<Integer> cache = new ArrayList<>();

        public synchronized void reload() {
            clear();
            slowOp(); // simulate retrieval from database
            cache = new ArrayList<>(Arrays.asList(1, 2, 3));
        }

        public synchronized void clear() {
            cache = null;
        }

        public synchronized Integer search(Integer element) {
            if (cache.contains(element))
                return element;
            else
                return null;
        }

        private void slowOp() {
            try {
                Thread.sleep(dt);
            } catch (InterruptedException e) {
            }
        }
    }
}
//java.lang.NullPointerException
//at examples.multithreading.cache.CacheMultithreading01$QueryableCache.search(CacheMultithreading01.java:73)
//at examples.multithreading.cache.CacheMultithreading01$2.run(CacheMultithreading01.java:26)
//at examples.multithreading.cache.CacheMultithreading01$4.run(CacheMultithreading01.java:44)
//at java.lang.Thread.run(Thread.java:745)
We do not understand why the NPEs can happen even though the code is synchronized. We also do not understand why the NPEs stop happening if we comment out the third call to runInLoop (the one that does cache.clear).
We have also tried to implement locking using a ReentrantReadWriteLock - and the result is the same.
Since you don't have any more advanced locking, clear() and search() can be called back to back, and that will obviously cause an NPE.
Calling reload() and search() won't cause problems, since in reload the cache is cleared, then rebuilt, inside a synchronized block, preventing other (search) operations from being executed in between.
Why is there a clear() method that will leave cache in a "bad" state (which search() doesn't even check for)?
You have to check in the search method whether cache is null. Otherwise, calling contains on it in search can throw a NullPointerException if you have previously set cache to null in the clear() method.
Synchronization is working correctly.
The problem is that the clear method sets cache to null, and there is no guarantee that reload() will be called before search().
Also, note that the reload method does not release the lock while it runs, so while it is waiting for slowOp to finish, the other methods cannot execute.
"There is a period of time when the cache is null, waiting to be reloaded."
This is your problem: clear sets the cache to null and then returns, releasing the synchronization lock and allowing someone else to access the cache.
It would be better to make the assignment of the new list atomic and not to clear() at all.
Assuming that slowOp() needs to return the data for the cache (private List<Integer> slowOp()), you retrieve that data before assigning it:
List<Integer> waitingForData = slowOp();
cache = waitingForData;
This updates the cache only after the data is available. Assignment is an atomic operation; nothing can access cache while the reference is being updated.
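Putting those suggestions together, a minimal sketch of a QueryableCache that publishes a fully built list in one assignment and tolerates a null cache might look like this:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the fixes suggested above: reload() builds the new list in a local
// variable and assigns it in one step, and search() checks for null instead of
// assuming the cache is always present.
class QueryableCache {
    private List<Integer> cache = new ArrayList<>();

    public synchronized void reload() {
        List<Integer> fresh = slowOp(); // build the new data first
        cache = fresh;                  // publish it in a single assignment
    }

    public synchronized void clear() {
        cache = null;
    }

    public synchronized Integer search(Integer element) {
        List<Integer> current = cache;
        if (current != null && current.contains(element))
            return element;
        return null;
    }

    private List<Integer> slowOp() {
        try {
            Thread.sleep(1000L); // simulate retrieval from database
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new ArrayList<>(Arrays.asList(1, 2, 3));
    }
}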
Three different threads are invoking clear(), search() and reload() on the cache without a deterministic interleaving. Since the interleaving does not guarantee the order in which the lock is obtained by the clear() and search() threads, there is a chance that the search thread acquires the lock on the object right after the clear() thread. In that case the search results in a NullPointerException.
You may have to check whether the cache is null in the search() method, and perhaps do a reload() from within search(). That would guarantee the search result, or return null as applicable.

Can two threads share the same JPA transaction?

I am writing an integration test in JUnit for a Message Driven Pojo (MDP):
@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
}
where repo is a spring-data repository
my unit test:
@Test
public void test() {
    // send message
    sendJMSMessage();
    // verify DB state
    Entity e = repo.findOne("ID");
    assertThat(e, is(notNullValue()));
}
Now, the thing is that the processMessage() method is executed in a different thread than the test() method, so I figured that I need to somehow wait for processMessage() to complete before verifying the state of the DB. The best solution I could find was based on a CountDownLatch, so now the methods look like this:
@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
    latch.countDown();
}
and the test
@Test
public void test() {
    // set up the CountDownLatch
    CountDownLatch latch = new CountDownLatch(1);
    JMSProcessor.setLatch(latch);
    // send message
    sendJMSMessage();
    try {
        latch.await();
    } catch (InterruptedException e) {
        throw new RuntimeException(e);
    }
    // verify DB state
    Entity e = repo.findOne("ID");
    assertThat(e, is(notNullValue()));
}
So I was very proud of myself, and then I ran the test and it failed: repo.findOne("ID") returned null. My first reaction was to set a breakpoint at that line and proceed with debugging. During the debugging session, repo.findOne("ID") actually returned the entity inserted by the @JmsListener-annotated method.
After scratching my head for a while, here's the current theory: since the spring-data repository is accessed from two different threads, it gets two different EntityManager instances, and therefore the two threads are in different transactions. Even though there is some synchronization via the CountDownLatch, the transaction bound to the thread executing the @JmsListener-annotated method has not committed yet when the JUnit @Test-annotated method starts a new transaction and tries to retrieve the entity (see the sketch after the questions below).
So my questions are:
Is there a way for one thread to wait for the commit of the other?
Can two threads share one transaction in such a synchronized context (i.e., the two threads would never access the EntityManager simultaneously)?
Is my testing approach nonsense, and is there a better way of doing this?
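One hedged direction for the first question (this is a sketch, not code from the post): assuming the listener runs inside a Spring-managed transaction and Spring 5.3+ is used (where TransactionSynchronization has default methods), the latch can be counted down only after the commit:

// Sketch: count the latch down in an after-commit callback so the test thread
// only queries the database once the listener's transaction is visible.
// Requires org.springframework.transaction.support.TransactionSynchronization
// and TransactionSynchronizationManager.
@JmsListener(destination = "jms/Queue", containerFactory = "cf")
public void processMessage(TextMessage message) throws JMSException {
    repo.save(new Entity("ID"));
    TransactionSynchronizationManager.registerSynchronization(new TransactionSynchronization() {
        @Override
        public void afterCommit() {
            latch.countDown(); // signal only after the data is committed
        }
    });
}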

ContentProvider insert() always runs on UI thread?

I have an app that needs to pull data from a server and insert it into an SQLite database in response to user input. I thought this would be pretty simple - the code that pulls the data from the server is a fairly straightforward subclass of AsyncTask, and it works exactly as I expect it to without hanging the UI thread. I implemented callback functionality for it with a simple interface and wrapped it in a static class, so my code looks like this:
MyServerCaller.getFolderContents(folderId, new OnFolderContentsResponseListener() {
    @Override
    public void onFolderContentsResponse(final List<FilesystemEntry> contents) {
        // do something with contents
    }
});
All still good. Even if the server takes an hour to retrieve the data, the UI still runs smoothly, because the code in getFolderContents runs in the doInBackground method of an AsyncTask (which is in a separate thread from the UI). At the very end of the getFolderContents method, onFolderContentsResponse is called and passed the list of FilesystemEntry objects received from the server. I only really say all this so that it's hopefully clear that my problem is not in the getFolderContents method or in any of my networking code, because the problem never occurs there.
The problem arises when I try to insert into a database via my subclass of ContentProvider within the onFolderContentsResponse method; the UI always hangs while that code is executing, leading me to believe that despite being called from the doInBackground method of an AsyncTask, the inserts are somehow still running on the UI thread. Here's what the problematic code looks like:
MyServerCaller.getFolderContents(folderId, new OnFolderContentsResponseListener() {
    @Override
    public void onFolderContentsResponse(final List<FilesystemEntry> contents) {
        insertContentsIntoDB(contents);
    }
});
And the insertContentsIntoDB method:
void insertContentsIntoDB(final List<FilesystemEntry> contents) {
    for (FilesystemEntry entry : contents) {
        ContentValues values = new ContentValues();
        values.put(COLUMN_1, entry.attr1);
        values.put(COLUMN_2, entry.attr2);
        // etc.
        mContentResolver.insert(MyContentProvider.CONTENT_URI, values);
    }
}
where mContentResolver has been previously set to the result of the getContentResolver() method.
I've tried putting insertContentsIntoDB in its own Thread, like so:
MyServerCaller.getFolderContents(folderId, new OnFolderContentsResponseListener() {
    @Override
    public void onFolderContentsResponse(final List<FilesystemEntry> contents) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                insertContentsIntoDB(contents);
            }
        }).run();
    }
});
I've also tried running each individual insert in its own thread (the insert method in MyContentProvider is synchronized, so this shouldn't cause any issues there):
void insertContentsIntoDB(final List<FilesystemEntry> contents) {
    for (final FilesystemEntry entry : contents) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                ContentValues values = new ContentValues();
                values.put(COLUMN_1, entry.attr1);
                values.put(COLUMN_2, entry.attr2);
                // etc.
                mContentResolver.insert(MyContentProvider.CONTENT_URI, values);
            }
        }).run();
    }
}
And just for good measure, I've also tried both of those solutions with the relevant code in the doInBackground method of another AsyncTask. Finally, I've explicitly defined MyContentProvider as living in a separate process in my AndroidManifest.xml:
<provider android:name=".MyContentProvider" android:process=":remote"/>
It runs fine, but it still seems to run in the UI thread. That's the point where I really started tearing my hair out over this, because that doesn't make any sense at all to me. No matter what I do, the UI always hangs during the inserts. Is there any way to get them not to?
Instead of calling mContentResolver.insert(), use AsyncQueryHandler and its startInsert() method. AsyncQueryHandler is designed to facilitate asynchronous ContentResolver queries.
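A minimal sketch of that approach; InsertHandler and TOKEN_INSERT are illustrative names, not from the question:

// Sketch of AsyncQueryHandler usage: the insert runs on the handler's worker
// thread and the completion callback returns to the thread that created it.
import android.content.AsyncQueryHandler;
import android.content.ContentResolver;
import android.net.Uri;

class InsertHandler extends AsyncQueryHandler {
    static final int TOKEN_INSERT = 1;

    InsertHandler(ContentResolver resolver) {
        super(resolver);
    }

    @Override
    protected void onInsertComplete(int token, Object cookie, Uri uri) {
        // Called back on the thread that created the handler (typically the UI
        // thread) once the insert has finished on the worker thread.
    }
}

// Usage, e.g. inside insertContentsIntoDB():
// InsertHandler handler = new InsertHandler(mContentResolver);
// handler.startInsert(InsertHandler.TOKEN_INSERT, null, MyContentProvider.CONTENT_URI, values);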
I think your original problem may have been that you are calling the run method on your new thread (which causes execution to continue on the current thread) instead of calling the start method. I think this is what Bright Great was trying to say in his/her answer. See Difference between running and starting a thread. It's a common mistake.
First, relax; things will look clearer. To start a new Thread you have to call start(), not run(): calling run() alone does not start a new thread.
new Thread(runnable).start();
Also, using a Handler can sometimes be a better fit than AsyncTask.
You can run the query in the overridden doInBackground() method of the AsyncTask and update the main UI in the onPostExecute() method.
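A hedged sketch of that approach; InsertTask and its type parameters are illustrative, and it reuses the column constants, content URI and FilesystemEntry type from the question:

import java.util.List;
import android.content.ContentResolver;
import android.content.ContentValues;
import android.os.AsyncTask;

// Sketch only: the inserts run in doInBackground() on a background thread,
// and onPostExecute() runs back on the UI thread for any view updates.
class InsertTask extends AsyncTask<Void, Void, Integer> {
    private final ContentResolver resolver;
    private final List<FilesystemEntry> contents;

    InsertTask(ContentResolver resolver, List<FilesystemEntry> contents) {
        this.resolver = resolver;
        this.contents = contents;
    }

    @Override
    protected Integer doInBackground(Void... ignored) {
        for (FilesystemEntry entry : contents) {
            ContentValues values = new ContentValues();
            values.put(COLUMN_1, entry.attr1);
            values.put(COLUMN_2, entry.attr2);
            resolver.insert(MyContentProvider.CONTENT_URI, values);
        }
        return contents.size();
    }

    @Override
    protected void onPostExecute(Integer inserted) {
        // Runs on the UI thread; safe to update views here.
    }
}

// Usage: new InsertTask(getContentResolver(), contents).execute();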

Java Micro Edition - HTTP sending/receiving using Threads and Delegates (how to update UI)

I'm making a shopping list app which basically uploads your shopping list to a php file and also downloads all the updates anyone else has made to the list.
I'm using record stores w/ record enumeration and an item object
Basically, I want to be able to send off all the elements in the record store to the PHP script using a thread. The trouble is: do I need to pass the record store to the thread, and how do I get the data back and update the record store?
At the moment I can't see how my send-data class is going to update the record store of the main MIDlet.
Thanks in advance
edit
public GetDataClass(Midlet parentMidlet, String URL, RecordStore tempRecordStore) {
    try {
        populateLocalRecordStore(tempRecordStore);
        this.parentMidlet = parentMidlet;
        this.URL = URL;
    } catch (Exception e) {
        System.out.println(e.toString());
    }
}
The populateLocalRecordStore method takes the record store that was passed in and literally loops through all its records, inserting them into the local record store. The problem comes when I want to pass the data back to the main form/record store.
edit
How do I update the record store in the main form from within the thread (i.e., with what has been returned from the HTTP request)?
If I understand you correctly (please let me know if I am not), you want to know how you can get a return value from the background thread that should update the UI: the thread you are using to populate the local record store.
So, if your GetDataClass is a Runnable (started with new Thread(runnable)), you can do something similar to this:
public class GetDataClass implements Runnable {
    private Midlet parentMidlet;
    private String URL;
    private RecordStore tempRecordStore;

    public GetDataClass(Midlet parentMidlet, String URL, RecordStore tempRecordStore) {
        this.parentMidlet = parentMidlet;
        this.URL = URL;
        this.tempRecordStore = tempRecordStore;
    }

    public void run() {
        try {
            // the actual type here depends on what populateLocalRecordStore returns
            Object returnData = populateLocalRecordStore(tempRecordStore);
            parentMidlet.updateForm(returnData);
        } catch (Exception e) {
            // log and do exception handling
        }
    }
}
You can do what Jarle said, but that is not enough. Record stores can't handle concurrency by themselves, so if you have two threads accessing the same record store, you'll need to use some locks to control the access:
On your main thread:
private Object lock = new Object();

public Object getLock() {
    return lock;
}

public void retrieveDataFromRS() {
    synchronized (getLock()) {
        // read, edit, whatever your RS here and release the lock
    }
}
On your GetDataClass thread:
public void run() {
    synchronized (parentMidlet.getLock()) {
        // read, edit, whatever your RS here and release the lock
    }
}
You can synchronize on the class or on the thread itself, but creating a dedicated lock object gives you more control: you can make one thread wait until there is more data (or whatever you need) with getLock().wait(), which blocks until someone notifies the lock, and then wake the threads waiting on the lock with getLock().notify() (or notifyAll() to wake all of them) so they start running again.
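A small sketch of that wait/notify idea on the shared lock; waitForData(), dataArrived() and the dataReady flag are illustrative names and assumptions, not code from the post:

// On the main MIDlet: block until the worker signals that new data is in the RS.
public void waitForData() throws InterruptedException {
    synchronized (getLock()) {
        while (!dataReady) {      // guard against spurious wake-ups
            getLock().wait();     // releases the lock while waiting
        }
        // read the freshly updated record store here
    }
}

// On the GetDataClass thread, after updating the record store:
public void dataArrived() {
    synchronized (parentMidlet.getLock()) {
        dataReady = true;
        parentMidlet.getLock().notifyAll(); // wake every thread waiting on the lock
    }
}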
