Selecting Data from Room without observing - java

I need to select data from a table, manipulate it, and then insert it into another table. This only happens the first time the app is opened each day, and the result isn't going to be used in the UI. I don't want to use LiveData because nothing needs to observe the data, but when I was looking into how to do this, most people said I should use LiveData. I've tried using AsyncTask, but I get the error "Cannot access database on the main thread since it may potentially....".
Here is the code for my AsyncTask
public class getAllClothesArrayAsyncTask extends AsyncTask<ArrayList<ClothingItem>, Void, ArrayList<ClothingItem>[]> {
    private ClothingDao mAsyncDao;

    getAllClothesArrayAsyncTask(ClothingDao dao) { mAsyncDao = dao; }

    @Override
    protected ArrayList<ClothingItem>[] doInBackground(ArrayList<ClothingItem>... arrayLists) {
        List<ClothingItem> clothingList = mAsyncDao.getAllClothesArray();
        ArrayList<ClothingItem> arrayList = new ArrayList<>(clothingList);
        return arrayLists;
    }
}
And this is how I'm calling it in my activity
mClothingViewModel = new ViewModelProvider(this).get(ClothingViewModel.class);
clothingItemArray = mClothingViewModel.getClothesArray();
What is the best practice in this situation?

Brief summary:
Room really doesn't let you do anything (query|insert|update|delete) on the main thread. You can switch this check off on the RoomDatabase.Builder (allowMainThreadQueries()), but you'd better not.
If you don't care about the UI, at a minimum you can just put your Room code (as a Runnable) on one of Thread, Executor, or AsyncTask (though AsyncTask was deprecated in API level 30)... I've put examples below.
For a one-shot DB operation like this, I think the best practices are Coroutines (for those who use Kotlin in their projects) and RxJava (for those who use Java, with Single|Maybe as return types; see the sketch after this list). They give you many more possibilities, but you have to invest time to understand these mechanisms.
To observe a data stream from Room there are LiveData, Coroutines Flow, and RxJava (Flowable).
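For illustration, a minimal RxJava 2 sketch of such a one-shot read (an assumption, not the question's code: it requires the androidx.room:room-rxjava2 artifact and a DAO method changed to return a Single):

// In the DAO (hypothetical query mirroring getAllClothesArray):
// @Query("SELECT * FROM clothing_table")
// Single<List<ClothingItem>> getAllClothesArray();

mAsyncDao.getAllClothesArray()
        .subscribeOn(Schedulers.io()) // run the query off the main thread
        .subscribe(
                clothingList -> { /* manipulate and insert into the other table */ },
                throwable -> Log.e("Clothing", "query failed", throwable));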
Several examples of thread-switching with lambdas (if you deliberately don't want to learn the more advanced stuff):
Just a Thread
new Thread(() -> {
    List<ClothingItem> clothingList = mAsyncDao.getAllClothesArray();
    // ... next operations
}).start(); // don't forget start(), or the thread never runs
Executor
Executors.newSingleThreadExecutor().submit(() -> {
    List<ClothingItem> clothingList = mAsyncDao.getAllClothesArray();
    // ... next operations
});
AsyncTask
AsyncTask.execute(() -> {
    List<ClothingItem> clothingList = mAsyncDao.getAllClothesArray();
    // ... next operations
});
If you use the Repository pattern, you can put all this thread-switching there, as in the sketch below.
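A minimal repository sketch (the class name, executor field, and callback are assumptions for illustration, not part of the question's code):

public class ClothingRepository {
    private final ClothingDao mDao;
    private final Executor mExecutor = Executors.newSingleThreadExecutor();

    public ClothingRepository(ClothingDao dao) { mDao = dao; }

    // Runs the query off the main thread and hands the result to a callback
    public void getAllClothesArray(Consumer<List<ClothingItem>> callback) {
        mExecutor.execute(() -> callback.accept(mDao.getAllClothesArray()));
    }
}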
Another useful link to read about life after AsyncTask deprecation

Related

how to convert Flux<pojo> to ArrayList<String>

In my Spring Boot service class, I have created the following code, which is not working as desired:
Service class:
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
        .retrieve().bodyToFlux(Workspace.class);
ArrayList<String> newmWorkspace = new ArrayList();
newmWorkspace = mWorkspace.blockLast();
return newmWorkspace;
Could someone please help me convert the list of JSON values into an ArrayList?
Json
[
    {
        "id": "123abc"
    },
    {
        "id": "123abc"
    }
]
Why is the code not working as desired
mWorkspace is a publisher of one or many items of type Workspace.
Calling mWorkspace.blockLast() will get a Workspace from that Publisher: an object of type Workspace, not of type ArrayList<String>.
That's why you get: Type mismatch: cannot convert from Workspace to ArrayList<String>
Converting from Flux to an ArrayList
First of all, in reactive programming, a Flux is not meant to be blocked; the blockXxx methods are meant for testing purposes. If you find yourself using them, you may not need reactive logic at all.
In your service, you could try this:
// initialize the list
ArrayList<String> newmWorkspace = new ArrayList<>();
Flux<Workspace> mWorkspace = webClient.get().uri(WORKSPACEID)
        .retrieve().bodyToFlux(Workspace.class)
        .map(workspace -> {
            // feed the list
            newmWorkspace.add(workspace.getId());
            return workspace;
        });
// this line will trigger the publication of items, hence feeding the list
mWorkspace.subscribe();
Just in case you want to convert a JSON String to a POJO:
String responseAsjsonString = "[{\"id\": \"123abc\"},{\"id\": \"123cba\"}] ";
Workspace[] workspaces = new ObjectMapper().readValue(responseAsjsonString, Workspace[].class);
You would usually want to avoid blocking in a non-blocking application. However, if you are migrating from blocking to non-blocking step by step (and not mixing blocking and non-blocking calls in your production code), or you are on a Servlet-stack app that only uses the WebFlux client, it should be fine.
With that being said, a Flux is a Publisher that represents an asynchronous sequence of 0..N emitted items. When you call blockLast, you block until the Flux completes and then get its last emitted item, which is a Workspace object.
You want to collect each emitted item into a list and return that. For this purpose there is a handy method called collectList, which does the job without blocking the stream. You can then block on the Mono<List<Workspace>> it returns to retrieve the list.
So this should give you the result you want:
List<Workspace> workspaceList = workspaceFlux.collectList().block();
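If you specifically need the ArrayList<String> of ids that the question asks for, a small variation works (assuming Workspace exposes a getId() accessor, as the question's code suggests):

List<String> ids = workspaceFlux.map(Workspace::getId).collectList().block();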
If you must use a blocking call inside the reactive stack, then to avoid blocking the event loop you should subscribe to it on a different scheduler; for I/O purposes, that is the boundedElastic scheduler. You almost never want to call block in a reactive stack: subscribe instead, or better, let WebFlux handle the subscription by returning the publisher from your controller (or handler).
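A minimal sketch of that idea (blockingSave is a hypothetical method standing in for whatever blocking call you must make):

Mono.fromCallable(() -> blockingSave(workspaceList)) // wrap the blocking call
        .subscribeOn(Schedulers.boundedElastic())    // run it off the event loop
        .subscribe();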

Android HTML parsing with multithreading

I am developing an app which requires HTML parsing, so I'm currently using jsoup in an AsyncTaskLoader like this (example):
@Override
public Boolean loadInBackground() {
    try {
        Connection.Response response = Jsoup.connect(getContext().getString(R.string.url_login))
                .data("id", account_id, "password", account_password)
                .timeout(5000)
                .method(Connection.Method.POST)
                .execute();
        String cookie = response.cookie("JSESSIONID");
        Document document = Jsoup.connect(getContext().getString(R.string.url_schedule))
                .cookie("JSESSIONID", cookie)
                .get();
        Element table = document.select("table").first();
        if (table != null) {
            databaseHandler.openDatabase();
            databaseHandler.getDatabase().beginTransaction();
            try {
                for (Element row : table.select("tr")) {
                    Elements columns = row.select("td");
                    addItem(columns, DatabaseHandler.getTableName());
                }
                databaseHandler.getDatabase().setTransactionSuccessful();
            } finally {
                databaseHandler.getDatabase().endTransaction();
            }
            databaseHandler.closeDatabase();
        }
    } catch (IOException e) {
        e.printStackTrace();
    }
    return null;
}
This scrapes just one page, and there are a few of them. I noticed that its speed is not very good, and I have been told that I should consider multithreading and parse all these pages in separate threads at the same time so it would be faster. Now I have a few questions:
Should I still use AsyncTaskLoader or AsyncTask, or is there something else (better) for this? I want to know the best practice here.
Can anyone point me to tutorials/examples of how to do multithreading on Android?
Thanks ;)
AsyncTask should work well for this. However, as per the docs for AsyncTask, its default model is still "one task at a time" unless you use the executeOnExecutor method.
From http://developer.android.com/reference/android/os/AsyncTask.html :
When first introduced, AsyncTasks were executed serially on a single background thread. Starting with DONUT, this was changed to a pool of threads allowing multiple tasks to operate in parallel. Starting with HONEYCOMB, tasks are executed on a single thread to avoid common application errors caused by parallel execution.
If you truly want parallel execution, you can invoke executeOnExecutor(java.util.concurrent.Executor, Object[]) with THREAD_POOL_EXECUTOR.
You didn't say how many pages there were to parse, so just make sure you limit the number of simultaneous tasks to a reasonable number.
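A sketch of what that could look like here (ParsePageTask and pageUrls are illustrative names, not from the question):

private static class ParsePageTask extends AsyncTask<String, Void, Document> {
    @Override
    protected Document doInBackground(String... urls) {
        try {
            // fetch and parse one page off the main thread
            return Jsoup.connect(urls[0]).timeout(5000).get();
        } catch (IOException e) {
            return null; // log/handle as appropriate
        }
    }

    @Override
    protected void onPostExecute(Document document) {
        // runs on the main thread; hand the parsed page to the DB layer here
    }
}

// Launch the page parses in parallel instead of serially:
for (String url : pageUrls) {
    new ParsePageTask().executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR, url);
}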

Ideas on concurrent datastructure

I am not sure I can put my question in the clearest fashion, but I will try my best.
Let's say I am retrieving some information from a third-party API. The retrieved information will be huge. To gain performance, instead of retrieving all the info in one go, I retrieve it in a paged fashion (the API gives me that facility; basically an iterator). The return type is basically a list of objects.
My aim here is to process the information I have in hand (that includes comparing it, storing it in the DB, and many other operations) while I get the paged responses to the request.
My question to the expert community is: what data structure would you prefer in such a case? Also, does a framework like Spring Batch help with performance in such cases?
I know the question is a bit vague, but I am looking for general ideas, tips, and pointers.
In these cases, the data structure for me is java.util.concurrent.CompletionService.
For purposes of example, I'm going to assume a couple of additional constraints:
You want only one outstanding request to the remote server at a time
You want to process the results in order.
Here goes:
// a class that knows how to update the DB given a page of results
class DatabaseUpdater implements Callable<Object> { ... }

// a background thread to do the work
final CompletionService<Object> exec = new ExecutorCompletionService<>(
        Executors.newSingleThreadExecutor());

// first call
List<Object> results = ThirdPartyAPI.getPage( ... );

// Start loading those results to DB on background thread
exec.submit(new DatabaseUpdater(results));

while( you need to ) {
    // Another call to remote service
    results = ThirdPartyAPI.getPage( ... );
    // wait for existing work to complete
    exec.take();
    // send more work to background thread
    exec.submit(new DatabaseUpdater(results));
}

// wait for the last task to complete
exec.take();
This is just a simple two-thread design. The first thread is responsible for getting data from the remote service and the second is responsible for writing to the database.
Any exception thrown by DatabaseUpdater will be propagated to the main thread when you call get() on the Future returned by exec.take().
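For illustration, a minimal sketch of surfacing such a failure (the handling shown is an assumption, not part of the original answer):

try {
    Future<Object> done = exec.take(); // blocks until a submitted task completes
    done.get();                        // rethrows the task's failure, wrapped
} catch (ExecutionException e) {
    // DatabaseUpdater threw; e.getCause() is the original exception
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}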
Good luck.
In terms of doing the actual parallelism, one very useful construct in Java is the ThreadPoolExecutor. A rough sketch of what that might look like is this:
public class YourApp {
    static class Processor implements Runnable {
        Widget toProcess;

        public Processor(Widget toProcess) {
            this.toProcess = toProcess;
        }

        public void run() {
            // commit the Widget to the DB, etc
        }
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor =
                new ThreadPoolExecutor(1, 10, 30,
                        TimeUnit.SECONDS,
                        new LinkedBlockingDeque<>());
        while (thereAreStillWidgets()) {
            ArrayList<Widget> widgets = doExpensiveDatabaseCall();
            for (Widget widget : widgets) {
                Processor processor = new Processor(widget);
                executor.execute(processor);
            }
        }
    }
}
But as I said in a comment: calls to an external API are expensive. It's very likely that the best strategy is to pull all the Widget objects down from the API in one call, and then process them in parallel once you've got them. Every extra API call pays the overhead of sending the data all the way from the server to you, so it's probably best to pay that cost as few times as you can.
Also, keep in mind that if you're doing DB operations, it's possible that your DB doesn't allow for parallel writes, so you might get a slowdown there.

Play Framework await() makes the application act weird

I am having some strange trouble with the method await(Future future) of the Controller.
Whenever I add an await line anywhere in my code, some GenericModels, which have nothing to do with where I placed the await, start loading incorrectly, and I cannot access any of their attributes.
The weirdest thing is that if I change something in a completely different Java file anywhere in the project, Play recompiles, I guess, and at that moment everything starts working perfectly, until I clean tmp again.
When you use await in a controller, Play does bytecode enhancement to break a single method into two threads. This is pretty cool, but definitely one of the 'black magic' tricks of Play 1. And this is one place where Play often acts weird and requires a restart (or, as you found, some code change); the other place it can act strange is when you change a Model class.
http://www.playframework.com/documentation/1.2.5/asynchronous#SuspendingHTTPrequests
To make it easier to deal with asynchronous code we have introduced continuations. Continuations allow your code to be suspended and resumed transparently. So you write your code in a very imperative way, as:

public static void computeSomething() {
    Promise delayedResult = veryLongComputation(…);
    String result = await(delayedResult);
    render(result);
}

In fact here, your code will be executed in 2 steps, in 2 different threads. But as you see it, it’s very transparent for your application code.
Using await(…) and continuations, you could write a loop:

public static void loopWithoutBlocking() {
    for(int i=0; i<=10; i++) {
        Logger.info(i);
        await("1s");
    }
    renderText("Loop finished");
}

And using only 1 thread (which is the default in development mode) to process requests, Play is able to run these loops concurrently for several requests at the same time.
To respond to your comment:
public static void generatePDF(Long reportId) {
    Promise<InputStream> pdf = new ReportAsPDFJob(report).now();
    InputStream pdfStream = await(pdf);
    renderBinary(pdfStream);
}
and ReportAsPDFJob is simply a Play Job class with doJobWithResult overridden, so it returns the object. See http://www.playframework.com/documentation/1.2.5/jobs for more on jobs.
Calling job.now() returns a future/promise, which you can use like this: await(job.now())
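For illustration, a minimal sketch of such a job (the PDF-rendering body and PdfRenderer helper are assumptions; in Play 1, a job that returns a result extends Job<V> and overrides doJobWithResult):

public class ReportAsPDFJob extends Job<InputStream> {
    private final Report report;

    public ReportAsPDFJob(Report report) {
        this.report = report;
    }

    @Override
    public InputStream doJobWithResult() {
        // render the report to PDF and return the stream (assumed helper)
        return PdfRenderer.render(report);
    }
}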

Guava CacheBuilder removal listener

Please show me where I'm missing something.
I have a cache built by CacheBuilder inside a DataPool. DataPool is a singleton object whose instance various threads can get and act on. Right now I have a single thread which produces data and adds it into the said cache.
To show the relevant part of the code:
private InputDataPool(){
    cache=CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
        new RemovalListener(){
            {
                logger.debug("Removal Listener created");
            }
            public void onRemoval(RemovalNotification notification) {
                System.out.println("Going to remove data from InputDataPool");
                logger.info("Following data is being removed:"+notification.getKey());
                if(notification.getCause()==RemovalCause.EXPIRED)
                {
                    logger.fatal("This data expired:"+notification.getKey());
                }else
                {
                    logger.fatal("This data didn't expired but evacuated intentionally"+notification.getKey());
                }
            }
        }
    ).build(new CacheLoader(){
        @Override
        public Object load(Object key) throws Exception {
            logger.info("Following data being loaded"+(Integer)key);
            Integer uniqueId=(Integer)key;
            return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
        }
    });
}
public static InputDataPool getInstance(){
    if(clsInputDataPool==null){
        synchronized(InputDataPool.class){
            if(clsInputDataPool==null)
            {
                clsInputDataPool=new InputDataPool();
            }
        }
    }
    return clsInputDataPool;
}
From the said thread the call being made is as simple as
while(true){
    inputDataPool.insertDataIntoPool(inputDataPacket);
    //call some logic which comes with inputDataPacket and sleep for 2 seconds.
}
and where inputDataPool.insertDataIntoPool is like
inputDataPool.insertDataIntoPool(InputDataPacket inputDataPacket){
    cache.get(inputDataPacket.getId());
}
Now the question is: the element in the cache is supposed to expire after 1000 nanoseconds, so when inputDataPool.insertDataIntoPool is called the second time, the data inserted the first time should be evicted, as it must have expired, the call being made 2 seconds after its insertion. And then, correspondingly, the removal listener should be called.
But this is not happening. I looked into the cache stats, and evictionCount is always zero, no matter how many times cache.get(id) is called.
But importantly, if I extend inputDataPool.insertDataIntoPool like this:
inputDataPool.insertDataIntoPool(InputDataPacket inputDataPacket){
    cache.get(inputDataPacket.getId());
    try{
        Thread.sleep(2000);
    }catch(InterruptedException ex){
        ex.printStackTrace();
    }
    cache.get(inputDataPacket.getId());
}
then the eviction takes place as expected, with the removal listener being called.
Now I'm quite clueless about what I'm missing to expect such behaviour. Please help me see it, if you spot something.
P.S. Please ignore any typos. Also, no checks are being made and no generics have been used, as this is just a phase of testing the CacheBuilder functionality.
Thanks
As explained in the javadoc and in the user guide, there is no thread that makes sure entries are removed from the cache as soon as the delay has elapsed. Instead, entries are removed during write operations, and occasionally during read operations if writes are rare. This allows for high throughput and low latency. And of course, not every write operation causes a cleanup:
Caches built with CacheBuilder do not perform cleanup and evict values "automatically," or instantly after a value expires, or anything of the sort. Instead, it performs small amounts of maintenance during write operations, or during occasional read operations if writes are rare.
The reason for this is as follows: if we wanted to perform Cache maintenance continuously, we would need to create a thread, and its operations would be competing with user operations for shared locks. Additionally, some environments restrict the creation of threads, which would make CacheBuilder unusable in that environment.
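One way to see the expiry in action is to trigger the pending maintenance yourself with Cache.cleanUp(), which the user guide mentions for exactly this purpose. A minimal sketch (the types and values are illustrative, not from the question):

RemovalListener<Integer, String> listener = notification ->
        System.out.println("Removed " + notification.getKey()
                + ", cause: " + notification.getCause());

Cache<Integer, String> cache = CacheBuilder.newBuilder()
        .expireAfterWrite(1000, TimeUnit.NANOSECONDS)
        .removalListener(listener)
        .build();

cache.put(1, "value");
Thread.sleep(1);  // comfortably past the 1000 ns expiry
cache.cleanUp();  // runs the pending maintenance; the listener fires here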
I had the same issue, and I found this in Guava's documentation for CacheBuilder.removalListener:
Warning: after invoking this method, do not continue to use this cache builder reference; instead use the reference this method returns. At runtime, these point to the same instance, but only the returned reference has the correct generic type information so as to ensure type safety. For best results, use the standard method-chaining idiom illustrated in the class documentation above, configuring a builder and building your cache in a single statement. Failure to heed this advice can result in a ClassCastException being thrown by a cache operation at some undefined point in the future.
So this problem can be resolved by changing your code to use the builder reference that is returned after adding the removalListener:
CacheBuilder builder=CacheBuilder.newBuilder().expireAfterWrite(1000, TimeUnit.NANOSECONDS).removalListener(
    new RemovalListener(){
        {
            logger.debug("Removal Listener created");
        }
        public void onRemoval(RemovalNotification notification) {
            System.out.println("Going to remove data from InputDataPool");
            logger.info("Following data is being removed:"+notification.getKey());
            if(notification.getCause()==RemovalCause.EXPIRED)
            {
                logger.fatal("This data expired:"+notification.getKey());
            }else
            {
                logger.fatal("This data didn't expired but evacuated intentionally"+notification.getKey());
            }
        }
    }
);
cache=builder.build(new CacheLoader(){
    @Override
    public Object load(Object key) throws Exception {
        logger.info("Following data being loaded"+(Integer)key);
        Integer uniqueId=(Integer)key;
        return InputDataPool.getInstance().getAndRemoveDataFromPool(uniqueId);
    }
});
This problem will be resolved. It is kind of weird, but I guess it is what it is :)
