Batch delete all entities with given property (or properties) - java

The code below is what we use to delete a property for a given entity type:
@Override
public boolean deleteProperty(String instance, String storeName, String propertyName) {
    final boolean[] success = {false};
    final PersistentEntityStore entityStore = manager.getPersistentEntityStore(xodusRoot, instance);
    try {
        entityStore.executeInTransaction(new StoreTransactionalExecutable() {
            @Override
            public void execute(@NotNull final StoreTransaction txn) {
                EntityIterable entities = txn.findWithProp(storeName, propertyName);
                final boolean[] hasError = {false};
                entities.forEach(entity -> {
                    if (!entity.deleteProperty(propertyName)) {
                        hasError[0] = true;
                    }
                });
                success[0] = !hasError[0]; // succeed only if no delete failed
            }
        });
    } finally {
        //entityStore.close();
    }
    return success[0];
}
I understand that Xodus is transactional, so if one of the deleteProperty operations here fails, it will roll back (I would like this confirmed).
Still, is there an official way to delete a property for all existing entities of a given type?

I understand that Xodus is transactional, so if one of the deleteProperty operations here fails, it will roll back (I would like this confirmed).
Yes, that's true. The transaction is flushed after the StoreTransactionalExecutable performs its job. But you can split the EntityIterable into batches (of size 100, for example) and call txn.flush() after processing each batch. Do not forget to check the result of flush(), since it returns a boolean.
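For illustration, a minimal sketch of the batching this answer describes, reusing the entityStore, storeName and propertyName from the question; the batch size of 100 follows the example above:

entityStore.executeInTransaction(txn -> {
    final EntityIterable entities = txn.findWithProp(storeName, propertyName);
    int processed = 0;
    for (final Entity entity : entities) {
        entity.deleteProperty(propertyName);
        if (++processed % 100 == 0 && !txn.flush()) {
            // flush() returns false if the changes could not be persisted,
            // e.g. because of a concurrent conflicting transaction.
            throw new IllegalStateException("flush failed at entity " + processed);
        }
    }
});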
Still, is there an official way to delete a property for all existing entities of a given type?
No, there isn't. Only manually, as described above.

Related

How to use return object of Mono without using block()?

I am trying to learn Spring WebFlux. Using ReactiveMongoRepository, I am trying to check whether a category already exists. If it already exists, return that object; otherwise, save it and return the newly saved object. Something like the following:
public Mono<Category> save(Category category) {
    final Mono<Category> byId = repository.findById(category.getId());
    final Category block = byId.block();
    if (block == null) {
        return repository.save(new Category(category.getName()));
    } else {
        return byId;
    }
}
How can I do this without using block()?
Use Mono::switchIfEmpty, which provides an alternative Mono in case the former one completes without data. Since ReactiveMongoRepository::save returns a Mono, you can pass it in to generate the alternative one.
return repository.findById(category.getId())
        .switchIfEmpty(repository.save(new Category(category.getName())));
In case ReactiveMongoRepository::findById returns a Mono with data, the Mono::switchIfEmpty will not be called.
Edit: Using Mono::defer with a Supplier<Mono> delays the saving operation until it is actually needed:
.switchIfEmpty(Mono.defer(() -> repository.save(new Category(category.getName()))));
You can try something like this:
public Mono<Category> getCategories(Category category) {
    return repository.findByName(category.getName())
            .switchIfEmpty(repository.save(category));
}
You need to defer the switchIfEmpty; otherwise, it will be triggered eagerly:
return repository.findById(category.getId())
        .switchIfEmpty(Mono.defer(() -> repository.save(category)));
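Putting the pieces together, the original save method could look like this (a sketch based on the answers above, using the question's repository):

public Mono<Category> save(Category category) {
    return repository.findById(category.getId())
            // Mono.defer ensures save(...) is only subscribed to (and thus
            // executed) when findById completes empty.
            .switchIfEmpty(Mono.defer(() ->
                    repository.save(new Category(category.getName()))));
}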

Deleting multiple keys -- can it be transactional?

Here is our code:
@Override
public boolean delete(String instance, final String storeName, String... keys) {
    final boolean[] isSuccess = {false};
    final List<String> keyList = Arrays.asList(keys);
    final Environment env = Environments.newInstance(xodusRoot + instance);
    env.executeInTransaction(new TransactionalExecutable() {
        @Override
        public void execute(@NotNull final Transaction txn) {
            final Store store = env.openStore(storeName, StoreConfig.WITHOUT_DUPLICATES, txn);
            for (String key : keyList) {
                // note: this only reflects the result for the last key
                isSuccess[0] = store.delete(txn, StringBinding.stringToEntry(key));
            }
        }
    });
    env.close();
    return isSuccess[0];
}
I have two questions about this:
1. Is this transactional? Since this function deletes multiple keys, will it behave so that if one key fails to delete, the other keys are not deleted either, i.e. all or nothing?
2. If an exception happens within the txn for some reason, such as a key or storeName being null, how should that be handled? Or does it not matter, since on an exception the transaction would fail and roll back automatically?
Xodus is an ACID-compliant transactional database. Among other things, it means that mutations of data in transactions are consistent. In your case, either all specified keys (transaction committed) or no keys (transaction aborted) would be deleted. If a transaction is interrupted for some reason (exception, system/hardware failure), nothing is modified and the transaction automatically rolls back.
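Building on that, a hedged sketch of an all-or-nothing variant of the delete method above: throwing inside the executable makes executeInTransaction roll the whole transaction back, so either every key is deleted or none is.

env.executeInTransaction(txn -> {
    final Store store = env.openStore(storeName, StoreConfig.WITHOUT_DUPLICATES, txn);
    for (String key : keyList) {
        if (!store.delete(txn, StringBinding.stringToEntry(key))) {
            // Aborts the transaction: no key deleted so far is actually removed.
            throw new IllegalStateException("No such key: " + key);
        }
    }
});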

How do I implement Guava Caching in Dropwizard?

I'm trying to set up a cache using Guava, with the following code:
private List<Profile> buildCache() {
    LoadingCache cache = CacheBuilder.newBuilder()
            .expireAfterWrite(10, TimeUnit.MINUTES)
            .maximumSize(40)
            .build(
                    new CacheLoader<Profile, List<Profile>>() {
                        @Override
                        public List<Profile> load(Profile profile) throws Exception {
                            Profile profile1 = new Profile();
                            profile1.setEmployed(true);
                            return profileDAO.getAllProfiles(profile1, null);
                        }
                    }
            );
    return (List<Profile>) cache;
}
public List<Profile> getAllProfiles(Profile profile, Integer size) throws Exception {
    return profileDAO.getAllProfiles(profile, size);
}
public List<Profile> getAllProfiles(Profile profile, Integer size) throws Exception {
return profileDAO.getAllProfiles(profile, size);
}
The idea here is that this will create a cache using getAllProfiles. That method takes a new profile object with a boolean set on it indicating whether the employee is employed or not. The size parameter controls how many results the method returns; when null, it defaults to the top 10.
I have two issues:
1. This is the first time I have ever used a cache, so I really do not know whether I am doing this correctly.
2. I cannot find anything in the documentation on how to implement this within my app. How am I supposed to call this? I tried modifying the getAllProfiles method to return it:
public List<Profile> getAllProfiles(Profile profile, Integer size) throws Exception {
    return buildCache();
}
But that simply throws an exception saying the cache cannot be cast to a java.util.List:
Exception occurred: java.lang.ClassCastException: com.google.common.cache.LocalCache$LocalLoadingCache cannot be cast to java.util.List
If it's any help, my app also uses Spring, so I've been researching that as well. Is there any difference between springframework.cache.guava and google.common.cache, or is it just Spring's built-in Guava cache?
Ok, I think I managed to figure it out:
private LoadingCache<Integer, List<Profile>> loadingCache = CacheBuilder.newBuilder()
        .refreshAfterWrite(10, TimeUnit.MINUTES)
        .maximumSize(100)
        .build(
                new CacheLoader<Integer, List<Profile>>() {
                    @Override
                    public List<Profile> load(Integer integer) throws Exception {
                        Profile profile = new Profile();
                        if (integer == null) {
                            integer = 10;
                        }
                        return profileDAO.getAllProfiles(profile, integer);
                    }
                }
        );
First, I should have specified the key and value types passed to LoadingCache, in this case an Integer key and a List of Profile value. Also, when I declared the new CacheLoader in the build function, I needed to keep that same key and value layout. Finally, when loading from the cache, I should have used the Integer key, not a Profile object.
As for calling the function:
public List<Profile> getAllProfiles(Profile profile, Integer size) throws Exception {
    return loadingCache.get(size);
}
This serves to get lists of certain lengths that are stored in the cache. If the list of that length is not in the cache, the load method will run, using the size variable you pass to it.
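For illustration, a hypothetical call sequence (the service variable is an assumption standing for whatever object hosts the getAllProfiles method above):

List<Profile> top10 = service.getAllProfiles(new Profile(), 10); // cache miss: runs load(10)
List<Profile> again = service.getAllProfiles(new Profile(), 10); // served from the cache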
@Eugene, thank you for your help. Your explanation of the load method really helped put the cache into perspective for me.

Flatten processing result in spring batch

Does anyone know how, in spring-batch (3.0.7), I can flatten the result of a processor that returns a list of entities?
Example:
I have a processor that returns a List:
public class MyProcessor implements ItemProcessor<Long, List<Entity>> {
    public List<Entity> process(Long id) { ... }
}
Now all following processors/writers need to work on List<Entity>. Is there any way to flatten the result to a simple Entity so that the further processors in the given step can work on single Entities?
The only way I can see is to persist the list somehow with a writer and then create a separate step that reads from the persisted data.
Thanks in advance!
As you know, processors in spring-batch can be chained with a composite processor. Within the chain, you can change the processing type from processor to processor, but of course the input and output types of two neighbouring processors have to match.
However, an input or output type is always treated as one item. Therefore, if the output type of a processor is a List, this list is regarded as one item. Hence, the following processor needs to have that List as its input type, and if a writer follows, the writer's write method needs to take a list of lists.
Moreover, a processor cannot multiply its elements; there can only be one output item for every input item.
Basically, there is nothing wrong with having a chain like:
Reader<Integer>
ProcessorA<Integer, List<Integer>>
ProcessorB<List<Integer>, List<Integer>>
Writer<List<Integer>> (which leads to a write method write(List<List<Integer>> items))
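For illustration, such a chain could be wired with a CompositeItemProcessor; processorA and processorB here are assumed to be beans of the types listed above:

// Hypothetical wiring; CompositeItemProcessor applies the delegates in order.
CompositeItemProcessor<Integer, List<Integer>> composite = new CompositeItemProcessor<>();
composite.setDelegates(Arrays.asList(processorA, processorB));
composite.afterPropertiesSet(); // validates that the delegates were set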
Depending on the context, there could be a better solution.
You could mitigate the impact (for instance, on reusability) by using wrapper processors and a wrapper writer like the following code examples:
public class ListWrapperProcessor<I, O> implements ItemProcessor<List<I>, List<O>> {
    ItemProcessor<I, O> delegate;

    public void setDelegate(ItemProcessor<I, O> delegate) {
        this.delegate = delegate;
    }

    @Override
    public List<O> process(List<I> itemList) throws Exception {
        List<O> outputList = new ArrayList<>();
        for (I item : itemList) {
            O outputItem = delegate.process(item);
            if (outputItem != null) {
                outputList.add(outputItem);
            }
        }
        if (outputList.isEmpty()) {
            return null;
        }
        return outputList;
    }
}
public class ListOfListItemWriter<T> implements InitializingBean, ItemStreamWriter<List<T>> {
    private ItemStreamWriter<T> itemWriter;

    @Override
    public void write(List<? extends List<T>> listOfLists) throws Exception {
        if (listOfLists.isEmpty()) {
            return;
        }
        List<T> all = listOfLists.stream().flatMap(Collection::stream).collect(Collectors.toList());
        itemWriter.write(all);
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.notNull(itemWriter, "The 'itemWriter' may not be null");
    }

    public void setItemWriter(ItemStreamWriter<T> itemWriter) {
        this.itemWriter = itemWriter;
    }

    @Override
    public void close() {
        this.itemWriter.close();
    }

    @Override
    public void open(ExecutionContext executionContext) {
        this.itemWriter.open(executionContext);
    }

    @Override
    public void update(ExecutionContext executionContext) {
        this.itemWriter.update(executionContext);
    }
}
Using such wrappers, you can still implement "normal" processors and writers and then use the wrappers to move the List handling out of them, as in the sketch below.
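For example (the delegate bean names here are hypothetical), a plain single-item processor and writer could be plugged in like this:

ListWrapperProcessor<Entity, Entity> processor = new ListWrapperProcessor<>();
processor.setDelegate(mySingleEntityProcessor); // an ordinary ItemProcessor<Entity, Entity>

ListOfListItemWriter<Entity> writer = new ListOfListItemWriter<>();
writer.setItemWriter(myEntityWriter); // an ordinary ItemStreamWriter<Entity>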
Unless you can provide a compelling reason, there's no reason to send a list of lists to your ItemWriter. This is not the way the ItemProcessor was intended to be used. Instead, you should create/configure an ItemReader that returns one object containing the relevant objects.
For example, if you're reading from the database, you could use the HibernateCursorItemReader and a query that looks something like this:
"from ParentEntity parent left join fetch parent.childrenEntities"
Your data model SHOULD have a parent table with the Long id that you're currently passing to your ItemProcessor, so leverage that to your advantage. The reader would then pass back ParentEntity objects, each with a collection of ChildEntity objects that go along with it.
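A hedged sketch of such a reader configuration; the sessionFactory and the entity names are assumptions:

HibernateCursorItemReader<ParentEntity> reader = new HibernateCursorItemReader<>();
reader.setSessionFactory(sessionFactory);
// Fetches each parent together with its children in one query.
reader.setQueryString("from ParentEntity parent left join fetch parent.childrenEntities");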

Caching with Parse Service

I'm having trouble trying to cache data from Parse.com.
I've been reading the Parse API docs on caching, but I'm still having trouble understanding them. How do I extract and cache data with this?
query.setCachePolicy(ParseQuery.CachePolicy.NETWORK_ELSE_CACHE);
query.findInBackground(new FindCallback<ParseObject>() {
    public void done(List<ParseObject> scoreList, ParseException e) {
        if (e == null) {
            // Results were successfully found, looking first on the
            // network and then on disk.
        } else {
            // The network was inaccessible and we have no cached data
            // for this query.
        }
    }
});
The data is cached automatically on internal storage if you specify a CachePolicy. The default one is CachePolicy.IGNORE_CACHE, so no data is cached. Since you are interested in getting results from the cache, it would make more sense to use CachePolicy.CACHE_ELSE_NETWORK, so the query looks inside the cache first. In your case, the data you are looking for is stored in the variable scoreList.
Maybe it is difficult for you to understand how your code works because you're using a callback (due to findInBackground()). Consider the following code:
ParseQuery<Person> personParseQuery = new ParseQuery<Person>(Person.class);
personParseQuery.setCachePolicy(ParseQuery.CachePolicy.CACHE_ELSE_NETWORK);
personParseQuery.addAscendingOrder("sort_order");
List<Person> persons = personParseQuery.find();
As you can see, the result of the query is return by the find() method. From the Parse API documentation:
public List<T> find() throws ParseException -
Retrieves a list of ParseObjects that satisfy this query. Uses the network and/or the cache, depending on the cache policy.
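Since find() declares the checked ParseException, calling it synchronously needs a try/catch; a minimal sketch using the query above:

try {
    List<Person> persons = personParseQuery.find();
    // persons now holds the results, from the cache or the network
} catch (ParseException e) {
    // neither the cache nor the network could satisfy the query
}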
The Person class may look like this:
@ParseClassName("Person")
public class Person extends ParseObject {
    public Person() {}

    public String getPersonName() {
        return getString("personName");
    }

    public void setPersonName(String personName) {
        put("personName", personName);
    }
}
And of course, don't forget to register the Person subclass first and then initialize Parse:
ParseObject.registerSubclass(Person.class);
Parse.initialize(this, "appID", "clientID");
I hope my explanation can help you.
PS: You can see that the data is cached by looking inside the data/data/<your application package name>/cache/com.parse folder on your emulator after executing the code.
