I have a question on the use of IO operations within java.util.function.Predicate. Please consider the following example:
public class ClientGroupFilter implements Predicate<Client> {
private GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
public ClientGroupFilter(GroupMapper mapper) {
this.mapper = mapper;
}
@Override
public boolean test(Client client) {
// this is a database call
Set<Integer> validIds = mapper.getValidIdsForGroupNames(validGroupNames);
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
public void permit(String name) {
validGroupNames.add(name);
}
}
As you can see, this filter accepts any number of server group names, which are resolved by the mapper when a specific client is tested. If the client owns one of the valid server groups, true is returned.
Now, of course, it is obvious that this is totally inefficient if the filter is applied to multiple clients. So refactoring led me to this:
public class ClientGroupFilter implements Predicate<Client> {
private GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
private boolean updateRequired = true;
private Set<Integer> validIds = new HashSet<>();
public ClientGroupFilter(GroupMapper mapper) {
this.mapper = mapper;
}
@Override
public boolean test(Client client) {
if(updateRequired) {
// this is a database call
validIds = mapper.getValidIdsForGroupNames(validGroupNames);
updateRequired = false;
}
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
public void permit(String name) {
validGroupNames.add(name);
updateRequired = true;
}
}
The performance is a lot better, of course, but I'm still not happy with the solution, since I feel like java.util.function.Predicate should not be used like this. However, I still want to provide a fast way to filter a list of clients without requiring the consumer to map the server group names to their ids.
Does anyone have a better idea to refactor this?
If your usage pattern is such that you call permit several times, and then use Predicate<Client> without calling permit again, you can separate the code that collects validGroupNames from the code of your predicate by using a builder:
class ClientGroupFilterBuilder {
private final GroupMapper mapper;
private List<String> validGroupNames = new ArrayList<>();
public ClientGroupFilterBuilder(GroupMapper mapper) {
this.mapper = mapper;
}
public void permit(String name) {
validGroupNames.add(name);
}
public Predicate<Client> build() {
final Set<Integer> validIds = mapper.getValidIdsForGroupNames(validGroupNames);
return new Predicate<Client>() {
@Override
public boolean test(Client client) {
return client.getGroupIds().stream().anyMatch(validIds::contains);
}
};
}
}
This restricts building of validIds to the point where we construct the Predicate<Client>. Once the predicate is constructed, no further input is necessary.
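For illustration, usage might then look like this (a sketch; the mapper instance, the group names and the clients list are assumptions):
ClientGroupFilterBuilder builder = new ClientGroupFilterBuilder(mapper);
builder.permit("admins");
builder.permit("operators");
// The single database call happens once, inside build()
Predicate<Client> filter = builder.build();
List<Client> permitted = clients.stream().filter(filter).collect(Collectors.toList());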
Related
I want to do a dynamic search with the Criteria API in Java.
In the code I wrote, we need to pass every field as a query parameter in the URL, even when it is empty. I don't want to have to write "plaka".
The URL is localhost:8080/api/city/query?city=Ankara&plaka= and I want to pass only "city" or only "plaka".
Here we need to write each field, even if we are going to search by only one of them; the unused field has to be sent empty.
My code is below. Suppose there is more than one field; what I want is to search using only the field the caller actually wants to search by, without writing the fields I don't need. Can you help me with what I should do?
My code in Repository
public interface CityRepository extends JpaRepository<City, Integer> , JpaSpecificationExecutor<City> {
}
My code in Service
@Service
public class CityServiceImp implements CityService{
private static final String CITY = "city";
private static final String PLAKA = "plaka";
@Override
public List<City> findCityByNameAndPlaka(String cityName, int plaka) {
GenericSpecification<City> genericSpecification = new GenericSpecification<>();
if (!cityName.equals("_"))
genericSpecification.add(new SearchCriteria(CITY,cityName, SearchOperation.EQUAL));
if (plaka != -1)
genericSpecification.add(new SearchCriteria(PLAKA,plaka, SearchOperation.EQUAL));
return cityDao.findAll(genericSpecification);
}
@Autowired
CityRepository cityDao;
}
My code in Controller
@RestController
@RequestMapping("api/city")
public class CityController {
@Autowired
private final CityService cityService;
public CityController(CityService cityService) {
this.cityService = cityService;
}
@GetMapping("/query")
public List<City> query(@RequestParam String city, @RequestParam String plaka){
String c = city;
int p;
if (city.length() == 0)
c = "_";
if (plaka.length() == 0) {
p = -1;
}
else
p = Integer.parseInt(plaka);
return cityService.findCityByNameAndPlaka(c,p);
}
}
My code in SearchCriteria
public class SearchCriteria {
private String key;
private Object value;
private SearchOperation operation;
public SearchCriteria(String key, Object value, SearchOperation operation) {
this.key = key;
this.value = value;
this.operation = operation;
}
public String getKey() {
return key;
}
public Object getValue() {
return value;
}
public SearchOperation getOperation() {
return operation;
}
}
My code in GenericSpecification
public class GenericSpecification<T> implements Specification<T> {
private List<SearchCriteria> list;
public GenericSpecification() {
this.list = new ArrayList<>();
}
public void add(SearchCriteria criteria){
list.add(criteria);
}
@Override
public Predicate toPredicate(Root<T> root, CriteriaQuery<?> query, CriteriaBuilder builder) {
List<Predicate> predicates = new ArrayList<>();
for (SearchCriteria criteria : list) {
if (criteria.getOperation().equals(SearchOperation.GREATER_THAN)) {
predicates.add(builder.greaterThan(
root.get(criteria.getKey()), criteria.getValue().toString()));
} else if (criteria.getOperation().equals(SearchOperation.LESS_THAN)) {
predicates.add(builder.lessThan(
root.get(criteria.getKey()), criteria.getValue().toString()));
} else if (criteria.getOperation().equals(SearchOperation.GREATER_THAN_EQUAL)) {
predicates.add(builder.greaterThanOrEqualTo(
root.get(criteria.getKey()), criteria.getValue().toString()));
} else if (criteria.getOperation().equals(SearchOperation.LESS_THAN_EQUAL)) {
predicates.add(builder.lessThanOrEqualTo(
root.get(criteria.getKey()), criteria.getValue().toString()));
} else if (criteria.getOperation().equals(SearchOperation.NOT_EQUAL)) {
predicates.add(builder.notEqual(
root.get(criteria.getKey()), criteria.getValue()));
} else if (criteria.getOperation().equals(SearchOperation.EQUAL)) {
predicates.add(builder.equal(
root.get(criteria.getKey()), criteria.getValue()));
} else if (criteria.getOperation().equals(SearchOperation.MATCH)) {
predicates.add(builder.like(
builder.lower(root.get(criteria.getKey())),
"%" + criteria.getValue().toString().toLowerCase() + "%"));
} else if (criteria.getOperation().equals(SearchOperation.MATCH_END)) {
predicates.add(builder.like(
builder.lower(root.get(criteria.getKey())),
criteria.getValue().toString().toLowerCase() + "%"));
}
}
return builder.and(predicates.toArray(new Predicate[0]));
}
}
My code in SearchOperation
public enum SearchOperation {
GREATER_THAN,
LESS_THAN,
GREATER_THAN_EQUAL,
LESS_THAN_EQUAL,
NOT_EQUAL,
EQUAL,
MATCH,
MATCH_END,
}
The good thing about the Criteria API is that you can use the CriteriaBuilder to build complex SQL statements based on the fields that you have. You can combine multiple criteria fields using and and or statements with ease.
How I approached something similar in the past is using a GenericDao class that takes a Filter that has builders for the most common operations (equals, equalsIgnoreCase, lessThan, greaterThan and so on). I actually have something similar in an open-source project I started: https://gitlab.com/pazvanti/logaritmical/-/blob/master/app/data/dao/GenericDao.java
https://gitlab.com/pazvanti/logaritmical/-/blob/master/app/data/filter/JPAFilter.java
Next, the implicit DAO class extends this GenericDao, and when I want to do an operation (e.g. find a user with the provided username), I create a Filter there.
Now, the magic is in the filter. This is the one that creates the Predicate.
In your request, you will receive something like this: field1=something&field2=somethingElse and so on. The value can be preceded by '<' or '>' if you want smaller or greater, and you initialize your filter with the values. If you can retrieve the parameters as a Map<String, String>, even better.
Now, for each field in the request, you create a predicate using the helper methods from the JPAFilter class and return the resulting Predicate. In the example below I assume that you don't have them as a Map, but as individual fields (it is easy to adapt the code for a Map):
public class SearchFilter extends JPAFilter {
private Optional<String> field1 = Optional.empty();
private Optional<String> field2 = Optional.empty();
@Override
public Predicate getPredicate(CriteriaBuilder criteriaBuilder, Root root) {
Predicate predicateField1 = field1.map(f -> equals(criteriaBuilder, root, "field1", f)).orElse(null);
Predicate predicateField2 = field2.map(f -> equals(criteriaBuilder, root, "field2", f)).orElse(null);
return andPredicateBuilder(criteriaBuilder, predicateField1, predicateField2);
}
}
Now, I have the fields as Optional since I assumed that you have them as Optional in your request mapping (Spring supports this). I know it is a bit controversial to have Optional as input params, but in this case I think it is acceptable (more on this here: https://petrepopescu.tech/2021/10/an-argument-for-using-optional-as-input-parameters/).
The andPredicateBuilder() is written so that it works properly even if one of the supplied predicates is null. Also, I made a simple mapping function; adjust it to also handle < and >.
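For reference, a minimal sketch of what such an andPredicateBuilder() helper could look like (an assumption on my part; the actual implementation is in the linked JPAFilter class):
protected Predicate andPredicateBuilder(CriteriaBuilder builder, Predicate... predicates) {
// Keep only the predicates that were actually created, then AND them together
List<Predicate> nonNull = Arrays.stream(predicates).filter(Objects::nonNull).collect(Collectors.toList());
return builder.and(nonNull.toArray(new Predicate[0]));
}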
Now, in your DAO class, just supply the appropriate filter:
public class SearchDao extends GenericDAO {
public List<MyEntity> search(Filter filter) {
return get(filter);
}
}
Some adjustments need to be made (this is just starter code), like an easier way to initialize the filter (and doing this inside the DAO) and making sure that the filter can only be applied to the specified entity (probably using generics, JPAFilter<T>, and having SearchFilter extend JPAFilter<MyEntity>). Also, some error handling can be added.
One disadvantage is that the fields have to match the variable names in your entity class.
I am new to the Reactor framework and trying to utilize it in one of our existing implementations. LocationProfileService and InventoryService both return a Mono and are to be executed in parallel, with no dependency on each other (from the MainService). Within LocationProfileService there are 4 queries issued, and the last 2 queries depend on the first query.
What is a better way to write this? I see the calls getting executed sequentially, while some of them should be executed in parallel. What is the right way to do it?
public class LocationProfileService {
static final Cache<String, String> customerIdCache //define Cache
@Override
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
//These 2 are not interdependent and can be executed immediately
Mono<String> customerAccountMono = getCustomerAccount(customerId, location).subscribeOn(Schedulers.parallel()).switchIfEmpty(Mono.error(new CustomerNotFoundException(location, customerId))).log();
Mono<LocationProfile> locationProfileMono = Mono.fromFuture(//location query).subscribeOn(Schedulers.parallel()).log();
//Should block be called, or is there a better way to do this?
String custAccount = customerAccountMono.block(); // This is needed to execute and the value from this is needed for the next 2 calls
Mono<Customer> customerMono = Mono.fromFuture(//query uses custAccount from earlier step).subscribeOn(Schedulers.parallel()).log();
Mono<Result<LocationPricing>> locationPricingMono = Mono.fromFuture(//query uses custAccount from earlier step).subscribeOn(Schedulers.parallel()).log();
return Mono.zip(locationProfileMono,customerMono,locationPricingMono).flatMap(tuple -> {
LocationProfileInfo locationProfileInfo = new LocationProfileInfo();
//populate values from tuple
return Mono.just(locationProfileInfo);
});
}
private Mono<String> getCustomerAccount(String conversationId, String customerId, String location) {
return CacheMono.lookup((Map)customerIdCache.asMap(),customerId).onCacheMissResume(Mono.fromFuture(//query).subscribeOn(Schedulers.parallel()).map(x -> x.getAccountNumber()));
}
}
public class InventoryService {
@Override
public Mono<InventoryInfo> getInventoryInfo(String inventoryId) {
Mono<Inventory> inventoryMono = Mono.fromFuture(//inventory query).subscribeOn(Schedulers.parallel()).log();
Mono<List<InventorySale>> isMono = Mono.fromFuture(//inventory sale query).subscribeOn(Schedulers.parallel()).log();
return Mono.zip(inventoryMono,isMono).flatMap(tuple -> {
InventoryInfo inventoryInfo = new InventoryInfo();
//populate value from tuple
return Mono.just(inventoryInfo);
});
}
}
public class MainService {
@Autowired
LocationProfileService locationProfileService;
@Autowired
InventoryService inventoryService;
public void mainService(String customerId, String location, String inventoryId) {
Mono<LocationProfileInfo> locationProfileMono = locationProfileService.getProfileInfoByLocationAndCustomer(....);
Mono<InventoryInfo> inventoryMono = inventoryService.getInventoryInfo(....);
//is using block fine, or is there a better way to do this?
Mono.zip(locationProfileMono,inventoryMono).subscribeOn(Schedulers.parallel()).block();
}
}
You don't need to block in order to pass that parameter; your code is very close to the solution. I wrote the code using the class names that you provided. Just replace all the Mono.just(....) calls with calls to the correct service.
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
Mono<String> customerAccountMono = Mono.just("customerAccount");
Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
return Mono.zip(customerAccountMono, locationProfileMono)
.flatMap(tuple -> {
Mono<Customer> customerMono = Mono.just(new Customer(tuple.getT1()));
Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
Mono<LocationProfile> locationProfile = Mono.just(tuple.getT2());
return Mono.zip(customerMono, result, locationProfile);
})
.map(LocationProfileInfo::new)
;
}
public static class LocationProfileInfo {
public LocationProfileInfo(Tuple3<Customer, Result<LocationPricing>, LocationProfile> tuple){
//do whatever
}
}
public static class LocationProfile {}
private static class Customer {
public Customer(String customerAccount) {
}
}
private static class Result<T> {}
private static class LocationPricing {}
Please remember that the first zip is not necessary; I wrote it that way to match your solution. But I would solve the problem a little bit differently, as it would be clearer:
public Mono<LocationProfileInfo> getProfileInfoByLocationAndCustomer(String customerId, String location) {
return Mono.just("customerAccount") //call the service
.flatMap(customerAccount -> {
//declare the call to get the customer
Mono<Customer> customerMono = Mono.just(new Customer(customerAccount));
//declare the call to get the location pricing
Mono<Result<LocationPricing>> result = Mono.just(new Result<LocationPricing>());
//declare the call to get the location profile
Mono<LocationProfile> locationProfileMono = Mono.just(new LocationProfile());
//in the zip call all the services actually are executed
return Mono.zip(customerMono, result, locationProfileMono);
})
.map(LocationProfileInfo::new)
;
}
This is related to Java generic wildcards.
I have an interface like this.
public interface Processer<P, X> {
void process(P parent, X result);
}
An implementation like this.
public class FirstProcesser implements Processer<User, String> {
@Override
public void process(User parent, String result) {
}
}
And I'm using the processer like this.
public class Executor {
private Processer<?, String> processer;
private int i;
public void setProcesser(Processer<?, String> processer) {
this.processer = processer;
}
private String generateString() {
return "String " + i++;
}
public <P> void execute(P parent) {
processer.process(parent, generateString());
}
public static void main(String[] args) {
Executor executor = new Executor();
executor.setProcesser(new FirstProcesser());
User user = new User();
executor.execute(user);
}
}
But here
public <P> void execute(P parent) {
processer.process(parent, generateString());
}
it gives the compile error Error:(18, 27) java: incompatible types: P cannot be converted to capture#1 of ?
I need to understand why this gives an error, and also a solution.
The wildcard basically means "I don't care which type is used here". In your case, you definitely do care though: the first type parameter of your processor must be the same as the P type in the execute method.
With the current code, you could call execute(1), which would try to call the FirstProcesser with an integer as argument, which obviously makes no sense, hence why the compiler forbids it.
The easiest solution would be to make your Executor class generic, instead of only the execute method:
public class Executor<P> {
private Processer<P, String> processer;
private int i;
public void setProcesser(Processer<P, String> processer) {
this.processer = processer;
}
private String generateString() {
return "String " + i++;
}
public void execute(P parent) {
processer.process(parent, generateString());
}
public static void main(String[] args) {
Executor<User> executor = new Executor<>();
executor.setProcesser(new FirstProcesser());
User user = new User();
executor.execute(user);
}
}
Because processer can have anything as its first type argument. You may have assigned a Processer<Foo, String> to it, and of course the compiler will complain, as it can be something different from the P in your execute().
You may want to make your Executor a generic class:
class Executor<T> {
private Processer<T, String> processer;
public void setProcesser(Processer<T, String> processer) {
this.processer = processer;
}
public void execute(T parent) {
processer.process(parent, generateString());
}
}
and your main will look like:
Executor<User> executor = new Executor<User>();
executor.setProcesser(new FirstProcesser());
User user = new User();
executor.execute(user);
In response to comments:
There is no proper solution with proper use of generics here, because what you are doing is contradictory: on one hand you say you do not care about the first type argument of Processer (hence private Processer<?, String> processer), but on the other hand you DO really care about it (in your execute()). The compiler is simply doing its work right, as it is absolutely legal for you to assign a Processer<Foo, String> to it.
If you don't really care about generics and are willing to suffer from poor design, then don't use generics.
Just keep Processer a raw type in Executor and suppress all unchecked warnings:
i.e.
class Executor {
private Processer processor;
@SuppressWarnings("unchecked")
public void setProcessor(Processer<?, String> processor) {
this.processor = processor;
}
// your generic method does not do any meaningful check.
// just pass an Object to it
@SuppressWarnings("unchecked")
public void execute(Object parent) {
processor.process(parent, "");
}
}
And if it were me, I would go one step further:
Provide an Executor that is properly designed (e.g. calling it TypedExecutor). All new code should use the new, properly designed TypedExecutor. The original Executor is kept for the sake of backward compatibility and delegates its work to TypedExecutor.
Hence it looks like:
class TypedExecutor<T> {
private Processer<T, String> processor;
public void setProcessor(Processer<T, String> processor) {
this.processor = processor;
}
public void execute(T parent) {
processor.process(parent, "");
}
}
@SuppressWarnings("unchecked")
class Executor {
private TypedExecutor executor = new TypedExecutor();
public void setProcessor(Processer<?, String> processor) {
this.executor.setProcessor(processor);
}
public void execute(Object parent) {
this.executor.execute(parent);
}
}
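A hypothetical usage of the new API would then be:
TypedExecutor<User> typedExecutor = new TypedExecutor<>();
typedExecutor.setProcessor(new FirstProcesser());
typedExecutor.execute(new User()); // compiles: T is fixed to User
// typedExecutor.execute(1); // no longer compiles, unlike with the raw Executor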
I've created a class User that extends Document. User just has some simple constructors and getters/setters around some strings and ints. However, when I try to insert the User class into Mongo I get the following error:
Exception in thread "main" org.bson.codecs.configuration.CodecConfigurationException: Can't find a codec for class com.foo.User.
at org.bson.codecs.configuration.CodecCache.getOrThrow(CodecCache.java:46)
at org.bson.codecs.configuration.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:63)
at org.bson.codecs.configuration.ProvidersCodecRegistry.get(ProvidersCodecRegistry.java:37)
at org.bson.BsonDocumentWrapper.asBsonDocument(BsonDocumentWrapper.java:62)
at com.mongodb.MongoCollectionImpl.documentToBsonDocument(MongoCollectionImpl.java:507)
at com.mongodb.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:292)
at com.mongodb.MongoCollectionImpl.insertMany(MongoCollectionImpl.java:282)
at com.foo.bar.main(bar.java:27)
Sounds like I need to work with some Mongo Codecs stuff, but I'm not familiar with it and some quick googling returns some results that seem pretty advanced.
How do I properly write my User class for use in Mongo? Here is my class for reference:
public class User extends Document {
User(String user, List<Document> history, boolean isActive, String location){
this.append("_id", user)
.append("history", history)
.append("isActive", isActive)
.append("location", location);
}
public List<Document> getHistory(){
return this.get("history", ArrayList.class);
}
public void addToHistory(Document event){
List<Document> history = this.getHistory();
history.add(event);
this.append("history", history);
}
public boolean hasMet(User otherUser){
List<String> usersIveMet = this.getUsersMet(),
usersTheyMet = otherUser.getUsersMet();
return !Collections.disjoint(usersIveMet, usersTheyMet);
}
public List<String> getUsersMet() {
List<Document> usersHistory = this.getHistory();
List<String> usersMet = usersHistory.stream()
.map(doc -> Arrays.asList(doc.getString("user1"), doc.getString("user2")))
.filter(u -> !u.equals(this.getUser()))
.flatMap(u -> u.stream())
.collect(Collectors.toList());
return usersMet;
}
public String getUser(){
return this.getString("_id");
}
}
Since you are trying to create a new object (even if you extend from Document), Mongo has no way to recognize it, and therefore you need to provide encoding/decoding logic to let Mongo know about your object (at least I cannot see another way than this).
I played with your User class a bit and got it to work.
So, here is how I defined a User class:
public class User {
private List<Document> history;
private String id;
private Boolean isActive;
private String location;
// Getters and setters. Omitted for brevity..
}
Then you need to provide encoding/decoding logic for your User class:
public class UserCodec implements Codec<User> {
private CodecRegistry codecRegistry;
public UserCodec(CodecRegistry codecRegistry) {
this.codecRegistry = codecRegistry;
}
@Override
public User decode(BsonReader reader, DecoderContext decoderContext) {
reader.readStartDocument();
String id = reader.readString("id");
Boolean isActive = reader.readBoolean("isActive");
String location = reader.readString("location");
Codec<Document> historyCodec = codecRegistry.get(Document.class);
List<Document> history = new ArrayList<>();
reader.readStartArray();
while (reader.readBsonType() != BsonType.END_OF_DOCUMENT) {
history.add(historyCodec.decode(reader, decoderContext));
}
reader.readEndArray();
reader.readEndDocument();
User user = new User();
user.setId(id);
user.setIsActive(isActive);
user.setLocation(location);
user.setHistory(history);
return user;
}
@Override
public void encode(BsonWriter writer, User user, EncoderContext encoderContext) {
writer.writeStartDocument();
writer.writeName("id");
writer.writeString(user.getId());
writer.writeName("isActive");
writer.writeBoolean(user.getIsActive());
writer.writeName("location");
writer.writeString(user.getLocation());
writer.writeStartArray("history");
for (Document document : user.getHistory()) {
Codec<Document> documentCodec = codecRegistry.get(Document.class);
encoderContext.encodeWithChildContext(documentCodec, writer, document);
}
writer.writeEndArray();
writer.writeEndDocument();
}
@Override
public Class<User> getEncoderClass() {
return User.class;
}
}
Then you need a codec provider, for type checking before starting serialization/deserialization.
public class UserCodecProvider implements CodecProvider {
@Override
@SuppressWarnings("unchecked")
public <T> Codec<T> get(Class<T> clazz, CodecRegistry registry) {
if (clazz == User.class) {
return (Codec<T>) new UserCodec(registry);
}
return null;
}
}
And finally, you need to register your provider with your MongoClient, and that's all.
public class MongoDb {
private MongoDatabase db;
public MongoDb() {
CodecRegistry codecRegistry = CodecRegistries.fromRegistries(
CodecRegistries.fromProviders(new UserCodecProvider()),
MongoClient.getDefaultCodecRegistry());
MongoClientOptions options = MongoClientOptions.builder()
.codecRegistry(codecRegistry).build();
MongoClient mongoClient = new MongoClient(new ServerAddress(), options);
db = mongoClient.getDatabase("test");
}
public void addUser(User user) {
MongoCollection<User> collection = db.getCollection("user").withDocumentClass(User.class);
collection.insertOne(user);
}
public static void main(String[] args) {
MongoDb mongoDb = new MongoDb();
Document history1 = new Document();
history1.append("field1", "value1");
history1.append("field2", "value2");
history1.append("field3", "value3");
List<Document> history = new ArrayList<>();
history.add(history1);
User user = new User();
user.setId("someId1");
user.setIsActive(true);
user.setLocation("someLocation");
user.setHistory(history);
mongoDb.addUser(user);
}
}
A bit late, but I just stumbled across this issue and was also somewhat disappointed by the work involved in the proposed solutions so far, especially since they require tons of custom code for every single Document-extending class you wish to persist and might also exhibit sub-optimal performance, noticeable in large data sets.
Instead I figured one might piggyback off DocumentCodec like so (Mongo 3.x):
public class MyDocumentCodec<T extends Document> implements CollectibleCodec<T> {
private DocumentCodec _documentCodec;
private Class<T> _class;
private Constructor<T> _constructor;
public MyDocumentCodec(Class<T> class_) {
try {
_documentCodec = new DocumentCodec();
_class = class_;
_constructor = class_.getConstructor(Document.class);
} catch (Exception ex) {
throw new MCException(ex);
}
}
@Override
public void encode(BsonWriter writer, T value, EncoderContext encoderContext) {
_documentCodec.encode(writer, value, encoderContext);
}
@Override
public Class<T> getEncoderClass() {
return _class;
}
@Override
public T decode(BsonReader reader, DecoderContext decoderContext) {
try {
Document document = _documentCodec.decode(reader, decoderContext);
T result = _constructor.newInstance(document);
return result;
} catch (Exception ex) {
throw new MCException(ex);
}
}
@Override
public T generateIdIfAbsentFromDocument(T document) {
if (!documentHasId(document)) {
Document doc = _documentCodec.generateIdIfAbsentFromDocument(document);
document.put("_id", doc.get("_id"));
}
return document;
}
@Override
public boolean documentHasId(T document) {
return _documentCodec.documentHasId(document);
}
@Override
public BsonValue getDocumentId(T document) {
return _documentCodec.getDocumentId(document);
}
}
This is then registered along the lines of
MyDocumentCodec<MyClass> myCodec = new MyDocumentCodec<>(MyClass.class);
CodecRegistry codecRegistry = CodecRegistries.fromRegistries(MongoClient.getDefaultCodecRegistry(),
CodecRegistries.fromCodecs(myCodec));
MongoClientOptions options = MongoClientOptions.builder().codecRegistry(codecRegistry).build();
MongoClient dbClient = new MongoClient(new ServerAddress(_dbServer, _dbPort), options);
Switching to this approach, along with bulking up some operations (which probably has a large effect), I just managed to get an operation that previously took several hours down to 30 minutes. The decode method can probably be improved, but my main concern was inserts for now.
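For completeness, a sketch of how a collection is then used with the registered codec (assuming MyClass extends Document and has the required MyClass(Document) constructor):
MongoCollection<MyClass> collection = dbClient.getDatabase("test").getCollection("myCollection", MyClass.class);
collection.insertOne(new MyClass(new Document("name", "example")));
MyClass first = collection.find().first(); // decoded through MyDocumentCodec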
Hope this helps someone. Please let me know if you see issues with this approach.
Thanks.
Have you tried using the @Embedded and @JsonIgnoreProperties(ignoreUnknown = true) annotations on top of your class signature?
This worked for me when I had a similar issue. I had a model (Translation) which I was storing in a HashMap member field of another model (Promo).
Once I added these annotations to the Translation class signature, the issue went away. Not sure if it'll work that way in your case but worth trying.
I have to explore more on this myself.
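For reference, a sketch of what that looked like (the field names are placeholders, and the annotation packages depend on your mapping libraries; in my case @Embedded came from the ODM and @JsonIgnoreProperties from Jackson):
@Embedded
@JsonIgnoreProperties(ignoreUnknown = true)
public class Translation {
private String language;
private String text;
// getters and setters omitted
}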
I need to build a process which will validate a record against ~200 validation rules. A record can be one of ~10 types. There is some segmentation from validation rules to record types but there exists a lot of overlap which prevents me from cleanly binning the validation rules.
During my design I'm considering a chain of responsibility pattern for all of the validation rules. Is this a good idea or is there a better design pattern?
Validation is frequently a Composite pattern. When you break it down, you want to separate what you want to do from how you want to do it, and you get:
If foo is valid
then do something.
Here the abstraction is "is valid". (Caveat: this code was lifted from current, similar examples, so you may find missing symbology and such, but it is enough to give you the picture.) The Result object contains messaging about the failure as well as a simple status (true/false). This gives you the option of just asking "did it pass?" vs. "if it failed, tell me why".
QuickCollection and QuickMap are convenience classes for taking any class and quickly turning it into one of those respective types by merely assigning a delegate. For this example it means your composite validator is already a collection and can be iterated, for example.
You had a secondary problem in your question: "cleanly binning", as in "Type A" -> rules{a,b,c} and "Type B" -> rules{c,e,z}.
This is easily managed with a Map. It is not entirely a Command pattern, but it is close:
Map<Type,Validator> typeValidators = new HashMap<>();
Set up the validator for each type, then create the mapping between types. This is really best done as bean config if you're using Java, but definitely use dependency injection.
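A sketch of that wiring (RecordType, the record instance and the individual rule instances are assumed names; Validator and CompositeValidator are defined below):
Map<RecordType, Validator> typeValidators = new HashMap<>();
typeValidators.put(RecordType.TYPE_A, new CompositeValidator(ruleA, ruleB, ruleC));
typeValidators.put(RecordType.TYPE_B, new CompositeValidator(ruleC, ruleE, ruleZ));
// at validation time, look up the rules for the record's type and apply them
Validator.Result result = typeValidators.get(record.getType()).validate(record);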
public interface Validator<T>{
public Result validate(T value);
public static interface Result {
public static final Result OK = new Result() {
@Override
public String getMessage() {
return "OK";
}
@Override
public String toString() {
return "OK";
}
@Override
public boolean isOk() {
return true;
}
};
public boolean isOk();
public String getMessage();
}
}
Now some simple implementations to show the point:
public class MinLengthValidator implements Validator<String> {
private final SimpleResult FAILED;
private Integer minLength;
public MinLengthValidator() {
this(8);
}
public MinLengthValidator(Integer minLength) {
this.minLength = minLength;
FAILED = new SimpleResult("Password must be at least "+minLength+" characters",false);
}
@Override
public Result validate(String newPassword) {
return newPassword.length() >= minLength ? Result.OK : FAILED;
}
@Override
public String toString() {
return this.getClass().getSimpleName();
}
}
Here is another one, which we will combine with the first:
public class NotCurrentValidator implements Validator<String> {
@Autowired
@Qualifier("userPasswordEncoder")
private PasswordEncoder encoder;
private static final SimpleResult FAILED = new SimpleResult("Password cannot be your current password",false);
@Override
public Result validate(String newPassword) {
boolean passed = !encoder.matches(newPassword,user.getPassword());
return (passed ? Result.OK : FAILED);
}
@Override
public String toString() {
return this.getClass().getSimpleName();
}
}
Now here is a composite:
public class CompositeValidator extends QuickCollection<Validator> implements Validator<String> {
public CompositeValidator(Collection<Validator> rules) {
super.delegate = rules;
}
public CompositeValidator(Validator<?>... rules) {
super.delegate = Arrays.asList(rules);
}
@Override
public CompositeResult validate(String newPassword) {
CompositeResult result = new CompositeResult(super.delegate.size());
for(Validator rule : super.delegate){
Result temp = rule.validate(newPassword);
if(!temp.isOk())
result.put(rule,temp);
}
return result;
}
public static class CompositeResult extends QuickMap<Validator,Result> implements Result {
private Integer appliedCount;
private CompositeResult(Integer appliedCount) {
super.delegate = VdcCollections.delimitedMap(new HashMap<Validator, Result>(), "-->",", ");
this.appliedCount = appliedCount;
}
@Override
public String getMessage() {
return super.delegate.toString();
}
@Override
public String toString() {
return super.delegate.toString();
}
@Override
public boolean isOk() {
boolean isOk = true;
for (Result r : delegate.values()) {
isOk = r.isOk();
if(!isOk)
break;
}
return isOk;
}
public Integer failCount() {
return this.size();
}
public Integer passCount() {
return appliedCount - this.size();
}
}
}
and now a snippet of use:
private Validator<String> pwRule = new CompositeValidator(new MinLengthValidator(), new NotCurrentValidator());
Validator.Result result = pwRule.validate(newPassword);
if(!result.isOk())
throw new PasswordConstraintException("%s", result.getMessage());
user.obsoleteCurrentPassword();
user.setPassword(passwordEncoder.encode(newPassword));
user.setPwExpDate(DateTime.now().plusDays(passwordDaysToLive).toDate());
userDao.updateUser(user);
Chain of responsibility implies that there is an order in which the validations must take place. I would probably use something similar to the Strategy pattern where you have a Set of validation strategies that are applied to a specific type of record. You could then use a factory to examine the record and apply the correct set of validations.
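A minimal sketch of that idea (RecordType, Record and the Validator interface are assumed names):
public class ValidatorFactory {
private final Map<RecordType, Set<Validator<Record>>> strategies = new HashMap<>();
public void register(RecordType type, Set<Validator<Record>> validators) {
strategies.put(type, validators);
}
// Examine the record and return the set of validations that applies to its type
public Set<Validator<Record>> validatorsFor(Record record) {
return strategies.getOrDefault(record.getType(), Collections.emptySet());
}
}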