How to use Aggregation Query with MongoItemReader in spring batch - java

Somehow the requirement changed and I have to use an aggregation query instead of a basic query in setQuery(). Is this even possible?
Please suggest how I can do that. My aggregation query is ready, but I am not sure how to use it in Spring Batch.
public ItemReader<MyCollection> searchMongoItemReader() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
    MongoItemReader<MyCollection> mongoItemReader = new MongoItemReader<>();
    mongoItemReader.setTemplate(myMongoTemplate);
    mongoItemReader.setCollection(myMongoCollection);
    mongoItemReader.setQuery(" Some Simple Query - Basic");
    mongoItemReader.setTargetType(MyCollection.class);
    Map<String, Sort.Direction> sort = new HashMap<>();
    sort.put("field4", Sort.Direction.ASC);
    mongoItemReader.setSort(sort);
    return mongoItemReader;
}

Extend MongoItemReader and provide your own implementation of the doPageRead() method. This way you keep full pagination support, and reading the documents remains part of a step.
public class CustomMongoItemReader<T, O> extends MongoItemReader<T> {
    private MongoTemplate template;
    private Class<? extends T> inputType;
    private Class<O> outputType;
    private MatchOperation match;
    private ProjectionOperation projection;
    private String collection;

    @Override
    protected Iterator<T> doPageRead() {
        // page and pageSize come from AbstractPaginatedDataItemReader, which MongoItemReader extends;
        // newAggregation(), skip() and limit() are static imports from Aggregation
        Pageable pageable = PageRequest.of(page, pageSize);
        Aggregation agg = newAggregation(match, projection,
                skip((long) pageable.getPageNumber() * pageable.getPageSize()),
                limit(pageable.getPageSize()));
        return (Iterator<T>) template.aggregate(agg, collection, outputType).iterator();
    }
}
Add the remaining getters and setters and the other methods; just have a look at the source code of MongoItemReader here.
I also removed the Query support from it. You can keep that in the same method as well; just copy it over from MongoItemReader. The same goes for Sort; a sketch of how a sort stage could be re-added is shown below.
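For example (a hypothetical sketch, assuming the reader keeps the same Map<String, Sort.Direction> setter that MongoItemReader exposes, stored in a sortMap field):

// Convert the Map<String, Sort.Direction> into a Sort and add it to the
// aggregation pipeline as a stage before skip/limit.
private Sort convertToSort(Map<String, Sort.Direction> sorts) {
    List<Sort.Order> orders = sorts.entrySet().stream()
            .map(e -> new Sort.Order(e.getValue(), e.getKey()))
            .collect(Collectors.toList());
    return Sort.by(orders);
}

// inside doPageRead():
// Aggregation agg = newAggregation(match, projection, sort(convertToSort(sortMap)),
//         skip(...), limit(...));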
And in the class where you have a reader, you would do something like:
public MongoItemReader<Input> reader() {
    // Input and Output are placeholders for your own input/output types
    CustomMongoItemReader<Input, Output> reader = new CustomMongoItemReader<>();
    reader.setTemplate(mongoTemplate);
    reader.setName("abc");
    reader.setTargetType(Input.class);
    reader.setOutputType(Output.class);
    reader.setCollection(myMongoCollection);
    reader.setMatch(Aggregation.match(new Criteria()....));
    reader.setProjection(Aggregation.project("..", ".."));
    return reader;
}

To be able to use aggregation in a job, making use of all the features Spring Batch has, you have to create a custom ItemReader. By extending AbstractPaginatedDataItemReader we can use all the elements from pageable operations.
Here's a simple example of such a custom class:
public class CustomAggreagationPaginatedItemReader<T> extends AbstractPaginatedDataItemReader<T> implements InitializingBean {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\?(\\d+)");
    private MongoOperations template;
    private Class<? extends T> type;
    private Sort sort;
    private String collection;

    public CustomAggreagationPaginatedItemReader() {
        super();
        setName(ClassUtils.getShortName(CustomAggreagationPaginatedItemReader.class));
    }

    public void setTemplate(MongoOperations template) {
        this.template = template;
    }

    public void setTargetType(Class<? extends T> type) {
        this.type = type;
    }

    public void setSort(Map<String, Sort.Direction> sorts) {
        this.sort = convertToSort(sorts);
    }

    public void setCollection(String collection) {
        this.collection = collection;
    }

    @Override
    @SuppressWarnings("unchecked")
    protected Iterator<T> doPageRead() {
        Pageable pageRequest = new PageRequest(page, pageSize, sort);
        BasicDBObject cursor = new BasicDBObject();
        cursor.append("batchSize", 100);
        SkipOperation skipOperation = skip(Long.valueOf(pageRequest.getPageNumber()) * Long.valueOf(pageRequest.getPageSize()));
        Aggregation aggregation = newAggregation(
                // Include here all your aggregationOperations,
                skipOperation,
                limit(pageRequest.getPageSize())
        ).withOptions(newAggregationOptions().cursor(cursor).build());
        return (Iterator<T>) template.aggregate(aggregation, collection, type).iterator();
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        Assert.state(template != null, "An implementation of MongoOperations is required.");
        Assert.state(type != null, "A type to convert the input into is required.");
        Assert.state(collection != null, "A collection is required.");
    }

    private String replacePlaceholders(String input, List<Object> values) {
        Matcher matcher = PLACEHOLDER.matcher(input);
        String result = input;
        while (matcher.find()) {
            String group = matcher.group();
            int index = Integer.parseInt(matcher.group(1));
            result = result.replace(group, getParameterWithIndex(values, index));
        }
        return result;
    }

    private String getParameterWithIndex(List<Object> values, int index) {
        return JSON.serialize(values.get(index));
    }

    private Sort convertToSort(Map<String, Sort.Direction> sorts) {
        List<Sort.Order> sortValues = new ArrayList<Sort.Order>();
        for (Map.Entry<String, Sort.Direction> curSort : sorts.entrySet()) {
            sortValues.add(new Sort.Order(curSort.getValue(), curSort.getKey()));
        }
        return new Sort(sortValues);
    }
}
If you look closely, you can see it was created using MongoItemReader from the Spring framework (see org.springframework.batch.item.data.MongoItemReader); that class shows the way to create a whole new reader extending AbstractPaginatedDataItemReader. If you look at its doPageRead method, you should be able to see that it only uses the find operation of MongoTemplate, which makes it impossible to use aggregation operations with it.
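For reference, the stock doPageRead boils down to a paginated find; this is a paraphrase, not the verbatim Spring Batch source:

@Override
protected Iterator<T> doPageRead() {
    // Simplified: apply page, pageSize and sort to the configured query, then find()
    Pageable pageable = PageRequest.of(page, pageSize, sort);
    Query paged = query.with(pageable); // 'query' stands in for the reader's configured Query
    return (Iterator<T>) template.find(paged, type).iterator(); // find(), never aggregate()
}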
Here is how using our CustomReader should look:
@Bean
public ItemReader<YourDataClass> reader(MongoTemplate mongoTemplate) {
    CustomAggreagationPaginatedItemReader<YourDataClass> customAggreagationPaginatedItemReader = new CustomAggreagationPaginatedItemReader<>();
    Map<String, Direction> sort = new HashMap<String, Direction>();
    sort.put("id", Direction.ASC);
    customAggreagationPaginatedItemReader.setTemplate(mongoTemplate);
    customAggreagationPaginatedItemReader.setCollection("collectionName");
    customAggreagationPaginatedItemReader.setTargetType(YourDataClass.class);
    customAggreagationPaginatedItemReader.setSort(sort);
    return customAggreagationPaginatedItemReader;
}
As you may notice, you also need an instance of MongoTemplate; here's how that should look too:
@Bean
public MongoTemplate mongoTemplate(MongoDbFactory mongoDbFactory) {
    return new MongoTemplate(mongoDbFactory);
}
Here MongoDbFactory is an object autowired by the Spring framework.
Hope that's enough to help you.

Related

spring data mongodb calling save twice leads to duplicate key exception

I am trying to save an entity with a Spring Data MongoDB repository. I have an EventListener that cascades saves.
The problem is that I need to save the entity first to get its internal id, then perform further state mutations and save the entity again afterwards.
@Test
void testUpdate() {
    FooDto fooDto = getResource("/json/foo.json", new TypeReference<FooDto>() {
    });
    Foo foo = fooMapper.fromDTO(fooDto);
    foo = fooService.save(foo);
    log.info("Saved foo: " + foo);
    foo.setState(FooState.Bar);
    foo = fooService.save(foo);
    log.info("Updated foo: " + foo);
}
I have an index on a child collection of foo. The save will not update the children but tries to insert them twice, which leads to org.springframework.dao.DuplicateKeyException.
Why does it not update but instead tries to insert again?
Related:
Spring Data MongoRepository save causing Duplicate Key error
Edit: versions:
mongodb 4,
spring boot 2.3.3.RELEASE
Edit more details:
Repository:
public interface FooRepository extends MongoRepository<Foo, String> { }
Entity:
@Document
public class Foo {
    @Id
    private String id;
    private FooState state;
    @DBRef
    @Cascade
    private Collection<Bar> bars = new ArrayList<>();
    ...
}
CascadeMongoEventListener:
//from https://mflash.dev/blog/2019/07/08/persisting-documents-with-mongorepository/#unit-tests-for-the-accountrepository
public class CascadeMongoEventListener extends AbstractMongoEventListener<Object> {

    @Autowired
    private MongoOperations mongoOperations;

    @Override
    public void onBeforeConvert(final BeforeConvertEvent<Object> event) {
        final Object source = event.getSource();
        ReflectionUtils
                .doWithFields(source.getClass(), new CascadeSaveCallback(source, mongoOperations));
    }

    private static class CascadeSaveCallback implements ReflectionUtils.FieldCallback {

        private final Object source;
        private final MongoOperations mongoOperations;

        public CascadeSaveCallback(Object source, MongoOperations mongoOperations) {
            this.source = source;
            this.mongoOperations = mongoOperations;
        }

        @Override
        public void doWith(final Field field)
                throws IllegalArgumentException, IllegalAccessException {
            ReflectionUtils.makeAccessible(field);
            if (field.isAnnotationPresent(DBRef.class) && field.isAnnotationPresent(Cascade.class)) {
                final Object fieldValue = field.get(source);
                if (Objects.nonNull(fieldValue)) {
                    final var callback = new IdentifierCallback();
                    final CascadeType cascadeType = field.getAnnotation(Cascade.class).value();
                    if (cascadeType.equals(CascadeType.PERSIST) || cascadeType.equals(CascadeType.ALL)) {
                        if (fieldValue instanceof Collection<?>) {
                            ((Collection<?>) fieldValue).forEach(mongoOperations::save);
                        } else {
                            ReflectionUtils.doWithFields(fieldValue.getClass(), callback);
                            mongoOperations.save(fieldValue);
                        }
                    }
                }
            }
        }
    }

    private static class IdentifierCallback implements ReflectionUtils.FieldCallback {

        private boolean idFound;

        @Override
        public void doWith(final Field field) throws IllegalArgumentException {
            ReflectionUtils.makeAccessible(field);
            if (field.isAnnotationPresent(Id.class)) {
                idFound = true;
            }
        }

        public boolean isIdFound() {
            return idFound;
        }
    }
}
Edit: expected behaviour
From the docs in org.springframework.data.mongodb.core.MongoOperations#save(T):
Save the object to the collection for the entity type of the object to
save. This will perform an insert if the object is not already
present, that is an 'upsert'.
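In code terms, the expectation for the test above is (a minimal sketch, assuming fooService delegates to MongoOperations#save):

Foo foo = new Foo();        // id is null at this point
foo = fooService.save(foo); // insert: MongoDB generates and assigns the id
foo.setState(FooState.Bar);
foo = fooService.save(foo); // id is now non-null, so this should be an update (upsert)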
Edit - new insights:
It might be related to the index on the Bar child collection (DBRef and Cascade lead to mongoOperations::save being called from the EventListener).
I created another similar test with another entity and it worked.
The index on the child "Bar" entity (which is held as a collection in the parent "Foo" entity):
@CompoundIndex(unique = true, name = "fooId_name", def = "{'fooId': 1, 'name': 1}")
Update: I think I found the problem. Since I am using custom serialization/deserialization in my Converter (Document.parse()), the id field is not mapped properly. This results in the id being null, which leads to an insert instead of an update.
I will write an answer once I have resolved this properly.
public class MongoResultConversion {

    @Component
    @ReadingConverter
    public static class ToResultConverter implements Converter<Document, Bar> {

        private final ObjectMapper mapper;

        @Autowired
        public ToResultConverter(ObjectMapper mapper) {
            this.mapper = mapper;
        }

        public Bar convert(Document source) {
            String json = toJson(source);
            try {
                return mapper.readValue(json, new TypeReference<Bar>() {
                });
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
        }

        protected String toJson(Document source) {
            return source.toJson();
        }
    }

    @Component
    @WritingConverter
    public static class ToDocumentConverter implements Converter<Bar, Document> {

        private final ObjectMapper mapper;

        @Autowired
        public ToDocumentConverter(ObjectMapper mapper) {
            this.mapper = mapper;
        }

        public Document convert(Bar source) {
            String json = toJson(source);
            return Document.parse(json);
        }

        protected String toJson(Bar source) {
            try {
                return mapper.writeValueAsString(source);
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
        }
    }
}
As stated in my last edit, the problem was with the custom serialization/deserialization and Mongo document conversion. This resulted in the id being null, so an insert was done instead of an upsert.
The following code is my implementation of a custom converter that maps the ObjectId:
public class MongoBarConversion {

    // Assumed constant: the JSON property name for Bar's id (referenced below but not shown in the original post)
    private static final String ID_FIELD = "id";

    @Component
    @ReadingConverter
    public static class ToBarConverter implements Converter<Document, Bar> {

        private final ObjectMapper mapper;

        @Autowired
        public ToBarConverter(ObjectMapper mapper) {
            this.mapper = mapper;
        }

        public Bar convert(Document source) {
            JsonNode json = toJson(source);
            setObjectId(source, json);
            return mapper.convertValue(json, new TypeReference<Bar>() {
            });
        }

        protected void setObjectId(Document source, JsonNode jsonNode) {
            ObjectNode modifiableObject = (ObjectNode) jsonNode;
            String objectId = getObjectId(source);
            modifiableObject.put(ID_FIELD, objectId);
        }

        protected String getObjectId(Document source) {
            String objectIdLiteral = null;
            ObjectId objectId = source.getObjectId("_id");
            if (objectId != null) {
                objectIdLiteral = objectId.toString();
            }
            return objectIdLiteral;
        }

        protected JsonNode toJson(Document source) {
            JsonNode node = null;
            try {
                String json = source.toJson();
                node = mapper.readValue(json, JsonNode.class);
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
            return node;
        }
    }

    @Component
    @WritingConverter
    public static class ToDocumentConverter implements Converter<Bar, Document> {

        private final ObjectMapper mapper;

        @Autowired
        public ToDocumentConverter(ObjectMapper mapper) {
            this.mapper = mapper;
        }

        public Document convert(Bar source) {
            try {
                JsonNode jsonNode = toJson(source);
                setObjectId(source, jsonNode);
                String json = mapper.writeValueAsString(jsonNode);
                return Document.parse(json);
            } catch (JsonProcessingException e) {
                throw new RuntimeException(e);
            }
        }

        protected void setObjectId(Bar source, JsonNode jsonNode) throws JsonProcessingException {
            ObjectNode modifiableObject = (ObjectNode) jsonNode;
            JsonNode objectIdJson = getObjectId(source);
            modifiableObject.set("_id", objectIdJson);
            modifiableObject.remove(ID_FIELD);
        }

        protected JsonNode getObjectId(Bar source) throws JsonProcessingException {
            ObjectNode _id = null;
            String id = source.getId();
            if (id != null) {
                _id = JsonNodeFactory.instance.objectNode();
                _id.put("$oid", id);
            }
            return _id;
        }

        protected JsonNode toJson(Bar source) {
            return mapper.convertValue(source, JsonNode.class);
        }
    }
}
So to conclude: two subsequent saves should (and will) definitely lead to an upsert if the id is non-null. The bug was in my code.
All MongoDB drivers include functionality to generate ids on the client side. If you only save to get the id, research how to use client-side id generation and remove the first save entirely.
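A minimal sketch of that (an assumption-laden example: it presumes Foo uses the String/ObjectId-backed id shown above and exposes a setter for it):

import org.bson.types.ObjectId;

Foo foo = fooMapper.fromDTO(fooDto);
foo.setId(new ObjectId().toHexString()); // generate the id client-side
foo.setState(FooState.Bar);              // mutate freely before persisting
foo = fooService.save(foo);              // a single save; no insert-then-update round trip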
I believe you are facing this issue because you save a second time without fetching from the db first. You are changing the object returned by the first save, not the state saved in the db. Try retrieving the existing foo with a method like findById, then perform the next steps and save it.
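A sketch of that approach (using the FooRepository from the question):

Foo persisted = fooRepository.findById(foo.getId())
        .orElseThrow(IllegalStateException::new); // re-fetch the persisted state
persisted.setState(FooState.Bar);
fooRepository.save(persisted); // updates the existing document instead of inserting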

QueryDsl avoiding multiple if blocks

Currently I am using Querydsl in my Java (with JPA) EE project. I receive a FilterObject from the UI as JSON with all the filters. My FilterObject looks like this:
public class FilterObject {
    private String name;
    private List<Status> status;
    private String module;
    private List<Source> source;
    ......
}
And in my service class I have something like this
public List<MyModel> findByFilter(FilterObject filterObject) {
    BooleanBuilder builder = new BooleanBuilder();
    QMyModel mymodel = QMyModel.myModel;
    if (filterObject.getName() != null) {
        builder.and(mymodel.name.contains(filterObject.getName()));
    }
    if (!CollectionUtils.isEmpty(filterObject.getStatus())) {
        builder.and(mymodel.status.in(filterObject.getStatus()));
    }
    ...............
    ...............
}
And finally I have this
JPAQuery<MyModel> query = new JPAQuery<>(getEntityManager());
List<MyModel> myModels = query.from(QMyModel.myModel).where(builder).fetch();
EDIT:
/**
 * QMyModel is a Querydsl query type for MyModel
 */
@Generated("com.querydsl.codegen.EntitySerializer")
public class QMyModel extends EntityPathBase<MyModel> {
    private static final long serialVersionUID = 1041638507L;
    private static final PathInits INITS = PathInits.DIRECT2;
    public static final QMyModel myModel = new QMyModel("myModel");
    public final StringPath name = createString("name");
    public final EnumPath<Status> status = createEnum("status", Status.class);
    public final StringPath module = createString("module");
    ........
    .......
}
All of this works. But my FilterObject is growing and now has more than 10 fields, so I have about 10 if blocks in my service method. Is there a better way to do this that avoids so many if blocks?
You can use lambdas or, even better in this case, method references:
public List<MyModel> findByFilter(FilterObject filterObject) {
    BooleanBuilder builder = new BooleanBuilder();
    QMyModel mymodel = QMyModel.myModel;
    add(builder, filterObject.getName(), mymodel.name::contains);
    add(builder, filterObject.getStatus(), mymodel.status::in);
    ...
}

private <T> void add(BooleanBuilder builder, T filterElement, Function<T, BooleanExpression> booleanExpressionFunction) {
    if (valid(filterElement)) {
        builder.and(booleanExpressionFunction.apply(filterElement));
    }
}

private boolean valid(Object filterElement) {
    if (filterElement == null) {
        return false;
    }
    if (filterElement instanceof Collection) {
        return !((Collection<?>) filterElement).isEmpty();
    }
    return true;
}

Strategy - automatically register by interface type

I have a lot of actions. All actions work with some Object/Context that is passed to every action. I want to use the Strategy/Policy pattern.
Here are examples in Kotlin:
interface Action {
    val name: String
    fun run(ctx: Context)
}

class Multiply : Action {
    override val name = "MULTIPLY"
    override fun run(ctx: Context) {
        writeToDb(ctx.id, ctx.number * 2)
    }
}

class Subtract
class SendNotification
etc...
So I want to register all strategies on startup and then select a strategy from a structure like an Enum.
val action = selectAwaitingAction()
val ctx = selectCtxById(action.transaction_id)
performAction(ctx, actions.getByName(action.name))

fun performAction(ctx: Context, action: Action) {
    action.run(ctx)
}
My question is: how do I register strategies by interface type?
Note: This is a complete example. If you are looking only for automatic registration by interface type, scroll to the last part of the answer.
The Strategy design pattern can be implemented using function tables. This stores the implementations in a Map<String, IImpl>, where the key is the name of the algorithm and the value is a concrete implementation of that algorithm.
Common approach:
Consider a class Context holding all parameters shared between implementations of the interface Solver.
public class Context extends HashMap<String, Object> {

    public <T> T get(String key, Class<T> resultClass) {
        return resultClass.cast(get(key));
    }

    public <T> T getOrDefault(String key, T def, Class<T> resultClass) {
        return resultClass.cast(getOrDefault(key, def));
    }
}
And an interface Solver with the required methods solve and name:
public interface Solver {
    void solve(Context context);
    String name();
}
Then you can create implementations of the Solver interface that modify the shared Context object. I have created AddSolver and MultiplySolver in this example.
AddSolver.java:
public class AddSolver implements Solver {

    @Override
    public void solve(Context context) {
        context.put("result", context.getOrDefault("result", 0.0, Double.class) + context.get("add", Double.class));
    }

    @Override
    public String name() {
        return "+";
    }
}
MultiplySolver.java:
public class MultiplySolver implements Solver {

    @Override
    public void solve(Context context) {
        context.put("result", context.getOrDefault("result", 0.0, Double.class) * context.get("multiply", Double.class));
    }

    @Override
    public String name() {
        return "*";
    }
}
Manual construction of Map<String,Solver>:
Implementations of the Solver interface can be stored in a HashMap<String, Solver>:
@Test
public void testCustomFunctionMap() {
    HashMap<String, Solver> functionMap = new HashMap<>();
    functionMap.put("+", new AddSolver());
    functionMap.put("*", new MultiplySolver());
    Context context = new Context();
    context.put("add", 2.0);
    functionMap.get("+").solve(context);
    TestCase.assertEquals(2.0, context.get("result", Double.class));
    context.put("multiply", 3.0);
    functionMap.get("*").solve(context);
    TestCase.assertEquals(6.0, context.get("result", Double.class));
}
Automatic construction of Map<String,Solver>
There are more approaches if you need to construct the Map<String, Solver> automatically; a lot of them are mentioned in this question. I have used the org.reflections library.
public class SolverScanner {

    static HashMap<String, Solver> functionMap;

    static {
        functionMap = new HashMap<>();
        Reflections reflections = new Reflections(SolverScanner.class.getPackage().getName());
        for (Class<? extends Solver> clazz : reflections.getSubTypesOf(Solver.class)) {
            try {
                Solver solver = clazz.newInstance();
                functionMap.put(solver.name(), solver);
            } catch (Exception e) {
                throw new IllegalStateException("Cannot construct functionMap", e);
            }
        }
    }

    public static HashMap<String, Solver> getFunctionMap() {
        return functionMap;
    }

    private SolverScanner() {} // disable instantiating
}
And usage:
@Test
public void testSolverScannerFunctionMap() {
    HashMap<String, Solver> functionMap = SolverScanner.getFunctionMap();
    Context context = new Context();
    context.put("add", 2.0);
    functionMap.get("+").solve(context);
    TestCase.assertEquals(2.0, context.get("result", Double.class));
    context.put("multiply", 3.0);
    functionMap.get("*").solve(context);
    TestCase.assertEquals(6.0, context.get("result", Double.class));
}

Converting RAW JDBC code to use JdbcTemplate

I have recently started my first Software Dev job out of university and one of my tasks at the moment is to convert raw JDBC Code to use JdbcTemplate in order to get rid of boilerplate code.
I have written an example of a DAO class using JdbcTemplate which retrieves a user's address.
Would someone be able to tell me if this looks like the right pattern/approach, or am I missing anything here?
public class AccountsDAO extends StoredProcedure {

    private static final String STOREDPROC = "accounts.getAccountDetails";

    public AccountsDAO(JdbcTemplate jdbcTemplate) {
        super(jdbcTemplate, STOREDPROC);
        declareParameter(new SqlParameter("client_id", OracleTypes.VARCHAR));
        declareParameter(new SqlOutParameter("accounts_csr", OracleTypes.CURSOR, new AccountAddressExtractor()));
        setFunction(false);
        compile();
    }

    private List<String> getAccountAddress(String account) {
        Map<String, Object> params = new HashMap<String, Object>();
        Map<String, Object> results;
        List<String> data = new LinkedList<String>();
        try {
            params.put("client_id", account);
            results = execute(params);
            // the cursor out parameter holds one List<String> per row
            List<List<String>> rows = (List<List<String>>) results.get("accounts_csr");
            if (rows != null && !rows.isEmpty()) {
                data = rows.get(0);
            }
        } catch (Exception e) {
            // report error
        }
        return data;
    }

    private class AccountAddressExtractor implements RowMapper<List<String>> {
        @Override
        public List<String> mapRow(ResultSet rs, int i) throws SQLException {
            List<String> data = new ArrayList<String>();
            data.add(rs.getString(1));
            data.add(rs.getString(2));
            data.add(rs.getString(3));
            return data;
        }
    }
}
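For what it's worth, if the goal is simply to cut boilerplate, a SimpleJdbcCall is often a lighter alternative to subclassing StoredProcedure. A sketch under the same assumptions (the accounts.getAccountDetails procedure, a client_id IN parameter, an accounts_csr cursor OUT parameter, and the AccountAddressExtractor row mapper above):

@SuppressWarnings("unchecked")
public List<String> getAccountAddress(JdbcTemplate jdbcTemplate, String account) {
    SimpleJdbcCall call = new SimpleJdbcCall(jdbcTemplate)
            .withCatalogName("accounts")            // schema/package holding the proc
            .withProcedureName("getAccountDetails")
            .declareParameters(
                    new SqlParameter("client_id", OracleTypes.VARCHAR),
                    new SqlOutParameter("accounts_csr", OracleTypes.CURSOR, new AccountAddressExtractor()));
    Map<String, Object> results = call.execute(Map.of("client_id", account));
    List<List<String>> rows = (List<List<String>>) results.get("accounts_csr");
    return (rows == null || rows.isEmpty()) ? new ArrayList<>() : rows.get(0);
}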

Standalone Java Implementation for extracting values in URI Template (RFC 6570)?

Is there a standalone Java implementation to extract the values of parameters in a URI as defined by a URI Template (RFC 6570)?
The best implementation I've found is a Ruby implementation (https://github.com/sporkmonger/addressable).
Via http://code.google.com/p/uri-templates/wiki/Implementations I found a Java implementation: Handy-URI-Templates.
It supports resolving a URI Template with parameter values to a final URI. Unfortunately, it cannot do the reverse: extracting the parameter values from a URI according to a URI Template.
Implementations of JAX-RS (or Restlet) have this feature internally,
but none seems to have isolated this feature in a module that could be used independently.
Does anyone have another idea?
Here is an example using Spring Web:
import org.springframework.web.util.UriTemplate;

public class UriParserSpringImpl implements UriParser {

    private final UriTemplate uriTemplate;
    private final String uriTemplateStr;

    public UriParserSpringImpl(final String template) {
        this.uriTemplateStr = template;
        this.uriTemplate = new UriTemplate(template);
    }

    @Override
    public Map<String, String> parse(final String uri) {
        final boolean match = this.uriTemplate.matches(uri);
        if (!match) {
            return null;
        }
        return uriUtils.decodeParams(this.uriTemplate.match(uri));
    }

    @Override
    public Set<String> getVariables() {
        return Collections.unmodifiableSet(new LinkedHashSet<String>(this.uriTemplate.getVariableNames()));
    }
}
Another one for Jersey (a JAX-RS implementation):
import com.sun.jersey.api.uri.UriTemplate;

public class UriParserJerseyImpl implements UriParser {

    private final UriTemplate uriTemplate;
    private final Map<String, String> valuesMaps;

    public UriParserJerseyImpl(final String template) {
        this.uriTemplate = new UriTemplate(template);
        final Map<String, String> valuesMaps = new HashMap<String, String>();
        for (final String prop : this.uriTemplate.getTemplateVariables()) {
            valuesMaps.put(prop, null);
        }
        this.valuesMaps = Collections.unmodifiableMap(valuesMaps);
    }

    @Override
    public Map<String, String> parse(final String uri) {
        final Map<String, String> values = new HashMap<String, String>(this.valuesMaps);
        final boolean match = this.uriTemplate.match(uri, values);
        if (!match) {
            return null;
        }
        return values;
    }

    @Override
    public Set<String> getVariables() {
        return this.valuesMaps.keySet();
    }
}
With the interface:
public interface UriParser {
    public Set<String> getVariables();
    public Map<String, String> parse(final String uri);
}
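Either implementation can then be used like this (a hypothetical template and URI, for illustration):

UriParser parser = new UriParserSpringImpl("/users/{id}/orders/{orderId}");
Map<String, String> values = parser.parse("/users/42/orders/7");
// values -> {id=42, orderId=7}; parse(...) returns null when the URI does not match
Set<String> variables = parser.getVariables(); // [id, orderId]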
The damnhandy uri template library has an open issue for exactly this feature. I've already gotten the PR for the feature merged and it should be out in version 2.2! Head over there and let the maintainers know you're interested.
Also if you can't wait, you can see how I did it here and use that for yourself.
java.net.URI
Can't set the parameters after it's instantiated, but it has a nice set of getters and you can construct a new one to alter it.
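A minimal sketch of that approach:

import java.net.URI;

// URI is immutable: read the parts you want to keep and build a new instance
URI uri = URI.create("https://example.com/users/42?verbose=true");
URI changed = new URI(uri.getScheme(), uri.getAuthority(), "/users/43",
        uri.getQuery(), uri.getFragment()); // declares URISyntaxException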
