I have a list of documents called customers that I retrieved using MongoTemplate; below are some of the documents:
{"customer": {"entityPerimeter": "abp", "name": "ZERZER", "siren": "6154645", "enterpriseId": "546456", "ic01": "", "marketingOffer": "qlksdjf", "irType": "Router", "offerSpecificationOfferLabel": "2Mb"}}
{"customer": {"entityPerimeter": "sdf", "name": "qazer", "siren": "156", "enterpriseId": "546456", "ic01": "", "marketingOffer": "qlksdjddddsqf", "irType": "Ruter", "offerSpecificationOfferLabel": "2Mb"}}
{"customer": {"entityPerimeter": "zer", "name": "fazdsdfsdgg", "siren": "sdfs", "enterpriseId": "1111", "ic01": "", "marketingOffer": "qsdfqsd", "irType": "Router", "offerSpecificationOfferLabel": "2Mb"}}
This is what I did in MongoDB to get that result:
public List<DBObject> findAllCustomersByExtractionDateMongo(LocalDate extractionDate) {
    Aggregation aggregation = newAggregation(
            match(Criteria.where(EXTRACTION_DATE).is(extractionDate)),
            project(CUSTOMER).andExclude("_id"),
            group().addToSet("$customer").as("distinct_customers"),
            unwind("distinct_customers"),
            project().andExclude("_id").and("distinct_customers").as("customer"),
            project().andExclude("distinct_customers")
    );
    return template
            .aggregate(aggregation, COLLECTION, DBObject.class)
            .getMappedResults();
}
Now what I really want is to map those documents to a class called Customer:
@Data
@NoArgsConstructor
@AllArgsConstructor
@Builder
public class Customer {
    private String entityPerimeter;
    private String name;
    private String siren;
    private String enterpriseId;
    private String ic01;
    private String marketingOffer;
    private String product;
    private String irType;
}
I tried to do that by creating a mapper interface:
public interface DocumentToCustomerMapper {
    String NULL = "null";

    static Customer getFilter(DBObject document) {
        var customer = new Customer();
        customer.setSiren(Optional.ofNullable((String) document.get(CustomerAttributes.SIREN.value())).orElse(NULL));
        customer.setEnterpriseId(Optional.ofNullable((String) document.get(CustomerAttributes.ENTERPRISE_ID.value())).orElse(NULL));
        customer.setEntityPerimeter(Optional.ofNullable((String) document.get(CustomerAttributes.ENTITY_PERIMETER.value())).orElse(NULL));
        customer.setName(Optional.ofNullable((String) document.get(CustomerAttributes.NAME.value())).orElse(NULL));
        customer.setIc01(Optional.ofNullable((String) document.get(CustomerAttributes.IC_01.value())).orElse(NULL));
        customer.setMarketingOffer(Optional.ofNullable((String) document.get(CustomerAttributes.MARKETING_OFFER.value())).orElse(NULL));
        customer.setProduct(Optional.ofNullable((String) document.get(CustomerAttributes.PRODUCT.value())).orElse(NULL));
        customer.setIrType(Optional.ofNullable((String) document.get(CustomerAttributes.IR_TYPE.value())).orElse(NULL));
        return customer;
    }
}
Then in findAllCustomersByExtractionDateMongo() I'm doing this:
public List<Customer> findAllCustomersByExtractionDateMongo(LocalDate extractionDate) {
    Aggregation aggregation = newAggregation(
            match(Criteria.where(EXTRACTION_DATE).is(extractionDate)),
            project(CUSTOMER).andExclude("_id"),
            group().addToSet("$customer").as("distinct_customers"),
            unwind("distinct_customers"),
            project().andExclude("_id").and("distinct_customers").as("customer"),
            project().andExclude("distinct_customers")
    );
    final Converter<DBObject, Customer> converter = DocumentToCustomerMapper::getFilter;
    MongoCustomConversions cc = new MongoCustomConversions(List.of(converter));
    ((MappingMongoConverter) template.getConverter()).setCustomConversions(cc);
    return template
            .aggregate(aggregation, COLLECTION, Customer.class)
            .getMappedResults();
}
But unfortunately it's giving me an exception:
Couldn't resolve type arguments for class com.obs.dqsc.api.repository.mongo_template.CustomerRepositoryImpl$$Lambda$1333/0x00000008012869a8!
I tried to remove this code:
final Converter<DBObject, Customer> converter = DocumentToCustomerMapper::getFilter;
MongoCustomConversions cc = new MongoCustomConversions(List.of(converter));
((MappingMongoConverter) template.getConverter()).setCustomConversions(cc);
Then all I'm getting is null values in my customer objects:
Customer(entityPerimeter=null, name=null, siren=null, enterpriseId=null, ic01=null, marketingOffer=null, product=null, irType=null)
Customer(entityPerimeter=null, name=null, siren=null, enterpriseId=null, ic01=null, marketingOffer=null, product=null, irType=null)
Customer(entityPerimeter=null, name=null, siren=null, enterpriseId=null, ic01=null, marketingOffer=null, product=null, irType=null)
Note: for performance reasons, I don't want to do any mapping on the Java side; I also don't want to use a global converter in my Mongo configuration.
The problem is that you are using a method reference to express your converter:
final Converter<DBObject, Customer> converter = DocumentToCustomerMapper::getFilter;
(Expanding the method reference to a lambda won't work either.)
Try rewriting that snippet to something else (such as an anonymous inner class).
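For example, a minimal sketch of the same converter written as an anonymous inner class, delegating to the mapper from the question (the two registration lines stay as they were):
final Converter<DBObject, Customer> converter = new Converter<DBObject, Customer>() {
    @Override
    public Customer convert(DBObject source) {
        // The concrete class carries the generic type arguments that a
        // lambda or method reference erases at runtime.
        return DocumentToCustomerMapper.getFilter(source);
    }
};
MongoCustomConversions cc = new MongoCustomConversions(List.of(converter));
((MappingMongoConverter) template.getConverter()).setCustomConversions(cc);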
Here is a very similar issue reported, including info on how to work around this problem: https://github.com/arangodb/spring-data/issues/120
I have the following MongoDB document:
@Data
@EqualsAndHashCode(callSuper = true)
@NoArgsConstructor
@AllArgsConstructor
@Accessors(chain = true)
@SuperBuilder
@Document(collection = ReasonDocument.COLLECTION)
public class ReasonDocument extends BaseDocument<ObjectId> {
    public static final String COLLECTION = "reasons";

    @Id
    private ObjectId id;
    @Indexed
    private ObjectId ownerId;
    @Indexed
    private LocalDate date;
    private Type type;
    private String reason;
}
I would like to get, for each ownerId, the row with the latest date, and additionally filter some of them out. I wrote a custom repository for that, where I use an aggregation with a group statement:
public class ReasonsRepositoryImpl implements ReasonsRepository {
    private final MongoTemplate mongoTemplate;

    @Autowired
    public ReasonsRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public List<ReasonDocument> findReasons(LocalDate date) {
        final Aggregation aggregation = Aggregation.newAggregation(
                sort(Direction.DESC, "date"),
                group("ownerId")
                        .first("id").as("id")
                        .first("reason").as("reason")
                        .first("type").as("type")
                        .first("date").as("date")
                        .first("ownerId").as("ownerId"),
                match(Criteria.where("date").lte(date).and("type").is(Type.TYPE_A))
        );
        return mongoTemplate.aggregate(aggregation, "reasons", ReasonDocument.class).getMappedResults();
    }
}
It is a smart query, but unfortunately it returns corrupted rows in testing:
java.lang.AssertionError:
Expecting:
<[ReasonDocument(id=5dd5500960483c1b2d974eed, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14),
ReasonDocument(id=5dd5500960483c1b2d974ee8, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1)]>
to contain exactly in any order:
<[ReasonDocument(id=5dd5500960483c1b2d974eef, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1),
ReasonDocument(id=5dd5500960483c1b2d974efc, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14)]>
elements not found:
<[ReasonDocument(id=5dd5500960483c1b2d974eef, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1),
ReasonDocument(id=5dd5500960483c1b2d974efc, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14)]>
and elements not expected:
<[ReasonDocument(id=5dd5500960483c1b2d974eed, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14),
ReasonDocument(id=5dd5500960483c1b2d974ee8, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1)]>
The id returned is the same as ownerId.
Could anyone say what is wrong with the query?
I'm not entirely sure whether this is the problem, but did you check how Mongo saved the ID? Even if you're grouping by ownerId, if Mongo saved the item under the _id header in your JSON, then you need to refer to it as _id.
For example, if it looks like this:
{
    "_id" : "2893u4jrnjnwfwpfn",
    "name" : "Jenkins"
}
then your groupBy should be groupBy(_id) and not what you've written.
This happens to be a limitation of MongoDB and the ORM, unless I'm missing something.
According to the documentation (https://docs.mongodb.com/manual/reference/operator/aggregation/group/), the native Mongo query looks like this:
{
    $group:
        {
            _id: <expression>, // Group By Expression
            <field1>: { <accumulator1> : <expression1> },
            ...
        }
}
So grouping itself creates a new _id: if I group by ownerId, that value ends up in the _id field.
One way of solving this is by using:
.first("_id").as("oldId")
and creating a new type with oldId as a field that can be later used to map back to original Document.
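Putting that together, a minimal sketch of the workaround (ReasonGroup is a hypothetical projection type; the pipeline otherwise mirrors the one above):
// Carries the grouped ownerId in "id" and the original document's _id in "oldId".
@Data
public class ReasonGroup {
    @Id
    private ObjectId id;     // the ownerId used as the group key
    private ObjectId oldId;  // the original _id, captured via first("_id")
    private LocalDate date;
    private Type type;
    private String reason;
}

final Aggregation aggregation = Aggregation.newAggregation(
        sort(Direction.DESC, "date"),
        group("ownerId")
                .first("_id").as("oldId")
                .first("reason").as("reason")
                .first("type").as("type")
                .first("date").as("date"),
        match(Criteria.where("date").lte(date).and("type").is(Type.TYPE_A))
);
List<ReasonGroup> reasons = mongoTemplate.aggregate(aggregation, "reasons", ReasonGroup.class)
        .getMappedResults();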
When I unmarshal my JSON, the Warehouses instance is OK, with however many warehouse instances are in its list.
Each warehouse instance has the url field, but the WarehouseField list has one instance with blank values.
I'm not sure what I'm missing.
JSON
{
    "warehouses": [
        {
            "warehouse": {
                "PRiyA": "0",
                "WHID": "1 ALABO",
                "PRixA": ""
            },
            "url": "http://ACL-HPDV6:8080/HSyncREST/api/v1/warehouses/PLL/1 ALABO"
        },
        {
            "warehouse": {
                "PRiyA": "0",
                "WHID": "1000 EDWAR",
                "PRixA": ""
            },
            "url": "http://ACL-HPDV6:8080/HSyncREST/api/v1/warehouses/PLL/1000 EDWAR"
        }
    ],
    "url": "http://ACL-HPDV6:8080/HSyncREST/api/v1/warehouses/PLL",
    "status": " "
}
Code used to unmarshal:
public static void main(String[] args) throws Exception {
    Class<?>[] ctx = {Warehouses.class, Warehouse.class, WarehouseField.class};
    JAXBContext jc = JAXBContext.newInstance(ctx);
    Unmarshaller um = jc.createUnmarshaller();
    um.setProperty(UnmarshallerProperties.MEDIA_TYPE, "application/json");
    um.setProperty(UnmarshallerProperties.JSON_INCLUDE_ROOT, false);
    Source json = new StreamSource(new File("D:/warehouses.json"));
    Warehouses warehouses = um.unmarshal(json, Warehouses.class).getValue();
}
Model classes
public class Warehouses {
    public List<Warehouse> warehouses;
    public String url;
    public String status;
    <getters and setters>
}

public class Warehouse {
    public List<WarehouseField> warehouse;
    public String url;
    <getters and setters>
}

public class WarehouseField {
    @XmlAttribute
    public String implName;
    @XmlValue
    public String value;
    <getters and setters>
}
First of all, I suggest you make all fields private; you have getters and setters for them.
It is also a good idea to separate the response DTO class names from the field and actual type naming.
Assuming that the field names in the response DTOs indicate the actual types, do a bit of refactoring: rename Warehouses to WarehousesResponse and Warehouse to WarehouseResponse.
Then, about the "array", a clip from the JSON:
"warehouse": {
    "PRiyA": "0",
    "WHID": "1 ALABO",
    "PRixA": ""
}
this is not an array named warehouse, so it does not deserialize nicely to a List.
It is an object of type Warehouse (hence the WarehouseResponse distinction, for clarity; but see also the note about Map later) held in a field named warehouse in an object of type WarehouseResponse (assuming you agree on the naming policy).
One option is to create a class like:
@Getter @Setter
public class Warehouse {
    private String PRiyA;
    private String WHID;
    private String PRixA;
}
and change WarehouseResponse like:
@Getter @Setter
public class WarehouseResponse {
    // Change the list to a warehouse object, as it is in the response
    // private List<WarehouseField> warehouse;
    private Warehouse warehouse;
    private String url;
    private Date date = new Date();
}
Usually it is also possible to map key/value pairs simply to, for example, a Map<String, String>, so in this case WarehouseResponse could also have private HashMap<String, String> warehouse and no Warehouse class would be needed. However, I could not get that working with my MOXy knowledge.
So I have presented how you can deserialize (and serialize) the format you gave in your JSON, but I cannot know whether this also suits your possible XML needs.
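For completeness, a sketch of that Map-based variant (untested, as noted above; it did not work for me with MOXy):
@Getter @Setter
public class WarehouseResponse {
    // Key/value pairs of the "warehouse" object would land directly in the map.
    private HashMap<String, String> warehouse;
    private String url;
}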
I am trying to learn MongoDB and Morphia and I have created a sample application in Java.
But while performing an aggregation I am getting an "invalid hexadecimal representation of an ObjectId" error.
The Morphia version is 1.3.2.
Entity: Address.java
@Entity
public class Address {
    @Id
    @Property("id")
    protected ObjectId id;
    private String street;
    private String building;
    private String pin;
}
Sample document:
{
    "_id" : ObjectId("58fcb704c1d24e05ce5851cb"),
    "building" : "SGV",
    "street" : "Galaxy Heights",
    "pin" : "411017"
}
AddressDAO.java:
public class AddressDAO extends BasicDAO<Address, ObjectId> {
    public AddressDAO(Class<Address> entityClass, Datastore ds) {
        super(entityClass, ds);
    }

    public List<Address> getByGroupedData(String pin) {
        Query<Address> query = createQuery().field("pin").equal(pin);
        Iterator<Address> pipeline = getDatastore().createAggregation(Address.class)
                .match(query)
                .group(Group.id(Group.grouping("building")))
                .out(Address.class);
        while (pipeline.hasNext()) {
            System.out.println(pipeline.next());
        }
        return null;
    }
}
When calling 'pipeline.next()' in AddressDAO.java I am getting the exception:
java.lang.IllegalArgumentException: invalid hexadecimal representation of an ObjectId: [{ "building" : "Galaxy Heights"}]
at org.bson.types.ObjectId.parseHexString(ObjectId.java:550)
at org.bson.types.ObjectId.<init>(ObjectId.java:240)
at org.mongodb.morphia.converters.ObjectIdConverter.decode(ObjectIdConverter.java:32)
at org.mongodb.morphia.converters.Converters.fromDBObject(Converters.java:124)
at org.mongodb.morphia.mapping.ValueMapper.fromDBObject(ValueMapper.java:20)
at org.mongodb.morphia.mapping.Mapper.readMappedField(Mapper.java:844)
at org.mongodb.morphia.mapping.Mapper.fromDb(Mapper.java:282)
at org.mongodb.morphia.mapping.Mapper.fromDBObject(Mapper.java:193)
Any idea what I am missing here?
I believe the problem is with the $out stage: it creates the new collection with the building value as the _id field.
So when you try to map that back to an Address object, whose id is defined as an ObjectId, it results in an error.
The fix is to use a $project stage to suppress the _id field from the final response, so the $out stage creates a new ObjectId.
Try something like this.
Iterator<Address> pipeline = getDatastore().createAggregation(Address.class)
        .match(query)
        .group(Group.id(Group.grouping("building")))
        .project(Projection.projection("_id").suppress(), Projection.projection("building", "$_id"))
        .out(Address.class);
Side note: you should probably map into a new POJO for the new collection.
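For instance, a minimal sketch of such a POJO (BuildingGroup is a hypothetical name; the fields follow the projected output of the pipeline above):
@Entity
public class BuildingGroup {
    @Id
    private ObjectId id;     // freshly generated, since _id was suppressed before $out
    private String building; // the grouped value, copied back from $_id
}

// Then write the aggregation output into the new type:
Iterator<BuildingGroup> pipeline = getDatastore().createAggregation(Address.class)
        .match(query)
        .group(Group.id(Group.grouping("building")))
        .project(Projection.projection("_id").suppress(), Projection.projection("building", "$_id"))
        .out(BuildingGroup.class);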
I have a document with many fields (some nested) indexed in Elasticsearch. For example:
{
    "id" : 1,
    "username" : "...",
    "name" : "...",
    "surname" : "...",
    "address" : "...",
    "age": 42,
    ...
    "bookmarks" : [{...}, {...}],
    "tags" : [{...}, {...}]
}
Only some fields are mapped in my entity (I don't want to map the entire document):
@Document(indexName = "...", type = "...")
public class User {
    @Id
    private int id;
    private String username;
    private String address;
    // getter/setter methods
}
In the service class I would like to do a partial update with ElasticsearchRepository, without mapping all of the document's fields in the entity:
public class UserServiceClass {
    @Autowired
    private UserElasticsearchRepository userElasticsearchRepository;

    public void updateAddress(int id, String updatedAddress) {
        User user = userElasticsearchRepository.findOne(id);
        user.setAddress(updatedAddress);
        userElasticsearchRepository.save(user);
    }
}
but the save method overwrites the entire document:
{
    "id" : 1,
    "username" : "...",
    "address" : "..."
}
Partial update seems not to be supported by ElasticsearchRepository, so I used ElasticsearchTemplate to make a partial update, for example:
public class UserServiceClass {
    @Autowired
    private UserElasticsearchRepository userElasticsearchRepository;
    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public void updateAddress(int id, String updatedAddress) {
        User user = userElasticsearchRepository.findOne(id);
        if (user.getUsername().equals("system")) {
            return;
        }
        IndexRequest indexRequest = new IndexRequest();
        indexRequest.source("address", updatedAddress);
        UpdateQuery updateQuery = new UpdateQueryBuilder()
                .withId(String.valueOf(user.getId()))
                .withClass(User.class)
                .withIndexRequest(indexRequest)
                .build();
        elasticsearchTemplate.update(updateQuery);
    }
}
but it seems a bit redundant to have two similar references (the repository and ElasticsearchTemplate).
Can anyone suggest a better solution?
Instead of having both ElasticsearchTemplate and UserElasticsearchRepository injected into your UserServiceClass, you can implement your own custom repository and let your existing UserElasticsearchRepository extend it.
I assume that your existing UserElasticsearchRepository looks something like this:
public interface UserElasticsearchRepository extends ElasticsearchRepository<User, String> {
....
}
You have to create a new interface named UserElasticsearchRepositoryCustom. Inside this interface you can list your custom query methods:
public interface UserElasticsearchRepositoryCustom {
    void updateAddress(User user, String updatedAddress);
}
Then implement UserElasticsearchRepositoryCustom by creating a class called UserElasticsearchRepositoryImpl, and implement your custom method inside it with an injected ElasticsearchTemplate:
public class UserElasticsearchRepositoryImpl implements UserElasticsearchRepositoryCustom {
    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @Override
    public void updateAddress(User user, String updatedAddress) {
        IndexRequest indexRequest = new IndexRequest();
        indexRequest.source("address", updatedAddress);
        UpdateQuery updateQuery = new UpdateQueryBuilder()
                .withId(String.valueOf(user.getId()))
                .withClass(User.class)
                .withIndexRequest(indexRequest)
                .build();
        elasticsearchTemplate.update(updateQuery);
    }
}
After that, just extend your UserElasticsearchRepository with UserElasticsearchRepositoryCustom, so it should look like this:
public interface UserElasticsearchRepository extends ElasticsearchRepository<User, String>, UserElasticsearchRepositoryCustom {
....
}
Finally, your service code should look like this:
public class UserServiceClass {
    @Autowired
    private UserElasticsearchRepository userElasticsearchRepository;

    public void updateAddress(int id, String updatedAddress) {
        User user = userElasticsearchRepository.findOne(id);
        if (user.getUsername().equals("system")) {
            return;
        }
        userElasticsearchRepository.updateAddress(user, updatedAddress);
    }
}
You can also move your user-finding logic into the custom repository, so that you pass only the user id and the address to the method, as sketched below. Hope this is helpful.
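For example, a sketch of that variant, with the lookup folded into the custom implementation (the signature taking the raw id is an assumption):
public interface UserElasticsearchRepositoryCustom {
    void updateAddress(int id, String updatedAddress);
}

public class UserElasticsearchRepositoryImpl implements UserElasticsearchRepositoryCustom {
    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    @Override
    public void updateAddress(int id, String updatedAddress) {
        // Build the partial update directly from the id; no pre-fetched entity needed.
        IndexRequest indexRequest = new IndexRequest();
        indexRequest.source("address", updatedAddress);
        UpdateQuery updateQuery = new UpdateQueryBuilder()
                .withId(String.valueOf(id))
                .withClass(User.class)
                .withIndexRequest(indexRequest)
                .build();
        elasticsearchTemplate.update(updateQuery);
    }
}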
You can also use ElasticsearchTemplate to get your User object instead of the repository interface; you can use NativeSearchQueryBuilder and other classes to build your query. With this you can avoid the two similar references in your class. Let me know if this solves your problem.
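A minimal sketch of that approach, fetching the document by id through the template (GetQuery comes from spring-data-elasticsearch; the surrounding names match the code above):
public void updateAddress(int id, String updatedAddress) {
    // Fetch via the template instead of the repository.
    GetQuery getQuery = new GetQuery();
    getQuery.setId(String.valueOf(id));
    User user = elasticsearchTemplate.queryForObject(getQuery, User.class);
    if (user == null || user.getUsername().equals("system")) {
        return;
    }
    // Reuse the partial-update logic from the question.
    IndexRequest indexRequest = new IndexRequest();
    indexRequest.source("address", updatedAddress);
    UpdateQuery updateQuery = new UpdateQueryBuilder()
            .withId(String.valueOf(user.getId()))
            .withClass(User.class)
            .withIndexRequest(indexRequest)
            .build();
    elasticsearchTemplate.update(updateQuery);
}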
I'm using spring-data-elasticsearch, and at the beginning everything works fine.
@Document(type = "products", indexName = "empty")
public class Product {
    ...
}

public interface ProductRepository extends ElasticsearchRepository<Product, String> {
    ...
}
In my model I can search for products.
@Autowired
private ProductRepository repository;
...
repository.findByIdentifier("xxx").getCategory();
So, my problem is: I have the same Elasticsearch type in different indices, and I want to use the same document for all queries. I can handle multiple connections via a pool, but I have no idea how to implement this.
I would like to have something like this:
ProductRepository customerRepo = ElasticsearchPool.getRepoByCustomer("abc", ProductRepository.class);
repository.findByIdentifier( "xxx" ).getCategory();
Is it possible to create a repository at runtime, with a different index?
Thanks a lot
Marcel
Yes, it's possible with Spring, but you should use ElasticsearchTemplate instead of a repository.
For example, I have two products; they are stored in different indices:
#Document(indexName = "product-a", type = "product")
public class ProductA {
#Id
private String id;
private String name;
private int value;
//Getters and setters
}
#Document(indexName = "product-b", type = "product")
public class ProductB {
#Id
private String id;
private String name;
//Getters and setters
}
Suppose they have the same type, and so the same fields; but that's not necessary. The two products can have totally different fields.
I have two repositories:
public interface ProductARepository extends ElasticsearchRepository<ProductA, String> {
}

public interface ProductBRepository extends ElasticsearchRepository<ProductB, String> {
}
These aren't necessary either; they are only for testing. The point is that ProductA is stored in the "product-a" index and ProductB is stored in the "product-b" index.
How do you query two (ten, a dozen) indices with the same type? Just build a custom repository like this:
@Repository
public class CustomProductRepositoryImpl {
    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public List<ProductA> findProductByName(String name) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchPhrasePrefixQuery("name", name);
        // You can query as many indices as you want
        IndicesQueryBuilder builder = QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");
        SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder).build();
        return elasticsearchTemplate.query(searchQuery, response -> {
            SearchHits hits = response.getHits();
            List<ProductA> result = new ArrayList<>();
            Arrays.stream(hits.getHits()).forEach(h -> {
                Map<String, Object> source = h.getSource();
                // get only the id, just for the test
                ProductA productA = new ProductA()
                        .setId(String.valueOf(source.getOrDefault("id", null)));
                result.add(productA);
            });
            return result;
        });
    }
}
You can search as many indices as you want, and you can transparently inject this behavior into ProductARepository by adding custom behavior to single repositories, as sketched below.
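A sketch of that wiring, following the custom-repository pattern shown earlier in this thread (the CustomProductRepository interface name is illustrative):
public interface CustomProductRepository {
    List<ProductA> findProductByName(String name);
}

// CustomProductRepositoryImpl above would then declare "implements CustomProductRepository",
// and the existing repository picks the method up by extending the interface:
public interface ProductARepository
        extends ElasticsearchRepository<ProductA, String>, CustomProductRepository {
}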
A second solution is to use index aliases, but then you would also have to create a custom model or a custom repository.
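For example, a sketch of creating such an alias through the template (alias and index names are assumptions):
// Point an alias at an index; repeat for each index the alias should cover,
// then query the alias name like a single index.
AliasQuery aliasQuery = new AliasQuery();
aliasQuery.setIndexName("product-a");
aliasQuery.setAliasName("products");
elasticsearchTemplate.addAlias(aliasQuery);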
We can use the withIndices method to switch the index if needed:
NativeSearchQueryBuilder nativeSearchQueryBuilder = nativeSearchQueryBuilderConfig.getNativeSearchQueryBuilder();
// Assign the index explicitly.
nativeSearchQueryBuilder.withIndices("product-a");
// Then add the query as usual.
nativeSearchQueryBuilder.withQuery(allQueries);
The @Document annotation on the entity only clarifies the mapping; to query against a specific index, we still need to use the method above:
@Document(indexName = "product-a", type = "_doc")
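Putting it together, a minimal sketch (assuming an injected ElasticsearchTemplate and the ProductA model from above):
SearchQuery searchQuery = new NativeSearchQueryBuilder()
        .withIndices("product-a") // index chosen at runtime
        .withQuery(QueryBuilders.matchAllQuery())
        .build();
List<ProductA> products = elasticsearchTemplate.queryForList(searchQuery, ProductA.class);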