I am using morphia 0.109 and have defined a base class as follows:
@Entity
public abstract class MorphiaData {
    @Id protected ObjectId objectId;
    @Version private Long mongodocversion;
}
And the intended Morphia entity:
public class ItemTest extends MorphiaData {
    public Long testValue;
}
When I save an instance of ItemTest to MongoDB, the document looks as follows:
{
    "_id" : ObjectId("54d26ed66aca89c0717e8936"),
    "className" : "test.ItemTest",
    "testValue" : NumberLong(1423077078)
}
I am expecting to see a value of mongodocversion in the document.
The morphia documentation provides the following information regarding the version annotation:
This field will be automatically managed for you -- there is no need
to set a value and you should not do so anyway.
@Entity
class MyClass {
    ...
    @Version Long v;
}
which I believe I am adhering to. I have attempted the following fixes, without success:
Moving the version annotation into the child class.
Removing the 'private' modifier from the version field.
Any advice would be greatly appreciated.
Edit to add: The save process I am using:
DBObject document = MongoDbFactory.getMorphia().toDBObject(this);
DB db = MongoDbFactory.getClient();
DBCollection coll = db.getCollection(noSqlCollection.toString());
if (this.objectId != null) {
    // This is an update
    BasicDBObject searchQuery = new BasicDBObject().append("_id", this.objectId);
    coll.update(searchQuery, document);
} else {
    // This is just an add
    coll.insert(document);
    this.objectId = (ObjectId) document.get("_id");
}
This test is currently passing on Jenkins: https://github.com/mongodb/morphia/blob/master/morphia/src/test/java/org/mongodb/morphia/optimisticlocks/VersionTest.java#L20-20
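For comparison, a minimal sketch of the same save routed through Morphia's Datastore; the @Version field is managed by Morphia's save lifecycle, so converting with toDBObject() and writing through the raw driver bypasses it (mongoClient below is a stand-in for however the application obtains its MongoClient):

// Sketch: persist through a Datastore so the @Version lifecycle runs.
Datastore ds = MongoDbFactory.getMorphia().createDatastore(mongoClient, db.getName());
ds.save(this); // sets objectId and increments mongodocversion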
I have the following MongoDB document:
@Data
@EqualsAndHashCode(callSuper = true)
@NoArgsConstructor
@AllArgsConstructor
@Accessors(chain = true)
@SuperBuilder
@Document(collection = ReasonDocument.COLLECTION)
public class ReasonDocument extends BaseDocument<ObjectId> {
    public static final String COLLECTION = "reasons";

    @Id
    private ObjectId id;
    @Indexed
    private ObjectId ownerId;
    @Indexed
    private LocalDate date;
    private Type type;
    private String reason;
}
I would like to get, for each ownerId, the row with the latest date, and additionally filter some of them out. I wrote a custom repository for that, using an aggregation with a group stage:
public class ReasonsRepositoryImpl implements ReasonsRepository {

    private final MongoTemplate mongoTemplate;

    @Autowired
    public ReasonsRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    public List<ReasonDocument> findReasons(LocalDate date) {
        final Aggregation aggregation = Aggregation.newAggregation(
                sort(Direction.DESC, "date"),
                group("ownerId")
                        .first("id").as("id")
                        .first("reason").as("reason")
                        .first("type").as("type")
                        .first("date").as("date")
                        .first("ownerId").as("ownerId"),
                match(Criteria.where("date").lte(date).and("type").is(Type.TYPE_A))
        );
        return mongoTemplate.aggregate(aggregation, "reasons", ReasonDocument.class).getMappedResults();
    }
}
It is a neat query, but unfortunately it returns corrupted rows in tests:
java.lang.AssertionError:
Expecting:
<[ReasonDocument(id=5dd5500960483c1b2d974eed, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14),
ReasonDocument(id=5dd5500960483c1b2d974ee8, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1)]>
to contain exactly in any order:
<[ReasonDocument(id=5dd5500960483c1b2d974eef, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1),
ReasonDocument(id=5dd5500960483c1b2d974efc, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14)]>
elements not found:
<[ReasonDocument(id=5dd5500960483c1b2d974eef, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1),
ReasonDocument(id=5dd5500960483c1b2d974efc, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14)]>
and elements not expected:
<[ReasonDocument(id=5dd5500960483c1b2d974eed, ownerId=5dd5500960483c1b2d974eed, date=2019-05-14, type=TYPA_A, reason=14),
ReasonDocument(id=5dd5500960483c1b2d974ee8, ownerId=5dd5500960483c1b2d974ee8, date=2019-05-15, type=TYPA_A, reason=1)]>
The id returned is the same as ownerId.
Could anyone say what is wrong with the query?
I'm not entirely sure whether this is the problem, but did you check how Mongo has saved the ID? Even if you're grouping by ownerId, if Mongo has saved the item under the _id key in your JSON, then you need to refer to it as _id.
For example, if it looks like this:
{
    "_id" : "2893u4jrnjnwfwpfn",
    "name" : "Jenkins"
}
then your groupBy should be on "_id" and not what you've written.
This happens to be a limitation of MongoDB and the ORM, unless I'm missing something.
According to the documentation (https://docs.mongodb.com/manual/reference/operator/aggregation/group/), the native Mongo group stage looks like this:
{
    $group:
        {
            _id: <expression>, // Group By Expression
            <field1>: { <accumulator1> : <expression1> },
            ...
        }
}
So grouping itself creates a new _id: if I group by ownerId, that value ends up in the _id field.
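Illustratively, each document leaving that group stage has the shape below (values taken from the test output above); when Spring Data maps it back to ReasonDocument, the @Id property binds to _id, which now carries the owner's id:

{
    "_id" : ObjectId("5dd5500960483c1b2d974ee8"),  // the group key, i.e. ownerId
    "id" : ObjectId("5dd5500960483c1b2d974eef"),
    "reason" : "1",
    "type" : "TYPE_A",
    "date" : ISODate("2019-05-15T00:00:00Z"),
    "ownerId" : ObjectId("5dd5500960483c1b2d974ee8")
}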
One way of solving this is by using:
.first("_id").as("oldId")
and creating a new type with oldId as a field that can later be used to map back to the original document.
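For illustration, a minimal sketch of that workaround, assuming a hypothetical ReasonGroupResult wrapper type that mirrors ReasonDocument but carries the original id in an oldId field:

// Sketch: preserve the original document id under a new name so it
// survives the group stage (_id is taken by the group key).
final Aggregation aggregation = Aggregation.newAggregation(
        sort(Direction.DESC, "date"),
        group("ownerId")
                .first("_id").as("oldId")
                .first("reason").as("reason")
                .first("type").as("type")
                .first("date").as("date")
                .first("ownerId").as("ownerId"),
        match(Criteria.where("date").lte(date).and("type").is(Type.TYPE_A))
);
List<ReasonGroupResult> results = mongoTemplate
        .aggregate(aggregation, "reasons", ReasonGroupResult.class)
        .getMappedResults();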
I am creating a new endpoint in Spring Boot that will return simple stats on users, generated from an aggregate query in a Mongo database. However, I get a PropertyReferenceException. I have read multiple Stack Overflow questions about it but didn't find one that solved this problem.
We have a Mongo data schema like this:
{
    "_id" : ObjectId("5d795993288c3831c8dffe60"),
    "user" : "000001",
    "name" : "test",
    "attributes" : {
        "brand" : "Chrome",
        "language" : "English"
    }
}
The database is filled with multiple users, and we want to aggregate user stats per brand using Spring Boot. There can be any number of attributes in the attributes object.
Here is the aggregation we are doing:
Aggregation agg = newAggregation(
        group("attributes.brand").count().as("number"),
        project("number").and("type").previousOperation()
);
AggregationResults<Stats> groupResults
        = mongoTemplate.aggregate(agg, Profile.class, Stats.class);
return groupResults.getMappedResults();
This produces the following Mongo query, which works:
> db.collection.aggregate([
    { "$group" : { "_id" : "$attributes.brand", "number" : { "$sum" : 1 } } },
    { "$project" : { "number" : 1, "_id" : 0, "type" : "$_id" } }
])
{ "number" : 4, "type" : "Chrome" }
{ "number" : 2, "type" : "Firefox" }
However, when running a simple integration test, we get this error:
org.springframework.data.mapping.PropertyReferenceException: No property brand found for type String! Traversed path: Profile.attributes.
From what I understand, since attributes is a Map<String, String>, there might be a schema problem. In the meantime, I can't modify the Profile object.
Is there something I am missing in the aggregation, or anything I could change in my Stats object?
For reference, here are the data models we're using to work with JSON and Jackson.
The Stats data model:
@Document
public class Stats {
    @JsonProperty
    private String type;
    @JsonProperty
    private int number;

    public Stats() {}
    /* ... */
}
The Profile data model:
@Document
public class Profile {
    @NotNull
    @JsonProperty
    private String user;
    @NotNull
    @JsonProperty
    private String name;
    @JsonProperty
    private Map<String, String> attributes = new HashMap<>();

    public Profile() {}
    /* ... */
}
I found a solution; the issue was a combination of two problems:
The PropertyReferenceException was indeed caused by attributes being a Map<String, String>, which means Mongo has no schema for it.
The error message "No property brand found for type String! Traversed path: Profile.attributes." means that the Map object doesn't have a brand property in it.
In order to fix that without touching my original Profile class, I had to create a new custom class which maps the attributes to an object having the properties I want to aggregate on, like:
public class StatsAttributes {
    @JsonProperty
    private String brand;
    @JsonProperty
    private String language;

    public StatsAttributes() {}
    /* ... */
}
Then I created a custom StatsProfile which leverages my StatsAttributes and is similar to the original Profile object, without modifying it.
@Document
public class StatsProfile {
    @JsonProperty
    private String user;
    @JsonProperty
    private StatsAttributes attributes;

    public StatsProfile() {}
    /* ... */
}
With that, the PropertyReferenceException disappeared once I used the new StatsProfile class in the aggregation:
AggregationResults<Stats> groupResults
        = mongoTemplate.aggregate(agg, StatsProfile.class, Stats.class);
However, I would not get any results. It seems the query would not find any documents in the database. That's when I realized that the production Mongo objects had the field "_class: com.company.dao.model.Profile", which is tied to the Profile object.
After some research, for the new StatsProfile to work it needs a @TypeAlias("Profile") annotation. Looking around some more, I found that I also needed to specify the collection name, which leads to:
@Document(collection = "profile")
@TypeAlias("Profile")
public class StatsProfile {
    /* ... */
}
And with all that, finally it worked!
I suppose that's not the prettiest solution. I wish I didn't need to create a new Profile object and could somehow treat the attributes as StatsAttributes directly in the mongoTemplate query. If anyone knows how, please share 🙏
I'm using spring-data-elasticsearch, and so far everything works fine.
@Document( type = "products", indexName = "empty" )
public class Product
{
    ...
}

public interface ProductRepository extends ElasticsearchRepository<Product, String>
{
    ...
}
In my model I can search for products.
@Autowired
private ProductRepository repository;
...
repository.findByIdentifier( "xxx" ).getCategory();
So, my problem is: I have the same Elasticsearch type in different indices, and I want to use the same document for all queries. I could handle multiple connections via a pool, but I have no idea how to implement this.
I would like to have something like this:
ProductRepository customerRepo = ElasticsearchPool.getRepoByCustomer("abc", ProductRepository.class);
customerRepo.findByIdentifier( "xxx" ).getCategory();
Is it possible to create a repository at runtime, with a different index?
Thanks a lot
Marcel
Yes, it's possible with Spring, but you should use ElasticsearchTemplate instead of a repository.
For example, I have two products stored in different indices:
@Document(indexName = "product-a", type = "product")
public class ProductA {
    @Id
    private String id;
    private String name;
    private int value;
    // Getters and setters
}

@Document(indexName = "product-b", type = "product")
public class ProductB {
    @Id
    private String id;
    private String name;
    // Getters and setters
}
Suppose they have the same type and therefore the same fields, though that's not required; the two products could have totally different fields.
I have two repositories:
public interface ProductARepository extends ElasticsearchRepository<ProductA, String> {
}

public interface ProductBRepository extends ElasticsearchRepository<ProductB, String> {
}
These aren't strictly necessary either; they're only for testing. The point is that ProductA is stored in the "product-a" index and ProductB in the "product-b" index.
How do you query two (or ten, or a dozen) indices with the same type? Just build a custom repository like this:
@Repository
public class CustomProductRepositoryImpl {

    @Autowired
    private ElasticsearchTemplate elasticsearchTemplate;

    public List<ProductA> findProductByName(String name) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchPhrasePrefixQuery("name", name);
        // You can query as many indices as you want
        IndicesQueryBuilder builder = QueryBuilders.indicesQuery(queryBuilder, "product-a", "product-b");
        SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(builder).build();
        return elasticsearchTemplate.query(searchQuery, response -> {
            SearchHits hits = response.getHits();
            List<ProductA> result = new ArrayList<>();
            Arrays.stream(hits.getHits()).forEach(h -> {
                Map<String, Object> source = h.getSource();
                // Get only the id, just for this test
                ProductA productA = new ProductA()
                        .setId(String.valueOf(source.getOrDefault("id", null)));
                result.add(productA);
            });
            return result;
        });
    }
}
You can search as many indices as you want, and you can transparently inject this behavior into ProductARepository by adding custom behavior to single repositories.
A second solution is to use index aliases, but you would have to create a custom model or custom repository for that as well (see the sketch below).
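For the alias route, a rough sketch, assuming the pre-7.x transport client is available (the admin API differs across client versions, and "products-alias" is an illustrative name):

// Sketch: point one alias at several indices so a single @Document
// mapping can query them all through the alias name.
client.admin().indices().prepareAliases()
        .addAlias("product-a", "products-alias")
        .addAlias("product-b", "products-alias")
        .get();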
We can use the withIndices method to switch the index if needed:
NativeSearchQueryBuilder nativeSearchQueryBuilder = nativeSearchQueryBuilderConfig.getNativeSearchQueryBuilder();
// Assign the index explicitly.
nativeSearchQueryBuilder.withIndices("product-a");
// Then add the query as usual.
nativeSearchQueryBuilder.withQuery(allQueries);
The @Document annotation on the entity only defines the mapping; to query against a specific index, we still need the method above.
@Document(indexName = "product-a", type = "_doc")
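Putting it together, a sketch of building and executing the query with the template (queryForList is from the pre-4.x ElasticsearchTemplate API):

SearchQuery searchQuery = nativeSearchQueryBuilder.build();
List<Product> products = elasticsearchTemplate.queryForList(searchQuery, Product.class);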
I have a document stored in MongoDB that looks something like this:
{
    '_id' : 'XXX',
    'myProps' : [
        { '_id' : 'YYY', 'propA' : 'ValueA' },
        { '_id' : 'ZZZ', 'propA' : 'ValueB' }
    ]
}
I'm using Morphia to model this into Java objects. What I would like to do is query for elements within myProps that have a propA value of 'ValueA'. Is this possible? Is it possible to query for specific values within a subdocument? I've tried using queries like:
myProps.propA == 'ValueA'
...but I still see all values of myProps being returned. Is there something I'm missing in my query? Or is it not possible to make such a query using Morphia/MongoDB?
UPDATE: My code thus far...
My entity and embedded classes:
@Entity
public class MyTestClass implements Serializable {
    @Id
    private ObjectId id;
    @Embedded
    private List<MyProps> myProps;
    ...
}

@Embedded
public class MyProps {
    private String propA;
    ...
}
I have created the appropriate DAO class for it by extending BasicDAO. Here is my query:
Query<MyTestClass> q = this.myTestClassDAO.createQuery();
q.field("myProps.propA").equal("ValueA");
MyTestClass result = q.get();
The code executes correctly, but when I look at result.getMyProps() I see a list containing ALL of the myProps values, not just the ones with propA == 'ValueA'.
Using the fluent interface, it should be something like field("myProps.propA").equal("ValueA").field("myProps.propA").notEqual("ValueB").
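Spelled out against the DAO from the question (a sketch; note that the chained conditions AND together, and that a Morphia query matches and returns whole documents rather than filtering the embedded myProps list down to the matching elements):

Query<MyTestClass> q = this.myTestClassDAO.createQuery();
q.field("myProps.propA").equal("ValueA")
 .field("myProps.propA").notEqual("ValueB");
MyTestClass result = q.get(); // full document, including all myProps entries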
I have a document that can have dynamic key names:
{
    "_id" : ObjectId("51a29f6413dc992c24e0283e"),
    "envinfo" : {
        "appName" : "MyJavaApp",
        "environment" : {
            "cpuCount" : 12,
            "heapMaxBytes" : 5724766208,
            "osVersion" : "6.2",
            "arch" : "amd64",
            "javaVendor" : "Sun Microsystems Inc.",
            "pid" : 44996,
            "javaVersion" : "1.6.0_38",
            "heapInitialBytes" : 402507520
        }
    }
}
Here envinfo's keys are not known in advance.
What is the best way to create an entity class in Spring Data MongoDB which will map this document?
This is one way of doing it; there may be better ways. Create a map of attributes and store the map in Mongo:
public class Env {
    @Id
    private ObjectId id; // ObjectId is generated automatically by Mongo

    @Field
    private Envinfo envinfo;

    public static class Envinfo {
        @Field
        private String appName;
        @Field
        private Map<String, String> attributes;
    }
}
If you know some keys in advance, you may add them as fields in Envinfo and keep them out of the attributes map, as sketched below.
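A usage sketch under that model (assuming conventional setters, which are omitted above, and an autowired MongoTemplate):

Env env = new Env();
Env.Envinfo info = new Env.Envinfo();
info.setAppName("MyJavaApp");

Map<String, String> attributes = new HashMap<>();
attributes.put("osVersion", "6.2");         // values are forced to strings
attributes.put("javaVersion", "1.6.0_38");  // by the Map<String, String> type
info.setAttributes(attributes);

env.setEnvinfo(info);
mongoTemplate.save(env); // unknown keys travel inside the map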
Here is what I'd do:
class EnvDocument {
    @Id
    private String id; // getter and setter omitted

    @Field(value = "envinfo")
    private BasicDBObject infos;

    public Map getInfos() {
        // Some documents don't have any infos; in that case return null.
        if (infos != null)
            return infos.toMap();
        return null;
    }

    public void setInfos(Map infos) {
        this.infos = new BasicDBObject(infos);
    }
}
This way, getInfos() returns a Map<String, Object> that you can explore by String keys when needed, and that can contain nested Maps.
Dependency-wise, it is better not to expose the BasicDBObject field directly, so this class can be used through an interface in code that does not include any MongoDB library.
Note that if some fields in envinfo are accessed frequently, it would be better to declare them as fields in your class, so you have a direct accessor and don't spend too much time browsing the map again and again.
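For example, reading a nested value back out (a usage sketch; nested documents come back as Maps, since BasicDBObject implements Map):

EnvDocument doc = mongoTemplate.findById(id, EnvDocument.class);
Map infos = doc.getInfos();
if (infos != null) {
    Map environment = (Map) infos.get("environment"); // nested document
    Object cpuCount = environment.get("cpuCount");    // 12
}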