Morphia not applying sparse option to my index - java

I'm trying to use Morphia to interface with MongoDB, and my Morphia entity looks like this:
@Entity(/* some params about storing the entity */)
public class Entity implements Serializable {
    // <some other fields here>
    @Indexed(options =
        @IndexOptions(unique = true, sparse = true)
    )
    private String field;
    // <some other fields here>
}
I would like this field to be unique if present, but not required (and not unique if not present; multiple entries should be able to exclude this field). My understanding of how to do this is with a unique sparse index, as I've tried to set up.
The problem I'm running into is that when I check the index configuration in Studio3T, it appears that my index is being created as unique, but the sparse property is not applied.
What am I doing wrong?
Thanks.
EDIT: Upon further research, this appears like it might be an issue with Microsoft Azure CosmosDB. When I run this code locally, it works fine, but it does not create the sparse index properly on Azure CosmosDB. Updating tags accordingly.
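A quick way to isolate whether Morphia or the backing server is dropping the option is to create the same index by hand from the mongo shell and read back what the server actually stored (the collection name below is a placeholder for whatever your entity maps to):

```javascript
// Placeholder collection/field names; adjust to your mapped entity.
// unique: reject duplicate values for the field.
// sparse: skip documents that omit the field entirely, so any number of
// documents may leave it out without violating uniqueness.
db.entities.createIndex({ field: 1 }, { unique: true, sparse: true })

// Verify the options the server actually stored:
db.entities.getIndexes()
```

If `sparse` is still missing after creating the index this way, the limitation is on the server side (CosmosDB's MongoDB API may not support every native index option) rather than in Morphia.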

Related

How to create Composite index in Hazelcast

I am trying to improve the performance of Hazelcast lookups by using a composite key. I have an entity class:
class Entity {
    private Long id;
    private String field1;
    private String field2;
    private String field3;
    // getters and setters
}
I have added a composite index comprising the above 3 fields in hazelcast-server.xml:
...
<map name="Entity">
<max-idle-seconds>2678400</max-idle-seconds>
<time-to-live-seconds>2678400</time-to-live-seconds>
<backup-count>3</backup-count>
<async-backup-count>3</async-backup-count>
<read-backup-data>true</read-backup-data>
<indexes>
<index ordered="false">field1, field2, field3</index>
</indexes>
</map>
...
Querying the Hazelcast map:
EntryObject entryObject = new PredicateBuilder().getEntryObject();
PredicateBuilder predicate = entryObject.get("field1").equal("value1")
        .and(entryObject.get("field2").equal("value2"))
        .and(entryObject.get("field3").equal("value3"));
IMap<Long, Entity> entityCache = hazelcastInstance.getMap("Entity");
List<Entity> routings = new ArrayList<>(entityCache.values(predicate));
The code is working fine with and without the index.
Questions
Is this the correct way of creating and using composite index?
Is there a way to check if the index is actually being used by the query? (I could not get any index related info on hazelcast management-center console)
I have scanned a lot of Hazelcast documentation and internet forums but could not find concrete answers. Hazelcast version: 3.12; Java version: 8.
The only way I've found is IMap.getLocalMapStats().getIndexStats() as described here: https://docs.hazelcast.org/docs/3.12.1/manual/html-single/index.html#map-index-statistics
The code is working fine with and without the index.
That's obvious, because the index isn't required.
I didn't test it, but the index should be used in this case, with all three columns. There's no public API to check whether an index is actually used. What you can do is put a large number of entries into the map; the query should then be much faster with the index than without it. The other way is to debug the query execution, e.g. put a breakpoint in Indexes.matchIndex(), but I'm not sure this class is the same in 3.12.
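The statistics approach from the linked documentation can be sketched as follows (Hazelcast 3.12 API; this assumes the composite index from hazelcast-server.xml is registered, and the exact key under which it appears in the stats map may differ):

```java
// Sketch only: assumes a running HazelcastInstance with the composite index
// configured as above. getIndexStats() is keyed by index name, which for an
// unnamed composite index may not match the literal attribute list.
IMap<Long, Entity> entityCache = hazelcastInstance.getMap("Entity");
entityCache.values(predicate); // run the query first so statistics accumulate

entityCache.getLocalMapStats().getIndexStats().forEach((name, stats) ->
        System.out.println(name
                + ": queries=" + stats.getQueryCount()
                + ", hits=" + stats.getHitCount()));
```

A non-zero hit count after running the query is a strong sign the index was consulted.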

How can I overwrite a list field of strings in Spring ElasticSearch?

I am working with the Spring data library for ElasticSearch. I have an ES document looking like the following:
@Getter
@Setter
@Document(indexName = "my_ideas")
public class MyIdea {
    @Id
    @Field(type = FieldType.Long)
    private Long id;
    @Field(type = FieldType.Text)
    private List<String> countries;
    // ... more fields
}
I have an API endpoint which accepts a new list of countries. My goal is to simply overwrite the list of countries in the above ES document. To do this, I use:
@Autowired
ElasticsearchOperations esRestTemplate;

SearchHits<MyIdea> hits = esRestTemplate.search(searchQuery, MyIdea.class);
MyIdea ideaDoc = hits.getSearchHits().stream().findFirst().orElse(null).getContent();
// call method to update the countries
esRestTemplate.save(ideaDoc);
That is, I simply overwrite the list of countries on the document with something else. I assumed that the save() method would behave similarly to the JPA one: the previous country list would be deleted and replaced by the one I pass in. To my surprise, the new countries were appended to the list while the existing values were retained. So, assuming the document started off with "America" and "Canada" in the list, and I passed in "Singapore", the list would in fact contain all three countries after the call to save().
Can someone with Spring ElasticSearch expertise point out where I am going wrong here?
After some careful investigation, it turned out that the source object being used to populate the ElasticSearch document contained duplicate data, or at least duplicate data was being copied over to the ES document. Once I resolved this, the problem went away. So Spring ElasticSearch's save() works as expected: it knows how to either insert or update an ES document.

How to check properties with Hibernate efficiently?

We are using Hibernate for Object/Relational Mapping. This works fine when loading entire entities. However, often I face the problem that I simply want to check a single attribute or COUNT() table entries based on a certain criteria. For sake of performance, I use Native SQL in those cases instead of loading several objects from database and checking their properties in Java. But having plain SQL queries is error-prone and I feel like it violates the idea of ORM.
So I wonder, is there any ORM approach to check single attributes with Hibernate efficiently?
Example: Let's assume we have two entity beans, Order and OrderPosition. We want to check if an order is partly delivered (i.e. COUNT(OrderPositions WHERE isDelivered = true) > 0).
@Entity
public class Order {
    private long id;
    private List<OrderPosition> orderPositions;
    // ...
}

@Entity
public class OrderPosition {
    private boolean isDelivered = false;
    // ...
}
(Code is simplified for readability.)
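A hedged sketch of an ORM-level alternative to the native SQL: a JPQL projection query that counts matching rows on the database side without hydrating any entities. This assumes OrderPosition has a mapped back-reference order to its Order, which the simplified code above does not show:

```java
// Sketch: count delivered positions without loading entity objects.
// Assumes a back-reference OrderPosition.order (not shown above).
Long delivered = em.createQuery(
        "select count(op) from OrderPosition op"
      + " where op.order.id = :orderId and op.isDelivered = true", Long.class)
        .setParameter("orderId", orderId)
        .getSingleResult();
boolean partlyDelivered = delivered > 0;
```

Since only a single Long crosses the wire, this keeps the performance of the native query while staying inside the ORM.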

MongoDB Morphia - Unique

I am trying to have a consistent db where the username and email are unique.
http://www.mongodb.org/display/DOCS/Indexes#Indexes-unique%3Atrue
http://code.google.com/p/morphia/wiki/EntityAnnotation
My user class looks like this:
public class User {
    @Indexed(unique = true)
    @Required
    @MinLength(4)
    public String username;

    @Indexed(unique = true)
    @Required
    @Email
    public String email;

    @Required
    @MinLength(6)
    public String password;

    @Valid
    public Profile profile;

    public User() {
    ...
I used the @Indexed(unique = true) annotation, but it does not work: there are still duplicates in my db.
Any ideas how I can fix this?
Edit:
I read about ensureIndexes, but this seems like the wrong approach; I don't want to upload duplicate data just to find out afterwards that it's really a duplicate.
I want to block it right away, something like:
try {
    ds.save(user);
} catch (UniqueException e) {
    ...
}
A unique index cannot be created if there are already duplicates in the column you are trying to index.
I would try running your ensureIndex commands from the mongo shell:
db.user.ensureIndex({'username':1},{unique:true})
db.user.ensureIndex({'email':1},{unique:true})
.. and also check that the indexes are set:
db.user.getIndexes()
Morphia should have WriteConcern.SAFE set by default, which will throw an exception when trying to insert documents that would violate a unique index.
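A minimal sketch of what that looks like at save time (Morphia 1.x-era API; the exception type varies across Mongo Java driver versions, so verify it against yours):

```java
// Sketch: make writes acknowledged so unique-index violations surface,
// then catch the duplicate-key error. The exception class differs by
// driver version: MongoException.DuplicateKey in older drivers,
// DuplicateKeyException in newer ones.
ds.setDefaultWriteConcern(WriteConcern.SAFE);
try {
    ds.save(user);
} catch (MongoException.DuplicateKey e) {
    // reject the duplicate username/email right away
}
```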
There is a good explanation of unique constraints here: Unique constraint with JPA and Bean Validation. Does that help you at all? What I would do is validate your data at the controller level (or with bean validate()) while checking other errors as well. That will do the job, but it's not as elegant as it would be with an annotation.
Edit
Alternatively, see the post Inserting data to MongoDB - no error, no insert, which describes how MongoDB does not raise errors on unique-index violations by default unless you tell it to. Try configuring MongoDB to throw those errors too and see if that gets you to a solution.
Edit 2
It also crossed my mind that Play 2 has a startup Global class where you could access your database and run your index commands, e.g. db.things.ensureIndex({email:1},{unique:true}); see more at http://www.playframework.org/documentation/2.0/JavaGlobal
I had the same issue with Play framework 1.2.6 and Morphia 1.2.12.
The solution for the @Indexed(unique = true) annotation is to let Morphia re-create the collection.
If I already had the "Account" collection in Mongo, annotated the email column, and restarted the Play app, nothing changed in the Account indexes.
But if I dropped the Account collection, Morphia re-created it, and now the email column is unique:
> db.Account.drop()
true
After play restart: (I have a job to create initial accounts...)
> db.Account.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "something.Account",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "email" : 1
        },
        "unique" : true,
        "ns" : "something.Account",
        "name" : "email_1"
    }
]
Now, after an insert with an already existing email, I get a MongoException.DuplicateKey exception.
To create indexes, the Datastore.ensureIndexes() method needs to be called to apply the indexes to MongoDB. The method should be called after you have registered your entities with Morphia. It will then synchronously create your indexes. This should probably be done each time you start your application.
Morphia m = ...
Datastore ds = ...

m.map(Product.class);
ds.ensureIndexes(); // creates all indexes defined with @Indexed
Morphia will create indexes for the collection with either the class name or with the #Entity annotation value.
For example, if your class name is Author, make sure you have the @Indexed annotation in your Entity class and have done these two steps:
m.map(Author.class);
ds.ensureIndexes();
Check indexes on mongo db
db.Author.getIndexes()
I am adding this answer to emphasize that you cannot create indexes with a custom collection name (Entity class is Author, but your collection name is different).
This scenario is common when you want to reuse the Entity class because the schema is the same.

JPA - Setting entity class property from calculated column?

I'm just getting to grips with JPA in a simple Java web app running on Glassfish 3 (Persistence provider is EclipseLink). So far, I'm really liking it (bugs in netbeans/glassfish interaction aside) but there's a thing that I want to be able to do that I'm not sure how to do.
I've got an entity class (Article) that's mapped to a database table (article). I'm trying to do a query on the database that returns a calculated column, but I can't figure out how to set up a property of the Article class so that the property gets filled by the column value when I call the query.
If I do a regular "select id,title,body from article" query, I get a list of Article objects fine, with the id, title and body properties filled. This works fine.
However, if I do the below:
Query q = em.createNativeQuery("select id,title,shorttitle,datestamp,body,true as published, ts_headline(body,q,'ShortWord=0') as headline, type from articles,to_tsquery('english',?) as q where idxfti @@ q order by ts_rank(idxfti,q) desc", Article.class);
(this is a fulltext search using tsearch2 on Postgres - it's a db-specific function, so I'm using a NativeQuery)
You can see I'm fetching a calculated column, called headline. How do I add a headline property to my Article class so that it gets populated by this query?
So far, I've tried marking it @Transient, but that just ends up with it being null all the time.
There is probably no good way to do it automatically; only manually:
Object[] r = (Object[]) em.createNativeQuery(
        "select id,title,shorttitle,datestamp,body,true as published, "
      + "ts_headline(body,q,'ShortWord=0') as headline, type "
      + "from articles,to_tsquery('english',?) as q "
      + "where idxfti @@ q order by ts_rank(idxfti,q) desc",
        "ArticleWithHeadline")
        .setParameter(...).getSingleResult();
Article a = (Article) r[0];
a.setHeadline((String) r[1]);
-
@Entity
@SqlResultSetMapping(
    name = "ArticleWithHeadline",
    entities = @EntityResult(entityClass = Article.class),
    columns = @ColumnResult(name = "HEADLINE"))
public class Article {
    @Transient
    private String headline;
    ...
}
AFAIK, JPA doesn't offer standardized support for calculated attributes. With Hibernate, one would use a @Formula, but EclipseLink doesn't have a direct equivalent. James Sutherland made some suggestions in Re: Virtual columns (@Formula of Hibernate), though:
There is no direct equivalent (please log an enhancement), but depending on what you want to do, there are ways to accomplish the same thing.
EclipseLink defines a TransformationMapping which can map a computed value from multiple field values, or access the database.
You can override the SQL for any CRUD operation for a class using its descriptor's DescriptorQueryManager.
You could define a VIEW on your database that performs the function and map your Entity to the view instead of the table.
You can also perform minor translations using Converters or property get/set methods.
Also have a look at the enhancement request that has a solution using a DescriptorEventListener in the comments.
All this is non-standard JPA, of course.
