MongoDB Morphia - Unique - java

I am trying to have a consistent db where the username and email are unique.
http://www.mongodb.org/display/DOCS/Indexes#Indexes-unique%3Atrue
http://code.google.com/p/morphia/wiki/EntityAnnotation
My user class looks like this:
public class User {
    @Indexed(unique = true)
    @Required
    @MinLength(4)
    public String username;

    @Indexed(unique = true)
    @Required
    @Email
    public String email;

    @Required
    @MinLength(6)
    public String password;

    @Valid
    public Profile profile;

    public User() {
    ...
I used the @Indexed(unique=true) annotation but it does not work. There are still duplicates in my db.
Any ideas how I can fix this?
Edit:
I read about ensureIndexes, but this seems like the wrong approach: I don't want to upload duplicate data just to find out afterwards that it really is a duplicate.
I want to block it right away.
Something like:
try {
    ds.save(user);
}
catch (UniqueException e) {
    ...
}

A unique index cannot be created if there are already duplicates in the column you are trying to index.
I would try running your ensureIndex commands from the mongo shell:
db.user.ensureIndex({'username':1},{unique:true})
db.user.ensureIndex({'email':1},{unique:true})
.. and also check that the indexes are set:
db.user.getIndexes()
Morphia should have WriteConcern.SAFE set by default, which will throw an exception when trying to insert documents that would violate a unique index.
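The try/catch shape the asker wants then looks roughly like the sketch below. Since a runnable example needs a live MongoDB, this uses a tiny in-memory stand-in for the datastore; `UniqueSaveSketch`, `UniqueException`, and the `save` signature are hypothetical, not Morphia's real API. With Morphia, a unique index, and an error-raising WriteConcern in place, the real `ds.save(user)` would throw the driver's duplicate-key exception at the same point.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in (NOT Morphia's API) illustrating the catch-on-duplicate pattern.
public class UniqueSaveSketch {
    static class UniqueException extends RuntimeException {
        UniqueException(String msg) { super(msg); }
    }

    // Pretend datastore with a "unique index" on username.
    private final Map<String, String> emailByUsername = new HashMap<>();

    void save(String username, String email) {
        if (emailByUsername.containsKey(username)) {
            // A real driver would raise something like E11000 duplicate key error here.
            throw new UniqueException("duplicate key: username=" + username);
        }
        emailByUsername.put(username, email);
    }

    public static void main(String[] args) {
        UniqueSaveSketch ds = new UniqueSaveSketch();
        ds.save("alice", "alice@example.com");
        try {
            ds.save("alice", "other@example.com"); // second save with the same username
        } catch (UniqueException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```
The point is that the uniqueness check belongs in the database (the index), and the application merely reacts to the rejection, rather than querying for duplicates up front.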

There is a good explanation of unique constraints here: Unique constraint with JPA and Bean Validation. Does that help you at all? What I would do is validate your data at the controller level (or in a bean validate()) while checking for other errors as well. That will do the job, but it's not as clean as it would be with an annotation.
Edit
Alternatively, see this post: Inserting data to MongoDB - no error, no insert. It describes how MongoDB does not raise errors on unique-index violations by default unless you tell it to, so try configuring your MongoDB connection to throw those errors and see if you can build a solution on that.
Edit 2
It also crossed my mind that Play 2 has a startup Global class where you could access your database and run your index commands, e.g. db.things.ensureIndex({email:1},{unique:true}); see more at http://www.playframework.org/documentation/2.0/JavaGlobal

I had the same issue with Play framework 1.2.6 and Morphia 1.2.12.
The solution for the @Indexed(unique = true) annotation is to let Morphia re-create the collection.
So if I already had the "Account" collection in Mongo, annotated the email column, and restarted the Play app, nothing changed in the Account indexes.
If I dropped the Account collection, Morphia re-created it, and now the email column is unique:
> db.Account.drop()
true
After play restart: (I have a job to create initial accounts...)
> db.Account.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "something.Account",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"email" : 1
},
"unique" : true,
"ns" : "something.Account",
"name" : "email_1"
}
]
Now, after an insert with an already existing email, I get a MongoException.DuplicateKey exception.

To create indexes, the Datastore.ensureIndexes() method needs to be called to apply the indexes to MongoDB. The method should be called after you have registered your entities with Morphia. It will then synchronously create your indexes. This should probably be done each time you start your application.
Morphia m = ...
Datastore ds = ...
m.map(Product.class);
ds.ensureIndexes(); // creates all indexes defined with @Indexed

Morphia will create indexes for the collection with either the class name or with the #Entity annotation value.
For example if your class name is Author:
Please make sure you have the @Indexed annotation in your entity class and that you have done these two steps:
m.map(Author.class);
ds.ensureIndexes();
Check indexes on mongo db
db.Author.getIndexes()
I am adding this answer to emphasize that you cannot create indexes with a custom collection name (i.e. the entity class is Author, but your collection name is different).
This scenario is common when you want to reuse the entity class because the schema is the same.

Related

Spring data Elasticsearch find by field and latest date query

I am using Spring Boot 2.1.6 and Elasticsearch 6.2.2
EDIT to better clarify my question:
When I let Spring generate a query for me by using the following method in my repository:
Account findTop1ByAccountIdOrderByCreatedDesc(final String accountId);
I imagine it means it will select from the index filtering by accountId, then ordering the results by created descending, and finally it will return only the first (latest) result.
But I only have two entries in the index, identical ones minus the created date, and that query returns both results. I think it means it does not translate to what I have in mind but rather it will pick all accounts with that ID (since that is a key, all are "top"), ordered descending.
This would more closely match my query, but it is not legal naming:
Account findTop1OrderByCreatedDescByAccountId(final String accountId);
org.springframework.data.mapping.PropertyReferenceException: No property descByAccountId found for type LocalDateTime! Traversed path: Account.created.
And this one as well:
Account findTop1OrderByCreatedDescAndAccountIdEquals(final String accountId);
org.springframework.data.mapping.PropertyReferenceException: No property desc found for type LocalDateTime! Traversed path: Account.created.
So how do I translate, if possible at all, my query to Spring repository magic?
/EDIT
Original question:
I have a POJO declared as such (trimmed version):
@Document(indexName = "config.account", type = "account")
public class Account {
    @Id
    private String id;

    @Field(type = FieldType.Text)
    @JsonProperty("ilmAccountIdentifier")
    private String accountId;

    @Field(type = FieldType.Text)
    private String endOfBusinessDay;

    @Field(type = FieldType.Date)
    private LocalDateTime created;
}
And I would like to query the index to retrieve the latest entry (created = max) for a given accountId.
I know I can use the query builder, but I was wondering if there was some magic that does that for me by using the spring named queries, currently I was trying with (couldn't find other wording combination that are valid):
Account findTop1ByAccountIdOrderByCreatedDesc(final String accountId);
But it returns null. I see the data is in the index (trimmed version):
Account(id=W83u0GsBEjwDhWt1-Whn,
accountId=testfindByAccountIdReturnsLatestVersion,
endOfBusinessDay=17:00:00,
created=2019-07-08T09:34)
But the query Spring generated is quite strange and not what I would expect:
SearchRequest{
searchType=QUERY_THEN_FETCH,
indices=[config.account],
indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false],
types=[account],
routing='null',
preference='null',
requestCache=null,
scroll=null,
maxConcurrentShardRequests=0,
batchedReduceSize=512,
preFilterShardSize=128,
allowPartialSearchResults=null,
source={
"from":0,
"query":{
"bool":{
"must":[{
"query_string":
{"query":
"testfindByAccountIdReturnsLatestVersion",
"fields": ["accountId^1.0"],
"type":"best_fields",
"default_operator":"and",
"max_determinized_states":10000,
"enable_position_increments":true,
"fuzziness":"AUTO",
"fuzzy_prefix_length":0,
"fuzzy_max_expansions":50,
"phrase_slop":0,
"escape":false,
"auto_generate_synonyms_phrase_query":true,
"fuzzy_transpositions":true,
"boost":1.0}
}],
"adjust_pure_negative":true,"boost":1.0}
}
,"version":true}}
What I would like instead is the equivalent of this SQL query:
select *
from(
select *
from account
where accountId = ?
order by created desc
)
where rownum = 1
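In plain Java terms, that SQL is just "filter by accountId, then take the max by created". A minimal sketch (the `Account` record and its fields here are assumed stand-ins for the document class in the question, not Spring Data code):

```java
import java.time.LocalDateTime;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class LatestAccountSketch {
    // Minimal stand-in for the Account document (field names assumed from the question).
    record Account(String accountId, LocalDateTime created) {}

    // Equivalent of: select the row with the given accountId and the greatest created.
    static Optional<Account> findLatest(List<Account> accounts, String accountId) {
        return accounts.stream()
                .filter(a -> a.accountId().equals(accountId))
                .max(Comparator.comparing(Account::created));
    }

    public static void main(String[] args) {
        List<Account> data = List.of(
                new Account("acc-1", LocalDateTime.of(2019, 7, 8, 9, 34)),
                new Account("acc-1", LocalDateTime.of(2019, 7, 9, 10, 0)),
                new Account("acc-2", LocalDateTime.of(2019, 7, 1, 8, 0)));
        System.out.println(findLatest(data, "acc-1").orElseThrow().created());
        // prints 2019-07-09T10:00
    }
}
```
This is the semantics the repository method name is meant to express; whether the generated Elasticsearch query actually applies the top-1 limit is the separate issue discussed in the answer below.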
Is it possible at all to do with the Spring magic or must I use QueryBuilder or my own logic for it?
Thanks and cheers
EDIT
Unrelated, but I realized that the Spring Repository magic doesn't work if a field is renamed during mapping with @JsonProperty. Assuming I do NOT do that renaming, the question remains the same. Currently I worked around this by implementing my own logic with:
@Repository
public interface AccountRepository extends ElasticsearchRepository<Account, String>, AccountRepositoryCustom {
    List<Account> findByIlmAccountIdentifierOrderByCreatedDesc(final String accountId);
}

public interface AccountRepositoryCustom {
    Account findLatestByAccountId(final String accountId);
}

@Repository
public class AccountRepositoryImpl implements AccountRepositoryCustom {
    @Autowired
    @Lazy
    private AccountRepository accountRepository;

    public Account findLatestByAccountId(final String accountId) {
        return accountRepository.findByIlmAccountIdentifierOrderByCreatedDesc(accountId).get(0);
    }
}
For the queries that are built from method names you must use the property names from your Document class, so here findByAccountId is correct.
The problem indeed is the renaming of this property in the index with the #JsonProperty annotation. This annotation was used for writing the mapping to the index, and as far as I recall as well when writing data with the Template classes but not with the Repository classes. So the repository query searches for a field accountId in the index, whereas the data is stored in the field ilmAccountIdentifier.
Version 3.2.0 of Spring Data Elasticsearch (currently RC1; RC2 will be released later this month, and GA should be out at the beginning of September) has a new Mapper implementation available with which this works without @JsonProperty, just by using the @Field annotation:
@Field(name = "ilmAccountIdentifier", type = FieldType.Text)
private String accountId;
But 3.2.x will not work with Elasticsearch 6.2, you would need to update to 6.7.
As a workaround, can you rename the accountId property to ilmAccountIdentifier?
Edit 23.07.2019:
I just found out that limiting results with topN not working is an old bug in Spring Data Elasticsearch; it seems it was never implemented in this sub-module. So please vote for this issue.
Note: spring-data-elasticsearch is a community driven module, so we live from contributions!
Edit 28.11.2019:
I implemented topN in August 2019; it will be in the Spring Data Elasticsearch Neumann release (version 4).

Morphia not applying sparse option to my index

I'm trying to use Morphia to interface with MongoDB, and my Morphia entity looks like this:
@Entity(some params about storing the entity)
public class Entity implements Serializable {
    <Some other fields here>

    @Indexed(options =
        @IndexOptions(unique = true, sparse = true)
    )
    private String field;

    <Some other fields here>
}
I would like this field to be unique if present, but not required (and not unique if not present; multiple entries should be able to exclude this field). My understanding of how to do this is with a unique sparse index, as I've tried to set up.
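That understanding is correct, and the desired semantics can be illustrated in miniature with a stdlib-only stand-in (hypothetical class, no MongoDB involved): a sparse unique index enforces uniqueness only for documents that actually contain the field, while any number of documents may omit it.

```java
import java.util.HashSet;
import java.util.Set;

// Stand-in illustrating sparse-unique semantics; not MongoDB/Morphia code.
public class SparseUniqueSketch {
    private final Set<String> indexed = new HashSet<>();

    // null models a document where the field is absent.
    boolean insert(String fieldValue) {
        if (fieldValue == null) return true; // absent field: skipped by a sparse index
        return indexed.add(fieldValue);      // false = duplicate value rejected
    }

    public static void main(String[] args) {
        SparseUniqueSketch idx = new SparseUniqueSketch();
        System.out.println(idx.insert("a"));  // true
        System.out.println(idx.insert("a"));  // false: duplicate value
        System.out.println(idx.insert(null)); // true: field missing
        System.out.println(idx.insert(null)); // true: many docs may omit the field
    }
}
```
A plain (non-sparse) unique index would instead treat every missing field as the same null value, so the second field-less document would be rejected, which is exactly why the sparse option matters here.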
The problem I'm running into is that when I check the index configuration in Studio3T, it appears that my index is being created as unique, but the sparse property is not applied.
What am I doing wrong?
Thanks.
EDIT: Upon further research, this appears like it might be an issue with Microsoft Azure CosmosDB. When I run this code locally, it works fine, but it does not create the sparse index properly on Azure CosmosDB. Updating tags accordingly.

Spring mongorepository save throws duplicate key exception

I am using Java and Spring. As a test, I query an object by id, then try to save the same object without updating anything. I get a duplicate key exception when I do this. According to what I've read, MongoRepository.save() should do an insert if the _id is null and an update otherwise. Clearly, I should get an update.
A bit of code:
// Succeeds
Datatype sut = mongoRepository.findOne("569eac0dd4c623dc65508679");
// Fails with duplicate key.
mongoRepository.save(sut);
Why? When I repeat the above with objects of other classes, they work. How can I troubleshoot this? I don't see how to break it down and isolate the problem.
Thanks
The error:
27906 [http-bio-8080-exec-3] 2016-05-02 13:00:26,304 DEBUG org.springframework.web.servlet.mvc.method.annotation.ExceptionHandlerExceptionResolver -
Resolving exception from handler
[
public gov.nist.healthcare.tools.hl7.v2.igamt.lite.web.DatatypeSaveResponse
gov.nist.healthcare.tools.hl7.v2.igamt.lite.web.controller.DatatypeController.save(
gov.nist.healthcare.tools.hl7.v2.igamt.lite.domain.Datatype)
throws gov.nist.healthcare.tools.hl7.v2.igamt.lite.web.exception.DatatypeSaveException
]:
org.springframework.dao.DuplicateKeyException: {
"serverUsed" : "127.0.0.1:27017" ,
"ok" : 1 ,
"n" : 0 ,
"err" : "E11000 duplicate key error index: igl.datatype.$_id_ dup key: { : ObjectId('569eac0dd4c623dc65508679') }" ,
"code" : 11000};
nested exception is com.mongodb.MongoException$DuplicateKey: {
"serverUsed" : "127.0.0.1:27017" ,
"ok" : 1 ,
"n" : 0 ,
"err" : "E11000 duplicate key error index: igl.datatype.$_id_ dup key: { : ObjectId('569eac0dd4c623dc65508679') }" ,
"code" : 11000}
...repeats
I just made a discovery. When saving as shown above, Spring attempts an insert, even though _id is populated.
When saving other objects (not shown, but similar), Spring performs an update, and yes, _id is again populated.
Why the difference? The documentation says Spring should update when _id is populated and insert when it is not.
Is there anything else that could be causing this? Something in my object? Perhaps my read converter?
Update:
I just met with the team. Upon scrutiny we determined we no longer need read converters. Problem solved by another means.
In my case the issue was that I added the version for my data model class.
@Version
private Long version;
The old documents did not have one, it resolved to null, and MongoTemplate decided that this is a new document.
In this case, just initialize the version with a default value (private Long version = 0L;).
When using read converters or write converters you can solve the issue by ensuring the object to be saved contains a non-null id field.
The SimpleMongoRepository checks if the entity is new before performing a conversion. In our instance, we had an object that did not have an id field and the write converter would add it.
Adding a populated id field to the object informs the SimpleMongoRepository to call save instead of insert.
The decision happens here. Your code may vary by Spring version. I hope this helps.
@Override
public <S extends T> S save(S entity) {
    Assert.notNull(entity, "Entity must not be null!");
    if (entityInformation.isNew(entity)) {
        return mongoOperations.insert(entity, entityInformation.getCollectionName());
    }
    return mongoOperations.save(entity, entityInformation.getCollectionName());
}
On the database side, you may have created unique indexes. See https://docs.mongodb.com/manual/core/index-unique/ for more information.
Implement equals and hashCode methods in your Datatype entity, and make sure that mongoRepository extends CrudRepository
(as described in https://docs.spring.io/spring-data/mongodb/docs/current/reference/html/#repositories) to ensure that if the objects are equal, the save method merges them instead of saving a new one.
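A minimal sketch of id-based equals/hashCode for such an entity (the Datatype class and its id field here are assumed from the question, not taken from the asker's actual code):

```java
import java.util.Objects;

public class Datatype {
    private final String id; // assumed stand-in for the MongoDB _id field

    public Datatype(String id) { this.id = id; }

    public String getId() { return id; }

    // Two entities represent the same document iff they share the same id.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Datatype)) return false;
        return Objects.equals(id, ((Datatype) o).id);
    }

    @Override
    public int hashCode() {
        return Objects.hashCode(id);
    }
}
```
Basing equality on the id (rather than on mutable fields) is the usual choice for persisted entities, since the id is the one property that survives a round-trip through the database.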

Struts2 and Hibernate insert operation error [duplicate]

org.hibernate.HibernateException: identifier of an instance
of org.cometd.hibernate.User altered from 12 to 3
In fact, my user table really must change its values dynamically; my Java app is multithreaded.
Any ideas how to fix it?
Are you changing the primary key value of a User object somewhere? You shouldn't do that. Check that your mapping for the primary key is correct.
What does your mapping XML file or mapping annotations look like?
You must detach your entity from the session before modifying its ID fields.
In my case, the PK field in hbm.xml was of type "integer", but in the bean code it was long.
In my case, the getter and setter names were different from the variable name:
private Long stockId;
public Long getStockID() {
    return stockId;
}
public void setStockID(Long stockID) {
    this.stockId = stockID;
}
where it should be:
public Long getStockId() {
    return stockId;
}
public void setStockId(Long stockID) {
    this.stockId = stockID;
}
In my case, I solved it changing the #Id field type from long to Long.
In my particular case, this was caused by a method in my service implementation that needed the Spring @Transactional(readOnly = true) annotation. Once I added that, the issue was resolved. Unusual, though, since it was just a select statement.
Make sure you aren't trying to use the same User object more than once while changing the ID. In other words, if you were doing something in a batch-type operation:
User user = new User(); // Using the same one over and over won't work
List<Customer> customers = fetchCustomersFromSomeService();
for (Customer customer : customers) {
    // User user = new User(); <-- This would work; you get a new one each time
    user.setId(customer.getId());
    user.setName(customer.getName());
    saveUserToDB(user);
}
In my case, a template had a typo: instead of checking for equivalency (==), it was using an assignment (=).
So I changed the template logic from:
if (user1.id = user2.id) ...
to
if (user1.id == user2.id) ...
and now everything is fine. So, check your views as well!
It is a problem in your update method. Just instantiate a new User before you save changes and you will be fine. If you map between DTO and entity classes, do this before the mapping.
I had this error also. I had a User object and was trying to change its Location; Location was a FK in the User table. I solved the problem with:
@Transactional
public void update(User input) throws Exception {
    User userDB = userRepository.findById(input.getUserId()).orElse(null);
    userDB.setLocation(new Location());
    userMapper.updateEntityFromDto(input, userDB);
    User user = userRepository.save(userDB);
}
Also ran into this error message, but the root cause was of a different flavor from those referenced in the other answers here.
Generic answer:
Make sure that once Hibernate loads an entity, no code changes the primary key value in that object in any way. When Hibernate flushes all changes back to the database, it throws this exception because the primary key changed. If you don't do it explicitly, look for places where this may happen unintentionally, perhaps on related entities that only have LAZY loading configured.
In my case, I am using a mapping framework (MapStruct) to update an entity. In the process, other referenced entities were also being updated, as mapping frameworks tend to do by default. When I later replaced the original entity with a new one (in DB terms, changed the value of the foreign key to reference a different row in the related table), the primary key of the previously referenced entity had already been updated, and Hibernate attempted to persist this update on flush.
I was facing this issue, too.
The target table is a relation table, wiring two IDs from different tables. I have a UNIQUE constraint on the value combination, replacing the PK.
When updating one of the values of a tuple, this error occurred.
This is how the table looks (MySQL):
CREATE TABLE my_relation_table (
mrt_left_id BIGINT NOT NULL,
mrt_right_id BIGINT NOT NULL,
UNIQUE KEY uix_my_relation_table (mrt_left_id, mrt_right_id),
FOREIGN KEY (mrt_left_id)
REFERENCES left_table(lef_id),
FOREIGN KEY (mrt_right_id)
REFERENCES right_table(rig_id)
);
The Entity class for the RelationWithUnique entity looks basically like this:
@Entity
@IdClass(RelationWithUnique.class)
@Table(name = "my_relation_table")
public class RelationWithUnique implements Serializable {
    ...
    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_left_id", referencedColumnName = "left_table.lef_id")
    private LeftTableEntity leftId;

    @Id
    @ManyToOne
    @JoinColumn(name = "mrt_right_id", referencedColumnName = "right_table.rig_id")
    private RightTableEntity rightId;
    ...
...
I fixed it by
// usually, we need to detach the object as we are updating the PK
// (rightId being part of the UNIQUE constraint) => PK
// but this would produce a duplicate entry,
// therefore, we simply delete the old tuple and add the new one
final RelationWithUnique newRelation = new RelationWithUnique();
newRelation.setLeftId(oldRelation.getLeftId());
newRelation.setRightId(rightId); // here, the value is updated actually
entityManager.remove(oldRelation);
entityManager.persist(newRelation);
Thanks a lot for the hint of the PK, I just missed it.
The problem can also be a mismatch between the type of the object's PK ("User" in your case) and the type you ask Hibernate for in session.get(type, id);.
In my case the error was identifier of an instance of <skipped> was altered from 16 to 32.
The object's PK type was Integer; Hibernate was asked for a Long.
In my case it was because the property was long on the object but int in the mapping XML; this exception should be clearer.
If you are using Spring MVC or Spring Boot, try to avoid using
@ModelAttribute("user") in one controller, and in another controller
model.addAttribute("user", userRepository.findOne(someId));
This situation can produce such an error.
This is an old question, but I'm going to add the fix for my particular issue (Spring Boot, JPA using Hibernate, SQL Server 2014) since it doesn't exactly match the other answers included here:
I had a foreign key, e.g. my_id = '12345', but the value in the referenced column was my_id = '12345 '. It had an extra space at the end, which Hibernate didn't like. I removed the space, fixed the part of my code that was allowing the extra space, and everything works fine.
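This failure mode is easy to reproduce in plain Java (a stand-in sketch, no Hibernate involved): the two key strings look identical when printed, but they do not compare equal until the trailing whitespace is stripped.

```java
public class TrailingSpaceSketch {
    public static void main(String[] args) {
        String foreignKey = "12345";
        String referencedValue = "12345 "; // note the trailing space

        System.out.println(foreignKey.equals(referencedValue));        // prints false
        System.out.println(foreignKey.equals(referencedValue.trim())); // prints true
    }
}
```
The same invisible mismatch is what made Hibernate treat the two identifiers as different values.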
Faced the same issue.
I had an association between two beans. In bean A the variable type was Integer, and in bean B the same variable was Long.
I changed both of them to Integer, which solved my issue.
I solved this by creating a new instance of the depended-on object. For example:
instanceA.setInstanceB(new InstanceB());
instanceA.setInstanceB(YOUR NEW VALUE);
In my case, I had a primary key in the database that had an accent, but the foreign key in the other table didn't. For some reason, MySQL allowed this.
It looks like you have changed the identifier of an instance of an org.cometd.hibernate.User object managed by the JPA entity context.
In this case, create a new User entity object with the appropriate id, and set it instead of the original User object.
Are you using multiple transaction managers from the same service class, e.g. if your project has two or more transaction configurations?
If so, separate them first.
I got the issue when I tried fetching an existing DB entity, modified a few fields, and executed
session.save(entity)
instead of
session.merge(entity)
Since the entity already exists in the DB, we should merge() instead of save().
You may also have modified the primary key of a fetched entity and then tried to save it within the same transaction, creating a new record from the existing one.

MongoDB Morphia save() generates two objects with the same ID

I have a Java application that connects to a MongoDB database through the Morphia library. My POJO that I store in the database has a String field named _id, annotated with the @Id annotation (com.google.code.morphia.annotations.Id).
I generate a new object (it has a null _id).
I call save(object) on the datastore provided by Morphia.
The object gets updated after being stored and now has an _id value.
I call save(object) again, and a new entry is created in the database with the same _id.
All subsequent save() operations on the object overwrite the old one and do not produce any new entries in the database.
So for example, after 10 save() calls on the same object my database ends up looking like this:
{ "_id" : { "$oid" : "539ade7ee4b0451f28ba0e2e"} , "className" : "blabla" , blabla ...}
{ "_id" : "539ade7ee4b0451f28ba0e2e" , "className" : "blabla" , blabla ...}
As seen, those two entries have the same _id but with different representations: one has it as an object, the other as a string. Normally I should have only one entry, shouldn't I?
Do not use a string for the _id. This will fix your problem:
@Id
protected ObjectId id;
While you could use protected String id (this shouldn't create duplicates IMHO), you'll have problems if you use @Reference and might run into weird edge cases elsewhere, so avoid it if possible.