I'm currently using Spring Data MongoDB to abstract MongoDB operations, and using an Azure DocumentDB database with MongoDB protocol support. I've also run into this using the latest MongoDB Java driver by itself.
There is an issue with setting a TTL index in this process.
I'm receiving the following exception.
Caused by: com.mongodb.CommandFailureException: { "serverUsed" : "****-****-test.documents.azure.com:****" , "_t" : "OKMongoResponse" , "ok" : 0 , "code" : 2 , "errmsg" : "The 'expireAfterSeconds' option has invalid value. Ensure to provide a nonzero positive integer, or '-1' which means never expire." , "$err" : "The 'expireAfterSeconds' option has invalid value. Ensure to provide a nonzero positive integer, or '-1' which means never expire."}
at com.mongodb.CommandResult.getException(CommandResult.java:76)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:140)
at com.mongodb.DBCollectionImpl.createIndex(DBCollectionImpl.java:399)
at com.mongodb.DBCollection.createIndex(DBCollection.java:597)
This is a simple representation of the POJO that I'm using.
public class Train {
    @JsonProperty
    private String id;

    @JsonProperty("last_updated")
    @Indexed(expireAfterSeconds = 1)
    private Date lastUpdated;

    // Getters & setters
    // ...
}
This was my initial approach to initializing the index (via the @Indexed annotation).
I've also attempted to initialize the index via:
mongoTemplate.indexOps(collection.getName())
        .ensureIndex(new Index("last_updated", Sort.Direction.DESC)
                .expire(1, TimeUnit.SECONDS));
Both ways of setting the index throw that same exception.
I've also seen an error saying that it can only be done on the '_ts' field. I think this is due to Azure DocumentDB using the '_ts' field for its own TTL operation. So I've tried the following, with the same results:
Added a new field, Long '_ts', to the POJO and tried with the annotation.
Attempted to set the index via the ensureIndex method with the '_ts' field.
Did the same things as above but with the type of '_ts' changed to Date.
I'm new to these technologies (DocumentDB and MongoDB), so I'm probably missing something obvious.
Any thoughts?
Revisiting the question I posted a while back to reply with the solution that I figured out.
Note that DocumentDB has been renamed to CosmosDB since I posted this question.
There was an issue with type casting on either the Spring framework side or the CosmosDB/DocumentDB platform side. Despite the documentation saying it needs an integer, you actually need to pass it a double value.
I'm using something along the lines of the following, and it works:
dcoll.createIndex(new BasicDBObject("_ts", 1),
        new BasicDBObject("expireAfterSeconds", 10.0));
This works for me.
db.coll.createIndex( { "_ts": 1 }, { expireAfterSeconds: 3600 } )
Using Spring Boot and Spring Data in a ContextListener:
private final MongoTemplate mongoTemplate;

@EventListener(ContextRefreshedEvent.class)
public void ensureIndexes() {
    mongoTemplate.indexOps(YourCollection.class)
            .ensureIndex(new Index()
                    .on("_ts", Sort.Direction.ASC)
                    .expire(30, TimeUnit.SECONDS));
}
The Azure console is a little confusing: in the "Settings" options for the collection it says "To enable time-to-live (TTL) for your collection/documents, create a TTL index" - it will say this even if you already have such an index created.
Confirmed working January 2023 with Spring Boot 2.7.6 and Azure CosmosDB.
According to the blog post DocumentDB now supports Time-To-Live (TTL) and the section Setting TTL on a document of the official tutorial Expire data in DocumentDB collections automatically with time to live, the TTL setting on Azure DocumentDB is different from MongoDB, even though Azure supports operating on DocumentDB via the MongoDB driver and spring-data in Java.
The TTL should be defined as a property on the DocumentDB document and set to a nonzero positive integer value, so please try to change your code as below.
public class Train {
    @JsonProperty
    private String id;

    @JsonProperty("last_updated")
    private Date lastUpdated;

    /*
     * Define a property as ttl and set the default value 1.
     */
    @JsonProperty("ttl")
    private int expireAfterSeconds = 1;

    // Getters & setters
    // ...
}
Hope it helps. If you have any concerns, please feel free to let me know.
Update:
Note the following statement at https://learn.microsoft.com/en-us/azure/documentdb/documentdb-time-to-live#configuring-ttl:
By default, time to live is disabled in all DocumentDB collections and on all documents.
So please first enable the TIME TO LIVE feature for the collection in the Azure portal, or follow the above link to enable it programmatically.
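For the programmatic route, a minimal sketch using the DocumentDB Java SDK might look like the following; the DocumentClient setup and the collection link "dbs/mydb/colls/mycoll" are assumptions for illustration, not values from the question:
import com.microsoft.azure.documentdb.DocumentClient;
import com.microsoft.azure.documentdb.DocumentCollection;

// ...
DocumentCollection collection = client
        .readCollection("dbs/mydb/colls/mycoll", null)   // assumed collection link
        .getResource();
// -1 enables TTL on the collection without a default expiry;
// a positive value sets a default TTL in seconds for all documents.
collection.setDefaultTimeToLive(-1);
client.replaceCollection(collection, null);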
Related
I am trying to test MongoDB TTL functionality for short-lived collections. I am using the @Indexed annotation on a field like this:
@Indexed(name = "deleteAt", expireAfterSeconds = 5)
private Date deleteAt;
In the constructor I initialise deleteAt to this.deleteAt = new Date(), and I expect that after inserting my document into Mongo, once the TTL monitor runs, the document will be removed. But it is not being removed; even waiting for a few minutes doesn't help. I ran db.serverStatus().metrics.ttl in the mongo shell and it returned:
[
{
"deletedDocuments": 0,
"passes": 8
}
]
And command db.adminCommand({getParameter:1, ttlMonitorEnabled: 1}) returns:
[
{
"ok": 1,
"ttlMonitorEnabled": true
}
]
Is something wrong with my code? Or is it because embedded mongo doesn't support this? Is it possible to configure embedded mongo so it would work?
I am using https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo for embedded MongoDB with default settings.
It's a MongoDB server (mongod) feature only:
A background thread in mongod reads the values in the index and removes expired documents from the collection.
https://docs.mongodb.com/manual/core/index-ttl/#delete-operations
The embedded MongoDB framework "simulates" the mongod process with the MongodProcess class only to deploy a basic embedded server.
https://github.com/flapdoodle-oss/de.flapdoodle.embed.mongo/blob/master/src/main/java/de/flapdoodle/embed/mongo/MongodProcess.java
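One more thing worth knowing when testing: on a server that does support TTL, the monitor only passes every 60 seconds by default, so a document can outlive its expiry by up to a minute. For tests you can shorten that interval; a hedged sketch with the (3.x) MongoDB Java driver, host and port being assumptions:
import com.mongodb.MongoClient;
import org.bson.Document;

// Shorten the TTL monitor pass interval to 1 second (test environments only).
MongoClient mongoClient = new MongoClient("localhost", 27017);
mongoClient.getDatabase("admin").runCommand(
        new Document("setParameter", 1).append("ttlMonitorSleepSecs", 1));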
Edit: Solution: Upgrading to ISIS 1.17.0 and setting the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false solved the first two problems.
I am using Apache ISIS 1.16.2 and I try to store Blob/Clob content in a MariaDB database (v10.1.35). Therefore, I use the DB connector org.mariadb.jdbc.mariadb-java-client (v2.3.0) and, in the code, the @Persistent annotation as shown in many examples and the ISIS documentation.
Using the code below, I just get one single column named content_name (in which the Blob object is serialized in binary form) instead of the three columns content_name, content_mimetype and content_bytes.
This is the Document class with the Blob field content:
@PersistenceCapable(identityType = IdentityType.DATASTORE)
@DatastoreIdentity(strategy = IdGeneratorStrategy.IDENTITY, column = "id")
@DomainObject(editing = Editing.DISABLED, autoCompleteRepository = DocumentRepository.class, objectType = "Document")
@Getter
// ...
public class Document implements Comparable<Document> {

    @Persistent(
            defaultFetchGroup = "false",
            columns = {
                    @Column(name = "content_name"),
                    @Column(name = "content_mimetype"),
                    @Column(name = "content_bytes",
                            jdbcType = "BLOB",
                            sqlType = "LONGVARBINARY")
            })
    @Nonnull
    @Column(allowsNull = "false")
    @Property(optionality = Optionality.MANDATORY)
    private Blob content;

    // ...

    @Column(allowsNull = "false")
    @Property
    private Date created = new Date();

    public Date defaultCreated() {
        return new Date();
    }

    @Column(allowsNull = "true")
    @Property
    @Setter
    private String owner;

    // ...
}
This creates the following schema for the DomainObject class Document with just one column for the Blob field:
CREATE TABLE `document` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`content_name` mediumblob,
`created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
`owner` varchar(255) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Normally, the class org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisBlobMapping of the ISIS framework should do the mapping. But it seems that this Mapper is somehow not involved...
1. Question: How do I get the Blob field split up into the three columns (as described above and in many demo projects)? Even if I switch to HSQLDB, I still get only one column, so this might not be an issue with MariaDB.
2. Question: If I use a Blob/Clob field in a class that inherits from another DomainObject class, I often get an org.datanucleus.exceptions.NucleusException (stack trace below) and I cannot make head or tail of it. What are potential pitfalls when dealing with inheritance? Why am I getting this exception?
3. Question: I need to store documents belonging to domain objects (as you might have guessed). The proper way of doing so would be to store the documents in a file system tree instead of a database (which also has, by default, some size limitations for object data) and reference the files in the object. In the DataNucleus documentation I found the extension serializeToFileLocation that should do exactly that. I tried it by adding the line @Extension(vendorName = "datanucleus", key = "serializeToFileLocation", value = "document-repository") to the Blob field, but nothing happened. So my question is: is this DataNucleus extension compatible with Apache Isis?
If this extension conflicts with Isis, would it be possible to have a javax.jdo.listener.StoreLifecycleListener or org.apache.isis.applib.AbstractSubscriber that stores the Blob on a file system before persisting the domain object to the database and restores it before loading? Are there better solutions available?
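To make that idea concrete, here is a minimal sketch of such a JDO listener; FileBlobStore is a hypothetical helper and the entity accessors are assumptions, so treat this as the question's proposal spelled out rather than a confirmed Isis-compatible solution:
import javax.jdo.listener.InstanceLifecycleEvent;
import javax.jdo.listener.StoreLifecycleListener;

public class BlobToFileListener implements StoreLifecycleListener {

    @Override
    public void preStore(InstanceLifecycleEvent event) {
        Object pc = event.getPersistentInstance();
        if (pc instanceof Document) {
            Document doc = (Document) pc;
            // Write the Blob bytes to the document-repository tree and keep
            // only the file reference on the entity (hypothetical helper).
            // FileBlobStore.writeAndClear(doc);
        }
    }

    @Override
    public void postStore(InstanceLifecycleEvent event) {
        // no-op
    }
}
It would be registered via persistenceManager.addInstanceLifecycleListener(new BlobToFileListener(), Document.class); a matching LoadLifecycleListener would restore the content before loading.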
That's it for now. Thank you in advance! ;-)
The stack trace to question 2:
... (other Wicket related stack trace)
Caused by: org.datanucleus.exceptions.NucleusException: Creation of SQLExpression for mapping "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" caused error
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:199)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
... 81 more
Caused by: org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.datanucleus.store.rdbms.mapping.java.SerialisedMapping" since not supported
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:189)
at org.datanucleus.store.rdbms.sql.expression.SQLExpressionFactory.newExpression(SQLExpressionFactory.java:155)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.getStatement(LocateBulkRequest.java:158)
at org.datanucleus.store.rdbms.request.LocateBulkRequest.execute(LocateBulkRequest.java:283)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.locateObjects(RDBMSPersistenceHandler.java:564)
at org.datanucleus.ExecutionContextImpl.findObjects(ExecutionContextImpl.java:3313)
at org.datanucleus.api.jdo.JDOPersistenceManager.getObjectsById(JDOPersistenceManager.java:1850)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.loadPersistentPojos(PersistenceSession.java:1010)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1603)
at org.apache.isis.core.runtime.system.persistence.PersistenceSession.adaptersFor(PersistenceSession.java:1573)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.loadInBulk(EntityCollectionModel.java:107)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel$Type$1.load(EntityCollectionModel.java:93)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:454)
at org.apache.isis.viewer.wicket.model.models.EntityCollectionModel.load(EntityCollectionModel.java:70)
at org.apache.wicket.model.LoadableDetachableModel.getObject(LoadableDetachableModel.java:135)
at org.apache.isis.viewer.wicket.ui.components.collectioncontents.ajaxtable.CollectionContentsSortableDataProvider.size(CollectionContentsSortableDataProvider.java:68)
at org.apache.wicket.markup.repeater.data.DataViewBase.internalGetItemCount(DataViewBase.java:142)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemCount(AbstractPageableView.java:235)
at org.apache.wicket.markup.repeater.AbstractPageableView.getRowCount(AbstractPageableView.java:216)
at org.apache.wicket.markup.repeater.AbstractPageableView.getViewSize(AbstractPageableView.java:314)
at org.apache.wicket.markup.repeater.AbstractPageableView.getItemModels(AbstractPageableView.java:99)
at org.apache.wicket.markup.repeater.RefreshingView.onPopulate(RefreshingView.java:93)
at org.apache.wicket.markup.repeater.AbstractRepeater.onBeforeRender(AbstractRepeater.java:124)
at org.apache.wicket.markup.repeater.AbstractPageableView.onBeforeRender(AbstractPageableView.java:115)
// <-- 8 times the following lines
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.MarkupContainer.onBeforeRenderChildren(MarkupContainer.java:1825)
at org.apache.wicket.Component.onBeforeRender(Component.java:3916)
// -->
at org.apache.wicket.Page.onBeforeRender(Page.java:801)
at org.apache.wicket.Component.internalBeforeRender(Component.java:950)
at org.apache.wicket.Component.beforeRender(Component.java:1018)
at org.apache.wicket.Component.internalPrepareForRender(Component.java:2236)
at org.apache.wicket.Page.internalPrepareForRender(Page.java:242)
at org.apache.wicket.Component.render(Component.java:2325)
at org.apache.wicket.Page.renderPage(Page.java:1018)
at org.apache.wicket.request.handler.render.WebPageRenderer.renderPage(WebPageRenderer.java:124)
at org.apache.wicket.request.handler.render.WebPageRenderer.respond(WebPageRenderer.java:195)
at org.apache.wicket.core.request.handler.RenderPageRequestHandler.respond(RenderPageRequestHandler.java:175)
at org.apache.wicket.request.cycle.RequestCycle$HandlerExecutor.respond(RequestCycle.java:895)
at org.apache.wicket.request.RequestHandlerStack.execute(RequestHandlerStack.java:64)
at org.apache.wicket.request.cycle.RequestCycle.execute(RequestCycle.java:265)
at org.apache.wicket.request.cycle.RequestCycle.processRequest(RequestCycle.java:222)
at org.apache.wicket.request.cycle.RequestCycle.processRequestAndDetach(RequestCycle.java:293)
at org.apache.wicket.protocol.http.WicketFilter.processRequestCycle(WicketFilter.java:261)
at org.apache.wicket.protocol.http.WicketFilter.processRequest(WicketFilter.java:203)
at org.apache.wicket.protocol.http.WicketFilter.doFilter(WicketFilter.java:284)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.isis.core.webapp.diagnostics.IsisLogOnExceptionFilter.doFilter(IsisLogOnExceptionFilter.java:52)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
at org.apache.shiro.web.servlet.AbstractShiroFilter.executeChain(AbstractShiroFilter.java:449)
at org.apache.shiro.web.servlet.AbstractShiroFilter$1.call(AbstractShiroFilter.java:365)
at org.apache.shiro.subject.support.SubjectCallable.doCall(SubjectCallable.java:90)
at org.apache.shiro.subject.support.SubjectCallable.call(SubjectCallable.java:83)
at org.apache.shiro.subject.support.DelegatingSubject.execute(DelegatingSubject.java:383)
at org.apache.shiro.web.servlet.AbstractShiroFilter.doFilterInternal(AbstractShiroFilter.java:362)
at org.apache.shiro.web.servlet.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:125)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
// ... some Jetty stuff
at java.lang.Thread.run(Thread.java:748)
After some research, I think that the problems in questions 1 and 2 are related to this ISIS bug report #1902.
In short: the DataNucleus extension plugin resolving mechanism does not seem to find the ISIS value type adapters and therefore cannot know how to serialize ISIS's Blob/Clob types.
According to the aforementioned ISIS bug report, this problem is fixed in 1.17.0, so I am trying to upgrade from 1.16.2 to this version (which introduced many other problems, but that will be an extra topic).
For question 3 I found Minio, which basically addresses my problem, but it is a bit oversized for my needs. I will keep looking for other solutions to store Blobs/Clobs on the local file system and will keep Minio in mind...
UPDATE:
I upgraded my project to ISIS version 1.17.0. It solved my problem in question 1 (now I get three columns for a Blob/Clob object).
The problem in question 2 (NucleusException) is not solved by the upgrade. I figured out that it is only thrown when returning a list of DomainObjects with Blob/Clob field(s), i.e. when they are rendered as a standalone table. If I go directly to an object's entity view, no exception is thrown and I can see/modify/download the Blob/Clob content.
In the meantime, I wrote my own DataNucleus plugin that stores the Blobs/Clobs as files on a file system.
UPDATE 2
I found a solution for circumventing the org.datanucleus.exceptions.NucleusException: Unable to create SQLExpression for mapping of type "org.apache.isis.objectstore.jdo.datanucleus.valuetypes.IsisClobMapping" since not supported. It seems to be a problem with bulk-loading (but I do not know any details).
Deactivating bulk-load via the property isis.persistor.datanucleus.standaloneCollection.bulkLoad=false (which is initially set to true by ISIS archetypes) solved the problem.
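For reference, the line as it would appear in the configuration file (the location below is the usual one from the ISIS archetypes):
# WEB-INF/isis.properties
isis.persistor.datanucleus.standaloneCollection.bulkLoad=false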
I'm writing software for data synchronization between a custom application and SugarCRM. Therefore I need an updateOrCreate() function. My problem is that the custom software uses different UUIDs than SugarCRM, so I can't match on the UUID to decide between update and create. So I want to save the custom UUID in a custom field of SugarCRM.
But I have no idea how to do that over the REST API of SugarCRM.
By the way, I'm writing a Java application.
Thank you for your help!
As far as I'm aware there is no update-or-create API (see https://your-sugarsite/rest/v10/help), however if you just want to use the API (rather than customize it) you could sync data like this:
1) Fetch all ids of records that have a custom uuid by using the POST /rest/v10/<module>/filter endpoint and a payload similar to:
{
    "offset": 0,
    "max_num": 1000,
    "fields": ["id", "custom_uuid_c"],
    "filter": [{"custom_uuid_c": {"$not_empty": ""}}]
}
or if you just need a specific custom uuid at a time:
{
    "offset": 0,
    "max_num": 1000,
    "fields": ["id"],
    "filter": [{"custom_uuid_c": {"$equals": "example-custom-uuid"}}]
}
The response will look something like this:
{
    "next_offset": -1,
    "records": [
        {"id": "example-sugar-uuid", "custom_uuid_c": "example-custom-uuid"},
        ...
    ]
}
Notes:
Make sure to evaluate next_offset, as even with a high max_num you may not get all records at once because of server limits. As long as next_offset isn't -1, you should use its value as offset in a new request to get the remaining records (see the paging sketch after these notes).
You can supply all field names you need to sync in the fields array, so that you get that information early and can check whether or not an update is required at all (maybe data is still up-to-date?)
Sugar also always includes certain fields in the response, whether they were requested or not (e.g. id and date_modified). I did not include them all in the response snippets for the sake of simplicity.
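A minimal paging sketch based on the note above; fetchPage is a hypothetical helper that POSTs the filter payload and parses the JSON response (Jackson is assumed), and Accounts is just an example module:
import com.fasterxml.jackson.databind.JsonNode;
import java.util.ArrayList;
import java.util.List;

// ...
List<JsonNode> allRecords = new ArrayList<>();
int offset = 0;
while (offset != -1) {
    JsonNode page = fetchPage("/rest/v10/Accounts/filter", offset, 1000); // hypothetical helper
    page.get("records").forEach(allRecords::add);
    offset = page.get("next_offset").asInt();
}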
2)
Based on the information received in the previous step, you know which Sugar ID belongs to which custom UUID, and you can detect/prepare data for updates.
If you need to sync everything and retrieve the complete list first, I suggest you build a lookup table custom-uuid => sugar-id, so that you do not have to loop through the data array and compare fields when looking for a specific ID (see the sketch below). Don't forget to consider the possibility of a custom UUID being present in more than one Sugar record at a time, unless you enforce uniqueness on the server/database side.
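A sketch of that lookup table, reusing the Jackson-parsed records collected in the paging sketch above (field names as in the filter response):
import com.fasterxml.jackson.databind.JsonNode;
import java.util.HashMap;
import java.util.Map;

// ...
Map<String, String> sugarIdByCustomUuid = new HashMap<>();
for (JsonNode rec : allRecords) {
    // Last one wins here; handle duplicate custom UUIDs per your own rules.
    sugarIdByCustomUuid.put(rec.get("custom_uuid_c").asText(),
            rec.get("id").asText());
}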
3)
Now that you have all the information you need, you can update and create records as needed:
Update existing record: PUT /rest/v10/<module>/<record_id>
Create missing record: POST /rest/v10/<module>
If you want to send a lot of creates and/or updates in a single request, have a look at the POST /rest/v10/bulk API - if your version of Sugar has it.
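Putting it together, the update-or-create decision might look like this; sendPut and sendPost are hypothetical HTTP helpers, and Accounts is again just an example module:
String sugarId = sugarIdByCustomUuid.get(customUuid);
if (sugarId != null) {
    // Record exists in Sugar: update it.
    sendPut("/rest/v10/Accounts/" + sugarId, changedFieldsJson);     // hypothetical helper
} else {
    // Record missing: create it, storing the custom UUID in custom_uuid_c.
    sendPost("/rest/v10/Accounts", newRecordJsonWithCustomUuid);     // hypothetical helper
}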
Final notes:
The filter operator definitions on /rest/v10/help seem incomplete; for more info you can check the filter docs.
I have a MongoDB aggregation query and it works perfectly in the shell.
How can I rewrite this query to use it with Morphia?
org.mongodb.morphia.aggregation.Group.addToSet(String field) accepts only one field name, but I need to add an object to the set.
Query:
......aggregate([
{$group:
{"_id":"$subjectHash",
"authors":{$addToSet:"$fromAddress.address"},
---->> "messageDataSet":{$addToSet:{"sentDate":"$sentDate","messageId":"$_id"}},
"messageCount":{$sum:1}}},
{$sort:{....}},
{$limit:10},
{$skip:0}
])
Java code:
AggregationPipeline aggregationPipeline = myDatastore.createAggregation(Message.class)
.group("subjectHash",
grouping("authors", addToSet("fromAddress.address")),
--------??????------>> grouping("messageDataSet", ???????),
grouping("messageCount", new Accumulator("$sum", 1))
).sort(...)).limit(...).skip(...);
That's currently not supported but if you'll file an issue I'd be happy to include that in an upcoming release.
Thanks for your answer. I guessed as much from the source code. :(
I don't want to use spring-data or the Java driver directly (for this project), so I changed my document representation.
I added a messageDataSet object which contains sentDate and messageId (and some other nested objects); these values become duplicated in a document, which is a bad design.
The aggregation becomes: "messageDataSet":{$addToSet:"$messageDataSet"},
and the Java code is: grouping("messageDataSet", addToSet("messageDataSet")),
This works with Morphia. Thanks.
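For illustration, a hedged sketch of the restructured document described above; the class and field names are inferred from the aggregation, and the duplication of sentDate/messageId is the trade-off acknowledged above:
import java.util.Date;
import org.bson.types.ObjectId;
import org.mongodb.morphia.annotations.Embedded;
import org.mongodb.morphia.annotations.Entity;
import org.mongodb.morphia.annotations.Id;

@Entity
public class Message {
    @Id
    private ObjectId id;
    private String subjectHash;
    @Embedded
    private MessageDataSet messageDataSet; // duplicated copy of sentDate + messageId
    // ...
}

class MessageDataSet {
    private Date sentDate;
    private ObjectId messageId;
}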
We have been working with MongoDB/Morphia since the early versions of Morphia, so we have always used a String as the id field in our persistent objects:
@Id
private String id;

public PersistentEntity() {
    id = new ObjectId().toHexString();
}
Up to version 0.108 of Morphia that was fine. Since Morphia 0.110, however, references to objects with an id as outlined above are no longer found. The error shown is:
Mrz 10, 2015 9:49:35 AM org.mongodb.morphia.mapping.ReferenceMapper$1 eval
WARNUNG: Null reference found when retrieving value for <classname>
Currently our systems run with mongodb 2.6.8, java-driver 2.13.0 and morphia 0.108.
So how do we migrate the existing data on the customer systems? There is no way to change the _id field from String to ObjectId - MongoDB does not allow that. Are we stuck with Morphia 0.108?
Has anyone solved similar problems?
Are you using multiple datastores in the same app? There was a change in that area, and we've had similar issues with the same version.
There will be a fix in 1.0-rc, which should be the next release.