Morphia - How to replace LongIdEntity.StoredId in last version? - java

I just switched to the latest version of Morphia (1.0.1). The previous one was com.github.jmkgreen.morphia 1.2.3.
I don't know how to replace LongIdEntity.StoredId, which I used to generate auto-incremented long ids.
Edit: here is how it worked before:
public Key<Snapshot> save(PTSnapshot entity) {
    if (entity.getId() == null) {
        String collName = ds.getCollection(getClass()).getName();
        Query<StoredId> q = ds.find(StoredId.class, "_id", collName);
        UpdateOperations<StoredId> uOps = ds.createUpdateOperations(StoredId.class).inc("value");
        StoredId newId = ds.findAndModify(q, uOps);
        if (newId == null) {
            newId = new StoredId(collName);
            ds.save(newId);
        }
        entity.setId(newId.getValue());
    }
    return super.save(entity);
}

The StoredId class is just a POJO with three fields:
id
className (stores the type of object the auto-increment is done on, though you could store something else; it is only used to retrieve the right counter value, because you can have more than one auto-incremented collection!)
value (stores the current value of the auto-increment)
But it is just a helper; you can reproduce the behavior yourself.
Basically you just need a collection where you store a simple number and increment it with findAndModify() each time a new object is inserted.
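For reference, here is roughly what such a helper can look like: a minimal sketch assuming Morphia's @Entity/@Id annotations, with illustrative field names taken from the description above.
// Minimal sketch of the StoredId helper described above (field names illustrative).
@Entity("ids")
public class StoredId {
    @Id
    private String className; // which collection this counter belongs to
    private Long value = 1L;  // current value of the auto-increment

    private StoredId() {}     // no-arg constructor for Morphia

    public StoredId(String className) {
        this.className = className;
    }

    public Long getValue() {
        return value;
    }
}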
My thought is that Morphia/Mongo decided to remove this because auto-increments are not recommended with Mongo databases, and ObjectIds are more powerful.

Thanks.
Here is the answer:
if (entity.getId() == null) {
    DBCollection ids = getDatastore().getDB().getCollection("ids");
    BasicDBObject findQuery = new BasicDBObject("_id", getClass().getSimpleName());
    DBObject incQuery = new BasicDBObject("$inc", new BasicDBObject("value", 1L));
    // returnNew = true returns the incremented counter rather than the old document;
    // upsert = true creates the counter document on first use (the two-argument
    // overload returns null and inserts nothing, so every entity would get the same id)
    DBObject result = ids.findAndModify(findQuery, null, null, false, incQuery, true, true);
    entity.setId(((Number) result.get("value")).longValue());
}

Related

Is there a way to get latest object node by date from list of nodes provided by objectMapper?

I have a list of objects, List<KeyWithDocument> docs = getDocuments();. The class each individual KeyWithDocument is mapped to is Detail. Each entry in the list carries a date, and I am trying to get only the document with the latest date. I implemented the simple logic below, which works.
I want to know if this can be done in a better way.
List<KeyWithDocument> docs = getDocuments();
Detail detail1 = null;
// detailOutput will be the final output after the for loop ends,
// as it will contain the latest date object
Detail detailOutput = null;
Date previousMax = null;
for (KeyWithDocument kd : docs) {
    detail1 = objectMapper.treeToValue(kd.getDocument().getContent(), Detail.class);
    Date creationDate = detail1.getCreationDate();
    if (creationDate == null) {
        continue;
    }
    if (previousMax == null) {
        previousMax = creationDate;
        detailOutput = objectMapper.treeToValue(kd.getDocument().getContent(), Detail.class);
    } else if (previousMax.before(creationDate)) {
        previousMax = creationDate;
        detailOutput = objectMapper.treeToValue(kd.getDocument().getContent(), Detail.class);
    }
}
I have changed the variable names from my actual ones, so pardon the bad naming convention here.
I am looking for a more optimized way of doing this.
Can anyone help me with this?
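One way to tighten this is with Java 8 streams, which also avoids deserializing a matching element twice. A minimal sketch, assuming the same KeyWithDocument, Detail, and objectMapper from the question (the only extra imports are java.util.Comparator, java.io.UncheckedIOException, and Jackson's JsonProcessingException):
static Detail latestDetail(List<KeyWithDocument> docs, ObjectMapper objectMapper) {
    return docs.stream()
            .map(kd -> {
                try {
                    // deserialize each entry exactly once
                    return objectMapper.treeToValue(kd.getDocument().getContent(), Detail.class);
                } catch (JsonProcessingException e) {
                    throw new UncheckedIOException(e); // treeToValue is checked; rewrap for the lambda
                }
            })
            .filter(d -> d.getCreationDate() != null)           // ignore entries without a date
            .max(Comparator.comparing(Detail::getCreationDate)) // pick the latest creationDate
            .orElse(null);                                      // empty list, or all dates null
}
Calling latestDetail(docs, objectMapper) then replaces the whole loop.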

Improve performance of loading 100,000 records from database

We created a program to make database access easier for other programs, so the code I'm showing is used in multiple other programs.
One of those programs receives about 10,000 records from one of our clients and has to check whether these are already in our database. If not, we insert them into the database (they can also change, and then have to be updated).
To make this easy, we load all the entries from the whole table (currently 120,000), create a class instance for every entry we get, and put all of them into a HashMap.
Loading the whole table this way takes around 5 minutes. We also sometimes have to restart the program because we run into a GC overhead error, since we work on limited hardware. Do you have an idea of how we can improve the performance?
Here is the code that loads all entries (we have a global limit of 10,000 entries per query, so we use a loop):
public Map<String, IMasterDataSet> getAllInformationObjects(ISession session) throws MasterDataException {
    IQueryExpression qe;
    IQueryParameter qp;
    // our main SDP class
    Constructor<?> constructorForSDPbaseClass = getStandardConstructor();
    SimpleDateFormat itaTimestampFormat = new SimpleDateFormat("yyyyMMddHHmmssSSS");
    // search in standard time range (modification date!)
    Calendar cal = Calendar.getInstance();
    cal.set(2010, Calendar.JANUARY, 1);
    Date startDate = cal.getTime();
    Date endDate = new Date();
    Long startDateL = Long.parseLong(itaTimestampFormat.format(startDate));
    Long endDateL = Long.parseLong(itaTimestampFormat.format(endDate));
    IDescriptor modDesc = IBVRIDescriptor.ModificationDate.getDescriptor(session);
    // count once before to determine initial capacities for hash map/set
    IBVRIArchiveClass SDP_ARCHIVECLASS = getMasterDataPropertyBag().getSDP_ARCHIVECLASS();
    qe = SDP_ARCHIVECLASS.getQueryExpression(session);
    qp = session.getDocumentServer().getClassFactory()
            .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
    qp.setExpression(qe);
    qp.setHitLimitThreshold(0);
    qp.setHitLimit(0);
    int nrOfHitsTotal = session.getDocumentServer().queryCount(session, qp, "*");
    int initialCapacity = (int) (nrOfHitsTotal / 0.75 + 1);
    // MD sets; and objects already done (here: document ID)
    HashSet<String> objDone = new HashSet<>(initialCapacity);
    HashMap<String, IMasterDataSet> objRes = new HashMap<>(initialCapacity);
    qp.close();
    // do queries until hit count is smaller than 10,000
    // use modification date
    boolean keepGoing = true;
    while (keepGoing) {
        // construct query expression
        // - basic part: modification date & class type
        // a. doc. class type
        qe = SDP_ARCHIVECLASS.getQueryExpression(session);
        // b. ID
        qe = SearchUtil.appendQueryExpressionWithANDoperator(session, qe,
                new PlainExpression(modDesc.getQueryLiteral() + " BETWEEN " + startDateL + " AND " + endDateL));
        // 2. query parameter: set database; set expression
        qp = session.getDocumentServer().getClassFactory()
                .getQueryParameterInstance(session, new String[] {SDP_ARCHIVECLASS.getDatabaseName(session)}, null, null);
        qp.setExpression(qe);
        // order by modification date; hitlimit = 0 -> no hit limit, but the usual 10,000 max
        qp.setOrderByExpression(session.getDocumentServer().getClassFactory().getOrderByExpressionInstance(modDesc, true));
        qp.setHitLimitThreshold(0);
        qp.setHitLimit(0);
        // do not sort by modification date
        qp.setHints("+NoDefaultOrderBy");
        keepGoing = false;
        IInformationObject[] hits = null;
        IDocumentHitList hitList = null;
        hitList = session.getDocumentServer().query(qp, session);
        IDocument doc;
        if (hitList.getTotalHitCount() > 0) {
            hits = hitList.getInformationObjects();
            for (IInformationObject hit : hits) {
                String objID = hit.getID();
                if (!objDone.contains(objID)) {
                    // do something with this object and the class
                    // here: construct a new SDP subclass object and give it back via interface
                    doc = (IDocument) hit;
                    IMasterDataSet mdSet;
                    try {
                        mdSet = (IMasterDataSet) constructorForSDPbaseClass.newInstance(session, doc);
                    } catch (Exception e) {
                        // cause for this
                        String cause = (e.getCause() != null) ? e.getCause().toString() : MasterDataException.ERRMSG_PART_UNKNOWN;
                        throw new MasterDataException(MasterDataException.ERRMSG_NOINSTANCE_POSSIBLE, this.getClass().getSimpleName(), e.toString(), cause);
                    }
                    objRes.put(mdSet.getID(), mdSet);
                    objDone.add(objID);
                }
            }
            doc = (IDocument) hits[hits.length - 1];
            Date lastModDate = ((IDateValue) doc.getDescriptor(modDesc).getValues()[0]).getValue();
            startDateL = Long.parseLong(itaTimestampFormat.format(lastModDate));
            keepGoing = (hits.length >= 10000 || hitList.isResultSetTruncated());
        }
        qp.close();
    }
    return objRes;
}
Loading 120,000 rows (and more) each time will not scale well, and your solution may stop working as the table grows. Instead, let the database server handle the problem.
Your table needs a primary key or unique key based on the columns of the records. Iterate through the 10,000 records, performing a JDBC SQL update to modify all field values, with a where clause that exactly matches the primary/unique key.
update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?; // ... AND PKCOL2 = ? ...
This modifies an existing row or does nothing at all, and JDBC executeUpdate() will return 0 or 1, the number of rows changed. If the number of rows changed is zero, you have detected a new record that does not exist yet, so perform an insert for that new record only.
insert into BLAH (COL1, COL2, ... PKCOL) values (?,?, ..., ?);
You can decide whether to run 10,000 updates followed by however many inserts are needed, or to do update + optional insert as you go; remember that JDBC batch statements and turning auto-commit off may help speed things up, as in the sketch below.
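As a rough sketch of that loop in plain JDBC: conn, incomingRecords, and the Record getters are assumed here, and BLAH/COL1/COL2/PKCOL are the placeholder names from above.
// Update-then-insert pass with JDBC batching (table/column names are placeholders;
// Record is a hypothetical holder for one incoming row).
conn.setAutoCommit(false); // commit once at the end instead of per statement
try (PreparedStatement update = conn.prepareStatement(
            "update BLAH set COL1 = ?, COL2 = ? where PKCOL = ?");
     PreparedStatement insert = conn.prepareStatement(
            "insert into BLAH (COL1, COL2, PKCOL) values (?, ?, ?)")) {
    for (Record r : incomingRecords) {
        update.setString(1, r.getCol1());
        update.setString(2, r.getCol2());
        update.setString(3, r.getPk());
        if (update.executeUpdate() == 0) { // 0 rows changed -> record is new
            insert.setString(1, r.getCol1());
            insert.setString(2, r.getCol2());
            insert.setString(3, r.getPk());
            insert.addBatch();             // collect inserts, send them in one batch
        }
    }
    insert.executeBatch();
    conn.commit();
}
Compared to preloading the whole table into a HashMap, this keeps memory usage flat and lets the database's primary-key index do the existence check.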

Java Mongo DB Unable to get valid next id

I am trying to insert an item into MongoDB using the Java MongoDB driver. Before inserting, I try to get the nextId, but I am not sure why I always get nextId as 4. I am using the method below to get the nextId before inserting any item into Mongo.
private Long getNextIdValue(DBCollection dbCollection) {
    Long nextSequenceNumber = 1L;
    DBObject query = new BasicDBObject();
    query.put("id", -1);
    DBCursor cursor = dbCollection.find().sort(query).limit(1);
    while (cursor.hasNext()) {
        DBObject itemDBObj = cursor.next();
        nextSequenceNumber = new Long(itemDBObj.get("id").toString()) + 1;
    }
    return nextSequenceNumber;
}
I have 13 records in total in my MongoDB collection. What am I doing wrong here?
Please don't do that. You don't need to create an id-management problem for yourself: the driver already handles this in the best way. Just use the right type and annotation for the field:
@Id
@ObjectId
private String id;
Then write a generic method to insert all entities:
public T create(T entity) throws MongoException, IOException {
    WriteResult<? extends Object, String> result = jacksonDB.insert(entity);
    return (T) result.getSavedObject();
}
This will create a time-based, indexed hash for the ids, which is much more powerful than getting the "next id".
https://www.tutorialspoint.com/mongodb/mongodb_objectid.htm
Also: how can you perform arithmetic operations like +1 on a String?
nextSequenceNumber = new Long(itemDBObj.get("id").toString()) + 1;
Try creating a sequence collection like this:
{"id":"MySequence","sequence":1}
Then use an update to increment the sequence:
// query for the sequence document
Query query = new Query(new Criteria().where("id").is("MySequence"));
// increment the sequence by 1
Update update = new Update();
update.inc("sequence", 1);
FindAndModifyOptions findAndModifyOptions = new FindAndModifyOptions();
findAndModifyOptions.returnNew(true);
SequenceCollection sequenceCollection = mongoOperations.findAndModify(query, update, findAndModifyOptions, SequenceCollection.class);
return sequenceCollection.getSequence();
I found a workaround using db.collection.count(): I simply take the total count and increment it by 1 to assign an id to my object.
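That workaround is a one-liner with the legacy driver; note, though, that it is not atomic, so it can hand out duplicate ids under concurrent inserts, and deletions can make it reuse old ids. The findAndModify counter above is the atomic option.
// next id from the current document count (dbCollection as in the question)
// not atomic: two concurrent inserts can compute the same value
long nextId = dbCollection.count() + 1;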

Get entity group count always return 0

Following the official GAE doc, I tried to test it in my local dev environment (unit test), but unfortunately the entity group count always returns 0:
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
MemcacheService memcacheService = MemcacheServiceFactory.getMemcacheService();
Entity entity1 = new Entity("Simple");
Key key1 = ds.put(entity1);
Key entityGroupKey = Entities.createEntityGroupKey(key1);
//should print 1, but 0
showEntityGroupCount(ds, memcacheService, entityGroupKey);
Entity entity2 = new Entity("Simple", key1);
Key key2 = ds.put(entity2);
//should print 2, but still 0
showEntityGroupCount(ds, memcacheService, entityGroupKey);
Below is the code, copied from the doc for quick reference:
// A simple class for tracking consistent entity group counts
class EntityGroupCount implements Serializable {
    long version; // Version of the entity group whose count we are tracking
    int count;

    EntityGroupCount(long version, int count) {
        this.version = version;
        this.count = count;
    }
}

// Display count of entities in an entity group, with consistent caching
void showEntityGroupCount(DatastoreService ds, MemcacheService cache, PrintWriter writer,
        Key entityGroupKey) {
    EntityGroupCount egCount = (EntityGroupCount) cache.get(entityGroupKey);
    if (egCount != null && egCount.version == getEntityGroupVersion(ds, null, entityGroupKey)) {
        // Cached value matched current entity group version, use that
        writer.println(egCount.count + " entities (cached)");
    } else {
        // Need to actually count entities. Using a transaction to get a consistent count
        // and entity group version.
        Transaction tx = ds.beginTransaction();
        PreparedQuery pq = ds.prepare(tx, new Query(entityGroupKey));
        int count = pq.countEntities(FetchOptions.Builder.withLimit(5000));
        cache.put(entityGroupKey,
                new EntityGroupCount(getEntityGroupVersion(ds, tx, entityGroupKey), count));
        tx.rollback();
        writer.println(count + " entities");
    }
}
Any ideas about this problem? Thanks in advance.
Entities.createEntityGroupKey() is being called twice as a result of method nesting. Change both occurrences of
showEntityGroupCount(ds, memcacheService, entityGroupKey);
to
showEntityGroupCount(ds, memcacheService, key1);
and the correct counts appear (in the development environment anyway).

Get previous version of entity in Hibernate Envers

I have an entity loaded by Hibernate (via EntityManager):
User u = em.load(User.class, id)
This class is audited by Hibernate Envers. How can I load the previous version of a User entity?
Here's another version that finds the previous revision relative to a "current" revision number, so it can be used even if the entity you're looking at isn't the latest revision. It also handles the case where there isn't a prior revision. (em is assumed to be a previously-populated EntityManager)
public static User getPreviousVersion(User user, int current_rev) {
    AuditReader reader = AuditReaderFactory.get(em);
    Number prior_revision = (Number) reader.createQuery()
            .forRevisionsOfEntity(User.class, false, true)
            .addProjection(AuditEntity.revisionNumber().max())
            .add(AuditEntity.id().eq(user.getId()))
            .add(AuditEntity.revisionNumber().lt(current_rev))
            .getSingleResult();
    if (prior_revision != null)
        return (User) reader.find(User.class, user.getId(), prior_revision);
    else
        return null;
}
This can be generalized to:
public static <T> T getPreviousVersion(T entity, int current_rev) {
    AuditReader reader = AuditReaderFactory.get(JPA.em());
    Number prior_revision = (Number) reader.createQuery()
            .forRevisionsOfEntity(entity.getClass(), false, true)
            .addProjection(AuditEntity.revisionNumber().max())
            .add(AuditEntity.id().eq(((Model) entity).id))
            .add(AuditEntity.revisionNumber().lt(current_rev))
            .getSingleResult();
    if (prior_revision != null)
        return (T) reader.find(entity.getClass(), ((Model) entity).id, prior_revision);
    else
        return null;
}
The only tricky bit with this generalization is getting the entity's id. Because I'm using the Play! framework, I can exploit the fact that all entities are Models and use ((Model) entity).id to get the id, but you'll have to adjust this to suit your environment.
Maybe this, then (from the AuditReader docs):
AuditReader reader = AuditReaderFactory.get(entityManager);
User user_rev1 = reader.find(User.class, user.getId(), 1);
List<Number> revNumbers = reader.getRevisions(User.class, user_rev1.getId());
User user_previous = reader.find(User.class, user_rev1.getId(),
        revNumbers.get(revNumbers.size() - 1));
(I'm very new to this, not sure if I have all the syntax right, maybe the size()-1 should be size()-2?)
I think it would be this:
final AuditReader reader = AuditReaderFactory.get(entityManagerOrSession);
// This could probably be declared as Long instead of Object
final Object pk = userCurrent.getId();
final List<Number> userRevisions = reader.getRevisions(User.class, pk);
final int revisionCount = userRevisions.size();
final Number previousRevision = userRevisions.get(revisionCount - 2);
final User userPrevious = reader.find(User.class, pk, previousRevision);
Building off of the excellent approach of @brad-mace, I have made the following changes:
You should pass in your entity class and id instead of hardcoding them and assuming the Model.
Don't hardcode your EntityManager.
There is no point setting selectDeleted, because a deleted record can never be returned as the previous revision.
Calling getSingleResult() will throw an exception if no result, or more than one result, is found, so either call getResultList() or catch the exception (this solution calls getResultList() with maxResults = 1).
Get the revision, type, and entity in one transaction (remove the projection, use orderBy and maxResults, and fetch the resulting Object[3]).
So here's another solution:
public static <T> T getPreviousRevision(EntityManager entityManager, Class<T> entityClass, Object entityId, int currentRev) {
    AuditReader reader = AuditReaderFactory.get(entityManager);
    List<Object[]> priorRevisions = (List<Object[]>) reader.createQuery()
            .forRevisionsOfEntity(entityClass, false, false)
            .add(AuditEntity.id().eq(entityId))
            .add(AuditEntity.revisionNumber().lt(currentRev))
            .addOrder(AuditEntity.revisionNumber().desc())
            .setMaxResults(1)
            .getResultList();
    if (priorRevisions.size() == 0) {
        return null;
    }
    // The list contains a single Object[] with entity, revinfo, and type
    return (T) priorRevisions.get(0)[0];
}
From the docs:
AuditReader reader = AuditReaderFactory.get(entityManager);
User user_rev1 = reader.find(User.class, user.getId(), 1);
