documents not removed using mongo-jackson - java

I'm chasing a severe bug in our application, and after some digging I boiled it down to this example. Using the Jackson collection, I'm not able to remove documents. It usually works for a single document that I have just inserted. In the following example I simply try to purge a unit-test collection.
import java.net.UnknownHostException;

import net.vz.mongodb.jackson.DBCursor;
import net.vz.mongodb.jackson.JacksonDBCollection;

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.test.model.Account;

public class MongoTest {
    public static void main(String[] args) {
        try {
            Mongo mongo = new Mongo("localhost", 27017);
            DB db = mongo.getDB("orion-unittest");
            // get a single collection
            DBCollection collection = db.getCollection("account");
            JacksonDBCollection<Account, String> jacksonCollection =
                    JacksonDBCollection.wrap(collection, Account.class, String.class);

            // doesn't work
            DBCursor<Account> jacksonCursor = jacksonCollection.find();
            while (jacksonCursor.hasNext()) {
                jacksonCollection.remove(jacksonCursor.next());
            }
            System.out.println(jacksonCollection.getCount());

            // works!
            com.mongodb.DBCursor cursor = collection.find();
            while (cursor.hasNext()) {
                collection.remove(cursor.next());
            }
            System.out.println(collection.getCount());
        } catch (UnknownHostException e) {
            e.printStackTrace();
        } catch (MongoException e) {
            e.printStackTrace();
        }
    }
}
Console output:
6
0
The strange part is that when I monitor the MongoDB server activity, I can see the delete commands being issued. Am I missing something, or is this a bug in the Jackson mapper?
Here is the Account class - nothing much there. The id field is mapped to the _id key:
import net.vz.mongodb.jackson.Id;

public class Account {

    @Id
    String id;

    String nickname = null;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getNickname() {
        return nickname;
    }

    public void setNickname(String nickname) {
        this.nickname = nickname;
    }
}

I haven't used mongo-jackson-mapper, but as far as I saw in the source code it just calls the dbCollection.remove method itself. So I guess your problem is with the WriteConcern. Try this one:
jacksonCollection.remove(jacksonCursor.next(), WriteConcern.FSYNC_SAFE);
Also test the two delete loops separately (not one after the other), so you can tell whether it is really deleting or not (there can be a delete lag as well).

I found the solution by using @ObjectId instead of plain @Id. The problem was that before, my id was serialized to
{"_id": ObjectId("4f7551a74206b4f8e692bda3")}
and not to something like:
{"_id": "4f7551a74206b4f8e692bda3"}
... which made the query issued by remove fail to find the matching object. With the @ObjectId annotation the serialization now matches what is stored.
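For reference, a minimal sketch of how the annotated field might look; in my reading of mongo-jackson-mapper, @ObjectId is applied alongside @Id on a String field so the value is stored and queried as a BSON ObjectId:

import net.vz.mongodb.jackson.Id;
import net.vz.mongodb.jackson.ObjectId;

public class Account {

    @Id
    @ObjectId // serialize the String id as {"_id": ObjectId("...")}
    String id;

    // getters and setters as before
}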

Related

CosmosDB SQL API Java SDK v.4 ReadAllItems method does not return results

I encountered some obstacles getting results from the Cosmos DB SQL API using these methods:
.readAllItems(partitionKey, classType)
.readAllItems(partitionKey, options, classType)
The above methods work fine when I have a flat class structure, i.e. no nested classes inside my model class. Below is an example of a class which does not work:
public class Root {

    private String id = "";
    private Subclass subclass = new Subclass();

    public Root() {}

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public Subclass getSubclass() {
        return subclass;
    }

    public void setSubclass(Subclass subclass) {
        this.subclass = subclass;
    }
}
My nested class just contains the partition key, for simplicity.
public class Subclass {

    private String pk = "";

    public String getPk() {
        return pk;
    }

    public void setPk(String pk) {
        this.pk = pk;
    }
}
With such a structure I can easily create items inside the container with proper partition key values, but I am not able to load them again using just the partition key value. Here is the code snippet for reading them from Cosmos DB:
PartitionKey partitionKey = new PartitionKey("properValueWhichWorksWithNoNestedClassesInsideRoot");
CosmosPagedFlux<Root> pagedFluxResponse = container.readAllItems(partitionKey, Root.class);
try {
    double totalRequestCharge = pagedFluxResponse.byPage(preferredPageSize)
            .flatMap(fluxResponse -> {
                entities.addAll(fluxResponse.getResults());
                return Mono.just(fluxResponse.getRequestCharge());
            })
            .reduce(0.0, (x1, x2) -> x1 + x2)
            .block();
    logger.info("Query charge: {}", totalRequestCharge);
} catch (Exception err) {
    err.printStackTrace();
}
return entities;
I am not quite sure if I am doing something wrong or if it is a bug, because I use the same code with .queryItems(query, queryOptions, classType), which also returns a CosmosPagedFlux instead of .readAllItems, and it works perfectly fine.
I use this dependency:
<dependency>
    <groupId>com.azure</groupId>
    <artifactId>azure-cosmos</artifactId>
    <version>LATEST</version>
</dependency>
I also tried some specific versions instead of LATEST, and it does not work either.
I've tested on my side and found some interesting things.
readAllItems is implemented as a SQL query under the hood. When I debugged your situation, with the partition key in a nested property, the SQL it generates does not match documents whose partition key lives in a nested property.
So obviously it cannot get any results, and readAllItems returns nothing. You'd better use queryItems directly.
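As an alternative, here is a minimal sketch of querying the nested partition-key path directly with queryItems, assuming the Root/Subclass model above (the path c.subclass.pk follows from the getters):

import java.util.Arrays;

import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.PartitionKey;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedFlux;

// Scope the query to one partition and filter on the nested path explicitly.
CosmosQueryRequestOptions options = new CosmosQueryRequestOptions();
options.setPartitionKey(new PartitionKey("properValueWhichWorksWithNoNestedClassesInsideRoot"));

SqlQuerySpec spec = new SqlQuerySpec(
        "SELECT * FROM c WHERE c.subclass.pk = @pk",
        Arrays.asList(new SqlParameter("@pk", "properValueWhichWorksWithNoNestedClassesInsideRoot")));

CosmosPagedFlux<Root> results = container.queryItems(spec, options, Root.class);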

IllegalArgumentException - Key is incomplete - Objectify

I'm experiencing a bug which seems to be related to the memcache.
java.lang.IllegalArgumentException: Key is incomplete.
at com.google.appengine.api.datastore.KeyFactory.keyToString(KeyFactory.java:164)
at com.googlecode.objectify.cache.KeyMemcacheService.stringify(KeyMemcacheService.java:62)
at com.googlecode.objectify.cache.KeyMemcacheService.putAll(KeyMemcacheService.java:91)
at com.googlecode.objectify.cache.EntityMemcache.empty(EntityMemcache.java:319)
at com.googlecode.objectify.cache.CachingAsyncDatastoreService$5.trigger(CachingAsyncDatastoreService.java:445)
at com.googlecode.objectify.cache.TriggerFuture.isDone(TriggerFuture.java:89)
at com.googlecode.objectify.cache.TriggerFuture.get(TriggerFuture.java:104)
at com.googlecode.objectify.impl.ResultAdapter.now(ResultAdapter.java:34)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:22)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:10)
at com.googlecode.objectify.util.ResultTranslator.nowUncached(ResultTranslator.java:21)
at com.googlecode.objectify.util.ResultCache.now(ResultCache.java:30)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:22)
at com.googlecode.objectify.util.ResultWrapper.translate(ResultWrapper.java:10)
at com.googlecode.objectify.util.ResultTranslator.nowUncached(ResultTranslator.java:21)
at com.googlecode.objectify.util.ResultCache.now(ResultCache.java:30)
The objects I'm trying to persist extend the following class, and have
@Parent Key someclass;
public abstract class AbstractVO<T> implements iVO<T> {

    private static final Logger log = Logger.getLogger(AbstractVO.class.getName());

    @Id
    private Long id;

    @Index
    private Date lastModified;

    public Long getId() {
        return id;
    }

    public Date getLastModified() {
        return lastModified;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public void setLastModified(Date lastModified) {
        this.lastModified = lastModified;
    }

    public Key<?> getKey() {
        return Key.create(this);
    }

    @OnSave
    public void onSaveFunction() {
        setLastModified(new Date());
    }

    @SuppressWarnings("unchecked")
    public T save() {
        try {
            ofy().save().entity(this).now();
        } catch (IllegalArgumentException e) {
            log.log(Level.SEVERE, "boom, key was incomplete", e);
        }
        return (T) this;
    }

    public Result<Void> delete() {
        return ofy().delete().entity(this);
    }

    @SuppressWarnings("unchecked")
    public T refresh() {
        if (getId() != null) {
            return (T) ofy().load().entity(this).now();
        } else {
            return (T) this;
        }
    }
}
Logs show that the data required to save the entities is what you'd expect it to be:
parent id:5813790675304448
parent key:ag5zfnRlY2gtZXNzZW5jZXJZCxIHQ29tcGFueRiAgIDAyK2fCAwLEghDYW1wYWlnbhiAgICA0pCXCwwLEgpBY3Rpb25UeXBlGICAgICS5-gJDAsSDFNlbGxlckFjdGlvbhiAgICAyvOpCgw
entity id: null
Has anyone experienced this problem before, and how could I resolve it? I've written test cases to attempt to reproduce it on my dev servers, but they all pass. It only appears to be a problem in production.
Edit:
I have removed the cache on the affected entities, which resulted in saving the entity taking 10 seconds (I'm guessing this is the timeout?) and high contention making it blow up.
I think I have identified the problem: it's simply contention. I have rewritten much of my application to verify - I'm in the middle of the migration now and will let you know.
This was indeed the problem. My resolution was to remove the entity from the hierarchy and add the parents as indexed fields.
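For illustration, a minimal sketch of that refactoring, with hypothetical Company and Campaign classes (names guessed from the logged parent key): instead of living in an entity group via @Parent, the entity keeps plain indexed Key fields that can still be filtered on.

import com.googlecode.objectify.Key;
import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Index;

@Entity
public class SellerAction extends AbstractVO<SellerAction> {
    // Before: @Parent Key<Campaign> campaign; -- every save contended with
    // other writes in the same entity group.
    // After: the former parents become indexed reference fields; query with
    // ofy().load().type(SellerAction.class).filter("campaign", campaignKey)
    @Index private Key<Company> company;
    @Index private Key<Campaign> campaign;
}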

Having trouble with BIRT 4.3 new POJO DataSource

I am following the example at http://www.eclipse.org/birt/phoenix/project/notable4.3.php#jump_3
and I can't seem to get it to work properly. At the step where you define the new data set (New BIRT POJO Data Set), I can't find the 'POJO Data Set Class Name': the matching widget remains empty. I tried editing the rptdesign in the source tab, trying all kinds of variations (with/without the package name); nothing did it. Has anybody had success with this new feature of BIRT?
OK, my bad. It would have been easier if we had to implement an interface, rather than trying to deduce how BIRT reads a custom POJO data set.
In the example at http://www.eclipse.org/birt/phoenix/project/notable4.3.php#jump_3 everything worked as described except for the class CustomerDataSet. Here is the implementation of CustomerDataSet that worked for me.
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

public class CustomerDataSet {

    public Iterator<Customer> itr;

    public List<Customer> getCustomers() {
        List<Customer> customers = new ArrayList<Customer>();
        Customer c = new Customer(103);
        c.setCity("City1");
        c.setCountry("Country1");
        c.setCreditLimit(100);
        c.setName("aName1");
        c.setState("state1");
        customers.add(c);

        c = new Customer(104);
        c.setCity("City2");
        c.setCountry("Country2");
        c.setCreditLimit(200);
        c.setName("aName2");
        c.setState("aStat2");
        customers.add(c);

        c = new Customer(105);
        c.setCity("City3");
        c.setCountry("Country3");
        c.setCreditLimit(300);
        c.setName("aName3");
        c.setState("aStat3");
        customers.add(c);

        return customers;
    }

    // called before iteration starts
    public void open(Object obj, Map<String, Object> map) {
    }

    // returns the next row, or null when the data is exhausted
    public Object next() {
        if (itr == null)
            itr = getCustomers().iterator();
        if (itr.hasNext())
            return itr.next();
        return null;
    }

    // called when the engine is done with the data set
    public void close() {
    }
}
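In other words, the contract I deduced is: BIRT instantiates the data set class via its public no-arg constructor, calls open(), then calls next() repeatedly until it returns null, and finally calls close(). That is deduced from experimentation, not documented behavior; an explicit interface would have made it obvious.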

Annotation reflection (using getAnnotation) does not work

I have the following code to check whether an entity in my model has a nullable=false (or similar) annotation on a field.
import javax.persistence.Column;
import .....

private boolean isRequired(Item item, Object propertyId) {
    Class<?> property = getPropertyClass(item, propertyId);
    final JoinColumn joinAnnotation = property.getAnnotation(JoinColumn.class);
    if (null != joinAnnotation) {
        return !joinAnnotation.nullable();
    }
    final Column columnAnnotation = property.getAnnotation(Column.class);
    if (null != columnAnnotation) {
        return !columnAnnotation.nullable();
    }
    ....
    return false;
}
Here's a snippet from my model.
import javax.persistence.*;
import .....

@Entity
@Table(name="m_contact_details")
public class MContactDetail extends AbstractMasterEntity implements Serializable {

    @Column(length=60, nullable=false)
    private String address1;
For those people unfamiliar with the @Column annotation, here's its header:

@Target({METHOD, FIELD})
@Retention(RUNTIME)
public @interface Column {
I'd expect isRequired to return true every now and then, but it never does.
I've already done a mvn clean and mvn install on my project, but that does not help.
Q1: What am I doing wrong?
Q2: is there a cleaner way to code isRequired (perhaps making better use of generics)?
property represents a class (it's a Class<?>), but @Column and @JoinColumn can only annotate fields and methods.
Consequently you will never find these annotations on property.
A slightly modified version of your code that prints out whether the email property of the Employee entity is required:
import java.lang.reflect.Field;

public static void main(String[] args) throws NoSuchFieldException {
    System.out.println(isRequired(Employee.class, "email"));
}

private static boolean isRequired(Class<?> entity, String propertyName) throws NoSuchFieldException {
    Field property = entity.getDeclaredField(propertyName);
    final JoinColumn joinAnnotation = property.getAnnotation(JoinColumn.class);
    if (null != joinAnnotation) {
        return !joinAnnotation.nullable();
    }
    final Column columnAnnotation = property.getAnnotation(Column.class);
    if (null != columnAnnotation) {
        return !columnAnnotation.nullable();
    }
    return false;
}
Note that this is a half-baked solution, because JPA annotations can be either on a field or on a method. Also be aware of the difference between reflection methods like getField()/getDeclaredField(): getField() returns only public fields, including inherited ones, while getDeclaredField() returns any field declared directly on the class, ignoring what's inherited from its parents.
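To illustrate the difference with the Employee entity used above (somePublicField is a hypothetical public field inherited from a superclass):

// getField: public fields only, but searches superclasses too
Field f1 = Employee.class.getField("somePublicField");
// getDeclaredField: any visibility, but only fields declared on Employee itself
Field f2 = Employee.class.getDeclaredField("email");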
The following code works:

@SuppressWarnings("rawtypes")
private boolean isRequired(BeanItem item, Object propertyId) throws SecurityException {
    String fieldname = propertyId.toString();
    try {
        java.lang.reflect.Field field = item.getBean().getClass().getDeclaredField(fieldname);
        final JoinColumn joinAnnotation = field.getAnnotation(JoinColumn.class);
        if (null != joinAnnotation) {
            return !joinAnnotation.nullable();
        }
        final Column columnAnnotation = field.getAnnotation(Column.class);
        if (null != columnAnnotation) {
            return !columnAnnotation.nullable();
        }
    } catch (NoSuchFieldException e) {
        // not a problem, no need to log this event
        return false;
    }
    return false; // no relevant annotation found
}

BlazeDS custom serialization causes RangeError

I am using BlazeDS to communicate between Java and Flash/Flex, and everything works fine, except that a Java null Integer becomes 0 on the Flex side.
To handle the problem of transferring a Java null Integer to a Flash/Flex int, I have implemented custom serialization, which works on the Java side and uses negative values to express null.
After implementing that, I get a
RangeError: Error #2006: Der angegebene Index liegt außerhalb des zulässigen Bereichs.
(in English: the specified index is out of range)
at ObjectInput/readObject()
at mx.collections::ArrayList/readExternal()[E:\dev\4.x\frameworks\projects\framework\src\mx\collections\ArrayList.as:586]
at mx.collections::ArrayCollection/readExternal()[E:\dev\4.x\frameworks\projects\framework\src\mx\collections\ArrayCollection.as:147]
at ObjectInput/readObject()
at mx.messaging.messages::AbstractMessage/readExternal()[E:\dev\4.x\frameworks\projects\rpc\src\mx\messaging\messages\AbstractMessage.as:486]
at mx.messaging.messages::AsyncMessage/readExternal()[E:\dev\4.x\frameworks\projects\rpc\src\mx\messaging\messages\AsyncMessage.as:170]
at mx.messaging.messages::AcknowledgeMessage/readExternal()[E:\dev\4.x\frameworks\projects\rpc\src\mx\messaging\messages\AcknowledgeMessage.as:95]
The exception occurs on the Flex side while deserializing the Java result, which is a list of complex objects containing this special class with the custom serialization. Because there was no such problem until I added the custom serialization, I guess it must be related, but I have no clue what triggers the exception.
This is the code of the object with the custom serialization:
package crux.domain.dto;

import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectOutput;
import java.io.Serializable;

public class NullAbleID implements Serializable, Externalizable {

    private static final long serialVersionUID = 788620879056753289L;

    private Integer id;

    public NullAbleID() {
        super();
        this.id = null;
    }

    public NullAbleID(final Integer id) {
        this.id = id;
    }

    // getter and setter for id, hashCode and equals omitted

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(this.nullToNegative(this.id));
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        this.id = this.negativeToNull(in.readInt());
    }

    private int nullToNegative(Integer id) {
        if (id == null) {
            return -1;
        } else {
            return id.intValue();
        }
    }

    private Integer negativeToNull(int flashId) {
        if (flashId < 0) {
            return null;
        } else {
            return Integer.valueOf(flashId);
        }
    }
}
Flex: two classes, because we use Gas3 Granite Data Service code generation:
/**
 * Generated by Gas3 v2.1.0 (Granite Data Services).
 */
package crux.domain.dto {

    import flash.utils.IExternalizable;

    [Bindable]
    public class NullAbleIDBase {

        public function NullAbleIDBase() {}

        private var _id:Number;

        public function set id(value:Number):void {
            _id = value;
        }

        public function get id():Number {
            return _id;
        }
    }
}
Flex subclass with readExternal and writeExternal:
package crux.domain.dto {

    import flash.utils.IDataInput;
    import flash.utils.IDataOutput;
    import flash.utils.IExternalizable;

    [Bindable]
    [RemoteClass(alias="crux.domain.dto.NullAbleID")]
    public class NullAbleID extends NullAbleIDBase implements IExternalizable {

        public function readExternal(input:IDataInput):void {
            id = input.readInt();
        }

        public function writeExternal(output:IDataOutput):void {
            output.writeInt(id);
        }
    }
}
I have spent several hours on this problem, but I have no idea what is wrong.
Do you see the cause of the exception?
Not sure it's the cause of the problem, because I don't know BlazeDS, but the readExternal and writeExternal methods of your NullAbleID class are not symmetric: you write an object of type Integer in writeExternal, but you read an int in readExternal.
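A minimal sketch of making the Java side symmetric with the Flex side (which uses readInt/writeInt), keeping the negative-sentinel scheme from the question; write the primitive int instead of the boxed Integer:

@Override
public void writeExternal(ObjectOutput out) throws IOException {
    // writeInt, not writeObject: matches in.readInt() in readExternal
    // and input.readInt()/output.writeInt() on the Flex side
    out.writeInt(this.nullToNegative(this.id));
}

@Override
public void readExternal(ObjectInput in) throws IOException {
    this.id = this.negativeToNull(in.readInt());
}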
