Mongo ReflectionDBObject: map ALL of an embedded array's elements to a class - java

I use Mongo with native Java driver (no 3rd party library/ORM). I have this:
public class Release extends ReflectionDBObject {
//other fields omitted
private List<ReleaseDetailsByTerritory> releaseDetailsByTerritory = new ArrayList<ReleaseDetailsByTerritory>();
}
public class ReleaseDetailsByTerritory extends ReflectionDBObject { //...}
If I want to retrieve a "Release" entry having two "ReleaseDetailsByTerritory" entries, and have them auto-instantiated into a Release instance containing a List of two ReleaseDetailsByTerritory instances, I have to do this:
releaseColl.setObjectClass(Release.class);
releaseColl.setInternalClass("ReleaseDetailsByTerritory.0", ReleaseDetailsByTerritory.class);
releaseColl.setInternalClass("ReleaseDetailsByTerritory.1", ReleaseDetailsByTerritory.class);
Release r = (Release) releaseColl.findOne();
i.e. I need to specifically map each potential element of the embedded array to the corresponding class.
Is there a way to tell the Mongo driver that I want any and all elements of an embedded array to be mapped to a certain class? Something like:
collection.setInternalClass("ReleaseDetailsByTerritory.*", ReleaseDetailsByTerritory.class);
?
Thanks. And please don't say "use the Spring MongoDB module or Morphia". I want to know if this is achievable with the native Mongo Java driver.

Looking at the source code, I don't think this is possible. There is also no obvious way to build convenience functionality for what you need: having to call setInternalClass for each array element is hardly an option, given the memory overhead this would incur for large arrays.
You may want to consider writing your own "Document" class that does what you need, without having to go to a full mapping solution such as Morphia (which is actually pretty elegant, more so than Spring at least).
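For example, a minimal, untested sketch of such a manual conversion (assuming the classic com.mongodb driver types; whether get() returns the raw BasicDBList here depends on ReflectionDBObject's reflection rules, so treat this as a starting point only):
releaseColl.setObjectClass(Release.class);
Release r = (Release) releaseColl.findOne();
// Pull the raw embedded array and convert each element by hand,
// instead of registering an internal class per array index.
BasicDBList rawDetails = (BasicDBList) r.get("ReleaseDetailsByTerritory");
List<ReleaseDetailsByTerritory> details = new ArrayList<ReleaseDetailsByTerritory>();
for (Object element : rawDetails) {
    ReleaseDetailsByTerritory detail = new ReleaseDetailsByTerritory();
    detail.putAll((DBObject) element); // ReflectionDBObject maps keys onto bean properties
    details.add(detail);
}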
You could also consider opening a JIRA issue at jira.mongodb.org and request this feature.

Related

How to convert ArrayList to ExampleSet in Rapidminer?

I'm creating an extension for RapidMiner using Java. I have an array of elements of type Example and I need to convert it to a dataset of type ExampleSet.
Rapidminer's ExampleSet definition looks like this:
public interface ExampleSet extends ResultObject, Cloneable, Iterable<Example>
I need to pick certain elements from the dataset and send them back, still as an ExampleSet; however, casting does not work, and I can't simply create a new ExampleSet object since it's an interface.
private ExampleSet generateSet(ExampleSet dataset){
List<Example> list = new ArrayList<Example>();
// pick elements from sent dataset and add them to newly created list above
return (ExampleSet)list;
}
You will need more than a simple explicit cast.
In RapidMiner, an ExampleSet is not just a collection of Example. It contains more complex information and logic.
Therefore, you need another approach to work with ExampleSets. As you already said, ExampleSet is just the interface, which leads us to the choice of the right subtype.
For starters (since version 7.3), simply use one of the ExampleSets class's factory methods.
You also need to define each Attribute this ExampleSet is going to have, namely the columns.
Below, I create one with a single Attribute called "First":
Attribute attributeFirst = AttributeFactory.createAttribute("First", Ontology.POLYNOMINAL);
ExampleSetBuilder builder = ExampleSets.from(attributeFirst);
builder.addDataRow(example.getDataRow());
ExampleSet result = builder.build();
You can also get the Attributes in a more generic way using:
Attribute[] attributes = example.getAttributes().createRegularAttributeArray();
ExampleSetBuilder builder = ExampleSets.from(attributes);
...
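Putting the pieces together, a rough, untested sketch of the original generateSet method using the builder API (RapidMiner 7.3+; shouldKeep is a hypothetical predicate standing in for your selection logic):
private ExampleSet generateSet(ExampleSet dataset) {
    // Reuse the regular attributes (columns) of the incoming set.
    Attribute[] attributes = dataset.getAttributes().createRegularAttributeArray();
    ExampleSetBuilder builder = ExampleSets.from(attributes);
    for (Example example : dataset) {
        if (shouldKeep(example)) { // hypothetical selection criterion
            builder.addDataRow(example.getDataRow());
        }
    }
    return builder.build();
}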
If you have many cases where you have to create or alter an ExampleSet, I encourage you to write your own ExampleSetBuilder, since the original implementation has several drawbacks.
You can also try searching for other extensions that may already meet your requirements, so you don't need to create your own (believe me, it's not headache-free).
Note that the ExampleSet class is being deprecated (but is still perfectly fine to use).
You might want to consider switching over to the newer data set API called Belt (https://github.com/rapidminer/belt). It's faster and more intuitive to use. It's still actively developed, so feedback is also welcome.
Also, if you have more specific questions, feel free to drop by the RapidMiner community (https://community.rapidminer.com/), where many of the developers are also very active.

ATG JavaBean over RepositoryItem

I don't understand how to use RepositoryItem in ATG, or how to build customized logic on top of it.
Do I need to create a usual JavaBean over the RepositoryItem, or should I use it as is?
I will try to explain:
Logic on repositoryItem:
RepositoryItem store = getRepository().getItem(..);
String address = store.getPropertyValue(..);
Logic on JavaBean:
class StoreBean {
String address;
StoreBean(RepositoryItem store) {
address = store.getPropertyValue(..);
}
}
Then I can use StoreBean however I want, e.g. to access its fields (and lazy-load them, for example).
What will be best practices in ATG?
It is a matter of preference.
What you do not get with RepositoryItem objects is strong type checking. You must either make assumptions about the type of RepositoryItem you are working with, or do manual checks in your code (see the example below). Additionally, since RepositoryItem properties are stored as metadata, you have to know 1) the actual names of the properties from the XML repository descriptor, and 2) the types, which requires casting (example: String firstName = (String) item.getPropertyValue("firstName");). Here is an example of a validation that ensures a RepositoryItem object is of type "sku":
RepositoryItemDescriptor skuItemDescriptor = getCatalogTools().getCatalog().getItemDescriptor(getCatalogTools().getBaseSKUItemType());
// "item" is the RepositoryItem being validated; getItemDescriptor() may throw RepositoryException
if (!RepositoryUtils.isTypeOfItemDesc(item.getItemDescriptor(), skuItemDescriptor)) {
    throw new IllegalArgumentException("RepositoryItem must be of type " + getCatalogTools().getBaseSKUItemType());
}
If you take the approach of not using "JavaBeans", you increase the risk of runtime errors in your application. My suggestion is to strike a healthy balance between using RepositoryItem objects and wrapper objects. For critical items you plan to use across a large part of your code base, I suggest a wrapper object.
I suggest that if you create wrapper objects, then for consistency you follow the same design pattern Oracle Commerce uses. For example, the "order" item is wrapped by OrderImpl, which implements the ChangedProperties interface.
public class OrderImpl
extends CommerceIdentifierImpl
implements Order, ChangedProperties
http://docs.oracle.com/cd/E52191_03/Platform.11-1/apidoc/atg/commerce/order/OrderImpl.html
ATG out-of-the-box repository implementations do not use JavaBeans for the most part. One big disadvantage of using JavaBeans and lazy-loading them into memory is that you lose many repository caching features and increase your memory footprint. For instance, you will not be able to monitor your cache statistics or invalidate the cache periodically. You will also have instantiation overhead when a query returns a huge RepositoryItem result set.
Instead, you can also use DynamicBean, which lets you refer to repository properties similarly to Java beans, for instance Profile.city.
If you only want to wrap them so that developers don't accidentally parse them incorrectly, you can write a util class per repository for the various types of read/write operations and centralize your type safety.
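A minimal sketch of that idea (the "store" item type and its "address" property are assumptions for illustration):
public final class StoreItemUtils {
    private StoreItemUtils() { }

    // Centralizes the property name and the cast in one place.
    public static String getAddress(RepositoryItem store) {
        return (String) store.getPropertyValue("address");
    }

    public static void setAddress(MutableRepositoryItem store, String address) {
        store.setPropertyValue("address", address);
    }
}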

What is the best practice to return dynamic type from a REST API in SpringMVC

I have implemented some REST APIs with Spring MVC + Jackson + Hibernate.
All I needed to do was retrieve objects from the database and return them as a list; the conversion to JSON is implicit.
But there is one problem: what if I want to add some more information to those objects before returning them? For example, I am returning a list of "store" objects, but I want to add the name of the person who is attending each store right now.
Java does not have dynamic types (which is how I would solve this problem in C#). So how do we solve this problem in Java?
I thought about this and have come up with a few not-so-elegant solutions:
1. use the factory pattern: define another class that contains the name of that person.
2. convert the store objects to JSON objects (ObjectNode from Jackson), put a new attribute into the JSON objects, and return those.
3. use reflection to inject a new property into the store objects and return them; maybe the Spring MVC conversion will generate the JSON correctly?
Option 1 looks bad; it will end up with a lot of boilerplate classes that aren't really useful. Option 2 looks OK, but is this the best we can do with Spring MVC?
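For concreteness, option 2 would look roughly like this with Jackson (a sketch; Store, stores, and getAttendant stand in for my actual domain):
ObjectMapper mapper = new ObjectMapper();
List<ObjectNode> payload = new ArrayList<>();
for (Store store : stores) {
    ObjectNode node = mapper.valueToTree(store); // serialize the store as a mutable tree
    node.put("attendantName", getAttendant(store)); // graft on the extra attribute
    payload.add(node);
}
return payload;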
option 1
Actually, your JSON domain is different from your core domain. I would decouple them and create a separate domain for your JSON objects, as this is a separate concern and you don't want to mix them. This, however, might require a lot of 1-to-1 mapping. This is your option 1, with boilerplate. There are frameworks that help you with the boilerplate (such as Dozer or MapStruct), but you will always pay a performance penalty with frameworks that use generic reflection.
option 2, 3
If you really insist on hacking it in because it's only a few exceptions and not a common pattern, I would certainly not alter the JSON nodes or use reflection (your options 2 and 3). This is certainly not the way to do it in Java.
option 4 [hack]
What you could do is extend your core domain with new types that contain the extra information, and in a post-processing step replace the old objects with the new domain objects:
UnaryOperator<Store> toJsonStores = domainStore -> toJsonStore(domainStore);
list.replaceAll(toJsonStores);
where JSONStore extends the domain Store, and toJsonStore maps the domain Store to the JSONStore object by adding the person's name.
That way you preserve type safety and keep the codebase comprehensible. But if you have to do this in more than a few exceptional cases, you should change strategy.
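A minimal sketch of that hack (the Store copy constructor and the attendant lookup are assumptions for illustration):
public class JSONStore extends Store {
    private final String attendantName; // the extra, JSON-only property

    public JSONStore(Store store, String attendantName) {
        super(store); // assumes Store offers a copy constructor
        this.attendantName = attendantName;
    }

    public String getAttendantName() { // Jackson will serialize this getter too
        return attendantName;
    }
}

private JSONStore toJsonStore(Store domainStore) {
    return new JSONStore(domainStore, lookupAttendantName(domainStore)); // hypothetical lookup
}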
Are you looking for a REST service that returns a list of objects containing not just one type, but many types of objects? If so, have you tried making the return type of that service method List<Object>?
I recommend creating an abstract class BaseRestResponse that is extended by all the items in the list you want to return from your REST service method.
Then make the return type List<BaseRestResponse>.
BaseRestResponse should have all the common properties, and the customized object can have the extra property (the name) as you said.
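A sketch of that shape (the property names are assumptions):
public abstract class BaseRestResponse {
    private String id; // common properties shared by every item go here
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
}

public class StoreResponse extends BaseRestResponse {
    private String attendantName; // the customized, per-type property
    public String getAttendantName() { return attendantName; }
    public void setAttendantName(String attendantName) { this.attendantName = attendantName; }
}
The endpoint then declares List<BaseRestResponse> as its return type and can mix StoreResponse with other subtypes.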

Volatile data in Solr

I have an index of documents distributed over several shards and replicas. The size is ca. 40 million documents, and I expect it to grow.
Problem: users add information to these documents, and they change it quite frequently. They need it to be integrated into the search syntax, e.g. funny AND cool AND cat:interesting, where cat would be a volatile data set.
As far as I know, neither Solr nor Lucene supports true in-place updates, which means I would have to reindex the whole set of changed documents each time. Thus I need to connect the index to an external data source such as a relational database.
I did this in Lucene with the extendable query parser (http://lucene.apache.org/core/4_3_0/queryparser/index.html). The algorithm was pretty simple:
Preprocess the query by adding "_" to all external fields
Map these fields to classes
Each class extends org.apache.lucene.search.Filter and converts IDs to a bitset by overriding public DocIdSet getDocIdSet(AtomicReaderContext context, Bits acceptDocs) throws IOException:
ResultSet set = state.executeQuery();
OpenBitSet bitset = new OpenBitSet();
while (set.next()) {
    bitset.set(set.getInt("ID"));
}
return bitset;
Then, by extending org.apache.lucene.queryparser.ext.ParserExtension, I override parse like this:
public Query parse(ExtensionQuery eq) throws ParseException {
    String cat = eq.getRawQueryString();
    Filter filter = _cache.getFilter(cat);
    return new ConstantScoreQuery(filter);
}
Finally, extend org.apache.lucene.queryparser.ext.Extensions using its add method (as sketched below), and done.
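For reference, a rough sketch of that wiring (CatParserExtension names the ParserExtension subclass above; the analyzer and default field are assumptions):
Extensions extensions = new Extensions();
extensions.add("cat", new CatParserExtension(_cache)); // handles cat:<value> terms
ExtendableQueryParser parser =
        new ExtendableQueryParser(Version.LUCENE_43, "content", analyzer, extensions);
Query query = parser.parse("funny AND cool AND cat:interesting");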
But HOW do I do this in Solr?
I found a couple of suggestions:
Using an external file field (http://lucene.apache.org/solr/4_3_0/solr-core/org/apache/solr/schema/ExternalFileField.html)
NRT, near-real-time search (http://wiki.apache.org/solr/NearRealtimeSearch), which looks a little under construction to me
Any ideas how to do this in Solr? Maybe there are some code examples? Please consider also that I'm fairly new to Solr.
Thank you
The Solr 4.x releases all support atomic updates, which I believe may satisfy your needs.
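For illustration, an atomic update from SolrJ might look roughly like this (untested; the core URL and field names are assumptions, and the fields involved must be stored):
SolrServer server = new HttpSolrServer("http://localhost:8983/solr/collection1");
SolrInputDocument doc = new SolrInputDocument();
doc.addField("id", "doc-42");
// A map with a "set" key tells Solr to update just this field in place.
doc.addField("cat", Collections.singletonMap("set", "interesting"));
server.add(doc);
server.commit();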

Key-Value on top of Appengine

Although App Engine is already schema-less, you still need to define the entities that will be stored in the Datastore through the DataNucleus persistence layer. So I am thinking of a way to get around this: a layer that stores key-value pairs at runtime, instead of compile-time entities.
The way this is done with Redis is by creating keys like this:
private static final String USER_ID_FORMAT = "user:id:%s";
private static final String USER_NAME_FORMAT = "user:name:%s";
From the docs, the Redis types are: String, Linked list, Set, Sorted set. I am not sure if there are more.
As far as the GAE Datastore is concerned, a String "key" and a "value" would make up the entity that gets stored, like:
public class KeyValue {
private String key;
private Value value; // value can be a String, Linked-list, Set or Sorted set etc.
// Code omitted
}
The justification for this scheme is rooted in the RESTful access to the Datastore (provided by datanucleus-api-rest).
Using this REST API, to persist an object or entity:
POST http://datanucleus.appspot.com/dn/guestbook.Greeting
{"author":null,
"class":"guestbook.Greeting",
"content":"test insert",
"date":1239213923232}
The problem with this approach is that in order to persist an entity, the actual class needs to be defined at compile time; with the key-value store mechanism, by contrast, the call can be simplified:
POST http://datanucleus.appspot.com/dn/org.myframework.KeyValue
{"class":"org.myframework.KeyValue",
"key":"user:id:johnsmith;followers",
"value":"the_list"}
Passing a single string as the "value" is fairly easy; I can use a JSON array for a list, set, or sorted list. The real question is how to actually persist the different types of data passed into the interface. Should there be multiple KeyValue entities, each representing one of the basic types it supports: KeyValueString, KeyValueList, etc.?
It looks like you're using a JSON-based REST API, so why not just store the value as a JSON string?
You do not need to use the DataNucleus layer, or any of the other fine ORM layers (like Twig or Objectify). Those are optional, and are all based on the low-level API. If I interpret what you are saying correctly, perhaps it already has the functionality that you want. See: https://developers.google.com/appengine/docs/java/datastore/entities
DataNucleus is a specific framework that runs on top of GAE. You can, however, access the database at a lower, less structured, more key-value-like level: the low-level API. That's the lowest level you can access directly.
BTW, the low-level GAE Datastore internally runs on six global Google Megastore tables, which in turn are hosted on the Google Bigtable database system.
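To illustrate the low-level route, a minimal sketch (the kind and key mirror the Redis-style naming above; the JSON payload is an assumption):
// Store one key-value pair directly, with no ORM involved.
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
Entity kv = new Entity("KeyValue", "user:id:johnsmith;followers");
// Text holds long strings; the value here is serialized as a JSON array.
kv.setUnindexedProperty("value", new Text("[\"alice\",\"bob\"]"));
ds.put(kv);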
Saving JSON as a String works fine, but you will need ways to retrieve your objects other than by ID. That is, you need a way to index your data so it supports any kind of useful query.
