I'm using neo4j through java and I was wondering if there's a way to save some metadata with that node. I wanted to be able to have a node from the graph include more information, for instance to have each node have a dictionary associated with it.
edit - A dictionary was just an example, I want to be able to associate also class instances which contain as one of the fields a dictionary for example.
Unfortunately, there is no such functionality in Neo4j. Neo4j is a simple property graph: property values can only be primitives, strings, or arrays of those. But you can "emulate" such behaviour by using conventions in your application.
Special properties
You can specify in your application that all properties that start with __ are metadata. Then storing metadata is trivial:
try (Transaction tx = db.beginTx()) {
    Node node = db.createNode();
    node.setProperty("__version", "1.0.0");
    node.setProperty("__author", "Dmitry");
    tx.success();
}
JSON metadata
Another way is to store a JSON string in a __metadata property and keep all of your metadata inside that JSON.
Example:
ObjectMapper mapper = new ObjectMapper();

// create node and set metadata
try (Transaction tx = db.beginTx()) {
    Map<String, Object> metadata = new HashMap<>();
    metadata.put("version", "1.0.0");
    metadata.put("author", "Dmitry");

    Node node = db.createNode();
    node.setProperty("__metadata", mapper.writeValueAsString(metadata));
    tx.success();
}

// find node and get metadata
try (Transaction tx = db.beginTx()) {
    Node node = db.findNode(...);
    Map<String, Object> metadata = mapper.readValue(
            (String) node.getProperty("__metadata"), HashMap.class);
    tx.success();
}
Note: if you go with this option, you should create some sort of wrapper/helper around Node to minimize code duplication.
Note 2: the ObjectMapper should be created only once per application.
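A rough sketch of such a wrapper, assuming Jackson for the JSON handling (the class name MetadataHelper is made up for illustration; the __metadata property is just the convention from above):

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.neo4j.graphdb.Node;

import com.fasterxml.jackson.databind.ObjectMapper;

/** Hypothetical helper that hides the JSON (de)serialization of node metadata. */
public class MetadataHelper {
    private static final String METADATA_PROPERTY = "__metadata";
    // One ObjectMapper per application; it is thread-safe once configured.
    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void setMetadata(Node node, Map<String, Object> metadata) {
        try {
            node.setProperty(METADATA_PROPERTY, MAPPER.writeValueAsString(metadata));
        } catch (IOException e) {
            throw new RuntimeException("Could not serialize metadata", e);
        }
    }

    @SuppressWarnings("unchecked")
    public static Map<String, Object> getMetadata(Node node) {
        if (!node.hasProperty(METADATA_PROPERTY)) {
            return new HashMap<>();  // no metadata stored yet
        }
        try {
            return MAPPER.readValue((String) node.getProperty(METADATA_PROPERTY), HashMap.class);
        } catch (IOException e) {
            throw new RuntimeException("Could not deserialize metadata", e);
        }
    }
}
```

Callers then never touch the JSON string directly: MetadataHelper.setMetadata(node, map) and MetadataHelper.getMetadata(node) inside a transaction.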
In addition to the other answer you can easily create a separate node representing your class and holding class-level meta information.
Either connect all the nodes representing instances to the class node using a relationship (this might cause lock contention if a lot of instances are added concurrently), or use a naming convention:
Node for instances: (:Person{name:'Danny'})
Metanode for person: (:Meta{clazz:'Person', metaProp1: value1, ...})
So the label of the instance node is linked to the clazz property of the meta node.
I understand how the JCR API works and is used in Magnolia. I want to get the result as a JSON object. My Node object has a hierarchical structure (each subnode has type mgnl:category):
test_1
    test_a
    test_b
    test_c
        test_c1
    test_d
If I use
var session = context.getJCRSession("category");
Iterable<Node> categoryItems = NodeUtil.collectAllChildren(
        session.getNode(nodePath),
        new NodeTypePredicate("mgnl:category"));
List<String> result = new ArrayList<>();
for (Node node : categoryItems) {
    result.add(node.getName());
}
I get just a flat list of children like: [test_a, test_b, test_c, test_c1, test_d].
How can I check if a child has a subnode? Because I need [test_a, test_b, test_c: {test_c1}, test_d].
I think recursion will do here... but I need info about if a node has a subnode...
You can check whether a node has subnodes with the hasNodes() method, and retrieve them with getNodes(). More of the JCR Node API is documented here: https://developer.adobe.com/experience-manager/reference-materials/spec/jsr170/javadocs/jcr-2.0/javax/jcr/Node.html
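The recursion could look like the sketch below. It uses a minimal stand-in class so it runs without a repository; with real JCR you would replace the empty-children check with node.hasNodes() and the child loop with node.getNodes():

```java
import java.util.ArrayList;
import java.util.List;

public class TreeSketch {
    /** Minimal stand-in for javax.jcr.Node: just a name and child nodes. */
    static class FakeNode {
        final String name;
        final List<FakeNode> children = new ArrayList<>();
        FakeNode(String name) { this.name = name; }
        FakeNode add(String childName) {
            FakeNode child = new FakeNode(childName);
            children.add(child);
            return child;
        }
    }

    /** Renders "name" for leaves and "name: {child, ...}" for nodes with subnodes. */
    static String render(FakeNode node) {
        if (node.children.isEmpty()) {          // with JCR: if (!node.hasNodes())
            return node.name;
        }
        List<String> parts = new ArrayList<>();
        for (FakeNode child : node.children) {  // with JCR: node.getNodes()
            parts.add(render(child));
        }
        return node.name + ": {" + String.join(", ", parts) + "}";
    }

    public static void main(String[] args) {
        // Rebuild the sample tree from the question.
        FakeNode root = new FakeNode("test_1");
        root.add("test_a");
        root.add("test_b");
        root.add("test_c").add("test_c1");
        root.add("test_d");
        System.out.println(render(root));
        // prints: test_1: {test_a, test_b, test_c: {test_c1}, test_d}
    }
}
```

For a real JSON object you would build a Map or Jackson ObjectNode in the recursion instead of a string, but the traversal is the same.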
Thanks
I am using the Mongo Java Driver and I am trying to use filters with the collection.find() method. For example, when I have a key which is a Java object whose class contains certain fields:
Document document = (Document) collection.find(and(
        eq("m_seniority", key.getM_seniority()),
        eq("m_currency", key.getM_currency()),
        eq("m_redCode", key.getM_redCode()),
        eq("m_companyId", key.getM_companyId())
)).first();
I use the above command. But when I want to do that in bulk, I am passed a collection of keys (Collection keys), and then I can't access the fields of the individual objects as below:
List<Document> docs = (List<Document>) collection.find(and(
        eq("m_seniority", keys.getM_seniority()),
        eq("m_currency", keys.getM_currency()),
        eq("m_redCode", keys.getM_redCode()),
        eq("m_companyId", keys.getM_companyId())
)).into(new ArrayList<Document>());
Because the getters are on the individual objects, not on the collection, I can't call them on the collection. How do I do this?
To create an or query on all of the Collection keys:
List<Bson> keyFilters = new ArrayList<>();

// for each key create an 'and' filter on seniority, currency, redCode and companyId
for (Key key : keys) {
    keyFilters.add(Filters.and(
            Filters.eq("m_seniority", key.getM_seniority()),
            Filters.eq("m_currency", key.getM_currency()),
            Filters.eq("m_redCode", key.getM_redCode()),
            Filters.eq("m_companyId", key.getM_companyId())));
}

List<Document> docs = collection.find(
        // 'or' all of the individual key filters together
        Filters.or(keyFilters)
).into(new ArrayList<Document>());
I am using Hadoop MapReduce for processing an XML file, and I am storing the resulting JSON data directly in MongoDB. How can I ensure that only non-duplicate records are stored in the database before executing the BulkWriteOperation?
The duplicate criteria are based on product image and product name. I do not want to use a layer like Morphia, where indexes can be assigned to class members.
Here is my reducer class:
public class XMLReducer extends Reducer<Text, MapWritable, Text, NullWritable> {
    private static final Logger LOGGER = Logger.getLogger(XMLReducer.class);

    protected void reduce(Text key, Iterable<MapWritable> values, Context ctx) throws IOException, InterruptedException {
        LOGGER.info("reduce()------Start for key>" + key);
        Map<String, String> insertProductInfo = new HashMap<String, String>();
        try {
            MongoClient mongoClient = new MongoClient("localhost", 27017);
            DB db = mongoClient.getDB("test");
            BulkWriteOperation operation = db.getCollection("product").initializeOrderedBulkOperation();
            for (MapWritable entry : values) {
                for (Entry<Writable, Writable> extractProductInfo : entry.entrySet()) {
                    insertProductInfo.put(extractProductInfo.getKey().toString(), extractProductInfo.getValue().toString());
                }
                if (!insertProductInfo.isEmpty()) {
                    BasicDBObject basicDBObject = new BasicDBObject(insertProductInfo);
                    operation.insert(basicDBObject);
                }
            }
            // How can I check for duplicates before executing the bulk operation?
            operation.execute();
            LOGGER.info("reduce------end for key" + key);
        } catch (Exception e) {
            LOGGER.error("General Exception in XMLReducer", e);
        }
    }
}
EDIT: After the suggested answer I have added:

BasicDBObject query = new BasicDBObject("product_image", basicDBObject.get("product_image"))
        .append("product_name", basicDBObject.get("product_name"));
operation.find(query).upsert().updateOne(new BasicDBObject("$setOnInsert", basicDBObject));
operation.insert(basicDBObject);

I am getting an error like: com.mongodb.MongoInternalException: no mapping found for index 0
Any help will be useful. Thanks.
I suppose it all depends on what you really want to do with the "duplicates" here as to how you handle it.
For one, you can always use .initializeUnorderedBulkOperation(), which won't "error" on a duplicate key from your index (which you need in order to stop duplicates) but will report any such errors in the BulkWriteResult object returned from .execute():

BulkWriteResult result = operation.execute();
On the other hand, you can just use "upserts" instead, with operators such as $setOnInsert so that changes are only made where no duplicate existed:

BasicDBObject basicdbobject = new BasicDBObject(insertProductInfo);
BasicDBObject query = new BasicDBObject("key", basicdbobject.get("key"));
operation.find(query).upsert().updateOne(new BasicDBObject("$setOnInsert", basicdbobject));

So you basically look up the value of the field that holds the "key" to determine a duplicate with a query, and only actually change any data where that "key" was not found, i.e. where a new document is "inserted".
In either case the default behaviour here is to "insert" the first unique "key" value and then ignore all other occurrences. If you want to do other things, like "overwrite" or "increment" values where the same key is found, then the .update() "upsert" approach is the one you want, but you will use other update operators for those actions.
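For the unordered-bulk route, the unique index has to exist before the writes happen. A sketch using the same legacy driver API as the question, with the product_image/product_name fields from the question as the duplicate criteria (run once at setup time, not once per reduce() call):

```java
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class IndexSetup {
    public static void main(String[] args) {
        MongoClient mongoClient = new MongoClient("localhost", 27017);
        DB db = mongoClient.getDB("test");
        // Compound unique index: a second insert with the same
        // (product_image, product_name) pair fails with a duplicate-key error,
        // which an unordered bulk write reports but does not abort on.
        db.getCollection("product").createIndex(
                new BasicDBObject("product_image", 1).append("product_name", 1),
                new BasicDBObject("unique", true));
        mongoClient.close();
    }
}
```

With the upsert approach the same two fields would go into the query document passed to .find(query), as in the snippet above.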
What I have done:
I have created nodes with multiple properties like name, Age, Location, Gender etc.
I want to retrieve nodes which have matching property values and create a relationship between them (e.g. nodes which have the same age or the same location).
I have done this as follows:
void query()
{
    ExecutionResult result;
    Transaction tx = null;
    ExecutionEngine engine = new ExecutionEngine(graphDb);
    try
    {
        String name = "Female";
        tx = graphDb.beginTx();
        result = engine.execute(
                "start n=node(*) where has(n.City) " +
                "with n.City as city, collect(n) as nodelist, count(*) as count " +
                "where count > 1 return city, nodelist, count");
        System.out.println(result.dumpToString());
        tx.success();
    }
    catch (Exception e)
    {
        tx.failure();
    }
    finally
    {
        tx.finish();
    }
}
The nodelist gives me the nodes with same properties .
I want to create a relationship between these nodes.
How can I point to the nodes in the nodelist?
Also,please suggest other alternative ways of doing the same
To get hold of the nodes in nodelist (the result has one row per city, so loop over all rows):

Iterator<Map<String, Object>> it = result.iterator();
while (it.hasNext()) {
    Map<String, Object> row = it.next();
    List<Node> nodelist = (List<Node>) row.get("nodelist");
    // create relationships between the nodes in nodelist here
}
You haven't specified what kinds of relationships you want to create - take a look at Create or Merge and if applicable, Foreach - maybe you can write one Cypher query to do it all.
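A sketch of the relationship-creation step in Java, assuming the embedded API from the question (graphDb is your GraphDatabaseService, and SAME_CITY is just a made-up relationship type name):

```java
import java.util.List;

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.RelationshipType;
import org.neo4j.graphdb.Transaction;

void linkNodes(GraphDatabaseService graphDb, List<Node> nodelist) {
    RelationshipType sameCity = DynamicRelationshipType.withName("SAME_CITY");
    try (Transaction tx = graphDb.beginTx()) {
        // Chain the nodes (n0 -> n1 -> n2 -> ...); one pass is enough to
        // connect the whole group without creating every possible pair.
        for (int i = 0; i < nodelist.size() - 1; i++) {
            nodelist.get(i).createRelationshipTo(nodelist.get(i + 1), sameCity);
        }
        tx.success();
    }
}
```

Chaining keeps the relationship count linear; if you instead need every pair connected, a nested loop over nodelist would do it at the cost of quadratically many relationships.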
I want to store multiple custom key and value pairs on a Google Datastore entity, inside another model as a child entity. I found that there are two ways to do it:

HashMap<String, String> map = new HashMap<String, String>();

(or)

List<KeyValuePair> pairs = new ArrayList<KeyValuePair>();

I really do not know which is the correct method.
I also want to search by key and value pairs specified by the user to get the parent entity; the search can also involve multiple key and value pairs.
Please help me do it.
Google AppEngine Datastore writes and reads only simple Java data types listed in the Java Datastore Entities, Properties, and Keys documentation, not HashMap<String,String> or List<KeyValuePair> collections. However, it is possible to iterate over these collections and store each member as a separate record. The Datastore uses either a String or a long integer as the key (also known as ID or name) for each record. Thus the best fit for your Java program would be a HashMap<String,String>.
As you're open to suggestions, how about using the Datastore low level API instead of JDO? Your requirement is lightweight and a low level implementation might be simpler. For example:
// Make up some sample data
java.util.HashMap<String,String> capitals = new java.util.HashMap<String,String>();
capitals.put("France","Paris");
capitals.put("Peru","Lima");
// Create the records
com.google.appengine.api.datastore.DatastoreService datastoreService;
datastoreService = com.google.appengine.api.datastore.DatastoreServiceFactory.getDatastoreService();
for (String country : capitals.keySet()) {
    com.google.appengine.api.datastore.Entity capitalEntity;
    capitalEntity = new com.google.appengine.api.datastore.Entity("Capitals", country);
    capitalEntity.setUnindexedProperty("capital", capitals.get(country)); // or setProperty if you prefer
    datastoreService.put(capitalEntity);
}
// Retrieve one record
String wantedCountry = "Peru", wantedCapital;
com.google.appengine.api.datastore.Query query;
com.google.appengine.api.datastore.PreparedQuery pq;
com.google.appengine.api.datastore.Entity entity;
com.google.appengine.api.datastore.Key wantedKey;
com.google.appengine.api.datastore.Query.Filter filter;
query = new com.google.appengine.api.datastore.Query("Capitals");
wantedKey = com.google.appengine.api.datastore.KeyFactory.createKey("Capitals", wantedCountry);
filter = new com.google.appengine.api.datastore.Query.FilterPredicate(
        com.google.appengine.api.datastore.Entity.KEY_RESERVED_PROPERTY,
        com.google.appengine.api.datastore.Query.FilterOperator.EQUAL,
        wantedKey);
query.setFilter(filter);
pq = datastoreService.prepare(query);
entity = pq.asSingleEntity();
wantedCapital = (String) entity.getProperty("capital");
// Retrieve all records
java.lang.Iterable<com.google.appengine.api.datastore.Entity> entities;
java.util.Iterator<com.google.appengine.api.datastore.Entity> entityIterator;
query = new com.google.appengine.api.datastore.Query("Capitals");
pq = datastoreService.prepare(query);
entities = pq.asIterable();
entityIterator = entities.iterator();
while (entityIterator.hasNext()) {
    entity = entityIterator.next();
    String foundCountry = entity.getKey().getName();
    String foundCapital = (String) entity.getProperty("capital");
    // ... do whatever you do with the data
}