Extracting relationships from a Neo4j StatementResult - java

How do I extract relationships from a StatementResult?
For the moment I have something like this:
while (results.hasNext()) {
    Record record = results.next();
    try {
        if (record.get(0).hasType(TYPE_SYSTEM.NODE())) {
            Node node = record.get(0).asNode();
            //System.out.println(node.get("name") + ": " + node.get("guid"));
            if (node.hasLabel(configuration.getBlock())) {
                // Add block
                Block block = Block.fromRecord(node);
                blocks.addBlock(block);
            } else if (node.hasLabel(configuration.getProp())) {
                // Add property
                Property property = Property.fromRecord(node);
                String guid = property.getGuid();
                Block block = blocks.getBlock(guid);
                block.addProperty(property);
            } else if (node.hasLabel(configuration.getOutput())) {
                // Add output
                Output output = Output.fromRecord(node);
                String guid = output.getGuid();
                Block block = blocks.getBlock(guid);
                block.addOutput(output);
            } else if (node.hasLabel(configuration.getInput())) {
                // Add input
                Input input = Input.fromRecord(node);
                inputs.add(input);
                String guid = input.getGuid();
            }
        }
    } catch (Exception e) {
        // log and continue with the next record
    }
}
My original query was something like this:
MATCH (start:Block{name:'block_3'})
CALL apoc.path.subgraphNodes(start, {relationshipFilter:'PART_OF|hasOutPort>|connectsTo>|<hasInPort'}) YIELD node as block
WITH
block,
[(block)-[:PART_OF]->(prop) | prop] as properties,
[(block)-[:hasOutPort]->(output) | output] as outputs,
[(block)-[:hasInPort]->(input) | input] as inputs
RETURN block, properties, outputs, inputs
I need all the "connectsTo" relationships.
Hope that makes sense.

First you need to give those relationships aliases in your query and return them, just like nodes. Simplified example:
MATCH (a:Block)-[r:PART_OF]->(b:Block) RETURN a, r, b
This way, your Record instances will contain data (Value instances) for a, r, and b. In your extraction logic you can then do the following to get the relationship data:
Relationship r = record.get("r").asRelationship();
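If you also return the "connectsTo" relationships from your query (for example via a pattern comprehension such as [(block)-[c:connectsTo]->() | c] as connections), you can branch on each value's type while iterating, in the same style as your existing node check. A minimal sketch, assuming the same 1.x Java driver types already used in your code (StatementResult, Record, Value, TYPE_SYSTEM); the println calls are only illustrative:
while (results.hasNext()) {
    Record record = results.next();
    for (Value value : record.values()) {
        if (value.hasType(TYPE_SYSTEM.NODE())) {
            Node node = value.asNode();
            // ... existing label-based handling ...
        } else if (value.hasType(TYPE_SYSTEM.RELATIONSHIP())) {
            Relationship rel = value.asRelationship();
            // type() is the relationship type string, e.g. "connectsTo";
            // startNodeId()/endNodeId() tie it back to the extracted nodes
            System.out.println(rel.type() + ": " + rel.startNodeId() + " -> " + rel.endNodeId());
        } else if (value.hasType(TYPE_SYSTEM.LIST())) {
            // pattern comprehensions come back as lists; each element is a Value
            for (Value item : value.values()) {
                if (item.hasType(TYPE_SYSTEM.RELATIONSHIP())) {
                    Relationship rel = item.asRelationship();
                    System.out.println(rel.type());
                }
            }
        }
    }
}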

Related

RavenDB query returns null

I'm on RavenDB 3.5.35183. I have a type:
import com.mysema.query.annotations.QueryEntity;

@QueryEntity
public class CountryLayerCount
{
    public String countryName;
    public int layerCount;
}
and the following query:
private int getCountryLayerCount(String countryName, IDocumentSession currentSession)
{
    QCountryLayerCount countryLayerCountSurrogate = QCountryLayerCount.countryLayerCount;
    IRavenQueryable<CountryLayerCount> levelDepthQuery = currentSession
        .query(CountryLayerCount.class, "CountryLayerCount/ByName")
        .where(countryLayerCountSurrogate.countryName.eq(countryName));
    CountryLayerCount countryLayerCount = new CountryLayerCount();
    try (CloseableIterator<StreamResult<CountryLayerCount>> results = currentSession.advanced().stream(levelDepthQuery))
    {
        while (results.hasNext())
        {
            StreamResult<CountryLayerCount> srclc = results.next();
            System.out.println(srclc.getKey());
            CountryLayerCount clc = srclc.getDocument();
            countryLayerCount = clc;
            break;
        }
    }
    catch (Exception e)
    {
    }
    return countryLayerCount.layerCount;
}
The query executes successfully, and shows the correct ID for the document I'm retrieving (e.g. "CountryLayerCount/123"), but its data members are both null. The where clause also works fine, the country name is used to retrieve individual countries. This is so simple, but I can't see where I've gone wrong. The StreamResult contains the correct key, but getDocument() doesn't work - or, rather, it doesn't contain an object. The collection has string IDs.
In the db logger, I can see the request coming in:
Receive Request # 29: GET - geodata - http://localhost:8888/databases/geodata/streams/query/CountryLayerCount/ByName?&query=CountryName:Germany
Request # 29: GET - 22 ms - geodata - 200 - http://localhost:8888/databases/geodata/streams/query/CountryLayerCount/ByName?&query=CountryName:Germany
which, when plugged into the browser, correctly gives me:
{"Results":[{"countryName":"Germany","layerCount":5,"#metadata":{"Raven-Entity-Name":"CountryLayerCounts","Raven-Clr-Type":"DbUtilityFunctions.CountryLayerCount, DbUtilityFunctions","#id":"CountryLayerCounts/212","Temp-Index-Score":0.0,"Last-Modified":"2018-02-03T09:41:36.3165473Z","Raven-Last-Modified":"2018-02-03T09:41:36.3165473","#etag":"01000000-0000-008B-0000-0000000000D7","SerializedSizeOnDisk":164}}
]}
The index definition:
from country in docs.CountryLayerCounts
select new {
CountryName = country.countryName
}
AFAIK, one doesn't have to index all the fields of the object to retrieve it in its entirety, right? In other words, I just need to index the field(s) used to find the object, not all the fields I want to retrieve; at least that was my understanding...
Thanks!
The problem is related to incorrect casing.
For example:
try (IDocumentSession session = store.openSession()) {
    CountryLayerCount c1 = new CountryLayerCount();
    c1.layerCount = 5;
    c1.countryName = "Germany";
    session.store(c1);
    session.saveChanges();
}
Is saved as:
{
"LayerCount": 5,
"CountryName": "Germany"
}
Please notice that property names are serialized with upper-case first letters in the JSON (this only applies to 3.x versions).
So in order to make it work, please update the JSON property names and edit your index:
from country in docs.CountryLayerCounts
select new {
CountryName = country.CountryName
}
By the way, if you have one aggregate document per country, then you can simply query using:
QCountryLayerCount countryLayerCountSurrogate = QCountryLayerCount.countryLayerCount;
CountryLayerCount levelDepthQuery = currentSession
    .query(CountryLayerCount.class, "CountryLayerCount/ByName")
    .where(countryLayerCountSurrogate.countryName.eq(countryName))
    .single();

How to iterate a column with multiple values using Cypher?

The code I tried was
firstNode = graphDb.createNode();//creating nodes like this
firstNode.setProperty( "person", "Andy " );
Label myLabel = DynamicLabel.label("person");
firstNode.addLabel(myLabel); ...
relationship = firstNode.createRelationshipTo( secondNode, RelTypes.emails );// creating relationships like this
relationship.setProperty( "relationship", "email " );....
Transaction tx1 = graphDb.beginTx();
try {
    ExecutionEngine engine = new ExecutionEngine(graphDb);
    ExecutionResult result = engine.execute("MATCH (sender:person)-[:emails]-(receiver) RETURN sender, count(receiver) as count, collect(receiver) as receivers ORDER BY count DESC"); ..
and the result I obtained was:
sender | count | receivers
Node[2]{person:"Chris"} | 3 | [Node[4]{person:"Elsa "},Node[0]{person:"Andy "},Node[1]{person:"Bobby"}]
Node[4]{person:"Elsa "} | 3 | [Node[5]{person:"Frank"},Node[2]{person:"Chris"},Node[3]{person:"David"}]
Node[1]{person:"Bobby"} | 3 | [Node[2]{person:"Chris"},Node[3]{person:"David"},Node[0]{person:"Andy "}]
Node[5]{person:"Frank"} | 2 | [Node[3]{person:"David"},Node[4]{person:"Elsa "}]
I want to iterate over the receivers, so I tried the following:
for (Map<String, Object> row : result) {
    Node x = (Node) row.get("receivers");
    System.out.println(x);
    for (String prop : x.getPropertyKeys()) {
        System.out.println(prop + ": " + x.getProperty(prop));
    }
}
But it throws Exception in thread "main" java.lang.ClassCastException: scala.collection.convert.Wrappers$SeqWrapper cannot be cast to org.neo4j.graphdb.Node.
How can I do this?
The problem is that in your Cypher query you collect the receiver nodes into a list called receivers. receivers isn't a single Node, it's a list, so it can't be cast to org.neo4j.graphdb.Node. If you were looking to get the actual receiver nodes, one per row, then you need to change your query statement to:
MATCH (sender:person)-[:emails]-(receiver) RETURN sender, count(receiver) as count, receiver as receivers ORDER BY count DESC
Alternatively, if you want to work with the collected values, then you can do something like the code below. Note that the ClassCastException in your question shows the collected column arrives as a java.util.List wrapper, not a String[], so test for Iterable:
Object receivers = row.get("receivers");
if (receivers instanceof Iterable) {
    for (Object receiver : (Iterable<?>) receivers) {
        // .. do something with each receiver value
    }
} else {
    // do something with receivers as a single value
}
Clearly, you will need to adapt the element type if the collected values aren't what you expect.
It is a simple thing.
try {
    ExecutionEngine engine = new ExecutionEngine(graphDb);
    ExecutionResult result = engine.execute("MATCH (sender:person)-[:emails]-(receiver) RETURN sender, count(receiver) as count, collect(receiver.person) as receivers ORDER BY count DESC");
    //ExecutionResult result = engine.execute("MATCH (sender:person)-[:emails]->(receiver) WITH sender, collect(receiver.person) as receivers RETURN {sender: sender.person, receivers: receivers} ORDER BY size(receivers) DESC");
    //System.out.println(result.dumpToString());
    LinkedList list_prop = new LinkedList();
    for (Map<String, Object> row : result) {
        Node x = (Node) row.get("sender");
        Object y = row.get("receivers");
        System.out.println(y);
        for (String prop_x : x.getPropertyKeys()) {
            System.out.println(prop_x + ": " + x.getProperty(prop_x));
        }
    }
    tx1.success();
}
finally {
    tx1.close();
}
In the match query, I used "collect(receiver.person) as receivers" instead of "collect(receiver) as receivers". It worked.
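To make the iteration explicit: with collect(receiver.person), the receivers column arrives through the Java API as a list of strings, so it can be walked directly. A minimal sketch (the List cast is an assumption based on the ClassCastException above, which shows the collected value is a java.util.List wrapper; requires java.util.List and java.util.Map imports):
for (Map<String, Object> row : result) {
    @SuppressWarnings("unchecked")
    List<String> receivers = (List<String>) row.get("receivers");
    for (String receiver : receivers) {
        System.out.println("receiver: " + receiver);
    }
}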

Database insertion synchronization

I have Java code that generates a request number based on data received from the database, and then updates the database with the newly generated number:
synchronized (this.getClass()) {
    counter++;
    System.out.println(counter);
    System.out.println("start " + System.identityHashCode(this));
    certRequest.setRequestNbr(
            generateRequestNumber(certInsuranceRequestAddRq.getAccountInfo().getAccountNumberId()));
    System.out.println("outside funcvtion" + certRequest.getRequestNbr());
    reqId = Utils.getUniqueId();
    certRequest.setRequestId(reqId);
    System.out.println(reqId);
    ItemIdInfo itemIdInfo = new ItemIdInfo();
    itemIdInfo.setInsurerId(certRequest.getRequestId());
    certRequest.setItemIdInfo(itemIdInfo);
    dao.insert(certRequest);
    addAccountRel();
    counter++;
    System.out.println(counter);
    System.out.println("end");
}
The output of the System.out.println() statements is:
1
start 27907101
com.csc.exceed.certificate.domain.CertRequest#a042cb
inside function request number66
outside funcvtion66
AF88172D-C8B0-4DCD-9AC6-12296EF8728D
2
end
3
start 21695531
com.csc.exceed.certificate.domain.CertRequest#f98690
inside function request number66
outside funcvtion66
F3200106-6033-4AEC-8DC3-B23FCD3CA380
4
end
In my case this code is called from two threads. As you can observe, both threads run independently; however, the generated request number is the same in both cases. Is it possible that the second thread starts executing before the first thread's database update completes?
The code for generateRequestNumber() is as follows:
public String generateRequestNumber(String accNumber) throws Exception {
    String requestNumber = null;
    if (accNumber != null) {
        String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
                + "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
                + " and certActObjRel.certObjTypeCd=:certObjTypeCd "
                + " and certActObjRel.certAccountId=:accNumber ";
        String[] parameterNames = { "certObjTypeCd", "accNumber" };
        Object[] parameterValues = new Object[] { Constants.REQUEST_RELATION_CODE, accNumber };
        List<?> resultSet = dao.executeNamedQuery(SQL_QUERY, parameterNames, parameterValues);
        // List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
        if (resultSet != null && resultSet.size() > 0) {
            requestNumber = (String) resultSet.get(0);
        }
        int maxRequestNumber = -1;
        if (requestNumber != null && requestNumber.length() > 0) {
            maxRequestNumber = maxValue(resultSet.toArray());
            requestNumber = Integer.toString(maxRequestNumber + 1);
        } else {
            requestNumber = Integer.toString(1);
        }
        System.out.println("inside function request number" + requestNumber);
        return requestNumber;
    }
    return null;
}
Databases allow multiple simultaneous connections, so unless access is serialized properly, two threads can read the same current maximum and generate the same request number. Note that synchronized (this.getClass()) only serializes threads within a single JVM; it offers no protection against other processes or servers touching the same tables.
Since you only seem to require a unique, growing integer, you can easily and safely generate one inside the database, for example with a sequence (if supported by the database). Databases that don't support sequences usually provide some other mechanism (such as auto-increment columns in MySQL).
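For example, a minimal sketch of the sequence approach over plain JDBC; the sequence name, the Oracle-style NEXTVAL/DUAL syntax, and the dataSource variable are assumptions for illustration (requires java.sql.Connection, PreparedStatement, and ResultSet imports):
// Assumes a sequence created once in the database, e.g.:
//   CREATE SEQUENCE request_nbr_seq START WITH 1 INCREMENT BY 1;
try (Connection conn = dataSource.getConnection();
     PreparedStatement ps = conn.prepareStatement("SELECT request_nbr_seq.NEXTVAL FROM DUAL");
     ResultSet rs = ps.executeQuery()) {
    rs.next();
    long requestNbr = rs.getLong(1); // the database hands out each value exactly once
    certRequest.setRequestNbr(Long.toString(requestNbr));
}
Because the database issues each NEXTVAL exactly once, no synchronized block is needed and the numbers stay unique even when several application servers share the database.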

Compare field from previous row in JDBC ResultSet

I've researched but can't find any relevant info. I have a result set that gives me back distinct tagIds; there can be multiple tagIds for the same accountId.
while (result_set.next()) {
    String tagId = result_set.getString("tagId");
    String accountId = result_set.getString("accoundId");
    // plenty of other fields being stored locally
}
I need to store the first accountId (which is being done) and, on every subsequent iteration, compare the current accountId with the previous one to check whether it is the same account. I tried the following and it failed horribly: after the first iteration the two values are always equal. I thought that as long as I compared them before the assignment, the outer variable (previousId) would still hold the prior value.
String previousId = null;
while (result_set.next()) {
    String tagId = result_set.getString("tagId");
    String accountId = result_set.getString("accoundId");
    previousId = accountId;
}
Anyway, I wanted my workflow to go something like this:
while (result_set.next()) {
    if (previousId == null) {
        // this would be the first iteration
    } else if (previousId.equals(accountId)) {
        // go here
    } else {
        // go here
    }
}
If I've understood you correctly, this should work; the key is to update previousId at the end of each iteration, after the comparisons:
String previousId = null;
while (result_set.next()) {
    String tagId = result_set.getString("tagId");
    String accountId = result_set.getString("accoundId");
    if (previousId == null) {
        // this would be the first iteration
    } else if (previousId.equals(accountId)) {
        // same account as the previous row
    } else {
        // account changed
    }
    previousId = accountId;
}

After the 128th node is created in a Neo4j index, no more nodes can be accessed

This seems like a very strange problem. I'm stress testing my Neo4j graph database, and one of my tests requires creating a lot of users (in this specific test, 1000). The code for that is as follows:
// Creates n users and measures the time taken to add another
n = 1000;
tx = graphDb.beginTx();
try {
    for (int i = 0; i < n; i++) {
        dataService.createUser(BigInteger.valueOf(i));
    }
    start = System.nanoTime();
    dataService.createUser(BigInteger.valueOf(n));
    end = System.nanoTime();
    time = end - start;
    System.out.println("The time taken for createUser with " + n + " users is " + time + " nanoseconds.");
    tx.success();
}
finally {
    tx.finish();
}
And the code for dataService.createUser() is:
public User createUser(BigInteger identifier) throws ExistsException {
    // Verify that user doesn't already exist.
    if (this.nodeIndex.get(UserWrapper.KEY_IDENTIFIER, identifier)
            .getSingle() != null) {
        throw new ExistsException("User with identifier '"
                + identifier.toString() + "' already exists.");
    }
    // Create new user.
    final Node userNode = graphDb.createNode();
    final User user = new UserWrapper(userNode);
    user.setIdentifier(identifier);
    userParent.getNode().createRelationshipTo(userNode, NodeRelationships.PARENT);
    return user;
}
Now I need to call dataService.getUser() after I've made these users. The code for getUser() is as follows:
public User getUser(BigInteger identifier) throws DoesNotExistException {
    // Search for the user.
    Node userNode = this.nodeIndex.get(UserWrapper.KEY_IDENTIFIER,
            identifier).getSingle();
    // Return the wrapped user, if found.
    if (userNode != null) {
        return new UserWrapper(userNode);
    } else {
        throw new DoesNotExistException("User with identifier '"
                + identifier.toString() + "' was not found.");
    }
}
So everything is going fine until I create the 129th user. I'm following along in the debugger, watching dataService.getUser(BigInteger.valueOf(1)) (the second node), dataService.getUser(BigInteger.valueOf(127)) (the 128th node), and dataService.getUser(BigInteger.valueOf(i - 1)) (the last node created). The debugger tells me that after node 128 is created, getUser() throws a DoesNotExistException for node 129 and above, while it still returns values for node 2 and node 128.
The user id I'm passing to createUser() is autoindexed.
Any idea why it isn't making more nodes (or not indexing these nodes)?
It sounds suspiciously like a signed byte conversion, which flips around at 128: Java's byte ranges from -128 to 127, so narrowing 128 into a byte yields -128. Could you make sure there isn't anything like that going on in your code?
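For reference, a minimal standalone demonstration of that overflow (not taken from the question's code):
public class ByteOverflowDemo {
    public static void main(String[] args) {
        for (int i = 126; i <= 130; i++) {
            byte b = (byte) i; // narrowing conversion keeps only the low 8 bits
            System.out.println(i + " -> " + b); // 127 -> 127, but 128 -> -128, 129 -> -127
        }
    }
}
If the identifier passes through a byte anywhere on its way into the index, values from 128 onward would wrap around and collide with earlier entries, which matches the symptom of lookups failing after the 128th node.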
