Exception using QueryEngine inside a Transaction - java

I'm using neo4j 1.9.M01 with java-rest-binding 1.8.M07, and I have a problem with this code, which aims to get a node whose property "URL" is "ARREL" from a Neo4j database, using the query language via REST. The problem seems to happen only inside a transaction, where it throws an exception; otherwise it works well:
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
Node nodearrel = null;
Transaction tx0 = graphDb.beginTx();
try {
    final String queryStringarrel = "START n=node(*) WHERE n.URL =~{URL} RETURN n";
    QueryResult<Map<String, Object>> retornar = queryEngine.query(queryStringarrel, MapUtil.map("URL", "ARREL"));
    for (Map<String, Object> row : retornar)
    {
        nodearrel = (Node) row.get("n");
        System.out.println("Arrel: " + nodearrel.getProperty("URL") + " id : " + nodearrel.getId());
    }
    tx0.success();
}
(...)
But an exception occurs on every execution, at the line that returns the QueryResult object: *exception tx0: Error reading as JSON ''*.
I have also tried to do it with the ExecutionEngine (inside a transaction):
ExecutionEngine engine = new ExecutionEngine( graphDb );
String ARREL = "ARREL";
ExecutionResult result = engine.execute("START n=node(*) WHERE n.URL =~{"+ARREL+"} RETURN n");
Iterator<Node> n_column = result.columnAs("n");
Node arrelat = (Node) n_column.next();
for ( Node node : IteratorUtil.asIterable( n_column ) )
(...)
But it also fails at *n_column.next()*, returning a null object that throws an exception.
The problem is that I need to use transactions to optimize the queries; otherwise it takes too much time to process all the queries I need to do. Should I try to join several operations into one query, to avoid using transactions?

Try adding single quotes around the parameter:
START n=node(*) WHERE n.URL =~ '{URL}' RETURN n

Can you update your java-rest-binding to the latest version (1.8)? In between we had a version that automatically applied REST batch operations to places with transaction semantics.
So the transactions you see are not real transactions; they just record your operations, to be executed as batch REST operations on tx.success/finish.
Execute the queries within the transaction, but only access the results after the tx has finished. Then your results will be there.
This is for instance useful to send many Cypher queries to the server in one go and have the results all available in one go afterwards.
And yes, as #ulkas suggests, use parameters, but not like that:
START n=node(*) WHERE n.URL =~ {URL} RETURN n
params: { "URL" : "http://your.url" }
No quotes necessary when using params, just like SQL prepared statements.
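Putting that together, here is a minimal sketch (assuming the 1.8-era java-rest-binding API shown above: RestCypherQueryEngine, QueryResult, MapUtil): run the parameterized query inside the batch-style transaction, but only iterate over the results after the transaction has finished:
// Sketch only: parameter passed via MapUtil.map, no quotes around {URL};
// results are read after tx.finish(), when the REST batch has executed.
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());

QueryResult<Map<String, Object>> result;
Transaction tx = graphDb.beginTx();
try {
    result = queryEngine.query("START n=node(*) WHERE n.URL =~ {URL} RETURN n",
            MapUtil.map("URL", "ARREL"));
    tx.success();
} finally {
    tx.finish(); // the recorded operations are sent as one REST batch here
}
// Only now access the rows, outside the pseudo-transaction.
for (Map<String, Object> row : result) {
    Node n = (Node) row.get("n");
    System.out.println(n.getId() + " -> " + n.getProperty("URL"));
}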

Related

ErrorCounter Unexpected Token where - Hibernate

I am using JPA for queries. My query is this:
public PartsItem getAvailablePartsItem(String search) {
    Session s = sessionFactory.openSession();
    PartsItem pi;
    pi = s.createQuery("from PartsItem where lower(serialNumber) like lower(:serialNumber) and where available = true", PartsItem.class)
          .setParameter("serialNumber", '%' + search + '%')
          .list()
          .get(0);
    s.close();
    return pi;
}
I get errors about unexpected tokens at where and available. I want both conditions fulfilled, but I keep getting these errors. Could it be an issue with the and?
You shouldn't have the second where keyword; a query has only one where clause, and further conditions are joined with and. Removing it should make the query parse successfully.
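For reference, a sketch of the corrected method (same names as in the question; the only change is dropping the second where):
public PartsItem getAvailablePartsItem(String search) {
    Session s = sessionFactory.openSession();
    // One "where" clause; further conditions are joined with "and".
    PartsItem pi = s.createQuery(
            "from PartsItem where lower(serialNumber) like lower(:serialNumber) and available = true",
            PartsItem.class)
        .setParameter("serialNumber", '%' + search + '%')
        .list()
        .get(0);
    s.close();
    return pi;
}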

Java driver: how to get the objectId of an updated object with MongoDB's updateFirst method

I'm trying to get the objectId of an object that I have updated - this is my Java code using the Java driver:
Query query = new Query();
query.addCriteria(Criteria.where("color").is("pink"));
Update update = new Update();
update.set("name", name);
WriteResult writeResult = mongoTemplate.updateFirst(query, update, Colors.class);
Log.e("object id", writeResult.getUpsertedId().toString());
The log message returns null. I'm using a MongoDB 3.0 server on mongolab (I'm on the free tier), so it shouldn't return null. My mongo shell version is also:
MongoDB shell version: 3.0.7
Is there an easy way to return the object ID for the doc that I have just updated? What is the point of the method getUpsertedId() if I cannot return the upsertedId?
To do what I want, I currently have to issue two queries which is highly cumbersome:
//1st query - updating the object first
Query query = new Query();
query.addCriteria(Criteria.where("color").is("pink"));
Update update = new Update();
update.set("name", name);
WriteResult writeResult = mongoTemplate.updateFirst(query, update, Colors.class);
//2nd query - find the object so that I can get its objectid
Query queryColor = new Query();
queryColor.addCriteria(Criteria.where("color").is("pink"));
queryColor.addCriteria(Criteria.where("name").is(name));
Color color = mongoTemplate.findOne(queryColor, Color.class);
Log.e("ColorId", color.getId());
As per David's answer, I even tried his suggestion to use upsert on the template instead, so I changed the code to the below, and it still does not work:
Query query = new Query();
query.addCriteria(Criteria.where("color").is("pink"));
Update update = new Update();
update.set("name", name);
WriteResult writeResult = mongoTemplate.upsert(query, update, Colors.class);
Log.e("object id", writeResult.getUpsertedId().toString());
Simon, I think it's possible to achieve this in one query. What you need is a different method called findAndModify().
In the Java driver for MongoDB, there is a method called findOneAndUpdate(filter, update, options).
This method returns the document that was updated. Depending on the options you specify, this will be either the document as it was before the update or as it is after the update. If no documents matched the query filter, null is returned. It's not required to pass options; in that case it returns the document as it was before the update operation was applied.
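For instance, with the MongoDB 3.x Java driver it might look like this (a sketch; the Filters/Updates helpers and a MongoCollection<Document> named collection are assumptions, not code from the question):
// Sketch: update one document and get back its _id in a single round trip.
Document updated = collection.findOneAndUpdate(
        Filters.eq("color", "pink"),
        Updates.set("name", name),
        new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
if (updated != null) {
    System.out.println(updated.getObjectId("_id")); // id of the updated document
}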
A quick look at the mongoTemplate docs here: http://docs.spring.io/spring-data/mongodb/docs/current/api/org/springframework/data/mongodb/core/FindAndModifyOptions.html tells me that you can use the method call:
public <T> T findAndModify(Query query,
                           Update update,
                           FindAndModifyOptions options,
                           Class<T> entityClass)
You can also configure the FindAndModifyOptions to perform an 'upsert' if the item was not found by the query. If it is found, the object will just be modified.
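Adapted to the code in the question, a sketch with Spring's mongoTemplate might look like this (assuming Colors exposes a getId() accessor, as the two-query workaround above implies):
Query query = new Query();
query.addCriteria(Criteria.where("color").is("pink"));
Update update = new Update();
update.set("name", name);

// returnNew(true) hands back the document as it is after the update,
// so its id is available without a second query; upsert(true) creates
// the document if no match exists.
Colors updated = mongoTemplate.findAndModify(
        query, update,
        FindAndModifyOptions.options().returnNew(true).upsert(true),
        Colors.class);
Log.e("object id", updated.getId());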
Upsert only applies if both:
the update options had upsert on, and
a new document was actually created.
Your query neither has upsert enabled nor creates a new document, so it makes perfect sense that getUpsertedId() returns null here.
Unfortunately it is not possible to get what you want in a single call with the current API; you need to split it into two calls. This is further indicated by the Mongo shell documentation for WriteResult:
The _id of the document inserted by an upsert. Returned only if an
upsert results in an insert.
This is an example of how to do this with findOneAndUpdate(filter, update, options) in Scala:
val findOneAndUpdateOptions = new FindOneAndUpdateOptions
findOneAndUpdateOptions.returnDocument(ReturnDocument.AFTER)
val filter = Document.parse("{\"idContrato\":\"12345\"}")
val update = Document.parse("{ $set: {\"descripcion\": \"New Description\" } }")
val response = collection.findOneAndUpdate(filter,update,findOneAndUpdateOptions)
println(response)

Spring Mongo delete all files from GridFS

Using Spring Mongo, is there a way of deleting multiple files from Mongo (stored with GridFS) all at once, using only a query or something similar? For example, if I have a field Language in the .files collection, I want to be able to delete all matching entries from .files and also from .chunks in a single query. Not sure if this is possible in Mongo (outside Spring).
I tried to use GridFsTemplate, but its delete method calls the GridFS.remove method (code from Spring-Mongodb):
public void delete(Query query) {
    getGridFs().remove(getMappedQuery(query));
}
And this method loads all the files into memory and then deletes them one by one:
public void remove( DBObject query ){
    for ( GridFSDBFile f : find( query ) ){
        f.remove();
    }
}

void remove(){
    _fs._filesCollection.remove( new BasicDBObject( "_id" , _id ) );
    _fs._chunkCollection.remove( new BasicDBObject( "files_id" , _id ) );
}
UPDATE, from the comments:
Yes, you can't perform that kind of query in MongoDB. Also, there is no direct way of deleting a bunch of files in GridFS, using either the mongofiles command-line tool or the mongo Java driver.
To remove an entire collection at once, you can perform one of these operations:
If your collection is a regular MongoDB collection, you can delete it with:
MongoDatabase db = client.getDatabase("db");
db.getCollection("MyCollectionName").drop();
If you are not certain a collection named "MyCollectionName" exists, you should check that first, as described here.
If your collection is a GridFS collection, you can delete it with:
MongoDatabase db = client.getDatabase("db");
db.getCollection("MyCollectionName.files").drop();
db.getCollection("MyCollectionName.chunks").drop();
This works, since GridFS collections consist of a .files and a .chunks collection behind the scenes, and those are accessible (and removable) in the 'classic' way.
DBObject query;
// write an appropriate query
List<GridFSDBFile> fileList = gfs.find(query);
for (GridFSDBFile f : fileList)
{
    gfs.remove(f.getFilename());
    // if you have not set fileName and
    // your _id is of ObjectId type, then you can use
    // gfs.remove((ObjectId) f.getId());
}
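If loading every file through GridFS is too heavy, another workaround (a sketch, not an official API: it assumes the default fs bucket and a Language field on the .files documents) is to collect the matching _ids from .files and issue two bulk removes with $in:
// Sketch: bulk-delete all GridFS entries whose .files document matches the query,
// touching fs.files and fs.chunks directly with the plain driver API.
DB db = mongo.getDB("mydb");
DBCollection files = db.getCollection("fs.files");
DBCollection chunks = db.getCollection("fs.chunks");

DBObject query = new BasicDBObject("Language", "fr");
List<Object> ids = new ArrayList<Object>();
for (DBObject f : files.find(query, new BasicDBObject("_id", 1))) {
    ids.add(f.get("_id"));
}
DBObject in = new BasicDBObject("$in", ids);
chunks.remove(new BasicDBObject("files_id", in)); // remove the chunks first
files.remove(new BasicDBObject("_id", in));       // then the file documents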

Neo4j ExecutionEngine does not return valid results

Trying to use a similar example from the sample code found here
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this in the web console I get many results (as there are multiple nodes with a Name attribute containing the pattern 79). Yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either, which must mean the Iterator finds no results, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable points to the same path as the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself, it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is called, sharing the same DB handle:
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {

    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        //apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        //write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the Nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the server.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
// Cypher parameters are numbered {1}, {2}, ... in the Neo4j JDBC driver
PreparedStatement stmt = conn.prepareStatement("start n=node({1}) return id(n) as id");
stmt.setLong(1, id);
ResultSet rs = stmt.executeQuery();
while(rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored: it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try to locate the culprit. For instance, until you've found your problem, try these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program, then (unless your Java program is using the Neo4j REST bindings) you are not using the same directory, because two Neo4j instances cannot run against the same database directory simultaneously.
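As a minimal cold-start check (a sketch; it assumes Neo4j 2.x Embedded, where even reads need a transaction, and a store directory of "target/mydb", both placeholders for your own setup), open the database in a fresh JVM and count the nodes:
// Sketch: open the same store directory cold and count nodes.
GraphDatabaseService db = new GraphDatabaseFactory().newEmbeddedDatabase("target/mydb");
ExecutionEngine engine = new ExecutionEngine(db);
try (Transaction tx = db.beginTx()) {
    // If this prints 0 in a fresh JVM, the earlier inserts were not committed,
    // or they went to a different directory.
    System.out.println(engine.execute("MATCH (n) RETURN count(n) AS c").dumpToString());
    tx.success();
}
db.shutdown(); // release the store so the server (or browser) can open it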

Relationships don't show in neo4j web browser when created by api...neo4j 2.0.1

I have created a simple test in the Java API (see below). I then point neo4j-server.properties at that db and restart Neo4j. I'm expecting to see two nodes and a relationship using the web browser at localhost:7474/browser.
When the code below is executed, the for loop does detect a relationship; however, it does not display in the browser (nor is it returned by a Cypher query).
I'm using 2.0.1 in the Java code and the Neo4j server. Is my expectation in error?
Transaction tx = gdb.beginTx();
try
{
    Label courseLabel = DynamicLabel.label( "Book" );
    Label courseLabelP = DynamicLabel.label( "Person" );
    Node a = gdb.createNode(courseLabel), b = gdb.createNode(courseLabelP);
    Relationship rel = a.createRelationshipTo( b, CourseRelTypes.HAS_AUTHOR );
    for(Relationship r : b.getRelationships(CourseRelTypes.HAS_AUTHOR)) {
        System.out.println("has rel");
    }
}
finally {
    tx.close();
}
You need to call tx.success() before closing to commit that transaction; without it, tx.close() rolls the work back.
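A sketch of the corrected block, with the same names as in the question; the only change is calling tx.success() before the transaction is closed:
Transaction tx = gdb.beginTx();
try
{
    Label courseLabel = DynamicLabel.label( "Book" );
    Label courseLabelP = DynamicLabel.label( "Person" );
    Node a = gdb.createNode(courseLabel), b = gdb.createNode(courseLabelP);
    a.createRelationshipTo( b, CourseRelTypes.HAS_AUTHOR );
    tx.success(); // mark the transaction as successful...
}
finally {
    tx.close(); // ...so close() commits instead of rolling back
}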
