I am using Apache Jackrabbit as a database.
In my case, the root node has a number of child nodes (only at depth 1).
Each child node has a unique name, which is an integer.
Each child node has some properties that I use later.
My task
I have to take the top 10 nodes whose keys (integer values) are the smallest.
My thinking
To achieve this, I want to run a query that sorts the keys of all child nodes and picks the top 10. Using those keys I then fetch the corresponding nodes and, after processing them, delete those key/value pairs.
I have searched the internet a lot for how to run such a query. Can you please tell me how to run a query on Apache Jackrabbit? An example would be great.
Edit no. 1
public class JackRabbit {
    public static void main(String[] args) throws Exception {
        try {
            Repository repository = JcrUtils.getRepository("http://localhost:4502/crx/server");
            javax.jcr.Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
            Node root = session.getRootNode();
            // Obtain the query manager for the session via the workspace ...
            javax.jcr.query.QueryManager queryManager = session.getWorkspace().getQueryManager();
            // Create a query object ...
            String expression = "select * from nt:base where name= '12345' ";
            javax.jcr.query.Query query = queryManager.createQuery(expression, javax.jcr.query.Query.JCR_SQL2);
            // Execute the query and get the results ...
            javax.jcr.query.QueryResult result = query.execute();
            session.logout();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Exception
javax.jcr.query.InvalidQueryException: Query:
select * from nt:(*)base where name= '12345'; expected: <end>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:525)
at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:69)
at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:51)
at org.apache.jackrabbit.spi2dav.ExceptionConverter.generate(ExceptionConverter.java:45)
at org.apache.jackrabbit.spi2dav.RepositoryServiceImpl.executeQuery(RepositoryServiceImpl.java:2004)
at org.apache.jackrabbit.jcr2spi.WorkspaceManager.executeQuery(WorkspaceManager.java:349)
at org.apache.jackrabbit.jcr2spi.query.QueryImpl.execute(QueryImpl.java:149)
at jackrabbit.JackRabbit.main(JackRabbit.java:36)
I want to write a query for the scenario below.
The nodes whose names are integer values have some properties. I want to sort these nodes by their integer values and extract the top 50 nodes for further processing.
Please help me with that.
You should quote your node type name in JCR-SQL2:
select * from [nt:base]
This is one of the main differences between JCR-SQL and JCR-SQL2. Besides, NAME is a dynamic operand that takes a selector argument, so a better way to write your query would be this:
select * from [nt:base] as b where name(b) = '12345'
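For the top-10 part of your question, a rough, untested sketch could order by node name and limit the result. Note that NAME() is compared as a string, so either zero-pad the integer names or order by a numeric property instead; the root path in ISCHILDNODE is an assumption based on your description that the children sit directly under the root:
javax.jcr.query.QueryManager qm = session.getWorkspace().getQueryManager();
// children of the root node, ordered by node name (lexicographic, so zero-pad the integers)
String stmt = "SELECT * FROM [nt:base] AS b WHERE ISCHILDNODE(b, '/') ORDER BY NAME(b) ASC";
javax.jcr.query.Query q = qm.createQuery(stmt, javax.jcr.query.Query.JCR_SQL2);
q.setLimit(10); // only the top 10
javax.jcr.query.QueryResult res = q.execute();
for (NodeIterator it = res.getNodes(); it.hasNext();) {
    Node child = it.nextNode();
    // process the node, then child.remove() if it should be deleted
}
session.save(); // persist any removals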
You have different ways of executing your queries depending on the query language you want to use.
Take a look at this code for a simple query that uses only the API (the Query Object Model) rather than SQL-like string queries.
You can also take a look at the JBoss ModeShape documentation for examples, since it is another JCR 2.0 implementation.
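For illustration, here is a minimal, untested sketch of the same name-equality query expressed with the JCR 2.0 Query Object Model instead of a query string (the selector name "b" and the value "12345" are taken from the example above):
QueryObjectModelFactory qf = queryManager.getQOMFactory();
ValueFactory vf = session.getValueFactory();
// equivalent of: SELECT * FROM [nt:base] AS b WHERE NAME(b) = '12345'
Selector source = qf.selector("nt:base", "b");
Constraint constraint = qf.comparison(
        qf.nodeName("b"),
        QueryObjectModelConstants.JCR_OPERATOR_EQUAL_TO,
        qf.literal(vf.createValue("12345")));
QueryObjectModel qom = qf.createQuery(source, constraint, null, null);
javax.jcr.query.QueryResult qomResult = qom.execute();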
I hope this will help you to execute the query:
public FolderListReturn listFolder(String parentNode, String userid, String password) {
    System.out.println("getting folders and files from = " + parentNode + " of user : " + userid);
    SessionWrapper sessions = JcrRepositoryUtils.login(userid, password);
    Session jcrsession = sessions.getSession();
    Assert.notNull(parentNode);
    FolderListReturn folderList1 = new FolderListReturn();
    ArrayOfFolders folders = new ArrayOfFolders();
    try {
        javax.jcr.query.QueryManager queryManager = jcrsession.getWorkspace().getQueryManager();
        // folders directly under parentNode that belong to the user, sorted by the configured property
        String expression = "select * from [nt:folder] AS s WHERE ISCHILDNODE(s,'" + parentNode
                + "') and CONTAINS(s.[edms:owner],'*" + userid + "*') ORDER BY s.["
                + Config.EDMS_Sorting_Parameter + "] ASC";
        javax.jcr.query.Query query = queryManager.createQuery(expression, javax.jcr.query.Query.JCR_SQL2);
        javax.jcr.query.QueryResult result = query.execute();
        for (NodeIterator nit = result.getNodes(); nit.hasNext();) {
            Node node = nit.nextNode();
            Folder folder = new Folder();
            folder = setProperties(node, folder, userid, password, jcrsession, parentNode);
            folders.getFolderList().add(folder);
        }
        folderList1.setFolderListResult(folders);
        folderList1.setSuccess(true);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // JcrRepositoryUtils.logout(sessionId);
    }
    return folderList1;
}
Related
I have the following Java code that uses Hibernate predicates to return search results from my Appointment MySQL table.
public List<Appointment> getSearchResults(String client, AppointmentSearchRequest searchRequest, Predicate completePredicate) {
List<Appointment> searchResults = new ArrayList<>();
EntityManager entityManager = null;
try {
entityManager = entityManagement.createEntityManager(client);
CriteriaBuilder builder = entityManager.getCriteriaBuilder();
CriteriaQuery<Appointment> query = builder.createQuery(Appointment.class);
Root<Appointment> from = query.from(Appointment.class);
CriteriaQuery<Appointment> selectQuery = query.select(from);
selectQuery = selectQuery.where(completePredicate);
searchResults = entityManager.createQuery(selectQuery).setFirstResult(searchRequest.getIndex()).setMaxResults(searchRequest.getSize()).getResultList();
}catch (Exception e){
//
}
return searchResults;
}
When I run this code, at the following line:
searchResults = entityManager.createQuery(selectQuery).setFirstResult(searchRequest.getIndex()).setMaxResults(searchRequest.getSize()).getResultList();
I am getting the error:
17:20:27,730 ERROR [org.hibernate.hql.internal.ast.ErrorCounter] (http-/127.0.0.1:8080-2) Invalid path: 'generatedAlias1.title'
17:20:27,734 ERROR [org.hibernate.hql.internal.ast.ErrorCounter] (http-/127.0.0.1:8080-2) Invalid path: 'generatedAlias1.title': Invalid path: 'generatedAlias1.title'
at org.hibernate.hql.internal.ast.util.LiteralProcessor.lookupConstant(LiteralProcessor.java:119) [hibernate-core-4.2.7.SP1-redhat-3.jar:4.2.7.SP1-redhat-3]
What could be causing this error?
Ironically, I had exactly the same issue just a few days ago: the problem is your path "generatedAlias1.title", which in JPQL would be understood as dereferencing the field title from an entity referenced by the alias generatedAlias1.
Unfortunately, such a compound path in a single String cannot be understood by the CriteriaBuilder.
This path is normally described within the Predicate you pass to your getSearchResults method as the completePredicate argument (the problem is that you did not tell us how you are creating this predicate).
So you need to refactor the way you declare this path, for example by using the class javax.persistence.criteria.Path and computing it with a method like this:
private <V> Path<V> getPath(Root<T> root, String attributeName) {
    Path<V> path = null;
    // walk the dotted attribute path, e.g. "customer.title"
    for (String part : attributeName.split("\\.")) {
        path = (path == null) ? root.get(part) : path.get(part);
    }
    return path;
}
Here you pass the variable from of your getSearchResults method as the root argument, and the attribute part of "generatedAlias1.title" (without the alias prefix) as attributeName; then you can re-declare your predicate as an instance of Predicate that acts on the returned path, as in the sketch below.
I hope this is clear enough.
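A rough sketch of how that could look inside getSearchResults (titleValue is just a placeholder for whatever value your original predicate compared against):
// resolve the attribute path against the root of *this* query and
// rebuild the predicate here, instead of reusing one built for another query
Path<String> titlePath = getPath(from, "title"); // "generatedAlias1.title" without the alias
Predicate titlePredicate = builder.equal(titlePath, titleValue);
selectQuery = selectQuery.where(titlePredicate);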
Try checking your Query
See this answer. It might be just as simple as checking your query aliases. Make sure that they match, otherwise the query won't compile at all. You didn't list your full query here, so I can't tell if it's incorrect or not.
I have a collection with some documents in it. In my application I create this collection first and then insert documents. Based on the requirements, I also need to truncate (delete all documents in) the collection. Using the DocumentDB Java API, I have written the following code for this purpose:
DocumentClient documentClient = getConnection(masterkey, server, portNo);
List<Database> databaseList = documentClient.queryDatabases(
        "SELECT * FROM root r WHERE r.id='" + schemaName + "'", null).getQueryIterable().toList();
DocumentCollection collection = null;
Database databaseCache = (Database) databaseList.get(0);
List<DocumentCollection> collectionList = documentClient.queryCollections(databaseCache.getSelfLink(),
        "SELECT * FROM root r WHERE r.id='" + collectionName + "'", null).getQueryIterable().toList();
// truncate logic
if (collectionList.size() > 0) {
    collection = (DocumentCollection) collectionList.get(0);
    if (truncate) {
        try {
            documentClient.deleteDocument(collection.getSelfLink(), null);
        } catch (DocumentClientException e) {
            e.printStackTrace();
        }
    }
} else { // create logic
    RequestOptions requestOptions = new RequestOptions();
    requestOptions.setOfferType("S1");
    collection = new DocumentCollection();
    collection.setId(collectionName);
    try {
        collection = documentClient.createCollection(databaseCache.getSelfLink(), collection, requestOptions)
                .getResource();
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}
With the above code I am able to create a new collection successfully, and I am also able to insert documents into it. But while truncating the collection I get the error below:
com.microsoft.azure.documentdb.DocumentClientException: The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'delete
colls
eyckqjnw0ae=
I am using Azure Document DB Java API version 1.9.5.
It would be a great help if you could point out the error in my code or suggest a better way of truncating a collection. I would really appreciate any help here.
According to your description and code, I think the issue is caused by the code below.
try {
documentClient.deleteDocument(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
e.printStackTrace();
}
It seems that you want to delete a document with the code above, but you are passing a collection link as the documentLink argument.
So if your real intention is to delete the collection, please use the method DocumentClient.deleteCollection(collectionLink, options).
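For example, a rough sketch against the 1.x Java SDK could look like this (the first block drops the collection entirely, so recreate it afterwards if you still need it; the second keeps the collection and deletes the documents one by one):
try {
    // drop the whole collection; all documents go with it
    documentClient.deleteCollection(collection.getSelfLink(), null);
} catch (DocumentClientException e) {
    e.printStackTrace();
}

// alternative "truncate": keep the collection, delete every document in it
List<Document> documents = documentClient
        .queryDocuments(collection.getSelfLink(), "SELECT * FROM c", null)
        .getQueryIterable().toList();
for (Document document : documents) {
    try {
        documentClient.deleteDocument(document.getSelfLink(), null);
    } catch (DocumentClientException e) {
        e.printStackTrace();
    }
}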
I have to run this flexible search query in a Java service class:
select sum({oe:totalPrice})
from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}}
where {or:versionID} is null and {or:orderType} in (8796093066999)
and {or:company} in (8796093710341)
and {or:pointOfSale} in (8796097413125)
and {oe:ecCode} in ('13','14')
and {or:yearSeason} in (8796093066981)
and {os:code} not in ('CANCELED', 'NOT_APPROVED')
When I perform this query in the hybris administration console I correctly obtain:
1164.00000000
In my Java service class I wrote this:
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
Map<String, Object> queryParameters) {
BigDecimal aggregatedValue = new BigDecimal(0);
final StringBuilder queryBuilder = new StringBuilder();
queryBuilder.append("select sum({oe:").append(total).append("})");
queryBuilder.append(
" from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk} join OrderEntry as oe on {or.pk}={oe.order}}");
queryBuilder.append(" where {or:versionID} is null");
if (queryParameters != null && !queryParameters.isEmpty()) {
appendWhereClausesToBuilder(queryBuilder, queryParameters);
}
queryBuilder.append(" and {os:code} not in ('");
queryBuilder.append(CustomerOrderStatus.CANCELED.getCode()).append("', ");
queryBuilder.append("'").append(CustomerOrderStatus.NOT_APPROVED.getCode()).append("')");
FlexibleSearchQuery query = new FlexibleSearchQuery(queryBuilder.toString(), queryParameters);
List<BigDecimal> result = Lists.newArrayList();
query.setResultClassList(Arrays.asList(BigDecimal.class));
result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
if (!result.isEmpty() && result.get(0) != null) {
aggregatedValue = result.get(0);
}
return aggregatedValue;
}
private void appendWhereClausesToBuilder(StringBuilder builder, Map<String, Object> params) {
if ((params == null) || (params.isEmpty()))
return;
for (String paramName : params.keySet()) {
builder.append(" and ");
if (paramName.equalsIgnoreCase("exitCollection")) {
builder.append("{oe:ecCode}").append(" in (?").append(paramName).append(")");
} else {
builder.append("{or:").append(paramName).append("}").append(" in (?").append(paramName).append(")");
}
}
}
The query string just before the search(query).getResult() call is:
query: [select sum({oe:totalPrice}) from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}} where {or:versionID} is null
and {or:orderType} in (?orderType) and {or:company} in (?company)
and {or:pointOfSale} in (?pointOfSale) and {oe:ecCode} in (?exitCollection)
and {or:yearSeason} in (?yearSeason) and {os:code} not in ('CANCELED', 'NOT_APPROVED')],
query parameters: [{orderType=OrderTypeModel (8796093230839),
pointOfSale=B2BUnitModel (8796097413125), company=CompanyModel (8796093710341),
exitCollection=[13, 14], yearSeason=YearSeasonModel (8796093066981)}]
but after search(query) the result is [null].
Why? What am I doing wrong in the Java code? Thanks.
In addition, if you want to disable restrictions in your Java code, you can do it like this:
@Autowired
private SearchRestrictionService searchRestrictionService;

private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    searchRestrictionService.disableSearchRestrictions();
    // your code here
    searchRestrictionService.enableSearchRestrictions();
    return aggregatedValue;
}
In the above code you disable the search restrictions and, after getting the search result, enable them again.
OR
You can use sessionService to execute the flexible search query in a local view. The method executeInLocalView can be used to execute code within an isolated session.
final SearchResult<? extends ItemModel> searchResult =
        (SearchResult<? extends ItemModel>) sessionService.executeInLocalView(new SessionExecutionBody()
{
    @Override
    public Object execute()
    {
        sessionService.setAttribute(FlexibleSearch.DISABLE_RESTRICTIONS, Boolean.TRUE);
        return flexibleSearchService.search(query);
    }
});
Here you set DISABLE_RESTRICTIONS = true, which runs the query in the admin context (without restrictions).
Check this
Better, I would suggest you check which restriction exactly is being applied to your item type. You can simply check this in Backoffice/HMC.
Backoffice:
Go to System -> Personalization (SearchRestriction)
Search by Restricted Type
Check the Filter Query and analyse your item data based on that.
You can also check the Principal (UserGroup) to which the restriction is applied.
To confirm, just check by disabling the active flag.
The query in your code is (most likely) not running as the admin user, and in that case different search restrictions are applied to the query.
You can see that the original query is changed:
start DB logging (/hac -> Monitoring -> Database -> JDBC logging);
run the query from the code;
stop DB logging and check the log file.
More information: https://wiki.hybris.com/display/release5/Restrictions
In the /hac console the admin user is usually used, which is why the restrictions are not applied there.
As the statement looks OK to me, I'm going to go with visibility of the data. Are you able to see all the items as whichever user you are running the query as? In the HAC you would obviously be admin.
I am trying to use a similar example from the sample code found here.
My sample function is:
void query()
{
String nodeResult = "";
String rows = "";
String resultString;
String columnsString;
System.out.println("In query");
// START SNIPPET: execute
ExecutionEngine engine = new ExecutionEngine( graphDb );
ExecutionResult result;
try ( Transaction ignored = graphDb.beginTx() )
{
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// END SNIPPET: execute
// START SNIPPET: items
Iterator<Node> n_column = result.columnAs( "n" );
for ( Node node : IteratorUtil.asIterable( n_column ) )
{
// note: we're grabbing the name property from the node,
// not from the n.name in this case.
nodeResult = node + ": " + node.getProperty( "Name" );
System.out.println("In for loop");
System.out.println(nodeResult);
}
// END SNIPPET: items
// START SNIPPET: columns
List<String> columns = result.columns();
// END SNIPPET: columns
// the result is now empty, get a new one
result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
// START SNIPPET: rows
for ( Map<String, Object> row : result )
{
for ( Entry<String, Object> column : row.entrySet() )
{
rows += column.getKey() + ": " + column.getValue() + "; ";
System.out.println("nested");
}
rows += "\n";
}
// END SNIPPET: rows
resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
columnsString = columns.toString();
System.out.println(rows);
System.out.println(resultString);
System.out.println(columnsString);
System.out.println("leaving");
}
}
When I run this in the web console I get many results (as there are multiple nodes that have a Name attribute containing the pattern 79). Yet running this code returns no results. The debug print statements "In for loop" and "nested" never print either, which must mean no results are found in the iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable points to the same path the web console uses. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself, it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is called and how it shares the same DB handle:
package ContextEngine;
import ContextEngine.NeoHandle;
import java.util.LinkedList;
/*
* Class to handle streaming data from any coded source
*/
public class Streamer {
private NeoHandle myHandle;
private String contextType;
Streamer()
{
}
public void openStream(String contextType)
{
myHandle = new NeoHandle();
myHandle.createDb();
}
public void streamInput(String dataLine)
{
Context context = new Context();
/*
* get database instance
* write to database
* check for errors
* report errors & success
*/
System.out.println(dataLine);
//apply rules to data (make ContextRules do this, send type and string of data)
ContextRules contextRules = new ContextRules();
context = contextRules.processContextRules("Calls", dataLine);
//write data (using linked list from contextRules)
NeoProcessor processor = new NeoProcessor(myHandle);
processor.processContextData(context);
}
public void runQuery()
{
NeoProcessor processor = new NeoProcessor(myHandle);
processor.query();
}
public void closeStream()
{
/*
* close database instance
*/
myHandle.shutDown();
}
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j like any other JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
Statement stmt = conn.createStatement();
// the Cypher query is sent as the statement text (node id inlined here for simplicity)
ResultSet rs = stmt.executeQuery("start n=node(" + id + ") return id(n) as id");
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program (and it is not using the Neo4j REST bindings), you are not using the same directory, because two Neo4j databases cannot run against the same database directory simultaneously.
I'm using Neo4j 1.9.M01 with java-rest-binding 1.8.M07, and I have a problem with this code, which aims to get a node whose property "URL" is "ARREL" from a Neo4j database using the query language via REST. The problem seems to happen only inside a transaction, where it throws an exception; otherwise it works well:
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
Node nodearrel = null;
Transaction tx0 = graphDb.beginTx();
try {
    final String queryStringarrel = "START n=node(*) WHERE n.URL =~{URL} RETURN n";
    QueryResult<Map<String, Object>> retornar = queryEngine.query(queryStringarrel, MapUtil.map("URL", "ARREL"));
    for (Map<String, Object> row : retornar) {
        nodearrel = (Node) row.get("n");
        System.out.println("Arrel: " + nodearrel.getProperty("URL") + " id : " + nodearrel.getId());
    }
    tx0.success();
}
(...)
But an exception happens on every execution at the line that returns the QueryResult object: exception tx0: Error reading as JSON ''.
I have also tried to do it with the ExecutionEngine (inside a transaction):
ExecutionEngine engine = new ExecutionEngine( graphDb );
String ARREL = "ARREL";
ExecutionResult result = engine.execute("START n=node(*) WHERE n.URL =~{"+ARREL+"} RETURN n");
Iterator<Node> n_column = result.columnAs("n");
Node arrelat = (Node) n_column.next();
for ( Node node : IteratorUtil.asIterable( n_column ) )
(...)
But it also fails: n_column.next() returns a null object, which then throws an exception.
The problem is that I need to use transactions to optimize the queries, because otherwise it takes too much time to process all the queries I need to run. Should I try to combine several operations into one query to avoid using transactions?
Try adding single quotes:
START n=node(*) WHERE n.URL =~ '{URL}' RETURN n
Can you update your java-rest-binding to the latest version (1.8)? In between we had a version that automatically applied REST batch operations to places with transaction semantics.
So the transactions you see are not real transactions; they just record your operations, which are executed as batch REST operations on tx.success()/tx.finish().
Execute the queries within the transaction, but only access the results after the tx has finished; then your results will be there.
This is useful, for instance, to send many Cypher queries to the server in one go and have all the results available afterwards.
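A minimal sketch of that pattern, reusing the graphDb and queryEngine objects from your snippet (only a rough outline; the results are read only after the tx has finished):
Transaction tx = graphDb.beginTx();
QueryResult<Map<String, Object>> result;
try {
    // with the REST batch "transactions" the query is only recorded here
    // and sent to the server when the tx is finished
    result = queryEngine.query("START n=node(*) WHERE n.URL =~ {URL} RETURN n",
            MapUtil.map("URL", "ARREL"));
    tx.success();
} finally {
    tx.finish();
}
// iterate the result only now, after the batch has actually been executed
for (Map<String, Object> row : result) {
    Node n = (Node) row.get("n");
    System.out.println(n.getProperty("URL") + " id: " + n.getId());
}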
And yes, @ulkas, use parameters, but not like that:
START n=node(*) WHERE n.URL =~ {URL} RETURN n
params: { "URL" : "http://your.url" }
No quotes are necessary when using params, just like SQL prepared statements.