ErrorCounter Unexpected Token where - Hibernate - java

I am using JPA for queries. My query is this:
public PartsItem getAvailablePartsItem(String search) {
    Session s = sessionFactory.openSession();
    PartsItem pi;
    pi = s.createQuery("from PartsItem where lower(serialNumber) like lower(:serialNumber) and where available = true", PartsItem.class)
            .setParameter("serialNumber", '%' + search + '%')
            .list().get(0);
    s.close();
    return pi;
}
I get an error reporting unexpected tokens at "where" and "available". I want both conditions fulfilled, but I keep getting these errors. Could it be an issue with the "and"?

You shouldn't have the final where keyword, so removing that should make it parse successfully.
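For reference, the same query with the duplicate keyword removed (everything else unchanged):
pi = s.createQuery("from PartsItem where lower(serialNumber) like lower(:serialNumber) and available = true", PartsItem.class)
        .setParameter("serialNumber", '%' + search + '%')
        .list().get(0);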


Call transactional method Play Java JPA Hibernate

I have 2 databases, one is MySQL and the other is PostgreSQL.
I tried to get PostgreSQL data from a MySQL transactional method.
@Transactional(value = "pg")
public List<String> getSubordinate(){
    Query q1 = JPA.em().createNativeQuery("select vrs.subordinate_number, vrs.superior_number\n" +
            "from view_reporting_structure vrs\n" +
            "where vrs.superior_number = :personel_number");
    q1.setParameter("personel_number","524261");
    List<String> me = q1.getResultList();
    return me;
}
which is called from another method:
@Transactional
public Result getOpenRequestList(){
    Subordinate subordinate = new Subordinate();
    List<String> subordinateData = subordinate.getSubordinate();
    ....
}
I got this error:
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Table 'db_hcm.view_reporting_structure' doesn't exist
So my PostgreSQL method is treated as a MySQL transaction, and the view does not exist in the MySQL database. How do I get data from a different persistence unit within one method?
I have never done this (with different databases), but I guess the following may work.
For example, you have the following data source definitions in application.conf:
# MySQL
db.mysql.driver=com.mysql.jdbc.Driver
... the rest of the settings for db.mysql
# PostgreSQL
db.postgre.driver=org.postgresql.Driver
... the rest of the settings for db.postgre
Instead of using the @Transactional annotation, manage the transaction explicitly and use the JPA.withTransaction API:
private static final String MYSQL_DB = "mysql";
private static final String POSTGRE_DB = "postgre";

public List<String> getSubordinate() {
    return JPA.withTransaction(MYSQL_DB, true /* read-only flag */,
        () -> {
            Query q1 = JPA.em().createNativeQuery("select vrs.subordinate_number, vrs.superior_number\n" +
                    "from view_reporting_structure vrs\n" +
                    "where vrs.superior_number = :personel_number");
            q1.setParameter("personel_number","524261");
            List<String> me = q1.getResultList();
            return me;
        });
}

public Result getOpenRequestList(){
    return JPA.withTransaction(POSTGRE_DB, true /* read-only flag */,
        () -> {
            Subordinate subordinate = new Subordinate();
            List<String> subordinateData = subordinate.getSubordinate();
            ....
        });
}
Note: I prefer to always use withTransaction, since it allows better control of the unhappy flow. You should wrap the call with try-catch. If JPA throws a runtime exception on commit, you can do proper error handling. When using the @Transactional annotation, the commit takes place after the controller has finished and you cannot handle the error.
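For example, the error handling could look roughly like this (a sketch only; getSubordinateSafely is an illustrative name, the query is a simplified version of the one above, and depending on your Play version withTransaction may declare throws Throwable, hence the broad catch):
public List<String> getSubordinateSafely() {
    try {
        return JPA.withTransaction(MYSQL_DB, true /* read-only flag */,
                () -> {
                    Query q1 = JPA.em().createNativeQuery(
                            "select vrs.subordinate_number from view_reporting_structure vrs");
                    return q1.getResultList();
                });
    } catch (Throwable t) {
        // the commit (or the query itself) failed: handle it here instead of letting
        // the framework commit after the controller has already returned
        return java.util.Collections.emptyList();
    }
}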

Flexible search with parameters return null value

I have to do this flexible search query in a service Java class:
select sum({oe:totalPrice})
from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}}
where {or:versionID} is null and {or:orderType} in (8796093066999)
and {or:company} in (8796093710341)
and {or:pointOfSale} in (8796097413125)
and {oe:ecCode} in ('13','14')
and {or:yearSeason} in (8796093066981)
and {os:code} not in ('CANCELED', 'NOT_APPROVED')
When I perform this query in the hybris administration console I correctly obtain:
1164.00000000
In my Java service class I wrote this:
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    BigDecimal aggregatedValue = new BigDecimal(0);
    final StringBuilder queryBuilder = new StringBuilder();
    queryBuilder.append("select sum({oe:").append(total).append("})");
    queryBuilder.append(
            " from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk} join OrderEntry as oe on {or.pk}={oe.order}}");
    queryBuilder.append(" where {or:versionID} is null");
    if (queryParameters != null && !queryParameters.isEmpty()) {
        appendWhereClausesToBuilder(queryBuilder, queryParameters);
    }
    queryBuilder.append(" and {os:code} not in ('");
    queryBuilder.append(CustomerOrderStatus.CANCELED.getCode()).append("', ");
    queryBuilder.append("'").append(CustomerOrderStatus.NOT_APPROVED.getCode()).append("')");
    FlexibleSearchQuery query = new FlexibleSearchQuery(queryBuilder.toString(), queryParameters);
    List<BigDecimal> result = Lists.newArrayList();
    query.setResultClassList(Arrays.asList(BigDecimal.class));
    result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
    if (!result.isEmpty() && result.get(0) != null) {
        aggregatedValue = result.get(0);
    }
    return aggregatedValue;
}

private void appendWhereClausesToBuilder(StringBuilder builder, Map<String, Object> params) {
    if ((params == null) || (params.isEmpty()))
        return;
    for (String paramName : params.keySet()) {
        builder.append(" and ");
        if (paramName.equalsIgnoreCase("exitCollection")) {
            builder.append("{oe:ecCode}").append(" in (?").append(paramName).append(")");
        } else {
            builder.append("{or:").append(paramName).append("}").append(" in (?").append(paramName).append(")");
        }
    }
}
The query string before the search(query).getResult() function is:
query: [select sum({oe:totalPrice}) from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}} where {or:versionID} is null
and {or:orderType} in (?orderType) and {or:company} in (?company)
and {or:pointOfSale} in (?pointOfSale) and {oe:ecCode} in (?exitCollection)
and {or:yearSeason} in (?yearSeason) and {os:code} not in ('CANCELED', 'NOT_APPROVED')],
query parameters: [{orderType=OrderTypeModel (8796093230839),
pointOfSale=B2BUnitModel (8796097413125), company=CompanyModel (8796093710341),
exitCollection=[13, 14], yearSeason=YearSeasonModel (8796093066981)}]
but after search(query) the result is [null].
Why? Where am I wrong in the Java code? Thanks.
In addition, if you want to disable restrictions in your Java code, you can do it like this:
@Autowired
private SearchRestrictionService searchRestrictionService;

private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    searchRestrictionService.disableSearchRestrictions();
    // your code here
    searchRestrictionService.enableSearchRestrictions();
    return aggregatedValue;
}
In the above code, you disable the search restrictions, and after getting the search result you enable them again.
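To make sure the restrictions are re-enabled even if the search throws, the disable/enable pair can be wrapped in try/finally (a sketch using the same service and query as above):
searchRestrictionService.disableSearchRestrictions();
try {
    result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
} finally {
    // always restore the restrictions, even if the search fails
    searchRestrictionService.enableSearchRestrictions();
}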
OR
You can use the sessionService to execute the flexible search query in a local view. The method executeInLocalView can be used to execute code within an isolated session.
(SearchResult<? extends ItemModel>) sessionService.executeInLocalView(new SessionExecutionBody()
{
    @Override
    public Object execute()
    {
        sessionService.setAttribute(FlexibleSearch.DISABLE_RESTRICTIONS, Boolean.TRUE);
        return flexibleSearchService.search(query);
    }
});
Here you are setting DISABLE_RESTRICTIONS = true, which will run the query in the admin context [without restrictions].
Check this:
Better, I would suggest you check which restriction exactly is being applied to your item type. You can simply check it in Backoffice/HMC.
Backoffice:
Go to System -> Personalization (SearchRestriction)
Search by Restricted Type
Check the Filter Query and analyse your item data based on it.
You can also check the Principal (UserGroup) to which the restriction is applied.
To confirm, just check by disabling the active flag.
The query in the code is (most likely) not running under the admin user.
In that case, different search restrictions are applied to the query.
You can see that the original query is changed:
start DB logging (/hac -> Monitoring -> Database -> JDBC logging);
run the query from the code;
stop DB logging and check the log file.
More information: https://wiki.hybris.com/display/release5/Restrictions
In the /hac console the admin user is usually used, and because of this the restrictions are not applied.
As the statement looks OK to me, I'm going to go with visibility of the data. Are you able to see all the items as whatever user you are running the query as? In the hac you would obviously be admin.

Neo4j ExecutionEngine does not return valid results

Trying to use a similar example from the sample code found here
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this in the web console I get many results (as there are multiple nodes that have a Name attribute containing the pattern 79). Yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean that no results are found in the Iterator, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query() in the same function that creates my data, I get the correct results. If I run the query by itself it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called and sharing the same DBHandle
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {
    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        // apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        // write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
ResultSet rs = conn.executeQuery("start n=node({id}) return id(n) as id", map("id", id));
while(rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it could be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database (a minimal sketch combining these first two checks follows this list).
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program, using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program (and it is not using the Neo4j REST bindings), you are not working against the same directory, since two Neo4j instances cannot run against the same database directory simultaneously.
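A minimal sketch of those first two checks, assuming the same embedded graphDb instance and the ExecutionEngine API used in the question:
// Sanity check: dump everything the embedded database can see.
ExecutionEngine engine = new ExecutionEngine( graphDb );
try ( Transaction ignored = graphDb.beginTx() )
{
    // no WHERE clause, no regex: just return every node
    System.out.println( engine.execute( "MATCH (n) RETURN n" ).dumpToString() );
}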

Exception using QueryEngine inside a Transaction

I'm using the neo4j 1.9.M01 version with the java-rest-binding 1.8.M07, and I have a problem with this code, which aims to get a node with the property "URL" equal to "ARREL" from a neo4j database, using the query language via REST. The problem seems to happen only inside a transaction, throwing an exception, but otherwise it works well:
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
Node nodearrel = null;
Transaction tx0 = gds.beginTx();
try{
    final String queryStringarrel = ("START n=node(*) WHERE n.URL =~{URL} RETURN n");
    QueryResult<Map<String, Object>> retornar = queryEngine.query(queryStringarrel, MapUtil.map("URL","ARREL"));
    for (Map<String,Object> row : retornar)
    {
        nodearrel = (Node)row.get("n");
        System.out.println("Arrel: "+nodearrel.getProperty("URL")+" id : "+nodearrel.getId());
    }
    tx0.success();
}
(...)
But an exception happens ("exception tx0: Error reading as JSON ''") on every execution, at the line that returns the QueryResult object.
I have also tried to do it with the ExecutionEngine (inside a transaction):
ExecutionEngine engine = new ExecutionEngine( graphDb );
String ARREL = "ARREL";
ExecutionResult result = engine.execute("START n=node(*) WHERE n.URL =~{"+ARREL+"} RETURN n");
Iterator<Node> n_column = result.columnAs("n");
Node arrelat = (Node) n_column.next();
for ( Node node : IteratorUtil.asIterable( n_column ) )
(...)
But it also fails at n_column.next(), returning a null object that throws an exception.
The problem is that I need to use transactions to optimize the queries, because otherwise it takes too much time to process all the queries I need to do. Should I try to join several operations into one query, to avoid using transactions?
Try adding single quotes:
START n=node(*) WHERE n.URL =~ '{URL}' RETURN n
Can you update your java-rest-binding to the latest version (1.8)? In the meantime we had a version that automatically applied REST batch operations to places with transaction semantics.
So the transactions you see are not real transactions; they just record your operations to be executed as batch REST operations on tx.success/finish.
Execute the queries within the transaction, but only access the results after the tx is finished. Then your results will be there.
This is for instance useful to send many cypher queries in one go to the server and have the results available all in one go afterwards.
And yes, @ulkas, use parameters, but not like that:
START n=node(*) WHERE n.URL =~ {URL} RETURN n
params: { "URL" : "http://your.url" }
No quotes are necessary when using params, just like SQL prepared statements.
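Putting the two points together, a sketch reusing the graphDb, queryEngine and MapUtil already used in the question (the query runs inside the transaction, and the results are only read after tx.finish()):
Transaction tx = graphDb.beginTx();
QueryResult<Map<String, Object>> retornar;
try {
    retornar = queryEngine.query("START n=node(*) WHERE n.URL =~ {URL} RETURN n",
            MapUtil.map("URL", "ARREL"));
    tx.success();
} finally {
    tx.finish();
}
// with the REST batching described above, the results become available here
for (Map<String, Object> row : retornar) {
    System.out.println(row.get("n"));
}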

problem in determining whether second level cache is working in hibernate

I am trying to use ehcache in my project. I have specified the following properties in the Hibernate config file:
config.setProperty("hibernate.cache.provider_class","org.hibernate.cache.EhCacheProvider");
config.setProperty("hibernate.cache.provider_configuration_file_resource_path","ehcache.xml");
config.setProperty("hibernate.cache.use_second_level_cache","true");
config.setProperty("hibernate.cache.use_query_cache","true");
Now I am still not sure whether the results are coming from the DB or from the cache.
I looked around and found "Hibernate second level cache - print result", where the person suggests the HitCount/MissCount APIs.
However, when I tried using them, the hit count and miss count always returned 0. Here's my code:
String rName = "org.hibernate.cache.UpdateTimestampsCache";
Statistics stat = HibernateHelper.getInstance().getFactory().getStatistics();
long oldMissCount = stat.getSecondLevelCacheStatistics(rName).getMissCount();
long oldHitCount = stat.getSecondLevelCacheStatistics(rName).getHitCount();
UserDAO user = new UserDAO();
user.read(new Long(1));
long newMissCount = stat.getSecondLevelCacheStatistics(rName).getMissCount();
long newHitCount = stat.getSecondLevelCacheStatistics(rName).getHitCount();
if(oldHitCount+1 == newHitCount && oldMissCount+1 == newMissCount) {
    System.out.println("came from DB");
} else if(oldHitCount+1 == newHitCount && oldMissCount == newMissCount) {
    System.out.println("came from cache");
}
Please let me know if I am using it wrong, and what the rName (region name) should be in this case.
Is there any other way of determining whether the second-level cache is working?
Thanks
You need to enable statistics collection:
config.setProperty("hibernate.generate_statistics", "true");
