I have seen several questions about this error, but each solution is different because it is a so-called "syntax error". I use Oracle in production and Derby in development (extremely annoying, but what can I do?). When I run a certain SQL command that I created on Oracle, it works fine and does what it is supposed to (I am using Oracle SQL Developer). But when I run the same command in Derby, I encounter this error, no matter what I do:
WARN | SQL Error: 20000, SQLState: 42X01
ERROR | Syntax error: Encountered "WHERE" at line 94, column 6.
For the life of me I cannot figure out what is wrong. Here is my SQL command. It is a bit long and complicated:
CREATE VIEW BDPBCDBView AS SELECT
BDP_INSTITUTION_NAME,
BIC,
BDP_COUNTRY_NAME,
BDP_ISO_COUNTRY_CODE,
BDP_CITY,
BDP_NETWORK_CONNECTIVITY,
BDP_SERVICE_CODES,
BDP_ISTARGET,
BCDB_NAME,
BCDB_LAENDERKENNZEICHEN,
BCDB_AKTIVMERKMALBANK,
BCDB_AKTIVMERKMALLAND,
BCDB_AKTIVMERKMALBANKLAND,
BCDB_SWIFTKENNZEICHEN,
COUNTRYCODE,
ISBDP,
ISBCDB,
BCDB_ORT,
s1.BICS_RMA
FROM
(SELECT
bdp.bic,
bdp.institution_name AS bdp_institution_name,
bdp.country_name AS bdp_country_name,
bdp.iso_country_code AS bdp_iso_country_code,
bdp.city AS bdp_city,
bdp.network_connectivity AS bdp_network_connectivity,
bdp.service_codes as bdp_service_codes,
bdp.isTarget AS bdp_isTarget,
bcdb.name as bcdb_name,
bcdb.laenderKennzeichen as bcdb_laenderKennzeichen,
bcdb.aktivMerkmalBank AS bcdb_aktivMerkmalBank,
bcdb.aktivMerkmalLand AS bcdb_aktivMerkmalLand,
bcdb.aktivMerkmalBankLand AS bcdb_aktivMerkmalBankLand,
bcdb.swiftKennzeichen AS bcdb_swiftKennzeichen,
CASE
WHEN bcdb.laenderKennzeichen IS NOT NULL THEN bcdb.laenderKennzeichen
ELSE bdp.iso_country_code
END AS countryCode,
CASE
WHEN bdp.bic IS NOT NULL THEN 1
ELSE 0
END AS isbdp,
CASE
WHEN bcdb.bic IS NOT NULL THEN 1
ELSE 0
END AS isbcdb,
bcdb.ort AS bcdb_ort
FROM BDP bdp LEFT JOIN BCDB bcdb ON bdp.bic = bcdb.bic WHERE bdp.bic IS NOT NULL
UNION ALL SELECT
bcdb.bic,
bdp.institution_name AS bdp_institution_name,
bdp.country_name AS bdp_country_name,
bdp.iso_country_code AS bdp_iso_country_code,
bdp.city AS bdp_city,
bdp.network_connectivity AS bdp_network_connectivity,
bdp.service_codes as bdp_service_codes,
bdp.isTarget AS bdp_isTarget,
bcdb.name as bcdb_name,
bcdb.laenderKennzeichen as bcdb_laenderKennzeichen,
bcdb.aktivMerkmalBank AS bcdb_aktivMerkmalBank,
bcdb.aktivMerkmalLand AS bcdb_aktivMerkmalLand,
bcdb.aktivMerkmalBankLand AS bcdb_aktivMerkmalBankLand,
bcdb.swiftKennzeichen AS bcdb_swiftKennzeichen,
CASE
WHEN bcdb.laenderKennzeichen IS NOT NULL THEN bcdb.laenderKennzeichen
ELSE bdp.iso_country_code
END AS countryCode,
CASE
WHEN bdp.bic IS NOT NULL THEN 1
ELSE 0
END AS isbdp,
CASE
WHEN bcdb.bic IS NOT NULL THEN 1
ELSE 0
END AS isbcdb,
bcdb.ort AS bcdb_ort
FROM BDP bdp RIGHT JOIN BCDB bcdb ON bdp.bic = bcdb.bic WHERE bdp.bic IS NULL)
t1 LEFT JOIN ( SELECT * FROM
(
SELECT s1.BIC_CRSPNDT AS BICS_RMA FROM
(SELECT
rma.crspdt AS BIC_CRSPNDT,
rma.issr AS BIC_ISSR
From RMA
WHERE ((RMA.tp= 'Issued' OR RMA.tp = 'Received') AND RMA.RMASTS='Enabled' AND RMA.SVCNM='swift.fin') )s1
UNION
SELECT s1.BIC_ISSR AS BIC FROM (SELECT
rma.crspdt AS BIC_CRSPNDT,
rma.issr AS BIC_ISSR
FROM RMA
WHERE ((RMA.tp= 'Issued' OR RMA.tp = 'Received') AND RMA.RMASTS='Enabled' AND RMA.SVCNM='swift.fin') )s1 )
WHERE BICS_RMA IS NOT NULL
ORDER BY BICS_RMA) s1
ON (s1.BICS_RMA = substr(t1.BIC, 1,8))
The error occurs at the third-to-last line.
My Java code that reads and executes the script is:
@PersistenceContext
EntityManager em;

@PostConstruct
public void createViewIfNeeded() {
    if (FidaProfile.isActive(FidaProfile.DEVELOPMENT)) {
        em.createNativeQuery("DROP TABLE BDPBCDBView").executeUpdate();
        String command_1 = loadDevelopmentViewScript("DEV-DB/init_dev_view.sql"); // BDPBCDBView SQL script, built from 3 tables: BCDB, BDP and RMA
        em.createNativeQuery(command_1).executeUpdate();
    }
}
public void setEm(EntityManager em) {
this.em = em;
}
private String loadDevelopmentViewScript(String addressOfSQLScript) {
    // try-with-resources so the stream is closed even on failure
    try (InputStream stream = BDPBCDPViewGenerator.class.getClassLoader().getResourceAsStream(addressOfSQLScript);
         ByteArrayOutputStream result = new ByteArrayOutputStream()) {
        byte[] buffer = new byte[1024];
        int length;
        while ((length = stream.read(buffer)) != -1) {
            result.write(buffer, 0, length);
        }
        return result.toString("UTF-8");
    } catch (IOException e) {
        throw new FidaErrorCodeException(FidaErrorCode.UNEXPECTED_EXCEPTION,
                "Could NOT load Development-View-Script", e);
    }
}
As an aside, good formatting is vital for keeping track of what is going on in complex queries!
Since my previous answer was incorrect, what about changing the s1 subquery so that you unpivot rather than use UNION? Something like:
SELECT DISTINCT CASE WHEN dummy.id = 1 THEN r.bic_crspndt
WHEN dummy.id = 2 THEN r.bic_issr
END AS bics_rma
FROM (SELECT rma.crspdt AS bic_crspndt,
rma.issr AS bic_issr
FROM rma
WHERE (rma.tp = 'Issued' OR rma.tp = 'Received')
AND rma.rmasts = 'Enabled'
AND rma.svcnm = 'swift.fin') r
INNER JOIN (SELECT 1 id FROM dual UNION ALL
            SELECT 2 id FROM dual) dummy ON (dummy.id = 1 AND r.bic_crspndt IS NOT NULL)
                                         OR (dummy.id = 2 AND r.bic_issr IS NOT NULL);
Maybe if you were to do that, the Derby database would be able to cope with it?
N.B. I've used a manual UNPIVOT via a conditional cross join rather than the Oracle 11g UNPIVOT function since I don't know anything about Derby and UNPIVOT might not be supported there. Quite why you're being forced to use different database platforms between your live and dev environments is beyond me; sounds bonkers and potentially rather dangerous! I assume you've already tried flagging that!
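For clarity, the unpivot-then-deduplicate step the SQL above performs can also be mirrored in plain Java (purely illustrative; the `RmaRow` record and its field names are assumptions, not part of the actual schema):

```java
import java.util.*;
import java.util.stream.*;

public class UnpivotSketch {
    // One RMA row reduced to the two BIC columns of interest.
    record RmaRow(String bicCrspndt, String bicIssr) {}

    // "Unpivot" both BIC columns into a single distinct, non-null, sorted list,
    // mirroring what the UNION and conditional-join variants do in SQL.
    static List<String> bicsRma(List<RmaRow> rows) {
        return rows.stream()
                .flatMap(r -> Stream.of(r.bicCrspndt(), r.bicIssr()))
                .filter(Objects::nonNull)
                .distinct()
                .sorted()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<RmaRow> rows = List.of(
                new RmaRow("AAAAUS33", "BBBBDE55"),
                new RmaRow("AAAAUS33", null));
        System.out.println(bicsRma(rows)); // [AAAAUS33, BBBBDE55]
    }
}
```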
I have started a new job, and my first task is to improve the performance of code for which a user action currently sometimes takes 40 minutes. On analysis, I found that there is a parent/child/grandchild (and so on) tree relationship, and the current code implements a recursive method that is slow because it makes database calls recursively.
As an improvement, I want to hit the database once and fetch all the data in a single query.
Service layer code (this method calls itself recursively):
private void processRecommendedEvaluationMetaForReadyToExport(EvaluationMeta parentEvaluationMeta,
Set<EvaluationMeta> childEvaluationMetas,
Map<String, MCGEvaluationMetadata> mcgHsimContentVersionEvaluationMetadataMap) {
// get the list of child evaluation recommendations for the given parent evaluation
Map<String, List<MCGEvaluationMetaMaster>> hsimAndchildEvaluationDefinitionsMap = mcgEvaluationMetaDao.findRecommendedChildEvaluationMeta(parentEvaluationMeta.getId());
for (Map.Entry<String, List<MCGEvaluationMetaMaster>> childEvaluationDefinition : hsimAndchildEvaluationDefinitionsMap.entrySet()) {
if (childEvaluationDefinition.getValue() != null && !childEvaluationDefinition.getValue().isEmpty()) {
for (MCGEvaluationMetaMaster mcgEvaluationMetaMaster : childEvaluationDefinition.getValue()) {
MCGEvaluationMetadata mcgEvaluationMetadata = mcgHsimContentVersionEvaluationMetadataMap.get(mcgEvaluationMetaMaster.getHsim() + mcgEvaluationMetaMaster.getMcgContentVersion().getContentVersion());
// consider only evaluation definitions which are either published/disabled status to be marked as ready to export and also skip adding the
// parent evaluation meta as it will be marked as ready to export later
if (canMcgEvaluationBeMarkedAsReadyToExport(mcgEvaluationMetadata) && !childEvaluationMetas.contains(mcgEvaluationMetadata.getEvaluationMeta()) &&
!parentEvaluationMeta.getResource().getName().equals(mcgEvaluationMetadata.getEvaluationMeta().getResource().getName())) {
childEvaluationMetas.add(mcgEvaluationMetadata.getEvaluationMeta());
processRecommendedEvaluationMetaForReadyToExport(mcgEvaluationMetadata.getEvaluationMeta(), childEvaluationMetas, mcgHsimContentVersionEvaluationMetadataMap);
}
}
}
}
}
DAO Layer:
private static final String GET_RECOMMENDED_CHILD_EVALUATIONS =
"MCGEvaluationMetaRecommendation.getRecommendedChildEvaluations";
public Map<String, List<MCGEvaluationMetaMaster>> findRecommendedChildEvaluationMeta(final String evaluationMetaId) {
Map<String, List<MCGEvaluationMetaMaster>> recommendedChildGuidelineInfo = new HashMap<>();
if (evaluationMetaId != null) {
final Query query = getEntityManager().createNamedQuery(GET_RECOMMENDED_CHILD_EVALUATIONS);
query.setParameter(ASSESSMENT_META_ID, evaluationMetaId);
List<MCGEvaluationMetaRecommendation> resultList = query.getResultList(); // get the MCGEvaluationMetaRecommendation
// for the given parent evaluation meta id
if (resultList != null && !resultList.isEmpty()) {
for (MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation : resultList) {
populateRecommendedChildGuidelineInfo(mcgEvaluationMetaRecommendation, recommendedChildGuidelineInfo);
}
}
}
return recommendedChildGuidelineInfo;
}
private void populateRecommendedChildGuidelineInfo(MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation,
Map<String, List<MCGEvaluationMetaMaster>> recommendedChildGuidelineInfo){
if (mcgEvaluationMetaRecommendation.getParentEvaluationResponseDefinition() != null) {
List<MCGEvaluationMetaMaster> mcgEvaluationMetaMasterList;
String evaluationResponseDefinitionId = mcgEvaluationMetaRecommendation.getParentEvaluationResponseDefinition().getId();
MCGEvaluationMetaMaster mcgEvaluationMetaMaster = mcgEvaluationMetaRecommendation.getChildMCGEvaluationMetaMaster();
if (recommendedChildGuidelineInfo.get(evaluationResponseDefinitionId) != null) {
mcgEvaluationMetaMasterList = recommendedChildGuidelineInfo.get(evaluationResponseDefinitionId);
//check if there exists a list of recommended evaluation definitions for the evaluationResponseDefinitionId
// if so, check if the current recommended evaluation definition is already there in the list if not add it
// or create a new list of recommended evaluation definitions and add to it
if (mcgEvaluationMetaMasterList != null && !mcgEvaluationMetaMasterList.contains(mcgEvaluationMetaMaster)) {
mcgEvaluationMetaMasterList.add(mcgEvaluationMetaMaster);
}
} else {
mcgEvaluationMetaMasterList = new ArrayList<>();
mcgEvaluationMetaMasterList.add(mcgEvaluationMetaMaster);
recommendedChildGuidelineInfo.put(evaluationResponseDefinitionId, mcgEvaluationMetaMasterList);
}
}
}
Hibernate Query:
<query name="MCGEvaluationMetaRecommendation.getRecommendedChildEvaluations">
<![CDATA[
SELECT mcgEvaluationMetaRecommendation
FROM com.casenet.domain.evaluation.mcg.MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation
INNER JOIN mcgEvaluationMetaRecommendation.parentMCGEvaluationMetadata parentMCGEvaluationMeta
WHERE parentMCGEvaluationMeta.evaluationMeta.id = :evaluationMetaId
AND mcgEvaluationMetaRecommendation.obsolete = 0
AND parentMCGEvaluationMeta.obsolete = 0
]]>
</query>
Simplified TABLE structure below:
Table: *MCGEvaluationMetaRecommendation*
mcg_evaluation_meta_recommendation_id
obsolete
parent_evaluation_response_definition_id
child_mcg_evaluation_meta_master_id
parent_mcg_evaluation_metadata_id
Table: *MCGEvaluationMetadata*
mcg_evaluation_metadata_id
evaluation_meta_id
mcg_evaluation_meta_master_id
created_date
obsolete
Below is the query I have written to replace the recursive method, but something is wrong: the query keeps executing and doesn't complete even after 6-7 minutes.
WITH parent_child AS (
SELECT
meta.mcg_evaluation_metadata_id METADATA_ID,
meta.mcg_evaluation_meta_master_id META_MASTER_ID,
meta.evaluation_meta_id META_ID,
meta.obsolete META_OBSOLETE,
rec.mcg_evaluation_meta_recommendation_id REC_META_RECOMM_ID,
rec.parent_evaluation_response_definition_id REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
rec.child_mcg_evaluation_meta_master_id REC_CHILD_EVALUATION_META_MASTER_ID,
rec.parent_mcg_evaluation_metadata_id REC_PARENT_EVALUATION_METADATA_ID,
rec.obsolete REC_OBSOLETE
FROM
MCGevaluationMetaRecommendation rec,
MCGevaluationMetadata meta
WHERE
rec.parent_mcg_evaluation_metadata_id = meta.mcg_evaluation_metadata_id
),
generation AS (
SELECT
METADATA_ID,
META_MASTER_ID,
META_ID,
META_OBSOLETE,
REC_META_RECOMM_ID,
REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
REC_CHILD_EVALUATION_META_MASTER_ID,
REC_PARENT_EVALUATION_METADATA_ID,
REC_OBSOLETE,
0 AS level
FROM
parent_child child
WHERE child.META_ID = 'root-id-passed-as-query-param'
AND child.META_OBSOLETE = 0
AND child.REC_OBSOLETE = 0
UNION ALL
SELECT
child.METADATA_ID,
child.META_MASTER_ID,
child.META_ID,
child.META_OBSOLETE,
child.REC_META_RECOMM_ID,
child.REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
child.REC_CHILD_EVALUATION_META_MASTER_ID,
child.REC_PARENT_EVALUATION_METADATA_ID,
child.REC_OBSOLETE,
level+1 AS level
FROM
parent_child child
JOIN generation g
ON g.REC_CHILD_EVALUATION_META_MASTER_ID = child.META_MASTER_ID
)
SELECT *
FROM generation g
JOIN parent_child parent
ON g.REC_PARENT_EVALUATION_METADATA_ID = parent.METADATA_ID
ORDER BY level DESC
OPTION (MAXRECURSION 0);
Please can someone help me identify what is wrong with my query, or suggest some other way of improving performance in this scenario? If this query works, then I will handle the other logic on the Java side.
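For reference, the "fetch everything once, then recurse in memory" idea the question describes can be sketched in plain Java: load all parent-to-child edges with a single query, then walk the tree without further database round trips. This is illustrative only; the edge map would come from your DAO, and the ids are made up:

```java
import java.util.*;

public class TreeWalk {
    // Walk a parent -> children edge map breadth-first, starting from rootId.
    // Returns every reachable id; the visited set also guards against cycles,
    // which would otherwise make the walk (or a recursive CTE) run forever.
    static Set<String> collectDescendants(String rootId, Map<String, List<String>> edges) {
        Set<String> visited = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(rootId);
        while (!queue.isEmpty()) {
            String current = queue.poll();
            for (String child : edges.getOrDefault(current, List.of())) {
                if (visited.add(child)) {   // add() returns false if already seen
                    queue.add(child);
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<String, List<String>> edges = Map.of(
                "root", List.of("a", "b"),
                "a", List.of("c"),
                "c", List.of("root"));      // a cycle back to root is handled
        System.out.println(collectDescendants("root", edges)); // [a, b, c, root]
    }
}
```

A cycle in the recommendation data is one plausible reason the recursive CTE above never finishes; the visited-set check is the in-memory equivalent of a cycle guard.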
I think it would be easier for you to simply use a #temp table and a loop.
This way you get an idea of where the time is spent, how deep the rabbit hole goes, etc. It's also less overwhelming for the query engine, and in my experience it is hardly ever slower than a recursive CTE.
Pseudo code:
DECLARE @level int,
        @rowcount int

SET @level = 0

PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Starting up...'

SELECT level = @level,
       <fields you need>
  INTO #temp
  FROM <tables>
 WHERE <filters>

SELECT @rowcount = @@ROWCOUNT
PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Loaded ' + Convert(varchar, @rowcount) + ' rows for level ' + Convert(varchar, @level)

CREATE CLUSTERED INDEX idx0 ON #temp (level)

-- keep adding a 'new generation' of children as long as there are new 'parents'
WHILE @rowcount > 0
BEGIN
    SET @level = @level + 1

    INSERT #temp (level, <fields>)
    SELECT level = @level,
           <fields>
      FROM <tables> tbls
      JOIN #temp parent
        ON parent.level = @level - 1
       AND parent.<x> = tbls.<x>
       AND parent.<y> = tbls.<y>
           -- etc.
     WHERE <filters>

    SELECT @rowcount = @@ROWCOUNT
    PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Loaded ' + Convert(varchar, @rowcount) + ' rows for level ' + Convert(varchar, @level)

    UPDATE STATISTICS #temp
END

-- return results
SELECT * FROM #temp
The UPDATE STATISTICS might be overkill (read: premature optimization); it depends on how 'different' each level is when it comes to the number of children.
I am executing the following query from my web application and from the Access 2007 query wizard, and I am getting two different results.
SELECT R.Rept_Name, D.Dist_Name,S.State_Name FROM (tblReporter AS R LEFT JOIN tblDist AS D ON R.Dist_Id=D.Dist_Id) LEFT JOIN tblState AS S ON S.State_Id=R.State_Id WHERE R.Rept_Name LIKE '*Ra*' ORDER BY R.Rept_Name;
The result from the web application is 0 rows; from the query wizard, 2 rows. If I remove the WHERE condition, both results are the same. Please help me figure out what is wrong with the query. If any other info is required, please tell me.
Web application code ...
public DataTable getRept(string rept, string mobno)
{
DataTable dt = new DataTable();
using (OleDbConnection conn = new OleDbConnection(getConnection()))
{
using (OleDbCommand cmd = conn.CreateCommand())
{
cmd.CommandType = CommandType.Text;
cmd.CommandText = "SELECT R.Rept_Name, D.Dist_Name,S.State_Name FROM (tblReporter AS R LEFT JOIN tblDist AS D ON R.Dist_Id=D.Dist_Id) LEFT JOIN tblState AS S ON S.State_Id=R.State_Id WHERE R.Rept_Name LIKE '*" + rept + "*' ORDER BY R.Rept_Name;";
conn.Open();
using (OleDbDataReader sdr = cmd.ExecuteReader())
{
if (sdr.HasRows)
dt.Load(sdr);
}
}
}
return dt;
}
You are getting tripped up by the difference in LIKE wildcard characters between queries run in Access itself and queries run from an external application.
When running a query from within Access itself you need to use the asterisk as the wildcard character: LIKE '*Ra*'.
When running a query from an external application (like your C# app) you need to use the percent sign as the wildcard character: LIKE '%Ra%'.
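The mapping is mechanical, so it can be automated. As a language-neutral sketch (shown in Java; the helper name is hypothetical), translating the Access-UI pattern to the ANSI one is just a character swap:

```java
public class WildcardTranslator {
    // Translate the Access-UI LIKE wildcards (* and ?) to the ANSI ones
    // (% and _) that OLE DB / external applications expect.
    static String toAnsiLike(String accessPattern) {
        return accessPattern.replace('*', '%').replace('?', '_');
    }

    public static void main(String[] args) {
        System.out.println(toAnsiLike("*Ra*")); // %Ra%
    }
}
```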
Is there a way to build SQL in phases/stages using jOOQ? Something like:
DSLContext create = DSL.using(conn, SQLDialect.MYSQL);
DSL dsl = create.from(table("links"));
if( !StringUtils.isEmpty(place) ) { // place is specified, change the query
long placeId = getPlaceId();
if (placeId > 0) {
dsl = create.from(table("place_links"))
.join(table("links"))
.on(field("links.id").equal(field("place_links.link_id")))
.where(field("place_links.place_id").equal(placeId));
}
}
String sql = dsl.select(field("*"))
.orderBy("links.score")
.limit(1)
.getSQL();
The above won't compile, but I am looking for something along similar principles. I need to start with from, since the target table changes at runtime.
The requirement is that the final query changes at runtime depending on the values that are fed in.
SQL doesn't feel like a very composable language if you start constructing the SELECT statement right away. But if you think of the different clauses as being the dynamic building blocks, things immediately become a lot simpler. In your case:
Table<?> from = table("links");
Condition where = trueCondition();
if (!StringUtils.isEmpty(place)) {
long placeId = getPlaceId();
if (placeId > 0) {
from = from.join("place_links").on("links.id = place_links.link_id");
where = where.and("place_links.place_id = ?", placeId);
}
}
DSL.using(conn)
.selectFrom(from)
.where(where)
.orderBy(field("links.score"))
.limit(1)
.fetch();
The above assumes this static import:
import static org.jooq.impl.DSL.*;
More about how to build SQL statements dynamically with jOOQ is described here:
http://www.jooq.org/doc/latest/manual/sql-building/dynamic-sql
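The same "build the clauses as values, assemble once" idea can be seen with plain strings (illustrative only; this loses the type safety and dialect handling that make jOOQ preferable, and the table and column names are taken from the question):

```java
import java.util.*;

public class DynamicSqlSketch {
    // Build the FROM and WHERE fragments conditionally, then assemble once,
    // mirroring the jOOQ pattern above. A placeId <= 0 means "not specified".
    static String buildQuery(long placeId) {
        String from = "links";
        List<String> where = new ArrayList<>();
        if (placeId > 0) {
            from += " JOIN place_links ON links.id = place_links.link_id";
            where.add("place_links.place_id = " + placeId);
        }
        String sql = "SELECT * FROM " + from;
        if (!where.isEmpty()) {
            sql += " WHERE " + String.join(" AND ", where);
        }
        return sql + " ORDER BY links.score LIMIT 1";
    }

    public static void main(String[] args) {
        System.out.println(buildQuery(-1));
        // SELECT * FROM links ORDER BY links.score LIMIT 1
        System.out.println(buildQuery(42));
        // SELECT * FROM links JOIN place_links ON links.id = place_links.link_id
        //   WHERE place_links.place_id = 42 ORDER BY links.score LIMIT 1
    }
}
```

In real code, prefer the jOOQ version: concatenating values into SQL strings invites injection, which is exactly what `Condition` objects and bind parameters avoid.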
I have to run this flexible search query in a Java service class:
select sum({oe:totalPrice})
from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}}
where {or:versionID} is null and {or:orderType} in (8796093066999)
and {or:company} in (8796093710341)
and {or:pointOfSale} in (8796097413125)
and {oe:ecCode} in ('13','14')
and {or:yearSeason} in (8796093066981)
and {os:code} not in ('CANCELED', 'NOT_APPROVED')
When I perform this query in the hybris administration console I correctly obtain:
1164.00000000
In my Java service class I wrote this:
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
Map<String, Object> queryParameters) {
BigDecimal aggregatedValue = new BigDecimal(0);
final StringBuilder queryBuilder = new StringBuilder();
queryBuilder.append("select sum({oe:").append(total).append("})");
queryBuilder.append(
" from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk} join OrderEntry as oe on {or.pk}={oe.order}}");
queryBuilder.append(" where {or:versionID} is null");
if (queryParameters != null && !queryParameters.isEmpty()) {
appendWhereClausesToBuilder(queryBuilder, queryParameters);
}
queryBuilder.append(" and {os:code} not in ('");
queryBuilder.append(CustomerOrderStatus.CANCELED.getCode()).append("', ");
queryBuilder.append("'").append(CustomerOrderStatus.NOT_APPROVED.getCode()).append("')");
FlexibleSearchQuery query = new FlexibleSearchQuery(queryBuilder.toString(), queryParameters);
List<BigDecimal> result = Lists.newArrayList();
query.setResultClassList(Arrays.asList(BigDecimal.class));
result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
if (!result.isEmpty() && result.get(0) != null) {
aggregatedValue = result.get(0);
}
return aggregatedValue;
}
private void appendWhereClausesToBuilder(StringBuilder builder, Map<String, Object> params) {
if ((params == null) || (params.isEmpty()))
return;
for (String paramName : params.keySet()) {
builder.append(" and ");
if (paramName.equalsIgnoreCase("exitCollection")) {
builder.append("{oe:ecCode}").append(" in (?").append(paramName).append(")");
} else {
builder.append("{or:").append(paramName).append("}").append(" in (?").append(paramName).append(")");
}
}
}
The query string before the search(query).getResult() call is:
query: [select sum({oe:totalPrice}) from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}} where {or:versionID} is null
and {or:orderType} in (?orderType) and {or:company} in (?company)
and {or:pointOfSale} in (?pointOfSale) and {oe:ecCode} in (?exitCollection)
and {or:yearSeason} in (?yearSeason) and {os:code} not in ('CANCELED', 'NOT_APPROVED')],
query parameters: [{orderType=OrderTypeModel (8796093230839),
pointOfSale=B2BUnitModel (8796097413125), company=CompanyModel (8796093710341),
exitCollection=[13, 14], yearSeason=YearSeasonModel (8796093066981)}]
but after search(query) the result is [null].
Why? Where am I wrong in the Java code? Thanks.
In addition, if you want to disable restrictions in your Java code, you can do it like this:
@Autowired
private SearchRestrictionService searchRestrictionService;
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
Map<String, Object> queryParameters) {
searchRestrictionService.disableSearchRestrictions();
// You code here
searchRestrictionService.enableSearchRestrictions();
return aggregatedValue;
}
In the above code you disable the search restrictions and, after getting the search result, enable them again.
OR
You can use sessionService to execute flexible search query in Local View. The method executeInLocalView can be used to execute code within an isolated session.
(SearchResult<? extends ItemModel>) sessionService.executeInLocalView(new SessionExecutionBody()
{
@Override
public Object execute()
{
sessionService.setAttribute(FlexibleSearch.DISABLE_RESTRICTIONS, Boolean.TRUE);
return flexibleSearchService.search(query);
}
});
Here you are setting DISABLE_RESTRICTIONS = true, which will run the query in the admin context [without restrictions].
Better, I would suggest you check which restriction exactly is being applied to your item type. You can simply check in Backoffice/hMC:
Backoffice :
Go to System-> Personalization (SearchRestricion)
Search by Restricted Type
Check the filter query and analyse your item data based on it.
You can also check its principal (user group), on which the restriction is applied.
To confirm, just check by disabling the active flag.
The query in the code is (most likely) not running under the admin user, and in that case different search restrictions are applied to it. You can see that the original query is changed:
start DB logging (/hac -> Monitoring -> Database -> JDBC logging);
run the query from the code;
stop DB logging and check log file.
More information: https://wiki.hybris.com/display/release5/Restrictions
In the /hac console the admin user is usually used, and because of this, restrictions will not be applied.
As the statement looks OK to me, I'm going to go with visibility of the data. Are you able to see all the items as whatever user you are running the query as? In the hAC you would obviously be admin.
I'm using neo4j 1.9.M01 with java-rest-binding 1.8.M07, and I have a problem with this code, which aims to get a node with the property "URL" equal to "ARREL" from a neo4j database, using the query language via REST. The problem seems to happen only inside a transaction, where it throws an exception; otherwise it works well:
RestGraphDatabase graphDb = new RestGraphDatabase("http://localhost:7474/db/data");
RestCypherQueryEngine queryEngine = new RestCypherQueryEngine(graphDb.getRestAPI());
Node nodearrel = null;
Transaction tx0 = graphDb.beginTx();
try{
final String queryStringarrel = ("START n=node(*) WHERE n.URL =~{URL} RETURN n");
QueryResult<Map<String, Object>> retornar = queryEngine.query(queryStringarrel, MapUtil.map("URL","ARREL"));
for (Map<String,Object> row : retornar)
{
nodearrel = (Node)row.get("n");
System.out.println("Arrel: "+nodearrel.getProperty("URL")+" id : "+nodearrel.getId());
}
tx0.success();
}
(...)
But an exception happens on every execution at the line that returns the QueryResult object: exception tx0: Error reading as JSON ''.
I have also tried to do it with the ExecutionEngine (inside a transaction):
ExecutionEngine engine = new ExecutionEngine( graphDb );
String ARREL = "ARREL";
ExecutionResult result = engine.execute("START n=node(*) WHERE n.URL =~{"+ARREL+"} RETURN n");
Iterator<Node> n_column = result.columnAs("n");
Node arrelat = (Node) n_column.next();
for ( Node node : IteratorUtil.asIterable( n_column ) )
(...)
But it also fails at n_column.next(), returning a null object that throws an exception.
The problem is that I need to use transactions to optimize the queries; otherwise, processing all the queries I need takes too much time. Should I try to join several operations into one query, to avoid using transactions?
Try adding single quotes:
START n=node(*) WHERE n.URL =~ '{URL}' RETURN n
Can you update your java-rest-binding to the latest version (1.8)? In between, we had a version that automatically applied REST batch operations to places with transaction semantics.
So the transactions you see are not real transactions; they just record your operations to be executed as batched REST operations on tx.success/finish.
Execute the queries within the transaction, but only access the results after the tx is finished; then your results will be there.
This is useful, for instance, to send many cypher queries to the server in one go and have the results all available afterwards.
And yes, @ulkas, use parameters, but not like that:
START n=node(*) WHERE n.URL =~ {URL} RETURN n
params: { "URL" : "http://your.url" }
No quotes are necessary when using params, just like SQL prepared statements.
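One detail worth noting (an assumption worth double-checking against the Cypher docs for your version): the right-hand side of `=~` is a regular expression that must match the whole value, which in Java terms behaves like String.matches:

```java
public class RegexNote {
    public static void main(String[] args) {
        // Cypher's  n.URL =~ {URL}  treats the parameter as a regex, so the
        // parameter "ARREL" matches only the exact string "ARREL".
        System.out.println("ARREL".matches("ARREL"));    // true
        System.out.println("ARREL2".matches("ARREL"));   // false
        System.out.println("ARREL2".matches("ARREL.*")); // true
    }
}
```

So if an exact lookup is all that is needed, a plain equality (`WHERE n.URL = {URL}`) avoids the regex machinery entirely.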