Performance improvement: replacing a recursive method call with a recursive SQL query - java

I have started a new job and my first task is to improve the performance of code for which a user action currently sometimes takes 40 minutes. On analysis I found that there is a parent - child - grandchildren - ... tree-like relationship, and the current code implements a recursive method that takes time because network calls to the database are made recursively.
As an improvement, I want to hit the database once and fetch all the data recursively in a single query.
Service layer code (this method calls itself recursively):
private void processRecommendedEvaluationMetaForReadyToExport(EvaluationMeta parentEvaluationMeta,
        Set<EvaluationMeta> childEvaluationMetas,
        Map<String, MCGEvaluationMetadata> mcgHsimContentVersionEvaluationMetadataMap) {
    // get the list of child evaluation recommendations for the given parent evaluation
    Map<String, List<MCGEvaluationMetaMaster>> hsimAndchildEvaluationDefinitionsMap =
            mcgEvaluationMetaDao.findRecommendedChildEvaluationMeta(parentEvaluationMeta.getId());
    for (Map.Entry<String, List<MCGEvaluationMetaMaster>> childEvaluationDefinition
            : hsimAndchildEvaluationDefinitionsMap.entrySet()) {
        List<MCGEvaluationMetaMaster> childDefinitions = childEvaluationDefinition.getValue();
        if (childDefinitions != null && !childDefinitions.isEmpty()) {
            for (MCGEvaluationMetaMaster mcgEvaluationMetaMaster : childDefinitions) {
                MCGEvaluationMetadata mcgEvaluationMetadata = mcgHsimContentVersionEvaluationMetadataMap
                        .get(mcgEvaluationMetaMaster.getHsim()
                                + mcgEvaluationMetaMaster.getMcgContentVersion().getContentVersion());
                // consider only evaluation definitions in published/disabled status to be marked as
                // ready to export; also skip adding the parent evaluation meta, as it will be marked
                // as ready to export later
                if (canMcgEvaluationBeMarkedAsReadyToExport(mcgEvaluationMetadata)
                        && !childEvaluationMetas.contains(mcgEvaluationMetadata.getEvaluationMeta())
                        && !parentEvaluationMeta.getResource().getName()
                                .equals(mcgEvaluationMetadata.getEvaluationMeta().getResource().getName())) {
                    childEvaluationMetas.add(mcgEvaluationMetadata.getEvaluationMeta());
                    processRecommendedEvaluationMetaForReadyToExport(mcgEvaluationMetadata.getEvaluationMeta(),
                            childEvaluationMetas, mcgHsimContentVersionEvaluationMetadataMap);
                }
            }
        }
    }
}
DAO Layer:
private static final String GET_RECOMMENDED_CHILD_EVALUATIONS =
        "MCGEvaluationMetaRecommendation.getRecommendedChildEvaluations";

public Map<String, List<MCGEvaluationMetaMaster>> findRecommendedChildEvaluationMeta(final String evaluationMetaId) {
    Map<String, List<MCGEvaluationMetaMaster>> recommendedChildGuidelineInfo = new HashMap<>();
    if (evaluationMetaId != null) {
        final Query query = getEntityManager().createNamedQuery(GET_RECOMMENDED_CHILD_EVALUATIONS);
        query.setParameter(EVALUATION_META_ID, evaluationMetaId);
        // get the MCGEvaluationMetaRecommendation entries for the given parent evaluation meta id
        List<MCGEvaluationMetaRecommendation> resultList = query.getResultList();
        if (resultList != null && !resultList.isEmpty()) {
            for (MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation : resultList) {
                populateRecommendedChildGuidelineInfo(mcgEvaluationMetaRecommendation, recommendedChildGuidelineInfo);
            }
        }
    }
    return recommendedChildGuidelineInfo;
}
private void populateRecommendedChildGuidelineInfo(MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation,
        Map<String, List<MCGEvaluationMetaMaster>> recommendedChildGuidelineInfo) {
    if (mcgEvaluationMetaRecommendation.getParentEvaluationResponseDefinition() != null) {
        String evaluationResponseDefinitionId =
                mcgEvaluationMetaRecommendation.getParentEvaluationResponseDefinition().getId();
        MCGEvaluationMetaMaster mcgEvaluationMetaMaster =
                mcgEvaluationMetaRecommendation.getChildMCGEvaluationMetaMaster();
        // if a list of recommended evaluation definitions already exists for the
        // evaluationResponseDefinitionId, add the current one unless it is already in the list;
        // otherwise create a new list for that id
        List<MCGEvaluationMetaMaster> mcgEvaluationMetaMasterList =
                recommendedChildGuidelineInfo.get(evaluationResponseDefinitionId);
        if (mcgEvaluationMetaMasterList != null) {
            if (!mcgEvaluationMetaMasterList.contains(mcgEvaluationMetaMaster)) {
                mcgEvaluationMetaMasterList.add(mcgEvaluationMetaMaster);
            }
        } else {
            mcgEvaluationMetaMasterList = new ArrayList<>();
            mcgEvaluationMetaMasterList.add(mcgEvaluationMetaMaster);
            recommendedChildGuidelineInfo.put(evaluationResponseDefinitionId, mcgEvaluationMetaMasterList);
        }
    }
}
Hibernate Query:
<query name="MCGEvaluationMetaRecommendation.getRecommendedChildEvaluations">
<![CDATA[
SELECT mcgEvaluationMetaRecommendation
FROM com.casenet.domain.evaluation.mcg.MCGEvaluationMetaRecommendation mcgEvaluationMetaRecommendation
INNER JOIN mcgEvaluationMetaRecommendation.parentMCGEvaluationMetadata parentMCGEvaluationMeta
WHERE parentMCGEvaluationMeta.evaluationMeta.id = :evaluationMetaId
AND mcgEvaluationMetaRecommendation.obsolete = 0
AND parentMCGEvaluationMeta.obsolete = 0
]]>
</query>
Simplified TABLE structure below:
Table: MCGEvaluationMetaRecommendation
mcg_evaluation_meta_recommendation_id
obsolete
parent_evaluation_response_definition_id
child_mcg_evaluation_meta_master_id
parent_mcg_evaluation_metadata_id
Table: MCGEvaluationMetadata
mcg_evaluation_metadata_id
evaluation_meta_id
mcg_evaluation_meta_master_id
created_date
obsolete
Below is the query I have written to substitute for the recursive method, but something is wrong: the query keeps executing and doesn't complete even after 6-7 minutes.
WITH parent_child AS (
SELECT
meta.mcg_evaluation_metadata_id METADATA_ID,
meta.mcg_evaluation_meta_master_id META_MASTER_ID,
meta.evaluation_meta_id META_ID,
meta.obsolete META_OBSOLETE,
rec.mcg_evaluation_meta_recommendation_id REC_META_RECOMM_ID,
rec.parent_evaluation_response_definition_id REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
rec.child_mcg_evaluation_meta_master_id REC_CHILD_EVALUATION_META_MASTER_ID,
rec.parent_mcg_evaluation_metadata_id REC_PARENT_EVALUATION_METADATA_ID,
rec.obsolete REC_OBSOLETE
FROM
MCGevaluationMetaRecommendation rec,
MCGevaluationMetadata meta
WHERE
rec.parent_mcg_evaluation_metadata_id = meta.mcg_evaluation_metadata_id
),
generation AS (
SELECT
METADATA_ID,
META_MASTER_ID,
META_ID,
META_OBSOLETE,
REC_META_RECOMM_ID,
REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
REC_CHILD_EVALUATION_META_MASTER_ID,
REC_PARENT_EVALUATION_METADATA_ID,
REC_OBSOLETE,
0 AS level
FROM
parent_child child
WHERE child.META_ID = 'root-id-passed-as-query-param'
AND child.META_OBSOLETE = 0
AND child.REC_OBSOLETE = 0
UNION ALL
SELECT
child.METADATA_ID,
child.META_MASTER_ID,
child.META_ID,
child.META_OBSOLETE,
child.REC_META_RECOMM_ID,
child.REC_PARENT_EVALUATION_RESPONSE_DEF_ID,
child.REC_CHILD_EVALUATION_META_MASTER_ID,
child.REC_PARENT_EVALUATION_METADATA_ID,
child.REC_OBSOLETE,
level+1 AS level
FROM
parent_child child
JOIN generation g
ON g.REC_CHILD_EVALUATION_META_MASTER_ID = child.META_MASTER_ID
)
SELECT *
FROM generation g
JOIN parent_child parent
ON g.REC_PARENT_EVALUATION_METADATA_ID = parent.METADATA_ID
ORDER BY level DESC
OPTION (MAXRECURSION 0);
Can someone please help me identify what is wrong with my query, or suggest some other way of improving performance in this scenario? If this query works, then I will handle the rest of the logic on the Java side.

I think it would be easier for you to simply use a #temp table and a loop.
This way you get an idea of where the time is spent, how deep the rabbit hole goes, etc. It's also less overwhelming for the query engine, and in my experience it is hardly ever slower than a recursive CTE.
Pseudo code:
DECLARE @level int,
        @rowcount int

SET @level = 0
PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Starting up...'

SELECT level = @level,
       <fields you need>
  INTO #temp
  FROM <tables>
 WHERE <filters>

SELECT @rowcount = @@ROWCOUNT
PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Loaded ' + Convert(varchar, @rowcount) + ' rows for level ' + Convert(varchar, @level)

CREATE CLUSTERED INDEX idx0 ON #temp (level)

-- keep adding a 'new generation' of children as long as there are new 'parents'
WHILE @rowcount > 0
BEGIN
    SET @level = @level + 1

    INSERT #temp (level, <fields>)
    SELECT level = @level,
           <fields>
      FROM <tables> tbls
      JOIN #temp parent
        ON parent.level = @level - 1
       AND parent.<x> = tbls.<x>
       AND parent.<y> = tbls.<y>
       -- etc.
     WHERE <filters>

    SELECT @rowcount = @@ROWCOUNT
    PRINT Convert(varchar, CURRENT_TIMESTAMP) + ' - Loaded ' + Convert(varchar, @rowcount) + ' rows for level ' + Convert(varchar, @level)

    UPDATE STATISTICS #temp
END

-- return results
SELECT * FROM #temp
The UPDATE STATISTICS might be overkill (read: premature optimization); it depends on how different each level is when it comes to the number of children.
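The same level-by-level idea can also be applied on the Java side: instead of recursing once per node, loop once per generation and fetch all children of the current frontier in a single batched DAO call. A minimal sketch — the batched lookup function and all names are hypothetical, not from the code base above:

```java
import java.util.*;
import java.util.function.Function;

public class LevelWalker {
    /**
     * Collects all descendants of a root, one generation at a time: one lookup
     * call per level instead of one per node. 'childrenOf' stands in for a DAO
     * method that fetches the children of a whole set of parents in one query.
     */
    public static Set<String> collectDescendants(
            String rootId, Function<Set<String>, Map<String, List<String>>> childrenOf) {
        Set<String> seen = new LinkedHashSet<>();   // doubles as a cycle guard
        Set<String> frontier = new HashSet<>();
        frontier.add(rootId);
        while (!frontier.isEmpty()) {
            Map<String, List<String>> byParent = childrenOf.apply(frontier);
            Set<String> next = new HashSet<>();
            for (List<String> children : byParent.values()) {
                for (String child : children) {
                    // skip the root itself, and expand each node only once
                    if (!child.equals(rootId) && seen.add(child)) {
                        next.add(child);
                    }
                }
            }
            frontier = next;
        }
        return seen;
    }
}
```

This keeps the database round trips proportional to the tree depth rather than the node count, and the `seen` set plays the role of the `childEvaluationMetas.contains` check above.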

Neo4j display subgraph based on multiple paths

I want to display a subgraph in Neo4j (COMMUNITY EDITION on localhost) based on multiple paths. The paths are the result of a custom traversalDescription() with a special evaluate(Path path). The intention was to ignore a special sequence of relationships and nodes (details to sequence). As far as I know it's not possible in a Cypher query.
The result looks like this:
(268911)
(268911)<--[REL1,151]--(276650)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)--[REL2,2923]-->(278)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)--[REL2,2923]-->(278)<--[REL2,1034]--(9)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)--[REL2,2923]-->(278)<--[REL2,1034]--(9)--[REL2,2943]-->(6)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)--[REL2,2923]-->(278)<--[REL2,1034]--(9)--[REL2,1040]-->(396685)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,749]--(259466)<--[REL1,230]--(281)--[REL2,2923]-->(278)<--[REL2,1034]--(9)--[REL2,1040]-->(396685)<--[REL2,3047]--(396687)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)--[REL2,2959]-->(255)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)--[REL2,2959]-->(255)<--[REL2,379]--(142)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)--[REL2,2959]-->(255)<--[REL2,379]--(142)--[REL2,2892]-->(139)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)--[REL2,2959]-->(255)<--[REL2,379]--(142)--[REL2,626]-->(396840)
(268911)<--[REL1,151]--(276650)<--[REL2,715]--(276651)<--[REL1,34]--(259461)<--[REL2,559]--(259465)<--[REL1,294]--(257)--[REL2,2959]-->(255)<--[REL2,379]--(142)--[REL2,626]-->(396840)<--[REL2,2988]--(396843)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)--[REL2,2922]-->(279)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)--[REL2,2922]-->(279)<--[REL2,1032]--(11)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)--[REL2,2922]-->(279)<--[REL2,1032]--(11)--[REL2,2942]-->(8)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)--[REL2,2922]-->(279)<--[REL2,1032]--(11)--[REL2,1039]-->(396683)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,777]--(259469)<--[REL1,12]--(283)--[REL2,2922]-->(279)<--[REL2,1032]--(11)--[REL2,1039]-->(396683)<--[REL2,3009]--(396684)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)--[REL2,2958]-->(258)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)--[REL2,2958]-->(258)<--[REL2,378]--(143)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)--[REL2,2958]-->(258)<--[REL2,378]--(143)--[REL2,2891]-->(140)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)--[REL2,2958]-->(258)<--[REL2,378]--(143)--[REL2,625]-->(396864)
(268911)<--[REL1,151]--(276650)<--[REL2,711]--(276653)<--[REL1,74]--(259462)<--[REL2,558]--(259464)<--[REL1,147]--(259)--[REL2,2958]-->(258)<--[REL2,378]--(143)--[REL2,625]-->(396864)<--[REL2,3088]--(396867)
It's basically a subgraph starting from one node, showing all possible paths (ignoring the sequence). But how do I display this in Neo4j? Is it possible to use my traverser in Neo4j as the standard traverser? I want to avoid picking every single node and querying all nodes (possibly tons of nodes).
This solves the problem (for a small number of nodes; copy-paste the query into the web UI):
String query1 = null, query2 = null;
int ascii = 65; // 'A'
try (Transaction tx = graphDb.beginTx())
{
    Traverser traverser = td.traverse(graphDb.getNodeById(id));
    for (Path path : traverser)
    {
        System.out.println(path.toString());
        if (ascii == 65)
        {
            query1 = "MATCH (" + Character.toString((char) ascii) + ")";
            query2 = " WHERE id(" + Character.toString((char) ascii) + ")=" + path.endNode().getId();
        }
        else
        {
            query1 += ",(" + Character.toString((char) ascii) + ")";
            query2 += " AND id(" + Character.toString((char) ascii) + ")=" + path.endNode().getId();
        }
        if (ascii == 90)  // skip the characters between 'Z' and 'a'
            ascii = 96;
        ascii++;
    }
    tx.success();
}
System.out.println(query1 + query2 + " RETURN *");
but is there any other solution?
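One possibly simpler alternative to emitting one MATCH variable per node: collect the node ids from the traverser into a single `IN` predicate, so the generated statement stays short regardless of how many paths there are. A sketch (it assumes the ids have already been gathered via `path.endNode().getId()`):

```java
import java.util.*;
import java.util.stream.Collectors;

public class SubgraphQueryBuilder {
    /**
     * Builds one Cypher statement returning every node whose id is in the
     * given collection, instead of one MATCH variable per node.
     */
    public static String forNodeIds(Collection<Long> nodeIds) {
        String ids = nodeIds.stream()
                .map(String::valueOf)
                .collect(Collectors.joining(", "));
        return "MATCH (n) WHERE id(n) IN [" + ids + "] RETURN *";
    }
}
```

When "connect result nodes" is enabled in the browser settings, the relationships between the returned nodes are drawn automatically, so returning only the nodes may be enough to visualize the subgraph.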

SQL error in Derby - ERROR 42X01: Syntax error: Encountered "WHERE"

I have seen several questions regarding this error, but each solution is different, as it is a so-called "syntax error". I use Oracle in production and Derby in development (extremely annoying, but what can I do).
When I run the SQL command I created on Oracle, it seems to work fine and do what it is supposed to (I am using Oracle SQL Developer). But when I run the same command in Derby, I encounter this error.
And I encounter this error no matter what I seem to do.
WARN | SQL Error: 20000, SQLState: 42X01
ERROR | Syntax error: Encountered "WHERE" at line 94, column 6.
For the life of me I cannot figure out what is wrong. Here is my SQL command. It is a bit long and complicated:
CREATE VIEW BDPBCDBView AS SELECT
BDP_INSTITUTION_NAME,
BIC,
BDP_COUNTRY_NAME,
BDP_ISO_COUNTRY_CODE,
BDP_CITY,
BDP_NETWORK_CONNECTIVITY,
BDP_SERVICE_CODES,
BDP_ISTARGET,
BCDB_NAME,
BCDB_LAENDERKENNZEICHEN,
BCDB_AKTIVMERKMALBANK,
BCDB_AKTIVMERKMALLAND,
BCDB_AKTIVMERKMALBANKLAND,
BCDB_SWIFTKENNZEICHEN,
COUNTRYCODE,
ISBDP,
ISBCDB,
BCDB_ORT,
s1.BICS_RMA
FROM
(SELECT
bdp.bic,
bdp.institution_name AS bdp_institution_name,
bdp.country_name AS bdp_country_name,
bdp.iso_country_code AS bdp_iso_country_code,
bdp.city AS bdp_city,
bdp.network_connectivity AS bdp_network_connectivity,
bdp.service_codes as bdp_service_codes,
bdp.isTarget AS bdp_isTarget,
bcdb.name as bcdb_name,
bcdb.laenderKennzeichen as bcdb_laenderKennzeichen,
bcdb.aktivMerkmalBank AS bcdb_aktivMerkmalBank,
bcdb.aktivMerkmalLand AS bcdb_aktivMerkmalLand,
bcdb.aktivMerkmalBankLand AS bcdb_aktivMerkmalBankLand,
bcdb.swiftKennzeichen AS bcdb_swiftKennzeichen,
CASE
WHEN bcdb.laenderKennzeichen IS NOT NULL THEN bcdb.laenderKennzeichen
ELSE bdp.iso_country_code
END AS countryCode,
CASE
WHEN bdp.bic IS NOT NULL THEN 1
ELSE 0
END AS isbdp,
CASE
WHEN bcdb.bic IS NOT NULL THEN 1
ELSE 0
END AS isbcdb,
bcdb.ort AS bcdb_ort
FROM BDP bdp LEFT JOIN BCDB bcdb ON bdp.bic = bcdb.bic WHERE bdp.bic IS NOT NULL
UNION ALL SELECT
bcdb.bic,
bdp.institution_name AS bdp_institution_name,
bdp.country_name AS bdp_country_name,
bdp.iso_country_code AS bdp_iso_country_code,
bdp.city AS bdp_city,
bdp.network_connectivity AS bdp_network_connectivity,
bdp.service_codes as bdp_service_codes,
bdp.isTarget AS bdp_isTarget,
bcdb.name as bcdb_name,
bcdb.laenderKennzeichen as bcdb_laenderKennzeichen,
bcdb.aktivMerkmalBank AS bcdb_aktivMerkmalBank,
bcdb.aktivMerkmalLand AS bcdb_aktivMerkmalLand,
bcdb.aktivMerkmalBankLand AS bcdb_aktivMerkmalBankLand,
bcdb.swiftKennzeichen AS bcdb_swiftKennzeichen,
CASE
WHEN bcdb.laenderKennzeichen IS NOT NULL THEN bcdb.laenderKennzeichen
ELSE bdp.iso_country_code
END AS countryCode,
CASE
WHEN bdp.bic IS NOT NULL THEN 1
ELSE 0
END AS isbdp,
CASE
WHEN bcdb.bic IS NOT NULL THEN 1
ELSE 0
END AS isbcdb,
bcdb.ort AS bcdb_ort
FROM BDP bdp RIGHT JOIN BCDB bcdb ON bdp.bic = bcdb.bic WHERE bdp.bic IS NULL)
t1 LEFT JOIN ( SELECT * FROM
(
SELECT s1.BIC_CRSPNDT AS BICS_RMA FROM
(SELECT
rma.crspdt AS BIC_CRSPNDT,
rma.issr AS BIC_ISSR
From RMA
WHERE ((RMA.tp= 'Issued' OR RMA.tp = 'Received') AND RMA.RMASTS='Enabled' AND RMA.SVCNM='swift.fin') )s1
UNION
SELECT s1.BIC_ISSR AS BIC FROM (SELECT
rma.crspdt AS BIC_CRSPNDT,
rma.issr AS BIC_ISSR
FROM RMA
WHERE ((RMA.tp= 'Issued' OR RMA.tp = 'Received') AND RMA.RMASTS='Enabled' AND RMA.SVCNM='swift.fin') )s1 )
WHERE BICS_RMA IS NOT NULL
ORDER BY BICS_RMA) s1
ON (s1.BICS_RMA = substr(t1.BIC, 1,8))
The error occurs at the third to last line.
The Java code that reads it in is:
@PersistenceContext
EntityManager em;

@PostConstruct
public void createViewIfNeeded() {
    if (FidaProfile.isActive(FidaProfile.DEVELOPMENT)) {
        em.createNativeQuery("DROP TABLE BDPBCDBView").executeUpdate();
        // BDPBCDBView SQL script; the view is built from 3 tables, namely BCDB, BDP and RMA
        String command_1 = loadDevelopmentViewScript("DEV-DB/init_dev_view.sql");
        em.createNativeQuery(command_1).executeUpdate();
    }
}

public void setEm(EntityManager em) {
    this.em = em;
}

private String loadDevelopmentViewScript(String addressOfSQLScript) {
    try {
        InputStream stream = BDPBCDPViewGenerator.class.getClassLoader().getResourceAsStream(addressOfSQLScript);
        ByteArrayOutputStream result = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int length;
        while ((length = stream.read(buffer)) != -1) {
            result.write(buffer, 0, length);
        }
        return result.toString("UTF-8");
    } catch (IOException e) {
        throw new FidaErrorCodeException(FidaErrorCode.UNEXPECTED_EXCEPTION,
                "Could NOT load Development-View-Script", e);
    }
}
As an aside, good formatting is vital to keep track of what is going on in complex queries!
Since my previous answer was incorrect, what if you changed the s1 subquery so that you were unpivoting rather than using a union? Something like:
SELECT DISTINCT CASE WHEN dummy.id = 1 THEN r.bic_crspndt
WHEN dummy.id = 2 THEN r.bic_issr
END AS bics_rma
FROM (SELECT rma.crspdt AS bic_crspndt,
rma.issr AS bic_issr
FROM rma
WHERE (rma.tp = 'Issued' OR rma.tp = 'Received')
AND rma.rmasts = 'Enabled'
AND rma.svcnm = 'swift.fin') r
INNER JOIN (SELECT 1 id FROM dual UNION ALL
            SELECT 2 id FROM dual) dummy ON (dummy.id = 1 AND r.bic_crspndt IS NOT NULL)
                                         OR (dummy.id = 2 AND r.bic_issr IS NOT NULL);
Maybe if you were to do that, the Derby database would be able to cope with it?
N.B. I've used a manual UNPIVOT via a conditional cross join rather than the Oracle 11g UNPIVOT function since I don't know anything about Derby and UNPIVOT might not be supported there. Quite why you're being forced to use different database platforms between your live and dev environments is beyond me; sounds bonkers and potentially rather dangerous! I assume you've already tried flagging that!
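One caveat if you try the query above as written: `dual` is an Oracle-only table, so Derby will likely reject it. A standard table value constructor should work as the two-row driver instead; this variant is untested against Derby, so treat it as a sketch:

```sql
-- hypothetical replacement for the "FROM dual UNION ALL ..." driver rows
INNER JOIN (VALUES 1, 2) AS dummy(id)
        ON (dummy.id = 1 AND r.bic_crspndt IS NOT NULL)
        OR (dummy.id = 2 AND r.bic_issr IS NOT NULL)
```

Derby also provides SYSIBM.SYSDUMMY1 as a one-row table if you prefer to keep the UNION ALL form.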

jOOQ how to use optional sorting

I have a query which selects persons from a table.
SelectConditionStep<PersonRecord> select = context
.selectFrom(Tables.PERSON)
.where(Tables.PERSON.ISDELETED.eq(false));
if(searchValue != null && searchValue.length() > 0){
select.and(Tables.PERSON.LASTNAME.likeIgnoreCase(String.format("%%%s%%", searchValue)));
}
List<PersonRecord> dbPersons = select
.orderBy(Tables.PERSON.LASTNAME, Tables.PERSON.FIRSTNAME, Tables.PERSON.ID)
.limit(length).offset(start)
.fetch();
This code works pretty well. Because I display the data in a datatables table, I need optional/dynamic sorting capability. I have not found a solution so far.
Found the solution myself now:
Collection<SortField<?>> sortFields = new ArrayList<>();
sortFields.add(Tables.PERSON.FIRSTNAME.asc());
List<PersonRecord> dbPersons = select
.orderBy(sortFields)
.limit(length).offset(start)
.fetch();
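To make the column choice and direction dynamic as well (e.g. driven by a datatables request), it is worth whitelisting the client-supplied column names before turning them into sort fields, so clients cannot sort by arbitrary expressions. A plain-Java sketch of the mapping step — the names are illustrative, and with jOOQ you would map to the TableField objects and call .asc()/.desc() instead of building strings:

```java
import java.util.*;

public class SortMapper {
    // Whitelist of client-visible names -> database column names.
    // With jOOQ, map to Tables.PERSON.LASTNAME etc. instead of strings.
    private static final Map<String, String> SORTABLE = Map.of(
            "lastName", "LASTNAME",
            "firstName", "FIRSTNAME",
            "id", "ID");

    /**
     * Translates client sort requests like "lastName:desc" into ORDER BY terms,
     * silently dropping unknown columns.
     */
    public static List<String> toOrderBy(List<String> requested) {
        List<String> out = new ArrayList<>();
        for (String spec : requested) {
            String[] parts = spec.split(":", 2);
            String column = SORTABLE.get(parts[0]);
            if (column == null) continue;                 // not whitelisted
            boolean desc = parts.length > 1 && parts[1].equalsIgnoreCase("desc");
            out.add(column + (desc ? " DESC" : " ASC"));
        }
        if (out.isEmpty()) out.add("ID ASC");             // stable fallback order
        return out;
    }
}
```

The fallback on the primary key keeps pagination with limit/offset deterministic even when the client sends no usable sort column.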

Flexible search with parameters return null value

I have to do this flexible search query in a service Java class:
select sum({oe:totalPrice})
from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}}
where {or:versionID} is null and {or:orderType} in (8796093066999)
and {or:company} in (8796093710341)
and {or:pointOfSale} in (8796097413125)
and {oe:ecCode} in ('13','14')
and {or:yearSeason} in (8796093066981)
and {os:code} not in ('CANCELED', 'NOT_APPROVED')
When I perform this query in the hybris administration console I correctly obtain:
1164.00000000
In my Java service class I wrote this:
private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    BigDecimal aggregatedValue = new BigDecimal(0);
    final StringBuilder queryBuilder = new StringBuilder();
    queryBuilder.append("select sum({oe:").append(total).append("})");
    queryBuilder.append(
            " from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk} join OrderEntry as oe on {or.pk}={oe.order}}");
    queryBuilder.append(" where {or:versionID} is null");
    if (queryParameters != null && !queryParameters.isEmpty()) {
        appendWhereClausesToBuilder(queryBuilder, queryParameters);
    }
    queryBuilder.append(" and {os:code} not in ('");
    queryBuilder.append(CustomerOrderStatus.CANCELED.getCode()).append("', ");
    queryBuilder.append("'").append(CustomerOrderStatus.NOT_APPROVED.getCode()).append("')");
    FlexibleSearchQuery query = new FlexibleSearchQuery(queryBuilder.toString(), queryParameters);
    List<BigDecimal> result = Lists.newArrayList();
    query.setResultClassList(Arrays.asList(BigDecimal.class));
    result = getFlexibleSearchService().<BigDecimal> search(query).getResult();
    if (!result.isEmpty() && result.get(0) != null) {
        aggregatedValue = result.get(0);
    }
    return aggregatedValue;
}

private void appendWhereClausesToBuilder(StringBuilder builder, Map<String, Object> params) {
    if ((params == null) || (params.isEmpty()))
        return;
    for (String paramName : params.keySet()) {
        builder.append(" and ");
        if (paramName.equalsIgnoreCase("exitCollection")) {
            builder.append("{oe:ecCode}").append(" in (?").append(paramName).append(")");
        } else {
            builder.append("{or:").append(paramName).append("}").append(" in (?").append(paramName).append(")");
        }
    }
}
The query string before the search(query).getResult() function is:
query: [select sum({oe:totalPrice}) from {Order as or join CustomerOrderStatus as os on {or:CustomerOrderStatus}={os:pk}
join OrderEntry as oe on {or.pk}={oe.order}} where {or:versionID} is null
and {or:orderType} in (?orderType) and {or:company} in (?company)
and {or:pointOfSale} in (?pointOfSale) and {oe:ecCode} in (?exitCollection)
and {or:yearSeason} in (?yearSeason) and {os:code} not in ('CANCELED', 'NOT_APPROVED')],
query parameters: [{orderType=OrderTypeModel (8796093230839),
pointOfSale=B2BUnitModel (8796097413125), company=CompanyModel (8796093710341),
exitCollection=[13, 14], yearSeason=YearSeasonModel (8796093066981)}]
but after the search(query) result is [null].
Why? Where am I wrong in the Java code? Thanks.
In addition, if you want to disable restrictions in your Java code, you can do it like this:
@Autowired
private SearchRestrictionService searchRestrictionService;

private BigDecimal findGroupedOrdersData(String total, String uncDisc, String orderPromo,
        Map<String, Object> queryParameters) {
    searchRestrictionService.disableSearchRestrictions();
    // your code here
    searchRestrictionService.enableSearchRestrictions();
    return aggregatedValue;
}
In the code above, you disable the search restrictions and, after getting the search result, enable them again.
OR
You can use sessionService to execute the flexible search query in a local view. The method executeInLocalView can be used to execute code within an isolated session.
(SearchResult<? extends ItemModel>) sessionService.executeInLocalView(new SessionExecutionBody()
{
    @Override
    public Object execute()
    {
        sessionService.setAttribute(FlexibleSearch.DISABLE_RESTRICTIONS, Boolean.TRUE);
        return flexibleSearchService.search(query);
    }
});
Here you are setting DISABLE_RESTRICTIONS = true, which will run the query in the admin context (without restrictions).
Better, I would suggest you check which restriction exactly is being applied to your item type. You can simply check in Backoffice/HMC.
Backoffice:
Go to System -> Personalization (SearchRestriction)
Search by Restricted Type
Check the Filter Query and analyse your item data based on that.
You can also check its Principal (UserGroup) on which the restriction is applied.
To confirm, just check by disabling the active flag.
The query in the code is (most likely) not running under the admin user, and in this case different search restrictions are applied to the query. You can see that the original query is changed:
start DB logging (/hac -> Monitoring -> Database -> JDBC logging);
run the query from the code;
stop DB logging and check log file.
More information: https://wiki.hybris.com/display/release5/Restrictions
In the /hac console the admin user is usually used, and because of this the restrictions are not applied.
As the statement looks OK to me, I'm going to go with visibility of the data. Are you able to see all the items as whichever user you are running the query as? In the hac you would be admin, obviously.

Neo4j ExecutionEngine does not return valid results

I am trying to use a similar example from the sample code found here.
My sample function is:
void query()
{
    String nodeResult = "";
    String rows = "";
    String resultString;
    String columnsString;
    System.out.println("In query");
    // START SNIPPET: execute
    ExecutionEngine engine = new ExecutionEngine( graphDb );
    ExecutionResult result;
    try ( Transaction ignored = graphDb.beginTx() )
    {
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // END SNIPPET: execute
        // START SNIPPET: items
        Iterator<Node> n_column = result.columnAs( "n" );
        for ( Node node : IteratorUtil.asIterable( n_column ) )
        {
            // note: we're grabbing the name property from the node,
            // not from the n.name in this case.
            nodeResult = node + ": " + node.getProperty( "Name" );
            System.out.println("In for loop");
            System.out.println(nodeResult);
        }
        // END SNIPPET: items
        // START SNIPPET: columns
        List<String> columns = result.columns();
        // END SNIPPET: columns
        // the result is now empty, get a new one
        result = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n, n.Name" );
        // START SNIPPET: rows
        for ( Map<String, Object> row : result )
        {
            for ( Entry<String, Object> column : row.entrySet() )
            {
                rows += column.getKey() + ": " + column.getValue() + "; ";
                System.out.println("nested");
            }
            rows += "\n";
        }
        // END SNIPPET: rows
        resultString = engine.execute( "start n=node(*) where n.Name =~ '.*79.*' return n.Name" ).dumpToString();
        columnsString = columns.toString();
        System.out.println(rows);
        System.out.println(resultString);
        System.out.println(columnsString);
        System.out.println("leaving");
    }
}
When I run this in the web console I get many results (as there are multiple nodes with a Name attribute containing the pattern 79), yet running this code returns no results. The debug print statements 'In for loop' and 'nested' never print either. This must mean the iterator finds no results, yet that doesn't make sense.
And yes, I already checked and made sure that the graphDb variable is the same as the path for the web console. I have other code earlier that uses the same variable to write to the database.
EDIT - More info
If I place the contents of query in the same function that creates my data, I get the correct results. If I run the query by itself it returns nothing. It's almost as if the query works only in the instance where I add the data, and not if I come back to the database cold in a separate instance.
EDIT2 -
Here is a snippet of code that shows the bigger context of how it is being called while sharing the same DB handle:
package ContextEngine;

import ContextEngine.NeoHandle;
import java.util.LinkedList;

/*
 * Class to handle streaming data from any coded source
 */
public class Streamer {
    private NeoHandle myHandle;
    private String contextType;

    Streamer()
    {
    }

    public void openStream(String contextType)
    {
        myHandle = new NeoHandle();
        myHandle.createDb();
    }

    public void streamInput(String dataLine)
    {
        Context context = new Context();
        /*
         * get database instance
         * write to database
         * check for errors
         * report errors & success
         */
        System.out.println(dataLine);
        // apply rules to data (make ContextRules do this, send type and string of data)
        ContextRules contextRules = new ContextRules();
        context = contextRules.processContextRules("Calls", dataLine);
        // write data (using linked list from contextRules)
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.processContextData(context);
    }

    public void runQuery()
    {
        NeoProcessor processor = new NeoProcessor(myHandle);
        processor.query();
    }

    public void closeStream()
    {
        /*
         * close database instance
         */
        myHandle.shutDown();
    }
}
Now, if I call streamInput AND query in the same instance (parent calls), the query returns results. If I only call query and do not enter ANY data in that instance (yet the web console shows data for the same query), I get nothing. Why would I have to create the nodes and enter them into the database at runtime just to get a valid query result? Shouldn't I ALWAYS get the same results with such a query?
You mention that you are using the Neo4j Browser, which comes with Neo4j. However, the example you posted is for Neo4j Embedded, which is the in-process version of Neo4j. Are you sure you are talking to the same database when you try your query in the Browser?
In order to talk to Neo4j Server from Java, I'd recommend looking at the Neo4j JDBC driver, which has good support for connecting to the Neo4j server from Java.
http://www.neo4j.org/develop/tools/jdbc
You can set up a simple connection by adding the Neo4j JDBC jar to your classpath, available here: https://github.com/neo4j-contrib/neo4j-jdbc/releases Then just use Neo4j as any JDBC driver:
Connection conn = DriverManager.getConnection("jdbc:neo4j://localhost:7474/");
PreparedStatement stmt = conn.prepareStatement("start n=node({1}) return id(n) as id");
stmt.setLong(1, id);
ResultSet rs = stmt.executeQuery();
while (rs.next()) {
    System.out.println(rs.getLong("id"));
}
Refer to the JDBC documentation for more advanced usage.
To answer your question on why the data is not durably stored, it may be one of many reasons. I would attempt to incrementally scale back the complexity of the code to try and locate the culprit. For instance, until you've found your problem, do these one at a time:
Instead of looping through the result, print it using System.out.println(result.dumpToString());
Instead of the regex query, try just MATCH (n) RETURN n, to return all data in the database
Make sure the data you are seeing in the browser is not "old" data inserted earlier on, but really is an insert from your latest run of the Java program. You can verify this by deleting the data via the browser before running the Java program using MATCH (n) OPTIONAL MATCH (n)-[r]->() DELETE n,r;
Make sure you are actually working against the same database directory. You can verify this by leaving the server running: if you can still start your Java program, then (unless your Java program is using the Neo4j REST bindings) you are not using the same directory, because two Neo4j instances cannot run against the same database directory simultaneously.
