I am facing a problem while trying to persist existing stock in a preproduction environment.
What I am actually trying to do is loop over a text file and insert substrings from each line into the database.
Here is the class that I execute:
public class RepriseStock {
private static Session session;
public RepriseStock() {
session = HibernateUtil.getSessionFactory().openSession();
session.beginTransaction();
}
public static int insererPartenaires(String sCurrentLine, int i) {
String sql = "INSERT INTO PARTENAIRE(ID,"
+ "MVTSOC,"
+ " MVTAGR, "
+ "MVTNOMSOC,"
+ "MVTCPTTMAG,"
+ "DATEAGREMENT,"
+ "MVTCHAINE,"
+ "MVTRGPT,"
+ "MVTUNION,"
+ "MVTNOMMAG,"
+ "MVTTELSOC,"
+ "MVTADRMAG,"
+ "MVTVILMAG,"
+ "MVTMAIL,"
+ "MVTSITU)"
+ " VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
Query query = session.createSQLQuery(sql);
query.setInteger(0, i);
query.setInteger(1, Integer.parseInt(sCurrentLine.substring(0, 3)));
query.setInteger(2, Integer.parseInt(sCurrentLine.substring(3, 10)));
query.setString(3, sCurrentLine.substring(10, 34));
query.setInteger(4, Integer.parseInt(sCurrentLine.substring(48, 53)));
query.setString(5, sCurrentLine.substring(77, 83));
query.setInteger(6, Integer.parseInt(sCurrentLine.substring(86, 90)));
query.setInteger(7, Integer.parseInt(sCurrentLine.substring(90, 94)));
// union
query.setInteger(8, Integer.parseInt(sCurrentLine.substring(94, 98)));
// enseigne 30
query.setString(9, sCurrentLine.substring(248, 278));
// tel
query.setString(10, sCurrentLine.substring(278, 293));
// adresse
query.setString(11, sCurrentLine.substring(293, 323));
// ville
query.setString(12, sCurrentLine.substring(323, 348));
// mail
query.setString(13, sCurrentLine.substring(398, 448));
// situ
query.setString(14, sCurrentLine.substring(449, 452));
return query.executeUpdate();
}
/**
* @param args
*/
public static void main(String[] args) {
// TODO Auto-generated method stub
BufferedReader br = null;
RepriseStock rs = new RepriseStock();
try {
String sCurrentLine;
br = new BufferedReader(
new FileReader(
"C:\\Users\\test\\Desktop\\test\\reprise de stock\\nouveauFichierPREPROD.dat"));
int i = 0;
sCurrentLine = br.readLine();
while ((sCurrentLine = br.readLine()) != null) {
i++;
RepriseStock.insererPartenaires(sCurrentLine, i);
System.out.println("Nombre de fois : " + i);
}
System.out.println("total (" + i + " )");
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (br != null)
br.close();
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
}
After the script executes, the loop runs 1022 times in total, but the data is not persisted into the Oracle table (PARTENAIRE).
My log doesn't display any error.
Do you see the issue?
It looks like you're not committing the transaction.
If you want each update to be a separate transaction, try moving session.beginTransaction(); to the beginning of the insererPartenaires method and capturing the Transaction object returned from that statement in a variable. Then, after each update, make sure to call commit() on the Transaction object.
If you want all of the updates to be the same transaction, move the beginTransaction() and commit() methods to surround the while loop in the main method.
Also just note that you're unnecessarily mixing static and non-static here. Try changing public static int insererPartenaires(String sCurrentLine, int i) to public int insererPartenaires(String sCurrentLine, int i). Then just use the instantiated RepriseStock object to call the method instead of invoking it statically.
You'll also need to change private static Session session to private Session session.
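Put together, a minimal sketch of both changes might look like this (one transaction per insert; the SQL string and the parameter bindings are the same as in your original code, so they are only hinted at here):

public class RepriseStock {
    private Session session; // no longer static

    public RepriseStock() {
        session = HibernateUtil.getSessionFactory().openSession();
    }

    public int insererPartenaires(String sCurrentLine, int i) {
        Transaction tx = session.beginTransaction();
        try {
            Query query = session.createSQLQuery(sql); // same INSERT INTO PARTENAIRE(...) VALUES (...) as above
            // ... same setInteger/setString calls as above ...
            int rows = query.executeUpdate();
            tx.commit(); // without the commit, Oracle never sees the rows
            return rows;
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        }
    }
}

In main, call rs.insererPartenaires(sCurrentLine, i) on the instance instead of invoking the method statically. If you would rather have a single transaction for the whole file, leave the method alone and instead begin the transaction once before the while loop and commit it once after the loop.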
I have inherited a project, a BMC Remedy application, and I have never worked with Remedy before. This project modifies Incidents and Work Orders in Remedy through the Remedy API. I have literally no idea about any of this.
There's a process that closes incidents that are in the resolved state and have not been modified in the last 36 hours. Sometimes those incidents have an empty 'categorization' field, and the client wants this categorization filled in before closing them.
This is part of the code:
Connection to Remedy:
public static void main(String args[]) {
// Initialize the logger
java.util.logging.LogManager.getLogManager().reset();
try {
// Connect to Remedy and MySQL
LOGGER.info("Conectando a bases de datos");
if (!connect()) {
throw new Exception("Fallo al conectar a Remedy o a MySQL");
}
// Method to close resolved incidents
remedy.cerrarIncidencias(sql.queryResueltas36h());
// Disconnect from Remedy and MySQL
disconnect();
} catch (Exception e) {
LOGGER.error("Error critico: ", e);
try {
remedy.desconectar();
} catch (Exception e1) {
}
try {
sql.desconectar();
} catch (Exception e1) {
}
}
}
Function for closing incidents:
public void cerrarIncidencias(List<String> incs) throws Exception {
int contador = 1;
for (String inc : incs) {
try {
// Fetch the incident
QualifierInfo qual = server.parseQualification("HPD:Help Desk", "'Incident Number' = \"" + inc + "\"");
List<Entry> entries = server.getListEntryObjects("HPD:Help Desk", qual, 0, 0, null,
Constantes.CAMPOS_HPD_HELP_DESK_CERRAR_INCIDENCIA, false, null);
// Fill in a generic comment
Entry comment = new Entry();
comment.put(Constantes.HPD_WORKLOG_DETAILED_DESCRIPTION, new Value("Cierre automatico tras 36 horas en resuelto."));
comment.put(Constantes.HPD_WORKLOG_INCIDENT_NUMBER, new Value(inc));
comment.put(Constantes.HPD_WORKLOG_DESCRIPTION, new Value("----"));
comment.put(Constantes.HPD_WORKLOG_WORKLOG_TYPE, new Value(8000));
for (Entry entry : entries) {
entry.put(Constantes.HPD_HELP_DESK_STATUS, new Value(5)); // Status set to closed
if (entry.get(Constantes.HPD_HELP_DESK_ASSIGNEE_LOGIN_ID).getValue() == null) {
entry.put(Constantes.HPD_HELP_DESK_ASSIGNEE_LOGIN_ID, new Value("lmoren70"));
entry.put(Constantes.HPD_HELP_DESK_ASSIGNEE, new Value("Luis Manuel Moreno Rodriguez")); // Assigned user
}
server.setEntry("HPD:Help Desk", entry.getEntryId(), entry, null, 0);
server.createEntry("HPD:WorkLog", comment);
LOGGER.info("Incidencia " + inc + " cerrada con exito - " + contador + " de " + incs.size());
}
} catch (Exception e) {
LOGGER.error("Incidencia " + inc + " NO se ha podido cerrar - " + contador + " de " + incs.size() + "\n"
+ e.getMessage());
}
contador++;
}
}
I thought about updating the database directly, BUT this database only reads from Remedy, so I have to update Remedy itself.
Query:
public List<String> queryResueltas36h() {
String query = "SELECT inc FROM vdf_tickets, vdf_groups WHERE status = 'Resuelto' AND LENGTH(inc) > 9 "
+ "AND vdf_groups.group = creator_group AND (vdf_groups.categorization = 'TES' OR vdf_groups.group IN ('TES', 'ARCA', 'NetOps TES Assurance')) "
+ "AND last_resolved_date < DATE_ADD(NOW(), INTERVAL -36 HOUR) ORDER BY inc DESC";
List<String> incs = new ArrayList<String>();
try {
stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(query);
while (rs.next()) {
String inc = rs.getString("inc");
incs.add(inc);
}
stmt.close();
} catch (Exception e) {
LOGGER.error("Error al obtener lista de incidencias de la base de datos", e);
try {
stmt.close();
} catch (Exception e1) {
}
}
return incs;
}
What I want is to set the categorization to 'TES' when there is no categorization.
One option I considered is automating this with Selenium and Python without touching this code, but it is far better to have everything in the same project.
Any ideas? Thanks in advance!
You need to update your cerrarIncidencias function. But first you need to decide which categorisation you want to update.
There are three levels of categorisation.
Operational Categorisation
Product Categorisation
Resolution Categorisation
So decide which one you want to populate and get the field id for that field. For this example, I will use
Categorisation Tier 1, which is 1000000063
You'll need to add CAMPOS_HPD_HELP_DESK_CATEGORISATION_TIER1 = 1000000063 to your Constantes file.
Then in your block
for (Entry entry : entries)
You need something like:
if (entry.get(Constantes.CAMPOS_HPD_HELP_DESK_CATEGORISATION_TIER1).getValue() == null) {
entry.put(Constantes.CAMPOS_HPD_HELP_DESK_CATEGORISATION_TIER1, new Value("Your Value for Categorisation Tier 1"));
}
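In context, the loop could look roughly like this (a sketch only: the constant name follows the convention above, and the Tier 1 value "TES" is taken from your requirement; for entry.get(...) to return the field, 1000000063 presumably also has to be part of the CAMPOS_HPD_HELP_DESK_CERRAR_INCIDENCIA field list passed to getListEntryObjects):

for (Entry entry : entries) {
    entry.put(Constantes.HPD_HELP_DESK_STATUS, new Value(5)); // Status set to closed
    // Fill in the categorisation before closing, if it is empty
    if (entry.get(Constantes.CAMPOS_HPD_HELP_DESK_CATEGORISATION_TIER1).getValue() == null) {
        entry.put(Constantes.CAMPOS_HPD_HELP_DESK_CATEGORISATION_TIER1, new Value("TES"));
    }
    if (entry.get(Constantes.HPD_HELP_DESK_ASSIGNEE_LOGIN_ID).getValue() == null) {
        entry.put(Constantes.HPD_HELP_DESK_ASSIGNEE_LOGIN_ID, new Value("lmoren70"));
        entry.put(Constantes.HPD_HELP_DESK_ASSIGNEE, new Value("Luis Manuel Moreno Rodriguez")); // Assigned user
    }
    server.setEntry("HPD:Help Desk", entry.getEntryId(), entry, null, 0);
    server.createEntry("HPD:WorkLog", comment);
}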
I am trying to create nodes and relationships in a Neo4j demo.db folder using this code. It just creates a blank demo.db folder; when I open this db folder in Neo4j, it shows zero nodes and relationships. I am providing the relations.xls file.
public class TestAut {
private static final File DB_PATH = new File("databases/demo.db");
private static GraphDatabaseService graphDb;
private static String [] r1={"PARTNERS_JV_WITH","EXEC_JOINS","EXEC_QUITS","INVESTS_IN_TECH_IP","ACQUIRES","LAUNCHES_NEW_PRODUCT_SERVICE","LAUNCHES",
"ACQUIRE_TALENT","DOWNSIZES_TALENT","ENTER_NEW_MARKET","DELIVERS_VALUE","OPENS_NEW_CENTER"};
private static String [] r2={"PARTNERS_JV_WITH","EXEC_JOINS","EXEC_QUITS","INVESTS_IN_TECH_IP","ACQUIRES","LAUNCHES_NEW_PRODUCT_SERVICE","LAUNCHES",
"ACQUIRE_TALENT","DOWNSIZES_TALENT","ENTER_NEW_MARKET","DELIVERS_VALUE","OPENS_NEW_CENTER"};
private static Relations relations;
public static void main(String args[]) {//throws FileNotFoundException {
String fileName = "relations.xls";
Workbook workbook;
startDb();
relations=new Relations(r1,r2);
System.out.println (fileName);
BufferedReader br;
try {
br = new BufferedReader( new InputStreamReader( new FileInputStream(fileName)));
br.close();
workbook = Workbook.getWorkbook(new File(fileName));
for(int i=0;i<workbook.getNumberOfSheets();i++) {
Sheet sheet=workbook.getSheet(i);
for(int j=0;j<sheet.getRows();j++) {
Cell cell[]=sheet.getRow(j);
for(int k=0;k<cell.length;k++)
System.out.print(cell[k].getContents()+" ");
System.out.print("\n");
createNodesAndRelationship(cell[0].getContents(),cell[1].getContents(),
cell[2].getContents(),cell[3].getContents(),
cell[4].getContents(),cell[5].getContents(),cell[6].getContents(),cell[7].getContents());
}
}
} catch (FileNotFoundException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (BiffException e) {
// TODO Auto-generated catch block
e.printStackTrace();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
stopDb();
System.out.println("Done!!");
successfully.... ");
}
public static void startDb() {
graphDb = new GraphDatabaseFactory().newEmbeddedDatabase(DB_PATH);
}
public static void stopDb() {
graphDb.shutdown();
}
public static void createNodesAndRelationship(String subject,String subjecttype,String object,
String objecttype,String relationship,String headline,String newslink,String date) {
Transaction tx = graphDb.beginTx();
try
{
Result result;
result=graphDb.execute("match ("+subjecttype+"{name:\""+subject+"\"}) return "+subjecttype+".name;");
if(result.toString().equals("empty iterator")) {
//Query="create (a:"+subjecttype+"{name:\""+subject+"\"}) return a;";
result=graphDb.execute("create (a:"+subjecttype+"{name:\""+subject+"\"}) return a;");
System.out.println(result.toString());
}
//Query="match ("+objecttype+"{name:\""+object+"\"}) return "+objecttype+".name;";
result=graphDb.execute("match ("+objecttype+"{name:\""+object+"\"}) return "+objecttype+".name;");
if(result.toString().equals("empty iterator")) {
result=graphDb.execute("create (a:"+objecttype+"{name:\""+object+"\"}) return a;");
System.out.println(result.toString());
}
result=graphDb.execute("match (a:"+subjecttype+"{name:\""+subject+"\"}) "
+ "match(b:"+objecttype+"{name:\""+object+"\"}) "
+ "match (a)-[:"+relationship+"]->"
+ "(b) return a.name,b.name;");
if(result.toString().equals("empty iterator")&&relations.contains(relationship)) {
result=graphDb.execute("match (a:"+subjecttype+"{name:\""+subject+"\"}) "
+ "match(b:"+objecttype+"{name:\""+object+"\"}) "
+ "create (a)-[r:"+relationship+"{headlines:\""+
headline+"\",newslink:\""+newslink+ "\",date:\""+date+"\""+ "}]->(b) return r;");
System.out.println(result.toString());
}
tx.success();
}
finally {
tx.close();
}
}
}
This is the console output after executing this code:
relations.xls
Southwestern Bell Corporation Company Warner Communications Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
Verizon Company Yahoo Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
AOL Company Time Warner Inc. Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
Comcast Company The Walt Disney Company Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
SBC Corporation Company Southwestern Bell Corporation Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
Comcast Company NBC Universal Company ACQUIRES TIMELINE: AT&T's Merger With Time Warner Follows Decades Of Industry Deals http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
sss Company sdadasfd Company ACQUIRES bndfhfdhedr http://www.npr.org/sections/thetwo-way/2016/10/22/498996253/timeline-at-ts-merger-with-time-warner-follows-decades-of-industry-deals?utm_medium=RSS&utm_campaign=technology 24/10/16
Done!!
Actually, the if condition was returning false all the time; that's why it was not creating any nodes or relationships. I just changed my if condition and now it's working fine.
try
{
Result result;
result = graphDb.execute("merge (a:" + subjecttype + "{name:\"" + subject + "\"}) return a;");
result = graphDb.execute("merge (a:" + objecttype + "{name:\"" + object + "\"}) return a;");
result = graphDb.execute("merge (a:" + subjecttype + "{name:\"" + subject + "\"}) " + "merge(b:"
+ objecttype + "{name:\"" + object + "\"}) " + "merge (a)-[r:" + relationship + "{headlines:\""
+ headline + "\",newslink:\"" + newslink + "\",date:\"" + date + "\"" + "}]->(b) return r;");
tx.success();
}
finally {
tx.close();
}
Sorry to say this, but this code is really messy! Besides, we can't reproduce your results, since we don't have the data and the code is far from being a minimal example. We can't really debug it for you: isolate each step, see if it does anything, etc.
Here are a few tips and remarks, though.
Testing absent results
if (result.toString().equals("empty iterator"))
Really? Please, use the API instead of a string conversion, which is never a stable interface (it's not part of any contract):
if (!result.hasNext())
Variable or label?
Do the values of subjecttype and objecttype represent a variable name or a node label? The former doesn't really make sense (why should the query change when it's functionally the same?), but the latter isn't properly used:
result=graphDb.execute("match ("+subjecttype+"{name:\""+subject+"\"}) return "+subjecttype+".name;");
subjecttype is used as a variable in the return clause, but looks like a label in the match clause, except it's missing a leading colon:
result=graphDb.execute("match (n:"+subjecttype+"{name:\""+subject+"\"}) return n.name");
(The final semi-colon is unnecessary)
You're actually using it correctly as a label in the corresponding create:
result=graphDb.execute("create (a:"+subjecttype+"{name:\""+subject+"\"}) return a;");
Query parameters
Also, your query is vulnerable to "Cypher injection" (a relative of SQL injection), if subject contains quotes. Use query parameters instead:
result = graphDb.execute("match (n:" + subjecttype + " {name:{name}}) return n.name",
Collections.<String, Object>singletonMap("name", subject));
It has the added benefit of making the query generic, which means it's not parsed and its execution plan is not computed for each line (only once per label).
Use MERGE
You could replace your logic by simply using MERGE instead of MATCH + CREATE:
result = graphDb.execute("merge (n:" + subjecttype + " {name:{name}}) return n",
Collections.<String, Object>singletonMap("name", subject));
Power to Cypher
Your multiple queries could actually be reduced to a single one, except for the filter on relationship being contained in relations:
Map<String, Object> params = new HashMap<>();
params.put("subject", subject);
params.put("object", object);
params.put("headline", headline);
params.put("newslink", newslink);
params.put("date", date);
graphDb.execute(
"MERGE (a:" + subjecttype + " {name: {subject}}) " +
"MERGE (b:" + objecttype + " {name: {object}}) " +
"MERGE (a)-[r:" + relationship + "]->(b) " +
"ON CREATE SET r.headlines = {headline}, " +
" r.newslink = {newslink}, " +
" r.date = {date}",
params);
With the filter, it's actually 3 queries:
Map<String, Object> params = new HashMap<>();
params.put("subject", subject);
params.put("object", object);
params.put("headline", headline);
params.put("newslink", newslink);
params.put("date", date);
graphDb.execute("MERGE (a:" + subjecttype + " {name: {subject}})", params);
graphDb.execute("MERGE (b:" + objecttype + " {name: {object}})", params);
if (relations.contains(relationship)) {
graphDb.execute(
"MATCH (a:" + subjecttype + " {name: {subject}}) " +
"MATCH (b:" + objecttype + " {name: {object}}) " +
"MERGE (a)-[r:" + relationship + "]->(b) " +
"ON CREATE SET r.headlines = {headline}, " +
" r.newslink = {newslink}, " +
" r.date = {date}",
params);
}
Try-with-resources
Transaction is AutoCloseable, which means you should use a try-with-resources instead of managing it manually. Instead of
Transaction tx = graphDb.beginTx();
try {
// ...
} finally {
tx.close();
}
just do
try (Transaction tx = graphDb.beginTx()) {
// ...
}
I have a Hibernate class that requires 3 different sessions. It currently uses 2 sessions and works perfectly. The first session is used to read data from an external db. The second session is used to save data to our internal db. I'm adding a third session because we need to keep a record of the transfer (the XXXXUpdate object) regardless of whether or not the main transaction is successful. My problem is that the new session is hanging on tx.commit().
private synchronized void executeUpdate(Long manualUpdateTagIndex) throws Exception {
LogPersistenceLoggingContext ctx = new LogPersistenceThreadContext().getLogPersistenceLoggingContext();
DateTime minTriggerDate = parseDateTimeIfNotNull(minTriggerTime);
DateTime maxTriggerDate = parseDateTimeIfNotNull(maxTriggerTime);
Session webdataSession = null;
Session XXXXUpdateSession = null;
XXXXUpdate update = new XXXXUpdate();
update.setExecutedAt(new DateTime());
update.setStatus(WebdataUpdateStatus.Success);
boolean commit = true;
int tagCount = 0;
List<Period> tagPeriods = new ArrayList<>();
Map<Long, DateTime> tagIndexes = new LinkedHashMap<>();
try {
XXXXUpdateSession = accountingService.openUnmanagedSession();
XXXXUpdateSession.getTransaction().begin();
XXXXUpdateSession.save(update);
HierarchicalLogContext logCtx = new HierarchicalLogContext(String.valueOf(update.getId()));
ctx.pushLoggingContext(logCtx);
ctx.log(logger, Level.INFO, new XXXXLogMarker(), "Executing XXXX data transfer", new Object[]{});
if (webdataSessionFactory == null){
throw new Exception("Failed to obtain webdata session factory. See earlier log entries");
}
try {
webdataSession = webdataSessionFactory.openSession();
} catch (Exception ex) {
update.setStatus(WebdataUpdateStatus.ConnectionError);
throw new Exception("Failed to obtain webdata connection", ex);
}
webdataSession.getTransaction().begin();
if (manualUpdateTagIndex == null) { // automatic tags update
XXXXUpdate lastUpdate = (XXXXUpdate) HibernateUtil.getCurrentSpringManagedSession()
.createCriteria(XXXXUpdate.class)
.add(Restrictions.isNotNull("latestTriggerTimestamp"))
.add(Restrictions.eq("status", WebdataUpdateStatus.Success))
.add(Restrictions.eq("manualUpdate", false))
.addOrder(Order.desc("latestTriggerTimestamp"))
.setMaxResults(1).uniqueResult();
DateTime lastUpdatedDate = Period.defaultEffectiveInstant;
if (minTriggerDate != null) {
lastUpdatedDate = minTriggerDate;
}
if (lastUpdate != null && lastUpdate.getLatestTriggerTimestamp() != null) {
lastUpdatedDate = lastUpdate.getLatestTriggerTimestamp();
ctx.log(logger, Level.INFO, new XXXXLogMarker(),
"Querying for tag event triggers newer than last update timestamp [" + lastUpdate.getLatestTriggerTimestamp() + "]", new Object[]{});
} else {
ctx.log(logger, Level.INFO, new XXXXLogMarker(), "Update has never run. Catching up with history", new Object[]{});
}
@SuppressWarnings("unchecked")
List<XXXXProcessedTagRequest> processedReqs = HibernateUtil.getCurrentSpringManagedSession()
.createCriteria(XXXXProcessedTagRequest.class).list();
Query triggerQuery = webdataSession.createQuery(
"select trigger, "
+ "trigger.TagIndex,"
+ "req "
+ "from XXXXTagEventTrigger as trigger "
+ "join trigger.req as req "
+ "where trigger.EventType in (:eventTypes) "
+ "and trigger.timestamp > :lastUpdateMinusDelta "
+ (maxTriggerDate != null?"and trigger.timestamp < :maxDate ":"")
+ "and req.CurrentState = :currentState "
+ "order by trigger.timestamp,trigger.reqIndex");
triggerQuery.setParameterList("eventTypes", new Object[]{5, 9});
triggerQuery.setParameter("lastUpdateMinusDelta", lastUpdatedDate.minusHours(hoursToKeepProcessedReqs) );
if (maxTriggerDate != null){
triggerQuery.setParameter("maxDate", maxTriggerDate);
}
triggerQuery.setParameter("currentState", 2);
@SuppressWarnings("unchecked")
List<Object[]> allTriggers = triggerQuery.list();
List<Object[]> unprocessedTriggers = removeProcessedTags(new ArrayList<Object[]>(allTriggers),processedReqs,ctx);
for (Object[] row : unprocessedTriggers) {
XXXXTagEventTrigger trigger = (XXXXTagEventTrigger) row[0];
if (lastUpdatedDate == null || lastUpdatedDate.isBefore(trigger.getTimestamp().getMillis())) {
lastUpdatedDate = new DateTime(trigger.getTimestamp());
}
tagIndexes.put((Long) row[1], new DateTime(trigger.getTimestamp()));
XXXXProcessedTagRequest processedReq = new XXXXProcessedTagRequest();
processedReq.setReqIndex(((XXXXTagReq)row[2]).getReqIndex());
processedReq.setTimestamp(trigger.getTimestamp());
HibernateUtil.getCurrentSpringManagedSession().save(processedReq);
}
ctx.log(logger, Level.INFO, new XXXXLogMarker(),
"Found [" + unprocessedTriggers.size() + "] tag event triggers on [" + tagIndexes.size() + "] tags", new Object[]{});
update.setLatestTriggerTimestamp(lastUpdatedDate);
} else { // manual tag update
ctx.log(logger, Level.INFO, new XXXXLogMarker(), "Executing manual update for tag index [" + manualUpdateTagIndex + "]", new Object[]{});
DateTime now = new DateTime();
tagIndexes.put(manualUpdateTagIndex, now);
update.setLatestTriggerTimestamp(now);
update.setManualUpdate(true);
}
if (tagIndexes.size() > 0) {
int totalTagCount = tagIndexes.size();
while (!tagIndexes.isEmpty()) {
List<Long> batchIndexes = new ArrayList<>();
Iterator<Map.Entry<Long, DateTime>> indexIt = tagIndexes.entrySet().iterator();
while (indexIt.hasNext() && batchIndexes.size() < tagBatchSize) {
batchIndexes.add(indexIt.next().getKey());
indexIt.remove();
}
Map<Long, LocalTag> existingTags = new HashMap<>();
@SuppressWarnings("unchecked")
List<LocalTag> existingTagIds = HibernateUtil.getCurrentSpringManagedSession()
.createCriteria(LocalTag.class)
.add(Restrictions.in("tagIndex", batchIndexes))
.add(Restrictions.eq("currentVersion", true)).list();
for (LocalTag lt : existingTagIds) {
existingTags.put(lt.getTagIndex(), lt);
}
ctx.log(logger, Level.INFO, new XXXXLogMarker(),
"Processing tag updates [" + tagCount + "-" + (tagCount + batchIndexes.size()) + "] of [" + totalTagCount + "]", new Object[]{});
Criteria tagCriteria = webdataSession.createCriteria(XXXXTag.class);
tagCriteria.add(Restrictions.in("TagIndex", batchIndexes));
if (!includeTestTags) {
tagCriteria.add(Restrictions.eq("TestTag", "0"));
}
tagCriteria.setFetchMode("XXXXTagMS", FetchMode.JOIN);
tagCriteria.setFetchMode("XXXXTagPS", FetchMode.JOIN);
tagCriteria.setFetchMode("XXXXTagCCList", FetchMode.JOIN);
tagCriteria.setFetchMode("XXXXTagTA", FetchMode.JOIN);
tagCriteria.setFetchMode("XXXXTagCP", FetchMode.JOIN);
tagCriteria.setResultTransformer(CriteriaSpecification.DISTINCT_ROOT_ENTITY);
@SuppressWarnings("unchecked")
List<XXXXTag> tags = tagCriteria.list();
if (manualUpdateTagIndex != null && tags.isEmpty()) {
throw new ValidationException("No tag found for manual update tag index [" + manualUpdateTagIndex + "]");
}
for (XXXXTag tag : tags) {
update.getProcessedTags().add(updateTag(tag, tagIndexes.get(tag.getTagIndex()), existingTags));
tagCount++;
if (fireEventLastActions.contains(tag.getLastAction().trim())) {
tagPeriods.add(new Period(tag.getStartTime().getMillis(), tag.getStopTime().getMillis()));
}
}
HibernateUtil.getCurrentSpringManagedSession().flush();
HibernateUtil.getCurrentSpringManagedSession().clear();
webdataSession.clear();
}
} else {
ctx.log(logger, Level.INFO, new XXXXLogMarker(), "No updates found", new Object[]{});
}
HibernateUtil.getCurrentSpringManagedSession()
.createQuery("delete XXXXUpdate where executedAt < :purgeDate")
.setParameter("purgeDate", new DateTime().minusDays(daysToKeepUpdateHistory))
.executeUpdate();
HibernateUtil.getCurrentSpringManagedSession()
.createQuery("delete XXXXProcessedTagRequest where timestamp < :purgeDate")
.setParameter("purgeDate", new DateTime().minusHours(hoursToKeepProcessedReqs))
.executeUpdate();
update.setStatus(WebdataUpdateStatus.Success);
update.setTagCount(update.getProcessedTags().size());
tagPeriods = Period.merge(tagPeriods);
for (Period p : tagPeriods) {
XXXXUpdatePeriod oup = new XXXXUpdatePeriod();
oup.setXXXXUpdate(update);
oup.setStartDate(p.getStart());
oup.setEndDate(p.getEnd());
update.getPeriods().add(oup);
}
HibernateUtil.getCurrentSpringManagedSession().flush();
ctx.log(logger, Level.INFO, new XXXXLogMarker(), "XXXX data transfer complete. Transferred [" + tagCount + "] tag updates", new Object[]{});
ctx.popLoggingContext(logCtx);
} catch (Exception ex) {
HibernateUtil.getCurrentSpringManagedSession().clear();
update.getProcessedTags().clear();
update.setTagCount(0);
update.setStatus(WebdataUpdateStatus.TransferError);
commit = false;
ctx.log(logger, Level.ERROR, new XXXXLogMarker(), "XXXX data transfer failed", new Object[]{}, ex);
throw new Exception("XXXX data transfer failed", ex);
} finally {
try {
XXXXUpdateSession.saveOrUpdate(update);
XXXXUpdateSession.getTransaction().commit();
} catch (Exception ex) {
commit = false;
ctx.log(logger, Level.ERROR, new XXXXLogMarker(), "Failed to save XXXX transfer update record", new Object[]{}, ex);
throw new Exception("Failed to save XXXX transfer update record", ex);
} finally {
if (!commit) {
webdataSession.getTransaction().rollback();
} else {
webdataSession.getTransaction().commit();
}
ResourceDisposer.dispose(webdataSession);
}
}
}
The new session is the XXXXUpdateSession; the only new code is what relates to this session. It seems to be some kind of timing issue because, when I enable Hibernate debug logging, the tx commits without issue. It also commits when I step through the Hibernate commit() in a debugger. I do not have much experience with Hibernate, so I'm probably missing something obvious. Any help would be greatly appreciated. Thanks.
You have two transactions open at the same time, the begin() calls on XXXXUpdateSession and webdataSession (lines 20 & 37 in the above code), which is causing the issue.
You can open the second transaction after committing the first one.
Also, it is not best practice to have methods this long; they make issues very hard to debug and become a nightmare for the maintenance/support of the project.
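A rough sketch of that sequencing, using the names from the question (the surrounding logic is elided; this is the shape, not a drop-in replacement):

// 1. Save the audit record in its own short transaction and commit immediately.
Session updateSession = accountingService.openUnmanagedSession();
updateSession.getTransaction().begin();
updateSession.save(update);
updateSession.getTransaction().commit();

// 2. Only now open the long-running webdata transaction and do the transfer.
Session webdataSession = webdataSessionFactory.openSession();
webdataSession.getTransaction().begin();
try {
    // ... transfer logic ...
    webdataSession.getTransaction().commit();
} catch (Exception ex) {
    webdataSession.getTransaction().rollback();
    update.setStatus(WebdataUpdateStatus.TransferError);
    throw ex;
} finally {
    // 3. Record the final status in a second short transaction.
    updateSession.getTransaction().begin();
    updateSession.update(update);
    updateSession.getTransaction().commit();
    updateSession.close();
    ResourceDisposer.dispose(webdataSession);
}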
I have Java code that generates a request number based on data received from the database, and then updates the database with the newly generated number.
synchronized (this.getClass()) {
counter++;
System.out.println(counter);
System.out.println("start " + System.identityHashCode(this));
certRequest
.setRequestNbr(generateRequestNumber(certInsuranceRequestAddRq
.getAccountInfo().getAccountNumberId()));
System.out.println("outside funcvtion"+certRequest.getRequestNbr());
reqId = Utils.getUniqueId();
certRequest.setRequestId(reqId);
System.out.println(reqId);
ItemIdInfo itemIdInfo = new ItemIdInfo();
itemIdInfo.setInsurerId(certRequest.getRequestId());
certRequest.setItemIdInfo(itemIdInfo);
dao.insert(certRequest);
addAccountRel();
counter++;
System.out.println(counter);
System.out.println("end");
}
The output of the System.out.println() statements is:
1
start 27907101
com.csc.exceed.certificate.domain.CertRequest#a042cb
inside function request number66
outside funcvtion66
AF88172D-C8B0-4DCD-9AC6-12296EF8728D
2
end
3
start 21695531
com.csc.exceed.certificate.domain.CertRequest#f98690
inside function request number66
outside funcvtion66
F3200106-6033-4AEC-8DC3-B23FCD3CA380
4
end
In my case this code is called from two threads.
If you observe, both threads run independently; however, the request number is the same in both cases.
Is it possible that the second thread starts executing before the first thread's database update completes?
The code for generateRequestNumber() is as follows:
public String generateRequestNumber(String accNumber) throws Exception {
String requestNumber = null;
if (accNumber != null) {
String SQL_QUERY = "select CERTREQUEST.requestNbr from CertRequest as CERTREQUEST, "
+ "CertActObjRel as certActObjRel where certActObjRel.certificateObjkeyId=CERTREQUEST.requestId "
+ " and certActObjRel.certObjTypeCd=:certObjTypeCd "
+ " and certActObjRel.certAccountId=:accNumber ";
String[] parameterNames = { "certObjTypeCd", "accNumber" };
Object[] parameterVaues = new Object[] {
Constants.REQUEST_RELATION_CODE, accNumber };
List<?> resultSet = dao.executeNamedQuery(SQL_QUERY,
parameterNames, parameterVaues);
// List<?> resultSet = dao.retrieveTableData(SQL_QUERY);
if (resultSet != null && resultSet.size() > 0) {
requestNumber = (String) resultSet.get(0);
}
int maxRequestNumber = -1;
if (requestNumber != null && requestNumber.length() > 0) {
maxRequestNumber = maxValue(resultSet.toArray());
requestNumber = Integer.toString(maxRequestNumber + 1);
} else {
requestNumber = Integer.toString(1);
}
System.out.println("inside function request number"+requestNumber);
return requestNumber;
}
return null;
}
Databases allow multiple simultaneous connections, so unless you write your code properly, you can mess up the data.
Since you only seem to require a unique, increasing integer, you can easily generate one safely inside the database with, for example, a sequence (if supported by the database). Databases that do not support sequences usually provide some other way (such as auto-increment columns in MySQL).
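For example, with an Oracle-style sequence (the sequence name cert_request_seq is hypothetical), the database hands out numbers atomically, so the synchronized block and the select-max logic become unnecessary:

// One-time DDL (run once against the database):
//   CREATE SEQUENCE cert_request_seq START WITH 1 INCREMENT BY 1;
public String generateRequestNumber() {
    // session here stands for the Hibernate session your dao wraps (an assumption)
    Number next = (Number) session
            .createSQLQuery("select cert_request_seq.nextval from dual")
            .uniqueResult();
    return next.toString();
}

Note that this yields a globally unique number rather than one per account; if the number really must restart at 1 for each account, you would instead need a per-account counter row locked with select ... for update.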
Here the method reads from a database table whose unique ID is a sequence number that keeps increasing. Since I am a beginner in Java, can I know how to implement this repetitive polling and check for new incoming messages each time?
public void run() {
int seqId = 0;
while(true) {
List<KpiMessage> list = null;
try {
list = fullPoll(seqId);
if (!list.isEmpty()) {
seqId = list.get(0).getSequence();
incomingMessages.addAll(list);
System.out.println("waiting 3 seconds");
System.out.println("new incoming message");
Thread.sleep(3000);
}
} catch (Exception e1) {
e1.printStackTrace();
}
}
}
//Method which polls the database and prints the retrieved messages
public List<KpiMessage> fullPoll(int lastSeq) throws Exception {
Statement st = dbConnection.createStatement();
ResultSet rs = st.executeQuery("select * from msg_new_to_bde where ACTION = 804 and SEQ > " + lastSeq + " order by SEQ DESC");
List<KpiMessage> pojoCol = new ArrayList<KpiMessage>();
while (rs.next()) {
KpiMessage filedClass = convertRecordsetToPojo(rs);
pojoCol.add(filedClass);
}
for (KpiMessage pojoClass : pojoCol) {
System.out.print(" " + pojoClass.getSequence());
System.out.print(" " + pojoClass.getTableName());
System.out.print(" " + pojoClass.getAction());
System.out.print(" " + pojoClass.getKeyInfo1());
System.out.print(" " + pojoClass.getKeyInfo2());
System.out.println(" " + pojoClass.getEntryTime());
}
// return seqId;
return pojoCol;
}
My goal is to poll the table in the database and check for new incoming messages, which I can identify from the SequenceID header field in the table: it is unique and keeps increasing for new entries. Now my problems are:
1. Let's say after I poll the first time, the code reads all the entries and puts the thread to sleep; in the meantime, how can I pick up the new incoming data and poll again?
2. Also, how do I collect the new data when polling the second time, and pass only the new data to another class?
Poller calls fullPoll every 6 seconds and passes the lastSeq param to it. Initially lastSeq = 0. When Poller gets the result list, it replaces lastSeq with the max SEQ value. fullPoll retrieves only records with SEQ > lastSeq.
void run() throws Exception {
int seqId = 0;
while(true) {
List<KpiMessage> list = fullPoll(seqId);
if (!list.isEmpty()) {
seqId = list.get(0).getSequence();
}
Thread.sleep(6000);
}
}
public List<KAMessage> fullPoll(int lastSeq) throws Exception {
...
ResultSet rs = st.executeQuery("select * from msg_new_to_bde where ACTION = 804 and SEQ > " + lastSeq + " order by SEQ DESC");
..
}
Here is some code you can build on. I tried to make it pretty flexible using the Observer pattern; this way you can connect multiple "message processors" to the same poller:
public class MessageRetriever implements Runnable {
private int lastID;
private List<MessageListener> listeners = new ArrayList<>();
...
public void addMessageListener(MessageListener listener) {
this.listeners.add(listener);
}
public void removeMessageListener(MessageListener listener) {
this.listeners.remove(listener);
}
public void run() {
//code to repeat the polling process given some time interval
}
private void pollMessages() {
if (this.lastID == 0)
this.fullPoll();
else
this.partialPoll();
}
private void fullPoll() {
//your full poll code
//assuming the results are ordered by ID and have an ID field - you should
//replace this code according to your structure
this.lastID = pojoCol.get(pojoCol.size() - 1).getID();
this.fireInitialMessagesSignal(pojoCol);
}
private void fireInitialMessagesSignal(List<KAMessage> messages) {
for (MessageListener listener : this.listeners)
listener.initialMessages(messages);
}
private void partialPoll() {
//code to retrieve messages *past* the lastID. You could do this by
//adding an extra condition to your where clause, e.g.
//select * from msg_new_to_bde where ACTION = 804 AND SEQ > lastID order by SEQ DESC
//the same as before
this.lastID = pojoCol.get(pojoCol.size() - 1).getID();
this.fireNewMessagesListener(pojoCol);
}
private void fireNewMessagesListener(List<KAMessage> messages) {
for (MessageListener listener : this.listeners)
listener.newMessages(messages);
}
}
And the interface
public interface MessageListener {
public void initialMessages(List<KAMessage> messages);
public void newMessages(List<KAMessage> messages);
}
Basically, using this approach, the retriever is a Runnable (it can be executed on its own thread) and takes care of the whole process: it does an initial poll and continues doing "partial" polls at the given interval.
Different events fire different signals, sending the affected messages to the registered listeners, which process the messages as they see fit.
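A hypothetical usage sketch, with the class and interface exactly as defined above (the polling interval inside run() is still up to you):

MessageRetriever retriever = new MessageRetriever();
retriever.addMessageListener(new MessageListener() {
    public void initialMessages(List<KAMessage> messages) {
        System.out.println("Initial batch: " + messages.size() + " messages");
    }
    public void newMessages(List<KAMessage> messages) {
        System.out.println("New batch: " + messages.size() + " messages");
    }
});
new Thread(retriever).start(); // run() repeats pollMessages() at the chosen interval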