"Closed Statement: next" after Oracle 11g migration - java

I am migrating an application from Oracle 10g to Oracle 11g and I am having trouble with a method in "OSubjectSqlMapDao" (the application uses iBATIS 2).
public List retrieveList(String user, String flag) throws Exception
{
    try {
        logger.info("Retreving List");
        java.sql.ResultSet rs = null;
        Map map = new HashMap();
        map.put("user", user);
        map.put("flag", flag);
        map.put("listRetrieved", null);
        List listRetrieved = new ArrayList();
        queryForObject("DBListRetrieved", map);
        rs = (java.sql.ResultSet) map.get("listRetrieved");
        logger.info("HAsmap map " + map.size());
        logger.info("HAsmap map " + map.isEmpty());
        logger.info("listRetrieved map " + map.get("listRetrieved"));
        logger.info("listRetrieved map getClass" + map.get("listRetrieved").getClass());
        OSubject subject = null;
        while (rs.next())
        {
            subject = new OSubject();
            subject.setUser(rs.getString(1));
            subject.setCdode(rs.getString(2));
            subject.setDescrip(rs.getString(3));
            listRetrieved.add(subject);
        }
        return listRetrieved;
    } catch (Exception e) {
        logger.error(e.getMessage());
        throw new Exception(e.getMessage());
    }
}
The OSubject.xml is defined as follows:
<parameterMap id="parameterMapRetrieveList" class="map">
  <parameter property="listRetrieved" javaType="java.lang.Object" jdbcType="ORACLECURSOR" mode="OUT"/>
  <parameter property="user" jdbcType="VARCHAR" javaType="java.lang.String" mode="IN"/>
  <parameter property="flag" jdbcType="VARCHAR" javaType="java.lang.String" mode="IN"/>
</parameterMap>
<procedure id="DBListRetrieved" parameterMap="parameterMapRetrieveList">
  { ? = call AS.PCK_LIST.F_RetrieveList(?,?)}
</procedure>
The method worked perfectly on Oracle 10g but stopped working after the database migration. Moreover, according to the log, the retrieved object's class comes from oracle.jdbc.driver, even though I am not using that driver and, as far as I can tell, it is not referenced by (or even present in) the application.
The log trace is:
Retreving List
2014-06-25 14:54:55,098 INFO OSubjectSqlMapDao (OSubjectSqlMapDao.java:1268) - HAsmap map 3
2014-06-25 14:54:55,099 INFO OSubjectSqlMapDao (OSubjectSqlMapDao.java:1270) - HAsmap map false
2014-06-25 14:54:55,099 INFO OSubjectSqlMapDao (OSubjectSqlMapDao.java:1273) - listRetrieved map oracle.jdbc.driver.OracleResultSetImpl#4efd9ac2
2014-06-25 14:54:55,100 INFO OSubjectSqlMapDao (OSubjectSqlMapDao.java:1276) - listRetrieved map getClassclass oracle.jdbc.driver.OracleResultSetImpl
2014-06-25 14:54:55,104 ERROR OSubjectSqlMapDao (OSubjectSqlMapDao.java:1294) - Closed Statement: next
It would be very helpful if someone could lend me a hand. Thanks in advance.
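A likely explanation (hedged, since the exact driver behavior varies): with the 11g JDBC driver, the OUT ref cursor is only usable while the underlying CallableStatement is open, and iBATIS closes that statement as soon as queryForObject returns, so the later rs.next() fails with "Closed Statement: next". The usual remedy is to read the cursor rows before the statement is closed (for example inside a custom TypeHandlerCallback) rather than afterwards. Below is a stdlib sketch of that materialization step; the CursorMaterializer name and the three-column row shape are illustrative, not part of the original code:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class CursorMaterializer {
    // Copy every row out of the ref-cursor ResultSet immediately, so that
    // nothing touches the cursor after the CallableStatement is closed.
    static List<String[]> materialize(ResultSet rs, int columnCount) throws SQLException {
        List<String[]> rows = new ArrayList<>();
        while (rs.next()) {
            String[] row = new String[columnCount];
            for (int i = 0; i < columnCount; i++) {
                row[i] = rs.getString(i + 1); // JDBC columns are 1-based
            }
            rows.add(row);
        }
        return rows;
    }
}
```

With something like this in place, retrieveList would build its OSubject instances from the copied rows instead of iterating the raw cursor after the call has completed.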

Related

How to iterate over the keys of a hierarchical Immutable node in Java taken from YAML?

I have the following YAML:
loadRules:
- table: accounts
  rows: 10
  columns:
  - name: id
    util: PrimaryIntGen
    params:
    - 1
    - 10
  - name: name
    util: RandomStringNumeric
    params:
    - 10
executeRules:
- transaction_name: Account_query1
  weight: 50
  query:
    queries:
    - SELECT * FROM accounts WHERE id > ? and id < ?
    - SELECT * FROM accounts WHERE id <= ?
    bindings:
    - binding:
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
    - binding:
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
- transaction_name: Account_query2
  weight: 50
  query:
    queries:
    - SELECT * FROM accounts WHERE id < ? and id > ?
    - SELECT * FROM accounts WHERE id >= ?
    bindings:
    - binding:
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
    - binding:
      - name: RowRandomBoundedInt
        params:
        - 5
        - 10
I want to run these transactions from the YAML. I have written this code so far (ignore txnType for now; it is passed in from another function):
int executeRuleIndex = txnType.getId() - 1;
HierarchicalConfiguration<ImmutableNode> executeRule = config.configurationsAt("executeRules").get(executeRuleIndex);
HierarchicalConfiguration<ImmutableNode> query = executeRule.configurationAt("query");
List<String> executeQueries = query.getList(String.class, "queries");
List<HierarchicalConfiguration<ImmutableNode>> bindings = query.configurationsAt("bindings");
System.out.println("Transaction id" + executeRuleIndex + "\n");
for (int i = 0; i < executeQueries.size(); i++)
{
    PreparedStatement stmt = conn.prepareStatement(executeQueries.get(i));
    HierarchicalConfiguration<ImmutableNode> bindingsForThisQuery = bindings.get(i);
    executeRulesYaml(stmt, bindingsForThisQuery, txnType.getId());
}
I am not able to iterate over the single binding (bindingsForThisQuery) that I pass to executeRulesYaml. I want to build a list of String-to-Object mappings for these bindings. I have done the same for loadRules:
List<HierarchicalConfiguration<ImmutableNode>> loadRulesConfig = config.configurationsAt("loadRules");
if (loadRulesConfig.isEmpty()) {
    throw new RuntimeException("Empty Load Rules");
}
LOG.info("Using YAML for load phase");
for (HierarchicalConfiguration loadRuleConfig : loadRulesConfig) {
    List<HierarchicalConfiguration<ImmutableNode>> columnsConfigs = loadRuleConfig.configurationsAt("columns");
    List<Map<String, Object>> columns = new ArrayList<>();
    for (HierarchicalConfiguration columnsConfig : columnsConfigs) {
        Iterator columnKeys = columnsConfig.getKeys();
        Map<String, Object> column = new HashMap<>();
        while (columnKeys.hasNext()) {
            String element = (String) columnKeys.next();
            System.out.println(element);
            Object params;
            if (element.equals("params")) {
                params = columnsConfig.getList(Object.class, element);
            } else {
                params = columnsConfig.get(Object.class, element);
            }
            column.put(element, params);
        }
        columns.add(column);
    }
}
Any help would be appreciated as I am new to Java.
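The loadRules loop above can be factored into a reusable helper and applied to each binding entry as well (with commons-configuration2, the nested entries are plausibly reached via configurationsAt("bindings.binding") rather than "bindings", though that depends on how the parser maps the YAML). Since the library is not available here, NodeReader below is a stand-in for the small slice of HierarchicalConfiguration the loop actually uses; this is a sketch of the shape, not of the library API:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Stand-in for the parts of HierarchicalConfiguration used by the loop;
// with the real library you would back this by columnsConfig / a binding node.
interface NodeReader {
    Iterator<String> getKeys();
    List<Object> getList(String key);
    Object get(String key);
}

class BindingFlattener {
    // Turn one node (a column, or one entry under "binding") into a
    // String -> Object map, keeping "params" as a list, as in loadRules.
    static Map<String, Object> flatten(NodeReader node) {
        Map<String, Object> out = new HashMap<>();
        Iterator<String> keys = node.getKeys();
        while (keys.hasNext()) {
            String key = keys.next();
            out.put(key, key.equals("params") ? node.getList(key) : node.get(key));
        }
        return out;
    }
}
```

Calling flatten once per child of a binding node would yield the List<Map<String, Object>> the question is after.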

nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:

I am new to Java and I am trying to call a stored procedure from Java. I am getting the exception below for this code:
@SuppressWarnings("unchecked")
List<Object> queryForList(String procName, Map<String, Object> map, CommonVO commonVO) {
    List<Object> resultSetList = null;
    Date connectionAttemptStartDate = null;
    Date cidbQueryStartDate = null;
    List<String> jndiNameList = null;
    String jndiName = null;
    try {
        resultSetList = super.queryForList(procName, map);
    } catch (Exception e) {
        isError = true;
        //exception = e;
        System.out.println(e.getMessage());
        exception = ExceptionUtils.validateCreateCIDBException(e);
    }
    return resultSetList;
}
My XML file:
<!DOCTYPE sqlMap PUBLIC "-//iBATIS.com//DTD SQL Map 2.0//EN" "http://www.ibatis.com/dtd/sql-map-2.dtd">
<sqlMap namespace="SP">
  <parameterMap id="objectParameters" class="java.util.Map">
    <parameter property="valueAddedOfferList" jdbcType="VALUE_ADDED_OFFER_LIST" typeName="VALUE_ADDED_OFFER_LIST" typeHandler="com.bam.vision.dao.db.typehandler.ValueAddedOfferListTypeHandlerCallback" mode="OUT" />
    <parameter property="errorCode" jdbcType="INTEGER" javaType="java.lang.String" mode="OUT"/>
    <parameter property="errorMessage" jdbcType="CHAR" javaType="java.lang.String" mode="OUT"/>
  </parameterMap>
  <procedure id="retrieve_value_added_offer" parameterMap="objectParameters" timeout="2">
    {call retrieve_value_added_offer(?,?,?)}
  </procedure>
</sqlMap>
I am getting an exception like this:
--- Cause: java.lang.ClassCastException: java.math.BigDecimal incompatible with java.lang.String; nested exception is com.ibatis.common.jdbc.exception.NestedSQLException:
--- The error occurred in retrieve_value_added_offer.xml.
--- The error occurred while applying a parameter map.
--- Check the SP.objectParameters.
--- Check the output parameters (retrieval of output parameters failed).
--- Cause: java.lang.ClassCastException: java.math.BigDecimal incompatible with java.lang.String
This line in your XML file looks suspicious to me:
<parameter property="errorCode" jdbcType="INTEGER" javaType="java.lang.String" mode="OUT"/>
What type is the errorCode parameter in your stored procedure?
You have jdbcType="INTEGER", which suggests it's a number, but you also have javaType="java.lang.String", which suggests it's a string. Clearly it can't be both. You haven't included the declaration of your stored procedure so it's impossible to tell which one it actually is.
If your error code is an integer, try changing javaType="java.lang.String" to javaType="java.lang.Integer" (or, if that fails, javaType="java.math.BigDecimal" instead). If your error code is a string, change jdbcType="INTEGER" to jdbcType="VARCHAR".
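Since the ClassCastException mentions java.math.BigDecimal, the OUT parameter is most likely numeric, in which case the corrected declaration would read as follows (verify against the procedure's actual signature before applying):

```xml
<parameter property="errorCode" jdbcType="INTEGER" javaType="java.lang.Integer" mode="OUT"/>
```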

How to retrieve a single value from a list of records in neo4j?

I am pretty new to Neo4j. I have written the piece of code below:
List<Value> collect_appnames = new ArrayList<Value>();
Driver dr = GraphDatabase.driver("bolt://inmbzp5137.in.dst.ibm.com:7687", AuthTokens.basic("neo4j", " "));
Session session = dr.session();
StatementResult rs = session.run("Match(n:Application) with n.name as a return distinct collect(a)");
Assert.assertTrue(rs.hasNext());
while (rs.hasNext()) {
    Record record = rs.next();
    collect_appnames = record.values();
}
collect_appnames contains this:
[
[
"sample-tomcat",
"sampleticketing",
"sampleoms",
"samplescheduler",
"sampleinventory",
"samplerating",
"sampleloyalty",
"samplertcm",
"sampleproducts",
"sampleinvoicegenerator",
"samplejbosswildfly",
"samplewebsphereapplication",
"samplereports",
"sampleapplication",
"samplebanking",
"samplemoneytransfer",
"sampleautomobile",
"samplemanufacturing",
"sampleinvestment",
"samplesavings",
"sampledesign",
"samplepatterns",
"samplepatents",
"sampleonlinetransfer",
"samplerecharge",
"samplecreditcheck",
"samplecreditcheck",
"sampleautomobileserver",
"sample-tomcat",
"sampleticketing",
"sampleoms",
"samplescheduler",
"sampleinventory",
"samplerating",
"sampleloyalty",
"samplertcm",
"sampleproducts",
"sampleinvoicegenerator",
"samplejbosswildfly",
"samplewebsphereapplication",
"samplereports",
"sampleapplication",
"samplebanking",
"samplemoneytransfer",
"sampleautomobile",
"samplemanufacturing",
"sampleinvestment",
"samplesavings",
"sampledesign",
"samplepatterns",
"samplepatents",
"sampleonlinetransfer",
"samplerecharge",
"sampleticketing",
"sampleoms",
"samplescheduler",
"sampleinventory",
"samplerating",
"sampleloyalty",
"sample-tomcat",
"samplertcm",
"sampleproducts",
"sampleinvoicegenerator",
"samplejbosswildfly",
"samplewebsphereapplication",
"samplereports",
"sampleapplication",
"samplebanking",
"samplemoneytransfer",
"sampleautomobile",
"samplemanufacturing",
"sampleinvestment",
"samplesavings",
"sampledesign",
"samplepatterns",
"samplepatents",
"sampleonlinetransfer",
"samplerecharge",
"samplecreditcheck",
"sampleautomobileserver"
]
]
However, collect_appnames.size() returns 1, so collect_appnames.get(0) returns the entire set. I want to extract only "sample-tomcat" from this list. Please tell me how to do this. Thanks in advance!
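This happens because collect(a) aggregates everything into a single list value, so the result has one record holding one list. Once that value is converted to plain Java lists (the driver's Value type offers asList-style conversions), picking out one name is ordinary list access. A stdlib sketch of the unpacking, where the nested-list argument stands in for the converted query result:

```java
import java.util.List;

class UnpackCollected {
    // The collect(a) result arrives as one outer list holding one inner
    // list of names; unwrap the outer layer, then search the inner one.
    static String findName(List<List<String>> collected, String wanted) {
        List<String> names = collected.get(0); // the single collected list
        return names.stream()
                .filter(wanted::equals)
                .findFirst()
                .orElse(null);
    }
}
```

Alternatively, dropping collect(a) from the Cypher query would return one record per name, avoiding the nesting altogether.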

drop table not working - com.datastax.driver.core

Dropping a table using the DataStax driver for Cassandra doesn't seem to be working: create table works, but drop table does not, and no exception is thrown. 1) Am I doing the drop correctly? 2) Has anyone else seen this behavior?
In the output you can see that the table gets created and apparently dropped, since it is absent from the second table listing of the first run. However, when I reconnect (second run) the table is still there, resulting in an exception.
import java.util.Collection;
import com.datastax.driver.core.*;

public class Fail {
    SimpleStatement createTableCQL = new SimpleStatement("create table test_table(testfield varchar primary key)");
    SimpleStatement dropTableCQL = new SimpleStatement("drop table test_table");
    Session session = null;
    Cluster cluster = null;

    public Fail()
    {
        System.out.println("First Run");
        this.run();
        System.out.println("Second Run");
        this.run();
    }

    private void run()
    {
        try
        {
            cluster = Cluster.builder().addContactPoints("10.48.8.43 10.48.8.47 10.48.8.53")
                    .withCredentials("394016", "394016")
                    .withQueryOptions(new QueryOptions().setConsistencyLevel(ConsistencyLevel.ALL))
                    .build();
            session = cluster.connect("gid394016");
        }
        catch (Exception e)
        {
            System.err.println(e.toString());
            System.exit(1);
        }

        //create the table
        System.out.println("createTableCQL");
        this.session.execute(createTableCQL);

        //list tables in the keyspace
        System.out.println("Table list:");
        Collection<TableMetadata> results1 = cluster.getMetadata().getKeyspace("gid394016").getTables();
        for (TableMetadata tm : results1)
        {
            System.out.println(tm.toString());
        }

        //drop the table
        System.out.println("dropTableCQL");
        this.session.execute(dropTableCQL);

        //list tables in the keyspace
        System.out.println("Table list:");
        Collection<TableMetadata> results2 = cluster.getMetadata().getKeyspace("gid394016").getTables();
        for (TableMetadata tm : results2)
        {
            System.out.println(tm.toString());
        }

        session.close();
        cluster.close();
    }

    public static void main(String[] args) {
        new Fail();
    }
}
Console output:
First Run
[main] INFO com.datastax.driver.core.NettyUtil - Did not find Netty's native epoll transport in the classpath, defaulting to NIO.
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Table list:
CREATE TABLE gid394016.test_table (testfield text, PRIMARY KEY (testfield)) WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1 AND gc_grace_seconds = 864000 AND bloom_filter_fp_chance = 0.01 AND caching = { 'keys' : 'ALL', 'rows_per_partition' : 'NONE' } AND comment = '' AND compaction = { 'class' : 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy' } AND compression = { 'sstable_compression' : 'org.apache.cassandra.io.compress.LZ4Compressor' } AND default_time_to_live = 0 AND speculative_retry = '99.0PERCENTILE' AND min_index_interval = 128 AND max_index_interval = 2048;
dropTableCQL
Table list:
Second Run
[main] INFO com.datastax.driver.core.policies.DCAwareRoundRobinPolicy - Using data-center name 'Cassandra' for DCAwareRoundRobinPolicy (if this is incorrect, please provide the correct datacenter name with DCAwareRoundRobinPolicy constructor)
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.51:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.47:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.53:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host /10.48.8.49:9042 added
[main] INFO com.datastax.driver.core.Cluster - New Cassandra host 10.48.8.43 10.48.8.47 10.48.8.53/10.48.8.43:9042 added
createTableCQL
Exception in thread "main" com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:111)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:217)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:54)
at com.bdcauto.cassandrachecks.Fail.run(Fail.java:38)
at com.bdcauto.cassandrachecks.Fail.<init>(Fail.java:17)
at com.bdcauto.cassandrachecks.Fail.main(Fail.java:65)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.exceptions.AlreadyExistsException.copy(AlreadyExistsException.java:130)
at com.datastax.driver.core.Responses$Error.asException(Responses.java:118)
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:151)
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:175)
at com.datastax.driver.core.RequestHandler.access$2500(RequestHandler.java:44)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:801)
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:617)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1014)
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:937)
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:105)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:266)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:276)
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:263)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:318)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:304)
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:846)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:112)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.datastax.driver.core.exceptions.AlreadyExistsException: Table gid394016.test_table already exists
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:69)
at com.datastax.driver.core.Responses$Error$1.decode(Responses.java:37)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:230)
at com.datastax.driver.core.Message$ProtocolDecoder.decode(Message.java:221)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
... 14 more
You are running this code with the table already present in the database, and that is why you are getting the "already exists" error. Please connect to the database using cqlsh and check that yourself.
Create, alter, and drop table statements are propagated throughout the cluster asynchronously. Even though you receive a response from the coordinator, you still need to wait for schema agreement.
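In driver terms, that means polling the metadata until all hosts report the same schema version before issuing the next DDL statement (the Java driver exposes a checkSchemaAgreement() method on the cluster metadata for this). A stdlib sketch of such a wait loop, with a BooleanSupplier standing in for the driver call:

```java
import java.util.function.BooleanSupplier;

class SchemaWait {
    // Poll an agreement check until it reports true or the timeout elapses.
    // With the DataStax driver, inAgreement would typically wrap
    // cluster.getMetadata().checkSchemaAgreement() (an assumption about
    // your driver version; verify the call exists in your API).
    static boolean awaitSchemaAgreement(BooleanSupplier inAgreement,
                                        long timeoutMs, long pollMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (inAgreement.getAsBoolean()) {
                return true;
            }
            Thread.sleep(pollMs);
        }
        return inAgreement.getAsBoolean(); // one last check at the deadline
    }
}
```

Calling this between the drop and the subsequent reconnect/create would avoid racing the schema propagation.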

Envers throws RuntimeException in ValidityAuditStrategy when no audit records (table partitioning)

In Envers (persistence.xml) I enabled the strategy for table partitioning, following the development guide: http://docs.jboss.org/hibernate/orm/4.2/devguide/en-US/html/ch15.html#envers-partitioning
The ValidityAuditStrategy class throws a RuntimeException when there is no audit record. The exception occurs when Envers tries to update an audit record with the end-revision timestamp (revend_tstmp), but that audit record does not exist.
My application's database receives data loads from external applications, and it is not possible to change those applications to insert the corresponding audit records.
I see no way to handle this exception.
In method ValidityAuditStrategy#updateLastRevision:
if (l.size() == 1) {
    //... doStuff - OK
} else {
    throw new RuntimeException("Cannot find previous revision for entity " + auditedEntityName + " and id " + id);
}
In method ValidityAuditStrategy#perform:
if ( rowCount != 1 )
throw new RuntimeException("Cannot update previous revision for entity " + auditedEntityName + " and id " + id);
A similar issue was reported here: https://developer.jboss.org/thread/160195?tstart=0 but had no solution.
Is it possible to apply a workaround?
I am using hibernate-envers 4.1.3.Final.
Log:
2015-07-17 10:23:28,653 DEBUG [-] [org.hibernate.SQL] (http-/0.0.0.0:8080-5) update MY_ENTITY_AUD set ID_REV_FINAL=?, DATE_HOUR_REV_FINAL=? where ID_ENTITY=? and ID_REV <> ? and ID_REV_FINAL is null
2015-07-17 10:23:28,677 TRACE [-] [org.hibernate.type.descriptor.sql.BasicBinder] (http-/0.0.0.0:8080-5) binding parameter [1] as [INTEGER] - 422
2015-07-17 10:23:28,677 TRACE [-] [org.hibernate.type.descriptor.sql.BasicBinder] (http-/0.0.0.0:8080-5) binding parameter [2] as [TIMESTAMP] - Thu Jul 17 10:23:28 BRT 2015
2015-07-17 10:23:28,677 TRACE [-] [org.hibernate.type.descriptor.sql.BasicBinder] (http-/0.0.0.0:8080-5) binding parameter [3] as [INTEGER] - 12345
2015-07-17 10:23:28,678 TRACE [-] [org.hibernate.type.descriptor.sql.BasicBinder] (http-/0.0.0.0:8080-5) binding parameter [4] as [INTEGER] - 422
2015-07-17 10:23:28,803 ERROR [-] [org.hibernate.AssertionFailure] (http-/0.0.0.0:8080-5) HHH000099: an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session): java.lang.RuntimeException: Cannot update previous revision for entity my.package.MyEntity_AUD and id 12345
2015-07-17 10:23:28,841 WARN [-] [com.arjuna.ats.arjuna] (http-/0.0.0.0:8080-5) ARJUNA012125: TwoPhaseCoordinator.beforeCompletion - failed for SynchronizationImple< 0:ffffac1c045d:-3a5600e4:55a7c120:131, org.hibernate.engine.transaction.synchronization.internal.RegisteredSynchronization#5619c5a3 >: org.hibernate.AssertionFailure: Unable to perform beforeTransactionCompletion callback
at org.hibernate.engine.spi.ActionQueue$BeforeTransactionCompletionProcessQueue.beforeTransactionCompletion(ActionQueue.java:754) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.engine.spi.ActionQueue.beforeTransactionCompletion(ActionQueue.java:338) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.internal.SessionImpl.beforeTransactionCompletion(SessionImpl.java:490) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.engine.transaction.synchronization.internal.SynchronizationCallbackCoordinatorNonTrackingImpl.beforeCompletion(SynchronizationCallbackCoordinatorNonTrackingImpl.java:114) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.engine.transaction.synchronization.internal.RegisteredSynchronization.beforeCompletion(RegisteredSynchronization.java:53) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at com.arjuna.ats.internal.jta.resources.arjunacore.SynchronizationImple.beforeCompletion(SynchronizationImple.java:76)
at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.beforeCompletion(TwoPhaseCoordinator.java:273)
at com.arjuna.ats.arjuna.coordinator.TwoPhaseCoordinator.end(TwoPhaseCoordinator.java:93)
at com.arjuna.ats.arjuna.AtomicAction.commit(AtomicAction.java:162)
at com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionImple.commitAndDisassociate(TransactionImple.java:1189)
at com.arjuna.ats.internal.jta.transaction.arjunacore.BaseTransaction.commit(BaseTransaction.java:126)
at com.arjuna.ats.jbossatx.BaseTransactionManagerDelegate.commit(BaseTransactionManagerDelegate.java:75)
at org.jboss.as.ejb3.tx.CMTTxInterceptor.endTransaction(CMTTxInterceptor.java:92) [jboss-as-ejb3-7.4.0.Final-redhat-19.jar:7.4.0.Final-redhat-19]
...
Caused by: java.lang.RuntimeException: Cannot update previous revision for entity entity my.package.MyEntity_AUD and id 12345
at org.hibernate.envers.strategy.ValidityAuditStrategy.perform(ValidityAuditStrategy.java:210) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.envers.synchronization.work.AbstractAuditWorkUnit.perform(AbstractAuditWorkUnit.java:76) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.envers.synchronization.AuditProcess.executeInSession(AuditProcess.java:116) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.envers.synchronization.AuditProcess.doBeforeTransactionCompletion(AuditProcess.java:155) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.envers.synchronization.AuditProcessManager$1.doBeforeTransactionCompletion(AuditProcessManager.java:62) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
at org.hibernate.engine.spi.ActionQueue$BeforeTransactionCompletionProcessQueue.beforeTransactionCompletion(ActionQueue.java:748) [hibernate-core-4.1.3-Final.jar:4.1.3-Final]
... 90 more
persistence.xml has a property for specifying a custom AuditStrategy: org.hibernate.envers.audit_strategy. Change it
From:
<property name="org.hibernate.envers.audit_strategy" value="org.hibernate.envers.strategy.ValidityAuditStrategy"/>
To:
<property name="org.hibernate.envers.audit_strategy" value="com.app.your.pack.YourCustomValidityAuditStrategy"/>
Now you can extend ValidityAuditStrategy and override perform() so that it does not throw a RuntimeException when there is no previous revision for the entity, like this:
public class YourCustomValidityAuditStrategy extends ValidityAuditStrategy {
    private final Log logger = LogFactory.getLog(getClass());

    @Override
    public void perform(Session session, String entityName, AuditConfiguration auditCfg, Serializable id, Object data, Object revision) {
        try {
            super.perform(session, entityName, auditCfg, id, data, revision);
        } catch (RuntimeException re) {
            if (logger.isDebugEnabled()) {
                logger.debug("IGNORE RuntimeException: Cannot update previous revision for entity.", re);
            }
        }
    }
}
Overriding just the perform method and catching the RuntimeException won't help you, as the code throwing the RuntimeException is enclosed in an anonymous class of type BeforeTransactionCompletionProcess and is executed later.
The ValidityAuditStrategy is not very flexible, so the only solution I see is ugly but should work: copy the entire ValidityAuditStrategy code into a custom class and catch the RuntimeException in the BeforeTransactionCompletionProcess anonymous class. Then specify your custom class in persistence.xml:
<property name="org.hibernate.envers.audit_strategy" value="com.app.xxx.CustomValidityAuditStrategy"/>
The perform() method should look like the following:
@Override
public void perform(
        final Session session,
        final String entityName,
        final EnversService enversService,
        final Serializable id,
        final Object data,
        final Object revision) {
    final AuditEntitiesConfiguration audEntitiesCfg = enversService.getAuditEntitiesConfiguration();
    final String auditedEntityName = audEntitiesCfg.getAuditEntityName( entityName );
    final String revisionInfoEntityName = enversService.getAuditEntitiesConfiguration().getRevisionInfoEntityName();

    // Save the audit data
    session.save( auditedEntityName, data );

    // Update the end date of the previous row.
    //
    // When application reuses identifiers of previously removed entities:
    // The UPDATE statement will no-op if an entity with a given identifier has been
    // inserted for the first time. But in case a deleted primary key value was
    // reused, this guarantees correct strategy behavior: exactly one row with
    // null end date exists for each identifier.
    final boolean reuseEntityIdentifier = enversService.getGlobalConfiguration().isAllowIdentifierReuse();
    if ( reuseEntityIdentifier || getRevisionType( enversService, data ) != RevisionType.ADD ) {
        // Register transaction completion process to guarantee execution of UPDATE statement after INSERT.
        ( (EventSource) session ).getActionQueue().registerProcess( new BeforeTransactionCompletionProcess() {
            @Override
            public void doBeforeTransactionCompletion(final SessionImplementor sessionImplementor) {
                final Queryable productionEntityQueryable = getQueryable( entityName, sessionImplementor );
                final Queryable rootProductionEntityQueryable = getQueryable(
                        productionEntityQueryable.getRootEntityName(), sessionImplementor
                );
                final Queryable auditedEntityQueryable = getQueryable( auditedEntityName, sessionImplementor );
                final Queryable rootAuditedEntityQueryable = getQueryable(
                        auditedEntityQueryable.getRootEntityName(), sessionImplementor
                );
                final String updateTableName;
                /*commented code*/
                ...
                /*comment the following piece of code*/
                /*if ( rowCount != 1 && ( !reuseEntityIdentifier || ( getRevisionType( enversService, data ) != RevisionType.ADD ) ) ) {
                    throw new RuntimeException(
                            "Cannot update previous revision for entity " + auditedEntityName + " and id " + id
                    );
                }*/
            }
        });
    }
    sessionCacheCleaner.scheduleAuditDataRemoval( session, data );
}
As I said, it's ugly...
Setting org.hibernate.envers.allow_identifier_reuse=true helped in my scenario.
This does not exactly answer the original question, but from the outside it looks the same: Cannot update previous revision for entity my.package.MyEntity_AUD and id caa4ce8e.
I am using hibernate-envers 5.4.1 and, for whatever reason (probably some buggy import in the past), suddenly faced the same error.
A direct query against the database, select * from myentity_aud where id='caa4ce8e', returned:
rev      revend  revtype  id        ...
2121736  NULL    0        caa4ce8e  ...
2121737  NULL    1        caa4ce8e  ...
2121738  NULL    1        caa4ce8e  ...
As seen, revend is NULL for all records.
The issue is that Envers expects only one record (the latest) to have a NULL revend; every other record must have revend set to the rev of the record that superseded it.
So, to fix this particular case, it was enough to update the rows to:
rev      revend   revtype  id        ...
2121736  2121737  0        caa4ce8e  ...
2121737  2121738  1        caa4ce8e  ...
2121738  NULL     1        caa4ce8e  ...
After that, everything worked like a charm.
However, if you have millions of such records, you may want to write a script that takes care of them automatically.
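The repair described above (each revend pointing at the next rev, only the latest left NULL) can be sketched in plain Java; AuditRow and the repair routine are illustrative models, not Envers API, and in practice you would express the same logic as an UPDATE over the _AUD table:

```java
import java.util.List;

// Illustrative model of one _AUD row for a single entity id.
class AuditRow {
    final long rev;
    Long revend; // null until a later revision supersedes this one

    AuditRow(long rev, Long revend) {
        this.rev = rev;
        this.revend = revend;
    }
}

class RevendChainRepair {
    // Restore the invariant Envers expects: with rows sorted by rev, each
    // row's revend is the next row's rev, and only the last row stays null.
    static void repair(List<AuditRow> rowsSortedByRev) {
        for (int i = 0; i < rowsSortedByRev.size() - 1; i++) {
            rowsSortedByRev.get(i).revend = rowsSortedByRev.get(i + 1).rev;
        }
        if (!rowsSortedByRev.isEmpty()) {
            rowsSortedByRev.get(rowsSortedByRev.size() - 1).revend = null;
        }
    }
}
```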
