I have the following code to persist two different entities to my MySQL DB.
This works as expected; however, if there is an issue with one table and not the other, then one table gets populated and the other does not.
Note: I am running my application as an EAR file within a JBoss EAP server.
I want to ensure that either both tables are populated or neither is.
How can I do so?
Persistence.xml
<persistence-unit name="entitystore" transaction-type="JTA">
<jta-data-source>java:/jdbc/datasources/global</jta-data-source>
Java service class:
public void createCompanyStatuses(String client, CompanyStatusPostDTO companyStatusPostDTO) {
EntityManager entityManager = null;
try {
CompanyStatus companyStatus = new CompanyStatus();
companyStatus.setCompanyLabel(companyStatusPostDTO.getCompanyLabel());
entityManager = entityManagement.createEntityManager(client);
entityManager.persist(companyStatus);
for(Integer employeeStatusId: companyStatusPostDTO.getEmployeeStatuses()){
CompanyStatusEmployeeStatus companyStatusEmployeeStatus = new CompanyStatusEmployeeStatus();
companyStatusEmployeeStatus.setEmployeeId(employeeStatusId);
companyStatusEmployeeStatus.setCompanyId(companyStatus.getCompanyId()); // TODO - how will I get this?
entityManager.persist(companyStatusEmployeeStatus);
}
} catch(Exception e){
log.error("An exception has occurred inserting data into the table: " + e.getMessage(), e);
} finally {
entityManagement.closeEntityManager(client, entityManager);
}
}
Edit:
I have tried adding:
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
However, the issue remains that the successful persists are saved and the unsuccessful ones are not, rather than all or nothing being persisted.
Simply use a transaction.
With Spring, use the @Transactional annotation.
Without the Spring framework, you can do:
doInJPA(entityManager -> {
...
entityManager.persist(obj);
...
});
See: https://vladmihalcea.com/high-performance-java-persistence-github-repository/
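In the JTA setup described in the first question (JBoss EAP, no Spring), the same all-or-nothing behaviour comes from letting the container manage the transaction, for example via a stateless EJB with a container-managed, JTA-aware entity manager. The following is only a rough sketch: it assumes the service can be turned into (or called from) an EJB, uses the "entitystore" unit from the question's persistence.xml, lets exceptions propagate instead of catching and logging them, and the class name and the flush() call are my own additions.

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class CompanyStatusService { // illustrative name, not from the original code

    @PersistenceContext(unitName = "entitystore") // the JTA persistence unit shown above
    private EntityManager entityManager;

    // REQUIRED is the default: the container opens a JTA transaction before the method
    // and commits it afterwards. Any unhandled runtime exception rolls back every
    // persist in the method, so both tables are written or neither is.
    @TransactionAttribute(TransactionAttributeType.REQUIRED)
    public void createCompanyStatuses(CompanyStatusPostDTO dto) {
        CompanyStatus companyStatus = new CompanyStatus();
        companyStatus.setCompanyLabel(dto.getCompanyLabel());
        entityManager.persist(companyStatus);
        entityManager.flush(); // assumption: flush so the generated companyId is available

        for (Integer employeeStatusId : dto.getEmployeeStatuses()) {
            CompanyStatusEmployeeStatus link = new CompanyStatusEmployeeStatus();
            link.setEmployeeId(employeeStatusId);
            link.setCompanyId(companyStatus.getCompanyId());
            entityManager.persist(link);
        }
        // Do not catch and swallow exceptions here; letting them propagate is what triggers the rollback.
    }
}

If the per-client entityManagement factory has to stay, an alternative is bean-managed transactions: begin a UserTransaction, have the entity manager join it via joinTransaction(), and commit or roll back explicitly.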
I'm trying to wrap my head around some strange errors where code that seemingly should run as one transaction does not. I'll try to get all the relevant parts down, but it's quite a lot.
The project contains both Spring and EJB, so I'm not really sure whether one of them, or both, is actually in use here.
The Spring configuration contains this:
<jee:jndi-lookup id="platformTransactionManager" jndi-name="java:appserver/TransactionManager" resource-ref="false"
expected-type="javax.transaction.TransactionManager" lookup-on-startup="false"/>
<bean id="transactionManager"
class="org.springframework.transaction.jta.JtaTransactionManager" lazy-init="true">
<constructor-arg ref="platformTransactionManager"/>
<property name="autodetectUserTransaction" value="false"/>
<property name="allowCustomIsolationLevels" value="true"/>
</bean>
<tx:annotation-driven/>
Then I have the following Java code (a bit simplified, but it should contain all the relevant details):
@Stateless
@ApplicationException(rollback = true)
@TransactionManagement(TransactionManagementType.CONTAINER)
@TransactionAttribute(TransactionAttributeType.REQUIRED)
@Local(MyLocal.class)
@Remote(MyRemote.class)
@EJB(beanInterface = MyLocal.class, name = "java:app/MyEJB", beanName = "MyEJB")
public class MyEJB {
public void insertSomething(final Something something) throws SQLException {
final SomethingElse somethingElse = something.getSomethingElse();
// insert 1
Connection conn = null;
try {
final String sql = convertToInsertSql(somethingElse);
final com.microsoft.sqlserver.jdbc.SQLServerXADataSource dataSource = /* gets the datasource from Glassfish via JNDI */;
conn = dataSource.getConnection();
final Statement stmt = conn.createStatement();
stmt.execute(sql);
// this is actually wrapped in a method that returns the id of the created row;
// I have removed that for brevity, but assume that you get the id back
} finally {
if (conn != null) conn.close();
}
something.getSomethingElse().setId(/* id from the result above */);
// insert 2
conn = null;
try {
final String sql = convertToInsertSql(something);
final com.microsoft.sqlserver.jdbc.SQLServerXADataSource dataSource = /* gets the datasource from Glassfish via JNDI */;
conn = dataSource.getConnection();
final Statement stmt = conn.createStatement();
stmt.execute(sql);
} finally {
if (conn != null) conn.close();
}
}
}
Finally, the class that invokes the method above (without the boring SOAP stuff):
public class MyService extends SpringBeanAutowiringSupport {
@Inject
private MyLocal myLocal;
public void createSomething(/*stuff*/) {
/* more stuff */
myLocal.insertSomething(something);
}
}
I have several questions here:
What (if any) transactions will be created?
Is the transactionManager defined with Spring in play here, or just the glassfish jndi one?
Assuming a transaction across the method insertSomething:
What will happen to the query when the connection is closed mid-transaction (insert 1)?
What will happen if an error occurs after the connection is closed (after insert 1)?
Is there a possibility of insert 2 being committed to the database while insert 1 is not? If so, how? (This is the error that I'm actually debugging.)
What are the consequences of using getConnection() on the SQLServerXADataSource?
Will we have XA (I would assume you had to use one of the XA-related methods for getting a connection)?
Will we have connection pooling (getConnection() invokes an internal method with the pooling variable set to null)?
If you think this question is messy, you should see the project I based it on ;)
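For what it's worth, the usual cause of insert 2 being committed while insert 1 is not is that a connection obtained directly from the driver's SQLServerXADataSource via getConnection() is a plain, autocommitting connection that is not enlisted in the container's JTA transaction, so each statement commits on its own. A hedged sketch of the pattern that would tie both inserts to the one container-managed transaction; "jdbc/mySqlServerXAPool" stands in for whatever JNDI name the GlassFish connection pool actually has, and Something, SomethingElse and convertToInsertSql are the question's own types and helper:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.sql.DataSource;

@Stateless
@TransactionAttribute(TransactionAttributeType.REQUIRED)
public class MyEJB {

    // Inject the container-managed DataSource (the GlassFish pool wrapping the
    // XADataSource), not the raw SQLServerXADataSource itself.
    @Resource(lookup = "jdbc/mySqlServerXAPool") // assumed pool name
    private DataSource dataSource;

    public void insertSomething(final Something something) throws SQLException {
        // insert 1: the connection comes from the managed pool and enlists in the JTA transaction
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(convertToInsertSql(something.getSomethingElse()));
        }
        // insert 2: same transaction, so both inserts commit or roll back together
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            stmt.execute(convertToInsertSql(something));
        }
        // close() on a managed connection only returns it to the pool; the actual
        // commit or rollback happens when the container ends the JTA transaction.
    }
}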
I have recently started working on a project with SOAP webservices, Spring and Hibernate.
I am facing the following issue:
We use SOAP UI to send requests to test our code. I have written a service which processes bills. Basically there are 2 services, one creates a bill and the other processes that bill.
We have a table called BillTb. Before processing a bill, we check its status. If the bill status is 3 (pending), we process it; if it is not 3, we do not. Once the bill is processed, we change the status to 4 (processed).
Now, if the bill status is 3, we make a number of entries in other tables and, at the end, the status is changed to 4.
If processing fails partway through, we need to revert all those entries, so we make these entries within a transaction.
The DAO layer with hibernate code is as follows:
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import javax.persistence.PersistenceContextType;
import javax.persistence.Query;
public class BillDAOImpl implements BillDao {

@PersistenceContext(type = PersistenceContextType.EXTENDED)
private EntityManager entityManager;
...
...
...
int pendingStatus = 3;
int processedStatus = 4;
Session session = null;
for(int id: ids){
Bill bill = null;
try{
session = entityManager.unwrap(Session.class);
bill= entityManager.find(Bill.class, id);
session.getTransaction().begin();
if(bill.status() != pendingStatus ){
System.out.println("The bill is already processed");
continue;
}
...
...
bill.setStatus(processedStatus);
entityManager.persist(bill);
session.getTransaction().commit();
} catch(Exception e){
}
}
}
Now the problem is: once a bill's status has been changed from 3 to 4, if I change the status back to 3 by firing an update query directly in the database, processing should work again, but somehow the code still reads the status as 4.
If I bring the server down and then execute the request again, it works for the same entry.
The other transaction-related parameters are set as:
<property name="hibernate.cache.use_query_cache" value="false" />
Also,
<bean id="projectEntityManagerFactory"
class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean"
p:persistenceXmlLocation="classpath*:META-INF/persistence.xml"
p:persistenceUnitName="persistenceUnit" p:loadTimeWeaver-ref="loadTimeWeaver"
p:jpaVendorAdapter-ref="jpaVendorAdapter" p:jpaDialect-ref="jpaDialect"
p:dataSource-ref="datasourceBean">
<property name="jpaProperties">
<props>
<prop key="hibernate.transaction.manager_lookup_class">org.hibernate.transaction.BTMTransactionManagerLookup
</prop>
<prop key="hibernate.transaction.flush_before_completion">false</prop>
...
...
<prop key="hibernate.connection.isolation">3</prop>
<prop key="hibernate.connection.release_mode">auto</prop>
</props>
</property>
</bean>
So it seems that the session is somehow caching the bill object, and when I update the bill directly in the database, the session holds stale data. What should be done in this case? Should I clear the session at the end of the method?
You should perform the query inside the transaction and also remember to commit the transaction every time (if you hit continue, that is omitted).
Actually, I would write it like this:
Session session = entityManager.unwrap(Session.class);
for (int id : ids) {
Bill bill = null;
Transaction tx = session.getTransaction();
tx.begin();
try{
bill= entityManager.find(Bill.class, id);
if(bill.status() != pendingStatus ){
System.out.println("The bill is already processed");
tx.commit();
continue;
}
bill.setStatus(processedStatus);
entityManager.persist(bill);
session.flush();
tx.commit();
}catch(Exception e){
tx.rollback();
throw e;
}
}
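Regarding the stale read itself: with an EXTENDED persistence context, the Bill loaded earlier stays in the first-level cache, so an UPDATE fired directly against the database is not seen. A small sketch of two possible mitigations, reusing the entityManager from the question:

// Option 1: re-read the row for the entity you are about to check
Bill bill = entityManager.find(Bill.class, id);
entityManager.refresh(bill); // discards the cached state and reloads it from the database

// Option 2: detach everything once the batch is finished, so the next request
// starts with an empty persistence context instead of stale entities
entityManager.clear();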
I'm new to graph databases and I'm having problems getting the API to work within a transaction.
I have a simple piece of code that uses the Neo4j graph DB API to create nodes and a relationship. My code runs in JUnit and tries to create 2 nodes and a relationship between them, using begin and end transaction as given below.
The code works fine in the happy scenario. However, if something fails within the code, the nodes are still committed to the graph database. Not sure if I'm doing something wrong here; I would have expected the 2 nodes created to be rolled back.
Here is the code snippet:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/applicationContext.xml" })
public class RestBatchLoaderTest {
@Autowired
SpringRestGraphDatabase graphDatabaseService;
@Test
public void createNode() {
Transaction tx = graphDatabaseService.beginTx();
try {
Map<String,Object> nodeprops1 = new HashMap<String, Object>();
nodeprops1.put("name", "James Parker");
nodeprops1.put("age", Integer.valueOf(11));
Node james = graphDatabaseService.createNode(nodeprops1);
Assert.assertNotNull(james);
Map<String,Object> nodeprops2 = new HashMap<String, Object>();
nodeprops2.put("name", "Bing P");
nodeprops2.put("age", Integer.valueOf(34));
Node bing= graphDatabaseService.createNode(nodeprops2);
Node aa = null;
// Failure point: should rollback the previous node in the finally.
graphDatabaseService.remove(aa);
Map<String,Object> relprops = new HashMap<String, Object>();
RelationshipType type = new RelationshipType() {
@Override
public String name() {
return "MARRIED_TO";
}
};
graphDatabaseService.createRelationship(james, bing, type, relprops);
tx.success();
} finally {
tx.finish();
}
}
}
The graphDatabaseService object is autowired using spring configuration. Here is the configuration:
<neo4j:config graphDatabaseService="graphDatabaseService"/>
<bean id="graphDatabaseService" class="org.springframework.data.neo4j.rest.SpringRestGraphDatabase">
<constructor-arg value="http://localhost:7474/db/data/"/>
</bean>
Also, I notice that the tx object is an instance of NullTransaction when graphDatabaseService.beginTx() is called in the code above.
Any ideas, what is going wrong?
Thanks.
I think I figured out what the problem was. The configuration needs to have batch mode enabled (set to true). Also, I used the RestAPI wrapper around the graph database object to run it as one atomic unit. See the code below:
@Autowired
SpringRestGraphDatabase graphDatabaseService;
private RestAPI restAPI;
@Before
public void init(){
this.restAPI = ((RestGraphDatabase)graphDatabaseService).getRestAPI();
}
@Test
public void testEnableBatchTransactions() throws Exception {
System.setProperty(Config.CONFIG_BATCH_TRANSACTION,"true");
Transaction tx = restAPI.beginTx();
try {
Node n1 = restAPI.createNode(map("name", "node1"));
Node n2 = restAPI.createNode(map("name", "node2"));
Node n3 = restAPI.createNode(map("name", "node3"));
//String s = null;
//s.toString();
Node n4 = restAPI.createNode(map("name", "node4"));
tx.success();
} finally {
tx.finish();
}
assertTrue(tx instanceof BatchTransaction);
}
Also, System.setProperty(Config.CONFIG_BATCH_TRANSACTION, "true"); enables batch mode.
To test this, try un-commenting the commented-out lines and running the test. Nodes n1, n2 and n3 will not be committed to the DB.
You specified graphDatabaseService.remove(aa); as your failure point, since aa is null. Looking at the documentation of org.springframework.data.neo4j.rest.SpringRestGraphDatabase, there is no exception documented as being thrown if the node is null. Have you verified that an exception is actually thrown? Otherwise your code will run through to tx.success(). If an exception is thrown, please specify which versions of Neo4j and Spring you are using.
Edit:
After reading a little more, I see in the source of org.springframework.data.neo4j.rest.SpringRestGraphDatabase that it gives you a NullTransaction that basically does nothing (see here).
Furthermore, the Spring Data Neo4j documentation states that each operation runs in its own transaction, as the Neo4j REST adapter does not allow transactions that span multiple operations (see here).
I'm adding Envers to existing Hibernate entities. Everything is working smoothly so far as far as auditing goes; however, querying is a different matter because the revision tables aren't populated with the existing data. Has anyone else already solved this issue? Maybe you've found some way to populate the revision tables from the existing tables? Just thought I'd ask; I'm sure others would find it useful.
We populated the initial data by running a series of raw SQL queries to simulate "inserting" all the existing entities as if they had just been created at the same time. For example:
insert into REVINFO(REV,REVTSTMP) values (1,1322687394907);
-- this is the initial revision, with an arbitrary timestamp
insert into item_AUD(REV,REVTYPE,id,col1,col2) select 1,0,id,col1,col2 from item;
-- this copies the relevant row data from the entity table to the audit table
Note that the REVTYPE value is 0 to indicate an insert (as opposed to 1 for a modification and 2 for a deletion).
You'll have a problem in this category if you are using the Envers ValidityAuditStrategy and have data which was created without Envers enabled.
In our case (Hibernate 4.2.8.Final) a basic object update throws "Cannot update previous revision for entity and " (logged as [org.hibernate.AssertionFailure] HHH000099).
It took me a while to find this discussion/explanation, so I'm cross-posting:
ValidityAuditStrategy with no audit record
You don't need to.
AuditQuery allows you to get both the revision entity and the data revision by:
AuditQuery query = getAuditReader().createQuery()
.forRevisionsOfEntity(YourAuditedEntity.class, false, false);
This constructs a query which returns a list of Object[3]. The first element is your data, the second is the revision entity and the third is the type of revision.
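For example, a sketch of consuming that result list, assuming the default revision entity (org.hibernate.envers.DefaultRevisionEntity); substitute your own revision entity class if you have one:

AuditReader auditReader = getAuditReader();
@SuppressWarnings("unchecked")
List<Object[]> revisions = auditReader.createQuery()
        .forRevisionsOfEntity(YourAuditedEntity.class, false, false)
        .getResultList();

for (Object[] row : revisions) {
    YourAuditedEntity data = (YourAuditedEntity) row[0];             // entity state at that revision
    DefaultRevisionEntity revision = (DefaultRevisionEntity) row[1]; // revision number and timestamp
    RevisionType type = (RevisionType) row[2];                       // ADD, MOD or DEL
    // e.g. revision.getId(), revision.getRevisionDate(), data...
}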
We have solved the issue of populating the audit logs with the existing data as follows:
SessionFactory defaultSessionFactory;
// special configured sessionfactory with envers audit listener + an interceptor
// which flags all properties as dirty, even if they are not.
SessionFactory replicationSessionFactory;
// Entities must be retrieved with a different session factory, otherwise the
// auditing tables are not updated. ( this might be because I did something
// wrong, I don't know, but I know it works if you do it as described above. Feel
// free to improve )
FooDao fooDao = new FooDao();
fooDao.setSessionFactory( defaultSessionFactory );
List<Foo> all = fooDao.findAll();
// cleanup and close connection for fooDao here.
..
// Obtain a session from the replicationSessionFactory here, e.g.
Session session = replicationSessionFactory.getCurrentSession();
// replicate all data, overwriting data if an entry for that id already exists;
// the trick is to let both session factories point to the SAME database.
// By updating the data in the existing db, the audit listener gets triggered
// and inserts your "initial" data in the audit tables.
for( Foo foo: all ) {
session.replicate( foo, ReplicationMode.OVERWRITE );
}
The configuration of my data sources (via Spring):
<bean id="replicationDataSource"
class="org.apache.commons.dbcp.BasicDataSource"
destroy-method="close">
<property name="driverClassName" value="org.postgresql.Driver"/>
<property name="url" value=".."/>
<property name="username" value=".."/>
<property name="password" value=".."/>
<aop:scoped-proxy proxy-target-class="true"/>
</bean>
<bean id="auditEventListener"
class="org.hibernate.envers.event.AuditEventListener"/>
<bean id="replicationSessionFactory"
class="o.s.orm.hibernate3.annotation.AnnotationSessionFactoryBean">
<property name="entityInterceptor">
<bean class="com.foo.DirtyCheckByPassInterceptor"/>
</property>
<property name="dataSource" ref="replicationDataSource"/>
<property name="packagesToScan">
<list>
<value>com.foo.**</value>
</list>
</property>
<property name="hibernateProperties">
<props>
..
<prop key="org.hibernate.envers.audit_table_prefix">AUDIT_</prop>
<prop key="org.hibernate.envers.audit_table_suffix"></prop>
</props>
</property>
<property name="eventListeners">
<map>
<entry key="post-insert" value-ref="auditEventListener"/>
<entry key="post-update" value-ref="auditEventListener"/>
<entry key="post-delete" value-ref="auditEventListener"/>
<entry key="pre-collection-update" value-ref="auditEventListener"/>
<entry key="pre-collection-remove" value-ref="auditEventListener"/>
<entry key="post-collection-recreate" value-ref="auditEventListener"/>
</map>
</property>
</bean>
The interceptor:
import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;
..
public class DirtyCheckByPassInterceptor extends EmptyInterceptor {
public DirtyCheckByPassInterceptor() {
super();
}
/**
* Flags ALL properties as dirty, even if nothing has changed.
*/
@Override
public int[] findDirty( Object entity,
Serializable id,
Object[] currentState,
Object[] previousState,
String[] propertyNames,
Type[] types ) {
int[] result = new int[ propertyNames.length ];
for ( int i = 0; i < propertyNames.length; i++ ) {
result[ i ] = i;
}
return result;
}
}
PS: keep in mind that this is a simplified example. It will not work out of the box, but it will guide you towards a working solution.
Take a look at http://www.jboss.org/files/envers/docs/index.html#revisionlog
Basically, you can define your own revision entity using the @RevisionEntity annotation,
and then implement the RevisionListener interface to insert your additional audit data,
such as the current user and the high-level operation. Usually those are pulled from a ThreadLocal context.
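A minimal sketch of that approach; the class names and the CurrentUserHolder are illustrative, not taken from the documentation:

import javax.persistence.Entity;
import org.hibernate.envers.DefaultRevisionEntity;
import org.hibernate.envers.RevisionEntity;
import org.hibernate.envers.RevisionListener;

// Replaces the default REVINFO entity and adds a username column
@Entity
@RevisionEntity(UserRevisionListener.class)
public class UserRevisionEntity extends DefaultRevisionEntity {
    private String username;

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
}

// Called by Envers each time a new revision row is created
class UserRevisionListener implements RevisionListener {
    @Override
    public void newRevision(Object revisionEntity) {
        // assumption: the current user is kept in an application-specific ThreadLocal holder
        ((UserRevisionEntity) revisionEntity).setUsername(CurrentUserHolder.get());
    }
}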
You could extend the AuditReaderImpl with a fallback option for the find method, like:
public class AuditReaderWithFallback extends AuditReaderImpl {
public AuditReaderWithFallback(
EnversService enversService,
Session session,
SessionImplementor sessionImplementor) {
super(enversService, session, sessionImplementor);
}
#Override
#SuppressWarnings({"unchecked"})
public <T> T find(
Class<T> cls,
String entityName,
Object primaryKey,
Number revision,
boolean includeDeletions) throws IllegalArgumentException, NotAuditedException, IllegalStateException {
T result = super.find(cls, entityName, primaryKey, revision, includeDeletions);
if (result == null)
result = (T) super.getSession().get(entityName, (Serializable) primaryKey);
return result;
}
}
You could add a few more checks in terms of returning null in some cases.
You might want to use your own factory as well:
public class AuditReaderFactoryWithFallback {
/**
* Create an audit reader associated with an open session.
*
* @param session An open session.
* @return An audit reader associated with the given session. It shouldn't be used
* after the session is closed.
* @throws AuditException When the required listeners aren't installed.
*/
public static AuditReader get(Session session) throws AuditException {
SessionImplementor sessionImpl;
if (!(session instanceof SessionImplementor)) {
sessionImpl = (SessionImplementor) session.getSessionFactory().getCurrentSession();
} else {
sessionImpl = (SessionImplementor) session;
}
final ServiceRegistry serviceRegistry = sessionImpl.getFactory().getServiceRegistry();
final EnversService enversService = serviceRegistry.getService(EnversService.class);
return new AuditReaderWithFallback(enversService, session, sessionImpl);
}
}
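Usage then mirrors the standard AuditReaderFactory, for example (a sketch; entityId and revisionNumber are placeholders):

Session session = sessionFactory.getCurrentSession();
AuditReader reader = AuditReaderFactoryWithFallback.get(session);

// Returns the audited snapshot at the given revision, or falls back to the
// current row in the entity table when no audit record exists for it.
YourAuditedEntity snapshot = reader.find(YourAuditedEntity.class, entityId, revisionNumber);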
I've checked many ways, but the best way for me was to write a PL/pgSQL script as below.
The script below is written for PostgreSQL. I didn't check other vendors, but they should have similar procedural features.
CREATE SEQUENCE hibernate_sequence START 1;
DO
$$
DECLARE
u RECORD;
next_id BIGINT;
BEGIN
FOR u IN SELECT * FROM "user"
LOOP
SELECT NEXTVAL('hibernate_sequence')
INTO next_id;
INSERT INTO revision (rev, user_id, timestamp)
VALUES (next_id,
'00000000-0000-0000-0000-000000000000',
(SELECT EXTRACT(EPOCH FROM NOW() AT TIME ZONE 'utc')) * 1000);
INSERT INTO user_aud(rev,
revend,
revtype,
id,
created_at,
created_by,
last_modified_at,
last_modified_by,
name)
VALUES (next_id,
NULL,
0,
u.id,
u.created_at,
u.created_by,
u.last_modified_at,
u.last_modified_by,
u.name);
END LOOP;
END;
$$;