I have the following code to persist two different entities to my MySQL DB.
This works as expected; however, if there is an issue with one table and not the other, then one table gets populated and the other does not.
Note - I am running my application as an EAR file within a JBoss EAP server.
I want to ensure that either both tables are populated or none.
How can I do so?
Persistence.xml
<persistence-unit name="entitystore" transaction-type="JTA">
<jta-data-source>java:/jdbc/datasources/global</jta-data-source>
Java service class:
public void createCompanyStatuses(String client, CompanyStatusPostDTO companyStatusPostDTO) {
    EntityManager entityManager = null;
    try {
        CompanyStatus companyStatus = new CompanyStatus();
        companyStatus.setCompanyLabel(companyStatusPostDTO.getCompanyLabel());
        entityManager = entityManagement.createEntityManager(client);
        entityManager.persist(companyStatus);
        for (Integer employeeStatusId : companyStatusPostDTO.getEmployeeStatuses()) {
            CompanyStatusEmployeeStatus companyStatusEmployeeStatus = new CompanyStatusEmployeeStatus();
            companyStatusEmployeeStatus.setEmployeeId(employeeStatusId);
            companyStatusEmployeeStatus.setCompanyId(companyStatus.getCompanyId()); // todo - how will I get this?
            entityManager.persist(companyStatusEmployeeStatus);
        }
    } catch (Exception e) {
        log.error("An exception has occurred in inserting data into the table: " + e.getMessage(), e);
    } finally {
        entityManagement.closeEntityManager(client, entityManager);
    }
}
Edit:
I have tried adding:
@TransactionAttribute(value = TransactionAttributeType.REQUIRES_NEW)
However, the issue still remains that the successful persists are committed and the unsuccessful ones are not, rather than all or nothing being persisted.
Simply use a transaction.
With Spring, use the @Transactional annotation.
Without the Spring framework, you can do
doInJPA(entityManager -> {
    ...
    entityManager.persist(obj);
    ...
});
see : https://vladmihalcea.com/high-performance-java-persistence-github-repository/
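Since the question runs inside JBoss EAP (no Spring) with a JTA persistence unit, another option is to drive the JTA transaction yourself. The sketch below is only illustrative: it reuses the entityManagement helper and DTO classes from the question (the service and helper class names are placeholders), assumes a bean-managed-transaction EJB, and mainly shows that both persists must happen inside one transaction and that the exception must not be swallowed.
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.persistence.EntityManager;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN) // we drive the JTA transaction ourselves
public class CompanyStatusService {

    @Resource
    private UserTransaction userTransaction;

    // assumed to be the same multi-tenant helper used in the question
    private EntityManagement entityManagement;

    public void createCompanyStatuses(String client, CompanyStatusPostDTO dto) throws Exception {
        EntityManager entityManager = entityManagement.createEntityManager(client);
        try {
            userTransaction.begin();
            entityManager.joinTransaction(); // make the JTA entity manager join the transaction we just started

            CompanyStatus companyStatus = new CompanyStatus();
            companyStatus.setCompanyLabel(dto.getCompanyLabel());
            entityManager.persist(companyStatus);

            for (Integer employeeStatusId : dto.getEmployeeStatuses()) {
                CompanyStatusEmployeeStatus link = new CompanyStatusEmployeeStatus();
                link.setEmployeeId(employeeStatusId);
                link.setCompanyId(companyStatus.getCompanyId());
                entityManager.persist(link);
            }

            userTransaction.commit();   // both tables are written, or...
        } catch (Exception e) {
            userTransaction.rollback(); // ...neither is
            throw e;                    // rethrow instead of swallowing, so the caller sees the failure
        } finally {
            entityManagement.closeEntityManager(client, entityManager);
        }
    }
}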
So I would like to wrap a PessimisticLockingFailureException that gets thrown in a JPA repo when trying to get a lock for an entity that is already locked, and handle the wrapped exception in my exception handlers.
But it seems that when Spring tries to end the transaction, the connection is already closed, and Spring throws a new exception that overwrites the exception I would like to see.
In the logs I get "Application exception overridden by rollback exception" and it is this I would like to avoid. (The cause of the rollback exception is that the "Connection is closed".)
Is there a solution to this? Or am I doing something wrong?
(Here's some pseudo code of what I'm doing)
String restControllerMethod(String args) {
    try {
        return service.serviceMethod(args);
    } catch (Exception e1) {
        throw e1; // org.springframework.orm.jpa.JpaSystemException caused by org.hibernate.TransactionException caused by java.sql.SQLException
    }
}

@Transactional
String serviceMethod(String args) {
    Entity entity;
    try {
        entity = repo.repoFindMethod(args);
    } catch (Exception e2) {
        throw new WrappingException(e2); // org.springframework.dao.PessimisticLockingFailureException caused by org.hibernate.PessimisticLockException
    }
    // do some processing with entity
    return result;
}

@Lock(LockModeType.PESSIMISTIC_READ)
String repoFindMethod(String args);
I'm using spring-boot-starter-parent 2.3.2.RELEASE with spring-boot-starter-web, spring-boot-starter-data-jpa, and an embedded H2 DB.
Fixed this by adding a com.zaxxer.hikari.SQLExceptionOverride implementation and pointing the
spring.datasource.hikari.exception-override-class-name property at it.
This causes Hikari not to close the connection when the DB throws an exception with the specified error code.
I've also added @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) to the locking query, since default lock wait times can be vendor-specific (a sketch of such a repository method follows the example below).
The issue with this solution is that it is vendor-specific (both for H2 and Hikari), and not all vendors support a custom timeout for obtaining locks (H2, for example, does not support this, but that matters less since its timeout is very short anyway).
Example of my solution (for H2):
spring.datasource.hikari.exception-override-class-name=com.example.H2SQLExceptionOverride
public class H2SQLExceptionOverride implements SQLExceptionOverride {

    private static final Logger logger = LoggerFactory.getLogger(H2SQLExceptionOverride.class);

    public static final int LOCK_TIMEOUT_ERROR_CODE = 50200;

    @java.lang.Override
    public Override adjudicate(SQLException sqlException) {
        if (sqlException.getErrorCode() == LOCK_TIMEOUT_ERROR_CODE) {
            logger.debug("Diverting from default hikari impl and continuing transaction with errorCode: "
                    + sqlException.getErrorCode() + " and sqlState: " + sqlException.getSQLState());
            return Override.DO_NOT_EVICT;
        }
        return Override.CONTINUE_EVICT;
    }
}
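For reference, here is roughly how the locking repository method mentioned above could look. This is a sketch only: the entity name, method name, and repository interface are placeholders, and only the annotations mirror what the answer describes.
import java.util.Optional;

import javax.persistence.LockModeType;
import javax.persistence.QueryHint;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.QueryHints;

// Hypothetical repository: acquires a pessimistic read lock and fails fast instead of waiting for it
public interface SomeEntityRepository extends JpaRepository<SomeEntity, Long> {

    @Lock(LockModeType.PESSIMISTIC_READ)
    @QueryHints({@QueryHint(name = "javax.persistence.lock.timeout", value = "0")}) // 0 = do not wait for the lock
    Optional<SomeEntity> findByName(String name);
}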
I'm in the process of migrating an old app from JBoss 5.1.0 GA to WildFly 13.
On the previous server we were using Hibernate 3, while on WildFly 13 we're trying to use Hibernate 5.
A bit on the app: it's built as a multi-tenant app, so it has tables that are common to all the clients and tables that are client-specific. These client-specific tables are suffixed with the clientId, so they would read Document_2, Document_3, ..., Document_n (e.g.).
The common table definitions are in a .sar archive, under the hibernate.cfg.xml file.
For the common table mappings there is another class like so:
@Singleton
@Startup
public class MyHibernateService implements DynamicMBean {

    private MBeanServer platformMBeanServer;
    private ObjectName objectName = null;

    @PostConstruct
    public void start() {
        new Configuration().configure().buildSessionFactory();
        this.registerJMX();
    }

    private void registerJMX() {
        try {
            this.objectName = new ObjectName("jboss.jca:service=HibernateFactory,name=HibernateFactory");
            this.platformMBeanServer = ManagementFactory.getPlatformMBeanServer();
            this.platformMBeanServer.registerMBean(this, this.objectName);
        } catch (Exception e) {
            throw new IllegalStateException("Exception during JMX registration :" + e);
        }
    }
    ....
}
This class binds the hibernate factory for the common tables under a JNDI name so it can be retrieved later.
The client-specific table definitions / mappings are derived from a template. On app boot we build session factories specific to each client. The way we do this is to parse the template mapping and replace some sections with the client-specific ones. The template would read
<class name="com..DocumentMapping" table="TABLENAME">
where TABLENAME would get replaced at boot with e.g. Document_2.
The SessionFactoryManager class that does all the replacement looks like this:
if (SessionFactoryManager.LOGGER.isDebugEnabled()) {
    SessionFactoryManager.LOGGER.info("build custom SessionFactory for datasource: " + databaseConfig);
}
Configuration cfg = new Configuration();
// build all mappings
for (Class c : mappingClasses) {
    try {
        Method m = c.getMethod("getInstance", (Class[]) null);
        Helper dao = (Helper) m.invoke(null, (Object[]) null);
        String tableName = dao.getTableName(id);
        String mapping = dao.getMappping();
        if (mapping == null) {
            throw new DAOException(DAOException.TYPE.SESSION_FACTORY, "Mapping not available from class: " + c.getName());
        }
        cfg.addXML(mapping);
    } catch (Exception e) {
        throw new DAOException(DAOException.TYPE.SESSION_FACTORY, e);
    }
}
cfg.setProperty("hibernate.dialect", databaseConfig.getDatabaseDialect().getClass().getName());
if (StringTools.isValidString(databaseConfig.getDatabaseJNDI())) {
    cfg.setProperty("hibernate.connection.datasource", databaseConfig.getDatabaseJNDI());
} else {
    cfg.setProperty("hibernate.connection.url", databaseConfig.getDatabaseURL());
    cfg.setProperty("hibernate.connection.driver_class", databaseConfig.getDatabaseDriverClass());
    cfg.setProperty("hibernate.connection.username", databaseConfig.getDatabaseUser());
    cfg.setProperty("hibernate.connection.password", databaseConfig.getDatabasePassword());
}
if (showSQL) {
    cfg.setProperty("hibernate.show_sql", "true");
    cfg.setProperty("hibernate.format_sql", "false");
}
if (operation == OPERATION.RECREATE) {
    cfg.setProperty("hibernate.hbm2ddl.auto", "update");
} else {
    // With create-drop, the database schema will be dropped when the SessionFactory is closed explicitly.
    // This is necessary in order to remove client tables if something goes wrong at client creation time.
    cfg.setProperty("hibernate.hbm2ddl.auto", "create-drop");
}
SessionFactory sf = cfg.configure().buildSessionFactory();
System.out.println(cfg.getNamedQueries());
The problem is that, although I see that cfg.addXML(String xml) is deprecated, cfg.getNamedQueries() returns an empty map, as if the mapping is not loaded into the configuration.
All of the above code works fine with Hibernate 3.
I also tried changing:
SessionFactory sf = cfg.configure().buildSessionFactory();
to
StandardServiceRegistryBuilder builder = new StandardServiceRegistryBuilder();
builder.applySettings(cfg.getProperties());
MetadataSources metaData = new MetadataSources(builder.build());
Metadata buildMetadata = metaData.buildMetadata(builder.build());
SessionFactory sf = buildMetadata.buildSessionFactory();
but to no avail.
Another change I tried was this:
SessionFactory sf = cfg.configure().buildSessionFactory();
i.e. calling configure on the cfg, after it loaded the mapping. In this case I get a duplicate query exception, as if the queries are not isolated per session factory.
The Hibernate version I'm using is 5.1.14, the one shipped with WildFly 13 by default.
Any ideas?
Fixed by changing
cfg.addXML(mapping)
to
cfg.addInputStream(new ByteArrayInputStream(mapping.getBytes()));
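For context, here is roughly how the change sits inside the mapping loop from the code above; the explicit charset is my own addition, not part of the original fix, and the snippet needs java.io.ByteArrayInputStream and java.nio.charset.StandardCharsets imported.
// inside the loop that iterates over mappingClasses
String mapping = dao.getMappping();
if (mapping == null) {
    throw new DAOException(DAOException.TYPE.SESSION_FACTORY, "Mapping not available from class: " + c.getName());
}
// addXML(String) is deprecated in Hibernate 5.1; feeding the XML in as a stream works instead
cfg.addInputStream(new ByteArrayInputStream(mapping.getBytes(StandardCharsets.UTF_8)));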
I had created a service layer in which my method had the @Transactional annotation over it in the following manner:
@Transactional
void a() {
    User user = new User(1, "Abc", "Delhi");
    userDao.save(user);
    A a = null;
    a.toString(); // null pointer exception being encountered here
}
The transaction should have been rolled back and the user's details should not have been persisted to the db, but it is not happening.
Runtime exceptions will roll back the transaction by default. I don't know exactly how it works in Hibernate, but in the EclipseLink implementation of JPA, we can specify rollback = true/false for application exceptions as shown below.
@ApplicationException(inherited = true, rollback = true)
Try a similar configuration change.
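In case it helps, that annotation goes on the exception class itself. A minimal sketch (the exception class name here is made up, not from the question):
import javax.ejb.ApplicationException;

// Hypothetical business exception; rollback = true tells the container to roll back
// the active transaction whenever this exception is thrown from a bean method.
@ApplicationException(inherited = true, rollback = true)
public class UserPersistenceException extends RuntimeException {

    public UserPersistenceException(String message, Throwable cause) {
        super(message, cause);
    }
}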
You can also roll back in the catch block, something like below:
catch (Exception e) {
    entityManager.getTransaction().rollback();
}
I created a service method that creates user accounts. If creation fails because the given e-mail address is already in our database, I want to send the user an e-mail saying they are already registered:
@Transactional(noRollbackFor = DuplicateEmailException.class)
void registerUser(User user) {
    try {
        userRepository.create(user);
    } catch (DuplicateEmailException e) {
        User registeredUser = userRepository.findByEmail(user.getEmail());
        mailService.sendAlreadyRegisteredEmail(registeredUser);
    }
}
This does not work. Although I marked the DuplicateEmailException as "no rollback", the second SQL query (findByEmail) still fails because the transaction was aborted.
What am I doing wrong?
There is no @Transactional annotation on the repository.
That's not a problem with Spring / JDBC or your code; the problem is with the underlying database. For example, when you are using Postgres, if any statement fails in a transaction, all the subsequent statements will fail with "current transaction is aborted".
For example, executing the following statements on your Postgres:
> start a transaction
> DROP SEQUENCE BLA_BLA_BLA;
> Error while executing the query; ERROR: sequence "BLA_BLA_BLA" does not exist"
> SELECT * FROM USERS;
> ERROR: current transaction is aborted, commands ignored until end of transaction block
Still, the SELECT and subsequent statements are expected to succeed against MySQL, Oracle, and SQL Server.
Why don't you change the logic as follows:
void registerUser(User user) {
    User existingUser = userRepository.findByEmail(user.getEmail());
    if (existingUser == null) {
        userRepository.create(user);
    } else {
        mailService.sendAlreadyRegisteredEmail(existingUser);
    }
}
This would ensure that only non-existing users are inserted into the database.
The @Transactional annotation is placed incorrectly. Spring creates an AOP advisor around the method where the @Transactional annotation is defined. So, in this case, the pointcut will be created around the registerUser method. But the registerUser method doesn't throw DuplicateEmailException. Hence, no rollback rules are evaluated.
You need to define the @Transactional rule around the UserRepository.createUser method. This will ensure that the transaction pointcut created by Spring doesn't roll back because of DuplicateEmailException.
public class UserRepository {

    @Transactional(noRollbackFor = DuplicateEmailException.class)
    public User createUser() {
        // if the user exists, throw DuplicateEmailException
    }
}

void registerUser(User user) {
    try {
        userRepository.create(user);
    } catch (DuplicateEmailException e) {
        User registeredUser = userRepository.findByEmail(user.getEmail());
        mailService.sendAlreadyRegisteredEmail(registeredUser);
    }
}
You could wrap the call to the userRepository in a try-catch block. Or you could first check whether the user exists, and abort the creation of a new one.
I'm having a problem with Dropwizard where I can't catch the exception thrown by the Hibernate DAO object within my resource.
I have the following DAO object
public class ApplicantDAO extends AbstractDAO<Applicant>
{
    public ApplicantDAO(SessionFactory factory)
    {
        super(factory);
    }

    public long create(Applicant person)
    {
        return persist(person).getApplicantId();
    }
}
I am calling the create method from inside my Dropwizard resource to which I'm passing on my managed DAO from my Application's run method. The following doesn't work:
try
{
    long id = dao.create(applicant);
    message += "[Stored: " + id + "] ";
}
catch (HibernateException ex)
{
    message += "Couldn't store: " + exceptionToString(ex);
}
Instead I get Dropwizard's/Jersey's message:
{"code":500,"message":"There was an error processing your request. It has been logged (ID a785167e05024c69)."}
Is there a way to get around that?
I am not familiar with Dropwizard, but my best guess is that it has a JAX-RS ExceptionMapper registered that writes its own error when an exception is thrown.
See: javax.ws.rs.ext.ExceptionMapper
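For illustration only, a more specific mapper registered with Jersey can take precedence over the generic 500 response. The class name, status code, and message format below are assumptions, not Dropwizard defaults:
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

import org.hibernate.HibernateException;

// Hypothetical mapper: turns HibernateExceptions into a 409 response instead of the generic 500
@Provider
public class HibernateExceptionMapper implements ExceptionMapper<HibernateException> {

    @Override
    public Response toResponse(HibernateException ex) {
        return Response.status(Response.Status.CONFLICT)
                .type(MediaType.APPLICATION_JSON)
                .entity("{\"message\": \"Could not store applicant\"}")
                .build();
    }
}
In Dropwizard it would typically be registered in the Application's run method, e.g. environment.jersey().register(new HibernateExceptionMapper());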
I figured it out. The problem was happening because of an exception thrown inside of a transaction.
So instead of having @UnitOfWork on my resource method, I added @UnitOfWork(transactional = false).
Then I was able to manage my own transactions by passing in the SessionFactory to my resource, and that did the trick!
It might be related to the following issue: https://github.com/dropwizard/dropwizard/issues/949
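For completeness, a rough sketch of what that approach might look like. The resource class, path, and message format are mine, not the poster's actual code; only @UnitOfWork(transactional = false), the SessionFactory, and the manual transaction handling mirror the answer above.
import javax.ws.rs.POST;
import javax.ws.rs.Path;

import org.hibernate.HibernateException;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

import io.dropwizard.hibernate.UnitOfWork;

// Hypothetical resource: Dropwizard still manages the Hibernate session,
// but the transaction is begun, committed, and rolled back by hand.
@Path("/applicants")
public class ApplicantResource {

    private final ApplicantDAO dao;
    private final SessionFactory sessionFactory;

    public ApplicantResource(ApplicantDAO dao, SessionFactory sessionFactory) {
        this.dao = dao;
        this.sessionFactory = sessionFactory;
    }

    @POST
    @UnitOfWork(transactional = false) // session is managed, transaction is not
    public String storeApplicant(Applicant applicant) {
        String message = "";
        Transaction txn = sessionFactory.getCurrentSession().beginTransaction();
        try {
            long id = dao.create(applicant);
            txn.commit();
            message += "[Stored: " + id + "] ";
        } catch (HibernateException ex) {
            txn.rollback(); // undo the failed insert so the session stays usable
            message += "Couldn't store: " + ex.getMessage();
        }
        return message;
    }
}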