I am trying to find a way to build reliability into our webapp. The plan is to dump the SQL along with the data to a file if network connectivity or the database connection is lost. In the current implementation we have a REST controller, a service and a DAO. The DAO throws PersistenceException, and that is propagated up to the controller layer.
Example code:
public class MyDAOClass {

    public void save(Object object) {
        try {
            entityManager.persist(object);
        } catch (PersistenceException e) {
            throw new DBException("Error occurred in save", e);
        }
    }
}
The DBException is a runtime exception.
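For reference, DBException is just a thin unchecked wrapper; a simplified sketch of what it looks like (the real class may carry more):

// Simplified sketch; the actual DBException may have additional constructors/fields.
public class DBException extends RuntimeException {
    public DBException(String message, Throwable cause) {
        super(message, cause);
    }
}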
Now comes the actual question. One of my teammates suggested having custom exceptions, e.g. InsertException, UpdateException, etc. If we encounter one of these exceptions, we know which operation was being performed on that entity, so it can be saved to a file as the appropriate SQL.
For example, let's say the code failed to save an Employee entity. This would throw InsertException and create an entry in the file as an insert statement for that entity: insert into employee values ('firstname','lastname');
To me, implementing the creation of an SQL file when connectivity is lost does not seem as simple as the above suggests.
The questions I have put forward are:
1) How do you handle the case where multiple actions (any combination of insert, update, delete) are performed in the service method?
2) What about different exceptions? The reason for a PersistenceException can be anything (a constraint violation, entity not found, etc.), not just a connection issue.
Is there a way to implement the above scenario that also covers all these different conditions?
Thanks.
Update:
Based on comments by chrylis, I should have added this to the question already. It's a webapp running locally in different retail stores. The application can't have downtime, so if there are any connectivity issues the app should keep working. The file will later be synced with the central database server.
With Spring you have Hibernate ORM storing the data to the database. If an exception occurs during a request, the transaction is rolled back; exactly where this happens depends on where you've put the @Transactional annotation.
We use a service layer that owns the transaction. If a database operation or any other operation fails in the service layer and throws an exception, the transaction is rolled back automatically. We then use a Spring exception resolver to handle any exception and write custom errors to the log and to the user. I guess you could store the exceptions in another database as well if that interests you, though I think logging them should suffice.
This article teaches you more about general exception handling.
Here is our exception resolver.
import ...

@ControllerAdvice
public class SpringExceptionResolver {

    Logger logger = LoggerFactory.getLogger("com.realitylabs.event.controller.RecoverController");

    @ExceptionHandler({CorruptedSessionUserException.class})
    @ResponseBody
    @ResponseStatus(value = HttpStatus.FORBIDDEN)
    public ErrorObject userNotFoundExceptionHandler() {
        // Handle the exception: log it using the logger.
        // We usually return an error object in JSON so that we can show custom error messages.
    }
}
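The ErrorObject we return is just a plain POJO serialized to JSON; something roughly like this (field names here are illustrative, not our exact class):

// Illustrative only; the real ErrorObject may have different fields.
public class ErrorObject {

    private String code;
    private String message;

    public ErrorObject(String code, String message) {
        this.code = code;
        this.message = message;
    }

    public String getCode() { return code; }
    public String getMessage() { return message; }
}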
Here is how a service might look. We usually call our services from the controllers; if an exception is thrown from a call that came through a controller, the advice above will handle it.
import ...

@Service(value = "ObjectService")
@Transactional
public class ObjectServiceImpl implements ObjectService {

    @Autowired
    private ObjectDAO objectDAO;

    @Override
    public Object get(int id) {
        Object o = objectDAO.get(id);
        Hibernate.initialize(o.getVoters());
        return o;
    }
}
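For completeness, a controller calling that service could look roughly like this (the mapping and names are made up here); note there is no try/catch, since the advice above handles any exception:

import ...

// Hypothetical controller; mapping and names are illustrative.
@Controller
@RequestMapping("/objects")
public class ObjectController {

    @Autowired
    private ObjectService objectService;

    @RequestMapping(value = "/{id}", method = RequestMethod.GET)
    @ResponseBody
    public Object get(@PathVariable int id) {
        // no try/catch: any exception propagates to SpringExceptionResolver
        return objectService.get(id);
    }
}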
I hope this helps.
Related
The scenario is that I wrote an MVC-based application:
Controller - Service - DAO layer. Now I get an exception in the DAO layer and I want to handle that exception in the presentation layer, so that the service layer needs no change.
With a regular try/catch or a throws clause it has to pass through the service layer, which I don't want.
Is there a better approach to achieve this?
class Controller {
    void method1() {}
}

class Service {
    void method1Serice() {}
}

class DAO {
    void method1DAO() {
        // exception occurs here
    }
}
You can have the DAO class throw an unchecked exception (any subclass of RuntimeException will do). You can create your own custom exception or use one of the predefined ones. Just make sure the service doesn't catch Throwable, and you can have the controller catch it.
You can extend your exception class from RuntimeException so that the compiler does not complain about exception handling. You can then catch that exception in the presentation layer.
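A minimal sketch of that idea with made-up names: the DAO wraps the failure in an unchecked exception, the service stays untouched, and the presentation layer catches it:

// Hypothetical names; a sketch of the unchecked-exception approach.
public class DataAccessFailureException extends RuntimeException {
    public DataAccessFailureException(String message, Throwable cause) {
        super(message, cause);
    }
}

class MyDAO {
    void method1DAO() {
        try {
            // ... JDBC/JPA work ...
        } catch (RuntimeException e) {
            // rethrow unchecked so the service needs no throws clause or catch block
            throw new DataAccessFailureException("DAO failure", e);
        }
    }
}

class MyService {
    private final MyDAO dao = new MyDAO();

    void method1Service() {
        dao.method1DAO(); // nothing to declare or catch here
    }
}

class MyController {
    private final MyService service = new MyService();

    void method1() {
        try {
            service.method1Service();
        } catch (DataAccessFailureException e) {
            // handle it in the presentation layer
        }
    }
}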
Perhaps you are looking for controller-based exception handling; you can read about it in Exception Handling in Spring MVC and also Error Handling for REST with Spring.
@ExceptionHandler(YourException.class)
public String handleException(YourException e) {
    logger.info(e.getMessage());
    return "database_error";
}
Actually, I would suggest you handle the exceptions properly in the service layer and encapsulate them into something meaningful to return to the end user via the controller layer.
Normally, checked exceptions carry meaningful information that can be used for recovery or to let the caller handle the failure explicitly. Try not to simply swallow them, since they are there for a reason.
As I understand it, a data access object (DAO) is intended for transferring data between the server and the client. I'm assuming that the client is what you refer to as the presentation layer. In other words, the part that the end user interacts with. As such, the DAO should contain fields and accessor methods only, i.e. it should not contain logic. Hence it should not contain methods that may throw exceptions. So I would suggest perhaps re-designing your application. Otherwise, perhaps you can provide more detailed code?
I have an EJB with container-managed transactions. It has a method (exposed as a REST call) that calls something on another EJB I've injected via JNDI (not sure if that matters), and that call throws an exception that extends RuntimeException (so it causes a transaction rollback), which is translated into a 404 response through an ExceptionMapper.
I want that exception to be what returns from my REST call and I don't mind it being in the logs at all, but I do not want my log to be spammed with the EJBExceptionRolledBackException stacktrace that it causes (the stacktrace gets printed three times for some reason). I believe two out of these three stacktraces get logged before the server even gets back to the final method for the REST call.
Either way, as long as I figure out how to suppress one of these logging actions I'll figure out a way to stop all three. Does anyone have an idea how to suppress this kind of logging?
As the EJB specification says, every SystemException must be logged by the container implementation. You can try to catch it, or mark your exception as an ApplicationException, but if you mark it, it won't roll back the transaction. I suggest this:
@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class MyBean {

    @Resource
    private UserTransaction tx;

    public void myMethod() throws MyApplicationException {
        try {
            tx.begin();
            // call needed methods
            tx.commit();
        } catch (Exception e) {
            // silently roll back
            // (exceptions from UserTransaction omitted for readability)
            tx.rollback();
            throw new MyApplicationException(e);
        }
    }
}
Now, in your client code for that EJB, you can react to MyApplicationException and return whatever you want, log it, or not. Using container-managed transactions means errors are logged per the specification (and they are wrapped in other exceptions as bean instances are destroyed). You can also mark the transaction as rollback-only. Be sure to use this carefully: if you don't want logs from the container, you need to control the whole flow yourself.
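If you prefer to stay with container-managed transactions, a rough sketch (hypothetical bean) of the rollback-only variant mentioned above would be:

import ...

// Hypothetical sketch: container-managed transactions, rollback marked manually.
@Stateless
public class MyCmtBean {

    @Resource
    private SessionContext sessionContext;

    public void myMethod() throws MyApplicationException {
        try {
            // call needed methods
        } catch (Exception e) {
            // mark the CMT transaction for rollback without letting a system exception escape
            sessionContext.setRollbackOnly();
            throw new MyApplicationException(e);
        }
    }
}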
Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the beginning of the transaction, or some other way to execute SQL or a stored procedure that tells the DB which user is executing the inserts/updates/deletes that are about to happen, before the rest of the operations run?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions to be reused by multiple application users. Users authenticate themselves to a middle-tier application, which uses a single identity to log in to the database and maintains all the user connections. In this model, application users are users who are authenticated to the middle tier of an application, but who are not known to the database.....in these situations, the application typically connects as a single database user and all actions are taken as that user. Because all user sessions are created as the same user, this security model makes it very difficult to achieve data separation for each user. These applications can use the CLIENT_IDENTIFIER attribute to preserve the real application user identity through to the database.
From the Spring/JPA side of things see section 8.2 at the below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in certain ways that aren't easily supported using standard connection properties. One example would be to set certain session properties in the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter explains how to use a ConnectionPreparer to accomplish this. The example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        String webAppUser = ...; // from the Spring Security context or wherever
        CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass
{
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, nothing in section 8.2 (unlike 8.1) appears to be Oracle-specific other than the statement executed, so the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL.
Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't use this approach:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
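For instance, a hypothetical Postgres flavour of the same connection-preparation idea might execute SET ROLE instead; this is a sketch only, not tied to any particular Spring interface, and the role name must be validated elsewhere since SET ROLE does not accept bind parameters:

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical Postgres variant of the connection-preparation idea.
public class PostgresRoleConnectionPreparer {

    public Connection prepare(Connection connection, String webAppRole) throws SQLException {
        try (Statement st = connection.createStatement()) {
            // SET ROLE takes no bind parameters, so webAppRole must be whitelisted/validated first
            st.execute("SET ROLE " + webAppRole);
        }
        return connection;
    }
}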
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (Entity Versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this will all happen within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and get the login ID of the user from the security principal (e.g. the Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
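A rough sketch of an entity set up that way (the entity and field names are made up; requires hibernate-envers on the classpath):

import ...

// Hypothetical entity: with @Audited, Envers records every revision in MyEntity_AUD by default.
@Entity
@Audited
public class MyEntity {

    @Id
    @GeneratedValue
    private Long id;

    // set from the security principal (e.g. the Spring Security user) before saving
    private String updatedBy;

    @CreationTimestamp
    private Date createdAt;

    @UpdateTimestamp
    private Date updatedAt;

    // getters and setters omitted
}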
I think what you are looking for is a TransactionalEvent:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the @EventListener annotation. If you need to bind it to the transaction use @TransactionalEventListener. When you do so, the listener will be bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // publish the event after the insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since we can't honor the required semantics. It is however possible to override that behaviour by setting the fallbackExecution attribute of the annotation to true.
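So, assuming you do need the listener to run even when no transaction is active, the listener above would change to something like:

// runs before commit inside a transaction, and still runs when no transaction exists
@TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT, fallbackExecution = true)
public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
    // same handling as above
}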
Consider the following code snippet. (I am using Spring 3.1 and Hibernate 3.6)
@Override
@Transactional
public <T extends Termination> void progressToPendingStage(Class<T> entity,
        Long terminationId, String userName) throws Exception {
    Termination termination = findTerminationById(entity, terminationId);
    // TODO improvise such that the email does not get sent if the data is not saved
    if (termination.getStatus().equals(TerminationStatus.BEING_PREPARED.toString())) {
        termination.setStatus(TerminationStatus.PENDING.toString());
        termination.setSubmittedDate(new Date());
        termination.setSubmittedBy(userName);
        saveOrUpdateTermination(termination);
        // send an email to SAS
        emailHelper.configureEmailAndSend(termination);
    }
}
Unit tests for the above method indicate that the email will be sent regardless of whether saveOrUpdateTermination(termination) throws an exception. On further testing and some research, I have found that this is the expected behaviour. That is not what the business rules desire: an email should be sent only if the termination record was saved successfully. Any suggestions on how to make this behave in the desired manner? One way I can think of is to make the caller handle the exception thrown by the progressToPendingStage method and, if no exception was thrown, send the email. Am I on the right track, or can we alter the way @Transactional behaves?
I have solved this issue by designing around the problem. Sending an email was never meant to be part of the transaction. I created an object that performs post-save tasks; it catches any exception thrown while saving the termination, and if no exception was thrown it triggers the email. One could also put this in a Spring aspect that runs upon successfully returning from the save, as sketched below.
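A sketch of that aspect-based alternative (the pointcut, package and class names here are hypothetical); the advice only runs after saveOrUpdateTermination returns normally:

import ...

// Hypothetical sketch: send the email only after the save method returns without throwing.
@Aspect
@Component
public class PostSaveEmailAspect {

    @Autowired
    private EmailHelper emailHelper;

    @AfterReturning(
        pointcut = "execution(* com.example.TerminationService.saveOrUpdateTermination(..)) && args(termination)",
        argNames = "termination")
    public void sendEmailAfterSave(Termination termination) {
        // only reached if saveOrUpdateTermination completed normally
        emailHelper.configureEmailAndSend(termination);
    }
}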
Lessons learnt: don't include steps that don't belong in a method marked with @Transactional. If they are included in the transaction, Spring will hold on to the exception and not surface it until the transaction finishes. In short, if a method is annotated with @Transactional, every line in that method may execute even though a statement in the middle of the method ultimately fails.
Context of the problem I want to solve: I have a Java Spring HTTP interceptor, AuditHttpCommunicationInterceptor, that audits communication with an external system. The HttpClient that does the communication is used in a Java service class that does some business logic, called DoBusinessLogicSevice.
The DoBusinessLogicSevice opens a new transaction and, using a couple of collaborators, does loads of stuff.
Problem to solve: regardless of the outcome of any of the operations in DoBusinessLogicSevice (unexpected exceptions, etc.), I want the audits to be stored in the database by AuditHttpCommunicationInterceptor.
Solution I used: The AuditHttpCommunicationInterceptor will open a new transaction this way:
TransactionDefinition transactionDefinition =
        new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_REQUIRES_NEW);
new TransactionTemplate(platformTransactionManager, transactionDefinition)
        .execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // do stuff
            }
        });
Everything works fine. When a part of DoBusinessLogicSevice throws unexpected exception its transaction is rolled back, but the AuditHttpCommunicationInterceptor manages to store the audit in the database.
Problem that arises from this solution: AuditHttpCommunicationInterceptor uses a new DB connection, so for every DoBusinessLogicSevice call I need two DB connections.
Basically, I want to know the solution to this problem: how to make TransactionTemplate "suspend" the current transaction and reuse its connection for the new one in this case.
Any ideas? :)
P.S.
One idea might be to take a different design approach: drop the interceptor and create an AuditingHttpClient that is used in DoBusinessLogicSevice directly (not invoked by spring) but I cannot do that because I cannot access all http fields in there.
Spring supports nested transactions (propagation="NESTED"), but this really depends on the database platform, and I don't believe every database platform is capable of handling nested transactions.
I really don't see what the big deal is with taking a connection from the pool, doing a quick audit transaction, and returning the connection.
Update: While Spring supports nested transactions, it looks like Hibernate doesn't. If that's the case, I say: go with another connection for audit.
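If you do want to experiment with the nested route anyway (and your transaction manager/driver supports JDBC savepoints, e.g. a DataSourceTransactionManager), the audit call from the question would change roughly as below; this is a sketch only, and as noted above it may simply not be supported in a plain Hibernate/JTA setup:

// Hypothetical sketch: nested transaction (savepoint) instead of REQUIRES_NEW, reusing the same connection.
TransactionDefinition nestedDefinition =
        new DefaultTransactionDefinition(TransactionDefinition.PROPAGATION_NESTED);
new TransactionTemplate(platformTransactionManager, nestedDefinition)
        .execute(new TransactionCallbackWithoutResult() {
            @Override
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // store the audit; rolling back here only rolls back to the savepoint,
                // not the outer business transaction
            }
        });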