How to retrieve common values (IpAddress, TenantId) in GenericDao?

We are using the Play! framework for HTTP sessions.
tenantId and ipAddress are columns that are common across multiple tables.
When the user is logged in, we store the tenantId in the HTTP context session.
Whenever we need the user's IP address we use Http.Context.current().request().remoteAddress().
We have a huge set of queries written, and now we want to save and query by tenantId in a generic way.
All the queries go via GenericDao.
Can we use the following in GenericDao to get the tenantId so that we can append it to all queries?
Http.Context.session().get("tenantId");
What would be the best approach to save or retrieve these details?
Thanks.

You don't want your DAO to have to depend on presentation layer things like an HTTP session. I would recommend an abstraction to hide these details.
Create an interface called TenantIdProvider and inject it into your DAO. It would look something like this:
public interface TenantIdProvider
{
    String getTenantId();
}
Then create an implementation called HttpSessionTenantIdProvider.
class HttpSessionTenantIdProvider implements TenantIdProvider
{
    @Override
    public String getTenantId()
    {
        return Http.Context.session().get("tenantId");
    }
}
Now your GenericDao can hold a reference to a TenantIdProvider, and every query that needs the tenantId can obtain it through that provider without any dependency on the Play framework or whatever presentation layer you use.
This really becomes important if you end up having scheduled jobs that run and send notifications or perform some other task using this DAO; if the DAO depended on an HTTP session, that would not be possible. Your job application could simply provide a TenantIdProvider that returns "system" or something like that.
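As a rough sketch of the DAO side, assuming a JPA-based GenericDao and that the entity's simple class name matches its JPQL entity name (the method and field names here are illustrative, not from the question):

import java.util.List;
import javax.persistence.EntityManager;

public class GenericDao {

    private final EntityManager entityManager;
    private final TenantIdProvider tenantIdProvider;

    public GenericDao(EntityManager entityManager, TenantIdProvider tenantIdProvider) {
        this.entityManager = entityManager;
        this.tenantIdProvider = tenantIdProvider;
    }

    // Every query appends the tenant filter through the injected provider,
    // with no dependency on Play or any other presentation-layer class.
    public <T> List<T> findAllForCurrentTenant(Class<T> entityClass) {
        String jpql = "select e from " + entityClass.getSimpleName()
                + " e where e.tenantId = :tenantId";
        return entityManager.createQuery(jpql, entityClass)
                .setParameter("tenantId", tenantIdProvider.getTenantId())
                .getResultList();
    }
}

A scheduled-job module would simply be wired with a different TenantIdProvider implementation (one that returns "system", as suggested above), and the DAO code would not change.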

Related

In Which Layer, Dao or Service, Should I Parse a Rest Client Response? [closed]

I have my own service calling a third-party REST service that returns a text-based response. This text-based response is not a proper service response and needs to be parsed for content as well as errors. For the purposes of discussion, assume the third-party REST service cannot be changed.
Given these circumstances, I am wondering whether I should wire that parsing into the DAO layer or the service layer of my application. I know that the service layer should contain all of your business logic, but I feel like if I don't do the parsing in my DAO layer I am leaking. Is it OK to have logic in the DAO for the purposes of parsing/transformation in this case, or should it be done in the service layer?
Any advice is appreciated.
public class MyDao {

    private RestTemplate restTemplate;
    private ResponseParser responseParser;

    public MyDao(RestTemplate restTemplate, ResponseParser responseParser) {
        this.restTemplate = restTemplate;
        this.responseParser = responseParser;
    }

    public MyResponse sendRequest(MyRequest myRequest) {
        ResponseEntity<String> responseEntity = restTemplate.exchange(...);
        String body = responseEntity.getBody();
        return responseParser.parse(body);
    }
}
OR
public class MyDao {

    private RestTemplate restTemplate;

    public MyDao(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public String sendRequest(MyRequest myRequest) {
        ResponseEntity<String> responseEntity = restTemplate.exchange(...);
        return responseEntity.getBody();
    }
}
public class MyService {

    private MyDao myDao;
    private ResponseParser responseParser;

    public MyService(MyDao myDao, ResponseParser responseParser) {
        this.myDao = myDao;
        this.responseParser = responseParser;
    }

    public MyObject process(MyRequest myRequest) {
        String response = myDao.sendRequest(myRequest);
        return responseParser.parse(response);
    }
}
Here is my take on the design.
DAO is a pattern to abstract persistence operations, and the DAO layer should be kept solely to data-access work.
The DAO pattern abstracts the persistence mechanism and data-access operations for a data source away from the client. The design follows SRP, making a transition to a new persistence type easy: a change of persistence mechanism or data source stays in the DAO layer and does not boil up into the service layer.
The service layer is responsible for handling and computing business operations on your data. It uses a DAO/Repository/Client to fetch the data it needs to operate on.
Taking into consideration the above points, here is what I think of the existing design and how I would do it.
DAO, as chrylis mentioned above, is a data access object, and it should not matter whether the data is fetched from the DB or over HTTP.
The article from Oracle about J2EE pattern reads:
Use a Data Access Object (DAO) to abstract and encapsulate all access to the data source. The DAO manages the connection with the data source to obtain and store data.
It further reads: The data source could be a persistent store like an RDBMS, an external service like a B2B exchange, a repository like an LDAP database, or a business service accessed via CORBA Internet Inter-ORB Protocol (IIOP) or low-level sockets.
Taking these into consideration, I would make the call from the DAO, parse the response, and hand a business object over to the Service.
Taking SRP into consideration, the Service should not be aware of whether the call is made over HTTP, against the DB, or by reading a flat file. All it should know is: once I ask for the data, I get back an object with the required data from the DAO.
If the Service takes care of the parsing, what happens when the data source changes tomorrow and the data lives in your own database? Your DAO changes because it now talks to the DB instead of making an HTTP request, and it can no longer return a String representation. You need a data mapper and will send back some sort of object representation, which means your Service class changes too. So a single change of data source changes not only your DAO code but also boils up to the business layer, which breaks SRP.
Saying this, I have not been developing for long and I am not from a software-engineering background (I had assumed a data access object could only sit in front of a datastore, but chrylis' comment made me read more and think about the difference between a data source and a datastore). I prefer naming such a class Client, e.g. RestClient, to make the call, and I keep my DB operations in a DAO/Repository. The reason is simply readability tomorrow: one look at the class name and it is easy to understand what sort of operation the class is handling.
So, yes, the call and the parsing should happen in the DAO/Client.
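A rough sketch of that layering, reusing the MyRequest/MyResponse/ResponseParser types from the question; the client name and endpoint URL are illustrative, not part of the original post:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

// Data-access side: makes the HTTP call and parses the raw text,
// so the service layer only ever sees a business object.
public class ThirdPartyClient {

    private final RestTemplate restTemplate;
    private final ResponseParser responseParser;

    public ThirdPartyClient(RestTemplate restTemplate, ResponseParser responseParser) {
        this.restTemplate = restTemplate;
        this.responseParser = responseParser;
    }

    public MyResponse fetch(MyRequest request) {
        ResponseEntity<String> entity = restTemplate.exchange(
                "https://thirdparty.example/api",   // endpoint is illustrative
                HttpMethod.POST,
                new HttpEntity<>(request),
                String.class);
        // Parsing happens here; callers never see the raw String.
        return responseParser.parse(entity.getBody());
    }
}

The service then depends only on ThirdPartyClient and MyResponse, so a later switch to a database-backed DAO would not change the service code.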
Strictly speaking, the DAO layer is used to manage information held in a persistence mechanism (database, LDAP, etc.), so when you deal with an external endpoint, putting that functionality in a service is the more widely used approach.
Answering your question, the first option is the better one:
You include the required business logic in the class that knows the format/information returned by the external endpoint.
External classes that use it work with a well-known object instead of a raw string value.
Some kinds of changes in the external endpoint (changes in the response format, for example) can be handled inside your DAO class without affecting the other classes that use it.
My opinion is to put it in the DAO layer, because parsing isn't a business feature. The DAO layer is meant for accessing data from DBs or other third-party systems, so returning the data from the DAO layer already mapped into the right POJO makes good sense to me.

Service method arguments, object identifiers vs object references

I understand that it is probably better to pass objects into a service method, but is this still the case if the caller would first have to look up the object before calling the service? And if so, why?
Example
Let's say I have a RoleService that adds a role to a given user, and that the RoleService is called via a web controller or possibly a REST API. The web controller takes the userId and roleId as input from the web request.
Would I be better off using this service method?
public void addRoleToUser(long userId, long roleId) {
    User user = userRepository.find(userId);
    Role role = roleRepository.find(roleId);
    user.addRole(role);
}
Or this one? The web controller would obviously need to retrieve both objects before calling the service in this case.
public void addRoleToUser(User user, Role role) {
    user.addRole(role);
    userRepository.save(user);
}
Whether called via a web controller or a REST API, the incoming request would only be giving the 2 ID's, so you have to do the find() calls somewhere.
You certainly cannot trust the caller to have up-to-date information about the two objects, and it's a waste to transmit the full objects if you're only going to use the ID's anyway.
It is common to have the service API also be the database transaction boundary (service class or method annotated with @Transactional), so it is best to have the service method do the find() and addRole() calls so that they all execute in a single database transaction.
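A minimal sketch of the first variant with that transaction boundary made explicit; the RoleRepository and constructor injection are assumptions, not from the question:

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class RoleService {

    private final UserRepository userRepository;
    private final RoleRepository roleRepository;

    public RoleService(UserRepository userRepository, RoleRepository roleRepository) {
        this.userRepository = userRepository;
        this.roleRepository = roleRepository;
    }

    // The service method is the transaction boundary: both lookups and the
    // role assignment run in a single database transaction.
    @Transactional
    public void addRoleToUser(long userId, long roleId) {
        User user = userRepository.find(userId);
        Role role = roleRepository.find(roleId);
        user.addRole(role);   // managed entity, flushed on commit
    }
}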

How can I use Hibernate/JPA to tell the DB who the user is before inserts/updates/deletes?

Summary (details below):
I'd like to make a stored proc call before any entities are saved/updated/deleted using a Spring/JPA stack.
Boring details:
We have an Oracle/JPA(Hibernate)/Spring MVC (with Spring Data repos) application that is set up to use triggers to record history of some tables into a set of history tables (one history table per table we want audited). Each of these entities has a modifiedByUser being set via a class that extends EmptyInterceptor on update or insert. When the trigger archives any insert or update, it can easily see who made the change using this column (we're interested in which application user, not database user). The problem is that for deletes, we won't get the last modified information from the SQL that is executed because it's just a plain delete from x where y.
To solve this, we'd like to execute a stored procedure to tell the database which app user is logged in before executing any operation. The audit trigger would then look at this value when a delete happens and use it to record who executed the delete.
Is there any way to intercept the begin transaction or some other way to execute SQL or a stored procedure to tell the db what user is executing the inserts/updates/deletes that are about to happen in the transaction before the rest of the operations happen?
I'm light on details about how the database side will work but can get more if necessary. The gist is that the stored proc will create a context that will hold session variables and the trigger will query that context on delete to get the user ID.
From the database end, there is some discussion on this here:
https://docs.oracle.com/cd/B19306_01/network.102/b14266/apdvprxy.htm#i1010372
Many applications use session pooling to set up a number of sessions
to be reused by multiple application users. Users authenticate
themselves to a middle-tier application, which uses a single identity
to log in to the database and maintains all the user connections. In
this model, application users are users who are authenticated to the
middle tier of an application, but who are not known to the
database.....in these situations, the application typically connects
as a single database user and all actions are taken as that user.
Because all user sessions are created as the same user, this security
model makes it very difficult to achieve data separation for each
user. These applications can use the CLIENT_IDENTIFIER attribute to
preserve the real application user identity through to the database.
From the Spring/JPA side of things see section 8.2 at the below:
http://docs.spring.io/spring-data/jdbc/docs/current/reference/html/orcl.connection.html
There are times when you want to prepare the database connection in
certain ways that aren't easily supported using standard connection
properties. One example would be to set certain session properties in
the SYS_CONTEXT like MODULE or CLIENT_IDENTIFIER. This chapter
explains how to use a ConnectionPreparer to accomplish this. The
example will set the CLIENT_IDENTIFIER.
The example given in the Spring docs uses XML config. If you are using Java config then it looks like:
@Component
@Aspect
public class ClientIdentifierConnectionPreparer implements ConnectionPreparer
{
    @AfterReturning(pointcut = "execution(* *.getConnection(..))", returning = "connection")
    public Connection prepare(Connection connection) throws SQLException
    {
        String webAppUser = "..."; // from the Spring Security context, or wherever
        CallableStatement cs = connection.prepareCall(
                "{ call DBMS_SESSION.SET_IDENTIFIER(?) }");
        cs.setString(1, webAppUser);
        cs.execute();
        cs.close();
        return connection;
    }
}
Enable AspectJ via a Configuration class:
@Configuration
@EnableAspectJAutoProxy
public class SomeConfigurationClass
{
}
Note that while this is hidden away in a section specific to Spring's Oracle extensions, nothing in section 8.2 (unlike 8.1) appears to be Oracle-specific other than the statement executed, so the general approach should be feasible with any database simply by specifying the relevant procedure call or SQL.
Postgres, for example, has SET ROLE, so I don't see why anyone using Postgres couldn't take the same approach:
https://www.postgresql.org/docs/8.4/static/sql-set-role.html
Unless your stored procedure does more than what you described, the cleaner solution is to use Envers (entity versioning). Hibernate can automatically store the versions of an entity in a separate table and keep track of all the CRUD operations for you, and you don't have to worry about failed transactions since this all happens within the same session.
As for keeping track of who made the change, add a new column (updatedBy) and get the login ID of the user from the security principal (e.g. the Spring Security User).
Also check out @CreationTimestamp and @UpdateTimestamp.
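A rough sketch of what that could look like, assuming a hypothetical Customer entity and that the current user comes from Spring Security (none of these names are from the question):

import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;
import org.hibernate.envers.Audited;
import org.springframework.security.core.context.SecurityContextHolder;

@Entity
@Audited                    // Envers records every change in a separate audit table
public class Customer {

    @Id
    private Long id;

    private String name;

    private String updatedBy;   // who made the change, available to the audit rows

    @PrePersist
    @PreUpdate
    private void stampUser() {
        // Illustrative: pull the application user from the Spring Security context
        updatedBy = SecurityContextHolder.getContext().getAuthentication().getName();
    }
}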
I think what you are looking for is a TransactionalEvent:
@Service
public class TransactionalListenerService {

    @Autowired
    SessionFactory sessionFactory;

    @TransactionalEventListener(phase = TransactionPhase.BEFORE_COMMIT)
    public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
        // use sessionFactory to run a stored procedure
    }
}
Registering a regular event listener is done via the @EventListener
annotation. If you need to bind it to the transaction use
@TransactionalEventListener. When you do so, the listener will be
bound to the commit phase of the transaction by default.
Then in your transactional services you register the event where necessary:
@Service
public class MyTransactionalService {

    @Autowired
    private ApplicationEventPublisher applicationEventPublisher;

    @Transactional
    public void insertEntityMethod(Entity entity) {
        // insert
        // Publish event after the insert operation
        applicationEventPublisher.publishEvent(new CreationEvent(this, entity));
        // more processing
    }
}
This can also work outside the boundaries of a transaction if you have that requirement:
If no transaction is running, the listener is not invoked at all since
we can’t honor the required semantics. It is however possible to
override that behaviour by setting the fallbackExecution attribute of
the annotation to true.
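For example, a minimal variation of the listener above with the fallback enabled:

// Runs even when no transaction is active, instead of being skipped.
@TransactionalEventListener(fallbackExecution = true)
public void handleEntityCreationEvent(CreationEvent<Entity> creationEvent) {
    // ...
}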

Jersey rest framework - authorization - some doubts

I read about the Jersey framework for REST services on this page: http://howtodoinjava.com/jersey/jersey-restful-client-api-authentication-example/
And I don't understand one thing.
For instance, when we have
@Path("/users")
public class JerseyService
{
    @RolesAllowed("USER")
    public String doLogin(@QueryParam("username") String uname,
                          @QueryParam("password") String result)
    {
        // ...
    }
}
Does it mean that a user with the role USER can modify (via this method) ALL the users, and not only himself, in the database? I am writing an Android app and I can imagine a situation where someone uses, for instance, the Advanced REST Client, logs in to the service, modifies the queries in an appropriate way, and seriously messes up my database: for instance, writes some points to another user or something similar. How can I prevent this?
Jersey (and, similarly, Spring Security) operates on resource types and roles.
So, if you permit the role "USER" to operate on the resource "Users", you can't block a specific user from editing other users with Jersey alone.
What you can do is use the SecurityContext to get the current user and block dangerous operations if their credentials differ from those of the user being changed.
Here's a good example on SecurityContext:
https://simplapi.wordpress.com/2015/09/19/jersey-jax-rs-securitycontext-in-action/
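A minimal sketch of that check, assuming the username is also the identifier of the record being changed (the resource method and parameter names are illustrative):

import javax.annotation.security.RolesAllowed;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.SecurityContext;

@Path("/users")
public class UserResource
{
    @POST
    @Path("/update")
    @RolesAllowed("USER")
    public String updateUser(@QueryParam("username") String uname,
                             @Context SecurityContext securityContext)
    {
        // Only let a plain USER modify his own record; an ADMIN may modify anyone.
        String caller = securityContext.getUserPrincipal().getName();
        if (!caller.equals(uname) && !securityContext.isUserInRole("ADMIN")) {
            throw new WebApplicationException(Response.Status.FORBIDDEN);
        }
        // ... perform the update for 'uname'
        return "updated";
    }
}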

Spring MVC Domain Object handling Best Practice

Let's assume a simple Spring MVC controller that receives the ID of a domain object. The controller should call a service that does something with that domain object.
Where do you "convert" the ID of the domain object into the domain object itself by loading it from the database? This should not be done by the controller, so the service method interface has to accept the ID of the domain object instead of the domain object itself. But the interface of the service would be nicer if it took the domain object as a parameter.
What are your thoughts about this common use case? How do you solve this?
The controller should pass the id down into the service layer and then get back whatever is needed to render the rest of the HTTP response.
So -
Map<String, Object> doGet(@RequestParam("id") int id) {
    return serviceLayer.getStuffByDomainObjectId(id);
}
Anything else is just going to be polluting the web layer, which shouldn't care at all about persistence. The entire purpose of the service layer is to get domain objects and tell them to perform their business logic. So, a database call should reside in the service layer as such -
public Map<String, Object> getStuffByDomainObjectId(int id) {
    DomainObject domainObject = dao.getDomainObjectById(id);
    domainObject.businessLogicMethod();
    return domainObject.map();
}
In a project of mine I used the ID at the service layer:
interface ProductService {
    void removeById(long id);
}
I think this would depend on whether the service is remote or local. As a rule I try to pass IDs where possible to remote services but prefer objects for local ones.
The reasoning behind this is that it reduces network traffic by sending only what is absolutely necessary to remote services, and it prevents multiple calls to DAOs for local services (although with Hibernate caching this might be a moot point for local services).
