Creating an application scoped bean in spring MVC - java

Good day Everyone,
I want to explain my current legacy application before I ask my question.
I have a servlet in Tomcat in which I load a non-changing database table into memory in the init() method using Hibernate. Because this is done in init(), it runs only once and the data is available across all subsequent requests to the servlet. This improves application performance because there are fewer round trips to the database.
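A minimal sketch of that legacy setup, assuming a classic Hibernate 3 bootstrap; the class and entity names here are just placeholders:
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class ReferenceDataServlet extends HttpServlet {

    // loaded once in init() and reused by every subsequent request
    private List<DomainObject> referenceData;

    @Override
    public void init() throws ServletException {
        // hypothetical bootstrap from hibernate.cfg.xml
        SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
        Session session = sessionFactory.openSession();
        try {
            referenceData = session.createQuery("from DomainObject").list();
        } finally {
            session.close();
        }
    }
}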
I have recently started to use Spring 3 and I want to move this setup to Spring (the servlet class is now a controller). My challenge is: how do I create the ArrayList of domain objects (as I do in init()) at Spring load time for efficiency, and have it available across all calls to the controller class without accessing the database every time a request comes in? If this is not possible, what options do I have?
Any help would be much appreciated.

Pop that static data into a RequestInterceptor:
public class RequestInterceptor extends HandlerInterceptorAdapter {

    @Override
    public void postHandle(HttpServletRequest request,
                           HttpServletResponse response,
                           Object handler,
                           ModelAndView modelAndView) throws Exception {
        // ...
        modelAndView.addObject("variableName", dataIWantToHaveAvailableAllOverThePlace);
        // ...
        super.postHandle(request, response, handler, modelAndView);
    }
}
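Note that the interceptor also has to be registered with Spring MVC. A sketch assuming Spring 3.1+ Java config; with XML config you would declare it under <mvc:interceptors> instead:
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

@Configuration
@EnableWebMvc
public class WebConfig extends WebMvcConfigurerAdapter {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // apply the RequestInterceptor to every handler mapping
        registry.addInterceptor(new RequestInterceptor());
    }
}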

how do I create the ArrayList of domain objects (as I do in init()) at Spring load time for efficiency, and have it available across all calls to the controller class without accessing the database every time a request comes in? If this is not possible, what options do I have?
In your scenario I would design this almost identically to how I would if the data were constantly changing and had to be read from the database on each request:
The controller is wired up with an instance of the MyService interface which has operations for retrieving the data in question.
Optionally, depending on if you separate your DAO layer from your service layer, the MyService implementation is wired up with a MyDAO bean.
The MyService implementation implements InitializingBean, and in the afterPropertiesSet() method you retrieve the one-time-load data from the database.
With this design, your controller does not know where its data is coming from, just that it asks a MyService implementation for the data. The data is loaded from the database when the MyService-implementing bean is first created by the Spring container.
This allows you to easily change the design to load the data on each request (or to expire the data at certain times, etc) by swapping in a different implementation of MyService.
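A minimal sketch of that arrangement, with hypothetical names (MyService, MyServiceImpl, MyDAO, DomainObject; MyDAO.loadAll() stands in for whatever query loads the table) and setter injection just for illustration:
import java.util.List;
import org.springframework.beans.factory.InitializingBean;

public interface MyService {
    List<DomainObject> getReferenceData();
}

public class MyServiceImpl implements MyService, InitializingBean {

    private MyDAO myDAO;
    private List<DomainObject> referenceData;

    public void setMyDAO(MyDAO myDAO) {
        this.myDAO = myDAO;
    }

    @Override
    public void afterPropertiesSet() throws Exception {
        // one-time load, performed when the Spring container creates this bean
        referenceData = myDAO.loadAll();
    }

    @Override
    public List<DomainObject> getReferenceData() {
        return referenceData;
    }
}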

Related

Difference between javax.security.enterprise.SecurityContext and javax.ws.rs.core.SecurityContext?

I am struggling to understand when and how to use the different interfaces.
They seem to be quite similar, with some minor differences in method names for dynamically checking security roles or retrieving the Principal, but, as far as I currently understand, they are only accessible in their specific contexts.
I am trying to implement fine grained authorization with specific requirements.
Mainly the roles are not stored in the tokens, but must be read from a table in the database.
Therefore I have an implementation of IdentityStore that provides a CallerPrincipal with all available roles.
The IdentityStore is used by my HttpAuthenticationMechanism implementation, which is fairly simple: for valid requests, all it does is call HttpMessageContext.notifyContainerAboutLogin to push the CallerPrincipal into the SecurityContext, as far as I know.
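Roughly, the mechanism looks something like this (a sketch; the class name and the token extraction are hypothetical, only the notifyContainerAboutLogin call reflects the actual setup):
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import javax.security.enterprise.AuthenticationStatus;
import javax.security.enterprise.authentication.mechanism.http.HttpAuthenticationMechanism;
import javax.security.enterprise.authentication.mechanism.http.HttpMessageContext;
import javax.security.enterprise.credential.Credential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStoreHandler;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@ApplicationScoped
public class TokenAuthenticationMechanism implements HttpAuthenticationMechanism {

    @Inject
    private IdentityStoreHandler identityStoreHandler; // delegates to the custom IdentityStore

    @Override
    public AuthenticationStatus validateRequest(HttpServletRequest request,
            HttpServletResponse response, HttpMessageContext httpMessageContext) {
        Credential credential = extractCredential(request); // hypothetical token parsing
        if (credential != null) {
            CredentialValidationResult result = identityStoreHandler.validate(credential);
            if (result.getStatus() == CredentialValidationResult.Status.VALID) {
                // pushes the CallerPrincipal and its groups into the container's security context
                return httpMessageContext.notifyContainerAboutLogin(result);
            }
        }
        return httpMessageContext.doNothing();
    }

    private Credential extractCredential(HttpServletRequest request) {
        // hypothetical: build a Credential from the incoming token/header
        return null;
    }
}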
Because there are a lot of generic endpoints in the codebase whose path parameters decide which role has to be checked, I need a generic way of checking whether the user is in a role depending on the value of some path segments of the requested URI.
I created a method interceptor for that, where I want to access the SecurityContext, but both interfaces have their problems here:
@Interceptor
public class RolesAllowedInterceptor {

    @Context
    private UriInfo uriInfo;

    // this injection is always null
    @Context
    private javax.security.enterprise.SecurityContext securityContext;

    // this injection works
    @Context
    private javax.ws.rs.core.SecurityContext jaxRsSecurityContext;

    @AroundInvoke
    public Object validate(InvocationContext ctx) throws Exception {
        // ... read the path param to determine the role and check SecurityContext.isUserInRole()
        return ctx.proceed();
    }
}
1. The injection of javax.security.enterprise.SecurityContext does not work. I assume the reason for this is that the interceptor is called in a JAX-RS context.
2. The injection of javax.ws.rs.core.SecurityContext works (my assumption in 1 is based on this). But when SecurityContext.isUserInRole(String) is called, the debugger shows that the Principal does not have any of the groups (roles in my business context) that were assigned via my IdentityStore implementation, and thus the validation incorrectly fails.
I am currently using another approach with ContainerRequestFilter to set the javax.ws.rs.core.SecurityContext explicitly, which is working fine for the interceptor, but not with the javax.annotation.security.RolesAllowed annotation. For that I shifted the invocation of my IdentityStore into the filter, because I obviously do not want to call it twice.
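The filter-based workaround looks roughly like this (a sketch; the helper methods stand in for the real token/database lookup):
import java.security.Principal;
import java.util.Collections;
import java.util.Set;
import javax.annotation.Priority;
import javax.ws.rs.Priorities;
import javax.ws.rs.container.ContainerRequestContext;
import javax.ws.rs.container.ContainerRequestFilter;
import javax.ws.rs.core.SecurityContext;
import javax.ws.rs.ext.Provider;

@Provider
@Priority(Priorities.AUTHENTICATION)
public class SecurityContextFilter implements ContainerRequestFilter {

    @Override
    public void filter(ContainerRequestContext requestContext) {
        // hypothetical: this is where the caller and its database roles are resolved
        final Principal caller = resolveCaller(requestContext);
        final Set<String> roles = resolveRoles(caller);
        final SecurityContext original = requestContext.getSecurityContext();

        requestContext.setSecurityContext(new SecurityContext() {
            @Override
            public Principal getUserPrincipal() {
                return caller;
            }

            @Override
            public boolean isUserInRole(String role) {
                return roles.contains(role);
            }

            @Override
            public boolean isSecure() {
                return original.isSecure();
            }

            @Override
            public String getAuthenticationScheme() {
                return original.getAuthenticationScheme();
            }
        });
    }

    // hypothetical helpers, standing in for the real token parsing / role query
    private Principal resolveCaller(ContainerRequestContext ctx) {
        return ctx.getSecurityContext().getUserPrincipal();
    }

    private Set<String> resolveRoles(Principal caller) {
        return Collections.emptySet();
    }
}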
I am not looking for complete code examples/solutions.
I am merely trying to understand why there are different interfaces of SecurityContext, as the Java Docs do not elaborate on that.
And therefore hopefully understand how I can use RolesAllowed for static endpoints and my interceptor for generic endpoints, without the need for a ContainerRequestFilter to set the SecurityContext for the latter.
--
For context: I am using Payara Micro and jakartaee-api:8.0.0

In Which Layer, Dao or Service, Should I Parse a Rest Client Response? [closed]

I have my own service calling a third-party REST service that returns a text-based response. This text-based response is not a proper service response and needs to be parsed for content as well as errors. For purposes of discussion, assume the third-party REST service cannot be changed.
Given these circumstances, I am wondering whether I should wire that parsing into the DAO layer or the service layer of my application. I know that the service layer should contain all of the business logic, but I feel like if I don't do the parsing in my DAO layer, that layer's details are leaking out. Is it OK to have parsing/transformation logic in the DAO in this case, or should it be done in the service layer?
Any advice is appreciated.
public class MyDao {

    private RestTemplate restTemplate;
    private ResponseParser responseParser;

    public MyDao(RestTemplate restTemplate, ResponseParser responseParser) {
        this.restTemplate = restTemplate;
        this.responseParser = responseParser;
    }

    public MyResponse sendRequest(MyRequest myRequest) {
        ResponseEntity<String> responseEntity = restTemplate.exchange(...);
        String body = responseEntity.getBody();
        return responseParser.parse(body);
    }
}
OR
public class MyDao {

    private RestTemplate restTemplate;

    public MyDao(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    public String sendRequest(MyRequest myRequest) {
        ResponseEntity<String> responseEntity = restTemplate.exchange(...);
        return responseEntity.getBody();
    }
}
public class MyService {

    private MyDao myDao;
    private ResponseParser responseParser;

    public MyService(MyDao myDao, ResponseParser responseParser) {
        this.myDao = myDao;
        this.responseParser = responseParser;
    }

    public MyObject process(MyRequest myRequest) {
        String response = myDao.sendRequest(myRequest);
        return responseParser.parse(response);
    }
}
Here is my take on the design.
DAO is a pattern to abstract persistence operations and should be kept solely for working with persistence operations.
The DAO pattern helps to abstract the persistence or data-access operations for a data source away from the client, and the design follows the SRP, making the transition to a new persistence type easy. A change of your persistence mechanism or data source stays in the DAO layer and does not bubble up to the service layer.
The service layer is responsible for handling business operations on your data. It uses a DAO/repository/client to fetch the data it needs to operate on.
Taking into consideration the above points, here is what I think of the existing design and how I would do it.
A DAO, as chrylis mentioned above, is a data access object, and it should not matter whether the data is fetched from the DB or over HTTP.
The Oracle article on the Core J2EE Patterns reads:
Use a Data Access Object (DAO) to abstract and encapsulate all access to the data source. The DAO manages the connection with the data source to obtain and store data.
It further reads: The data source could be a persistent store like an RDBMS, an external service like a B2B exchange, a repository like an LDAP database, or a business service accessed via CORBA Internet Inter-ORB Protocol (IIOP) or low-level sockets.
Taking these into consideration, I would make the call from the DAO, parse the response, and send a business object over to the service.
Taking the SRP into consideration, the service should not be aware of whether the call was made over HTTP, was a DB call, or read from a flat file. All it should know is: once I query for the data, I get back an object with the required data from the DAO.
If the service takes care of the parsing, what happens when the data source changes tomorrow and you have the data in your own database? Now your DAO changes because it talks to the DB instead of making an HTTP request. You cannot return a String representation anymore; you need a data mapper and will send back some sort of object representation, which means your service class changes too. So one change of data source not only changes your DAO code but also bubbles up to the business layer, which breaks the SRP.
Having said that, I have not been developing for long and am not from a software engineering background (I had the understanding that a data access object could only sit in front of a datastore, but chrylis' comment made me read more and think about the difference between a data source and a datastore). I personally prefer naming such a class a client, e.g. RestClient, letting it make the call, and keeping my DB operations in a DAO/repository. The reason is that it is simply easier to read later: one look at the class name makes it easy to understand what it is doing or what sort of operations the class is handling.
So, yes, the call and the parsing should happen in the DAO/client.
Strictly speaking, the DAO layer is used to manage information held in a persistence mechanism such as a database, LDAP, etc., so when you deal with an external endpoint, including that functionality in a service is the more widely used approach.
That said, answering your question, the first option is the better one:
You are including the required business logic in the class that knows the format/information returned by the external endpoint.
External classes that use the above one will manage a well-known object (instead of a raw string value).
Some kinds of changes in the external endpoint (changes in the response format, for example) can be managed in your DAO class without affecting the other classes that use it.
My opinion is to put it in the DAO layer, because parsing isn't a business feature. Also, the DAO layer is meant for accessing data from DBs or other third-party systems, so returning the data in the right POJO format from the DAO layer makes good sense in my opinion.

Spring run method before starting endpoints

Is it possible to call some method while Spring is being initialized, after the database connection has been established but before the @RestController endpoints are started (available to receive requests)?
I need to send some database requests (using JpaRepository) before REST endpoints are ready.
I tried to find a similar post but I wasn't able to. I found the annotation @PostConstruct and the interfaces CommandLineRunner and ApplicationListener<ContextRefreshedEvent>, but I think all of them are called after the endpoints are started? Or am I wrong?
@PostConstruct is called after a bean is completely constructed but before it is "put into service", which, in the case of a controller, means before it starts serving requests. (In the case of a service bean, this would mean before it is wired into any other beans.)
Note that it's best to use constructor injection to provide dependencies to your bean, but it may still be sensible to do database queries in @PostConstruct to avoid heavy operations in an actual constructor.
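A small sketch of that approach, assuming Spring with Spring Data JPA; SomeRepository here is a hypothetical JpaRepository for whatever entity needs to be queried:
import javax.annotation.PostConstruct;
import org.springframework.stereotype.Service;

@Service
public class StartupDataLoader {

    private final SomeRepository someRepository; // hypothetical JpaRepository

    public StartupDataLoader(SomeRepository someRepository) {
        this.someRepository = someRepository; // constructor injection for dependencies
    }

    @PostConstruct
    public void loadInitialData() {
        // runs once the bean is constructed and its dependencies are wired,
        // i.e. before the endpoints start accepting requests
        long rows = someRepository.count();
        // ... use the result to warm caches, seed data, validate state, etc.
    }
}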

Spring MVC 3.x Setting Global Data

I have an internationalization module, and the application runs in two different modes. To change the mode, we need to restart the Tomcat server. Mode 1 supports two languages and mode 2 supports five languages. The languages are stored in a .json file.
Every time the user hits index.html, in the @RequestMapping handler for this page I check the application mode. Based on this mode I read the correct .json file, extract the list of languages, set that in the model, and then return the page to the client.
The problem with this approach is that every time index.html is hit, the application reads the file from disk, which is not only unnecessary but also time consuming, and it rings an annoying bell to my developer ego.
What I'd like to have instead is that when the application boots up, I already know the application mode.
How can I get Spring MVC to read the file at startup and keep this data for as long as the server is running? Is that even possible?
If yes, can you let me know which parts of Spring MVC I need to look into?
I read about HandlerInterceptor and @ModelAttribute, but that merely explains how to insert the data into each request. What I really want to know is how to read the data from the file once and keep it.
One approach could be to have a bean which implements InitializingBean and loads the file in its afterPropertiesSet() method. It would also have a method to return the list of languages, and it could be wired into all other beans which need it (a sketch of this appears after the interceptor example below).
You could also do it in a HandlerInterceptor: just have it implement InitializingBean and store the list in a class field.
e.g.
public class MyInterceptor extends HandlerInterceptorAdapter implements InitializingBean {

    private List<String> languageList;

    @Override
    public void postHandle(HttpServletRequest request,
                           HttpServletResponse response,
                           Object handler,
                           ModelAndView modelAndView) throws Exception {
        // set the list in the model
    }

    @Override
    public void afterPropertiesSet() {
        languageList = ...; // read the file
    }
}
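And a sketch of the first approach, a dedicated bean that reads the file once at startup and is wired into whatever needs the list (the file-reading details are hypothetical):
import java.util.Collections;
import java.util.List;
import org.springframework.beans.factory.InitializingBean;

public class LanguageHolder implements InitializingBean {

    private List<String> languages;

    @Override
    public void afterPropertiesSet() throws Exception {
        // read the mode-specific .json file exactly once, at container startup
        languages = readLanguagesFromJsonFile();
    }

    public List<String> getLanguages() {
        return languages;
    }

    private List<String> readLanguagesFromJsonFile() {
        // hypothetical: detect the application mode and parse the matching .json file
        return Collections.emptyList();
    }
}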

Spring MVC Domain Object handling Best Practice

Let's assume a simple Spring MVC controller that receives the ID of a domain object. The controller should call a service that does something with that domain object.
Where do you "convert" the ID of the domain object into the domain object by loading it from the database? This should not be done by the controller, so the service method has to accept the ID of the domain object instead of the domain object itself. But the interface of the service would be nicer if it took the domain object as a parameter.
What are your thoughts about this common use case? How do you solve this?
The controller should pass the id down into the service layer and then get back whatever is needed to render the rest of the HTTP response.
So -
public Map<String, Object> doGet(@RequestParam("id") int id) {
    return serviceLayer.getStuffByDomainObjectId(id);
}
Anything else is just going to be polluting the web layer, which shouldn't care at all about persistence. The entire purpose of the service layer is to get domain objects and tell them to perform their business logic. So, a database call should reside in the service layer as such -
public Map<String, Object> getStuffByDomainObjectId(int id) {
    DomainObject domainObject = dao.getDomainObjectById(id);
    domainObject.businessLogicMethod();
    return domainObject.map();
}
In a project of mine I used the service layer:
public interface ProductService {
    void removeById(long id);
}
I think this would depend on whether the service is remote or local. As a rule I try to pass IDs where possible to remote services but prefer objects for local ones.
The reasoning behind this is that it reduces network traffic by sending only what is absolutely necessary to remote services, and it prevents multiple calls to DAOs for local services (although with Hibernate caching this might be a moot point for local services).
