I am working on a project involving Spring and JPA/Hibernate. The database driver used in my development environment is H2. My application has a page that displays statistics; one example of such a statistic is the average age of my users. However, when I try to fetch the average age using JPQL, I receive an exception:
Result must not be null!
Assume, for simplicity, that I store age as an integer on every User object (in my real application this is of course not the case, but that's not important for my problem).
User model
@Entity
public class User implements Identifiable<Long> {
    private int age;
    // more fields and methods, irrelevant
}
User repository
@Repository
public interface UserRepository extends CrudRepository<User, Long> {

    @Query("SELECT AVG(u.age) FROM #{#entityName} u")
    long averageAge();
}
I cannot seem to figure out why calling UserRepository#averageAge() throws the exception. I have tried replacing AVG with COUNT in the query, and that behaves as expected. I have also tried using an SQL query with nativeQuery = true in the annotation, yet to no avail. I could of course solve it by fetching all the users and calculating the average age in plain Java, but that wouldn't be very efficient.
Stacktrace:
Caused by: org.springframework.dao.EmptyResultDataAccessException: Result must not be null!
at org.springframework.data.repository.core.support.MethodInvocationValidator.invoke(MethodInvocationValidator.java:102)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212)
at com.sun.proxy.$Proxy150.averageAge(Unknown Source)
at my.test.application.StatisticsRunner.run(StatisticsRunner.java:72)
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:809)
... 30 more
Solved
The exception was caused by the fact that AVG() returns null when performed on an empty table. I fixed it by modifying the query (inspired by the answer to this question) as follows:
#Query("SELECT coalesce(AVG(u.age), 0) FROM #{#entityName} u")
long averageAge();
If you use Spring Data, and if your method returns null when Hibernate can't find a match, make sure you add @org.springframework.lang.Nullable to your method signature:
public interface SomeRepositoryCustom {

    @org.springframework.lang.Nullable
    Thing findOneThingByAttr(Attribute attr); // your logic goes in the implementation
}
This is because Spring Data checks the nullability of your method, and if the annotation is missing, it will enforce that you always return a non-null object:
/* org.springframework.data.repository.core.support.MethodInvocationValidator */
@Nullable
@Override
public Object invoke(@SuppressWarnings("null") MethodInvocation invocation) throws Throwable {
    /* ...snip... */
    if (result == null && !nullability.isNullableReturn()) {
        throw new EmptyResultDataAccessException("Result must not be null!", 1);
    }
    /* ...snip... */
}
I used Spring Boot version 2.1.1.RELEASE and Spring Data 2.1.4.RELEASE.
It seems that EmptyResultDataAccessException is thrown when a query result was expected to contain at least one row (or element) but none was returned.
Related documentation about this can be found here.
I would suggest running the same query directly against the database to further validate this theory. The remaining question is what to do about it.
You have two options: either catch the EmptyResultDataAccessException at your calling point and handle it directly there, or have an ExceptionHandler tasked with handling such exceptions.
Both ways of handling this are fine; choose between them depending on your scenario.
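For illustration, the ExceptionHandler option could look roughly like this (a minimal sketch assuming Spring MVC; the advice class name and the response message are mine, not from your code):

import org.springframework.dao.EmptyResultDataAccessException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

// Hypothetical advice class; applies to all controllers of the application.
@RestControllerAdvice
public class EmptyResultExceptionHandler {

    // Translate the "no rows found" case into a 404 instead of a 500.
    @ExceptionHandler(EmptyResultDataAccessException.class)
    public ResponseEntity<String> handleEmptyResult(EmptyResultDataAccessException ex) {
        return ResponseEntity.status(HttpStatus.NOT_FOUND).body("No matching data was found");
    }
}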
I am not completely sure, but I think the problem is the return type long. Maybe you should use the Long wrapper instead: long is a primitive and cannot hold null. Try changing it to:
#Query("SELECT AVG(u.age) FROM #{#entityName} u")
Long averageAge();
Related
Spring Data JPA findOne() change to Optional how to use this?
So I'm going through various resources trying to learn Spring Boot and Rest API and I've encountered the same problem in several different tutorials and textbooks. It seems to stem from the CrudRepository interface and more specifically the JpaRepository.findById() method.
Every tutorial I've read has something to the effect of:
@GetMapping("/{id}")
public ResponseEntity<UserDTO> getUserById(@PathVariable("id") final Long id) {
    UserDTO user = userJpaRepository.findById(id);
    if (user == null) {
        return new ResponseEntity<UserDTO>(
                new CustomErrorType("User with id " + id + " not found"),
                HttpStatus.NOT_FOUND);
    }
    return new ResponseEntity<UserDTO>(user, HttpStatus.OK);
}
However, the UserDTO user = userJpaRepository.findById(id); won't compile.
I figured out that if I change it to UserDTO user = userJpaRepository.findById(id).get(); it compiles, runs, and the GET is successful. The problem is that if the user ID isn't found, the GET request doesn't return null; instead I get a 500 internal server error.
The tooltips and suggestions from my IDE corrected the code to
@GetMapping("/{id}")
public ResponseEntity<UserDTO> getUserById(@PathVariable("id") final Long id) {
    Optional<UserDTO> user = userJpaRepository.findById(id);
    if (user == null) {
        return new ResponseEntity<UserDTO>(
                new CustomErrorType("User with id " + id + " not found"),
                HttpStatus.NOT_FOUND);
    }
    return new ResponseEntity<UserDTO>(HttpStatus.OK);
}
This works just as it should for the GET request and error handling. Why do all the tutorials show it the first way?
Could someone explain to me what is going on here?
The JpaRepository.getById method can retrieve a database record by id.
This method is pre-defined, just like the findAll methods.
The mentioned findById method is inherited from the ancestor interface CrudRepository. It returns an Optional<T> since Spring Data migrated to Java 8 (i.e. since Spring Data JPA version 2.0).
See more:
Spring Data JPA findOne() change to Optional how to use this?
Implementing JpaRepository's find-methods
However, if you extend JpaRepository and add a new findByName method, it relies on JPA's query language (JPQL) and implicitly issues a prepared statement like SELECT * FROM table WHERE name = ?. The WHERE clause's predicates, such as name = ?, are derived from the method name, i.e. from everything after By.
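For example, such a derived query method could be declared like this (a hypothetical UserRepository with a name field, just to illustrate the naming convention):

import java.util.Optional;

import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {

    // Derived query: Spring Data parses everything after "findBy" and
    // generates "... where name = ?" for you; no @Query annotation is needed.
    Optional<User> findByName(String name);
}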
Rationale behind the Optional return
Since the migration to Java 8 (which introduced the Optional type), the find methods return Optional<T> by default.
This is because a search may or may not find a match. If nothing is found, it is safer to return an empty Optional than to return null or throw an exception.
The main benefits and intention behind Optional returns are:
signal possibly-empty results in the method signature (instead of an implicit null as before)
force the API user/developer to deal with empty returns
avoid accidental NullPointerExceptions (NPEs)
Update on how to deal with empty results:
Handling NOT FOUND in REST controllers
A RESTful way of responding in cases of empty or not-found results on CRUD resources is:
to return HTTP status 404 NOT FOUND with some customized/descriptive error-message.
This "unhappy path" can easily be achieved by throwing Spring's ResponseStatusException like described in Baeldung's REST exception-handling tutorial like this:
@GetMapping("/{id}")
public UserDTO getUserById(@PathVariable("id") final Long id) {
    Optional<UserDTO> user = userJpaRepository.findById(id);
    if (user.isEmpty()) {
        throw new ResponseStatusException(HttpStatus.NOT_FOUND, String.format("User with id %d not found", id));
    }
    return user.get();
}
What was simplified:
throwing ResponseStatusException leads to an exceptional return and signals Spring to respond with the specified status (404) and body (message).
controller methods can simply return the type UserDTO (without wrapping it in ResponseEntity) because Spring will convert it to the response representation
for @GetMapping the return value gets HttpStatus.OK assigned by default
Further, as Mauricio Gracia Gutierrez's answer explains idiomatic Optional handling, the method body can be simplified to a one-liner:
return userJpaRepository.findById(id)
.orElseThrow(() -> new ResponseStatusException(HttpStatus.NOT_FOUND, String.format("User with id %d not found", id)));
Some hints on how to use Optional with Optional<T> findById(ID id).
Generally, when you look up an entity by id, you either want to return it or apply some particular processing if it is not found.
Here are three classical usage examples.
Suppose that if the entity is found you want to get it, and otherwise you want a default value.
You could write:
Foo foo = repository.findById(id)
.orElse(new Foo());
or get null as the default value if that makes sense (the same behavior as before the API change):
Foo foo = repository.findById(id)
.orElse(null);
Suppose that if the entity is found you want to return it, else you want to throw an exception.
You could write:
return repository.findById(id)
.orElseThrow(() -> new EntityNotFoundException(id));
Suppose you want to apply different processing depending on whether the entity is found or not (without necessarily throwing an exception).
You could write:
Optional<Foo> fooOptional = fooRepository.findById(id);
if (fooOptional.isPresent()) {
Foo foo = fooOptional.get();
// processing with foo ...
} else {
// alternative processing....
}
Extract taken from
Spring Data JPA findOne() change to Optional how to use this?
Basically, I am trying to understand how to write correct transactional code when developing a REST service with JAX-RS and Spring. We are also using jOOQ for data access, but that shouldn't be very relevant...
Consider a simple model where we have organizations with these fields: "id", "name", "code", all of which must be unique. There is also a status field.
An organization might be removed at some point, but we don't want to delete the data altogether, because we want to keep it for analytical/maintenance purposes. So we just set the organization's 'status' field to 'REMOVED'.
Because we don't delete the organization row from the table, we can't simply put a unique constraint on the "name" column: we might remove an organization and then create a new one with the same name. But let's assume that codes have to be unique globally, so we DO have a unique constraint on the code column.
With that in mind, let's look at this simple example that creates an organization, performing some checks along the way.
Resource:
@Component
@Path("/api/organizations/{organizationId: [0-9]+}")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaTypeEx.APPLICATION_JSON_UTF_8)
public class OrganizationResource {

    @Autowired
    private OrganizationService organizationService;

    @Autowired
    private DtoConverter dtoConverter;

    @POST
    public OrganizationResponse createOrganization(@Auth Person person, CreateOrganizationRequest request) {
        if (organizationService.checkOrganizationWithNameExists(request.name())) {
            // this throws a special Exception which is intercepted and translated to a response with a 409 status code
            throw Responses.abortConflict("organization.nameExist", ImmutableMap.of("name", request.name()));
        }
        if (organizationService.checkOrganizationWithCodeExists(request.code())) {
            throw Responses.abortConflict("organization.codeExists", ImmutableMap.of("code", request.code()));
        }
        long organizationId = organizationService.create(person.user().id(), request.name(), request.code());
        return dtoConverter.from(organizationService.findById(organizationId));
    }
}
The DAO service looks like this:
@Transactional(DBConstants.SOME_TRANSACTION_MANAGER)
public class OrganizationServiceImpl implements OrganizationService {

    @Autowired
    @Qualifier(DBConstants.SOME_DSL)
    protected DSLContext context;

    @Override
    public long create(long userId, String name, String code) {
        Organization organization = new Organization(null, userId, name, code, OrganizationStatus.ACTIVE);
        OrganizationRecord organizationRecord = JooqUtil.insert(context, organization, ORGANIZATION);
        return organizationRecord.getId();
    }

    @Override
    public boolean checkOrganizationWithNameExists(String name) {
        return checkOrganizationExists(Tables.ORGANIZATION.NAME, name);
    }

    @Override
    public boolean checkOrganizationWithCodeExists(String code) {
        return checkOrganizationExists(Tables.ORGANIZATION.CODE, code);
    }

    private boolean checkOrganizationExists(TableField<OrganizationRecord, String> checkField, String checkValue) {
        return context.selectCount()
                .from(Tables.ORGANIZATION)
                .where(checkField.eq(checkValue))
                .and(Tables.ORGANIZATION.ORGANIZATION_STATUS.ne(OrganizationStatus.REMOVED))
                .fetchOne(DSL.count()) > 0;
    }
}
This brings some questions:
Should I put the @Transactional annotation on the Resource's createOrganization method? Or should I create one more service that talks to the DAO and put the @Transactional annotation on its method? Something else?
What would happen if two users concurrently send requests with the same "code" field? Before the first transaction is committed, the checks pass for both, so no 409 response will be sent. Then the first transaction will be committed properly, but the second one will violate the DB constraint and throw an SQLException. How do I handle that gracefully? I still want to show a nice error message on the client side, saying that the name is already used. But I can't really parse the SQLException or something like that... can I?
Similar to the previous one, but this time for "name", which has no unique constraint. In this case, the second transaction will not violate any DB constraints, which leads to having two organizations with the same name, violating our business constraints.
Where can I find tutorials/code/etc. that you consider great examples of how to write correct/reliable REST+DB code with complicated business logic? GitHub/books/blogs, whatever. I've tried to find something like that myself, but most examples just focus on the plumbing: add these libs to Maven, use these annotations, there is your simple CRUD, the end. They don't contain any transactional considerations at all.
UPDATE:
I know about isolation levels and the usual error/isolation matrix (dirty reads, etc.). The problem I have is finding a "production-ready" sample to learn from, or a good book on the subject. I still don't really get how to handle all the errors properly. I guess I need to retry a couple of times if the transaction failed, and then just throw some generic error and implement a client that handles it. But do I really have to use SERIALIZABLE mode whenever I use range queries? It would affect performance greatly. But otherwise, how can I guarantee that the transaction will fail?
Anyway I've decided that for now I need more time to learn about transactions and db management in general to tackle this problem...
Generally, leaving transactionality aside, the endpoint should only grab parameters from the request and call the service; it shouldn't contain business logic.
It seems your checkXXX methods are part of the business logic, because they raise errors about domain-specific conflicts. Why not move them into the service, into one method that is itself transactional?
//service code
@Transactional
public Organization createOrganization(String userId, String name, String code) {
    if (this.checkOrganizationWithNameExists(name)) {
        throw ...
    }
    if (this.checkOrganizationWithCodeExists(code)) {
        throw ...
    }
    long organizationId = this.create(userId, name, code);
    return dao.findById(organizationId);
}
I assumed your parameters are Strings, but they can be anything. I'm not sure you want to throw Responses.abortConflict in the service layer, because it seems to be a REST concept, but you can define your own exception types for it if you want.
The endpoint code should look like this; it might, however, contain an additional try-catch block which converts the thrown exceptions to error responses (a sketch of that variant follows the code below):
//endpoint code
@POST
public OrganizationResponse createOrganization(@Auth Person person, CreateOrganizationRequest request) {
    String code = request.code();
    String name = request.name();
    String userId = person.user().id();
    return dtoConverter.from(organizationService.createOrganization(userId, name, code));
}
As for questions 2 and 3, transaction isolation levels are your friends. Set the isolation level high enough; I think 'repeatable read' is the suitable one in your case. Your checkXXX methods will detect whether some other transaction has committed entities with the same name or code, and it is guaranteed that the situation stays the same by the time the 'create' method is executed. Here is one more useful read regarding Spring and transaction isolation levels.
As per my understanding, the best way to handle DB-level transactions is to use Spring's transaction isolation effectively in the DAO layer. Below is a sample of industry-standard code for your case...
public interface OrganizationService {

    @Retryable(maxAttempts = 3, value = DataAccessResourceFailureException.class, backoff = @Backoff(delay = 1000))
    boolean checkOrganizationWithNameExists(String name);
}

@Repository
@EnableRetry
public class OrganizationServiceImpl implements OrganizationService {

    @Transactional(isolation = Isolation.READ_COMMITTED)
    @Override
    public boolean checkOrganizationWithNameExists(String name) {
        // your code
        return true;
    }
}
Please correct me if I'm wrong here.
Separation of concerns:
JAX-RS resource (endpoint) layer: just handle the request, invoke the service, and wrap any exception in an appropriate response code (either catch and wrap manually or use an exception mapper; a sketch follows below).
Service / business layer: expose a transactional method for each unit of work; business errors should be handled as checked exceptions, operational ones as unchecked (subclasses of RuntimeException).
Data access layer: just handle the data access (i.e. get the DB context, execute the query and map the result).
I insist on one thing: the right place for transaction boundaries is where your business methods are defined. A transaction scope must be a business unit of work.
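A minimal sketch of the exception-mapper option, assuming a hypothetical checked business exception OrganizationConflictException thrown by the service layer (the class names are illustrative, not from the original code):

import javax.ws.rs.core.Response;
import javax.ws.rs.ext.ExceptionMapper;
import javax.ws.rs.ext.Provider;

// Registered by JAX-RS via @Provider; converts the business exception into a 409 response.
@Provider
public class OrganizationConflictExceptionMapper implements ExceptionMapper<OrganizationConflictException> {

    @Override
    public Response toResponse(OrganizationConflictException exception) {
        return Response.status(Response.Status.CONFLICT)
                .entity(exception.getMessage())
                .build();
    }
}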
Regarding the concurrency issue, there are two ways to handle this kind of problem: pessimistic or optimistic locking.
Pessimistic:
Lock
do your stuff
Update
Release lock
Optimistic:
check version
do your stuff
update if version is same, fail otherwise
Pessimistic locking is an issue for scalability and performance; the problem with optimistic locking is that you sometimes end up sending an operational error to the end user.
I would personally go with optimistic locking in your case; jOOQ supports it.
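For illustration, here is a rough sketch of hand-rolled optimistic locking with jOOQ, assuming a hypothetical VERSION column on the ORGANIZATION table and running inside a transactional service method (jOOQ can also do this for you via its optimistic-locking settings on updatable records):

// VERSION is a hypothetical column added for locking; adjust names to your schema.
public void renameOrganization(long id, String newName) {
    OrganizationRecord current = context.fetchOne(Tables.ORGANIZATION, Tables.ORGANIZATION.ID.eq(id));

    int updatedRows = context.update(Tables.ORGANIZATION)
            .set(Tables.ORGANIZATION.NAME, newName)
            .set(Tables.ORGANIZATION.VERSION, current.getVersion() + 1)
            .where(Tables.ORGANIZATION.ID.eq(id))
            .and(Tables.ORGANIZATION.VERSION.eq(current.getVersion()))
            .execute();

    if (updatedRows == 0) {
        // someone else changed the row between our read and our update
        throw new IllegalStateException("Organization " + id + " was modified concurrently");
    }
}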
First off, the DAO layer should not even know it's being fronted by a REST web service. Be sure to separate responsibilities.
Keep the @Transactional on the DAO. If you are issuing only a single statement, then you need to decide whether you are OK with dirty reads; basically, figure out the lowest isolation level your application can tolerate. Every method will start a new transaction (unless it is called from another method that already started one), and if any exception is thrown it will roll back the calls. You can set up a custom ExceptionHandler in your controller to handle data-integrity violations (like your "code" insert example).
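As an illustration only: such a handler could be a method placed inside a Spring MVC controller, assuming the violation surfaces as Spring's DataIntegrityViolationException (with the JAX-RS resources from the question you would use an ExceptionMapper instead; the method name and message are made up):

import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ExceptionHandler;

// Placed inside the controller class: turns unique-constraint violations,
// e.g. the duplicate "code" insert, into a 409 Conflict instead of a generic 500.
@ExceptionHandler(DataIntegrityViolationException.class)
public ResponseEntity<String> handleDataIntegrityViolation(DataIntegrityViolationException ex) {
    return ResponseEntity.status(HttpStatus.CONFLICT)
            .body("An organization with this code already exists");
}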
Use an aggregate (composite) primary key that covers (id, name, code, status), so you can have two orgs with the same name where one is "CURRENT" and one is "REMOVED".
I'm refactoring a code base to get rid of raw SQL statements and primitive data access and to modernize it with Spring Data JPA (backed by Hibernate). I already use QueryDSL in the project for other purposes.
I have a scenario where the user can "mass update" a ton of records and select some values that they want to update. Previously, the code manually built the update statement, with an IN clause in the WHERE for the PK (which items to update), and also manually built the SET clauses (whose options can vary depending on what the user wants to update).
In looking at QueryDSL documentation, it shows that it supports what I want to do. http://www.querydsl.com/static/querydsl/4.1.2/reference/html_single/#d0e399
I tried looking for a way to do this with Spring Data JPA and haven't had any luck. Is there a repository interface I'm missing, or another library that is required... or would I need to autowire a queryFactory into a custom repository implementation and very literally implement the code in the QueryDSL example?
You can either write a custom method or use the @Query annotation.
For custom method;
public interface RecordRepository extends RecordRepositoryCustom, CrudRepository<Record, Long> {
}

public interface RecordRepositoryCustom {
    // Custom method
    void massUpdateRecords(long... ids);
}

public class RecordRepositoryImpl implements RecordRepositoryCustom {

    @Override
    public void massUpdateRecords(long... ids) {
        // implement using em or querydsl (see the sketch below)
    }
}
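The implementation above could be fleshed out with QueryDSL roughly like this (a sketch: QRecord is the class generated by QueryDSL's annotation processor, and the status column/value are placeholders I made up; run it inside a transaction):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import com.querydsl.jpa.impl.JPAQueryFactory;

public class RecordRepositoryImpl implements RecordRepositoryCustom {

    @PersistenceContext
    private EntityManager em;

    @Override
    public void massUpdateRecords(long... ids) {
        QRecord q = QRecord.record; // generated by QueryDSL's annotation processor

        // Build the UPDATE ... SET ... WHERE id IN (...) statement dynamically;
        // additional set(...) calls can be added depending on what the user chose.
        new JPAQueryFactory(em)
                .update(q)
                .set(q.status, "UPDATED")       // placeholder column and value
                .where(q.id.in(box(ids)))
                .execute();
    }

    // QueryDSL's in(...) expects boxed values, so convert the primitive ids.
    private static Long[] box(long... ids) {
        return java.util.Arrays.stream(ids).boxed().toArray(Long[]::new);
    }
}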
For the @Query annotation:

public interface RecordRepository extends CrudRepository<Record, Long> {

    @Modifying
    @Query("update Record r set r.someColumn = :someValue where r.id in :ids")
    void massUpdateRecords(@Param("someValue") String someValue, @Param("ids") long... ids);
}
There is also the @NamedQuery option if you want your model class to be reusable with custom methods:

@Entity
@NamedQuery(name = "Record.massUpdateRecords",
        query = "update Record r set r.someColumn = :someValue where r.id in :ids")
@Table(name = "records")
public class Record {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    // rest of the entity...
}

public interface RecordRepository extends CrudRepository<Record, Long> {

    // this will use the named query
    @Modifying
    void massUpdateRecords(@Param("someValue") String someValue, @Param("ids") long... ids);
}
Check repositories.custom-implementations, jpa.query-methods.at-query and jpa.query-methods.named-queries in the Spring Data reference documentation for more info.
This question is quite interesting for me because I was solving this very problem in my current project with the same technology stack mentioned in your question. In particular, we were interested in the second part of your question:
where the options in SET clauses can vary depending on what the user wants to update
I understand this is probably not the answer you want to get, but we did not find anything out there :( Spring Data is quite cumbersome for update operations, especially when it comes to their flexibility.
After I saw your question I tried to look for something new on Spring and QueryDSL integration (maybe something was released during the past months), but nothing was.
The only thing that got me reasonably close is flush on the entity manager, which means you could follow this scenario (sketched in code a bit further below):
Get ids of entities you want to update
Retrieve all entities by these ids (first actual query to db)
Modify them in any way you want
Call entityManager.flush(), resulting in N separate updates to the database.
This approach results in N+1 actual queries to the database, where N is the number of ids to be updated. Moreover, you are moving the data back and forth, which is not good either.
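Sketched in code, that flush-based scenario might look roughly like this (the Record entity, the someColumn field and the injected EntityManager em are placeholders):

// Rough sketch of the flush-based N+1 approach described above.
@Transactional
public void massUpdate(List<Long> ids, String newValue) {
    List<Record> records = em.createQuery(
            "select r from Record r where r.id in :ids", Record.class)
            .setParameter("ids", ids)          // 1 query to load the entities
            .getResultList();

    for (Record r : records) {
        r.setSomeColumn(newValue);             // modify the managed entities in memory
    }

    em.flush();                                // N separate UPDATE statements are issued
}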
I would advise to "autowire a queryFactory into a custom repository implementation".
Also, have a look at the Spring Data and QueryDSL example. However, you will only find lookup examples there.
Hope my pessimistic answer helps :)
I have an @Entity Video with a one-to-many relation to a List<Tag> tags as one of its fields. I use the following Spring Data @Repository to get the most popular tags:
@Repository
public interface TagRepository extends CrudRepository<Tag, Integer> {

    @Query("SELECT t FROM Tag t WHERE (SELECT SUM(v.views) FROM Video v WHERE t MEMBER OF v.tags) > 0")
    public List<Tag> findMostViewedTags(int maxTags);
}
The query is processed and considered valid by Spring; I tested the generated SQL against my database locally and it returned 2 tags. In my code, however, I receive the value null when I call findMostViewedTags(100).
The Query lookup strategy is the default "CREATE_IF_NOT_FOUND".
If no results are found, should the method return an empty list or null? My desired behavior is to receive an empty list.
Why does the method call return null instead of a List<Tag> of size 2?
The normal behavior is indeed to return an empty list if no results are found. If a List<Object> is the return type of the method in the defined interface, the method should never return null.
The problem is that a parameter is passed to the method but is not used anywhere in the query. For some reason Spring decides to return null in that case. Solution: remove the unused parameter, or actually use it in the query.
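If the intent of the maxTags parameter was to limit the result size, one way to actually use it is to accept a Pageable instead of an int (a sketch; you would probably also want an ORDER BY, or a Sort inside the PageRequest, to define "most viewed"):

import java.util.List;

import org.springframework.data.domain.Pageable;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface TagRepository extends CrudRepository<Tag, Integer> {

    // The Pageable limits how many tags are returned; no unused parameter remains.
    @Query("SELECT t FROM Tag t WHERE (SELECT SUM(v.views) FROM Video v WHERE t MEMBER OF v.tags) > 0")
    List<Tag> findMostViewedTags(Pageable pageable);
}

Called, for example, as tagRepository.findMostViewedTags(PageRequest.of(0, 100)).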
I have experienced a similar problem. The cause was that I was using Mockito and had not correctly mocked the data with when().
I am working on a Spring/Hibernate project and the database is Oracle. I have a DAO layer for persistence-related operations.
In all my tables I have create_date and update_date columns, representing the timestamps when a row is inserted and updated, respectively.
The requirement is that whenever any insert/update operation happens, I have to update the corresponding timestamp column of the particular table the request is meant for. For example, say my DAO layer has two methods, m1 and m2, responsible for tables t1 and t2 respectively. If m1 is invoked, then the timestamp columns of table t1 should be updated: for an insert the create_date column, and for any update the update_date column.
I have some knowledge of Spring AOP, so I was thinking of using AOP to implement the above requirement, though I am not quite sure whether it can be achieved that way.
Please let me know if I can use AOP to fulfill this requirement, and if it is possible, please give me some input on how to implement it.
I have implemented an update-date feature for one of the modules in my application using Spring AOP.
Please find the code below for your reference.
Hope this will help.
I wonder if one can have pointcuts for variables as well. I know it might not be possible with Spring's AspectJ implementation, but is there any workaround? :P
/**
 * @author Vikas.Chowdhury
 * @version $Revision$ Last changed by $Author$ on $Date$ as $Revision$
 */
@Aspect
@Component
public class UpdateDateAspect
{
    @Autowired
    private ISurveyService surveyService;

    Integer surveyId = null;
    Logger gtLogger = Logger.getLogger(this.getClass().getName());

    @Pointcut("execution(* com.xyz.service.impl.*.saveSurvey*(..))")
    public void updateDate()
    {
    }

    @Around("updateDate()")
    public Object myAspect(final ProceedingJoinPoint pjp)
    {
        // retrieve the runtime method arguments (dynamic)
        Object returnVal = null;
        for (final Object argument : pjp.getArgs())
        {
            if (argument instanceof SurveyHelper)
            {
                SurveyHelper surveyHelper = (SurveyHelper) argument;
                surveyId = surveyHelper.getSurveyId();
            }
        }
        try
        {
            returnVal = pjp.proceed();
        }
        catch (Throwable e)
        {
            gtLogger.debug("Unable to use JoinPoint :(");
        }
        return returnVal;
    }

    @After("updateDate()")
    public void updateSurveyDateBySurveyId() throws Exception
    {
        if (surveyId != null)
        {
            surveyService.updateSurveyDateBySurveyId(surveyId);
        }
    }
}
I'd use a Hibernate interceptor instead; that's what they are for. For example, the entities that need such fields could implement the following interface:
public interface Auditable {
    Date getCreated();
    void setCreated(Date created);
    Date getModified();
    void setModified(Date modified);
}
Then the interceptor always sets the modified field on save, and only sets the created field when it's not already set.
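A rough sketch of what such an interceptor could look like, assuming Hibernate's EmptyInterceptor base class and property names created/modified matching the interface above (how you register the interceptor depends on your Spring/Hibernate configuration):

import java.io.Serializable;
import java.util.Date;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class AuditInterceptor extends EmptyInterceptor {

    // Called on INSERT: set created (if missing) and modified timestamps.
    @Override
    public boolean onSave(Object entity, Serializable id, Object[] state,
                          String[] propertyNames, Type[] types) {
        if (entity instanceof Auditable) {
            Date now = new Date();
            boolean changed = false;
            for (int i = 0; i < propertyNames.length; i++) {
                if ("created".equals(propertyNames[i]) && state[i] == null) {
                    state[i] = now;
                    changed = true;
                } else if ("modified".equals(propertyNames[i])) {
                    state[i] = now;
                    changed = true;
                }
            }
            return changed; // true tells Hibernate the state array was modified
        }
        return false;
    }

    // Called on UPDATE: always refresh the modified timestamp.
    @Override
    public boolean onFlushDirty(Object entity, Serializable id, Object[] currentState,
                                Object[] previousState, String[] propertyNames, Type[] types) {
        if (entity instanceof Auditable) {
            for (int i = 0; i < propertyNames.length; i++) {
                if ("modified".equals(propertyNames[i])) {
                    currentState[i] = new Date();
                    return true;
                }
            }
        }
        return false;
    }
}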
Even though you have been asking for a Spring AOP solution to your question, I would like to point out that the same result can be achieved using database triggers, e.g. automatically setting the created timestamp during INSERT operations and the modified timestamp during UPDATE statements.
This may be a good solution, especially if not all your DB calls go through the AOP-captured logic (e.g. when a method does not fit your pointcut pattern, or when the code is bypassed completely using a standalone SQL client), so that the modified timestamp is enforced even when somebody updates the entries from a different application.
It would have the drawback that you need to define the triggers on all affected tables, though.
It should be possible with Spring AOP using a @Before advice. If you pass an entity to a create method, have the advice set the create_date; for an update method, the update_date. You may want to consider the following to make your job easier (a sketch follows the list):
Have all entities implement a common interface to set create_date and update_date. This allows you to have a common advice without having to resort to reflection.
Have a naming convention to identify create and update methods on your DAOs. This will make your pointcuts simpler.
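A hedged sketch of that approach: it assumes a common Timestamped interface (a name I made up for the common interface suggested above), DAO methods following an insert*/update* naming convention, and an example package com.example.dao that you would replace with your own:

import java.util.Date;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class TimestampAspect {

    // Hypothetical common interface implemented by all timestamped entities.
    public interface Timestamped {
        void setCreateDate(Date date);
        void setUpdateDate(Date date);
    }

    // Before any DAO insert* method, set create_date on the passed entity.
    @Before("execution(* com.example.dao.*.insert*(..))")
    public void setCreateDate(JoinPoint joinPoint) {
        stamp(joinPoint, true);
    }

    // Before any DAO update* method, set update_date on the passed entity.
    @Before("execution(* com.example.dao.*.update*(..))")
    public void setUpdateDate(JoinPoint joinPoint) {
        stamp(joinPoint, false);
    }

    private void stamp(JoinPoint joinPoint, boolean create) {
        for (Object arg : joinPoint.getArgs()) {
            if (arg instanceof Timestamped) {
                Timestamped entity = (Timestamped) arg;
                if (create) {
                    entity.setCreateDate(new Date());
                } else {
                    entity.setUpdateDate(new Date());
                }
            }
        }
    }
}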