I encountered the @NoRepositoryBean annotation several times while reading the Spring Data documentation.
To quote from the documentation:
If you're using automatic repository interface detection using the
Spring namespace, using the interface just as is will cause Spring
to try to create an instance of MyRepository. This is of course not
desired as it just acts as an intermediary between Repository and the
actual repository interfaces you want to define for each entity. To
exclude an interface extending Repository from being instantiated as
a repository instance, annotate it with @NoRepositoryBean.
However, I am still not sure when and where to use it. Can someone please advise and give me a concrete usage example?
The annotation is used to avoid creating repository proxies for interfaces that actually match the criteria of a repository interface but are not intended to be one. It's only required once you start extending all of your repositories with shared functionality. Let me give you an example:
Assume you'd like to add a method foo() to all of your repositories. You would start by adding an intermediate repository interface like this:
package com.foobar;

public interface MyBaseRepository<T, ID extends Serializable> extends CrudRepository<T, ID> {
    void foo();
}
You would also add the corresponding implementation class, factory and so on. Your concrete repository interfaces would now extend that intermediate interface:
package com.foobar;

public interface CustomerRepository extends MyBaseRepository<Customer, Long> {
}
Now assume you bootstrap - let's say Spring Data JPA - as follows:
<jpa:repositories base-package="com.foobar" />
You use com.foobar because you have CustomerRepository in the same package. The Spring Data infrastructure now has no way to tell that MyBaseRepository is not a concrete repository interface but rather acts as an intermediate repository to expose the additional method. So it would try to create a repository proxy instance for it and fail. You can now use @NoRepositoryBean to annotate this intermediate interface to essentially tell Spring Data: don't create a repository proxy bean for this interface.
That scenario is also the reason why CrudRepository and PagingAndSortingRepository carry this annotation as well. If the package scanning picked those up by accident (because you've accidentally configured it this way) the bootstrap would fail.
Long story short: use the annotation to prevent repository interfaces from being picked up as candidates to end up as repository bean instances eventually.
We can declare a new intermediate interface to hold our custom method:
@NoRepositoryBean
public interface ExtendedRepository<T, ID extends Serializable> extends JpaRepository<T, ID> {
List<T> findByAttributeContainsText(String attributeName, String text);
}
Our interface extends the JpaRepository interface so that we'll benefit from all the standard behavior.
You'll also notice we added the @NoRepositoryBean annotation. This is necessary because otherwise the default Spring behavior is to create an implementation for all subinterfaces of Repository.
public interface ExtendedStudentRepository extends ExtendedRepository<Student, Long> {
}
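The snippet above only declares the custom method; one common way to back it (a sketch, assuming JPA's Criteria API and a custom repository base class; everything not named in the snippet is made up for illustration) looks roughly like this:

```java
import java.io.Serializable;
import java.util.List;

import javax.persistence.EntityManager;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Root;

import org.springframework.data.jpa.repository.support.JpaEntityInformation;
import org.springframework.data.jpa.repository.support.SimpleJpaRepository;

// Extends SimpleJpaRepository so all standard CRUD behavior is inherited,
// and adds the one custom method declared by ExtendedRepository.
public class ExtendedRepositoryImpl<T, ID extends Serializable>
        extends SimpleJpaRepository<T, ID> implements ExtendedRepository<T, ID> {

    private final EntityManager entityManager;

    public ExtendedRepositoryImpl(JpaEntityInformation<T, ID> entityInformation,
            EntityManager entityManager) {
        super(entityInformation, entityManager);
        this.entityManager = entityManager;
    }

    @Override
    public List<T> findByAttributeContainsText(String attributeName, String text) {
        // Build a "LIKE %text%" criteria query against the given attribute.
        CriteriaBuilder builder = entityManager.getCriteriaBuilder();
        CriteriaQuery<T> query = builder.createQuery(getDomainClass());
        Root<T> root = query.from(getDomainClass());
        query.select(root)
             .where(builder.like(root.<String>get(attributeName), "%" + text + "%"));
        return entityManager.createQuery(query).getResultList();
    }
}
```

The base class is then registered with something like `@EnableJpaRepositories(repositoryBaseClass = ExtendedRepositoryImpl.class)` so Spring Data uses it when creating repository proxies.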
Related
In my web app I have two different data storages: a db and a file. In application.properties I can set which one I want to use. I use Spring Data JPA for accessing objects from the db, so my DataBaseRepository extends CrudRepository.
I want to inject one Repository interface into the service layer, and the
implementation will depend on the chosen profile (in application.properties).
The problem is that my FileRepository doesn't implement CrudRepository, so my repositories don't have a common interface for injection.
Approach 1: suppose my FileRepository extends CrudRepository (and I mark FileRepository with @NoRepositoryBean).
Problem: my implementation of FileRepository must implement many methods which I don't need (I don't know if this is a normal approach or whether it works).
Approach 2: don't use the CrudRepository interface.
Problem: writing a lot of boilerplate code.
So please tell me about other approaches, if any exist for such a situation, or say which of these two is better. Any help is appreciated.
You could create a CustomCrudRepository that extends CrudRepository and a BaseRepository. The BaseRepository interface contains every method that has to be supported by any implementation; most likely you'd copy the signatures from CrudRepository. Then inject based on the BaseRepository.
Hard to explain, so see the following example without generics; you can add them on your own.
public interface BaseRepo {
    // common methods
    long count();
    ...
}

@NoRepositoryBean
interface CustomCrudRepository extends CrudRepository, BaseRepo {
}

interface EntityRepository extends CustomCrudRepository {
}

class FileRepository implements BaseRepo {
    @Override
    public long count() {
        return 0;
    }
}

@Service
class SomeService {
    @Autowired
    private BaseRepo repo;
}
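Stripped of Spring, the selection this answer relies on is just picking one BaseRepo implementation from a configuration value; Spring's @Profile annotation does the same thing declaratively. The factory, the "storage" key, and the return values below are all hypothetical, purely to show the shape of the idea:

```java
import java.util.Properties;

interface BaseRepo {
    long count();
}

class DbRepo implements BaseRepo {
    public long count() { return 42; }  // stand-in for a database-backed count
}

class FileRepo implements BaseRepo {
    public long count() { return 7; }   // stand-in for a file-backed count
}

class RepoFactory {
    // Reads the hypothetical "storage" key, mirroring a profile switch
    // set in application.properties.
    static BaseRepo fromConfig(Properties props) {
        return "db".equals(props.getProperty("storage")) ? new DbRepo() : new FileRepo();
    }
}
```

With Spring itself, annotating DbRepo with @Profile("db") and FileRepo with @Profile("file") and injecting BaseRepo achieves the same switch without a hand-written factory.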
I'm using spring-data-jpa 1.9.0.RELEASE and want to use the Spring caching mechanism inside my repositories, e.g.
public interface LandDao extends CrudRepository<Land, Long> {

    @Cacheable("laender")
    Land findByName(String land);
}
Here is my cache configuration:
@Configuration
@EnableCaching(mode = AdviceMode.ASPECTJ)
public class EhCacheConfiguration extends CachingConfigurerSupport {
...
Note that I'm using AdviceMode.ASPECTJ (compile-time weaving). Unfortunately, caching does not work when calling the repository method findByName.
When I change the caching mode to AdviceMode.PROXY, everything works fine.
To ensure that caching works in principle with AspectJ, I wrote the following service:
@Service
public class LandService {

    @Autowired
    LandDao landDao;

    @Cacheable("landCache")
    public Land getLand(String bez) {
        return landDao.findByName(bez);
    }
}
In this case the cache works like a charm. So I think all parts of my application are configured correctly, and the problem is the combination of spring-data-jpa and the AspectJ caching mode. Does anyone have an idea what's going wrong here?
Okay, I found the answer to my question myself. The javadoc of the responsible aspect, org.springframework.cache.aspectj.AnnotationCacheAspect, says:
When using this aspect, you must annotate the implementation class (and/or methods within that class), not the interface (if any) that the class implements. AspectJ follows Java's rule that annotations on interfaces are not inherited.
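That rule is easy to verify with plain reflection; the Marker annotation below is a made-up stand-in for @Cacheable:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// A runtime-retained marker annotation, standing in for @Cacheable.
@Retention(RetentionPolicy.RUNTIME)
@interface Marker {}

interface Api {
    @Marker
    void call();  // annotated on the interface method
}

class Impl implements Api {
    @Override
    public void call() {}  // the implementing method carries no annotation
}
```

Reflection finds @Marker on Api.call() but not on Impl.call(): annotations on interface methods are not inherited by implementing classes. That is why AspectJ weaving, which operates on the implementation class, never sees @Cacheable declared on a repository interface.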
So it's not possible to use the @Cacheable annotation inside repository interfaces together with AspectJ. My solution now is to make use of custom implementations for Spring Data repositories:
Interface for custom repository functionality:
public interface LandRepositoryCustom {
Land findByNameCached(String land);
}
Implementation of custom repository functionality using query dsl:
@Repository
public class LandRepositoryImpl extends QueryDslRepositorySupport
        implements LandRepositoryCustom {

    public LandRepositoryImpl() {
        // QueryDslRepositorySupport requires the domain class
        super(Land.class);
    }

    @Override
    @Cacheable("landCache")
    public Land findByNameCached(String land) {
        return from(QLand.land).where(QLand.land.name.eq(land)).singleResult(QLand.land);
    }
}
Note the @Cacheable annotation on the findByNameCached method.
Basic repository interface:
public interface LandRepository extends CrudRepository<Land, Long>, LandRepositoryCustom {
}
Using the repository:
public class SomeService {

    @Autowired
    LandRepository landDao;

    public void foo() {
        // Cache is working here :-)
        Land land = landDao.findByNameCached("Germany");
    }
}
It would be helpful to add a note about this limitation to the Spring Data reference documentation.
I have a very strange problem. In my repository, I need to extend the JpaSpecificationExecutor<T> interface to be able to use findAll(Specification<T>, Pageable) for paging custom queries.
But, when I use the JpaSpecificationExecutor,
public interface DescriptionRepository extends ParentRepositoryCustom<Description, Long>,
JpaSpecificationExecutor<Description> {
}
the application won't start, throwing a No property count found for type class Description exception.
My Description class has no count attribute. When I remove JpaSpecificationExecutor from the repository, everything works well again.
I came across the same exception. In my case, the reason was that
ParentRepositoryImpl was NOT correctly extending SimpleJpaRepository,
which is an implementation of JpaSpecificationExecutor.
So when Spring tries to resolve the query names, it excludes the method names belonging to what Spring calls the repositoryBaseClass of your implementation. This happens in the class org.springframework.data.repository.core.support.DefaultRepositoryInformation:
public boolean isBaseClassMethod(Method method) {
    return isTargetClassMethod(method, repositoryBaseClass);
}
Check that repositoryBaseClass is what you expect; it should define the count method.
If you don't extend the correct superclass, the method (count in your case) is not excluded from resolution, and Spring tries to build a query for it according to its name structure ... and in that case fragments of the name are tested against your entity's properties.
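As a sketch of the fix (the repositoryBaseClass attribute of @EnableJpaRepositories is real; the class names come from the question, and the configuration class is hypothetical): make the custom base class extend SimpleJpaRepository, which already implements JpaSpecificationExecutor, and register it so its methods, including count, are treated as base-class methods:

```java
import java.io.Serializable;

import javax.persistence.EntityManager;

import org.springframework.context.annotation.Configuration;
import org.springframework.data.jpa.repository.config.EnableJpaRepositories;
import org.springframework.data.jpa.repository.support.JpaEntityInformation;
import org.springframework.data.jpa.repository.support.SimpleJpaRepository;

// SimpleJpaRepository brings count(), findAll(Specification, Pageable), etc.,
// so query derivation will skip those method names instead of parsing them.
class ParentRepositoryImpl<T, ID extends Serializable> extends SimpleJpaRepository<T, ID> {

    ParentRepositoryImpl(JpaEntityInformation<T, ?> entityInformation, EntityManager em) {
        super(entityInformation, em);
    }
}

@Configuration
@EnableJpaRepositories(repositoryBaseClass = ParentRepositoryImpl.class)
class RepositoryConfig {
}
```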
Based on the Spring Data Document documentation, I have provided a custom implementation of a repository method. The custom method's name refers to a property which doesn't exist in the domain object:
@Document
public class User {
String username;
}
public interface UserRepositoryCustom {
public User findByNonExistentProperty(String arg);
}
public class UserRepositoryCustomImpl implements UserRepositoryCustom {
@Override
public User findByNonExistentProperty(String arg) {
return /*perform query*/;
}
}
public interface UserRepository
extends CrudRepository<?, ?>, UserRepositoryCustom {
public User findByUsername(String username);
}
However, perhaps because of the method name I've chosen (findByNonExistentProperty), Spring Data attempts to parse the method name and create a query from it. When it can't find nonExistentProperty in User, an exception is thrown.
Possible resolutions:
Have I made a mistake in how I provide the implementation of the custom method?
Is there a way to instruct Spring to not attempt to generate a query based on this method's name?
Do I just have to avoid using any of the prefixes that Spring Data recognizes?
None of the above.
Thank you!
Your implementation class has to be named UserRepositoryImpl (if you stick to the default configuration), as we try to look it up based on the name of the Spring Data repository interface found. The reason we start with that one is that we cannot reliably know which of the interfaces you extend is the one with the custom implementation. Given a scenario like this
public interface UserRepository extends CrudRepository<User, BigInteger>,
QueryDslPredicateExecutor<User>, UserRepositoryCustom { … }
we would have to somehow hard-code the interfaces not to check for custom implementation classes to prevent accidental pick-ups.
So what we generally suggest is coming up with a naming convention of, let's say, a Custom suffix for the interface containing the methods to be implemented manually. You can then set up the repository infrastructure to pick up implementation classes using CustomImpl as the suffix by using the repository-impl-postfix attribute of the repositories element:
<mongo:repositories base-package="com.acme"
repository-impl-postfix="CustomImpl" />
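If you bootstrap with Java config instead of the XML namespace, the equivalent setting is the repositoryImplementationPostfix attribute (shown here for Spring Data MongoDB; the package name is just the example's):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories;

@Configuration
@EnableMongoRepositories(basePackages = "com.acme",
        repositoryImplementationPostfix = "CustomImpl")
class MongoConfig {
}
```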
There's more information on that in the reference documentation but it seems you have at least briefly checked that. :)
This is the first time I'm using the DAO pattern. From what I've read so far, implementing this pattern will help me separate my calling code (controller) from any persistence implementation, which is exactly what I want; that is, I don't want to be restricted to the use of any particular database or third-party libraries.
I'm creating some test code (in TDD fashion) using MongoDB and Morphia (as an example), with Morphia's provided BasicDAO class.
As far as I can tell, extending BasicDAO<T, V> requires a constructor that accepts Morphia and Mongo objects; these are very specific (third-party) types that I don't really want floating around outside of the DAO class itself.
How can I have more of a pluggable architecture? By this I mean: what should I look into so that I can configure my application to use a specific DAO with specific configuration arguments, external to the actual source?
A "pluggable" DAO layer is usually/always based on a DAO interface. For example, let's consider a quite generic, simple one:
public interface GenericDAO<T, K extends Serializable> {
    List<T> getAll(Class<T> typeClass);
    T findByKey(Class<T> typeClass, K id);
    void update(T object);
    void remove(T object);
    void insert(T object);
}
(This is what you have in Morphia's generic DAO)
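To show the contract in action without any datastore, here is a toy in-memory implementation (the interface is repeated so the snippet stands alone; the key-extractor function is a made-up stand-in for whatever id mapping a real datastore would use):

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// The GenericDAO contract from the answer, repeated for self-containedness.
interface GenericDAO<T, K extends Serializable> {
    List<T> getAll(Class<T> typeClass);
    T findByKey(Class<T> typeClass, K id);
    void update(T object);
    void remove(T object);
    void insert(T object);
}

// Toy in-memory implementation backed by a map.
class InMemoryDAO<T, K extends Serializable> implements GenericDAO<T, K> {

    private final Map<K, T> store = new LinkedHashMap<>();
    private final Function<T, K> keyOf;  // extracts the id from an object

    InMemoryDAO(Function<T, K> keyOf) {
        this.keyOf = keyOf;
    }

    public List<T> getAll(Class<T> typeClass) { return new ArrayList<>(store.values()); }
    public T findByKey(Class<T> typeClass, K id) { return store.get(id); }
    public void update(T object) { store.put(keyOf.apply(object), object); }
    public void remove(T object) { store.remove(keyOf.apply(object)); }
    public void insert(T object) { store.put(keyOf.apply(object), object); }
}
```

The calling code only sees GenericDAO, so swapping this for a JDBC- or Mongo-backed implementation touches no caller.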
Then you can develop several different generic DAO implementations, which will have different fields (reflected in constructor parameters, setters and getters, etc.). Let's assume a JDBC-based one:
public class GenericDAOJDBCImpl<T, K extends Serializable> implements GenericDAO<T, K> {
    private String db_url;
    private Connection connection;
    private PreparedStatement insert;
    // etc.
}
Once the generic DAO is implemented (for a concrete datastore), getting a concrete DAO is a no-brainer:
public interface PersonDAO extends GenericDAO<Person, Long> {
}
and
public class PersonDAOJDBCImpl extends GenericDAOJDBCImpl<Person, Long> implements PersonDAO {
}
(BTW, what you have in Morphia's BasicDAO is an implementation of the generic DAO for MongoDB).
The second thing in a pluggable architecture is the selection of the concrete DAO implementation. I would advise you to read chapter 2 of Apress' Pro Spring 2.5 ("Putting Spring into 'Hello World'") to progressively learn about factories and dependency injection.
Spring does DI for you using configurations and it's widely used.
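To make that concrete, a sketch of the XML-era wiring that book teaches (the bean ids, package names, and the PersonService class are hypothetical; only PersonDAO and PersonDAOJDBCImpl come from this answer):

```xml
<!-- The service depends only on the PersonDAO interface; switching the
     datastore means pointing this one bean definition at, say, a
     Mongo-backed implementation instead. -->
<bean id="personDao" class="com.example.PersonDAOJDBCImpl"/>

<bean id="personService" class="com.example.PersonService">
    <property name="personDao" ref="personDao"/>
</bean>
```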
Hi, I am not an expert in Java, but let me try to offer a solution.
You can have a superclass where all the connection-related stuff happens, and any other base class can extend and use it.
Later, if you switch your DB or need specific third-party drivers, you only rewrite the superclass.
Again, I am no expert; just trying things here to learn. :)
A couple of standard DI frameworks are Spring and Guice. Both of these frameworks facilitate TDD.