I have the following repository:
@Repository
public interface EntityRepository extends JpaRepository<Entity, Long> {
    List<Entity> findAllByFirstId(Long firstId);
    List<Entity> findAllBySecondId(Long secondId);
    List<Entity> findAllByFirstIdAndSecondId(Long firstId, Long secondId);
}
The controller method implementing an interface generated with io.swagger:swagger-codegen-maven-plugin uses Optional<Long> for the optional request parameters (the underlying service uses the same parameters):
ResponseEntity<List<Entity>> entities(Optional<Long> firstId, Optional<Long> secondId);
I would like to filter the entities based on the parameters firstId and secondId, which are never null in the database but may or may not be passed to the controller (each search parameter is optional).
The problem arises with the derived queries when null is passed: since the parameter is optional, JpaRepository uses the null as a search criterion. That's what I don't want - I want the filtering on a parameter to be skipped as long as it is null.
My workaround solution based on Optional is:
public List<Entity> entities(Optional<Long> firstId, Optional<Long> secondId) {
    return firstId
        .or(() -> secondId)
        .map(value -> {
            if (firstId.isEmpty()) {
                return entityRepository.findAllBySecondId(value);
            }
            if (secondId.isEmpty()) {
                return entityRepository.findAllByFirstId(value);
            }
            return entityRepository.findAllByFirstIdAndSecondId(
                firstId.get(), secondId.get());
        })
        .orElse(entityRepository.findAll())
        .stream()
        .map(...) // Mapping between DTO and entity. For the sake of brevity
                  // I used the same object Entity for both controller and repository,
                  // as it is not related to the question.
        .collect(Collectors.toList());
}
This issue has already been asked: Spring Data - ignore parameter if it has a null value, and a ticket was created: DATAJPA-209.
Since the question is almost 3 years old and the ticket dates back to 2012, I would like to ask whether there is a more comfortable and universal way to avoid the overhead of handling the Optionals and duplicating the repository methods. The solution for 2 such parameters looks acceptable; however, I'd like to implement the very same filtering for 4-5 parameters.
You need a Specification utility class like this:
public class EntitySpecifications {

    public static Specification<Entity> firstIdEquals(Optional<Long> firstId) { // or Long firstId; it is better to avoid Optional method parameters
        return (root, query, builder) ->
            firstId.isPresent() // or firstId != null if you use a Long method parameter
                ? builder.equal(root.get("firstId"), firstId.get())
                : builder.conjunction(); // to ignore this clause
    }

    public static Specification<Entity> secondIdEquals(Optional<Long> secondId) {
        return (root, query, builder) ->
            secondId.isPresent()
                ? builder.equal(root.get("secondId"), secondId.get())
                : builder.conjunction(); // to ignore this clause
    }
}
Then your EntityRepository has to extend JpaSpecificationExecutor:
@Repository
public interface EntityRepository
        extends JpaRepository<Entity, Long>, JpaSpecificationExecutor<Entity> {
}
Usage:
@Service
public class EntityService {

    @Autowired
    EntityRepository repository;

    public List<Entity> getEntities(Optional<Long> firstId, Optional<Long> secondId) {
        Specification<Entity> spec =
            Specifications.where(EntitySpecifications.firstIdEquals(firstId)) // Spring Data JPA 2.0: use Specification.where
                .and(EntitySpecifications.secondIdEquals(secondId));
        return repository.findAll(spec);
    }
}
The io.swagger:swagger-codegen-maven-plugin generates them as
Optional since I request them as not required (required: false by
default). I might generate them as boxed types, such as Long, …
It’s probably partly a matter of taste. If it were me and I could, I’d go for the version without Optional. I don’t think they contribute anything useful here.
public List<Entity> entities(Long firstId, Long secondId) {
    List<Entity> entities;
    if (firstId == null) {
        if (secondId == null) {
            entities = entityRepository.findAll();
        } else {
            entities = entityRepository.findAllBySecondId(secondId);
        }
    } else {
        if (secondId == null) {
            entities = entityRepository.findAllByFirstId(firstId);
        } else {
            entities = entityRepository.findAllByFirstIdAndSecondId(firstId, secondId);
        }
    }
    return entities.stream()
        .map(...) // Mapping between DTO and entity
        .collect(Collectors.toList());
}
The Optional class was designed to be used for return values that may be absent, not really for anything else, or so I have read. I think there are rare situations where I'd use it for something else, but this is not one of them.
I'd suggest you use specifications instead. See the documentation and examples here.
Briefly, the idea is the following: for each attribute you define a specification. Then check each attribute in your search criteria and, if it is not null, add the corresponding specification to the "concatenated" specification. Then you search using this "concatenated" specification.
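Framework aside, the composition idea can be sketched with plain java.util.function.Predicate (the Entity record and field names below are made up for illustration): start from a neutral predicate and and-in one clause per non-null criterion, just like chaining Specifications with conjunction() as the neutral element.

```java
import java.util.List;
import java.util.function.Predicate;

public class ConcatenatedFilterSketch {
    record Entity(long firstId, long secondId) {}

    // Combine only the non-null criteria; a null criterion is simply skipped.
    static Predicate<Entity> byIds(Long firstId, Long secondId) {
        Predicate<Entity> p = e -> true; // neutral, like builder.conjunction()
        if (firstId != null) {
            p = p.and(e -> e.firstId() == firstId);
        }
        if (secondId != null) {
            p = p.and(e -> e.secondId() == secondId);
        }
        return p;
    }

    public static void main(String[] args) {
        List<Entity> all = List.of(new Entity(1, 10), new Entity(1, 20), new Entity(2, 10));
        System.out.println(all.stream().filter(byIds(1L, null)).count());   // 2
        System.out.println(all.stream().filter(byIds(null, null)).count()); // 3
    }
}
```

The same shape scales to 4-5 parameters without any combinatorial explosion of repository methods.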
I'm having trouble converting between java.sql.Timestamp and java.time.Instant using JOOQ converters.
Here's a simplified version of the code I'm working with.
public class User {
    private static final Converter<Timestamp, Instant> MY_CONVERTER = Converter.of(
        Timestamp.class,
        Instant.class,
        t -> t == null ? null : t.toInstant(),
        i -> i == null ? null : Timestamp.from(i)
    );

    public static Table<?> table = DSL.table("user");
    public static Field<String> name = DSL.field(DSL.name(table.getName(), "name"), String.class);
    public static Field<Instant> created = DSL.field(DSL.name(table.getName(), "created"), SQLDataType.TIMESTAMP.asConvertedDataType(MY_CONVERTER));
}
public class UserDto {
    private String name;
    private Instant created;
    // getters, setters, etc.
}
public class UserWriter {
    // constructor with injected DefaultDSLContext etc.
    public void create(UserDto user) {
        dslContext.insertInto(User.table, User.name, User.created)
            .values(user.getName(), user.getCreated())
            .execute();
    }
}
public class UserReader {
    // constructor with injected DefaultDSLContext etc.
    public Result<Record> getAll() {
        return dslContext.select().from(User.table).fetch();
    }
}
public class UserService {
    // constructor with injected UserReader etc.
    public Collection<UserDto> getAll() {
        return userReader
            .getAll()
            .stream()
            .map(Users::from)
            .collect(Collectors.toList());
    }
}
public class Users {
    public static UserDto from(Record record) {
        UserDto user = new UserDto();
        user.setName(record.get(User.name));
        user.setCreated(record.get(User.created));
        return user;
    }
}
When I create a new User the converter is called and the insertion works fine. However, when I select the Users the converter isn't called and the record.get(User.created) call in the Users::from method returns a Timestamp (and therefore fails as UserDto.setCreated expects an Instant).
Any ideas?
Thanks!
Why the converter isn't applied
From the way you phrased your question (you didn't post the exact SELECT statement that you've tried), I'm assuming you didn't pass all the column expressions explicitly. But then, how would jOOQ be able to find out what columns your table has? You declared some column expressions in some class, but that class isn't following any structure known to jOOQ. The only way to get jOOQ to fetch all known columns is to make them known to jOOQ, using code generation (see below).
You could, of course, let User extend the internal org.jooq.impl.TableImpl class and use internal API to register the Field values. But why do that manually, if you can generate this code?
Code generation
I'll repeat the main point of my answer to your previous question, which is: Please use the code generator. I've now written an entire article on why you should do this. Once jOOQ knows all of your meta data via code generation, you can just automatically select all columns like this:
UserRecord user = ctx
    .selectFrom(USER)
    .where(USER.ID.eq(...))
    .fetchOne();
Not just that, you can also configure your data types as INSTANT using a <forcedType>, so you don't need to worry about data type conversion every time.
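For reference, a forced type along these lines would make the generator map matching TIMESTAMP columns to java.time.Instant. This is only a sketch: the match expression is a made-up example and must be adapted to your actual schema and naming.

```xml
<!-- Inside the jOOQ code generator's <database> configuration. -->
<!-- The includeExpression below is a hypothetical example pattern. -->
<forcedTypes>
  <forcedType>
    <name>INSTANT</name>
    <includeExpression>.*\.CREATED</includeExpression>
    <includeTypes>TIMESTAMP</includeTypes>
  </forcedType>
</forcedTypes>
```

With that in place, the generated records expose Instant directly, and no manual Converter wiring is needed at the call sites.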
I cannot stress this enough, and I'm frequently surprised how many projects try to use jOOQ without code generation, which removes so much of jOOQ's power. The main reason to not use code generation is if your schema is dynamic, but since you have that User class, it obviously isn't dynamic.
I am working on a Spring Boot - App that has multiple entities having some identical columns for filtering.
Currently, I have the same query defined in multiple repositories, so after doing some research, I've stumbled across an article about JPA - Specifications: https://spring.io/blog/2011/04/26/advanced-spring-data-jpa-specifications-and-querydsl/
So I made a generic class to build JPA-Specifications:
public final class GenericSpecifications<T>
{
    public Specification<T> whereNameLikeAndDateGreaterThan(String fieldName, String fieldDate, String name, LocalDate date)
    {
        return (root, query, builder) -> builder.and(
            builder.like(root.<String>get(fieldName), name),
            builder.greaterThan(root.<LocalDate>get(fieldDate), date));
    }
}
So in the service I can use:
repository.findAll(whereNameLikeAndDateGreaterThan(Person_.name, Person_.date, "Max", LocalDate.now()));
In this way, I have one query/specification in a central place and I don't need to write/maintain the same query on all repositories.
However, I have more complex queries, where I need to filter over multiple columns.
This means that the methods in my GenericSpecification class become too bloated, since I need to pass multiple column names and search values, so I could end up with methods with 6 or more parameters.
I could define an abstract entity class extended by all other entities. This abstract entity would have all the common fields, ensuring that all the entities have the same columns.
Then I could use these names for filtering, so I wouldn't have to pass the field/column names at all.
But, I am not sure if this is the cleanest approach to my problem.
Do you know if there is a better way to do this?
I think the cleanest approach is to use inheritance, but in the specification creator, not the entities. So, for example, something like this (I didn't check whether it compiles, but it should give the idea):
class BasicSpecificationBuilder<T> {

    public Specification<T> stringEqual(String fieldName, String value) {
        // root is Root<T> here
        return (root, query, builder) ->
            builder.equal(root.<String>get(fieldName), value);
    }

    public Specification<T> dateAfter(String fieldName, LocalDate value) {
        return (root, query, builder) ->
            builder.greaterThan(root.<LocalDate>get(fieldName), value);
    }
}
// extend per entity type and required queries
class ContractSpecificationBuilder extends BasicSpecificationBuilder<Contract> {
    public Specification<Contract> contractsCreatedAfter(String partner, LocalDate date) {
        return stringEqual(Contract_.partnerName, partner)
            .and(dateAfter(Contract_.closeDate, date));
    }
}

class EmployeeSpecificationBuilder extends BasicSpecificationBuilder<Employee> {
    public Specification<Employee> employeesJoinedAfter(String name, LocalDate date) {
        return stringEqual(Employee_.name, name)
            .and(dateAfter(Employee_.entryDate, date));
    }
}
This way you have a collection of builder methods in the base class you can reuse, and queries that don't explode because they're separated per entity. There may be a little code duplication, as in the example above - if there are too many such cases, you can refactor the common combinations into the base class.
class BasicSpecificationBuilder<T> {
    public Specification<T> stringEqualAndDateAfter(String stringField, String stringValue,
                                                    String dateField, LocalDate dateValue) {
        return stringEqual(stringField, stringValue)
            .and(dateAfter(dateField, dateValue));
    }
}
class ContractSpecificationBuilder extends BasicSpecificationBuilder<Contract> {
    public Specification<Contract> contractsCreatedAfter(String partner, LocalDate date) {
        return stringEqualAndDateAfter(Contract_.partnerName, partner, Contract_.closeDate, date);
    }
}
That's a matter of taste and code quality settings (we had a code duplication measure in SonarQube with a limit, but I don't think this would have crossed the limit).
Since these are all factory methods, you can do pretty much the same thing with classes providing static methods and the "base" class containing the basic methods as static utility methods. I kind of dislike the syntax for generic static methods though.
That's all assuming you read the Baeldung intro on how to use Specification and didn't like that approach.
Overview
Given
Spring Data JPA, Spring Data Rest, QueryDsl
a Meetup entity
with a Map<String,String> properties field
persisted in a MEETUP_PROPERTY table as an @ElementCollection
a MeetupRepository
that extends QueryDslPredicateExecutor<Meetup>
I'd expect
A web query of
GET /api/meetup?properties[aKey]=aValue
to return only Meetups with a property entry that has the specified key and value: aKey=aValue.
However, that's not working for me.
What am I missing?
Tried
Simple Fields
Simple fields work, like name and description:
GET /api/meetup?name=whatever
Collection fields work, like participants:
GET /api/meetup?participants.name=whatever
But not this Map field.
Customize QueryDsl bindings
I've tried customizing the binding by having the repository
extend QuerydslBinderCustomizer<QMeetup>
and overriding the
customize(QuerydslBindings bindings, QMeetup meetup)
method, but while the customize() method is being hit, the binding code inside the lambda is not.
EDIT: Learned that's because QuerydslBindings' means of evaluating the query parameter does not let it match up against the pathSpecs map it's internally holding - which has your custom bindings in it.
Some Specifics
Meetup.properties field
@ElementCollection(fetch = FetchType.EAGER)
@CollectionTable(name = "MEETUP_PROPERTY", joinColumns = @JoinColumn(name = "MEETUP_ID"))
@MapKeyColumn(name = "KEY")
@Column(name = "VALUE", length = 2048)
private Map<String, String> properties = new HashMap<>();
customized querydsl binding
EDIT: See above; turns out, this was doing nothing for my code.
public interface MeetupRepository extends PagingAndSortingRepository<Meetup, Long>,
                                          QueryDslPredicateExecutor<Meetup>,
                                          QuerydslBinderCustomizer<QMeetup> {

    @Override
    default void customize(QuerydslBindings bindings, QMeetup meetup) {
        bindings.bind(meetup.properties).first((path, value) -> {
            BooleanBuilder builder = new BooleanBuilder();
            for (String key : value.keySet()) {
                builder.and(path.containsKey(key).and(path.get(key).eq(value.get(key))));
            }
            return builder;
        });
    }
}
Additional Findings
QuerydslPredicateBuilder.getPredicate() asks QuerydslBindings.getPropertyPath() to try 2 ways of returning a path, so it can make a predicate that QuerydslAwareRootResourceInformationHandlerMethodArgumentResolver.postProcess() can use.
1 is to look in the customized bindings. I don't see any way to express a map query there.
2 is to default to Spring's bean paths. The same expression problem exists there. How do you express a map?
So it looks impossible to get QuerydslPredicateBuilder.getPredicate() to automatically create the predicate.
Fine - I can do it manually, if I can hook into QuerydslAwareRootResourceInformationHandlerMethodArgumentResolver.postProcess()
HOW can I override that class, or replace the bean? It's instantiated and returned as a bean in the RepositoryRestMvcConfiguration.repoRequestArgumentResolver() bean declaration.
I can override that bean by declaring my own repoRequestArgumentResolver bean, but it doesn't get used.
It gets overridden by RepositoryRestMvcConfiguration. I can't force it by setting it @Primary or @Ordered(HIGHEST_PRECEDENCE).
I can force it by explicitly component-scanning RepositoryRestMvcConfiguration.class, but that also messes up Spring Boot's autoconfiguration, because it causes RepositoryRestMvcConfiguration's bean declarations to be processed before any auto-configuration runs. Among other things, that results in responses that are serialized by Jackson in unwanted ways.
The Question
Well - looks like the support I expected just isn't there.
So the question becomes:
HOW do I correctly override the repoRequestArgumentResolver bean?
BTW - QuerydslAwareRootResourceInformationHandlerMethodArgumentResolver is awkwardly non-public. :/
Replace the Bean
Implement ApplicationContextAware
This is how I replaced the bean in the application context.
It feels a little hacky. I'd love to hear a better way to do this.
@Configuration
public class CustomQuerydslHandlerMethodArgumentResolverConfig implements ApplicationContextAware {

    /**
     * This is originally the class that instantiated QuerydslAwareRootResourceInformationHandlerMethodArgumentResolver and placed it into the Spring application context
     * as a {@link RootResourceInformationHandlerMethodArgumentResolver} by the name of 'repoRequestArgumentResolver'.<br/>
     * By injecting this bean, we can let {@link #meetupApiRepoRequestArgumentResolver} delegate as much as possible to the original code in that bean.
     */
    private final RepositoryRestMvcConfiguration repositoryRestMvcConfiguration;

    @Autowired
    public CustomQuerydslHandlerMethodArgumentResolverConfig(RepositoryRestMvcConfiguration repositoryRestMvcConfiguration) {
        this.repositoryRestMvcConfiguration = repositoryRestMvcConfiguration;
    }

    @Override
    public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
        DefaultListableBeanFactory beanFactory = (DefaultListableBeanFactory) ((GenericApplicationContext) applicationContext).getBeanFactory();
        beanFactory.destroySingleton(REPO_REQUEST_ARGUMENT_RESOLVER_BEAN_NAME);
        beanFactory.registerSingleton(REPO_REQUEST_ARGUMENT_RESOLVER_BEAN_NAME,
            meetupApiRepoRequestArgumentResolver(applicationContext, repositoryRestMvcConfiguration));
    }

    /**
     * This code is mostly copied from {@link RepositoryRestMvcConfiguration#repoRequestArgumentResolver()}, except the if clause checking whether the QueryDsl library is
     * present has been removed, since we're counting on it anyway.<br/>
     * That means that if that code changes in the future, we're going to need to alter this code... :/
     */
    @Bean
    public RootResourceInformationHandlerMethodArgumentResolver meetupApiRepoRequestArgumentResolver(ApplicationContext applicationContext,
            RepositoryRestMvcConfiguration repositoryRestMvcConfiguration) {
        QuerydslBindingsFactory factory = applicationContext.getBean(QuerydslBindingsFactory.class);
        QuerydslPredicateBuilder predicateBuilder = new QuerydslPredicateBuilder(repositoryRestMvcConfiguration.defaultConversionService(),
            factory.getEntityPathResolver());
        return new CustomQuerydslHandlerMethodArgumentResolver(repositoryRestMvcConfiguration.repositories(),
            repositoryRestMvcConfiguration.repositoryInvokerFactory(repositoryRestMvcConfiguration.defaultConversionService()),
            repositoryRestMvcConfiguration.resourceMetadataHandlerMethodArgumentResolver(),
            predicateBuilder, factory);
    }
}
Create a Map-searching predicate from http params
Extend RootResourceInformationHandlerMethodArgumentResolver
And these are the snippets of code that create my own Map-searching predicate based on the http query parameters.
Again - would love to know a better way.
The postProcess method calls:
predicate = addCustomMapPredicates(parameterMap, predicate, domainType).getValue();
just before the predicate reference is passed into the QuerydslRepositoryInvokerAdapter constructor and returned.
Here is that addCustomMapPredicates method:
private BooleanBuilder addCustomMapPredicates(MultiValueMap<String, String> parameters, Predicate predicate, Class<?> domainType) {
    BooleanBuilder booleanBuilder = new BooleanBuilder();
    parameters.keySet()
        .stream()
        .filter(s -> s.contains("[") && matches(s) && s.endsWith("]"))
        .collect(Collectors.toList())
        .forEach(paramKey -> {
            String property = paramKey.substring(0, paramKey.indexOf("["));
            if (ReflectionUtils.findField(domainType, property) == null) {
                LOGGER.warn("Skipping predicate matching on [{}]. It is not a known field on domainType {}", property, domainType.getName());
                return;
            }
            String key = paramKey.substring(paramKey.indexOf("[") + 1, paramKey.indexOf("]"));
            parameters.get(paramKey).forEach(value -> {
                if (!StringUtils.hasLength(value)) {
                    booleanBuilder.or(matchesProperty(key, null));
                } else {
                    booleanBuilder.or(matchesProperty(key, value));
                }
            });
        });
    return booleanBuilder.and(predicate);
}
static boolean matches(String key) {
    return PATTERN.matcher(key).matches();
}
And the pattern:
/**
* disallow a . or ] from preceding a [
*/
private static final Pattern PATTERN = Pattern.compile(".*[^.]\\[.*[^\\[]");
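As a quick sanity check of what that pattern accepts (my own examples, not from the original post): it matches bracketed map keys like properties[aKey], but rejects keys where the [ is preceded by a dot or by nothing at all.

```java
import java.util.regex.Pattern;

public class ParamKeyPatternDemo {
    // Same pattern as above: a . must not directly precede the [
    private static final Pattern PATTERN = Pattern.compile(".*[^.]\\[.*[^\\[]");

    static boolean matches(String key) {
        return PATTERN.matcher(key).matches();
    }

    public static void main(String[] args) {
        System.out.println(matches("properties[aKey]")); // true
        System.out.println(matches("props.[key]"));      // false: [ preceded by .
        System.out.println(matches("[key]"));            // false: nothing before [
    }
}
```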
I spent a few days looking into how to do this. In the end I just went with manually adding to the predicate. This solution feels simple and elegant.
So you access the map via
GET /api/meetup?properties.aKey=aValue
On the controller I injected the request parameters and the predicate.
public List<Meetup> getMeetupList(@QuerydslPredicate(root = Meetup.class) Predicate predicate,
                                  @RequestParam Map<String, String> allRequestParams,
                                  Pageable page) {
    Predicate builder = createPredicateQuery(predicate, allRequestParams);
    return meetupRepo.findAll(builder, page);
}
I then simply parsed the query parameters and added a contains predicate for each:
private static final String PREFIX = "properties.";

private BooleanBuilder createPredicateQuery(Predicate predicate, Map<String, String> allRequestParams) {
    BooleanBuilder builder = new BooleanBuilder();
    builder.and(predicate);
    allRequestParams.entrySet().stream()
        .filter(e -> e.getKey().startsWith(PREFIX))
        .forEach(e -> {
            var key = e.getKey().substring(PREFIX.length());
            builder.and(QMeetup.meetup.properties.contains(key, e.getValue()));
        });
    return builder;
}
I'm using this awesome library, but I have a problem.
I'm implementing a DTO pattern, so I use another project to automatically convert an EJB to a DTO using naming conventions.
Then, I want to query the DTO and get the real result (an EJB query).
I implemented QueryDSL with JPAAnnotationProcessor on my ENTITIES, and the QuerydslAnnotationProcessor on my DTOs.
For example :
An entity User(Long id, String username, Site site)
A DTO UserDto(Long id, String username, String siteName)
Converting objects is good, "siteName" automatically match "site.name".
And so, I write a QueryDSL query like: userDto.id.gt(20).and(userDto.username.like("a%")).and(userDto.siteName.like("%b"));
I'm looking for a way to build the corresponding entity query.
The only idea I got is to :
Clone the Query
Change the path "userDto" to "user"
Verify each predicate to know if the property exists and if the type is matching
Any way to do that or to reach my goal?
Thanks
Since this is still relevant and undocumented functionality, and since Timo's answer, while helpful, is very cryptic, here's how to do it:
First, extend ReplaceVisitor:
private class CustomReplaceVisitor extends ReplaceVisitor<Void> {
    @Override
    public Expression<?> visit(Path<?> path, @Nullable Void context) {
        // The map Timo mentioned to transform paths:
        Map<Path<?>, Path<?>> map = Map.of(
            QUser.user.id, QUserDto.userDto.id,
            QUser.user.name, QUserDto.userDto.name
        );
        if (map.containsKey(path)) {
            return map.get(path);
        } else {
            return super.visit(path, context);
        }
    }
}
Then use it like this:
CustomReplaceVisitor replaceVisitor = new CustomReplaceVisitor();
Predicate userPredicate = QUser.user.id.eq(2L).and(QUser.user.name.eq("Somename"));
Predicate userDtoPredicate = (Predicate) userPredicate.accept(replaceVisitor, null);
You will need to convert expressions in general. With a custom ReplaceVisitor you can, for example, override visit(Path<?> expr, @Nullable Void context).
A generic way to do the path replacements would be to use a Map<Path<?>, Path<?>> to define the replacements:
if (map.containsKey(path)) {
    return map.get(path);
} else {
    return super.visit(path, context);
}
You can use your visitor like this:
Expression<?> transformedExpression = expr.accept(visitor, null);
Looking for some guidance on designing some code in Java.
Currently I have something like this....
@Service
class SomeService {

    @Autowired
    private FilterSoldOut filterSoldOut;

    @Autowired
    private FilterMinPriceThreshold filterMinPriceThreshold;

    public List<Product> getProducts() {
        List<Product> products = //...code to get some products

        // Returns list of in-stock products
        products = filterSoldOut.doFilter(products);

        // Returns list of products above min price
        products = filterMinPriceThreshold.doFilter(minPrice, products);

        return products;
    }
}
What I would like to do is create a Filter interface with a doFilter method, and then in SomeService create a List<Filter> that is autowired by Spring. Then in the getProducts method I can iterate over the list and invoke doFilter on each filter. This way, in the future, I can create new classes that implement the Filter interface, add them to the list via Spring configuration, and have the new filter applied without having to change the code.
But the problem is that the parameters of the doFilter method can differ. I've read about the Command pattern and the Visitor pattern, but they don't quite seem to fit the bill.
Can anyone suggest a good pattern to achieve what I've described?
Thanks.
There are many ways to do this. Some are complicated, some are simpler. The simplest one would be to use varargs or an array of Object elements. The problem here is that you have to cast each object to its proper type in order to use it, and that can be a little tricky if there are multiple types in an unknown order.
Another option is to use a Map<String, Object> (which you can wrap in a class of your own if required, something like FilterParams) that stores parameters by name, so you can obtain them and cast them accordingly.
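A minimal sketch of such a wrapper (FilterParams is the hypothetical name from the paragraph above; the typed get is where the casting is centralized):

```java
import java.util.Map;

public class FilterParams {
    private final Map<String, Object> params;

    public FilterParams(Map<String, Object> params) {
        this.params = params;
    }

    // Central place for the cast; throws ClassCastException on a type mismatch.
    public <T> T get(String name, Class<T> type) {
        return type.cast(params.get(name));
    }
}
```

A filter can then read params.get("minPrice", Integer.class) without each caller repeating unchecked casts.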
Edit
Considering that the parameters vary at runtime, you'll need someone "well informed" about the current configuration.
Not pattern-wise, but I'd rather keep it simple without too many fancy names. What about introducing a FilterConfigurator that has a simple overloaded method configure, which receives the particular filter and configures it based on its type? This configurator is the informed entity that knows the current values for those parameters.
The goal is to rid the Service of the responsibility of configuring a filter.
In addition, if you create your Filter class, you'll be able to implement a single doFilter that you can invoke without changes.
There's another idea, and it involves a FilterFactory that creates and initializes filters, thus having a filter 100% configured from scratch. This factory can rely on the very same FilterConfigurator or do the configuration itself.
old:
I'd suggest setting the filter state at construction time, or at
least before you getProducts().
In your example with the two filters, one of them is (probably)
checking a database for availability of the product, and the other one
is comparing the product's price to some preset value. This value
(minPrice) is known before the filter is applied. It can
also be said that the filter depends on it, or that it's part of the
filter's state. Therefore I'd recommend putting the
minPrice inside the filter at construction time (or via a
setter) and then only passing in the list of products you want to filter.
Use the same pattern for your other filters.
Use the same pattern for your other filters.
new suggestion (came up with it after the comments):
You can create a single object (AllFiltersState) that holds all the values for all the filters. In your controller set whatever criteria you need in this object (minPrice, color, etc.) and pass it to every filter along the products - doFilter(allFiltersState, products).
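A sketch of that single parameter object (all names here are illustrative): null fields mean "no criterion", and every filter receives the same state object along with the product list.

```java
import java.util.List;
import java.util.stream.Collectors;

public class AllFiltersStateSketch {
    record Product(String color, int price) {}

    // Holds the criteria for every filter; a null field means "don't filter on it".
    static class AllFiltersState {
        Integer minPrice;
        String color;
    }

    interface Filter {
        List<Product> doFilter(AllFiltersState state, List<Product> products);
    }

    static final Filter MIN_PRICE = (state, products) ->
        state.minPrice == null ? products
            : products.stream().filter(p -> p.price() >= state.minPrice).collect(Collectors.toList());

    static final Filter COLOR = (state, products) ->
        state.color == null ? products
            : products.stream().filter(p -> p.color().equals(state.color)).collect(Collectors.toList());

    public static void main(String[] args) {
        AllFiltersState state = new AllFiltersState();
        state.minPrice = 10; // color left null, so the color filter passes everything through
        List<Product> products = List.of(new Product("red", 5), new Product("blue", 15));
        for (Filter f : List.of(MIN_PRICE, COLOR)) {
            products = f.doFilter(state, products);
        }
        System.out.println(products.size()); // 1
    }
}
```

Every filter now shares the uniform signature doFilter(state, products), so the service can loop over an autowired list without caring which criteria each filter consumes.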
As Cris says, you can use the following function definition:
public List<Product> doFilter(Object... args) {
    if (args.length != 2)
        throw new IllegalArgumentException();
    if (!(args[0] instanceof String))
        throw new IllegalArgumentException();
    if (!(args[1] instanceof Integer))
        throw new IllegalArgumentException();
    String stringArgument = (String) args[0];
    Integer integerArgument = (Integer) args[1];
    // your code here
    return ...;
}
or with the Command pattern:
public interface Command {
}

public class FirstCommand implements Command {
    private String string;
    // constructor, getters and setters
}

public class SecondCommand implements Command {
    private Integer integer;
    // constructor, getters and setters
}

// first service function
public List<Product> doFilter(Command command) {
    if (!(command instanceof FirstCommand))
        throw new IllegalArgumentException();
    FirstCommand firstCommand = (FirstCommand) command;
    return ...;
}

// second service function
public List<Product> doFilter(Command command) {
    if (!(command instanceof SecondCommand))
        throw new IllegalArgumentException();
    SecondCommand secondCommand = (SecondCommand) command;
    return ...;
}
EDIT:
OK, I understand your question. I think you can create various session-scoped filters.
@Service
class SomeService {

    @Autowired(required = false)
    private List<Filter> filters;

    public List<Product> getProducts() {
        List<Product> products = //...code to get some products
        if (filters != null) {
            for (Filter filter : filters)
                products = filter.doFilter(products);
        }
        return products;
    }
}
And then create filters with settings fields:
public class PriceFilter implements Filter {
    private Integer minPrice;
    private Integer maxPrice;
    // getters and setters

    public List<Product> doFilter(List<Product> products) {
        // implementation here
    }
}

public class ContentFilter implements Filter {
    private String regexp;
    // getters and setters

    public List<Product> doFilter(List<Product> products) {
        // implementation here
    }
}
Then the user can configure these filters for the session and use the service function getProducts to get the result.
Having a list of filters autowired is not a very good approach to solving your problem.
Every filter depends on different types of parameters, which would need to be passed to the doFilter method. That makes the approach highly inflexible. Yes, you could use varargs, but it would just create a mess. That's why it's probably easier to implement a builder that builds a chain of filters to be applied to the collection of products. Adding new filters to the builder then becomes a trivial task. The Builder pattern is very useful when a lot of different parameters are at play.
Consider having this interface:
public interface CollectionFilter<T> {
    public Collection<T> doFilter(Collection<T> collection);
}
A filter chaining class which applies all filters to the collection:
public class CollectionFilterChain<T> {
    private final List<CollectionFilter<T>> filters;

    public CollectionFilterChain(List<CollectionFilter<T>> filters) {
        this.filters = filters;
    }

    public Collection<T> doFilter(Collection<T> collection) {
        for (CollectionFilter<T> filter : filters) {
            collection = filter.doFilter(collection);
        }
        return collection;
    }
}
The two CollectionFilter<T> implementations:
public class InStockFilter<T> implements CollectionFilter<T> {
    public Collection<T> doFilter(Collection<T> collection) {
        // filter
    }
}

public class MinPriceFilter<T> implements CollectionFilter<T> {
    private final float minPrice;

    public MinPriceFilter(float minPrice) {
        this.minPrice = minPrice;
    }

    public Collection<T> doFilter(Collection<T> collection) {
        // filter
    }
}
And a builder to let you build the filter chain in an easy way:
public class CollectionFilterChainBuilder<T> {
    List<CollectionFilter<T>> filters;

    public CollectionFilterChainBuilder() {
        filters = new ArrayList<CollectionFilter<T>>();
    }

    public CollectionFilterChainBuilder<T> inStock() {
        filters.add(new InStockFilter<T>());
        return this;
    }

    public CollectionFilterChainBuilder<T> minPrice(float price) {
        filters.add(new MinPriceFilter<T>(price));
        return this;
    }

    public CollectionFilterChain<T> build() {
        return new CollectionFilterChain<T>(filters);
    }
}
With the builder it's easy to create a filter chain as follows:
CollectionFilterChainBuilder<Product> builder =
    new CollectionFilterChainBuilder<>();
CollectionFilterChain<Product> filterChain =
    builder.inStock().minPrice(2.0f).build();
Collection<Product> filteredProducts =
    filterChain.doFilter(products);
In a more dynamic settings you could use the builder like:
CollectionFilterChainBuilder<Product> builder = new CollectionFilterChainBuilder<>();
if (filterInStock) {
    builder.inStock();
}
if (filterMinPrice) {
    builder.minPrice(minPrice);
}
// build some more