I'm working on a codebase that has an ever-increasing number of implementations of an interface, VendorService. Right now, wherever these services are used, we autowire them all in the constructor, leading to long lists of dependencies. Is there a preferred way to handle dependencies when a single interface is implemented many times?
Current approach:
private final VendorService xVendorService;
private final VendorService yVendorService;
private final VendorService zVendorService;
...
@Autowired
public VendorDelegateService(XVendorService xVendorService,
YVendorService yVendorService,
ZVendorService zVendorService,
...) {
this.xVendorService = xVendorService;
this.yVendorService = yVendorService;
this.zVendorService = zVendorService;
...
}
public void doSomething(VendorId vendorId) {
if (vendorId == VendorId.X) {
xVendorService.doSomething();
} else if (vendorId == VendorId.Y) {
yVendorService.doSomething();
} else if (vendorId == VendorId.Z) {
zVendorService.doSomething();
}
...
}
Clearly this is very verbose and requires updating whenever a new implementation of the interface is created.
An alternative is getting the Bean from the ApplicationContext, something like:
private final ApplicationContext context;
@Autowired
public VendorDelegateService(ApplicationContext context) {
this.context = context;
}
public void doSomething(VendorId vendorId) {
context.getBean(vendorId.name(), VendorService.class).doSomething();
}
This wouldn't require another if/else branch for every new implementation, but it's obtuse and doesn't feel correct. This logic could of course be abstracted away in its own class to lessen that problem.
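For illustration, such a wrapper might look roughly like this (just a sketch; it assumes every VendorService implementation is registered under a bean name equal to VendorId.name()):
// Hypothetical locator that hides the ApplicationContext lookup.
@Component
public class VendorServiceLocator {
    private final ApplicationContext context;
    @Autowired
    public VendorServiceLocator(ApplicationContext context) {
        this.context = context;
    }
    // Assumes bean names match the VendorId values, e.g. "X", "Y", "Z".
    public VendorService forVendor(VendorId vendorId) {
        return context.getBean(vendorId.name(), VendorService.class);
    }
}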
Which of these is more idiomatic in Spring and Java? Are there any other approaches I haven't considered?
Whether there is a single idiomatic way to do this is partly a matter of preference, but what I suggest is the following solution:
Create an interface for all the services, we can call this VendorService:
public interface VendorService {
void doSomething();
VendorId getVendorId();
}
Now we implement this interface in all the services; as an example, XVendorService could look like this:
@Service
public class XVendorService implements VendorService {
private VendorId vendorId = ....
@Override
public void doSomething() {
...
}
@Override
public VendorId getVendorId() {
return vendorId;
}
}
Now for the VendorDelegateService we can do something like this:
@Service
public class VendorDelegateService {
private Map<VendorId, VendorService> services = new HashMap<>();
@Autowired
public VendorDelegateService(Set<? extends VendorService> serviceSet) {
serviceSet.stream().forEach(service -> services.put(service.getVendorId(), service));
}
public void doSomething(VendorId vendorId) {
if (services.containsKey(vendorId)) {
services.get(vendorId).doSomething();
}
}
}
Please note that with Set<? extends VendorService> serviceSet, all the services are autowired automatically. By building a map afterwards, we can dispatch each request to the right service based on its VendorId.
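With this in place, adding support for a new vendor only means adding a new implementation; the delegate itself never changes. For example (a hypothetical WVendorService, purely to illustrate, assuming VendorId gains a matching constant):
@Service
public class WVendorService implements VendorService {
    @Override
    public void doSomething() {
        // W-specific logic goes here
    }
    @Override
    public VendorId getVendorId() {
        return VendorId.W; // hypothetical enum constant
    }
}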
Situation -
During the implementation of an activity tracking system, I resorted to caching some repeated DB calls using the @Cacheable annotation to improve performance.
The structure looks like this:
An Interface
ActivityCaseService<T>
An abstract class
AbstractActivityCaseImpl<T> implements ActivityCaseService<T>
Concrete classes
LoginActivityCaseImpl extends AbstractActivityCaseImpl<LoginActivityCase>
CallActivityCaseImpl extends AbstractActivityCaseImpl<CallActivityCase>
Based on the ActivityCaseType requested, it switches between the LoginActivity and CallActivity implementations and executes the methods on them. I have implemented ActivityCaseServiceFactory to get the instance of a class at runtime.
But during the implementation, @Cacheable added on the method gets ignored. My intuition is that the beans returned by ActivityCaseServiceFactory are not proxies with the caching advice woven in, which is why the annotation is not working, but I don't understand the exact issue or how to rectify it if that's the problem.
Reference:
spring.version 4.3.9.RELEASE
@Cacheable works perfectly for other services in the same codebase
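One quick way to test the proxy intuition above (a diagnostic sketch only, not a fix) is to check what the factory actually hands out, using Spring's AopUtils:
// Diagnostic sketch: is the bean returned by the factory a Spring AOP proxy at all?
ActivityCaseService<?> service = activityCaseServiceFactory.getService(ActivityCaseType.LOGIN);
System.out.println("AOP proxy: " + AopUtils.isAopProxy(service));        // org.springframework.aop.support.AopUtils
System.out.println("Target class: " + AopUtils.getTargetClass(service));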
ActivityCaseServiceFactory
@Service
public class ActivityCaseServiceFactory {
/*ActivityCaseType:Enum*/
private Map<ActivityCaseType, ActivityCaseService<?>> activityCaseTypeVsActivityCaseService = new HashMap<>();
public ActivityCaseServiceFactory(List<ActivityCaseService<?>> abstractActivityCaseServiceImpls) {
activityCaseTypeVsActivityCaseService =
abstractActivityCaseServiceImpls.stream()
.collect(Collectors.toMap(ActivityCaseService::getActivityCaseType, A -> A));
}
public ActivityCaseService<?> getService(ActivityCaseType activityCaseType) {
return activityCaseTypeVsActivityCaseService.get(activityCaseType);
}
}
ActivityCaseService
public interface ActivityCaseService<T> extends BeanInitializer {
ActivityCaseType getActivityCaseType();
/* Method executed based on ActivityCaseType */
List<? extends ActivityBaseCaseDto> getActivityByUserIds(
ActivityInRangeQuery activityInRangeQuery, List<Integer> userIds, Boolean fromCache);
/* Tried to declare here so that execution could be intercepted by AOP Proxy */
Optional<List<T>> findActivityInRangeQuery(
ActivityInRangeQuery activityInRangeQuery, List<Integer> agentIds, Boolean fromCache);
}
AbstractActivityCaseServiceImpl
public abstract class AbstractActivityCaseServiceImpl<T extends ActivityCase> implements ActivityCaseService<T> {
public abstract ActivityCaseType getActivityCaseType();
public abstract List<? extends ActivityBaseCaseDto> getActivityByUserIds(
ActivityInRangeQuery activityInRangeQuery,
List<Integer> userIds,
Boolean fromCache);
public abstract Optional<List<T>> findActivityInRangeQuery(
ActivityInRangeQuery activityInRangeQuery, List<Integer> agentIds, Boolean fromCache);
}
LoginActivityCaseImpl
@Service(value = "LoginActivityService")
public class LoginAbstractActivityCaseServiceImpl extends AbstractActivityCaseServiceImpl<LoginActivityCase> {
@Autowired
private ApplicationContext applicationContext;
@Override
public ActivityCaseType getActivityCaseType() {
return ActivityCaseType.LOGIN;
}
@Override
public List<LoginActivityBaseCaseDto> getActivityByUserIds(
ActivityInRangeQuery activityInRangeQuery, List<Integer> userIds, Boolean fromCache) {
String[] beanNamesForType = applicationContext.getBeanNamesForType(
ResolvableType.forClassWithGenerics(ActivityCaseService.class, LoginActivityCase.class));
Optional<List<LoginActivityCase>> activities = (Optional<List<LoginActivityCase>>) applicationContext
.getBean(beanNamesForType[0], ActivityCaseService.class)
.findActivityInRangeQuery(activityInRangeQuery, userIds, fromCache);
/*- More logic to transform List<LoginActivityCase> received above -*/
}
/* ----> @Cacheable not working here <---- */
@Override
@Cacheable(value = CacheNames.ACTIVITY_CACHE,
condition = "#fromCache",
unless = "#result == null")
public Optional<List<LoginActivityCase>> findActivityInRangeQuery(
ActivityInRangeQuery activityInRangeQuery, List<Integer> agentIds, Boolean fromCache) {
/* -- Time taking IO calls -- */
}
}
OtherService
@Autowired
private ActivityCaseServiceFactory activityCaseServiceFactory;
public List<ActivityBaseCaseDto> getActivityForUserAndTeam(/*Params*/) {
return activityCaseServiceFactory.getService(ActivityCaseType.LOGIN)
.getActivityByUserIds(ActivityInRangeQuery.builder().build(),new ArrayList<>(Arrays.asList(1,2)),true);
}
I have been scratching my head for two days searching for a solution, but all in vain :). So, before moving to a custom cache implementation, I thought I'd get help from this amazing StackOverflow community.
What am I doing wrong with @Cacheable? I'm also open to any criticism or suggestions about how the code is structured overall to implement activity tracking.
I have an old code base that I need to refactor using Java 8. It has an interface which tells whether my current site supports a given platform:
public interface PlatformSupportHandler {
public abstract boolean isPlatformSupported(String platform);
}
and I have multiple classes implementing it and each class supports a different platform.
A few of the implementing classes are:
#Component("bsafePlatformSupportHandler")
public class BsafePlatoformSupportHandler implements PlatformSupportHandler {
String[] supportedPlatform = {"iPad", "Android", "iPhone"};
Set<String> supportedPlatformSet = new HashSet<>(Arrays.asList(supportedPlatform));
#Override
public boolean isPaltformSupported(String platform) {
return supportedPlatformSet.contains(platform);
}
}
Another implementation:
#Component("discountPlatformSupportHandler")
public class DiscountPlatoformSupportHandler implements PlatformSupportHandler{
String[] supportedPlatform = {"Android", "iPhone"};
Set<String> supportedPlatformSet = new HashSet<>(Arrays.asList(supportedPlatform));
#Override
public boolean isPaltformSupported(String platform) {
return supportedPlatformSet.contains(platform);
}
}
At runtime in my filter, I get the required bean which I want:
platformSupportHandler = (PlatformSupportHandler) ApplicationContextUtil
.getBean(subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND);
and call isPlatformSupported to check whether my current site supports the given platform.
I am new to Java 8, so is there any way I can refactor this code without creating multiple classes? As the interface only contains one method, can I somehow use lambda to refactor it?
If you want to stick to the current design, you could do something like this:
public class MyGeneralPurposeSupportHandler implements PlatformSupportHandler {
private final Set<String> supportedPlatforms;
public MyGeneralPurposeSupportHandler(Set<String> supportedPlatforms) {
this.supportedPlatforms = supportedPlatforms;
}
public boolean isPlatformSupported(String platform) {
return supportedPlatforms.contains(platform);
}
}
// now in configuration:
@Configuration
class MySpringConfig {
@Bean
@Qualifier("discountPlatformSupportHandler")
public PlatformSupportHandler discountPlatformSupportHandler() {
return new MyGeneralPurposeSupportHandler(new HashSet<>(Arrays.asList("Android", "iPhone")));
}
@Bean
@Qualifier("bsafePlatformSupportHandler")
public PlatformSupportHandler bsafePlatformSupportHandler() {
return new MyGeneralPurposeSupportHandler(new HashSet<>(Arrays.asList("Android", "iPhone", "iPad")));
}
}
This method has the advantage of not creating a class per type (discount, bsafe, etc.), so it answers the question.
Going a step further: what happens if a type is requested for which no bean exists? Currently it will fail because the bean does not exist in the application context - not a really good approach.
So you could create a map from type to the set of supported platforms, maintain the map in configuration or similar, and let Spring do its magic.
You'll end up with something like this:
public class SupportHandler {
private final Map<String, Set<String>> platformTypeToSupportedPlatforms;
public SupportHandler(Map<String, Set<String>> map) {
this.platformTypeToSupportedPlatforms = map;
}
public boolean isPlatformSupported(String type, String platform) {
Set<String> supportedPlatforms = platformTypeToSupportedPlatforms.get(type);
if (supportedPlatforms == null) {
return false; // or maybe throw an exception; the point is that you don't deal with Spring here, which is good since Spring shouldn't interfere with your business code
}
return supportedPlatforms.contains(platform);
}
}
@Configuration
public class MyConfiguration {
// Configuration conf is supposed to be your own way to read configurations in the project - so you'll have to implement it somehow
@Bean
public SupportHandler supportHandler(Configuration conf) {
return new SupportHandler(conf.getRequiredMap());
}
}
Now if you follow this approach, adding a new supported type requires no code at all - you only add configuration. By far, it's the best method I can offer.
Both methods, however, lack the Java 8 features ;)
You can use the following in your config class where you can create beans:
@Configuration
public class AppConfiguration {
@Bean(name = "discountPlatformSupportHandler")
public PlatformSupportHandler discountPlatformSupportHandler() {
String[] supportedPlatforms = {"Android", "iPhone"};
return getPlatformSupportHandler(supportedPlatforms);
}
#Bean(name = "bsafePlatformSupportHandler")
public PlatformSupportHandler bsafePlatformSupportHandler() {
String[] supportedPlatforms = {"iPad", "Android", "iPhone"};
return getPlatformSupportHandler(supportedPlatforms);
}
private PlatformSupportHandler getPlatformSupportHandler(String[] supportedPlatforms) {
return platform -> Arrays.asList(supportedPlatforms).contains(platform);
}
}
This works because PlatformSupportHandler declares a single abstract method, so it is a functional interface and the lambda above is a valid implementation. Also, when you want to use the beans, it is again very easy:
@Component
class PlatformSupport {
// map of bean name vs bean, automatically created by Spring for you
private final Map<String, PlatformSupportHandler> platformSupportHandlers;
@Autowired // Constructor injection
public PlatformSupport(Map<String, PlatformSupportHandler> platformSupportHandlers) {
this.platformSupportHandlers = platformSupportHandlers;
}
public void method1(String subProductType) {
PlatformSupportHandler platformSupportHandler = platformSupportHandlers.get(subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND);
}
}
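For example, the filter check from the question could then be written as (a sketch, reusing the key convention from the question):
public boolean isSupported(String subProductType, String platform) {
    // key convention taken from the question: subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND
    PlatformSupportHandler handler =
            platformSupportHandlers.get(subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND);
    return handler != null && handler.isPlatformSupported(platform);
}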
As written in Mark Bramnik's answer, you can move this to configuration. Suppose it were in YAML, like this:
platforms:
  bsafePlatformSupportHandler: ["iPad", "Android", "iPhone"]
  discountPlatformSupportHandler: ["Android", "iPhone"]
Then you can create a config class to read this:
@Configuration
@EnableConfigurationProperties
@ConfigurationProperties
public class Config {
private Map<String, List<String>> platforms = new HashMap<>();
// getters and setters
}
You can then create a handler with the checking code, or place it in your filter like below:
@Autowired
private Config config;
...
public boolean isPlatformSupported(String subProductType, String platform) {
String key = subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND;
return config.getPlatforms()
.getOrDefault(key, Collections.emptyList())
.contains(platform);
}
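With the YAML above, a call like the following would return true, assuming Constants.PLATFORM_SUPPORT_HANDLER_APPEND resolves the key to "bsafePlatformSupportHandler":
boolean supported = isPlatformSupported("bsafe", "iPad"); // true: "iPad" is listed under bsafePlatformSupportHandler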
My Spring Boot application contains several @KafkaListeners, and each listener performs the same steps before and after actually processing the payload: validate the payload, check whether the event has been processed already, check whether it's a tombstone (null) message, decide whether processing should be retried in case of failure, emit metrics, etc.
These steps are currently implemented in a base class, but because the topics passed to @KafkaListener must be constant at runtime, the method annotated with @KafkaListener is defined in the subclass and does nothing but pass its parameters to a method in the base class.
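A minimal sketch of that arrangement (class names and topic are illustrative only):
// Base class holds the shared pre/post-processing steps.
public abstract class AbstractEventListener {
    protected void process(ConsumerRecord<String, String> record) {
        // validate payload, check for duplicates and tombstones, emit metrics, ...
        handle(record);
    }
    protected abstract void handle(ConsumerRecord<String, String> record);
}
// Subclass exists only to carry the @KafkaListener annotation with its constant topic.
@Component
public class OrderEventListener extends AbstractEventListener {
    @KafkaListener(topics = "orders") // hypothetical topic
    public void listen(ConsumerRecord<String, String> record) {
        process(record); // just delegates to the base class
    }
    @Override
    protected void handle(ConsumerRecord<String, String> record) {
        // actual payload processing
    }
}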
This works just fine, but I wonder if there's a more elegant solution. I assume my base class would have to create a listener container programmatically, but after a quick look at KafkaListenerAnnotationBeanPostProcessor, it seems to be quite involved.
Does anyone have any recommendations?
Having stumbled upon this question while looking to implement something similar, I first started with Artem Bilan's answer. However, this did not work, because annotations are not inherited in child classes by default unless they are themselves annotated with @Inherited. Despite this, there may yet be a way to make an annotation approach work, and I will update this answer if and when I get it working. Thankfully, I have achieved the desired behaviour using programmatic registration of the Kafka listeners.
My code is something like the following:
Interface:
public interface GenericKafkaListener {
String METHOD = "handleMessage";
void handleMessage(ConsumerRecord<String, String> record);
}
Abstract Class:
public abstract class AbstractGenericKafkaListener implements GenericKafkaListener {
private final String kafkaTopic;
public AbstractGenericKafkaListener(final String kafkaTopic) {
this.kafkaTopic = kafkaTopic;
}
@Override
public void handleMessage(final ConsumerRecord<String, String> record) {
//do common logic here
specificLogic(record);
}
protected abstract void specificLogic(ConsumerRecord<String, String> record);
public String getKafkaTopic() {
return kafkaTopic;
}
}
We can then programmatically register all beans of type AbstractGenericKafkaListener in a KafkaListenerConfigurer:
@Configuration
public class KafkaListenerConfiguration implements KafkaListenerConfigurer {
private static final Logger log = LoggerFactory.getLogger(KafkaListenerConfiguration.class);
@Autowired
private List<AbstractGenericKafkaListener> listeners;
@Autowired
private BeanFactory beanFactory;
@Autowired
private MessageHandlerMethodFactory messageHandlerMethodFactory;
@Autowired
private KafkaListenerContainerFactory kafkaListenerContainerFactory;
@Value("${your.kafka.consumer.group-id}")
private String consumerGroup;
@Value("${your.application.name}")
private String service;
@Override
public void configureKafkaListeners(
final KafkaListenerEndpointRegistrar registrar) {
final Method listenerMethod = lookUpMethod();
listeners.forEach(listener -> {
registerListenerEndpoint(listener, listenerMethod, registrar);
});
}
private void registerListenerEndpoint(final AbstractGenericKafkaListener listener,
final Method listenerMethod,
final KafkaListenerEndpointRegistrar registrar) {
log.info("Registering {} endpoint on topic {}", listener.getClass(),
listener.getKafkaTopic());
final MethodKafkaListenerEndpoint<String, String> endpoint =
createListenerEndpoint(listener, listenerMethod);
registrar.registerEndpoint(endpoint);
}
private MethodKafkaListenerEndpoint<String, String> createListenerEndpoint(
final AbstractGenericKafkaListener listener, final Method listenerMethod) {
final MethodKafkaListenerEndpoint<String, String> endpoint = new MethodKafkaListenerEndpoint<>();
endpoint.setBeanFactory(beanFactory);
endpoint.setBean(listener);
endpoint.setMethod(listenerMethod);
endpoint.setId(service + "-" + listener.getKafkaTopic());
endpoint.setGroup(consumerGroup);
endpoint.setTopics(listener.getKafkaTopic());
endpoint.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
return endpoint;
}
private Method lookUpMethod() {
return Arrays.stream(GenericKafkaListener.class.getMethods())
.filter(m -> m.getName().equals(GenericKafkaListener.METHOD))
.findAny()
.orElseThrow(() ->
new IllegalStateException("Could not find method " + GenericKafkaListener.METHOD));
}
}
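For completeness, a concrete listener then only needs to supply its topic and its topic-specific logic (a sketch with an assumed topic name):
@Component
public class OrderEventsKafkaListener extends AbstractGenericKafkaListener {
    public OrderEventsKafkaListener() {
        super("order-events"); // assumed topic name, purely illustrative
    }
    @Override
    protected void specificLogic(final ConsumerRecord<String, String> record) {
        // topic-specific processing here
    }
}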
How about this:
public abstract class BaseKafkaProcessingLogic {
@KafkaHandler
public void handle(Object payload) {
}
}
#KafkaListener(topics = "topic1")
public class Topic1Handler extends BaseKafkaProcessingLogic {
}
#KafkaListener(topics = "topic2")
public class Topic2Handler extends BaseKafkaProcessingLogic {
}
?
I needed the same functionality and came up with a solution close to Artem Bilan's answer. Yes, the @KafkaHandler annotation is not inherited by child classes, but when it is defined on an interface's default method it is picked up. Here is the solution:
interface AbstractKafkaListener<T> {
default Class<T> getCommandType() {
TypeToken<T> type = new TypeToken<>(getClass()) {};
return (Class<T>) type.getRawType();
}
@KafkaHandler
default void handle(String message) throws JsonProcessingException {
ObjectMapper objectMapper = new ObjectMapper();
T value = objectMapper.readValue(message, getCommandType());
handle(value);
}
void handle(T message);
}
The class should implement the handle method only:
@Component
@KafkaListener(topics = "my_topic")
public class KafkaListenerForMyCustomMessage implements AbstractKafkaListener<MyCustomMessage> {
@Override
public void handle(MyCustomMessage message) {
System.out.println(message);
}
}
The two methods implemented in the interface should ideally be private or protected, but because they are declared in an interface this cannot be done: default methods are always public, and in fact all methods defined in an interface are public.
I use this solution to dynamically parse the message from Kafka (received as a String) into the custom class.
The getCommandType method returns the class of the generic parameter T. TypeToken comes from the Google Guava library.
Is there any way to get the number, and some identifying information, of the already-created instances of a particular prototype bean in a Spring application?
Addition: in our project we have more than 400 prototype beans, and I would like to trace which beans were created during execution and how many instances of each type exist.
I have found a way to see the actual picture of created prototype beans.
I use the free VisualVM memory profiler.
In the Sampler tab you can see all instances of created classes including singleton and prototype beans.
You'll see the names of your own packages and classes. In this case:
prototype is a package with my prototype-beans.
singleton is a package with my singleton-beans.
newclasses is a package with classes that I created by new operator.
Also, after the garbage collector cleans up the memory, you will see the result reflected here.
You can do it by publishing and listening to application events:
Create your own event.
When a prototype bean is created, send an event from it.
Create a counting ApplicationListener and listen to the incoming creation events.
Here is an example:
Spring – Publish and Listen Application Events
Spring does not manage the complete lifecycle of a prototype bean: the container instantiates, configures, decorates and otherwise assembles a prototype object, hands it to the client and then has no further knowledge of that prototype instance.
A simple variant:
public class PrototypeCreationEvent extends ApplicationEvent {
private String beanName;
public PrototypeCreationEvent(Object source , String beanName) {
super(source);
this.beanName = beanName;
}
public String getBeanName(){
return beanName;
}
}
public class PrototypeCreationListener implements ApplicationListener<PrototypeCreationEvent> {
private ConcurrentMap<String,AtomicInteger> prototypeCreationStatistic = new ConcurrentHashMap<>();
//or from guava AtomicLongMap prototypeCreationStatistic = AtomicLongMap.create();
@Override
public void onApplicationEvent(PrototypeCreationEvent event) {
prototypeCreationStatistic.computeIfAbsent(event.getBeanName() , k->new AtomicInteger(0)).incrementAndGet();
System.out.println(event);
}
public ConcurrentMap<String,AtomicInteger> getPrototypeCreationStatistic(){
return prototypeCreationStatistic;
}
}
public abstract class PrototypeCreationPublisher implements BeanNameAware , ApplicationEventPublisherAware ,InitializingBean {
private String beanName;
private ApplicationEventPublisher applicationEventPublisher;
@Override
public void setBeanName(String name) {
this.beanName = name;
}
@Override
public void afterPropertiesSet() throws Exception {
System.out.println();
}
@Override
public void setApplicationEventPublisher(ApplicationEventPublisher applicationEventPublisher) {
this.applicationEventPublisher = applicationEventPublisher;
}
@PostConstruct //or use interface InitializingBean
public void sendEventAfterCreation() throws Exception {
applicationEventPublisher.publishEvent(new PrototypeCreationEvent(this , beanName));
}
}
@Component
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class PrototypeA extends PrototypeCreationPublisher {
}
@Component
@Scope(BeanDefinition.SCOPE_PROTOTYPE)
public class PrototypeB extends PrototypeCreationPublisher {
}
Example:
PrototypeA prototypeA1 = context.getBean(PrototypeA.class);
PrototypeA prototypeA2 = context.getBean(PrototypeA.class);
PrototypeA prototypeA3 = context.getBean(PrototypeA.class);
PrototypeB prototypeB1 = context.getBean(PrototypeB.class);
PrototypeCreationListener statistic = context.getBean(PrototypeCreationListener.class);
statistic.getPrototypeCreationStatistic().entrySet().forEach(s->{
System.out.println(s.getKey() + " count = "+s.getValue());
});
Result:
PrototypeB count = 1
PrototypeA count = 3
My module:
@Module
public class TcpManagerModule {
private ITcpConnection eventsTcpConnection;
private ITcpConnection commandsTcpConnection;
public TcpManagerModule(Context context) {
eventsTcpConnection = new EventsTcpConnection(context);
commandsTcpConnection = new CommandsTcpConnection(context);
}
@Provides
@Named("events")
public ITcpConnection provideEventsTcpConnection() {
return eventsTcpConnection;
}
@Provides
@Named("commands")
public ITcpConnection provideCommandsTcpConnection() {
return commandsTcpConnection;
}
}
Component:
@Component(modules = TcpManagerModule.class)
public interface TcpManagerComponent {
void inject(ITcpManager tcpManager);
}
The class where injection happens:
public class DefaultTcpManager implements ITcpManager {
private TcpManagerComponent tcpComponent;
#Inject #Named("events") ITcpConnection eventsTcpConnection;
#Inject #Named("commands") ITcpConnection commandsTcpConnection;
public DefaultTcpManager(Context context){
tcpComponent = DaggerTcpManagerComponent.builder().tcpManagerModule(new TcpManagerModule(context)).build();
tcpComponent.inject(this);
}
@Override
public void startEventsConnection() {
eventsTcpConnection.startListener();
eventsTcpConnection.connect();
}
}
When I call startEventsConnection, I get a NullPointerException - meaning the injection didn't populate the fields.
I followed the example exactly as it appears in the docs; what is the issue?
Note: on the builder line
tcpComponent = DaggerTcpManagerComponent.builder().tcpManagerModule(new TcpManagerModule(context)).build();
I have a warning saying "tcpManagerModule is deprecated". I read the answer here about this issue, and it says:
It is safe to say that you can just ignore the deprecation. It is intended to notify you of unused methods and modules. As soon as you actually require / use Application somewhere in your subgraph the module is going to be needed, and the deprecation warning will go away.
So, am I not requiring/using the instances? What is the issue here?
You could try changing your component so that it declares the specific class for injection:
@Component(modules = TcpManagerModule.class)
public interface TcpManagerComponent {
void inject(DefaultTcpManager tcpManager);
}
That way Dagger knows exactly about DefaultTcpManager.class. Dagger generates members-injection code for the declared type of the inject method's parameter, so a method taking the ITcpManager interface cannot see the @Inject fields declared on DefaultTcpManager, and they are left null.