I have this simple state machine configuration:
@Configuration
@EnableStateMachine
public class SimpleStateMachineConfiguration extends StateMachineConfigurerAdapter<State, Boolean> {

    @Override
    public void configure(StateMachineStateConfigurer<State, Boolean> states) throws Exception {
        states.withStates()
                .initial(State.INITIAL)
                .states(EnumSet.allOf(State.class));
    }

    @Override
    public void configure(StateMachineTransitionConfigurer<State, Boolean> transitions) throws Exception {
        transitions
                .withExternal()
                    .source(State.INITIAL)
                    .target(State.HAS_CUSTOMER_NUMBER)
                    .event(true)
                    .action(retrieveCustomerAction())
                    // here I'd like to retrieve the customer from this action, like:
                    // stateMachine.start();
                    // stateMachine.sendEvent(true);
                    // stateMachine.retrieveCustomerFromAction();
                .and()
                .withExternal()
                    .source(State.INITIAL)
                    .target(State.NO_CUSTOMER_NUMBER)
                    .event(false)
                    .action(createCustomerAction());
                    // here I'd like to send the customer instance to create, like:
                    // stateMachine.start();
                    // stateMachine.sendEvent(false);
                    // stateMachine.sendCustomerToAction(Customer customer);
    }

    @Bean
    public Action<State, Boolean> retrieveCustomerAction() {
        return ctx -> System.out.println(ctx.getTarget().getId());
    }

    @Bean
    public Action<State, Boolean> createCustomerAction() {
        return ctx -> System.out.println(ctx.getTarget().getId());
    }
}
Is it possible to improve the action definitions so that I can interact with them using dynamic parameters?
How could I add consumer or provider behaviors to those actions?
Is it possible to improve the action definitions so that I can interact with them using dynamic parameters?
Yes, it's possible. You can store variables in the state machine's extended state and retrieve them wherever you want.
public class Test {

    @Autowired
    StateMachine<State, Boolean> stateMachine;

    public void testMethod() {
        stateMachine.getExtendedState().getVariables().put(key, value);
        stateMachine.start();
        stateMachine.sendEvent(true);
    }
}
You can then retrieve this value from the context using the key. Supposing the value is of type String, it can be retrieved like this:
@Bean
public Action<State, Boolean> retrieveCustomerAction() {
    return ctx -> {
        String value = ctx.getExtendedState().get(key, String.class);
        // Do something
    };
}
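To get data out of an action (the retrieveCustomerFromAction idea from the question), the action can write its result back into the extended state, and the caller can read it after the event has been processed. A minimal sketch, assuming a hypothetical Customer class, a hypothetical customerRepository dependency, and illustrative keys and values:

@Bean
public Action<State, Boolean> retrieveCustomerAction() {
    return ctx -> {
        // read the input parameter that the caller placed into the extended state
        String customerNumber = ctx.getExtendedState().get("customerNumber", String.class);
        // customerRepository is a hypothetical, injected lookup dependency
        Customer customer = customerRepository.findByNumber(customerNumber);
        // publish the result so the caller can pick it up after the transition
        ctx.getExtendedState().getVariables().put("customer", customer);
    };
}

// caller side
stateMachine.getExtendedState().getVariables().put("customerNumber", "4711");
stateMachine.start();
stateMachine.sendEvent(true);
Customer customer = stateMachine.getExtendedState().get("customer", Customer.class);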
For more details, refer to the Spring State Machine reference documentation.
How could I add consumer or provider behaviors to those actions?
Can you elaborate more on this question?
Related
I'm building a package that is trying to intercept a function's return value based on a flag. My design involves some AOP. The idea is that a class FirstIntercept intercepts a call firstCall and stores parameters in a Parameters object. Then later, a second class SecondIntercept intercepts another call secondCall and does some logic based on what is populated in Parameters:
// pseudoish code
public class FirstIntercept {

    private Parameters param;

    @AfterReturning(pointcut = "execution(* ...firstCall(..))", returning = "payload")
    public void loadParam(JoinPoint joinPoint, Object payload) {
        // logic handling payload returned from firstCall()
        // logic provides a Boolean flag
        this.param = new Parameters(flag);
    }
}

public class Parameters {

    @Getter
    private Boolean flag;

    public Parameters(Boolean flag) {
        this.flag = flag;
    }
}

public class SecondIntercept {

    private static Parameters params;

    @Around("execution(* ...secondCall(..))")
    public void handleSecondCallIntercept(ProceedingJoinPoint joinPoint) {
        // want to do logic here based on what params contains
    }
}
What I want to achieve is that the Parameters object is loaded once and for all when FirstIntercept.loadParam is invoked through AOP. I'm not too sure how to go about persisting this. I looked online and Google Guice seems promising. I believe a first step would be to use dependency injection on the Parameters, but I'm really not sure. Can someone help point me in the right direction?
edit:
So I tried this setup:
public class FirstIntercept implements MethodInterceptor {

    public Object invoke(MethodInvocation invocation) throws Throwable {
        System.out.println("invoked!");
        return invocation.proceed();
    }

    @AfterReturning(pointcut = "execution(* ...firstCall(..))", returning = "payload")
    public void loadParam(JoinPoint joinPoint, Object payload) {
        // do stuff
    }

    public String firstCall() {
        return "hello";
    }
}
public class InterceptionModule extends AbstractModule {

    protected void configure() {
        FirstIntercept first = new FirstIntercept();
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(AfterReturning.class), first);
    }
}
public class FirstInterceptTest {

    @Test
    public void dummy() {
        Injector injector = Guice.createInjector(new InterceptionModule());
        FirstIntercept intercept = injector.getInstance(FirstIntercept.class);
        intercept.firstCall();
    }
}
When I call .firstCall(), I can see the @AfterReturning advice running, but invoke is not being called.
If you expand upon the Guice AOP documentation (https://github.com/google/guice/wiki/AOP), you should end up with something close to:
public class FirstInterceptor implements MethodInterceptor {

    @Inject Parameters parameters; // injected with the singleton Parameters

    public Object invoke(MethodInvocation invocation) throws Throwable {
        Object result = invocation.proceed();
        // your logic based on result to set parameters.setFlag()
        return result;
    }
}
Then the second:
public class SecondInterceptor implements MethodInterceptor {

    @Inject Parameters parameters; // injected with the singleton Parameters

    public Object invoke(MethodInvocation invocation) throws Throwable {
        boolean flag = parameters.getFlag();
        // your logic here
        return invocation.proceed(); // maybe, maybe not?
    }
}
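Both interceptors above assume Parameters exposes a setter as well as the getter; a minimal sketch of such a holder (hypothetical, not part of the original question):

public class Parameters {

    private volatile Boolean flag; // volatile as a simple nod to cross-thread visibility

    public Boolean getFlag() {
        return flag;
    }

    public void setFlag(Boolean flag) {
        this.flag = flag;
    }
}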
Your Parameters object is the key; you'll need to ensure it's thread safe, but that's another topic. To inject these you need:
public class InterceptionModule extends AbstractModule {

    protected void configure() {
        // Ensure there is only ever one Parameters injected
        bind(Parameters.class).in(Scopes.SINGLETON);

        // Now inject and bind the first interceptor
        FirstInterceptor firstInterceptor = new FirstInterceptor();
        requestInjection(firstInterceptor);
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(AfterReturning.class),
                firstInterceptor);

        // Now inject and bind the second interceptor
        SecondInterceptor secondInterceptor = new SecondInterceptor();
        requestInjection(secondInterceptor);
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(AfterReturning.class),
                secondInterceptor);
    }
}
Edit
Look at what you're doing.
You're telling Guice to wrap any method annotated with @AfterReturning with the FirstInterceptor.
Then you're calling intercept.firstCall().
firstCall() does not have the @AfterReturning annotation, so why would it be matched against that configuration?
I'm guessing if you called:
intercept.loadParam();
you would see the invoke method fire. Also, this is fine for a test, but in real life you would have a service-level class carry the @AfterReturning method, which is then injected into another API/job/etc. that calls loadParam.
edit
Oh no. Take a look at this line
bindInterceptor(Matchers.any(), // a class with this matcher
        Matchers.annotatedWith(AfterReturning.class), // a method with this
        firstInterceptor);
This means the interceptor only fires on loadParam. You need to annotate the method of the class you wish to intercept with @AfterReturning, and you want loadParam to be the invoke method.
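In other words, the interceptor fires on methods carrying the matched annotation, on instances that Guice itself constructs. A hedged sketch of one way to make invoke fire (the @Intercepted annotation and BusinessService are hypothetical names, not from the question):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;
import com.google.inject.matcher.Matchers;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface Intercepted {
}

public class BusinessService {

    // Guice AOP can only intercept non-private, non-final methods on injector-created instances
    @Intercepted
    public String firstCall() {
        return "hello";
    }
}

public class InterceptionModule extends AbstractModule {

    @Override
    protected void configure() {
        bindInterceptor(Matchers.any(), Matchers.annotatedWith(Intercepted.class), new FirstIntercept());
    }
}

// usage: obtain the service from the injector so Guice can proxy it
Injector injector = Guice.createInjector(new InterceptionModule());
BusinessService service = injector.getInstance(BusinessService.class);
service.firstCall(); // FirstIntercept.invoke now runs around this call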
I am working on a small project that includes signing up an account, logging a user in, and requesting data via certain endpoints. The authentication piece is done via Spring Security session IDs. The endpoints consist of publicly available endpoints (i.e. signing up, or forgot password) and some endpoints that require the user to be signed in (i.e. change password, get some content, etc.). Kind of like this:
@RestController
public class FightController {

    // publicly available
    @GetMapping("/public/foo")
    String methodForEveryone() {
        return "Hi common dude";
    }

    @GetMapping("secret/bar")
    String methodForSpecialPeople() {
        return "What happens in fight controller...";
    }
}
Spring Security then has a list of public endpoints in a static whitelist:
private static final String[] AUTH_WHITELIST = {
        // public endpoints
        "/public/foo", "swagger", "etc"
};
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .addFilterAfter(customAuthFilter(), RequestHeaderAuthenticationFilter.class)
        .authorizeRequests()
        .antMatchers(AUTH_WHITELIST).permitAll()
        .antMatchers("/**").authenticated()
        .and()
Tests are currently being done by hitting every endpoint and determining whether it is behaving as expected (via custom antmatchers in the WebSecurityConfigurer). Like so:
package com.fight.testpackages;

public class EndpointList {

    public static class PublicEndpoints implements ArgumentsProvider {
        @Override
        public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
            return Stream.of(
                    Arguments.of("/public/foo")
            );
        }
    }

    public static class PrivateEndpoints implements ArgumentsProvider {
        @Override
        public Stream<? extends Arguments> provideArguments(ExtensionContext context) {
            return Stream.of(
                    Arguments.of("secret/bar")
            );
        }
    }
}
and then managed with tests like
@ParameterizedTest
@ArgumentsSource(PrivateContentEndpoints.class)
public void privateEndpoint_unauthorizedUser_isUnauthorizedResponse(String url) throws Exception {
    assertFalse(super.isAuthenticated(url));
}

@WithMockUser(roles = "USER")
@ParameterizedTest
@ArgumentsSource(PublicAccountManagementEndpoints.class)
public void publicEndpoint_authorizedUser_hasAccess(String url) throws Exception {
    assertTrue(super.isAuthenticated(url));
}
The issue I am trying to solve can best be described with the following scenario:
A developer adds a new endpoint;
They add the endpoint to list of antmatchers (if it should be public);
And then they add the endpoint to a list of public and private endpoints that gets pulled into the tests.
The problem here is that there is no enforcement of this behaviour, and it's super easy to forget to add an endpoint to a test, or if the names of the endpoints change then the tests need to be updated.
The current setup works, but I was wondering if there is a standard for this? I've looked at @PreAuthorize, @RolesAllowed, etc., but they only seem to be useful for securing a method, not for making it public. I actually want the reverse (i.e. the endpoint to be private by default, and then marked as publicly available intentionally).
A solution that I've come up with is as follows:
Create an annotation
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
public @interface EndpointSecurity {
    boolean isPublic() default false;
}
Assign the annotation to a method if you want to make it public:
@EndpointSecurity(isPublic = true)
@GetMapping("/public/foo")
String methodForEveryone() {
    return "Hi common dude";
}
Build a scanner that checks all RestController methods for the EndpointSecurity annotation and the REST mapping annotation, kind of like below. Hopefully it's enough to get the point:
#DependsOn("classScanner")
public class ClassMethodScanner {
private final List<Class<? extends Annotation>> annotationFilters;
private List<Method> annotatedMethods;
private final AnnotationScanner<?> classScanner;
public <T extends Annotation> ClassMethodScanner(ClassScanner<T> classScanner) {
this(classScanner, Collections.emptyList());
}
public <T extends Annotation> ClassMethodScanner(ClassScanner<T> classScanner,
List<Class<? extends Annotation>> annotations) {
this.classScanner = classScanner;
this.annotationFilters = annotations;
}
#PostConstruct
void extractAnnotatedMethods() throws ClassNotFoundException {
if (annotatedMethods == null) {
annotatedMethods =
classScanner.getAnnotatedHandlers().stream()
.sequential()
.map(Class::getDeclaredMethods)
.flatMap(Arrays::stream)
.filter(this::hasExpectedAnnotations)
.collect(Collectors.toUnmodifiableList());
}
}
private boolean hasExpectedAnnotations(Method method) {
return
(annotationFilters.isEmpty() && method.getAnnotations().length > 0)
|| annotationFilters.stream().anyMatch(method::isAnnotationPresent);
}
//Is there a good way of making this protected?
public List<Method> getAnnotatedMethods() {
return annotatedMethods;
}
}
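To turn the collected methods into URL paths for the whitelist, one rough option (a sketch using Spring's merged-annotation utilities; class-level @RequestMapping prefixes are ignored here for brevity, and EndpointPathResolver is a hypothetical name) is to read the mapping annotation off each method:

import java.lang.reflect.Method;

import org.springframework.core.annotation.AnnotatedElementUtils;
import org.springframework.web.bind.annotation.RequestMapping;

public class EndpointPathResolver {

    // returns the paths declared via @GetMapping/@PostMapping/@RequestMapping on the method, if any
    public static String[] pathsOf(Method method) {
        RequestMapping mapping = AnnotatedElementUtils.findMergedAnnotation(method, RequestMapping.class);
        return mapping != null ? mapping.path() : new String[0];
    }
}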
And finally produce a list of public and private endpoints that feeds into the HttpSecurity.
public class SecurityEndpoints {

    private List<String> publicEndpoints;
    private List<String> privateEndpoints;
    private final EndpointDetailsCollector<?> collector;

    public String[] getWhiteList() {
        // ... builds the array of public endpoint paths from the collector (omitted here)
    }
}
This then feeds into the EndpointList that I mentioned above.
This seems somewhat convoluted though, so I was wondering: what is a standard approach, or am I making too much of testing the endpoints?
I have an old code base that I need to refactor using Java 8. I have an interface which tells whether my current site supports a given platform:
public interface PlatformSupportHandler {
    boolean isPlatformSupported(String platform);
}
I have multiple classes implementing it, and each class supports a different set of platforms.
A few of the implementing classes are:
#Component("bsafePlatformSupportHandler")
public class BsafePlatoformSupportHandler implements PlatformSupportHandler {
String[] supportedPlatform = {"iPad", "Android", "iPhone"};
Set<String> supportedPlatformSet = new HashSet<>(Arrays.asList(supportedPlatform));
#Override
public boolean isPaltformSupported(String platform) {
return supportedPlatformSet.contains(platform);
}
}
Another implementation:
#Component("discountPlatformSupportHandler")
public class DiscountPlatoformSupportHandler implements PlatformSupportHandler{
String[] supportedPlatform = {"Android", "iPhone"};
Set<String> supportedPlatformSet = new HashSet<>(Arrays.asList(supportedPlatform));
#Override
public boolean isPaltformSupported(String platform) {
return supportedPlatformSet.contains(platform);
}
}
At runtime in my filter, I get the required bean which I want:
platformSupportHandler = (PlatformSupportHandler) ApplicationContextUtil
.getBean(subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND);
and call isPlatformSupported to check whether my current site supports the given platform or not.
I am new to Java 8, so is there any way I can refactor this code without creating multiple classes? As the interface only contains one method, can I somehow use a lambda to refactor it?
If you want to stick to the current design, you could do something like this:
public class MyGeneralPurposeSupportHandler implements PlatformSupportHandler {

    private final Set<String> supportedPlatforms;

    public MyGeneralPurposeSupportHandler(Set<String> supportedPlatforms) {
        this.supportedPlatforms = supportedPlatforms;
    }

    public boolean isPlatformSupported(String platform) {
        return supportedPlatforms.contains(platform);
    }
}
// now in configuration:
@Configuration
class MySpringConfig {

    @Bean
    @Qualifier("discountPlatformSupportHandler")
    public PlatformSupportHandler discountPlatformSupportHandler() {
        return new MyGeneralPurposeSupportHandler(new HashSet<>(Arrays.asList("Android", "iPhone")));
    }

    @Bean
    @Qualifier("bsafePlatformSupportHandler")
    public PlatformSupportHandler bsafePlatformSupportHandler() {
        return new MyGeneralPurposeSupportHandler(new HashSet<>(Arrays.asList("Android", "iPhone", "iPad")));
    }
}
This approach has the advantage of not creating a class per type (discount, bsafe, etc.), so it answers the question.
Going a step further: what happens if the requested type does not exist? Currently it will fail because the bean does not exist in the application context, which is not a really good approach.
So you could create a map from type to the set of supported platforms, maintain the map in the configuration, and let Spring do its magic.
You'll end up with something like this:
public class SupportHandler {

    private final Map<String, Set<String>> platformTypeToSupportedPlatforms;

    public SupportHandler(Map<String, Set<String>> map) {
        this.platformTypeToSupportedPlatforms = map;
    }

    public boolean isPlatformSupported(String type, String platform) {
        Set<String> supportedPlatforms = platformTypeToSupportedPlatforms.get(type);
        if (supportedPlatforms == null) {
            return false; // or maybe throw an exception; the point is that you don't deal with Spring here, which is good since Spring shouldn't interfere with your business code
        }
        return supportedPlatforms.contains(platform);
    }
}
@Configuration
public class MyConfiguration {

    // Configuration conf is supposed to be your own way of reading configuration in the project - you'll have to implement it somehow
    @Bean
    public SupportHandler supportHandler(Configuration conf) {
        return new SupportHandler(conf.getRequiredMap());
    }
}
Now if you follow this approach, adding new supported types requires no code at all; you only add configuration. By far it's the best method I can offer.
Both methods, however, lack the Java 8 features ;)
You can use the following in your config class where you can create beans:
@Configuration
public class AppConfiguration {

    @Bean(name = "discountPlatformSupportHandler")
    public PlatformSupportHandler discountPlatformSupportHandler() {
        String[] supportedPlatforms = {"Android", "iPhone"};
        return getPlatformSupportHandler(supportedPlatforms);
    }

    @Bean(name = "bsafePlatformSupportHandler")
    public PlatformSupportHandler bsafePlatformSupportHandler() {
        String[] supportedPlatforms = {"iPad", "Android", "iPhone"};
        return getPlatformSupportHandler(supportedPlatforms);
    }

    private PlatformSupportHandler getPlatformSupportHandler(String[] supportedPlatforms) {
        return platform -> Arrays.asList(supportedPlatforms).contains(platform);
    }
}
Also, when you want to use the bean, it is again very easy:
@Component
class PlatformSupport {

    // map of bean name vs bean, automatically created by Spring for you
    private final Map<String, PlatformSupportHandler> platformSupportHandlers;

    @Autowired // constructor injection
    public PlatformSupport(Map<String, PlatformSupportHandler> platformSupportHandlers) {
        this.platformSupportHandlers = platformSupportHandlers;
    }

    public void method1(String subProductType) {
        PlatformSupportHandler platformSupportHandler = platformSupportHandlers.get(subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND);
    }
}
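Since the map lookup returns null for an unknown subProductType (the concern raised in the earlier answer), one small hedged refinement is to fall back to a handler that supports nothing:

PlatformSupportHandler platformSupportHandler =
        platformSupportHandlers.getOrDefault(
                subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND,
                platform -> false); // fallback: unknown types support no platforms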
As written in Mark Bramnik's answer, you can move this to configuration.
Suppose it were in YAML like this:
platforms:
  bsafePlatformSupportHandler: ["iPad", "Android", "iPhone"]
  discountPlatformSupportHandler: ["Android", "iPhone"]
Then you can create a config class to read this:
@Configuration
@EnableConfigurationProperties
@ConfigurationProperties
public class Config {

    private Map<String, List<String>> platforms = new HashMap<>();

    // getters and setters
}
You can then create a handler with the checking code.
Or place it in your filter like below:
@Autowired
private Config config;

...

public boolean isPlatformSupported(String subProductType, String platform) {
    String key = subProductType + Constants.PLATFORM_SUPPORT_HANDLER_APPEND;
    return config.getPlatforms()
            .getOrDefault(key, Collections.emptyList())
            .contains(platform);
}
We need to call a bean using Spring Remoting and also set a dynamic header in the call. We can set a custom HttpInvokerRequestExecutor in the HttpInvokerProxyFactoryBean and add headers, but how do we set dynamic headers generated on the fly for the request?
In the config class, declaring the HttpInvokerProxyFactoryBean:
@Bean
@Qualifier("service")
public HttpInvokerProxyFactoryBean invoker() {
    HttpInvokerProxyFactoryBean invoker = new HttpInvokerProxyFactoryBean();
    invoker.setServiceUrl(url);
    invoker.setServiceInterface(Service.class);
    return invoker;
}
In the invoker class
@Autowired
Service service;

public void invoke(Bean bean) {
    service.process(bean);
}
It's been a long time since I used Spring Remoting, but as far as I remember I found a solution to this by subclassing SimpleHttpInvokerRequestExecutor, which is the default when you do not set any custom request executor on HttpInvokerProxyFactoryBean.
IMHO you can write a custom request executor on which you can set custom header values, plus a simple helper component that sets the dynamically provided values on the executor before the next request.
CustomHttpInvokerRequestExecutor:
public class CustomHttpInvokerRequestExecutor extends SimpleHttpInvokerRequestExecutor {

    private Map<String, String> headers;

    public void setHeaders(Map<String, String> headers) {
        this.headers = headers;
    }

    @Override
    protected void prepareConnection(HttpURLConnection connection, int contentLength) throws IOException {
        super.prepareConnection(connection, contentLength);
        if (headers != null) {
            // adding our custom headers
            for (String headerName : headers.keySet()) {
                connection.setRequestProperty(headerName, headers.get(headerName));
            }
            // do not want to persist headers for another request!
            headers.clear();
        }
    }
}
CustomRemoteExecutor:
@Component
public class CustomRemoteExecutor {

    @Autowired
    private HttpInvokerProxyFactoryBean factoryBean;

    /*
     * Maybe you need a synchronized modifier here if there is a possibility
     * of multiple threads getting access here at the same time
     */
    public void executeInTemplate(Map<String, String> headers, Runnable task) {
        CustomHttpInvokerRequestExecutor executor = (CustomHttpInvokerRequestExecutor) factoryBean.getHttpInvokerRequestExecutor();
        executor.setHeaders(headers);
        task.run();
    }
}
And then you can use it as below:
@Bean
@Qualifier("service")
public HttpInvokerProxyFactoryBean invoker() {
    HttpInvokerProxyFactoryBean invoker = new HttpInvokerProxyFactoryBean();
    invoker.setServiceUrl(testUrl);
    invoker.setServiceInterface(Service.class);
    // set our custom request executor
    CustomHttpInvokerRequestExecutor executor = new CustomHttpInvokerRequestExecutor();
    invoker.setHttpInvokerRequestExecutor(executor);
    return invoker;
}

@Autowired
CustomRemoteExecutor executor;

@Autowired
Service service;

public void invoke(Bean bean) {
    // when you need custom headers
    Map<String, String> headers = new HashMap<>();
    headers.put("CUSTOM_HEADER", "CUSTOM_VALUE");
    headers.put("CUSTOM_HEADER2", "CUSTOM_VALUE2");
    executor.executeInTemplate(headers, () -> service.process(bean));
}
There is one drawback here, as I also stated in the comments: if you use your proxied service client in a multithreaded environment (server-to-server requests, maybe), you should consider making the executeInTemplate method synchronized.
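If synchronizing ever becomes a bottleneck, one hedged alternative (a sketch, not part of the original answer) is to keep the headers in a ThreadLocal inside the executor; the HTTP invoker call runs on the calling thread, so each caller only sees its own headers:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.util.Map;

import org.springframework.remoting.httpinvoker.SimpleHttpInvokerRequestExecutor;

public class ThreadLocalHeaderInvokerRequestExecutor extends SimpleHttpInvokerRequestExecutor {

    private static final ThreadLocal<Map<String, String>> HEADERS = new ThreadLocal<>();

    // callers set their headers on their own thread just before invoking the proxied service
    public static void setHeaders(Map<String, String> headers) {
        HEADERS.set(headers);
    }

    @Override
    protected void prepareConnection(HttpURLConnection connection, int contentLength) throws IOException {
        super.prepareConnection(connection, contentLength);
        Map<String, String> headers = HEADERS.get();
        if (headers != null) {
            headers.forEach(connection::setRequestProperty);
            HEADERS.remove(); // clean up so the next call on this thread starts fresh
        }
    }
}

With this variant no shared mutable state lives on the executor, so the helper method no longer needs to be synchronized.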
An addition to my answer: if your service method needs to return some object, you can add another helper method to CustomRemoteExecutor and use it when you need a return value. The method can have the same name, overloading the former one, which I think is nicer.
public <T> T executeInTemplate(Map<String, String> headers, Callable<T> task) {
    CustomHttpInvokerRequestExecutor executor = (CustomHttpInvokerRequestExecutor) factoryBean.getHttpInvokerRequestExecutor();
    executor.setHeaders(headers);
    try {
        return task.call();
    } catch (Exception e) {
        // it is better to log this exception with your preferred logger (log4j, logback, etc.)
        e.printStackTrace();
    }
    return null;
}
And again you can use it like below:
@Autowired
CustomRemoteExecutor executor;

@Autowired
ISampleService service;

public void invoke(Bean bean) {
    // when you need custom headers
    Map<String, String> headers = new HashMap<>();
    headers.put("CUSTOM_HEADER", "CUSTOM_VALUE");
    headers.put("CUSTOM_HEADER2", "CUSTOM_VALUE2");
    // assume that the service.returnSomething() method returns String
    String value = executor.executeInTemplate(headers, () -> service.returnSomething(bean));
}
Hope it helps.
My Spring Boot application contains several @KafkaListeners, and each listener performs the same steps before and after actually processing the payload: validate the payload, check whether the event has already been processed, check whether it's a tombstone (null) message, decide whether processing should be retried in case of failure, emit metrics, etc.
These steps are currently implemented in a base class, but because the topics passed to @KafkaListener must be compile-time constants, the method annotated with @KafkaListener is defined in the subclass and does nothing but pass its parameters to a method in the base class.
This works just fine, but I wonder if there's a more elegant solution. I assume my base class would have to create a listener container programmatically, but after a quick look at KafkaListenerAnnotationBeanPostProcessor, it seems to be quite involved.
Does anyone have any recommendations?
Having stumbled upon this question while looking to implement something similar, I first started with Artem Bilan's answer. However, this did not work because annotations are by default not inherited in child classes unless they are themselves annotated with @Inherited. There may yet be a way to make an annotation approach work, and I will update this answer if and when I get it to work. Thankfully, though, I have achieved the desired behaviour using programmatic registration of the Kafka listeners.
My code is something like the following:
Interface:
public interface GenericKafkaListener {

    String METHOD = "handleMessage";

    void handleMessage(ConsumerRecord<String, String> record);
}
Abstract Class:
public abstract class AbstractGenericKafkaListener implements GenericKafkaListener {

    private final String kafkaTopic;

    public AbstractGenericKafkaListener(final String kafkaTopic) {
        this.kafkaTopic = kafkaTopic;
    }

    @Override
    public void handleMessage(final ConsumerRecord<String, String> record) {
        // do common logic here
        specificLogic(record);
    }

    protected abstract void specificLogic(ConsumerRecord<String, String> record);

    public String getKafkaTopic() {
        return kafkaTopic;
    }
}
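A concrete listener then only supplies its topic and the topic-specific logic; a hedged sketch (OrderCreatedListener and the topic name are hypothetical):

@Component
public class OrderCreatedListener extends AbstractGenericKafkaListener {

    public OrderCreatedListener() {
        super("order-created"); // hypothetical topic name
    }

    @Override
    protected void specificLogic(final ConsumerRecord<String, String> record) {
        // topic-specific processing of the already-validated record
        System.out.println("Order event: " + record.value());
    }
}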
We can then programmatically register all beans of type AbstractGenericKafkaListener in a KafkaListenerConfigurer:
@Configuration
public class KafkaListenerConfiguration implements KafkaListenerConfigurer {

    // "log" is assumed to come from e.g. Lombok's @Slf4j or a manually declared logger
    @Autowired
    private List<AbstractGenericKafkaListener> listeners;

    @Autowired
    private BeanFactory beanFactory;

    @Autowired
    private MessageHandlerMethodFactory messageHandlerMethodFactory;

    @Autowired
    private KafkaListenerContainerFactory kafkaListenerContainerFactory;

    @Value("${your.kafka.consumer.group-id}")
    private String consumerGroup;

    @Value("${your.application.name}")
    private String service;

    @Override
    public void configureKafkaListeners(final KafkaListenerEndpointRegistrar registrar) {
        final Method listenerMethod = lookUpMethod();
        listeners.forEach(listener -> {
            registerListenerEndpoint(listener, listenerMethod, registrar);
        });
    }

    private void registerListenerEndpoint(final AbstractGenericKafkaListener listener,
                                          final Method listenerMethod,
                                          final KafkaListenerEndpointRegistrar registrar) {
        log.info("Registering {} endpoint on topic {}", listener.getClass(),
                listener.getKafkaTopic());
        final MethodKafkaListenerEndpoint<String, String> endpoint =
                createListenerEndpoint(listener, listenerMethod);
        registrar.registerEndpoint(endpoint);
    }

    private MethodKafkaListenerEndpoint<String, String> createListenerEndpoint(
            final AbstractGenericKafkaListener listener, final Method listenerMethod) {
        final MethodKafkaListenerEndpoint<String, String> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBeanFactory(beanFactory);
        endpoint.setBean(listener);
        endpoint.setMethod(listenerMethod);
        endpoint.setId(service + "-" + listener.getKafkaTopic());
        endpoint.setGroup(consumerGroup);
        endpoint.setTopics(listener.getKafkaTopic());
        endpoint.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
        return endpoint;
    }

    private Method lookUpMethod() {
        return Arrays.stream(GenericKafkaListener.class.getMethods())
                .filter(m -> m.getName().equals(GenericKafkaListener.METHOD))
                .findAny()
                .orElseThrow(() ->
                        new IllegalStateException("Could not find method " + GenericKafkaListener.METHOD));
    }
}
How about this:
public abstract class BaseKafkaProcessingLogic {

    @KafkaHandler
    public void handle(Object payload) {
    }
}

@KafkaListener(topics = "topic1")
public class Topic1Handler extends BaseKafkaProcessingLogic {
}

@KafkaListener(topics = "topic2")
public class Topic2Handler extends BaseKafkaProcessingLogic {
}
?
I needed the same functionality and came up with a solution close to Artem Bilan's answer. Yes, the @KafkaHandler annotation is not inherited by child classes, but when it is defined in an interface, it is. Here is the solution:
interface AbstractKafkaListener<T> {

    default Class<T> getCommandType() {
        TypeToken<T> type = new TypeToken<T>(getClass()) {};
        return (Class<T>) type.getRawType();
    }

    @KafkaHandler
    default void handle(String message) throws JsonProcessingException {
        ObjectMapper objectMapper = new ObjectMapper();
        T value = objectMapper.readValue(message, getCommandType());
        handle(value);
    }

    void handle(T message);
}
The implementing class only needs to provide the handle(T) method:
@Component
@KafkaListener(topics = "my_topic")
public class KafkaListenerForMyCustomMessage implements AbstractKafkaListener<MyCustomMessage> {

    @Override
    public void handle(MyCustomMessage message) {
        System.out.println(message);
    }
}
Ideally the two default methods in the interface would be private or protected, but because they live in an interface this cannot be done: default methods, like all interface methods, are always public.
I use this solution to dynamically parse the message from Kafka (received as a String) into the custom class.
The getCommandType method returns the class of the T generic parameter. TypeToken comes from the Google Guava library.