I'm following a guide on using Spring JMS with the JmsListener annotation at the method level. I think it is working, but since neither a breakpoint set in that method, nor log4j logging, nor even a simple System.out.println() fires, I'm not 100% sure the destination is being hit.
@Component
public class JmsEmailServiceConsumer {

    private final Logger log = LoggerFactory.getLogger(this.getClass());

    private final JmsEmailService jmsEmailService;

    @Autowired
    public JmsEmailServiceConsumer(JmsEmailService jmsEmailService) {
        this.jmsEmailService = jmsEmailService;
    }

    @JmsListener(destination = "simple.queue")
    public void receiveEmailData(EmailData emailData) {
        jmsEmailService.sendEmail(emailData);
    }
}
Pretty simple task. All I'm trying to do is create a JMS queue to handle generating emails. The process calls the service, jmsEmailService, which determines via calls to DAOs which email addresses to send to. If none are found, no email is sent. I'm testing locally and don't have an email server up and running, but I want to verify that the calls to the DAOs are working. If they are, I can proceed with committing the code and getting QA to test the email process.
I did it this way because of a blog post I found that removes much of the bulk of dealing with JMS. As you can see, all I needed to do was annotate the receiveEmailData method with @JmsListener and provide a destination, which has already been set up in the producer class as:
private static final String SIMPLE_QUEUE = "simple.queue";

@Autowired
public JmsEmailProducerImpl(JmsTemplate jmsTemplate) {
    this.jmsTemplate = jmsTemplate;
}

@Override
public void sendEmail(EmailData emailData) {
    //EmailData emailData = new EmailData(userId, person, company, roleKind, isRemoved);
    jmsTemplate.convertAndSend(SIMPLE_QUEUE, emailData);
}
Pretty easy right? That's what I thought. For reference, here's the website I am looking at:
http://xpadro.blogspot.com/2015/04/configure-spring-jms-application-with.html
Any thoughts? I can place a breakpoint at that line in the producer class, and it is hit, but once jmsTemplate fires off the convertAndSend method, no breakpoint, System.out.println(), or log4j logging in the consumer class works. I do see this in my broker logging:
2015-10-26 00:02:34,804 DEBUG org.apache.activemq.broker.region.Queue::expireMessages:905 queue://simple.queue expiring messages ..
2015-10-26 00:02:34,804 DEBUG org.apache.activemq.broker.region.Queue::expireMessages:911 queue://simple.queue expiring messages done.
2015-10-26 00:02:34,804 DEBUG org.apache.activemq.broker.region.Queue::doPageInForDispatch:1874 queue://simple.queue, subscriptions=0, memory=0%, size=2, pending=0 toPageIn: 0, Inflight: 0, pagedInMessages.size 2, pagedInPendingDispatch.size 2, enqueueCount: 2, dequeueCount: 0, memUsage:48394
Thanks for the nudge, Gary! I had a block of settings for org.springframework in my log4j properties, but the JMS logging didn't appear until I added one for org.springframework.jms. I did a bit of analyzing with and without my code and saw that the console and file output remained the same.
So in the end, what I was missing, and what the author of that blog didn't explain, is that I needed to add the @EnableJms annotation to my JMSConfiguration class, along with the following bean in the same class:
@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory() {
    DefaultJmsListenerContainerFactory factory =
            new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory());
    return factory;
}
I'm assuming that Spring Boot adds the necessary plumbing to your configuration class automagically, and that is one thing I didn't have. Once I did this, the breakpoints worked fine.
It's interesting how many ways there are to skin a cat in Spring. I could easily have stuck with implementing MessageListener and overriding the onMessage method, but I wanted to try the @JmsListener annotation since it makes for cleaner code. If I want to add a new JMS queue, all I need to do is create a POJO and add the @JmsListener annotation to the method that will receive the message from the producer.
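For example, adding a hypothetical second queue could look like this (the queue name "report.queue" and the ReportData payload type are made up for illustration, not part of the code above):

```java
@Component
public class JmsReportServiceConsumer {

    // Spring converts the incoming message payload to ReportData
    // and invokes this method for each message on report.queue.
    @JmsListener(destination = "report.queue")
    public void receiveReportData(ReportData reportData) {
        // handle the payload here
    }
}
```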
Related
I know how to create a single JMS listener with Spring Boot using annotations. But now I want to create, at server startup, several JMS listeners listening to several brokers that send the same kind of messages, reading the brokers' properties from a file.
How can I achieve this? Is it possible to ask Spring Boot to create the beans with Java statements instead of annotations? With a factory or something like that?
I know there won't be more than 10 brokers in the system. Is there a solution that statically defines 10 JMS listeners with annotations but deactivates the unused ones so that they don't cause errors?
My answer relates to "Is there a solution to define statically 10 JMS listeners with annotations but deactivating some listeners if they are not used so that they don't cause errors?" and not the dynamic portion of creating JmsListeners on the fly.
You can use @ConditionalOnProperty to enable/disable your consumers, and use profiles to specify when they are enabled.
Example:
@Slf4j
@Service
@ConditionalOnProperty(name = "adapter-app-config.jms.request.enable-consumer", havingValue = "true")
public class RequestMessageConsumer {

    @Autowired
    private AdapterConfig adapterConfig;

    @Autowired
    private RequestMessageHandler requestMessageHandler;

    @JmsListener(concurrency = "1", destination = "${adapter-app-config.jms.request.InQueue}")
    @Transactional(rollbackFor = { Exception.class })
    public void onMessage(TextMessage message) {
        requestMessageHandler.handleMessage(message);
    }
}
application.yml (note this is YAML syntax, not .properties syntax):

adapter-app-config:
  jms:
    sync:
      enable-consumer: true
    request:
      enable-consumer: false
For Dynamic, please see:
Adding Dynamic Number of Listeners(Spring JMS)
https://docs.spring.io/spring-framework/docs/current/reference/html/integration.html#jms-annotated-programmatic-registration
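For the dynamic case, the programmatic registration linked above can be sketched roughly like this. Note that BrokerConfig and brokerConfigs() are hypothetical stand-ins for whatever you read from your properties file:

```java
@Configuration
@EnableJms
public class DynamicListenerConfig implements JmsListenerConfigurer {

    @Override
    public void configureJmsListeners(JmsListenerEndpointRegistrar registrar) {
        // Register one endpoint per broker entry read from the file.
        for (BrokerConfig broker : brokerConfigs()) {
            SimpleJmsListenerEndpoint endpoint = new SimpleJmsListenerEndpoint();
            endpoint.setId("listener-" + broker.getName());
            endpoint.setDestination(broker.getQueue());
            endpoint.setMessageListener(message -> {
                // handle the incoming message
            });
            registrar.registerEndpoint(endpoint);
        }
    }

    private List<BrokerConfig> brokerConfigs() {
        // Hypothetical: load broker definitions from a properties file.
        return Collections.emptyList();
    }
}
```

Each endpoint would also need a listener container factory wired to the right broker's connection factory; registerEndpoint has an overload that accepts a factory per endpoint.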
I am using Spring Boot with log4j2, and I'd like to trigger a custom method in a @Service class when logger.error(...) is called.
For example,
@Service
public class Foo {

    private final Logger logger = LogManager.getLogger(getClass());
    ...

    public void doSomething() {
        try {
            ...
        }
        catch (Exception e) {
            logger.error("Error!", e); // When `error` is triggered...
        }
    }
}

// Other class
@Service
public class Bar {

    @Autowired private NotificationService notificationService;

    public void triggeredOnError() { // I'd like to trigger this method
        this.notificationService.notifySomething();
    }
}
I'd like to know whether this is possible in log4j2 with Spring Boot. The thing is, I just want to call the default logger.error(...) method, since I don't want to change the default behavior of log4j2. I researched a bit, and a filter or an adapter might be the solution here, but I am not really sure how to achieve this. Please help me out!
While an appender would work, as Mark suggests, I would implement a Filter. A Filter can be placed in four different locations in Log4j 2 and has the option of forcing the log event to be logged, forcing it not to be logged, or continuing with the normal evaluation of whether it should be logged. A filter can also be configured with onMatch=NEUTRAL and onMismatch=NEUTRAL so that it has no effect on whether the log event is processed but still allows some other processing to take place. In addition, Filters are much easier to write than Appenders.
You can find a sample Filter at http://logging.apache.org/log4j/2.x/manual/extending.html#Filters
What you should not do in a Filter though, is use it as a way to write the log event to some destination. That is exactly what Appenders are for.
IMO the easiest way to achieve that is to create a special appender and in Log4j2 configuration associate it with a logger of your choice (or maybe with all the loggers if you want a “global” configuration).
Then you could use an “appender filter” so that the appender is called only if it's an error message.
The only potential issue is contacting the spring bean from log4j2 appender. Read this SO thread to understand how technically you can achieve that.
The benefit of this method is that you don’t change the framework but instead leverage the configuration options that it already provides.
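As a configuration-only illustration of that idea, a ThresholdFilter inside the appender declaration restricts it to ERROR events. The "Notification" element below is a hypothetical custom appender plugin; everything else is standard log4j2 configuration:

```xml
<Configuration>
  <Appenders>
    <Console name="console"/>
    <!-- "Notification" stands in for your custom appender plugin -->
    <Notification name="notifier">
      <ThresholdFilter level="ERROR" onMatch="ACCEPT" onMismatch="DENY"/>
    </Notification>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="console"/>
      <AppenderRef ref="notifier"/>
    </Root>
  </Loggers>
</Configuration>
```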
I'm trying to send a simple message to RabbitMQ using Spring Cloud Stream. Basically the code looks like this:
@EnableBinding(Source.class)
@SpringBootApplication
public class SourceApplication {

    public static void main(String[] args) {
        SpringApplication.run(SourceApplication.class, args);
    }

    @Autowired Source source;

    @PostConstruct
    public void init() {
        source.send(MessageBuilder.withPayload("payload").build());
    }
}
then I get this error message:
org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'unknown.channel.name'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=******, headers={id=c60dd5be-6576-99d5-fd1b-b1cb94c191c1, timestamp=1488651422892}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:93)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:423)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:373)
However, if I add some delay before sending the message (just a second or a few), it works fine. My question is: how can I wait for Spring to completely initialize the message channels before sending a message?
@PostConstruct is triggered too early (when the configuration bean is created, but before the context is started and the binding takes place). What you want is to trigger the sending of the message once the context is completely initialized, or at least after the output channels are bound.
You have a few options, all relying on the creation of an additional bean:
Use the SmartLifecycle support from Spring (make sure that isAutoStartup returns true and the phase is zero, both the defaults, so that the bean is started after the outputs are bound).
Use an ApplicationListener for ContextRefreshedEvent.
Since this is a Spring Boot application you can use an ApplicationRunner bean (which gets invoked after the context has been created).
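A minimal sketch of the ApplicationRunner option, reusing the Source binding from the question (the class name is made up):

```java
@Component
public class StartupMessageSender implements ApplicationRunner {

    private final Source source;

    public StartupMessageSender(Source source) {
        this.source = source;
    }

    // Runs only after the context is fully initialized and the
    // output channel is bound, so the send no longer races the binding.
    @Override
    public void run(ApplicationArguments args) {
        source.output().send(MessageBuilder.withPayload("payload").build());
    }
}
```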
You might look into Spring's Task Execution and Scheduling features.
In particular, it sounds like you want something like what section 34.4 covers.
Also, I spotted this answer to a similar question.
Say I have the following route:
from(rabbitMQUri)
    .to(myCustomProcessor)
    .choice()
        .when(shouldGotoA)
            .to(fizz)
        .when(shouldGotoB)
            .to(buzz)
        .otherwise()
            .to(foo);
Let's pretend that myCustomProcessor tunes shouldGotoA and shouldGotoB according to the message consumed from RabbitMQ.
I would like to unit test 3 scenarios:
A "fizz" message is consumed and shouldGotoA is set to true, which executes the first when(...).
A "buzz" message is consumed and shouldGotoB is set to true, which executes the second when(...).
A "foo" message is consumed and the otherwise() is executed.
My question is: how do I mock/stub the RabbitMQ endpoint so that the route executes as it normally will in production, but so that I don't have to actually connect the test to a RabbitMQ server? I need some kind of "mock message" producer.
A code example or snippet would be extremely helpful and very much so appreciated!
This is one way of putting together a suitable test.
Firstly define an empty Camel Context with just a ProducerTemplate in it:
<camel:camelContext id="camelContext">
    <camel:template id="producerTemplate" />
</camel:camelContext>
I do this so that when I execute the test, I can control which routes actually start as I don't want all my routes starting during a test.
Now in the test class itself, you'll need references to the producer template and Camel Context. In my case, I'm using Spring and I autowire them in:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/spring/spring-test-camel.xml" })
public class MyTest {

    @Autowired
    private ProducerTemplate producerTemplate;

    @Autowired
    private CamelContext camelContext;
In the test itself, replace the RabbitMQ/ActiveMQ/JMS component in the context with the seda or direct component, e.g. replace all JMS calls with a seda queue:
camelContext.removeComponent("jms");
camelContext.addComponent("jms", this.camelContext.getComponent("seda"));
camelContext.addRoutes(this.documentBatchRouting);
Now whenever you are reading from or writing to a JMS URI, it is actually going to a seda queue. This is similar to injecting a new URI into the component, but it takes less configuration and allows you to add new endpoints to the route without having to remember to inject all the URIs.
Finally in the test, use the producer template to send a test message:
producerTemplate.sendBody("jms:MyQueue", 2);
Your route should then execute, and you can test your expectations.
Two things to note are:
Your transaction boundaries may change, especially if you replace JMS queues with a direct component
If you are testing several routes, you'll have to be careful to remove the route from the Camel Context at the end of the tests for that route.
It may depend what component you are using (AMQP or RabbitMQ) for the communication.
The single most important resource for sample code in Camel is the JUnit test cases in the source.
Two files that do similar things to what you need are these two, but you may want to look around the test cases in general for inspiration:
AMQPRouteTest.java
RabbitMQConsumerIntTest.java
A more "basic" way to make routes testable is to make the "from" uri a parameter.
Let's say you make your RouteBuilder something like this:
private String fromURI = "amqp:/..";

public void setFromURI(String fromURI) {
    this.fromURI = fromURI;
}

public void configure() {
    from(fromURI).whatever();
}
Then you can inject a "seda:foobar" endpoint into the fromURI before you start the unit test. The seda endpoint is trivial to test. This assumes you don't need to test AMQP/RabbitMQ-specific constructs, but simply receive the payload.
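A rough sketch of what that looks like in a test, assuming the RouteBuilder above is called MyRoutes and exposes the setFromURI setter (names are illustrative):

```java
// Swap the AMQP endpoint for an in-memory seda endpoint before starting.
CamelContext context = new DefaultCamelContext();
MyRoutes routes = new MyRoutes();
routes.setFromURI("seda:foobar");
context.addRoutes(routes);
context.start();

// Push a payload straight into the route and assert on the outcome.
ProducerTemplate template = context.createProducerTemplate();
template.sendBody("seda:foobar", "test payload");
```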
A good way to make software better testable (especially software that communicates to external stuff) is to use dependency injection. I love Guice and it is directly supported by camel.
(All this stuff will burden you with learning about dependency injection, but it will pay off very soon, I can assure you.)
In this scenario you would just inject "Endpoint"s. You pre-configure the endpoints like this (this would be placed in a "module"):
@Provides
@Named("FileEndpoint")
private Endpoint fromFileEndpoint() {
    FileEndpoint fileEndpoint = getContext().getEndpoint("file:" + somFolder, FileEndpoint.class);
    fileEndpoint.setMove(".done");
    fileEndpoint.setRecursive(true);
    fileEndpoint.setDoneFileName(FtpRoutes.DONE_FILE_NAME);
    ...
    return fileEndpoint;
}
Your RouteBuilder just injects the endpoint:

@Inject
private MyRoutes(@Named("FileEndpoint") final Endpoint fileEndpoint) {
    this.fileEndpoint = fileEndpoint;
}

@Override
public void configure() throws Exception {
    from(fileEndpoint)....
}
To easily test such a route, you inject another endpoint for the test: not FileEndpoint but "direct:something". A very easy way to do this is "Jukito", which combines Guice with Mockito. A test would look like this:
@RunWith(JukitoRunner.class)
public class OcsFtpTest extends CamelTestSupport {

    public static class TestModule extends JukitoModule {

        @Override
        protected void configureTest() {
            bind(CamelContext.class).to(DefaultCamelContext.class).in(TestSingleton.class);
        }

        @Provides
        @Named("FileEndpoint")
        private Endpoint testEndpoint() {
            DirectEndpoint fileEndpoint = getContext().getEndpoint("direct:a", DirectEndpoint.class);
            return fileEndpoint;
        }
    }

    @Inject
    private MyRoutes testObject;

    @Test
    ....
}
Now the "testObject" will get the direct endpoint instead of the file endpoint. This works with all kinds of Endpoints, and generally with all interfaces/abstract classes and APIs that rely heavily on interfaces (Camel excels here!).
In my web application (Spring based), I have a simple layered architecture: Service -> Manager -> Dao -> database. For logging purposes I want to log a request entering and exiting the Service in one go, so that it is easy to debug issues. Otherwise the logs contain output from different threads intermingled with each other, which is not easy to read. Is this possible with an existing logging framework like log4j?
It is possible with any logging framework. You can use AOP to create a "logging" aspect around your service methods. Here is some example.
You can use Spring AOP to implement such logs. Here's an example:
@Aspect
public class LoggingAspect {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingAspect.class);

    // Spring AOP only supports "execution" pointcuts, not AspectJ's "call"
    @Pointcut("execution(* com.yourcompany.*.*(..))")
    public void serviceMethod() {
    }

    @Before("serviceMethod()")
    public void logMethodCalls(final JoinPoint joinPoint) {
        if (LOG.isDebugEnabled())
            LOG.debug("Calling method {} with args {}",
                    joinPoint.getSignature(), joinPoint.getArgs());
    }
}
Just wire it as a Spring Bean:
<bean class="com.somepackage.LoggingAspect" />
<aop:aspectj-autoproxy/>
and calls to public methods of Spring beans in the matched packages should be logged.
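Since the original question asks for the entry and the exit to be logged in one go, an @Around advice is a natural extension of the aspect above; a sketch reusing the same serviceMethod() pointcut:

```java
// Logs entry and exit (normal or exceptional) of each matched method.
@Around("serviceMethod()")
public Object logAroundServiceCalls(final ProceedingJoinPoint joinPoint) throws Throwable {
    LOG.debug("Entering {} with args {}", joinPoint.getSignature(), joinPoint.getArgs());
    try {
        Object result = joinPoint.proceed();
        LOG.debug("Exiting {} with result {}", joinPoint.getSignature(), result);
        return result;
    } catch (Throwable t) {
        LOG.debug("Exiting {} with exception {}", joinPoint.getSignature(), t.toString());
        throw t;
    }
}
```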