What would be the Spring Integration DSL way of creating the equivalent of
<int:gateway service-interface="MyService" default-request-channel="myService.inputChannel"/>
// where my existing interface looks like
interface MyService { void process(Foo foo); }
I've not been able to find a factory in org.springframework.integration.dsl, and none of the argument lists for IntegrationFlows.from(...) are helping self-discovery.
It sort of feels like I'm missing something like a Java protocol adaptor from https://github.com/spring-projects/spring-integration-java-dsl/wiki/Spring-Integration-Java-DSL-Reference#using-protocol-adapters.
// I imagine this is what I can't find
IntegrationFlows.from(Java.gateway(MyService.class))
.channel("myService.inputChannel")
.get();
The only thing I've come across is an old blog post, but it seems to require annotating the interface with @MessagingGateway and @Gateway, which I'd like to avoid. See https://spring.io/blog/2014/11/25/spring-integration-java-dsl-line-by-line-tutorial
We have done that recently in Spring Integration 5.0. With that version you really can do this:
@Bean
public IntegrationFlow controlBusFlow() {
return IntegrationFlows.from(ControlBusGateway.class)
.controlBus()
.get();
}
public interface ControlBusGateway {
void send(String command);
}
See more info in the latest blog post.
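Applied to the MyService interface from the question, a minimal sketch (assuming Spring Integration 5.0+; the handle(...) step is just a stand-in for whatever should process the Foo payload) could look like this:
@Bean
public IntegrationFlow myServiceFlow() {
return IntegrationFlows.from(MyService.class)
.handle(message -> {
// process the Foo payload, e.g. (Foo) message.getPayload()
})
.get();
}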
Right now (before 5.0) you have no choice other than to declare @MessagingGateway on the interface and start the flow from the request channel of that gateway definition.
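For earlier versions, a minimal sketch of that workaround (the channel name is taken from the XML in the question; it assumes @IntegrationComponentScan, or component scanning, picks up the gateway interface) could be:
@MessagingGateway(defaultRequestChannel = "myService.inputChannel")
public interface MyService {
void process(Foo foo);
}

@Bean
public IntegrationFlow myServiceFlow() {
return IntegrationFlows.from("myService.inputChannel")
.handle(message -> {
// process the Foo payload here
})
.get();
}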
I have the following spring-integration XML config
<ip:tcp-outbound-gateway id="outboundClient"
request-channel="requestChannel"
reply-channel="string2ObjectChannel"
connection-factory="clientConnectionFactory"
request-timeout="10000"
reply-timeout="10000"/>
How can I write the Java config equivalent of the above?
I thought the equivalent would be
@Bean
public TcpOutboundGateway outboundClient() {
TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
tcpOutboundGateway.setConnectionFactory(clientConnectionFactory());
tcpOutboundGateway.setRequiresReply(true);
tcpOutboundGateway.setReplyChannel(string2ObjectChannel());
tcpOutboundGateway.setRequestTimeout(10000);
tcpOutboundGateway.setSendTimeout(10000);
return tcpOutboundGateway;
}
But I couldn't find a way to set the request channel.
Any help would be appreciated.
Thank you
Your config looks good, but you should know in addition that any Spring Integration consumer component consists of two main objects: the MessageHandler (TcpOutboundGateway in your case) and an EventDrivenConsumer for a subscribable input-channel, or a PollingConsumer if the input-channel is pollable.
So, since you already have the first (handling) part, you need the consuming part as well. For this purpose Spring Integration suggests marking your @Bean with an endpoint annotation:
@Bean
@ServiceActivator(inputChannel = "requestChannel")
public TcpOutboundGateway outboundClient() {
See more in the Spring Integration Reference Manual.
However, to enable such annotation processing (or any other Spring Integration infrastructure), you have to mark your @Configuration class with @EnableIntegration.
Also consider using the Spring Integration Java DSL to get more out of Java config.
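Putting that together, a minimal sketch of the full configuration might look like this (the TcpClientConfig class name is illustrative; it assumes the clientConnectionFactory() and string2ObjectChannel() beans from your existing configuration, and note that reply-channel in the XML corresponds to the handler's output channel):
@Configuration
@EnableIntegration
public class TcpClientConfig {

@Bean
@ServiceActivator(inputChannel = "requestChannel")
public TcpOutboundGateway outboundClient() {
TcpOutboundGateway tcpOutboundGateway = new TcpOutboundGateway();
tcpOutboundGateway.setConnectionFactory(clientConnectionFactory());
tcpOutboundGateway.setRequiresReply(true);
// reply-channel="string2ObjectChannel" from the XML
tcpOutboundGateway.setOutputChannel(string2ObjectChannel());
tcpOutboundGateway.setRequestTimeout(10000);
tcpOutboundGateway.setSendTimeout(10000);
return tcpOutboundGateway;
}

// clientConnectionFactory() and string2ObjectChannel() beans as already defined
}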
Is there any way to configure JMS outbound channel adapter
<int-jms:outbound-channel-adapter id="jmsOut" destination="outQueue" channel="exampleChannel"/>
in a similarly "easy" way, but using only Java-based (annotation) configuration?
If not, what is the simplest way to achieve this?
Eugene, I've already pointed you to the Spring Integration Java DSL. It is exactly the way to simplify Spring Integration with Java-based config.
Since this isn't your first question along these lines, please pay attention to that project, which fuses simply with core Spring Integration:
@Bean
public IntegrationFlow jmsOutboundFlow() {
return IntegrationFlows.from("exampleChannel")
.handleWithAdapter(h ->
h.jms(this.jmsConnectionFactory).destination("outQueue"))
.get();
}
Otherwise, with raw Java and annotation configuration, it may look like this:
@Bean
@ServiceActivator(inputChannel = "exampleChannel")
public MessageHandler jmsOutboundAdapter() {
JmsTemplate template = new DynamicJmsTemplate();
template.setConnectionFactory(this.jmsConnectionFactory);
JmsSendingMessageHandler handler = new JmsSendingMessageHandler(template);
handler.setDestinationName("outQueue");
return handler;
}
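A small supporting sketch (assumptions: the same @Configuration class also carries @EnableIntegration, and "exampleChannel" is a plain DirectChannel):
@Bean
public MessageChannel exampleChannel() {
return new DirectChannel();
}
Anything sent to this channel then ends up on the "outQueue" JMS destination.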
My application uses the spring-messaging module included in Spring Framework 4, with key abstractions from the Spring Integration project such as Message, MessageChannel, MessageHandler and others that can serve as a foundation for a messaging architecture.
My application uses WebSocket & STOMP. It maintains connections (WebSocket sessions) with a high volume of Java WebSocket clients, and one of the requirements was to use either Akka or Reactor.
I want to integrate spring-reactor's RingBufferAsyncTaskExecutor in place of the ThreadPoolTaskExecutor in clientInboundChannelExecutor & clientOutboundChannelExecutor to get better throughput. At least I've identified this approach as the way to integrate spring-reactor into my existing application; it may not be the right approach.
I was looking at reactor-si-quickstart, since it demonstrates how to use Reactor with Spring Integration, and since spring-messaging in Spring Framework 4 includes key abstractions from the Spring Integration project, I thought it would be the closest reference.
My working Java config for WebSocket has the following class declaration:
public class WebSocketConfig extends WebSocketMessageBrokerConfigurationSupport implements WebSocketMessageBrokerConfigurer.
WebSocketMessageBrokerConfigurationSupport extends AbstractMessageBrokerConfiguration.
In org.springframework.messaging.simp.config.AbstractMessageBrokerConfiguration I wanted to try configuring RingBufferAsyncTaskExecutor in place of ThreadPoolTaskExecutor:
@Bean
public ThreadPoolTaskExecutor clientInboundChannelExecutor() {
TaskExecutorRegistration reg = getClientInboundChannelRegistration().getOrCreateTaskExecRegistration();
ThreadPoolTaskExecutor executor = reg.getTaskExecutor();
executor.setThreadNamePrefix("clientInboundChannel-");
return executor;
}
When I try to override this method in WebSocketConfig, I get "The method getOrCreateTaskExecRegistration() from the type ChannelRegistration is not visible", because in AbstractMessageBrokerConfiguration it's protected:
protected final ChannelRegistration getClientInboundChannelRegistration() {
if (this.clientInboundChannelRegistration == null) {
ChannelRegistration registration = new ChannelRegistration();
configureClientInboundChannel(registration);
this.clientInboundChannelRegistration = registration;
}
return this.clientInboundChannelRegistration;
}
I don't fully understand the WebSocketMessageBrokerConfigurationSupport hierarchy or the WebSocketMessageBrokerConfigurer interface in my WebSocketConfig. I just played around with overriding what I needed to for my customizations to work.
Not sure if it's relevant, but I don't need an external broker because my application doesn't send any data to all connected subscribers at the moment, and is unlikely to down the line. Communication with daemon-type Java WebSocket clients is point-to-point, but the web UI WebSocket in the browser does use subscribe to get real-time data, so it's a convenient setup (rather than a Spring Integration direct channel) and there were clear sources on how to set it up. Still, I'm not sure it is the most efficient application design.
The STOMP-over-WebSocket messaging architecture described in the Spring Framework reference documentation was the most comprehensive approach, since this is my first Spring project.
Is it possible to get the performance boosts from integrating spring-reactor into my existing application?
Or should I try to use Spring Integration instead? This would require a lot of modification, as far as I can tell; it also seems illogical that it would be necessary, given that the spring-messaging module included in Spring Framework 4 came from Spring Integration.
How should I integrate spring-reactor into my standard Spring Framework 4 STOMP-over-WebSocket messaging architecture?
If configuring RingBufferAsyncTaskExecutor in place of ThreadPoolTaskExecutor in clientInboundChannelExecutor & clientOutboundChannelExecutor is the correct way, how should I go about doing this?
Actually, RingBufferAsyncTaskExecutor isn't a ThreadPoolTaskExecutor, so you can't use it that way.
You can simply override clientInbound(Outbound)Channel beans from your AbstractWebSocketMessageBrokerConfigurer impl and just use @EnableWebSocketMessageBroker:
@Configuration
@EnableWebSocketMessageBroker
@EnableReactor
public class WebSocketConfig extends AbstractWebSocketMessageBrokerConfigurer {
@Autowired
Environment reactorEnv;
@Override
public void registerStompEndpoints(StompEndpointRegistry registry) {
registry.addEndpoint("/ws").withSockJS();
}
@Override
public void configureMessageBroker(MessageBrokerRegistry configurer) {
configurer.setApplicationDestinationPrefixes("/app");
configurer.enableSimpleBroker("/topic", "/queue");
}
@Bean
public AbstractSubscribableChannel clientInboundChannel() {
ExecutorSubscribableChannel channel = new ExecutorSubscribableChannel(new RingBufferAsyncTaskExecutor(this.reactorEnv));
ChannelRegistration reg = getClientInboundChannelRegistration();
channel.setInterceptors(reg.getInterceptors());
return channel;
}
}
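If you want the same executor on the outbound side as well (as the question asks), a mirrored bean, under the same assumptions as the clientInboundChannel() bean above, could be added to the same class:
@Bean
public AbstractSubscribableChannel clientOutboundChannel() {
ExecutorSubscribableChannel channel = new ExecutorSubscribableChannel(new RingBufferAsyncTaskExecutor(this.reactorEnv));
ChannelRegistration reg = getClientOutboundChannelRegistration();
channel.setInterceptors(reg.getInterceptors());
return channel;
}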
And please pay attention to the WebSocket support in Spring Integration.
By the way, please point me to the link for that reactor-si-quickstart.
Say I have the following route:
from(rabbitMQUri)
.to(myCustomProcessor)
.choice()
.when(shouldGotoA)
.to(fizz)
.when(shouldGotoB)
.to(buzz)
.otherwise()
.to(foo);
Let's pretend that myCustomProcessor tunes shouldGotoA and shouldGotoB according to the message consumed from RabbitMQ.
I would like to unit test 3 scenarios:
A "fizz" message is consumed and shouldGotoA is set to true, which executes the first when(...).
A "buzz" message is consumed and shouldGotoB is set to true, which executes the second when(...).
A "foo" message is consumed and the otherwise() is executed.
My question is: how do I mock/stub the RabbitMQ endpoint so that the route executes as it normally will in production, but so that I don't have to actually connect the test to a RabbitMQ server? I need some kind of "mock message" producer.
A code example or snippet would be extremely helpful and very much so appreciated!
This is one way of putting together a suitable test.
Firstly define an empty Camel Context with just a ProducerTemplate in it:
<camel:camelContext id="camelContext">
<camel:template id="producerTemplate" />
</camel:camelContext>
I do this so that when I execute the test, I can control which routes actually start as I don't want all my routes starting during a test.
Now in the test class itself, you'll need references to the producer template and Camel Context. In my case, I'm using Spring and I autowire them in:
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:/spring/spring-test-camel.xml" })
public class MyTest {
@Autowired
private ProducerTemplate producerTemplate;
@Autowired
private CamelContext camelContext;
In the test itself, replace the RabbitMQ/ActiveMQ/JMS component in the context with the seda or direct component, e.g. replace all JMS calls with a seda queue:
camelContext.removeComponent("jms");
camelContext.addComponent("jms", this.camelContext.getComponent("seda"));
camelContext.addRoutes(this.documentBatchRouting);
Now whenever you are reading or writing to a JMS URI, it is actually going to a seda queue. This is similar to injecting a new URI into the component, but it takes less configuration and allows you to add new endpoints to the route without having to remember to inject all the URIs.
Finally in the test, use the producer template to send a test message:
producerTemplate.sendBody("jms:MyQueue", 2);
Your route should then execute and you can test your expectations.
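A minimal sketch of what one of the three test methods could look like (assumptions: the route consumes from "jms:MyQueue" as above, documentBatchRouting is the autowired RouteBuilder under test, and Camel's NotifyBuilder is used to wait for the exchange to complete):
@Test
public void fizzMessageTakesFirstWhenBranch() throws Exception {
// swap the real JMS component for the in-memory seda component
camelContext.removeComponent("jms");
camelContext.addComponent("jms", camelContext.getComponent("seda"));
camelContext.addRoutes(documentBatchRouting);

// wait until one exchange has been fully processed by the route
NotifyBuilder done = new NotifyBuilder(camelContext).whenDone(1).create();

producerTemplate.sendBody("jms:MyQueue", "a fizz payload");

assertTrue(done.matches(5, TimeUnit.SECONDS));
// ...then assert on whatever side effect the fizz branch is expected to produce
}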
Two things to note are:
Your transaction boundaries may change, especially if you replace JMS queues with a direct component
If you are testing several routes, you'll have to be careful to remove the route from the Camel Context at the end of the tests for that route.
It may depend on which component you are using (AMQP or RabbitMQ) for the communication.
The single most important resource for sample code in Camel is the JUnit test cases in the source.
Two files that do similar things to what you need are these two, but you may want to look around the test cases in general to get inspiration:
AMQPRouteTest.java
RabbitMQConsumerIntTest.java
A more "basic" way to make routes testable is to make the "from" uri a parameter.
Let's say you make your RouteBuilder something like this:
private String fromURI = "amqp:/..";
public void setFromURI(String fromURI){
this.fromURI = fromURI;
}
public void configure(){
from(fromURI).whatever();
}
Then you can inject a "seda:foobar" endpoint into the fromURI before you start the unit test. The seda endpoint is trivial to test. This assumes you don't need to test AMQP/RabbitMQ-specific constructs, but simply want to receive the payload.
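A minimal sketch of such a test, assuming the RouteBuilder above is called MyRouteBuilder (the name is illustrative) and camel-test's CamelTestSupport is on the classpath:
public class MyRouteBuilderTest extends CamelTestSupport {

@Override
protected RouteBuilder createRouteBuilder() {
MyRouteBuilder builder = new MyRouteBuilder();
// swap the AMQP endpoint for an in-memory one before the route is started
builder.setFromURI("seda:testInput");
return builder;
}

@Test
public void payloadIsProcessed() throws Exception {
// "template" is the ProducerTemplate provided by CamelTestSupport
template.sendBody("seda:testInput", "test payload");
// assert on whatever the route is expected to do with the payload
}
}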
A good way to make software better testable (especially software that communicates with external systems) is to use dependency injection. I love Guice and it is directly supported by Camel.
(All this will burden you with learning about dependency injection, but it will pay off very soon, I can assure you.)
In this scenario you would just inject Endpoints. You pre-configure the endpoints like this (this would be placed in a Guice module):
@Provides
@Named("FileEndpoint")
private Endpoint fromFileEndpoint() {
FileEndpoint fileEndpoint = getContext().getEndpoint("file:" + somFolder, FileEndpoint.class);
fileEndpoint.setMove(".done");
fileEndpoint.setRecursive(true);
fileEndpoint.setDoneFileName(FtpRoutes.DONE_FILE_NAME);
...
return fileEndpoint;
}
Your RouteBuilder just injects the endpoint:
@Inject
private MyRoutes(#Named("FileEndpoint") final Endpoint fileEndpoint) {
this.fileEndpoint = fileEndpoint;
}
@Override
public void configure() throws Exception {
from(fileEndpoint)....
}
To easily test such a route, you inject another endpoint for the test: not a FileEndpoint but "direct:something". A very easy way to do this is Jukito, which combines Guice with Mockito. A test would look like this:
@RunWith(JukitoRunner.class)
public class OcsFtpTest extends CamelTestSupport {
public static class TestModule extends JukitoModule {
@Override
protected void configureTest() {
bind(CamelContext.class).to(DefaultCamelContext.class).in(TestSingleton.class);
}
@Provides
@Named("FileEndpoint")
private Endpoint testEndpoint() {
DirectEndpoint fileEndpoint = getContext().getEndpoint("direct:a", DirectEndpoint.class);
return fileEndpoint;
}
}
@Inject
private MyRoutes testObject;
@Test
....
}
Now the "testObject" will get the direct endpoint instead of the file endpoint.This works with all kinds of Endpoints and generally with all Interfaces/ abstract classes and Apis that heavily rely on Interfaces (camel excels here!).
I have a class that after it does some stuff, sends a JMS message.
I'd like to unit test the "stuff", but not necessarily the sending of the message.
When I run my test, the "stuff" green bars, but then fails when sending the message (it should, the app server is not running).
What is the best way to do this? Is it to mock the message queue, and if so, how is that done?
I am using Spring, and "jmsTemplate" is injected, along with "queue".
The simplest answer I would use is to stub out the message sending functionality. For example, if you have this:
public class SomeClass {
public void doit() {
//do some stuff
sendMessage( /*some parameters*/);
}
public void sendMessage( /*some parameters*/ ) {
//jms stuff
}
}
Then I would write a test that obscures the sendMessage behavior. For example:
@Test
public void testRealWorkWithoutSendingMessage() {
SomeClass thing = new SomeClass() {
@Override
public void sendMessage( /*some parameters*/ ) { /*do nothing*/ }
};
thing.doit();
assertThat( "Good stuff happened", x, is( y ) );
}
If the amount of code that is stubbed out or obscured is substantial, I would not use an anonymous inner class but just a "normal" inner class.
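That variant might look like this (the class name is illustrative):
private static class SomeClassWithoutJms extends SomeClass {
@Override
public void sendMessage( /*some parameters*/ ) { /*deliberately do nothing in tests*/ }
}
The test then simply instantiates SomeClassWithoutJms instead of the anonymous subclass.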
You can inject a mocked jmsTemplate.
Assuming EasyMock, something like:
JmsTemplate mockTemplate = createMock(JmsTemplate.class);
That would do the trick.
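A minimal sketch of that approach (assumptions: SomeClass exposes a setter for the injected JmsTemplate, and sendMessage() ends up calling convertAndSend(...)):
JmsTemplate mockTemplate = createMock(JmsTemplate.class);
mockTemplate.convertAndSend(anyObject(Destination.class), anyObject());
expectLastCall().anyTimes(); // we don't care about the JMS interaction in this test
replay(mockTemplate);

SomeClass thing = new SomeClass();
thing.setJmsTemplate(mockTemplate); // hypothetical setter for the injected template
thing.doit();
// assert on the "stuff" here; verifying the mock is optional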
Regarding how to organise all those test stubs / mocks in a larger application...
We build and maintain a larger enterprise app, which is configured with Spring. The real app runs as an EAR on a JBoss app server. We defined our Spring context(s) with a beanRefFactory.xml:
<bean id="TheMegaContext"
class="org.springframework.context.support.ClassPathXmlApplicationContext">
<constructor-arg>
<list>
<value>BasicServices.xml</value>
<value>DataAccessBeans.xml</value>
<value>LoginBeans.xml</value>
<value>BussinessServices.xml</value>
....
</list>
</constructor-arg>
</bean>
For running the unit tests, we just use a different beanRefFactory.xml, which swaps BasicServices for a test version. Within that test version we can define beans with the same names as in the production version, but with mock/stub or other implementations (e.g. the database uses a local Apache DBCP pooled data source, while the production version uses the data source from the app server).
Another option is MockRunner, which provides mock environments for JDBC, JMS, JSP, JCA and EJB. It allows you to define the queues/topics just like you would in the "real" case and simply send the message.
This is a perfect candidate for jMock unit testing: since your server is not running, you would use jMock to simulate the interaction with the server.
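A minimal sketch with jMock 2 (assumptions: ClassImposteriser from jmock-legacy is available, since JmsTemplate is a class rather than an interface, and SomeClass exposes a setter for the template as above):
Mockery context = new Mockery() {{
setImposteriser(ClassImposteriser.INSTANCE);
}};
final JmsTemplate jmsTemplate = context.mock(JmsTemplate.class);

context.checking(new Expectations() {{
ignoring(jmsTemplate); // this test only cares about the "stuff", not the send
}});

SomeClass thing = new SomeClass();
thing.setJmsTemplate(jmsTemplate); // hypothetical setter, as above
thing.doit();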