The Spring DSL documentation provides a sample project -- café
I'm unsure about a couple of aspects of how this works. I'm pasting the relevant excerpts here (full source at the above link):
@Configuration
@EnableAutoConfiguration
@IntegrationComponentScan
public class Application {

    public static void main(String[] args) throws InterruptedException {
        ConfigurableApplicationContext ctx = SpringApplication.run(Application.class, args);
        Cafe cafe = ctx.getBean(Cafe.class);
        for (int i = 1; i <= 100; i++) {
            Order order = new Order(i);
            order.addItem(DrinkType.LATTE, 2, false);
            order.addItem(DrinkType.MOCHA, 3, true);
            cafe.placeOrder(order);
        }
        Thread.sleep(60000);
        ctx.close();
    }

    @MessagingGateway
    public interface Cafe {

        @Gateway(requestChannel = "orders.input")
        void placeOrder(Order order);
    }

    @Bean
    public IntegrationFlow orders() {
        return f -> f
                .split(Order.class, Order::getItems)
                .channel(c -> c.executor(Executors.newCachedThreadPool()))
                // SNIP
    }
}
Reading this example, I'm unclear on a couple of points:
The Cafe interface exposes a @Gateway that connects to requestChannel = "orders.input". However, this channel is not defined anywhere. How does this work?
The DSL snippet is not wired to consume from any channels, nor does it refer to the Cafe::placeOrder method -- how does it get connected to the orders.input channel to receive the inbound Order?
We just published (yesterday) a line-by-line tutorial for the café DSL sample which goes into a lot of detail about the internals.
When using the lambda version (f -> f.split()...), the framework declares an implicit DirectChannel with the bean name ("orders") + ".input" as its id.
You can also use return IntegrationFlows.from("myChannel"). ... .get() instead of the lambda expression and, again, the framework will auto-generate the channel if it is not already declared as a bean.
See the IntegrationFlows javadoc for more information.
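For illustration, the lambda flow above could be written in the explicit style like this (a sketch; the channel name matches the gateway's requestChannel, the rest mirrors the excerpt):

@Bean
public IntegrationFlow orders() {
    return IntegrationFlows.from("orders.input")
            .split(Order.class, Order::getItems)
            .channel(c -> c.executor(Executors.newCachedThreadPool()))
            // SNIP
            .get();
}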
cafe.placeOrder() is invoked on the last line of the for loop in the main method. The framework creates a proxy for the Cafe interface that wraps the Order object in a message and sends it to the gateway's request channel.
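Conceptually, the generated proxy method does something like this (a simplified sketch, not the actual framework code; ordersInput stands for the channel bean resolved from "orders.input"):

public void placeOrder(Order order) {
    Message<Order> message = MessageBuilder.withPayload(order).build();
    this.ordersInput.send(message);
}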
I've been trying to get Pub/Sub to work within a Spring application. To get up and running I've been reading through tutorials and documentation like this.
I can get things to build and start, but if I go through the Cloud Console to send a message to the test subscription, it never arrives.
This is what my code looks like right now:
@Configuration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {

    @Bean
    public GcpProjectIdProvider projectIdProvider() {
        return () -> "project-id";
    }

    @Bean
    public CredentialsProvider credentialsProvider() {
        return GoogleCredentials::getApplicationDefault;
    }

    @Bean
    public MessageChannel inputMessageChannel() {
        return new PublishSubscribeChannel();
    }

    @Bean
    @InboundChannelAdapter(channel = "inputMessageChannel", poller = @Poller(fixedDelay = "5"))
    public MessageSource<Object> pubsubAdapter(PubSubTemplate pubSubTemplate) {
        PubSubMessageSource messageSource = new PubSubMessageSource(pubSubTemplate, "tst-sandbox");
        messageSource.setAckMode(AckMode.MANUAL);
        messageSource.setPayloadType(String.class);
        messageSource.setBlockOnPull(false);
        messageSource.setMaxFetchSize(10);
        //pubSubTemplate.pull("tst-sandbox", 10, true);
        return messageSource;
    }

    // Define what happens to the messages arriving in the message channel.
    @ServiceActivator(inputChannel = "inputMessageChannel")
    public void messageReceiver(
            String payload,
            @Header(GcpPubSubHeaders.ORIGINAL_MESSAGE) BasicAcknowledgeablePubsubMessage message) {
        System.out.println("Message arrived via an inbound channel adapter from sub-one! Payload: " + payload);
        message.ack();
    }
}
My thinking was that the poller annotation would start a poller to run every so often to check for messages and send them to the method annotated with @ServiceActivator, but this is clearly not the case as it is never hit.
Interestingly enough, if I put a breakpoint right before "return messageSource" and check the result of the template.pull call, the messages ARE returned, so it is seemingly not an issue with the connection itself.
What am I missing here? Tutorials and documentation aren't helping much at this point as they all use pretty much the same bit of tutorial code as above...
I have tried variations of the above code, like creating the adapter instead of the MessageSource like so:
@Bean
public PubSubInboundChannelAdapter inboundChannelAdapter(
        @Qualifier("inputMessageChannel") MessageChannel messageChannel,
        PubSubTemplate pubSubTemplate) {
    PubSubInboundChannelAdapter adapter =
            new PubSubInboundChannelAdapter(pubSubTemplate, "tst-sandbox");
    adapter.setOutputChannel(messageChannel);
    adapter.setAckMode(AckMode.MANUAL);
    adapter.setPayloadType(String.class);
    return adapter;
}
to no avail. Any suggestions are welcome.
Found the problem after creating a Spring Boot project from scratch (the main project is plain Spring, not Boot). I noticed in the debug output that the new project was auto-starting the service activator bean and doing some other things, like actually subscribing to the channels, which the main project wasn't doing.
After a quick google the solution was simple: I had to add the
@EnableIntegration
annotation at class level, and the messages started coming in.
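For reference, a minimal sketch of the fix applied to the configuration class from the question (all beans and the @ServiceActivator method stay exactly as above):

@Configuration
@EnableIntegration
@Import({GcpPubSubAutoConfiguration.class})
public class PubSubConfigurator {
    // projectIdProvider, credentialsProvider, inputMessageChannel,
    // pubsubAdapter and messageReceiver unchanged
}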
We are using Spring Cloud Stream 2.0 with Kafka as the message broker.
We've implemented a circuit breaker which stops the application context for cases where the target system (DB or 3rd-party API) is unavailable, as suggested here: Stop Spring Cloud Stream @StreamListener from listening when target system is down
Now in Spring Cloud Stream 2.0 there is a way to manage the lifecycle of a binding using the actuator: Binding visualization and control
Is it possible to control the binding lifecycle from code, meaning: in case the target server is down, pause the binding, and when it's up again, resume it?
Sorry, I misread your question.
You can autowire the BindingsEndpoint but, unfortunately, its State enum is private, so you can't call changeState() programmatically.
I have opened an issue for this.
EDIT
You can do it with reflection, but it's a bit ugly...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So53476384Application {

    public static void main(String[] args) {
        SpringApplication.run(So53476384Application.class, args);
    }

    @Autowired
    BindingsEndpoint binding;

    @Bean
    public ApplicationRunner runner() {
        return args -> {
            Class<?> clazz = ClassUtils.forName("org.springframework.cloud.stream.endpoint.BindingsEndpoint$State",
                    So53476384Application.class.getClassLoader());
            ReflectionUtils.doWithMethods(BindingsEndpoint.class, method -> {
                try {
                    method.invoke(this.binding, "input", clazz.getEnumConstants()[2]); // PAUSED
                }
                catch (InvocationTargetException e) {
                    e.printStackTrace();
                }
            }, method -> method.getName().equals("changeState"));
        };
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
    }
}
I want to create a Spring Cloud Dataflow source application based on a lib that connects to a messaging service (IRC, actually) and calls my callback when a message arrives. The only goal of the source app is to create an SCDF message from the received IRC message and send it to the stream.
I have come up with the following solution:
The IrcListener class, annotated with @Component, does some configuration and starts listening for IRC messages when the start() method is called. When a message is received, its onGenericMessage callback simply sends the message to the stream via the injected source property:
@Component
public class IrcListener extends ListenerAdapter {

    @Override
    public void onGenericMessage(GenericMessageEvent event) {
        Message msg = new Message();
        msg.content = event.getMessage();
        source.output().send(MessageBuilder.withPayload(msg).build());
    }

    private Source source;

    private String _name;
    private String _server;
    private List<String> _channels;

    public void start() throws Exception {
        Configuration configuration = new Configuration.Builder()
                .setName(_name)
                .addServer(_server)
                .addAutoJoinChannels(_channels)
                .addListener(this)
                .buildConfiguration();
        PircBotX bot = new PircBotX(configuration);
        bot.startBot();
    }

    @Autowired
    public IrcListener(Source source) {
        this.source = source;
        _name = "ircsource";
        _server = "irc.rizon.net";
        _channels = Arrays.asList("#test".split(","));
    }
}
The main class runs Spring Application and calls the aforementioned start() method on the IrcListener component.
@EnableBinding(Source.class)
@SpringBootApplication
public class IrcStreamApplication {

    public static void main(String[] args) throws Exception {
        ConfigurableApplicationContext context = SpringApplication.run(IrcStreamApplication.class, args);
        context.getBean(IrcListener.class).start();
    }
}
This works OK and the messages are received and published to the stream successfully, but I'd like to know whether this is the right approach to take in the Spring (Cloud Dataflow) universe, or whether I am missing something important.
It looks ok; but, generally, message-driven sources extend MessageProducerSupport and call sendMessage(Message<?>) (overriding doStart() in this case).
That would give you access to message history tracking and error handling (if the send fails).
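As a rough sketch of that approach, assuming the same PircBotX setup (the class name and the single-thread executor are illustrative, not from the original code):

public class IrcMessageProducer extends MessageProducerSupport {

    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    @Override
    protected void doStart() {
        Configuration configuration = new Configuration.Builder()
                .setName("ircsource")
                .addServer("irc.rizon.net")
                .addAutoJoinChannels(Arrays.asList("#test"))
                .addListener(new ListenerAdapter() {

                    @Override
                    public void onGenericMessage(GenericMessageEvent event) {
                        // sendMessage() is inherited from MessageProducerSupport;
                        // it adds message history tracking and routes failures to
                        // the configured error channel
                        sendMessage(MessageBuilder.withPayload(event.getMessage()).build());
                    }

                })
                .buildConfiguration();
        // startBot() blocks, so run it on its own thread
        this.executor.execute(() -> {
            try {
                new PircBotX(configuration).startBot();
            }
            catch (Exception e) {
                throw new IllegalStateException(e);
            }
        });
    }

    @Override
    protected void doStop() {
        this.executor.shutdownNow();
    }
}

The output channel would then be set on the producer with setOutputChannel(source.output()) where the bean is declared, instead of injecting Source into the listener.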
I am using the Hystrix javanica collapser in Spring Boot, but I found it did not work; my code is just like this:
Service class:
public class TestService {

    @HystrixCollapser(batchMethod = "getStrList")
    public Future<String> getStr(String id) {
        System.out.println("single");
        return null;
    }

    @HystrixCommand
    public List<String> getStrList(List<String> ids) {
        System.out.println("batch,size=" + ids.size());
        List<String> strList = Lists.newArrayList();
        ids.forEach(id -> strList.add("test"));
        return strList;
    }
}
Where I use it:
public static void main(String[] args) {
    TestService testService = new TestService();
    HystrixRequestContext context = HystrixRequestContext.initializeContext();
    Future<String> f1 = testService.getStr("111");
    Future<String> f2 = testService.getStr("222");
    try {
        Thread.sleep(3000);
        System.out.println(f1.get()); // nothing printed
        System.out.println(f2.get()); // nothing printed
    } catch (Exception e) {
    }
    context.shutdown();
}
It printed "single" three times instead of one batch.
I want to know what's wrong with my code; a valid example would be better.
I couldn't find a Hystrix javanica collapser sample on the internet, so I had to read the source code to solve this problem. Now it's solved, and this is my summary:
When you use the Hystrix (javanica) collapser in Spring Boot, you have to do the following (a corrected sketch follows this list):
Define a hystrixAspect Spring bean and import hystrix-strategy.xml;
Annotate the single method with @HystrixCollapser and the batch method with @HystrixCommand;
Make the single method take one parameter (ArgType) and return a Future<RetType>, and the batch method take a List<ArgType> and return a List<RetType>; make sure the size of the returned list equals the size of the argument list;
Set the Hystrix properties batchMethod and scope; if you want to collapse requests from multiple user threads, you must set the scope to GLOBAL;
Before you submit a single request, initialize the Hystrix context with HystrixRequestContext.initializeContext(), and shut the context down when your request finishes.
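A hedged sketch of the service with those fixes applied (the annotation values are assumptions based on the summary above, not a tested configuration; note also that the service must be obtained as a Spring bean so the javanica aspect can proxy it, not created with new):

@Service
public class TestService {

    @HystrixCollapser(batchMethod = "getStrList",
            scope = com.netflix.hystrix.HystrixCollapser.Scope.GLOBAL,
            collapserProperties = {
                    // collect single calls for up to 100 ms before batching (illustrative value)
                    @HystrixProperty(name = "timerDelayInMilliseconds", value = "100") })
    public Future<String> getStr(String id) {
        return null; // never executed: the aspect routes the call to the batch method
    }

    @HystrixCommand
    public List<String> getStrList(List<String> ids) {
        List<String> strList = new ArrayList<>(ids.size());
        ids.forEach(id -> strList.add("test")); // one result per id, same order
        return strList;
    }
}

And the aspect bean mentioned above, declared in a configuration class:

@Bean
public HystrixCommandAspect hystrixAspect() {
    return new HystrixCommandAspect();
}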
I have a service called TestService which extends AbstractVerticle:
public class TestService extends AbstractVerticle {

    @Override
    public void start() throws Exception {
        // Do things
    }
}
I then deploy that verticle with vertx like this:
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(TestService.class.getName());
How can I get a reference to my deployed TestService after vertx instantiates it?
You should use an alternative method for deployment:
vertx.deployVerticle(TestService.class.getName(), deployment -> {
    if (deployment.succeeded()) {
        // here is your ID
        String deploymentId = deployment.result();
    } else {
        // deployment failed...
    }
});
If you're just interested in listing all deployed verticles then you can just request the list of ids:
vertx.deploymentIDs()
I know this question is old; however, it may be useful to someone to see an example of how to do this.
You will often see examples for deployment like this from vertx-examples.
This follows the asynchronous microservice framework style; however, it's really easy to get the reference, as the method deployVerticle (see line 29 in the link) will take an instance, as shown in the simple example below, and you can get a reference in the callback as shown.
The example is in Kotlin but translates easily to Java.
class MyVert : io.vertx.core.AbstractVerticle() {

    override fun start() {
        // init
    }

    fun someFunction() {
    }
}

fun main() {
    val vertx = Vertx.vertx()
    val myVert = MyVert()
    vertx.deployVerticle(myVert) {
        if (it.succeeded()) {
            myVert.someFunction()
        } else {
            println(it.cause().localizedMessage)
        }
    }
}
You can get all deployed verticles in the current vertx instance this way:
Set<String> strings = vertx.deploymentIDs();
strings.stream()
        .map(id -> ((VertxImpl) vertx.getDelegate()).getDeployment(id))
        .forEach(deployment -> System.out.println(deployment.verticleIdentifier() + " " + deployment.isChild()));
Looks like the vertx API does not allow you to retrieve the Verticle objects once they are deployed, maybe because verticles can be distributed over multiple JVMs.
I needed to do it for unit tests, though, and I came up with this.
This is unreliable since it relies on VertxImpl (it can break at any vertx version upgrade), but I prefer this over changing production code to be able to test it.
private static <T extends Verticle> List<T> retrieveVerticles(Vertx vertx, Class<T> verticleClass) {
    VertxImpl vertxImpl = (VertxImpl) vertx;
    return vertxImpl.deploymentIDs().stream()
            .map(vertxImpl::getDeployment)
            .map(Deployment::getVerticles)
            .flatMap(Set::stream)
            .filter(verticleClass::isInstance)
            .map(verticleClass::cast)
            .collect(Collectors.toList());
}
Usage example:
vertx.deployVerticle(new MainVerticle());
// some MyCustomVerticle instances are deployed from the MainVerticle.start
// you can't reach the MyCustomVerticle objects from there
// so the trick is to rely on VertxImpl
List<MyCustomVerticle> deployedVerticles = retrieveVerticles(vertx, MyCustomVerticle.class);