How should I build my Messages in Spring Integration? - java

I have an application I coded which I am refactoring to make better use of Spring Integration. The application processes the contents of files.
The problem (as I see it) is that my current implementation passes Files instead of Messages, i.e. Spring Integration Messages.
In order to avoid further rolling my own code, which I then have to maintain later, I'm wondering if there is a recommended structure for constructing Messages in Spring Integration. What I wonder is if there is some recommended combination of channel with something like MessageBuilder that I should use.
Process/Code (eventually)
I don't yet have the code to configure it but I would like to end up with the following components/processes:
1) Receive a file, remove its header and footer, and turn each remaining line into a Message<String> (this, it seems, will actually be a Splitter), which I send on to...
2) A channel/endpoint sends the message to a Router.
3) The Router detects a format String in the payload and routes to the appropriate channel, similar to the Order Router here...
4) The selected channel then builds the appropriate type of Message, i.e. specifically typed messages. For example, I have the following builder to build a Message...
public class ShippedBoxMessageBuilder implements CustomMessageBuilder {

    @Override
    public Message buildMessage(String input) {
        ShippedBox shippedBox = (ShippedBox) ShippedBoxFactory.manufactureShippedFile(input);
        return MessageBuilder.withPayload(shippedBox).build();
    }
    ...
5) The Message is routed by type to the appropriate processing channel.
My intended solution may seem over-complicated. However, I've purposefully separated two tasks: 1) breaking a file into many Message<String> lines and 2) converting each Message<String> into a Message<SomeType>. Because of that, I think I need an additional router/message builder for the second task.

Actually, there is MessageBuilder support in Spring Integration.
The general purpose of such frameworks is to help back-end developers decouple their domain code from the messaging infrastructure. Ultimately, to work with Spring Integration you need to follow the POJO and method-invocation principles.
You write your own services, transformers and domain models. Then you just use some out-of-the-box components (e.g. <int-file:inbound-channel-adapter>) and refer from there to your POJOs, but not vice versa.
I recommend you read the Spring Integration in Action book to get a fuller picture of the matter.
Can you explain why you need to deal with Spring Integration components directly?
UPDATE
1) Breaking a file into many lines of Messages
The <splitter> is for you. You should write some POJO which returns a List<String> - the lines from your file without the header and footer. How to read lines from a File isn't a task for Spring Integration, especially if the "line" is something logical rather than a literal file line.
2) Converting Messages into Messages
Once more: there is no reason to build the Message object yourself. It's enough to build a new payload in some transformer (again, a POJO), and the framework will wrap it in a Message for sending.
The Payload Type Router speaks for itself: it checks the payload type, not the Message type.
Of course, the payload can be a Message too, and so can any header.
Anyway, your Builder snippet shows exactly the creation of a plain Spring Integration Message in the end. And as I said: it is enough just to transform one payload to another and return it from some POJO, which you will use as a transformer reference.
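To make that concrete, here is a minimal sketch of the POJO approach described above (the class names FileLineSplitter and ShippedBoxTransformer are made up for illustration; ShippedBox and ShippedBoxFactory are from the question). The splitter POJO would be referenced from a <splitter>, the transformer POJO from a <transformer>, and a payload-type router could then route on the resulting ShippedBox type:

import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.List;

// Referenced from a <splitter>: each element of the returned List
// becomes the payload of its own Message<String>.
class FileLineSplitter {

    public List<String> split(File file) throws IOException {
        List<String> lines = Files.readAllLines(file.toPath());
        // drop the header (first line) and the footer (last line)
        return lines.subList(1, lines.size() - 1);
    }
}

// Referenced from a <transformer>: the framework wraps the returned
// ShippedBox payload in a Message<ShippedBox> automatically.
class ShippedBoxTransformer {

    public ShippedBox transform(String line) {
        return (ShippedBox) ShippedBoxFactory.manufactureShippedFile(line);
    }
}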

Related

Best practice for including data persistence inside integration flow (SpringIntegration)

My question is related to finding a best practice to include data persistence inside an integration flow while returning the Message object so that it can be further processed by the flow.
Let's consider the following flow:
@Bean
IntegrationFlow myFlow() {
    return flowDefinition ->
            flowDefinition
                    .filter(filterUnwantedMessages)
                    .transform(messageTransformer)
                    .wireTap(flow -> flow.trigger(messagePayloadPersister)) // <-- here is the interesting part
                    .handle(terminalHandler);
}
In the wide majority of cases, instead of the wireTap, I have seen projects use a Transformer to persist data, which I do not particularly like: the name implies transformation of a message, and persistence is something else.
My wish is to find alternatives to the wireTap, and a colleague of mine proposed using @ServiceActivator:
@Bean
IntegrationFlow myFlow() {
    return flowDefinition ->
            flowDefinition
                    .filter(filterUnwantedMessages)
                    .transform(messageTransformer)
                    .handle(messagePayloadPersister)
                    .handle(terminalHandler);
}

@Component
class MessagePayloadPersister {

    @ServiceActivator // <-- interesting, but..
    public Message<?> handle(Message<?> msg) {
        // persist the payload somewhere..
        return msg;
    }
}
I like the flow, it looks clean now, but I am also not 100% happy with the solution, as I am mixing the DSL with Spring annotations.
Note: org.springframework.messaging.MessageHandler is no good here because its handle method returns void, so it is a terminal part of the flow. I need a method that returns a Message object.
Is there any way to do this?
I need to understand what you are going to do with that persisted data in the future, and what information from the message you are going to store (or whether you want to store the whole message at all).
See these parts of the documentation - maybe something there will give you some ideas:
https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/system-management.html#message-store
https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/system-management.html#metadata-store
https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/message-transformation.html#claim-check
https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/core.html#persistent-queuechannel-configuration
https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/jdbc.html#jdbc-outbound-channel-adapter
With the last one you may want to consider using a publishSubscribeChannel() in the Java DSL, so that one subscriber stores the data in the DB and a second subscriber continues the flow.
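A rough sketch of that publishSubscribeChannel() idea, reusing the question's hypothetical beans (filterUnwantedMessages, messageTransformer, messagePayloadPersister, terminalHandler); the main flow after the pub-sub channel acts as the second subscriber:

@Bean
IntegrationFlow myFlow() {
    return flowDefinition ->
            flowDefinition
                    .filter(filterUnwantedMessages)
                    .transform(messageTransformer)
                    .publishSubscribeChannel(pubSub -> pubSub
                            // first subscriber: persist the payload (e.g. a JDBC outbound adapter)
                            .subscribe(sub -> sub.handle(messagePayloadPersister)))
                    // the main flow continues as the second subscriber
                    .handle(terminalHandler);
}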

Do we need to stub the other micro service in Spring cloud contract

@marcin
I am doing a pilot on implementing Spring Cloud Contract for microservices, where around 50+ services talk to each other. I have a few questions for which I haven't found a precise answer in your documentation.
The service I am building has a controller which processes and transforms my input payload into the desired output in JSON format. This JSON is used to build the structure that should match the response in the Groovy contract. However, the controller is sending JSON to another service at some URL, as shown below.
request_url=http://localhost:8090/services/rest/transact/v2/pay/validate/0000118228/new response_body=null
Basically it is expecting the response back from the other service by making use of this JSON, and right now response_body=null.
My question is: do I need to create a stub or mock the service, to make use of this response as input to produce the expected output? Basically the microservice is expecting a ServiceResponse.
Another question is: do we need to load in-memory data while doing the contract testing, or do we just need to test the controller itself?
I don't really follow your description... "The service which I am building has controller which transforms my input payload sent from groovy and giving the desired output in json format". Sent from which Groovy? A Groovy application? Can you explain that in more depth?
But I guess I can try to answer the question anyway...
My question is do I need to create a stub or mock the service? to make use of this response as input to produce expected output from the response. It is expecting a ServiceResponse.
If I understand correctly - by service you mean a class, not an application? If that's the case then, yes, I would inject a stubbed service into the controller.
Another question is do we need to load in-memory data while doing the contract testing or do we need to just test the controller itself?
That's connected with the previous answer. Your controller doesn't delegate work to any real implementation of a service, so no access to the DB takes place. If you check out the samples (https://github.com/spring-cloud-samples/spring-cloud-contract-samples/blob/master/producer/src/test/java/com/example/BeerRestBase.java) you'll see that the base class has mocks injected into it and no real integration takes place.
EDIT:
"The service which I am building has controller which transforms my input payload sent from groovy and giving the desired output in json format" is actually the description of what is done via the Spring Cloud Contract generated test. The next sentence was
However the controller, is sending json to another services with some URL as shown below.
In Contract testing, I don't care what your controller further on does. If it's in the controller where you send the request to some other application then you should wrap it in a service class. Then such a service you would mock out in your contract tests. What we care about in the Contract tests is whether we can communicate. Not whether the whole end to end functionality is working correctly.
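A rough illustration of that suggestion, with made-up names (PaymentController, PaymentService, PaymentContractBase, and the ServiceResponse constructor): the base class that the generated contract tests extend wires the controller with a mocked-out service, so no real call to the other application is made:

import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.Before;
import org.mockito.BDDMockito;
import org.mockito.Mockito;

public abstract class PaymentContractBase {

    @Before
    public void setup() {
        // The service class wraps the HTTP call to the other application.
        PaymentService paymentService = Mockito.mock(PaymentService.class);
        // Stub the call that would otherwise hit the remote service.
        BDDMockito.given(paymentService.validate("0000118228"))
                .willReturn(new ServiceResponse("VALID"));
        // The controller under test uses the mock, not the real service.
        RestAssuredMockMvc.standaloneSetup(new PaymentController(paymentService));
    }
}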

Dropwizard registering two classes/clients

I am creating two client instances with different configs (timeout, thread pool, etc.), and I would like to leverage Dropwizard's metrics on both clients.
final JerseyClientBuilder jerseyClientBuilder = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration());
final Client config1Client = jerseyClientBuilder.build("config1Client");
environment.jersey().register(config1Client);
final Client config2Client = jerseyClientBuilder.build("config2Client");
environment.jersey().register(config2Client);
However, I am getting
org.glassfish.jersey.internal.Errors: The following warnings have been detected:
HINT: Cannot create new registration for component type class org.glassfish.jersey.client.JerseyClient:
Existing previous registration found for the type.
And only one client's metrics show up.
How do I track both clients' metrics, or is it not common to have two clients in a single Dropwizard app?
Never mind, it turned out I was an idiot (for trying to save some resources on the ClientBuilder).
Two things I did wrong in my original code:
1. You don't need to register the Jersey clients; registering the resource is enough... somehow I missed the resource part in my code and was just straight up trying to register the clients.
2. You need to create a separate JerseyClientBuilder for each client and build your individually configured clients from them; Dropwizard will then track each client's metrics separately.
In the end, I just had to change my code to the following:
final Client config1Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config1Client");
final Client config2Client = new JerseyClientBuilder(environment)
.using(configuration.getJerseyClientConfiguration()).build("config2Client");
Doh.
environment.jersey().register() has a javadoc description of "Adds the given object as a Jersey singleton component", meaning that the objects registered become part of the Jersey dependency-injection framework. This method is primarily used to add resource classes to the Jersey context, but any object with an annotation or type that Jersey looks for can be added this way. Additionally, since they are singletons, you can only have one of them per concrete type (which is why you are getting a "previous registration" error from Jersey).
I imagine that you want two Jersey clients to connect to two different external services via REST/HTTP. Since your service needs to talk to these others to do its work, you'll want the clients accessible wherever the "work" or business logic is performed.
For example, this guide creates a resource class that requires a client to an external HTTP service to do currency conversions. I'm not saying this is a great example (it's just a top Google result for "dropwizard external client example"). In fact, I don't think it's a good way to structure your application. I'd create several internal objects that hide from the resource class how the currency information is fetched, like a business object (BO) or data access object (DAO), etc.
For your case, you might want something like this (think of these as constructor calls), where JC = Jersey client, R = resource object, B = business logic object:
JC1()
JC2()
B1(JC1)
B2(JC2)
R1(B1)
R2(B2)
R3(B1, B2)
environment.jersey().register(R1)
environment.jersey().register(R2)
environment.jersey().register(R3)
The official Dropwizard docs are somewhat helpful. They at least explain how to create a jersey client; they don't explain how to structure your application.
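A rough Java sketch of the constructor-call wiring above, with made-up class names (PaymentsDao, InventoryDao, PaymentsResource, InventoryResource, CheckoutResource): each Jersey client is hidden behind a business/data-access object, and only resources are registered with Jersey:

final Client paymentsClient = new JerseyClientBuilder(environment)
        .using(configuration.getJerseyClientConfiguration())
        .build("paymentsClient");
final Client inventoryClient = new JerseyClientBuilder(environment)
        .using(configuration.getJerseyClientConfiguration())
        .build("inventoryClient");

// business objects wrap the clients (JC1 -> B1, JC2 -> B2)
final PaymentsDao paymentsDao = new PaymentsDao(paymentsClient);
final InventoryDao inventoryDao = new InventoryDao(inventoryClient);

// only resources are registered with Jersey (R1, R2, R3)
environment.jersey().register(new PaymentsResource(paymentsDao));
environment.jersey().register(new InventoryResource(inventoryDao));
environment.jersey().register(new CheckoutResource(paymentsDao, inventoryDao));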
If you're using the Jersey client builder from dropwizard, each of the clients that you create should be automatically registered to record metrics. Make sure you're using the client builder from the dropwizard-client artifact and package io.dropwizard.client. (Looks like you are because you have the using(config) method.)

How to access vertx HttpClientRequest fields?

I am writing a web server in Java using Vert.x.
I use the server as a proxy to other services, and I'm at the testing stage. I want to verify that I have created the request correctly, with custom tokens and headers.
But I can't find a way to read those properties after creating the request.
HttpClientRequest clientRequest = vertx.createHttpClient().request(HttpMethod.GET,80,"host","/path?query=value");
When I try to read the host with clientRequest.getHost() I get null, but in debug, inspecting its values, I can see a property named delegate which contains all of its data.
How can I access those values from clientRequest?
What you see in debug is:
((HttpClientRequestImpl) req).host
while the getHost() method actually returns the hostHeader.
For testing purposes I suggest casting your HttpClientRequest to HttpClientRequestImpl, as it exposes more data.
If everything else fails, you can also fall back to reflection, of course.
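A minimal sketch of both suggestions (the cast and the reflection fallback), assuming this runs inside a test method that declares throws Exception; whether the host field is directly visible depends on the Vert.x version:

import io.vertx.core.http.HttpClientRequest;
import io.vertx.core.http.HttpMethod;
import io.vertx.core.http.impl.HttpClientRequestImpl;
import java.lang.reflect.Field;

HttpClientRequest clientRequest = vertx.createHttpClient()
        .request(HttpMethod.GET, 80, "host", "/path?query=value");

// cast to the implementation class, which carries more state than the interface exposes
HttpClientRequestImpl impl = (HttpClientRequestImpl) clientRequest;

// reflection fallback: read the "host" field seen in the debugger
Field hostField = HttpClientRequestImpl.class.getDeclaredField("host");
hostField.setAccessible(true);
String host = (String) hostField.get(impl);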

Apache Camel - Multicast - Is there a 'null' or a similar endpoint in Camel?

Please excuse stupidity as this is my first Camel application
To respond to a web request, I am sourcing the content from two different sources.
I am, therefore, making a multicast request to two methods and parallelizing it.
The response is a marshalled JSON object (using camel-jackson).
All works fine.
public class RestToBeanRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("cxfrs://bean://rsServer")
            .multicast()
                .parallelProcessing()
                .aggregationStrategy(new CoreSearchResponseAggregator())
                .beanRef("searchRestServiceImpl", "firstMethod")
                .beanRef("searchRestServiceImpl", "secondMethod")
            .end()
            .marshal().json(JsonLibrary.Jackson)
            .to("log://camelLogger?level=DEBUG");
    }
}
Question:
The multicast routing expects a to() in the DSL. Currently I am mapping this to a log endpoint. Is this fine?
Since I am not really using the to(), and the last exchange produced by the aggregation strategy is the one which is returned to the user, should my endpoint be configured to something else - like a null endpoint or similar? (Ah, the stupidity kicks in)
For the benefit of SO visitors, copying the solution given on the Camel mailing list here:
by Robert Simmons Jr. MSc. - Lead Java Architect @ EA
Author of: Hardcore Java (2003) and Maintainable Java (2012)
The aggregated exchange is the one that gets returned, and how the aggregated exchange is created depends on the aggregation strategy you use. When a route stops, either by calling stop or merely by not routing any further, the exchange at the last part of the route can be considered a reply. In most cases it will be sent back to the caller (unless you set a reply-to destination in a JMS-based route, or in some other cases). In your case, if all you want to do is return the enriched exchange, then you don't need any to() call. Just stop after the marshal.
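Applied to the route from the question, that advice amounts to simply ending after marshal() (a sketch only, reusing the question's own bean and class names); the aggregated, JSON-marshalled exchange is then returned to the CXF RS caller:

public class RestToBeanRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("cxfrs://bean://rsServer")
            .multicast()
                .parallelProcessing()
                .aggregationStrategy(new CoreSearchResponseAggregator())
                .beanRef("searchRestServiceImpl", "firstMethod")
                .beanRef("searchRestServiceImpl", "secondMethod")
            .end()
            // no to() needed: the marshalled exchange becomes the reply
            .marshal().json(JsonLibrary.Jackson);
    }
}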
