Launch a batch job based on a JSON parameter in Spring XD

I assume that this is not possible as it is not listed in the XD documentation.
What I am looking for is a way to launch a job dynamically from a RabbitMQ message that contains the jobName in the payload. This would allow me to have a single job queue to which all my jobs are sent, rather than having to have a separate queue for each job.
{
  "jobName": "myJob",
  "jobParm1": "parm1",
  "jobParm2": "parm2"
}
This post shows that it is possible using HTTP.

You could construct an XD stream that reads from RabbitMQ, transforms the payload, and invokes the http-client processor (dumping the output to the null or log sink).
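A sketch of what such a stream could look like in the XD shell (the queue name and admin URL are placeholders, the transform expression that builds the launch request from the jobName field is elided, and module option names may differ slightly between XD versions):

stream create --name job-launcher --definition "rabbit --queues=job-requests | transform --expression=... | http-client --url='...' --httpMethod=POST | log" --deploy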


Use Apache Flink to only produce messages to a Kafka topic

From the examples I have seen the code snippet below, and it works fine. But the problem is that I don't always have a requirement to process an input stream and produce it to a sink.
What if I have an application where, based on some events, I only have to publish to a Kafka topic so that downstream applications can make certain decisions? That means I don't really have an input stream; I just know that when something happens in my application, I need to publish a message to a particular Kafka topic. That is, I only need a sink.
I was going through examples but didn't find anything matching my requirements. Is there a way to configure only a KafkaSink that exposes a method to be called for publishing messages to a topic?
Many thanks in advance!!
String inputTopic = "flink_input";
String outputTopic = "flink_output";
String consumerGroup = "baeldung";
String address = "localhost:9092";

StreamExecutionEnvironment environment = StreamExecutionEnvironment
        .getExecutionEnvironment();

// createStringConsumerForTopic and createStringProducer are helper factory
// methods from the example this snippet comes from (defined elsewhere).
FlinkKafkaConsumer011<String> flinkKafkaConsumer = createStringConsumerForTopic(
        inputTopic, address, consumerGroup);
DataStream<String> stringInputStream = environment
        .addSource(flinkKafkaConsumer);

FlinkKafkaProducer011<String> flinkKafkaProducer = createStringProducer(
        outputTopic, address);

stringInputStream
        .map(new WordsCapitalizer())
        .addSink(flinkKafkaProducer);

// Without this call the job is only defined, never actually run.
environment.execute();
You must have a source. You might want to implement a custom source, or you could use something like a NumberSequenceSource followed by an operator like a process function that emits whatever you know you want to write to the sink, followed by the sink.
That process function could, for example, transform the incoming events into whatever you want to write to Kafka, or it could ignore its inputs and use a timer to generate the events to be sent to Kafka.
Or you might find that async I/O is a better building block than a process function, depending on your requirements.
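For illustration, a minimal produce-only job might look like this. This is a sketch assuming a newer Flink (1.15+) where the KafkaSink and NumberSequenceSource APIs are available; the topic, bootstrap servers, and the event-construction map are placeholders:

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.connector.source.lib.NumberSequenceSource;
import org.apache.flink.connector.base.DeliveryGuarantee;
import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
import org.apache.flink.connector.kafka.sink.KafkaSink;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ProduceOnlyJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The sink that actually publishes to Kafka.
        KafkaSink<String> sink = KafkaSink.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setRecordSerializer(KafkaRecordSerializationSchema.builder()
                        .setTopic("flink_output")
                        .setValueSerializationSchema(new SimpleStringSchema())
                        .build())
                .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                .build();

        // NumberSequenceSource only provides "ticks"; the map ignores their
        // values and builds whatever the application wants to publish.
        env.fromSource(new NumberSequenceSource(0, 999),
                        WatermarkStrategy.noWatermarks(), "ticks")
                .map(i -> "event-" + i)   // replace with real event construction
                .returns(Types.STRING)    // helps type extraction for the lambda
                .sinkTo(sink);

        env.execute("produce-only-job");
    }
}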

How to identify Azure Service Bus queue is empty

I'm working on an application which processes messages from multiple Azure Service Bus queues. In order to optimize resources, I need to prioritize one queue and process its messages first; once it's empty, process the next queue. To do that, I'm looking for a way to find out whether a particular queue is empty. It would be really good if there were a way to set up an empty-queue listener.
Currently I have implemented my code to process just one queue with ServiceBusProcessorClient.
this.processor = new ServiceBusClientBuilder()
        .connectionString(connectionString)
        .processor()
        .queueName(queueName)
        .processMessage(onMessage)
        .processError(onError)
        .disableAutoComplete()                        // settle messages manually
        .prefetchCount(1)                             // no local buffering
        .maxConcurrentCalls(1)                        // one message at a time
        .receiveMode(ServiceBusReceiveMode.PEEK_LOCK)
        .buildProcessorClient();
As Gaurav Mantri mentioned in the comments, you can use getTotalMessageCount() to get the number of messages in the queue.
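For example, a minimal sketch using the administration client from the azure-messaging-servicebus SDK (the class and queue name are illustrative). Note that the total count also includes scheduled and dead-lettered messages, so the active count is usually the better test for "empty":

import com.azure.messaging.servicebus.administration.ServiceBusAdministrationClient;
import com.azure.messaging.servicebus.administration.ServiceBusAdministrationClientBuilder;
import com.azure.messaging.servicebus.administration.models.QueueRuntimeProperties;

public class QueueDepthCheck {
    // Returns true when the queue has no active (deliverable) messages.
    static boolean isQueueEmpty(String connectionString, String queueName) {
        ServiceBusAdministrationClient admin = new ServiceBusAdministrationClientBuilder()
                .connectionString(connectionString)
                .buildClient();
        QueueRuntimeProperties props = admin.getQueueRuntimeProperties(queueName);
        long total = props.getTotalMessageCount();   // active + scheduled + dead-lettered
        return props.getActiveMessageCount() == 0;
    }
}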

Camel Exchange on different routes in the same route builder

My REST application posts data to a queue (Q1) on RabbitMQ. A separate application reads from Q1, processes the data, and posts the result to Q2. My application reads the data from Q2 and returns the result. Many clients use these two queues, so I generate a UUID and set it in the header so that I can listen on Q2 (the response queue). I then inspect each incoming message and match the UUID in its header against the one I generated when I posted to Q1.
from("direct:test")
.choice().when(isValid)
.bean(FOOProcessor.class, "setFooQuery")
.to(FOO_REQUEST_QUEUE).log(LoggingLevel.INFO, "body=${in.body}")
.otherwise()
.setBody(constant("error"))
.setHeader(Exchange.HTTP_RESPONSE_CODE, constant(400)).log(LoggingLevel.INFO, "body=${in.body}")
.to("direct:error");
from(FOO_RESPONSE_QUEUE)
.unmarshal(new JacksonDataFormat(JsonNode.class))
.bean(FooProcessor.class, "setFooResponse")
.to("direct:end");
from("direct:error").log(LoggingLevel.DEBUG, "end");
from("direct:end").log(LoggingLevel.DEBUG, "end");
The trouble is the two "from" statements: they create two separate Camel exchanges, and I can't get at the original UUID. Any suggestions?
I solved this by using a processor with a route builder embedded in it (with its own producer and consumer). The processor gets a reference to the main exchange through its
process(final Exchange exchange)
method, so the UUID generated for the request is still in scope when the matching reply is consumed.
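A rough sketch of that idea (the endpoint URIs, header name, and timeout are illustrative, not taken from the original code; a real multi-client setup would also need to re-queue replies that belong to other consumers instead of discarding them):

import java.util.UUID;
import org.apache.camel.CamelContext;
import org.apache.camel.ConsumerTemplate;
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;

public class FooRequestReplyProcessor implements Processor {
    @Override
    public void process(final Exchange exchange) throws Exception {
        CamelContext context = exchange.getContext();
        ProducerTemplate producer = context.createProducerTemplate();
        ConsumerTemplate consumer = context.createConsumerTemplate();

        // Correlate request and reply with a UUID carried in a header.
        String correlationId = UUID.randomUUID().toString();
        producer.sendBodyAndHeader("rabbitmq:FOO_REQUEST_QUEUE",
                exchange.getIn().getBody(), "correlationId", correlationId);

        // Poll the response queue until the reply carrying our UUID arrives.
        while (true) {
            Exchange reply = consumer.receive("rabbitmq:FOO_RESPONSE_QUEUE", 30000);
            if (reply == null) {
                throw new IllegalStateException("Timed out waiting for reply " + correlationId);
            }
            if (correlationId.equals(reply.getIn().getHeader("correlationId", String.class))) {
                exchange.getIn().setBody(reply.getIn().getBody());
                return;
            }
            // A reply for another client: a real implementation would re-queue it.
        }
    }
}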

Kafka - producer - handle "failed to send"

I'm running Kafka 0.8 and built a producer using the provided Java API.
The API functions for sending a message (or messages) return void.
Is there a way to get the status of a sent message, i.e. whether it succeeded or failed?
This is extremely important to us since we read the messages from a file and want to delete the file after all messages have been sent. But if there were errors, some messages weren't sent, and I deleted the file, that would cause the loss of very important data.
You can configure your producer to wait until it gets n acks from the Kafka cluster (request.required.acks) so that you have some kind of guarantee that the data has been committed properly before deleting your source file.
If you really need to be sure that the send succeeded, you might want to consider making the producer synchronous (producer.type=sync). This way you will be able to catch any exception thrown by the blocking invocation and act accordingly. The exception thrown by send() is kafka.common.FailedToSendMessageException.
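A small sketch of that synchronous setup against the 0.8-era producer API (the topic name and payload are placeholders):

import java.util.Properties;
import kafka.common.FailedToSendMessageException;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class SyncSendExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("metadata.broker.list", "localhost:9092");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("producer.type", "sync");        // block until the broker responds
        props.put("request.required.acks", "1");   // or "-1" to wait for all in-sync replicas

        Producer<String, String> producer = new Producer<>(new ProducerConfig(props));
        try {
            producer.send(new KeyedMessage<>("myTopic", "a line read from the file"));
            // Reaching this point means the broker acknowledged the message,
            // so it is safe to mark this line as sent.
        } catch (FailedToSendMessageException e) {
            // Keep the file: the message was not delivered after all retries.
        } finally {
            producer.close();
        }
    }
}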
Kafka's Java API is not ideal, but I hope this helps.

JMS Listener & Sender - Spring Framework

I want to understand, and need to modify, a Java program that was developed using the Spring JMS framework. It has a JMS receiver and sender: it receives a message from a request queue and invokes a job (another Java program); once the job is completed, the sender sends a response to the response queue. I have a couple of questions:
1. The request message is not deleted until the response is posted to the response queue successfully. How is this achieved? What is the logic behind it?
2. I want to write the response to a flat file when the sender fails to send the message (by catching the JMS exception). Once the sender queue is up and running again, I will read the flat file and send the responses. The reason is that job processing can take hours; if the job fails, the input message will be read again by the receiver, and I want to avoid duplicate processing. Please suggest your ideas here.
Without seeing the configuration it's hard to answer these questions, but the best guess is that #1 works because the app is using a transacted session. This means all updates on that session are not completed until the transaction is committed.
For #2: just catch the exception and write the data; as long as the transaction commits (because you caught the exception), the input message will be removed.
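As an illustration only, since the actual configuration isn't shown, a transacted Spring listener container might look roughly like this (queue name and listener body are placeholders):

import javax.jms.ConnectionFactory;
import javax.jms.MessageListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;

public class TransactedListenerSetup {
    static DefaultMessageListenerContainer build(ConnectionFactory connectionFactory) {
        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(connectionFactory);
        container.setDestinationName("request.queue");   // placeholder name
        // With a transacted session, the receive and any sends on the same
        // session commit or roll back together: the request message is only
        // removed when the transaction commits.
        container.setSessionTransacted(true);
        container.setMessageListener((MessageListener) message -> {
            try {
                // Run the job and send the response on the same session here.
            } catch (Exception e) {
                // Catching the exception lets the transaction commit, so the
                // request is consumed; persist the response to a flat file
                // here for later replay.
            }
        });
        container.afterPropertiesSet();
        return container;
    }
}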
