I am struggling to find a fully fledged example of how to use Apache Camel in the Spring Boot framework for the purpose of a polling consumer.
I have looked at this: https://camel.apache.org/manual/latest/polling-consumer.html as well as this: https://camel.apache.org/components/latest/timer-component.html but the code examples are not complete enough for me to understand what it is that I need to do to accomplish my task in Java.
I'm typically a C# developer, so a lot of these small references to things don't make sense.
I am seeking a Java example of the following, including all the imports and other dependencies that are required to get this to work.
What I am trying to do is the following:
A web request is made to an endpoint, which should trigger the start of a polling consumer
The polling consumer needs to poll another web endpoint with a provided "ID" that is passed to the consumer at the time it is triggered.
The polling consumer should poll every X seconds (let's say 5 seconds).
Once a specific successful response is received from the endpoint we are polling, the consumer should stop polling and send a message to another web endpoint.
I would like to know if this is possible, and if so, can you provide a small example of everything that is needed to achieve this (as the documentation from the Camel website is extremely sparse in terms of imports and class structure etc.)?
After discussions with some fellow Java colleagues, they have assured me that this use case is not one that Camel is designed for. This is the reason it was so difficult to find anything on the internet before I posted this question.
For those that are seeking this answer via Google, the best suggested approach is to use a different tool or just use standard Java.
In my case, I ended up using a plain old Java thread to achieve what was required. Once the request is received, I simply start a new Runnable thread that handles the checking of the result from the other service, sleeps for X seconds, and terminates when the response is successful.
A simple example is below:
Runnable runner = new Runnable() {
    @Override
    public void run() {
        boolean cont = true;
        while (cont) {
            cont = getResponseFromServer();
            try {
                Thread.sleep(5000);
            } catch (InterruptedException e) {
                // we don't care about this, it just means this time it didn't sleep
            }
        }
    }
};
new Thread(runner).start();
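If you would rather not hand-roll the sleep loop, a ScheduledExecutorService does the same job and is easier to stop cleanly. Below is a rough sketch under the same assumptions as above: getResponseFromServer() returns true while polling should continue, and notifyCompletionEndpoint() stands in for the final call to the other web endpoint (both are placeholders for your own HTTP calls, not existing APIs).

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollingStarter {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Called when the triggering web request comes in, with the ID to poll for.
    public void startPolling(String id) {
        scheduler.scheduleAtFixedRate(() -> {
            // Placeholder: poll the other endpoint; true means "keep polling".
            if (!getResponseFromServer(id)) {
                notifyCompletionEndpoint(id); // placeholder: POST the success message
                scheduler.shutdown();         // stop polling once we succeed
            }
        }, 0, 5, TimeUnit.SECONDS);
    }

    private boolean getResponseFromServer(String id) {
        // placeholder: call the polled endpoint and decide whether to keep going
        return true;
    }

    private void notifyCompletionEndpoint(String id) {
        // placeholder: notify the other web endpoint that we are done
    }
}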
I have a Spring Boot web application with the functionality to update an entity called StudioLinking. This entity describes a temporary, mutable, descriptive logical link between two IoT devices for which my web app is their cloud service. The links between these devices are ephemeral in nature, but the StudioLinking entity persists on the database for reporting purposes. StudioLinking is stored to the SQL-based datastore in the conventional way using Spring Data/Hibernate.
From time to time this StudioLinking entity will be updated with new information from a REST API. When that link is updated the devices need to respond (change colors, volume, etc). Right now this is handled with polling every 5 seconds, but this creates lag from when a human user enters an update into the system to when the IoT devices actually update. It could be as little as a millisecond or up to 5 seconds! Clearly increasing the frequency of the polling is unsustainable, and the vast majority of the time there are no updates at all!
So, I am trying to develop another REST API on this same application with HTTP long polling, which will return when a given StudioLinking entity is updated or after a timeout. The listeners do not support WebSocket or similar, leaving me with long polling. Long polling can leave a race condition where you have to account for the possibility that, with consecutive messages, one message may be "lost" as it comes in between HTTP requests (while the connection is closing and opening, a new "update" might come in and not be "noticed" if I used a Pub/Sub).
It is important to note that this "subscribe to updates" API should only ever return the LATEST and CURRENT version of the StudioLinking, but should only do so when there is an actual update or if an update happened since the last check-in. The "subscribe to updates" client will initially POST an API request to set up a new listening session and pass that along so the server knows who they are, because it is possible that multiple devices will need to monitor updates to the same StudioLinking entity. I believe I can accomplish this by using separately named consumers in the Redis XREAD (keep this in mind for later in the question).
After hours of research I believe the way to accomplish this is by using Redis Streams.
I have found these two links regarding Redis Streams in Spring Data Redis:
https://www.vinsguru.com/redis-reactive-stream-real-time-producing-consuming-streams-with-spring-boot/
https://medium.com/@amitptl.in/redis-stream-in-action-using-java-and-spring-data-redis-a73257f9a281
I have also read this link about long polling. Both of these examples just have a sleep timer during the long polling, which is for demonstration purposes, but obviously I want to do something useful:
https://www.baeldung.com/spring-deferred-result
And both these links were very helpful. Right now I have no problem figuring out how to publish the updates to the Redis Stream - (this is untested "pseudo-code" but I don't anticipate having any issues implementing this)
// In my StudioLinking Entity
@PostUpdate
public void postToRedis() {
    StudioLinking link = this;
    ObjectRecord<String, StudioLinking> record = StreamRecords.newRecord()
            .ofObject(link)
            .withStreamKey(streamKey); // I am creating a stream for each individual linking probably?
    this.redisTemplate
            .opsForStream()
            .add(record)
            .subscribe(System.out::println);
    atomicInteger.incrementAndGet();
}
But I fall flat when it comes to subscribing to said stream. So basically, here is what I want to do - please excuse the butchered pseudocode, it is for idea purposes only. I am well aware that the code is in no way indicative of how the language and framework actually behave :)
// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public DeferredResult<ResponseEntity<?>> subscribeToUpdates(@PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    LOG.info("Received async-deferredresult request");
    DeferredResult<ResponseEntity<?>> output = new DeferredResult<>(5000L);
    output.onTimeout(() ->
        output.setErrorResult(
            ResponseEntity.status(HttpStatus.REQUEST_TIMEOUT)
                .body("IT WAS NOT UPDATED!")));
    ForkJoinPool.commonPool().submit(() -> {
        //----------------------------------------------
        // Made up stuff... here is where I want to subscribe to a stream and block!
        //----------------------------------------------
        LOG.info("Processing in separate thread");
        try {
            // Subscribe to Redis Stream, get any updates that happened between long-polls
            // then block until/if a new message comes over the stream
            var subscription = listenerContainer.receiveAutoAck(
                Consumer.from(String.valueOf(linkId), String.valueOf(updatesId)),
                StreamOffset.create(String.valueOf(linkId), ReadOffset.lastConsumed()),
                streamListener);
            listenerContainer.start();
        } catch (Exception e) {
        }
        output.setResult(ResponseEntity.ok("IT WAS UPDATED!"));
    });
    LOG.info("servlet thread freed");
    return output;
}
So is there a good explanation of how I would go about this? I think the answer lies within https://docs.spring.io/spring-data/redis/docs/current/api/org/springframework/data/redis/core/ReactiveRedisTemplate.html but I am not a big enough Spring power user to really understand the terminology within the JavaDocs (the Spring documentation is really good, but the JavaDocs are written in dense technical language which I appreciate but don't quite understand yet).
There are two more hurdles to my implementation:
My exact understanding of Spring is not at 100% yet. I haven't yet reached that a-ha moment where I really fully understand why all these beans are floating around. I think this is the key to why I am not getting things here... The configuration for Redis is floating around in the Spring ether and I am not grasping how to just get hold of it. I really need to keep investigating this (it is a huge hurdle to Spring for me).
These StudioLinking are short lived, so I need to do some cleanup too. I will implement this later once I get the whole thing up off the ground, I do know it will be needed.
Why don't you use a blocking polling mechanism? There is no need for the fancy reactive features of Spring Data Redis. Just use a simple blocking read of 5 seconds, so this call might take around 6 seconds or so. You can decrease or increase the blocking timeout.
class LinkStatus {
    private final boolean updated;

    LinkStatus(boolean updated) {
        this.updated = updated;
    }

    public boolean isUpdated() {
        return updated;
    }
}

// Parameter linkId refers to the StudioLinking that the requester wants to monitor
// updatesId is a unique token to track individual consumers in Redis
@GetMapping("/subscribe-to-updates/{linkId}/{updatesId}")
public LinkStatus subscribeToUpdates(
        @PathVariable("linkId") Integer linkId, @PathVariable("updatesId") Integer updatesId) {
    StreamOperations<String, String, String> op = redisTemplate.opsForStream();
    Consumer consumer = Consumer.from("test-group", "test-consumer");
    // auto-ack blocking stream read of size 1 with a timeout of 5 seconds
    StreamReadOptions readOptions = StreamReadOptions.empty().block(Duration.ofSeconds(5)).count(1);
    List<MapRecord<String, String, String>> records =
        op.read(consumer, readOptions, StreamOffset.latest("test-stream"));
    return new LinkStatus(!CollectionUtils.isEmpty(records));
}
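Two details the snippet above glosses over: reading with a Consumer requires the consumer group to already exist on the stream, and the redisTemplate has to be available as a bean (Spring Boot auto-configures a StringRedisTemplate when spring-boot-starter-data-redis is on the classpath). A rough sketch of how that could look, with made-up stream/group/consumer names:

import java.time.Duration;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.connection.stream.Consumer;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.connection.stream.ReadOffset;
import org.springframework.data.redis.connection.stream.StreamOffset;
import org.springframework.data.redis.connection.stream.StreamReadOptions;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class LinkUpdateReader {

    @Autowired
    private StringRedisTemplate redisTemplate;

    // Call this once per listening session; reading with a Consumer fails
    // if the group does not exist on the stream yet.
    public void ensureGroup(String streamKey, String group) {
        try {
            redisTemplate.opsForStream().createGroup(streamKey, ReadOffset.from("0"), group);
        } catch (Exception e) {
            // the group most likely exists already - ignore
        }
    }

    // Blocking read of at most one record, waiting up to 5 seconds for an update.
    public boolean hasUpdate(String streamKey, String group, String consumerName) {
        StreamReadOptions options = StreamReadOptions.empty()
                .block(Duration.ofSeconds(5))
                .count(1);
        List<MapRecord<String, Object, Object>> records = redisTemplate.opsForStream().read(
                Consumer.from(group, consumerName),
                options,
                StreamOffset.create(streamKey, ReadOffset.lastConsumed()));
        return records != null && !records.isEmpty();
    }
}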
I am investigating a quite strange problem. The project I'm working on uses Spring-remoting to invoke methods over http. From what I have gathered so far the following happens:
My client code executes a request to the server
The server starts handling the request, but is slow
25-30 seconds later, a new request comes in to the server
The second request finishes, the client continues its processing
A while later, the first request gets completed, but the client no longer cares
Since my client code executes only one request to the Spring remoting client, and the client continues on after the second invocation it receives is completed, I can only conclude that this occurs somewhere in the Spring remoting client.
The client uses AbstractHttpInvokerRequestExecutor to make the actual HTTP invocation, and this in turn uses SimpleHttpInvokerRequestExecutor to make the request. But, from what I can read, this has no mechanism to retry the requests. So now I'm quite stuck.
Can anyone think of what might cause this behaviour? (I have tried to keep the question clean, but I have more details if needed.)
Just an idea to give you some direction, not necessarily a solution. Use a third-party HTTP client (not one from Spring) to see if it changes the behavior. That might help you to see if it is SimpleHttpInvokerRequestExecutor that is "guilty" of retrying or something else. Here is a very simple third-party HttpClient, provided in the MgntUtils open source library (written by me). It is very simple to use; take a look at the Javadoc. The library itself is provided as Maven artifacts and on Git (including source code and Javadoc). All in all your code may look like this:
private static void testHttpClient() {
    HttpClient client = new HttpClient();
    client.setContentType("application/json");
    String content = null;
    try {
        content = client.sendHttpRequest("http://www.google.com/", HttpMethod.GET);
        // content holds the response. Do your logic here
    } catch (IOException e) {
        // Error handling is here
        content = TextUtils.getStacktrace(e, false);
    }
}
I have various problems with a Camel producer that I tried to solve, but I've fallen into other problems.
1) The first implementation I did was to create a producer template each time we needed to communicate with an ActiveMQ topic. That resulted in poor memory behavior, leading to the server crashing after some time.
The solution for the memory problem was to stop() the producer template after each request. That fix corrected the memory issue but caused some latency problems.
2) I read somewhere that it's not necessary to create a producer template each time. So I decided to fix the latency problem and declared only one producer template in my class and used it for each request. It seems to work fine: no memory leak, and the latency problem is fixed...
BUT, when we send multiple queries that take a lot of time (20 sec each), it looks like we hit a timeout and the component crashes with something like «javax.jms.IllegalStateException: The Session is closed».
Is there a way to do multithreading? Is this caused by using the InOut exchange pattern? How does MAXIMUM_CACHE_POOL_SIZE work? Is my implementation right?
I've put a sample of the code of my component:
public void process(Exchange exchange) throws Exception
{
    Message in = exchange.getIn();
    CamelContext camelContext = exchange.getContext();
    if (producerTemplate == null) {
        //camelContext.getProperties().put(Exchange.MAXIMUM_CACHE_POOL_SIZE, "50");
        producerTemplate = camelContext.createProducerTemplate();
    }
    ...
    result = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")), ExchangePattern.InOut, messageToSend).toString();
    ...
    finalResult = producerTemplate.sendBody(String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel2}}")), ExchangePattern.InOut, result).toString();
    ...
    in.setBody(finalResult);
}
Yes, it is because you use the InOut pattern.
Your route expects a response on the specified reply queue; when it is not received in time, you hit the default 20 second timeout.
Change the Exchange pattern to InOnly to resolve your issue.
Apart from that, your posted code seems to be fine.
The MAXIMUM_CACHE_POOL_SIZE is used internally by Camel, and thus does not affect the ActiveMQ endpoint settings.
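A rough sketch of both options, using the same processor shape and property placeholders as the posted code (the 60 second requestTimeout value is made up; tune it to how long your slow consumers actually take):

import org.apache.camel.CamelContext;
import org.apache.camel.Exchange;
import org.apache.camel.ExchangePattern;
import org.apache.camel.Processor;
import org.apache.camel.ProducerTemplate;

public class ChannelForwarder implements Processor {

    private ProducerTemplate producerTemplate;

    @Override
    public void process(Exchange exchange) throws Exception {
        CamelContext camelContext = exchange.getContext();
        if (producerTemplate == null) {
            producerTemplate = camelContext.createProducerTemplate();
        }
        Object messageToSend = exchange.getIn().getBody();

        // Option 1 - fire-and-forget: no reply queue is set up,
        // so the 20 second reply timeout never applies.
        producerTemplate.sendBody(
                String.format("activemq:%s", camelContext.resolvePropertyPlaceholders("{{channel1}}")),
                ExchangePattern.InOnly,
                messageToSend);

        // Option 2 - if the reply really is needed, keep request/reply but raise
        // the JMS requestTimeout (default 20 seconds) on the endpoint URI.
        Object reply = producerTemplate.requestBody(
                String.format("activemq:%s?requestTimeout=60000", camelContext.resolvePropertyPlaceholders("{{channel2}}")),
                messageToSend);
        exchange.getIn().setBody(reply);
    }
}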
How does async JMS work? I've below sample code:
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Destination;
import javax.jms.ExceptionListener;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageListener;
import javax.jms.Session;

public class JmsAdapter implements MessageListener, ExceptionListener
{
    private ConnectionFactory connFactory = null;
    private Connection conn = null;
    private Session session = null;
    private MessageConsumer consumer = null;

    public void receiveMessages()
    {
        try
        {
            this.session = this.conn.createSession(true, Session.SESSION_TRANSACTED);
            this.conn.setExceptionListener(this);

            Destination destination = this.session.createQueue("SOME_QUEUE_NAME");

            this.consumer = this.session.createConsumer(destination);
            this.consumer.setMessageListener(this);

            this.conn.start();
        }
        catch (JMSException e)
        {
            //Handle JMS Exceptions Here
        }
    }

    @Override
    public void onMessage(Message message)
    {
        try
        {
            //Do Message Processing Here

            //Message successfully processed... Go ahead and commit the transaction.
            this.session.commit();
        }
        catch (SomeApplicationException e) //placeholder for your own application exception
        {
            //Message processing failed.
            //Do whatever you need to do here for the exception.

            //NOTE: You may need to check the redelivery count of this message first
            //and just commit it after it fails a predefined number of times (Make sure you
            //store it somewhere if you don't want to lose it). This way your process isn't
            //handling the same failed message over and over again.
            this.session.rollback();
        }
    }

    @Override
    public void onException(JMSException e)
    {
        //Handle connection-level exceptions here (required by ExceptionListener)
    }
}
But I'm new to Java and JMS. I'll probably consume messages in the onMessage method, but I don't know exactly how it works.
Do I need to add a main method in the JmsAdapter class? After adding the main method, do I need to create a jar and then run the jar as "java -jar abc.jar"?
Any help is much appreciated.
UPDATE: What I want to know is: if I add a main method, should I simply call receiveMessages() in main? And then after running, will the listener keep on running? And if there are messages, will they be retrieved automatically in the onMessage method?
Also, if the listener is continuously listening, doesn't it take CPU??? In case of threads, when we create a thread and put it to sleep, the CPU utilization is zero; how does it work in the case of a listener?
Note: I only have a Tomcat server and I will not be using any JMS server. I'm not sure if the listener needs a specific JMS server such as JBoss, but in any case, please assume that I won't have anything except Tomcat.
Thanks!
You need to learn to walk before you start trying to run.
Read / do a tutorial on Java programming. This should explain (among other things) how to compile and run a Java program from the command line.
Read / do a tutorial on JMS.
Read the Oracle material on how to create an executable JAR file.
Figure out what it is you are trying to do ... and design your application.
Looking at what you've shown and told us:
You could add a main method to that class, but to make an executable JAR file, you've got to create your JAR file with a manifest entry that specifies the name of the class with the main method.
There's a lot more that you have to do before that code will work:
add code to (at least) log the exceptions that you are catching
add code to process the messages
add code to initialize the connection factory and connection objects
And like I said above, you probably need some kind of design ... so that you don't end up with everything in a "kitchen sink" class.
if I add a main method, should I simply call receiveMessages() in main?
That is one approach. But like I said, you really need to design your application.
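A bare-bones sketch of that approach. The ActiveMQ broker URL is purely for illustration (any JMS provider's ConnectionFactory works the same way), and the two setters are hypothetical - the posted class would need them, or a constructor, added so the connection can be handed in:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;

import org.apache.activemq.ActiveMQConnectionFactory;

public class JmsAdapterMain {

    public static void main(String[] args) throws Exception {
        // Assumed broker and URL - replace with whatever JMS provider you end up using.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();

        JmsAdapter adapter = new JmsAdapter();
        adapter.setConnectionFactory(factory); // hypothetical setter
        adapter.setConnection(connection);     // hypothetical setter
        adapter.receiveMessages();

        // Keep the main thread alive; the JMS provider delivers messages to
        // onMessage() on its own threads as they arrive.
        Thread.currentThread().join();
    }
}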
And then after running, will the listener keep on running?
It is not entirely clear. It should keep running as long as the main thread is alive, but it is not immediately obvious what happens when your main method returns. (It depends on whether the JMS threads are created as daemon threads, and that's not specified.)
And if there are messages, will they be retrieved automatically in the onMessage method?
It would appear that each message is retrieved (read from the socket) before your onMessage method is called.
Also, if the listener is continuously listening, doesn't it take CPU???
Not if it is implemented properly.
In case of threads, when we create a thread and put it to sleep, the CPU utilization is zero; how does it work in the case of a listener?
At a certain level, a listener thread will make a system call that waits for data to arrive on a network socket. I don't know exactly how it is implemented, but it could be as simple as a read() call on the network socket's InputStream. No CPU is used by a thread while it waits in a blocking system call.
This link looks like a pretty good place with examples using Oracle AQ. There's an examples section that tells you how to setup the examples and run them. Hopefully this can help.
Link to Oracle Advanced Queueing
Edit
This question has gone through a few iterations by now, so feel free to look through the revisions to see some background information on the history and things tried.
I'm using a CompletionService together with an ExecutorService and a Callable to concurrently call a number of functions on a few different web services through CXF-generated code. These services all contribute different information towards a single set of information I'm using for my project. The services, however, can fail to respond for a prolonged period of time without throwing an exception, prolonging the wait for the combined set of information.
To counter this I'm running all the service calls concurrently, and after a few minutes I would like to terminate any of the calls that have not yet finished, and preferably log which ones weren't done yet, either from within the Callable or by throwing a detailed Exception.
Here's some highly simplified code to illustrate what I'm doing already:
private Callable<List<Feature>> getXXXFeatures(final WiwsPortType port,
        final String accessionCode) {
    return new Callable<List<Feature>>() {
        @Override
        public List<Feature> call() throws Exception {
            List<Feature> features = new ArrayList<Feature>();
            //getXXXFeatures are methods of the WS Proxy
            //that can take anywhere from a second to never to return
            for (RawFeature raw : port.getXXXFeatures(accessionCode)) {
                Feature ft = convertFeature(raw);
                features.add(ft);
            }
            if (Thread.currentThread().isInterrupted())
                log.error("XXX was interrupted");
            return features;
        }
    };
}
And the code that concurrently starts the WS calls:
WiwsPortType port = new Wiws().getWiws();

List<Future<List<Feature>>> ftList = new ArrayList<Future<List<Feature>>>();

//Counting wrapper around CompletionService,
//so I could implement ccs.hasRemaining()
CountingCompletionService<List<Feature>> ccs =
        new CountingCompletionService<List<Feature>>(threadpool);

ftList.add(ccs.submit(getXXXFeatures(port, accessionCode)));
ftList.add(ccs.submit(getYYYFeatures(port, accessionCode)));
ftList.add(ccs.submit(getZZZFeatures(port, accessionCode)));

List<Feature> allFeatures = new ArrayList<Feature>();

while (ccs.hasRemaining()) {
    //Low for testing, eventually a little more lenient
    Future<List<Feature>> polled = ccs.poll(5, TimeUnit.SECONDS);
    if (polled != null)
        allFeatures.addAll(polled.get());
    else {
        //Still jobs remaining, but unresponsive: Cancel them all
        int jobsCanceled = 0;
        for (Future<List<Feature>> job : ftList)
            if (job.cancel(true))
                jobsCanceled++;
        log.error("Canceled {} feature jobs because they took too long",
                jobsCanceled);
        break;
    }
}
The problem I'm having with this code is that the Callables aren't actually canceled while waiting for port.getXXXFeatures(...) to return, but somehow keep running. As you can see from the if (Thread.currentThread().isInterrupted()) log.error("XXX was interrupted"); statement, the interrupted flag is only checked after port.getXXXFeatures returns, and that only happens once the web service call completes normally, instead of it having been interrupted when I called cancel.
Can anyone tell me what I am doing wrong and how I can stop the running CXF Webservice call after a given time period, and register this information in my application?
Best regards, Tim
Edit 3: New answer.
I see these options:
Post your problem on the Apache CXF as feature request
Fix Apache CXF yourself and expose some features.
Look for options for asynchronous WS call support within the Apache CXF
Consider switching to a different WS provider (JAX-WS?)
Do your WS call yourself using RESTful API if the service supports it (e.g. plain HTTP request with parameters)
For über experts only: use true threads/thread group and kill the threads with unorthodox methods.
The CXF docs have some instructions for setting the read timeout on the HTTPURLConnection:
http://cwiki.apache.org/CXF20DOC/client-http-transport-including-ssl-support.html
That would probably meet your needs. If the server doesn't respond in time, an exception is raised and the Callable would get the exception. (Except there is a bug where it MAY hang instead. I cannot remember if that was fixed for 2.2.2 or if it's just in the SNAPSHOTS right now.)
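For reference, a minimal sketch of setting those timeouts programmatically on a CXF-generated proxy (the timeout values are made up; the receive timeout is what turns an unresponsive service into an exception that your Callable can catch and log):

import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;
import org.apache.cxf.transport.http.HTTPConduit;
import org.apache.cxf.transports.http.configuration.HTTPClientPolicy;

public final class CxfTimeouts {

    private CxfTimeouts() {
    }

    // Apply connection/receive timeouts to a JAX-WS proxy created by CXF,
    // e.g. the WiwsPortType from the question.
    public static void applyTimeouts(Object proxy, long connectMillis, long receiveMillis) {
        Client client = ClientProxy.getClient(proxy);
        HTTPConduit conduit = (HTTPConduit) client.getConduit();

        HTTPClientPolicy policy = new HTTPClientPolicy();
        policy.setConnectionTimeout(connectMillis); // time allowed to establish the connection
        policy.setReceiveTimeout(receiveMillis);    // time allowed to wait for the response
        conduit.setClient(policy);
    }
}

With, say, applyTimeouts(port, 10000, 120000), a hung getXXXFeatures(...) call throws after two minutes instead of blocking the Callable forever, so you can log which call failed and move on.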