Jetty: Stopping programmatically causes "1 threads could not be stopped" - java

I have an embedded Jetty 6.1.26 instance.
I want to shut it down by HTTP GET sent to /shutdown.
So I created a JettyShutdownServlet:
@Override
protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
    resp.setStatus(202, "Shutting down.");
    resp.setContentType("text/plain");
    ServletOutputStream os = resp.getOutputStream();
    os.println("Shutting down.");
    os.close();
    resp.flushBuffer();

    // Stop the server.
    try {
        log.info("Shutting down the server...");
        server.stop();
    } catch (Exception ex) {
        log.error("Error when stopping Jetty server: " + ex.getMessage(), ex);
    }
}
However, when I send the request, Jetty does not stop - a thread keeps hanging in org.mortbay.thread.QueuedThreadPool on the line with this.wait():
// We are idle
// wait for a dispatched job
synchronized (this)
{
    if (_job == null)
        this.wait(getMaxIdleTimeMs());
    job = _job;
    _job = null;
}
...
2011-01-10 20:14:20,375 INFO org.mortbay.log jetty-6.1.26
2011-01-10 20:14:34,756 INFO org.mortbay.log Started SocketConnector@0.0.0.0:17283
2011-01-10 20:25:40,006 INFO org.jboss.qa.mavenhoe.MavenHoeApp Shutting down the server...
2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown SocketConnector@0.0.0.0:17283
2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown org.mortbay.jetty.servlet.Context@1672bbb{/,null}
2011-01-10 20:25:40,006 INFO org.mortbay.log Graceful shutdown org.mortbay.jetty.webapp.WebAppContext@18d30fb{/jsp,file:/home/ondra/work/Mavenhoe/trunk/target/classes/org/jboss/qa/mavenhoe/web/jsp}
2011-01-10 20:25:43,007 INFO org.mortbay.log Stopped SocketConnector@0.0.0.0:17283
2011-01-10 20:25:43,009 WARN org.mortbay.log 1 threads could not be stopped
2011-01-10 20:26:43,010 INFO org.mortbay.log Shutdown hook executing
2011-01-10 20:26:43,011 INFO org.mortbay.log Shutdown hook complete
It blocks for exactly one minute, then shuts down.
I've enabled graceful shutdown, which should allow me to stop the server from a servlet; however, as the log shows, it does not work.
I've solved it this way:
Server server = new Server( PORT );
server.setGracefulShutdown( 3000 );
server.setStopAtShutdown(true);
...
server.start();
if( server.getThreadPool() instanceof QueuedThreadPool ){
    ((QueuedThreadPool) server.getThreadPool()).setMaxIdleTimeMs( 2000 );
}
setMaxIdleTimeMs() needs to be called after start(), because the thread pool is created in start(). However, the threads are already created and waiting by then, so the new idle timeout only takes effect once each thread has been used at least once.
I don't know what else to do, short of something awful like interrupting all the threads or calling System.exit().
Any ideas? Is there a good way?

Graceful doesn't do what you think it does - it allows the server to shut down gracefully, but it does not let you shut the server down from inside a servlet.
The problem is as described in the mailing-list post you linked to: you're trying to stop the server while you're still processing a connection inside it.
You should try changing your servlet's implementation to:
// Stop the server from a new thread, so this request can complete first.
new Thread() {
    @Override
    public void run() {
        try {
            log.info("Shutting down the server...");
            server.stop();
            log.info("Server has stopped.");
        } catch (Exception ex) {
            log.error("Error when stopping Jetty server: " + ex.getMessage(), ex);
        }
    }
}.start();
That way the servlet can finish processing while the server is shutting down, and will not hold up the shutdown process.
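One refinement worth considering (my own sketch, not part of the original answer): give the container a brief moment to flush the 202 response before stopping, so the client reliably receives the reply. The thread name and the 100 ms grace period are assumptions:
// A hedged variant of the answer above: pause briefly so the response
// reaches the client before the connectors are torn down.
new Thread("jetty-stopper") {
    @Override
    public void run() {
        try {
            Thread.sleep(100); // arbitrary grace period for the response to flush
            server.stop();     // returns once connectors and handlers have stopped
        } catch (Exception ex) {
            log.error("Error when stopping Jetty server: " + ex.getMessage(), ex);
        }
    }
}.start();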

Related

Taking 5 seconds to shutdown a java grpc ManagedChannel

I have a client that needs to disconnect from one server and connect to another. It's taking about 16 seconds. I still haven't debugged the connection logic, but I can see the shutdown of the channel is taking 5 seconds. Is this expected behavior, or should I be looking for thread starvation in my code?
LOG.debug("==============SHUTTING DOWN MANAGED CHANNEL");
long startTime = System.currentTimeMillis();
channel.shutdown().awaitTermination(20, SECONDS);
long endTime = System.currentTimeMillis();
LOG.debug("Time to shutdown channel ms = {}", endTime - startTime);
LOG.debug("==============RETURN FROM SHUTTING DOWN MANAGED CHANNEL");
From the log
2018-07-09 14:41:23,143 DEBUG [com.ticomgeo.ftc.client.FTCClient] (EE-ManagedExecutorService-singleThreaded-Thread-1) ==============SHUTTING DOWN MANAGED CHANNEL
2018-07-09 14:41:28,151 INFO [io.grpc.internal.ManagedChannelImpl] (grpc-default-worker-ELG-1-1) [io.grpc.internal.ManagedChannelImpl-1] Terminated
2018-07-09 14:41:28,152 DEBUG [com.ticomgeo.ftc.client.FTCClient] (EE-ManagedExecutorService-singleThreaded-Thread-1) Time to shutdown channel ms = 5009
2018-07-09 14:41:28,152 DEBUG [com.ticomgeo.ftc.client.FTCClient] (EE-ManagedExecutorService-singleThreaded-Thread-1) ==============RETURN FROM SHUTTING DOWN MANAGED CHANNEL
There are two shutdown functions, shutdown and shutdownNow. Is there any chance you have calls in flight that are blocking shutdown? You may be better served by shutdownNow.
shutdown
Initiates an orderly shutdown in which preexisting calls continue but new calls are rejected.
shutdownNow
Initiates a forceful shutdown in which preexisting and new calls are rejected. Although forceful, the shutdown process is still not instantaneous; isTerminated() will likely return false immediately after this method returns.
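A minimal sketch of that advice (mine, not from the question; the one-second timeouts are arbitrary assumptions): drain briefly with shutdown(), then fall back to shutdownNow() if in-flight calls are still holding the channel open.
import io.grpc.ManagedChannel;
import java.util.concurrent.TimeUnit;

// Drain politely, then force: shutdownNow() cancels preexisting calls,
// which is exactly what keeps an orderly shutdown() waiting.
static void closeQuickly(ManagedChannel channel) throws InterruptedException {
    channel.shutdown();                                 // reject new calls
    if (!channel.awaitTermination(1, TimeUnit.SECONDS)) {
        channel.shutdownNow();                          // cancel in-flight calls
        channel.awaitTermination(1, TimeUnit.SECONDS);  // brief final wait
    }
}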

Rabbit SimpleMessageListenerContainer won't shut down

Following on from this question, we have a scenario where Rabbit credentials become invalidated, and we need to call resetConnection() on our CachingConnectionFactory to pick up a fresh set of credentials.
We're doing this in a ShutdownSignalException handler, and it basically works. What doesn't work is that we also need to restart our listeners. We have a few of these:
@RabbitListener(
    id = ABC,
    bindings = @QueueBinding(value = @Queue(value = "myQ", durable = "true"),
        exchange = @Exchange(value = "myExchange", durable = "true"),
        key = "myKey"),
    containerFactory = "customQueueContainerFactory"
)
public void process(...) {
    ...
}
The impression given by this answer (also this) is that we just need to do:
@Autowired RabbitListenerEndpointRegistry registry;
@Autowired CachingConnectionFactory connectionFactory;

@Override
public void shutdownCompleted(ShutdownSignalException cause) {
    refreshRabbitMQCredentials();
}

public void refreshRabbitMQCredentials() {
    registry.stop(); // do this first
    // Fetch credentials, update username/pass
    connectionFactory.resetConnection(); // then this
    registry.start(); // finally restart
}
The problem is that, having debugged my way through SimpleMessageListenerContainer, I can see that when the very first of these containers has doShutdown() called, Spring tries to cancel the BlockingQueueConsumer.
Because the underlying Channel still reports as open (even though the RabbitMQ UI doesn't show any open connections or channels), a Cancel event is sent to the broker inside ChannelN.basicCancel(), but the channel then blocks forever waiting for the reply, and as a result container shutdown is completely blocked.
I've tried injecting a TaskExecutor (an Executors.newCachedThreadPool()) into the containers and calling shutdownNow() or interrupting them, but none of this affects the channel's blocking wait.
It looks like my only option to unblock the channel is to trigger an additional ShutdownSignalException during cancellation, but (a) I don't know how I can do that, and (b) it looks like I would have to initiate cancellation of all listeners in parallel before trying to shut down again.
// com.rabbitmq.client.impl.ChannelN
@Override
public void basicCancel(final String consumerTag) throws IOException
{
    // [snip]
    rpc(new Basic.Cancel(consumerTag, false), k);
    try {
        k.getReply(); // <== BLOCKS HERE
    } catch (ShutdownSignalException ex) {
        throw wrap(ex);
    }
    metricsCollector.basicCancel(this, consumerTag);
}
I'm not sure why this is proving so difficult. Is there a simpler way to force SimpleMessageListenerContainer shutdown?
Using Spring Rabbit 1.7.6; AMQP Client 4.0.3; Spring Boot 1.5.10-RELEASE
UPDATE
Some logs to demonstrate the theory that the message containers are restarting before connection refresh has completed, and that this might be why they don't reconnect:
ERROR o.s.a.r.c.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.ReauthenticatingChannelListener - Channel shutdown: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.ReauthenticatingChannelListener - Channel closed with reply code 403. Assuming credentials have been revoked and refreshing config server properties to get new credentials. Cause: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
WARN u.c.c.c.r.ReauthenticatingChannelListener - Shutdown signalled: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.RabbitMQReauthenticator - Refreshing Rabbit credentials for XXXXXXXX
INFO o.s.c.c.c.ConfigServicePropertySourceLocator - Fetching config from server at: http://localhost:8888/configuration
INFO u.c.c.c.r.ReauthenticatingChannelListener - Got ListenerContainerConsumerFailedEvent: Consumer raised exception, attempting restart
INFO o.s.a.r.l.SimpleMessageListenerContainer - Restarting Consumer@2db55dec: tags=[{amq.ctag-ebAfSnXLbw_W1hlZ5ag7sQ=consumer.myQ}], channel=Cached Rabbit Channel: AMQChannel(amqp://cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4@127.0.0.1:5672/,2), conn: Proxy@12de62aa Shared Rabbit Connection: SimpleConnection@56c95789 [delegate=amqp://cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4@127.0.0.1:5672/, localPort= 50052], acknowledgeMode=AUTO local queue size=0
INFO o.s.c.c.c.ConfigServicePropertySourceLocator - Located environment: name=myApp, profiles=[default], label=null, version=null, state=null
INFO com.zaxxer.hikari.HikariDataSource - XXXXXXXX - Shutdown initiated...
INFO com.zaxxer.hikari.HikariDataSource - XXXXXXXX - Shutdown completed.
INFO u.c.c.c.r.RabbitMQReauthenticator - Refreshed username: 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4' => 'cert-configserver-d7b54af2-0735-a9ed-7cc4-394803bf5e58'
INFO u.c.c.c.r.RabbitMQReauthenticator - CachingConnectionFactory reset, proceeding...
UPDATE 2:
This does seem to be a race condition of sorts. Having removed the container stop/starts, if I add a thread-only breakpoint in SimpleMessageListenerContainer.restart() to let resetConnection() race past, and then release the breakpoint, I can see things start to come back:
16:18:47,208 INFO u.c.c.c.r.RabbitMQReauthenticator - CachingConnectionFactory reset
// Get ready to release the SMLC.restart() breakpoint...
16:19:02,072 INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: rabbitmq.service.consul:5672
16:19:02,083 INFO o.s.a.r.c.CachingConnectionFactory - Created new connection: connectionFactory@7489bca4:1/SimpleConnection@68546c13 [delegate=amqp://cert-configserver-132a07c2-94f3-0099-4de1-f0b1a9875d5a@127.0.0.1:5672/, localPort= 33350]
16:19:02,086 INFO o.s.amqp.rabbit.core.RabbitAdmin - Auto-declaring a non-durable, auto-delete, or exclusive Queue ...
16:19:02,095 DEBUG u.c.c.c.r.ReauthenticatingChannelListener - Active connection check succeeded for channel AMQChannel(amqp://cert-configserver-132a07c2-94f3-0099-4de1-f0b1a9875d5a@127.0.0.1:5672/,1)
16:19:02,120 INFO o.s.amqp.rabbit.core.RabbitAdmin - Auto-declaring a non-durable, auto-delete, or exclusive Queue (springCloudBus...
That being the case, I now have to work out either how to delay the container restarts until the refresh is done (i.e. until my ShutdownSignalException handler completes), or how to make the refresh blocking somehow...
UPDATE 3:
My overall problem, of which this was a symptom, was solved with: https://stackoverflow.com/a/49392990/954442
It's not at all clear why the channel would report as open; this works fine for me; it recovers after deleting user foo...
@SpringBootApplication
public class So49323291Application {

    public static void main(String[] args) {
        SpringApplication.run(So49323291Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitListenerEndpointRegistry registry, CachingConnectionFactory cf,
            RabbitTemplate template) {
        return args -> {
            cf.setUsername("foo");
            cf.setPassword("bar");
            registry.start();
            doSends(template);
            registry.stop();
            cf.resetConnection();
            cf.setUsername("baz");
            cf.setPassword("qux");
            registry.start();
            doSends(template);
        };
    }

    public void doSends(RabbitTemplate template) {
        while (true) {
            try {
                template.convertAndSend("foo", "Hello");
                Thread.sleep(5_000);
            }
            catch (Exception e) {
                e.printStackTrace();
                break;
            }
        }
    }

    @RabbitListener(queues = "foo", autoStartup = "false")
    public void in(Message in) {
        System.out.println(in);
    }

}
(Body:'Hello' MessageProperties [headers={}, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=4, consumerTag=amq.ctag-9zt3wUGYSJmoON3zw03wUw, consumerQueue=foo])
2018-03-16 11:24:01.451 ERROR 11867 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=320, reply-text=CONNECTION_FORCED - user 'foo' is deleted, class-id=0, method-id=0)
...
Caused by: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
2018-03-16 11:24:01.745 ERROR 11867 --- [cTaskExecutor-2] o.s.a.r.l.SimpleMessageListenerContainer : Stopping container from aborted consumer
2018-03-16 11:24:03.740 INFO 11867 --- [cTaskExecutor-3] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory@2c4d1ac:3/SimpleConnection@5e9c036b [delegate=amqp://baz@127.0.0.1:5672/, localPort= 59346]
(Body:'Hello' MessageProperties [headers={}, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=1, consumerTag=amq.ctag-ljnY00TBuvy5cCAkpD3r4A, consumerQueue=foo])
However, you really don't need to stop and start the registry; just reconfigure the connection factory with the new credentials and call resetConnection(), and the containers will recover.
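A minimal sketch of that simpler approach (mine; field and parameter names are assumptions, and the credential-fetching step is elided):
// Swap in the new credentials and reset; the running listener containers
// reconnect on their own with the updated username/password.
public void refreshRabbitMQCredentials(String newUser, String newPassword) {
    connectionFactory.setUsername(newUser);
    connectionFactory.setPassword(newPassword);
    connectionFactory.resetConnection(); // closes cached connections/channels
    // no registry.stop()/registry.start() needed: containers recover automatically
}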

Thread is not getting stopped while polling using camel file poller

I am trying to implement simple file polling from one folder to another using Camel 2.14. I have used pollEnrich with a basic timer to poll every 30 seconds. But whenever I try to stop the Tomcat 7.0 server, I get logs like these:
catalina.log
SEVERE: The web application [/CamelPoller] appears to have started a thread named [Camel (camel-1) thread #0 - timer://myTimer] but has failed to stop it. This is very likely to create a memory leak.
Aug 14, 2015 2:50:06 PM org.apache.catalina.loader.WebappClassLoader clearReferencesThreads
SEVERE: The web application [/CamelPoller] appears to have started a thread named [Camel (camel-1) thread #1 - file://D:/Input] but has failed to stop it. This is very likely to create a memory leak.
FilePollerDemo.java
public class FilePollerDemo {
    public FilePollerDemo() {
        CamelContext context = new DefaultCamelContext();
        try {
            context.addRoutes(new RouteBuilder() {
                public void configure() {
                    from("timer://myTimer?period=30000")
                        .pollEnrich("file://D:/Input?fileName=test.txt")
                        .to("file://D:/Output");
                }
            });
            context.start();
            // context.stop();
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
I have commented out context.stop(), because with it the file polling does not happen at all; and if I use it like this:
context.start();
Thread.sleep(30000);
context.stop();
then the poller runs only once.
Please help me, I am new to Camel.
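(No answer was recorded in this thread; the following is an editorial sketch of one common pattern, with assumed class names.) Tomcat warns because the CamelContext, and with it the timer and file-poller threads, is never stopped when the webapp is undeployed. Tying the context's lifecycle to the webapp's, for example with a ServletContextListener, stops those threads at shutdown:
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class CamelLifecycleListener implements ServletContextListener {

    private CamelContext context;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        context = new DefaultCamelContext();
        try {
            context.addRoutes(new RouteBuilder() {
                public void configure() {
                    from("timer://myTimer?period=30000")
                        .pollEnrich("file://D:/Input?fileName=test.txt")
                        .to("file://D:/Output");
                }
            });
            context.start(); // the poller then runs for the webapp's lifetime
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        try {
            context.stop(); // stops the timer/file threads Tomcat warns about
        } catch (Exception e) {
            // nothing useful left to do at undeploy time
        }
    }
}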

Stopping gobbler threads in blocking reads on Process InputStream

I have a gobbler that reads the output from a Process.
There is a case where we kill the process programmatically, using its PID and the external Windows taskkill command.
It is a 16-bit DOS process
We use taskkill because it is a 16-bit DOS process and Process.destroyForcibly() does not work on it, since it resides in the ntvdm subsystem; the best way is to get the PID and use 'taskkill /T /F', which does indeed kill it and any children.
Normally we have no problem with our 16-bit (or 32-bit) DOS processes. This one has some file locks in place, so it is especially important that we ensure it is dead so that the OS releases the locks.
We close all streams before and after the kill
Prior to calling taskkill, we attempt to flush and close all streams (in, out, err) in an executor. After calling taskkill, we verify that all streams are closed by re-closing them.
We call Thread.interrupt() on all gobblers after the kill
Now, after the kill succeeds (confirmed in the OS as well), the gobbler is still running, and it does not respond to Thread.interrupt().
We even do a last-ditch Thread.stop() (gasp!)
And furthermore, we have invoked Thread.stop() on it, and it still stays waiting at the read stage...
So, it seems, we are unable to stop the std-out and std-in gobblers on our Processes streams.
We know Thread.stop() is deprecated. To be somewhat safe, we catch ThreadDeath, clean any monitors, and then rethrow ThreadDeath. However, ThreadDeath never in fact gets thrown, and the thread just keeps on waiting on inputStream.read(), so Thread.stop() being deprecated is a moot point in this case: it does not do anything.
Just so no one flames me, and so that I have a clean conscience, we have removed Thread.stop() from our production code.
I am not surprised that the thread does not interrupt, since that only happens on some InputStreams and not all reads are interruptible. But I am surprised that the thread will not stop when Thread.stop() is invoked.
Thread trace shows
A thread trace shows that both main-in and main-err (the two outputs from the process) are still running even after the streams are closed, the thread is interrupted, and the last-ditch Thread.stop() is called.
The task is dead, so why care about idle blocked gobblers?
It is not that we care that the gobblers won't quit, but we hate threads that just pile up and clog the system. This particular process is called by a webserver, so it could amount to several hundred idle threads blocking on dead processes...
We have tried launching the process in two ways, with no difference:
run(working, "cmd", "/c", "start", "/B", "/W", "/SEPARATE", "C:\\workspace\\dotest.exe");
run(working, "cmd", "/c", "C:\\workspace\\dotest.exe");
The gobbler is in a read like this:
try (final InputStream is = inputStream instanceof BufferedInputStream
         ? inputStream : new BufferedInputStream(inputStream, 1024 * 64);
     final BufferedReader br = new BufferedReader(new InputStreamReader(is, charset))) {
    String line;
    while ((line = br.readLine()) != null) {
        lineCount++;
        lines.add(line);
        if (Thread.interrupted()) {
            Thread.currentThread().interrupt();
            throw new InterruptedException();
        }
    }
    eofFound = true;
}
Our destroyer calls this on the gobbler thread after the taskkill:
int timeLimit = 500;
t.interrupt();
try {
    t.join(timeLimit);
    if (t.isAlive()) {
        t.stop();
        // We know it's deprecated, but we catch ThreadDeath,
        // then clean any monitors and then rethrow ThreadDeath.
        // But ThreadDeath never in fact gets thrown and the thread
        // just keeps on waiting on inputStream.read ..
        logger.warn("Thread stopped because it did not interrupt within {}ms: {}", timeLimit, t);
        if (t.isAlive()) {
            logger.warn("But thread is still alive! {}", t);
        }
    }
} catch (InterruptedException ie) {
    logger.info("Interrupted exception while waiting on join({}) with {}", timeLimit, t, ie);
}
This is a snippet of the log output:
59.841 [main] INFO Destroying process '5952'
04.863 [main] WARN Timeout waiting for 'Close java.io.BufferedInputStream@193932a' to finish
09.865 [main] WARN Timeout waiting for 'Close java.io.FileInputStream@159f197' to finish
09.941 [main] DEBUG Executing [taskkill, /F, /PID, 5952].
10.243 [Thread-1] DEBUG SUCCESS: The process with PID 5952 has been terminated.
10.249 [main] DEBUG java.lang.ProcessImpl@620197 stopped with exit code 0
10.638 [main] INFO Destroyed WindowsProcess(5952) forcefully in 738 ms.
11.188 [main] WARN Thread stop called because it did not interrupt within 500ms: Thread[main-in,5,main]
11.188 [main] WARN But thread is still alive! Thread[main-in,5,main]
11.689 [main] WARN Thread stop because it did not interrupt within 500ms: Thread[main-err,5,main]
11.689 [main] WARN But thread is still alive! Thread[main-err,5,main]
Note: prior to calling taskkill, the Process std-out and std-err will not close, but they are closed manually after the taskkill (not shown in the log because they close successfully).
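(An editorial sketch, not from the original post.) Since a blocking read() on a process pipe ignores interrupt() and, as seen above, may not even unblock on close(), one way to keep gobblers stoppable is to never block in read() at all and poll available() instead; the method name, buffer size, and charset below are assumptions:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.Charset;
import java.util.List;

// Poll available() so the loop never blocks inside read(); an interrupt can
// then always stop the gobbler, even when close() fails to unblock a read.
// Note: if the process dies without EOF being seen, the loop exits via interrupt.
static void gobble(InputStream in, List<String> lines) throws IOException, InterruptedException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    while (!Thread.currentThread().isInterrupted()) {
        int avail = in.available();
        if (avail > 0) {
            int n = in.read(chunk, 0, Math.min(chunk.length, avail)); // will not block: data is ready
            if (n < 0) break; // EOF
            buf.write(chunk, 0, n);
        } else {
            Thread.sleep(50); // idle: back off briefly, stay interruptible
        }
    }
    // Split the captured bytes into lines once we exit.
    for (String line : new String(buf.toByteArray(), Charset.defaultCharset()).split("\\R")) {
        lines.add(line);
    }
}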

Camel process does not shut down because of (non-existing) inflight exchanges

I have a Camel process (that I run from the command line) whose route is similar to this one:
public class ProfilerRoute extends RouteBuilder {

    @Override
    public void configure() {
        from("kestrel://my_queue?concurrentConsumers=10&waitTimeMs=500")
            .unmarshal().json(JsonLibrary.Jackson, MyClass.class)
            .process(new Processor() {
                @Override
                public void process(Exchange exchange) throws Exception {
                    /* Do the real processing [...] */
                    exchange.getIn().setBody(null);
                }
            })
            .filter(body().isNotNull())
            .to("file://nowhere");
    }
}
Note that I'm discarding every message after having processed it, since this is a pure consumer process.
The process runs on its own. No other process is writing to the queue, and the queue is empty.
However, when I try to kill the process, it will not die.
From the logs I see the following lines (indented for readability):
[ Thread-1] MainSupport$HangupInterceptor INFO
Received hang up - stopping the main instance.
[ Thread-1] MainSupport INFO
Apache Camel stopping
[ Thread-1] GuiceCamelContext INFO
Apache Camel 2.11.1 (CamelContext: camel-1)
is shutting down
[ Thread-1] DefaultShutdownStrategy INFO
Starting to graceful shutdown 1 routes
(timeout 300 seconds)
[l-1) thread #12 - ShutdownTask] DefaultShutdownStrategy INFO
Waiting as there are still 10 inflight and
pending exchanges to complete,
timeout in 300 seconds.
And so on, with a decreasing timeout. At the end of the timeout I see in the logs:
[l-1) thread #12 - ShutdownTask] DefaultShutdownStrategy INFO
Waiting as there are still 10 inflight and
pending exchanges to complete,
timeout in 1 seconds.
[ Thread-1] DefaultShutdownStrategy WARN
Timeout occurred.
Now forcing the routes to be shutdown now.
[l-1) thread #12 - ShutdownTask] DefaultShutdownStrategy WARN
Interrupted while waiting during graceful
shutdown, will force shutdown now.
[ Thread-1] KestrelConsumer INFO
Stopping consumer for
kestrel://localhost:22133/my_queue?concurrentConsumers=10&waitTimeMs=500
But the process will not die anyway (even if I try to kill it at this point).
I would have expected that after the waiting time all the threads would realise that a shutdown is going on and stop.
I've read the "Graceful Shutdown" document; however, I could not find anything that explains the behaviour I'm facing.
As you can see from logs I'm using the 2.11.1 version of Apache Camel.
UPDATE: According to Claus Ibsen it might be a problem in the camel-kestrel component. I filed an issue in the ASF Jira for Camel: CAMEL-6632
This is a bug in camel-kestrel, and a JIRA ticket has been logged to fix this: https://issues.apache.org/jira/browse/CAMEL-6632
