JUnit test case for a Camel route for ActiveMQ (Java)

I have a Camel route in MyRouteBuilder.java that consumes messages from ActiveMQ:
from("activemq:queue:myQueue" )
.process(consumeDroppedMessage)
.log(">>> I am here");
I wrote a test case for this route as follows:
@Override
public RouteBuilder createRouteBuilder() throws Exception {
    return new MyRouteBuilder();
}
@Test
void testMyTest() throws Exception {
    String queueInputMessage = "My Msg";
    template.sendBody("activemq:queue:myQueue", queueInputMessage);
    assertMockEndpointsSatisfied();
}
When I run the unit test case I get this strange error:
17:53:26.175 [main] DEBUG org.apache.camel.impl.engine.InternalRouteStartupManager - Route: route1 >>> Route[activemq://queue:null -> null]
17:53:26.175 [main] DEBUG org.apache.camel.impl.engine.InternalRouteStartupManager - Starting consumer (order: 1000) on route: route1
17:53:26.175 [main] DEBUG org.apache.camel.support.DefaultConsumer - Build consumer: Consumer[activemq://queue:null]
17:53:26.185 [main] DEBUG org.apache.camel.support.DefaultConsumer - Init consumer: Consumer[activemq://queue:null]
17:53:26.185 [main] DEBUG org.apache.camel.support.DefaultConsumer - Starting consumer: Consumer[activemq://queue:null]
17:53:26.213 [main] DEBUG org.apache.activemq.thread.TaskRunnerFactory - Initialized TaskRunnerFactory[ActiveMQ Task] using ExecutorService: java.util.concurrent.ThreadPoolExecutor@3fffff43[Running, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 0]
17:53:26.215 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Reconnect was triggered but transport is not started yet. Wait for start to connect the transport.
17:53:26.334 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Started unconnected
17:53:26.334 [main] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Waking up reconnect task
17:53:26.335 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - urlList connectionList:[tcp://localhost:61616], from: [tcp://localhost:61616]
17:53:26.339 [main] DEBUG org.apache.camel.component.jms.DefaultJmsMessageListenerContainer - Established shared JMS Connection
17:53:26.340 [main] DEBUG org.apache.camel.component.jms.DefaultJmsMessageListenerContainer - Resumed paused task: org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker@58c34bb3
17:53:26.372 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Attempting 0th connect to: tcp://localhost:61616
17:53:28.393 [ActiveMQ Task-1] DEBUG org.apache.activemq.transport.failover.FailoverTransport - Connect fail to: tcp://localhost:61616, reason: {}
I am especially stumped to see these messages:
Route: route1 >>> Route[activemq://queue:null -> null]
and
urlList connectionList:[tcp://localhost:61616], from: [tcp://localhost:61616]
Why is the queue coming up as null even though I have a proper queue name? And why is the broker URL tcp://localhost:61616?
I want this unit test to run properly in all environments: local, DIT, SIT, PROD, etc. For that, I cannot afford the broker URL to be tcp://localhost:61616.
Any ideas as to what I am doing wrong here and what I should be doing?
EDIT 1:
One issue I am seeing is that MyRouteBuilder() inside createRouteBuilder() is invoked even before the test method is called, which leads to the errors shown in the log.

The "activemq:queue:.." is telling Camel to use the auto-configure magic behind the scenes (which uses default url) and your use case is beyond that.
You need to configure a connection factory (ActiveMQConnectionFactory) and configure a camel-jms component to use that connection factory.
The connection factory allows you to specify url, userName, password, default connection settings and setup SSL.
A best practice is to externalize the url, userName, password and queue to a properties file so you can change those across the environments-- local, DIT, SIT and prod, etc.
NOTE: Use org.apache.camel/camel-jms component, and not the org.apache.activemq/activemq-camel component. activemq-camel is deprecated and being removed in ActiveMQ 5.17.x.
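A minimal sketch of that setup (the property keys and the System.getProperty lookup are illustrative assumptions; only the Camel and ActiveMQ calls are as documented):
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.camel.CamelContext;
import org.apache.camel.component.jms.JmsComponent;

public final class JmsComponentSetup {

    // Registers a camel-jms component backed by an explicit connection factory.
    public static void configure(CamelContext context) {
        // In practice these values come from an externalized properties file,
        // so they can differ per environment (local, DIT, SIT, PROD).
        String brokerUrl = System.getProperty("broker.url");
        String user = System.getProperty("broker.user");
        String password = System.getProperty("broker.password");

        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(brokerUrl);
        connectionFactory.setUserName(user);
        connectionFactory.setPassword(password);

        JmsComponent jms = new JmsComponent();
        jms.setConnectionFactory(connectionFactory);

        // Routes then use "jms:queue:myQueue" instead of relying on the
        // auto-configured "activemq:" component and its default URL.
        context.addComponent("jms", jms);
    }
}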

Instead of setting up an explicit ActiveMQ broker, I started using an embedded VM broker.
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
    return new RouteBuilder() {
        @Override
        public void configure() {
            ConnectionFactory connectionFactory = new ActiveMQConnectionFactory("vm://localhost?broker.persistent=false");
            ActiveMQComponent activeMQComponent = new ActiveMQComponent();
            activeMQComponent.setConnectionFactory(connectionFactory);
            context.addComponent("activemq", activeMQComponent);
            from("activemq:queue:myQueue").to("mock:collector");
        }
    };
}
Also, I had mistaken Camel's JUnit support for traditional JUnit. We don't need to call the actual route builder class explicitly. Instead, after setting up my ActiveMQ component as above, I was able to write my test methods, mock my endpoints for the queue, send messages and assert on them (a sketch follows below). Camel is truly versatile, though it requires a lot of study.
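For completeness, a hedged sketch of what such a test method can look like against the route above (the message body and expectations are illustrative; getMockEndpoint and template come from CamelTestSupport):
@Test
public void testMessageReachesCollector() throws Exception {
    // The route forwards everything from the queue to mock:collector.
    MockEndpoint collector = getMockEndpoint("mock:collector");
    collector.expectedMessageCount(1);
    collector.expectedBodiesReceived("My Msg");

    // Sends through the embedded VM broker configured above.
    template.sendBody("activemq:queue:myQueue", "My Msg");

    collector.assertIsSatisfied();
}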

Related

Activiti Job Executor problem with async serviceTasks (activiti >= 5.17)

Please consider the following diagram
MyProcess.bpmn
<?xml version="1.0" encoding="UTF-8"?>
<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:activiti="http://activiti.org/bpmn" xmlns:bpmndi="http://www.omg.org/spec/BPMN/20100524/DI" xmlns:omgdc="http://www.omg.org/spec/DD/20100524/DC" xmlns:omgdi="http://www.omg.org/spec/DD/20100524/DI" typeLanguage="http://www.w3.org/2001/XMLSchema" expressionLanguage="http://www.w3.org/1999/XPath" targetNamespace="http://www.activiti.org/test">
<process id="myProcess" name="My process" isExecutable="true">
<startEvent id="startevent1" name="Start"></startEvent>
<userTask id="evl" name="Evaluation"></userTask>
<boundaryEvent id="timer_event_autocomplete" name="Timer" attachedToRef="evl" cancelActivity="false">
<timerEventDefinition>
<timeDate>PT2S</timeDate>
</timerEventDefinition>
</boundaryEvent>
<serviceTask id="timer_service" name="Timed Autocomplete" activiti:async="true" activiti:class="com.example.service.TimerService"></serviceTask>
<serviceTask id="store_docs_service" name="Store Documents" activiti:async="true" activiti:class="com.example.service.StoreDocsService"></serviceTask>
<sequenceFlow id="flow1" sourceRef="startevent1" targetRef="evl"></sequenceFlow>
<sequenceFlow id="flow2" sourceRef="timer_event_autocomplete" targetRef="timer_service"></sequenceFlow>
<sequenceFlow id="flow3" sourceRef="evl" targetRef="store_docs_service"></sequenceFlow>
<sequenceFlow id="flow4" sourceRef="store_docs_service" targetRef="endevent1"></sequenceFlow>
<endEvent id="endevent1" name="End"></endEvent>
</process>
</definitions>
To describe it in words: there is one user task (Evaluation) with a timer attached to it (configured to trigger after 2 seconds). When the timer fires, the Timed Autocomplete async service task, through its Java delegate TimerService, tries to complete the user task (Evaluation). Completing the user task (Evaluation) moves the flow to the other async service task (Store Documents), which calls its Java delegate, StoreDocsService, and the flow ends.
TimerService.java
public class TimerService implements JavaDelegate {

    Logger LOGGER = LoggerFactory.getLogger(TimerService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        LOGGER.info("*** Executing Timer autocomplete ***");
        Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
        execution.getEngineServices().getTaskService().complete(task.getId());
        LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
    }
}
StoreDocsService.java
public class StoreDocsService implements JavaDelegate {

    Logger LOGGER = LoggerFactory.getLogger(StoreDocsService.class);

    @Override
    public void execute(DelegateExecution execution) throws Exception {
        LOGGER.info("*** Executing Store Documents ***");
    }
}
App.java
public class App {

    public static void main(String[] args) throws Exception {
        // DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
        // demoAsyncJobExecutor.setCorePoolSize(10);
        // demoAsyncJobExecutor.setMaxPoolSize(50);
        // demoAsyncJobExecutor.setKeepAliveTime(10000);
        // demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
        ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
                .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
                .setJdbcUsername("sa")
                .setJdbcPassword("")
                .setJdbcDriver("org.h2.Driver")
                .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
                // .setAsyncExecutorActivate(true)
                // .setAsyncExecutorEnabled(true)
                // .setAsyncExecutor(demoAsyncJobExecutor)
                .setJobExecutorActivate(true);
        ProcessEngine processEngine = cfg.buildProcessEngine();

        String pName = processEngine.getName();
        String ver = ProcessEngine.VERSION;
        System.out.println("ProcessEngine [" + pName + "] Version: [" + ver + "]");

        RepositoryService repositoryService = processEngine.getRepositoryService();
        Deployment deployment = repositoryService.createDeployment()
                .addClasspathResource("MyProcess.bpmn").deploy();
        ProcessDefinition processDefinition = repositoryService.createProcessDefinitionQuery()
                .deploymentId(deployment.getId()).singleResult();
        System.out.println(
                "Found process definition ["
                        + processDefinition.getName() + "] with id ["
                        + processDefinition.getId() + "]");

        final Map<String, Object> variables = new HashMap<String, Object>();
        final RuntimeService runtimeService = processEngine.getRuntimeService();
        ProcessInstance id = runtimeService.startProcessInstanceByKey("myProcess", variables);
        System.out.println("Started Process Id: " + id.getId());

        try {
            final TaskService taskService = processEngine.getTaskService();
            // List<Task> tasks = taskService.createTaskQuery().active().list();
            // while (!tasks.isEmpty()) {
            //     Task task = tasks.get(0);
            //     taskService.complete(task.getId());
            //     tasks = taskService.createTaskQuery().active().list();
            // }
        } catch (Exception e) {
            System.out.println(e.getMessage());
        } finally {
        }

        while (!runtimeService.createExecutionQuery().list().isEmpty()) {
        }

        processEngine.close();
    }
}
Activiti 5.15
When the timer triggers, the above diagram executes as described. We use Activiti's DefaultJobExecutor, as we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.15]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-1] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-1] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-1] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Shutting down the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnable - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] stopped job acquisition
Activiti >= 5.17
Changing only the Activiti version in pom.xml to 5.17.0 or later (tested up to 5.22.0) and executing the same code, the flow executes the timer's Java delegate, TimerService, which completes the user task (Evaluation), but the Store Documents Java delegate, StoreDocsService, is never called. What's more, the flow never seems to end: the execution remains stuck at the Store Documents async service task.
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.jobexecutor.JobExecutor - Starting up the JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor].
[Thread-1] INFO org.activiti.engine.impl.jobexecutor.AcquireJobsRunnableImpl - JobExecutor[org.activiti.engine.impl.jobexecutor.DefaultJobExecutor] starting to acquire jobs
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
Changing to the async job executor. One feature of the 5.17 release was the new async job executor (however, the non-async job executor remains the default). So we try to enable the async executor in App.java with the following lines:
DefaultAsyncJobExecutor demoAsyncJobExecutor = new DefaultAsyncJobExecutor();
demoAsyncJobExecutor.setCorePoolSize(10);
demoAsyncJobExecutor.setMaxPoolSize(50);
demoAsyncJobExecutor.setKeepAliveTime(10000);
demoAsyncJobExecutor.setMaxAsyncJobsDuePerAcquisition(50);
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
        .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
        .setJdbcUsername("sa")
        .setJdbcPassword("")
        .setJdbcDriver("org.h2.Driver")
        .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
        .setAsyncExecutorActivate(true)
        .setAsyncExecutorEnabled(true)
        .setAsyncExecutor(demoAsyncJobExecutor);
The flow seems to execute correctly; StoreDocsService is called after TimerService. But the process never ends: the while (!runtimeService.createExecutionQuery().list().isEmpty()) statement in App.java is always true!
Logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 10, maxPoolSize 50 and keepAliveTime 10000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
ProcessEngine [default] Version: [5.17.0.2]
[main] INFO org.activiti.engine.impl.bpmn.deployer.BpmnDeployer - Processing resource MyProcess.bpmn
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[pool-1-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[pool-1-thread-2] INFO com.example.service.TimerService - *** Task: 9 autocompleted by timer ***
[pool-1-thread-3] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
!!! UPDATE !!!
Activiti 6.0.0
Tried the same scenario but with Activiti version 6.0.0.
Changes are needed in TimerService, since the EngineServices can no longer be obtained from the DelegateExecution:
public class TimerService implements JavaDelegate {

    Logger LOGGER = LoggerFactory.getLogger(TimerService.class);

    @Override
    public void execute(DelegateExecution execution) {
        LOGGER.info("*** Executing Timer autocomplete ***");
        Task task = Context.getProcessEngineConfiguration().getTaskService().createTaskQuery().active().singleResult();
        Context.getProcessEngineConfiguration().getTaskService().complete(task.getId());
        // Task task = execution.getEngineServices().getTaskService().createTaskQuery().active().singleResult();
        // execution.getEngineServices().getTaskService().complete(task.getId());
        LOGGER.info("*** Task: {} autocompleted by timer ***", task.getId());
    }
}
This version has only the async executor, so the ProcessEngineConfiguration in App.java changes to:
ProcessEngineConfiguration cfg = new StandaloneProcessEngineConfiguration()
        .setJdbcUrl("jdbc:h2:mem:activiti;DB_CLOSE_DELAY=1000")
        .setJdbcUsername("sa")
        .setJdbcPassword("")
        .setJdbcDriver("org.h2.Driver")
        .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
        .setAsyncExecutorActivate(true)
        // .setAsyncExecutorEnabled(true)
        // .setAsyncExecutor(demoAsyncJobExecutor)
        // .setJobExecutorActivate(true)
        ;
With version 6.0.0 and the async executor, the process completes successfully, as we can see in the logs:
[main] INFO org.activiti.engine.impl.ProcessEngineImpl - ProcessEngine default created
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Starting up the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating thread pool queue of size 100
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Creating executor service with corePoolSize 2, maxPoolSize 10 and keepAliveTime 5000
[Thread-1] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} starting to acquire async jobs due
[Thread-2] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} starting to acquire async jobs due
[Thread-3] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} starting to reset expired jobs
ProcessEngine [default] Version: [6.0.0.4]
Found process definition [My process] with id [myProcess:1:3]
Started Process Id: 4
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Executing Timer autocomplete ***
[activiti-async-job-executor-thread-2] INFO com.example.service.TimerService - *** Task: 10 autocompleted by timer ***
[activiti-async-job-executor-thread-2] INFO com.example.service.StoreDocsService - *** Executing Store Documents ***
[main] INFO org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor - Shutting down the default async job executor [org.activiti.engine.impl.asyncexecutor.DefaultAsyncJobExecutor].
[activiti-reset-expired-jobs] INFO org.activiti.engine.impl.asyncexecutor.ResetExpiredJobsRunnable - {} stopped resetting expired jobs
[activiti-acquire-timer-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireTimerJobsRunnable - {} stopped async job due acquisition
[activiti-acquire-async-jobs] INFO org.activiti.engine.impl.asyncexecutor.AcquireAsyncJobsDueRunnable - {} stopped async job due acquisition
Process finished with exit code 0
Two questions:
We have upgraded from Activiti 5.15 to 5.22.0 and we do not use the async job executor. Is there any way to keep this piece of the diagram behaving as it did in 5.15?
If switching to the async job executor is inevitable, what are we missing to make this process complete successfully?
A sample project of the above can be found at: https://github.com/pleft/DemoActiviti
Without answering your question explicitly (which would require setting up your environment and debugging), I would recommend you at the very least move to Activiti 6.
The 5.x branch of Activiti hasn't been maintained for over 5 years and is effectively dead.
Even the 6.x line has pretty much been abandoned, as the core developers have all moved to the "Flowable" project.
If you choose to stay with Activiti 5.x, your options are:
Maintain the codebase yourself (and hopefully contribute any changes/enhancements back to the project).
Contract Activiti support services. There are a couple of vendors offering such services.
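If you do decide to move to Activiti 6, the dependency change itself is small; a hedged pom.xml sketch (coordinates as published to Maven Central, version matching the one tested in the question):
<dependency>
    <groupId>org.activiti</groupId>
    <artifactId>activiti-engine</artifactId>
    <version>6.0.0</version>
</dependency>
Bear in mind the API differences already noted above, such as obtaining services via Context.getProcessEngineConfiguration() instead of execution.getEngineServices().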

Unable to set up a connection between Kafka (Hortonworks sandbox) and IntelliJ IDEA (local Windows system)

Here is the exception:
java.nio.channels.ClosedChannelException
The whole log in the console:
[main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
[main] INFO kafka.utils.VerifiableProperties - Verifying properties
[main] INFO kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to xxx.xxx.xxx.xxx:6667
[main] INFO kafka.utils.VerifiableProperties - Property request.required.acks is overridden to 1
[main] INFO kafka.utils.VerifiableProperties - Property serializer.class is overridden to kafka.serializer.StringEncoder
[Thread-0] INFO kafka.client.ClientUtils$ - Fetching metadata from broker BrokerEndPoint(0,xxx.xxx.xxx.xxx,6667) with correlation id 0 for 1 topic(s) Set(test)
[Thread-0] INFO kafka.producer.SyncProducer - Connected to xxx.xxx.xxx.xxx:6667 for producing
[Thread-0] INFO kafka.producer.SyncProducer - Disconnecting from xxx.xxx.xxx.xxx:6667
[Thread-0] WARN kafka.client.ClientUtils$ - Fetching topic metadata with correlation id 0 for topics [Set(test)] from broker [BrokerEndPoint(0,xxx.xxx.xxx.xxx,6667)] failed
java.nio.channels.ClosedChannelException
I have found some answers online saying that I should set advertised.host.name in server.properties, but I really don't know which IP to set it to.
I am totally lost in this situation. Here is the Java code I wrote in IntelliJ; I just want to know which hostname I should put in BROKER_LIST to make a connection between the Hortonworks sandbox and my local machine.
public class KafkaProperties {
    public static final String ZK = "127.0.0.1:2181";
    public static final String TOPIC = "test";
    public static final String BROKER_LIST = "xx.xxx.xxx.xxx:6667";
}
server.properties:
# Generated by Apache Ambari. Sun Mar 1 19:04:58 2020
auto.create.topics.enable=true
auto.leader.rebalance.enable=true
compression.type=producer
controlled.shutdown.enable=true
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
controller.message.queue.size=10
controller.socket.timeout.ms=30000
default.replication.factor=1
delete.topic.enable=true
external.kafka.metrics.exclude.prefix=kafka.network.RequestMetrics,kafka.server.DelayedOperationPurgatory,kafka.server.BrokerTopicMetrics.BytesRejectedPerSec
external.kafka.metrics.include.prefix=kafka.network.RequestMetrics.ResponseQueueTimeMs.request.OffsetCommit.98percentile,kafka.network.RequestMetrics.ResponseQueueTimeMs.request.Offsets.95percentile,kafka.network.RequestMetrics.ResponseSendTimeMs.request.Fetch.95percentile,kafka.network.RequestMetrics.RequestsPerSec.request
fetch.purgatory.purge.interval.requests=10000
kafka.ganglia.metrics.group=kafka
kafka.ganglia.metrics.host=localhost
kafka.ganglia.metrics.port=8671
kafka.ganglia.metrics.reporter.enabled=true
kafka.metrics.reporters=
kafka.timeline.metrics.host_in_memory_aggregation=
kafka.timeline.metrics.host_in_memory_aggregation_port=
kafka.timeline.metrics.host_in_memory_aggregation_protocol=
kafka.timeline.metrics.hosts=
kafka.timeline.metrics.maxRowCacheSize=10000
kafka.timeline.metrics.port=
kafka.timeline.metrics.protocol=
kafka.timeline.metrics.reporter.enabled=true
kafka.timeline.metrics.reporter.sendInterval=5900
kafka.timeline.metrics.truststore.password=
kafka.timeline.metrics.truststore.path=
kafka.timeline.metrics.truststore.type=
leader.imbalance.check.interval.seconds=300
leader.imbalance.per.broker.percentage=10
listeners=PLAINTEXT://sandbox-hdp.hortonworks.com:6667
log.cleanup.interval.mins=10
log.dirs=/kafka-logs
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.bytes=-1
log.retention.check.interval.ms=600000
log.retention.hours=168
log.roll.hours=168
log.segment.bytes=1073741824
message.max.bytes=1000000
min.insync.replicas=1
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
num.replica.fetchers=1
offset.metadata.max.bytes=4096
offsets.commit.required.acks=-1
offsets.commit.timeout.ms=5000
offsets.load.buffer.size=5242880
offsets.retention.check.interval.ms=600000
offsets.retention.minutes=86400000
offsets.topic.compression.codec=0
offsets.topic.num.partitions=50
offsets.topic.replication.factor=1
offsets.topic.segment.bytes=104857600
port=6667
producer.metrics.enable=false
producer.purgatory.purge.interval.requests=10000
queued.max.requests=500
replica.fetch.max.bytes=1048576
replica.fetch.min.bytes=1
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.lag.max.messages=4000
replica.lag.time.max.ms=10000
replica.socket.receive.buffer.bytes=65536
replica.socket.timeout.ms=30000
sasl.enabled.mechanisms=GSSAPI
sasl.mechanism.inter.broker.protocol=GSSAPI
security.inter.broker.protocol=PLAINTEXT
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
ssl.client.auth=none
ssl.key.password=
ssl.keystore.location=
ssl.keystore.password=
ssl.truststore.location=
ssl.truststore.password=
zookeeper.connect=sandbox-hdp.hortonworks.com:2181
zookeeper.connection.timeout.ms=25000
zookeeper.session.timeout.ms=30000
zookeeper.sync.time.ms=2000
Now the code in IntelliJ:
public class KafkaProperties {
    public static final String ZK = "sandbox-hdp.hortonworks.com:2181";
    public static final String TOPIC = "yanzhao";
    public static final String BROKER_LIST = "sandbox-hdp.hortonworks.com:6667";
}
/etc/hosts file:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.2 sandbox-hdp.hortonworks.com sandbox-hdp
Part of the jps command output:
7846 QuorumPeerMain /usr/hdp/current/zookeeper-server/conf/zoo.cfg
363 AmbariServer
6444 JournalNode
25581 Kafka /usr/hdp/3.0.1.0-187/kafka/config/server.properties
connection:
[root@sandbox-hdp ~]# netstat -lpn | grep 6667
tcp 0 0 172.18.0.2:6667 0.0.0.0:* LISTEN 25581/java
The command I ran in the sandbox (to set up the consumer):
kafka-console-consumer.sh --bootstrap-server sandbox-hdp.hortonworks.com:6667 --topic yanzhao
exception logs:
[main] INFO kafka.utils.Log4jControllerRegistration$ - Registered kafka:type=kafka.Log4jController MBean
[main] INFO kafka.utils.VerifiableProperties - Verifying properties
[main] INFO kafka.utils.VerifiableProperties - Property metadata.broker.list is overridden to sandbox-hdp.hortonworks.com:6667
[main] INFO kafka.utils.VerifiableProperties - Property request.required.acks is overridden to 1
[main] INFO kafka.utils.VerifiableProperties - Property serializer.class is overridden to kafka.serializer.StringEncoder
[Thread-0] INFO kafka.client.ClientUtils$ - Fetching metadata from broker BrokerEndPoint(0,sandbox-hdp.hortonworks.com,6667) with correlation id 0 for 1 topic(s) Set(yanzhao)
[Thread-0] INFO kafka.producer.SyncProducer - Connected to sandbox-hdp.hortonworks.com:6667 for producing
[Thread-0] INFO kafka.producer.SyncProducer - Disconnecting from sandbox-hdp.hortonworks.com:6667
[Thread-0] WARN kafka.client.ClientUtils$ - Fetching topic metadata with correlation id 0 for topics [Set(yanzhao)] from broker [BrokerEndPoint(0,sandbox-hdp.hortonworks.com,6667)] failed
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:112)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:83)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:76)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:85)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:76)
at kafka.producer.Producer.send(Producer.scala:78)
at kafka.javaapi.producer.Producer.send(Producer.scala:35)
at com.yanzhao.spark.kafka.KafkaProducer.run(KafkaProducer.java:32)
[Thread-0] INFO kafka.producer.SyncProducer - Disconnecting from sandbox-hdp.hortonworks.com:6667
[Thread-0] ERROR kafka.utils.CoreUtils$ - fetching topic metadata for topics [Set(yanzhao)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox-hdp.hortonworks.com,6667))] failed
kafka.common.KafkaException: fetching topic metadata for topics [Set(yanzhao)] from broker [ArrayBuffer(BrokerEndPoint(0,sandbox-hdp.hortonworks.com,6667))] failed
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:77)
at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:83)
at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:76)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:85)
at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:76)
at kafka.producer.Producer.send(Producer.scala:78)
at kafka.javaapi.producer.Producer.send(Producer.scala:35)
at com.yanzhao.spark.kafka.KafkaProducer.run(KafkaProducer.java:32)
Caused by: java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:112)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:80)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:79)
at kafka.producer.SyncProducer.send(SyncProducer.scala:124)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:63)
... 7 more
You need to use the hostname of the machine on which your Kafka broker is running (and not the IP of the machine the client is running on).
Your client needs to use the address the Kafka broker publishes to the public. This address is configured through advertised.listeners:
Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners, it is not valid to advertise the 0.0.0.0 meta-address.
Therefore, you should use this address. In case advertised.listeners is not configured in server.properties, you can probably still use the listeners address.
On a final note, I can see that you have used "127.0.0.1:2181" as the ZooKeeper address. Likewise, you need to use the hostname of the machine where ZooKeeper is running.
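Put together, a hedged sketch of the relevant server.properties lines (the host name is the one already present in the question's configuration):
# advertised.listeners is what clients are told to connect to; when it is
# absent, the listeners value is published instead.
listeners=PLAINTEXT://sandbox-hdp.hortonworks.com:6667
advertised.listeners=PLAINTEXT://sandbox-hdp.hortonworks.com:6667
The local Windows machine must then be able to resolve sandbox-hdp.hortonworks.com, for example via a hosts-file entry pointing at the sandbox's reachable IP.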

How to connect a reactive stream to an AMQP broker in Quarkus / SmallRye

I am attempting to migrate my Artemis MQ clients to Quarkus microservices. I consistently get a "Stream not yet connected" error when attempting to send a message.
I attempted to follow the suggestions in this answer (using microprofile-reactive-messaging): Quarkus with ActiveMQ?
In my build.gradle:
dependencies {
    // ...
    implementation enforcedPlatform("io.quarkus:quarkus-bom:0.15.0")
    implementation 'io.quarkus:quarkus-resteasy'
    implementation 'io.quarkus:quarkus-resteasy-jsonb'
    implementation 'io.quarkus:quarkus-smallrye-metrics'
    implementation 'io.quarkus:quarkus-smallrye-health'
    implementation 'io.quarkus:quarkus-smallrye-reactive-messaging'
    implementation 'io.quarkus:quarkus-vertx'
    implementation 'io.smallrye.reactive:smallrye-reactive-messaging-amqp:0.0.8'
}
A sample REST endpoint, forwarding a message to AMQP:
#Path("/send")
public class MessageResource {
#Inject
#Stream("emitter-topic")
Emitter<String> topic;
#GET
#Produces(MediaType.TEXT_PLAIN)
public String send(#QueryParam("msg") final String msg) {
final String message = Objects.requireNonNullElse(msg, "").isBlank() ? "no message" : msg;
topic.send(message);
return "sent: " + message;
}
}
In src/main/resources/application.properties:
smallrye.messaging.source.emitter-topic.type=io.smallrye.reactive.messaging.amqp.Amqp
smallrye.messaging.source.emitter-topic.address=test-amqp
smallrye.messaging.source.emitter-topic.containerId=test-clientid
smallrye.messaging.source.emitter-topic.host=localhost
smallrye.messaging.source.emitter-topic.port=5672
I consistently see the "Stream not yet connected" IllegalStateException. I can tell from the logs that SmallRye finds the AMQP connector, but it never actually initializes the connection.
2019-06-02 12:19:50,055 INFO [io.sma.rea.mes.ext.MediatorManager] (main) Deployment done... start processing
2019-06-02 12:19:50,101 INFO [io.sma.rea.mes.imp.ConfiguredStreamFactory] (main) Found incoming connectors: [class io.smallrye.reactive.messaging.amqp.Amqp]
2019-06-02 12:19:50,102 INFO [io.sma.rea.mes.imp.ConfiguredStreamFactory] (main) Found outgoing connectors: [class io.smallrye.reactive.messaging.amqp.Amqp]
2019-06-02 12:19:50,103 INFO [io.sma.rea.mes.imp.ConfiguredStreamFactory] (main) Stream manager initializing...
2019-06-02 12:19:50,106 INFO [io.sma.rea.mes.imp.LegacyConfiguredStreamFactory] (main) Stream manager initializing...
2019-06-02 12:19:50,125 INFO [io.sma.rea.mes.ext.MediatorManager] (main) Initializing mediators
2019-06-02 12:19:50,127 INFO [io.sma.rea.mes.ext.MediatorManager] (main) Connecting mediators
2019-06-02 12:19:50,136 INFO [io.quarkus] (main) Quarkus 0.15.0 started in 1.487s. Listening on: http://[::]:8080
2019-06-02 12:19:50,137 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jsonb, smallrye-health, smallrye-metrics, smallrye-reactive-messaging, smallrye-reactive-streams-operators, vertx]
2019-06-02 12:20:01,964 ERROR [io.und.request] (executor-thread-1) UT005023: Exception handling request to /send: org.jboss.resteasy.spi.UnhandledException: java.lang.IllegalStateException: Stream not yet connected
at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:106)
at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:372)
at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:209)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:496)
at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:252)
at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:153)
at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:362)
at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:156)
at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:238)
at org.jboss.resteasy.plugins.server.servlet.ServletContainerDispatcher.service(ServletContainerDispatcher.java:234)
at io.quarkus.resteasy.runtime.ResteasyFilter$ResteasyResponseWrapper.sendError(ResteasyFilter.java:72)
at io.undertow.servlet.handlers.DefaultServlet.doGet(DefaultServlet.java:175)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:686)
OK, I figured out my problem. In application.properties, I had source and sink backwards. Describing emitter-topic as a sink, rather than a source, resolved the issue.
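In other words, the fix amounts to renaming the configuration keys; a hedged sketch of the corrected application.properties (values carried over from the question):
# An emitter publishes outbound messages, so it is a sink, not a source.
smallrye.messaging.sink.emitter-topic.type=io.smallrye.reactive.messaging.amqp.Amqp
smallrye.messaging.sink.emitter-topic.address=test-amqp
smallrye.messaging.sink.emitter-topic.containerId=test-clientid
smallrye.messaging.sink.emitter-topic.host=localhost
smallrye.messaging.sink.emitter-topic.port=5672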

Rabbit SimpleMessageListenerContainer won't shut down

Following on from this question, we have a scenario where Rabbit credentials become invalidated, and we need to call resetConnection() on our CachingConnectionFactory to pick up a fresh set of credentials.
We're doing this in a ShutdownSignalException handler, and it basically works. What doesn't work is that we also need to restart our listeners. We have a few of these:
@RabbitListener(
    id = ABC,
    bindings = @QueueBinding(value = @Queue(value = "myQ", durable = "true"),
        exchange = @Exchange(value = "myExchange", durable = "true"),
        key = "myKey"),
    containerFactory = "customQueueContainerFactory"
)
public void process(...) {
    ...
}
The impression given by this answer (also this) is that we just need to do:
@Autowired RabbitListenerEndpointRegistry registry;
@Autowired CachingConnectionFactory connectionFactory;

@Override
public void shutdownCompleted(ShutdownSignalException cause) {
    refreshRabbitMQCredentials();
}

public void refreshRabbitMQCredentials() {
    registry.stop(); // do this first
    // Fetch credentials, update username/pass
    connectionFactory.resetConnection(); // then this
    registry.start(); // finally restart
}
The problem is that, having debugged my way through SimpleMessageListenerContainer, when the very first of these containers has doShutdown() called, Spring tries to cancel the BlockingQueueConsumer.
Because the underlying Channel still reports as being open (even though the RabbitMQ UI doesn't report any connections or channels being open), a Cancel event is sent to the broker inside ChannelN.basicCancel(), but the channel then blocks forever waiting for a reply, and as a result container shutdown is completely blocked.
I've tried injecting a TaskExecutor (an Executors.newCachedThreadPool()) into the containers and calling shutdownNow() or interrupting them, but none of this affects the channel's blocking wait.
It looks like my only option to unblock the channel is to trigger an additional ShutdownSignalException during cancellation, but (a) I don't know how I can do that, and (b) it looks like I would have to initiate cancellation of all listeners in parallel before trying to shut down again.
// com.rabbitmq.client.impl.ChannelN
@Override
public void basicCancel(final String consumerTag) throws IOException
{
    // [snip]
    rpc(new Basic.Cancel(consumerTag, false), k);
    try {
        k.getReply(); // <== BLOCKS HERE
    } catch (ShutdownSignalException ex) {
        throw wrap(ex);
    }
    metricsCollector.basicCancel(this, consumerTag);
}
I'm not sure why this is proving so difficult. Is there a simpler way to force SimpleMessageListenerContainer shutdown?
Using Spring Rabbit 1.7.6; AMQP Client 4.0.3; Spring Boot 1.5.10.RELEASE
UPDATE
Some logs demonstrating the theory that the message containers restart before the connection refresh has completed, and that this might be why they don't reconnect:
ERROR o.s.a.r.c.CachingConnectionFactory - Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.ReauthenticatingChannelListener - Channel shutdown: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.ReauthenticatingChannelListener - Channel closed with reply code 403. Assuming credentials have been revoked and refreshing config server properties to get new credentials. Cause: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
WARN u.c.c.c.r.ReauthenticatingChannelListener - Shutdown signalled: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=403, reply-text=ACCESS_REFUSED - access to queue 'amq.gen-4-bqGxbLio9mu8Kc7MMexw' in vhost '/' refused for user 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4', class-id=50, method-id=10)
INFO u.c.c.c.r.RabbitMQReauthenticator - Refreshing Rabbit credentials for XXXXXXXX
INFO o.s.c.c.c.ConfigServicePropertySourceLocator - Fetching config from server at: http://localhost:8888/configuration
INFO u.c.c.c.r.ReauthenticatingChannelListener - Got ListenerContainerConsumerFailedEvent: Consumer raised exception, attempting restart
INFO o.s.a.r.l.SimpleMessageListenerContainer - Restarting Consumer@2db55dec: tags=[{amq.ctag-ebAfSnXLbw_W1hlZ5ag7sQ=consumer.myQ}], channel=Cached Rabbit Channel: AMQChannel(amqp://cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4@127.0.0.1:5672/,2), conn: Proxy@12de62aa Shared Rabbit Connection: SimpleConnection@56c95789 [delegate=amqp://cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4@127.0.0.1:5672/, localPort= 50052], acknowledgeMode=AUTO local queue size=0
INFO o.s.c.c.c.ConfigServicePropertySourceLocator - Located environment: name=myApp, profiles=[default], label=null, version=null, state=null
INFO com.zaxxer.hikari.HikariDataSource - XXXXXXXX - Shutdown initiated...
INFO com.zaxxer.hikari.HikariDataSource - XXXXXXXX - Shutdown completed.
INFO u.c.c.c.r.RabbitMQReauthenticator - Refreshed username: 'cert-configserver-feb6e103-76a8-f5bf-3f23-1e8150812bc4' => 'cert-configserver-d7b54af2-0735-a9ed-7cc4-394803bf5e58'
INFO u.c.c.c.r.RabbitMQReauthenticator - CachingConnectionFactory reset, proceeding...
UPDATE 2:
This does seem to be a race condition of sorts. Having removed the container stop/starts, if I add a thread-only breakpoint to SimpleMessageListenerContainer.restart() to let the resetConnection() race past, and then release the breakpoint, I can see things start to come back:
16:18:47,208 INFO u.c.c.c.r.RabbitMQReauthenticator - CachingConnectionFactory reset
// Get ready to release the SMLC.restart() breakpoint...
16:19:02,072 INFO o.s.a.r.c.CachingConnectionFactory - Attempting to connect to: rabbitmq.service.consul:5672
16:19:02,083 INFO o.s.a.r.c.CachingConnectionFactory - Created new connection: connectionFactory#7489bca4:1/SimpleConnection@68546c13 [delegate=amqp://cert-configserver-132a07c2-94f3-0099-4de1-f0b1a9875d5a@127.0.0.1:5672/, localPort= 33350]
16:19:02,086 INFO o.s.amqp.rabbit.core.RabbitAdmin - Auto-declaring a non-durable, auto-delete, or exclusive Queue ...
16:19:02,095 DEBUG u.c.c.c.r.ReauthenticatingChannelListener - Active connection check succeeded for channel AMQChannel(amqp://cert-configserver-132a07c2-94f3-0099-4de1-f0b1a9875d5a@127.0.0.1:5672/,1)
16:19:02,120 INFO o.s.amqp.rabbit.core.RabbitAdmin - Auto-declaring a non-durable, auto-delete, or exclusive Queue (springCloudBus...
That being the case, I now have to work out either how to delay the container restarts until the refresh is done (i.e. until my ShutdownSignalException handler completes), or how to make the refresh blocking somehow...
UPDATE 3:
My overall problem, of which this was a symptom, was solved with: https://stackoverflow.com/a/49392990/954442
It's not at all clear why the channel would report as open; this works fine for me; it recovers after deleting user foo...
@SpringBootApplication
public class So49323291Application {

    public static void main(String[] args) {
        SpringApplication.run(So49323291Application.class, args);
    }

    @Bean
    public ApplicationRunner runner(RabbitListenerEndpointRegistry registry, CachingConnectionFactory cf,
            RabbitTemplate template) {
        return args -> {
            cf.setUsername("foo");
            cf.setPassword("bar");
            registry.start();
            doSends(template);
            registry.stop();
            cf.resetConnection();
            cf.setUsername("baz");
            cf.setPassword("qux");
            registry.start();
            doSends(template);
        };
    }

    public void doSends(RabbitTemplate template) {
        while (true) {
            try {
                template.convertAndSend("foo", "Hello");
                Thread.sleep(5_000);
            }
            catch (Exception e) {
                e.printStackTrace();
                break;
            }
        }
    }

    @RabbitListener(queues = "foo", autoStartup = "false")
    public void in(Message in) {
        System.out.println(in);
    }
}
(Body:'Hello' MessageProperties [headers={}, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=4, consumerTag=amq.ctag-9zt3wUGYSJmoON3zw03wUw, consumerQueue=foo])
2018-03-16 11:24:01.451 ERROR 11867 --- [ 127.0.0.1:5672] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: connection error; protocol method: #method(reply-code=320, reply-text=CONNECTION_FORCED - user 'foo' is deleted, class-id=0, method-id=0)
...
Caused by: com.rabbitmq.client.AuthenticationFailureException: ACCESS_REFUSED - Login was refused using authentication mechanism PLAIN. For details see the broker logfile.
2018-03-16 11:24:01.745 ERROR 11867 --- [cTaskExecutor-2] o.s.a.r.l.SimpleMessageListenerContainer : Stopping container from aborted consumer
2018-03-16 11:24:03.740 INFO 11867 --- [cTaskExecutor-3] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory#2c4d1ac:3/SimpleConnection@5e9c036b [delegate=amqp://baz@127.0.0.1:5672/, localPort= 59346]
(Body:'Hello' MessageProperties [headers={}, contentType=text/plain, contentEncoding=UTF-8, contentLength=0, receivedDeliveryMode=PERSISTENT, priority=0, redelivered=false, receivedExchange=, receivedRoutingKey=foo, deliveryTag=1, consumerTag=amq.ctag-ljnY00TBuvy5cCAkpD3r4A, consumerQueue=foo])
However, you really don't need to stop/start the registry; just reconfigure the connection factory with the new credentials and call resetConnection(), and the containers will recover, as sketched below.
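A minimal sketch of that simpler recovery path (fetchFreshCredentials() and its Credentials type are hypothetical placeholders for however new credentials are obtained):
@Autowired
CachingConnectionFactory connectionFactory;

public void refreshRabbitMQCredentials() {
    Credentials fresh = fetchFreshCredentials(); // hypothetical helper
    connectionFactory.setUsername(fresh.getUsername());
    connectionFactory.setPassword(fresh.getPassword());
    // Drop the cached connection; listener containers recover on their own.
    connectionFactory.resetConnection();
}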

Debug eureka-client-side HTTP requests

I am trying to register my monolithic application with a Eureka server (a first migration step into the microservices world). The client and server versions I use are 1.5.3. The registration request fails due to a bad request error.
My Java code that creates the Eureka client:
private EurekaClient createEurekaClient() {
    EurekaInstanceConfig instanceConfig = new MyDataCenterInstanceConfig(MY_NAMESPACE);
    InstanceInfo instanceInfo = new EurekaConfigBasedInstanceInfoProvider(instanceConfig).get();
    ApplicationInfoManager applicationInfoManager = new ApplicationInfoManager(instanceConfig, instanceInfo);
    return new DiscoveryClient(applicationInfoManager, new DefaultEurekaClientConfig());
}
eureka-client.properties:
my-namespace.vipAddress=eureka
my-namespace.instance.preferIpAddress=true
eureka.region=default
my-namespace.name=MY-APP
my-namespace.port=8080
my-namespace.shouldUseDns=false
eureka.serviceUrl.default=http://localhost:9999/eureka/v2/
The log output:
2016-09-20 10:35:54,325 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (AbstractJerseyEurekaHttpClient.java:60) - Jersey HTTP POST http://localhost:9999/eureka/v2//apps/MY-APP with instance 7010; statusCode=400
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ThreadSafeClientConnManager.java:282) - Released connection is not reusable.
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ConnPoolByRoute.java:429) - Releasing connection [{}->http://localhost:9999][null]
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ConnPoolByRoute.java:676) - Notifying no-one, there are no waiting threads
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (RedirectingEurekaHttpClient.java:121) - Pinning to endpoint null
2016-09-20 10:35:54,326 WARN [DiscoveryClient-HeartbeatExecutor-0] (RetryableEurekaHttpClient.java:127) - Request execution failure with status code 400; retrying on another server if available
The server returns a 400 error code, which means bad request, so I am looking for a way to print the full registration request to the log file.
I found the root cause of this issue: the com.fasterxml.jackson.core:jackson-databind dependency used in my project was outdated (version 2.1.1), while the Eureka client needs at least version 2.5.4.
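For reference, a hedged sketch of the corresponding dependency pin in pom.xml (standard Maven coordinates; any version at or above the 2.5.4 minimum mentioned above should do):
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.5.4</version>
</dependency>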
