I am facing connection issues when running a Kafka test container (confluentinc/cp-kafka:5.4.3) with a Spring Boot app. Wondering if someone has faced this issue as well. After the Kafka container starts, the admin client tries to connect to the broker to fetch metadata but fails to connect.
Error log:
[AdminClient clientId=adminclient-2] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
I tried the following workarounds to ensure the KafkaAdminClient uses the right address, but none of them worked:
Used the bootstrap server address
Used KAFKA_ADVERTISED_LISTENERS=BROKER://172.17.0.3:9092. This address was being set by testcontainers_start.sh inside the Docker container
Used kafka.getContainerName() to form the address, for example: BROKER://t-adsad:9092
Used kafka.getHost() + ":" + kafka.getMappedPort(9092)
Test class:
@RunWith(SpringRunner.class)
@Import(KafkaTestContainersConfiguration.class)
@SpringBootTest
@DirtiesContext
public class KafkaTestContainersLiveTest {

    @ClassRule
    public static KafkaContainer kafka =
            new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.4.3"));

    @BeforeClass
    public static void setupBootstrapServer() {
        String server = "BROKER://" + kafka.getNetworkAliases().get(0) + ":9092";
        System.setProperty("kafka.bootstrap.servers", server);
    }
Configuration class:
@Configuration
@EnableKafka
public class KafkaTestContainersConfiguration {

    @Value("${kafka.bootstrap.servers}")
    private String bootstrapServer;

    @Value("${kafka.topic}")
    private String topic;

    public final int NUM_PARTITIONS = 1;
    public final short REPLICATION_FACTOR = 1;

    @Bean
    public AdminClient adminClient() {
        return KafkaAdminClient.create(adminClientConfigs());
    }

    public Map<String, Object> adminClientConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServer);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 5000);
        return props;
    }
}
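For comparison, a minimal sketch of the approach that usually resolves this with KafkaContainer: use kafka.getBootstrapServers(), which returns the host-reachable address with the dynamically mapped port (e.g. PLAINTEXT://localhost:32768), instead of hand-building an address from network aliases or the broker's internal listener. With a JUnit4 @ClassRule the container is already started by the time @BeforeClass runs:

@ClassRule
public static KafkaContainer kafka =
        new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:5.4.3"));

@BeforeClass
public static void setupBootstrapServer() {
    // getBootstrapServers() points at the mapped port on the host,
    // which is the address the AdminClient needs to reach the broker
    System.setProperty("kafka.bootstrap.servers", kafka.getBootstrapServers());
}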
I have the following test setup:
MyService connects to PostgreSQL
The MyService endpoint is called from the test suite
Both MyService and PostgreSQL are run with Testcontainers.
Here is the network schema I want to achieve.
At first I tried to arrange communication by exposing ports.
static final PostgreSQLContainer<?> postgres =
        new PostgreSQLContainer<>(DockerImageName.parse(POSTGRES_VERSION));

static final GenericContainer<?> myService = new GenericContainer<>(DockerImageName.parse(MY_SERVICE_IMAGE))
        .withEnv(
                Map.of(
                        "SPRING_DATASOURCE_URL", postgres.getJdbcUrl(),
                        "SPRING_DATASOURCE_USERNAME", postgres.getUsername(),
                        "SPRING_DATASOURCE_PASSWORD", postgres.getPassword()
                )
        )
        .withExposedPorts(8080)
        .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("MyService")));
According to the logs, MyService couldn't establish a connection to PostgreSQL.
Caused by: java.net.ConnectException: Connection refused
Then I configured both services to share the same network.
static final Network SHARED_NETWORK = Network.newNetwork();

static final PostgreSQLContainer<?> postgres =
        new PostgreSQLContainer<>(DockerImageName.parse(POSTGRES_VERSION))
                .withNetwork(SHARED_NETWORK)
                .withNetworkAliases("postgres");

static final GenericContainer<?> myService = new GenericContainer<>(DockerImageName.parse(MY_SERVICE_IMAGE))
        .withEnv(
                Map.of(
                        "SPRING_DATASOURCE_URL", "jdbc:postgresql://postgres:5432/" + postgres.getDatabaseName(),
                        "SPRING_DATASOURCE_USERNAME", postgres.getUsername(),
                        "SPRING_DATASOURCE_PASSWORD", postgres.getPassword()
                )
        )
        .withExposedPorts(8080)
        .withNetwork(SHARED_NETWORK)
        .withNetworkAliases("MyService")
        .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("MyService")));
Now MyService establishes the connection to PostgreSQL successfully. But when I perform an HTTP request to MyService from the test suite, I get the same error.
restTemplate.getForObject("http://" + myService.getHost() + ":" + myService.getMappedPort(8080) + "/api/endpoint", Void.class);
Caused by: java.net.ConnectException: Connection refused
My question is: how can I set up the container network to make this architecture work?
You need to specify port bindings to expose a port to the "outside world".
Example similar to what you want:
Network network = Network.newNetwork();
GenericContainer mariaDbServer = getMariaDbContainer(network);
GenericContainer flywayRunner = getFlywayContainer(network);
...

@SuppressWarnings("rawtypes")
private GenericContainer getMariaDbContainer(Network network) {
    return new GenericContainer<>("mariadb:10.4.21-focal")
            .withEnv(Map.of("MYSQL_ROOT_PASSWORD", "password", "MYSQL_DATABASE", "somedatabase"))
            .withCommand(
                    "mysqld", "--default-authentication-plugin=mysql_native_password",
                    "--character-set-server=utf8mb4", "--collation-server=utf8mb4_unicode_ci")
            .withNetwork(network)
            .withNetworkAliases("somedatabasedb")
            .withNetworkMode(network.getId())
            .withExposedPorts(3306)
            .withCreateContainerCmdModifier(cmd -> cmd
                    .withNetworkMode(network.getId())
                    .withHostConfig(new HostConfig()
                            .withPortBindings(new PortBinding(Ports.Binding.bindPort(20306), new ExposedPort(3306))))
                    .withNetworkMode(network.getId()))
            .withStartupTimeout(Duration.ofMinutes(2L));
}

@SuppressWarnings("rawtypes")
private GenericContainer getFlywayContainer(Network network) {
    return new GenericContainer<>("flyway/flyway:7.15.0-alpine")
            .withEnv(Map.of("MYSQL_ROOT_PASSWORD", "password", "MYSQL_DATABASE", "somedatabase"))
            .withCommand(
                    "-url=jdbc:mariadb://somedatabasedb -schemas=somedatabase -user=root -password=password -connectRetries=300 migrate")
            .withFileSystemBind(Paths.get(".", "infrastructure/database/schema").toAbsolutePath().toString(),
                    "/flyway/sql", BindMode.READ_ONLY)
            .withNetwork(network)
            .waitingFor(Wait.forLogMessage(".*Successfully applied.*", 1))
            .withStartupTimeout(Duration.of(60, ChronoUnit.SECONDS));
}
Container two communicates with container one using the "internal" port (3306).
Container one exposes port 20306 (which forwards to 3306) to the "outside world".
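For illustration, a hedged sketch of how the test suite on the host could then reach the database through that fixed binding (assuming the MariaDB JDBC driver is on the test classpath; the method name is hypothetical and the credentials are the env values from above):

private void queryFromHost() throws java.sql.SQLException {
    // port 20306 on the host forwards to 3306 inside the container
    try (java.sql.Connection conn = java.sql.DriverManager.getConnection(
            "jdbc:mariadb://localhost:20306/somedatabase", "root", "password")) {
        // run queries/assertions against the migrated schema here
    }
}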
I have got a problem. A simple Spring Boot application works fine with the existing MongoDB configuration.
For integration testing, I added the required configuration for embedded MongoDB with Flapdoodle. All the unit tests execute properly. However, when I run the main Spring Boot application, it picks up the Flapdoodle embedded MongoDB configuration by default. As a result, the embedded MongoDB never exits and is still running while the JUnit test cases run. I provide the code snippets below.
Whenever I start the Spring Boot main application, it still runs the embedded MongoDB. I always see the following lines in the console.
Download PRODUCTION:Windows:B64 START
Download PRODUCTION:Windows:B64 DownloadSize: 231162327
Download PRODUCTION:Windows:B64 0% 1% 2% 3% 4% 5% 6% 7% 8%
I provide below the MongoDB configuration that should be picked up when running the main Spring Boot application.
@Slf4j
@Configuration
public class NoSQLAutoConfiguration {

    @Autowired
    private NoSQLEnvConfigProperties configProperties;

    /**
     * Morphia.
     *
     * @return the morphia
     */
    private Morphia morphia() {
        final Morphia morphia = new Morphia();
        morphia.mapPackage(DS_ENTITY_PKG_NAME);
        return morphia;
    }

    @Bean
    public Datastore datastore(@Autowired @Qualifier("dev") MongoClient mongoClient) {
        String dbName = configProperties.getDatabase();
        final Datastore datastore = morphia().createDatastore(mongoClient, dbName);
        datastore.ensureIndexes();
        return datastore;
    }

    /**
     * Mongo client.
     *
     * @return the mongo client
     */
    @Primary
    @Bean(name = "dev")
    public MongoClient mongoClient() {
        MongoClient mongoClient = null;
        String dbHost = configProperties.getHost();
        int dbPort = configProperties.getPort();
        String database = configProperties.getDatabase();
        log.debug("MongoDB Host: {} - MongoDB Port: {}", dbHost, dbPort);
        List<ServerAddress> serverAddresses = new ArrayList<>();
        serverAddresses.add(new ServerAddress(dbHost, dbPort));
        MongoClientOptions options = getMongoOptions();
        String dbUserName = configProperties.getMongodbUsername();
        String encRawPwd = configProperties.getMongodbPassword();
        char[] dbPwd = null;
        try {
            dbPwd = Util.decode(encRawPwd).toCharArray();
        } catch (Exception ex) {
            // Ignore exception
            dbPwd = null;
        }
        Optional<String> userName = Optional.ofNullable(dbUserName);
        Optional<char[]> password = Optional.ofNullable(dbPwd);
        if (userName.isPresent() && password.isPresent()) {
            MongoCredential credential = MongoCredential.createCredential(dbUserName, database, dbPwd);
            List<MongoCredential> credentialList = new ArrayList<>();
            credentialList.add(credential);
            mongoClient = new MongoClient(serverAddresses, credentialList, options);
        } else {
            log.debug("Connecting to local Mongo DB");
            mongoClient = new MongoClient(dbHost, dbPort);
        }
        return mongoClient;
    }

    private MongoClientOptions getMongoOptions() {
        MongoClientOptions.Builder builder = MongoClientOptions.builder();
        builder.maxConnectionIdleTime(configProperties.getMongodbIdleConnection());
        builder.minConnectionsPerHost(configProperties.getMongodbMinConnection());
        builder.connectTimeout(configProperties.getMongodbConnectionTimeout());
        return builder.build();
    }
}
For integration testing, I have the configuration for embedded MongoDB, which is part of src/test.
@TestConfiguration
public class MongoConfiguration implements InitializingBean, DisposableBean {

    MongodExecutable executable;

    private static final String DBNAME = "embeded";
    private static final String DBHOST = "localhost";
    private static final int DBPORT = 27019;

    @Override
    public void afterPropertiesSet() throws Exception {
        IMongodConfig mongodConfig = new MongodConfigBuilder().version(Version.Main.PRODUCTION)
                .net(new Net(DBHOST, DBPORT, Network.localhostIsIPv6())).build();
        MongodStarter starter = MongodStarter.getDefaultInstance();
        executable = starter.prepare(mongodConfig);
        executable.start();
    }

    private Morphia morphia() {
        final Morphia morphia = new Morphia();
        morphia.mapPackage(DS_ENTITY_PKG_NAME);
        return morphia;
    }

    @Bean
    public Datastore datastore(@Autowired @Qualifier("test") MongoClient mongoClient) {
        final Datastore datastore = morphia().createDatastore(mongoClient, DBNAME);
        datastore.ensureIndexes();
        return datastore;
    }

    @Bean(name = "test")
    public MongoClient mongoClient() {
        return new MongoClient(DBHOST, DBPORT);
    }

    @Override
    public void destroy() throws Exception {
        executable.stop();
    }
}
Please help me understand how to exclude this embedded Mongo configuration when running the Spring Boot main application in Eclipse.
I also provide my main application below.
@EnableAspectJAutoProxy
@EnableSwagger2
@SpringBootApplication(scanBasePackages = { "com.blr.app" })
public class ValidationApplication {

    /**
     * The main method.
     *
     * @param args the arguments
     */
    public static void main(String[] args) {
        SpringApplication.run(ValidationApplication.class, args);
    }
}
I see from the code that you have not added any profile to the MongoConfiguration class. During an Eclipse build, this class is also picked up by the Spring framework. Add the lines below to this class so that it is picked up when running Spring Boot tests, while the actual Mongo configuration is picked up when running the main Spring Boot app. This is exactly why Spring has the concept of profiles. Add the appropriate profile for each environment.
@Profile("test")
@ActiveProfiles("test")
So the final code will look like this:
@Profile("test")
@ActiveProfiles("test")
@TestConfiguration
public class MongoConfiguration implements InitializingBean, DisposableBean {
    ...
    ...
}
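For completeness, a minimal sketch of the test side (the class name is hypothetical): @ActiveProfiles is meant for test classes, and activating the "test" profile there is what makes the @Profile("test") configuration eligible.

@SpringBootTest
@ActiveProfiles("test")
public class MongoIntegrationTest {
    // with the "test" profile active, MongoConfiguration is registered;
    // when the main application runs without that profile, it is skipped
}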
I am coding a Kafka broker and consumer to catch messages from the application. When I try to get messages from the consumer, an error occurs:
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:216)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:531)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:262)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:230)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:444)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1267)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)
    at org.springframework.kafka.test.utils.KafkaTestUtils.getRecords(KafkaTestUtils.java:303)
    at org.springframework.kafka.test.utils.KafkaTestUtils.getRecords(KafkaTestUtils.java:280)
On the application (producer) side, there is also a connection error:
2020-03-25 12:29:33.689 WARN 25786 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1, transactionalId=tx0] Connection to node -1 (<here broker hostname>:9092) could not be established. Broker may not be available.
My project has the following dependencies:
compile "org.springframework.kafka:spring-kafka-test:2.4.4.RELEASE"
compile "org.springframework.kafka:spring-kafka:2.4.4.RELEASE"
Code of my Kafka broker:
public class KafkaServer {

    // EmbeddedKafkaBroker.kafkaPorts(int...) expects an int, so the port is declared as one
    private static final int BROKERPORT = 9092;
    private static final String BROKERHOST = "localhost";
    public static final String TOPIC1 = "fss-fsstransdata";
    public static final String TOPIC2 = "fss-fsstransscores";
    public static final String TOPIC3 = "fss-fsstranstimings";
    public static final String TOPIC4 = "fss-fssdevicedata";

    @Getter
    private Consumer<String, String> consumer;

    private EmbeddedKafkaBroker embeddedKafkaBroker;

    public void run() {
        String[] topics = {TOPIC1, TOPIC2, TOPIC3, TOPIC4};
        this.embeddedKafkaBroker = new EmbeddedKafkaBroker(
                1,
                false,
                1,
                topics
        ).kafkaPorts(BROKERPORT);
        Map<String, Object> configs = new HashMap<>(KafkaTestUtils.consumerProps("consumer", "false", this.embeddedKafkaBroker));
        this.consumer = new DefaultKafkaConsumerFactory<>(configs, new StringDeserializer(), new StringDeserializer()).createConsumer();
        this.consumer.subscribe(Arrays.asList(topics));
    }
}
Please help me deal with this situation. I am not well versed in Kafka architecture or how it can be implemented with Spring.
The EmbeddedKafkaBroker is designed to be used from a Spring application context, by a JUnit4 @Rule or @ClassRule, or by a JUnit5 condition.
To use it outside those environments, you must call afterPropertiesSet() to initialize it and destroy() to shut it down.
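Applied to the run() method above, a minimal sketch of that lifecycle (the stop() method and where it is called from are assumptions; wire destroy() into whatever shutdown path the application has):

public void run() {
    String[] topics = {TOPIC1, TOPIC2, TOPIC3, TOPIC4};
    this.embeddedKafkaBroker = new EmbeddedKafkaBroker(1, false, 1, topics)
            .kafkaPorts(BROKERPORT);
    // outside a Spring context, this call is what actually starts the broker
    this.embeddedKafkaBroker.afterPropertiesSet();
    // ... create and subscribe the consumer as before ...
}

public void stop() {
    // shuts the broker down; call this when the application exits
    this.embeddedKafkaBroker.destroy();
}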
If you are using Spring, then you need to annotate your bean with @EmbeddedKafka and then use @Autowired on an EmbeddedKafkaBroker field.
Example embedded Kafka annotation configuration:
@EmbeddedKafka(
        partitions = 1,
        controlledShutdown = false,
        brokerProperties = {
                // place your properties here
        })
What I would do is create a Spring bean KafkaServerConfig and place all the configuration and bean-construction logic inside it.
PS: It should be noted that EmbeddedKafkaBroker is intended for unit tests.
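For illustration, a hedged sketch of the annotation-driven variant in a Spring test (the class name and topic are hypothetical):

@SpringBootTest
@EmbeddedKafka(partitions = 1, controlledShutdown = false, topics = {"fss-fsstransdata"})
class KafkaServerIntegrationTest {

    @Autowired
    private EmbeddedKafkaBroker embeddedKafkaBroker;

    // the framework starts and stops the broker; its address is available
    // via embeddedKafkaBroker.getBrokersAsString()
}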
My goal is to use Kafka Testcontainers with a Spring Boot context in tests without @DirtiesContext. The problem is that, without starting a container separately for each test class, I have no idea how to consume only the messages that were produced by a particular test class or method.
So I end up consuming messages that were not even part of the test class that is running.
One solution might be to purge the topic of messages. I have no idea how to do this; I tried to restart the container, but then the next test was not able to connect to Kafka.
The second solution I had in mind is a consumer that is created at the beginning of the test method and records messages from the latest offset while the rest of the test runs. I was able to do this with embedded Kafka, but I have no idea how to do it using Testcontainers.
The current configuration looks like this:
@TestConfiguration
public class KafkaContainerConfig {

    @Bean(initMethod = "start", destroyMethod = "stop")
    public KafkaContainer kafkaContainer() {
        return new KafkaContainer("5.0.3");
    }

    @Bean
    public KafkaAdmin kafkaAdmin(KafkaProperties kafkaProperties, KafkaContainer kafkaContainer) {
        kafkaProperties.setBootstrapServers(List.of(kafkaContainer.getBootstrapServers()));
        return new KafkaAdmin(kafkaProperties.buildAdminProperties());
    }
}
With an annotation that provides the above configuration:
@Target({ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Import(KafkaContainerConfig.class)
@EnableAutoConfiguration(exclude = TestSupportBinderAutoConfiguration.class)
@TestPropertySource("classpath:/application-test.properties")
@DirtiesContext
public @interface IncludeKafkaTestContainer {
}
And in the test class itself, with multiple such configurations, it looks like:
@IncludeKafkaTestContainer
@IncludePostgresTestContainer
@SpringBootTest(webEnvironment = RANDOM_PORT)
class SomeTest {
    ...
}
Currently, the consumer in the test method is created this way:
KafkaConsumer<String, String> kafkaConsumer = createKafkaConsumer("topic_name");
ConsumerRecords<String, String> consumerRecords = kafkaConsumer.poll(Duration.ofSeconds(1));
List<ConsumerRecord<String, String>> topicMsgs = Lists.newArrayList(consumerRecords.iterator());
And:
public static KafkaConsumer<String, String> createKafkaConsumer(String topicName) {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaContainer.getBootstrapServers());
    properties.put(ConsumerConfig.GROUP_ID_CONFIG, "testGroup_" + topicName);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
    kafkaConsumer.subscribe(List.of(topicName));
    return kafkaConsumer;
}
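One hedged sketch for isolating per-test messages without restarting the container: assign the partitions manually (so there is no consumer-group rebalance to wait for) and seek to the end of each partition before the test acts, so a later poll() only returns records produced by the current test. The helper name is hypothetical:

public static KafkaConsumer<String, String> createKafkaConsumerAtEnd(String topicName) {
    Properties properties = new Properties();
    properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaContainer.getBootstrapServers());
    properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
    // manual assignment instead of subscribe(): deterministic, no group coordination
    List<TopicPartition> partitions = kafkaConsumer.partitionsFor(topicName).stream()
            .map(info -> new TopicPartition(topicName, info.partition()))
            .collect(Collectors.toList());
    kafkaConsumer.assign(partitions);
    kafkaConsumer.seekToEnd(partitions);
    // seekToEnd is evaluated lazily; position() forces the seek to happen now,
    // before the test produces anything
    for (TopicPartition partition : partitions) {
        kafkaConsumer.position(partition);
    }
    return kafkaConsumer;
}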
I have a Java Spring app that works with HBase.
Here is my configuration:
@Configuration
public class HbaseConfiguration {

    @Bean
    public HbaseTemplate hbaseTemplate(@Value("${hadoop.home.dir}") final String hadoopHome,
                                       @Value("${hbase.zookeeper.quorum}") final String quorum,
                                       @Value("${hbase.zookeeper.property.clientPort}") final String port)
            throws IOException, ServiceException {
        System.setProperty("hadoop.home.dir", hadoopHome);
        org.apache.hadoop.conf.Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.quorum", quorum);
        configuration.set("hbase.zookeeper.property.clientPort", port);
        HBaseAdmin.checkHBaseAvailable(configuration);
        return new HbaseTemplate(configuration);
    }
}
#HBASE
hbase.zookeeper.quorum = localhost
hbase.zookeeper.property.clientPort = 2181
hadoop.home.dir = C:/hadoop
Before asking this question I tried to figure out the problem on my own and found this link: https://github.com/sel-fish/hbase.docker
But I still get an error:
org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=myhbase/192.168.99.100:60000]
Could I ask you to help me and clarify how I can connect my local Java app to HBase running in Docker?