kafka AdminClient API Timed out waiting for node assignment - java

I'm new to Kafka and am trying to use the AdminClient API to manage the Kafka server running on my local machine. I have it set up exactly as described in the quick start section of the Kafka documentation; the only difference is that I have not created any topics.
I have no issues running any of the shell scripts against this setup, but when I try to run the following Java code:
public class ProducerMain {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (final AdminClient adminClient = KafkaAdminClient.create(props)) {
            try {
                final NewTopic newTopic = new NewTopic("test", 1, (short) 1);
                final CreateTopicsResult createTopicsResult =
                        adminClient.createTopics(Collections.singleton(newTopic));
                createTopicsResult.all().get();
            } catch (InterruptedException | ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
}
Error: TimeoutException: Timed out waiting for a node assignment
Exception in thread "main" java.lang.RuntimeException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at ProducerMain.main(ProducerMain.java:41)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:258)
at ProducerMain.main(ProducerMain.java:38)
Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment.
I have searched online for an indication as to what the problem could be but have found nothing so far. Any suggestions are welcome as I am at the end of my rope.

Sounds like your broker isn't healthy...
This code works fine
public class Main {
    static final Logger logger = LoggerFactory.getLogger(Main.class);

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.setProperty(AdminClientConfig.CLIENT_ID_CONFIG, "local-test");
        properties.setProperty(AdminClientConfig.RETRIES_CONFIG, "3");
        try (AdminClient client = AdminClient.create(properties)) {
            final CreateTopicsResult res = client.createTopics(
                    Collections.singletonList(new NewTopic("foo", 1, (short) 1))
            );
            res.all().get(5, TimeUnit.SECONDS);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            logger.error("unable to create topic", e);
        }
    }
}
And I can see in the broker logs that the topic was created
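If you'd rather confirm from the client side than dig through the broker logs, a minimal sketch along these lines should list the newly created topic (the class name is just a placeholder; it reuses the same bootstrap address as above):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ListTopicsCheck {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient client = AdminClient.create(properties)) {
            // names() resolves once the broker answers the metadata request
            Set<String> names = client.listTopics().names().get(5, TimeUnit.SECONDS);
            System.out.println("Topics on broker: " + names);
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            e.printStackTrace();
        }
    }
}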

I started the Kafka service with the bitnami/kafka image and got exactly the same error.
Try starting Kafka with this image instead; it works:
https://hub.docker.com/r/wurstmeister/kafka
$ docker run -d --name zookeeper-server --network app-tier \
-e ALLOW_ANONYMOUS_LOGIN=yes -p 2181:2181 zookeeper:3.6.2
$ docker run -d --name kafka-server --network app-tier --publish 9092:9092 \
--env KAFKA_ZOOKEEPER_CONNECT=zookeeper-server:2181 \
--env KAFKA_ADVERTISED_HOST_NAME=30.225.51.235 \
--env KAFKA_ADVERTISED_PORT=9092 \
wurstmeister/kafka
30.225.51.235 is the IP address of the host machine.

Related

Can't start localstack on Gitlab runner with LocalstackTestRunner

I have an issue running Java integration tests with LocalstackTestRunner on a GitLab agent.
I've taken the example from the official LocalStack site:
import cloud.localstack.LocalstackTestRunner;
import cloud.localstack.TestUtils;
import cloud.localstack.docker.annotation.LocalstackDockerProperties;

@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(services = { "s3", "sqs", "kinesis:77077" })
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}
and run it with Gradle as gradle clean test.
If I run it locally on my MacBook, everything is OK, but when it runs on the GitLab agent there is an issue:
com.amazonaws.SdkClientException: Unable to execute HTTP request: Connect to localhost.localstack.cloud:4566 [localhost.localstack.cloud/127.0.0.1] failed: Connection refused (Connection refused)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1153)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
My GitLab CI job looks as follows:
Localstack_test:
  stage: test
  services:
    - docker:dind
  when: always
  script:
    - ./gradlew clean test --stacktrace
It turns out the S3 client can't connect to localhost.localstack.cloud:4566 because the Docker container created by LocalstackTestRunner is started inside the parent docker:dind container, so the AmazonS3 client can't reach it. I've tried other AWS services with the same result: the AWS client can't access the LocalStack endpoint.
I've found a workaround:
- add localstack as a service in gitlab-ci
- add an alias for it
- expose the env variable HOSTNAME_EXTERNAL=alias
- make an implementation of IHostNameResolver that returns my alias as the HOSTNAME_EXTERNAL specified in gitlab-ci
Something like that:
Gitlab-ci:
Localstack_test:
  stage: test
  services:
    - docker:dind
    - name: localstack/localstack
      alias: localstack-it
  variables:
    HOSTNAME_EXTERNAL: "localstack-it"
  when: always
  script:
    - ./gradlew clean test --stacktrace |& tee -a ./gradle.log
Java IT test:
@RunWith(LocalstackTestRunner.class)
@LocalstackDockerProperties(
        services = { "s3", "sqs", "kinesis:77077" },
        hostNameResolver = SystemEnvHostNameResolver.class
)
public class MyCloudAppTest {

    @Test
    public void testLocalS3API() {
        AmazonS3 s3 = TestUtils.getClientS3();
        List<Bucket> buckets = s3.listBuckets();
    }
}

public class SystemEnvHostNameResolver implements IHostNameResolver {

    private static final String HOSTNAME_EXTERNAL = "HOSTNAME_EXTERNAL";

    @Override
    public String getHostName() {
        String external = System.getenv(HOSTNAME_EXTERNAL);
        return !Strings.isNullOrEmpty(external)
                ? external
                : new LocalHostNameResolver().getHostName();
    }
}
It works, but as a result two LocalStack Docker containers are run and the internal one is still not reachable. Does somebody know a better solution?
STR:
gradle-6.7
cloud.localstack:localstack-utils:0.2.5

How to read messages from a rabbitmq queue running in a local docker container

I am running RabbitMQ in a Docker container on my local Mac. I accessed it through the management GUI (at port 15672) in a browser, created a queue and an exchange, and published some messages to the queue. I am trying to write a Java application that reads the messages from the queue and prints them to the console, but I am running into this error.
rjashnani-ltm:rabbitmq rjashnani$ java -cp .:amqp-client-5.7.1.jar:slf4j-api-1.7.26.jar:slf4j-simple-1.7.26.jar Recv
Exception in thread "main" java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
I created the Docker container using this command:
docker run -d --hostname my-rabbit --name some-rabbit2 -p 15672:15672 -p 5672:5672 rabbitmq:3-management
Here is my Java application code:
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class Recv {

    private final static String QUEUE_NAME = "test-queue";

    public static void main(String[] argv) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5762);
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setVirtualHost("/");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, false, false, false, null);
        System.out.println(" [*] Waiting for messages. To exit press CTRL+C");
        DeliverCallback deliverCallback = (consumerTag, delivery) -> {
            String message = new String(delivery.getBody(), "UTF-8");
            System.out.println(" [x] Received '" + message + "'");
        };
        channel.basicConsume(QUEUE_NAME, true, deliverCallback, consumerTag -> { });
    }
}
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 rabbitmq:3-management
Running docker with the -it argument resolved the issue.
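One more thing worth checking, just a guess from the snippets above: the code in the question calls factory.setPort(5762) while the container publishes port 5672, and that mismatch on its own would also produce Connection refused. A minimal connection sketch with the port matching the Docker mapping (host, credentials and virtual host taken from the question):

import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConnectCheck {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        factory.setPort(5672); // matches the -p 5672:5672 mapping; the question uses 5762
        factory.setUsername("guest");
        factory.setPassword("guest");
        factory.setVirtualHost("/");
        // newConnection() fails fast with a ConnectException if nothing listens on the port
        try (Connection connection = factory.newConnection()) {
            System.out.println("Connected: " + connection.isOpen());
        }
    }
}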

How to spark-submit a Spark Streaming application

I am new to Spark and do not have much of an idea about it. I am working on an application in which data travels across different Kafka topics and Spark Streaming reads the data from those topics. It's a Spring Boot project and I have 3 Spark consumer classes in it. The job of these Spark Streaming classes is to consume data from a Kafka topic and send it to another topic. The code of a Spark Streaming class is below:
@Service
public class EnrichEventSparkConsumer {

    Collection<String> topics = Arrays.asList("eventTopic");

    public void startEnrichEventConsumer(JavaStreamingContext javaStreamingContext) {
        Map<String, Object> kafkaParams = new HashedMap();
        kafkaParams.put("bootstrap.servers", "localhost:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("group.id", "group1");
        kafkaParams.put("auto.offset.reset", "latest");
        kafkaParams.put("enable.auto.commit", true);

        JavaInputDStream<ConsumerRecord<String, String>> enrichEventRDD = KafkaUtils.createDirectStream(
                javaStreamingContext,
                LocationStrategies.PreferConsistent(),
                ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams));

        JavaDStream<String> enrichEventDStream = enrichEventRDD.map((x) -> x.value());
        JavaDStream<EnrichEventDataModel> enrichDataModelDStream = enrichEventDStream.map(convertIntoEnrichModel);

        enrichDataModelDStream.foreachRDD(rdd1 -> {
            saveDataToElasticSearch(rdd1.collect());
        });

        enrichDataModelDStream.foreachRDD(enrichDataModelRdd -> {
            if (enrichDataModelRdd.count() > 0) {
                if (executor != null) {
                    executor.executePolicy(enrichDataModelRdd.collect());
                }
            }
        });
    }

    static Function convertIntoEnrichModel = new Function<String, EnrichEventDataModel>() {
        @Override
        public EnrichEventDataModel call(String record) throws Exception {
            ObjectMapper mapper = new ObjectMapper();
            EnrichEventDataModel csvDataModel = mapper.readValue(record, EnrichEventDataModel.class);
            return csvDataModel;
        }
    };

    private void saveDataToElasticSearch(List<EnrichEventDataModel> baseDataModelList) {
        for (EnrichEventDataModel baseDataModel : baseDataModelList)
            dataModelServiceImpl.save(baseDataModel);
    }
}
I am calling the method startEnrichEventConsumer() using CommandLineRunner.
public class EnrichEventSparkConsumerRunner implements CommandLineRunner {

    @Autowired
    JavaStreamingContext javaStreamingContext;

    @Autowired
    EnrichEventSparkConsumer enrichEventSparkConsumer;

    @Override
    public void run(String... args) throws Exception {
        // start Raw Event Spark Consumer.
        JobContextImpl jobContext = new JobContextImpl(javaStreamingContext);

        // start Enrich Event Spark Consumer.
        enrichEventSparkConsumer.startEnrichEventConsumer(jobContext.streamingctx());
    }
}
Now I want to submit these three Spark Streaming classes to the cluster. I read somewhere that I have to create a JAR file first and can then use the spark-submit command, but I have some questions in mind:
Should I create a separate project with these 3 Spark Streaming classes?
Right now I am using CommandLineRunner to initiate Spark Streaming; when I submit to the cluster, should I create a main() method in these classes?
Please tell me how to do it. Thanks in advance.
No need for a different project.
You should create an entry point (a main method) that is responsible for creating the JavaStreamingContext; a minimal sketch is shown below.
Create your jar with dependencies (everything in one single jar file), and don't forget to use provided scope for all your Spark dependencies, since you will be using the cluster's libraries.
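A minimal sketch of such an entry point, assuming EnrichEventSparkConsumer from the question can simply be instantiated here (in the Spring Boot setup it is injected instead); the app name and batch interval are placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class SparkStreamingMain {
    public static void main(String[] args) throws InterruptedException {
        // The master is supplied by spark-submit (--master), so it is not hard-coded here
        SparkConf conf = new SparkConf().setAppName("enrich-event-consumer");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Wire up the consumer(s) from the question against the shared streaming context
        new EnrichEventSparkConsumer().startEnrichEventConsumer(jssc);

        jssc.start();
        jssc.awaitTermination();
    }
}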
The assembled Spark application is then executed using the spark-submit command-line tool as follows:
./bin/spark-submit \
--class <main-class> \
--master <master-url> \
--deploy-mode <deploy-mode> \
--conf <key>=<value> \
... # other options
<application-jar> \
[application-arguments]
For a local submit:
bin/spark-submit \
--class package.Main \
--master local[2] \
path/to/jar argument1 argument2

Why I'm not able to connect to HBase running as Docker container?

I have my Java Spring app that deals with HBase.
Here is my configuration:
@Configuration
public class HbaseConfiguration {

    @Bean
    public HbaseTemplate hbaseTemplate(@Value("${hadoop.home.dir}") final String hadoopHome,
                                       @Value("${hbase.zookeeper.quorum}") final String quorum,
                                       @Value("${hbase.zookeeper.property.clientPort}") final String port)
            throws IOException, ServiceException {
        System.setProperty("hadoop.home.dir", hadoopHome);

        org.apache.hadoop.conf.Configuration configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.quorum", quorum);
        configuration.set("hbase.zookeeper.property.clientPort", port);

        HBaseAdmin.checkHBaseAvailable(configuration);

        return new HbaseTemplate(configuration);
    }
}
#HBASE
hbase.zookeeper.quorum = localhost
hbase.zookeeper.property.clientPort = 2181
hadoop.home.dir = C:/hadoop
Before asking the question I tried to figure out the problem on my own and found this link: https://github.com/sel-fish/hbase.docker
But I still get an error:
org.apache.hadoop.net.ConnectTimeoutException: 10000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=myhbase/192.168.99.100:60000]
Could I ask you to help me and clarify how I can connect my local Java app to HBase running in Docker?

Connecting to Mongo database through SSH tunnel in Java

FIXED (edited code to reflect changes I made)
I'm trying to connect to a Mongo database through an SSH tunnel using Java.
I'm using the Mongo driver 3.0.2 and jcraft (JSch) to create an SSH tunnel.
The idea is that I:
connect to the machine hosting the MongoDB installation through SSH
set up port forwarding from a local port to the remote MongoDB port
connect to MongoDB remotely
My code looks like this:
// forwarding ports
private static final String LOCAL_HOST = "localhost";
private static final String REMOTE_HOST = "127.0.0.1";
private static final Integer LOCAL_PORT = 8988;
private static final Integer REMOTE_PORT = 27017; // Default mongodb port

// ssh connection info
private static final String SSH_USER = "<username>";
private static final String SSH_PASSWORD = "<password>";
private static final String SSH_HOST = "<remote host>";
private static final Integer SSH_PORT = 22;

private static Session sshSession;

public static void main(String[] args) {
    try {
        java.util.Properties config = new java.util.Properties();
        config.put("StrictHostKeyChecking", "no");

        JSch jsch = new JSch();
        sshSession = null;
        sshSession = jsch.getSession(SSH_USER, SSH_HOST, SSH_PORT);
        sshSession.setPassword(SSH_PASSWORD);
        sshSession.setConfig(config);
        sshSession.connect();
        sshSession.setPortForwardingL(LOCAL_PORT, REMOTE_HOST, REMOTE_PORT);

        MongoClient mongoClient = new MongoClient(LOCAL_HOST, LOCAL_PORT);
        mongoClient.setReadPreference(ReadPreference.nearest());

        MongoCursor<String> dbNames = mongoClient.listDatabaseNames().iterator();
        while (dbNames.hasNext()) {
            System.out.println(dbNames.next());
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        sshSession.delPortForwardingL(LOCAL_PORT);
        sshSession.disconnect();
    }
}
This code, when run, originally didn't work (EDIT: it does now; see the fix at the bottom). Connecting to the SSH server worked just fine, but connecting to the Mongo database behind it failed and returned this error:
INFO: Exception in monitor thread while connecting to server localhost:8988
com.mongodb.MongoSocketReadException: Prematurely reached end of stream
at com.mongodb.connection.SocketStream.read(SocketStream.java:88)
at com.mongodb.connection.InternalStreamConnection.receiveResponseBuffers(InternalStreamConnection.java:491)
at com.mongodb.connection.InternalStreamConnection.receiveMessage(InternalStreamConnection.java:221)
at com.mongodb.connection.CommandHelper.receiveReply(CommandHelper.java:134)
at com.mongodb.connection.CommandHelper.receiveCommandResult(CommandHelper.java:121)
at com.mongodb.connection.CommandHelper.executeCommand(CommandHelper.java:32)
at com.mongodb.connection.InternalStreamConnectionInitializer.initializeConnectionDescription(InternalStreamConnectionInitializer.java:83)
at com.mongodb.connection.InternalStreamConnectionInitializer.initialize(InternalStreamConnectionInitializer.java:43)
at com.mongodb.connection.InternalStreamConnection.open(InternalStreamConnection.java:115)
at com.mongodb.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:127)
at java.lang.Thread.run(Unknown Source)
I've tried doing this through command line as follows:
$ ssh <user>@<host> -p 22 -X -C
$ <enter requested password>
<user>@<host>$ mongo
<user>@<host>$ MongoDB shell version: 2.6.10
<user>@<host>$ connecting to: test
So this seems to work. I'm at a loss as to why the Java code (which should be doing roughly the same thing) doesn't work.
I managed to make it work: I tried forwarding the port to "localhost" rather than "127.0.0.1", and changing that fixed it. EDIT: I guess the server was listening specifically on localhost rather than on 127.0.0.1.
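For clarity, a minimal sketch of the single line that changed (constants as in the code above; the forwarding target is the only difference):

// Forward the local port to "localhost" on the remote machine instead of 127.0.0.1,
// since mongod there was apparently listening on localhost only
sshSession.setPortForwardingL(LOCAL_PORT, "localhost", REMOTE_PORT);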
This code runs successfully; the main problem is that your MongoDB is stopped. Please check whether the Mongo instance is running or not:
sudo systemctl status mongod
If it is not running:
sudo systemctl start mongod
