Error when trying to use kafka-client 2.6 - java

I'm learning Kafka, and I was trying to implement a simple producer that only sends "Hello World":
package com.github.joe.kafka.tutorial1;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerDemo {
    public static void main(String[] args) {
        Properties properties = new Properties();
        String bootstrapServers = "127.0.0.1:9092";
        String topic = "first_topic";

        properties.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        properties.setProperty(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        properties.setProperty(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        KafkaProducer<String, String> producer = new KafkaProducer<String, String>(properties);
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, "Hello World");

        producer.send(record);
        producer.flush();
        producer.close();
    }
}
The problem is that it works with kafka-clients v2.0.0, but fails with kafka-clients 2.6.0 with the following error:
[kafka-producer-network-thread | producer-1] ERROR org.apache.kafka.common.utils.KafkaThread - Uncaught exception in thread 'kafka-producer-network-thread | producer-1':
java.lang.NoClassDefFoundError: com/fasterxml/jackson/databind/JsonNode
at org.apache.kafka.common.requests.ApiVersionsRequest$Builder.<clinit>(ApiVersionsRequest.java:36)
at org.apache.kafka.clients.NetworkClient.handleConnections(NetworkClient.java:910)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:555)
at org.apache.kafka.clients.producer.internals.Sender.runOnce(Sender.java:325)
at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:240)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.fasterxml.jackson.databind.JsonNode
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 6 more
[main] INFO org.apache.kafka.clients.producer.KafkaProducer - [Producer clientId=producer-1] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms.

As it turns out, kafka-clients 2.6.0 has an additional dependency: Jackson Databind. This was not required for version 2.0.0.

The above suggestion worked for me. I added the dependency below, which resolved the problem.
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.8</version>
</dependency>
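For reference, any recent Jackson 2.x release should satisfy the missing class, since com.fasterxml.jackson.databind.JsonNode has been part of jackson-databind since Jackson 2.0. If you build with Gradle instead of Maven, the equivalent declaration would be (a sketch, using the same version as the answer above):

dependencies {
    implementation("com.fasterxml.jackson.core:jackson-databind:2.9.8")
}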

Related

Exception: org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel

I have a sandbox for exploring newly added functions in Spring Cloud Stream, but I've hit a problem using a Function and a Supplier in one Spring Cloud Stream application.
In my code I used the examples described in the docs.
First I added a Function<String, String> to the project, with the corresponding spring.cloud.stream.bindings and spring.cloud.stream.function.definition properties in application.yml. Everything worked fine: I posted a message to the my-fun-in Kafka topic, and the application executed the function and sent the result to the my-fun-out topic.
Then I added a Supplier<Flux<String>> to the same project, with the corresponding spring.cloud.stream.bindings, and updated the spring.cloud.stream.function.definition value to fun;sup. And here the weird things started to happen. When I try to start the application I receive the following error:
2020-01-15 01:45:16.608 ERROR 10128 --- [oundedElastic-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageDeliveryException: Dispatcher has no subscribers for channel 'application.sup-out-0'.; nested exception is org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}], failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.router.AbstractMessageRouter.doSend(AbstractMessageRouter.java:206)
at org.springframework.integration.router.AbstractMessageRouter.handleMessageInternal(AbstractMessageRouter.java:188)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:170)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:219)
at org.springframework.integration.handler.AbstractMessageHandler.onNext(AbstractMessageHandler.java:57)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:165)
at org.springframework.integration.endpoint.ReactiveStreamsConsumer$DelegatingSubscriber.hookOnNext(ReactiveStreamsConsumer.java:148)
at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160)
at reactor.core.publisher.FluxDoFinally$DoFinallySubscriber.onNext(FluxDoFinally.java:123)
at reactor.core.publisher.EmitterProcessor.drain(EmitterProcessor.java:426)
at reactor.core.publisher.EmitterProcessor.onNext(EmitterProcessor.java:268)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.drain(FluxCreate.java:793)
at reactor.core.publisher.FluxCreate$BufferAsyncSink.next(FluxCreate.java:718)
at reactor.core.publisher.FluxCreate$SerializedSink.next(FluxCreate.java:153)
at org.springframework.integration.channel.FluxMessageChannel.doSend(FluxMessageChannel.java:63)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:403)
at org.springframework.integration.channel.FluxMessageChannel.lambda$subscribeTo$2(FluxMessageChannel.java:83)
at reactor.core.publisher.FluxPeekFuseable$PeekFuseableSubscriber.onNext(FluxPeekFuseable.java:189)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.runAsync(FluxPublishOn.java:398)
at reactor.core.publisher.FluxPublishOn$PublishOnSubscriber.run(FluxPublishOn.java:484)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:84)
at reactor.core.scheduler.WorkerTask.call(WorkerTask.java:37)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.springframework.integration.MessageDispatchingException: Dispatcher has no subscribers, failedMessage=GenericMessage [payload=byte[20], headers={contentType=application/json, id=89301e00-b285-56e0-cb4d-8133555c8905, timestamp=1579045516603}]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:139)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73)
... 34 more
After that I tried several things:

1. Reverted spring.cloud.stream.function.definition to fun (disabling the sup bean's binding to the external destination). The application started, the function worked, the supplier didn't work. Everything as expected.
2. Changed spring.cloud.stream.function.definition to sup (disabling the fun bean's binding to the external destination). The application started, the function didn't work, the supplier worked (produced a message to the my-sup-out topic every second). Everything as expected as well.
3. Updated the spring.cloud.stream.function.definition value back to fun;sup. The application didn't start, with the same MessageDeliveryException.
4. Swapped the spring.cloud.stream.function.definition value to sup;fun. The application started, the supplier worked, but the function didn't (it didn't send messages to the my-fun-out topic).

The last one confused me even more than the error. So now I need someone's help to sort things out.
Did I miss something in the configuration? Why does changing the order of the beans separated by ; in spring.cloud.stream.function.definition lead to different results?
The full project is uploaded to GitHub; the relevant files are below:
StreamApplication.java:
package com.kaine;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import reactor.core.publisher.Flux;

import java.util.function.Function;
import java.util.function.Supplier;

@SpringBootApplication
public class StreamApplication {

    public static void main(String[] args) {
        SpringApplication.run(StreamApplication.class);
    }

    @Bean
    public Function<String, String> fun() {
        return value -> value.toUpperCase();
    }

    @Bean
    public Supplier<Flux<String>> sup() {
        return () -> Flux.from(emitter -> {
            while (true) {
                try {
                    emitter.onNext("Hello from Supplier!");
                    Thread.sleep(1000);
                } catch (Exception e) {
                    // ignore
                }
            }
        });
    }
}
application.yml:
spring:
  cloud:
    stream:
      function:
        definition: fun;sup
      bindings:
        fun-in-0:
          destination: my-fun-in
        fun-out-0:
          destination: my-fun-out
        sup-out-0:
          destination: my-sup-out
build.gradle.kts:
plugins {
    java
}

group = "com.kaine"
version = "1.0-SNAPSHOT"

repositories {
    mavenCentral()
}

dependencies {
    implementation(platform("org.springframework.cloud:spring-cloud-dependencies:Hoxton.SR1"))
    implementation("org.springframework.cloud:spring-cloud-starter-stream-kafka")
    implementation(platform("org.springframework.boot:spring-boot-dependencies:2.2.2.RELEASE"))
}

configure<JavaPluginConvention> {
    sourceCompatibility = JavaVersion.VERSION_11
}
Actually, this is a problem with our documentation; I believe we provide a bad example of a reactive Supplier for this case. The issue is that your Supplier is in an infinite blocking loop: it basically never returns.
So please change it to something like:
@Bean
public Supplier<Flux<String>> sup() {
    return () -> Flux.fromStream(Stream.generate(new Supplier<String>() {
        @Override
        public String get() {
            try {
                Thread.sleep(1000);
                return "Hello from Supplier";
            } catch (Exception e) {
                // the method must return or throw on every path,
                // so rethrow instead of silently swallowing the exception
                throw new IllegalStateException(e);
            }
        }
    })).subscribeOn(Schedulers.elastic()).share();
}
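As a side note, a non-blocking alternative worth considering (my sketch, not part of the original answer) is Flux.interval, which emits on a timer instead of parking a thread in Thread.sleep():

import java.time.Duration;
import reactor.core.publisher.Flux;

@Bean
public Supplier<Flux<String>> sup() {
    // Emits one message per second; the delay is handled by Reactor's
    // scheduler rather than by blocking a thread.
    return () -> Flux.interval(Duration.ofSeconds(1))
                     .map(i -> "Hello from Supplier!");
}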

ALLOW_TRAILING_COMMA error when using the CosmosDB SDK with Java

I'm getting the following exception when using the CosmosDB SDK for Java:
Exception in thread "main" java.lang.NoSuchFieldError: ALLOW_TRAILING_COMMA
at com.microsoft.azure.cosmosdb.internal.Utils.<clinit>(Utils.java:75)
at com.microsoft.azure.cosmosdb.rx.internal.RxDocumentClientImpl.<clinit>(RxDocumentClientImpl.java:132)
at com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient$Builder.build(AsyncDocumentClient.java:224)
at Program2.<init>(Program2.java:25)
at Program2.main(Program2.java:30)
I'm just trying to connect to Cosmos DB using AsyncDocumentClient. The exception occurs at that moment.
executorService = Executors.newFixedThreadPool(100);
scheduler = Schedulers.from(executorService);

client = new AsyncDocumentClient.Builder()
        .withServiceEndpoint("[cosmosurl]")
        .withMasterKeyOrResourceToken("[mykey]")
        .withConnectionPolicy(ConnectionPolicy.GetDefault())
        .withConsistencyLevel(ConsistencyLevel.Eventual)
        .build();
I heard about some library conflict, but I haven't found the proper fix.
Thanks!
Please refer to my working sample.
Java code:
import com.microsoft.azure.cosmosdb.ConnectionPolicy;
import com.microsoft.azure.cosmosdb.ConsistencyLevel;
import com.microsoft.azure.cosmosdb.rx.AsyncDocumentClient;

public class test {
    public static void main(String[] args) throws Exception {
        AsyncDocumentClient client = new AsyncDocumentClient.Builder()
                .withServiceEndpoint("https://XXX.documents.azure.com:443/")
                .withMasterKeyOrResourceToken("XXXX")
                .withConnectionPolicy(ConnectionPolicy.GetDefault())
                .withConsistencyLevel(ConsistencyLevel.Eventual)
                .build();
        System.out.println(client);
    }
}
pom.xml:
<dependencies>
    <dependency>
        <groupId>com.microsoft.azure</groupId>
        <artifactId>azure-cosmosdb</artifactId>
        <version>2.6.4</version>
    </dependency>
    <dependency>
        <groupId>com.microsoft.azure</groupId>
        <artifactId>azure-cosmosdb-commons</artifactId>
        <version>2.6.4</version>
    </dependency>
</dependencies>
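A note on the root cause: ALLOW_TRAILING_COMMA is a JsonParser.Feature that was added in Jackson 2.9, so the NoSuchFieldError usually means an older jackson-core is winning on your classpath. Assuming a Maven build, you can check which Jackson versions are being pulled in with:

mvn dependency:tree -Dincludes=com.fasterxml.jackson.core

If an old version appears, align it with the version the CosmosDB SDK expects.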

Executing a sample Flink Kafka consumer

I'm trying to create a simple Flink Kafka consumer:
public class ReadFromKafka {
    public static void main(String[] args) throws Exception {
        // create execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", "localhost:9092");
        properties.setProperty("group.id", "flink_consumer");

        DataStream<String> stream = env
                .addSource(new FlinkKafkaConsumer09<>("test", new SimpleStringSchema(), properties));

        stream.map(new MapFunction<String, String>() {
            private static final long serialVersionUID = -6867736771747690202L;

            @Override
            public String map(String value) throws Exception {
                return "Stream Value: " + value;
            }
        }).print();

        env.execute();
    }
}
It is giving me this error:
INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.3.0
16:47:28,448 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: fc1aaa116b661c8a
16:47:28,448 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1563029248441
16:47:28,451 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Trying to get partitions for topic test
16:47:28,775 INFO org.apache.kafka.clients.Metadata - [Consumer clientId=consumer-1, groupId=flink_consumer] Cluster ID: 4rz71KZCS_CSasZMrFBNKw
16:47:29,858 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09 - Got 1 partitions from these topics: [test]
16:47:29,859 INFO org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase - Consumer is going to read the following topics (with number of partitions):
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/flink/api/java/operators/Keys
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:994)
at org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.addSource(StreamExecutionEnvironment.java:955)
at myflink.ReadFromKafka.main(ReadFromKafka.java:43)
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.java.operators.Keys
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
... 3 more
Process finished with exit code 1
According to your stack trace, Java could not find a class.
Caused by: java.lang.ClassNotFoundException: org.apache.flink.api.java.operators.Keys
This class is in the flink-java_2.11 jar file, which you might be missing from your dependencies.
https://www.javadoc.io/doc/org.apache.flink/flink-java_2.11/0.10.2
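If you are on Maven, the missing dependency would look something like this (a sketch; pick the version that matches your Flink distribution, and note that newer Flink 1.x releases publish flink-java without the Scala suffix):

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-java</artifactId>
    <version>1.8.1</version>
</dependency>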

Kafka Java Producer with kerberos

I'm getting an error while sending messages to a Kafka topic in a kerberized environment. We have a cluster on HDP 2.3.
I followed this: http://henning.kropponline.de/2016/02/21/secure-kafka-java-producer-with-kerberos/
But to send messages, I have to run kinit explicitly first; only then am I able to send messages to the Kafka topic.
I tried to do kinit through a Java class, but that doesn't work either.
PFB the code:
package com.ct.test.kafka;

import java.util.Date;
import java.util.Properties;
import java.util.Random;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class TestProducer {
    public static void main(String[] args) {
        String principalName = "ctadmin";
        String keyTabPath = "/etc/security/keytabs/ctadmin.keytab";

        boolean authStatus = CTSecurityUtil.loginUserFromKeytab(principalName, keyTabPath);
        if (!authStatus) {
            System.out.println("Authentication fails, try something else " + authStatus);
        } else {
            System.out.println("Authentication successful " + authStatus);
        }

        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        System.setProperty("java.security.auth.login.config", "/etc/kafka/2.3.4.0-3485/0/kafka_jaas.conf");
        System.setProperty("javax.security.auth.useSubjectCredsOnly", "false");
        System.setProperty("sun.security.krb5.debug", "true");

        try {
            long events = Long.parseLong("3");
            Random rnd = new Random();

            Properties props = new Properties();
            System.out.println("After broker list- " + args[0]);
            props.put("metadata.broker.list", args[0]);
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            props.put("request.required.acks", "1");
            props.put("security.protocol", "PLAINTEXTSASL");
            //props.put("partitioner.class", "com.ct.test.kafka.SimplePartitioner");
            System.out.println("After config prop -1");

            ProducerConfig config = new ProducerConfig(props);
            System.out.println("After config prop -2 config" + config);

            Producer<String, String> producer = new Producer<String, String>(config);
            System.out.println("After config prop -3");

            for (long nEvents = 0L; nEvents < events; nEvents += 1L) {
                Date runtime = new Date();
                String ip = "192.168.2" + rnd.nextInt(255);
                String msg = runtime + " www.example.com, " + ip;
                KeyedMessage<String, String> data = new KeyedMessage<String, String>("test_march4", ip, msg);
                System.out.println("After config prop -1 data" + data);
                producer.send(data);
            }
            producer.close();
        } catch (Throwable th) {
            th.printStackTrace();
        }
    }
}
pom.xml (all dependencies downloaded from the Hortonworks repo):
<dependencies>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.9.0.2.3.4.0-3485</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>0.9.0.2.3.4.0-3485</version>
    </dependency>
    <dependency>
        <groupId>org.jasypt</groupId>
        <artifactId>jasypt-spring31</artifactId>
        <version>1.9.2</version>
        <scope>compile</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>2.7.1.2.3.4.0-3485</version>
    </dependency>
</dependencies>
Error:
Case 1: when I specify my user's kafka_jaas.conf
log4j:WARN No appenders could be found for logger (kafka.utils.VerifiableProperties).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
After config prop -2 configkafka.producer.ProducerConfig#643293ae
java.lang.SecurityException: Configuration Error:
Line 6: expected [controlFlag]
at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:110)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:379)
at javax.security.auth.login.Configuration$2.run(Configuration.java:258)
at javax.security.auth.login.Configuration$2.run(Configuration.java:250)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Configuration.java:249)
at org.apache.kafka.common.security.kerberos.Login.login(Login.java:291)
at org.apache.kafka.common.security.kerberos.Login.<init>(Login.java:104)
at kafka.common.security.LoginManager$.init(LoginManager.scala:36)
at kafka.producer.Producer.<init>(Producer.scala:50)
at kafka.producer.Producer.<init>(Producer.scala:73)
at kafka.javaapi.producer.Producer.<init>(Producer.scala:26)
at com.ct.test.kafka.TestProducer.main(TestProducer.java:51)
Caused by: java.io.IOException: Configuration Error:
Line 6: expected [controlFlag]
at com.sun.security.auth.login.ConfigFile.match(ConfigFile.java:563)
at com.sun.security.auth.login.ConfigFile.parseLoginEntry(ConfigFile.java:413)
at com.sun.security.auth.login.ConfigFile.readConfig(ConfigFile.java:383)
at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:283)
at com.sun.security.auth.login.ConfigFile.init(ConfigFile.java:219)
at com.sun.security.auth.login.ConfigFile.<init>(ConfigFile.java:108)
MyUser_Kafka_jass.conf:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    renewTicket=true
    principal="ctadmin/prod-dev1-dn1@PROD.COM";
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    storeKey=true
    useTicketCache=true
    serviceName="zookeeper"
    principal="ctadmin/prod-dev1-dn1@PROD.COM";
};
Case 2: when I specify Kafka's own jaas file
Java config name: /etc/krb5.conf
Loaded from Java config
javax.security.auth.login.LoginException: Could not login: the client is being asked for a password, but the Kafka client code does not currently support obtaining a password from the user. Make sure -Djava.security.auth.login.config property passed to JVM and the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)'. Make sure you are using FQDN of the Kafka broker you are trying to connect to. not available to garner authentication information from the user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:899)
at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:719)
at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at javax.security.auth.login.LoginContext.invoke(LoginContext.java:762)
at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:690)
at javax.security.auth.login.LoginContext$4.run(LoginContext.java:688)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:687)
at javax.security.auth.login.LoginContext.login(LoginContext.java:595)
at org.apache.kafka.common.security.kerberos.Login.login(Login.java:298)
at org.apache.kafka.common.security.kerberos.Login.<init>(Login.java:104)
at kafka.common.security.LoginManager$.init(LoginManager.scala:36)
at kafka.producer.Producer.<init>(Producer.scala:50)
at kafka.producer.Producer.<init>(Producer.scala:73)
at kafka.javaapi.producer.Producer.<init>(Producer.scala:26)
at com.ct.test.kafka.TestProducer.main(TestProducer.java:51)
This works fine if I do kinit before running the app; otherwise it throws the above error.
I can't do this in my production environment. If there is any way to do this from the app itself, please help me out.
Please let me know if you need any more details.
Thanks :)
The error is a semicolon in your jaas file, as you can see in this piece of the output:
Line 6: expected [controlFlag]
This line cannot have the semicolon:
principal="ctadmin/prod-dev1-dn1@PROD.COM";
it can only appear after the last option of the entry.
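So a corrected KafkaClient entry, based on the file above, would be:

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    renewTicket=true
    principal="ctadmin/prod-dev1-dn1@PROD.COM"
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};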
I don't know what mistake I made the first time; I did the following things again, and it works fine.
First, give all access to the topic:
bin/kafka-acls.sh --add --allow-principals user:ctadmin --operation ALL --topic marchTesting --authorizer-properties zookeeper.connect={hostname}:2181
Create the jaas file, kafka-jaas.conf:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    doNotPrompt=true
    useTicketCache=true
    principal="ctadmin@HSCALE.COM"
    useKeyTab=true
    serviceName="kafka"
    keyTab="/etc/security/keytabs/ctadmin.keytab"
    client=true;
};
Java program:
package com.ct.test.kafka;

import java.util.Date;
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer {
    public static void main(String[] args) {
        String topic = args[0];

        Properties props = new Properties();
        props.put("metadata.broker.list", "{Hostname}:6667");
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("request.required.acks", "1");
        props.put("security.protocol", "PLAINTEXTSASL");

        ProducerConfig config = new ProducerConfig(props);
        Producer<String, String> producer = new Producer<String, String>(config);

        for (int i = 0; i < 10; i++) {
            producer.send(new KeyedMessage<String, String>(topic, "Test Date: " + new Date()));
        }
    }
}
Run the application:
java -Djava.security.auth.login.config=/home/ctadmin/kafka-jaas.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=true -cp kafka-testing-0.0.1-jar-with-dependencies.jar com.ct.test.kafka.KafkaProducer
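If you prefer not to pass the -D flags on the command line, the same properties can be set programmatically before the producer is created (a sketch; these mirror the System.setProperty calls in the question above):

// Equivalent of the -D flags in the command above; must run before the
// Producer is constructed so the JAAS config is picked up at login time.
System.setProperty("java.security.auth.login.config", "/home/ctadmin/kafka-jaas.conf");
System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
System.setProperty("javax.security.auth.useSubjectCredsOnly", "true");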

LEAK: ByteBuf.release() was not called before it's garbage-collected. Spring Reactor TcpServer

I am using reactor-core [1.1.0.RELEASE], reactor-net [1.1.0.RELEASE] (which uses netty-all [4.0.18.Final]), reactor-spring-context [1.1.0.RELEASE], and the Spring Reactor TcpServer [Spring 4.0.3.RELEASE].
I have created a simple REST API in Netty for a health check: /health. I followed the gs-reactor-thumbnailer code.
The code is as follows:
import io.netty.handler.codec.http.DefaultFullHttpResponse;
import io.netty.handler.codec.http.FullHttpRequest;
import io.netty.handler.codec.http.FullHttpResponse;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpResponseStatus;
import io.netty.handler.codec.http.HttpVersion;
import org.springframework.stereotype.Service;
import reactor.function.Consumer;
import reactor.net.NetChannel;

@Service
public class HealthCheckNettyRestApi {

    public Consumer<FullHttpRequest> getResponse(NetChannel<FullHttpRequest, FullHttpResponse> channel, int portNumber) {
        return req -> {
            if (req.getMethod() != HttpMethod.GET) {
                channel.send(badRequest(req.getMethod() + " not supported for this URI"));
            } else {
                DefaultFullHttpResponse resp = new DefaultFullHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
                resp.content().writeBytes("Hello World".getBytes());
                resp.headers().set(HttpHeaders.Names.CONTENT_TYPE, "text/plain");
                resp.headers().set(HttpHeaders.Names.CONTENT_LENGTH, resp.content().readableBytes());
                //resp.release();
                channel.send(resp);
            }
        };
    }
}
In my Spring Boot application I am wiring it up as:
@Bean
public ServerSocketOptions serverSocketOptions() {
    return new NettyServerSocketOptions()
            .pipelineConfigurer(pipeline -> pipeline.addLast(new HttpServerCodec())
                                                    .addLast(new HttpObjectAggregator(16 * 1024 * 1024)));
}

@Autowired
private HealthCheckNettyRestApi healthCheck;

@Value("${netty.port:5555}")
private Integer nettyPort;

@Bean
public NetServer<FullHttpRequest, FullHttpResponse> restApi(Environment env, ServerSocketOptions opts) throws InterruptedException {
    NetServer<FullHttpRequest, FullHttpResponse> server =
            new TcpServerSpec<FullHttpRequest, FullHttpResponse>(NettyTcpServer.class)
                    .env(env)
                    .dispatcher("sync")
                    .options(opts)
                    .listen(nettyPort)
                    .consume(ch -> {
                        Stream<FullHttpRequest> in = ch.in();
                        log.info("netty server is humming.....");
                        in.filter((FullHttpRequest req) -> (req.getUri().matches(NettyRestConstants.HEALTH_CHECK)))
                          .when(Throwable.class, NettyHttpSupport.errorHandler(ch))
                          .consume(healthCheck.getResponse(ch, nettyPort));
                    }).get();

    server.start().await(); // this works, as Tomcat is also deployed due to the Spring JPA & web dependencies
    return server;
}
When I run a benchmark using wrk:
wrk -t6 -c100 -d30s --latency 'http://localhost:8087/health'
I get the following stack trace:
2015-04-22 17:23:21.072] - 16497 ERROR [reactor-tcp-io-22] --- i.n.u.ResourceLeakDetector: LEAK: ByteBuf.release() was not called before it's garbage-collected. Enable advanced leak reporting to find out where the leak occurred. To enable advanced leak reporting, specify the JVM option '-Dio.netty.leakDetectionLevel=advanced' or call ResourceLeakDetector.setLevel()
2015-04-22 23:09:26.354 ERROR 4308 --- [actor-tcp-io-13] io.netty.util.ResourceLeakDetector : LEAK: ByteBuf.release() was not called before it's garbage-collected.
Recent access records: 0
Created at:
io.netty.buffer.CompositeByteBuf.<init>(CompositeByteBuf.java:59)
io.netty.buffer.Unpooled.compositeBuffer(Unpooled.java:355)
io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:144)
io.netty.handler.codec.http.HttpObjectAggregator.decode(HttpObjectAggregator.java:49)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341)
io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:155)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:148)
io.netty.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:341)
io.netty.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:327)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:785)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:116)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
java.lang.Thread.run(Thread.java:745)
2015-04-22 23:09:55.217 INFO 4308 --- [actor-tcp-io-13] r.n.netty.NettyNetChannelInboundHandler : [id: 0x260faf6d, /127.0.0.1:50275 => /127.0.0.1:8087] Connection reset by peer
2015-04-22 23:09:55.219 ERROR 4308 --- [actor-tcp-io-13] reactor.core.Reactor : Connection reset by peer
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:192)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
at io.netty.buffer.UnpooledUnsafeDirectByteBuf.setBytes(UnpooledUnsafeDirectByteBuf.java:446)
at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:871)
at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:224)
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:108)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:494)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:461)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:378)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:350)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116)
at java.lang.Thread.run(Thread.java:745)
My analysis: I think that since I am forwarding the DefaultFullHttpResponse to the Spring implementation, the Spring APIs should take care of calling the release() method. BTW, I also tried calling release() from my implementation, but I still get the same error.
Could someone tell me what is wrong with the implementation?
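One thing worth checking (an assumption on my part, not a confirmed fix): the leak report points at the CompositeByteBuf created by HttpObjectAggregator, i.e. the aggregated inbound FullHttpRequest, not the response being sent. Releasing the request once it has been handled, e.g. with Netty's ReferenceCountUtil, might look like this:

import io.netty.util.ReferenceCountUtil;

// Inside getResponse(), after the response has been written
// (hypothetical placement; the rest of the consumer stays as in the question):
return req -> {
    try {
        // ... build the response and call channel.send(resp) as before ...
    } finally {
        ReferenceCountUtil.release(req); // free the aggregated request buffer
    }
};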
