I'm trying to write a test for my Spring Cloud service that runs against Kafka and Schema Registry, both of which run inside Docker containers.
Kafka and Schema Registry communicate with each other via a Docker network and have ports exposed on the host. The service I am testing runs on the host - it is able to communicate with both the Dockerized Kafka broker and the Dockerized Schema Registry. I am starting it up from a JUnit test which is annotated as shown below.
@ExtendWith(SpringExtension.class)
@SpringBootTest
@EnableAutoConfiguration(exclude = TestSupportBinderAutoConfiguration.class)
@AutoConfigureTestDatabase(replace = AutoConfigureTestDatabase.Replace.NONE)
public class MyTest {
...
}
My service spins up and is able to write a message to the Kafka broker running inside the Docker container. However, when my service is started using the various Spring / JUnit test annotations, there appears to be something different about the way the message it writes is serialized, compared to when my service runs in 'production mode' (i.e. if I run it using java -jar com.xyz.MyService).
The message needs to be written in Avro format, so I've configured the binder in application.yml as
my-topic:
  destination: my-topic
  contentType: application/*+avro
  producer:
    useNativeEncoding: true
When attempting to consume the message that my service has written, AbstractKafkaAvroDeserializer blows up, complaining that it was unable to marshal it into a completely unrelated Avro type:
{"logger_name":"org.apache.kafka.streams.errors.LogAndFailExceptionHandler","message":"Exception caught during Deserialization, taskId: 0_0, topic: my-topic, partition: 0, offset: 1","stack_trace":"org.apache.kafka.common.errors.SerializationException: Could not find class com.xyz.SomeOtherMessageType specified in writer's schema whilst finding reader's schema for a SpecificRecord.
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getSpecificReaderSchema(AbstractKafkaAvroDeserializer.java:265)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getReaderSchema(AbstractKafkaAvroDeserializer.java:247)
at io.confluent.kafka.serializers.AbstractKafkaAvroDeserializer.getDatumReader(AbstractKafkaAvroDeserializer.java:194)
...
This does not happen if my service runs in 'production mode'.
I think therefore that some setting is being applied to my service when I spin it up in 'test mode', which changes the way messages are encoded or serialized.
Can anyone suggest some things I can try to resolve this?
Update 1
So, it turns out that the messages look pretty much identical when they are written to the topic and then read back (UUIDs are random for each test run):
Written to topic by service running in 'test mode':
Address 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
------- -------- -------- -------- -------- ----------------
000000: 00000000 01483335 63616366 62642D30 .....H35cacfbd-0
000010: 3165642D 34653564 2D613936 652D6665 1ed-4e5d-a96e-fe
000020: 30626339 65313033 34664832 35313436 0bc9e1034fH25146
000030: 6237392D 66643334 2D346430 322D6261 b79-fd34-4d02-ba
000040: 37362D36 61396535 62623861 31343448 76-6a9e5bb8a144H
000050: 30653364 30326536 2D383732 372D3466 0e3d02e6-8727-4f
000060: 64312D38 3730662D 33646633 35353166 d1-870f-3df3551f
000070: 37343861 084D7220 54064D72 730A4A69 748a.Mr T.Mrs.Ji
000080: 6D6D790A 57686974 6514536E 6F772068 mmy.White.Snow h
000090: 6F757365 00000012 4C697665 72706F6F ouse....Liverpoo
0000A0: 6C0C4C4C 32335252 0E456E67 6C616E64 l.XXXXXX.England
0000B0: 16303735 31323334 35363738 021E4D72 .XXXXXXXXXXX..Mr
0000C0: 20542773 20427573 696E6573 73483737 T's BusinessH77
0000D0: 32383064 36352D36 3633362D 34376565 280d65-6636-47ee
0000E0: 2D393864 302D6361 36646531 32373838 -98d0-ca6de12788
0000F0: 63610000 ca..
Written to topic by service running in 'production mode':
Address 0 1 2 3 4 5 6 7 8 9 A B C D E F 0123456789ABCDEF
------- -------- -------- -------- -------- ----------------
000000: 00000000 57483433 64343264 61372D30 ....WH43d42da7-0
000010: 6533392D 34646665 2D383966 362D6531 e39-4dfe-89f6-e1
000020: 37363036 34383730 61344833 38663864 76064870a4H38f8d
000030: 3561342D 65386532 2D346134 372D6235 5a4-e8e2-4a47-b5
000040: 30662D37 31623435 36653837 33393348 0f-71b456e87393H
000050: 63666463 33653964 2D303362 612D3464 cfdc3e9d-03ba-4d
000060: 62372D62 3034622D 31393137 37323634 b7-b04b-19177264
000070: 36623665 084D7220 54064D72 730A4A69 6b6e.Mr T.Mrs.Ji
000080: 6D6D790A 57686974 6514536E 6F772068 mmy.White.Snow h
000090: 6F757365 00000012 4C697665 72706F6F ouse....Liverpoo
0000A0: 6C0C4C4C 32335252 0E456E67 6C616E64 l.XXXXXX.England
0000B0: 16303735 31323334 35363738 021E4D72 .XXXXXXXXXXX..Mr
0000C0: 20542773 20427573 696E6573 73486161 T's BusinessHaa
0000D0: 35326636 34662D36 6131642D 34393030 52f64f-6a1d-4900
0000E0: 2D616537 612D3432 33326333 65613938 -ae7a-4232c3ea98
0000F0: 38330000 83..
The Testcontainers Kafka module runs a single-node Kafka installation. It doesn't spin up a Schema Registry, which I suspect might be a problem for Avro serialization.
You can add it manually to the tests. Testcontainers allows you to run any Docker image programmatically with a simple API call:
var schemaRegistry = new GenericContainer(DockerImageName.parse("confluentinc/cp-schema-registry:version"));
I don't know for certain, but you probably need to connect Kafka and the Schema Registry, which you can do with a Network; see the Advanced networking chapter in the docs.
Unfortunately, I don't have a good example to refer to.
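As a very rough, untested sketch (the image versions, network aliases and environment variables here are my assumptions based on the standard Confluent images, not something taken from your setup), wiring the two containers onto one network could look something like this:

import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.KafkaContainer;
import org.testcontainers.containers.Network;
import org.testcontainers.utility.DockerImageName;

public class KafkaWithSchemaRegistry {
    public static void main(String[] args) {
        Network network = Network.newNetwork();

        KafkaContainer kafka = new KafkaContainer(DockerImageName.parse("confluentinc/cp-kafka:6.2.1"))
                .withNetwork(network)
                .withNetworkAliases("kafka");

        GenericContainer<?> schemaRegistry =
                new GenericContainer<>(DockerImageName.parse("confluentinc/cp-schema-registry:6.2.1"))
                        .withNetwork(network)
                        .withNetworkAliases("schema-registry")
                        .withExposedPorts(8081)
                        .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
                        // Testcontainers' KafkaContainer advertises an internal listener on kafka:9092
                        .withEnv("SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS", "PLAINTEXT://kafka:9092")
                        .dependsOn(kafka);

        kafka.start();
        schemaRegistry.start();

        // Point the service under test at the host-mapped ports
        String bootstrapServers = kafka.getBootstrapServers();
        String schemaRegistryUrl = "http://" + schemaRegistry.getHost() + ":" + schemaRegistry.getMappedPort(8081);
        System.out.println(bootstrapServers + " / " + schemaRegistryUrl);
    }
}

The bootstrap servers and schema registry URL would then be passed into the test's binder configuration instead of the hard-coded production values.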
You can also look at something like this: https://github.com/kreuzwerker/kafka-consumer-testing.
They mock the schema registry URL so there's no separate schema registry container.
While running a jHipster command, I got the following errors:
+ jhipster axon --skip-git --blueprint cst
INFO! Using JHipster version installed globally
INFO! No custom sharedOptions found within blueprint: generator-jhipster-cst at /usr/local/lib/node_modules/generator-jhipster-cst
events.js:288
throw er; // Unhandled 'error' event
^
TypeError: Cannot read property 'replace' of undefined
at new module.exports (/Users/.../jhipster/generator-jhipster-cst/generators/subgenerator-base.js:27:49)
at new module.exports (/Users/.../jhipster/generator-jhipster-cst/generators/aws/index.js:3:18)
at Environment.instantiate (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:673:23)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:645:19)
at /usr/local/lib/node_modules/generator-jhipster/cli/cli.js:74:31
at Array.forEach (<anonymous>)
at Object.<anonymous> (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:62:29)
at Module._compile (internal/modules/cjs/loader.js:1158:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1178:10)
at Module.load (internal/modules/cjs/loader.js:1002:32)
Emitted 'error' event on Environment instance at:
at Environment.error (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:293:12)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:647:19)
at /usr/local/lib/node_modules/generator-jhipster/cli/cli.js:74:31
[... lines matching original stack trace ...]
at Module.load (internal/modules/cjs/loader.js:1002:32)
at Function.Module._load (internal/modules/cjs/loader.js:901:14)
at Module.require (internal/modules/cjs/loader.js:1044:19)
I tried to update npm and jHipster, but there was another problem with upgrading jHipster:
~ sudo jhipster upgrade
INFO! Using JHipster version installed globally
INFO! Executing jhipster:upgrade
This seems to be an app blueprinted project with jhipster 6.6.0 bug (https://github.com/jhipster/generator-jhipster/issues/11045), you should pass --blueprints to jhipster upgrade commmand.
Error: This seems to be an app blueprinted project with jhipster 6.6.0 bug (https://github.com/jhipster/generator-jhipster/issues/11045), you should pass --blueprints to jhipster upgrade commmand.
at module.exports.error (/usr/local/lib/node_modules/generator-jhipster/generators/generator-base.js:1590:15)
at new module.exports (/usr/local/lib/node_modules/generator-jhipster/generators/upgrade/index.js:95:18)
at Environment.instantiate (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:673:23)
at Environment.create (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:645:19)
at instantiateAndRun (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:729:30)
at Environment.run (/usr/local/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/lib/environment.js:758:12)
at runYoCommand (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:53:13)
at Command.<anonymous> (/usr/local/lib/node_modules/generator-jhipster/cli/cli.js:178:17)
at Command.listener [as _actionHandler] (/usr/local/lib/node_modules/generator-jhipster/node_modules/commander/index.js:413:31)
at Command._parseCommand (/usr/local/lib/node_modules/generator-jhipster/node_modules/commander/index.js:914:14)
NPM: 6.14.8
Node: 12.16.1
jhipster: 6.10.3
Java: Tested with 13.0.2 & 11.0.8
Update:
The part of the code from which the error originated ('replace' of undefined):
const configuration = {
...opts,
...this.getAllJhipsterConfig(this, true)
};
this.baseName = configuration.baseName;
this.serverPort = configuration.serverPort;
this.packageName = configuration.packageName;
// subgenerator-base.js line 27: configuration.packageName is undefined here, so calling .replace() throws
this.rootPackageName = this.packageName.replace(/\.[^.]+$/, '');
Could you explain how I can fix the above problem, please?
As anticipated by the title, I have some problems submitting a Spark job to a Spark cluster running on Docker.
I wrote a very simple Spark job in Scala that subscribes to a Kafka server, arranges some data and stores it in an Elasticsearch database.
Kafka and Elasticsearch are already running in Docker.
Everything works perfectly if I run the Spark job from my IDE in my dev environment (Windows / IntelliJ).
Then (and I'm not a Java guy at all), I added a Spark cluster following these instructions: https://github.com/big-data-europe/docker-spark
The cluster looks healthy when consulting its dashboard. I created a cluster consisting of a master and a worker.
Now, this is my job written in scala:
import java.io.Serializable
import org.apache.commons.codec.StringDecoder
import org.apache.hadoop.fs.LocalFileSystem
import org.apache.hadoop.hdfs.DistributedFileSystem
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark
import org.apache.spark.SparkConf
import org.elasticsearch.spark._
import org.apache.spark.sql.SQLContext
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import scala.util.parsing.json.JSON
object KafkaConsumer {
def main(args: Array[String]): Unit = {
val sc = new SparkConf()
.setMaster("local[*]")
.setAppName("Elastic Search Indexer App")
sc.set("es.index.auto.create", "true")
val elasticResource = "iot/demo"
val ssc = new StreamingContext(sc, Seconds(10))
//ssc.checkpoint("./checkpoint")
val kafkaParams = Map(
"bootstrap.servers" -> "kafka:9092",
"key.deserializer" -> classOf[StringDeserializer],
"value.deserializer" -> classOf[StringDeserializer],
"auto.offset.reset" -> "earliest",
"group.id" -> "group0"
)
val topics = List("test")
val stream = KafkaUtils.createDirectStream(
ssc,
PreferConsistent,
ConsumerStrategies.Subscribe[String, String](topics.distinct, kafkaParams)
)
case class message(key: String, timestamp: Long, payload: Object)
val rdds = stream.map(record => message(record.key, record.timestamp, record.value))
val es_config: scala.collection.mutable.Map[String, String] =
scala.collection.mutable.Map(
"pushdown" -> "true",
"es.nodes" -> "http://docker-host",
"es.nodes.wan.only" -> "true",
"es.resource" -> elasticResource,
"es.ingest.pipeline" -> "iot-test-pipeline"
)
rdds.foreachRDD { rdd =>
rdd.saveToEs(es_config)
rdd.collect().foreach(println)
}
ssc.start()
ssc.awaitTermination()
}
}
To submit this to the cluster I did:
With "sbt-assembly" plugin, I created a fat jar file with all dependencies.
Define an assembly strategy in build.sbt to avoid deduplicate errors on merging ...
Then submit with:
./spark-submit.cmd --class KafkaConsumer --master
spark://docker-host:7077
/c/Users/shams/Documents/Appunti/iot-demo-app/spark-streaming/target/scala-2.11/
spark-streaming-assembly-1.0.jar
BUT I have this error:
19/02/27 11:18:12 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where
applicable Exception in thread "main" java.io.IOException: No
FileSystem for scheme: C
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.spark.util.Utils$.getHadoopFileSystem(Utils.scala:1897)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:694)
at org.apache.spark.deploy.DependencyUtils$.downloadFile(DependencyUtils.scala:135)
at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
at org.apache.spark.deploy.SparkSubmit$$anonfun$doPrepareSubmitEnvironment$7.apply(SparkSubmit.scala:416)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit$.doPrepareSubmitEnvironment(SparkSubmit.scala:415)
at org.apache.spark.deploy.SparkSubmit$.prepareSubmitEnvironment(SparkSubmit.scala:250)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:171)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
After a day of trying I have not solved it, and I cannot work out where my job is trying to access a particular volume, which is what the error seems to say.
Could it be related to the warning message?
And how should I edit my script to avoid that problem?
Thanks in advance.
UPDATE:
The problem does not seem related to my code, because I tried to submit a simple hello-world app compiled in the same way and I have the same issue.
After many attempts and research I have come to the conclusion that the problem could be that I'm using the Windows version of spark-submit from my PC to submit the job (the "No FileSystem for scheme: C" error suggests the Windows drive letter in the jar path is being parsed as a URI scheme).
I could not fully understand it, but for now, by moving the file directly to the master and worker node, I was able to submit it from there.
First, copy the jar onto the container:
docker cp spark-streaming-assembly-1.0.jar 21b43cb2e698:/spark/bin
Then I execute (in /spark/bin folder):
./spark-submit --class KafkaConsumer --deploy-mode cluster --master spark://spark-master:7077 spark-streaming-assembly-1.0.jar
This is the workaround that I have found for the moment.
You can mount your jobs directory into your container by running your submit container like this:
docker run -it --rm\
--name spark-submit \
--mount type=bind,source="$(pwd)"/jobs,target=/home/jobs,readonly \
--network spark-net \
-p 4040:4040 \
-p 18080:18080 \
your-spark-image \
bash
This command mounts your jobs folder directly into the container, and any changes you make on the host will automatically be present in the container.
Just downloaded and installed Elasticsearch 1.3.2 in the past hour.
Opened iptables for port 9200 and 9300:9400.
Set my computer name and IP in /etc/hosts.
Head module and Paramedic installed and running smoothly.
curl on localhost works flawlessly.
Copied all jars from the download into Eclipse, so the client is the same version.
--Java--
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
public class Test{
public static void main(String[] args) {
Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "elastictest").build();
TransportClient transportClient = new TransportClient(settings);
Client client = transportClient.addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx",9300));//just masking ip with xxx for SO Question
try{
SearchResponse response = client.prepareSearch().setQuery(QueryBuilders.matchQuery("url", "twitter")).setSize(5).execute().actionGet();//bunch of urls indexed
String output = response.toString();
System.out.println(output);
}catch(Exception e){
e.printStackTrace();
}
client.close();
}
}
--Output--
log4j:WARN No appenders could be found for logger (org.elasticsearch.plugins).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:298)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:214)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:105)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:330)
at org.elasticsearch.client.transport.TransportClient.search(TransportClient.java:421)
at org.elasticsearch.action.search.SearchRequestBuilder.doExecute(SearchRequestBuilder.java:1097)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at Test.main(Test.java:20)
Update: Now I am REALLY confused. I just pressed run in Eclipse 3 times: twice I received the error above, and once the search worked!? This is a brand new CentOS 6.5 VPS with a brand new JDK installed; I then installed Elasticsearch and have done nothing else to the box.
Update: After running ./bin/elasticsearch in the console:
[2014-09-18 08:56:13,694][INFO ][node ] [Acrobat] version[1.3.2], pid[2978], build[dee175d/2014-08-13T14:29:30Z]
[2014-09-18 08:56:13,695][INFO ][node ] [Acrobat] initializing ...
[2014-09-18 08:56:13,703][INFO ][plugins ] [Acrobat] loaded [], sites [head, paramedic]
[2014-09-18 08:56:15,941][WARN ][common.network ] failed to resolve local host, fallback to loopback
java.net.UnknownHostException: elasticsearchtest: elasticsearchtest: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at org.elasticsearch.common.network.NetworkUtils.<clinit>(NetworkUtils.java:54)
at org.elasticsearch.transport.netty.NettyTransport.<init>(NettyTransport.java:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:192)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.net.UnknownHostException: elasticsearchtest: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 62 more
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] initialized
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] starting ...
[2014-09-18 08:56:17,110][INFO ][transport ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/143.79.236.31:9300]}
[2014-09-18 08:56:17,126][INFO ][discovery ] [Acrobat] elastictest/QvSNFajjQ9SFjU7WOdjaLw
[2014-09-18 08:56:20,145][INFO ][cluster.service ] [Acrobat] new_master [Acrobat][QvSNFajjQ9SFjU7WOdjaLw][localhost][inet[/143.79.236.31:9300]], reason: zen-disco-join (elected_as_master)
[2014-09-18 08:56:20,212][INFO ][http ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/143.79.236.31:9200]}
[2014-09-18 08:56:20,214][INFO ][node ] [Acrobat] started
--cluster config in elasticsearch.yml--
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elastictest
Possible problems:
Wrong port: if you use a Java or Scala client, the correct port is 9300, not 9200.
Wrong cluster name: make sure the cluster name you set in your code is the same as the cluster.name you set in $ES_HOME/config/elasticsearch.yml.
The sniff option: setting client.transport.sniff to true when the client can't connect to all nodes of the ES cluster will cause this problem too. The ES doc explains why.
Elasticsearch settings are in $ES_HOME/config/elasticsearch.yml. There, if the cluster.name setting is commented out, ES will accept just about any cluster name. So the cluster.name "elastictest" in your code might be the problem. Try this:
Client client = new TransportClient()
.addTransportAddress(new InetSocketTransportAddress(
"143.79.236.xxx",
9300));
You should check the node's port; you can do it using the head plugin.
These ports are not the same. For example, the web URL you can open is localhost:9200, but the node's transport port is 9300, so none of the configured nodes will be available if you use 9200 as the port.
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{UfB9geJCR--spyD7ewmoXQ}{192.168.1.245}{192.168.1.245:9300}]]
In my case it was a difference in versions. If you check the logs in the Elasticsearch cluster you will see:
Elasticsearch logs
[node1] exception caught on transport layer [NettyTcpChannel{localAddress=/192.168.1.245:9300, remoteAddress=/172.16.1.47:65130}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
I was using Elasticsearch client and transport version 5.1.1, and my Elasticsearch cluster version was 6, so I changed my library version to 5.4.3.
I faced a similar issue, and here is the solution.
Example:
In elasticsearch.yml, add the properties below:
cluster.name: production
node.name: node1
network.bind_host: 10.0.1.22
network.host: 0.0.0.0
transport.tcp.port: 9300
Add the following in the Java Elastic API for a bulk push (just a code snippet).
For the IP address, add the public IP address of the Elasticsearch machine:
Client client;
BulkRequestBuilder requestBuilder;
try {
client = TransportClient.builder().settings(Settings.builder().put("cluster.name", "production").put("node.name","node1")).build().addTransportAddress(
new InetSocketTransportAddress(InetAddress.getByName(""), 9300));
requestBuilder = (client).prepareBulk();
}
catch (Exception e) {
// handle/log connection and bulk failures here
}
Open the firewall ports 9200 and 9300.
I spent days figuring out this issue. I know it's late, but this might be helpful:
I resolved this issue by switching to compatible/stable versions of:
Spring boot: 2.1.1
Spring Data Elastic: 2.1.4
Elastic: 6.4.0 (default)
Maven:
<parent>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-parent</artifactId>
  <version>2.1.1.RELEASE</version>
</parent>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
<version>2.1.4.RELEASE</version>
</dependency>
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
</dependency>
You don't need to mention the Elastic version; by default it is 6.4.0. But if you want to use a specific version, use the snippet below inside the properties tag, with the compatible version of Spring Boot and Spring Data (if required):
<properties>
<elasticsearch.version>6.8.0</elasticsearch.version>
</properties>
Also, I used the REST High Level Client in ElasticConfiguration:
#Value("${elasticsearch.host}")
public String host;
#Value("${elasticsearch.port}")
public int port;
#Bean(destroyMethod = "close")
public RestHighLevelClient restClient1() {
final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
RestClientBuilder builder = RestClient.builder(new HttpHost(host, port));
RestHighLevelClient client = new RestHighLevelClient(builder);
return client;
}
}
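As a usage sketch against that bean (this assumes the 6.4+ high-level client; the index name is just a placeholder), a simple search over HTTP could look like:

import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.builder.SearchSourceBuilder;

// Runs a match_all query against the given index using the injected RestHighLevelClient
public SearchResponse searchAll(RestHighLevelClient client, String indexName) throws java.io.IOException {
    SearchRequest request = new SearchRequest(indexName);
    request.source(new SearchSourceBuilder().query(QueryBuilders.matchAllQuery()));
    return client.search(request, RequestOptions.DEFAULT);
}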
Important Note:
Elasticsearch uses port 9300 to communicate between nodes and port 9200 for HTTP clients. In the application properties:
elasticsearch.host=10.40.43.111
elasticsearch.port=9200
spring.data.elasticsearch.cluster-nodes=10.40.43.111:9300 (customized Elastic server)
spring.data.elasticsearch.cluster-name=any-cluster-name (customized cluster name)
From Postman, you can use: http://10.40.43.111:9200/[indexname]/_search
Happy coding :)
For completeness' sake, here's the snippet that creates the transport client, passing a properly resolved InetAddress to InetSocketTransportAddress:
Client esClient = TransportClient.builder()
.settings(settings)
.build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("143.79.236.xxx"), 9300));
If the above advice does not work for you, change the log level of your logging framework configuration (log4j, logback, ...) to INFO, then re-check the output.
The logger may be hiding messages like:
INFO org.elasticsearch.client.transport.TransportClientNodesService - failed to get node info for...
Caused by: ElasticsearchSecurityException: missing authentication token for action...
(in the example above, there was X-Pack plugin in ElasticSearch which requires authentication)
For other users getting this problem:
You may get this error if you are running a newer Elasticsearch (5.5 or later) while running a Spring Boot <2 version.
The recommendation is to use the REST client, since the Java transport client will be deprecated.
Another workaround would be to upgrade to Spring Boot 2, since that should be compatible.
See https://discuss.elastic.co/t/spring-data-elasticsearch-cant-connect-with-elasticsearch-5-5-0/94235 for more information.
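If you go the REST route on an older stack, a minimal sketch using the low-level REST client (this uses the 5.x/6.x performRequest signature; host, port and endpoint are placeholders) could look like this:

import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

public class LowLevelRestExample {
    public static void main(String[] args) throws Exception {
        // The low-level client talks to the HTTP port (9200), not the transport port (9300)
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200, "http")).build();
        try {
            Response response = restClient.performRequest("GET", "/_cluster/health");
            System.out.println(EntityUtils.toString(response.getEntity()));
        } finally {
            restClient.close();
        }
    }
}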
Since most of the answers seem to be outdated, here is the setup that worked for me:
Elasticsearch-Version: 7.2.0 (OSS) running on Docker
Java-Version: JDK-11
elasticsearch.yml:
cluster.name: production
node.name: node1
network.host: 0.0.0.0
transport.tcp.port: 9300
cluster.initial_master_nodes: node1
Setup:
client = new PreBuiltTransportClient(Settings.builder().put("cluster.name", "production").build());
client.addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
Since PreBuiltTransportClient is deprecated, you should use RestHighLevelClient for Elasticsearch version 7.3.0: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/7.3.0/index.html
If you are using the Java transport client:
1. Check that port 9300 is accessible/open.
2. Check the node and cluster name; these should be correct. You can check the node and cluster name by typing ip:port in your browser.
3. Check that the version of your jar matches the installed ES version.
This one did work for me in ES 1.7.5:
import java.io.IOException;
import java.util.Date;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;
public static void main(String[] args) throws IOException {
Settings settings = ImmutableSettings.settingsBuilder()
.put("client.transport.sniff",true)
.put("cluster.name","elasticcluster").build();
Client client = new TransportClient(settings)
.addTransportAddress(new InetSocketTransportAddress("[ipaddress]",9300));
XContentBuilder builder = null;
try {
builder = jsonBuilder().startObject().field("user", "testdata").field("postdata",new Date()).field("message","testmessage")
.endObject();
} catch (IOException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
System.out.println(builder.string());
IndexResponse response = client.prepareIndex("twitter","tweet","1").setSource(builder).execute().actionGet();
client.close();
}
Check your elasticsearch.yml: the "transport.host" property must be "0.0.0.0", not "127.0.0.1" or "localhost".
This means we are not able to instantiate the ES TransportClient, and it throws this exception. There are a couple of possibilities that cause this issue.
The cluster name is incorrect. Open the ES_HOME_DIR/config/elasticsearch.yml file and check the cluster name value, OR use this command: curl -XGET 'http://localhost:9200/_nodes'
Port 9200 is the HTTP port, but the Elasticsearch transport service uses TCP port 9300 [by default], so verify that port 9300 is not blocked.
Authentication issue: set the header in transportClient's context for authentication:
client.threadPool().getThreadContext()
.putHeader("Authorization", "Basic " + encodeBase64String(basicHeader.getBytes()));
If you are still facing this issue then add the following property:
put("client.transport.ignore_cluster_name", true)
The below basic code is working fine for me:
Settings settings = Settings.builder()
.put("cluster.name", "my-application").put("client.transport.sniff", true).put("client.transport.ignore_cluster_name", false).build();
TransportClient client = new PreBuiltTransportClient(settings).addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
I know I'm a bit late; in case the above answers didn't work, I recommend checking the logs in the Elasticsearch terminal. I found that the error message said I needed to update my version from 5.0.0-rc1 to 6.8.0, and I resolved it by updating my Maven dependencies to:
<dependency>
<groupId>org.elasticsearch</groupId>
<artifactId>elasticsearch</artifactId>
<version>6.8.0</version>
</dependency>
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>6.8.0</version>
</dependency>
This changed my code as well: since InetSocketTransportAddress is deprecated, I had to change it to TransportAddress.
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
.addTransportAddress(new TransportAddress(InetAddress.getByName(host), port));
And you also need to add this to your config/elasticsearch.yml file (use your host address)
transport.host: localhost
Check the ES server logs
sudo tail -f /var/log/elasticsearch/elasticsearch.log
I was using an outdated client
Received message from unsupported version: [5.0.0] minimal compatible version is: [6.8.0]
I had the same problem. My problem was that the version of the dependency conflicted with the Elasticsearch version. Check the version at ip:9200 and use the dependency version that matches it.
This is a common issue for ES version 5.6.10+. Elasticsearch had the TransportClient (created via PreBuiltTransportClient), which has been deprecated since that version. The alternative (which is the solution here in case you are using ES 7.14 or earlier) is to use the Java High Level REST Client. See the documentation (they also have a great migration guide for moving an application from TransportClient to the REST client).
From version 7.15, they dropped the Java High Level REST Client in favor of the Java API Client, and there are migration guides for that too. They did this mainly to reduce the client's dependencies. See the docs.
If you are using the same client version as the cluster, and the right client library, the issue should be resolved.
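For reference, a minimal sketch of connecting with the newer Java API Client (7.15+; host and port are placeholders) might look like this:

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class JavaApiClientExample {
    public static void main(String[] args) throws Exception {
        // The Java API Client talks to the cluster over HTTP (port 9200), so the 9300
        // transport port and cluster-name settings are no longer involved.
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient client = new ElasticsearchClient(transport);

        // Simple connectivity check: print the server version reported by the cluster
        System.out.println(client.info().version().number());

        transport.close();
    }
}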
You should check the logs. If you see something like the following:
"stacktrace": ["java.lang.IllegalStateException: Received message from unsupported version: [6.4.3] minimal compatible version is: [6.8.0]"
You can check this link
https://discuss.elastic.co/t/java-client-or-spring-boot-for-elasticsearch-7-3-1/199778
You have to explicitly declare the ES version.
I'm working with Cassandra-0.8.2.
I am working with the most recent version of Hector, and my Java version is 1.6.0_26.
I'm very new to Cassandra & Hector.
What I'm trying to do:
1. Connect to an up-and-running instance of Cassandra on a different server. I know it's running because I can SSH into the server running this Cassandra instance and run the CLI with full functionality.
2. Then I want to connect to a keyspace, create a column family, and add a value to that column family through Hector.
I think my problem is that this running instance of Cassandra on this server might not be configured to accept commands that are not local. I think my next step will be to add a local instance of Cassandra on the machine I'm working on and try to do this locally. What do you think?
Here's my Java code:
import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.ComparatorType;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;
public class MySample {
public static void main(String[] args) {
Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "xxx.xxx.x.41:9160");
Keyspace keyspace = HFactory.createKeyspace("apples", cluster);
ColumnFamilyDefinition cf = HFactory.createColumnFamilyDefinition("apples","ColumnFamily2",ComparatorType.UTF8TYPE);
StringSerializer stringSerializer = StringSerializer.get();
Mutator<String> mutator = HFactory.createMutator(keyspace, stringSerializer);
mutator.insert("jsmith", "Standard1", HFactory.createStringColumn("first", "John"));
}
}
My ERROR is:
16:22:19,852 INFO CassandraHostRetryService:37 - Downed Host Retry service started with queue size -1 and retry delay 10s
16:22:20,136 INFO JmxMonitor:54 - Registering JMX me.prettyprint.cassandra.service_Test Cluster:ServiceType=hector,MonitorType=hector
Exception in thread "main" me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:Keyspace apples does not exist)
at me.prettyprint.cassandra.connection.HThriftClient.getCassandra(HThriftClient.java:70)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:226)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:102)
at me.prettyprint.cassandra.service.KeyspaceServiceImpl.batchMutate(KeyspaceServiceImpl.java:108)
at me.prettyprint.cassandra.model.MutatorImpl$3.doInKeyspace(MutatorImpl.java:222)
at me.prettyprint.cassandra.model.MutatorImpl$3.doInKeyspace(MutatorImpl.java:219)
at me.prettyprint.cassandra.model.KeyspaceOperationCallback.doInKeyspaceAndMeasure(KeyspaceOperationCallback.java:20)
at me.prettyprint.cassandra.model.ExecutingKeyspace.doExecute(ExecutingKeyspace.java:85)
at me.prettyprint.cassandra.model.MutatorImpl.execute(MutatorImpl.java:219)
at me.prettyprint.cassandra.model.MutatorImpl.insert(MutatorImpl.java:59)
at org.cassandra.examples.MySample.main(MySample.java:25)
Caused by: InvalidRequestException(why:Keyspace apples does not exist)
at org.apache.cassandra.thrift.Cassandra$set_keyspace_result.read(Cassandra.java:5302)
at org.apache.cassandra.thrift.Cassandra$Client.recv_set_keyspace(Cassandra.java:481)
at org.apache.cassandra.thrift.Cassandra$Client.set_keyspace(Cassandra.java:456)
at me.prettyprint.cassandra.connection.HThriftClient.getCassandra(HThriftClient.java:68)
... 11 more
Thank you in advance for your help.
The exception you are getting is,
why:Keyspace apples does not exist
In your code, this line does not actually create the keyspace,
Keyspace keyspace = HFactory.createKeyspace("apples", cluster);
As described here, this is the code you need to define your keyspace,
ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition("MyKeyspace", "ColumnFamilyName", ComparatorType.BYTESTYPE);
// replicationFactor: e.g. 1 for a single-node cluster
KeyspaceDefinition newKeyspace = HFactory.createKeyspaceDefinition("MyKeyspace", ThriftKsDef.DEF_STRATEGY_CLASS, replicationFactor, Arrays.asList(cfDef));
// Add the schema to the cluster.
// "true" as the second param means that Hector will block until all nodes see the change.
cluster.addKeyspace(newKeyspace, true);
We also have a getting started guide up on the wiki which might be of some help.
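Putting the two snippets together, a minimal sketch of how the original main method could look (this is an assumption-laden example: it uses replication factor 1 for a single-node cluster, and defines the column family as "Standard1" because the question's insert targets "Standard1" even though its definition code names "ColumnFamily2"):

import java.util.Arrays;

import me.prettyprint.cassandra.serializers.StringSerializer;
import me.prettyprint.cassandra.service.ThriftKsDef;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.Keyspace;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.ComparatorType;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;
import me.prettyprint.hector.api.mutation.Mutator;

public class MySample {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("Test Cluster", "xxx.xxx.x.41:9160");

        // Create the keyspace and column family on the cluster before writing to them
        if (cluster.describeKeyspace("apples") == null) {
            ColumnFamilyDefinition cfDef = HFactory.createColumnFamilyDefinition(
                    "apples", "Standard1", ComparatorType.UTF8TYPE);
            KeyspaceDefinition ksDef = HFactory.createKeyspaceDefinition(
                    "apples", ThriftKsDef.DEF_STRATEGY_CLASS, 1, Arrays.asList(cfDef));
            cluster.addKeyspace(ksDef, true); // block until all nodes see the change
        }

        Keyspace keyspace = HFactory.createKeyspace("apples", cluster);
        Mutator<String> mutator = HFactory.createMutator(keyspace, StringSerializer.get());
        mutator.insert("jsmith", "Standard1", HFactory.createStringColumn("first", "John"));
    }
}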