I am trying to send data from Logstash to Elasticsearch. Below is the config file and the output on the terminal.
input {
  file {
    path => "F:/business/data/clinical_trials/ctp.csv"
    start_position => "beginning"
    sincedb_path => "C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/data/plugins/inputs/file/.sincedb_88142d557695dc3df93b28d02940763d"
  }
}
filter {
  csv {
    separator => ","
    columns => ["web-scraper-order", "web-scraper-start-url", "First Submitted Date"]
  }
}
output {
  stdout { codec => "rubydebug" }
  elasticsearch {
    hosts => ["https://idm.es.eastus2.azure.elastic-cloud.com:9243"]
    index => "DB"
    user => "elastic"
    password => "******"
  }
}
When I run it, I get the following output. It gets right down to starting the pipelines, and the next step should be establishing a connection with Elasticsearch, but it hangs at the pipeline stage forever.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/vendor/bundle/jruby/2.5.0/gems/bundler-1.17.3/lib/bundler/rubygems_integration.rb:200: warning: constant Gem::ConfigMap is deprecated
Sending Logstash logs to C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/logs which is now configured via log4j2.properties
[2021-09-22T22:40:20,327][INFO ][logstash.runner ] Log4j configuration path used is: C:\Users\matth\Downloads\logstash-7.14.2-windows-x86_641\logstash-7.14.2\config\log4j2.properties
[2021-09-22T22:40:20,333][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.14.2", "jruby.version"=>"jruby 9.2.19.0 (2.5.8) 2021-06-15 55810c552b OpenJDK 64-Bit Server VM 11.0.12+7 on 11.0.12+7 +indy +jit [mswin32-x86_64]"}
[2021-09-22T22:40:20,383][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2021-09-22T22:40:21,625][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-09-22T22:40:22,007][INFO ][org.reflections.Reflections] Reflections took 46 ms to scan 1 urls, producing 120 keys and 417 values
[2021-09-22T22:40:22,865][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://idm.es.eastus2.azure.elastic-cloud.com:9243"]}
[2021-09-22T22:40:23,032][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://elastic:xxxxxx@idm.es.eastus2.azure.elastic-cloud.com:9243/]}}
[2021-09-22T22:40:23,707][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://elastic:xxxxxx@idm.es.eastus2.azure.elastic-cloud.com:9243/"}
[2021-09-22T22:40:24,094][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (7.14.1) {:es_version=>7}
[2021-09-22T22:40:24,096][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2021-09-22T22:40:24,277][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["C:/Users/matth/Downloads/logstash-7.14.2-windows-x86_641/logstash-7.14.2/config/logstash-sample.conf"], :thread=>"#<Thread:0x52ff5a7a run>"}
[2021-09-22T22:40:24,346][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>7, :ecs_compatibility=>:disabled}
[2021-09-22T22:40:24,793][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>0.51}
[2021-09-22T22:40:24,835][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-09-22T22:40:24,856][INFO ][filewatch.observingtail ][main][c4df97cc309d579dc89a5115b7fcf43a440ef876f640039f520f6b116b913f49] START, creating Discoverer, Watch with file and sincedb collections
[2021-09-22T22:40:24,876][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Any help as to why my data isn't getting to Elasticsearch would be appreciated.
I have a Windows machine running an Ubuntu VM in VirtualBox, and the VM runs Elasticsearch 2.4. I have mapped ports 9200, 9300 and 5601 via VirtualBox's port mapping and also allowed the ports in iptables on my Ubuntu instance.
However, when Java tries to connect I get this error:
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:290) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:207) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.client.transport.support.TransportProxyClient.execute(TransportProxyClient.java:55) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:288) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:86) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:56) ~[elasticsearch-2.4.1.jar:2.4.1]
at org.elasticsearch.action.ActionRequestBuilder.get(ActionRequestBuilder.java:64) ~[elasticsearch-2.4.1.jar:2.4.1]
at eu.adsdaq.elasticsearch.importer.sync.service.OrderHistoryService.sendBulkRequest(OrderHistoryService.java:133) [classes/:na]
at eu.adsdaq.elasticsearch.importer.sync.service.OrderHistoryService.syncOrderHistory(OrderHistoryService.java:69) [classes/:na]
at eu.adsdaq.elasticsearch.importer.sync.service.OrderHistoryService.sync(OrderHistoryService.java:45) [classes/:na]
at eu.adsdaq.elasticsearch.importer.sync.ElasticsearchSyncService.lambda$0(ElasticsearchSyncService.java:32) [classes/:na]
at eu.adsdaq.elasticsearch.importer.sync.ElasticsearchSyncService$$Lambda$3/910457645.accept(Unknown Source) [classes/:na]
at java.util.ArrayList.forEach(ArrayList.java:1249) [na:1.8.0_45] ...
Before this error, I also see something like this,
org.elasticsearch.plugins : [Chimera] modules [], plugins [], sites []
That log happens on the last line of this snippet, where I create the Client object initially:
Settings settings = Settings.builder()
        .put("cluster.name", properties.getClusterName())
        .put("client.transport.sniff", properties.isSniffing())
        .build();
TransportAddress address;
try {
    address = new InetSocketTransportAddress(InetAddress.getByName(properties.getHost()), properties.getPort());
} catch (UnknownHostException exception) {
    logger.error("Error : unknown elasticsearch host " + properties.getHost() + ":" + properties.getPort(), exception);
    throw new RuntimeException(exception);
}
client = TransportClient.builder().settings(settings).build().addTransportAddress(address);
So the strange thing is: I don't get the error initially, but I get the error later on when I try to insert data using the same Client.
I used localhost here as my host. When I tried 0.0.0.0 I got a similar error. I can access http://localhost:9200 from my browser and console, so I know the port mapping works to some degree.
Any help on how I can resolve this issue is appreciated.
So I have this code to connect to Openfire:
XMPPTCPConnectionConfiguration.Builder config = XMPPTCPConnectionConfiguration.builder();
config.setUsernameAndPassword(loginUser, passwordUser);
config.setSecurityMode(ConnectionConfiguration.SecurityMode.disabled);
config.setServiceName(serverAddress);
config.setHost(serverAddress);
config.setPort(5222);
config.setDebuggerEnabled(true);
connection = new XMPPTCPConnection(config.build());
ReconnectionManager.getInstanceFor(connection).enableAutomaticReconnection();
System.out.println("Reconnection enabled : " + ReconnectionManager.getInstanceFor(connection).isAutomaticReconnectEnabled());
ConnectionListener connectionListener = new XMPPConnectionListener();
connection.addConnectionListener(connectionListener);
but when I try to connect I get this error:
org.jivesoftware.smack.XMPPException$StreamErrorException: internal-server-error You can read more about the meaning of this stream error at http://xmpp.org/rfcs/rfc6120.html#streams-error-conditions
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.parsePackets(XMPPTCPConnection.java:1007)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader.access$300(XMPPTCPConnection.java:948)
at org.jivesoftware.smack.tcp.XMPPTCPConnection$PacketReader$1.run(XMPPTCPConnection.java:963)
at java.lang.Thread.run(Thread.java:744)
EDIT: Openfire's log:
Warn log:
2016.06.13 11:06:31 org.apache.mina.core.filterchain.DefaultIoFilterChain - Unexpected exception from exceptionCaught handler.
java.lang.NoSuchMethodError: java.util.concurrent.ConcurrentHashMap.keySet()Ljava/util/concurrent/ConcurrentHashMap$KeySetView;
at org.jivesoftware.openfire.roster.Roster.broadcastPresence(Roster.java:628)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.broadcastUpdate(PresenceUpdateHandler.java:309)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:163)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:138)
at org.jivesoftware.openfire.handler.PresenceUpdateHandler.process(PresenceUpdateHandler.java:202)
at org.jivesoftware.openfire.PresenceRouter.handle(PresenceRouter.java:144)
at org.jivesoftware.openfire.PresenceRouter.route(PresenceRouter.java:80)
at org.jivesoftware.openfire.spi.PacketRouterImpl.route(PacketRouterImpl.java:88)
at org.jivesoftware.openfire.SessionManager$ClientSessionListener.onConnectionClose(SessionManager.java:1267)
at org.jivesoftware.openfire.nio.NIOConnection.notifyCloseListeners(NIOConnection.java:266)
at org.jivesoftware.openfire.nio.NIOConnection.close(NIOConnection.java:248)
at org.jivesoftware.openfire.nio.ConnectionHandler.exceptionCaught(ConnectionHandler.java:162)
I tried to connect to a local Openfire server (Windows) and succeeded, but I fail when I try to connect to an Ubuntu Openfire server.
Any help would be appreciated.
Newer versions of Openfire need Java 8 (or higher).
To be precise: Openfire needs the Oracle JRE 8, NOT OpenJDK.
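The NoSuchMethodError on ConcurrentHashMap.keySet() in the Openfire log above is the classic symptom of code compiled for Java 8 running on a Java 7 (or older) JRE. As a quick sanity check of which JRE is actually active on the Ubuntu box, something like this small standalone class (a hypothetical helper, not Openfire code) can help:
public class JavaVersionCheck {
    public static void main(String[] args) {
        // Run this with the same java binary that starts Openfire to see which JRE it really uses.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
    }
}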
Just downloaded and installed Elasticsearch 1.3.2 in the past hour
Opened iptables for port 9200 and 9300:9400
Set my computer name and IP in /etc/hosts
Head module and Paramedic installed and running smoothly
curl on localhost works flawlessly
Copied all jars from the download into Eclipse, so the client is the same version
--Java--
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
public class Test {
    public static void main(String[] args) {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "elastictest").build();
        TransportClient transportClient = new TransportClient(settings);
        Client client = transportClient.addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300)); // just masking ip with xxx for SO Question
        try {
            SearchResponse response = client.prepareSearch().setQuery(QueryBuilders.matchQuery("url", "twitter")).setSize(5).execute().actionGet(); // bunch of urls indexed
            String output = response.toString();
            System.out.println(output);
        } catch (Exception e) {
            e.printStackTrace();
        }
        client.close();
    }
}
--Output--
log4j:WARN No appenders could be found for logger (org.elasticsearch.plugins).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:298)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:214)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:105)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:330)
at org.elasticsearch.client.transport.TransportClient.search(TransportClient.java:421)
at org.elasticsearch.action.search.SearchRequestBuilder.doExecute(SearchRequestBuilder.java:1097)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at Test.main(Test.java:20)
Update: Now I am really confused. I just pressed run in Eclipse 3 times: 2 times I received the error above, 1 time the search worked!? Brand new CentOS 6.5 VPS, brand new JDK installed. Then I installed Elasticsearch and have done nothing else to the box.
Update: After running ./bin/elasticsearch console
[2014-09-18 08:56:13,694][INFO ][node ] [Acrobat] version[1.3.2], pid[2978], build[dee175d/2014-08-13T14:29:30Z]
[2014-09-18 08:56:13,695][INFO ][node ] [Acrobat] initializing ...
[2014-09-18 08:56:13,703][INFO ][plugins ] [Acrobat] loaded [], sites [head, paramedic]
[2014-09-18 08:56:15,941][WARN ][common.network ] failed to resolve local host, fallback to loopback
java.net.UnknownHostException: elasticsearchtest: elasticsearchtest: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at org.elasticsearch.common.network.NetworkUtils.<clinit>(NetworkUtils.java:54)
at org.elasticsearch.transport.netty.NettyTransport.<init>(NettyTransport.java:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:192)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.net.UnknownHostException: elasticsearchtest: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 62 more
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] initialized
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] starting ...
[2014-09-18 08:56:17,110][INFO ][transport ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/143.79.236.31:9300]}
[2014-09-18 08:56:17,126][INFO ][discovery ] [Acrobat] elastictest/QvSNFajjQ9SFjU7WOdjaLw
[2014-09-18 08:56:20,145][INFO ][cluster.service ] [Acrobat] new_master [Acrobat][QvSNFajjQ9SFjU7WOdjaLw][localhost][inet[/143.79.236.31:9300]], reason: zen-disco-join (elected_as_master)
[2014-09-18 08:56:20,212][INFO ][http ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/143.79.236.31:9200]}
[2014-09-18 08:56:20,214][INFO ][node ] [Acrobat] started
--cluster config in elasticsearch.yml--
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elastictest
Possible problems:
Wrong port: if you use a Java or Scala transport client, the correct port is 9300, not 9200.
Wrong cluster name: make sure the cluster name you set in your code is the same as the cluster.name you set in $ES_HOME/config/elasticsearch.yml.
The sniff option: setting client.transport.sniff to true when the client can't connect to all nodes of the ES cluster will cause this problem too. The ES docs explain why.
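Putting those together, a minimal sketch against the 1.x TransportClient API used in the question (the cluster name and IP are the question's own values; everything else here is just an assumption to illustrate):
Settings settings = ImmutableSettings.settingsBuilder()
        .put("cluster.name", "elastictest")    // must match cluster.name in elasticsearch.yml
        .put("client.transport.sniff", false)  // leave sniffing off unless every node is reachable from the client
        .build();
Client client = new TransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300)); // transport port 9300, not 9200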
Elasticsearch settings are in $ES_HOME/config/elasticsearch.yml. If the cluster.name setting is commented out there, ES falls back to the default cluster name (elasticsearch), so setting cluster.name to "elastictest" in your code might be the problem. Try this:
Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress(
                "143.79.236.xxx",
                9300));
You should check the node's transport port; you can do that using the head plugin.
These ports are not the same. For example:
The web URL you can open is localhost:9200,
but the node's transport port is 9300, so none of the configured nodes are available if you use 9200 as the port.
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{UfB9geJCR--spyD7ewmoXQ}{192.168.1.245}{192.168.1.245:9300}]]
In my case it was a difference in versions. If you check the logs on the Elasticsearch cluster you will see something like the following.
Elasticsearch logs
[node1] exception caught on transport layer [NettyTcpChannel{localAddress=/192.168.1.245:9300, remoteAddress=/172.16.1.47:65130}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
I was using the Elasticsearch client and transport version 5.1.1, and my Elasticsearch cluster was on version 6. So I changed my library version to 5.4.3.
I faced a similar issue; here is the solution.
Example:
In elasticsearch.yml, add the properties below:
cluster.name: production
node.name: node1
network.bind_host: 10.0.1.22
network.host: 0.0.0.0
transport.tcp.port: 9300
Add the following in the Java Elasticsearch API for the bulk push (just a code snippet).
For the IP address, add the public IP address of the Elasticsearch machine:
Client client;
BulkRequestBuilder requestBuilder;
try {
    client = TransportClient.builder().settings(Settings.builder().put("cluster.name", "production").put("node.name", "node1")).build().addTransportAddress(
            new InetSocketTransportAddress(InetAddress.getByName(""), 9300));
    requestBuilder = (client).prepareBulk();
} catch (Exception e) {
}
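To round off the snippet, here is a rough sketch of actually filling and executing the bulk request (the index name, type and document below are made-up placeholders, not part of the original answer; BulkResponse is org.elasticsearch.action.bulk.BulkResponse):
requestBuilder.add(client.prepareIndex("orders", "order")   // hypothetical index and type
        .setSource("{\"id\":1,\"status\":\"NEW\"}"));        // hypothetical document body
BulkResponse bulkResponse = requestBuilder.execute().actionGet();
if (bulkResponse.hasFailures()) {
    System.err.println(bulkResponse.buildFailureMessage()); // report any per-item failures
}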
Open the firewall ports for 9200 and 9300.
I spent days trying to figure out this issue. I know it's late, but this might be helpful:
I resolved this issue by using compatible/stable versions of:
Spring boot: 2.1.1
Spring Data Elastic: 2.1.4
Elastic: 6.4.0 (default)
Maven:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.1.RELEASE</version>
</parent>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
</dependency>
You don't need to specify the Elasticsearch version; by default it is 6.4.0. But if you want a specific version, use the snippet below inside the properties tag, and use the compatible version of Spring Boot and Spring Data (if required):
<properties>
<elasticsearch.version>6.8.0</elasticsearch.version>
</properties>
Also, I used the REST High Level Client in ElasticConfiguration:
@Value("${elasticsearch.host}")
public String host;

@Value("${elasticsearch.port}")
public int port;

@Bean(destroyMethod = "close")
public RestHighLevelClient restClient1() {
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    RestClientBuilder builder = RestClient.builder(new HttpHost(host, port));
    RestHighLevelClient client = new RestHighLevelClient(builder);
    return client;
}
}
Important note:
Elasticsearch uses port 9300 to communicate between nodes and port 9200 for the HTTP client. In the application properties:
elasticsearch.host=10.40.43.111
elasticsearch.port=9200
spring.data.elasticsearch.cluster-nodes=10.40.43.111:9300 (customized Elastic server)
spring.data.elasticsearch.cluster-name=any-cluster-name (customized cluster name)
From Postman, you can use: http://10.40.43.111:9200/[indexname]/_search
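The same search from Java, given the RestHighLevelClient bean above (here called client; the index name is just a placeholder, as in the Postman URL, and the call may throw IOException):
SearchRequest searchRequest = new SearchRequest("indexname");  // placeholder index name
searchRequest.source(new SearchSourceBuilder().query(QueryBuilders.matchAllQuery()));
SearchResponse searchResponse = client.search(searchRequest, RequestOptions.DEFAULT);
System.out.println("hits: " + searchResponse.getHits().getTotalHits());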
Happy coding :)
For completeness' sake, here's the snippet that creates the transport client, resolving the host with InetAddress.getByName before building the InetSocketTransportAddress:
Client esClient = TransportClient.builder()
.settings(settings)
.build()
.addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("143.79.236.xxx"), 9300));
If the above advice does not work for you, change the log level of your logging framework configuration (log4j, logback...) to INFO. Then re-check the output.
The logger may be hiding messages like:
INFO org.elasticsearch.client.transport.TransportClientNodesService - failed to get node info for...
Caused by: ElasticsearchSecurityException: missing authentication token for action...
(in the example above, there was X-Pack plugin in ElasticSearch which requires authentication)
For other users getting this problem:
You may get this error if you are running a newer Elasticsearch (5.5 or later) with a Spring Boot version below 2.
The recommendation is to use the REST client, since the Java transport client will be deprecated.
Another workaround would be to upgrade to Spring Boot 2, since that should be compatible.
See https://discuss.elastic.co/t/spring-data-elasticsearch-cant-connect-with-elasticsearch-5-5-0/94235 for more information.
Since most of the answers seem to be outdated, here is the setup that worked for me:
Elasticsearch-Version: 7.2.0 (OSS) running on Docker
Java-Version: JDK-11
elasticsearch.yml:
cluster.name: production
node.name: node1
network.host: 0.0.0.0
transport.tcp.port: 9300
cluster.initial_master_nodes: node1
Setup:
client = new PreBuiltTransportClient(Settings.builder().put("cluster.name", "production").build());
client.addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
Since PreBuiltTransportClient is deprecated, you should use RestHighLevelClient for Elasticsearch version 7.3.0: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/7.3.0/index.html
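A rough sketch of that RestHighLevelClient setup, under the same localhost assumption as above (note it talks to the HTTP port 9200, not the 9300 transport port, and the calls may throw IOException):
RestHighLevelClient restClient = new RestHighLevelClient(
        RestClient.builder(new HttpHost("localhost", 9200, "http")));
System.out.println(restClient.info(RequestOptions.DEFAULT).getVersion().getNumber()); // quick connectivity check
restClient.close();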
If you are using the Java transport client:
1. Check that 9300 is accessible/open.
2. Check the node and cluster name; they should be correct. You can check the node and cluster name by typing ip:port in your browser.
3. Check that the version of your jar matches the installed ES version.
This one did work for me in ES 1.7.5:
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

import java.io.IOException;
import java.util.Date;

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;

public static void main(String[] args) throws IOException {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("client.transport.sniff", true)
            .put("cluster.name", "elasticcluster").build();
    Client client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("[ipaddress]", 9300));
    XContentBuilder builder = null;
    try {
        builder = jsonBuilder().startObject().field("user", "testdata").field("postdata", new Date()).field("message", "testmessage")
                .endObject();
    } catch (IOException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
    System.out.println(builder.string());
    IndexResponse response = client.prepareIndex("twitter", "tweet", "1").setSource(builder).execute().actionGet();
    client.close();
}
Check your elasticsearch.yml: the "transport.host" property must be "0.0.0.0", not "127.0.0.1" or "localhost".
This means we are not able to instantiate the ES TransportClient, and it throws this exception. There are a couple of possibilities that can cause this issue:
The cluster name is incorrect. Open the ES_HOME_DIR/config/elasticsearch.yml file and check the cluster name value, OR use this command: curl -XGET 'http://localhost:9200/_nodes'
Port 9200 is the HTTP port, but the Elasticsearch transport service uses TCP port 9300 [by default]. So verify that port 9300 is not blocked.
Authentication issue: set the header in the TransportClient's context for authentication:
client.threadPool().getThreadContext()
.putHeader("Authorization", "Basic " + encodeBase64String(basicHeader.getBytes()));
If you are still facing this issue then add the following property:
put("client.transport.ignore_cluster_name", true)
The below basic code is working fine for me:
Settings settings = Settings.builder()
        .put("cluster.name", "my-application").put("client.transport.sniff", true).put("client.transport.ignore_cluster_name", false).build();
TransportClient client = new PreBuiltTransportClient(settings).addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
I know I'm a bit late. In case the above answers didn't work, I recommend checking the logs in the Elasticsearch terminal. I found out that the error message said I needed to update my version from 5.0.0-rc1 to 6.8.0. I resolved it by updating my Maven dependencies to:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.8.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>6.8.0</version>
</dependency>
This changed my code as well, since InetSocketTransportAddress is deprecated; I had to change it to TransportAddress:
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
.addTransportAddress(new TransportAddress(InetAddress.getByName(host), port));
And you also need to add this to your config/elasticsearch.yml file (use your host address)
transport.host: localhost
Check the ES server logs
sudo tail -f /var/log/elasticsearch/elasticsearch.log
I was using an outdated client
Received message from unsupported version: [5.0.0] minimal compatible version is: [6.8.0]
I had the same problem. My problem was that the version of the dependency conflicted with the Elasticsearch version. Check the version at ip:9200 and use the dependency version that matches it.
This is a common issue for ES version 5.6.10+. The thing is that Elasticsearch had the TransportClient (built via PreBuiltTransportClient), which has been deprecated since that version. The alternative (which is the solution here in case you are using ES 7.14 or earlier) is to use the Java High Level REST Client. See the documentation (they also have a great migration guide for moving an application from the TransportClient to the REST client).
From version 7.15, they deprecated the Java High Level REST Client in favor of the Java API Client, and there are migration guides for that too. They did this mainly to reduce the client's dependencies. See the docs.
If you are using the same client version as the cluster, and the right client library, the issue should be resolved.
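For 7.15+ and 8.x, a minimal bootstrap of the newer Java API Client looks roughly like this (host and port are assumptions; see the official docs referenced above for the full setup):
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build(); // low-level REST client; host/port are assumptions
ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
ElasticsearchClient esClient = new ElasticsearchClient(transport);
// esClient.search(...), esClient.index(...), etc. replace the old High Level REST calls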
You should check the logs.
If you see something like the below:
"stacktrace": ["java.lang.IllegalStateException: Received message from unsupported version: [6.4.3] minimal compatible version is: [6.8.0]"
you can check this link:
https://discuss.elastic.co/t/java-client-or-spring-boot-for-elasticsearch-7-3-1/199778
You have to explicitly declare the ES version.
I have installed Logstash 1.1.13 with Elasticsearch 0.20.6. Below is the config for logstash.conf:
input {
  tcp {
    port => 524
    type => rsyslog
  }
  udp {
    port => 524
    type => rsyslog
  }
}
filter {
  grok {
    type => "rsyslog"
    pattern => [ "<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{PROG:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" ]
    add_field => [ "received_at", "%{@timestamp}" ]
    add_field => [ "received_from", "%{@source_host}" ]
  }
  syslog_pri {
    type => "rsyslog"
  }
  date {
    type => "rsyslog"
    syslog_timestamp => [ "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
  }
  mutate {
    type => "rsyslog"
    exclude_tags => "_grokparsefailure"
    replace => [ "@source_host", "%{syslog_hostname}" ]
    replace => [ "@message", "%{syslog_message}" ]
  }
  mutate {
    type => "rsyslog"
    remove => [ "syslog_hostname", "syslog_message", "syslog_timestamp" ]
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
    port => 9300
    node_name => "sysloG33r-1"
    bind_host => "localhost"
  }
}
and
elasticsearch.yml
cluster:
  name: syslogcluster
node:
  name: "sysloG33r-1"
path:
  data: /var/lib/elasticsearch
path:
  logs: /var/log/elasticsearch
network:
  host: "0.0.0.0"
and started Logstash with the command:
[root@clane elasticsearch]# java -jar /usr/local/bin/logstash/bin/logstash.jar agent -f /etc/logstash/logstash.conf
Using experimental plugin 'syslog_pri'. This plugin is untested and may change in the future. For more information about plugin statuses, see http://logstash.net/docs/1.1.13/plugin-status {:level=>:warn}
date: You used a deprecated setting 'syslog_timestamp => ["MMM d HH:mm:ss", "MMM dd HH:mm:ss"]'. You should use 'match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]' {:level=>:warn}
PORT SETTINGS 127.0.0.1:9300
log4j, [2013-06-21T14:40:08.013] WARN: org.elasticsearch.discovery: [sysloG33r-1] waited for 30s and no initial state was set by the discovery
Failed to index an event, will retry {:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [1m], :event=>{"@source"=>"tcp://10.66.59.35:34662/", "@tags"=>[], "@fields"=>{"syslog_pri"=>["78"], "syslog_program"=>["crond"], "syslog_pid"=>["6511"], "received_at"=>["2013-06-21T13:40:01.845Z"], "received_from"=>["10.66.59.35"], "syslog_severity_code"=>6, "syslog_facility_code"=>9, "syslog_facility"=>"clock", "syslog_severity"=>"informational"}, "@timestamp"=>"2013-06-21T12:40:01.000Z", "@source_host"=>"kent", "@source_path"=>"/", "@message"=>"(root) CMD (/opt/bin/firewall-state.sh)", "@type"=>"rsyslog"}, :level=>:warn}
and elasticsearch
/usr/local/bin/elasticsearch start
I can see all the correct Java ports for Elasticsearch (9200, 9300) and Logstash (524):
tcp 0 0 :::524 :::* LISTEN 12557/java
tcp 0 0 :::9200 :::* LISTEN 10782/java
tcp 0 0 :::9300 :::* LISTEN 10782/java
tcp 0 0 ::ffff:127.0.0.1:9301 :::* LISTEN 12557/java
udp 0 0 :::524 :::* 12557/java
udp 0 0 :::54328 :::* 10782/java
However, I see this error on Logstash. Any ideas?
Failed to index an event, will retry {:exception=>org.elasticsearch.discovery.MasterNotDiscoveredException: waited for [1m], :event=>{"@source"=>"tcp://10.66.59.35:33598/", "@tags"=>[], "@fields"=>{"syslog_pri"=>["78"], "syslog_program"=>["crond"], "syslog_pid"=>["12983"], "received_at"=>["2013-06-21T12:07:01.541Z"], "received_from"=>["10.66.59.35"], "syslog_severity_code"=>6, "syslog_facility_code"=>9, "syslog_facility"=>"clock", "syslog_severity"=>"informational"}, "@timestamp"=>"2013-06-21T11:07:01.000Z", "@source_host"=>"kent", "@source_path"=>"/", "@message"=>"(root) CMD (/opt/bin/firewall-state.sh)", "@type"=>"rsyslog"}, :level=>:warn}
I'm going to assume you've checked the obvious things, like "is ElasticSearch running?" and "can I open a TCP connection to port 9300 on localhost?"
Even though you're using a host parameter in your elasticsearch output, what's probably happening is that the ElasticSearch client in Logstash is trying to discover cluster members by multicast (which is how a new install is typically configured by default), and is failing. This is common on EC2, as well as many other environments where firewall configurations may interfere with multicast discovery. If this is the only member in your cluster, setting the following in your elasticsearch.yml should do the trick:
discovery:
  zen:
    ping:
      multicast:
        enabled: false
      unicast:
        hosts: <your_ip>[9300-9400]
On AWS, there's also an EC2 discovery plugin that will clear this right up for you.
This question really belongs on Server Fault rather than Stack Overflow, by the way.
I had a similar issue, and it came from my IP configuration. In a nutshell, check that you have only one IP address on the Logstash host. If not, it can choose the wrong one.
Posted the same answer here: Logstash with Elasticsearch
I came across the same kind of issue and fixed it by adding the cluster option to the elasticsearch output in the Logstash config (e.g. cluster => "syslogcluster", matching the cluster name in your elasticsearch.yml). Since you have modified the cluster name in elasticsearch.yml, the Logstash client will not be able to find the cluster using the default value.
Try doing this as well.