Problems accessing S3 via AWS Java SDK - java

I'm trying to get an S3 object's size via the Java AWS SDK (v2) and send it back via an HTTP response (this all runs inside an HTTP server built on com.sun.net.httpserver.HttpServer). But it doesn't work, and I get the debug messages below.
What's going wrong here? Am I missing anything?
AwsBasicCredentials awsCreds = AwsBasicCredentials.create(
        AdapterMain.ACCESS_KEY,
        AdapterMain.SECRET_KEY);
s3Client = S3Client.builder()
        .region(region)
        .endpointOverride(URI.create(AdapterMain.S3server))
        .credentialsProvider(StaticCredentialsProvider.create(awsCreds))
        .build();
//TODO
HeadObjectRequest getObjectRequest = HeadObjectRequest.builder()
        .bucket(bucketName)
        .key("FILES/" + getMD5(id) + "/FILES/" + id + "/" + id + ".txt")
        .build();
HeadObjectResponse objectHead = s3Client.headObject(getObjectRequest);
long size = objectHead.contentLength();
System.out.println("==================================" + size);
response = size + "";
he.sendResponseHeaders(200, response.length());
And here are the logs:
18:44:14.898 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.core.interceptor.ExecutionInterceptorChain - Creating an interceptor chain that will apply interceptors in the following order: [software.amazon.awssdk.core.internal.interceptor.HttpChecksumRequiredInterceptor#62ec69e1, software.amazon.awssdk.awscore.interceptor.HelpfulUnknownHostExceptionInterceptor#1d17dde7, software.amazon.awssdk.services.s3.internal.handlers.EnableChunkedEncodingInterceptor#339b31af, software.amazon.awssdk.services.s3.internal.handlers.DisableDoubleUrlEncodingInterceptor#3d96c2f6, software.amazon.awssdk.services.s3.internal.handlers.EnableTrailingChecksumInterceptor#3cb417c4, software.amazon.awssdk.services.s3.internal.handlers.CreateMultipartUploadRequestInterceptor#1f6f0d50, software.amazon.awssdk.services.s3.internal.handlers.GetObjectInterceptor#7513515b, software.amazon.awssdk.services.s3.internal.handlers.AsyncChecksumValidationInterceptor#7e99ac7d, software.amazon.awssdk.services.s3.internal.handlers.EndpointAddressInterceptor#620f9e5d, software.amazon.awssdk.services.s3.internal.handlers.ExceptionTranslationInterceptor#56d0ac1, software.amazon.awssdk.services.s3.internal.handlers.GetBucketPolicyInterceptor#6a1b0abe, software.amazon.awssdk.services.s3.internal.handlers.PutObjectInterceptor#5952b9c4, software.amazon.awssdk.services.s3.internal.handlers.SyncChecksumValidationInterceptor#473129c5, software.amazon.awssdk.services.s3.internal.handlers.DecodeUrlEncodedResponseInterceptor#2ae718e0, software.amazon.awssdk.services.s3.internal.handlers.CreateBucketInterceptor#181bb9f8]
18:44:14.939 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.core.interceptor.ExecutionInterceptorChain - Interceptor 'software.amazon.awssdk.services.s3.internal.handlers.EndpointAddressInterceptor#620f9e5d' modified the message with its modifyHttpRequest method.
18:44:14.967 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.request - Sending Request: DefaultSdkHttpFullRequest(httpMethod=HEAD, protocol=https, host=file-store.s3-server.dcstore.company.net, port=443, encodedPath=/FILES/f13e/FILES/id_1234/id_1234.txt, headers=[amz-sdk-invocation-id, User-Agent], queryParameters=[])
18:44:14.978 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.auth.signer.Aws4Signer - AWS4 String to sign: AWS4-HMAC-SHA256
20210305T014414Z
20210305/us-east-1/s3/aws4_request
9bfed5fd14903f65ac34647985e2c8a4bbe0fbf311982cfbeb2e44b2b58a2390
18:44:14.991 [HTTP-Dispatcher] WARN software.amazon.awssdk.http.apache.internal.utils.ApacheUtils - NoSuchMethodException was thrown when disabling normalizeUri. This indicates you are using an old version (< 4.5.8) of Apache http client. It is recommended to use http client version >= 4.5.9 to avoid the breaking change introduced in apache client 4.5.7 and the latency in exception handling. See https://github.com/aws/aws-sdk-java/issues/1919 for more information
18:44:15.098 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.http.apache.internal.conn.SdkTlsSocketFactory - socket.getSupportedProtocols(): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1, SSLv3, SSLv2Hello], socket.getEnabledProtocols(): [TLSv1.3, TLSv1.2, TLSv1.1, TLSv1]
18:44:15.099 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.http.apache.internal.conn.SdkTlsSocketFactory - TLS protocol enabled for SSL handshake: [TLSv1.2, TLSv1.1, TLSv1, TLSv1.3]
18:44:15.506 [HTTP-Dispatcher] DEBUG software.amazon.awssdk.http.apache.internal.net.SdkSslSocket - created: file-store.s3-server.dcstore.company.net/10.111.111.20:443

The warning message there is a bit misleading; in this particular case it should technically be an error, since it points at a breaking change in the httpclient library that can cause unexpected program behavior. This dependency comes in transitively from the AWS Java SDK. To fix it, just follow the recommendation in the warning message and explicitly pin the required httpclient version in your project's pom file:
<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.9</version>
</dependency>
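To confirm which httpclient version actually ends up on the runtime classpath after the override, one option is to read the jar's bundled version metadata via org.apache.http.util.VersionInfo (a utility that ships with httpcore); the wrapper class below is just an illustrative sketch:

import org.apache.http.util.VersionInfo;

public class HttpClientVersionCheck {
    public static void main(String[] args) {
        // Reads the version metadata bundled inside the httpclient jar.
        VersionInfo vi = VersionInfo.loadVersionInfo(
                "org.apache.http.client", HttpClientVersionCheck.class.getClassLoader());
        System.out.println(vi == null
                ? "httpclient not found on the classpath"
                : "httpclient release: " + vi.getRelease());
    }
}

Alternatively, mvn dependency:tree shows which version Maven resolved.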

Related

wildfly/jboss (WildFly Core 17.0.3.Final) handshake failure with a GM protocol named GMTLSv1

I am facing a problem: I want to configure SSL with the GMTLS protocol. I have already configured SSL successfully with TLSv1.2.
The Wireshark capture shows both: TLSv1.2 and GMTLSv1.
To make WildFly/JBoss establish a GMTLS SSL connection, I have done the following.
Added some properties in standalone.xml:
<tls>
    <key-stores>
        <key-store name="customKS">
            <credential-reference clear-text="password"/>
            <implementation type="PKCS12"/>
            <file path="sm2.localhost.both.pfx" relative-to="jboss.server.config.dir"/>
        </key-store>
    </key-stores>
    <key-managers>
        <key-manager name="customKM" key-store="customKS" provider-name="GMJCE" algorithm="SunX509">
            <credential-reference clear-text="password"/>
        </key-manager>
    </key-managers>
    <server-ssl-contexts>
        <server-ssl-context name="customSSC" key-manager="customKM" provider-name="GMJSSE" protocols="GMSSLv1.1"/>
    </server-ssl-contexts>
</tls>
...
<https-listener name="https" socket-binding="https" ssl-context="customSSC" enable-http2="true"/>
Made the WildFly source code support the GMSSLv1.1 protocol:
In class SSLDefinitions, add the string "GMSSLv1.1" to ALLOWED_PROTOCOLS (line 231):
private static final String[] ALLOWED_PROTOCOLS = { "SSLv2", "SSLv2Hello", "SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3" , "GMSSLv1.1" };
In the enum class Protocol, add a constant (line 15):
SSLv2("SSLV2"),
SSLv3("SSLV3"),
TLSv1("TLSV1"),
TLSv1_1("TLSV1.1"),
TLSv1_2("TLSV1.2"),
TLSv1_3("TLSV1.3"),
GMSSLv1_1("GMSSLV1.1"),
SSLv2Hello("SSLV2HELLO");
When I had finished the above, the server started normally. The HTTP URI could be visited successfully, but the HTTPS URI could not be reached. I used Wireshark to capture packets and it shows a handshake failure. I don't know what's going wrong!
I have solved this problem.
The core problem is the handshake failure.
To build an SSL channel, we need a keystore and a GMSSL type of SSLContext. The handshake then fails, and the problem is in the cipher suite: in WildFly Core 17.0.3.Final the default cipher suites are for TLSv1.3, but what I need is GMTLS, so I have to add my own cipher suite.
Add the cipher suite in TLS13MechanismDatabase.properties:
ECC_SM4_CBC_SM3 = ECC_SM4_CBC_SM3,ANY,ANY,AES128CCM8,AEAD,TLSv1.3,false,HIGH,false,128,128,13,05
Edit standalone.xml: add cipher-suite-names
<server-ssl-context name="customSSC" key-manager="customKM" provider-name="GMJSSE" protocols="GMSSLv1.1" cipher-suite-names="ECC_SM4_CBC_SM3"/>
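Before wiring the suite into WildFly, it can help to verify that the GM cipher suite is visible to the JVM at all. The generic JDK sketch below lists what the active JSSE provider offers; it assumes the GM provider (e.g. GMJSSE) has been registered with the JVM and is not WildFly-specific:

import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;

public class ListTlsCapabilities {
    public static void main(String[] args) throws Exception {
        // With a GM provider registered, its protocols and suites should appear here.
        SSLContext ctx = SSLContext.getDefault();
        SSLParameters params = ctx.getSupportedSSLParameters();
        System.out.println("Protocols:     " + Arrays.toString(params.getProtocols()));
        System.out.println("Cipher suites: " + Arrays.toString(params.getCipherSuites()));
    }
}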
Run the server.
Wireshark output: (capture screenshot not included)

Consumer connect errors from AWS MSK via IAM - SSL problem or something else?

"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Initiating connection to node node.kafka.us-west-2.amazonaws.com:9098 (id: -2 rack: null) using address node.kafka.us-west-2.amazonaws.com/10.x.x.x"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Set SASL client state to SEND_APIVERSIONS_REQUEST"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Creating SaslClient: client=null;service=kafka;serviceHostname=node.kafka.us-west-2.amazonaws.com;mechs=[AWS_MSK_IAM]"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"Setting SASL/AWS_MSK_IAM client state to SEND_CLIENT_FIRST_MESSAGE"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Created socket with SO_RCVBUF = 65562, SO_SNDBUF = 131124, SO_TIMEOUT = 0 to node -2"}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Completed connection to node -2. Fetching API versions."}
"thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Connection with node.kafka.us-west-2.amazonaws.com/10.x.x.x disconnected","stack_trace":"java.io.IOException: Connection reset by peer\n\tat sun.nio.ch.FileDispatcherImpl.read0(FileDispatcherImpl.java)\n\tat sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)\n\tat sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:276)\n\tat sun.nio.ch.IOUtil.read(IOUtil.java:245)\n\tat sun.nio.ch.IOUtil.read(IOUtil.java:223)\n\tat sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:356)\n\tat org.apache.kafka.common.network.SslTransportLayer.readFromSocketChannel(SslTransportLayer.java:228)\n\tat org.apache.kafka.common.network.SslTransportLayer.handshake(SslTransportLayer.java:291)\n\tat org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:178)\n\tat org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:543)\n\tat org.apache.kafka.common.network.Selector.poll(Selector.java:481)\n\tat org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)\n\tat org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1584)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1559)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1360)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1274)\n\t... 4 frames truncated\n"}
"level":"DEBUG","logger_name":"o.a.k.common.network.SslTransportLayer","thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[SslTransportLayer channelId=-2 key=channel=java.nio.channels.SocketChannel[connection-pending remote=node.kafka.us-west-2.amazonaws.com/10.x.x.x:9098], selector=sun.nio.ch.KQueueSelectorImpl#690f136f, interestOps=8, readyOps=0] Failed to send SSL Close message","stack_trace":"java.io.IOException: Unexpected status returned by SSLEngine.wrap, expected CLOSED, received OK. Will not send close message to peer.\n\tat org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:194)\n\tat org.apache.kafka.common.utils.Utils.closeAll(Utils.java:974)\n\tat org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:155)\n\tat org.apache.kafka.common.network.Selector.doClose(Selector.java:955)\n\tat org.apache.kafka.common.network.Selector.close(Selector.java:939)\n\tat org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:625)\n\tat org.apache.kafka.common.network.Selector.poll(Selector.java:481)\n\tat org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:560)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)\n\tat org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:246)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.coordinatorUnknownAndUnready(ConsumerCoordinator.java:459)\n\tat org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:487)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1262)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1231)\n\tat org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1211)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollConsumer(KafkaMessageListenerContainer.java:1584)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doPoll(KafkaMessageListenerContainer.java:1559)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1360)\n\tat org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1274)\n\tat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.util.concurrent.FutureTask.run$$$capture(FutureTask.java:264)\n\tat java.util.concurrent.FutureTask.run(FutureTask.java)\n\tat java.lang.Thread.run(Thread.java:829)\n"}
{"#timestamp":"2022-09-29 08:05:12.245-0700","level":"INFO","logger_name":"org.apache.kafka.clients.NetworkClient","thread_name":"org.springframework.kafka.KafkaListenerEndpointContainer#0-0-C-1","message":"[Consumer clientId=consumer-groupId-1-1, groupId=groupId-1] Node -2 disconnected."}
I have attempted to follow all the directions from here and here.
I've tested my IAM policy using aws kafka <various> commands.
I've checked that port 9098 is open.
My policy has everything permitted; if I can get this working I'll deal with limiting permissions later.
I'm just trying to get a consumer to start at this point.
Any suggestions on where to look for the issue?
Edit:
I added some SSL debug to the call. I see the CLIENTHELLO being sent but nothing coming back from the server, just the auth failure.
Using openssl s_client -connect <host> -tls1_2 I was able to get a "Verify return code: 0 (ok)" from the server.
More:
I think something is blocking the SSL request:
CONNECTED(00000005)
write:errno=54
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
I can see the open port via nmap, so I'm not sure what's happening there either.
Just to close this off: I think MSK doesn't allow IAM auth from outside a VPC. I was able to get my test working from EKS, but not otherwise.
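For reference, the usual consumer settings for MSK IAM auth (as documented for the aws-msk-iam-auth library) look like the sketch below; the bootstrap address is a placeholder and the library must be on the classpath:

import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

// Minimal sketch of the standard MSK IAM client settings; assumes the
// software.amazon.msk:aws-msk-iam-auth library is on the classpath.
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "node.kafka.us-west-2.amazonaws.com:9098"); // placeholder
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "AWS_MSK_IAM");
props.put("sasl.jaas.config", "software.amazon.msk.auth.iam.IAMLoginModule required;");
props.put("sasl.client.callback.handler.class", "software.amazon.msk.auth.iam.IAMClientCallbackHandler");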

MQ7 with Java 7 and SSL is not working; it was working 6 months ago

We have one QM and one CHANNEL, and many QUEUES created for clients. Around 5 clients are connected to this QM for their transactions, each connected to its respective QUEUE. A jks file was created on this QM for the SSL connection. Each of the 5 clients connects from its Java client using the jks file plus SSL_RSA_WITH_RC4_128_SHA. The QM is configured with SSLCIPH(RC4_SHA_US).
Now, all of a sudden and without any Java client change, one client is no longer able to connect to the configured QM. All the others can connect to the same QM without any issue.
AMQERR01.LOG shows no specific exception or error.
The application logs show a common MQ exception:
Error as com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2397'
2397 - is a cipher spec/suite mismatch a possibility?
We enabled tracing (strmqtrc -m TEST.QM -t detail -t all) and looked at the trace logs under C:\Program Files (x86)\IBM\Websphere MQ\trace, but could not get any details on why the SSL connection is not happening.
We did one more exercise: we created a new QM for the problem client and tested without SSL, and it works. When we enabled SSL on the new QM and the Java client, the same 2397 started being logged.
Could someone guide me toward better logging and tracing in MQ that can show why 2397 is thrown?
Could someone guide me toward better logging and tracing in Java using -D flags [-Djavax.net.debug=all] that can show why 2397 is thrown?
MQ version: 7
MQ server on: Windows
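Regarding the -Djavax.net.debug question, a minimal sketch for turning on JSSE handshake tracing is below. The useIBMCipherMappings switch is a documented MQ Java property that controls how CipherSpecs map to JSSE CipherSuites on non-IBM JREs and is a frequent factor in 2397 errors; verify the right value for your MQ/JRE combination:

// Enable JSSE handshake tracing before any MQ connection is created
// (equivalently, pass -Djavax.net.debug=ssl:handshake on the java command line):
System.setProperty("javax.net.debug", "ssl:handshake");

// On Oracle/OpenJDK JREs the MQ classes map CipherSpecs to CipherSuites
// differently than on IBM JREs; this documented switch often matters for 2397:
System.setProperty("com.ibm.mq.cfg.useIBMCipherMappings", "false");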
From the trace logs:
returning TEST.QM
Freeing cbmindex:0 pointer:24DDB540 length:2080
-----} TreeNode.getMQQmgrExtObject (rc=OK)
cbmindex:10
-------------} xcsFreeMemFn (rc=OK)
------------} amqjxcoa.wmqGetAttrs (rc=OK)
-----{ UiQueueManager.testQmgrAttribute
-------------{ Message.getMessage
testing object 'TEST.QM'
An internal method detected an unexpected system return code. The method {0} returned {1}. (AMQ4580)
checking attribute 'QmgrCmdLevelGreaterThan'
-------------} Message.getMessage (rc=OK)
for value '510'
-----------}! NativeCalls.getAttrs (rc=Unknown(C35E))
-----} UiQueueManager.testQmgrAttribute (rc=OK)
Message = An internal method detected an unexpected system return code. The method wmq_get_attrs returned "retval.rc2 = 268460388". (AMQ4580), msgID = AMQ4580, rc = 50014, reason = 268460388, severity = 30
result = true
---} TreeNode.testAttribute (rc=OK)
---{ TreeNode.testAttribute
-----{ QueueManagerTreeNode.toString
-----} QueueManagerTreeNode.toString (rc=OK)
testing object 'TEST.QM'
checking attribute 'OamTreeNode'
-----------{ NativeCalls.getAttrs
------------{ amqjxcoa.wmqGetAttrs
qmgr:2A7B32C8, stanza:2A7B32C4, version:1
for value 'true'
QMgrName('TEST.QM')
-----{ TreeNode.getMQQmgrExtObject
StanzaName('QMErrorLog')
testing object 'TEST.QM'
Full QM.INI filename: SOFTWARE\IBM\MQSeries\CurrentVersion\Configuration\QueueManager\TEST!QM, Multi-Instance: FALSE
--------------} xcsGetIniFilename (rc=OK)
--------------{ xcsGetIniAttrs
---------------{ xcsBrowseIniCallback
FileType = (1)
----------------{ xcsBrowseRegistryCallback
xcsBrowseRegistryCallback
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
checking attribute 'PluginEnabled'
component:24 function:15 length:2080 options:0 cbmindex:0 *pointer:24DDB540
------------------} xcsGetMemFn (rc=OK)
for value 'com.ibm.mq.explorer.oam'
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
-----------------{ xusAddStanzaLineList
------------------{ xcsGetMemFn
-----{ UiPlugin.isPluginEnabled
component:24 function:15 length:2080 options:0 cbmindex:1 *pointer:24DDDFE8
------------------} xcsGetMemFn (rc=OK)
RetCode (OK)
-----------------} xusAddStanzaLineList (rc=OK)
testing plugin_id: com.ibm.mq.explorer.oam
-----------------{ xurGetSpecificRegStanza
-------{ PluginRegistrationManager.isPluginEnabled
Couldn't open key (QMErrorLog) result 2: The system cannot find the file specified.
MQ version: 7.0.1.9
JDK: jdk1.8.0_181-i586
com.ibm.mq*.jar version:
Specification-Version: 6.0.2.1
Implementation-Version: 6.0.2.1-j600-201-070305

Debug eureka-client-side HTTP requests

I am trying to register my monolithic application with a Eureka server (a first migration step into the microservices world). The client and server versions I use are 1.5.3. The registration request fails with a bad-request error.
My Java code that creates the Eureka client is:
private EurekaClient createEurekaClient() {
    EurekaInstanceConfig instanceConfig = new MyDataCenterInstanceConfig(MY_NAMESPACE);
    InstanceInfo instanceInfo = new EurekaConfigBasedInstanceInfoProvider(instanceConfig).get();
    ApplicationInfoManager applicationInfoManager = new ApplicationInfoManager(instanceConfig, instanceInfo);
    return new DiscoveryClient(applicationInfoManager, new DefaultEurekaClientConfig());
}
eureka-client.properties:
my-namespace.vipAddress=eureka
my-namespace.instance.preferIpAddress=true
eureka.region=default
my-namespace.name=MY-APP
my-namespace.port=8080
my-namespace.shouldUseDns=false
eureka.serviceUrl.default=http://localhost:9999/eureka/v2/
The logs output:
2016-09-20 10:35:54,325 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (AbstractJerseyEurekaHttpClient.java:60) - Jersey HTTP POST http://localhost:9999/eureka/v2//apps/MY-APP with instance 7010; statusCode=400
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ThreadSafeClientConnManager.java:282) - Released connection is not reusable.
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ConnPoolByRoute.java:429) - Releasing connection [{}->http://localhost:9999][null]
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (ConnPoolByRoute.java:676) - Notifying no-one, there are no waiting threads
2016-09-20 10:35:54,326 DEBUG [DiscoveryClient-HeartbeatExecutor-0] (RedirectingEurekaHttpClient.java:121) - Pinning to endpoint null
2016-09-20 10:35:54,326 WARN [DiscoveryClient-HeartbeatExecutor-0] (RetryableEurekaHttpClient.java:127) - Request execution failure with status code 400; retrying on another server if available
The server returns a 400 error code, which means bad request, so I am looking for a way to print the full registration request to the log file.
I found the root cause of this issue: the com.fasterxml.jackson.core jackson-databind used in my project was outdated (version 2.1.1), while the Eureka client needs at least version 2.5.4.
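To see which jackson-databind version is actually resolved at runtime (rather than what the pom declares), you can print the library's own version constant, as in this one-line sketch:

// Should print at least 2.5.4 for the Eureka client to work.
System.out.println(com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION);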

Java ElasticSearch None of the configured nodes are available

Just downloaded and installed elasticsearch 1.3.2 in the past hour
Opened iptables to port 9200 and 9300:9400
Set my computer name and IP in /etc/hosts
Head module and Paramedic installed and running smoothly
curl on localhost works flawlessly
Copied all the jars from the download into Eclipse, so the client is the same version
--Java--
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.index.query.QueryBuilders;
public class Test {
    public static void main(String[] args) {
        Settings settings = ImmutableSettings.settingsBuilder().put("cluster.name", "elastictest").build();
        TransportClient transportClient = new TransportClient(settings);
        // just masking the IP with xxx for the SO question
        Client client = transportClient.addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300));
        try {
            // a bunch of URLs are indexed
            SearchResponse response = client.prepareSearch()
                    .setQuery(QueryBuilders.matchQuery("url", "twitter"))
                    .setSize(5)
                    .execute().actionGet();
            String output = response.toString();
            System.out.println(output);
        } catch (Exception e) {
            e.printStackTrace();
        }
        client.close();
    }
}
--Output--
log4j:WARN No appenders could be found for logger (org.elasticsearch.plugins).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: []
at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:298)
at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:214)
at org.elasticsearch.client.transport.support.InternalTransportClient.execute(InternalTransportClient.java:105)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:330)
at org.elasticsearch.client.transport.TransportClient.search(TransportClient.java:421)
at org.elasticsearch.action.search.SearchRequestBuilder.doExecute(SearchRequestBuilder.java:1097)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:91)
at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:65)
at Test.main(Test.java:20)
Update: Now I am REALLY confused. I just pressed Run in Eclipse 3 times: 2 times I received the error above, and 1 time the search worked!? This is a brand-new CentOS 6.5 VPS with a brand-new JDK installed; I then installed elasticsearch and have done nothing else to the box.
Update: after running ./bin/elasticsearch from the console:
[2014-09-18 08:56:13,694][INFO ][node ] [Acrobat] version[1.3.2], pid[2978], build[dee175d/2014-08-13T14:29:30Z]
[2014-09-18 08:56:13,695][INFO ][node ] [Acrobat] initializing ...
[2014-09-18 08:56:13,703][INFO ][plugins ] [Acrobat] loaded [], sites [head, paramedic]
[2014-09-18 08:56:15,941][WARN ][common.network ] failed to resolve local host, fallback to loopback
java.net.UnknownHostException: elasticsearchtest: elasticsearchtest: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1473)
at org.elasticsearch.common.network.NetworkUtils.<clinit>(NetworkUtils.java:54)
at org.elasticsearch.transport.netty.NettyTransport.<init>(NettyTransport.java:204)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:54)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.FactoryProxy.get(FactoryProxy.java:52)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42)
at org.elasticsearch.common.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66)
at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:85)
at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:98)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:837)
at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42)
at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:57)
at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:45)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:200)
at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:830)
at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:59)
at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:192)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:70)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:203)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.net.UnknownHostException: elasticsearchtest: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$1.lookupAllHostAddr(InetAddress.java:901)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1293)
at java.net.InetAddress.getLocalHost(InetAddress.java:1469)
... 62 more
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] initialized
[2014-09-18 08:56:16,937][INFO ][node ] [Acrobat] starting ...
[2014-09-18 08:56:17,110][INFO ][transport ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/143.79.236.31:9300]}
[2014-09-18 08:56:17,126][INFO ][discovery ] [Acrobat] elastictest/QvSNFajjQ9SFjU7WOdjaLw
[2014-09-18 08:56:20,145][INFO ][cluster.service ] [Acrobat] new_master [Acrobat][QvSNFajjQ9SFjU7WOdjaLw][localhost][inet[/143.79.236.31:9300]], reason: zen-disco-join (elected_as_master)
[2014-09-18 08:56:20,212][INFO ][http ] [Acrobat] bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address {inet[/143.79.236.31:9200]}
[2014-09-18 08:56:20,214][INFO ][node ] [Acrobat] started
--cluster config in elasticsearch.yml--
################################### Cluster ###################################
# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: elastictest
Possible problems:
Wrong port: if you use a Java or Scala client, the correct port is 9300, not 9200.
Wrong cluster name: make sure the cluster name you set in your code is the same as the cluster.name you set in $ES_HOME/config/elasticsearch.yml.
The sniff option: setting client.transport.sniff to true when the client can't connect to all nodes of the ES cluster will cause this problem too. The ES docs explain why.
Elasticsearch settings are in $ES_HOME/config/elasticsearch.yml. If the cluster.name setting there is commented out, ES will accept just about any cluster name, so setting cluster.name to "elastictest" in your code might be the problem. Try this:
Client client = new TransportClient()
        .addTransportAddress(new InetSocketTransportAddress("143.79.236.xxx", 9300));
You should check the node's transport port; you can do that using the head plugin.
These ports are not the same. For example, the web URL you can open is localhost:9200, but the node's transport port is 9300, so none of the configured nodes are available if you use 9200 as the port (see the sketch below for a way to confirm the transport port).
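One way to confirm the transport port without the head plugin is to ask the HTTP port (9200) for node metadata; the _nodes/transport response lists each node's transport_address. A minimal sketch, assuming ES runs on localhost:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;

public class FindTransportPort {
    public static void main(String[] args) throws Exception {
        // The response JSON contains "transport_address", i.e. the 9300-range
        // port the Java transport client must connect to.
        URL url = new URL("http://localhost:9200/_nodes/transport");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(url.openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}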
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{UfB9geJCR--spyD7ewmoXQ}{192.168.1.245}{192.168.1.245:9300}]]
In my case it was a difference in versions. If you check the logs on the Elasticsearch cluster, you will see:
Elasticsearch logs
[node1] exception caught on transport layer [NettyTcpChannel{localAddress=/192.168.1.245:9300, remoteAddress=/172.16.1.47:65130}], closing connection
java.lang.IllegalStateException: Received message from unsupported version: [5.0.0] minimal compatible version is: [5.6.0]
I was using elasticsearch client and transport version 5.1.1, and my Elasticsearch cluster was on version 6, so I changed my library version to 5.4.3.
I faced a similar issue, and here is the solution.
Example:
In elasticsearch.yml, add the properties below:
cluster.name: production
node.name: node1
network.bind_host: 10.0.1.22
network.host: 0.0.0.0
transport.tcp.port: 9300
Add the following in the Java Elastic API for a bulk push (just a code snippet).
For the IP address, add the public IP address of the Elasticsearch machine:
Client client;
BulkRequestBuilder requestBuilder;
try {
    client = TransportClient.builder()
            .settings(Settings.builder()
                    .put("cluster.name", "production")
                    .put("node.name", "node1"))
            .build()
            .addTransportAddress(new InetSocketTransportAddress(
                    // "" is a placeholder: put the Elasticsearch machine's public IP here
                    InetAddress.getByName(""), 9300));
    requestBuilder = client.prepareBulk();
} catch (Exception e) {
    // at minimum log the failure instead of silently swallowing it
    e.printStackTrace();
}
Open the firewall ports 9200 and 9300.
I spent days figuring out this issue. I know it's late, but this might be helpful:
I resolved this issue by changing to compatible/stable versions of:
Spring boot: 2.1.1
Spring Data Elastic: 2.1.4
Elastic: 6.4.0 (default)
Maven:
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.1.1.RELEASE</version>
</parent>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    <version>2.1.4.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
</dependency>
You don't need to specify the Elastic version; by default it is 6.4.0. But if you want a specific version, use the snippet below inside the properties tag, and use the compatible versions of Spring Boot and Spring Data (if required):
<properties>
    <elasticsearch.version>6.8.0</elasticsearch.version>
</properties>
Also, I used the Rest High Level client in my ElasticConfiguration:
@Value("${elasticsearch.host}")
public String host;

@Value("${elasticsearch.port}")
public int port;

@Bean(destroyMethod = "close")
public RestHighLevelClient restClient1() {
    // the credentials provider is unused here; wire it into the builder if you need auth
    final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
    RestClientBuilder builder = RestClient.builder(new HttpHost(host, port));
    RestHighLevelClient client = new RestHighLevelClient(builder);
    return client;
}
Important note:
Elastic uses port 9300 to communicate between nodes and port 9200 for HTTP clients. In the application properties:
elasticsearch.host=10.40.43.111
elasticsearch.port=9200
spring.data.elasticsearch.cluster-nodes=10.40.43.111:9300 (customized Elastic server)
spring.data.elasticsearch.cluster-name=any-cluster-name (customized cluster name)
From Postman, you can use: http://10.40.43.111:9200/[indexname]/_search
Happy coding :)
For completeness' sake, here's the snippet that creates the transport client, constructing the InetSocketTransportAddress with a properly resolved InetAddress:
Client esClient = TransportClient.builder()
        .settings(settings)
        .build()
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("143.79.236.xxx"), 9300));
If the above advice does not work for you, change the log level of your logging framework configuration (log4j, logback, ...) to INFO and re-check the output.
The logger may be hiding messages like:
INFO org.elasticsearch.client.transport.TransportClientNodesService - failed to get node info for...
Caused by: ElasticsearchSecurityException: missing authentication token for action...
(In the example above, the X-Pack plugin in Elasticsearch requires authentication.)
For other users getting this problem:
You may get this error if you are running a newer Elasticsearch (5.5 or later) while on a Spring Boot version below 2.
The recommendation is to use the REST client, since the Java client will be deprecated.
Another workaround is to upgrade to Spring Boot 2, since that should be compatible.
See https://discuss.elastic.co/t/spring-data-elasticsearch-cant-connect-with-elasticsearch-5-5-0/94235 for more information.
Since most of the answers here seem to be outdated, here is the setup that worked for me:
Elasticsearch-Version: 7.2.0 (OSS) running on Docker
Java-Version: JDK-11
elasticsearch.yml:
cluster.name: production
node.name: node1
network.host: 0.0.0.0
transport.tcp.port: 9300
cluster.initial_master_nodes: node1
Setup:
client = new PreBuiltTransportClient(Settings.builder().put("cluster.name", "production").build());
client.addTransportAddress(new TransportAddress(InetAddress.getByName("localhost"), 9300));
Since PreBuiltTransportClient is deprecated, you should use RestHighLevelClient for Elasticsearch 7.3.0: https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-high-level-client/7.3.0/index.html
If you are using the Java transport client:
1. Check that port 9300 is accessible/open.
2. Check the node and cluster names; they must be correct. You can check the node and cluster name by typing ip:port in your browser.
3. Check that the version of your jar matches the installed ES version.
This one did work for me in ES 1.7.5:
import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

import java.io.IOException;
import java.util.Date;

import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.common.xcontent.XContentBuilder;

public static void main(String[] args) throws IOException {
    Settings settings = ImmutableSettings.settingsBuilder()
            .put("client.transport.sniff", true)
            .put("cluster.name", "elasticcluster").build();
    Client client = new TransportClient(settings)
            .addTransportAddress(new InetSocketTransportAddress("[ipaddress]", 9300));
    XContentBuilder builder = null;
    try {
        builder = jsonBuilder().startObject()
                .field("user", "testdata")
                .field("postdata", new Date())
                .field("message", "testmessage")
                .endObject();
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println(builder.string());
    IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
            .setSource(builder).execute().actionGet();
    client.close();
}
Check your elasticsearch.yml: the "transport.host" property must be "0.0.0.0", not "127.0.0.1" or "localhost".
This means we were not able to instantiate the ES transportClient, and it threw this exception. There are a couple of possibilities that cause this issue:
The cluster name is incorrect. Open the ES_HOME_DIR/config/elasticsearch.yml file and check the cluster name value, or use this command: curl -XGET 'http://localhost:9200/_nodes'
Port 9200 is the HTTP port, but the Elasticsearch service uses TCP port 9300 [by default]. Verify that that port is not blocked.
Authentication issue: set the header in the transportClient's context for authentication:
client.threadPool().getThreadContext()
.putHeader("Authorization", "Basic " + encodeBase64String(basicHeader.getBytes()));
If you are still facing this issue, then add the following setting:
put("client.transport.ignore_cluster_name", true)
The basic code below is working fine for me:
Settings settings = Settings.builder()
        .put("cluster.name", "my-application")
        .put("client.transport.sniff", true)
        .put("client.transport.ignore_cluster_name", false).build();
TransportClient client = new PreBuiltTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("127.0.0.1"), 9300));
I know I'm a bit late; in case the above answers didn't work, I recommend checking the logs in the Elasticsearch terminal. I found that the error message said I needed to update my version from 5.0.0-rc1 to 6.8.0, and I resolved it by updating my Maven dependencies to:
<dependency>
    <groupId>org.elasticsearch</groupId>
    <artifactId>elasticsearch</artifactId>
    <version>6.8.0</version>
</dependency>
<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>transport</artifactId>
    <version>6.8.0</version>
</dependency>
This changed my code as well: since InetSocketTransportAddress is deprecated, I had to change it to TransportAddress.
TransportClient client = new PreBuiltTransportClient(Settings.EMPTY)
.addTransportAddress(new TransportAddress(InetAddress.getByName(host), port));
And you also need to add this to your config/elasticsearch.yml file (use your host address)
transport.host: localhost
Check the ES server logs
sudo tail -f /var/log/elasticsearch/elasticsearch.log
I was using an outdated client
Received message from unsupported version: [5.0.0] minimal compatible version is: [6.8.0]
I had the same problem: the version of my dependency conflicted with the Elasticsearch version. Check the version at ip:9200 and use the dependency version that matches it.
This is a common issue for ES version 5.6.10+. Elasticsearch had the TransportClient, built via the PreBuilt classes, which was deprecated in that version. The alternative (the solution here, in case you are using ES 7.14 or earlier) is to use the Java High Level REST client. See the documentation (they also have a great guide for migrating an application from the TransportClient to the REST client).
From version 7.15 they dropped the Java High Level REST client in favor of the Java API Client, and there are migration guides for that too. They did this mainly to reduce the client's dependencies. See the docs.
If you use the same version for the client as for the cluster, and the fitting client library, the issue should be resolved.
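For reference, bootstrapping the newer Java API Client looks roughly like the sketch below (class names from the co.elastic.clients packages; check them against your client version):

import co.elastic.clients.elasticsearch.ElasticsearchClient;
import co.elastic.clients.json.jackson.JacksonJsonpMapper;
import co.elastic.clients.transport.ElasticsearchTransport;
import co.elastic.clients.transport.rest_client.RestClientTransport;
import org.apache.http.HttpHost;
import org.elasticsearch.client.RestClient;

public class ApiClientBootstrap {
    public static void main(String[] args) throws Exception {
        // Note: the REST-based clients talk to the HTTP port (9200), not 9300.
        RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
        ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
        ElasticsearchClient esClient = new ElasticsearchClient(transport);
        System.out.println("Cluster reachable: " + esClient.ping().value());
        restClient.close();
    }
}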
You should check the logs.
If you see something like the below:
"stacktrace": ["java.lang.IllegalStateException: Received message from unsupported version: [6.4.3] minimal compatible version is: [6.8.0]"
you can check this link:
https://discuss.elastic.co/t/java-client-or-spring-boot-for-elasticsearch-7-3-1/199778
You have to explicitly declare the ES version.
