AmazonClientException: Unable To Load Credentials from any Provider in the Chain - java

My Mule application writes JSON records to a Kinesis stream using the KPL (Kinesis Producer Library). When run locally, it picks up AWS credentials from .aws/credentials and writes records to Kinesis successfully.
However, when I deploy the application to CloudHub, it throws AmazonClientException, presumably because none of the credential sources that the DefaultAWSCredentialsProviderChain class checks are available there. (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html)
This is how I attach credentials (locally they are picked up from .aws/credentials):
config.setCredentialsProvider(new DefaultAWSCredentialsProviderChain());
I couldn't figure out a way to provide credentials explicitly using a my-app.properties file.
I then tried creating a separate configuration class with getters/setters, declared the access key and secret key as private fields, and implemented a getter:
public AWSCredentialsProvider getCredentials() {
    if (accessKey == null || secretKey == null) {
        return new DefaultAWSCredentialsProviderChain();
    }
    return new StaticCredentialsProvider(new BasicAWSCredentials(getAccessKey(), getSecretKey()));
}
This was intended to be used instead of the DefaultAWSCredentialsProviderChain class, like this:
config.setCredentialsProvider(new AWSConfig().getCredentials());
Still throws the same error when deployed.
The following sample in the amazon-kinesis-producer repo suggests that it is possible to provide explicit credentials, but I need help figuring out how because I can't find proper documentation or an example:
https://github.com/awslabs/amazon-kinesis-producer/blob/master/java/amazon-kinesis-producer-sample/src/com/amazonaws/services/kinesis/producer/sample/SampleProducer.java

I faced the same issue and solved it as shown below; I hope it works for you as well.
#Value("${s3_accessKey}")
private String s3_accessKey;
#Value("${s3_secretKey}")
private String s3_secretKey;
//this above value I am taking from Application.properties file
BasicAWSCredentials creds = new BasicAWSCredentials(s3_accessKey,
s3_secretKey);
AmazonS3 s3Client = AmazonS3ClientBuilder.standard().
withCredentials(new AWSStaticCredentialsProvider(creds))
.withRegion(Regions.US_EAST_2)
.build();
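The same idea can be applied to the KPL configuration from the original question. The sketch below is an adaptation under assumptions: the accessKey/secretKey values are placeholders for whatever you load from my-app.properties, and the region is only an example.

// Placeholders: load these from my-app.properties (or CloudHub application properties) at startup.
String accessKey = "...";
String secretKey = "...";

BasicAWSCredentials kplCreds = new BasicAWSCredentials(accessKey, secretKey);
KinesisProducerConfiguration kplConfig = new KinesisProducerConfiguration()
    .setCredentialsProvider(new AWSStaticCredentialsProvider(kplCreds))
    .setRegion("us-east-1"); // example; use your stream's region
KinesisProducer producer = new KinesisProducer(kplConfig);

Alternatively, the DefaultAWSCredentialsProviderChain also checks the aws.accessKeyId and aws.secretKey JVM system properties, so setting those at startup from your properties file should let the original setCredentialsProvider(new DefaultAWSCredentialsProviderChain()) call work unchanged.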

Related

GOOGLE_APPLICATION_CREDENTIALS can't be found

I currently have to integrate Google Cloud Platform services into my app but am receiving the following exception:
W/System.err: java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
at com.google.auth.oauth2.DefaultCredentialsProvider.getDefaultCredentials(DefaultCredentialsProvider.java:134)
W/System.err: at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:119)
at com.google.auth.oauth2.GoogleCredentials.getApplicationDefault(GoogleCredentials.java:91)
at com.google.api.gax.core.GoogleCredentialsProvider.getCredentials(GoogleCredentialsProvider.java:67)
at com.google.api.gax.rpc.ClientContext.create(ClientContext.java:135)
at com.google.cloud.speech.v1.stub.GrpcSpeechStub.create(GrpcSpeechStub.java:94)
at com.google.cloud.speech.v1.stub.SpeechStubSettings.createStub(SpeechStubSettings.java:131)
at com.google.cloud.speech.v1.SpeechClient.<init>(SpeechClient.java:144)
at com.google.cloud.speech.v1.SpeechClient.create(SpeechClient.java:126)
at com.google.cloud.speech.v1.SpeechClient.create(SpeechClient.java:118)
at com.dno.app.ui.TranscriptFragment$1.onClick(TranscriptFragment.java:72)
The environment variable is set and the .json key file is at the location it points to (screenshots omitted).
The app crashes at authImplicit() in this code block (inside a Fragment):
transcriptBtn = getActivity().findViewById(R.id.transcript_button);
transcriptBtn.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        try (SpeechClient speechClient = SpeechClient.create()) {
            authImplicit(); // issue with Google Platform login authentication:
            // the environment variable is not read, so the .json file cannot be accessed.

            // The path to the audio file to transcribe
            String fileName = getFile().getName();

            // Reads the audio file into memory
            Path path = Paths.get(fileName);
            byte[] data = Files.readAllBytes(path);
            ByteString audioBytes = ByteString.copyFrom(data);

            // Builds the sync recognize request
            RecognitionConfig config =
                RecognitionConfig.newBuilder()
                    .setEncoding(RecognitionConfig.AudioEncoding.FLAC)
                    .setSampleRateHertz(16000)
                    .setLanguageCode("en-US")
                    .build();
            RecognitionAudio audio = RecognitionAudio.newBuilder().setContent(audioBytes).build();

            // Performs speech recognition on the audio file
            RecognizeResponse response = speechClient.recognize(config, audio);
            List<SpeechRecognitionResult> results = response.getResultsList();
            for (SpeechRecognitionResult result : results) {
                // There can be several alternative transcripts for a given chunk of speech.
                // Just use the first (most likely) one here.
                SpeechRecognitionAlternative alternative = result.getAlternativesList().get(0);
                System.out.printf("Transcription: %s%n", alternative.getTranscript());
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
});
Code for authImplicit():
private void authImplicit() {
    // If you don't specify credentials when constructing the client, the client library will
    // look for credentials via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
    Storage storage = StorageOptions.getDefaultInstance().getService();
    System.out.println("Buckets:");
    Page<Bucket> buckets = storage.list();
    for (Bucket bucket : buckets.iterateAll()) {
        System.out.println(bucket.toString());
    }
}
I have selected a service account with the Owner role, so I shouldn't be lacking any permissions.
EDIT (Still not working):
I tried using this example but it still doesn't work: The Application Default Credentials are not available
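For reference, on Android the GOOGLE_APPLICATION_CREDENTIALS environment variable is generally not usable, so one commonly suggested workaround is to pass credentials to the client explicitly via SpeechSettings (FixedCredentialsProvider is from com.google.api.gax.core). This is only a sketch under that assumption, not the poster's solution, and the raw-resource name is hypothetical:

// Sketch: load the service-account key bundled with the app instead of relying on
// GOOGLE_APPLICATION_CREDENTIALS. R.raw.credentials is a hypothetical res/raw/credentials.json.
InputStream keyStream = getResources().openRawResource(R.raw.credentials);
GoogleCredentials credentials = GoogleCredentials.fromStream(keyStream);
SpeechSettings settings = SpeechSettings.newBuilder()
    .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
    .build();
try (SpeechClient speechClient = SpeechClient.create(settings)) {
    // use speechClient as in the snippet above
}

Note that bundling a service-account key inside an APK is a security risk, which is one reason to prefer doing this work server-side, as described in Edit #2 below.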
EDIT #2 (Working on server-side):
As it turns out, Google does not currently support Android for this task. Because of this, I've moved the code to an ASP.NET back-end, and it is now running smoothly.
Thank you for the assistance below.
I understand you've read the documentation and implemented all the steps stated here: https://cloud.google.com/storage/docs/reference/libraries#windows
As @John Hanley mentioned, have you tried printing the environment variables?
It's definitely not an Owner-permissions issue, because the exception says:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
Now, to solve this: do you want to use only environment variables, or are other approaches fine?
If you are OK with other approaches, then take a look at this code:
private void authImplicit() {
    // please give the exact path to your credentials.json file
    Credentials credentials = GoogleCredentials
        .fromStream(new FileInputStream("C:\\Users\\dbgno\\Keys\\credentials.json"));
    Storage storage = StorageOptions.newBuilder().setCredentials(credentials)
        .setProjectId("Some Project").build().getService();
    System.out.println("Buckets:");
    Page<Bucket> buckets = storage.list();
    for (Bucket bucket : buckets.iterateAll()) {
        System.out.println(bucket.toString());
    }
}
Edit 1: working towards solution
Try this and see if you can read the JSON file that you are giving as input. The print statement should show the service account you are trying to authenticate as:
public static Credential getDriveService(String fileLocation, String servAccAdmin)
        throws IOException {
    HttpTransport httpTransport = new NetHttpTransport();
    JsonFactory jsonFactory = new JacksonFactory();
    GoogleCredential googleCredential =
        GoogleCredential.fromStream(new FileInputStream(fileLocation), httpTransport, jsonFactory)
            .createScoped(SCOPES);
    System.out.println("--------------- " + googleCredential.getServiceAccountId());
    Credential credential = new GoogleCredential.Builder()
        .setTransport(googleCredential.getTransport())
        .setJsonFactory(googleCredential.getJsonFactory())
        .setServiceAccountPrivateKeyId(googleCredential.getServiceAccountPrivateKeyId())
        .setServiceAccountId(googleCredential.getServiceAccountId())
        .setServiceAccountScopes(SCOPES)
        .setServiceAccountPrivateKey(googleCredential.getServiceAccountPrivateKey())
        .setServiceAccountUser(servAccAdmin)
        .build();
    return credential;
}
Edit 2:
Since we are seeing a credentials issue, I am trying the same approach I use for other Google API services such as Drive or Gmail, where the key file is passed manually, credentials are built from it, and those credentials are used for further service calls. Try adding the dependencies below; this is just to troubleshoot, by checking whether the Cloud API is able to build credentials the same way the Drive/Gmail clients do.
<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client</artifactId>
    <version>${project.http.version}</version>
</dependency>
<dependency>
    <groupId>com.google.http-client</groupId>
    <artifactId>google-http-client-jackson2</artifactId>
    <version>${project.http.version}</version>
</dependency>
<dependency>
    <groupId>com.google.oauth-client</groupId>
    <artifactId>google-oauth-client-jetty</artifactId>
    <version>${project.oauth.version}</version>
</dependency>

Apache Camel AWS S3: credential expiration and temporary credentials

Apache Camel interfaces well with AWS S3, but I have found a scenario it does not handle well. Going over all of the Camel examples I have seen online, I have never seen anyone use the recommended, industry-standard AWS temporary credentials in non-local environments. Static credentials that live for ~6 months are a security issue as well as a manual burden (they must be refreshed by hand) and realistically shouldn't be used anywhere except local environments.
Given a custom S3 client setup, Camel can take temporary credentials; however, a Camel route pointed at AWS S3 will hit a credential expiration at some point. Camel is not aware of this and will keep trying to poll the S3 bucket indefinitely without throwing any exceptions or timeout errors.
I have tried to add a timeout configuration to my endpoint like so:
aws-s3://" + incomingAWSBucket + "?" + "amazonS3Client=#amazonS3Client&timeout=4000
Can anyone explain how to interface Camel with AWS temporary credentials or throw an exception if AWS credentials expire (given the aforementioned setup)?
Thanks for the help!
UPDATE:
I pushed a feature to Apache Camel to handle the issue above:
https://github.com/apache/camel/blob/master/components/camel-aws-s3/src/main/docs/aws-s3-component.adoc#use-useiamcredentials-with-the-s3-component
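With that option in place, an S3 endpoint running on an EC2 instance no longer needs static keys at all. A rough sketch of the endpoint URI (the option name is taken from the linked component doc; the other parameters mirror the examples later in this answer):

"aws-s3://" + incomingAWSBucket + "?useIAMCredentials=true&deleteAfterRead=false"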
The answer to this question is dense enough for a tutorial if others want it. For now, I will copy and paste it to the correct forums and threads to get the word out:
Without complaining too much, I'd just like to say that for how powerful Camel is, its documentation and example base are really lacking for production scenarios in the AWS world... Sigh... That's a mouthful and probably a stretch for any open-source lib.
I figured out how to solve the credential problem by first referencing the official camel-aws-s3 documentation to see how to create an advanced S3 configuration (relying on the AWS SDK itself; there is a bare-bones example there that constructs the S3 client manually).
After I figured this out, I went to the AWS SDK documentation on IAM credentials to figure out how this could work on an EC2 instance, since I am able to build the client myself. In the aforementioned docs there are a few bare-bones examples as well. Upon testing with the examples listed, I found that the credential refresh (the sole purpose of this question) was not working. It could get credentials at first, but it was not refreshing them during my tests after they were manually expired.
Lastly, I figured out that you can specify a provider chain that handles refreshing the credentials on its own. The AWS documentation that explains this is here.
In the end, I still need static credentials for my local Camel setups that poll AWS S3 buckets; however, my remote environments that live on EC2s can access the buckets with temporary credentials that refresh themselves flawlessly. WOWSA! :)
To do this, I simply made a factory that uses a local Camel setup for my local development and a remote Camel setup that relies on the temporary IAM credentials. This removes the security concern and the work of manually refreshing credentials for all remote environments!
I will not explain how to create a factory or how my local & remote configurations are set up entirely, but I will include my code sample of the AmazonS3ClientBuilder that creates an S3 Client for remote setups.
AmazonS3ClientBuilder.standard()
    .withCredentials(new InstanceProfileCredentialsProvider(false))
    .withRegion(Regions.US_WEST_2)
    .build();
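(Here the false argument to InstanceProfileCredentialsProvider disables its asynchronous background refresh thread; the provider then refreshes the instance-profile credentials on demand when they are requested and found to be near expiry.)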
If there is a desire on how I got this to work, I can provide an example project that shows the entire process.
By request, here are my local and remote implementations of the s3 client:
Local:
public class LocalAWSS3ClientManagerImpl implements AWSS3ClientManager {

    private static Logger logger = LoggerFactory.getLogger(LocalAWSS3ClientManagerImpl.class);

    private PriorityCodeSourcesRoutesProperties priorityCodeSourcesRoutesProperties;
    private SimpleRegistry registry = new SimpleRegistry();
    private CamelContext camelContext;

    public LocalAWSS3ClientManagerImpl(PriorityCodeSourcesRoutesProperties priorityCodeSourcesRoutesProperties) {
        this.priorityCodeSourcesRoutesProperties = priorityCodeSourcesRoutesProperties;
        registry.put("amazonS3Client", getS3Client());
        camelContext = new DefaultCamelContext(registry);
        logger.info("Creating an AWS S3 manager for a local instance (you should not see this on AWS EC2s).");
    }

    private AmazonS3 getS3Client() {
        try {
            String awsBucketAccessKey = priorityCodeSourcesRoutesProperties.getAwsBucketAccessKey();
            String awsBucketSecretKey = priorityCodeSourcesRoutesProperties.getAwsBucketSecretKey();
            AWSCredentials awsCredentials = new BasicAWSCredentials(awsBucketAccessKey, awsBucketSecretKey);
            return AmazonS3ClientBuilder.standard().withCredentials(
                new AWSStaticCredentialsProvider(awsCredentials)).build();
        } catch (RuntimeException ex) {
            logger.error("Could not create AWS S3 client with the given credentials from the local config.");
        }
        return null;
    }

    public Endpoint getIncomingAWSEndpoint(final String incomingAWSBucket, final String region,
                                           final String fileNameToSaveAndDownload) {
        return camelContext.getEndpoint(
            "aws-s3://" + incomingAWSBucket + "?" + "amazonS3Client=#amazonS3Client"
                + "&region=" + region + "&deleteAfterRead=false" + "&prefix=" + fileNameToSaveAndDownload);
    }

    public Endpoint getOutgoingLocalEndpoint(final String outgoingEndpointDirectory,
                                             final String fileNameToSaveAndDownload) {
        return camelContext.getEndpoint(
            "file://" + outgoingEndpointDirectory + "?" + "fileName="
                + fileNameToSaveAndDownload + "&readLock=markerFile");
    }
}
Remote:
public class RemoteAWSS3ClientManagerImpl implements AWSS3ClientManager {

    private static Logger logger = LoggerFactory.getLogger(RemoteAWSS3ClientManagerImpl.class);

    private PriorityCodeSourcesRoutesProperties priorityCodeSourcesRoutesProperties;
    private SimpleRegistry registry = new SimpleRegistry();
    private CamelContext camelContext;

    public RemoteAWSS3ClientManagerImpl(PriorityCodeSourcesRoutesProperties priorityCodeSourcesRoutesProperties) {
        this.priorityCodeSourcesRoutesProperties = priorityCodeSourcesRoutesProperties;
        registry.put("amazonS3Client", getS3Client());
        camelContext = new DefaultCamelContext(registry);
        logger.info("Creating an AWS S3 client for a remote instance (normal for ec2s).");
    }

    private AmazonS3 getS3Client() {
        try {
            logger.info("Attempting to create an AWS S3 client with IAM role's temporary credentials.");
            return AmazonS3ClientBuilder.standard()
                .withCredentials(new InstanceProfileCredentialsProvider(false))
                .withRegion(Regions.US_WEST_2)
                .build();
        } catch (RuntimeException ex) {
            logger.error("Could not create AWS S3 client with the given credentials from the instance. "
                + "The default credential chain was used to create the AWS S3 client. "
                + ex.toString());
        }
        return null;
    }

    public Endpoint getIncomingAWSEndpoint(final String incomingAWSBucket, final String region,
                                           final String fileNameToSaveAndDownload) {
        return camelContext.getEndpoint(
            "aws-s3://" + incomingAWSBucket + "?" + "amazonS3Client=#amazonS3Client"
                + "&region=" + region + "&deleteAfterRead=false" + "&prefix=" + fileNameToSaveAndDownload);
    }

    public Endpoint getOutgoingLocalEndpoint(final String outgoingEndpointDirectory,
                                             final String fileNameToSaveAndDownload) {
        return camelContext.getEndpoint(
            "file://" + outgoingEndpointDirectory + "?" + "fileName="
                + fileNameToSaveAndDownload + "&readLock=markerFile");
    }
}
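For completeness, the factory that picks between the two can be as small as the sketch below; how the runningOnEc2 flag is determined (config property, environment variable, etc.) is an assumption left to your setup:

public class AWSS3ClientManagerFactory {

    // Sketch: choose the manager implementation based on where the app is running.
    public static AWSS3ClientManager create(PriorityCodeSourcesRoutesProperties properties,
                                            boolean runningOnEc2) {
        if (runningOnEc2) {
            return new RemoteAWSS3ClientManagerImpl(properties);
        }
        return new LocalAWSS3ClientManagerImpl(properties);
    }
}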

AmazonS3ClientBuilder.defaultClient() fails to account for region?

The Amazon Java SDK has marked the constructors for AmazonS3Client deprecated in favor of AmazonS3ClientBuilder.defaultClient(). Following the recommendation, though, does not result in an AmazonS3 client that works the same way. In particular, the client somehow fails to account for the region. If you run the tests below, the thisFails test demonstrates the problem.
public class S3HelperTest {

    @Test
    public void thisWorks() throws Exception {
        AmazonS3 s3Client = new AmazonS3Client(); // this call is deprecated
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
        assertNotNull(s3Client);
    }

    @Test
    public void thisFails() throws Exception {
        AmazonS3 s3Client = AmazonS3ClientBuilder.defaultClient();
        /*
         * The following line throws something like com.amazonaws.SdkClientException:
         * Unable to find a region via the region provider chain. Must provide an explicit region in the builder or
         * setup environment to supply a region.
         */
        s3Client.setS3ClientOptions(S3ClientOptions.builder().setPathStyleAccess(true).build());
    }
}
com.amazonaws.SdkClientException: Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
at com.amazonaws.client.builder.AwsClientBuilder.setRegion(AwsClientBuilder.java:371)
at com.amazonaws.client.builder.AwsClientBuilder.configureMutableProperties(AwsClientBuilder.java:337)
at com.amazonaws.client.builder.AwsSyncClientBuilder.build(AwsSyncClientBuilder.java:46)
at com.amazonaws.services.s3.AmazonS3ClientBuilder.defaultClient(AmazonS3ClientBuilder.java:54)
at com.climate.tenderfoot.service.S3HelperTest.thisFails(S3HelperTest.java:21)
...
Is this an AWS SDK bug? Is there some "region default provider chain" or some mechanism to derive the region from the environment and set it on the client? It seems really weak that the replacement for the deprecated method doesn't provide the same capability.
It looks like a region is required for the builder. This thread is probably related (though I would use .withRegion(Regions.US_EAST_1) in the third line):
To emulate the previous behavior (no region configured), you'll need to also enable "forced global bucket access" in the client builder:

AmazonS3 client = AmazonS3ClientBuilder.standard()
    .withRegion("us-east-1") // The first region to try your request against
    .withForceGlobalBucketAccess(true) // If a bucket is in a different region, try again in the correct region
    .build();

This will suppress the exception you received and automatically retry the request under the region in the exception. It is made explicit in the builder so you are aware of this cross-region behavior. Note: the SDK will cache the bucket region after the first failure, so that every request against this bucket doesn't have to happen twice.
Also, from the AWS documentation: if you want to use AmazonS3ClientBuilder.defaultClient(), then you need to have both ~/.aws/credentials and ~/.aws/config files.
~/.aws/credentials contents:
[default]
aws_access_key_id = your_id
aws_secret_access_key = your_key
~/.aws/config contents:
[default]
region = us-west-1
From the same AWS documentation page: if you don't want to hardcode the region/credentials, you can set them as environment variables on your Linux machine in the usual way:
export AWS_ACCESS_KEY_ID=your_access_key_id
export AWS_SECRET_ACCESS_KEY=your_secret_access_key
export AWS_REGION=your_aws_region
BasicAWSCredentials creds = new BasicAWSCredentials("key_ID", "Access_Key");
AWSStaticCredentialsProvider provider = new AWSStaticCredentialsProvider(creds);
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
    .withCredentials(provider)
    .withRegion(Regions.US_EAST_2)
    .build();
Create a file named "config" under ~/.aws and place the content below in it.
~/.aws/config contents:
[default]
region = us-west-1
output = json
AmazonSQS sqsClient = AmazonSQSClientBuilder
    .standard()
    .withRegion(Regions.AP_SOUTH_1)
    .build();
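The same pattern works for S3; a minimal sketch with the region given explicitly (credentials are still resolved through the default provider chain):

AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
    .withRegion(Regions.US_WEST_1)
    .build();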

Spring boot/spring-cloud-aws SQS failing to poll messages when deployed to AWS/EC2 environment

The Spring Boot app works fine running locally, connecting to sandbox S3 and sandbox SQS using the DefaultAWSCredentialsProviderChain with credentials set as system properties.
When the application is deployed to the EC2 environment and uses profile credentials, I get a continuous stream of the following error in CloudWatch:
{
    "Host": "<myhost>",
    "Date": "2016-12-20T21:52:56,777",
    "Thread": "simpleMessageListenerContainer-1",
    "Level": "WARN ",
    "Logger": "org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer",
    "Msg": "An Exception occurred while polling queue 'my-queue-name'. The failing operation will be retried in 10000 milliseconds",
    "Identifiers": {
        "Jvm-Instance": "",
        "App-Name": "my-app",
        "Correlation-Id": "ca9a556e-2fbc-3g49-9fb8-0e9213bb79bc",
        "Session-Id": "",
        "Thread-Group": "main",
        "Thread-Id": "32",
        "Version": ""
    }
}
java.lang.NullPointerException
at org.springframework.cloud.aws.messaging.listener.SimpleMessageListenerContainer$AsynchronousMessageListener.run(SimpleMessageListenerContainer.java:255) [spring-cloud-aws-messaging-1.1.1.RELEASE.jar:1.1.1.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_91]
The problem boils down to SimpleMessageListenerContainer.java:255:
ReceiveMessageResult receiveMessageResult = getAmazonSqs().receiveMessage(this.queueAttributes.getReceiveMessageRequest());
this.queueAttributes is null.
I have tried everything, from @EnableContextCredentials(instanceProfile=true) to setting cloud.aws.credentials.instanceProfile=true while making sure accessKey and secretKey are null. The SQS queue definitely exists, and I have verified through the AWS CLI on the EC2 instance itself that the profile credentials exist and are valid.
Additionally, in the AWS environment the app also uses the S3 client to generate unique keys for bucket storage, which all works. It's only polling messages from SQS that seems to fail.
I am processing messages like so:
@SqsListener("${aws.sqs.queue.name}")
public void receive(S3EventNotification s3EventNotificationRecord) {
    // ... handle the S3 event notification
}
More configuration:
@Bean
public AWSCredentialsProvider awsCredentialsProvider(
        @Value("${aws.credentials.accessKey}") String accessKey,
        @Value("${aws.credentials.secretKey}") String secretKey,
        JasyptPropertyDecryptor propertyDecryptor) {
    if (!Strings.isNullOrEmpty(accessKey) || !Strings.isNullOrEmpty(secretKey)) {
        Preconditions.checkState(
            !Strings.isNullOrEmpty(accessKey) && !Strings.isNullOrEmpty(secretKey),
            "Error in accessKey/secretKey config. Either both must be provided, or neither.");
        System.setProperty("aws.accessKeyId", propertyDecryptor.decrypt(accessKey));
        System.setProperty("aws.secretKey", propertyDecryptor.decrypt(secretKey));
    }
    return DefaultAWSCredentialsProviderChain.getInstance();
}

@Bean
public S3Client s3Client(
        AWSCredentialsProvider awsCredentialsProvider,
        @Value("${aws.s3.region.name}") String regionName,
        @Value("${aws.s3.bucket.name}") String bucketName) {
    return new S3Client(awsCredentialsProvider, regionName, bucketName);
}

@Bean
public QueueMessageHandlerFactory queueMessageHandlerFactory() {
    MappingJackson2MessageConverter messageConverter = new MappingJackson2MessageConverter();
    messageConverter.setStrictContentTypeMatch(false);
    QueueMessageHandlerFactory factory = new QueueMessageHandlerFactory();
    factory.setArgumentResolvers(
        Collections.<HandlerMethodArgumentResolver>singletonList(
            new PayloadArgumentResolver(messageConverter)));
    return factory;
}
One additional thing I noticed is that at application startup, ContextConfigurationUtils.registerCredentialsProvider is called, and unless you specify cloud.aws.credentials.profileName= as empty in your app.properties, this class will add a ProfileCredentialsProvider to the list of awsCredentialsProviders. I figured this might be problematic, since I'm not providing credentials on the EC2 instance that way and it should instead be using InstanceProfileCredentialsProvider. This change did not work either.
It turns out the issue was that the AWS services I was using, such as SQS, had proper access permissions on them, but the IAM profile itself lacked the permissions to even attempt the service operations the application needed to make.
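For reference, the listener container needs more than receive permissions: it resolves the queue URL and reads queue attributes at startup before it ever polls, which is consistent with queueAttributes being null here. A minimal policy sketch along these lines (the queue ARN is a placeholder) covers those calls:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sqs:GetQueueUrl",
        "sqs:GetQueueAttributes",
        "sqs:ReceiveMessage",
        "sqs:DeleteMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-queue-name"
    }
  ]
}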

Jest client shutdown after the first execute operation

I created an AWS Lambda package (Java) with a function that reads some files from Amazon S3 and pushes the data to the AWS Elasticsearch Service. Since I'm using AWS Elasticsearch, I can't use the Transport client, so I'm working with the Jest client to push via REST. The issue is with the Jest client.
Here's my Jest client instance:
public JestClient getClient() throws InterruptedException {
    final Supplier<LocalDateTime> clock = () -> LocalDateTime.now(ZoneOffset.UTC);
    DefaultAWSCredentialsProviderChain awsCredentialsProvider = new DefaultAWSCredentialsProviderChain();
    final AWSSigner awsSigner = new AWSSigner(awsCredentialsProvider, REGION, SERVICE, clock);

    JestClientFactory factory = new JestClientFactory() {
        @Override
        protected HttpClientBuilder configureHttpClient(HttpClientBuilder builder) {
            builder.addInterceptorLast(new AWSSigningRequestInterceptor(awsSigner));
            return builder;
        }

        @Override
        protected HttpAsyncClientBuilder configureHttpClient(HttpAsyncClientBuilder builder) {
            builder.addInterceptorLast(new AWSSigningRequestInterceptor(awsSigner));
            return builder;
        }
    };

    factory.setHttpClientConfig(
        new HttpClientConfig.Builder(URL)
            .discoveryEnabled(true)
            .multiThreaded(true)
            .build());
    JestClient jestClient = factory.getObject();
    return jestClient;
}
Since the AWS Elasticsearch domain is protected by an IAM access policy, I sign the requests for them to be authorized by AWS (example here). I use POJOs to index documents.
The problem I face is that I am not able to execute more than one action with the Jest client instance. For example, if I create the index first:
client.execute(new CreateIndex.Builder(indexName).build());
and later on I want to, for example, do some bulk indexing:
for (Object object : listOfObjects) {
    bulkIndexBuilder.addAction(new Index.Builder(object)
        .index(INDEX_NAME).type(DOC_TYPE).build());
}
client.execute(bulkIndexBuilder.build());
only the first action will be executed and the second will fail. Why is that? Is it possible to execute more than one action?
Moreover, using the provided code, I'm not able to execute more than 20 bulk operations when indexing documents. Around 20 is fine, but beyond that, client.execute(bulkIndexBuilder.build()); simply does not execute and the client shuts down.
Any help or suggestion would be appreciated.
UPDATE:
It seems that AWS Elasticsearch does not allow connecting to individual nodes. Simply turning off node discovery in the Jest client with .discoveryEnabled(false) solved all the problems. This answer helped.
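For completeness, the only change needed in the factory configuration above is the discovery flag (a sketch of the relevant lines):

factory.setHttpClientConfig(
    new HttpClientConfig.Builder(URL)
        .discoveryEnabled(false) // AWS Elasticsearch sits behind a single endpoint, so node discovery must stay off
        .multiThreaded(true)
        .build());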
