Hi everyone, I am new to the AWS SDK. I am trying to create an EKS cluster from my Java application.
I have used the eksctl create cluster command to create a cluster, and I have also done this using cluster templates.
I have tried to use the AWS SDK to create clusters, but that didn't work, and I have no idea how to proceed.
If any of you has good sample code or an explanation of how to create a cluster with the AWS SDK (using a cluster template or otherwise), anything that helps me get there would be appreciated.
Here is a sample of Java code; I hope it serves your purpose for EKS cluster creation (this uses the AWS SDK for Java v1):
import com.amazonaws.ClientConfiguration;
import com.amazonaws.Protocol;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.retry.RetryPolicy;
import com.amazonaws.services.eks.AmazonEKS;
import com.amazonaws.services.eks.AmazonEKSClientBuilder;
import com.amazonaws.services.eks.model.CreateClusterRequest;
import com.amazonaws.services.eks.model.CreateClusterResult;

String accessKey = "your_aws_access_key";
String secretKey = "your_aws_secret_key";
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

// HTTPS client configuration with the SDK's default retry behaviour
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setProtocol(Protocol.HTTPS);
clientConfig.setMaxErrorRetry(PredefinedRetryPolicies.DEFAULT_MAX_ERROR_RETRY);
clientConfig.setRetryPolicy(new RetryPolicy(PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION,
        PredefinedRetryPolicies.DEFAULT_BACKOFF_STRATEGY,
        PredefinedRetryPolicies.DEFAULT_MAX_ERROR_RETRY, false));

AmazonEKS amazonEKS = AmazonEKSClientBuilder.standard()
        .withClientConfiguration(clientConfig)
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion("us-east-1") // replace with your region
        .build();

CreateClusterResult eksCluster = amazonEKS.createCluster(
        new CreateClusterRequest().withName("cluster-name") // plus the other required params
);
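Note that the EKS CreateCluster API rejects a request that only has a name: it also requires a cluster role ARN and a VPC configuration. Here is a minimal sketch of a fuller request; the role ARN, subnet IDs, and security-group ID below are placeholders you must replace with your own:
// Placeholders for illustration only; substitute your own ARN and IDs.
CreateClusterRequest request = new CreateClusterRequest()
        .withName("cluster-name")
        .withRoleArn("arn:aws:iam::123456789012:role/eksClusterRole")
        .withResourcesVpcConfig(new com.amazonaws.services.eks.model.VpcConfigRequest()
                .withSubnetIds("subnet-aaaa1111", "subnet-bbbb2222")
                .withSecurityGroupIds("sg-cccc3333"));

CreateClusterResult result = amazonEKS.createCluster(request);

// Creation is asynchronous; poll DescribeCluster until the status is ACTIVE.
String status = amazonEKS.describeCluster(
        new com.amazonaws.services.eks.model.DescribeClusterRequest().withName("cluster-name"))
        .getCluster().getStatus();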
I am maintaining a JSP/Servlet application that uses the MongoDB 3.8 Java driver. At this point, I cannot upgrade to a newer version.
I have occasionally experienced some timeouts when connecting from the application to the database. Based on what I read in https://mongodb.github.io/mongo-java-driver/3.8/driver/tutorials/connect-to-mongodb/ I wrote the following code:
CodecRegistry pojoCodecRegistry = fromRegistries(com.mongodb.MongoClient.getDefaultCodecRegistry(),
        fromProviders(PojoCodecProvider.builder().automatic(true).build()));

MongoCredential credential =
        MongoCredential.createCredential(theuser, database, thepassword.toCharArray());

MongoClientSettings settings = MongoClientSettings.builder()
        .credential(credential)
        .codecRegistry(pojoCodecRegistry)
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress(addr, 27017))))
        .build();

MongoClient client = MongoClients.create(settings);
This works, but I still get the occasional timeout (usually when reloading a JSP page).
I figured out I could create a SocketSettings instance with:
SocketSettings socketOptions = SocketSettings.builder().connectTimeout(60, TimeUnit.SECONDS).build();
But I cannot figure out how to apply these settings to the creation of the instance of MongoClient. Any hints?
Thanks.
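One way to wire those settings in (a sketch based on the 3.8 driver's builder API, untested against your setup) is to pass a block to applyToSocketSettings on the same builder; the readTimeout call is an extra you may or may not want:
MongoClientSettings settings = MongoClientSettings.builder()
        .credential(credential)
        .codecRegistry(pojoCodecRegistry)
        .applyToClusterSettings(builder ->
                builder.hosts(Arrays.asList(new ServerAddress(addr, 27017))))
        .applyToSocketSettings(builder ->
                // equivalent to the standalone SocketSettings instance above
                builder.connectTimeout(60, TimeUnit.SECONDS)
                       .readTimeout(60, TimeUnit.SECONDS))
        .build();

MongoClient client = MongoClients.create(settings);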
I'm trying to manage my ProfitBricks S3 Object Storage bucket from Java. I want to do the basic operations (add, remove, list), but all I have found on the internet covers connecting to AWS, Google, or IBM object storage.
I have tried to use one of those implementations, but I can't find how to provide my provider's endpoint.
I achieved this using the jets3t library, setting the endpoint through a Jets3tProperties object:
Jets3tProperties props = new Jets3tProperties();
props.setProperty("s3service.disable-dns-buckets", String.valueOf(true));
props.setProperty("s3service.s3-endpoint", PB_ENDPOINT);
props.setProperty("s3service.s3-endpoint-http-port", PB_ENDPOINT_HTTP_PORT);
props.setProperty("s3service.s3-endpoint-https-port", PB_ENDPOINT_HTTPS_PORT);
props.setProperty("s3service.https-only", String.valueOf(false));
AWSCredentials creds = new AWSCredentials(PB_ACCESS_KEY, PB_SECRET_KEY);
RestS3Service s3Service = new RestS3Service(creds, null, null, props);
With the resulting s3Service object I was then able to manage my ProfitBricks S3 Object Storage bucket from Java.
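If it helps, here is a sketch of the basic operations the question asks about, run against the same s3Service; the bucket and object names are placeholders, and the method names are from the JetS3t S3Service API as I recall it, so verify them against your JetS3t version:
// List buckets and the objects in one bucket
S3Bucket[] buckets = s3Service.listAllBuckets();
S3Object[] objects = s3Service.listObjects("my-bucket");

// Add (upload) an object
S3Object upload = new S3Object("hello.txt", "Hello, object storage!");
s3Service.putObject("my-bucket", upload);

// Remove an object
s3Service.deleteObject("my-bucket", "hello.txt");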
I just started working with the AWS SDK for Java and .NET.
Currently I am creating an AWS SQS queue. I was able to create a queue, list the existing queues, and talk to the queues with the .NET SDK.
When I tried the same with Java, I got the following error:
Unable to find a region via the region provider chain. Must provide an explicit region in the builder or setup environment to supply a region.
I have set all the necessary access keys, region, and credentials in the AWS preferences in Eclipse.
This is how I am initializing the SQS client in a Java Maven project:
AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();
I have googled and found that there is a method called withRegion() for S3 where I can specify the region, but it's not there for SQS.
I also tried setting the region as:
sqs.setRegion(Region.AP_Mumbai);
This throws the following exception:
The method setRegion(com.amazonaws.regions.Region) in the type AmazonSQS is not applicable for the arguments (com.amazonaws.services.s3.model.Region)
I tried setting the same using com.amazonaws.regions.Region, but there is no provision for that.
Please suggest.
I set up the AWS SQS client this way:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
return AmazonSQSClientBuilder.standard().withRegion(region).withCredentials(new AWSStaticCredentialsProvider(bAWSc)).build();
Based on what @Francesco posted, I created a more intuitive version:
BasicAWSCredentials bAWSc = new BasicAWSCredentials(accessKey, secretKey);
final AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .withCredentials(new AWSStaticCredentialsProvider(bAWSc))
        .build();
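For the Mumbai region the question mentions, the enum in com.amazonaws.regions.Regions is AP_SOUTH_1; the setRegion error appeared because the Region type that was imported is the S3-specific com.amazonaws.services.s3.model.Region. A short sketch relying on the default credentials provider chain:
// Regions.AP_SOUTH_1 is ap-south-1 (Mumbai); credentials come from the default chain.
AmazonSQS sqs = AmazonSQSClientBuilder.standard()
        .withRegion(Regions.AP_SOUTH_1)
        .build();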
I was trying to run Elastic MapReduce from Eclipse but couldn't do so.
My code is as below:
public class RunEMR {

    /**
     * @param args
     */
    public static void main(String[] args) {
        AWSCredentials credentials = new BasicAWSCredentials("xxxx", "xxxx");
        AmazonElasticMapReduceClient emr = new AmazonElasticMapReduceClient(credentials);

        StepFactory stepFactory = new StepFactory();

        StepConfig enableDebugging = new StepConfig()
                .withName("Enable Debugging")
                .withActionOnFailure("TERMINATE_JOB_FLOW")
                .withHadoopJarStep(stepFactory.newEnableDebuggingStep());

        StepConfig installHive = new StepConfig()
                .withName("Install Hive")
                .withActionOnFailure("TERMINATE_JOB_FLOW")
                .withHadoopJarStep(stepFactory.newInstallHiveStep());

        StepConfig hiveScript = new StepConfig().withName("Hive Script")
                .withActionOnFailure("TERMINATE_JOB_FLOW")
                .withHadoopJarStep(stepFactory.newRunHiveScriptStep("s3://mywordcountbuckett/binary/WordCount.jar"));

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("Hive Interactive")
                .withSteps(enableDebugging, installHive)
                .withLogUri("s3://mywordcountbuckett/")
                .withInstances(new JobFlowInstancesConfig()
                        .withEc2KeyName("xxxx")
                        .withHadoopVersion("0.20")
                        .withInstanceCount(3)
                        .withKeepJobFlowAliveWhenNoSteps(true)
                        .withMasterInstanceType("m1.small")
                        .withSlaveInstanceType("m1.small"));

        RunJobFlowResult result = emr.runJobFlow(request);
    }
}
The error that I got was:
Exception in thread "main" com.amazonaws.AmazonServiceException: InstanceProfile is required for creating cluster. (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ValidationException; Request ID: 7a96ee32-9744-11e5-947d-65ca8f7db0a5)
I have tried for a couple of hours but have been unable to fix it. Does anyone know how?
I got the same exception: InstanceProfile is required for creating cluster.
I had to set the service role and the job-flow role like below:
aRunJobFlowRequest.setServiceRole("EMR_DefaultRole");
aRunJobFlowRequest.setJobFlowRole("EMR_EC2_DefaultRole");
After that I was OK.
The AWS documentation for EMR IAM roles says:
AWS Identity and Access Management (IAM) roles provide a way for IAM users or AWS services to have certain specified permissions and access to resources. For example, this may allow users to access resources or other services to act on your behalf. You must specify two IAM roles for a cluster: a role for the Amazon EMR service (service role), and a role for the EC2 instances (instance profile) that Amazon EMR manages.
So the word InstanceProfile in the exception message presumably refers to the role for the EC2 instances (the instance profile) from the doc, yet I got past the exception by specifying the JobFlowRole. A little weird.
For an EC2 role (here the job-flow role), an instance profile with the same name is created internally, so EMR uses these names interchangeably.
If you are creating an EMR cluster from scratch using boto3, you should also create the EMR service role, an EC2 job-flow role, and an instance profile linked to that job-flow role.
AWS doc
The API version you are trying to use is deprecated, and IAM roles are now required. Follow the example given in the documentation: http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/calling-emr-with-java-sdk.html.
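Putting the answers together, here is a sketch of the question's request with the two default roles set. It assumes the default EMR roles already exist in the account (they can be created with the AWS CLI's aws emr create-default-roles command):
RunJobFlowRequest request = new RunJobFlowRequest()
        .withName("Hive Interactive")
        .withServiceRole("EMR_DefaultRole")      // role assumed by the EMR service
        .withJobFlowRole("EMR_EC2_DefaultRole")  // instance profile for the EC2 nodes
        .withSteps(enableDebugging, installHive)
        .withLogUri("s3://mywordcountbuckett/")
        .withInstances(new JobFlowInstancesConfig()
                .withEc2KeyName("xxxx")
                .withInstanceCount(3)
                .withKeepJobFlowAliveWhenNoSteps(true)
                .withMasterInstanceType("m1.small")
                .withSlaveInstanceType("m1.small"));

RunJobFlowResult result = emr.runJobFlow(request);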
We've got a Rails app using Resque to push jobs onto the queue. The consumer of the jobs is a Java app using the Jesque client. Both apps run on Heroku. What I can't figure out is how to use Jesque's ConfigBuilder class to populate the Redis connection parameters from Heroku's REDISTOGO_URL config var. The source documentation is pretty thin. Examples other than the default final Config config = new ConfigBuilder().build(); would be great.
I'm not sure how to do it with Jesque's ConfigBuilder, but here is how you do it with a JedisPool:
URI redisURI = new URI(System.getenv("REDISTOGO_URL"));
JedisPool pool = new JedisPool(new JedisPoolConfig(),
        redisURI.getHost(),
        redisURI.getPort(),
        Protocol.DEFAULT_TIMEOUT,
        redisURI.getUserInfo().split(":", 2)[1]); // user info is "user:password"
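For the ConfigBuilder itself, the same URI parsing should carry over. This is a sketch assuming Jesque's ConfigBuilder exposes withHost/withPort/withPassword setters, which is worth verifying against the Jesque version you use:
// Parse REDISTOGO_URL (redis://user:password@host:port) into a Jesque Config.
URI redisURI = new URI(System.getenv("REDISTOGO_URL"));
final Config config = new ConfigBuilder()
        .withHost(redisURI.getHost())
        .withPort(redisURI.getPort())
        .withPassword(redisURI.getUserInfo().split(":", 2)[1])
        .build();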