AWS SDK cannot read environment variables - Java

I am setting the AWS_* environment variables for Jenkins as shown below:
sudo apt-get update -y
sudo apt-get install -y python3 python-pip python-dev
sudo pip install awscli
S3_LOGIN=$(aws sts assume-role --role-arn rolename --role-session-name s3_session)
export AWS_CREDENTIAL_PROFILES_FILE=~/.aws/credentials
export AWS_ACCESS_KEY_ID=$(echo ${S3_LOGIN}| jq --raw-output '.Credentials|"\(.AccessKeyId)"')
export AWS_SECRET_ACCESS_KEY=$(echo ${S3_LOGIN} | jq --raw-output '.Credentials|"\(.SecretAccessKey)"')
export AWS_SESSION_TOKEN=$(echo ${S3_LOGIN} | jq --raw-output '.Credentials|"\(.SessionToken)"')
aws configure set default.region us-east-2
aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
But when I try to read them from code, the SDK cannot see the environment variables that were already set:
AWSCredentials evc = new EnvironmentVariableCredentialsProvider().getCredentials();
AmazonS3Client amazonS3 = new AmazonS3Client(evc);
amazonS3.setRegion(RegionUtils.getRegion("us-east-2"));
com.amazonaws.AmazonClientException: Unable to load AWS credentials
from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and
AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY))
The EnvironmentVariableCredentialsProvider in the AWS SDK looks like this:
public AWSCredentials getCredentials() {
    String accessKey = System.getenv(ACCESS_KEY_ENV_VAR);
    if (accessKey == null) {
        accessKey = System.getenv(ALTERNATE_ACCESS_KEY_ENV_VAR);
    }

    String secretKey = System.getenv(SECRET_KEY_ENV_VAR);
    if (secretKey == null) {
        secretKey = System.getenv(ALTERNATE_SECRET_KEY_ENV_VAR);
    }

    accessKey = StringUtils.trim(accessKey);
    secretKey = StringUtils.trim(secretKey);
    String sessionToken = StringUtils.trim(System.getenv(AWS_SESSION_TOKEN_ENV_VAR));

    if (StringUtils.isNullOrEmpty(accessKey) || StringUtils.isNullOrEmpty(secretKey)) {
        throw new AmazonClientException(
                "Unable to load AWS credentials from environment variables " +
                "(" + ACCESS_KEY_ENV_VAR + " (or " + ALTERNATE_ACCESS_KEY_ENV_VAR + ") and " +
                SECRET_KEY_ENV_VAR + " (or " + ALTERNATE_SECRET_KEY_ENV_VAR + "))");
    }

    return sessionToken == null
            ? new BasicAWSCredentials(accessKey, secretKey)
            : new BasicSessionCredentials(accessKey, secretKey, sessionToken);
}
EDIT: I tried the approach below as well:
ProfileCredentialsProvider evc = new ProfileCredentialsProvider();
AmazonS3Client amazonS3 = new AmazonS3Client(evc);
amazonS3.setRegion(RegionUtils.getRegion("us-east-2"));
But even though I set AWS_CREDENTIAL_PROFILES_FILE in the script (the credentials file is under ~/.aws/credentials), I still get the following:
credential profiles file not found in the given path:
/root/.aws/credentials
Even though the AwsProfileFileLocationProvider documentation says the following, I am not sure why it tries to look at /root/.aws/credentials:
Checks the environment variable override first, then checks the default location (~/.aws/credentials), and finally falls back to the legacy config file (~/.aws/config) that we still support loading credentials from.

I am assuming you are configuring your Jenkins job with separate build steps: one that sets the credentials and another that consumes them.
Jenkins does not share environment variables between build steps.
If you are using an old-style (freestyle) Jenkins job, you will need a plugin such as EnvInject, or you can use a file to share the variables between steps, like below (just as an example).
Step 1
echo "export AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}" > credential
echo "export AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}" >> credential
echo "export AWS_SESSION_TOKEN=${AWS_SESSION_TOKEN}" >> credential
Step 2
source credential && ./your_command_here
But if you are using a Jenkins Pipeline, you can use env, like below (just as an example).
pipeline {
    agent any
    parameters {
        string(name: 'AWS_ACCESS_KEY_ID', defaultValue: '')
    }
    stages {
        stage("set credential") {
            steps {
                script {
                    def tmp_AWS_ACCESS_KEY_ID = sh(script: 'your shell script here', returnStdout: true).trim()
                    env.AWS_ACCESS_KEY_ID = tmp_AWS_ACCESS_KEY_ID
                }
            }
        }
        stage("consume credential") {
            steps {
                echo "${env.AWS_ACCESS_KEY_ID}"
            }
        }
    }
}

You should be able to change the credentials file location using the AWS_CREDENTIAL_PROFILES_FILE environment variable.
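If the variable is not visible to the Java process, a minimal sketch (assuming the AWS SDK for Java v1; the file path and profile name are hypothetical placeholders) is to point ProfileCredentialsProvider at the file explicitly:
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;

public class ExplicitProfileExample {
    public static void main(String[] args) {
        // Path and profile name are illustrative; adjust them to wherever
        // the Jenkins user's credentials file actually lives.
        ProfileCredentialsProvider provider =
                new ProfileCredentialsProvider("/var/lib/jenkins/.aws/credentials", "default");
        AWSCredentials credentials = provider.getCredentials();
        System.out.println("Loaded access key: " + credentials.getAWSAccessKeyId());
    }
}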

Related

How to programmatically create a GitHub repository?

I have to use Java to programmatically create a GitHub repository and push code to it. Please advise on the best method to use, and share any code snippets or related links for such a utility.
I have looked at the JGit library; has anyone used it? I have also looked at hub, gh, and the command-line utilities.
You can use the GitHub REST API.
Generate a Personal Access Token from Settings > Developer Settings > Personal Access Tokens.
Once generated, use it to call the endpoint
https://api.github.com/user/repos
with body
{"name": "REPO_NAME"}
and Header
Authorization: token PERSONAL_ACCESS_TOKEN
Example Curl:
curl -H "Authorization: token PERSONAL_ACCESS_TOKEN" https://api.github.com/user/repos -d '{"name": "REPO_NAME"}'
Reference Doc: https://docs.github.com/en/rest/repos/repos?apiVersion=2022-11-28#create-a-repository-for-the-authenticated-user
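Since the question asks for Java, here is a minimal sketch of the same call using java.net.http.HttpClient (Java 11+); the token and repository name are placeholders:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateGitHubRepo {
    public static void main(String[] args) throws Exception {
        String token = "PERSONAL_ACCESS_TOKEN"; // placeholder
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.github.com/user/repos"))
                .header("Authorization", "token " + token)
                .header("Accept", "application/vnd.github+json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\": \"REPO_NAME\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode()); // 201 means the repository was created
        System.out.println(response.body());
    }
}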
The gitlab4j-api library is a great way to do this (for GitLab): https://github.com/gitlab4j/gitlab4j-api
<!-- https://mvnrepository.com/artifact/org.gitlab4j/gitlab4j-api -->
<dependency>
    <groupId>org.gitlab4j</groupId>
    <artifactId>gitlab4j-api</artifactId>
    <version>5.0.1</version>
</dependency>
I'm using it to read files from a repo, commit, and create merge requests:
try {
    var branchName = "BRANCH-" + random.nextLong();
    var action = new CommitAction();
    action.withContent(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(data))
            .withFilePath(JSON_FILE_PATH).setAction(Action.UPDATE);
    try (var glapi = new GitLabApi(GIT_HOSTNAME, token)) {
        glapi.enableRequestResponseLogging(java.util.logging.Level.INFO);
        glapi.getCommitsApi().createCommit(REPO_NAME, branchName, message, VERSION, null, null, action);
        return glapi.getMergeRequestApi().createMergeRequest(DB_REPO_NAME, branchName, VERSION, message,
                message, ADMIN_USER_ID);
    }
} catch (Exception ex) {
    throw new DataException(ex);
}
How to create a new repo, a new master branch, and your first commit:
try (var glapi = new GitLabApi("https://gitlab.com/", token)) {
    var projectApi = glapi.getProjectApi();
    var project = projectApi.createProject("my-repo");
    // my-repo created
    var action = new CommitAction();
    action.withContent("### ignore some files ###").withFilePath(".gitignore").setAction(Action.CREATE);
    glapi.getCommitsApi().createCommit("my-repo", "master", "my first commit", "master", "author@mail.com",
            "author", action);
    // your first commit
}
For GitHub you can use https://github.com/hub4j/github-api
@Test
public void testCreateRepoPublic() throws Exception {
    initGithubInstance();
    GHUser myself = gitHub.getMyself();
    String repoName = "test-repo-public";
    GHRepository repo = gitHub.createRepository(repoName).private_(false).create();
    try {
        assertThat(repo.isPrivate(), is(false));
        repo.setPrivate(true);
        assertThat(myself.getRepository(repoName).isPrivate(), is(true));
        repo.setPrivate(false);
        assertThat(myself.getRepository(repoName).isPrivate(), is(false));
    } finally {
        repo.delete();
    }
}
Committing multiple files:
var repo = github.getRepository("my-repo");
GHRef mainRef = repo.getRef("heads/master");
String mainTreeSha = repo.getTreeRecursive("master", 1).getSha();
GHTreeBuilder treeBuilder = repo.createTree().baseTree(mainTreeSha);
treeBuilder.add("file1.txt", Files.readAllBytes(Path.of("file1.txt")), false);
treeBuilder.add("file2.json", Files.readAllBytes(Path.of("file2.json")), false);
treeBuilder.add("dir1/dir2/file3.xml", Files.readAllBytes(Path.of("dir1/dir2/file3.xml")), false);
String treeSha = treeBuilder.create().getSha();
GHCommit commit = repo.createCommit()
        .tree(treeSha)
        .message("adding multiple files example")
        .author("author", "author@mail.com", new Date())
        .committer("committer", "committer@mail.com", new Date())
        .parent(mainRef.getObject().getSha())
        .create();
String commitSha = commit.getSHA1();
mainRef.updateTo(commitSha);
Using Eclipse JGit:
<!-- https://mvnrepository.com/artifact/org.eclipse.jgit/org.eclipse.jgit -->
<dependency>
    <groupId>org.eclipse.jgit</groupId>
    <artifactId>org.eclipse.jgit</artifactId>
    <version>6.2.0.202206071550-r</version>
</dependency>
Examples:
// to use basic auth
var credentials = new UsernamePasswordCredentialsProvider("user", "password");
// to clone a repository
Git.cloneRepository().setURI("https://github.com/my-repo.git").setCredentialsProvider(credentials).call();
// to init a new repository
Git.init().setDirectory(new File("/opt/my-repo")).setInitialBranch("master").call();
// to use a cloned repo
var git = Git.open(new File("/opt/my-repo"));
// to pull changes from remote
git.pull().setCredentialsProvider(credentials).call();
// to stage one file
git.add().setUpdate(true).addFilepattern("dir/file.txt").call();
// to stage all files in a directory
git.add().setUpdate(true).addFilepattern("dir").call();
// to create a commit with the staged files
git.commit().setMessage("just adding some files.").call();
// to push changes to remote
git.push().call();
I tried JGit to add files to the GitHub repo. The commit is getting created, but the files are not added. In addFilepattern I tried "." as well. Below is my code snippet:
Git.cloneRepository().setURI("https://github.com/"+org+"/"+repo+".git").setCredentialsProvider(credentialsProvider).setDirectory(new File(Gitpath)).setCloneAllBranches(true).call();
Git.init().setDirectory(new File(Gitpath)).setInitialBranch("main").call();
var git = Git.open(new File(Gitpath));
git.add().setUpdate(true).addFilepattern(directoryName+fileName).call();
git.commit().setMessage("Adding files.").call();
git.push().setCredentialsProvider(credentialsProvider).call();
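A likely culprit here (not confirmed in the thread) is that JGit's AddCommand.setUpdate(true) matches the file pattern only against files already tracked in the index, so brand-new files are silently skipped. A minimal sketch of staging new files, reusing the Gitpath and credentialsProvider variables from the snippet above:
// setUpdate(true) stages only modifications/deletions of tracked files;
// omit it so new files get staged as well.
var git = Git.open(new File(Gitpath));
git.add().addFilepattern(".").call();          // stages new and modified files
git.commit().setMessage("Adding files.").call();
git.push().setCredentialsProvider(credentialsProvider).call();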

EMR cluster hangs in Step state 'Running/Pending'

I am launching an EMR cluster through the Java SDK with a custom jar step. The cluster launch is successful, but after bootstrapping, while the step is in the pending/running state, the cluster gets stuck.
I am not even able to SSH into the machine.
Following is my code to launch the cluster with a custom jar step:
String dataTransferJar = "s3://test/testApplication.jar";
if (dataTransferJar == null || dataTransferJar.isEmpty())
    throw new InvalidS3ObjectException(
            "EMR custom jar file path is null/empty. Please provide a valid jar file path");

HadoopJarStepConfig customJarConfig = new HadoopJarStepConfig().withJar(dataTransferJar);
StepConfig customJarStep = new StepConfig("Mongo_to_S3_Data_Transfer", customJarConfig)
        .withActionOnFailure(ActionOnFailure.CONTINUE);

AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.standard()
        .withCredentials(awsCredentialsProvider)
        .withRegion(region)
        .build();

Application spark = new Application().withName("Spark");
String clusterName = "my-cluster-" + System.currentTimeMillis();

RunJobFlowRequest request = new RunJobFlowRequest()
        .withName(clusterName)
        .withReleaseLabel("emr-6.0.0")
        .withApplications(spark)
        .withVisibleToAllUsers(true)
        .withSteps(customJarStep)
        .withLogUri(loggingS3Bucket)
        .withServiceRole("EMR_DefaultRole")
        .withJobFlowRole("EMR_EC2_DefaultRole")
        .withInstances(new JobFlowInstancesConfig()
                .withEc2KeyName(key_pair)
                .withInstanceCount(instanceCount)
                .withEc2SubnetIds(subnetId)
                .withAdditionalMasterSecurityGroups(securityGroup)
                .withKeepJobFlowAliveWhenNoSteps(true)
                .withMasterInstanceType(instanceType));

RunJobFlowResult result = emr.runJobFlow(request);
The EMR emr-6.0.0 release is still in development. Can you try the same with emr-5.29.0?
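To see where the step gets stuck, a minimal sketch (reusing the emr client and result from the question's code) is to poll the step state through the SDK:
import com.amazonaws.services.elasticmapreduce.model.ListStepsRequest;
import com.amazonaws.services.elasticmapreduce.model.StepSummary;

// Poll the step states for the cluster launched above.
String clusterId = result.getJobFlowId();
for (StepSummary step : emr.listSteps(new ListStepsRequest().withClusterId(clusterId)).getSteps()) {
    System.out.println(step.getName() + ": " + step.getStatus().getState());
}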

Unable to execute java code in Linux server

I have made a Java API service. While calling this service, I am trying to get a response from Dialogflow. When I execute the code on the Windows platform, it works fine. But when I publish it on a Linux server and try to test it, it gives an error.
I have set export GOOGLE_APPLICATION_CREDENTIALS="[PATH]" as a system variable. I have also set up the path on the Linux server, but it is not working. Then I put the .json file in the project, read it from there, and made the call again. (These things work fine only on the local system, not on the Linux server after publishing the project.)
List<String> texts = new ArrayList<String>();
texts.add("Hi");
String sessionId = "3f46dfa4-5204-84f3-1488-5556f3d6b8a1";
String languageCode = "en-US";
GoogleCredentials credentials = GoogleCredentials.fromStream(new FileInputStream("E:\\GoogleDialogFlow\\ChatBox\\Json.json"))
        .createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));
SessionsSettings.Builder settingsBuilder = SessionsSettings.newBuilder();
SessionsSettings sessionsSettings = settingsBuilder.setCredentialsProvider(FixedCredentialsProvider.create(credentials)).build();
SessionsClient sessionsClient = SessionsClient.create(sessionsSettings);
SessionName session = SessionName.of(projectId, sessionId);
com.google.cloud.dialogflow.v2.TextInput.Builder textInput = TextInput.newBuilder().setText("Hi").setLanguageCode(languageCode);
QueryInput queryInput = QueryInput.newBuilder().setText(textInput).build();
DetectIntentResponse response = sessionsClient.detectIntent(session, queryInput);
QueryResult queryResult = response.getQueryResult();
System.out.println("====================");
System.out.format("Query Text: '%s'\n", queryResult.getQueryText());
System.out.format("Detected Intent: %s (confidence: %f)\n",
        queryResult.getIntent().getDisplayName(), queryResult.getIntentDetectionConfidence());
System.out.format("Fulfillment Text: '%s'\n", queryResult.getFulfillmentText());
credentials.toBuilder().build();
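The hard-coded E:\ path in the snippet can only resolve on Windows, which would explain the Linux failure. A minimal, platform-independent sketch (an assumption, not confirmed by the asker) is to let the client library resolve GOOGLE_APPLICATION_CREDENTIALS via Application Default Credentials:
import com.google.auth.oauth2.GoogleCredentials;
import com.google.common.collect.Lists;

// Resolves the file referenced by the GOOGLE_APPLICATION_CREDENTIALS
// environment variable, so the same code works on Windows and Linux.
GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
        .createScoped(Lists.newArrayList("https://www.googleapis.com/auth/cloud-platform"));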

AWS was not able to validate the provided access credentials

I have been trying to create a Security Group using the AWS SDK, but somehow it fails to authenticate. For the specific access key and secret key, I have granted administrative rights, yet validation still fails. On the other hand, I tried the same credentials with the AWS S3 example, and it executes successfully.
Getting following error while creating security group:
com.amazonaws.AmazonServiceException: AWS was not able to validate the provided access credentials (Service: AmazonEC2; Status Code: 401; Error Code: AuthFailure; Request ID: 1584a035-9a88-4dc7-b5e2-a8b7bde6f43c)
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1077)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:725)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:460)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:295)
at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:9393)
at com.amazonaws.services.ec2.AmazonEC2Client.createSecurityGroup(AmazonEC2Client.java:1146)
at com.sunil.demo.ec2.SetupEC2.createSecurityGroup(SetupEC2.java:84)
at com.sunil.demo.ec2.SetupEC2.main(SetupEC2.java:25)
Here is the Java code:
public class SetupEC2 {
    AWSCredentials credentials = null;
    AmazonEC2Client amazonEC2Client;

    public static void main(String[] args) {
        SetupEC2 setupEC2Instance = new SetupEC2();
        setupEC2Instance.init();
        setupEC2Instance.createSecurityGroup();
    }

    public void init() {
        // Initialize AWS credentials
        try {
            credentials = new BasicAWSCredentials("XXXXXXXX", "XXXXXXXXX");
        } catch (Exception e) {
            throw new AmazonClientException(
                    "Cannot load the credentials from the credential profiles file. " +
                    "Please make sure that your credentials file is at the correct " +
                    "location (/home/sunil/.aws/credentials), and is in valid format.",
                    e);
        }

        // Initialize the EC2 client
        try {
            amazonEC2Client = new AmazonEC2Client(credentials);
            amazonEC2Client.setEndpoint("ec2.ap-southeast-1.amazonaws.com");
            amazonEC2Client.setRegion(Region.getRegion(Regions.AP_SOUTHEAST_1));
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public boolean createSecurityGroup() {
        boolean securityGroupCreated = false;
        String groupName = "sgec2securitygroup";
        String sshIpRange = "0.0.0.0/0";
        String sshProtocol = "tcp";
        int sshFromPort = 22;
        int sshToPort = 22;
        String httpIpRange = "0.0.0.0/0";
        String httpProtocol = "tcp";
        int httpFromPort = 80;
        int httpToPort = 80;
        String httpsIpRange = "0.0.0.0/0";
        String httpsProtocol = "tcp";
        int httpsFromPort = 443;
        int httpsToPort = 443;
        try {
            CreateSecurityGroupRequest createSecurityGroupRequest = new CreateSecurityGroupRequest();
            createSecurityGroupRequest.withGroupName(groupName).withDescription("Created from AWS SDK Security Group");
            createSecurityGroupRequest.setRequestCredentials(credentials);
            CreateSecurityGroupResult csgr = amazonEC2Client.createSecurityGroup(createSecurityGroupRequest);
            String groupId = csgr.getGroupId();
            System.out.println("Security Group Id : " + groupId);

            System.out.println("Create Security Group Permission");
            Collection<IpPermission> ips = new ArrayList<IpPermission>();
            // Permission for SSH, only to your IP
            IpPermission ipSsh = new IpPermission();
            ipSsh.withIpRanges(sshIpRange).withIpProtocol(sshProtocol).withFromPort(sshFromPort).withToPort(sshToPort);
            ips.add(ipSsh);
            // Permission for HTTP, anyone can access
            IpPermission ipHttp = new IpPermission();
            ipHttp.withIpRanges(httpIpRange).withIpProtocol(httpProtocol).withFromPort(httpFromPort).withToPort(httpToPort);
            ips.add(ipHttp);
            // Permission for HTTPS, anyone can access
            IpPermission ipHttps = new IpPermission();
            ipHttps.withIpRanges(httpsIpRange).withIpProtocol(httpsProtocol).withFromPort(httpsFromPort).withToPort(httpsToPort);
            ips.add(ipHttps);

            System.out.println("Attach Owner to security group");
            // Register this security group with the owner
            AuthorizeSecurityGroupIngressRequest authorizeSecurityGroupIngressRequest = new AuthorizeSecurityGroupIngressRequest();
            authorizeSecurityGroupIngressRequest.withGroupName(groupName).withIpPermissions(ips);
            amazonEC2Client.authorizeSecurityGroupIngress(authorizeSecurityGroupIngressRequest);
            securityGroupCreated = true;
        } catch (Exception e) {
            e.printStackTrace();
            securityGroupCreated = false;
        }
        System.out.println("securityGroupCreated: " + securityGroupCreated);
        return securityGroupCreated;
    }
}
Try updating your system time.
When the difference between the AWS datetime and your datetime is too big, the credentials will not be accepted.
For Debian/Ubuntu users:
If you have never set your time zone, you can do so with
sudo dpkg-reconfigure tzdata
Stop the NTP service, because large time differences cannot be corrected while the service is running:
sudo /etc/init.d/ntp stop
Synchronize your time and date (-q: set the time and quit, i.e. run only once; -g: allow the first adjustment to be big; -x: slew up to 600 seconds, which also adjusts large differences; -n: do not fork, so the process does not go to the background):
sudo ntpd -q -g -x -n
Restart the service:
sudo /etc/init.d/ntp start
Check the current system datetime:
sudo date
Set your hardware clock from the system datetime:
sudo hwclock --systohc
Show your hardware datetime:
sudo hwclock
You must specify the profile and the region:
aws ec2 describe-instances --profile nameofyourprofile --region eu-west-1
"A client error (AuthFailure) occurred when calling the [Fill-in the blanks] operation: AWS was not able to validate the provided access credentials"
If you are confident of the validity of your AWS credentials (i.e. the access key, secret key, and corresponding profile name), your date and time being off-track is a very good culprit.
In my case, I was confident but I was wrong - I had used the wrong keys. Doesn't hurt to double check.
Let's say that you created an IAM user called "guignol". Configure "guignol" in ~/.aws/config as follows:
[profile guignol]
region = us-east-1
aws_access_key_id = AKXXXYYY...
aws_secret_access_key = ...
Install the AWS CLI (command-line interface) if you haven't already done so. As a test, run aws ec2 describe-instances --profile guignol. If you get an error message that AWS was not able to validate the credentials, run aws configure --profile guignol, enter your credentials, and run the test command again.
If you put your credentials in ~/.aws/credentials, then you don't need to provide a parameter to your AmazonEC2Client call. If you do this, then on an EC2 instance the same code will work with assumed STS roles.
For more info see: http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html
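A minimal sketch of that approach, assuming the AWS SDK for Java v1: build the client without passing credentials, and the default provider chain (environment variables, ~/.aws/credentials, then the EC2 instance profile) resolves them.
import com.amazonaws.regions.Regions;
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;

// No explicit credentials: the default provider chain checks environment
// variables, ~/.aws/credentials, and the instance profile, in that order.
AmazonEC2 ec2 = AmazonEC2ClientBuilder.standard()
        .withRegion(Regions.AP_SOUTHEAST_1)
        .build();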
In my case, killing the terminal and running the command again helped.
In my case I copied the CDK env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN for programmatic access, but it turned out I already had an old session token in my ~/.aws/credentials which I had forgotten about. I needed to remove the old tokens from the file.

Executing OS Dependent Commands

I have two Java classes that run commands on the local system. My dev system is a Mac, my QA system is Windows, and the Prod system is UNIX, so the commands differ for each one; at the moment I have to go in and comment/uncomment the differences. Both classes are structured the same, with an executable and a command. Here is what I have:
// Linux (QA/Prod)
final String executable = "/user1/Project/Manufacturer/CommandCLI";
// final String executable = "cat"; // Mac Dev
// final String executable = "cmd"; // Windows QA
final String command = "getarray model=" + model + " serialnum=" + serialnum;
// Windows QA(local laptop)
//final String command = "/C c:/Manufacturer/CommandCLI.bat getarray model=" + model + " serialnum=" + serialnum;
//Mac Dev
// final String command = "/TestData/" + computer.getId() + ".xml"
So, as you can see, I am commenting and uncommenting depending on the environment. One of my main concerns is that I rely on the model and serialnum variables (given in the method call), and I don't know if they can somehow be inserted into a property.
We are using Maven so during "mvn clean package" we are adding the -P flag to specify a properties file.
What is an elegant way to handle this?
I suggest creating three different methods, one for each OS, containing the OS-specific commands. You can determine the current OS using system properties (check this question) and call the appropriate method based on that property. Example:
private void runOnLinux(int model, int serialNum) { ... }
private void runOnWindows(int model, int serialNum) { ... }
private void runOnMac(int model, int serialNum) { ... }

// Somewhere in the source code...
String os = System.getProperty("os.name").toLowerCase();
if (os.contains("windows")) {
    runOnWindows(model, serialNum);
} else if (os.contains("linux") || os.contains("unix")) {
    runOnLinux(model, serialNum);
} else {
    // Mac!
    runOnMac(model, serialNum);
}
Of course, I am not sure all these checks are correct; better to check the answers to the question I mentioned at the beginning, which contain much more useful information.
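To make one of these methods concrete, here is a minimal sketch using ProcessBuilder; the executable and file path mirror the Mac branch from the question's comments and are placeholders, not a confirmed implementation:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical example: on the Mac dev box the "command" is just a
// test-data XML file that gets cat'ed, per the question's comments.
private void runOnMac(int model, int serialNum) throws IOException, InterruptedException {
    Process process = new ProcessBuilder("cat", "/TestData/" + model + ".xml")
            .redirectErrorStream(true) // merge stderr into stdout
            .start();
    try (BufferedReader reader = new BufferedReader(new InputStreamReader(process.getInputStream()))) {
        reader.lines().forEach(System.out::println);
    }
    process.waitFor();
}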
