I have created a new Glacier vault to use in development, and I set up SNS and SQS for job-completion notifications.
I am using the Java SDK from AWS. I can successfully add archives to the vault, but I get an error when creating a retrieval job.
The code I am using is from the SDK:
InitiateJobRequest initJobRequest = new InitiateJobRequest()
        .withVaultName(vaultName)
        .withJobParameters(new JobParameters().withType("archive-retrieval").withArchiveId(archiveId));
I use the same code in Test and Production and it works fine, yet in development I get this error:
Status Code: 400, AWS Service: AmazonGlacier, AWS Request ID: xxxxxxxx, AWS Error Code: InvalidParameterValueException, AWS Error Message: Invalid vault name: arn:aws:glacier:us-west-2:xxxxxxx:vaults/xxxxxx
I know the vault name is correct and the vault exists, since I use the same name for the add-archive job and it completes fine.
I suspected that the vault might need some time after creation before it will accept retrieval requests, but I couldn't find any documentation to confirm this.
Has anyone had similar issues? Or does anyone know whether there is a delay before a new vault will accept retrieval requests?
try {
    // Get the S3 directory file.
    S3Object object = null;
    try {
        object = s3.getObject(new GetObjectRequest(s3BucketName, key));
    } catch (com.amazonaws.AmazonClientException e) {
        logger.error("Caught an AmazonClientException");
        logger.error("Error Message: " + e.getMessage());
        return;
    }
    // Show the content type
    logger.info("\tContent-Type: "
            + object.getObjectMetadata().getContentType());
    GlacierS3Dir dir = GlacierS3Dir.digestS3GlacierDirectory(object
            .getObjectContent());
    logger.info("\tGlacier object ID is " + dir.getGlacierFileID());
    // Connect to Glacier
    ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
    logger.info("\tVault: " + vaultName);
    // Create the target file name
    File f = new File(key);
    String filename = f.getName();
    filename = path + filename.replace("dir", "tgz");
    logger.info("Downloading to '" + filename
            + "'. This will take up to 4 hours...");
    atm.download(vaultName, dir.getGlacierFileID(), new File(filename));
    logger.info("Done.");
} catch (AmazonServiceException ase) {
    logger.error("Caught an AmazonServiceException.");
    logger.error("Error Message: " + ase.getMessage());
    logger.error("HTTP Status Code: " + ase.getStatusCode());
    logger.error("AWS Error Code: " + ase.getErrorCode());
    logger.error("Error Type: " + ase.getErrorType());
    logger.error("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
    logger.error("Caught an AmazonClientException.");
    logger.error("Error Message: " + ace.getMessage());
}
The error message "Invalid vault name" means the archive is located in a different vault. Proof link: https://forums.aws.amazon.com/message.jspa?messageID=446187
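A quick sanity check is to parse the region and vault name out of the ARN echoed in the error and compare them against the client and endpoint you configured for development. This is just a diagnostic sketch, not SDK code; the ARN below is a made-up placeholder, since the real one is redacted:

```java
public class VaultArnCheck {
    // Extracts { region, vaultName } from a Glacier vault ARN of the form
    // arn:aws:glacier:<region>:<account-id>:vaults/<vault-name>
    static String[] parseVaultArn(String arn) {
        String[] parts = arn.split(":");
        String region = parts[3];
        String vaultName = parts[5].substring("vaults/".length());
        return new String[] { region, vaultName };
    }

    public static void main(String[] args) {
        // Placeholder ARN; substitute the one from your error message
        String arn = "arn:aws:glacier:us-west-2:123456789012:vaults/my-dev-vault";
        String[] info = parseVaultArn(arn);
        System.out.println("region = " + info[0] + ", vault = " + info[1]);
    }
}
```

If the region or vault name in the error does not match the environment you think you are talking to, the client is pointed at the wrong vault.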
I have a program that has been working for a couple of years, but it is not working anymore due to a jackson.core error, and I cannot figure out why it is thrown.
The chunk of code that throws the error (last line):
// Build the input stream
if (yearDirCheck == true && monthDirCheck == true) {
    // The folder already exists, upload the file directly
    try (InputStream in = new FileInputStream(docname)) {
        FileMetadata metadata = client.files().uploadBuilder(path + "/" + jaar + "/" + maand + "/" + docname)
                .uploadAndFinish(in);
    } catch (IOException ex) {
        Logger.getLogger(maakPDF.class.getName()).log(Level.SEVERE, null, ex);
    }
    mail.verzendOverurenKaart(Technician, client.sharing().createSharedLinkWithSettings(path + "/" + jaar + "/" + maand + "/" + docname).getUrl());
}
The error I get:
Exception in thread "AWT-EventQueue-0" java.lang.NoSuchMethodError: 'void com.fasterxml.jackson.core.JsonParseException.<init>(com.fasterxml.jackson.core.JsonParser, java.lang.String)'
at com.dropbox.core.stone.StoneSerializer.expectEndObject(StoneSerializer.java:98)
at com.dropbox.core.v2.sharing.LinkPermissions$Serializer.deserialize(LinkPermissions.java:310)
at com.dropbox.core.v2.sharing.LinkPermissions$Serializer.deserialize(LinkPermissions.java:242)
at com.dropbox.core.stone.StructSerializer.deserialize(StructSerializer.java:21)
at com.dropbox.core.v2.sharing.FileLinkMetadata$Serializer.deserialize(FileLinkMetadata.java:455)
at com.dropbox.core.v2.sharing.SharedLinkMetadata$Serializer.deserialize(SharedLinkMetadata.java:494)
at com.dropbox.core.v2.sharing.SharedLinkMetadata$Serializer.deserialize(SharedLinkMetadata.java:381)
at com.dropbox.core.stone.StructSerializer.deserialize(StructSerializer.java:21)
at com.dropbox.core.stone.StoneSerializer.deserialize(StoneSerializer.java:66)
at com.dropbox.core.v2.DbxRawClientV2$1.execute(DbxRawClientV2.java:103)
at com.dropbox.core.v2.DbxRawClientV2.executeRetriable(DbxRawClientV2.java:252)
at com.dropbox.core.v2.DbxRawClientV2.rpcStyle(DbxRawClientV2.java:97)
I am using the jackson-core-2.6.1 library with the Dropbox v2 Core API in Java. I am not using Maven, Gradle, or any other build tool.
If you check your libs, you have two jackson-core jars: 2.6.1 and 2.7.4.
This kind of exception is usually caused by a dependency conflict, so removing one of them should fix it.
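If you want to verify at runtime which jar a class is actually loaded from, a small probe like this can help. This is only a sketch; java.lang.String is used in main so the example is self-contained, but in your project you would pass com.fasterxml.jackson.core.JsonParseException.class to see which jackson-core jar wins:

```java
import java.security.CodeSource;

public class WhichJar {
    // Returns the location the class was loaded from, or a note for bootstrap classes
    static String locationOf(Class<?> c) {
        CodeSource src = c.getProtectionDomain().getCodeSource();
        return src == null ? "bootstrap classpath" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // In your project, use JsonParseException.class here
        System.out.println(String.class.getName() + " loaded from " + locationOf(String.class));
    }
}
```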
I am trying to create an HL7 message in Java and then print the resulting message. I am faking basic patient information and then adding the drug-prescription information. Then I want to print the complete message, but I wasn't able to use the API correctly. I am new to HL7, so I know I'm probably missing some required segments and may even be using the wrong ones. Can you please help? This is my current code:
public RXO runDrugPrescriptionEvent(CMSGeneric cmsgen) {
    CMSDrugPrescriptionEvent cmsic = (CMSDrugPrescriptionEvent) cmsgen;
    ADT_A28 adt23 = new ADT_A28();
    try {
        adt23.initQuickstart("ADT", "A08", cmsic.getPDE_EVENT_ID());
        // We set the sex identity (male or female)
        if (cmsic.getBENE_SEX_IDENT_CD() == 1) {
            adt23.getPID().getSex().setValue("Male");
        } else {
            adt23.getPID().getSex().setValue("Female");
        }
        // We set a fake name and family name
        adt23.getPID().insertPatientName(0).getGivenName().setValue("CMS Name " + MainTest.NEXT_PATIENT_ID);
        adt23.getPID().insertPatientName(0).getFamilyName().setValue("CMS Family name " + MainTest.NEXT_PATIENT_ID);
        MainTest.NEXT_PATIENT_ID++;
        RXO rxo = new RXO(adt23, new DefaultModelClassFactory());
        rxo.getRxo1_RequestedGiveCode().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_CD());
        rxo.getRxo18_RequestedGiveStrength().setValue("" + cmsic.getPDE_DRUG_STR_CD());
        rxo.getRxo19_RequestedGiveStrengthUnits().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_STR_UNITS());
        rxo.getRxo5_RequestedDosageForm().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_DOSE_CD());
        rxo.getRxo11_RequestedDispenseAmount().setValue("" + cmsic.getPDE_DRUG_QTY_DIS());
        HapiContext context = new DefaultHapiContext();
        Parser parser = context.getPipeParser();
        String encodedMessage = adt23.getParser().encode(rxo.getMessage());
        logger.debug("Printing Message:");
        logger.debug(encodedMessage);
        return rxo;
    } catch (IOException e) {
        System.out.println("IOException creating HL7 message. " + e.getMessage());
        e.printStackTrace();
    } catch (HL7Exception e) {
        System.out.println("HL7Exception creating HL7 message. " + e.getMessage());
        e.printStackTrace();
    }
    return null;
}
With this code, the logger prints the following message:
MSH|^~\&|||||20160331101349.8+0100||ADT^A08|110001|PDE-00001E6FADAD3F57|2.3
PID|||||CMS Family name 100~^CMS Name 100|||Female
But I was expecting to see the RXO segment as well. How can I achieve that?
I found that changing the message type from ADT_A28 to ORP_O10 gave me all the fields I need, as ADT_A28 wasn't the appropriate message type for the kind of information I was sending. There's a complete example of how to set a large number of segments in this type of message here. Then I was able to print the complete message using the PipeParser:
HapiContext context = new DefaultHapiContext();
Parser parser = context.getPipeParser();
String encodedMessage = parser.encode(msg);
logger.debug("Printing EREncoded Message:");
logger.debug(encodedMessage);
I am trying to delete multiple objects, but they are not deleted, and I am not getting any exception. If I delete a single object, there is no issue.
This is the code I am using:
public void deleteImage() {
    List<KeyVersion> amazonKeys = new ArrayList<KeyVersion>();
    amazonKeys.add(new KeyVersion("compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg"));
    amazonKeys.add(new KeyVersion("compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg"));
    imageService.removeS3Files("mubucketname/dev/3123", amazonKeys);
}
My service is:
public void removeS3Files(String bucketName, List<KeyVersion> keys) {
    log.debug("deleting multiple objects from s3 with bucket::" + bucketName);
    try {
        DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest(bucketName);
        multiObjectDeleteRequest.setKeys(keys);
        AmazonS3 s3client = new AmazonS3Client(CustomAwsCredentials.getInstance(envConfiguration));
        s3client.setEndpoint(Constant.AWS_ENDPOINT);
        DeleteObjectsResult deleteObjectsResult = s3client.deleteObjects(multiObjectDeleteRequest);
        System.out.println(deleteObjectsResult.getDeletedObjects());
    } catch (AmazonServiceException exception) {
        log.debug("Caught an AmazonServiceException.");
        log.debug("Error Message: " + exception.getMessage());
    } catch (AmazonClientException clientException) {
        log.debug("Caught an AmazonClientException.");
        log.debug("Error Message: " + clientException.getMessage());
    }
}
My data in the bucket looks like this:
bucketname/dev/3123/compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg
bucketname/dev/3123/compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg
I have used the code below to delete a single object (it works fine):
try {
    AmazonS3 s3client = new AmazonS3Client(CustomAwsCredentials.getInstance(envConfiguration));
    System.out.println(s3client.doesBucketExist(bucketName));
    s3client.setEndpoint(Constant.AWS_ENDPOINT);
    s3client.deleteObject(bucketName, key);
} catch (AmazonServiceException exception) {
    log.debug("Caught an AmazonServiceException.");
    log.debug("Error Message: " + exception.getMessage());
} catch (AmazonClientException clientException) {
    log.debug("Caught an AmazonClientException.");
    log.debug("Error Message: " + clientException.getMessage());
}
Please help me figure out what I am missing in the multiple-object delete.
Thanks in advance.
This is not a valid bucket name:
mubucketname/dev/3123
The bucket name is separate from the key, and you can't put path prefixes from the key onto the bucket name. Try this:
List<KeyVersion> keys = new ArrayList<KeyVersion>();
keys.add(new KeyVersion("dev/3123/compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg"));
keys.add(new KeyVersion("dev/3123/compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg"));
DeleteObjectsRequest request = new DeleteObjectsRequest("mubucketname").withKeys(keys);
DeleteObjectsResult result = s3client.deleteObjects(request);
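To make the split explicit, here is a purely illustrative helper (splitBucketAndKey is not an SDK method) showing how the combined string breaks into a bucket name and a key prefix:

```java
public class S3KeyDemo {
    // Splits "bucket/some/key/prefix" into { bucket, keyPrefix }
    static String[] splitBucketAndKey(String combined) {
        int slash = combined.indexOf('/');
        if (slash < 0) {
            return new String[] { combined, "" };
        }
        return new String[] { combined.substring(0, slash), combined.substring(slash + 1) };
    }

    public static void main(String[] args) {
        String[] parts = splitBucketAndKey("mubucketname/dev/3123");
        System.out.println("bucket = " + parts[0]);     // mubucketname
        System.out.println("key prefix = " + parts[1]); // dev/3123
    }
}
```

The prefix then belongs at the front of each key passed to KeyVersion, never on the bucket name.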
I am trying to connect to my AWS S3 bucket to upload a file, following these links' instructions:
http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html
http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html#credentials-specify-provider
For some reason, when it tries to instantiate the AmazonS3Client object, it throws an exception that is being swallowed, and execution exits my Struts action. Because of this, I don't have much information to debug with.
I've tried both the default credential profiles file (~/.aws/credentials) approach and the explicit secret and access key approach (new BasicAWSCredentials(access_key_id, secret_access_key)).
/**
 * Uses the secret key and access key to return an object for accessing AWS features
 * @return BasicAWSCredentials
 */
public static BasicAWSCredentials getAWSCredentials() {
    final Properties props = new Properties();
    try {
        props.load(Utils.class.getResourceAsStream("/somePropFile"));
        BasicAWSCredentials credObj = new BasicAWSCredentials(props.getProperty("accessKey"),
                props.getProperty("secretKey"));
        return credObj;
    } catch (IOException e) {
        log.error("getAWSCredentials IOException: " + e.getMessage());
        return null;
    } catch (Exception e) {
        log.error("getAWSCredentials Exception: " + e.getMessage());
        e.printStackTrace();
        return null;
    }
}
********* Code attempting S3 Access **********
try {
    AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials());
    //AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    String fileKey = "catering/" + catering.getId() + fileUploadsFileName.get(i);
    System.out.println("Uploading a new object to S3 from a file\n");
    s3client.putObject(new PutObjectRequest(
            Utils.getS3BucketName(),
            fileKey, file));
    // Save Attachment record
    Attachment newAttach = new Attachment();
    newAttach.setFile_key(fileKey);
    newAttach.setFilename(fileUploadsFileName.get(i));
    newAttach.setFiletype(fileUploadsContentType.get(i));
    newAttach = aDao.add(newAttach);
} catch (AmazonServiceException ase) {
    System.out.println("Caught an AmazonServiceException, which " +
            "means your request made it " +
            "to Amazon S3, but was rejected with an error response" +
            " for some reason.");
    System.out.println("Error Message: " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code: " + ase.getErrorCode());
    System.out.println("Error Type: " + ase.getErrorType());
    System.out.println("Request ID: " + ase.getRequestId());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (AmazonClientException ace) {
    System.out.println("Caught an AmazonClientException, which " +
            "means the client encountered " +
            "an internal error while trying to " +
            "communicate with S3, " +
            "such as not being able to access the network.");
    System.out.println("Error Message: " + ace.getMessage());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (Exception e) {
    System.out.println("Error Message: " + e.getMessage());
}
It never makes it past the AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials()); line. I've verified that the BasicAWSCredentials object contains the correct field values. Based on this information what might be going wrong to prevent the S3 client from connecting?
** EDIT **
I found this in the resulting stack trace that seems like useful information:
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.ClientConfiguration
at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:384)
at gearup.actions.CateringController.assignAttachments(CateringController.java:176)
at gearup.actions.CateringController.update(CateringController.java:135)
Earlier I tried following a demo that created a ClientConfiguration object and set the protocol to HTTP. However, I ran into an issue where invoking the new ClientConfiguration() constructor threw a NullPointerException. Am I missing some requirement here?
It looks like your project is missing some dependencies.
You clearly have the aws-java-sdk-s3 jar configured in your project since it's resolving AmazonS3Client, but this jar also depends on aws-java-sdk-core. You need to add the core jar to your classpath.
This is totally weird, since aws-java-sdk-s3 explicitly depends on aws-java-sdk-core (see the pom.xml). Something is fishy here.
For me, it turned out to be a clash of Apache HttpClient versions (one of my POMs pulled in an older version than the one the Amazon library uses).
I've heard from others about similar clashes, e.g. with Jackson.
So for anyone in this situation, I suggest checking the Dependency Hierarchy view when you open a pom.xml in Eclipse (or use mvn dependency:tree; see here for more info).
Also, check the first error message that AWS throws. It is not always linked as the cause in subsequent stack traces, which may only tell you something like java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.http.AmazonHttpClient.
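Either way, a small runtime probe can show whether a class is resolvable at all, whether it loads but fails static initialization, and which jar supplied it. This is only a diagnostic sketch, not part of the AWS SDK; substitute the class name you care about:

```java
public class ClasspathProbe {
    // Reports whether the named class can be loaded, and from where
    static String probe(String className) {
        try {
            Class<?> c = Class.forName(className);
            java.security.CodeSource src = c.getProtectionDomain().getCodeSource();
            return "found in " + (src == null ? "bootstrap classpath" : src.getLocation());
        } catch (ClassNotFoundException e) {
            return "not on the classpath";
        } catch (LinkageError e) {
            // e.g. ExceptionInInitializerError: the class is present but failed to initialize
            return "present but failed to initialize: " + e;
        }
    }

    public static void main(String[] args) {
        System.out.println(probe("com.amazonaws.ClientConfiguration"));
    }
}
```

Running this inside the same web application (with the same classloader) that throws the NoClassDefFoundError narrows down whether the jar is missing, duplicated, or failing its static initializer.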
I have a problem with a Java-based DXL import.
We want to maintain properties files via our Java framework. I am working with a temporary file on my file system (I am working on a local server). I export the new properties file to my file system, generate a DXL file in the same folder, and then try to import the DXL into my database.
I have tried several options on the importer, and we create both the stream and the importer with sessionAsSignerWithFullAccess. The code is signed with the ID of the server admin, who has full access to everything.
When importing the DXL I receive only the error message "DXL Import Operation failed"; the importer's error log says I am not authorized to perform this operation.
Do you have any idea what the problem could be? From my point of view, I can't give my user any more rights on the server.
Here is the code for the import function:
private void importDXLFile(String filepath) {
    String dxlPath = filepath.replaceAll(".properties", ".dxl");
    DxlImporter importer = null;
    Stream stream = null;
    System.out.println("dxlPath: " + dxlPath);
    try {
        stream = BCCJsfUtil.getCurrentSessionAsSignerWithFullAccess(FacesContextEx.getCurrentInstance()).createStream();
        if (!stream.open(dxlPath, "ISO-8859-1")) {
            System.out.println("Cannot read " + dxlPath + " from server");
        }
        System.out.println("User: " + BCCJsfUtil.getCurrentSessionAsSignerWithFullAccess(FacesContextEx.getCurrentInstance()).getEffectiveUserName());
        importer = BCCJsfUtil.getCurrentSessionAsSignerWithFullAccess(FacesContextEx.getCurrentInstance()).createDxlImporter();
        importer.setReplaceDbProperties(false);
        importer.setReplicaRequiredForReplaceOrUpdate(false);
        importer.setDesignImportOption(DxlImporter.DXLIMPORTOPTION_REPLACE_ELSE_CREATE);
        importer.setInputValidationOption(DxlImporter.DXLVALIDATIONOPTION_VALIDATE_NEVER);
        importer.setExitOnFirstFatalError(false);
        importer.importDxl(stream.readText(), BCCJsfUtil.getCurrentDatabase());
        stream.close();
    } catch (NotesException e) {
        e.printStackTrace();
        try {
            System.out.println("Log: " + importer.getLog());
            System.out.println("LogComment: " + importer.getLogComment());
        } catch (NotesException e1) {
            e1.printStackTrace();
        }
    }
}
As you can see, I tried several options, hoping one of them would change something, but it is always the same error message.
The generated DXL seems valid, as we can import it manually with Ytria.
I hope someone has an idea. Any help would be appreciated.
Thanks in advance.
Matthias
Please check your ACL settings:
Is "Maximum internet name and password" set to "Manager" or "Designer"?