I am trying to delete multiple objects, but they are not deleted, and I am not getting any exception. If I do a single delete, there is no issue.
The following is the code I am using:
public void deleteImage() {
    List<KeyVersion> amazonKeys = new ArrayList<KeyVersion>();
    amazonKeys.add(new KeyVersion("compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg"));
    amazonKeys.add(new KeyVersion("compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg"));
    imageService.removeS3Files("mubucketname/dev/3123", amazonKeys);
}
My service is:
public void removeS3Files(String bucketName, List<KeyVersion> keys) {
    log.debug("deleting multiple objects from s3 with bucket::" + bucketName);
    try {
        DeleteObjectsRequest multiObjectDeleteRequest = new DeleteObjectsRequest(bucketName);
        multiObjectDeleteRequest.setKeys(keys);
        AmazonS3 s3client = new AmazonS3Client(CustomAwsCredentials.getInstance(envConfiguration));
        s3client.setEndpoint(Constant.AWS_ENDPOINT);
        DeleteObjectsResult deleteObjectsResult = s3client.deleteObjects(multiObjectDeleteRequest);
        System.out.println(deleteObjectsResult.getDeletedObjects());
    } catch (AmazonServiceException exception) {
        log.debug("Caught an AmazonServiceException.");
        log.debug("Error Message: " + exception.getMessage());
    } catch (AmazonClientException clientException) {
        log.debug("Caught an AmazonClientException.");
        log.debug("Error Message: " + clientException.getMessage());
    }
}
My data stored in the bucket looks like:
bucketname/dev/3123/compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg
bucketname/dev/3123/compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg
I have used the code below to delete a single object (working fine):
try {
    AmazonS3 s3client = new AmazonS3Client(CustomAwsCredentials.getInstance(envConfiguration));
    System.out.println(s3client.doesBucketExist(bucketName));
    s3client.setEndpoint(Constant.AWS_ENDPOINT);
    s3client.deleteObject(bucketName, key);
} catch (AmazonServiceException exception) {
    log.debug("Caught an AmazonServiceException.");
    log.debug("Error Message: " + exception.getMessage());
} catch (AmazonClientException clientException) {
    log.debug("Caught an AmazonClientException.");
    log.debug("Error Message: " + clientException.getMessage());
}
Please help me figure out what I am missing in the multiple-object delete.
Thanks in advance
This is not a valid bucket name:
mubucketname/dev/3123
The bucket name is separate from the key and you can't put path prefixes from the key on the bucket name. Try this:
List<KeyVersion> keys = new ArrayList<KeyVersion>();
keys.add(new KeyVersion("dev/3123/compressedc1eac77b-9c38-4036-9770-34a77a163bb0.jpeg"));
keys.add(new KeyVersion("dev/3123/compressedb52adf1e-5155-48b6-9051-bb679601f5ee.jpeg"));
DeleteObjectsRequest request = new DeleteObjectsRequest("mubucketname").withKeys(keys);
DeleteObjectsResult result = s3client.deleteObjects(request);
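As a sanity check, you can also iterate the result to confirm which keys were actually removed (a small addition to the snippet above, reusing the same result variable):

for (DeleteObjectsResult.DeletedObject deleted : result.getDeletedObjects()) {
    System.out.println("Deleted: " + deleted.getKey());
}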
I'm getting a CmisObjectNotFoundException while accessing the root folder, even though I already have many documents uploaded to the repository. I'm able to fetch the repository ID, but getRootFolder throws an error:
could not fetch folder due to org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException: Object not found
s = d.getSession().session;
p.append("Successfully established session \n");
p.append("id:" + s.getRepositoryInfo().getId() + "\n");
try {
    Folder folder = s.getRootFolder();
} catch (Exception e) {
    p.append("could not fetch folder due to " + e.toString() + "\n");
}
I'm able to get the root folder now, after creating a new repository. But now I'm facing a problem applying ACLs.
When I try to apply an ACL to the root folder I get a CmisObjectNotFoundException.
When I apply an ACL to subfolders it works, but the permissions are not applied correctly. I want to give user1 all permissions and user2 read permission. But now user1 is not even able to view the folder, and user2 is able to do everything except download.
I have referred to this link for doing so: sap-link
response.getWriter().println("<html><body>");
try {
// Use a unique name with package semantics e.g. com.foo.MyRepository
String uniqueName = "com.vat.VatDocumentsRepo";
// Use a secret key only known to your application (min. 10 chars)
String secretKey = "****";
Session openCmisSession = null;
InitialContext ctx = new InitialContext();
String lookupName = "java:comp/env/" + "EcmService";
EcmService ecmSvc = (EcmService) ctx.lookup(lookupName);
try {
// connect to my repository
openCmisSession = ecmSvc.connect(uniqueName, secretKey);
}
catch (CmisObjectNotFoundException e) {
// repository does not exist, so try to create it
RepositoryOptions options = new RepositoryOptions();
options.setUniqueName(uniqueName);
options.setRepositoryKey(secretKey);
options.setVisibility(Visibility.PROTECTED);
ecmSvc.createRepository(options);
// should be created now, so connect to it
openCmisSession = ecmSvc.connect(uniqueName, secretKey);
openCmisSession.getDefaultContext().setIncludeAcls(true);
openCmisSession.getDefaultContext().setIncludeAllowableActions(true);
openCmisSession.getDefaultContext().setIncludePolicies(false);
}
response.getWriter().println(
"<h3>You are now connected to the Repository with Id "
+ openCmisSession.getRepositoryInfo().getId()
+ "</h3>");
Folder folder = openCmisSession.getRootFolder();
Map<String, String> newFolderProps = new HashMap<String, String>();
newFolderProps.put(PropertyIds.OBJECT_TYPE_ID, "cmis:folder");
newFolderProps.put(PropertyIds.NAME, "Attachments");
try {
folder.createFolder(newFolderProps);
} catch (CmisNameConstraintViolationException e) {
// Folder exists already, nothing to do
}
String userIdOfUser1 = "user1 ";
String userIdOfUser2 = "user2";
response.getWriter().println("<h3>Created By :"+folder.getCreatedBy()+"</h3>");
List<Ace> addAcl = new ArrayList<Ace>();
// build and add ACE for user U1
List<String> permissionsUser1 = new ArrayList<String>();
permissionsUser1.add("cmis:all");
Ace aceUser1 = openCmisSession.getObjectFactory().createAce(userIdOfUser1, permissionsUser1);
addAcl.add(aceUser1);
// build and add ACE for user U2
List<String> permissionsUser2 = new ArrayList<String>();
permissionsUser2.add("cmis:read");
Ace aceUser2 = openCmisSession.getObjectFactory().createAce(userIdOfUser2,
permissionsUser1);
addAcl.add(aceUser2);
response.getWriter().println("<b>Permissions for users"+addAcl.toString()+"</b>");
// list of ACEs which should be removed
List<Ace> removeAcl = new ArrayList<Ace>();
// build and add ACE for user {sap:builtin}everyone
List<String> permissionsEveryone = new ArrayList<String>();
permissionsEveryone.add("cmis:all");
Ace aceEveryone = openCmisSession.getObjectFactory().createAce(
"{sap:builtin}everyone", permissionsEveryone);
removeAcl.add(aceEveryone);
response.getWriter().println("<b>Removing Permissions for users"+removeAcl.toString()+"</b>");
ItemIterable<CmisObject> children = folder.getChildren();
response.getWriter().println("<h1> changing permissions of the following objects: </h1><ul>");
for (CmisObject o : children) {
response.getWriter().println("<li>");
if (o instanceof Folder) {
response.getWriter().println(" createdBy: " + o.getCreatedBy());
o.applyAcl(addAcl, removeAcl, AclPropagation.OBJECTONLY);
response.getWriter().println("Changed permission</li>");
} else {
Document doc = (Document) o;
response.getWriter().println(" createdBy: " + o.getCreatedBy() + " filesize: "
+ doc.getContentStreamLength() + " bytes");
doc.applyAcl(addAcl, removeAcl, AclPropagation.OBJECTONLY);
response.getWriter().println("Changed permission</li>");
}
}
response.getWriter().println("</ul>");
} catch (Exception e) {
response.getWriter().println("<h1>Error: "+e.toString()+"</h1>");
} finally {
response.getWriter().println("</body></html>");
}
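Two details in the snippet above are worth double-checking, because they line up with the symptoms described: userIdOfUser1 is declared with a trailing space ("user1 "), so its cmis:all ACE is applied to a non-existent user ID, and aceUser2 is built with permissionsUser1 instead of permissionsUser2, which grants user2 cmis:all rather than cmis:read. A minimal corrected sketch (user IDs are illustrative):

String userIdOfUser1 = "user1"; // no trailing space
String userIdOfUser2 = "user2";

List<String> permissionsUser1 = new ArrayList<String>();
permissionsUser1.add("cmis:all");
List<String> permissionsUser2 = new ArrayList<String>();
permissionsUser2.add("cmis:read");

List<Ace> addAcl = new ArrayList<Ace>();
addAcl.add(openCmisSession.getObjectFactory().createAce(userIdOfUser1, permissionsUser1));
// note: permissionsUser2 here, not permissionsUser1
addAcl.add(openCmisSession.getObjectFactory().createAce(userIdOfUser2, permissionsUser2));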
I am trying to create an HL7 message in Java and then print the resulting message. I am faking basic patient information and then adding the drug prescription information. Then I want to print the complete message, but I wasn't able to use the API correctly. I am new to using HL7, so I know I'm probably missing some required segments and maybe using the wrong ones; can you please help? This is my current code:
public RXO runDrugPrescriptionEvent(CMSGeneric cmsgen) {
    CMSDrugPrescriptionEvent cmsic = (CMSDrugPrescriptionEvent) cmsgen;
    ADT_A28 adt23 = new ADT_A28();
    try {
        adt23.initQuickstart("ADT", "A08", cmsic.getPDE_EVENT_ID());
        // We set the sex identity (male or female)
        if (cmsic.getBENE_SEX_IDENT_CD() == 1) {
            adt23.getPID().getSex().setValue("Male");
        } else {
            adt23.getPID().getSex().setValue("Female");
        }
        // We set a fake name and family name
        adt23.getPID().insertPatientName(0).getGivenName().setValue("CMS Name " + MainTest.NEXT_PATIENT_ID);
        adt23.getPID().insertPatientName(0).getFamilyName().setValue("CMS Family name " + MainTest.NEXT_PATIENT_ID);
        MainTest.NEXT_PATIENT_ID++;
        RXO rxo = new RXO(adt23, new DefaultModelClassFactory());
        rxo.getRxo1_RequestedGiveCode().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_CD());
        rxo.getRxo18_RequestedGiveStrength().setValue("" + cmsic.getPDE_DRUG_STR_CD());
        rxo.getRxo19_RequestedGiveStrengthUnits().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_STR_UNITS());
        rxo.getRxo5_RequestedDosageForm().getCe1_Identifier().setValue("" + cmsic.getPDE_DRUG_DOSE_CD());
        rxo.getRxo11_RequestedDispenseAmount().setValue("" + cmsic.getPDE_DRUG_QTY_DIS());
        HapiContext context = new DefaultHapiContext();
        Parser parser = context.getPipeParser();
        String encodedMessage = adt23.getParser().encode(rxo.getMessage());
        logger.debug("Printing Message:");
        logger.debug(encodedMessage);
        return rxo;
    } catch (IOException e) {
        System.out.println("IOException creating HL7 message. " + e.getMessage());
        e.printStackTrace();
    } catch (HL7Exception e) {
        System.out.println("HL7Exception creating HL7 message. " + e.getMessage());
        e.printStackTrace();
    }
    return null;
}
With this code, the logger prints the following message:
MSH|^~\&|||||20160331101349.8+0100||ADT^A08|110001|PDE-00001E6FADAD3F57|2.3
PID|||||CMS Family name 100~^CMS Name 100|||Female
But I was expecting to see the RXO segment as well. How can I achieve that?
I found that changing the message type from ADT_A28 to ORP_O10 gives me all the fields I need; ADT_A28 wasn't the appropriate message for the kind of information I had, and since the ADT_A28 structure doesn't define an RXO segment, a standalone RXO created against it is never encoded with the message. There's a complete example of how to set a great number of segments in this type of message here. Then I was able to print the complete message using the PipeParser:
HapiContext context = new DefaultHapiContext();
Parser parser = context.getPipeParser();
String encodedMessage = parser.encode(msg);
logger.debug("Printing EREncoded Message:");
logger.debug(encodedMessage);
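For completeness, here is a minimal, self-contained sketch of that approach, assuming the HAPI v2.4 model classes (the processing ID "P" and the absence of RXO population are illustrative):

import ca.uhn.hl7v2.DefaultHapiContext;
import ca.uhn.hl7v2.HapiContext;
import ca.uhn.hl7v2.model.v24.message.ORP_O10;
import ca.uhn.hl7v2.parser.Parser;

public class EncodeOrpExample {
    public static void main(String[] args) throws Exception {
        // initQuickstart populates the MSH segment (message type, timestamp, control ID)
        ORP_O10 msg = new ORP_O10();
        msg.initQuickstart("ORP", "O10", "P");

        // Encoding the message itself prints every populated segment of the
        // ORP_O10 structure, including RXO once its fields are set
        HapiContext context = new DefaultHapiContext();
        Parser parser = context.getPipeParser();
        System.out.println(parser.encode(msg));
    }
}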
I am trying to connect to my AWS S3 bucket to upload a file per these links' instructions.
http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadObjSingleOpJava.html
http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/credentials.html#credentials-specify-provider
For some reason when it tries to instantiate the AmazonS3Client object it throws an exception that's being swallowed and it exits my Struts Action. Because of this, I don't have much information to debug off of.
I've tried both the default credential profiles file approach (~/.aws/credentials) and the explicit secret and access key approach (new BasicAWSCredentials(access_key_id, secret_access_key)).
/**
 * Uses the secret key and access key to return an object for accessing AWS features
 * @return BasicAWSCredentials
 */
public static BasicAWSCredentials getAWSCredentials() {
    final Properties props = new Properties();
    try {
        props.load(Utils.class.getResourceAsStream("/somePropFile"));
        BasicAWSCredentials credObj = new BasicAWSCredentials(props.getProperty("accessKey"),
                props.getProperty("secretKey"));
        return credObj;
    } catch (IOException e) {
        log.error("getAWSCredentials IOException" + e.getMessage());
        return null;
    } catch (Exception e) {
        log.error("getAWSCredentials Exception: " + e.getMessage());
        e.printStackTrace();
        return null;
    }
}
********* Code attempting S3 Access **********
try {
    AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials());
    //AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());
    String fileKey = "catering/" + catering.getId() + fileUploadsFileName.get(i);
    System.out.println("Uploading a new object to S3 from a file\n");
    s3client.putObject(new PutObjectRequest(
            Utils.getS3BucketName(),
            fileKey, file));
    // Save Attachment record
    Attachment newAttach = new Attachment();
    newAttach.setFile_key(fileKey);
    newAttach.setFilename(fileUploadsFileName.get(i));
    newAttach.setFiletype(fileUploadsContentType.get(i));
    newAttach = aDao.add(newAttach);
} catch (AmazonServiceException ase) {
    System.out.println("Caught an AmazonServiceException, which " +
            "means your request made it " +
            "to Amazon S3, but was rejected with an error response" +
            " for some reason.");
    System.out.println("Error Message: " + ase.getMessage());
    System.out.println("HTTP Status Code: " + ase.getStatusCode());
    System.out.println("AWS Error Code: " + ase.getErrorCode());
    System.out.println("Error Type: " + ase.getErrorType());
    System.out.println("Request ID: " + ase.getRequestId());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (AmazonClientException ace) {
    System.out.println("Caught an AmazonClientException, which " +
            "means the client encountered " +
            "an internal error while trying to " +
            "communicate with S3, " +
            "such as not being able to access the network.");
    System.out.println("Error Message: " + ace.getMessage());
    fileErrors.add(fileUploadsFileName.get(i));
} catch (Exception e) {
    System.out.println("Error Message: " + e.getMessage());
}
It never makes it past the AmazonS3 s3client = new AmazonS3Client(Utils.getAWSCredentials()); line. I've verified that the BasicAWSCredentials object contains the correct field values. Based on this information, what might be going wrong to prevent the S3 client from connecting?
** EDIT **
I found this in the resulting stack trace that seems like useful information:
org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.ClientConfiguration
    at com.amazonaws.services.s3.AmazonS3Client.<init>(AmazonS3Client.java:384)
    at gearup.actions.CateringController.assignAttachments(CateringController.java:176)
    at gearup.actions.CateringController.update(CateringController.java:135)
Earlier I tried following a demo that created a ClientConfiguration object and set the protocol to HTTP. However, I ran into an issue where invoking the new ClientConfiguration() constructor threw a NullPointerException. Am I missing some requirement here?
It looks like your project is missing some dependencies.
You clearly have the aws-java-sdk-s3 jar configured in your project since it's resolving AmazonS3Client, but this jar also depends on aws-java-sdk-core. You need to add the core jar to your classpath.
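If you're using Maven, this amounts to adding the core artifact alongside the S3 one; a sketch, with an illustrative version that you should keep identical to your aws-java-sdk-s3 version:

<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-core</artifactId>
    <!-- illustrative version; must match aws-java-sdk-s3 -->
    <version>1.11.1034</version>
</dependency>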
This is totally weird, since aws-java-sdk-s3 explicitly depends on aws-java-sdk-core (see the pom.xml). Something is fishy here.
For me, it turned out to be a clash of Apache httpclient versions (one of my POMs pulled in an older version than the one the Amazon library uses).
I've heard from others of similar clashes, e.g. jackson.
So for anyone in this situation, I suggest you check the Dependency Hierarchy view when you open a pom.xml in Eclipse (or use mvn dependency:tree; see here for more info).
Also, check the first error message that AWS throws. It doesn't seem to be linked as the cause in the later stack traces, which only tell you something like java.lang.NoClassDefFoundError: Could not initialize class com.amazonaws.http.AmazonHttpClient.
I'm trying to upload multiple files to Amazon S3, all under the same key, by appending the files. I have a list of file names and want to upload/append the files in that order. I am pretty much exactly following this tutorial, but I am looping through each file and uploading each one as a part. Because the files are on HDFS (the Path is actually org.apache.hadoop.fs.Path), I am using an input stream to send the file data. Some pseudocode is below (I am commenting the blocks that are word for word from the tutorial):
// Create a list of UploadPartResponse objects. You get one of these for
// each part upload.
List<PartETag> partETags = new ArrayList<PartETag>();
// Step 1: Initialize.
InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(
        bk.getBucket(), bk.getKey());
InitiateMultipartUploadResult initResponse =
        s3Client.initiateMultipartUpload(initRequest);
try {
    int i = 1; // part number
    for (String file : files) {
        Path filePath = new Path(file);
        // Get the input stream and content length
        long contentLength = fss.get(branch).getFileStatus(filePath).getLen();
        InputStream is = fss.get(branch).open(filePath);
        long filePosition = 0;
        while (filePosition < contentLength) {
            // create request
            // upload part and add response to our list
            i++;
        }
    }
    // Step 3: Complete.
    CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(
            bk.getBucket(),
            bk.getKey(),
            initResponse.getUploadId(),
            partETags);
    s3Client.completeMultipartUpload(compRequest);
} catch (Exception e) {
    //...
}
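For reference, here is a sketch of what the elided "create request / upload part" step typically looks like with the v1 SDK (variable names follow the snippet above; the body itself is an assumption, since it isn't shown). Two things worth checking against the 400: S3 rejects any part other than the very last one that is under 5 MB, and when appending several files this way each file's final part can easily fall below that limit; an empty or otherwise invalid part list on CompleteMultipartUpload is also a known source of MalformedXML.

// Sketch of the elided step; every part except the last must be >= 5 MB
long partSize = Math.min(5L * 1024 * 1024, contentLength - filePosition);
UploadPartRequest uploadRequest = new UploadPartRequest()
        .withBucketName(bk.getBucket())
        .withKey(bk.getKey())
        .withUploadId(initResponse.getUploadId())
        .withPartNumber(i) // 1..10000, ascending
        .withInputStream(is)
        .withPartSize(partSize);
// Upload the part and remember its ETag for the complete call
partETags.add(s3Client.uploadPart(uploadRequest).getPartETag());
filePosition += partSize;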
However, I am getting the following error:
com.amazonaws.services.s3.model.AmazonS3Exception: The XML you provided was not well-formed or did not validate against our published schema (Service: Amazon S3; Status Code: 400; Error Code: MalformedXML; Request ID: 2C1126E838F65BB9), S3 Extended Request ID: QmpybmrqepaNtTVxWRM1g2w/fYW+8DPrDwUEK1XeorNKtnUKbnJeVM6qmeNcrPwc
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:1109)
at com.amazonaws.http.AmazonHttpClient.executeOneRequest(AmazonHttpClient.java:741)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:461)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:296)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3743)
at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2617)
If anyone knows what the cause of this error might be, I would greatly appreciate it. Alternatively, if there is a better way to concatenate a bunch of files into one S3 key, that would be great as well. I tried using Java's built-in SequenceInputStream, but that did not work. For reference, the total size of all the files could be as large as 10-15 GB.
I know it's probably a bit late, but it's worth giving my contribution.
I've managed to solve a similar problem using SequenceInputStream.
The trick is to calculate the total size of the result file and then feed the SequenceInputStream with an Enumeration<InputStream>. Since SequenceInputStream reads the streams in enumeration order, building the enumeration over the file list preserves the append order.
Here's some example code that might help:
public void combineFiles() {
    List<String> files = getFiles();
    long totalFileSize = files.stream()
            .map(this::getContentLength)
            .reduce(0L, (f, s) -> f + s);
    try {
        try (InputStream partialFile = new SequenceInputStream(getInputStreamEnumeration(files))) {
            ObjectMetadata resultFileMetadata = new ObjectMetadata();
            resultFileMetadata.setContentLength(totalFileSize);
            s3Client.putObject("bucketName", "resultFilePath", partialFile, resultFileMetadata);
        }
    } catch (IOException e) {
        LOG.error("An error occurred while combining files. {}", e);
    }
}

private Enumeration<? extends InputStream> getInputStreamEnumeration(List<String> files) {
    return new Enumeration<InputStream>() {
        private Iterator<String> fileNamesIterator = files.iterator();

        @Override
        public boolean hasMoreElements() {
            return fileNamesIterator.hasNext();
        }

        @Override
        public InputStream nextElement() {
            try {
                return new FileInputStream(Paths.get(fileNamesIterator.next()).toFile());
            } catch (FileNotFoundException e) {
                System.err.println(e.getMessage());
                throw new RuntimeException(e);
            }
        }
    };
}
Hope this helps!
I have created a new Glacier vault to use in development, and I set up SNS and SQS for job completion notifications.
I am using the Java SDK from AWS. I am able to successfully add archives to the vault, but I get an error when creating a retrieval job.
The code I am using is from the SDK:
InitiateJobRequest initJobRequest = new InitiateJobRequest()
.withVaultName(vaultName)
.withJobParameters(new JobParameters().withType("archive-retrieval").withArchiveId(archiveId));
I use the same code in Test and Production and it works fine, yet in development I get this error:
Status Code: 400, AWS Service: AmazonGlacier, AWS Request ID: xxxxxxxx, AWS Error Code: InvalidParameterValueException, AWS Error Message: Invalid vault name: arn:aws:glacier:us-west-2:xxxxxxx:vaults/xxxxxx
I know the vault name is correct and that the vault exists, as I use the same name to run the add-archive job and it completes fine.
I had a suspicion that the vault may take a bit of time after creation before it will allow retrieval requests, but I couldn't find any documentation to confirm this.
Anyone had any similar issues? Or know if there are delays on vaults before you can initiate a retrieval request?
try {
    // Get the S3 directory file.
    S3Object object = null;
    try {
        object = s3.getObject(new GetObjectRequest(s3BucketName, key));
    } catch (com.amazonaws.AmazonClientException e) {
        logger.error("Caught an AmazonClientException");
        logger.error("Error Message: " + e.getMessage());
        return;
    }
    // Show
    logger.info("\tContent-Type: "
            + object.getObjectMetadata().getContentType());
    GlacierS3Dir dir = GlacierS3Dir.digestS3GlacierDirectory(object
            .getObjectContent());
    logger.info("\tGlacier object ID is " + dir.getGlacierFileID());
    // Connect to Glacier
    ArchiveTransferManager atm = new ArchiveTransferManager(client, credentials);
    logger.info("\tVault: " + vaultName);
    // create a name
    File f = new File(key);
    String filename = f.getName();
    filename = path + filename.replace("dir", "tgz");
    logger.info("Downloading to '" + filename
            + "'. This will take up to 4 hours...");
    atm.download(vaultName, dir.getGlacierFileID(), new File(filename));
    logger.info("Done.");
} catch (AmazonServiceException ase) {
    logger.error("Caught an AmazonServiceException.");
    logger.error("Error Message: " + ase.getMessage());
    logger.error("HTTP Status Code: " + ase.getStatusCode());
    logger.error("AWS Error Code: " + ase.getErrorCode());
    logger.error("Error Type: " + ase.getErrorType());
    logger.error("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
    logger.error("Caught an AmazonClientException.");
    logger.error("Error Message: " + ace.getMessage());
}
Error message "Invalid vault name" means this archive is located in a different Vault. Proof link: https://forums.aws.amazon.com/message.jspa?messageID=446187