I'm getting a CmisObjectNotFoundException while accessing the root folder, even though I already have many documents uploaded to the repository. I'm able to fetch the repository ID, but getRootFolder throws:
could not fetch folder due to org.apache.chemistry.opencmis.commons.exceptions.CmisObjectNotFoundException: Object not found
s = d.getSession().session;
p.append("Successfully established session\n");
p.append("id: " + s.getRepositoryInfo().getId() + "\n");
try {
    Folder folder = s.getRootFolder();
} catch (Exception e) {
    p.append("could not fetch folder due to " + e.toString() + "\n");
}
I'm able to get the root folder now, after creating a new repository. But now I'm facing a problem applying ACLs.
When I try to apply an ACL to the root folder, I get a CmisObjectNotFoundException.
When I apply an ACL to subfolders it works, but the permissions are not applied correctly. I want to give user1 all permissions and user2 read permission. But now user1 is not even able to view the folder, and user2 is able to do everything except download.
I have referred to this link for doing so: sap-link
response.getWriter().println("<html><body>");
try {
    // Use a unique name with package semantics e.g. com.foo.MyRepository
    String uniqueName = "com.vat.VatDocumentsRepo";
    // Use a secret key only known to your application (min. 10 chars)
    String secretKey = "****";
    Session openCmisSession = null;
    InitialContext ctx = new InitialContext();
    String lookupName = "java:comp/env/" + "EcmService";
    EcmService ecmSvc = (EcmService) ctx.lookup(lookupName);
    try {
        // connect to my repository
        openCmisSession = ecmSvc.connect(uniqueName, secretKey);
    } catch (CmisObjectNotFoundException e) {
        // repository does not exist, so try to create it
        RepositoryOptions options = new RepositoryOptions();
        options.setUniqueName(uniqueName);
        options.setRepositoryKey(secretKey);
        options.setVisibility(Visibility.PROTECTED);
        ecmSvc.createRepository(options);
        // should be created now, so connect to it
        openCmisSession = ecmSvc.connect(uniqueName, secretKey);
        openCmisSession.getDefaultContext().setIncludeAcls(true);
        openCmisSession.getDefaultContext().setIncludeAllowableActions(true);
        openCmisSession.getDefaultContext().setIncludePolicies(false);
    }
    response.getWriter().println(
            "<h3>You are now connected to the Repository with Id "
                    + openCmisSession.getRepositoryInfo().getId()
                    + "</h3>");
    Folder folder = openCmisSession.getRootFolder();
    Map<String, String> newFolderProps = new HashMap<String, String>();
    newFolderProps.put(PropertyIds.OBJECT_TYPE_ID, "cmis:folder");
    newFolderProps.put(PropertyIds.NAME, "Attachments");
    try {
        folder.createFolder(newFolderProps);
    } catch (CmisNameConstraintViolationException e) {
        // Folder exists already, nothing to do
    }
    String userIdOfUser1 = "user1"; // no trailing whitespace in the principal ID
    String userIdOfUser2 = "user2";
    response.getWriter().println("<h3>Created By: " + folder.getCreatedBy() + "</h3>");
    List<Ace> addAcl = new ArrayList<Ace>();
    // build and add ACE for user1
    List<String> permissionsUser1 = new ArrayList<String>();
    permissionsUser1.add("cmis:all");
    Ace aceUser1 = openCmisSession.getObjectFactory().createAce(userIdOfUser1, permissionsUser1);
    addAcl.add(aceUser1);
    // build and add ACE for user2
    List<String> permissionsUser2 = new ArrayList<String>();
    permissionsUser2.add("cmis:read");
    Ace aceUser2 = openCmisSession.getObjectFactory().createAce(userIdOfUser2,
            permissionsUser2); // was permissionsUser1, which gave user2 full rights
    addAcl.add(aceUser2);
    response.getWriter().println("<b>Permissions for users: " + addAcl.toString() + "</b>");
    // list of ACEs which should be removed
    List<Ace> removeAcl = new ArrayList<Ace>();
    // build and add ACE for user {sap:builtin}everyone
    List<String> permissionsEveryone = new ArrayList<String>();
    permissionsEveryone.add("cmis:all");
    Ace aceEveryone = openCmisSession.getObjectFactory().createAce(
            "{sap:builtin}everyone", permissionsEveryone);
    removeAcl.add(aceEveryone);
    response.getWriter().println("<b>Removing permissions for users: " + removeAcl.toString() + "</b>");
    ItemIterable<CmisObject> children = folder.getChildren();
    response.getWriter().println("<h1>Changing permissions of the following objects:</h1><ul>");
    for (CmisObject o : children) {
        response.getWriter().println("<li>");
        if (o instanceof Folder) {
            response.getWriter().println(" createdBy: " + o.getCreatedBy());
            o.applyAcl(addAcl, removeAcl, AclPropagation.OBJECTONLY);
            response.getWriter().println("Changed permission</li>");
        } else {
            Document doc = (Document) o;
            response.getWriter().println(" createdBy: " + o.getCreatedBy() + " filesize: "
                    + doc.getContentStreamLength() + " bytes");
            doc.applyAcl(addAcl, removeAcl, AclPropagation.OBJECTONLY);
            response.getWriter().println("Changed permission</li>");
        }
    }
    response.getWriter().println("</ul>");
} catch (Exception e) {
    response.getWriter().println("<h1>Error: " + e.toString() + "</h1>");
} finally {
    response.getWriter().println("</body></html>");
}
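For reference, the ACE-building step can be sketched SDK-free to make the intended inputs explicit. This is only an illustration (the class and map layout are mine, not part of the OpenCMIS API): each principal must get its own permission list, and principal IDs should be trimmed so that "user1 " and "user1" do not end up as two different principals.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AclInputs {
    // Build principal -> permissions; trim IDs so stray whitespace
    // does not silently create an unknown principal.
    public static Map<String, List<String>> aclInputs(String user1, String user2) {
        Map<String, List<String>> acl = new LinkedHashMap<>();
        acl.put(user1.trim(), Arrays.asList("cmis:all"));
        acl.put(user2.trim(), Arrays.asList("cmis:read")); // its own list, not user1's
        return acl;
    }

    public static void main(String[] args) {
        System.out.println(aclInputs("user1 ", "user2"));
        // {user1=[cmis:all], user2=[cmis:read]}
    }
}
```

Each entry of this map would then be passed to `ObjectFactory.createAce(principal, permissions)` as in the code above.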
Is it possible to delete a folder (in an S3 bucket) and all its content with a single API request using the Java SDK for AWS? In the browser console we can delete a folder and its content with a single click, and I'd expect the same behavior to be available through the APIs.
There is no such thing as folders in S3. There are simply files (objects) with slashes in the filenames (keys).
The S3 browser console will visualize these slashes as folders, but they're not real.
You can delete all files with the same prefix, but first you need to look them up with listObjects(), then you can batch delete them.
For code snippet using Java SDK, please refer to Deleting multiple objects.
You can specify keyPrefix in ListObjectsRequest.
For example, consider a bucket that contains the following keys:
foo/bar/baz
foo/bar/bash
foo/bar/bang
foo/boo
And you want to delete the files under the foo/bar prefix.
if (s3Client.doesBucketExist(bucketName)) {
    ListObjectsRequest listObjectsRequest = new ListObjectsRequest()
            .withBucketName(bucketName)
            .withPrefix("foo/bar");
    ObjectListing objectListing = s3Client.listObjects(listObjectsRequest);
    while (true) {
        for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
            s3Client.deleteObject(bucketName, objectSummary.getKey());
        }
        if (objectListing.isTruncated()) {
            objectListing = s3Client.listNextBatchOfObjects(objectListing);
        } else {
            break;
        }
    }
}
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/ListObjectsRequest.html
There is no option of giving a folder name, or more specifically a prefix, in the Java SDK to delete files. But there is an option of giving an array of keys you want to delete. By using this, I have written a small method to delete all files corresponding to a prefix.
private AmazonS3 s3client = <Your s3 client>;
private String bucketName = <your bucket name>;

public void deleteDirectory(String prefix) {
    // Note: listObjects returns at most one page (up to 1000 keys); for larger
    // "directories", keep calling listNextBatchOfObjects until no longer truncated.
    ObjectListing objectList = this.s3client.listObjects(this.bucketName, prefix);
    List<S3ObjectSummary> objectSummaryList = objectList.getObjectSummaries();
    String[] keysList = new String[objectSummaryList.size()];
    int count = 0;
    for (S3ObjectSummary summary : objectSummaryList) {
        keysList[count++] = summary.getKey();
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucketName).withKeys(keysList);
    this.s3client.deleteObjects(deleteObjectsRequest);
}
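One caveat worth noting: a single DeleteObjects request accepts at most 1000 keys, and listings also page at 1000, so for larger prefixes the key array needs to be sent in batches. A minimal, SDK-free sketch of the chunking (class and method names here are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class KeyBatcher {
    // Split a list of object keys into batches of at most batchSize
    // (S3 DeleteObjects accepts up to 1000 keys per request).
    public static List<List<String>> batches(List<String> keys, int batchSize) {
        List<List<String>> out = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += batchSize) {
            out.add(keys.subList(i, Math.min(i + batchSize, keys.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 2500; i++) keys.add("foo/bar/key-" + i);
        List<List<String>> parts = batches(keys, 1000);
        System.out.println(parts.size());        // 3
        System.out.println(parts.get(2).size()); // 500
    }
}
```

Each batch would then go into its own `DeleteObjectsRequest.withKeys(...)` call.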
You can try the methods below; they handle deletion even for truncated pages, and recursively delete all the contents of the given directory:
public Set<String> listS3DirFiles(String bucket, String dirPrefix) {
    ListObjectsV2Request s3FileReq = new ListObjectsV2Request()
            .withBucketName(bucket)
            .withPrefix(dirPrefix)
            .withDelimiter("/");
    Set<String> filesList = new HashSet<>();
    ListObjectsV2Result objectsListing;
    try {
        do {
            objectsListing = amazonS3.listObjectsV2(s3FileReq);
            objectsListing.getCommonPrefixes().forEach(folderPrefix -> {
                filesList.add(folderPrefix);
                Set<String> tempPrefix = listS3DirFiles(bucket, folderPrefix);
                filesList.addAll(tempPrefix);
            });
            for (S3ObjectSummary summary : objectsListing.getObjectSummaries()) {
                filesList.add(summary.getKey());
            }
            s3FileReq.setContinuationToken(objectsListing.getNextContinuationToken());
        } while (objectsListing.isTruncated());
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return filesList;
}
public boolean deleteDirectoryContents(String bucket, String directoryPrefix) {
    Set<String> keysSet = listS3DirFiles(bucket, directoryPrefix);
    if (keysSet.isEmpty()) {
        System.out.println("Given directory " + directoryPrefix + " doesn't have any files");
        return false;
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucket)
            .withKeys(keysSet.toArray(new String[0]));
    try {
        amazonS3.deleteObjects(deleteObjectsRequest);
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return true;
}
First you need to fetch all object keys starting with the given prefix:
public List<String> list(String keyPrefix) {
    var objectListing = client.listObjects("bucket-name", keyPrefix);
    var paths =
            objectListing.getObjectSummaries().stream()
                    .map(S3ObjectSummary::getKey)
                    .collect(Collectors.toList());
    while (objectListing.isTruncated()) {
        objectListing = client.listNextBatchOfObjects(objectListing);
        paths.addAll(
                objectListing.getObjectSummaries().stream()
                        .map(S3ObjectSummary::getKey)
                        .toList());
    }
    return paths.stream().sorted().collect(Collectors.toList());
}
Then call deleteObjects:
client.deleteObjects(new DeleteObjectsRequest("bucket-name").withKeys(list("some-prefix").toArray(new String[0])));
You can try this
void deleteS3Folder(String bucketName, String folderPath) {
    // Note: handles only the first page of results (up to 1000 keys).
    for (S3ObjectSummary file : s3.listObjects(bucketName, folderPath).getObjectSummaries()) {
        s3.deleteObject(bucketName, file.getKey());
    }
}
I have to scan a whole Data Lake file system. I have code like:
PagedIterable<PathItem> pItems = ((DataLakeFileSystemClient) prmParent).listPaths();
for (PathItem pItem : pItems) {
    if (pItem.isDirectory()) {
        ((DataLakeFileSystemClient) prmParent).getDirectoryClient(pItem.getName());
    } else {
        ((DataLakeFileSystemClient) prmParent).getFileClient(pItem.getName());
    }
}
I get the top-level dirs/files. But to drill down there would need to be something like a listChild() method on the DataLakeDirectoryClient class, and I did not find anything similar.
Does anybody know the proper way to walk through the tree?
Thanks. Sergiy
If you want to list all paths in an Azure Data Lake Gen2 file system, please refer to the following code:
StorageSharedKeyCredential credential = new StorageSharedKeyCredential(accountName, accountKey);
String endpoint = String.format(Locale.ROOT, "https://%s.dfs.core.windows.net", accountName);
DataLakeServiceClient storageClient = new DataLakeServiceClientBuilder()
        .endpoint(endpoint)
        .credential(credential)
        .buildClient();

DataLakeFileSystemClient dataLakeFileSystemClient = storageClient.getFileSystemClient("test");
ListPathsOptions options = new ListPathsOptions();
options.setRecursive(true);
PagedIterable<PathItem> pItems = dataLakeFileSystemClient.listPaths(options, null);
for (PathItem pItem : pItems) {
    if (pItem.isDirectory()) {
        System.out.println("The directory: " + pItem.getName());
    } else {
        System.out.println("The file: " + pItem.getName());
    }
}
For more details, please refer to here
Is there any way to get the timestamp of changed, deleted, and newly added files from JGit? I have the code below, which walks the tree and gets me these files, but I am not able to figure out how to get the timestamp of those files.
public static Map<String, Object> diffFormatter(Git git, ObjectId lastCommitId) throws IOException {
    Map<String, Object> m = new HashMap<String, Object>();
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    DiffFormatter formatter = new DiffFormatter(out);
    formatter.setRepository(git.getRepository());
    AbstractTreeIterator commitTreeIterator = prepareTreeParser(git.getRepository(), lastCommitId);
    FileTreeIterator workTreeIterator = new FileTreeIterator(git.getRepository());
    List<DiffEntry> diffEntries = formatter.scan(commitTreeIterator, workTreeIterator);
    Set<String> changedFiles = new HashSet<String>();
    Set<String> newlyAddedFiles = new HashSet<String>();
    Set<String> deletedFiles = new HashSet<String>();
    if (diffEntries.isEmpty()) {
        return m;
    }
    for (DiffEntry entry : diffEntries) {
        if (entry.getChangeType() == ChangeType.ADD) {
            newlyAddedFiles.add(entry.getNewPath());
            // newlyAddedFiles.add(entry.getNewPath() + ":" + "file_timestamp");
        } else if (entry.getChangeType() == ChangeType.DELETE) {
            deletedFiles.add(entry.getOldPath());
            // deletedFiles.add(entry.getOldPath() + ":" + "file_timestamp");
        } else {
            formatter.format(entry);
            changedFiles.add(entry.getNewPath());
            // changedFiles.add(entry.getNewPath() + ":" + "file_timestamp");
        }
    }
    m.put(Constants.CHANGED_FILE_STR, changedFiles);
    m.put(Constants.NEWLY_ADDED_FILE_STR, newlyAddedFiles);
    m.put(Constants.DELETED_FILE_STR, deletedFiles);
    return m;
}
Git does not store file modification timestamps. What is stored, however, is when the commit was created.
This information can be obtained with RevCommit::getCommitTime()
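For a per-file timestamp you can look up the newest commit touching each path (e.g. `git.log().addPath(entry.getNewPath()).setMaxCount(1).call()` in JGit) and convert its commit time. getCommitTime() returns an int of seconds since the epoch, so it must be widened before conversion; a stdlib-only sketch of that conversion (the JGit call itself appears only in the comment):

```java
import java.time.Instant;

public class CommitTimes {
    // RevCommit.getCommitTime() returns seconds since the epoch as an int.
    // Obtain the commit with e.g.:
    //   RevCommit c = git.log().addPath(path).setMaxCount(1).call().iterator().next();
    public static Instant toInstant(int commitTimeSeconds) {
        return Instant.ofEpochSecond(commitTimeSeconds);
    }

    public static void main(String[] args) {
        System.out.println(toInstant(0)); // 1970-01-01T00:00:00Z
    }
}
```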
I have a problem that needs solving: we use OpenOffice 1.1.4 templated reports and programmatically export them to PDF.
The team who create the templates have recently changed the header image, and some images in a table, to background images (before, they were just inserted). Since this change, the current program is not creating the PDFs with the images. We can export from OpenOffice manually and the images are included. Can anyone help with a change I may need to make to get these background images included?
The current code:
private void print(XInterface xComponent, PrintRequestDTO printReq, File sourceFile,
        Vector<String> pages) throws java.lang.Exception {
    String pageRange;
    // create the PDF via the OOo export facility
    com.sun.star.frame.XStorable pdfCreator = (com.sun.star.frame.XStorable) UnoRuntime
            .queryInterface(com.sun.star.frame.XStorable.class, xComponent);
    PropertyValue[] outputOpts = new PropertyValue[2];
    outputOpts[0] = new PropertyValue();
    outputOpts[0].Name = "CompressionMode";
    outputOpts[0].Value = "1"; // XXX Change this perhaps?
    outputOpts[1] = new PropertyValue();
    outputOpts[1].Name = "PageRange";
    if (printReq.getPageRange() == null || printReq.getPageRange().length() == 0) {
        pageRange = "1-";
    } else {
        pageRange = printReq.getPageRange();
    }
    log.debug("Print Instruction - page range = " + pageRange);
    PropertyValue[] filterOpts = new PropertyValue[3];
    filterOpts[0] = new PropertyValue();
    filterOpts[0].Name = "FilterName";
    filterOpts[0].Value = "writer_pdf_Export"; // Writer-to-PDF export filter
    filterOpts[1] = new PropertyValue();
    filterOpts[1].Name = "Overwrite";
    filterOpts[1].Value = Boolean.TRUE;
    filterOpts[2] = new PropertyValue();
    filterOpts[2].Name = "FilterData";
    filterOpts[2].Value = outputOpts;
    if (pages.size() == 0) { // i.e. no forced page breaks
        // set page range
        outputOpts[1].Value = pageRange;
        filterOpts[2] = new PropertyValue();
        filterOpts[2].Name = "FilterData";
        filterOpts[2].Value = outputOpts;
        File outputFile = new File(sourceFile.getParent(),
                printReq.getOutputFileName() + ".pdf");
        StringBuffer sPDFUrl = new StringBuffer("file:///");
        sPDFUrl.append(outputFile.getCanonicalPath().replace('\\', '/'));
        log.debug("PDF file = " + sPDFUrl.toString());
        if (pdfCreator != null) {
            sleep();
            pdfCreator.storeToURL(sPDFUrl.toString(), filterOpts);
        }
    } else if (pages.size() > 1) {
        throw new PrintDocumentException("Only one forced split catered for currently");
    } else { // a forced split exists
        log.debug("Page break found in " + (String) pages.firstElement());
        String[] newPageRanges = calculatePageRanges((String) pages.firstElement(), pageRange);
        int rangeCount = newPageRanges.length;
        for (int i = 0; i < rangeCount; i++) {
            outputOpts[1].Value = newPageRanges[i];
            log.debug("page range = " + newPageRanges[i]);
            filterOpts[2] = new PropertyValue();
            filterOpts[2].Name = "FilterData";
            filterOpts[2].Value = outputOpts;
            String fileExtension = (i == 0 && rangeCount > 1) ? "__Summary.pdf" : ".pdf";
            File outputFile = new File(sourceFile.getParent(),
                    printReq.getOutputFileName() + fileExtension);
            StringBuffer sPDFUrl = new StringBuffer("file:///");
            sPDFUrl.append(outputFile.getCanonicalPath().replace('\\', '/'));
            log.debug("PDF file = " + sPDFUrl.toString());
            if (pdfCreator != null) {
                log.debug("about to create the PDF file");
                sleep();
                pdfCreator.storeToURL(sPDFUrl.toString(), filterOpts);
                log.debug("done");
            }
        }
    }
}
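As an aside, the page-range defaulting in the method above can be collapsed into a small pure helper, which is easier to test in isolation (a sketch; the class and method names are mine):

```java
public class PageRanges {
    // Default to "1-" (all pages) when no explicit range is given,
    // mirroring the null/empty checks in the print method above.
    public static String pageRangeOrDefault(String requested) {
        return (requested == null || requested.isEmpty()) ? "1-" : requested;
    }

    public static void main(String[] args) {
        System.out.println(pageRangeOrDefault(null));  // 1-
        System.out.println(pageRangeOrDefault("2-5")); // 2-5
    }
}
```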
Thanks in advance.
Glad the suggestion of making the document visible helped. Since it has ALSO fixed the problem, you have a timing/threading issue. I suspect you'll find that another dodgy option, doing a sleep before executing the save to PDF, will also allow the images to appear. Neither of these solutions is good.
The best fix is most likely to upgrade to a newer version of OpenOffice (the API calls you have should still work). Another option would be to call the API to ask the document to refresh itself.
After finding the correct property, I was able to open the file with the hidden property set to false, which meant that when the file was exported to PDF it included the background images. It's a shame I could not find another solution that kept the file hidden, but at least it's working.
I am searching for a .txt file that is located in a change set.
Then I need to create locally, on my PC, the full path directory of this file.
For example, there is a file called "test.txt" located at:
Project1-->Folder1-->Folder2-->test.txt
So far I have managed to search for this file.
Now I need to fetch the full directory and create a similar one on my PC.
Result on my PC:
Folder1-->Folder2-->test.txt
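Once the repository path is known, recreating the directory tree locally is plain java.io work, separate from the RTC API. A sketch under the assumption that the path arrives as a '/'-separated string (all names here are illustrative):

```java
import java.io.File;
import java.io.IOException;

public class MirrorPath {
    // Recreate e.g. "Folder1/Folder2" under a local root and
    // return the File the repository content should be written to.
    public static File localTarget(File localRoot, String repoPath, String fileName) throws IOException {
        File dir = new File(localRoot, repoPath);
        if (!dir.isDirectory() && !dir.mkdirs()) {
            throw new IOException("Could not create " + dir);
        }
        return new File(dir, fileName);
    }

    public static void main(String[] args) throws IOException {
        File root = new File(System.getProperty("java.io.tmpdir"), "rtc-mirror");
        File target = localTarget(root, "Folder1/Folder2", "test.txt");
        System.out.println(target.getParentFile().isDirectory()); // true
    }
}
```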
This is what I did to search for the file within a change set and retrieve it:
public IFileItem getTextFileFile(IChangeSet changeSet, ITeamRepository repository) throws TeamRepositoryException {
    IVersionableManager vm = SCMPlatform.getWorkspaceManager(repository).versionableManager();
    List changes = changeSet.changes();
    IFileItem toReturn = null;
    for (int i = 0; i < changes.size(); i++) {
        Change change = (Change) changes.get(i);
        IVersionableHandle after = change.afterState();
        if (after != null && after instanceof IFileItemHandle) {
            IFileItem fileItem = (IFileItem) vm.fetchCompleteState(after, null);
            if (fileItem.getName().contains(".txt")) {
                toReturn = fileItem;
                break;
            }
        }
    }
    if (toReturn == null) {
        throw new TeamRepositoryException("Could not find the file");
    }
    return toReturn;
}
I use RTC 4 on Windows XP.
Thanks in advance.
I have the following IConfiguration that I fetched with the following:
IWorkspaceManager workspaceManager = SCMPlatform.getWorkspaceManager(repository);
IWorkspaceSearchCriteria wsSearchCriteria = WorkspaceSearchCriteria.FACTORY.newInstance();
wsSearchCriteria.setKind(IWorkspaceSearchCriteria.STREAMS);
wsSearchCriteria.setPartialOwnerNameIgnoreCase(projectAreaName);
List<IWorkspaceHandle> workspaceHandles = workspaceManager.findWorkspaces(wsSearchCriteria, Integer.MAX_VALUE, Application.getMonitor());
IWorkspaceConnection workspaceConnection = workspaceManager.getWorkspaceConnection(workspaceHandles.get(0), Application.getMonitor());
IComponentHandle component = changeSet.getComponent();
IConfiguration configuration = workspaceConnection.configuration(component);
List lst = new ArrayList();
lst = configuration.locateAncestors(lst, Application.getMonitor());
=========================================
Now, to get the full path of the file item, I made the following method, based on:
https://jazz.net/forum/questions/94927/how-do-i-find-moved-from-location-for-a-movedreparented-item-using-rtc-4-java-api
=========================================
private String getFullPath(List ancestor, ITeamRepository repository)
        throws TeamRepositoryException {
    String directoryPath = "";
    for (Object ancestorObj : ancestor) {
        IAncestorReport ancestorImpl = (IAncestorReport) ancestorObj;
        for (Object nameItemPairObj : ancestorImpl.getNameItemPairs()) {
            NameItemPairImpl nameItemPair = (NameItemPairImpl) nameItemPairObj;
            Object item = SCMPlatform.getWorkspaceManager(repository)
                    .versionableManager()
                    .fetchCompleteState(nameItemPair.getItem(), null);
            String pathName = "";
            if (item instanceof IFolder) {
                pathName = ((IFolder) item).getName();
            } else if (item instanceof IFileItem) {
                pathName = ((IFileItem) item).getName();
            }
            if (!pathName.equals(""))
                directoryPath = directoryPath + "\\" + pathName;
        }
    }
    return directoryPath;
}
=========================================