I need help recursively listing files in S3.
For example, I have an S3 structure like this:
My-bucket/2018/06/05/10/file1.json
My-bucket/2018/06/05/11/file2.json
My-bucket/2018/06/05/12/file3.json
My-bucket/2018/06/05/13/file5.json
My-bucket/2018/06/05/14/file4.json
My-bucket/2018/06/05/15/file6.json
I need to get all file paths, with the file name, for a given bucket.
I tried the following method, but it didn't work for me (it doesn't return the whole path):
public List<String> getObjectsListFromFolder4(String bucketName, String keyPrefix) {
    List<String> paths = new ArrayList<String>();
    String delimiter = "/";
    if (keyPrefix != null && !keyPrefix.isEmpty() && !keyPrefix.endsWith(delimiter)) {
        keyPrefix += delimiter;
    }
    ListObjectsRequest listObjectRequest = new ListObjectsRequest().withBucketName(bucketName)
            .withPrefix(keyPrefix).withDelimiter(delimiter);
    ObjectListing objectListing;
    do {
        objectListing = s3Client.listObjects(listObjectRequest);
        paths.addAll(objectListing.getCommonPrefixes());
        listObjectRequest.setMarker(objectListing.getNextMarker());
    } while (objectListing.isTruncated());
    return paths;
}
There is a new utility class — S3Objects — that provides an easy way to iterate Amazon S3 objects in a "foreach" statement. Use its withPrefix method and then just iterate them. You can use filters and streams as well.
Here is an example (Kotlin):
val s3 = AmazonS3ClientBuilder
    .standard()
    .withCredentials(EnvironmentVariableCredentialsProvider())
    .build()

S3Objects
    .withPrefix(s3, bucket, folder)
    .filter { s3ObjectSummary ->
        s3ObjectSummary.key.endsWith(".gz")
    }
    .parallelStream()
    .forEach { s3ObjectSummary ->
        CSVParser.parse(
            GZIPInputStream(s3.getObject(s3ObjectSummary.bucketName, s3ObjectSummary.key).objectContent),
            StandardCharsets.UTF_8,
            CSVFormat.DEFAULT
        ).use { csvParser ->
            …
        }
    }
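Since the question is in Java, here is a minimal Java sketch of the same approach (AWS SDK for Java v1; the bucket and prefix values are placeholders):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.iterable.S3Objects;
import com.amazonaws.services.s3.model.S3ObjectSummary;
import java.util.ArrayList;
import java.util.List;

public static List<String> listKeys(AmazonS3 s3, String bucket, String prefix) {
    List<String> keys = new ArrayList<>();
    // S3Objects pages through the listing internally and yields one summary per object
    for (S3ObjectSummary summary : S3Objects.withPrefix(s3, bucket, prefix)) {
        keys.add(summary.getKey()); // full key, e.g. "2018/06/05/10/file1.json"
    }
    return keys;
}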
getCommonPrefixes() only lists the prefixes, not the actual keys. From the documentation:
For example, consider a bucket that contains the following keys:
"foo/bar/baz"
"foo/bar/bash"
"foo/bar/bang"
"foo/boo"
If calling listObjects with the prefix="foo/" and the delimiter="/" on this bucket, the returned S3ObjectListing will contain one entry in the common prefixes list ("foo/bar/") and none of the keys beginning with that common prefix will be included in the object summaries list.
Instead, use getObjectSummaries() to get the keys. You also need to remove withDelimiter(); setting a delimiter causes S3 to list only the items in the current 'directory.' This method works for me:
public static List<String> getObjectsListFromS3(AmazonS3 s3, String bucket, String prefix) {
    final String delimiter = "/";
    if (!prefix.endsWith(delimiter)) {
        prefix = prefix + delimiter;
    }
    List<String> paths = new LinkedList<>();
    ListObjectsRequest request = new ListObjectsRequest().withBucketName(bucket).withPrefix(prefix);
    ObjectListing result;
    do {
        result = s3.listObjects(request);
        for (S3ObjectSummary summary : result.getObjectSummaries()) {
            // Make sure we are not adding a 'folder' placeholder
            if (!summary.getKey().endsWith(delimiter)) {
                paths.add(summary.getKey());
            }
        }
        // Use the *next* marker; reusing the current marker would repeat the same page
        request.setMarker(result.getNextMarker());
    } while (result.isTruncated());
    return paths;
}
Consider an S3 bucket that contains the following keys:
particle.fs
test/
test/blur.fs
test/blur.vs
test/subtest/particle.fs
With this driver code:
public static void main(String[] args) {
    String bucket = "playground-us-east-1-1234567890";
    AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build();
    String prefix = "test";
    for (String key : getObjectsListFromS3(s3, bucket, prefix)) {
        System.out.println(key);
    }
}
produces:
test/blur.fs
test/blur.vs
test/subtest/particle.fs
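On recent versions of the v1 SDK, the same listing can also be written against the V2 list API with continuation tokens; a minimal sketch (same bucket/prefix semantics as above):

public static List<String> listKeysV2(AmazonS3 s3, String bucket, String prefix) {
    List<String> keys = new ArrayList<>();
    ListObjectsV2Request request = new ListObjectsV2Request().withBucketName(bucket).withPrefix(prefix);
    ListObjectsV2Result result;
    do {
        result = s3.listObjectsV2(request);
        for (S3ObjectSummary summary : result.getObjectSummaries()) {
            keys.add(summary.getKey());
        }
        // Continue where the previous page left off
        request.setContinuationToken(result.getNextContinuationToken());
    } while (result.isTruncated());
    return keys;
}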
Here is an example of how to get all files in a directory; hope it can help you:
public static List<String> getAllFile(String directoryPath, boolean isAddDirectory) {
    List<String> list = new ArrayList<String>();
    File baseFile = new File(directoryPath);
    if (baseFile.isFile() || !baseFile.exists()) {
        return list;
    }
    File[] files = baseFile.listFiles();
    for (File file : files) {
        if (file.isDirectory()) {
            if (isAddDirectory) {
                list.add(file.getAbsolutePath());
            }
            list.addAll(getAllFile(file.getAbsolutePath(), isAddDirectory));
        } else {
            list.add(file.getAbsolutePath());
        }
    }
    return list;
}
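For comparison, on Java 8+ the same recursive listing (files only) can be done without explicit recursion using java.nio.file.Files.walk; a minimal sketch:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public static List<String> getAllFiles(String directoryPath) throws IOException {
    try (Stream<Path> walk = Files.walk(Paths.get(directoryPath))) {
        return walk.filter(Files::isRegularFile) // skip directories
                   .map(Path::toString)
                   .collect(Collectors.toList());
    }
}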
Related
Is it possible to delete a folder (in an S3 bucket) and all its contents with a single API request using the Java SDK for AWS? In the browser console we can delete a folder and its contents with a single click, and I hope the same behavior is available using the APIs as well.
There is no such thing as folders in S3. There are simply files (objects) with slashes in the filenames (keys).
The S3 browser console will visualize these slashes as folders, but they're not real.
You can delete all files with the same prefix, but first you need to look them up with listObjects(), then you can batch delete them.
For code snippet using Java SDK, please refer to Deleting multiple objects.
You can specify keyPrefix in ListObjectsRequest.
For example, consider a bucket that contains the following keys:
foo/bar/baz
foo/bar/bash
foo/bar/bang
foo/boo
And you want to delete files from foo/bar/baz.
if (s3Client.doesBucketExist(bucketName)) {
    ListObjectsRequest listObjectsRequest = new ListObjectsRequest()
            .withBucketName(bucketName)
            .withPrefix("foo/bar/baz");
    ObjectListing objectListing = s3Client.listObjects(listObjectsRequest);
    while (true) {
        for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
            s3Client.deleteObject(bucketName, objectSummary.getKey());
        }
        if (objectListing.isTruncated()) {
            objectListing = s3Client.listNextBatchOfObjects(objectListing);
        } else {
            break;
        }
    }
}
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/ListObjectsRequest.html
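Note that this deletes objects one at a time; the answers below instead batch the keys into a single DeleteObjectsRequest.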
There is no option to give a folder name, or more specifically a prefix, in the Java SDK to delete files. But there is an option to give an array of keys you want to delete. Click for details.
Using this, I have written a small method to delete all files corresponding to a prefix.
private AmazonS3 s3client = <Your s3 client>;
private String bucketName = <your bucket name, can be signed or unsigned>;

public void deleteDirectory(String prefix) {
    ObjectListing objectList = this.s3client.listObjects(this.bucketName, prefix);
    List<S3ObjectSummary> objectSummaryList = objectList.getObjectSummaries();
    String[] keysList = new String[objectSummaryList.size()];
    int count = 0;
    for (S3ObjectSummary summary : objectSummaryList) {
        keysList[count++] = summary.getKey();
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucketName).withKeys(keysList);
    this.s3client.deleteObjects(deleteObjectsRequest);
}
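One caveat: listObjects returns a single page of results (at most 1000 keys), so this deletes only the first batch; the next answer handles truncated listings.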
You can try the methods below; they handle deletion even across truncated pages, and they recursively delete all the contents in the given directory:
public Set<String> listS3DirFiles(String bucket, String dirPrefix) {
    ListObjectsV2Request s3FileReq = new ListObjectsV2Request()
            .withBucketName(bucket)
            .withPrefix(dirPrefix)
            .withDelimiter("/");
    Set<String> filesList = new HashSet<>();
    ListObjectsV2Result objectsListing;
    try {
        do {
            objectsListing = amazonS3.listObjectsV2(s3FileReq);
            objectsListing.getCommonPrefixes().forEach(folderPrefix -> {
                filesList.add(folderPrefix);
                Set<String> tempPrefix = listS3DirFiles(bucket, folderPrefix);
                filesList.addAll(tempPrefix);
            });
            for (S3ObjectSummary summary : objectsListing.getObjectSummaries()) {
                filesList.add(summary.getKey());
            }
            s3FileReq.setContinuationToken(objectsListing.getNextContinuationToken());
        } while (objectsListing.isTruncated());
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return filesList;
}
public boolean deleteDirectoryContents(String bucket, String directoryPrefix) {
    Set<String> keysSet = listS3DirFiles(bucket, directoryPrefix);
    if (keysSet.isEmpty()) {
        System.out.println("Given directory " + directoryPrefix + " doesn't have any files");
        return false;
    }
    DeleteObjectsRequest deleteObjectsRequest = new DeleteObjectsRequest(bucket)
            .withKeys(keysSet.toArray(new String[0]));
    try {
        amazonS3.deleteObjects(deleteObjectsRequest);
    } catch (SdkClientException e) {
        System.out.println(e.getMessage());
        throw e;
    }
    return true;
}
First you need to fetch all object keys starting with the given prefix:
public List<String> list(String keyPrefix) {
    var objectListing = client.listObjects("bucket-name", keyPrefix);
    var paths =
            objectListing.getObjectSummaries().stream()
                    .map(s3ObjectSummary -> s3ObjectSummary.getKey())
                    .collect(Collectors.toList());
    while (objectListing.isTruncated()) {
        objectListing = client.listNextBatchOfObjects(objectListing);
        paths.addAll(
                objectListing.getObjectSummaries().stream()
                        .map(s3ObjectSummary -> s3ObjectSummary.getKey())
                        .collect(Collectors.toList()));
    }
    return paths.stream().sorted().collect(Collectors.toList());
}
Then call deleteObjects:
client.deleteObjects(new DeleteObjectsRequest("bucket-name").withKeys(list("some-prefix")));
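Note that a single DeleteObjectsRequest accepts at most 1000 keys, so for a prefix with more objects than that you would need to send the deletes in chunks.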
You can try this
void deleteS3Folder(String bucketName, String folderPath) {
    for (S3ObjectSummary file : s3.listObjects(bucketName, folderPath).getObjectSummaries()) {
        s3.deleteObject(bucketName, file.getKey());
    }
}
Say my workspace has certain files in the root folder like foo.xml, foo1.xml, foo2.xml, foo3.xml.
final List<String> configFiles = new ArrayList<>();
configFiles.add("foo.xml");
configFiles.add("foo1.xml");
configFiles.add("Foo2.xml");

final List<IFile> iFiles = configFiles.stream()
        .map(project::getFile)
        .filter(IFile::exists)
        .collect(Collectors.toList());
When I call getFile on the project, IFile expects a case-sensitive file name: if there is foo2.xml in my workspace and I try to access Foo2.xml, I don't get the file.
How can I get files regardless of case?
I don't think there is a simple way.
You could call members() on the project:
IResource [] members = project.members();
and then match the member names using equalsIgnoreCase:
private IFile findFile(IResource[] members, String name) {
    for (IResource member : members) {
        if (name.equalsIgnoreCase(member.getName())) {
            if (member instanceof IFile) {
                return (IFile) member;
            }
            return null;
        }
    }
    return null;
}
so the stream would be:
final List<IFile> iFiles = configFiles.stream()
        .map(file -> findFile(members, file))
        .filter(Objects::nonNull)
        .collect(Collectors.toList());
I have one folder ("all_folders") which contains 5 sub-folders ("folder_1", "folder_2", "folder_3", "folder_4" and "folder_5").
Each of these sub-folders contains 2 text files with names like "file_1.txt", "file_2.txt" and so on.
Each text file contains the address of the next file; say "file_1.txt"'s content is GOTO "file_2.txt".
In the same manner a file can have multiple addresses, and those files in turn can have addresses of other files.
Basically it's like a tree. I want the user to input a file name, for which he wants to know all the addresses that file contains.
The output I want should be like a tree, i.e. file_10 contains the addresses of file_7, file_8 and file_9.
Again, file_9 contains the addresses of file_6 and file_4.
file_8 contains the address of file_5.
file_7 doesn't contain any file address, and so on...
I have attached an image of the output I want, along with the files and folders I have.
So far I have written the code below, which stores the addresses that file_10 contains (assuming the user entered file_10) in an array list and prints them.
But now I want this code to repeat until a file doesn't have any address (see the image for the required output).
I am planning to use a JTree to display the output as a tree, as shown in the image.
But that is the second step; first I need to get the output.
I need help on how to repeatedly call the function to show all file addresses.
Secondly, I am using an array list, but my concern is: do I need as many array lists as there are levels of parent-child relationships in my tree?
Because at present I only have 5 folders and 10 files, but that may increase, so there would be a lot of array lists.
Can you please help me achieve this output?
As this is a big piece of code I have tried to write comments wherever possible, but I apologize if I'm not following good practices, as I am a beginner.
Output Image:
Attached all_folder files:
https://drive.google.com/open?id=0B9hvL6YZBpoTRkVYV0dUWEU5V2M
My Code is as below:
import java.io.File;
import java.io.FileNotFoundException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.Scanner;

public class FindFile {
    String result;
    // This array list will store all file names from all the sub-folders of all_folders
    static ArrayList<String> storeAllFileName = new ArrayList<String>();
    static int i = 0;

    public void listFilesAndFilesSubDirectories(String directoryName) {
        File directory = new File(directoryName);
        File[] fList = directory.listFiles();
        for (File file : fList) {
            if (file.isFile()) {
                // Checking if the file is a text file
                if (file.getName().endsWith(".txt")) {
                    storeAllFileName.add(file.getName().toLowerCase());
                    i++;
                }
            } else if (file.isDirectory()) {
                listFilesAndFilesSubDirectories(file.getAbsolutePath());
            }
        }
    }

    public static void main(String[] args) throws FileNotFoundException {
        recurrenceFileFind();
    }

    public static void recurrenceFileFind() throws FileNotFoundException {
        FindFile findFile = new FindFile();
        // Hardcoded, assuming the user entered file_10.txt
        String fileName = "file_10.txt";
        // Hardcoded, assuming all the user's folders are placed in the C:\all_folders directory
        final String directoryName = "C:\\all_folders";
        findFile.listFilesAndFilesSubDirectories(directoryName);
        findFile.searchDirectory(new File(directoryName), fileName);
        System.out.println("\nFile Found at: " + findFile.getResult());
        // Passing the location the file was found at, so that we can now read the
        // text of the file and search for the addresses of child files
        String filedirectoryName = findFile.getResult();
        File file = new File(filedirectoryName);
        Scanner in = new Scanner(file);
        // This array list will store the content of the file
        ArrayList<String> viewText = new ArrayList<String>();
        while (in.hasNext()) {
            // Store the content of the file in the array list viewText
            viewText.add(in.next().toLowerCase());
        }
        // Copy the viewText array list to a new array list comparingList
        ArrayList<String> comparingList = new ArrayList<String>(viewText);
        // Keep only those addresses in comparingList for which we have a file with
        // that name in any of the sub-folders, as the file can have extra content
        // like GOTO or any other words
        comparingList.retainAll(storeAllFileName);
        System.out.println("\n\"" + file.getName() + "\"" + " contains below files:");
        // Print the addresses of the files which the parent file contains
        allListPrint(comparingList);
    }

    public void searchDirectory(File directory, String fileNameToSearch) {
        if (directory.isDirectory()) {
            search(directory, fileNameToSearch);
        } else {
            System.out.println(directory.getAbsoluteFile() + " is not a directory!");
        }
    }

    private void search(File directory, String fileNameToSearch) {
        if (directory.isDirectory()) {
            System.out.println("Searching directory ... " + directory.getAbsoluteFile());
            if (directory.canRead()) {
                for (File temp : directory.listFiles()) {
                    if (temp.isDirectory()) {
                        search(temp, fileNameToSearch);
                    } else {
                        if (fileNameToSearch.equalsIgnoreCase(temp.getName().toLowerCase())) {
                            result = temp.getAbsoluteFile().toString();
                        }
                    }
                }
            } else {
                System.out.println(directory.getAbsoluteFile() + " Permission Denied");
            }
        }
    }

    // Method to print an array list
    private static void allListPrint(ArrayList<String> list) {
        Iterator<String> itr = list.iterator();
        while (itr.hasNext()) {
            System.out.println(itr.next());
        }
    }

    public String getResult() {
        return result;
    }
}
Here is a recursive solution. I assume you can create the HashMap<String, Node> from the directory of files yourself; I just manually created such a HashMap to save time, but it's quite straightforward to do automatically. In one pass you read all the files and create a Node for each file, and in a second pass you update their children fields.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

class Node {
    String name;
    List<Node> children = new ArrayList<>();

    public Node(String name) {
        this.name = name;
    }
}
public class FileTree {
    // Recursive function for printing the child hierarchy
    public void retChildHierarchy(Node n) {
        if (n == null) {
            return;
        }
        for (Node child : n.children) {
            retChildHierarchy(child);
            System.out.println(child.name);
        }
    }

    public static void main(String[] args) {
        HashMap<String, Node> treeStructure = new HashMap<>();
        /* To save time, I manually create the nodes and update the HashMap of Nodes,
           but you can do it automatically. */
        Node f4 = new Node("file_4");
        Node f6 = new Node("file_6");
        Node f7 = new Node("file_7");
        Node f8 = new Node("file_8");
        Node f9 = new Node("file_9");
        Node f10 = new Node("file_10");
        // update f10
        f10.children.add(f9);
        f10.children.add(f8);
        f10.children.add(f7);
        // update f9
        f9.children.add(f6);
        f9.children.add(f4);
        treeStructure.put("file_4", f4);
        treeStructure.put("file_6", f6);
        treeStructure.put("file_7", f7);
        treeStructure.put("file_8", f8);
        treeStructure.put("file_9", f9);
        treeStructure.put("file_10", f10);
        FileTree ft = new FileTree();
        // Call the recursive function for the Node that you want:
        ft.retChildHierarchy(f9);
    }
}
And the output is as follows. Note that for f10 the recursive function works fine, but when manually updating f10 I didn't add files 5, 2, 3, and 1 to the list of its children.
ft.retChildHierarchy(f9);
file_6
file_4
ft.retChildHierarchy(f10);
file_6
file_4
file_9
file_8
file_7
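For the two-pass build described above, here is a hedged sketch; the buildTree helper and the quote-stripping are my assumptions based on the question's GOTO "file_x.txt" format:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Scanner;

// Hypothetical helper: pass 1 creates a Node per .txt file, pass 2 links children
// by matching tokens of each file's content against the known file names.
static Map<String, Node> buildTree(List<File> txtFiles) throws FileNotFoundException {
    Map<String, Node> nodes = new HashMap<>();
    for (File f : txtFiles) {
        nodes.put(f.getName().toLowerCase(), new Node(f.getName()));
    }
    for (File f : txtFiles) {
        Node parent = nodes.get(f.getName().toLowerCase());
        try (Scanner in = new Scanner(f)) {
            while (in.hasNext()) {
                // Strip surrounding quotes, assuming content like: GOTO "file_2.txt"
                String token = in.next().toLowerCase().replace("\"", "");
                Node child = nodes.get(token);
                if (child != null) {
                    parent.children.add(child);
                }
            }
        }
    }
    return nodes;
}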
I'm using java.util.Properties's store(Writer, String) method to store the properties. In the resulting text file, the properties are stored in a haphazard order.
This is what I'm doing:
Properties properties = createProperties();
properties.store(new FileWriter(file), null);
How can I ensure the properties are written out in alphabetical order, or in the order the properties were added?
I'm hoping for a solution simpler than "manually create the properties file".
As per "The New Idiot's" suggestion, this stores in alphabetical key order.
Properties tmp = new Properties() {
    @Override
    public synchronized Enumeration<Object> keys() {
        return Collections.enumeration(new TreeSet<Object>(super.keySet()));
    }
};
tmp.putAll(properties);
tmp.store(new FileWriter(file), null);
See https://github.com/etiennestuder/java-ordered-properties for a complete implementation that allows you to read/write properties files in a well-defined order.
OrderedProperties properties = new OrderedProperties();
properties.load(new FileInputStream(new File("~/some.properties")));
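Writing should work the same way; a sketch, assuming the library mirrors the java.util.Properties store(OutputStream, String) signature:

OrderedProperties properties = new OrderedProperties();
properties.setProperty("first", "1");
properties.setProperty("second", "2");
properties.store(new FileOutputStream(new File("~/some.properties")), null);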
Steve McLeod's answer used to work for me, but since Java 11, it doesn't.
The problem seemed to be entrySet() ordering, so here you go:
@SuppressWarnings("serial")
private static Properties newOrderedProperties() {
    return new Properties() {
        @Override
        public synchronized Set<Map.Entry<Object, Object>> entrySet() {
            return Collections.synchronizedSet(
                    super.entrySet()
                            .stream()
                            .sorted(Comparator.comparing(e -> e.getKey().toString()))
                            .collect(Collectors.toCollection(LinkedHashSet::new)));
        }
    };
}
I will warn that this is not fast by any means. It forces iteration over a LinkedHashSet which isn't ideal, but I'm open to suggestions.
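Usage is then the same as with a plain Properties object; a small example (the file name is arbitrary):

Properties props = newOrderedProperties();
props.setProperty("zebra", "1");
props.setProperty("apple", "2");
try (FileWriter writer = new FileWriter("sorted.properties")) {
    // On Java 11+, store() iterates entrySet(), so keys are written in sorted order
    props.store(writer, null);
}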
Using a TreeSet is dangerous! Because with CASE_INSENSITIVE_ORDER the strings "mykey", "MyKey" and "MYKEY" compare as equal, two of the keys will be omitted.
I use a List instead, to be sure to keep all keys.
List<Object> list = new ArrayList<>(super.keySet());
Comparator<Object> comparator = Comparator.comparing(Object::toString, String.CASE_INSENSITIVE_ORDER);
Collections.sort(list, comparator);
return Collections.enumeration(list);
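For context, the four lines above are meant as the body of the keys() override from the earlier answers; a minimal sketch of the full subclass:

Properties sorted = new Properties() {
    @Override
    public synchronized Enumeration<Object> keys() {
        List<Object> list = new ArrayList<>(super.keySet());
        Comparator<Object> comparator = Comparator.comparing(Object::toString, String.CASE_INSENSITIVE_ORDER);
        Collections.sort(list, comparator);
        return Collections.enumeration(list);
    }
};
sorted.putAll(properties); // 'properties' is the original Properties to write out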
The solution from Steve McLeod did not work when trying to sort case-insensitively.
This is what I came up with:
Properties newProperties = new Properties() {
    private static final long serialVersionUID = 4112578634029874840L;

    @Override
    public synchronized Enumeration<Object> keys() {
        Comparator<Object> byCaseInsensitiveString =
                Comparator.comparing(Object::toString, String.CASE_INSENSITIVE_ORDER);
        Supplier<TreeSet<Object>> supplier = () -> new TreeSet<>(byCaseInsensitiveString);
        TreeSet<Object> sortedSet = super.keySet().stream()
                .collect(Collectors.toCollection(supplier));
        return Collections.enumeration(sortedSet);
    }
};

// propertyMap is a simple LinkedHashMap<String, String>
newProperties.putAll(propertyMap);
File file = new File(filepath);
try (FileOutputStream fileOutputStream = new FileOutputStream(file, false)) {
    newProperties.store(fileOutputStream, null);
}
I'm having the same itch, so I implemented a simple kludge subclass that allows you to explicitly pre-define the order in which name/value pairs appear in one block and lexically order them in another block.
https://github.com/crums-io/io-util/blob/master/src/main/java/io/crums/util/TidyProperties.java
In any event, you need to override public Set<Map.Entry<Object, Object>> entrySet(), not public Enumeration<Object> keys(); the latter, as https://stackoverflow.com/users/704335/timmos points out, is never hit by the store(..) method.
In case someone has to do this in kotlin:
class OrderedProperties: Properties() {
override val entries: MutableSet<MutableMap.MutableEntry<Any, Any>>
get(){
return Collections.synchronizedSet(
super.entries
.stream()
.sorted(Comparator.comparing { e -> e.key.toString() })
.collect(
Collectors.toCollection(
Supplier { LinkedHashSet() })
)
)
}
}
If your properties file is small and you want a future-proof solution, then I suggest you store the Properties object to a file, load the file back into a String (or store it to a ByteArrayOutputStream and convert it to a String), split the string into lines, sort the lines, and write the lines to the destination file you want.
This is because the internal implementation of the Properties class is always changing, and to achieve sorting in store() you need to override different methods of the Properties class in different versions of Java (see How to sort Properties in java?). If your properties file is not large, then I prefer a future-proof solution over the best-performing one.
For the correct way to split the string into lines, some reliable solutions are:
Files.lines()/Files.readAllLines(), if you use a File
BufferedReader.readLine() (Java 7 or earlier)
IOUtils.readLines(bufferedReader) (org.apache.commons.io.IOUtils, Java 7 or earlier)
BufferedReader.lines() (Java 8+) as mentioned in Split Java String by New Line
String.lines() (Java 11+) as mentioned in Split Java String by New Line.
And you don't need to be worried about values with multiple lines, because Properties.store() will escape the whole multi-line String into one line in the output file.
Sample code for Java 8:
public static void test() {
    ......
    String comments = "Your multiline comments, this should be line 1." +
            "\n" +
            "The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
    saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}

public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        // Storing it to an output stream is the only way to make sure the correct encoding is used.
        properties.store(outputStream, comments);
        /* The encoding here shouldn't matter, since you are not going to modify the contents,
           and you are only going to split them into lines and reorder them.
           And Properties.store(OutputStream, String) should have translated unicode characters
           into (backslash)uXXXX anyway. */
        String propertiesContentUnsorted = outputStream.toString("UTF-8");
        String propertiesContentSorted;
        try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
            List<String> commentLines = new ArrayList<>();
            List<String> contentLines = new ArrayList<>();
            boolean commentSectionEnded = false;
            for (Iterator<String> it = bufferedReader.lines().iterator(); it.hasNext(); ) {
                String line = it.next();
                if (!commentSectionEnded) {
                    if (line.startsWith("#")) {
                        commentLines.add(line);
                    } else {
                        contentLines.add(line);
                        commentSectionEnded = true;
                    }
                } else {
                    contentLines.add(line);
                }
            }
            // Sort the content lines only
            propertiesContentSorted = Stream.concat(commentLines.stream(), contentLines.stream().sorted())
                    .collect(Collectors.joining(System.lineSeparator()));
        }
        // Just make sure you use the same encoding as above.
        Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
        // Log it if necessary
    }
}
Sample code for Java 7:
import org.apache.commons.collections4.IterableUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
......
public static void test() {
    ......
    String comments = "Your multiline comments, this should be line 1." +
            "\n" +
            "The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
    saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}

public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
    try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        // Storing it to an output stream is the only way to make sure the correct encoding is used.
        properties.store(outputStream, comments);
        /* The encoding here shouldn't matter, since you are not going to modify the contents,
           and you are only going to split them into lines and reorder them.
           And Properties.store(OutputStream, String) should have translated unicode characters
           into (backslash)uXXXX anyway. */
        String propertiesContentUnsorted = outputStream.toString("UTF-8");
        String propertiesContentSorted;
        try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
            List<String> commentLines = new ArrayList<>();
            List<String> contentLines = new ArrayList<>();
            boolean commentSectionEnded = false;
            for (Iterator<String> it = IOUtils.readLines(bufferedReader).iterator(); it.hasNext(); ) {
                String line = it.next();
                if (!commentSectionEnded) {
                    if (line.startsWith("#")) {
                        commentLines.add(line);
                    } else {
                        contentLines.add(line);
                        commentSectionEnded = true;
                    }
                } else {
                    contentLines.add(line);
                }
            }
            // Sort the content lines only
            Collections.sort(contentLines);
            propertiesContentSorted = StringUtils.join(IterableUtils.chainedIterable(commentLines, contentLines).iterator(), System.lineSeparator());
        }
        // Just make sure you use the same encoding as above.
        Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
    } catch (IOException e) {
        // Log it if necessary
    }
}
True that keys() is not triggered, so instead of passing through a list as Timmos suggested, you can do it like this:
Properties alphaproperties = new Properties() {
    @Override
    public Set<Map.Entry<Object, Object>> entrySet() {
        Set<Map.Entry<Object, Object>> setnontrie = super.entrySet();
        Set<Map.Entry<Object, Object>> unSetTrie =
                new ConcurrentSkipListSet<Map.Entry<Object, Object>>(new Comparator<Map.Entry<Object, Object>>() {
                    @Override
                    public int compare(Map.Entry<Object, Object> o1, Map.Entry<Object, Object> o2) {
                        return o1.getKey().toString().compareTo(o2.getKey().toString());
                    }
                });
        unSetTrie.addAll(setnontrie);
        return unSetTrie;
    }
};
alphaproperties.putAll(properties);
alphaproperties.store(fw, "UpdatedBy Me"); // fw is a FileWriter opened by the caller
fw.close();