I have a Spring Boot project, and I want to parse it and find the dependencies between classes. I am using JavaSymbolSolver to find the class names:
public static void main(String[] args) throws Exception {
Set<Map<String, Set<String>>> entries = new HashSet<>();
String jdkPath = "/usr/lib/jvm/java-11-openjdk-amd64/";
List<File> projectFiles = FileHandler.readJavaFiles(new File("/home/dell/MySpace/Tekit/soon-back/src/main"));
CombinedTypeSolver combinedSolver = new CombinedTypeSolver
(
new JavaParserTypeSolver(new File("/home/dell/MySpace/Tekit/soon-back/src/main/java/")),
new JavaParserTypeSolver(new File(jdkPath)),
new ReflectionTypeSolver()
);
JavaSymbolSolver symbolSolver = new JavaSymbolSolver(combinedSolver);
StaticJavaParser.getConfiguration().setSymbolResolver(symbolSolver);
CompilationUnit cu = null;
try {
cu = StaticJavaParser.parse(projectFiles.get(7));
} catch (FileNotFoundException e) {
e.printStackTrace();
}
List<ClassOrInterfaceDeclaration> classes = new ArrayList<>();
TypeDeclarationImp typeDeclarationImp = new TypeDeclarationImp();
typeDeclarationImp.visit(cu, classes);
Set<String> collect = classes.stream()
.map(classOrInterfaceDeclaration -> {
List<MethodCallExpr> collection = new ArrayList<>();
MethodInvocationImp methodInvocationImp = new MethodInvocationImp();
classOrInterfaceDeclaration.accept(methodInvocationImp, collection);
return collection;
})
.flatMap(Collection::stream)
.map(methodCallExpr -> {
return methodCallExpr
.getScope()
.stream()
.filter(Expression::isNameExpr)
.map(Expression::calculateResolvedType)
.map(ResolvedType::asReferenceType)
.map(ResolvedReferenceType::getQualifiedName)
.map(s -> s.split("\\."))
.map(strings -> strings[strings.length - 1])
.collect(Collectors.toSet());
})
.filter(expressions -> expressions.size() != 0)
.flatMap(Collection::stream)
.collect(Collectors.toSet());
collect.forEach(System.out::println);
}
I am facing this issue:
Exception in thread "main" UnsolvedSymbolException{context='SecurityContextHolder', name='Solving SecurityContextHolder', cause='null'}
Could you tell me whether it is necessary to declare all the libraries used by the project in order to parse it, or whether there is another way to do that?
That's not entirely correct. If you only want to traverse the AST, you don't need to provide the project's dependencies, but if you want, for example, to know the type of a variable, you must use the symbol solver and declare all of the project's dependencies to it.
Furthermore, JavaParser can recover from parsing errors (see https://matozoid.github.io/2017/06/11/parse-error-recovery.html)
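If the missing symbols come from third-party jars (SecurityContextHolder, for example, lives in spring-security-core), one way to declare those dependencies is to register a JarTypeSolver for each jar alongside the other solvers. A minimal sketch; the jar path and class name here are placeholders, not real locations:

```java
import com.github.javaparser.StaticJavaParser;
import com.github.javaparser.symbolsolver.JavaSymbolSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.CombinedTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.JarTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.JavaParserTypeSolver;
import com.github.javaparser.symbolsolver.resolution.typesolvers.ReflectionTypeSolver;

import java.io.File;
import java.io.IOException;

public class SolverSetup {
    public static void configure() throws IOException {
        CombinedTypeSolver solver = new CombinedTypeSolver(
                new ReflectionTypeSolver(),                          // JDK and classpath types
                new JavaParserTypeSolver(new File("src/main/java")), // the project's own sources
                // One JarTypeSolver per dependency jar; this path is illustrative only.
                new JarTypeSolver("/path/to/spring-security-core.jar"));
        StaticJavaParser.getConfiguration().setSymbolResolver(new JavaSymbolSolver(solver));
    }
}
```

In a Maven project the jars typically sit under ~/.m2/repository, so the list of JarTypeSolvers can be built by walking that directory, or by first collecting the jars with mvn dependency:copy-dependencies.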
Say my workspace has certain files in the root folder like foo.xml, foo1.xml, foo2.xml, foo3.xml.
final List<String> configFiles = new ArrayList<>();
configFiles.add("foo.xml");
configFiles.add("foo1.xml");
configFiles.add("Foo2.xml");
final List<IFile> iFiles = configFiles.stream()
.map(project::getFile)
.filter(IFile::exists)
.collect(Collectors.toList());
When I call getFile on the project, IFile expects a case-sensitive file name: if there is foo2.xml in my workspace and I try to access Foo2.xml, I don't get the file.
How can I get files regardless of case?
I don't think there is a simple way.
You could call members() on the project:
IResource [] members = project.members();
and then match the member names using equalsIgnoreCase:
private IFile findFile(IResource [] members, String name)
{
for (IResource member : members) {
if (name.equalsIgnoreCase(member.getName())) {
if (member instanceof IFile) {
return (IFile)member;
}
return null;
}
}
return null;
}
so the stream would be:
final List<IFile> iFiles = configFiles.stream()
.map(file -> findFile(members, file))
.filter(Objects::nonNull)
.collect(Collectors.toList());
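Outside Eclipse, the same idea can be expressed with plain java.nio: list the directory and compare names with equalsIgnoreCase. A small sketch; the class and method names here are my own, not part of any API:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class CaseInsensitiveLookup {

    // Returns the first regular file in `dir` whose name matches `name`
    // ignoring case, or null if there is no such file.
    static Path findFileIgnoreCase(Path dir, String name) throws IOException {
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(dir)) {
            for (Path entry : entries) {
                if (Files.isRegularFile(entry)
                        && entry.getFileName().toString().equalsIgnoreCase(name)) {
                    return entry;
                }
            }
        }
        return null;
    }
}
```

Like the members() approach above, this returns an arbitrary match if the file system is case-sensitive and contains both foo2.xml and Foo2.xml.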
I need help 'recursively' grabbing files in S3:
For example, I have an S3 structure like this:
My-bucket/2018/06/05/10/file1.json
My-bucket/2018/06/05/11/file2.json
My-bucket/2018/06/05/12/file3.json
My-bucket/2018/06/05/13/file5.json
My-bucket/2018/06/05/14/file4.json
My-bucket/2018/06/05/15/file6.json
I need to get all file paths, including the file names, for a given bucket.
I tried the following method, but it didn't work for me (it returns the common prefixes rather than the full paths):
public List<String> getObjectsListFromFolder4(String bucketName, String keyPrefix) {
List<String> paths = new ArrayList<String>();
String delimiter = "/";
if (keyPrefix != null && !keyPrefix.isEmpty() && !keyPrefix.endsWith(delimiter)) {
keyPrefix += delimiter;
}
ListObjectsRequest listObjectRequest = new ListObjectsRequest().withBucketName(bucketName)
.withPrefix(keyPrefix).withDelimiter(delimiter);
ObjectListing objectListing;
do {
objectListing = s3Client.listObjects(listObjectRequest);
paths.addAll(objectListing.getCommonPrefixes());
listObjectRequest.setMarker(objectListing.getNextMarker());
} while (objectListing.isTruncated());
return paths;
}
There is a new utility class — S3Objects — that provides an easy way to iterate Amazon S3 objects in a "foreach" statement. Use its withPrefix method and then just iterate them. You can use filters and streams as well.
Here is an example (Kotlin):
val s3 = AmazonS3ClientBuilder
.standard()
.withCredentials(EnvironmentVariableCredentialsProvider())
.build()
S3Objects
.withPrefix(s3, bucket, folder)
.filter { s3ObjectSummary ->
s3ObjectSummary.key.endsWith(".gz")
}
.parallelStream()
.forEach { s3ObjectSummary ->
CSVParser.parse(
GZIPInputStream(s3.getObject(s3ObjectSummary.bucketName, s3ObjectSummary.key).objectContent),
StandardCharsets.UTF_8,
CSVFormat.DEFAULT
).use { csvParser ->
…
}
}
getCommonPrefixes() only lists the prefixes, not the actual keys. From the documentation:
For example, consider a bucket that contains the following keys:
"foo/bar/baz"
"foo/bar/bash"
"foo/bar/bang"
"foo/boo"
If calling
listObjects with the prefix="foo/" and the delimiter="/" on this
bucket, the returned S3ObjectListing will contain one entry in the
common prefixes list ("foo/bar/") and none of the keys beginning with
that common prefix will be included in the object summaries list.
Instead, use getObjectSummaries() to get the keys. You also need to remove withDelimiter(). Setting a delimiter causes S3 to list only the items in the current 'directory.' This method works for me:
public static List<String> getObjectsListFromS3(AmazonS3 s3, String bucket, String prefix) {
final String delimiter = "/";
if (!prefix.endsWith(delimiter)) {
prefix = prefix + delimiter;
}
List<String> paths = new LinkedList<>();
ListObjectsRequest request = new ListObjectsRequest().withBucketName(bucket).withPrefix(prefix);
ObjectListing result;
do {
result = s3.listObjects(request);
for (S3ObjectSummary summary : result.getObjectSummaries()) {
// Make sure we are not adding a 'folder'
if (!summary.getKey().endsWith(delimiter)) {
paths.add(summary.getKey());
}
}
request.setMarker(result.getNextMarker());
} while (result.isTruncated());
return paths;
}
Consider an S3 bucket that contains the following keys:
particle.fs
test/
test/blur.fs
test/blur.vs
test/subtest/particle.fs
With this driver code:
public static void main(String[] args) {
String bucket = "playground-us-east-1-1234567890";
AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion("us-east-1").build();
String prefix = "test";
for (String key : getObjectsListFromS3(s3, bucket, prefix)) {
System.out.println(key);
}
}
produces:
test/blur.fs
test/blur.vs
test/subtest/particle.fs
Here is an example of how to get all files in a directory; I hope it can help you:
public static List<String> getAllFile(String directoryPath, boolean isAddDirectory) {
List<String> list = new ArrayList<String>();
File baseFile = new File(directoryPath);
if (baseFile.isFile() || !baseFile.exists()) {
return list;
}
File[] files = baseFile.listFiles();
for (File file : files) {
if (file.isDirectory()) {
if (isAddDirectory) {
list.add(file.getAbsolutePath());
}
list.addAll(getAllFile(file.getAbsolutePath(), isAddDirectory));
} else {
list.add(file.getAbsolutePath());
}
}
return list;
}
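On Java 8 and later, the same recursive listing can be written more compactly with Files.walk from java.nio; a sketch (the method name is mine):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class WalkExample {

    // Recursively collects the absolute paths of all regular files under root.
    static List<String> getAllFiles(Path root) throws IOException {
        try (Stream<Path> stream = Files.walk(root)) {
            return stream.filter(Files::isRegularFile)
                         .map(p -> p.toAbsolutePath().toString())
                         .collect(Collectors.toList());
        }
    }
}
```

The stream must be closed (hence the try-with-resources), because Files.walk holds open directory handles while iterating.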
I'm using java.util.Properties's store(Writer, String) method to store the properties. In the resulting text file, the properties are stored in a haphazard order.
This is what I'm doing:
Properties properties = createProperties();
properties.store(new FileWriter(file), null);
How can I ensure the properties are written out in alphabetical order, or in the order the properties were added?
I'm hoping for a solution simpler than "manually create the properties file".
As per "The New Idiot's" suggestion, this stores in alphabetical key order.
Properties tmp = new Properties() {
@Override
public synchronized Enumeration<Object> keys() {
return Collections.enumeration(new TreeSet<Object>(super.keySet()));
}
};
tmp.putAll(properties);
tmp.store(new FileWriter(file), null);
See https://github.com/etiennestuder/java-ordered-properties for a complete implementation that allows reading/writing properties files in a well-defined order.
OrderedProperties properties = new OrderedProperties();
properties.load(new FileInputStream(new File("~/some.properties")));
Steve McLeod's answer used to work for me, but since Java 11, it doesn't.
The problem seemed to be EntrySet ordering, so, here you go:
@SuppressWarnings("serial")
private static Properties newOrderedProperties()
{
return new Properties() {
@Override public synchronized Set<Map.Entry<Object, Object>> entrySet() {
return Collections.synchronizedSet(
super.entrySet()
.stream()
.sorted(Comparator.comparing(e -> e.getKey().toString()))
.collect(Collectors.toCollection(LinkedHashSet::new)));
}
};
}
I will warn that this is not fast by any means. It forces iteration over a LinkedHashSet, which isn't ideal, but I'm open to suggestions.
Using a TreeSet is dangerous!
With CASE_INSENSITIVE_ORDER, the strings "mykey", "MyKey" and "MYKEY" all compare as equal, so two of the three keys will be omitted.
I use a List instead, to be sure to keep all keys.
List<Object> list = new ArrayList<>( super.keySet());
Comparator<Object> comparator = Comparator.comparing( Object::toString, String.CASE_INSENSITIVE_ORDER );
Collections.sort( list, comparator );
return Collections.enumeration( list );
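The collision is easy to demonstrate: with a case-insensitive comparator, a TreeSet keeps only one of the keys that differ solely in case. A quick sketch (class and method names are illustrative):

```java
import java.util.Arrays;
import java.util.TreeSet;

public class CaseCollision {

    // Counts how many of the given keys survive insertion into a
    // case-insensitive TreeSet.
    static int distinctCount(String... keys) {
        TreeSet<String> set = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
        set.addAll(Arrays.asList(keys));
        return set.size();
    }

    public static void main(String[] args) {
        // "mykey", "MyKey" and "MYKEY" compare as equal, so only one survives.
        System.out.println(distinctCount("mykey", "MyKey", "MYKEY")); // prints 1
    }
}
```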
The solution from Steve McLeod did not work when trying to sort case-insensitively.
This is what I came up with:
Properties newProperties = new Properties() {
private static final long serialVersionUID = 4112578634029874840L;
@Override
public synchronized Enumeration<Object> keys() {
Comparator<Object> byCaseInsensitiveString = Comparator.comparing(Object::toString,
String.CASE_INSENSITIVE_ORDER);
Supplier<TreeSet<Object>> supplier = () -> new TreeSet<>(byCaseInsensitiveString);
TreeSet<Object> sortedSet = super.keySet().stream()
.collect(Collectors.toCollection(supplier));
return Collections.enumeration(sortedSet);
}
};
// propertyMap is a simple LinkedHashMap<String,String>
newProperties.putAll(propertyMap);
File file = new File(filepath);
try (FileOutputStream fileOutputStream = new FileOutputStream(file, false)) {
newProperties.store(fileOutputStream, null);
}
I'm having the same itch, so I implemented a simple kludge subclass that allows you to explicitly pre-define the order name/values appear in one block and lexically order them in another block.
https://github.com/crums-io/io-util/blob/master/src/main/java/io/crums/util/TidyProperties.java
In any event, you need to override public Set<Map.Entry<Object, Object>> entrySet(), not public Enumeration<Object> keys(); the latter, as Timmos (https://stackoverflow.com/users/704335/timmos) points out, is never hit by the store(..) method.
In case someone has to do this in Kotlin:
class OrderedProperties: Properties() {
override val entries: MutableSet<MutableMap.MutableEntry<Any, Any>>
get(){
return Collections.synchronizedSet(
super.entries
.stream()
.sorted(Comparator.comparing { e -> e.key.toString() })
.collect(
Collectors.toCollection(
Supplier { LinkedHashSet() })
)
)
}
}
If your properties file is small and you want a future-proof solution, then I suggest storing the Properties object to a file, loading the file back into a String (or storing it to a ByteArrayOutputStream and converting it to a String), splitting the string into lines, sorting the lines, and writing the lines to the destination file you want.
This is because the internal implementation of the Properties class is always changing, and to achieve sorting in store() you would need to override different methods of the Properties class in different versions of Java (see How to sort Properties in java?). If your properties file is not large, then I prefer a future-proof solution over the best-performing one.
For the correct way to split the string into lines, some reliable solutions are:
Files.lines()/Files.readAllLines(), if you use a File
BufferedReader.readLine() (Java 7 or earlier)
IOUtils.readLines(bufferedReader) (org.apache.commons.io.IOUtils, Java 7 or earlier)
BufferedReader.lines() (Java 8+) as mentioned in Split Java String by New Line
String.lines() (Java 11+) as mentioned in Split Java String by New Line.
And you don't need to worry about values with multiple lines, because Properties.store() escapes the whole multi-line String into one line in the output file.
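That escaping behavior can be verified directly. In this sketch (the helper name is mine), a value containing a newline still occupies a single physical line in the stored output, because store() writes the newline as the two characters backslash and n:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Properties;

public class EscapeCheck {

    // Stores a Properties object and returns the resulting text.
    static String storeToString(Properties props) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        props.store(out, null);
        // store(OutputStream, String) writes ISO-8859-1 with \uXXXX escapes.
        return out.toString("ISO-8859-1");
    }

    public static void main(String[] args) throws IOException {
        Properties props = new Properties();
        props.setProperty("greeting", "line one\nline two");
        // The embedded newline is escaped, so the entry stays on one line.
        System.out.println(storeToString(props).contains("line one\\nline two")); // prints true
    }
}
```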
Sample codes for Java 8:
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = bufferedReader.lines().iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
propertiesContentSorted = Stream.concat(commentLines.stream(), contentLines.stream().sorted())
.collect(Collectors.joining(System.lineSeparator()));
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
Sample codes for Java 7:
import org.apache.commons.collections4.IterableUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
......
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = IOUtils.readLines(bufferedReader).iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
Collections.sort(contentLines);
propertiesContentSorted = StringUtils.join(IterableUtils.chainedIterable(commentLines, contentLines).iterator(), System.lineSeparator());
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
It's true that keys() is not triggered, so instead of passing through a list as Timmos suggested, you can do it like this:
Properties alphaproperties = new Properties() {
@Override
public Set<Map.Entry<Object, Object>> entrySet() {
Set<Map.Entry<Object, Object>> setnontrie = super.entrySet();
Set<Map.Entry<Object, Object>> unSetTrie = new ConcurrentSkipListSet<Map.Entry<Object, Object>>(new Comparator<Map.Entry<Object, Object>>() {
@Override
public int compare(Map.Entry<Object, Object> o1, Map.Entry<Object, Object> o2) {
return o1.getKey().toString().compareTo(o2.getKey().toString());
}
});
unSetTrie.addAll(setnontrie);
return unSetTrie;
}
};
alphaproperties.putAll(properties);
alphaproperties.store(fw, "UpdatedBy Me");
fw.close();
I need to get the list of properties which are in the .properties file. For example, if have the following .properties file:
users.admin.keywords = admin
users.admin.regexps = test-5,test-7
users.admin.rules = users.admin.keywords,users.admin.regexps
users.root.keywords = newKeyWordq
users.root.regexps = asdasd,\u0432[\u044By][\u0448s]\u043B\u0438\u0442[\u0435e]
users.root.rules = users.root.keywords,users.root.regexps,rules.creditcards
users.guest.keywords = guest
users.guest.regexps = *
users.guest.rules = users.guest.keywords,users.guest.regexps,rules.creditcards
rules.cc.creditcards = 1234123412341234,11231123123123123,ca
rules.common.regexps = pas
rules.common.keywords = asd
As a result, I'd like to get an ArrayList consisting of the names of the fields, like this:
users.admin.keywords, users.admin.regexps, users.admin.rules, and so on. As you may have noticed, I need to do this using apache.commons.config.
You can use it as below:
Configuration configuration = new PropertiesConfiguration(filename);
Iterator<String> keys = configuration.getKeys();
List<String> keyList = new ArrayList<String>();
while(keys.hasNext()) {
keyList.add(keys.next());
}
Properties prop = new Properties();
prop.load(new FileInputStream("prop.properties"));
List<Object> list = new ArrayList<>();
for (Map.Entry<Object, Object> entry : prop.entrySet())
{
list.add(entry.getKey());
}
System.out.println(list);
Using Apache Commons version <2.1:
Configuration config = new PropertiesConfiguration("prop.properties");
List<String> list = new ArrayList<>();
Iterator<String> keys = config.getKeys();
while(keys.hasNext()){
String key = (String) keys.next();
list.add(key);
}
Edited for Apache Commons Version 2.1:
List<String> list = new ArrayList<>();
Parameters params = new Parameters();
FileBasedConfigurationBuilder<FileBasedConfiguration> builder =
new FileBasedConfigurationBuilder<FileBasedConfiguration>
(PropertiesConfiguration.class)
.configure(params.properties()
.setFileName("prop.properties"));
try
{
Configuration config = builder.getConfiguration();
Iterator<String> keys = config.getKeys();
while(keys.hasNext()){
String key = (String) keys.next();
list.add(key);
}
}
catch(ConfigurationException cex)
{
// handle exception here
}
You can use getKeys().
It returns an Iterator<String> on all the keys in the properties file.