I use Spark 2.0.1.
I am trying to find the distinct values in a JavaRDD as below:
JavaRDD<String> distinct_installedApp_Ids = filteredInstalledApp_Ids.distinct();
I see that this line throws the exception below:
Exception in thread "main" java.lang.StackOverflowError
at org.apache.spark.rdd.RDD.checkpointRDD(RDD.scala:226)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
at scala.collection.immutable.List.foreach(List.scala:318)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
at scala.collection.AbstractTraversable.map(Traversable.scala:105)
at org.apache.spark.rdd.UnionRDD.getPartitions(UnionRDD.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
at scala.Option.getOrElse(Option.scala:120)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
at org.apache.spark.rdd.UnionRDD$$anonfun$1.apply(UnionRDD.scala:84)
..........
The same stack trace is repeated again and again.
The input filteredInstalledApp_Ids is large, with millions of records. Could the issue be the number of records, or is there a more efficient way to find distinct values in a JavaRDD? Any help would be much appreciated. Thanks in advance. Cheers.
Edit 1:
Adding the filter method
JavaRDD<String> filteredInstalledApp_Ids = installedApp_Ids
.filter(new Function<String, Boolean>() {
@Override
public Boolean call(String v1) throws Exception {
return v1 != null;
}
}).cache();
Edit 2:
Added the method used to generate installedApp_Ids
public JavaRDD<String> getIdsWithInstalledApps(String inputPath, JavaSparkContext sc,
JavaRDD<String> installedApp_Ids) {
JavaRDD<String> appIdsRDD = sc.textFile(inputPath);
try {
JavaRDD<String> appIdsRDD1 = appIdsRDD.map(new Function<String, String>() {
@Override
public String call(String t) throws Exception {
String delimiter = "\t";
String[] id_Type = t.split(delimiter);
StringBuilder temp = new StringBuilder(id_Type[1]);
if ((temp.indexOf("\"")) != -1) {
String escaped = temp.toString().replace("\\", "");
escaped = escaped.replace("\"{", "{");
escaped = escaped.replace("}\"", "}");
temp = new StringBuilder(escaped);
}
// Trim any leading whitespace before parsing the JSON
JSONObject wholeEventObj = new JSONObject(temp.toString().trim());
JSONObject eventJsonObj = wholeEventObj.getJSONObject("eventData");
int appType = eventJsonObj.getInt("appType");
if (appType == 1) {
try {
return (String.valueOf(appType));
} catch (JSONException e) {
return null;
}
}
return null;
}
}).cache();
if (installedApp_Ids != null)
return sc.union(installedApp_Ids, appIdsRDD1);
else
return appIdsRDD1;
} catch (Exception e) {
e.printStackTrace();
}
return null;
}
I assume the main dataset is in inputPath. Judging by the parsing code, it's a tab-separated file with JSON-encoded values.
I think you could make your code a bit simpler by combining Spark SQL's DataFrames with the from_json function. I'm using Scala and leave converting the code to Java as an exercise :)
The lines where you load the inputPath text file, and the line parsing itself, can be as simple as the following:
import org.apache.spark.sql.SparkSession
val spark: SparkSession = ...
val dataset = spark.read.option("sep", "\t").csv(inputPath)
You can display the content using the show operator.
dataset.show(truncate = false)
You should see the JSON-encoded lines.
It appears that the JSON lines contain eventData and appType fields.
val jsons = dataset.withColumn("asJson", from_json(...))
See the functions object for reference.
With the JSON parsed, you can select the fields of interest:
val apptypes = jsons.select("eventData.appType")
And then union it with installedApp_Ids.
I'm sure the code gets easier to read (and hopefully to write, too). The migration will also give you extra optimizations that you may or may not be able to write yourself using the assembler-like RDD API.
And best of all, filtering out nulls is as simple as using the na operator, which gives you DataFrameNaFunctions like drop. I'm sure you'll like them.
It does not necessarily answer your initial question, but the java.lang.StackOverflowError might go away just by doing the code migration, and the code gets easier to maintain, too.
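If you'd rather stay in Java, a rough sketch of the same approach might look like the following. Treat it as a starting point, not a drop-in solution: the schema, the _c1 column name (Spark's default name for the second CSV column) and the tab delimiter are assumptions based on the parsing code in the question, and from_json requires Spark 2.1+.
import static org.apache.spark.sql.functions.col;
import static org.apache.spark.sql.functions.from_json;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructType;

// Schema guessed from the question: a JSON object with an eventData.appType field.
StructType schema = new StructType()
    .add("eventData", new StructType().add("appType", DataTypes.IntegerType));

Dataset<Row> dataset = spark.read()
    .option("sep", "\t")   // the question splits lines on tabs
    .csv(inputPath);       // columns are auto-named _c0, _c1, ...

Dataset<Row> appTypes = dataset
    .withColumn("asJson", from_json(col("_c1"), schema))
    .select(col("asJson.eventData.appType"))
    .na().drop()           // replaces the explicit null filter on the RDD
    .distinct();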
I'm using Java with JDBC to run MySql code. I want to execute a DDL script, but JDBC can only execute a single statement at a time, which makes it unsuitable to execute a whole .sql file out of the box.
What I'm trying to do is use Antlr4 to parse the .sql file so I can break up each individual statement and then iteratively execute them with JDBC.
I've gotten this far:
InputStream resourceAsStream = Main.class.getClassLoader()
.getResourceAsStream("an-arbitrary-ddl.sql");
CharStream codePointCharStream = CharStreams.fromStream(resourceAsStream);
MySqlLexer tokenSource = new MySqlLexer(new CaseChangingCharStream(codePointCharStream, true));
TokenStream tokenStream = new CommonTokenStream(tokenSource);
MySqlParser mySqlParser = new MySqlParser(tokenStream);
// Where do I go from here?
I'm sure I'm just not searching for the correct terms, because I'm new to ANTLR and to parsing code manually. I can't find any reference for what I need to do to get individual SQL statements out of the MySqlParser. What do I need to do next?
A parser is not the right tool for this kind of problem. A statement splitter is pretty easy to write manually, and much faster if you do it yourself. I implemented such a splitter in C++ in MySQL Workbench. It shouldn't be difficult to port this to Java. The code is very fast (1 million lines of SQL code in under 1 second on an average machine). A parser would need much longer.
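To make the splitting idea concrete, here is a minimal, untested sketch of such a splitter in Java. It only understands ';' delimiters, quoted strings, and backtick identifiers; a real MySQL splitter also has to handle comments, escaped quotes, and the client-side DELIMITER command.
import java.util.ArrayList;
import java.util.List;

public static List<String> splitStatements(String sql) {
    List<String> statements = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    char quote = 0; // 0 means we are not inside a quoted region
    for (int i = 0; i < sql.length(); i++) {
        char c = sql.charAt(i);
        if (quote != 0) {
            current.append(c);
            if (c == quote) {
                quote = 0; // closing quote
            } else if (c == '\\' && i + 1 < sql.length()) {
                current.append(sql.charAt(++i)); // keep escaped character as-is
            }
        } else if (c == '\'' || c == '"' || c == '`') {
            quote = c;
            current.append(c);
        } else if (c == ';') {
            String statement = current.toString().trim();
            if (!statement.isEmpty()) {
                statements.add(statement);
            }
            current.setLength(0);
        } else {
            current.append(c);
        }
    }
    String tail = current.toString().trim();
    if (!tail.isEmpty()) {
        statements.add(tail); // last statement may lack a trailing ';'
    }
    return statements;
}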
I'm sure this can be improved; the simplest way I could find was to create a listener and provide its constructor with a Consumer<String>. The listener looks at individual statements and recursively reconstructs them. There is probably a more optimal solution, but I no longer have time to look for one.
/**
* @author Paul Nelson Baker
* @see GitHub
* @see LinkedIn
* @since 2018-09
*/
public class SqlStatementListener extends MySqlParserBaseListener {
private final Consumer<String> sqlStatementConsumer;
public SqlStatementListener(Consumer<String> sqlStatementConsumer) {
this.sqlStatementConsumer = sqlStatementConsumer;
}
@Override
public void enterSqlStatement(MySqlParser.SqlStatementContext ctx) {
if (ctx.getChildCount() > 0) {
StringBuilder stringBuilder = new StringBuilder();
recreateStatementString(ctx.getChild(0), stringBuilder);
stringBuilder.setCharAt(stringBuilder.length() - 1, ';');
String recreatedSqlStatement = stringBuilder.toString();
sqlStatementConsumer.accept(recreatedSqlStatement);
}
super.enterSqlStatement(ctx);
}
private void recreateStatementString(ParseTree currentNode, StringBuilder stringBuilder) {
if (currentNode instanceof TerminalNode) {
stringBuilder.append(currentNode.getText());
stringBuilder.append(' ');
}
for (int i = 0; i < currentNode.getChildCount(); i++) {
recreateStatementString(currentNode.getChild(i), stringBuilder);
}
}
}
Next you need to traverse the statements; the string consumer from earlier lets you lazily redirect the output wherever you need. This can be as simple as printing to stdout, but it can just as easily be used to append to a list.
public List<String> mySqlStatementsFrom(String sourceCode) {
List<String> statements = new ArrayList<>();
mySqlStatementsToConsumer(sourceCode, statements::add);
return statements;
}
public void mySqlStatementsToConsumer(String sourceCode, Consumer<String> mySqlStatementConsumer) {
CharStream codePointCharStream = CharStreams.fromString(sourceCode);
MySqlLexer tokenSource = new MySqlLexer(new CaseChangingCharStream(codePointCharStream, true));
TokenStream tokenStream = new CommonTokenStream(tokenSource);
MySqlParser mySqlParser = new MySqlParser(tokenStream);
SqlStatementListener statementListener = new SqlStatementListener(mySqlStatementConsumer);
ParseTreeWalker.DEFAULT.walk(statementListener, mySqlParser.sqlStatements());
}
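From there, wiring the split statements back into JDBC (the original goal) could look roughly like the following sketch, which assumes an open java.sql.Connection named connection:
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

public void executeDdlScript(Connection connection, String ddlScript) throws SQLException {
    // Split the script with the listener above, then run each statement
    // one at a time, since JDBC executes a single statement per call.
    List<String> statements = mySqlStatementsFrom(ddlScript);
    try (Statement jdbcStatement = connection.createStatement()) {
        for (String sql : statements) {
            jdbcStatement.execute(sql);
        }
    }
}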
I have set up a pipeline that has to parse hundreds of *.gz files, so globbing works quite well.
But I need the original name of the file currently being processed, because I want to name the result files after the original files.
Can anyone help me here?
Here is my code.
@Default.String(LOGS_PATH + "*.gz")
String getInputFile();
void setInputFile(String value);
TextIO.Read read = TextIO.read().withCompressionType(TextIO.CompressionType.GZIP).from(options.getInputFile());
read.getName();
p.apply("ReadLines", read).apply(new CountWords())
.apply(MapElements.via(new FormatAsTextFn()))
.apply("WriteCounts", TextIO.write().to(WordCountOptions.LOGS_PATH + "_" + options.getOutput()));
p.run().waitUntilFinish();
This is possible starting with Beam 2.2 using a combination of FileIO.match(), FileIO.read() and custom code to read lines of text. You can already use this at HEAD, or you can wait until release 2.2 is finalized (it's currently in progress).
PCollection<KV<String, String>> filesAndLines =
p.apply(FileIO.match().filepattern(...))
.apply(FileIO.read())
.apply(ParDo.of(new DoFn<ReadableFile, KV<String, String>>() {
@ProcessElement
public void process(ProcessContext c) throws IOException {
ReadableFile f = c.element();
String filename = f.getMetadata().resourceId().toString();
String line;
try (BufferedReader r = new BufferedReader(new InputStreamReader(Channels.newInputStream(f.open()), StandardCharsets.UTF_8))) {
while ((line = r.readLine()) != null) {
c.output(KV.of(filename, line));
}
}
}
}));
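Since the goal was to name the result files after the input files, a possible follow-up (a sketch, not from the original answer) is to aggregate per filename, e.g. with Count.perKey(), so the name travels along to whatever write you do next:
// Counts lines per input file; the filename key survives the aggregation.
PCollection<KV<String, Long>> linesPerFile = filesAndLines.apply(Count.perKey());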
I'm trying to override the default XMLUnit behavior of reporting only the first difference found between two inputs with a (text) report that includes all the differences found.
I've so far accomplished this:
private static void reportXhtmlDifferences(String expected, String actual) {
Diff ds = DiffBuilder.compare(Input.fromString(expected))
.withTest(Input.fromString(actual))
.checkForSimilar()
.normalizeWhitespace()
.ignoreComments()
.withDocumentBuilderFactory(dbf).build();
DefaultComparisonFormatter formatter = new DefaultComparisonFormatter();
if (ds.hasDifferences()) {
StringBuffer expectedBuffer = new StringBuffer();
StringBuffer actualBuffer = new StringBuffer();
for (Difference d: ds.getDifferences()) {
expectedBuffer.append(formatter.getDetails(d.getComparison().getControlDetails(), null, true));
expectedBuffer.append("\n----------\n");
actualBuffer.append(formatter.getDetails(d.getComparison().getTestDetails(), null, true));
actualBuffer.append("\n----------\n");
}
throw new ComparisonFailure("There are HTML differences", expectedBuffer.toString(), actualBuffer.toString());
}
}
But I don't like:
Having to iterate through the Differences in client code.
Reaching into the internals of DefaultComparisonFormatter and calling getDetails with that null ComparisonType.
Concatenating the differences with dashed separator lines.
Maybe this is just coming from an unjustified bad gut feeling, but I'd like to know if anyone has some input on this use case.
XMLUnit suggests simply printing out the differences; see the section on "Old XMLUnit 1.x DetailedDiff": https://github.com/xmlunit/user-guide/wiki/Migrating-from-XMLUnit-1.x-to-2.x
Your code would look like this:
private static void reportXhtmlDifferences(String expected, String actual) {
Diff ds = DiffBuilder.compare(Input.fromString(expected))
.withTest(Input.fromString(actual))
.checkForSimilar()
.normalizeWhitespace()
.ignoreComments()
.withDocumentBuilderFactory(dbf).build();
if (ds.hasDifferences()) {
StringBuffer buffer = new StringBuffer();
for (Difference d: ds.getDifferences()) {
buffer.append(d.toString());
}
throw new RuntimeException("There are HTML differences\n" + buffer.toString());
}
}
The default behavior when the parser doesn't know what to do is to print messages to the terminal like:
line 1:23 missing DECIMAL at '}'
This is a good message, but in the wrong place. I'd rather receive this as an exception.
I've tried using the BailErrorStrategy, but this throws a ParseCancellationException without a message (caused by an InputMismatchException, also without a message).
Is there a way I can get it to report errors via exceptions while retaining the useful info in the message?
Here's what I'm really after--I typically use actions in rules to build up an object:
dataspec returns [DataExtractor extractor]
@init {
DataExtractorBuilder builder = new DataExtractorBuilder(layout);
}
@after {
$extractor = builder.create();
}
: first=expr { builder.addAll($first.values); } (COMMA next=expr { builder.addAll($next.values); })* EOF
;
expr returns [List<ValueExtractor> values]
: a=atom { $values = Arrays.asList($a.val); }
| fields=fieldrange { $values = values($fields.fields); }
| '%' { $values = null; }
| ASTERISK { $values = values(layout); }
;
Then when I invoke the parser I do something like this:
public static DataExtractor create(String dataspec) {
CharStream stream = new ANTLRInputStream(dataspec);
DataSpecificationLexer lexer = new DataSpecificationLexer(stream);
CommonTokenStream tokens = new CommonTokenStream(lexer);
DataSpecificationParser parser = new DataSpecificationParser(tokens);
return parser.dataspec().extractor;
}
All I really want is
for the dataspec() call to throw an exception (ideally a checked one) when the input can't be parsed
for that exception to have a useful message and provide access to the line number and position where the problem was found
Then I'll let that exception bubble up the call stack to wherever is best suited to present a useful message to the user--the same way I'd handle a dropped network connection, reading a corrupt file, etc.
I did see that actions are now considered "advanced" in ANTLR4, so maybe I'm going about things in a strange way, but I haven't looked into what the "non-advanced" way to do this would be since this way has been working well for our needs.
Since I've had a little bit of a struggle with the two existing answers, I'd like to share the solution I ended up with.
First of all I created my own version of an ErrorListener like Sam Harwell suggested:
public class ThrowingErrorListener extends BaseErrorListener {
public static final ThrowingErrorListener INSTANCE = new ThrowingErrorListener();
@Override
public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol, int line, int charPositionInLine, String msg, RecognitionException e)
throws ParseCancellationException {
throw new ParseCancellationException("line " + line + ":" + charPositionInLine + " " + msg);
}
}
Note the use of a ParseCancellationException instead of a RecognitionException since the DefaultErrorStrategy would catch the latter and it would never reach your own code.
Creating a whole new ErrorStrategy like Brad Mace suggested is not necessary since the DefaultErrorStrategy produces pretty good error messages by default.
I then use the custom ErrorListener in my parsing function:
public static String parse(String text) throws ParseCancellationException {
MyLexer lexer = new MyLexer(new ANTLRInputStream(text));
lexer.removeErrorListeners();
lexer.addErrorListener(ThrowingErrorListener.INSTANCE);
CommonTokenStream tokens = new CommonTokenStream(lexer);
MyParser parser = new MyParser(tokens);
parser.removeErrorListeners();
parser.addErrorListener(ThrowingErrorListener.INSTANCE);
ParserRuleContext tree = parser.expr();
MyParseRules extractor = new MyParseRules();
return extractor.visit(tree);
}
(For more information on what MyParseRules does, see here.)
This will give you the same error messages as would be printed to the console by default, only in the form of proper exceptions.
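Calling code can then treat a malformed input like any other failure; for example:
try {
    String result = parse("some input");
} catch (ParseCancellationException e) {
    // e.getMessage() carries the "line X:Y ..." text from the listener above.
    System.err.println("Parse failed: " + e.getMessage());
}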
When you use the DefaultErrorStrategy or the BailErrorStrategy, the ParserRuleContext.exception field is set for any parse tree node in the resulting parse tree where an error occurred. The documentation for this field reads (for people that don't want to click an extra link):
The exception which forced this rule to return. If the rule successfully completed, this is null.
Edit: If you use DefaultErrorStrategy, the parse context exception will not be propagated all the way out to the calling code, so you'll be able to examine the exception field directly. If you use BailErrorStrategy, the ParseCancellationException thrown by it will include a RecognitionException if you call getCause().
if (pce.getCause() instanceof RecognitionException) {
RecognitionException re = (RecognitionException)pce.getCause();
ParserRuleContext context = (ParserRuleContext)re.getCtx();
}
Edit 2: Based on your other answer, it appears that you don't actually want an exception, but what you want is a different way to report the errors. In that case, you'll be more interested in the ANTLRErrorListener interface. You want to call parser.removeErrorListeners() to remove the default listener that writes to the console, and then call parser.addErrorListener(listener) for your own special listener. I often use the following listener as a starting point, as it includes the name of the source file with the messages.
public class DescriptiveErrorListener extends BaseErrorListener {
public static DescriptiveErrorListener INSTANCE = new DescriptiveErrorListener();
private static final boolean REPORT_SYNTAX_ERRORS = true; // toggle to silence reporting
@Override
public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
int line, int charPositionInLine,
String msg, RecognitionException e)
{
if (!REPORT_SYNTAX_ERRORS) {
return;
}
String sourceName = recognizer.getInputStream().getSourceName();
if (!sourceName.isEmpty()) {
sourceName = String.format("%s:%d:%d: ", sourceName, line, charPositionInLine);
}
System.err.println(sourceName+"line "+line+":"+charPositionInLine+" "+msg);
}
}
With this class available, you can use the following to use it.
lexer.removeErrorListeners();
lexer.addErrorListener(DescriptiveErrorListener.INSTANCE);
parser.removeErrorListeners();
parser.addErrorListener(DescriptiveErrorListener.INSTANCE);
A much more complicated example of an error listener that I use to identify ambiguities which render a grammar non-SLL is the SummarizingDiagnosticErrorListener class in TestPerformance.
What I've come up with so far is based on extending DefaultErrorStrategy and overriding its reportXXX methods (though it's entirely possible I'm making things more complicated than necessary):
public class ExceptionErrorStrategy extends DefaultErrorStrategy {
@Override
public void recover(Parser recognizer, RecognitionException e) {
throw e;
}
@Override
public void reportInputMismatch(Parser recognizer, InputMismatchException e) throws RecognitionException {
String msg = "mismatched input " + getTokenErrorDisplay(e.getOffendingToken());
msg += " expecting one of "+e.getExpectedTokens().toString(recognizer.getTokenNames());
RecognitionException ex = new RecognitionException(msg, recognizer, recognizer.getInputStream(), recognizer.getContext());
ex.initCause(e);
throw ex;
}
@Override
public void reportMissingToken(Parser recognizer) {
beginErrorCondition(recognizer);
Token t = recognizer.getCurrentToken();
IntervalSet expecting = getExpectedTokens(recognizer);
String msg = "missing "+expecting.toString(recognizer.getTokenNames()) + " at " + getTokenErrorDisplay(t);
throw new RecognitionException(msg, recognizer, recognizer.getInputStream(), recognizer.getContext());
}
}
This throws exceptions with useful messages, and the line and position of the problem can be obtained from either the offending token or, if that is not set, from the current token, using ((Parser) re.getRecognizer()).getCurrentToken() on the RecognitionException.
I'm fairly happy with how this is working, though having six reportX methods to override makes me think there's a better way.
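For reference, pulling the location out at the call site might look like this sketch (RecognitionException is unchecked in ANTLR 4, so no throws declaration is needed):
try {
    parser.dataspec();
} catch (RecognitionException re) {
    // Prefer the offending token; fall back to the recognizer's current token.
    Token where = re.getOffendingToken() != null
            ? re.getOffendingToken()
            : ((Parser) re.getRecognizer()).getCurrentToken();
    System.err.printf("error at line %d:%d: %s%n",
            where.getLine(), where.getCharPositionInLine(), re.getMessage());
}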
For anyone interested, here's the ANTLR4 C# equivalent of Sam Harwell's answer:
using System;
using System.IO;
using Antlr4.Runtime;
public class DescriptiveErrorListener : BaseErrorListener, IAntlrErrorListener<int>
{
public static DescriptiveErrorListener Instance { get; } = new DescriptiveErrorListener();
public void SyntaxError(TextWriter output, IRecognizer recognizer, int offendingSymbol, int line, int charPositionInLine, string msg, RecognitionException e) {
if (!REPORT_SYNTAX_ERRORS) return;
string sourceName = recognizer.InputStream.SourceName;
// never ""; might be "<unknown>" == IntStreamConstants.UnknownSourceName
sourceName = $"{sourceName}:{line}:{charPositionInLine}";
Console.Error.WriteLine($"{sourceName}: line {line}:{charPositionInLine} {msg}");
}
public override void SyntaxError(TextWriter output, IRecognizer recognizer, Token offendingSymbol, int line, int charPositionInLine, string msg, RecognitionException e) {
this.SyntaxError(output, recognizer, 0, line, charPositionInLine, msg, e);
}
static readonly bool REPORT_SYNTAX_ERRORS = true;
}
lexer.RemoveErrorListeners();
lexer.AddErrorListener(DescriptiveErrorListener.Instance);
parser.RemoveErrorListeners();
parser.AddErrorListener(DescriptiveErrorListener.Instance);
For people who use Python, here is the solution in Python 3 based on Mouagip's answer.
First, define a custom error listener:
from antlr4.error.ErrorListener import ErrorListener
from antlr4.error.Errors import ParseCancellationException
class ThrowingErrorListener(ErrorListener):
def syntaxError(self, recognizer, offendingSymbol, line, column, msg, e):
ex = ParseCancellationException(f'line {line}: {column} {msg}')
ex.line = line
ex.column = column
raise ex
Then set it on the lexer and parser:
lexer = MyScriptLexer(script)
lexer.removeErrorListeners()
lexer.addErrorListener(ThrowingErrorListener())
token_stream = CommonTokenStream(lexer)
parser = MyScriptParser(token_stream)
parser.removeErrorListeners()
parser.addErrorListener(ThrowingErrorListener())
tree = parser.script()
I'm using java.util.Properties's store(Writer, String) method to store the properties. In the resulting text file, the properties are stored in a haphazard order.
This is what I'm doing:
Properties properties = createProperties();
properties.store(new FileWriter(file), null);
How can I ensure the properties are written out in alphabetical order, or in the order the properties were added?
I'm hoping for a solution simpler than "manually create the properties file".
As per "The New Idiot's" suggestion, this stores in alphabetical key order.
Properties tmp = new Properties() {
@Override
public synchronized Enumeration<Object> keys() {
return Collections.enumeration(new TreeSet<Object>(super.keySet()));
}
};
tmp.putAll(properties);
tmp.store(new FileWriter(file), null);
See https://github.com/etiennestuder/java-ordered-properties for a complete implementation that lets you read/write properties files in a well-defined order.
OrderedProperties properties = new OrderedProperties();
properties.load(new FileInputStream(new File("~/some.properties")));
Steve McLeod's answer used to work for me, but since Java 11, it doesn't.
The problem seemed to be entrySet ordering, so here you go:
@SuppressWarnings("serial")
private static Properties newOrderedProperties()
{
return new Properties() {
@Override public synchronized Set<Map.Entry<Object, Object>> entrySet() {
return Collections.synchronizedSet(
super.entrySet()
.stream()
.sorted(Comparator.comparing(e -> e.getKey().toString()))
.collect(Collectors.toCollection(LinkedHashSet::new)));
}
};
}
I will warn that this is not fast by any means. It forces iteration over a LinkedHashSet which isn't ideal, but I'm open to suggestions.
Using a TreeSet is dangerous!
Under CASE_INSENSITIVE_ORDER, the strings "mykey", "MyKey" and "MYKEY" compare as equal, so two of the three keys are silently omitted.
I use a List instead, to be sure to keep all keys.
List<Object> list = new ArrayList<>(super.keySet());
Comparator<Object> comparator = Comparator.comparing(Object::toString, String.CASE_INSENSITIVE_ORDER);
list.sort(comparator);
return Collections.enumeration(list);
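For illustration, a quick sketch of the TreeSet pitfall described above:
TreeSet<Object> set = new TreeSet<>(
        Comparator.comparing(Object::toString, String.CASE_INSENSITIVE_ORDER));
set.addAll(Arrays.asList("mykey", "MyKey", "MYKEY"));
System.out.println(set.size()); // prints 1 -- two of the three keys were dropped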
The solution from Steve McLeod did not work when trying to sort case-insensitively.
This is what I came up with:
Properties newProperties = new Properties() {
private static final long serialVersionUID = 4112578634029874840L;
@Override
public synchronized Enumeration<Object> keys() {
Comparator<Object> byCaseInsensitiveString = Comparator.comparing(Object::toString,
String.CASE_INSENSITIVE_ORDER);
Supplier<TreeSet<Object>> supplier = () -> new TreeSet<>(byCaseInsensitiveString);
TreeSet<Object> sortedSet = super.keySet().stream()
.collect(Collectors.toCollection(supplier));
return Collections.enumeration(sortedSet);
}
};
// propertyMap is a simple LinkedHashMap<String,String>
newProperties.putAll(propertyMap);
File file = new File(filepath);
try (FileOutputStream fileOutputStream = new FileOutputStream(file, false)) {
newProperties.store(fileOutputStream, null);
}
I'm having the same itch, so I implemented a simple kludge subclass that lets you explicitly pre-define the order in which names/values appear in one block, while ordering them lexically in another block.
https://github.com/crums-io/io-util/blob/master/src/main/java/io/crums/util/TidyProperties.java
In any event, you need to override public Set<Map.Entry<Object, Object>> entrySet(), not public Enumeration<Object> keys(); the latter, as https://stackoverflow.com/users/704335/timmos points out, is never hit by the store(..) method.
In case someone has to do this in Kotlin:
class OrderedProperties: Properties() {
override val entries: MutableSet<MutableMap.MutableEntry<Any, Any>>
get(){
return Collections.synchronizedSet(
super.entries
.stream()
.sorted(Comparator.comparing { e -> e.key.toString() })
.collect(
Collectors.toCollection(
Supplier { LinkedHashSet() })
)
)
}
}
If your properties file is small and you want a future-proof solution, then I suggest storing the Properties object to a file (or to a ByteArrayOutputStream converted to a String), splitting the string into lines, sorting the lines, and writing the lines to the destination file you want.
This is because the internal implementation of the Properties class keeps changing, and to achieve the sorting in store() you need to override different methods of the Properties class in different versions of Java (see How to sort Properties in java?). If your properties file is not large, then I prefer a future-proof solution over the best-performing one.
For the correct way to split the string into lines, some reliable solutions are:
Files.lines()/Files.readAllLines(), if you use a File
BufferedReader.readLine() (Java 7 or earlier)
IOUtils.readLines(bufferedReader) (org.apache.commons.io.IOUtils, Java 7 or earlier)
BufferedReader.lines() (Java 8+) as mentioned in Split Java String by New Line
String.lines() (Java 11+) as mentioned in Split Java String by New Line.
And you don't need to be worried about values with multiple lines, because Properties.store() will escape the whole multi-line String into one line in the output file.
Sample codes for Java 8:
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = bufferedReader.lines().iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
propertiesContentSorted = Stream.concat(commentLines.stream(), contentLines.stream().sorted())
.collect(Collectors.joining(System.lineSeparator()));
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
Sample codes for Java 7:
import org.apache.commons.collections4.IterableUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
......
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = IOUtils.readLines(bufferedReader).iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
Collections.sort(contentLines);
propertiesContentSorted = StringUtils.join(IterableUtils.chainedIterable(commentLines, contentLines).iterator(), System.lineSeparator());
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
It's true that keys() is not triggered, so instead of passing through a list as Timmos suggested, you can do it like this:
Properties alphaproperties = new Properties() {
@Override
public Set<Map.Entry<Object, Object>> entrySet() {
Set<Map.Entry<Object, Object>> setnontrie = super.entrySet();
Set<Map.Entry<Object, Object>> unSetTrie = new ConcurrentSkipListSet<Map.Entry<Object, Object>>(new Comparator<Map.Entry<Object, Object>>() {
@Override
public int compare(Map.Entry<Object, Object> o1, Map.Entry<Object, Object> o2) {
return o1.getKey().toString().compareTo(o2.getKey().toString());
}
});
unSetTrie.addAll(setnontrie);
return unSetTrie;
}
};
alphaproperties.putAll(properties);
alphaproperties.store(fw, "UpdatedBy Me");
fw.close();