Chronicle Queue: usage with fewer or no lambdas - Java

The documentation shows the usage of an appender or a tailer generally with a lambda, like this:
appender.writeDocument(wireOut -> wireOut.write("log").marshallable(m ->
        m.write("mkey").text(mkey)
         .write("timestamp").dateTime(now)
         .write("msg").text(data)));
For a tailer I use:
int count = 0;
while (/* read from tailer */) {
    wire.read("log").marshallable(m -> {
        String mkey = m.read("mkey").text();
        LocalDateTime ts = m.read("timestamp").dateTime();
        String bmsg = m.read("msg").text();
        // ... do more stuff, like updating counters
        count++; // does not compile: count is not effectively final
    });
}
During the read I would like to do things like update counters, but this is not possible inside the lambda, since any local variable it captures must be effectively final.
What is good practice for using the API without lambdas?
Any other ideas on how to do this? (Currently I work around it with AtomicInteger objects.)
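For reference, the AtomicInteger workaround mentioned above looks roughly like this (a minimal sketch; the AtomicInteger reference itself is effectively final, so the lambda may capture it and still mutate the count it holds):
AtomicInteger count = new AtomicInteger();
wire.read("log").marshallable(m -> {
    String mkey = m.read("mkey").text();
    // ... read the remaining fields as above ...
    count.incrementAndGet();
});
// count.get() now holds the number of records processed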

static class Log extends AbstractMarshallable {
    String mkey;
    LocalDateTime timestamp;
    String msg;
}

int count;

public void myMethod() {
    Log log = new Log();
    final SingleChronicleQueue q = SingleChronicleQueueBuilder.binary(new File("q4")).build();
    final ExcerptAppender appender = q.acquireAppender();
    final ExcerptTailer tailer = q.createTailer();

    try (final DocumentContext dc = appender.writingDocument()) {
        // this will store the contents of log to the queue
        dc.wire().write("log").marshallable(log);
    }

    try (final DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isData())
            return;
        // this will replace the contents of log
        dc.wire().read("log").marshallable(log);
        // ... do more stuff, like updating counters
        count++;
    }
}
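A minimal sketch of draining the queue in a loop with this approach (assuming the same Log class and tailer as above) - because no lambda is involved, a plain local counter works:
int count = 0;
Log log = new Log();
while (true) {
    try (final DocumentContext dc = tailer.readingDocument()) {
        if (!dc.isData())
            break; // nothing more to read
        dc.wire().read("log").marshallable(log); // refills the fields of log
        count++; // ordinary local variable, no capture restrictions
    }
}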

Related

How to implement multiple thread-safe read/write locks (ConcurrentHashMap)

I have an application which reads and writes a number of files. The aim is to prevent a specific file from being read or written while it is being written by another thread. I do not want to lock the reading and writing of all files while a single file is being written as this causes unnecessary locking.
To try and achieve this I am using a ConcurrentHashMap in conjunction with a synchronized block, but if there is a better solution I am open to it.
Here is the rough code.
private static final ConcurrentMap<String, String> lockMap = new ConcurrentHashMap<>();

private void createCache(String templatePath, String cachePath) {
    // get template
    String temp = getTemplate(templatePath);
    String myRand = randomString();
    lockMap.put(cachePath, myRand);
    // save cache file
    try {
        // ** is lockMap.get(cachePath) still threadsafe if another thread has changed the row's value?
        synchronized (lockMap.get(cachePath)) {
            Files.write(Paths.get(cachePath), temp.getBytes(StandardCharsets.UTF_8));
        }
    } finally {
        // remove lock if not locked by another thread in the meantime
        lockMap.remove(cachePath, myRand);
    }
}

private String getCache(String cachePath) {
    String output = null;
    // only lock if this specific file is being written at the moment
    if (lockMap.containsKey(cachePath)) {
        synchronized (lockMap.get(cachePath)) {
            output = getFile(cachePath);
        }
    } else {
        output = getFile(cachePath);
    }
    return output;
}

// main event
private String cacheToString(String templatePath, String cachePath) {
    File cache = new File(cachePath);
    if (!cache.exists()) {
        createCache(templatePath, cachePath);
    }
    return getCache(cachePath);
}
The problem I have is that although the thread will only remove the lock for the requested file if it's unchanged by another thread, it's still possible for another thread to update the value in the lockMap for this entry - if this happens will the synchronisation fail?
I would write a new temporary file each time and rename it when finished. Renaming is atomic.
// static imports needed for the copy options below
import static java.nio.file.StandardCopyOption.ATOMIC_MOVE;
import static java.nio.file.StandardCopyOption.REPLACE_EXISTING;

// a unique counter across restarts
final AtomicLong counter = new AtomicLong(System.currentTimeMillis() * 1000);

private void createCache(String templatePath, String cachePath) throws IOException {
    // get template
    String temp = getTemplate(templatePath);

    Path path = Paths.get(cachePath);
    Path tmpPath = Paths.get(path.getParent().toString(), counter.getAndIncrement() + ".tmp");

    // write a temporary file first, then atomically rename it into place
    Files.write(tmpPath, temp.getBytes(StandardCharsets.UTF_8));
    Files.move(tmpPath, path, ATOMIC_MOVE, REPLACE_EXISTING);
}
If multiple threads try to write to the same file, the last one to perform a move wins.
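With this scheme the read side needs no locking at all, because a reader always sees either the complete old file or the complete new file, never a half-written one. A sketch of what getCache could then shrink to (my assumption, not part of the original answer):
private String getCache(String cachePath) {
    // no lockMap needed: the atomic move guarantees the file is never observed half-written
    return getFile(cachePath);
}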

Using AmazonSQS sendMessage over multiple threads causes it to run slower

I've got an app that sends simple SQS messages to multiple queues. Previously, this sending happened serially, but now that we've got more queues we need to send to, I decided to parallelize it by doing all the sending in a thread pool (up to 10 threads).
However, I've noticed that sqs.sendMessage latency seems to increase when I throw more threads at the job!
I've created a sample program below to reproduce the problem (Note that numIterations is just to get more data, and this is just a simplified version of the code for demo purposes).
Running on an EC2 instance in the same region and using 7 queues, I'm typically getting average results around 12-15ms with 1 thread and 21-25ms with 7 threads - nearly double the latency!
Even running from my laptop remotely (when creating this demo), I'm getting average latency of ~90ms with 1 thread and ~120ms with 7 threads.
public static void main(String[] args) throws Exception {
    AWSCredentialsProvider creds = new AWSStaticCredentialsProvider(new BasicAWSCredentials(A, B));
    final int numThreads = 7;
    final int numQueues = 7;
    final int numIterations = 100;
    final long sleepMs = 10000;

    AmazonSQSClient sqs = new AmazonSQSClient(creds);
    List<String> queueUrls = new ArrayList<>();
    for (int i = 0; i < numQueues; i++) {
        queueUrls.add(sqs.getQueueUrl("testThreading-" + i).getQueueUrl());
    }

    Queue<Long> resultQueue = new ConcurrentLinkedQueue<>();
    sqs.addRequestHandler(new MyRequestHandler(resultQueue));

    runIterations(sqs, queueUrls, numThreads, numIterations, sleepMs);

    System.out.println("Average: " + resultQueue.stream().mapToLong(Long::longValue).average().getAsDouble());
    System.exit(0);
}
private static void runIterations(AmazonSQS sqs, List<String> queueUrls, int threadPoolSize, int numIterations, long sleepMs) throws Exception {
    ExecutorService executor = Executors.newFixedThreadPool(threadPoolSize);
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < numIterations; i++) {
        for (String queueUrl : queueUrls) {
            final String message = String.valueOf(i);
            futures.add(executor.submit(() -> sendMessage(sqs, queueUrl, message)));
        }
        Thread.sleep(sleepMs);
    }
    for (Future<?> f : futures) {
        f.get();
    }
}

private static void sendMessage(AmazonSQS sqs, String queueUrl, String messageBody) {
    final SendMessageRequest request = new SendMessageRequest()
            .withQueueUrl(queueUrl)
            .withMessageBody(messageBody);
    sqs.sendMessage(request);
}
// Use RequestHandler2 to get accurate timing metrics
private static class MyRequestHandler extends RequestHandler2 {

    private final Queue<Long> resultQueue;

    public MyRequestHandler(Queue<Long> resultQueue) {
        this.resultQueue = resultQueue;
    }

    @Override
    public void afterResponse(Request<?> request, Response<?> response) {
        TimingInfo timingInfo = request.getAWSRequestMetrics().getTimingInfo();
        Long start = timingInfo.getStartEpochTimeMilliIfKnown();
        Long end = timingInfo.getEndEpochTimeMilliIfKnown();
        if (start != null && end != null) {
            long elapsed = end - start;
            resultQueue.add(elapsed);
        }
    }
}
I'm sure this is some weird client configuration issue, but the default ClientConfiguration should be able to handle 50 concurrent connections.
Any suggestions?
Update: It's looking like the key to this problem is something I left out of the original simplified version - there is a delay between batches of messages being sent (related to other processing). The latency issue isn't there if the delay is ~2s, but it is an issue when the delay between batches is ~10s. I've tried different values for ClientConfiguration.validateAfterInactivityMillis with no effect.
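For completeness, this is roughly how that setting was wired in for the experiment (a sketch of what I described above; the value is arbitrary and I'm assuming the usual SDK v1 constructor that accepts a ClientConfiguration):
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setValidateAfterInactivityMillis(1000); // arbitrary experimental value

AmazonSQSClient sqs = new AmazonSQSClient(creds, clientConfig);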

Hadoop MapReduce querying on large json data

Hadoop n00b here.
I have installed Hadoop 2.6.0 on a server where I have stored twelve json files I want to perform MapReduce operations on. These files are large, ranging from 2-5 gigabytes each.
The structure of the JSON files is an array of JSON objects. Snippet of two objects below:
[{"campus":"Gløshaugen","building":"Varmeteknisk og Kjelhuset","floor":"4. etasje","timestamp":1412121618,"dayOfWeek":3,"hourOfDay":2,"latitude":63.419161638078066,"salt_timestamp":1412121602,"longitude":10.404867443910122,"id":"961","accuracy":56.083199914753536},{"campus":"Gløshaugen","building":"IT-Vest","floor":"2. etasje","timestamp":1412121612,"dayOfWeek":3,"hourOfDay":2,"latitude":63.41709424828986,"salt_timestamp":1412121602,"longitude":10.402167488838765,"id":"982","accuracy":7.315199988880896}]
I want to perform MapReduce operations based on the fields building and timestamp, at least in the beginning, until I get the hang of this. E.g. MapReduce the data where building equals a parameter and timestamp is greater than X and less than Y. The relevant fields I need after the reduce process are latitude and longitude.
I know there are different tools (Hive, HBase, Pig, Spark, etc.) you can use with Hadoop that might solve this more easily, but my boss wants an evaluation of the MapReduce performance of standalone Hadoop.
So far I have created the main class triggering the map and reduce classes, implemented what I believe is a start in the map class, but I'm stuck on the reduce class. Below is what I have so far.
public class Hadoop {

    public static void main(String[] args) throws Exception {
        try {
            Configuration conf = new Configuration();
            Job job = new Job(conf, "maze");
            job.setJarByClass(Hadoop.class);
            job.setMapperClass(Map.class);
            job.setReducerClass(Reducer.class);

            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(Text.class);

            job.setInputFormatClass(KeyValueTextInputFormat.class);
            job.setOutputFormatClass(TextOutputFormat.class);

            Path inPath = new Path("hdfs://xxx.xxx.106.23:50070/data.json");
            FileInputFormat.addInputPath(job, inPath);

            boolean result = job.waitForCompletion(true);
            System.exit(result ? 0 : 1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Mapper:
public class Map extends org.apache.hadoop.mapreduce.Mapper {

    private Text word = new Text();

    public void map(Text key, Text value, Context context) throws IOException, InterruptedException {
        try {
            JSONObject jo = new JSONObject(value.toString());
            String latitude = jo.getString("latitude");
            String longitude = jo.getString("longitude");
            long timestamp = jo.getLong("timestamp");
            String building = jo.getString("building");

            StringBuilder sb = new StringBuilder();
            sb.append(latitude);
            sb.append("/");
            sb.append(longitude);
            sb.append("/");
            sb.append(timestamp);
            sb.append("/");
            sb.append(building);
            sb.append("/");

            context.write(new Text(sb.toString()), value);
        } catch (JSONException e) {
            e.printStackTrace();
        }
    }
}
Reducer:
public class Reducer extends org.apache.hadoop.mapreduce.Reducer {

    private Text result = new Text();

    protected void reduce(Text key, Iterable<Text> values, org.apache.hadoop.mapreduce.Reducer.Context context)
            throws IOException, InterruptedException {
    }
}
UPDATE
private static String BUILDING;
private static int tsFrom;
private static int tsTo;

public void map(Text key, Text value, Context context) throws IOException, InterruptedException {
    try {
        JSONArray ja = new JSONArray(key.toString());
        StringBuilder sb;
        for (int n = 0; n < ja.length(); n++) {
            JSONObject jo = ja.getJSONObject(n);
            String latitude = jo.getString("latitude");
            String longitude = jo.getString("longitude");
            int timestamp = jo.getInt("timestamp");
            String building = jo.getString("building");

            if (BUILDING.equals(building) && timestamp < tsTo && timestamp > tsFrom) {
                sb = new StringBuilder();
                sb.append(latitude);
                sb.append("/");
                sb.append(longitude);
                context.write(new Text(sb.toString()), value);
            }
        }
    } catch (JSONException e) {
        e.printStackTrace();
    }
}

@Override
public void configure(JobConf jobConf) {
    System.out.println("configure");
    BUILDING = jobConf.get("BUILDING");
    tsFrom = Integer.parseInt(jobConf.get("TSFROM"));
    tsTo = Integer.parseInt(jobConf.get("TSTO"));
}
This works for a small data set. Since I am working with LARGE JSON files, I get a Java heap space exception. Since I am not familiar with Hadoop, I'm having trouble understanding how MapReduce can read the data without running into an OutOfMemoryError.
If you simply want a list of LONG/LAT under the constraint of building=something and timestamp=somethingelse, this is a simple filter operation; for that you do not need a reducer. In the mapper you should check whether the current JSON satisfies the condition, and only then write it out to the context. If it fails to satisfy the condition, you don't want it in the output.
The output should be LONG/LAT (no building/timestamp, unless you want them there as well).
If no reducer is present, the output of the mappers is the output of the job, which in your case is sufficient.
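As a side note, a map-only job is usually declared explicitly in the driver by setting the number of reduce tasks to zero (a one-line sketch using the standard Job API):
// tell Hadoop this is a map-only job: mapper output becomes the job output
job.setNumReduceTasks(0);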
As for the code:
your driver should pass the building ID and the timestamp range to the mapper, using the job configuration. Anything you put there will be available to all your mappers.
Configuration conf = new Configuration();
conf.set("Building", "123");
conf.set("TSFROM", "12300000000");
conf.set("TSTO", "12400000000");
Job job = new Job(conf);
your mapper class needs to implement JobConfigurable.configure; in there you will read from the configuration object into local static variables
private static String BUILDING;
private static Long tsFrom;
private static Long tsTo;
public void configure(JobConf job) {
    BUILDING = job.get("Building");
    tsFrom = Long.parseLong(job.get("TSFROM"));
    tsTo = Long.parseLong(job.get("TSTO"));
}
Now, your map function needs to check:
if (BUILDING.equals(building) && timestamp < tsTo && timestamp > tsFrom) {
    sb = new StringBuilder();
    sb.append(latitude);
    sb.append("/");
    sb.append(longitude);
    context.write(new Text(sb.toString()), value);
}
This means any rows belonging to other buildings, or outside the timestamp range, will not appear in the result.
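One caveat: JobConfigurable.configure(JobConf) belongs to the old mapred API, while the driver and mapper above use the newer org.apache.hadoop.mapreduce API. If you stay on the new API, the equivalent hook is overriding Mapper.setup(Context); a minimal sketch of that variant (my adaptation, not part of the original answer):
// new-API equivalent of configure(JobConf): read the filter parameters once per mapper
@Override
protected void setup(Context context) throws IOException, InterruptedException {
    Configuration conf = context.getConfiguration();
    BUILDING = conf.get("Building");
    tsFrom = Long.parseLong(conf.get("TSFROM"));
    tsTo = Long.parseLong(conf.get("TSTO"));
}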

Java: Marshalling using JAXB to XML, how to properly multithread

I am trying to take a very long file of strings and convert it to XML according to a schema I was given. I used JAXB to create classes from that schema. Since the file is very large I created a thread pool to improve performance, but since then each thread only processes one line of the file and marshals it to the XML file.
Below is my home class where I read from the file. Each line is a record of a transaction; for every new user encountered, a list is made to store all of that user's transactions, and each list is put into a HashMap. I made it a ConcurrentHashMap because multiple threads will work on the map simultaneously - is this the correct thing to do?
After the lists are created, a thread is made for each user. Each thread runs the ProcessCommands method below and receives from home the list of transactions for its user.
public class home {

    public static File XMLFile = new File("LogFile.xml");

    Map<String, List<String>> UserMap = new ConcurrentHashMap<String, List<String>>();
    String[] UserNames = new String[5000];
    int numberOfUsers = 0;

    try {
        BufferedReader reader = new BufferedReader(new FileReader("test.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] parsed = line.split(",|\\s+");
            if (!parsed[2].equals("./testLOG")) {
                if (Utilities.checkUserExists(parsed[2], UserNames) == false) { // user does not already exist
                    System.out.println("New User: " + parsed[2]);
                    UserMap.put(parsed[2], new ArrayList<String>()); // create list of transactions for new user
                    UserMap.get(parsed[2]).add(line);                // add first item to new list
                    UserNames[numberOfUsers] = parsed[2];            // add new user
                    numberOfUsers++;
                } else { // user already existed
                    UserMap.get(parsed[2]).add(line);
                }
            }
        }
        reader.close();
    } catch (IOException x) {
        System.err.println(x);
    }

    // get start time
    long startTime = new Date().getTime();

    int tCount = numberOfUsers;
    ExecutorService threadPool = Executors.newFixedThreadPool(tCount);
    for (int i = 0; i < numberOfUsers; i++) {
        System.out.println("Starting Thread " + i + " for user " + UserNames[i]);
        Runnable worker = new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile);
        threadPool.execute(worker);
    }
    threadPool.shutdown();
    while (!threadPool.isTerminated()) {
    }
    System.out.println("Finished all threads");
}
Here is the ProcessCommands class. The thread receives the list for its user and creates a marshaller. From what I understand, marshalling is not thread safe, so it is best to create one for each thread - is this the best way to do that?
When I create the marshallers I know that each one (from each thread) will want to access the created file, causing conflicts, so I used synchronized - is that correct?
As the thread iterates through its list, each line calls for a certain case. There are a lot, so I just made pseudo-cases for clarity. Each case calls the function below.
public class ProcessCommands implements Runnable{
private static final boolean DEBUG = false;
private List<String> list = null;
private String threadName;
private File XMLfile = null;
public Thread myThread;
public ProcessCommands(List<String> list, String threadName, File XMLfile){
this.list = list;
this.threadName = threadName;
this.XMLfile = XMLfile;
}
public void run(){
Date start = null;
int transactionNumber = 0;
String[] parsed = new String[8];
String[] quoteParsed = null;
String[] universalFormatCommand = new String[9];
String userCommand = null;
Connection connection = null;
Statement stmt = null;
Map<String, UserObject> usersMap = null;
Map<String, Stack<BLO>> buyMap = null;
Map<String, Stack<SLO>> sellMap = null;
Map<String, QLO> stockCodeMap = null;
Map<String, BTO> buyTriggerMap = null;
Map<String, STO> sellTriggerMap = null;
Map<String, USO> usersStocksMap = null;
String SQL = null;
int amountToAdd = 0;
int tempDollars = 0;
UserObject tempUO = null;
BLO tempBLO = null;
SLO tempSLO = null;
Stack<BLO> tempStBLO = null;
Stack<SLO> tempStSLO = null;
BTO tempBTO = null;
STO tempSTO = null;
USO tempUSO = null;
QLO tempQLO = null;
String stockCode = null;
String quoteResponse = null;
int usersDollars = 0;
int dollarAmountToBuy = 0;
int dollarAmountToSell = 0;
int numberOfSharesToBuy = 0;
int numberOfSharesToSell = 0;
int quoteStockInDollars = 0;
int shares = 0;
Iterator<String> itr = null;
int transactionCount = list.size();
System.out.println("Starting "+threadName+" - listSize = "+transactionCount);
//UO dollars, reserved
usersMap = new HashMap<String, UserObject>(3); //userName -> UO
//USO shares
usersStocksMap = new HashMap<String, USO>(); //userName+stockCode -> shares
//BLO code, timestamp, dollarAmountToBuy, stockPriceInDollars
buyMap = new HashMap<String, Stack<BLO>>(); //userName -> Stack<BLO>
//SLO code, timestamp, dollarAmountToSell, stockPriceInDollars
sellMap = new HashMap<String, Stack<SLO>>(); //userName -> Stack<SLO>
//BTO code, timestamp, dollarAmountToBuy, stockPriceInDollars
buyTriggerMap = new ConcurrentHashMap<String, BTO>(); //userName+stockCode -> BTO
//STO code, timestamp, dollarAmountToBuy, stockPriceInDollars
sellTriggerMap = new HashMap<String, STO>(); //userName+stockCode -> STO
//QLO timestamp, stockPriceInDollars
stockCodeMap = new HashMap<String, QLO>(); //stockCode -> QLO
//create user object and initialize stacks
usersMap.put(threadName, new UserObject(0, 0));
buyMap.put(threadName, new Stack<BLO>());
sellMap.put(threadName, new Stack<SLO>());
try {
//Marshaller marshaller = getMarshaller();
synchronized (this){
Marshaller marshaller = init.jc.createMarshaller();
marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
marshaller.setProperty(Marshaller.JAXB_FRAGMENT, true);
marshaller.marshal(LogServer.Root,XMLfile);
marshaller.marshal(LogServer.Root,System.out);
}
} catch (JAXBException M) {
M.printStackTrace();
}
Date timing = new Date();
//universalFormatCommand = new String[8];
parsed = new String[8];
//iterate through workload file
itr = this.list.iterator();
while(itr.hasNext()){
userCommand = (String) itr.next();
itr.remove();
parsed = userCommand.split(",|\\s+");
transactionNumber = Integer.parseInt(parsed[0].replaceAll("\\[", "").replaceAll("\\]", ""));
universalFormatCommand = Utilities.FormatCommand(parsed, parsed[0]);
if(transactionNumber % 100 == 0){
System.out.println(this.threadName + " - " +transactionNumber+ " - "+(new Date().getTime() - timing.getTime())/1000);
}
/*System.out.print("UserCommand " +transactionNumber + ": ");
for(int i = 0;i<8;i++)System.out.print(universalFormatCommand[i]+ " ");
System.out.print("\n");*/
//switch for user command (pseudo-cases, abbreviated for clarity)
switch (parsed[1].toLowerCase()) {
case "one":
    // ... do stuff ...
    LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
    break;
case "two":
    // ... do stuff ...
    LogServer.create_Log(universalFormatCommand, transactionNumber, CommandType.ADD);
    break;
}
}
}
The function create_Log has multiple cases, so as before, for clarity I just left one. The case "QUOTE" only calls one object creation function, but other cases can create multiple objects. The type 'log' is a complex XML type that defines all the other object types, so in each call to create_Log I create a log type called Root. The class 'log' generated by JAXB includes a function to create a list of objects. The statement:
Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
takes the root element I created, creates a list and adds the newly created object 'quote_QuoteType' to that list. Before I added threading, this method successfully created a list of as many objects as I wanted and then marshalled them. So I'm pretty sure the bit in class 'LogServer' is not the issue. It is something to do with the marshalling and synchronization in the ProcessCommands class above.
public class LogServer {

    public static log Root = new log();

    public static QuoteServerType Log_Quote(String[] input, int TransactionNumber) {
        ObjectFactory factory = new ObjectFactory();
        QuoteServerType quoteCall = factory.createQuoteServerType();
        // ... populate the QuoteServerType object called quoteCall ...
        return quoteCall;
    }

    public static void create_Log(String[] input, int TransactionNumber, CommandType Command) {
        System.out.print("TRANSACTION " + TransactionNumber + " is " + Command + ": ");
        for (int i = 0; i < input.length; i++) System.out.print(input[i] + " ");
        System.out.print("\n");

        switch (input[1]) {
            case "QUOTE":
                System.out.print("QUOTE CASE");
                QuoteServerType quote_QuoteType = Log_Quote(input, TransactionNumber);
                Root.getUserCommandOrQuoteServerOrAccountTransaction().add(quote_QuoteType);
                break;
        }
    }
}
So you wrote a lot of code, but have you checked whether it actually works? After a quick look I doubt it. You should test your code logic part by part, not go all the way to the end. It seems you are just starting with Java. I would recommend practicing first on simple single-threaded applications. Sorry if I sound harsh, but I will try to be constructive as well:
By convention, class names start with a capital letter and variable names with a lowercase one; you do it the other way around.
You should put your code in a method in your home (Home) class, not in the static initializer block.
You are reading the whole file into memory; you do not process it line by line. After home is initialized, literally the whole content of the file will be under the UserMap variable. If the file is really large you will run out of heap memory. If you assume a large file you cannot do this, and you have to redesign your app to store partial results somewhere. If your file is smaller than memory you could keep it like that (but you said it is large).
There is no need for UserNames; UserMap.containsKey will do the job.
Your thread pool size should be in the range of your core count, not the number of users, or you will get thread thrashing (if you have blocking operations in your code make tCount = 2 * processors, otherwise keep it at the number of processors) - see the sketch after this list. Once one ProcessCommands finishes, the executor will start another one until you finish them all, and you will be efficiently using all your processor cores.
DO NOT while(!threadPool.isTerminated()): this line will completely consume one processor as it keeps checking constantly; call awaitTermination instead.
Your ProcessCommands has a few map variables which will only ever hold one entry because, as you said, each thread processes data from one user.
The synchronized(this) in ProcessCommands will not work, as each thread will synchronize on a different object (a different instance of ProcessCommands).
I believe creating a marshaller is thread safe (check it), so there is no need for synchronization at all.
You save your log (whatever it is) before you do the actual processing of the transaction lists.
The marshalling will overwrite the content of the file with the current state of LogServer.Root. If it is shared between your ProcessCommands instances (it seems so), what is the point of saving it in each thread? Do it once you are finished.
You don't need itr.remove();
The log class (for the Root variable!) needs to be thread-safe, as all the threads will call operations on it (so the list inside the log class must be a concurrent list, etc.).
And so on.....
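A minimal sketch of the pool-sizing and shutdown points above (the pool size and timeout are illustrative assumptions):
// size the pool to the machine, not to the number of users
int processors = Runtime.getRuntime().availableProcessors();
ExecutorService threadPool = Executors.newFixedThreadPool(processors);

for (int i = 0; i < numberOfUsers; i++) {
    threadPool.execute(new ProcessCommands(UserMap.get(UserNames[i]), UserNames[i], XMLFile));
}

threadPool.shutdown();
// block until all tasks finish instead of spinning on isTerminated();
// awaitTermination throws InterruptedException, so declare or handle it
threadPool.awaitTermination(1, TimeUnit.HOURS);
System.out.println("Finished all threads");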
I would recommend that you:
Start with a simple single-threaded version that actually works.
Deal with processing line by line (store results for each user in a different file; you can keep a cache with transactions for recently used users so as not to write to the disk all the time - see Guava cache).
Process each user's transactions into your user log objects, multithreaded (again, if there is a lot you have to save them to disk rather than keep everything in memory).
Write code that combines the logs from different users into one (again you may want to do it multithreaded), though it will be mostly IO operations, so there is not much to gain and it is trickier to do.
Good luck

How can I write Java properties in a defined order?

I'm using java.util.Properties's store(Writer, String) method to store the properties. In the resulting text file, the properties are stored in a haphazard order.
This is what I'm doing:
Properties properties = createProperties();
properties.store(new FileWriter(file), null);
How can I ensure the properties are written out in alphabetical order, or in the order the properties were added?
I'm hoping for a solution simpler than "manually create the properties file".
As per "The New Idiot's" suggestion, this stores in alphabetical key order.
Properties tmp = new Properties() {
    @Override
    public synchronized Enumeration<Object> keys() {
        return Collections.enumeration(new TreeSet<Object>(super.keySet()));
    }
};
tmp.putAll(properties);
tmp.store(new FileWriter(file), null);
See https://github.com/etiennestuder/java-ordered-properties for a complete implementation that allows you to read/write properties files in a well-defined order.
OrderedProperties properties = new OrderedProperties();
properties.load(new FileInputStream(new File("~/some.properties")));
Steve McLeod's answer used to work for me, but since Java 11, it doesn't.
The problem seemed to be entrySet() ordering, so here you go:
#SuppressWarnings("serial")
private static Properties newOrderedProperties()
{
return new Properties() {
#Override public synchronized Set<Map.Entry<Object, Object>> entrySet() {
return Collections.synchronizedSet(
super.entrySet()
.stream()
.sorted(Comparator.comparing(e -> e.getKey().toString()))
.collect(Collectors.toCollection(LinkedHashSet::new)));
}
};
}
I will warn that this is not fast by any means. It forces iteration over a LinkedHashSet which isn't ideal, but I'm open to suggestions.
Using a TreeSet is dangerous!
Under CASE_INSENSITIVE_ORDER the strings "mykey", "MyKey" and "MYKEY" compare as equal, so 2 of the 3 keys will be omitted.
I use a List instead, to be sure to keep all keys.
List<Object> list = new ArrayList<>( super.keySet());
Comparator<Object> comparator = Comparator.comparing( Object::toString, String.CASE_INSENSITIVE_ORDER );
Collections.sort( list, comparator );
return Collections.enumeration( list );
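A tiny snippet illustrating the collision being warned about (plain JDK behaviour, nothing specific to Properties):
// a TreeSet with a case-insensitive comparator treats these three as one key
TreeSet<String> set = new TreeSet<>(String.CASE_INSENSITIVE_ORDER);
set.add("mykey");
set.add("MyKey");
set.add("MYKEY");
System.out.println(set.size()); // prints 1 - two of the keys were dropped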
The solution from Steve McLeod did not work when trying to sort case-insensitively.
This is what I came up with
Properties newProperties = new Properties() {

    private static final long serialVersionUID = 4112578634029874840L;

    @Override
    public synchronized Enumeration<Object> keys() {
        Comparator<Object> byCaseInsensitiveString = Comparator.comparing(Object::toString,
                String.CASE_INSENSITIVE_ORDER);

        Supplier<TreeSet<Object>> supplier = () -> new TreeSet<>(byCaseInsensitiveString);

        TreeSet<Object> sortedSet = super.keySet().stream()
                .collect(Collectors.toCollection(supplier));

        return Collections.enumeration(sortedSet);
    }
};

// propertyMap is a simple LinkedHashMap<String,String>
newProperties.putAll(propertyMap);
File file = new File(filepath);
try (FileOutputStream fileOutputStream = new FileOutputStream(file, false)) {
    newProperties.store(fileOutputStream, null);
}
I'm having the same itch, so I implemented a simple kludge subclass that allows you to explicitly pre-define the order in which name/value pairs appear in one block, and lexically orders them in another block.
https://github.com/crums-io/io-util/blob/master/src/main/java/io/crums/util/TidyProperties.java
In any event, you need to override public Set<Map.Entry<Object, Object>> entrySet(), not public Enumeration<Object> keys(); the latter, as https://stackoverflow.com/users/704335/timmos points out, is never hit by the store(..) method.
In case someone has to do this in kotlin:
class OrderedProperties: Properties() {
override val entries: MutableSet<MutableMap.MutableEntry<Any, Any>>
get(){
return Collections.synchronizedSet(
super.entries
.stream()
.sorted(Comparator.comparing { e -> e.key.toString() })
.collect(
Collectors.toCollection(
Supplier { LinkedHashSet() })
)
)
}
}
If your properties file is small, and you want a future-proof solution, then I suggest you store the Properties object to a file and load the file back into a String (or store it to a ByteArrayOutputStream and convert it to a String), split the string into lines, sort the lines, and write the lines to the destination file you want.
This is because the internal implementation of the Properties class keeps changing, and to achieve the sorting in store(), you need to override different methods of the Properties class in different versions of Java (see How to sort Properties in java?). If your properties file is not large, then I prefer a future-proof solution over the best-performing one.
For the correct way to split the string into lines, some reliable solutions are:
Files.lines()/Files.readAllLines(), if you use a File
BufferedReader.readLine() (Java 7 or earlier)
IOUtils.readLines(bufferedReader) (org.apache.commons.io.IOUtils, Java 7 or earlier)
BufferedReader.lines() (Java 8+) as mentioned in Split Java String by New Line
String.lines() (Java 11+) as mentioned in Split Java String by New Line.
And you don't need to be worried about values with multiple lines, because Properties.store() will escape the whole multi-line String into one line in the output file.
Sample codes for Java 8:
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = bufferedReader.lines().iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
propertiesContentSorted = Stream.concat(commentLines.stream(), contentLines.stream().sorted())
.collect(Collectors.joining(System.lineSeparator()));
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
Sample codes for Java 7:
import org.apache.commons.collections4.IterableUtils;
import org.apache.commons.io.IOUtils;
import org.apache.commons.lang.StringUtils;
......
public static void test() {
......
String comments = "Your multiline comments, this should be line 1." +
"\n" +
"The sorting should not mess up the comment lines' ordering, this should be line 2 even if T is smaller than Y";
saveSortedPropertiesToFile(inputProperties, comments, Paths.get("C:\\dev\\sorted.properties"));
}
public static void saveSortedPropertiesToFile(Properties properties, String comments, Path destination) {
try (ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
// Storing it to output stream is the only way to make sure correct encoding is used.
properties.store(outputStream, comments);
/* The encoding here shouldn't matter, since you are not going to modify the contents,
and you are only going to split them to lines and reorder them.
And Properties.store(OutputStream, String) should have translated unicode characters into (backslash)uXXXX anyway.
*/
String propertiesContentUnsorted = outputStream.toString("UTF-8");
String propertiesContentSorted;
try (BufferedReader bufferedReader = new BufferedReader(new StringReader(propertiesContentUnsorted))) {
List<String> commentLines = new ArrayList<>();
List<String> contentLines = new ArrayList<>();
boolean commentSectionEnded = false;
for (Iterator<String> it = IOUtils.readLines(bufferedReader).iterator(); it.hasNext(); ) {
String line = it.next();
if (!commentSectionEnded) {
if (line.startsWith("#")) {
commentLines.add(line);
} else {
contentLines.add(line);
commentSectionEnded = true;
}
} else {
contentLines.add(line);
}
}
// Sort on content lines only
Collections.sort(contentLines);
propertiesContentSorted = StringUtils.join(IterableUtils.chainedIterable(commentLines, contentLines).iterator(), System.lineSeparator());
}
// Just make sure you use the same encoding as above.
Files.write(destination, propertiesContentSorted.getBytes(StandardCharsets.UTF_8));
} catch (IOException e) {
// Log it if necessary
}
}
True that keys() is not triggered, so instead of passing through a list as Timmos suggested, you can do it like this:
Properties alphaproperties = new Properties() {
    @Override
    public Set<Map.Entry<Object, Object>> entrySet() {
        Set<Map.Entry<Object, Object>> setnontrie = super.entrySet();
        Set<Map.Entry<Object, Object>> unSetTrie = new ConcurrentSkipListSet<Map.Entry<Object, Object>>(new Comparator<Map.Entry<Object, Object>>() {
            @Override
            public int compare(Map.Entry<Object, Object> o1, Map.Entry<Object, Object> o2) {
                return o1.getKey().toString().compareTo(o2.getKey().toString());
            }
        });
        unSetTrie.addAll(setnontrie);
        return unSetTrie;
    }
};
alphaproperties.putAll(properties);
alphaproperties.store(fw, "UpdatedBy Me");
fw.close();
