Write to a specific row via FileWriter in Java - java

I have some data that I want to write.
Code:
private void saveStats(int early, int infected, int recovered, int deads, int notInfected, int vaccinated, int iteration) {
    try {
        FileWriter txt = new FileWriter("statistic.csv");
        txt.write(String.valueOf(early));
        txt.write(";");
        txt.write(String.valueOf(infected));
        txt.write(";");
        txt.write(String.valueOf(recovered));
        txt.write(";");
        txt.write(String.valueOf(deads));
        txt.write(";");
        txt.write(String.valueOf(notInfected));
        txt.write(";");
        txt.write(String.valueOf(vaccinated));
        txt.write("\n");
        txt.close();
    } catch (IOException ex) {
        ex.printStackTrace();
        System.out.println("Error!");
    }
}
I will use this function to save the iteration number and some additional data; for example:
Iteration Infected Recovered Dead NotInfected Vaccinated
1 200 300 400 500
2 300 400 600 900
etc
A perfect solution would have the first row of the file hold names for each column, similar to what's written above.

For something like this, it is a good idea to use an existing Java CSV library. One possibility is Apache Commons CSV. "Google is your friend" if you want to find tutorials or other alternatives.
But if you wanted to "roll your own" code, there are various ways to do it. The simplest way to change your code so that it records multiple rows in the CSV would be to change
new FileWriter("statistic.csv");
to
new FileWriter("statistic.csv", true);
That opens the file in "append" mode, and the new row will be added at the end of the file instead of replacing the existing row.
You should also use Java 7+ try-with-resources to manage the FileWriter. That will make sure that the FileWriter is always properly closed.
If you want to get fancy with CSV headers, more efficient file handling, etc., you will need to write your own CSVWriter class. But if you are doing that, you would be better off using a library that someone has already designed, written and tested. (See above!)
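For illustration, here is a rough sketch of the original method with those changes applied: append mode, try-with-resources, and a header row written only the first time (i.e. when the file does not yet exist). The column names are taken from the example above, and the parameter order is an assumption:
private void saveStats(int iteration, int infected, int recovered, int deads, int notInfected, int vaccinated) {
    File file = new File("statistic.csv");
    boolean writeHeader = !file.exists();
    // try-with-resources closes the FileWriter even if an exception is thrown
    try (FileWriter txt = new FileWriter(file, true)) { // append mode
        if (writeHeader) {
            txt.write("Iteration;Infected;Recovered;Dead;NotInfected;Vaccinated\n");
        }
        txt.write(iteration + ";" + infected + ";" + recovered + ";"
                + deads + ";" + notInfected + ";" + vaccinated + "\n");
    } catch (IOException ex) {
        ex.printStackTrace();
    }
}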

Related

Adding columns to existing csv file using super-csv

I admit I am not a great Java programmer and probably my question is pretty dumb. I need to add new columns in different places to an existing CSV file. I'm using the Super CSV library.
My input file is something like that
1,2011-5-14 16:30:0.250,A
2,2011-5-14 16:30:21.500,B
3,2011-5-14 16:30:27.000,C
4,2011-5-14 16:30:29.750,B
5,2011-5-14 16:30:34.500,F
As you can see, I have (or need) no header.
I need to add a column in position 2 and a column at the end of each row, in order to get:
1,1,2011-5-14 16:30:0.250,A,1
2,1,2011-5-14 16:30:21.500,B,1
3,1,2011-5-14 16:30:27.000,C,1
4,1,2011-5-14 16:30:29.750,B,1
5,1,2011-5-14 16:30:34.500,F,1
From the library documentation I gathered (am I wrong?) that I cannot directly modify the original file, and that the best approach is to read it and write it back. I guess using CsvMapReader and CsvMapWriter could be a good choice. But how can I add the columns in between the existing ones? Should I read each field of the existing columns separately? I tried to find suggestions in the library documentation but I cannot understand how to do it.
You can do it using the CsvListReader and CsvListWriter classes. Below is a simple example of how to do it:
CsvListReader reader = new CsvListReader(new FileReader(inputCsv), CsvPreference.STANDARD_PREFERENCE);
CsvListWriter writer = new CsvListWriter(new FileWriter(outputCsv), CsvPreference.STANDARD_PREFERENCE);

List<String> columns;
while ((columns = reader.read()) != null) {
    System.out.println("Input: " + columns);

    // add the new columns
    columns.add(1, "Column_2");
    columns.add("Last_column");

    System.out.println("Output: " + columns);
    writer.write(columns);
}

reader.close();
writer.close();
This is a simple example. You should catch all exceptions and close the streams in a finally block.
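For what it's worth, a variant of the same loop using try-with-resources (recent versions of Super CSV's reader and writer classes are Closeable), which takes care of closing the streams even when an exception is thrown; the added "1" values follow the desired output above:
try (CsvListReader reader = new CsvListReader(new FileReader(inputCsv), CsvPreference.STANDARD_PREFERENCE);
     CsvListWriter writer = new CsvListWriter(new FileWriter(outputCsv), CsvPreference.STANDARD_PREFERENCE)) {
    List<String> columns;
    while ((columns = reader.read()) != null) {
        columns.add(1, "1");   // the new second column
        columns.add("1");      // the new last column
        writer.write(columns);
    }
}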

Fastest way to import millions of JSON documents to MongoDB

I have more than 10 million JSON documents of the form:
["key": "val2", "key1" : "val", "{\"key\":\"val", \"key2\":\"val2"}"]
in one file.
Importing with the Java driver API took around 3 hours, using the following function (importing one BSON at a time):
public static void importJSONFileToDBUsingJavaDriver(String pathToFile, DB db, String collectionName) {
    // open file
    FileInputStream fstream = null;
    try {
        fstream = new FileInputStream(pathToFile);
    } catch (FileNotFoundException e) {
        e.printStackTrace();
        System.out.println("file does not exist, exiting");
        return;
    }
    BufferedReader br = new BufferedReader(new InputStreamReader(fstream));

    // read it line by line
    String strLine;
    DBCollection newColl = db.getCollection(collectionName);
    try {
        while ((strLine = br.readLine()) != null) {
            // convert each line to BSON
            DBObject bson = (DBObject) JSON.parse(strLine);
            // insert the BSON into the database
            try {
                newColl.insert(bson);
            } catch (MongoException e) {
                // duplicate key
                e.printStackTrace();
            }
        }
        br.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Is there a faster way? Maybe MongoDB settings influence the insertion speed? For example, adding an "_id" key that will function as the index, so that MongoDB would not have to create an artificial key, and thus an index, for each document; or disabling index creation entirely during insertion.
Thanks.
I'm sorry, but you're all picking at minor performance issues instead of the core one. Separating the file-reading logic from the inserting logic is a small gain. Loading the file in binary mode (via MMAP) is a small gain. Using Mongo's bulk inserts is a big gain, but still no dice.
The whole performance bottleneck is the DBObject bson = JSON.parse(line) call. Or in other words, the problem with the Java drivers is that they need a conversion from JSON to BSON, and this code seems to be awfully slow or badly implemented. A full JSON round trip (encode + decode) via json-simple, or especially via json-smart, is 100 times faster than the JSON.parse() command.
I know Stack Overflow is telling me right above this box that I should be answering the question, which I'm not, but rest assured that I'm still looking for an answer to this problem. I can't believe all the talk about Mongo's performance when this simple example code fails so miserably.
I've imported a multi-line JSON file with ~250M records. I just used mongoimport < data.txt and it took 10 hours. Compared to your 10M in 3 hours, I think this is considerably faster.
Also, from my experience, writing your own multi-threaded parser would speed things up drastically. The procedure is simple (a rough sketch follows below):
Open the file as BINARY (not TEXT!)
Set markers(offsets) evenly across the file. The count of markers depends on the number of threads you want.
Search for '\n' near the markers, calibrate the markers so they are aligned to lines.
Parse each chunk with a thread.
A reminder: when you want performance, don't use a stream reader or any built-in line-based read method; they are slow. Just use a binary buffer and search for '\n' to identify a line, and (most preferably) do in-place parsing in the buffer without creating a String. Otherwise the garbage collector won't be happy.
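A rough sketch of steps 2 and 3 (placing markers and calibrating them to line boundaries), assuming one JSON document per line; the helper name and the use of RandomAccessFile are illustrative choices only:
// Hypothetical helper: returns chunk boundaries aligned to '\n' so that each
// thread can parse whole lines from offsets[i] to offsets[i + 1].
static long[] alignedOffsets(String path, int threads) throws IOException {
    try (RandomAccessFile raf = new RandomAccessFile(path, "r")) {
        long size = raf.length();
        long[] offsets = new long[threads + 1];
        offsets[0] = 0;
        offsets[threads] = size;
        for (int i = 1; i < threads; i++) {
            long pos = size * i / threads;             // evenly spaced marker
            raf.seek(pos);
            int b;
            while (pos < size && (b = raf.read()) != -1 && b != '\n') {
                pos++;                                  // calibrate to the next newline
            }
            offsets[i] = Math.min(pos + 1, size);       // chunk starts just after it
        }
        return offsets;
    }
}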
You can parse the entire file at once and then insert the whole JSON into the Mongo document. Avoid multiple loops; separate the logic as follows:
1) Parse the file and retrieve the JSON object.
2) Once the parsing is over, save the JSON object in the Mongo document.
I've got a slightly faster way (I'm also inserting millions at the moment): insert collections instead of single documents with
insert(List<DBObject> list)
http://api.mongodb.org/java/current/com/mongodb/DBCollection.html#insert(java.util.List)
That said, it's not that much faster. I'm about to experiment with setting WriteConcerns other than ACKNOWLEDGED (mainly UNACKNOWLEDGED) to see if I can speed it up further. See http://docs.mongodb.org/manual/core/write-concern/ for more info.
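As a rough illustration (not the asker's exact code): batching the documents from the question's read loop and inserting them with that overload. The batch size of 1000 is an arbitrary choice, and coll / br stand for the DBCollection and BufferedReader from the question:
List<DBObject> batch = new ArrayList<DBObject>(1000);
String line;
while ((line = br.readLine()) != null) {
    batch.add((DBObject) JSON.parse(line));
    if (batch.size() == 1000) {
        coll.insert(batch);   // one round trip for 1000 documents
        batch.clear();
    }
}
if (!batch.isEmpty()) {
    coll.insert(batch);       // flush the remainder
}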
Another way to improve performance is to create the indexes after bulk inserting. However, this is rarely an option except for one-off jobs.
Apologies if this is slightly wooly sounding, I'm still testing things myself. Good question.
You can also remove all the indexes (except for the PK index, of course) and rebuild them after the import.
Use bulk insert/upsert operations. Since Mongo 2.6 you can do bulk updates/upserts. The example below does a bulk upsert using the C# driver.
MongoCollection<foo> collection = database.GetCollection<foo>(collectionName);
var bulk = collection.InitializeUnorderedBulkOperation();
foreach (FooDoc fooDoc in fooDocsList)
{
    var update = new UpdateDocument { { fooDoc.ToBsonDocument() } };
    bulk.Find(Query.EQ("_id", fooDoc.Id)).Upsert().UpdateOne(update);
}
BulkWriteResult bwr = bulk.Execute();
You can use a bulk insertion.
You can read the documentation on the MongoDB website, and you can also check this Java example on Stack Overflow.

How to change the Properties.store() divider symbol from "=" to ":"?

I recently found out about java.util.Properties, which allows me to write and read from a config without writing my own function for it.
I was excited since it is so easy to use, but later noticed a flaw when I stored the modified config file.
Here is my code, quite simple for now:
FileWriter writer = null;
Properties configFile = new Properties();
configFile.load(ReadFileTest.class.getClassLoader().getResourceAsStream("config.txt"));

String screenwidth = configFile.getProperty("screenwidth");
String screenheight = configFile.getProperty("screenheight");
System.out.println(screenwidth);
System.out.println(screenheight);

configFile.setProperty("screenwidth", "1024");
configFile.setProperty("screenheight", "600");

try {
    writer = new FileWriter("config.txt");
    configFile.store(writer, null);
    writer.flush();
    writer.close();
} catch (IOException e) {
    e.printStackTrace();
}
The problem I noticed was that the config file I try to edit is stored like this:
foo: bar
bar: foo
foobar: barfoo
However, the output after properties.store(writer, null) is this:
foo=bar
bar=foo
foobar=barfoo
The config file I edit is not for my program; it is for another application that needs the config file to be in the format shown above, with : as the divider, or else it will reset the configuration to default.
Does anybody know how to easily change this?
I searched through the first 5 pages of Google results but found no one with a similar problem.
I also checked the Javadoc and found no method that allows me to change it without writing a class myself.
I would like to use Properties for now since it is there and quite easy to use.
I also had the idea of just replacing all = with : after saving the file, but maybe someone has a better suggestion?
Don't use a tool that isn't designed for the task - don't use Properties here. Instead, I'd just write your own - should be easy enough.
You can still use a Properties instance as your "store", but don't use it for serializing the properties to text. Instead, just use a FileWriter, iterate through the properties, and write the lines yourself - as key + ": " + value.
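A minimal sketch of that approach (no escaping is done here, which Properties.store() would normally handle, so this assumes plain keys and values):
try (Writer out = new FileWriter("config.txt")) {
    for (String key : configFile.stringPropertyNames()) {
        out.write(key + ": " + configFile.getProperty(key) + "\n"); // ':' divider instead of '='
    }
} catch (IOException e) {
    e.printStackTrace();
}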
New idea here
Your comment about converting the = to : got me thinking: Properties.store() writes to a Stream. You could use an in-memory ByteArrayOutputStream, convert as appropriate in memory before you write to a file, then write the file. Likewise for Properties.load(). Or you could insert FilterXXXs instead. (I'd probably do it in memory).
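A rough sketch of that in-memory conversion, assuming '=' never appears inside a key or value (Properties.store(OutputStream, ...) writes ISO 8859-1, so that charset is used to turn the bytes back into a String):
ByteArrayOutputStream buf = new ByteArrayOutputStream();
try {
    configFile.store(buf, null);                                    // produces "key=value" lines
    String converted = buf.toString("ISO-8859-1").replace("=", ": "); // swap the divider in memory
    try (Writer out = new FileWriter("config.txt")) {
        out.write(converted);
    }
} catch (IOException e) {
    e.printStackTrace();
}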
I was looking into how hard it would be to subclass. It's nearly impossible. :-(
If you look at the source code for Properties (I'm looking at Java 6), store() calls store0(). Unfortunately, store0() is private, not protected, and the "=" is given as a magic constant, not something read from a field. It also calls another private method, saveConvert(), that has a lot of magic constants of its own.
Overall, I rate this code as D- quality. It breaks almost all the rules of good code and good style.
But, it's open source, so, theoretically, you could copy and paste (and improve!) a bunch of code into your own BetterProperties class.

Java - Fastest way, and best code to load a URL and get a response from the server

I was curious what the best and FASTEST way is to get a response from the server. Say I use a for loop to load a URL that returns an XML file: how could I load the URL and get the response 10 times in a row? Speed is the most important thing. I know it can only go as fast as my internet connection, but I need a way to load the URL as fast as my connection will allow, and then put the whole output of the URL in a String so I can append it to a JTextArea. This is the code I've been using, but I'm looking for faster alternatives if possible:
int times = Integer.parseInt(jTextField3.getText());
for (int abc = 0; abc != times; abc++) {
    try {
        URL gameHeader = new URL(jTextField2.getText());
        InputStream in = gameHeader.openStream();
        byte[] buffer = new byte[1024];
        try {
            for (int cwb; (cwb = in.read(buffer)) != -1; ) {
                jTextArea1.append(new String(buffer, 0, cwb));
            }
        } catch (IOException e) {}
    } catch (MalformedURLException e) {} catch (IOException e) {}
}
Is there anything that would be faster than this?
Thanks
-CLUEL3SS
This seems like a job for Java NIO (non-blocking I/O). This article is from the Java 1.4 era but will still give you a good understanding of how to set up NIO. NIO has evolved a lot since then, so you may need to look up the API for Java 6 or Java 7 to find out what's new.
This solution is probably best as an async option. Basically, it will allow you to load 10 URLs without waiting for each one to complete before moving on and loading another.
You can't load text this way, as the 1024-byte boundary could break an encoded character in two.
Copy all the data to a ByteArrayOutputStream and use toString() on it, or read text as text using a BufferedReader.
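For example, a rough sketch of reading the whole response as text with a BufferedReader (UTF-8 is an assumption about the server's encoding, and the Swing field names are the ones from the question):
StringBuilder sb = new StringBuilder();
try {
    URL url = new URL(jTextField2.getText());
    try (BufferedReader in = new BufferedReader(
            new InputStreamReader(url.openStream(), StandardCharsets.UTF_8), 8192)) {
        char[] buf = new char[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            sb.append(buf, 0, n);   // a Reader never splits a character across reads
        }
    }
} catch (IOException e) {
    e.printStackTrace();            // don't swallow exceptions
}
jTextArea1.append(sb.toString());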
Use a BufferedReader; use a much larger buffer size than 1024; don't swallow exceptions. You could also try re-using the same URL object instead of creating a new one each time; it might help with connection pooling.
But why would you want to read the same URL 10 times in a row?

StringBuilders ending with mass nul characters

I'm having a very difficult time debugging a problem with an application I've been building. I cannot seem to reproduce the problem in a representative test program, which makes it difficult to demonstrate. Unfortunately, I cannot share my actual source because of security; however, the following test represents fairly well what I am doing: the files and data use Unix-style EOLs, I write to a zip file with a PrintWriter, and I use StringBuilders:
import java.io.File;
import java.io.FileOutputStream;
import java.io.PrintWriter;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class Tester {
    public static void main(String[] args) {
        // variables
        File target = new File("TESTSAVE.zip");
        PrintWriter printout1;
        ZipOutputStream zipStream;
        ZipEntry ent1;
        StringBuilder testtext1 = new StringBuilder();
        StringBuilder replacetext = new StringBuilder();

        // ensure file replace
        if (target.exists()) {
            target.delete();
        }

        try {
            // open the streams
            zipStream = new ZipOutputStream(new FileOutputStream(target, true));
            printout1 = new PrintWriter(zipStream);
            ent1 = new ZipEntry("testfile.txt");
            zipStream.putNextEntry(ent1);

            // construct the data
            for (int i = 0; i < 30; i++) {
                testtext1.append("Testing 1 2 3 Many! \n");
            }
            replacetext.append("Testing 4 5 6 LOTS! \n");
            replacetext.append("Testing 4 5 6 LOTS! \n");

            // the replace operation
            testtext1.replace(21, 42, replacetext.toString());

            // write it
            printout1 = new PrintWriter(zipStream);
            printout1.println(testtext1);

            // save it
            printout1.flush();
            zipStream.closeEntry();
            printout1.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The heart of the problem is that the file produced on my side is 16.3k characters. My friend, whether he uses the app on his PC or looks at exactly the same file as me, sees a file of 19,999 characters, the extra characters being a CRLF followed by a massive number of null characters. No matter what application, encoding or view I use, I cannot see these nul characters at all; I only see a single LF on the last line, but I do see a file of 20k. In all cases there is a difference between what is seen for the exact same file on the two machines, even though both are Windows machines and both are using the same editing software.
I've not yet been able to reproduce this behaviour with any number of dummy programs. I have been able to trace the final line's stray CRLF to my use of println on the PrintWriter, however. When I replaced the println(s) with print(s + '\n') the problem appeared to go away (the file size was 16.3k). However, when I returned the program to println(s), the problem did not reappear. I'm currently having the files verified by a friend in France to see if the problem really did go away (since I cannot see the nuls but he can), but this behaviour has me thoroughly confused.
I've also noticed that StringBuilder's replace function states "This sequence will be lengthened to accommodate the specified String if necessary". Given that StringBuilder's setLength function pads with nul characters, and that the ensureCapacity function sets the capacity to the greater of the input or (currentCapacity*2)+2, I suspected a relation somewhere. However, I have only once, while testing this idea, been able to get a result that matched what I've seen, and have not been able to reproduce it since.
Does anyone have any idea what could be causing this error or at least have a suggestion on what direction to take the testing?
Edit since the comments section is broken for me:
Just to clarify, the output is required to be in Unix format regardless of the OS, hence the use of '\n' directly rather than through a formatter. The original StringBuilder that is inserted into is not in fact generated like this, but is the contents of a file read in by the program. I'm happy the reading process works, as the information in it is used heavily throughout the application. I've also done a little probing and found that directly prior to saving, the buffer IS the correct capacity, and that the output when toString() is invoked is the correct length (i.e. it contains no null characters and is 16,363 characters long, not 19,999). This would put the cause of the error somewhere between generating the string and saving the zip file.
Finally found the cause. Managed to reproduce the problem a few times and traced the cause down not to the output side of the code but the input side. My file reading function was essentially this:
char[] buf;
int charcount = 0;
StringBuilder line = new StringBuilder(2048);
InputStreamReader reader = new InputStreamReader(stream); // provides a line-wise read
BufferedReader file = new BufferedReader(reader);
do { // capture loop
    try {
        buf = new char[2048];
        charcount = file.read(buf, 0, 2048);
    } catch (IOException e) {
        return null; // unknown IO error
    }
    line.append(buf);
} while (charcount != -1);
// close and output
The problem was appending a buffer that wasn't full, so the later values were still at their initial value of '\0'. The reason I couldn't reproduce it was that some data filled the buffers exactly and some didn't.
Why I couldn't see the problem in my text editors I still have no idea, but I should be able to resolve this now. Any suggestions on the best way to do so are welcome; as this is part of one of my long-term utility libraries, I want to keep it as generic and optimised as possible.
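For reference, a minimal corrected version of that read loop: it appends only the characters actually read, so the '\0' padding from an unfilled buffer never reaches the StringBuilder (stream is the same variable as above):
char[] buf = new char[2048];
int charcount;
StringBuilder line = new StringBuilder(2048);
try (BufferedReader file = new BufferedReader(new InputStreamReader(stream))) {
    while ((charcount = file.read(buf, 0, buf.length)) != -1) {
        line.append(buf, 0, charcount);   // only the filled portion of the buffer
    }
} catch (IOException e) {
    return null; // unknown IO error, as in the original
}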
