How to Selectively load ArrayList content into a database - java

I currently have some code that loads some WAPT load testing data from a CSV file into an ArrayList.
int count = 0;
String file = "C:\\Temp\\loadtest.csv";
List<String[]> content = new ArrayList<>();
try (BufferedReader br = new BufferedReader(new FileReader(file))) {
    String line;
    while (((line = br.readLine()) != null) && (count < 18)) {
        content.add(line.split(";"));
        count++;
    }
} catch (IOException e) {
    // Some error logging
}
So now it gets complicated. The CSV file looks like this; the separator is a ";".
In this case I want to ignore the first line; it's just minutes. I want the second row, but I need to ignore "AddToCartDemo" and "Users", so the first ten entries (in this case, all ten 5's) get loaded into the first ten columns of the database. Likewise, in the third row, "Pages/sec" is ignored and the data after it is loaded into the next ten columns of the database, and so on.
;;0:01:00;0:02:00;0:03:00;0:04:00;0:05:00;0:06:00;0:07:00;0:08:00;0:09:00;0:10:00;
AddToCartDemo;Users;5;5;5;5;5;5;5;5;5;5;
;Pages/sec;0.25;0.1;0.22;0.65;0.03;0.4;0.43;0.17;0.22;0.4;
;Hits/sec;0.25;0.1;0.27;0.85;0.03;0.5;0.53;0.22;0.27;0.5;
;HTTP errors;0;0;0;0;0;0;0;0;0;0;
;KB received;1015;4595;422;1600;2.46;4374;1527;1491;2551;2954;
;KB sent;12.9;3.66;13.8;39.9;5.22;21.7;23.8;13.2;12.2;23.1;
;Receiving kbps;135;613;56.3;213;0.33;583;204;199;340;394;
;Sending kbps;1.73;0.49;1.84;5.32;0.7;2.89;3.17;1.76;1.63;3.07;
Anyone have any ideas on how to accomplish this? As usual, a search brings up nothing even close to this. Thanks much in advance!
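For the selective load itself, here is one possible sketch (the `WaptCsvLoader` name, and the choice to drop exactly the two leading label cells of every data row, are assumptions based on the sample above; the database side would then be a `PreparedStatement` with one parameter per column):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WaptCsvLoader {

    /**
     * Skips the first line (it only holds the minute marks) and, for every
     * following line, drops the two leading label cells ("AddToCartDemo",
     * "Users", "Pages/sec", ...) so only the numeric values remain.
     */
    public static List<String[]> extractMetricRows(BufferedReader br) throws IOException {
        List<String[]> rows = new ArrayList<>();
        String line;
        boolean headerSkipped = false;
        while ((line = br.readLine()) != null) {
            if (!headerSkipped) {
                headerSkipped = true; // ignore the minutes line
                continue;
            }
            String[] cells = line.split(";");
            if (cells.length <= 2) {
                continue; // nothing after the label cells
            }
            rows.add(Arrays.copyOfRange(cells, 2, cells.length));
        }
        return rows;
    }
}
```

Each returned array then maps onto ten database columns: row 0 fills the first ten, row 1 the next ten, and so on.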

Related

How do I read a CSV File with JAVA

I have a problem, and I haven't found any solution yet. The problem is this: I have to read a CSV file which looks like this:
First Name,Second Name,Age,
Lucas,Miller,17,
Bob,Jefferson,55,
Andrew,Washington,31,
The assignment is to read this CSV file with Java and display it like this:
First Name: Lucas
Second Name: Miller
Age: 17
The attribute names are not always the same, so it could also be:
Street,Number,Postal Code,ID,
Schoolstreet,93,20000,364236492,
("," has to be replaced with ";")
Also, the file address is not always the same.
I already have the view etc. I only need the MODEL.
Thanks for your help. :))
I already have a FileChooser class in Controller, which returns an URI.
If your CSV file(s) always contain a header line which indicates the table column names, then it's just a matter of catching this line and splitting it so as to place those column names into a String array (or collection, or whatever). The length of this array determines the amount of data expected to be available for each record data line. Once you have the column names, it gets relatively easy from there.
How you acquire your CSV file path and its format type is obviously up to you, but here is a general concept of how to carry out the task at hand:
public static void readCsvToConsole(String csvFilePath, String csvDelimiter) {
    String line;                            // To hold each valid data line.
    String[] columnNames = new String[0];   // To hold the header names.
    int dataLineCount = 0;                  // Counts the file lines.
    StringBuilder sb = new StringBuilder(); // Used to build the output string.
    String ls = System.lineSeparator();     // Use the system line separator for output.
    // 'Try With Resources' to auto-close the reader.
    try (BufferedReader br = new BufferedReader(new FileReader(csvFilePath))) {
        while ((line = br.readLine()) != null) {
            // Skip blank lines (if any).
            if (line.trim().equals("")) {
                continue;
            }
            dataLineCount++;
            // Deal with the header line. Line 1 in most CSV files is the header line.
            if (dataLineCount == 1) {
                /* The regular expression used in the String#split()
                   method handles any delimiter/spacing situation. */
                columnNames = line.split("\\s*" + csvDelimiter + "\\s*");
                continue; // Don't process this line any more. Continue the loop.
            }
            // Split the file data line into its respective columnar slots.
            String[] lineParts = line.split("\\s*" + csvDelimiter + "\\s*");
            /* Iterate through the column names and build a string
               using each column name and its respective data, with
               a line break after each Column/Data line. */
            for (int i = 0; i < columnNames.length; i++) {
                sb.append(columnNames[i]).append(": ").append(lineParts[i]).append(ls);
            }
            // Display the data record in the console.
            System.out.println(sb.toString());
            // Clear the StringBuilder to prepare for a new string creation.
            sb.setLength(0);
        }
    }
    // Trap these exceptions.
    catch (FileNotFoundException ex) {
        System.err.println(ex.getMessage());
    }
    catch (IOException ex) {
        System.err.println(ex.getMessage());
    }
}
With this method you can have one to thousands of columns; it doesn't matter (not that you would ever have thousands of data columns in any given record, but hey... you never know... lol). And to use this method:
// Read CSV To Console Window.
readCsvToConsole("test.csv", ",");
Here is some code that I recently worked on for an interview that might help: https://github.com/KemarCodes/ms3_csv/blob/master/src/main/java/CSVProcess.java
If you always have 3 attributes, I would read the first line of the CSV and set the values in an object that has three fields: attribute1, attribute2, and attribute3. I would create another class to hold the three values, read all the lines after that, creating a new instance each time, and store them in an ArrayList. To print, I would just print the values from the attribute class alongside each set of values.
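That pairing of header names with values can be sketched generically; the `CsvRecordMapper` name and the map-based record are illustrative assumptions, not part of the assignment:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CsvRecordMapper {

    /** Pairs each header name with the value in the same position of a data line. */
    public static Map<String, String> toRecord(String[] headers, String[] values) {
        Map<String, String> record = new LinkedHashMap<>(); // keeps column order
        for (int i = 0; i < headers.length; i++) {
            record.put(headers[i], i < values.length ? values[i] : "");
        }
        return record;
    }

    /** Renders a record in the "Name: value" layout the assignment asks for. */
    public static String render(Map<String, String> record) {
        StringBuilder sb = new StringBuilder();
        record.forEach((k, v) -> sb.append(k).append(": ").append(v).append('\n'));
        return sb.toString();
    }
}
```

Because the headers come from the file itself, this works unchanged whether the columns are "First Name,Second Name,Age" or "Street,Number,Postal Code,ID".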

Populating a JTable from a .txt file

I know similar questions have been asked before, and I've browsed through pretty much all of them, but I still can't populate my JTable with data loaded from a .txt file.
The way I save the file is with each column's info separated by a "|". So you get this:
LastName|FirstName|ID|LandPhone|MobilePhone|
For example, you get this:
Doe|John|1248234834|021324234234|0732553535|
Doe|Jane|1492349324|021432434325|0735345345|
And so on. The code I'm using to open the .txt file is this:
private void deschideMenuItemActionPerformed(java.awt.event.ActionEvent evt) {
    JFileChooser jfc = new JFileChooser();
    String line = null;
    DefaultTableModel dtm = (DefaultTableModel) table.getModel();
    FileFilter filter = new FileNameExtensionFilter("Fisiere text", "txt");
    jfc.setFileSelectionMode(JFileChooser.FILES_ONLY);
    jfc.setMultiSelectionEnabled(false);
    jfc.setDialogTitle("Deschideti un tabel...");
    jfc.setFileFilter(filter);
    if (jfc.showOpenDialog(this) == JFileChooser.APPROVE_OPTION) {
        File file = jfc.getSelectedFile();
        try {
            BufferedReader br = new BufferedReader(new FileReader(file));
            while ((line = br.readLine()) != null) {
                Vector data = new Vector();
                StringTokenizer st1 = new StringTokenizer(line, "|");
                while (st1.hasMoreTokens()) {
                    String nextToken = st1.nextToken();
                    data.add(nextToken);
                }
                System.out.println(data);
                dtm.addRow(data);
            }
            br.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
Problem is, I'm always getting this error:
java.lang.ArrayIndexOutOfBoundsException: 0 >= 0
at java.util.Vector.elementAt(Vector.java:474)
at javax.swing.table.DefaultTableModel.justifyRows(DefaultTableModel.java:265)
at javax.swing.table.DefaultTableModel.insertRow(DefaultTableModel.java:375)
at javax.swing.table.DefaultTableModel.addRow(DefaultTableModel.java:350)
If I use the print method to see what I'm getting, the first line always comes out right, and then it crashes; it never reaches the second line. How do you force it to go on to the second line? And what's up with the "0 >= 0" exception? From what I understood from the other topics related to populating a JTable from a .txt file, it has to do with the model not knowing the number of columns?
This is seriously confusing. I've always wondered why everything and anything has to be so complicated in programming, but that's just a rhetorical question.
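One way to test the hypothesis about the model not knowing its columns is to declare the column identifiers before adding any rows. This is only a sketch (the `TableModelFix` name and the column names are taken from the file layout above, not from the actual form code):

```java
import javax.swing.table.DefaultTableModel;

public class TableModelFix {

    /**
     * Builds a DefaultTableModel whose columns are declared up front, so
     * addRow() has a known column count to justify each row against.
     */
    public static DefaultTableModel buildModel() {
        DefaultTableModel dtm = new DefaultTableModel();
        dtm.setColumnIdentifiers(new Object[] {
            "LastName", "FirstName", "ID", "LandPhone", "MobilePhone"
        });
        return dtm;
    }
}
```

With the columns declared, each `addRow(data)` call then places the tokens from one line into one table row.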

Fastest way of reading a CSV file in Java

I have noticed that using java.util.Scanner is very slow when reading large files (in my case, CSV files).
I want to change the way I am currently reading files, to improve performance. Below is what I have at the moment. Note that I am developing for Android:
InputStreamReader inputStreamReader;
try {
    inputStreamReader = new InputStreamReader(context.getAssets().open("MyFile.csv"));
    Scanner inputStream = new Scanner(inputStreamReader);
    inputStream.nextLine(); // Ignores the first line
    while (inputStream.hasNext()) {
        String data = inputStream.nextLine(); // Gets a whole line
        String[] line = data.split(","); // Splits the line up into a string array
        if (line.length > 1) {
            // Do stuff, e.g.:
            String value = line[1];
        }
    }
    inputStream.close();
} catch (IOException e) {
    e.printStackTrace();
}
Using Traceview, I managed to find that the main performance issues are, specifically, java.util.Scanner.nextLine() and java.util.Scanner.hasNext().
I've looked at other questions (such as this one), and I've come across some CSV readers, like the Apache Commons CSV, but they don't seem to have much information on how to use them, and I'm not sure how much faster they would be.
I have also heard about using FileReader and BufferedReader in answers like this one, but again, I do not know whether the improvements will be significant.
My file is about 30,000 lines in length, and using the code I have at the moment (above), it takes at least 1 minute to read values from about 600 lines down, so I have not timed how long it would take to read values from over 2,000 lines down. Sometimes, when reading information, the Android app becomes unresponsive and crashes.
Although I could simply change parts of my code and see for myself, I would like to know if there are any faster alternatives I have not mentioned, or if I should just use FileReader and BufferedReader. Would it be faster to split the huge file into smaller files, and choose which one to read depending on what information I want to retrieve? Preferably, I would also like to know why the fastest method is the fastest (i.e. what makes it fast).
uniVocity-parsers has the fastest CSV parser you'll find (2x faster than OpenCSV, 3x faster than Apache Commons CSV), with many unique features.
Here's a simple example on how to use it:
CsvParserSettings settings = new CsvParserSettings(); // many options here, have a look at the tutorial
CsvParser parser = new CsvParser(settings);
// parses all rows in one go
List<String[]> allRows = parser.parseAll(new FileReader(new File("your/file.csv")));
To make the process faster, you can select the columns you are interested in:
settings.selectFields("Column X", "Column A", "Column Y");
Normally, you should be able to parse 4 million rows in around 2 seconds. With column selection, the speed will improve by roughly 30%.
It is even faster if you use a RowProcessor. There are many implementations out of the box for processing conversions to objects, POJOs, etc. The documentation explains all of the available features. It works like this:
// let's get the values of all columns using a column processor
ColumnProcessor rowProcessor = new ColumnProcessor();
settings.setRowProcessor(rowProcessor);
// the parse() method will submit all rows to the row processor
parser.parse(new FileReader(new File("/examples/example.csv")));
// get the result from your row processor:
Map<String, List<String>> columnValues = rowProcessor.getColumnValuesAsMapOfNames();
We also built a simple speed comparison project here.
Your code is fine for loading big files. However, when an operation may take longer than expected, it's good practice to execute it in a task rather than on the UI thread, to keep the app responsive.
The AsyncTask class helps with that:
private class LoadFilesTask extends AsyncTask<String, Integer, Long> {
    protected Long doInBackground(String... str) {
        long lineNumber = 0;
        InputStreamReader inputStreamReader;
        try {
            inputStreamReader = new InputStreamReader(context.getAssets().open(str[0]));
            Scanner inputStream = new Scanner(inputStreamReader);
            inputStream.nextLine(); // Ignores the first line
            while (inputStream.hasNext()) {
                lineNumber++;
                String data = inputStream.nextLine(); // Gets a whole line
                String[] line = data.split(","); // Splits the line up into a string array
                if (line.length > 1) {
                    // Do stuff, e.g.:
                    String value = line[1];
                }
            }
            inputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return lineNumber;
    }

    // If you need to show progress, use this method
    protected void onProgressUpdate(Integer... progress) {
        setYourCustomProgressPercent(progress[0]);
    }

    // This method is triggered at the end of the process, in your case when loading has finished
    protected void onPostExecute(Long result) {
        showDialog("File Loaded: " + result + " lines");
    }
}
...and executing as:
new LoadFilesTask().execute("MyFile.csv");
You should use a BufferedReader instead:
BufferedReader reader = null;
try {
    reader = new BufferedReader(new InputStreamReader(context.getAssets().open("MyFile.csv")));
    reader.readLine(); // Ignores the first line
    String data;
    while ((data = reader.readLine()) != null) { // Gets a whole line
        String[] line = data.split(","); // Splits the line up into a string array
        if (line.length > 1) {
            // Do stuff, e.g.:
            String value = line[1];
        }
    }
} catch (IOException e) {
    e.printStackTrace();
} finally {
    if (reader != null) {
        try {
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

comparing data line by line from two large files

I need to analyze differences between two large data files which should each have identical structures. Each file is a couple of gigabytes in size, with perhaps 30 million lines of text data. The data files are so large that I hesitate to load each into its own array, when it might be easier to just iterate through the lines in order. Each line has the structure:
topicIdx, recordIdx, other fields...
topicIdx and recordIdx are sequential, starting at zero and incrementing +1 with each iteration, so it is easy to find them in the files. (No searching around required; just increment forward in order).
I need to do something like:
for each line in fileA
    store line in String itemsA
    get topicIdx and recordIdx
    find line in fileB with same topicIdx and recordIdx
    if exists
        store this line in string itemsB
        for each item in itemsA
            compare value with same index in itemsB
            if these two items are not virtually equal
                //do something
            else
                //do something else
I wrote the following code with FileReader and BufferedReader, but the APIs for these do not seem to provide the functionality I need. Can anyone show me how to fix the code below so that it accomplishes what I want?
void checkData() {
    FileReader FileReaderA;
    FileReader FileReaderB;
    int topicIdx = 0;
    int recordIdx = 0;
    try {
        int numLines = 0;
        FileReaderA = new FileReader("B:\\mypath\\fileA.txt");
        FileReaderB = new FileReader("B:\\mypath\\fileB.txt");
        BufferedReader readerA = new BufferedReader(FileReaderA);
        BufferedReader readerB = new BufferedReader(FileReaderB);
        String lineA = null;
        while ((lineA = readerA.readLine()) != null) {
            if (lineA != null && !lineA.isEmpty()) {
                List<String> itemsA = Arrays.asList(lineA.split("\\s*,\\s*"));
                topicIdx = Integer.parseInt(itemsA.get(0));
                recordIdx = Integer.parseInt(itemsA.get(1));
                String lineB = null;
                //lineB = readerB.readLine(); // I know this syntax is wrong
                setB = rows from FileReaderB where itemsB.get(0).equals(itemsA.get(0));
                for each lineB in setB {
                    List<String> itemsB = Arrays.asList(lineB.split("\\s*,\\s*"));
                    for (int m = 0; m < itemsB.size(); m++) {}
                    for (int j = 0; j < itemsA.size(); j++) {
                        double myDblA = Double.parseDouble(itemsA.get(j));
                        double myDblB = Double.parseDouble(itemsB.get(j));
                        if (Math.abs(myDblA - myDblB) > 0.0001) {
                            //do something
                        }
                    }
                }
            }
        }
        readerA.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
You need both files sorted by your search keys (topicIdx and recordIdx), so you can do a kind of merge operation, like this:
open file 1
open file 2
read lineA from file 1
read lineB from file 2
while (there is lineA and lineB)
    if (key lineB < key lineA)
        read lineB from file 2
        continue loop
    if (key lineB > key lineA)
        read lineA from file 1
        continue
    // at this point, you have lineA and lineB with matching keys
    process your data
    read lineB from file 2
Note that you'll only ever have two records in memory.
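The merge loop above can be sketched in Java. This is only an illustration: the `SortedFileMerger` name is made up, it assumes keys are unique within each file (so both readers advance on a match, a simplification of the pseudocode), and it merely collects the shared keys where real code would compare the remaining fields:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class SortedFileMerger {

    /** Compares (topicIdx, recordIdx) keys taken from the first two CSV cells. */
    static int compareKeys(String[] a, String[] b) {
        int cmp = Integer.compare(Integer.parseInt(a[0].trim()), Integer.parseInt(b[0].trim()));
        if (cmp != 0) return cmp;
        return Integer.compare(Integer.parseInt(a[1].trim()), Integer.parseInt(b[1].trim()));
    }

    /**
     * Walks two readers whose lines are sorted by (topicIdx, recordIdx) and
     * returns the keys present in both, holding at most one line per file
     * in memory at any time.
     */
    public static List<String> matchingKeys(BufferedReader readerA, BufferedReader readerB) throws IOException {
        List<String> matches = new ArrayList<>();
        String lineA = readerA.readLine();
        String lineB = readerB.readLine();
        while (lineA != null && lineB != null) {
            String[] itemsA = lineA.split("\\s*,\\s*");
            String[] itemsB = lineB.split("\\s*,\\s*");
            int cmp = compareKeys(itemsB, itemsA);
            if (cmp < 0) {            // B is behind: advance B
                lineB = readerB.readLine();
            } else if (cmp > 0) {     // A is behind: advance A
                lineA = readerA.readLine();
            } else {                  // matching keys: process, then advance both
                matches.add(itemsA[0] + "," + itemsA[1]);
                lineA = readerA.readLine();
                lineB = readerB.readLine();
            }
        }
        return matches;
    }
}
```

The "process your data" step would go where the keys match, comparing `itemsA[j]` against `itemsB[j]` with the 0.0001 tolerance from the question.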
If you really need this in Java, why not use java-diff-utils? It implements a well-known diff algorithm.
Consider https://code.google.com/p/java-diff-utils/. Let someone else do the heavy lifting.

How to parse a text file and create a database record from it

I'm trying to basically make a simple Test Generator. I want a button to parse a text file and add the records to my database. The questions and answers are in a text file. I have been searching the net for examples but I can't find one that matches my situation.
The text file has header information that I want to ignore, up until the line that starts with "~ End of Syllabus". I want "~ End of Syllabus" to indicate the beginning of the questions. A couple of lines after that, look for a line with a "(" in the seventh character position. I want that to indicate the Question Number line. The Question Number line is unique in that the "(" is in the seventh character position, so I want to use that as an indicator to mark the start of a new question. In the Question Number line, the first three characters together, "T1A", are the Question Group. The last part of T1A*01* is the question number within that group.
So, as you can see, I will also need to get the actual question text line and the answer lines as well. Also, typically after the four Answer lines is the Question Terminator, indicated by "~~". I don't know how I would be able to do this for all the questions in the text file. Do I keep adding them to a String array? How would I access this information from the file and add it to a database? This is very confusing for me, and I feel the way I could learn how this works is by seeing an example that covers my situation. Here is a link to the text file I'm talking about: http://pastebin.com/3U3uwLHN
Code:
public static void main(String args[]) {
    String endOfSyllabus = "~ End of Syllabus";
    Path objPath = Paths.get("2014HamTechnician.txt");
    String[] restOfTextFile = null;
    if (Files.exists(objPath)) {
        File objFile = objPath.toFile();
        try (BufferedReader in = new BufferedReader(new FileReader(objFile))) {
            String line = in.readLine();
            List<String> linesFile = new LinkedList<>();
            while (line != null) {
                linesFile.add(line);
                line = in.readLine();
            }
            System.out.println(linesFile);
        } catch (IOException e) {
            System.out.println(e);
        }
    } else {
        System.out.println(objPath.toAbsolutePath() + " doesn't exist");
    }

    /* Create and display the form */
    java.awt.EventQueue.invokeLater(new Runnable() {
        public void run() {
            new A19015_Form().setVisible(true);
        }
    });
}
Reading a text file in Java is straightforward (and there are surely other, more creative/efficient ways to do this):
try (BufferedReader reader = new BufferedReader(new FileReader(path))) { // try-with-resources needs JDK 7
    int lineNum = 0;
    String readLine;
    while ((readLine = reader.readLine()) != null) { // read until end of stream
Skipping an arbitrary amount of lines can be accomplished like this:
        if (lineNum == 0) {
            lineNum++;
            continue;
        }
Your real problem is the text to split on. Had you been using CSV, you could use String[] nextLine = readLine.split("\t"); to split each line into its respective cells based on tab separation. But you're not, so you'll be stuck with reading each line and then finding something to split on.
It seems like you're in control of the text file format. If you are, go to an easier to consume format such as CSV, otherwise you're going to be designing a custom parser for your format.
A bonus to using CSV is that it can mirror a database very effectively, i.e. your CSV header column = database column.
As far as databases go, using JDBC is easy enough, just make sure you use prepared statements to insert your data to prevent against SQL injection:
public Connection connectToDatabase() throws SQLException {
    String url = "jdbc:postgresql://url";
    return DriverManager.getConnection(url);
}

Connection conn = connectToDatabase();
PreparedStatement pstInsert = conn.prepareStatement(cInsert); // cInsert holds your INSERT statement
pstInsert.setTimestamp(1, fromTS1);
pstInsert.setString(2, nextLine[1]);
pstInsert.execute();
pstInsert.close();
conn.close();
--Edit--
I didn't see your pastebin earlier on. It doesn't appear that you're in charge of the file format, so you're going to need to split on spaces (each word) and rely on regular expressions to determine whether a line is a question or not. Fortunately, the file seems fairly consistent, so you should be able to do this without too much trouble.
--Edit 2--
As a possible solution you can try this untested code:
try (BufferedReader reader = new BufferedReader(new FileReader("file.txt"))) { // try-with-resources needs JDK 7
    boolean doRegex = false;
    String readLine;
    while ((readLine = reader.readLine()) != null) { // read until end of stream
        if (readLine.startsWith("~ End of Syllabus")) {
            doRegex = true;
            continue; // immediately go to the next iteration
        }
        if (doRegex) {
            String[] line = readLine.split(" "); // split on spaces
            if (line[0].matches("your regex here")) {
                // answer should be line[1]
                // do logic with your answer here
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
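For the "your regex here" placeholder, one possible pattern can be sketched from the description in the question (the exact layout is in the linked pastebin, so the `QuestionLineParser` name and the letter-digit-letter group format are assumptions based on the "T1A01 (" example, with "(" as the seventh character):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class QuestionLineParser {

    // Hypothetical pattern: a three-character group code like "T1A", a
    // two-digit question number, a space, then "(" in the seventh position.
    private static final Pattern QUESTION_LINE =
            Pattern.compile("([A-Z]\\d[A-Z])(\\d{2}) \\(.*");

    /** Returns {group, number} for a question-number line, or null otherwise. */
    public static String[] parse(String line) {
        Matcher m = QUESTION_LINE.matcher(line);
        if (!m.matches()) {
            return null;
        }
        return new String[] { m.group(1), m.group(2) };
    }
}
```

A non-null result marks the start of a new question record; the following lines (question text, four answers, the "~~" terminator) can then be collected until the next match.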
