Java Parsing Framework for complex CSV files

I need to parse complex (non-fixed-length) CSV files to Java objects in order to compare their values.
I first tried the Flatform Parsing Framework; I liked the approach of describing the values in an extra (XML) document. Maybe it's the right tool for simple CSV (and also flat) files. However, my CSV files contain lines that vary in the number of fields, and sometimes a record spans multiple lines. There are also dependencies among those fields.
Here's a little sample (each type has a certain number of extra parameters):
; <COMMENTS (to be ignored)>
<NAME>,<TYPE_A>,<DESCRIPTION>,<PARAMETER>
<NAME>,<TYPE_B>,<DESCRIPTION>,<PARAMETER>,<PARAMETER>
<NAME>,<TYPE_C>,<DESCRIPTION>,<PARAMETER>,<PARAMETER>,<PARAMETER>,<PARAMETER>
<NAME>,<TYPE_D>,<DESCRIPTION>,<PARAMETER>,<PARAMETER>,<PARAMETER>,<PARAMETER>, -
<PARAMETER>,<PARAMETER>, -
<PARAMETER>,<PARAMETER>
<NAME>,<TYPE_B>,<DESCRIPTION>,<PARAMETER>,<PARAMETER>
<NAME>,<TYPE_A>,<DESCRIPTION>,<PARAMETER>
So I need something to describe and parse the CSV file in a more complex manner. I'm new to this; I've heard about parser generators - is that what I need?

Try OpenCSV (see http://opencsv.sourceforge.net/#what-features). It handles embedded carriage returns just fine.
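A rough sketch of how that could look with OpenCSV, assuming comment lines start with ";" and a lone "-" as the last field marks a record that continues on the next line, as in the sample above (the file name is made up):
import com.opencsv.CSVReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ComplexCsvReader {
    public static void main(String[] args) throws Exception {
        List<List<String>> records = new ArrayList<>();
        try (CSVReader reader = new CSVReader(new FileReader("input.csv"))) {
            String[] row;
            List<String> current = new ArrayList<>();
            while ((row = reader.readNext()) != null) {
                if (row.length == 0 || row[0].startsWith(";")) {
                    continue; // skip comment lines
                }
                current.addAll(Arrays.asList(row));
                // a trailing "-" field marks a record continued on the next line
                if (!current.isEmpty() && current.get(current.size() - 1).trim().equals("-")) {
                    current.remove(current.size() - 1);
                } else {
                    records.add(current);
                    current = new ArrayList<>();
                }
            }
        }
        // each entry in 'records' is one complete, variable-length record;
        // the second field (<TYPE_*>) tells you how many parameters to expect
    }
}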

One option is to use the Scanner class, or you might want to check out Spring Batch. I've never actually used Spring Batch, but given that batch jobs often read from simple text files, I believe it caters for this, including all sorts of object mapping.

You may also try japaki

Related

File reading in Java - converting every line to an existing function

I have an input text file that contains different commands that I need to execute.
The commands have to be executed one by one, and I don't know how to do this.
I thought of just reading the text file, putting the current line in a string, and then comparing it with all the commands, which is not very efficient.
Thanks
There are many ways to read commands from a file.
A lot depends on the format of the commands and whether or not they have parameters.
Here are some possible solutions.
One command per row in a text file
Save the sequence of commands in the file, one per row. Read the file row by row and check each row against a list of commands (see the sketch after the pros and cons below).
Pro:
Easy to implement
Cons:
Not easy to handle parameters
Difficult to handle blocks of commands
Difficult to handle jumps between commands
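A minimal sketch of this option, assuming one command name per line and a hypothetical dispatch table (the command names and the file name are made up):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class CommandFileRunner {
    public static void main(String[] args) throws IOException {
        // hypothetical commands mapped to the code that executes them
        Map<String, Runnable> commands = new HashMap<>();
        commands.put("START", () -> System.out.println("starting..."));
        commands.put("STOP", () -> System.out.println("stopping..."));

        for (String line : Files.readAllLines(Paths.get("commands.txt"))) {
            Runnable command = commands.get(line.trim());
            if (command != null) {
                command.run();      // execute the matching command
            } else {
                System.err.println("Unknown command: " + line);
            }
        }
    }
}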
Commands saved as json objects
Keep the file as a text file containing a single JSON array, where each item holds a command, possibly with parameters (a sketch follows the pros and cons below).
Pro
Quite easy to parse using existing JSON libraries
Easy to handle parameters
Cons
A list of commands as a JSON array is less readable than a structured programming language
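A rough sketch of this option, assuming a hypothetical commands.json holding an array like [{"name": "start", "args": []}, ...] and using Jackson for the parsing (any JSON library would do):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.File;

public class JsonCommandReader {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonNode root = mapper.readTree(new File("commands.json")); // the whole array
        for (JsonNode command : root) {
            String name = command.get("name").asText();
            JsonNode args = command.get("args"); // may be missing if the command has no parameters
            System.out.println("Executing " + name + " with " + args);
        }
    }
}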
Create a parser and your own programming language
You can create your own programming language containing only the features that you need.
Pro
This solution fits whatever needs you may have
Easy to read, because you can choose the structure you prefer
Fast execution
It is possible to handle typical programming constructs like loops, conditional statements, blocks of code...
Cons
Very hard to implement: you need to define your own language and implement it with a custom parser (for example using ANTLR4)

Spring Batch CSV to Mongodb - 100s of columns

I am very new to Java and have been tasked with using Spring Batch to read in some text files. So far, Spring Batch resources online have helped me get to a point where I am reading, processing, and writing some simple test .csv files into Mongo.
The problem I have now is that the actual file I would like to read from has over 600 columns. Meaning that with the current way I am reading my file into Java, I would need 600+ fields in my @Document Mongo model.
I have been thinking of a couple of ways to get around this.
First, I was thinking maybe I could read in each line as a String and then, in my processor, deal with splitting everything up and formatting the data to then return a list of my MongoTemplate objects, but returning a List is not viable from the overridden process method.
So my question to you guys is: what is the best way to handle reading in files with hundreds of columns in Spring Batch? Or what would be the best resource to start reading to help point me in the right direction?
Thanks!
I had the same problem. I used
http://opencsv.sourceforge.net/apidocs/com/opencsv/CSVReader.html
for reading CSVs.
I suggest you use a Map instead of 600 Java fields.
Besides, 600x600 Java strings is not a big deal for Java, nor for Mongo.
To manipulate Mongo, use http://jongo.org/
If you really need batch processing of the data, your flow for batches should be:
Loop here: divide into batches (say 300 per loop)
Read 300x300 Java objects (or Maps) from the file into memory (see the sketch below).
Sanitize or process them if needed.
Store them in MongoDB.
Return at EOF.
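A rough sketch of the Map-based reading step, assuming the first line of the (hypothetical) file.csv holds the 600+ column names:
import com.opencsv.CSVReader;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class WideCsvToMaps {
    public static void main(String[] args) throws Exception {
        List<Map<String, String>> rows = new ArrayList<>();
        try (CSVReader reader = new CSVReader(new FileReader("file.csv"))) {
            String[] header = reader.readNext(); // column names from the first line
            String[] values;
            while ((values = reader.readNext()) != null) {
                Map<String, String> row = new LinkedHashMap<>();
                for (int i = 0; i < header.length && i < values.length; i++) {
                    row.put(header[i], values[i]);
                }
                rows.add(row);
                // in a real batch job you would flush 'rows' to MongoDB every N records
            }
        }
    }
}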
I ended up just reading in each line as a String object. Then, in the processor, I loop over the String with a delimiter, creating my Mongo repository objects and storing them. So I am basically doing all of the writing inside the processor method, which I would say is definitely not best practice, but it gives me the desired end result.

Dynamic parsing of fixed text files

I want to build a parser for fixed position text files.
What I want to achieve is to make it dynamic so that I could pass an external configuration file containing the format of the file that will be parsed.
Example of a configuration file for the application to load:
Field; Position
Name;0-20
Surname;21-40
Age;40-42
Sex;42-43
...
Example of file to parse:
John William Hoover23M
Deborah Foobar33F
...
Googling, I saw a lot of libraries for parsing fixed-length files.
The problem is that all of them rely on creating classes with annotated fields specifying the fixed positions in the text file.
I want to make a generic parser, so these classes should be automatically generated and annotated based on some external configuration file.
Do you know any library or different kind of approach that I could follow?
I'm talking about parsing relatively big files (around 500 MB), so efficiency and speed are also important factors.
Thank you all!
You don't need to "parse" the big file; you only need to extract substrings at given positions.
1. Parse the "format" file with a classical regex and store the names and positions in an array. Time doesn't matter there.
2. Open your big file, read the lines, and extract the substrings at the positions you want; that is the fastest you could do (sketched below).
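A minimal sketch of that two-step approach, assuming the configuration format shown in the question (field;start-end) and made-up file names:
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ConfigDrivenFixedWidthParser {
    public static void main(String[] args) throws IOException {
        // step 1: parse the format file ("Name;0-20", ...) into name -> {start, end}
        Pattern format = Pattern.compile("(\\w+);(\\d+)-(\\d+)");
        Map<String, int[]> layout = new LinkedHashMap<>();
        for (String line : Files.readAllLines(Paths.get("format.txt"))) {
            Matcher m = format.matcher(line.trim());
            if (m.matches()) {
                layout.put(m.group(1), new int[]{Integer.parseInt(m.group(2)), Integer.parseInt(m.group(3))});
            }
        }
        // step 2: stream the big file and extract substrings at the configured positions
        try (BufferedReader reader = Files.newBufferedReader(Paths.get("data.txt"))) {
            String record;
            while ((record = reader.readLine()) != null) {
                Map<String, String> parsed = new LinkedHashMap<>();
                for (Map.Entry<String, int[]> field : layout.entrySet()) {
                    int start = field.getValue()[0];
                    int end = Math.min(field.getValue()[1], record.length());
                    parsed.put(field.getKey(), start < end ? record.substring(start, end).trim() : "");
                }
                // 'parsed' now maps each configured field name to its extracted value
            }
        }
    }
}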
Try uniVocity-parsers' FixedWidthParser:
// requires com.univocity.parsers.fixed.*, java.io.File and java.util.List
//define field lengths
FixedWidthFields fields = new FixedWidthFields();
fields.addField("ID", 10);
fields.addField("Bank", 8);
fields.addField("AccountNumber", 15);
fields.addField("Swift", 12);
//configure the parser
FixedWidthParserSettings settings = new FixedWidthParserSettings(fields); //many options here, check the tutorial
settings.getFormat().setLineSeparator("\n");
//We can now parse all rows
FixedWidthParser parser = new FixedWidthParser(settings);
List<String[]> rows = parser.parseAll(new File("/path/to/file.txt"));
This is just a rough example. There are many other examples here.
Disclosure: I'm the author of this library, it's open-source and free (Apache 2.0 License)

Writing one file per group in Pig Latin

The Problem:
I have numerous files that contain Apache web server log entries. Those entries are not in date-time order and are scattered across the files. I am trying to use Pig to read a day's worth of files, group and order the log entries by date and time, then write them to files named for the day and hour of the entries they contain.
Setup:
Once I have imported my files, I use a regex to get the date field, then truncate it to the hour. This produces a set that has the record in one field and the date truncated to the hour in another. From here I am grouping on the date-hour field.
First Attempt:
My first thought was to use the STORE command while iterating through my groups with a FOREACH, and I quickly found out that Pig is not cool with that.
Second Attempt:
My second try was to use the MultiStorage() method in the piggybank, which worked great until I looked at the file. The problem is that MultiStorage wants to write all fields to the file, including the field I used to group on. What I really want is just the original record written to the file.
The Question:
So...am I using Pig for something it is not intended for, or is there a better way for me to approach this problem using Pig? Now that I have this question out there, I will work on a simple code example to further explain my problem. Once I have it, I will post it here. Thanks in advance.
Out of the box, Pig doesn't have a lot of functionality. It does the basic stuff, but more often than not I find myself having to write custom UDFs or load/store funcs to get from 95% of the way there to 100%. I usually find it worth it, since writing a small store function is a lot less Java than a whole MapReduce program.
Your second attempt is really close to what I would do. You should either copy/paste the source code for MultiStorage or use inheritance as a starting point. Then, modify the putNext method to strip out the group value, but still write to that file. Unfortunately, Tuple doesn't have a remove or delete method, so you'll have to rewrite the entire tuple. Or, if all you have is the original string, just pull that out and output that wrapped in a Tuple.
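For the tuple-rewriting part, a rough sketch of what that piece inside your modified putNext could look like (it assumes the group value sits in field 0; adjust for your schema, and it uses org.apache.pig.data.Tuple and TupleFactory plus java.util.ArrayList/List):
// rebuild the tuple without the group field (assumed to be field 0),
// since Tuple has no remove method
List<Object> keptFields = new ArrayList<Object>();
for (int i = 1; i < tuple.size(); i++) {
    keptFields.add(tuple.get(i));
}
Tuple stripped = TupleFactory.getInstance().newTuple(keptFields);
// ...then write 'stripped' to the per-group output file instead of the original tuple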
Some general documentation on writing Load/Store functions in case you need a bit more help: http://pig.apache.org/docs/r0.10.0/udf.html#load-store-functions

Best file format regarding standard string and integer data?

For my project, I need to store info about protocols (the data sent (most likely integers) and the order in which it's sent), as well as info that might be formatted something like this:
'ID' 'STRING' 'ADDITIONAL INTEGER DATA'
This info will be read by a Java program and stored in memory for processing, but I don't know what would be the most sensible format to store this data in?
EDIT: Here's some extra information:
1)I will be using this data in a game server.
2)Since it is a game server, speed is not the primary concern: this data will primarily be read and utilized during startup, which shouldn't occur very often.
3)Memory consumption I would like to keep at a minimum, however.
4)The second data "example" will be used as a "dictionary" to look up names of specific in-game items, their stats and other integer data (and therefore might become very large, unlike the first data containing the protocol information, where each file will only note small protocol bites, like a login protocol for instance).
5)And yes, I would like the data to be "human-editable".
EDIT 2: Here's the choices that I've made:
JSON - For the protocol descriptions
CSV - For the dictionaries
There are many factors that come into play; here are some things that might help you figure this out:
1) Speed/memory usage: If the data needs to load very quickly or is very large, you'll probably want to consider rolling your own binary format.
2) Portability/compatibility: Balanced against #1 is the consideration that you might want to use the data elsewhere, with programs that won't read a custom binary format. In this case, your heavy hitters are probably going to be CSV, dBase, XML, and my personal favorite, JSON.
3) Simplicity: Delimited formats like CSV are easy to read, write, and edit by hand. Either use double-quoting with proper escaping or choose a delimiter that will not appear in the data.
If you could post more info about your situation and how important these factors are, we might be able to guide you further.
How about XML, JSON, or CSV?
I've written a similar protocol-specification using XML. (Available here.)
I think it is a good match, since it captures the hierarchical nature of specifying messages / network packages / fields etc. The order of fields is well defined, and so on.
I even wrote a code generator in XSLT that generated the message sending/receiving classes, with methods for each message type.
The only drawback as I see it is the verbosity. If you have a really simple structure of the specification, I would suggest you use some simple home-brewed format and write a parser for it using a parser-generator of your choice.
In addition to the formats suggested by others here (CSV, XML, JSON, etc.) you might consider storing the info in a Java properties file. (See the java.util.Properties class.) The code is already there for you, so all you have to figure out is the properties names (or name prefixes) you want to use.
The Properties class also provides for storing/loading properties in a simple XML format.
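A minimal sketch of the Properties approach; the key naming scheme (item.<id>.name, item.<id>.damage) and the file name are just illustrations:
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class ItemDictionaryLoader {
    public static void main(String[] args) throws IOException {
        Properties items = new Properties();
        try (FileInputStream in = new FileInputStream("items.properties")) {
            items.load(in); // plain key=value format; loadFromXML(in) would read the XML variant
        }
        // e.g. a file containing:  item.1001.name=Short Sword  and  item.1001.damage=7
        String name = items.getProperty("item.1001.name");
        int damage = Integer.parseInt(items.getProperty("item.1001.damage", "0"));
        System.out.println(name + " deals " + damage + " damage");
    }
}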
