I have an input file whose delimiter is a combination of characters, such as #$#. But the Apache Commons CSVParser accepts only a single character as the delimiter, not multiple characters. Please find the input file:
Rajeev Kumar Singh ♥#$#rajeevs#example.com#$#+91-9999999999#$#India
Sachin Tendulkar#$#sachin#example.com#$#+91-9999999998#$#India
Barak Obama#$#barak.obama#example.com#$#+1-1111111111#$#United States
Donald Trump#$#donald.trump#example.com#$#+1-2222222222#$#United States
Code snippet:
import org.apache.commons.csv.CSVFormat;
import org.apache.commons.csv.CSVParser;
import org.apache.commons.csv.CSVRecord;
import java.io.IOException;
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class CSVReader {

    private static final String SAMPLE_CSV_FILE_PATH = "./users.csv";

    public static void main(String[] args) throws IOException {
        try (
            Reader reader = Files.newBufferedReader(Paths.get(SAMPLE_CSV_FILE_PATH));
            CSVParser csvParser = new CSVParser(reader, CSVFormat.EXCEL.withDelimiter('#'))
        ) {
            List<CSVRecord> csvRecords = csvParser.getRecords();
        }
    }
}
Please help me use a multi-character delimiter in the above example. At the moment the delimiter is a single character, '#', but I need to set it to '#$#'.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.function.Function;

public class ReplacingReader extends BufferedReader {

    private final Function<String, String> replacer;

    public ReplacingReader(Reader in, Function<String, String> replacer) {
        super(in);
        this.replacer = replacer;
    }

    @Override
    public String readLine() throws IOException {
        String line = super.readLine();
        // guard against end of stream before applying the replacement
        return line == null ? null : replacer.apply(line);
    }
}
Reader reader = new ReplacingReader(
    Files.newBufferedReader(Paths.get(SAMPLE_CSV_FILE_PATH)),
    line -> line.replace("#$#", "§")
);
CSVParser csvParser = new CSVParser(reader, CSVFormat.EXCEL.withDelimiter('§'));
The buffering is now done twice; to avoid that, one could wrap a plain Reader/FileInputStream and the like instead.
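For instance, a sketch that reuses the ReplacingReader from above but hands it an unbuffered FileReader, so the buffering happens only once (a java.io.FileReader import is assumed):

Reader reader = new ReplacingReader(
    new FileReader(SAMPLE_CSV_FILE_PATH), // unbuffered; ReplacingReader itself is the BufferedReader
    line -> line.replace("#$#", "§")
);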
I am not quite clear on why you would use CSVParser in your case. I just tested it locally with your data and came up with this parsing demo:
public static void main(String... args) {
    try (Stream<String> lines = Files.lines(Paths.get(Thread.currentThread().getContextClassLoader().getResource("csv.txt").toURI()))) {
        lines.forEach(line -> {
            String[] words = line.split("#\\$#");
            System.out.println(Arrays.toString(words));
        });
    } catch (URISyntaxException | IOException e) {
        e.printStackTrace();
    }
}
The output will be:
[Rajeev Kumar Singh ♥, rajeevs#example.com, +91-9999999999, India]
[Sachin Tendulkar, sachin#example.com, +91-9999999998, India]
[Barak Obama, barak.obama#example.com, +1-1111111111, United States]
[Donald Trump, donald.trump#example.com, +1-2222222222, United States]
By the way, the csv.txt is in the resources.
public List<CSVRecord> getCSVRecords(String path, String delimiter) throws IOException {
    List<String> replaced;
    try (Stream<String> lines = Files.lines(Paths.get(path))) {
        // collect the rewritten lines so the file stream can be closed promptly
        replaced = lines.map(line -> line.replaceAll(Pattern.quote(delimiter), "§"))
                        .collect(Collectors.toList());
    }
    try (
        BufferedReader buffer = new BufferedReader(new StringReader(String.join("\n", replaced)));
        CSVParser csvParser = new CSVParser(buffer, CSVFormat.EXCEL.withDelimiter('§'))
    ) {
        return csvParser.getRecords();
    }
}
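A possible call site for this helper, assuming the users.csv sample from the question:

List<CSVRecord> records = getCSVRecords("./users.csv", "#$#");
for (CSVRecord record : records) {
    System.out.println(record.get(0)); // first column, e.g. the name
}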
I have a csv file with testdata:
31-September-2017 01:52:57 02:11:25
31-September-2017 01:52:57 02:11:25
I want to write the test result (PASS/FAIL) for every line of data at the end of each line, like this:
31-September-2017 01:52:57 02:11:25 PASSED
31-September-2017 01:52:57 02:11:25 FAILED
I am using the openCSV API to read the file content. But when I open the same file with CSVWriter, it deletes all the contents of the file. I tried BufferedWriter as well, with the same problem.
Please suggest how I can achieve this while keeping the original contents of the file the same and appending the test result at the end of each line. Thanks.
Use BufferedReader and BufferedWriter for this:
Read the file line by line
Write another file line by line
Add the PASSED / FAILED to each line before writing
Delete the old file and rename the new file
I'll try to explain why this is the best way:
It's efficient: O(n), where n is the size of the file.
It's easy to implement: just 20-30 lines of code for the reading and writing part.
It's readable and understandable: readability is a very important point.
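A minimal sketch of that approach (the file names and the runTest helper are placeholders, not part of the question):

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class AppendResults {

    public static void main(String[] args) throws IOException {
        Path source = Paths.get("testdata.csv");   // placeholder input path
        Path temp = Paths.get("testdata.csv.tmp"); // temporary output file

        try (BufferedReader reader = Files.newBufferedReader(source);
             BufferedWriter writer = Files.newBufferedWriter(temp)) {
            String line;
            while ((line = reader.readLine()) != null) {
                // append the test verdict to each line; runTest is a placeholder
                writer.write(line + "\t" + (runTest(line) ? "PASSED" : "FAILED"));
                writer.newLine();
            }
        }
        // replace the original file with the annotated copy
        Files.move(temp, source, StandardCopyOption.REPLACE_EXISTING);
    }

    // placeholder for the real test logic
    private static boolean runTest(String line) {
        return true;
    }
}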
Something similar to the code below should work with opencsv.
import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
public class Example {

    public static void main(String[] args) {
        try {
            String fileName = "C:\\Users\\eritrean\\Desktop\\yourfile.csv";
            List<String[]> myEntries = readFile(fileName);
            List<String[]> testedEntries = new ArrayList<>();
            for (String[] row : myEntries) {
                String[] withTestResult = addTestResult(row, randomResult());
                testedEntries.add(withTestResult);
            }
            writeToFile(fileName, testedEntries);
        } catch (IOException ex) {
            ex.printStackTrace();
        }
    }

    public static List<String[]> readFile(String fileName) throws IOException {
        // try-with-resources closes the reader even if readAll fails
        try (CSVReader reader = new CSVReader(new FileReader(fileName), '\t', '\"', 0)) {
            return reader.readAll();
        }
    }

    public static String[] addTestResult(String[] row, String result) {
        return (String.join("\t", row) + "\t" + result).split("\t");
    }

    public static String randomResult() {
        Random rand = new Random();
        return rand.nextBoolean() ? "PASSED" : "FAILED";
    }

    public static void writeToFile(String fileName, List<String[]> myEntries) throws IOException {
        try (CSVWriter writer = new CSVWriter(new FileWriter(fileName), '\t', CSVWriter.NO_QUOTE_CHARACTER)) {
            for (String[] row : myEntries) {
                writer.writeNext(row);
            }
        }
    }
}
import java.io.*;
import java.util.List;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class TagText {

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Initializing the tagger
        MaxentTagger tagger = new MaxentTagger("taggers/english-left3words-distsim.tagger");
        List<String> lines = new ReadCSV().readColumn("Tt2.csv", 4);
        for (String line : lines) {
            String tagged = tagger.tagString(line);
            System.out.println(tagged);
        }
    }
}
I'm trying to parse a CSV file, and it contains a character (binary 10010111, —) that I want the parser to ignore. How would I do that?
So I guess you want to remove all special characters?
I guess it was something like: line.replaceAll("[^\\w\\s]", "");
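For example, a quick sketch (the sample string is made up):

String cleaned = "Rajeev \u2014 Kumar".replaceAll("[^\\w\\s]", "");
System.out.println(cleaned); // prints "Rajeev  Kumar" (the dash is gone)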
Edit: Full Code
import java.io.*;
import java.util.List;
import edu.stanford.nlp.tagger.maxent.MaxentTagger;

public class TagText {

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        // Initializing the tagger
        MaxentTagger tagger = new MaxentTagger("taggers/english-left3words-distsim.tagger");
        List<String> lines = new ReadCSV().readColumn("Tt2.csv", 4);
        for (String line : lines) {
            String tagged = tagger.tagString(line.replace("\uFFFD", ""));
            System.out.println(tagged);
        }
    }
}
I have two text files. The first contains a paragraph of text entered by the user. The second is a dictionary of terms taken from an OWL file. Like so:
Inferior salivatory nucleus
Retrosplenial area
lateral agranular part
I have coded the parts that create these files. I am stuck on how to compare the files so that any whole phrases appearing in both the dictionary and the paragraph of text are printed out on the command line in Java.
Try the following code; it will help you. Correct the file path in fileName and add your search condition inside the while loop:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class JavaReadFile {

    public static void main(String[] args) throws IOException {
        String fileName = "filePath.txt";
        // read using BufferedReader, to read line by line
        readUsingBufferedReader(fileName);
    }

    private static void readUsingBufferedReader(String fileName) throws IOException {
        File file = new File(fileName);
        FileReader fr = new FileReader(file);
        BufferedReader br = new BufferedReader(fr);
        String line;
        while ((line = br.readLine()) != null) {
            // process the line
            System.out.println(line);
        }
        // close resources
        br.close();
        fr.close();
    }
}
You could read the file into a string, then iterate over the keys in your dictionary and check whether each one is present in the paragraph with contains. This probably isn't a particularly efficient solution, but it should work.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class Test {

    public static void main(String[] args) throws IOException {
        String fileString = new String(Files.readAllBytes(Paths.get("dictionary.txt")), StandardCharsets.UTF_8);
        Set<String> set = new HashSet<String>();
        set.add("ZYMURGIES");
        for (String term : set) {
            if (fileString.contains(term)) {
                System.out.println(term);
            }
        }
    }
}
Here's a Java 8 version of the contains checking.
package insert.name.here;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InsertNameHere {

    public static void main(String[] args) throws IOException {
        String paragraph = new String(Files.readAllBytes(Paths.get("<paragraph file path>")));
        Files.lines(Paths.get("<dictionary file path>"))
             .filter(paragraph::contains)
             .forEach(phrase -> System.out.printf("Paragraph contains %s%n", phrase));
    }
}
I was wondering if anyone has logic in Java that removes duplicate lines while maintaining the lines' order.
I would prefer a no-regex solution.
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {

    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        if (lines.add(uniqueLine = super.readLine()))
            return uniqueLine;
        return "";
    }

    // for testing..
    public static void main(String args[]) {
        try {
            // Open the file that is the first command line parameter
            FileInputStream fstream = new FileInputStream("test.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read the file line by line
            while ((strLine = br.readLine()) != null) {
                // Print only non-duplicate lines to the console
                if (!strLine.isEmpty())
                    System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}
Modified Version:
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {

    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        // read until encountering a unique line or the end of the stream
        while ((uniqueLine = super.readLine()) != null && !lines.add(uniqueLine));
        return uniqueLine;
    }

    public static void main(String args[]) {
        try {
            // Open the file that is the first command line parameter
            FileInputStream fstream = new FileInputStream("/home/emil/Desktop/ff.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read the file line by line
            while ((strLine = br.readLine()) != null) {
                // Print the content to the console
                System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}
If you feed the lines into a LinkedHashSet, it ignores the repeated ones, since it's a set, but preserves the order, since it's linked. If you just want to know whether you've seen a given line before, feed them into a simple Set as you go, and ignore those the Set already contains.
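A minimal sketch of the LinkedHashSet idea (the class name and file paths are placeholders):

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

public class Dedupe {
    public static void main(String[] args) throws Exception {
        // LinkedHashSet drops repeats but keeps first-seen order
        Set<String> unique = new LinkedHashSet<>(Files.readAllLines(Paths.get("input.txt")));
        Files.write(Paths.get("output.txt"), unique);
    }
}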
Duplicate lines can easily be removed from text or a file using the Java Stream API. Streams support aggregate operations such as sorted and distinct and work with Java's existing data structures and their methods. The following example can be used to remove duplicates from, or sort, the content of a file using the Stream API:
package removeword;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import static java.nio.file.StandardOpenOption.*;
import static java.util.stream.Collectors.joining;

public class Java8UniqueWords {

    public static void main(String[] args) throws IOException {
        Path sourcePath = Paths.get("C:/Users/source.txt");
        Path changedPath = Paths.get("C:/Users/removedDouplicate_file.txt");
        try (final Stream<String> lines = Files.lines(sourcePath)
                // .map(String::toLowerCase)  /* optional: use existing String methods */
                .distinct()
                // .sorted()                  /* aggregate operation to sort the distinct lines */
        ) {
            final String uniqueWords = lines.collect(joining("\n"));
            System.out.println("Final Output: " + uniqueWords);
            Files.write(changedPath, uniqueWords.getBytes(), WRITE, TRUNCATE_EXISTING);
        }
    }
}
Read the text file using a BufferedReader and store it in a LinkedHashSet. Print it back out.
Here's an example:
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DuplicateRemover {

    public String stripDuplicates(String aHunk) {
        StringBuilder result = new StringBuilder();
        Set<String> uniqueLines = new LinkedHashSet<String>();
        String[] chunks = aHunk.split("\n");
        uniqueLines.addAll(Arrays.asList(chunks));
        for (String chunk : uniqueLines) {
            result.append(chunk).append("\n");
        }
        return result.toString();
    }
}
Here are some unit tests to verify (ignore my evil copy-paste ;) ):
import org.junit.Test;
import static org.junit.Assert.*;

public class DuplicateRemoverTest {

    @Test
    public void removesDuplicateLines() {
        String input = "a\nb\nc\nb\nd\n";
        String expected = "a\nb\nc\nd\n";
        DuplicateRemover remover = new DuplicateRemover();
        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

    @Test
    public void removesDuplicateLinesUnalphabetized() {
        String input = "z\nb\nc\nb\nz\n";
        String expected = "z\nb\nc\n";
        DuplicateRemover remover = new DuplicateRemover();
        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }
}
Here's another solution. Let's just use UNIX!
uniq MyFile.java MyFile.tmp && mv MyFile.tmp MyFile.java
(Note: redirecting the output straight back into the input file would truncate it before it is read, and uniq only collapses adjacent duplicates.)
Edit: Oh wait, I re-read the topic. Is this a legal solution since I managed to be language agnostic?
For better performance, it's wise to use Java 8's API features, viz. streams and method references, with a LinkedHashSet as the collection, as below:
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.stream.Collectors;

public class UniqueOperation {

    public static void main(String[] args) throws IOException {
        // try-with-resources flushes and closes the writer automatically
        try (PrintWriter pw = new PrintWriter("abc.txt")) {
            for (String p : Files.newBufferedReader(Paths.get("C:/Users/as00465129/Desktop/FrontEndUdemyLinks.txt"))
                                 .lines()
                                 .collect(Collectors.toCollection(LinkedHashSet::new))) {
                pw.println(p);
            }
        }
        System.out.println("File operation performed successfully");
    }
}
Here I'm using a HashSet to store seen lines:
Scanner scan = new Scanner(System.in); // or any other input source
Set<String> lines = new HashSet<String>();
StringBuilder strb = new StringBuilder();
while (scan.hasNextLine()) {
    String line = scan.nextLine();
    if (lines.add(line)) strb.append(line).append('\n');
}
I am trying to read the contents of a file using a string tokenizer and store all the tokens in an array, but I keep getting an exception-in-main error. I need advice on how to do this. Below is the code I am using for that:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.StringTokenizer;

public class FileTokenizer
{
    private static final String DEFAULT_DELIMITERS = "< , { } >";
    private static final String DEFAULT_TEST_FILE = "trans1.txt";

    public List<String> tokenize(Reader reader) throws IOException
    {
        List<String> tokens = new ArrayList<String>();
        BufferedReader br = null;
        try
        {
            int i = 0;
            br = new BufferedReader(reader);
            Scanner scanner = new Scanner(br);
            while (scanner.hasNext())
            {
                StringTokenizer st = new StringTokenizer(scanner.next(), DEFAULT_DELIMITERS, true);
                while (st.hasMoreElements())
                {
                    String[] t = new String[200];
                    tokens.add(st.nextToken());
                    t[i] = st.nextToken();
                    System.out.println(t[i]);
                    i++;
                }
            }
        }
        finally
        {
            close(br);
        }
        return tokens;
    }

    public static void close(Reader r)
    {
        try
        {
            if (r != null)
            {
                r.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        try
        {
            String fileName = ((args.length > 0) ? args[0] : DEFAULT_TEST_FILE);
            FileReader fileReader = new FileReader(new File(fileName));
            FileTokenizer fileTokenizer = new FileTokenizer();
            List<String> tokens = fileTokenizer.tokenize(fileReader);
            //System.out.println(tokens);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
My file looks like:
PDA = (
{ q1, q2, q3, q4},
{ 0, 1 },
{ 0, $ },
{ (q1, #, #) -> { (q2, $) }, (q2, 0, #) -> { (q2, 0) },
(q2, 1, 0) -> { (q3, #) }, (q3, 1, 0) -> { (q3, #) },
(q3, #, $) -> { (q4, #) } },
q1,
{ q1, q4}
)
You will get a java.util.NoSuchElementException since you are calling st.nextToken() twice within the loop:
while (st.hasMoreElements())
Modifying harigm's example, you can then add t[i] to tokens as you require:
String[] t = new String[200];
t[i] = st.nextToken(); // call nextToken() only once per iteration
System.out.println(t[i]);
tokens.add(t[i]);
Delimiters shouldn't be separated by spaces:
private static final String DEFAULT_DELIMITERS = "<,{}>";
Also, keep the following in mind (from the Javadoc):
StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead.
String.split() was introduced in JDK 1.4.
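For illustration, a small sketch of the split-based alternative, using a line from the question's file (the exact regex is my assumption):

String line = "{ q1, q2, q3, q4},";
for (String token : line.split("[\\s,{}<>]+")) {
    if (!token.isEmpty()) {
        System.out.println(token); // prints q1, q2, q3, q4 on separate lines
    }
}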
That said:
Using a Scanner to tokenize a stream together with a StringTokenizer looks a bit weird to me;
You call st.nextToken() twice in the inner loop;
t is useless. You re-create it each time in your inner loop and use only one element of it.
It seems that what you are trying to build is a lexical analyzer. Maybe you should look up some documentation on the subject.
Hi,
I have modified your code and now it works fine; check this:
package org.sample;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.StringTokenizer;

public class FileTokenizer
{
    private static final String DEFAULT_DELIMITERS = "< , { } >";
    // private static final String DEFAULT_TEST_FILE = "trans1.txt";

    public List<String> tokenize(Reader reader) throws IOException
    {
        List<String> tokens = new ArrayList<String>();
        BufferedReader br = null;
        try
        {
            int i = 0;
            br = new BufferedReader(reader);
            Scanner scanner = new Scanner(br);
            while (scanner.hasNext())
            {
                StringTokenizer st = new StringTokenizer(scanner.next(), DEFAULT_DELIMITERS, true);
                while (st.hasMoreElements())
                {
                    String[] t = new String[200];
                    t[i] = st.nextToken(); // call nextToken() only once per iteration
                    tokens.add(t[i]);
                    System.out.println(t[i]);
                    i++;
                }
            }
        }
        finally
        {
            close(br);
        }
        return tokens;
    }

    public static void close(Reader r)
    {
        try
        {
            if (r != null)
            {
                r.close();
            }
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }

    public static void main(String[] args)
    {
        try
        {
            // String fileName = ((args.length > 0) ? args[0] : DEFAULT_TEST_FILE);
            FileReader fileReader = new FileReader(new File("c:\\DevTest\\1.txt"));
            FileTokenizer fileTokenizer = new FileTokenizer();
            List<String> tokens = fileTokenizer.tokenize(fileReader);
            //System.out.println(tokens);
        }
        catch (IOException e)
        {
            e.printStackTrace();
        }
    }
}
Looking at your input file, I should point out that its hierarchical and irregular structure makes it better suited to parsing with an actual parser. You may have to learn how to use a parser generator and write a lexer and grammar for it, but in the end you'll have much more maintainable code. Doing this yourself is rather painstaking and error-prone.
I recommend ANTLR. It's quite mature, and it has a wide enough user base that I'm sure you can get help easily.