I was wondering if anyone has logic in Java that removes duplicate lines while maintaining the lines' order.
I would prefer a solution without regex.
import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Set;

public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        if (lines.add(uniqueLine = super.readLine()))
            return uniqueLine;
        return ""; // duplicate line: signal with an empty string
    }

    // for testing...
    public static void main(String args[]) {
        try {
            // Open the file that is the first
            // command line parameter
            FileInputStream fstream = new FileInputStream("test.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read File Line By Line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console, skipping the
                // empty strings that mark duplicates
                if (!strLine.isEmpty())
                    System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}
Modified Version:
public class UniqueLineReader extends BufferedReader {
    Set<String> lines = new HashSet<String>();

    public UniqueLineReader(Reader arg0) {
        super(arg0);
    }

    @Override
    public String readLine() throws IOException {
        String uniqueLine;
        // read until encountering a unique line; stop at EOF (null)
        while ((uniqueLine = super.readLine()) != null && !lines.add(uniqueLine));
        return uniqueLine;
    }

    public static void main(String args[]) {
        try {
            // Open the file that is the first
            // command line parameter
            FileInputStream fstream = new FileInputStream("/home/emil/Desktop/ff.txt");
            UniqueLineReader br = new UniqueLineReader(new InputStreamReader(fstream));
            String strLine;
            // Read File Line By Line
            while ((strLine = br.readLine()) != null) {
                // Print the content on the console
                System.out.println(strLine);
            }
            // Close the input stream
            br.close();
        } catch (Exception e) { // Catch exception if any
            System.err.println("Error: " + e.getMessage());
        }
    }
}
If you feed the lines into a LinkedHashSet, it ignores the repeated ones, since it's a set, but preserves the order, since it's linked. If you just want to know whether you've seen a given line before, feed them into a simple Set as you go, and ignore those which the Set already contains.
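For example, a minimal sketch of that approach (the file name is just a placeholder):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.Set;

public class OrderPreservingDedupe {
    public static void main(String[] args) throws IOException {
        // LinkedHashSet drops repeated lines but keeps first-seen order
        Set<String> unique = new LinkedHashSet<>(Files.readAllLines(Paths.get("test.txt")));
        unique.forEach(System.out::println);
    }
}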
It is easy to remove duplicate lines from text or a file using the Java 8 Stream API. Streams support aggregate operations such as sorted and distinct, and work with Java's existing data structures and their methods. The following example can be used to remove duplicates from, or sort, the content of a file using the Stream API:
package removeword;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;
import static java.nio.file.StandardOpenOption.*;
import static java.util.stream.Collectors.joining;

public class Java8UniqueWords {
    public static void main(String[] args) throws IOException {
        Path sourcePath = Paths.get("C:/Users/source.txt");
        Path changedPath = Paths.get("C:/Users/removedDouplicate_file.txt");
        try (final Stream<String> lines = Files.lines(sourcePath)
                // .map(line -> line.toLowerCase())  /* optional: use existing String methods */
                .distinct()
                // .sorted()  /* aggregate operation to sort the distinct lines */
        ) {
            final String uniqueWords = lines.collect(joining("\n"));
            System.out.println("Final Output:" + uniqueWords);
            Files.write(changedPath, uniqueWords.getBytes(), WRITE, TRUNCATE_EXISTING);
        }
    }
}
Read the text file using a BufferedReader and store it in a LinkedHashSet. Print it back out.
Here's an example:
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.Set;

public class DuplicateRemover {
    public String stripDuplicates(String aHunk) {
        StringBuilder result = new StringBuilder();
        Set<String> uniqueLines = new LinkedHashSet<String>();
        String[] chunks = aHunk.split("\n");
        uniqueLines.addAll(Arrays.asList(chunks));
        for (String chunk : uniqueLines) {
            result.append(chunk).append("\n");
        }
        return result.toString();
    }
}
Here are some unit tests to verify (ignore my evil copy-paste ;) ):
import org.junit.Test;
import static org.junit.Assert.*;

public class DuplicateRemoverTest {
    @Test
    public void removesDuplicateLines() {
        String input = "a\nb\nc\nb\nd\n";
        String expected = "a\nb\nc\nd\n";
        DuplicateRemover remover = new DuplicateRemover();
        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }

    @Test
    public void removesDuplicateLinesUnalphabetized() {
        String input = "z\nb\nc\nb\nz\n";
        String expected = "z\nb\nc\n";
        DuplicateRemover remover = new DuplicateRemover();
        String actual = remover.stripDuplicates(input);
        assertEquals(expected, actual);
    }
}
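For completeness, a minimal sketch of the BufferedReader variant described at the top of this answer (the file name is a placeholder):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.LinkedHashSet;
import java.util.Set;

public class FileDuplicateRemover {
    public static void main(String[] args) throws IOException {
        Set<String> uniqueLines = new LinkedHashSet<String>();
        try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                uniqueLines.add(line); // the set ignores repeats, keeps insertion order
            }
        }
        // print it back out, first occurrences in original order
        for (String line : uniqueLines) {
            System.out.println(line);
        }
    }
}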
Here's another solution. Let's just use UNIX!
cat MyFile.java | uniq > MyFile.java
Edit: Oh wait, I re-read the topic. Is this a legal solution since I managed to be language agnostic? (Fair warning: uniq only collapses adjacent duplicates, and redirecting output back into the input file will truncate it, so treat this one as a joke.)
For better performance, it's wise to use Java 8's API features, viz. streams and method references, with a LinkedHashSet as the collection, as below:
import java.io.IOException;
import java.io.PrintWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.LinkedHashSet;
import java.util.stream.Collectors;

public class UniqueOperation {

    private static PrintWriter pw;

    public static void main(String[] args) throws IOException {
        pw = new PrintWriter("abc.txt");
        for (String p : Files.newBufferedReader(Paths.get("C:/Users/as00465129/Desktop/FrontEndUdemyLinks.txt"))
                             .lines()
                             .collect(Collectors.toCollection(LinkedHashSet::new)))
            pw.println(p);
        pw.flush();
        pw.close();
        System.out.println("File operation performed successfully");
    }
}
Here I'm using a HashSet to store the lines seen so far:
Scanner scan = new Scanner(System.in); // input (or a Scanner over your file)
Set<String> lines = new HashSet<String>();
StringBuilder strb = new StringBuilder();
while (scan.hasNextLine()) {
    String line = scan.nextLine();
    if (lines.add(line)) strb.append(line).append("\n"); // add() returns false for repeats
}
Related
The code should reverse each word and output the result to out.txt, but this does not happen. Can you explain my mistake in the code? Thanks in advance.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        FileReader input = new FileReader("in.txt");
        FileWriter output = new FileWriter("out.txt");
        BufferedReader sb = new BufferedReader(input);
        String data;
        while ((data = sb.readLine()) != null) {
            String[] words = data.split(" ");
            for (String a : words) {
                StringBuilder builder = new StringBuilder(a);
                builder.reverse();
                while ((sb.read()) != -1) {
                    output.write(String.valueOf(builder.reverse()));
                }
            }
        }
    }
}
You are reversing the string twice, so it ends up back at the original string. Also, there is an unnecessary (as per my understanding) while loop inside the for loop (I have removed it in my answer).
Try the below code:
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

public class Main {
    public static void main(String[] args) throws IOException {
        FileReader input = new FileReader("in.txt");
        FileWriter output = new FileWriter("out.txt");
        BufferedReader sb = new BufferedReader(input);
        String data;
        while ((data = sb.readLine()) != null) {
            String[] words = data.split(" ");
            // the above statement can be replaced with
            // String[] words = data.split(" {34}");
            for (String a : words) {
                StringBuilder builder = new StringBuilder(a);
                // why is the while loop required?
                // while ((sb.read()) != -1) {
                output.write(builder.reverse().toString());
                output.flush(); // flush data to the file
                // }
            }
        }
        output.close();
    }
}
Read the FileWriter documentation on how to flush data, and close the writer after writing is completed.
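As a minimal sketch, the try-with-resources form closes the writer automatically, and close() flushes any buffered data first (the file name and content here are placeholders):

import java.io.FileWriter;
import java.io.IOException;

public class WriterDemo {
    public static void main(String[] args) throws IOException {
        // the writer is closed (and therefore flushed) automatically,
        // even if an exception is thrown inside the block
        try (FileWriter output = new FileWriter("out.txt")) {
            output.write(new StringBuilder("drawer").reverse().toString()); // writes "reward"
        }
    }
}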
I'm a little confused as to how best to implement a simple DataProvider, having not done so before.
I have a very simple comma-delimited .csv file:
978KAL,625JBH,876SSH,452GSH
I simply need to read it in and iterate over the records, running the same test for each record until done.
My code so far:
String csvFile = "src/test/resources/registrationsData.csv";
BufferedReader br = null;
String line = "";
String cvsSplitBy = ",";

@DataProvider(name = "getRegistrations")
private Object[] getCSVTestData() {
    Object[] registrationsObject;
    try {
        br = new BufferedReader(new FileReader(csvFile));
        while ((line = br.readLine()) != null) {
            // use comma as separator
            String[] registrations = line.split(cvsSplitBy);
            System.out.println(registrations[0] + "," + registrations[1]);
        }
    } catch (Exception e) {
        // File not found & IOException handling here
    }
    registrationsObject = new Object[][]{registrations};
    return registrationsObject;
}
@Test(dataProvider = "getRegistrations")
public void getRegistrations(String registration) {
    Object[] objRegArray = getCSVTestData();
    for (int i = 0; objRegArray.length > i; i++) {
        // run tests for every record in the array (csv file)
    }
}
I know that I need to use an Object array return type for the Data Provider method.
I'm unclear as to how (and/or the best way) to retrieve each record from the objRegArray object.
This is a basic Collections question, I guess; can anyone point me in the right direction?
Check this code with my explanation below:
package click.webelement.testng;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;
import java.io.File;
import java.io.FileNotFoundException;
import java.util.Iterator;
import java.util.Scanner;

public class OneLineCSV {

    final static String CSV_FILE = "/path_to_file/oneline.csv";
    final static String DELIMETER = ",";

    @DataProvider(name = "test")
    public Iterator<Object[]> testDP() {
        try {
            Scanner scanner = new Scanner(new File(CSV_FILE)).useDelimiter(DELIMETER);
            return new Iterator<Object[]>() {
                @Override
                public boolean hasNext() {
                    return scanner.hasNext();
                }

                @Override
                public Object[] next() {
                    return new Object[]{scanner.next()};
                }
            };
        } catch (FileNotFoundException e) {
            e.printStackTrace();
            return null;
        }
    }

    @Test(dataProvider = "test")
    public void testOneLineCSV(String value) {
        System.out.println(value);
    }
}
So I would use the Scanner class, since it has a convenient facility for parsing a string into tokens.
I would also use the capability to return Iterator<Object[]> from your data provider, since Scanner is designed in that way: you simply wrap it in a new Iterator that converts each String returned by Scanner.next() into new Object[]{scanner.next()}.
Using an Iterator with a Scanner is really more comfortable, since you may not know in advance how many values you will have to provide, so you don't have to care about defining an array size.
I have a file called "marathon", where I have 7 keys:
sex
time
athlete
athlete's nationality
date
city
country
split by a comma ",". I have to put the second key (time) in a TreeMap.
At the moment I am just trying to show only the time in the console.
So here is my code:
public class Text {
    @SuppressWarnings("resource")
    public static void main(String[] args) throws FileNotFoundException {
        try {
            BufferedReader in = new BufferedReader(new FileReader("marathon"));
            String str;
            str = in.readLine();
            while ((str = in.readLine()) != null) {
                // System.out.println(str);
                String[] ar = str.split(",");
                System.out.println(ar[0]);
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
    }
}
This is what a line of the text looks like:
M, 2:30:57.6, Harry Payne, GBR, 1929-07-05, Stamford Bridge, England
When I run the program above with System.out.println(ar[0]);, ar[0] shows me the first field of each line in the console, so M's and F's. But when I use ar[1], there is an exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1
As others have pointed out, you call readLine() twice before you get into the body of the loop, so you miss the first line.
But you are also not checking that readLine() produced a properly formatted line. It may be an empty line, or a line that in some other way does not split into the array you expect.
So you should add an if-statement that checks that you have what you expected, like so...
public class Text {
    @SuppressWarnings("resource")
    public static void main(String[] args) throws FileNotFoundException {
        try {
            BufferedReader in = new BufferedReader(new FileReader("marathon"));
            String str = "";
            while ((str = in.readLine()) != null) {
                String[] ar = str.split(",");
                if (ar.length >= 7) {
                    System.out.println(ar[0] + ", " + ar[1]);
                }
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
    }
}
Please try the code below; it works for me.
You should read the line only once, in the while-loop condition.
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;

public class Text {
    @SuppressWarnings("resource")
    public static void main(String[] args) throws FileNotFoundException {
        try {
            BufferedReader in = new BufferedReader(new FileReader("marathon"));
            String str;
            while ((str = in.readLine()) != null) {
                // System.out.println(str);
                String[] ar = str.split(",");
                System.out.println(ar[0]);
                System.out.println(ar[1]);
            }
            in.close();
        } catch (IOException e) {
            System.out.println("File Read Error");
        }
    }
}
SortedMap<String, String[]> map = new TreeMap<>();
Path path = Paths.get("marathon");
Files.lines(path, Charset.defaultCharset())
     .map(line -> line.split(",\\s*"))
     .peek(words -> {
         if (words.length != 7) {
             Logger.getLogger(getClass().getName()).info("Line wrong: " + String.join(",", words));
         }
     })
     .filter(words -> words.length == 7)
     .forEach(words -> map.put(words[1], words));
However, there are CSV reader classes out there that can handle quoted fields containing commas and such.
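For instance, here's a minimal sketch using the OpenCSV library (an assumption on my part; any CSV reader with quoted-field support would do):

import com.opencsv.CSVReader;
import java.io.FileReader;
import java.util.SortedMap;
import java.util.TreeMap;

public class MarathonCsv {
    public static void main(String[] args) throws Exception {
        SortedMap<String, String[]> map = new TreeMap<>();
        try (CSVReader reader = new CSVReader(new FileReader("marathon"))) {
            String[] fields;
            while ((fields = reader.readNext()) != null) { // quoted commas are handled
                if (fields.length == 7) {
                    map.put(fields[1].trim(), fields); // key on the time column
                }
            }
        }
        map.keySet().forEach(System.out::println);
    }
}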
Java 8 just for fun
import java.io.IOException;
import java.nio.file.*;
import java.util.*;
import java.util.stream.Stream;
import static java.nio.charset.Charset.defaultCharset;
import static java.lang.System.out;
import static java.nio.file.Files.lines;

public class Main {
    public static void main(String[] args) throws IOException {
        Map<String, String[]> map = new TreeMap<>();
        try (Stream<String> lines = lines(Paths.get("marathon"), defaultCharset())) {
            lines.map(line -> line.split(","))
                 .forEach(entry -> map.put(entry[1], entry));
            map.values().forEach(entry -> out.println(Arrays.toString(entry)));
        }
    }
}
I have two text files. The first is a paragraph of text input by the user. The second is a dictionary of terms taken from an OWL file, like so:
Inferior salivatory nucleus
Retrosplenial area
lateral agranular part
I have coded the bits that make these files. I am stuck on how to compare the files so that any whole phrases appearing in both the dictionary and the paragraph of text are printed to the command line in Java.
Try the following code; it should help you. Correct your file path in fileName and put your search condition inside the while loop:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

public class JavaReadFile {
    public static void main(String[] args) throws IOException {
        String fileName = "filePath.txt";
        // read using BufferedReader, to read line by line
        readUsingBufferedReader(fileName);
    }

    private static void readUsingBufferedReader(String fileName) throws IOException {
        File file = new File(fileName);
        FileReader fr = new FileReader(file);
        BufferedReader br = new BufferedReader(fr);
        String line;
        while ((line = br.readLine()) != null) {
            // process the line
            System.out.println(line);
        }
        // close resources
        br.close();
        fr.close();
    }
}
You could read the file into a string and iterate over the keys in your dictionary, checking whether each is present in the paragraph with contains. This probably isn't a particularly efficient solution, but it should work.
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.Set;

public class Test {
    public static void main(String[] args) throws IOException {
        String fileString = new String(Files.readAllBytes(Paths.get("dictionary.txt")),
                StandardCharsets.UTF_8);
        Set<String> set = new HashSet<String>();
        set.add("ZYMURGIES");
        for (String term : set) {
            if (fileString.contains(term)) {
                System.out.println(term);
            }
        }
    }
}
Here's a Java 8 version of the contains checking.
package insert.name.here;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class InsertNameHere {
    public static void main(String[] args) throws IOException {
        String paragraph = new String(Files.readAllBytes(Paths.get("<paragraph file path>")));
        Files.lines(Paths.get("<dictionary file path>"))
             .filter(paragraph::contains)
             .forEach(phrase -> System.out.printf("Paragraph contains %s%n", phrase));
    }
}
I am trying to read the contents of a file using StringTokenizer and store all the tokens in an array, but I keep getting an "exception in main" error. I need advice on how to do this. Below is the code I am using:
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.StringTokenizer;

public class FileTokenizer {
    private static final String DEFAULT_DELIMITERS = "< , { } >";
    private static final String DEFAULT_TEST_FILE = "trans1.txt";

    public List<String> tokenize(Reader reader) throws IOException {
        List<String> tokens = new ArrayList<String>();
        BufferedReader br = null;
        try {
            int i = 0;
            br = new BufferedReader(reader);
            Scanner scanner = new Scanner(br);
            while (scanner.hasNext()) {
                StringTokenizer st = new StringTokenizer(scanner.next(), DEFAULT_DELIMITERS, true);
                while (st.hasMoreElements()) {
                    String[] t = new String[200];
                    tokens.add(st.nextToken());
                    t[i] = st.nextToken();
                    System.out.println(t[i]);
                    i++;
                }
            }
        } finally {
            close(br);
        }
        return tokens;
    }

    public static void close(Reader r) {
        try {
            if (r != null) {
                r.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            String fileName = ((args.length > 0) ? args[0] : DEFAULT_TEST_FILE);
            FileReader fileReader = new FileReader(new File(fileName));
            FileTokenizer fileTokenizer = new FileTokenizer();
            List<String> tokens = fileTokenizer.tokenize(fileReader);
            // System.out.println(tokens);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
My file looks like:
PDA = (
{ q1, q2, q3, q4},
{ 0, 1 },
{ 0, $ },
{ (q1, #, #) -> { (q2, $) }, (q2, 0, #) -> { (q2, 0) },
(q2, 1, 0) -> { (q3, #) }, (q3, 1, 0) -> { (q3, #) },
(q3, #, $) -> { (q4, #) } },
q1,
{ q1, q4}
)
You will get the java.util.NoSuchElementException since you are calling st.nextToken() twice within the loop, while only checking hasMoreElements() once:
while (st.hasMoreElements())
Modifying harigm's example, you can then add t[i] to tokens as you require:
t[i] = st.nextToken();
System.out.println(t[i]);
tokens.add(t[i]);
Delimiters shouldn't be separated by spaces:
private static final String DEFAULT_DELIMITERS = "<,{}>";
Also, keep the following in mind (from the Javadoc):
StringTokenizer is a legacy class that is retained for compatibility reasons although its use is discouraged in new code. It is recommended that anyone seeking this functionality use the split method of String or the java.util.regex package instead.
String.split() was introduced in JDK 1.4.
That said:
Using a Scanner to tokenize a stream together with a StringTokenizer looks a bit weird to me;
You call st.nextToken() twice in the inner loop;
t is useless. You re-create it each time in your inner loop and use only one element of it.
It seems that what you are trying to build is a lexical analyzer. Maybe you should look up some documentation on the subject.
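For reference, a minimal sketch of the String.split approach suggested by the Javadoc (note that, unlike the returnDelims flag of StringTokenizer, this discards the delimiters; the character class is an assumption based on DEFAULT_DELIMITERS above):

String input = "{ q1, q2, q3, q4}";
// split on any run of delimiter characters or whitespace
String[] tokens = input.split("[<,{}>\\s]+");
for (String token : tokens) {
    if (!token.isEmpty()) { // a leading delimiter yields one empty token
        System.out.println(token); // prints q1, q2, q3, q4
    }
}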
Hi, I have modified your code and now it works fine; check this:
package org.sample;

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;
import java.util.StringTokenizer;

public class FileTokenizer {
    private static final String DEFAULT_DELIMITERS = "< , { } >";
    // private static final String DEFAULT_TEST_FILE = "trans1.txt";

    public List<String> tokenize(Reader reader) throws IOException {
        List<String> tokens = new ArrayList<String>();
        BufferedReader br = null;
        try {
            int i = 0;
            br = new BufferedReader(reader);
            Scanner scanner = new Scanner(br);
            while (scanner.hasNext()) {
                StringTokenizer st = new StringTokenizer(scanner.next(), DEFAULT_DELIMITERS, true);
                while (st.hasMoreElements()) {
                    String[] t = new String[200];
                    // tokens.add(st.nextToken()); // second nextToken() call removed
                    t[i] = st.nextToken();
                    System.out.println(t[i]);
                    i++;
                }
            }
        } finally {
            close(br);
        }
        return tokens;
    }

    public static void close(Reader r) {
        try {
            if (r != null) {
                r.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        try {
            // String fileName = ((args.length > 0) ? args[0] : DEFAULT_TEST_FILE);
            FileReader fileReader = new FileReader(new File("c:\\DevTest\\1.txt"));
            FileTokenizer fileTokenizer = new FileTokenizer();
            List<String> tokens = fileTokenizer.tokenize(fileReader);
            // System.out.println(tokens);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Looking at your input file, I should point out that its hierarchical and irregular structure makes it better suited to being parsed by an actual parser. You may have to learn how to use a parser generator, write a lexer and grammar for it, etc., but in the end you'll end up with much more maintainable code. Doing this yourself is rather painstaking and error-prone.
I recommend ANTLR. It's quite mature, and it has a wide enough user base that I'm sure you can get help easily.
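To give a flavour, here's a minimal sketch of the Java driver for an ANTLR 4 generated parser, assuming a hypothetical grammar file PDA.g4 with a top-level rule named pda (the PDALexer and PDAParser classes would be generated from that grammar):

import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class PdaParserDemo {
    public static void main(String[] args) throws Exception {
        // PDALexer and PDAParser are generated by ANTLR from the hypothetical PDA.g4
        PDALexer lexer = new PDALexer(CharStreams.fromFileName("trans1.txt"));
        PDAParser parser = new PDAParser(new CommonTokenStream(lexer));
        ParseTree tree = parser.pda();                 // parse the whole input
        System.out.println(tree.toStringTree(parser)); // dump the parse tree
    }
}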