I'm currently trying to write a program in Java to sort the contents of a file in alphabetical order. The file contains:
camera10:192.168.112.43
camera12:192.168.2.112
camera1:127.0.0.1
camera2:133.192.31.42
camera3:145.154.42.58
camera8:192.168.12.205
camera3:192.168.2.122
camera5:192.168.112.54
camera123:192.168.2.112
camera4:192.168.112.1
camera6:192.168.112.234
camera7:192.168.112.20
camera9:192.168.2.112
And I would like to sort them and write the result back into the file (which in this case is "daftarCamera.txt"). But somehow my algorithm sorts them the wrong way; the result is:
camera10:192.168.112.43
camera123:192.168.2.112
camera12:192.168.2.112
camera1:127.0.0.1
camera2:133.192.31.42
camera3:145.154.42.58
camera3:192.168.2.122
camera4:192.168.112.1
camera5:192.168.112.54
camera6:192.168.112.234
camera7:192.168.112.20
camera8:192.168.12.205
camera9:192.168.2.112
while the result I want is:
camera1:127.0.0.1
camera2:133.192.31.42
camera3:145.154.42.58
camera3:192.168.2.122
camera4:192.168.112.1
camera5:192.168.112.54
camera6:192.168.112.234
camera7:192.168.112.20
camera8:192.168.12.205
camera9:192.168.2.112
camera10:192.168.112.43
camera12:192.168.2.112
camera123:192.168.2.112
Here's the code I use:
public void sortCamera() {
    BufferedReader reader = null;
    BufferedWriter writer = null;
    ArrayList<String> lines = new ArrayList<String>();
    try {
        // "log" is the File pointing at daftarCamera.txt
        reader = new BufferedReader(new FileReader(log));
        String currentLine = reader.readLine();
        while (currentLine != null) {
            lines.add(currentLine);
            currentLine = reader.readLine();
        }
        Collections.sort(lines);
        // rewrite the same file with the sorted lines
        writer = new BufferedWriter(new FileWriter(log));
        for (String line : lines) {
            writer.write(line);
            writer.newLine();
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        try {
            if (reader != null) {
                reader.close();
            }
            if (writer != null) {
                writer.close();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
You are currently performing a lexicographic sort; to perform a numerical sort you need a Comparator that orders the lines numerically by the value between "camera" and ":" in each line. You could split on ":", then use a regular expression to grab the digits, parse them, and compare. Like,
Collections.sort(lines, (String a, String b) -> {
    String[] leftTokens = a.split(":"), rightTokens = b.split(":");
    Pattern p = Pattern.compile("camera(\\d+)");
    int left = Integer.parseInt(p.matcher(leftTokens[0]).replaceAll("$1"));
    int right = Integer.parseInt(p.matcher(rightTokens[0]).replaceAll("$1"));
    return Integer.compare(left, right);
});
Making a fully reproducible example:
List<String> lines = new ArrayList<>(Arrays.asList( //
        "camera10:192.168.112.43", //
        "camera12:192.168.2.112", //
        "camera1:127.0.0.1", //
        "camera2:133.192.31.42", //
        "camera3:145.154.42.58", //
        "camera8:192.168.12.205", //
        "camera3:192.168.2.122", //
        "camera5:192.168.112.54", //
        "camera123:192.168.2.112", //
        "camera4:192.168.112.1", //
        "camera6:192.168.112.234", //
        "camera7:192.168.112.20", //
        "camera9:192.168.2.112"));
Collections.sort(lines, (String a, String b) -> {
    String[] leftTokens = a.split(":"), rightTokens = b.split(":");
    Pattern p = Pattern.compile("camera(\\d+)");
    int left = Integer.parseInt(p.matcher(leftTokens[0]).replaceAll("$1"));
    int right = Integer.parseInt(p.matcher(rightTokens[0]).replaceAll("$1"));
    return Integer.compare(left, right);
});
System.out.println(lines);
And that outputs
[camera1:127.0.0.1, camera2:133.192.31.42, camera3:145.154.42.58, camera3:192.168.2.122, camera4:192.168.112.1, camera5:192.168.112.54, camera6:192.168.112.234, camera7:192.168.112.20, camera8:192.168.12.205, camera9:192.168.2.112, camera10:192.168.112.43, camera12:192.168.2.112, camera123:192.168.2.112]
Your sorting algorithm uses the ordinary collating sequence, i.e. it sorts the strings alphabetically, as if the digits were letters. That is why, for example, "camera10..." comes before "camera1:...": the character '0' compares as less than ':'.
You need to specify an ad-hoc comparison function that splits the string and extracts the numerical value of the suffix.
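As a side note, a more compact comparator is possible if you rely on the fixed "camera" prefix. The following is only a sketch, assuming every line really does have the exact form "camera<digits>:<address>"; it uses Comparator.comparingInt and avoids the regular expression entirely:
// Sketch: assumes every line looks like "camera<digits>:<address>" (needs java.util.Comparator).
Comparator<String> byCameraNumber = Comparator.comparingInt(
        line -> Integer.parseInt(line.substring("camera".length(), line.indexOf(':'))));
lines.sort(byCameraNumber);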
I have a .csv file that is formatted like this:
ID,date,itemName
456,1-4-2020,Lemon
345,1-3-2020,Bacon
345,1-4-2020,Sausage
123,1-1-2020,Apple
123,1-2-2020,Pineapple
234,1-2-2020,Beer
345,1-4-2020,Cheese
I have already implemented the algorithm to go through the file, scan the first number, sort in ascending order, and produce a new output:
123,1-1-2020,Apple
123,1-2-2020,Pineapple
234,1-2-2020,Beer
345,1-3-2020,Bacon
345,1-4-2020,Cheese
345,1-4-2020,Sausage
456,1-4-2020,Lemon
My question is: how do I adapt my algorithm to produce an output that counts the duplicate first-number entries and reformats each line to look like this...
123,1-1-2020,1,Apple
123,1-2-2020,1,Pineapple
234,1-2-2020,1,Beer
345,1-3-2020,1,Bacon
345,1-4-2020,2,Cheese,Sausage
456,1-4-2020,1,Lemon
...so that it counts the number of occurrences for each ID, writes that count into the line, and, if the date for that ID is also the same, combines the item names onto the same line. Below is my source code (each line in the .csv is made into an object named 'Receipt' that has ID, date, and name with their respective get() methods):
public class ReadFile {
    private static List<Receipt> readFile() {
        List<Receipt> receipts = new ArrayList<>();
        try {
            BufferedReader reader = new BufferedReader(new FileReader("dataset.csv"));
            // Move past the first title line
            reader.readLine();
            String line = reader.readLine();
            // Start reading from second line till EOF, split each string at ","
            while (line != null) {
                String[] attributes = line.split(",");
                Receipt attribute = getAttributes(attributes);
                receipts.add(attribute);
                line = reader.readLine();
            }
            reader.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return receipts;
    }

    private static Receipt getAttributes(String[] attributes) {
        // Get ID located before the first ","
        long memberNumber = Long.parseLong(attributes[0]);
        // Get date located after the first ","
        String date = attributes[1];
        // Get name located after the second ","
        String name = attributes[2];
        return new Receipt(memberNumber, date, name);
    }

    // Parse the data into new file after sorting
    private static void parse(List<Receipt> receipts) {
        PrintWriter output = null;
        try {
            output = new PrintWriter("output.txt");
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        // For each receipts, assert the text output stream is not null, print line.
        for (Receipt p : receipts) {
            assert output != null;
            output.println(p.getMemberNumber() + "," + p.getDate() + "," + p.getName());
        }
        assert output != null;
        output.close();
    }

    // Main method, accept input file, sort and parse
    public static void main(String[] args) {
        List<Receipt> receipts = readFile();
        QuickSort q = new QuickSort();
        q.quickSort(receipts);
        parse(receipts);
    }
}
The easiest way is to use a map.
Sample data from your file:
String[] lines = {
        "123,1-1-2020,Apple",
        "123,1-2-2020,Pineapple",
        "234,1-2-2020,Beer",
        "345,1-3-2020,Bacon",
        "345,1-4-2020,Cheese",
        "345,1-4-2020,Sausage",
        "456,1-4-2020,Lemon"};
Create a map. As you read the lines, split them and add them to the map using the compute method. This puts the whole line in if the key (number and date) doesn't exist yet; otherwise it appends the last field (the item name) to the existing entry. The file does not have to be sorted, but the values will be appended in the order they are encountered.
Map<String, String> map = new LinkedHashMap<>();
for (String line : lines) {
    String[] vals = line.split(",");
    // if v is null, add the line
    // if v exists, take the existing line and append the last value
    map.compute(vals[0] + vals[1], (k, v) -> v == null ? line : v + "," + vals[2]);
}
for (String line : map.values()) {
    String[] fields = line.split(",", 3);
    int count = fields[2].split(",").length;
    System.out.printf("%s,%s,%s,%s%n", fields[0], fields[1], count, fields[2]);
}
For this sample data, the run prints:
123,1-1-2020,1,Apple
123,1-2-2020,1,Pineapple
234,1-2-2020,1,Beer
345,1-3-2020,1,Bacon
345,1-4-2020,2,Cheese,Sausage
456,1-4-2020,1,Lemon
I am trying to read in a text file that contains numbers separated by commas. The file is large and may contain up to a few thousand numbers. I need to add these numbers to a list:
List<Integer> listIntegers = new ArrayList<Integer>();
What would be the best approach to take? I am currently reading in the file like this:
StringBuilder sb = new StringBuilder();
BufferedReader br = null;
try {
    br = new BufferedReader(new FileReader("D:\\generated30-1.cav"));
    String line = null;
    while ((line = br.readLine()) != null)
    {
        sb.append(line.replaceAll(",",""));
        if (sb.length() > 0)
        {
            sb.append("\n");
        }
    }
}
catch (IOException e)
{
    e.printStackTrace();
}
finally
{
    try
    {
        if (br != null)
        {
            br.close();
        }
    }
    catch (IOException ex)
    {
        ex.printStackTrace();
    }
}
String contents = sb.toString();
If the numbers are separated by commas, you should not be removing the commas. I would use a Scanner, I would use try-with-resources instead of an explicit close(), and I would split each line on the commas (the \\s* parts match optional surrounding whitespace). Like,
List<Integer> listIntegers = new ArrayList<>();
File f = new File("D:\\generated30-1.cav");
try (Scanner sc = new Scanner(f)) {
    while (sc.hasNextLine()) {
        String line = sc.nextLine();
        String[] tokens = line.split("\\s*,\\s*");
        for (String token : tokens) {
            listIntegers.add(Integer.parseInt(token));
        }
    }
} catch (Exception e) {
    e.printStackTrace();
}
return listIntegers;
Since your input file does not contain spaces after the commas, you should not be removing the commas: with multi-digit numbers you would no longer be able to tell where one number ends and the next begins. Instead just append the line:
sb.append(line);
And then you can do:
// assumes: import static java.util.stream.Collectors.toList;
List<Integer> list = Arrays.stream(contents.split("[,\\s]+")) // commas and the appended newlines both act as separators
        .map(Integer::valueOf)
        .collect(toList());
This creates a stream from the array produced by split, maps each token to an Integer, and collects them into a List.
I'd definitely try to leverage Files.lines, available as of JDK 8:
List<Integer> result = new ArrayList<>();
try (Stream<String> stream = Files.lines(Paths.get(fileName))) {
    result = stream.map(s -> /* perform your mapping operation here */)
            .collect(Collectors.toList());
} catch (IOException e) { e.printStackTrace(); }
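For this particular file, where each line may hold several comma-separated numbers, the mapping step might look like this. This is only a sketch, keeping the fileName variable from above; it splits each line on commas before parsing:
List<Integer> result = new ArrayList<>();
try (Stream<String> stream = Files.lines(Paths.get(fileName))) {
    result = stream
            .flatMap(line -> Arrays.stream(line.split("\\s*,\\s*"))) // one token per number
            .filter(token -> !token.isEmpty())                       // skip blank tokens
            .map(Integer::valueOf)                                   // parse each token
            .collect(Collectors.toList());
} catch (IOException e) {
    e.printStackTrace();
}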
Further reading:
Introduction to Java 8
Streams
Java 8 Map, Filter, and Collect Examples
The map method documentation
I am new to Java. I just want to read each string from a file and print it to the console.
Code:
public static void main(String[] args) throws Exception {
    File file = new File("/Users/OntologyFile.txt");
    try {
        FileInputStream fstream = new FileInputStream(file);
        BufferedReader infile = new BufferedReader(new InputStreamReader(
                fstream));
        String data = new String();
        while ((data = infile.readLine()) != null) { // use if for reading just 1 line
            System.out.println("" + data);
        }
    } catch (IOException e) {
        // Error
    }
}
If file contains:
Add label abc to xyz
Add instance cdd to pqr
I want to read each word from the file and print it on a new line, e.g.
Add
label
abc
...
And afterwards, I want to extract the index of a specific string, for instance get the index of abc.
Can anyone please help me?
It sounds like you want to be able to do two things:
Print all words inside the file
Search the index of a specific word
In that case, I would suggest scanning all the lines, splitting on any whitespace character (space, tab, etc.) and storing the words in a collection so you can search it later on. Now the question is: can you have repeats, and if so, which index would you like to print? The first? The last? All of them?
Assuming words are unique, you can simply do:
public static void main(String[] args) throws Exception {
    File file = new File("/Users/OntologyFile.txt");
    ArrayList<String> words = new ArrayList<String>();
    try {
        FileInputStream fstream = new FileInputStream(file);
        BufferedReader infile = new BufferedReader(new InputStreamReader(
                fstream));
        String data = null;
        while ((data = infile.readLine()) != null) {
            for (String word : data.split("\\s+")) {
                words.add(word);
                System.out.println(word);
            }
        }
    } catch (IOException e) {
        // Error
    }
    // search for the index of abc:
    for (int i = 0; i < words.size(); i++) {
        if (words.get(i).equals("abc")) {
            System.out.println("abc index is " + i);
            break;
        }
    }
}
If you don't break, it'll print every index of abc (if words are not unique). You could of course optimize it more if the set of words is very large, but for a small amount of data, this should suffice.
Of course, if you know in advance which words' indices you'd like to print, you could forego the extra data structure (the ArrayList) and simply print them as you scan the file, unless you want the printing of the words and of the specific indices to be separated in the output.
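For illustration, a minimal sketch of that single-pass variant (assuming the same file path as above and that the word searched for is "abc"; the index here counts words from the start of the file, beginning at 0):
int index = 0;
try (BufferedReader infile = new BufferedReader(new FileReader("/Users/OntologyFile.txt"))) {
    String data;
    while ((data = infile.readLine()) != null) {
        for (String word : data.split("\\s+")) {
            System.out.println(word);                    // print every word on its own line
            if (word.equals("abc")) {
                System.out.println("abc index is " + index);
            }
            index++;
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}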
Split the String received on any whitespace with the regex \\s+ and print out the resulting tokens with a for loop.
public static void main(String[] args) { // Don't make main throw an exception
    File file = new File("/Users/OntologyFile.txt");
    try {
        FileInputStream fstream = new FileInputStream(file);
        BufferedReader infile = new BufferedReader(new InputStreamReader(fstream));
        String data;
        while ((data = infile.readLine()) != null) {
            String[] words = data.split("\\s+"); // Split on whitespace
            for (String word : words) { // Iterate through info
                System.out.println(word); // Print it
            }
        }
    } catch (IOException e) {
        // Probably best to actually have this on there
        System.err.println("Error found.");
        e.printStackTrace();
    }
}
Just add a for-each loop before printing the output:
while ((data = infile.readLine()) != null) { // use if for reading just 1 line
    for (String temp : data.split(" "))
        System.out.println(temp); // no need to concatenate the empty string.
}
This will automatically print the individual strings, obtained from each line read from the file, each on a new line.
And afterwards, I want to extract the index of a specific string, for instance get the index of abc.
I don't know which index you are actually talking about. But if you want the index within the individual line being read, then add a temporary variable count initialised to 0. Increment it for each word, and report it when temp equals "abc". Like,
int count = 0;
for (String temp : data.split(" ")) {
    count++;
    if ("abc".equals(temp))
        System.out.println("Index of abc is : " + count);
    System.out.println(temp);
}
Use the split() function available in the String class; you can manipulate the result according to your need. Or use the line's length to iterate through the complete line, and whenever you hit a non-alphabet character, take the substring() up to that point and write it to a new line. A rough sketch of that second idea follows below.
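Here is a rough sketch of that manual, character-by-character approach (my reading of the suggestion: cut a substring at every non-letter and print it on its own line; the sample line is hypothetical):
String data = "Add label abc to xyz";   // one line read from the file
int start = 0;
for (int i = 0; i <= data.length(); i++) {
    // cut at the end of the line or at any non-letter character
    if (i == data.length() || !Character.isLetter(data.charAt(i))) {
        if (i > start) {
            System.out.println(data.substring(start, i)); // write the word to a new line
        }
        start = i + 1;
    }
}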
List<String> words = new ArrayList<String>();
while ((data = infile.readLine()) != null) {
    for (String d : data.split(" ")) {
        System.out.println(d);
    }
    words.addAll(Arrays.asList(data.split(" ")));
}
// words List will hold all the words. Do words.indexOf("abc") to get the index
if (words.indexOf("abc") < 0) {
    System.out.println("word not present");
} else {
    System.out.println("word present at index " + words.indexOf("abc"));
}
I have a CSV file which contains rules and ruleversions. The CSV file looks like this:
CSV FILE:
#RULENAME, RULEVERSION
RULE,01-02-01
RULE,01-02-02
RULE,01-02-34
OTHER_RULE,01-02-04
THIRDRULE, 01-02-04
THIRDRULE, 01-02-04
As you can see, one rule can have one or more rule versions. What I need to do is read this CSV file and put the entries in an array. I am currently doing that with the following code:
private static List<String[]> getRulesFromFile() {
    String csvFile = "rulesets.csv";
    BufferedReader br = null;
    String line = "";
    String delimiter = ",";
    List<String[]> input = new ArrayList<String[]>();
    try {
        br = new BufferedReader(new FileReader(csvFile));
        while ((line = br.readLine()) != null) {
            if (!line.startsWith("#")) {
                String[] rulesetEntry = line.split(delimiter);
                input.add(rulesetEntry);
            }
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        if (br != null) {
            try {
                br.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
    return input;
}
But I need to adapt the script so that it saves the information in the following format:
ARRAY (
=> RULE => 01-02-01, 01-02-02, 01-02-04
=> OTHER_RULE => 01-02-34
=> THIRDRULE => 01-02-01, 01-02-02
)
What is the best way to do this? Multidimensional array? And how do I make sure it doesn't save the rulename more than once?
You should use a different data structure, for example a HashMap, like this:
HashMap<String, List<String>> myMap = new HashMap<>();
try {
    br = new BufferedReader(new FileReader(csvFile));
    while ((line = br.readLine()) != null) {
        if (!line.startsWith("#")) {
            String[] parts = line.split(delimiter);
            String key = parts[0];
            String value = parts[1];
            if (myMap.containsKey(key)) {
                myMap.get(key).add(value);
            } else {
                List<String> values = new ArrayList<String>();
                values.add(value);
                myMap.put(key, values);
            }
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
This should work!
See, using an ArrayList is not a good choice of data structure here. I would personally suggest you use a HashMap<String, List<String>> for this particular purpose. The rules will be your keys and the rule versions will be your values, stored as a list of strings. While traversing your original file, just check if the rule (key) is already present: if it is, add the value to the list of rule versions already stored, otherwise add a new key and add the value to it (a sketch of this follows after the snippet below). For instance, to strip duplicates from a list you could do something like this:
public List<String> removeDuplicates(List<String> myList) {
    Hashtable<String, String> hashtable = new Hashtable<String, String>();
    for (String s : myList) {
        hashtable.put(s, s);
    }
    return new ArrayList<String>(hashtable.values());
}
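And here is a minimal sketch of the rule-to-versions map described above (assuming the same csvFile and delimiter variables as in the question; computeIfAbsent creates the list the first time a rule is seen, so the rule name is stored only once):
Map<String, List<String>> rules = new LinkedHashMap<>();
try (BufferedReader br = new BufferedReader(new FileReader(csvFile))) {
    String line;
    while ((line = br.readLine()) != null) {
        if (!line.startsWith("#")) {
            String[] parts = line.split(delimiter);
            // trim() copes with entries like "THIRDRULE, 01-02-04"
            rules.computeIfAbsent(parts[0].trim(), k -> new ArrayList<>())
                 .add(parts[1].trim());
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
// rules now maps e.g. "RULE" -> [01-02-01, 01-02-02, 01-02-34]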
This is exactly what key-value pairs can be used for. Just take a look at the Map interface: it lets you map a unique key to a value that can itself hold several elements, which is perfect for your issue.
Code:
// This collection will take String type as a Key
// and prevent duplicates in its associated values
Map<String, HashSet<String>> map = new HashMap<String, HashSet<String>>();
// Check if the collection contains the Key you are about to enter
// !REPLACE! -> "rule" with the Key you want to enter into your collection
// !REPLACE! -> "whatever" with the Value you want to associate with the key
if (!map.containsKey("rule")) {
    map.put("rule", new HashSet<String>());
}
// add the value in every case, not only when the key already existed
map.get("rule").add("whatever");
Reference:
Set
Map
I have a text file, 792 lines long, with content like this:
der 17788648
und 14355959
die 10939606
Die 10480597
Now I want to check whether "Die" and "die" are equal when lower-cased. If two Strings are equal in lowercase, the word should be written to a new text file in lowercase and the values summed.
Expected output:
der 17788648
und 14355959
die 114420203
This is what I have so far:
try {
    BufferedReader bk = null;
    BufferedWriter bw = null;
    bk = new BufferedReader(new FileReader("outagain.txt"));
    bw = new BufferedWriter(new FileWriter("outagain5.txt"));
    List<String> list = new ArrayList<>();
    String s = "";
    while (s != null) {
        s = bk.readLine();
        list.add(s);
    }
    for (int k = 0; k < 793; k++) {
        String u = bk.readLine();
        if (list.contains(u.toLowerCase())) {
            //sum values?
        } else {
            bw.write(u + "\n");
        }
    }
    System.out.println(list.size());
} catch (Exception e) {
    System.out.println("Exception caught : " + e);
}
Instead of list.add(s);, use list.add(s.toLowerCase());. Right now your code compares lower-cased lines against list entries whose case was never changed.
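Applied to your read loop, that change might look like this (just a sketch; it also stops before the final null line is added to the list):
List<String> list = new ArrayList<>();
String s;
while ((s = bk.readLine()) != null) {   // stop before the null line is added
    list.add(s.toLowerCase());          // store lower-cased lines so contains() matches
}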
With Java 8, the best approach to standard problems like reading files, comparing, grouping and collecting is to use the Streams API, since it is much more concise that way. At least when the file is only a few KB, there will be no problems with that approach.
Something like:
Map<String, Integer> nameSumMap = Files.lines(Paths.get("test.txt"))
        .map(x -> x.split(" "))
        .collect(Collectors.groupingBy(x -> x[0].toLowerCase(),
                Collectors.summingInt(x -> Integer.parseInt(x[1]))
        ));
First, you can read the file with Files.lines(), which returns a Stream<String>; then you can split the strings into a Stream<String[]>; finally you can use the groupingBy() and summingInt() collectors to group by the first element of each array and sum over the second one.
If you don't want to use the Stream API, you can also create a HashMap and do your summing manually in the loop.
Use a HashMap to keep track of the unique fields. Before you do a put, do a get to see if the value is already there. If it is, sum the old value with the new one and put it in again (this replaces the old entry having the same key).
package com.foundations.framework.concurrency;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Iterator;

public class FileSummarizer {
    public static void main(String[] args) {
        HashMap<String, Long> rows = new HashMap<String, Long>();
        String line = "";
        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader("data.txt"));
            while ((line = reader.readLine()) != null) {
                String[] tokens = line.split(" ");
                String key = tokens[0].toLowerCase();
                Long current = Long.parseLong(tokens[1]);
                Long previous = rows.get(key);
                if (previous != null) {
                    current += previous;
                }
                rows.put(key, current);
            }
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        finally {
            try {
                if (reader != null) {
                    reader.close();
                }
                Iterator<String> iterator = rows.keySet().iterator();
                while (iterator.hasNext()) {
                    String key = iterator.next().toString();
                    String value = rows.get(key).toString();
                    System.out.println(key + " " + value);
                }
            }
            catch (IOException e) {
                e.printStackTrace();
            }
        }
    }
}
The String class has an equalsIgnoreCase method which you can use to compare two strings irrespective of case, so:
String var1 = "Die";
String var2 = "die";
System.out.println(var1.equalsIgnoreCase(var2));
would print true.
If I got your question right, you want to know how to get the prefix from each line, compare it, get the value behind it, and sum the values for each prefix. Is that about right? You could use regular expressions to get the prefixes and values separately. Then you can sum up all values with the same prefix and write one line per prefix to the file.
If you are not familiar with regular expressions, this links could help you:
Regex on tutorialpoint.com
Regex on vogella.com
For additional tutorials, just search Google for "java regex" or similar terms.
If you do not want to distinguish between upper- and lowercase strings, just convert them all to lower/upper case before comparing them, as @spork explained already.
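For illustration only, here is a small sketch of that regex idea (assuming each line is a word, whitespace, then a number, as in the sample; totals are grouped by the lower-cased word, and the final printing could just as well write to "outagain5.txt" instead):
Pattern p = Pattern.compile("^(\\S+)\\s+(\\d+)$");   // group 1 = word, group 2 = value
Map<String, Long> sums = new LinkedHashMap<>();
try (BufferedReader bk = new BufferedReader(new FileReader("outagain.txt"))) {
    String line;
    while ((line = bk.readLine()) != null) {
        Matcher m = p.matcher(line.trim());
        if (m.matches()) {
            // add this line's value to the running total for the lower-cased word
            sums.merge(m.group(1).toLowerCase(), Long.parseLong(m.group(2)), Long::sum);
        }
    }
} catch (IOException e) {
    e.printStackTrace();
}
sums.forEach((word, total) -> System.out.println(word + " " + total));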