How to output sorted files in Java

I have a problem where I want to scan the files that are in a certain folder and output them.
The only problem is that the output is (1.jpg, 10.jpg, 11.jpg, 12.jpg, ..., 19.jpg, 2.jpg) when I want it to be (1.jpg, 2.jpg, and so on). Since I use File actual = new File(...) with i (the number of times the loop repeats) in the path to scan for images, I don't know how to sort the output.
This is my code for now.
//variables
String htmlHeader = ("<!DOCTYPE html>:\n"
        + "<html lang=\"en\">\n"
        + "<head>\n"
        + "<meta charset=\"UTF-8\">\n"
        + "<meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge\">\n"
        + "<meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\">\n"
        + "<title>Document</title>\n"
        + "</head>"
        + "<body>;\n");
String mangaName = ("THREE DAYS OF HAPPINESS");
String htmlEnd = ("</body>\n</html>");
String image = ("image-");
//ask for chapter number
Scanner scan = new Scanner(System.in);
System.out.print("enter a chapter number: ");
int n = scan.nextInt();
//create file for chapter
File creator = new File("manga.html");
//loop over chapters
for (int i = 1; i <= n; ++i) {
    //writing to HTML file
    BufferedWriter bw = new BufferedWriter(new FileWriter("manga" + i + ".html"));
    bw.write(htmlHeader);
    bw.write("<h2><center>" + mangaName + "</center></h2></br>");
    //scanning files
    File actual = new File("Three Days Of Happiness Chapter " + i + " - Manganelo_files.");
    for (File f : actual.listFiles()) {
        String pageName = f.getName();
        //create list
        List<String> list = Arrays.asList(pageName);
        list.sort(Comparator.nullsFirst(Comparator.comparing(String::length).thenComparing(Comparator.naturalOrder())));
        System.out.println("list");
        //writing body to html file
        bw.write("<p><center><img src=\"Three Days Of Happiness Chapter " + i + " - Manganelo_files/" + pageName + "\" <br/></p>\n");
        System.out.println(pageName);
    }
    bw.write(htmlEnd);
    bw.close();
    System.out.println("Process Finished");
}

When you try to sort the names, you'll most certainly notice that they are sorted alphanumerically (e.g. comparing 9 with 12: 12 would come before 9 because the leftmost digit 1 < 9).
One way to get around this is to use an extended numbering format when naming & storing your files.
This has been working great for me when sorting pictures, for example. I use YYYY-MM-DD for all dates, regardless of whether the day has one digit (e.g. 9) or two digits (e.g. 11). This means that I always type 9 as 09. It also means that every file name in a given folder has the same length, and each digit (when compared to the corresponding digit of any other file) is compared properly.
One solution to your problem is to do the same and add zeros to the left of the file names so that they are easily sorted both by the OS and by your Java program. The drawback to this solution is that you'll need to decide beforehand the maximum number of files you'll want to store in a given folder, by setting the number of digits accordingly (e.g. 3 digits would mean a maximum of 1000 uniquely and linearly numbered file names, from 000 to 999). The plus, however, is that this saves you the hassle of having to sort unevenly numbered files, and your files are pre-sorted once and ready to be read quickly whenever needed.
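For instance, a minimal sketch (not from the original answer) of renaming existing files to zero-padded names; folderPath and the three-digit width are assumptions:
String folderPath = "..."; // your image folder
File folder = new File(folderPath);
for (File f : folder.listFiles()) {
    String name = f.getName();                 // e.g. "9.jpg"
    int dot = name.lastIndexOf('.');
    if (dot <= 0) continue;                    // skip names without an extension
    String base = name.substring(0, dot);
    if (base.matches("\\d+")) {                // only touch purely numeric names
        String padded = String.format("%03d", Integer.parseInt(base)) + name.substring(dot);
        f.renameTo(new File(folder, padded));  // "9.jpg" -> "009.jpg"
    }
}
After this, a plain alphabetical listing (or a simple Collections.sort on the names) returns the files in numeric order.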

Generally, file systems do not impose an order on the files in a directory. Instead, anything that lists files (be it an ls or dir command on a command line, calling Files.list in Java code, or opening Finder or Explorer) will apply a sorting order.
One common sorting order is 'alphanumeric', in which case the order you describe is correct: 2 comes after 1 and also after 10. You can't wave a magic wand and tell the OS or file system driver not to do that; files as a rule don't have an 'ordering' property.
Instead, make your filenames such that they do sort the way you want, when sorting alphanumerically. Thus, the right name for the first file would be 01.jpg. Or possibly even 0001.jpg - you're going to have to make a call about how many digits you're going to use before you start, unfortunately.
String.format("%05d", 1) becomes "00001" - that's pretty useful here.
The same principle applies to reading files - you can't just rely on the OS sorting them for you. Instead, read the names into e.g. a list of some sort and then sort that. You're going to have to write a fairly funky sorting order: find the dot, take the part to the left of it, check whether it is a number, etc. Quite complicated. It would be a lot simpler if the 'input' is already properly zero-prefixed; then you can just sort naturally instead of having to write a complex comparator.
That comparator would have to be modal: it needs to handle both numeric and non-numeric names. Comparators work by being handed 2 elements, and you must say which one is 'earlier', and you must be consistent (if a is before b, and later I ask you: so, how about b and a? - you must indicate that b is after a).
Thus, an algorithm would look something like this (a sketch in code follows the list):
Determine if a is numeric or not (find the dot, parseInt the substring from start to the dot).
Determine if b is numeric or not.
If both are numeric, check ordering of these numbers. If they have an order (i.e. aren't identical), return an answer. Otherwise, compare the stuff after the dot (1.jpg should presumably be sorted before 1.png).
If neither are numeric, just compare alphanum (aName.compareTo(bName)).
If one is numeric and the other one is not, the numeric one always wins, and vice versa.
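Here is a minimal sketch of such a comparator; it is not from the original answer, and the numericBase helper plus the choice to treat malformed numbers as non-numeric are assumptions:
Comparator<String> byNumericName = (a, b) -> {
    Integer numA = numericBase(a);
    Integer numB = numericBase(b);
    if (numA != null && numB != null) {
        int cmp = Integer.compare(numA, numB);
        return cmp != 0 ? cmp : a.compareTo(b); // same number: fall back to the extension ("1.jpg" before "1.png")
    }
    if (numA != null) return -1;                // numeric names sort before non-numeric ones
    if (numB != null) return 1;
    return a.compareTo(b);                      // neither is numeric: plain alphanumeric comparison
};

// Hypothetical helper: the number before the first dot, or null if there isn't one.
static Integer numericBase(String name) {
    int dot = name.indexOf('.');
    if (dot <= 0) return null;
    try {
        return Integer.valueOf(name.substring(0, dot));
    } catch (NumberFormatException e) {
        return null;
    }
}
Collect the names with listFiles(), put them in a List, and call list.sort(byNumericName).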

Related

Convert in reverse ascii to whole decimal in Java

Hi all and thank you for the help in advance.
I have scoured the web and have not really turned up anything concrete on my initial question.
I have a program I am developing in Java whose primary purpose is to read a .DAT file, extract certain values from it, and then calculate an output based on the extracted values, which it then writes back to the file.
The file is made up of records that are all the same length and format, so it should be fairly straightforward to access. Currently I am using a loop and an if statement to find the first occurrence of a record, and then, through user input, determine the length of each record so I can loop through the records.
HOWEVER! The first record of this file is blank (or so I thought). As it turns out, this first record is the key to the rest of the file, in that the first few chars are ASCII and reference the record length and the number of records contained within the file, respectively.
Below is a list of the ASCII values themselves as found in the files (disregard the quotes; the ASCII is contained within them):
"#¼ ä "
"#g â "
"ÇG # "
"lj ‰ "
"Çò È "
"=¼ "
A friend of mine who used to code in BASIC many years ago reckons the first 3 chars refer to the record length and the following 9 refer to the number of records.
Basically, what I need to do is convert this initial string of ASCII chars into two decimal numbers in order to work out the length of each record and the number of records.
Any assistance will be greatly appreciated.
Edit...
Please find below the BASIC code used to access the file in the past; perhaps this will help?
CLS
INPUT "Survey System Data File? : ", survey$
survey$ = "f:\apps\survey\" + survey$
reclen = 3004
OPEN survey$ + ".dat" FOR RANDOM AS 1 LEN = reclen
FIELD #1, 3 AS RL$, 9 AS n$
GET #1, 1
RL = CVI(RL$): n = CVI(n$)
PRINT "Record Length = "; RL
reclen = RL
PRINT "Number of Records = "; n
CLOSE #1
Basically, what I am looking for is something similar, but in Java.
ASCII is a special way to translate a bit pattern in a byte to a character, and that gives each character a numerical value; for the letter 'A' this is 65.
In Java, you can get that numerical value by converting the char to an int (strictly speaking, this gives you the Unicode value, but for ASCII characters the Unicode value is the same as the ASCII value, so it does not matter).
But now you need to know how the length is calculated: do you have to add the values? Or multiply them? Or append them? Or multiply them with 128^p where p is the position, and add the result? And, in the latter case, is the first byte on position 0 or position 3?
Same for the number of records, of course.
Another possible interpretation of the data is that the bytes are BCD-encoded numbers. In that case, each nibble (4-bit group) represents a number from 0 to 9, and you have to do some bit manipulation to extract the numbers and concatenate them, from left (highest) to right (lowest). At least you do not have to struggle with the byte order and further interpretation here …
But as BCD would require 8 bits, this would not be the right interpretation if the file really contains ASCII, since ASCII is 7-bit.
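Since the old BASIC code decodes both fields with CVI (which reinterprets the first two bytes of a fixed-width field as a little-endian 16-bit integer), one interpretation worth trying is to read the first record's raw bytes in Java and decode them the same way. A rough sketch, purely as an assumption about the format (the file name and the 3/9 field widths are taken from the BASIC snippet):
try (RandomAccessFile file = new RandomAccessFile("survey.dat", "r")) {
    byte[] rl = new byte[3];
    byte[] n = new byte[9];
    file.readFully(rl); // first field: record length
    file.readFully(n);  // second field: number of records
    // CVI in BASIC builds a 16-bit integer from the first two bytes, least significant byte first.
    int recordLength = (rl[0] & 0xFF) | ((rl[1] & 0xFF) << 8);
    int numRecords = (n[0] & 0xFF) | ((n[1] & 0xFF) << 8);
    System.out.println("Record Length = " + recordLength);
    System.out.println("Number of Records = " + numRecords);
} catch (IOException e) {
    e.printStackTrace();
}
If the printed values don't make sense, the fields may use one of the other encodings discussed above (summed values, positional base-128, or BCD).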

In Java, how can I create a text file that contains one string array and two int arrays separated by commas?

In my Java game, I would like to be able to display the user's name, win score and lose score when it's game over.
For example:
Even after the game has been exited and then recompiled and run again, the info from the text file would be read into the program and added to the arrays. At the end of that game, the list will grow longer and then the text file will have the updated info for the next game.
Thanks so much in advance
Assuming the arrays are the same length and each index is associated with the same player, just use one for loop and write the item at that index from each of the three arrays to a file, separated by commas. Then you can read the data back from the file the same way you wrote it.
// a = names, b = win scores, c = lose scores (parallel arrays of equal length)
try (PrintWriter writer = new PrintWriter("the-file-name.txt", "UTF-8")) {
    writer.println("NAME W L");
    writer.println("--------");
    for (int i = 0; i < a.length; i++) {
        writer.println(a[i] + ", " + b[i] + ", " + c[i]);
    }
}
Then, before each game, just read the text file back into the arrays: split each line on the commas, trim the whitespace, and write each remaining string back into the right array (make sure you parse to the right types if you want ints). Also, you should probably use ArrayLists instead of arrays if they are going to grow and you don't know the size they will grow to.
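A rough sketch of that read-back step (the file name matches the snippet above; the list names are placeholders, and ArrayLists are used because the data grows):
List<String> names = new ArrayList<>();
List<Integer> wins = new ArrayList<>();
List<Integer> losses = new ArrayList<>();
try (BufferedReader reader = new BufferedReader(new FileReader("the-file-name.txt"))) {
    reader.readLine(); // skip the "NAME W L" header
    reader.readLine(); // skip the "--------" line
    String line;
    while ((line = reader.readLine()) != null) {
        String[] parts = line.split(",");
        names.add(parts[0].trim());
        wins.add(Integer.parseInt(parts[1].trim()));
        losses.add(Integer.parseInt(parts[2].trim()));
    }
} catch (IOException e) {
    e.printStackTrace();
}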
There are many ways to read and write to files but this is probably the general pattern you want to follow for your needs.

Writing/reading array to file

I have an app that will create 3 arrays: 2 with double values and one with strings that can contain anything - alphanumerics, commas, periods, anything the user might want to type or type by accident. The double arrays are easy. The string one I find to be tricky.
It can contain stuff like "cake red,blue 1kg paper-clip", you get the idea.
I will need to store those arrays somehow (I guess a file is the easiest way), read them, and get them back into the app whenever the user wants to.
Also, it would be nice if they weren't human readable, so they can only be read through my app.
What's the best way to do this? My issue is how to read them back into arrays. It's easy to write to a file, but then I need to get the data back into the same array I put it in... How can I separate array elements so that one element is never split in two because it contains a space or anything else?
Can I make 3 rows of text, each element separated by a tab \t or something, so that when I read it back each element is split on that tab? Could this create any issues when reading?
I guess I want to know how I can separate the elements of the array so that they can never be read back wrong.
Thanks and have a nice day!
If you don't want the file to be human readable, you could use java.io.RandomAccessFile.
You would probably want to specify a maximum string size if you did this.
To save a string:
String str = "hello";
RandomAccessFile file = new RandomAccessFile(new File("filename"), "rw");
final int MAX_STRING_BYTES = 100; // max number of bytes the string may use in the file
long start = file.getFilePointer();
file.writeUTF(str);                  // writes a 2-byte length followed by the string's bytes
file.seek(start + MAX_STRING_BYTES); // jump to the start of the next fixed-size slot
// then write another..
To read a string:
// with the RandomAccessFile opened as above
final int STRING_POSITION = 100; // or wherever you saved it
file.seek(STRING_POSITION);
String str = file.readUTF(); // reads back the string written with writeUTF
You would probably want to use the beginning of the file to store the size of each array. Then just store all the values one by one in the file; there is no need for separators.
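For example, a minimal sketch of that layout (the file name and sample data are placeholders; the fixed 100-byte slot per string follows the assumption above):
double[] prices = { 1.5, 2.0 };                  // placeholder data
String[] items = { "cake red,blue", "1kg paper-clip" };
final int MAX_STRING_BYTES = 100;                // fixed slot size per string, as above

try (RandomAccessFile out = new RandomAccessFile("data.bin", "rw")) {
    out.writeInt(prices.length);                 // header: number of doubles
    out.writeInt(items.length);                  // header: number of strings
    for (double d : prices) {
        out.writeDouble(d);                      // fixed 8 bytes each
    }
    for (String s : items) {
        long start = out.getFilePointer();
        out.writeUTF(s);                         // length-prefixed string; commas and spaces are fine
        out.seek(start + MAX_STRING_BYTES);      // jump to the next fixed-size slot
    }
} catch (IOException e) {
    e.printStackTrace();
}
Reading back mirrors this: readInt the two sizes, then readDouble that many times, then seek to each string slot and readUTF.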

Construct document-term matrix via Java and MapReduce

Background:
I’m trying to make a “document-term” matrix in Java on Hadoop using MapReduce. A document-term matrix is like a huge table where each row represents a document and each column represents a possible word/term.
Problem Statement:
Assuming that I already have a term index list (so that I know which term is associated with which column number), what is the best way to look up the index for each term in each document so that I can build the matrix row-by-row (i.e., document-by-document)?
So far I can think of two approaches:
Approach #1:
Store the term index list on the Hadoop distributed file system. Each time a mapper reads a new document for indexing, spawn a new MapReduce job -- one job for each unique word in the document, where each job queries the distributed terms list for its term. This approach sounds like overkill, since I am guessing there is some overhead associated with starting up a new job, and since this approach might call for tens of millions of jobs. Also, I’m not sure if it’s possible to call a MapReduce job within another MapReduce job.
Approach #2:
Append the term index list to each document so that each mapper ends up with a local copy of the term index list. This approach is pretty wasteful with storage (there will be as many copies of the term index list as there are documents). Also, I’m not sure how to merge the term index list with each document -- would I merge them in a mapper or in a reducer?
Question Update 1
Input File Format:
The input file will be a CSV (comma separated value) file containing all of the documents (product reviews). There is no column header in the file, but the values for each review appear in the following order: product_id, review_id, review, stars. Below is a fake example:
“Product A”, “1”,“Product A is very, very expensive.”,”2”
“Product G”, ”2”, “Awesome product!!”, “5”
Term Index File Format:
Each line in the term index file consists of the following: an index number, a tab, and then a word. Each possible word is listed only once in the index file, such that the term index file is analogous to what could be a list of primary keys (the words) for an SQL table. For each word in a particular document, my tentative plan is to iterate through each line of the term index file until I find the word. The column number for that word is then defined as the column/term index associated with that word. Below is an example of the term index file, which was constructed using the two example product reviews mentioned earlier.
1 awesome
2 product
3 a
4 is
5 very
6 expensive
Output File Format:
I would like the output to be in the “Matrix Market” (MM) format, which is the industry standard for compressing matrices with many zeros. This is the ideal format because most reviews will contain only a small proportion of all possible words, so for a particular document it is only necessary to specify the non-zero columns.
The first row in the MM format has three tab separated values: the total number of documents, the total number of word columns, and the total number of lines in the MM file excluding the header. After the header, each additional row contains the matrix coordinates associated with a particular entry, and the value of the entry, in this order: reviewID, wordColumnID, entry (how many times this word appears in the review). For more details on the Matrix Market format, see this link: http://math.nist.gov/MatrixMarket/formats.html.
Each review’s ID will equal its row index in the document-term matrix. This way I can preserve the review’s ID in the Matrix Market format so that I can still associate each review with its star rating. My ultimate goal -- which is beyond the scope of this question -- is to build a natural language processing algorithm to predict the number of stars in a new review based on its text.
Using the example above, the final output file would look like this (I can't get Stackoverflow to show tabs instead of spaces):
2 6 7
1 2 1
1 3 1
1 4 1
1 5 2
1 6 1
2 1 1
2 2 1
Well, you can use something analogous to an inverted index concept.
I'm suggesting this because I'm assuming both files are big. Hence, comparing them against each other one-to-one would be a real performance bottleneck.
Here's a way that can be used -
You can feed both the Input File Format csv file(s) (say, datafile1, datafile2) and the term index file (say, term_index_file) as input to your job.
Then in each mapper, you filter the source file name, something like this -
Pseudo code for mapper -
map(key, row, context) {
    String filename = ((FileSplit) context.getInputSplit()).getPath().getName();
    if (filename.startsWith("datafile")) {
        //split the review_id and words from the row
        ....
        context.write(new Text(word), new Text("-1" + "|" + review_id));
    } else if (filename.startsWith("term_index_file")) {
        //split index and word
        ....
        context.write(new Text(word), new Text(index + "|" + "0"));
    }
}
e.g. output from different mappers
Key Value source
product -1|1 datafile
very 5|0 term_index_file
very -1|1 datafile
product -1|2 datafile
very -1|1 datafile
product 2|0 term_index_file
...
...
Explanation (the example):
As it clearly shows, the key will be your word and the value will be made of two parts separated by a delimiter "|".
If the source is a datafile, then you emit key=product and value=-1|1, where -1 is a dummy element and 1 is a review_id.
If the source is a term_index_file, then you emit key=product and value=2|0, where 2 is the index of the word 'product' and 0 is a dummy review_id, which we will use for sorting - explained later.
Definitely, no duplicate index will be processed by two different mappers if we are providing the term_index_file as a normal input file to the job.
So, 'product', 'very', or any other indexed word in the term_index_file will only be available to one mapper. Note this is only valid for the term_index_file, not the datafile.
Next step:
The Hadoop MapReduce framework, as you may well know, groups values by key.
So, you will have something like this going to different reducers,
reduce-1: key=product, value=<-1|1, -1|2, 2|0>
reduce-2: key=very, value=<5|0, -1|1, -1|1>
But we have a problem in the above case: we want the values sorted on the part after the '|', i.e. in reduce-1 -> <2|0, -1|1, -1|2> and in reduce-2 -> <5|0, -1|1, -1|1>.
To achieve that you can use a secondary sort, implemented with a sort comparator. Please google this; explaining it here would get really lengthy.
In reduce-1, since the values are sorted as above, when we begin iterating we get the '0' in the first iteration and, with it, index_id=2, which can then be used in the subsequent iterations. In the next two iterations we get review ids 1 and 2 consecutively, and we use a counter to keep track of repeated review ids. A repeated review id means the word appeared more than once in that review_id's row. We reset the counter only when we find a different review_id, and emit the previous review_id's details for the particular index_id, something like this -
previous_review_id + "\t" + index_id + "\t" + count
When the loop ends, we'll be left with a single previous_review_id, which we finally emit in the same fashion.
Pseudo code for reducer -
reduce(key, Iterable values, context) {
    String index_id = null;
    int count = 1;
    String previousReview_id = null;
    for (value : values) {
        String[] split = value.split("\\|");
        ....
        //when consecutive review_ids are the same, we increment count;
        //as soon as the review_id differs, we emit the previously seen
        //review_id, reset the counter, and remember the new review_id
        if (split[0].equals("-1") && split[1].equals(previousReview_id)) {
            count++;
        } else if (split[0].equals("-1") && !split[1].equals(previousReview_id)) {
            context.write(previousReview_id + "\t" + index_id + "\t" + count);
            previousReview_id = split[1]; //resetting with the new review_id
            count = 1;                    //resetting count for the new review_id
        } else {
            index_id = split[0];
        }
    }
    //the last previousReview_id will be left out,
    //so write it now, after the loop completes
    context.write(previousReview_id + "\t" + index_id + "\t" + count);
}
This job is done with multiple reducers in order to leverage Hadoop for what it is best known for - performance. As a result, the final output will be scattered, something like the following, deviating from your desired output.
1 4 1
2 1 1
1 5 2
1 2 1
1 3 1
1 6 1
2 2 1
But if you want everything to be sorted according to the review_id (as in your desired output), you can write one more job that does that for you, using a single reducer and the output of the previous job as input. At the same time it can calculate the header line 2 6 7 and put it at the front of the output.
This is just an approach (or an idea) that I think might help you. You will definitely want to modify it, plug in a better algorithm, and use it in whatever way benefits you.
You can also use Composite keys for better clarity than using a delimiter such as "|".
I am open to any clarification. Please ask if you think it might be useful to you.
Thank you!
You can load the term index list into the Hadoop distributed cache so that it is available to mappers and reducers. For instance, in Hadoop streaming, you can run your job as follows:
$ hadoop jar $HADOOP_INSTALL/contrib/streaming/hadoop-streaming-*.jar \
-input myInputDirs \
-output myOutputDir \
-mapper myMapper.py \
-reducer myReducer.py \
-file myMapper.py \
-file myReducer.py \
-file myTermIndexList.txt
Now in myMapper.py you can load the file myTermIndexList.txt and use it for your purpose. If you give a more detailed description of your input and desired output, I can give you more details.
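Since your job is in Java rather than streaming, here is a rough sketch of the equivalent with the Java MapReduce API; the class name, cache path, and the tab-separated "index<TAB>word" line format are assumptions based on the term index file described in the question:
// Driver: ship the term index file with the job ("terms" is the symlink name from the URI fragment).
Job job = Job.getInstance(new Configuration(), "doc-term-matrix");
job.addCacheFile(new URI("/user/me/myTermIndexList.txt#terms"));

// Mapper: load the cached file once in setup() and keep a word -> column-index map.
public class DocTermMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Map<String, Integer> termIndex = new HashMap<>();

    @Override
    protected void setup(Context context) throws IOException {
        try (BufferedReader reader = new BufferedReader(new FileReader("terms"))) {
            String line;
            while ((line = reader.readLine()) != null) {
                String[] parts = line.split("\t"); // "index<TAB>word"
                termIndex.put(parts[1], Integer.parseInt(parts[0]));
            }
        }
    }

    // map() can then look up termIndex.get(word) for every word in a review
    // and emit (review_id, column, count) pairs.
}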
Approach #1 is not good, but it is very common if you don't have much Hadoop experience. Starting jobs is very expensive. What you are going to want to do is have 2-3 jobs that feed each other to get the desired result. A common solution to similar problems is to have the mapper tokenize the input and output pairs, group them in the reducer while executing some kind of calculation, and then feed that into job 2. In the mapper in job 2 you invert the data in some way, and in the reducer you do some other calculation.
I would highly recommend learning more about Hadoop through a training course. Interestingly, Cloudera's dev course has a very similar problem to the one you are trying to address. Alternatively, or perhaps in addition to a course, I would look at "Data-Intensive Text Processing with MapReduce", specifically the sections on "Computing Relative Frequencies" and "Inverted Indexing for Text Retrieval":
http://lintool.github.io/MapReduceAlgorithms/MapReduce-book-final.pdf

External Sorting from files in Java

I am wondering how to write Java code for the following pseudocode:
foreach file F in file directory D
foreach int I in file F
sort all I from each file
Basically, this is part of the external sorting algorithm, so those files contain sorted lists of integers. I want to read the first integer from each file, sort them, and output to another file, then move on to the next integer from each file, until all the integers are fully sorted.
The problem is that as far as I understand for each file we need a reader, so if we have N files then does that mean we need N file readers?
======update=======
I am wondering, is it something that looks like this? Correct me if I missed anything, or if there is a better approach.
int numOfFiles = 10;
Scanner[] scanners = new Scanner[numOfFiles];
try {
    //open a reader for each of the files
    for (int i = 0; i < numOfFiles; i++) {
        scanners[i] = new Scanner(new BufferedReader(
                new FileReader("file" + i + ".txt")));
    }
} catch (FileNotFoundException fnfe) {
    //handle the missing file
}
The problem is that as far as I understand for each file we need a reader, so if we have N files then does that mean we need N file readers ?
Yes, that's right - unless you want to either go back over the data repeatedly, or read the whole of each file into memory. Either of those would let you get away with only one file open at a time - but that may well not suit what you want to do.
Operating systems usually only allow you to open a certain number of files at a time. If you're trying to do something like create a single sorted set of results from a very large number of files, you might want to consider operating on a few of them at a time, producing larger intermediate files. At its simplest, this would just sort two files at a time, e.g.
input1 + input2 => tmp-a1
input3 + input4 => tmp-a2
input5 + input6 => tmp-a3
input7 + input8 => tmp-a4
tmp-a1 + tmp-a2 => tmp-b1
tmp-a3 + tmp-a4 => tmp-b2
tmp-b1 + tmp-b2 => result
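A minimal sketch of one such two-way merge step (the file names are taken from the diagram above, and each input is assumed to contain sorted integers separated by whitespace):
try (Scanner in1 = new Scanner(new File("input1"));
     Scanner in2 = new Scanner(new File("input2"));
     PrintWriter out = new PrintWriter("tmp-a1")) {
    Integer a = in1.hasNextInt() ? in1.nextInt() : null;
    Integer b = in2.hasNextInt() ? in2.nextInt() : null;
    while (a != null || b != null) {
        if (b == null || (a != null && a <= b)) {
            out.println(a); // a is the smaller (or only) remaining value
            a = in1.hasNextInt() ? in1.nextInt() : null;
        } else {
            out.println(b);
            b = in2.hasNextInt() ? in2.nextInt() : null;
        }
    }
} catch (FileNotFoundException e) {
    e.printStackTrace();
}
Repeating this over the intermediate files, as in the table above, eventually produces the single sorted result while keeping only two inputs open at a time.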
Yes, we must have N file readers for reading N files.
Iterate over all the files in the directory, read the files one by one, and store their contents in a List. Then sort that list to get your output.
There's a method called polyphase merge sort that I recently learnt in my data structures class, where you traverse the files in the form of runs (a run is a sorted sequence). There are n sources and a destination.
The gist of this polyphase method is that no file (in the given set of files) is ever kept idle, which significantly reduces the number of iterations. It's done by taking a Fibonacci-like sequence of an order equal to the number of files. So in the case of 5 files, I'll take the sequence of order 5: [1, 1, 2, 4, 8], which represents the number of runs you're going to take out of each file and distribute; among the files corresponding to runs=1, one of them will be the destination.
In short:
Distribute a file into runs according to the Fibonacci sequence. [This assumes the entire dataset is in a single file; if that's not the case, you can always create in-situ runs, where you might want to add dummy runs to suit the sequence.]
Take the first n runs from every file into the buffer, sort them (insertion sort preferred), and dump them into ONE file. That ONE file is again selected by the Fibonacci sequence.
Repeat until you end up with a single file containing a single run.
This is the paper which neatly explains the polyphase concept. ftp://reports.stanford.edu/pub/cstr/reports/cs/tr/76/543/CS-TR-76-543.pdf
http://en.wikipedia.org/wiki/Polyphase_merge_sort explains the algorithm better.
Just presenting code, not answering "need N file readers ?" :)
use org.apache.commons.io:
//get line iterators:
Collection<File> files = FileUtils.listFiles(/* TODO : filter conf */);
List<LineIterator> iters = new ArrayList<LineIterator>();
for (File file : files) {
    iters.add(FileUtils.lineIterator(file, "UTF-8"));
}
//collect a line from each file
List<String> numbers = new ArrayList<String>();
for (LineIterator li : iters) {
    numbers.add(li.nextLine());
}
//sort
//Collections.sort(numbers /* will fail: it compares the lines as strings, not as numbers */); // :)
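If the lines really are integers (an assumption about the file contents), one way to make that last step meaningful is to parse before sorting:
//parse each collected line to an int so the sort is numeric rather than lexicographic
List<Integer> values = new ArrayList<Integer>();
for (String s : numbers) {
    values.add(Integer.parseInt(s.trim()));
}
Collections.sort(values);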
Yes, you need N File readers.
public void workOnFiles() {
    File[] D = new File("directoryName").listFiles(); // D.length should equal N
    for (File F : D) {
        doSortingForEachFile(F); // do the sorting part here; the same reader cannot open the same file again
    }
}

public void doSortingForEachFile(File f) {
    try {
        ArrayList<Integer> list = new ArrayList<Integer>();
        Scanner s = new Scanner(f);
        while (s.hasNextInt()) { // read the ints inside the file
            list.add(s.nextInt());
        }
        s.close(); // once closed, it cannot be opened again
        Collections.sort(list); // this method sorts the ArrayList of ints
        //...write the numbers inside list to another file...
    } catch (Exception e) {
        //handle (or at least log) the exception instead of swallowing it silently
    }
}
