Sort a 2D array by number of hits - Java

I have two ArrayLists:
Array 01:
ArrayList<String> uniqueFiletypes --> which contains unique filetypes (e.g. .zip)
Array 02:
ArrayList<Integer> countFiletypes --> which counts how many of each filetype there is, for example 8 .zips
And to skip right to the question:
I need to make some kind of "ranking", which means the highest count of filetypes gets the first place, etc...
Another problem: It must be an Object[][] (to support JTable), so it is possible to show the result easily.
Example of output: I have 8 .zips, 5 .java and 2 .docx
Object[][] = {{"1", ".zip", "8"},{"2", ".java", "5"}, {"3", ".docx", "2"}}
Where {PLACE, FILETYPE, COUNT}

I'm assuming the order of the items in both lists matches. I.e. the first item in the uniqueFiletypes list has the number of hits equal to the first number in the countFiletypes list.
I would do the following:
Loop through the lists, adding the entries to a Map.
Sort the countFiletypes list in descending order.
Pull the file types from the map, adding them in the order they now appear in the sorted list.
Something like the following might do the trick:
public static void main(final String[] args) {
    final ArrayList<String> uniqueFileTypes = new ArrayList<String>();
    uniqueFileTypes.add(".java");
    uniqueFileTypes.add(".zip");
    uniqueFileTypes.add(".docx");

    final ArrayList<Integer> countFileTypes = new ArrayList<Integer>();
    countFileTypes.add(5);
    countFileTypes.add(8);
    countFileTypes.add(2);

    // Map each count to its file type (assumes the counts are unique; see the note below).
    final Map<Integer, String> countedFileTypes = new HashMap<Integer, String>();
    for (int i = 0; i < uniqueFileTypes.size(); i++) {
        countedFileTypes.put(countFileTypes.get(i), uniqueFileTypes.get(i));
    }

    // Sort the counts in descending order.
    Collections.sort(countFileTypes);
    Collections.reverse(countFileTypes);

    final Object[][] data = new Object[countedFileTypes.size()][3];
    for (int i = 0; i < countedFileTypes.size(); i++) {
        final Integer count = countFileTypes.get(i);
        data[i] = new Object[]{(i + 1), countedFileTypes.get(count), count};
        System.out.println("{" + (i + 1) + "," + countedFileTypes.get(count) + "," + count + "}");
    }
}
The main method and the system out aren't really needed, I just used them for testing my solution, which produced this output:
{1,.zip,8}
{2,.java,5}
{3,.docx,2}
Granted, this implies a link between the number of hits and the file type which may not be true. For example, if the docx and the java file format both have 9 hits, this solution wouldn't work.
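If ties are a real concern, a minimal tie-safe sketch (not from the original answer; the class name RankByCount is made up) keeps each filetype and its count together as one row and sorts the rows by count, so equal counts never collide as map keys:
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;

public class RankByCount {
    public static void main(String[] args) {
        ArrayList<String> uniqueFileTypes = new ArrayList<String>();
        ArrayList<Integer> countFileTypes = new ArrayList<Integer>();
        uniqueFileTypes.add(".java"); countFileTypes.add(5);
        uniqueFileTypes.add(".zip");  countFileTypes.add(8);
        uniqueFileTypes.add(".docx"); countFileTypes.add(2);

        // Keep each (filetype, count) pair together as one row.
        ArrayList<Object[]> rows = new ArrayList<Object[]>();
        for (int i = 0; i < uniqueFileTypes.size(); i++) {
            rows.add(new Object[] { uniqueFileTypes.get(i), countFileTypes.get(i) });
        }
        // Sort the rows by count, descending; ties are preserved, not overwritten.
        Collections.sort(rows, new Comparator<Object[]>() {
            @Override
            public int compare(Object[] a, Object[] b) {
                return ((Integer) b[1]).compareTo((Integer) a[1]);
            }
        });
        // Build the {PLACE, FILETYPE, COUNT} table.
        Object[][] data = new Object[rows.size()][3];
        for (int i = 0; i < rows.size(); i++) {
            data[i] = new Object[] { i + 1, rows.get(i)[0], rows.get(i)[1] };
        }
    }
}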

Is it possible for you to merge the two ArrayLists into a single HashMap<String, Integer>?
This map can hold the entries consisting of the unique filetype (String) and its count (Integer). I suggest this because you have a direct link between a filetype and its count - this "link" can be expressed by a HashMap entry.
The conversion of the HashMap to an Object[][] can be done this way:
for (Map.Entry<?,?> entry : map.entrySet()) {
model.addRow(new Object[] { entry.getKey(), entry.getValue() });
}
With the HashMap, sorting also gets easier, as you do not need to keep two independent array lists in sync.
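A rough sketch of that idea (assuming the two lists from the question and the usual java.util imports): merge the lists into the map, then sort the map's entries by value before building the table:
// Merge the two parallel lists into one map (assumes matching order).
Map<String, Integer> map = new HashMap<String, Integer>();
for (int i = 0; i < uniqueFiletypes.size(); i++) {
    map.put(uniqueFiletypes.get(i), countFiletypes.get(i));
}
// Sort the entries by count, highest first.
List<Map.Entry<String, Integer>> entries =
        new ArrayList<Map.Entry<String, Integer>>(map.entrySet());
Collections.sort(entries, new Comparator<Map.Entry<String, Integer>>() {
    @Override
    public int compare(Map.Entry<String, Integer> a, Map.Entry<String, Integer> b) {
        return b.getValue().compareTo(a.getValue());
    }
});
// Build the {PLACE, FILETYPE, COUNT} rows.
Object[][] data = new Object[entries.size()][3];
for (int i = 0; i < entries.size(); i++) {
    data[i] = new Object[] { i + 1, entries.get(i).getKey(), entries.get(i).getValue() };
}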

First approach: as you will be working with a JTable, use TableRowSorter as shown in the demo example on the tutorial page.
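For reference, a minimal sketch of that first approach (assuming a TableModel named model whose count column has index 2 and reports Integer as its column class; if the counts are stored as Strings the sort would be lexicographic):
JTable table = new JTable(model);
TableRowSorter<TableModel> sorter = new TableRowSorter<TableModel>(model);
table.setRowSorter(sorter);
// Sort by the count column (index 2), highest count first.
sorter.setSortKeys(Arrays.asList(new RowSorter.SortKey(2, SortOrder.DESCENDING)));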
Second approach: assuming you need it not only for the JTable:
Object[][] array = {{"1", ".zip", "8"},{"2", ".java", "5"}, {"3", ".docx", "2"}};
List<Object[]>data = Arrays.asList(array);
Comparator<Object[]>comparator = new Comparator<Object[]>() {
#Override
public int compare(Object[] o1, Object[] o2) {
return ((String)o1[2]).compareTo((String)o2[2]);
}
};
Collections.sort(data, Collections.reverseOrder(comparator));
array = (Object[][]) data.toArray();

Related

Replace strings populated in an ArrayList<String> with other values

I am currently working on a project where I need to check an arraylist for a certain string and if that condition is met, replace it with the new string.
I will only show the relevant code but basically what happened before is a long string is read in, split into groups of three, then those strings populate an array. I need to find and replace those values in the array, and then print them out. Here is the method that populates the arraylist:
private static ArrayList<String> splitText(String text)
{
    ArrayList<String> DNAsplit = new ArrayList<String>();
    for (int i = 0; i < text.length(); i += 3)
    {
        DNAsplit.add(text.substring(i, Math.min(i + 3, text.length())));
    }
    return DNAsplit;
}
How would I search this ArrayList for multiple strings (here's an example: aminoAcids = aminoAcids.replaceAll("TAT", "Y");) and then print the new values out?
Any help is greatly appreciated.
In Java 8
list.replaceAll(s-> s.replace("TAT", "Y"));
There is no such "replace all" method on a list. You need to apply the replacement element-wise; the only difference vs doing this on a single string is that you need to get the value out of the list, and set the new value back into the list:
ListIterator<String> it = DNAsplit.listIterator();
while (it.hasNext()) {
    // Get from the list.
    String current = it.next();
    // Apply the transformation.
    String newValue = current.replace("TAT", "Y");
    // Set back into the list.
    it.set(newValue);
}
And if you want to print the new values out:
System.out.println(DNAsplit);
Why don't you create a HashMap that holds the key-value pairs and use it at load time to populate this list, instead of revising it later?
Map<String,String> dnaMap = new HashMap<String,String>();
dnaMap.put("XXX", "X");
.
.
.
dnaMap.put("ZZZ", "Z");
And use it like below :
//Use the hash map to lookup the temp key
temp= text.substring(i, Math.min(i + 3, text.length()));
DNAsplit.add(dnaMap.get(temp));
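Putting the two fragments together, one possible combined version of the question's splitText method (the method name and the codon entries are just examples for illustration) could be:
private static ArrayList<String> splitAndTranslate(String text) {
    // Lookup table: 3-letter codon -> amino acid letter (example entries only).
    Map<String, String> dnaMap = new HashMap<String, String>();
    dnaMap.put("TAT", "Y");
    dnaMap.put("TGT", "C");

    ArrayList<String> result = new ArrayList<String>();
    for (int i = 0; i < text.length(); i += 3) {
        String codon = text.substring(i, Math.min(i + 3, text.length()));
        // Fall back to the raw codon if it has no mapping.
        String translated = dnaMap.get(codon);
        result.add(translated != null ? translated : codon);
    }
    return result;
}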

quick sort on multidimensional array

Edit: I want this method to sort in ascending order based on any column the user wants (the data in the same row stays 'attached' together). There are 4 columns in the table. If the user wants to sort based on the first column, he should do something like:
String[][] mySortedTable = ClassName.quickSort(myUnsortedTable, 0);
As you can see, I tried to code what my title says. For some reason our data table is organized as a 2D array of Strings (which is NOT handy; I had to convert back and forth to an ArrayList). I used the first row as the pivot. Mind debugging it together with me? :)
public static String[][] quickSort(String[][] data, int column) {
    //1st step, create ArrayLists needed and compare the data by the pivot to determine which ArrayList to be filled.
    ArrayList<String[]> hiData = new ArrayList<String[]>();
    ArrayList<String[]> loData = new ArrayList<String[]>();
    ArrayList<String[]> pivots = new ArrayList<String[]>();
    String[] pivot = {data[0][0], data[0][1], data[0][2], data[0][3]};
    for (String[] row : data) {
        if (row[column].compareTo(pivot[column]) < 0)
            loData.add(row);
        else if (row[column].compareTo(pivot[column]) > 0)
            hiData.add(row);
        else pivots.add(row);
    }
    //To decide whether is needed to create the array from the ArrayList for recursively sort the parted data.
    if (loData.size() > 0) {
        String[][] loDataArr = new String[loData.size()][4];
        for (int i = 0; i < loData.size(); i++)
            loDataArr[i] = loData.get(i);
        if (loData.size() > 1)
            loDataArr = quickSort(loDataArr, column);
    }
    if (hiData.size() > 0) {
        String[][] hiDataArr = new String[hiData.size()][4];
        for (int i = 0; i < hiData.size(); i++)
            hiDataArr[i] = hiData.get(i);
        if (hiData.size() > 1)
            hiDataArr = quickSort(hiDataArr, column);
    }
    //Combine parted data into new array and return it to feed the recursive back up until the first recursive call.
    String result[][] = new String[hiData.size() + loData.size() + pivots.size()][4];
    int j = 0;
    for (String[] row : loData) {
        result[j] = row;
        j++;
    }
    for (String[] row : pivots) {
        result[j] = row;
        j++;
    }
    for (String[] row : hiData) {
        result[j] = row;
        j++;
    }
    return result;
}
It outputs all the arrays, but the result is not sorted and is also not equal to the array it started with. As a side question: is using ArrayList<String[]> a code smell, or is it fine?
In Java, arrays are always one dimensional - although their element type may be an array itself. So a two dimensional array in Java is just an array of arrays.
This fact becomes very handy when you want to sort an array so that rows stick together. You treat each row as an object, and use a comparator to compare the object.
Here is a little demonstration:
import java.util.Arrays;
import java.util.Comparator;

public class SortDemonstration {
    public static void main(String[] args) {
        String[][] table = {
            {"The", "Quick", "Brown"},
            {"Fox", "Jumped", "Over"},
            {"A", "Lazy", "Dog"}
        };
        Arrays.sort(table, new ColumnComparator(0));
        System.out.println(Arrays.deepToString(table));
        Arrays.sort(table, new ColumnComparator(1));
        System.out.println(Arrays.deepToString(table));
        Arrays.sort(table, new ColumnComparator(2));
        System.out.println(Arrays.deepToString(table));
    }

    private static class ColumnComparator implements Comparator<String[]> {
        private final int index;

        public ColumnComparator(int index) {
            this.index = index;
        }

        @Override
        public int compare(String[] o1, String[] o2) {
            return o1[index].compareTo(o2[index]);
        }
    }
}
The ColumnComparator class is the key to the solution. It implements a comparator of two string arrays (two rows). It compares the rows based on the item at the index it was instantiated with.
Thus a new ColumnComparator(0) compares two rows based on the first column. A new ColumnComparator(1) compares two rows based on the second column, etc.
Now that you have this comparator class, you can sort an array of "rows" using it. The output from this program is:
[[A, Lazy, Dog], [Fox, Jumped, Over], [The, Quick, Brown]]
[[Fox, Jumped, Over], [A, Lazy, Dog], [The, Quick, Brown]]
[[The, Quick, Brown], [A, Lazy, Dog], [Fox, Jumped, Over]]
I found the problem in my code. I built the parted data from the unsorted ArrayLists instead of the sorted two-dimensional arrays.
//Second step...
String[][] loDataArr = new String[loData.size()][4];
String[][] hiDataArr = new String[hiData.size()][4];
if (loData.size() > 0) {
    for (int i = 0; i < loData.size(); i++)
        loDataArr[i] = loData.get(i);
    if (loData.size() > 1)
        loDataArr = quickSort(loDataArr, column);
}
if (hiData.size() > 0) {
    for (int i = 0; i < hiData.size(); i++)
        hiDataArr[i] = hiData.get(i);
    if (hiData.size() > 1)
        hiDataArr = quickSort(hiDataArr, column);
}
String result[][] = new String[hiData.size() + loData.size() + pivots.size()][4];
int j = 0;
for (String[] row : loDataArr) {
    result[j] = row;
    j++;
}
for (String[] row : pivots) {
    result[j] = row;
    j++;
}
for (String[] row : hiDataArr) {
    result[j] = row;
    j++;
}
return result;
}

Finding common between Array-list of Strings

I have an ArrayList "mArrayListvarinats" of Strings, where each String contains pipe-separated values like
225356175|225356176|225356177|225356178|225356179|225356180|225356181|225356182|225356183|225356184|225356185|225356186|225356187|225356188|225356189|225356190|225356191|225356192
The size of mArrayListvarinats may be 0 up to n. Now I want to find the common strings between those entries of mArrayListvarinats.
For example, if its size is two, the code might look as follows:
String temp[] = mArrayListvarinats.get(0).split("\\|");
String temp1[] = mArrayListvarinats.get(1).split("\\|");
and then a loop would work on both arrays to get the common ones. But how do I achieve this for any size, since those temp arrays would have to be generated inside a loop over mArrayListvarinats?
Something like this should work :
HashSet<String> allStrings = new HashSet<String>();
HashSet<String> repeatedStrings = new HashSet<String>();
for (String pipedStrings : mArrayListvarinats) {
    String temp[] = pipedStrings.split("\\|");
    for (String str : temp) {
        if (!allStrings.add(str)) {
            repeatedStrings.add(str);
        }
    }
}
This way, you will have the HashSet allStrings that contains all your unique strings, and the other HashSet, repeatedStrings, which contains all the strings that appear more than once.
Try this short version:
public static void main(String[] args) {
    List<String> a = new ArrayList<>(Arrays.asList("225356176|225356177|225356178".split("\\|")));
    List<String> b = new ArrayList<>(Arrays.asList("225356175|225356176|225356177".split("\\|")));
    a.retainAll(b);
    b.retainAll(a);
    System.out.println(b);
}
OUTPUT:
[225356176, 225356177]
Iterate through each of them and put them in a HashSet. If add returns false, the string is already there, and you can put it in a separate HashSet. This way you end up with one HashSet of all unique strings and another HashSet of the strings that occurred multiple times:
Set<String> set = new HashSet<String>();
Set<String> common = new HashSet<String>();
for (int i = 0; i < temp.length; i++) {
    set.add(temp[i]);
}
for (int i = 0; i < temp1.length; i++) {
    // add returns false when the string was already seen in temp
    if (!set.add(temp1[i])) {
        common.add(temp1[i]);
    }
}
The common set will contain the strings that occur in both arrays.
If you are only interested in getting the common strings found in the mArrayListvarinats variable, then you should use the Set data structure. A Set in Java will only contain unique entries. From the example strings it sounds like you are collecting numeric values, so there won't be any problem due to that. But if you are collecting alphanumeric values, then you need to take care of letter case, as a Set treats values as case sensitive: for a Set, "A" is not equal to "a".
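A small sketch of that Set-based idea using only the JDK (assuming mArrayListvarinats is non-empty) could be:
Set<String> common = new HashSet<String>(
        Arrays.asList(mArrayListvarinats.get(0).split("\\|")));
for (int i = 1; i < mArrayListvarinats.size(); i++) {
    Set<String> next = new HashSet<String>(
            Arrays.asList(mArrayListvarinats.get(i).split("\\|")));
    // Keep only the strings present in every entry seen so far.
    common.retainAll(next);
}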
This will find the common strings among all your lists.
List<String> common = Arrays.asList(mArrayListvarinats.get(0).split("\\|"));
for (String varinats : mArrayListvarinats) {
    List<String> items = Arrays.asList(varinats.split("\\|"));
    common = ListUtils.intersection(items, common);
}
But for this you have to use the Apache Commons Collections library, which I hope is not an issue for you :)
The code below will print the frequency of each string present in the list.
public static void main(String[] args) {
String mArrayListvarinats = "225356175,225356176,225356177,225356178,225356179,225356180,225356181,225356182,225356183,225356184,225356185,225356186,225356187,225356188,225356189,225356190,225356191,225356192,225356192";
List<String> list = Arrays.asList(mArrayListvarinats.split(","));
Set<String> uniqueWords = new HashSet<String>(list);
for (String word : uniqueWords) {
System.out.println(word + ": " + Collections.frequency(list, word));
}
}
Strings with a frequency of more than 1 are duplicates/common. You can take action based on the frequency.

PairWise matching millions of records

I have an algorithmic problem at hand. To easily explain the problem, I will be using a simple analogy.
I have an input file
Country,Exports
Austrailia,Sheep
US, Apple
Austrailia,Beef
End Goal:
I have to find the common products between pairs of countries, so:
{"Austrailia","New Zealand"}:{"apple","sheep"}
{"Austrailia","US"}:{"apple"}
{"New Zealand","US"}:{"apple","milk"}
Process :
I read in the input and store it in a TreeMap<String, List<String>>, where the Strings in the List are interned due to many duplicates.
Essentially, I am aggregating by country: the key is the country and the values are its exports.
{"austrailia":{"apple","sheep","koalas"}}
{"new zealand":{"apple","sheep","milk"}}
{"US":{"apple","beef","milk"}}
I have about 1200 keys (countries), and the total number of values (exports) is 80 million altogether.
I sort all the values of each key:
{"austrailia":{"apple","sheep","koalas"}} -- > {"austrailia":{"apple","koalas","sheep"}}
This is fast as there are only 1200 Lists to sort.
for (k1 : keys)
    for (k2 : keys)
        if (k1.compareTo(k2) < 0) { // Don't want to double compare
            List<String> intersectList = intersectList_func(k1's exports, k2's exports);
            countriespair.put({k1,k2}, intersectList);
        }
This code block takes very long. I realise it is O(n^2), with around 1200*1200 comparisons; it has been running for almost 3 hours now.
Is there any way I can speed it up or optimise it?
Is an algorithmic change the best option, or are there other technologies to consider?
Edit:
Since both Lists are sorted beforehand, intersectList is O(n), where n is min(listOne.length, listTwo.length), and NOT O(n^2) as discussed below.
private static List<String> intersectList(List<String> listOne, List<String> listTwo) {
    int i = 0, j = 0;
    List<String> listResult = new LinkedList<String>();
    while (i != listOne.size() && j != listTwo.size()) {
        int compareVal = listOne.get(i).compareTo(listTwo.get(j));
        if (compareVal == 0) {
            listResult.add(listOne.get(i));
            i++;
            j++;
        }
        else if (compareVal < 0) i++;
        else if (compareVal > 0) j++;
    }
    return listResult;
}
Update 22 Nov
My current implementation is still running for almost 18 hours. :|
Update 25 Nov
I ran the new implementation as suggested by Vikram and a few others; it has been running since Friday.
My question is: how does grouping by exports rather than by country save computational complexity? I find that the complexity is the same. As Groo mentioned, the complexity of the second part is O(E*C^2), where E is the number of exports and C the number of countries.
This can be done in one statement as a self-join using SQL:
Test data. First create a test data set:
Lines <- "Country,Exports
Austrailia,Sheep
Austrailia,Apple
New Zealand,Apple
New Zealand,Sheep
New Zealand,Milk
US,Apple
US,Milk
"
DF <- read.csv(text = Lines, as.is = TRUE)
sqldf. Now that we have DF, issue this command:
library(sqldf)
sqldf("select a.Country, b.Country, group_concat(Exports) Exports
from DF a, DF b using (Exports)
where a.Country < b.Country
group by a.Country, b.Country
")
giving this output:
Country Country Exports
1 Austrailia New Zealand Sheep,Apple
2 Austrailia US Apple
3 New Zealand US Apple,Milk
With index. If it's too slow, add an index to the Country column (and be sure not to forget the main. parts):
sqldf(c("create index idx on DF(Country)",
"select a.Country, b.Country, group_concat(Exports) Exports
from main.DF a join main.DF b using (Exports)
where a.Country < b.Country
group by a.Country, b.Country
"))
If you run out of memory, then add the dbname = tempfile() sqldf argument so that it uses disk.
Store something like the following data structure (the following is pseudocode):
ValuesSet = {
    apple = {"Austrailia", "New Zealand", ..}
    sheep = {"Austrailia", "New Zealand", ..}
}
for k in ValuesSet
    for k1 in k.values()
        for k2 in k.values()
            if (k1 < k2)
                Set(k1,k2).add(k)
Time complexity: O(number of distinct pairs with similar products)
Note: I might be wrong, but I do not think you can reduce this time complexity.
The following is a Java implementation for your problem:
public class PairMatching {

    HashMap Country;
    ArrayList CountNames;
    HashMap ProdtoIndex;
    ArrayList ProdtoCount;
    ArrayList ProdNames;
    ArrayList[][] Pairs;

    int products = 0;
    int countries = 0;

    public void readfile(String filename) {
        try {
            BufferedReader br = new BufferedReader(new FileReader(new File(filename)));
            String line;
            CountNames = new ArrayList();
            Country = new HashMap<String, Integer>();
            ProdtoIndex = new HashMap<String, Integer>();
            ProdtoCount = new ArrayList<ArrayList>();
            ProdNames = new ArrayList();
            products = countries = 0;
            while ((line = br.readLine()) != null) {
                String[] s = line.split(",");
                s[0] = s[0].trim();
                s[1] = s[1].trim();
                int k;
                if (!Country.containsKey(s[0])) {
                    CountNames.add(s[0]);
                    Country.put(s[0], countries);
                    k = countries;
                    countries++;
                } else {
                    k = (Integer) Country.get(s[0]);
                }
                if (!ProdtoIndex.containsKey(s[1])) {
                    ProdNames.add(s[1]);
                    ArrayList n = new ArrayList();
                    ProdtoIndex.put(s[1], products);
                    n.add(k);
                    ProdtoCount.add(n);
                    products++;
                } else {
                    int ind = (Integer) ProdtoIndex.get(s[1]);
                    ArrayList c = (ArrayList) ProdtoCount.get(ind);
                    c.add(k);
                }
            }
            System.out.println(CountNames);
            System.out.println(ProdtoCount);
            System.out.println(ProdNames);
        } catch (FileNotFoundException ex) {
            Logger.getLogger(PairMatching.class.getName()).log(Level.SEVERE, null, ex);
        } catch (IOException ex) {
            Logger.getLogger(PairMatching.class.getName()).log(Level.SEVERE, null, ex);
        }
    }

    void FindPairs() {
        Pairs = new ArrayList[countries][countries];
        for (int i = 0; i < ProdNames.size(); i++) {
            ArrayList curr = (ArrayList) ProdtoCount.get(i);
            for (int j = 0; j < curr.size(); j++) {
                for (int k = j + 1; k < curr.size(); k++) {
                    int u = (Integer) curr.get(j);
                    int v = (Integer) curr.get(k);
                    //System.out.println(u+","+v);
                    if (Pairs[u][v] == null) {
                        if (Pairs[v][u] != null)
                            Pairs[v][u].add(i);
                        else {
                            Pairs[u][v] = new ArrayList();
                            Pairs[u][v].add(i);
                        }
                    } else Pairs[u][v].add(i);
                }
            }
        }
        for (int i = 0; i < countries; i++) {
            for (int j = 0; j < countries; j++) {
                if (Pairs[i][j] == null)
                    continue;
                ArrayList a = Pairs[i][j];
                System.out.print("\n{" + CountNames.get(i) + "," + CountNames.get(j) + "} : ");
                for (int k = 0; k < a.size(); k++) {
                    System.out.print(ProdNames.get((Integer) a.get(k)) + " ");
                }
            }
        }
    }

    public static void main(String[] args) {
        PairMatching pm = new PairMatching();
        pm.readfile("Input data/BigData.txt");
        pm.FindPairs();
    }
}
[Update] The algorithm presented here shouldn't improve time complexity compared to the OP's original algorithm. Both algorithms have the same asymptotic complexity, and iterating through sorted lists (as OP does) should generally perform better than using a hash table.
You need to group the items by product, not by country, in order to be able to quickly fetch all countries belonging to a certain product.
This would be the pseudocode:
inputList contains a list of pairs {country, product}
// group by product
prepare mapA (product) => (list_of_countries)
for each {country, product} in inputList
{
if mapA does not contain (product)
create a new empty (list_of_countries)
and add it to mapA with (product) as key
add this (country) to the (list_of_countries)
}
// now group by country_pair
prepare mapB (country_pair) => (list_of_products)
for each {product, list_of_countries} in mapA
{
for each pair {countryA, countryB} in list_of_countries
{
if mapB does not contain country_pair {countryA, countryB}
create a new empty (list_of_products)
and add it to mapB with country_pair {countryA, countryB} as key
add this (product) to the (list_of_products)
}
}
If your input list is length N, and you have C distinct countries and P distinct products, then the running time of this algorithm should be O(N) for the first part and O(P*C^2) for the second part. Since your final list needs to have pairs of countries mapping to lists of products, I don't think you will be able to lose the P*C^2 complexity in any case.
I don't code in Java too much, so I added a C# example which I believe you'll be able to port pretty easily:
// mapA maps each product to a list of countries
var mapA = new Dictionary<string, List<string>>();
foreach (var t in inputList)
{
List<string> countries = null;
if (!mapA.TryGetValue(t.Product, out countries))
{
countries = new List<string>();
mapA[t.Product] = countries;
}
countries.Add(t.Country);
}
// note (this is very important):
// CountryPair tuple must have value-type comparison semantics,
// i.e. you need to ensure that two CountryPairs are compared
// by value to allow hashing (mapping) to work correctly, in O(1).
// In C# you can also simply use a Tuple<string,string> to
// represent a pair of countries (which implements this correctly),
// but I used a custom class to emphasize the algorithm
// mapB maps each CountryPair to a list of products
var mapB = new Dictionary<CountryPair, List<string>>();
foreach (var kvp in mapA)
{
var product = kvp.Key;
var countries = kvp.Value;
for (int i = 0; i < countries.Count; i++)
{
for (int j = i + 1; j < countries.Count; j++)
{
var pair = CountryPair.Create(countries[i], countries[j]);
List<string> productsForCountryPair = null;
if (!mapB.TryGetValue(pair, out productsForCountryPair))
{
productsForCountryPair = new List<string>();
mapB[pair] = productsForCountryPair;
}
productsForCountryPair.Add(product);
}
}
}
This is a great example for using MapReduce.
In the map phase you just collect all the exports that belong to each country.
Then the reducer sorts the products (the products for the same country arrive together, thanks to the mapper).
You will benefit from a distributed, parallel algorithm that can be run on a cluster.
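As a rough illustration only (the class names, the assumption that each input line looks like "Country,Export", and the job wiring are not from the original answer), the two phases could be sketched with the Hadoop API like this:
import java.io.IOException;
import java.util.TreeSet;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emit (country, export) for every input line "Country,Export".
class ExportMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] parts = value.toString().split(",");
        context.write(new Text(parts[0].trim()), new Text(parts[1].trim()));
    }
}

// Reduce phase: all exports of one country arrive together; sort and emit them.
class ExportReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text country, Iterable<Text> exports, Context context)
            throws IOException, InterruptedException {
        TreeSet<String> sorted = new TreeSet<String>();
        for (Text export : exports) {
            sorted.add(export.toString());
        }
        context.write(country, new Text(sorted.toString()));
    }
}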
You are actually taking O(n^2 * time required for 1 intersect).
Let's see if we can improve the time for the intersect. We can maintain a map for every country which stores its products, so you have n hash maps for n countries. You just need to iterate through all products once for initialization. If you want quick lookup, maintain a map of maps as:
HashMap<String,HashMap<String,Boolean>> countryMap = new HashMap<String, HashMap<String,Boolean>>();
Now if you want to find the common products for countries str1 and str2 do:
HashMap<String, Boolean> map1 = countryMap.get("str1");
HashMap<String, Boolean> map2 = countryMap.get("str2");
ArrayList<String> common = new ArrayList<String>();
Iterator<Map.Entry<String, Boolean>> it = map1.entrySet().iterator();
while (it.hasNext()) {
    Map.Entry<String, Boolean> pairs = it.next();
    // Add to common if it is there in the other map
    if (map2.containsKey(pairs.getKey()))
        common.add(pairs.getKey());
}
So, in total it will be O(n^2 * k) if there are k entries in one map, assuming the hash map lookup is O(1) (I guess it can be log k in the worst case for Java).
Using hashmaps where necessary to speed things up:
1) Go through the data and create a map with keys Items and values a list of countries associated with that item. So e.g. Sheep:Australia, US, UK, New Zealand....
2) Create a hashmap with keys each pair of countries and (initially) an empty list as values.
3) For each Item retrieve the list of countries associated with it and for each pair of countries within that list, add that item to the list created for that pair in step (2).
4) Now output the updated list for each pair of countries.
The largest costs are in steps (3) and (4) and both of these costs are linear in the amount of output produced, so I think this is not too far from optimal.
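A minimal Java sketch of those four steps (the method name and the inputRows parameter, a list of {country, item} pairs parsed from the file, are assumptions for illustration):
// Sketch only: inputRows is assumed to hold {country, item} pairs, one per input line.
static Map<String, List<String>> commonProductsByCountryPair(List<String[]> inputRows) {
    // Step 1: item -> list of countries exporting it.
    Map<String, List<String>> countriesByItem = new HashMap<String, List<String>>();
    for (String[] row : inputRows) {
        List<String> countries = countriesByItem.get(row[1]);
        if (countries == null) {
            countries = new ArrayList<String>();
            countriesByItem.put(row[1], countries);
        }
        countries.add(row[0]);
    }
    // Steps 2 and 3: for each item, add it to every pair of its countries.
    Map<String, List<String>> itemsByCountryPair = new HashMap<String, List<String>>();
    for (Map.Entry<String, List<String>> e : countriesByItem.entrySet()) {
        List<String> countries = e.getValue();
        for (int i = 0; i < countries.size(); i++) {
            for (int j = i + 1; j < countries.size(); j++) {
                String a = countries.get(i), b = countries.get(j);
                if (a.equals(b)) continue; // skip duplicate rows for the same country
                // Normalise the pair so "A,B" and "B,A" map to the same key.
                String pairKey = a.compareTo(b) < 0 ? a + "," + b : b + "," + a;
                List<String> items = itemsByCountryPair.get(pairKey);
                if (items == null) {
                    items = new ArrayList<String>();
                    itemsByCountryPair.put(pairKey, items);
                }
                items.add(e.getKey());
            }
        }
    }
    // Step 4: the caller can print each pair and its list of common items.
    return itemsByCountryPair;
}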

How to convert a Navigablemap to String[][]

I need to convert a NavigableMap to a 2D String array. Below is code from an answer to one of my previous questions.
NavigableMap<Integer,String> map =
new TreeMap<Integer, String>();
map.put(0, "Kid");
map.put(11, "Teens");
map.put(20, "Twenties");
map.put(30, "Thirties");
map.put(40, "Forties");
map.put(50, "Senior");
map.put(100, "OMG OMG OMG!");
System.out.println(map.get(map.floorKey(13))); // Teens
System.out.println(map.get(map.floorKey(29))); // Twenties
System.out.println(map.get(map.floorKey(30))); // Thirties
System.out.println(map.floorEntry(42).getValue()); // Forties
System.out.println(map.get(map.floorKey(666))); // OMG OMG OMG!
I have to convert this map to a 2d String array:
{
{"0-11","Kids"},
{"11-20","Teens"},
{"20-30","Twenties"}
...
}
Is there a fast and elegant way to do this?
Your best bet is just to iterate through the Map and create an array for each entry. The troublesome part is generating things like "0-11", since this requires looking for the next highest key... but since the Map is sorted (because you're using a TreeMap) it's no big deal.
String[][] strArr = new String[map.size()][2];
int i = 0;
for (Entry<Integer, String> entry : map.entrySet()) {
    // current key
    Integer key = entry.getKey();
    // next key, or null if there isn't one
    Integer nextKey = map.higherKey(key);
    // you might want to define some behavior for when nextKey is null
    // build the "0-11" part (column 0)
    strArr[i][0] = key + "-" + nextKey;
    // add the "Teens" part (this is just the value from the Map Entry)
    strArr[i][1] = entry.getValue();
    // increment i for the next row in strArr
    i++;
}
You can create two arrays, one with the keys and one with the values, in an "elegant way"; then you can construct a String[][] using these two arrays.
// Create an array containing the keys in the map
Integer[] arrayKeys = (Integer[])map.keySet().toArray( new Integer[map.keySet().size()]);
// Create an array containing the values in a map
String[] arrayValues = (String[])map.values().toArray( new String[map.values().size()]);
String[][] stringArray = new String[arrayKeys.length][2];
for (int i=0; i < arrayValues.length; i++)
{
stringArray[i][0] = arrayKeys[i].toString() + (i+1 < arrayValues.length ? " - " + arrayKeys[i+1] : "");
stringArray[i][1] = arrayValues[i];
}
