dynamically populate hashmap with human language dictionary for text analysis - java

I'm writing a program that takes a text in a human language as input and determines which language it's written in.
My idea is to store dictionaries in HashMaps, with the word as a key and a boolean as a value.
If the document contains that word, I flip the boolean to true.
Right now I'm trying to think of a good way to read in these dictionaries and put them into the HashMaps. The way I'm doing it now is very naive and looks clunky; is there a better way to populate these HashMaps?
Moreover, these dictionaries are huge, so maybe populating them all in succession like this isn't the best approach.
I was thinking it might be better to consider one dictionary at a time, compute a score (how many words of the input text registered with that dictionary), save that, and then process the next dictionary. That would save on RAM, wouldn't it? Is that a good solution?
The code so far looks like this:
static HashMap<String, Boolean> de_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> fr_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> ru_map = new HashMap<String, Boolean>();
static HashMap<String, Boolean> eng_map = new HashMap<String, Boolean>();

public static void main(String[] args) throws IOException
{
    ArrayList<File> sub_dirs = new ArrayList<File>();
    final String filePath = "/home/matthias/Desktop/language_detective/word_lists_2";
    listf(filePath, sub_dirs);
    for (File dir : sub_dirs)
    {
        // the path is lowercased once here, so no further toLowerCase() calls are needed
        String word_holding_directory_path = dir.toString().toLowerCase();
        BufferedReader br = new BufferedReader(new FileReader(dir));
        String line = null;
        while ((line = br.readLine()) != null)
        {
            if (word_holding_directory_path.contains("/de/"))
            {
                de_map.put(line, false);
            }
            if (word_holding_directory_path.contains("/ru/"))
            {
                ru_map.put(line, false);
            }
            if (word_holding_directory_path.contains("/fr/"))
            {
                fr_map.put(line, false);
            }
            if (word_holding_directory_path.contains("/eng/"))
            {
                eng_map.put(line, false);
            }
        }
        br.close();
    }
}
So I'm looking for advice on how I might populate them one at a time, an opinion on whether that's a good methodology, and suggestions about possibly better methodologies for achieving this aim.
The full programme can be found here on my GitHub page.

The task of language identification is well researched and there are a lot of good libraries.
For Java, try Apache Tika, or the Language Detection Library for Java (they report "99% over precision for 53 languages"), or TextCat, or LingPipe - I'd suggest starting with the first; it seems to have the most detailed tutorial.
If your task is too specific for existing libraries (although I doubt this is the case), refer to this survey paper and adapt the closest techniques.
If you do want to reinvent the wheel, e.g. for self-learning purposes, note that language identification can be treated as a special case of text classification; read this basic tutorial on text classification.
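If you do roll your own, the one-dictionary-at-a-time scoring the question proposes could be sketched roughly like this (a HashSet is enough - membership alone is the signal, so no Boolean values are needed; the class and method names here are illustrative):

```java
import java.util.*;

public class LanguageScorer {
    // Fraction of the input tokens that appear in one language's dictionary.
    static double score(Set<String> dictionary, List<String> tokens) {
        if (tokens.isEmpty()) return 0.0;
        long hits = tokens.stream().filter(dictionary::contains).count();
        return (double) hits / tokens.size();
    }

    public static void main(String[] args) {
        // Tiny stand-in dictionaries; in practice each would be read from a word list file.
        Set<String> en = new HashSet<>(Arrays.asList("the", "cat", "sat"));
        Set<String> de = new HashSet<>(Arrays.asList("die", "katze", "sass"));
        List<String> text = Arrays.asList("the", "cat", "sat", "down");
        System.out.println("en score: " + score(en, text)); // 0.75
        System.out.println("de score: " + score(de, text)); // 0.0
    }
}
```

Loading each dictionary into a Set, scoring, and letting it go out of scope before loading the next keeps only one dictionary in RAM at a time, as the question suggests.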

Related

java.lang.ClassCastException: class java.util.TreeMap cannot be cast to class java.lang.Comparable

I receive the following error when attempting to run my java program
Exception in thread "main" java.lang.ClassCastException: class java.util.TreeMap cannot be cast to class java.lang.Comparable (java.util.TreeMap and java.lang.Comparable are in module java.base of loader 'bootstrap')
at java.base/java.util.TreeMap.compare(TreeMap.java:1569)
at java.base/java.util.TreeMap.addEntryToEmptyMap(TreeMap.java:776)
at java.base/java.util.TreeMap.put(TreeMap.java:785)
at java.base/java.util.TreeMap.put(TreeMap.java:534)
at exportsParser.exportsMap(exportsParser.java:53)
at exportsParser.main(exportsParser.java:28)
The applicable code:
import edu.duke.*;
import java.io.*;
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.regex.*;
import java.util.*;
import java.util.TreeMap;
import java.util.Iterator;
public class exportsParser{
void println(Object obj){
System.out.println(obj);
}
/* The rather involved pattern used to match CSVs consists of three
 * alternations: the first matches a quoted field, the second unquoted,
 * the third a null field.
 */
private final static Pattern csv_pattern = Pattern.compile("\"([^\"]+?)\",?|([^,]+),?|,");
public static void main(String[] argv) throws IOException {
//println(csv_pattern);
exportsParser parser = new exportsParser();
BufferedReader reader = new BufferedReader(new FileReader("./exports_small.csv"));
parser.exportsMap(reader);
}
public TreeMap<String, TreeMap<TreeMap<String,String> ,TreeMap<String, String>>> exportsMap(BufferedReader reader) throws IOException{
if(reader.readLine() == null) return null;
TreeMap<String, TreeMap<TreeMap<String,String>, TreeMap<String,String>>> exportsTable = new TreeMap<>();
TreeMap<String, String> products = new TreeMap<>();
TreeMap<String, String> value = new TreeMap<>();
TreeMap<TreeMap<String,String>,TreeMap<String,String>> exportsData = new TreeMap<>();
int countryIndex = 0;
ArrayList<String> exportsList = new ArrayList<String>();
String line;
try{
while((line = reader.readLine()) != null){
exportsList = parse(line);
String countryName = exportsList.get(0);
products.put("items", exportsList.get(1));
value.put("total", exportsList.get(2));
println(products);
println(value);
exportsData.put(products, value);
println(exportsData);
// exportsTable.put(countryName,exportsData);
println(exportsTable);
}
}
catch(IOException e)
{
e.printStackTrace();
}
reader.close();
return exportsTable;
}
/* Parse one line.
* #return List of Strings, minus their double quotes
*/
public ArrayList<String> parse(String line) {
ArrayList<String> list = new ArrayList<String>();
Matcher mat = csv_pattern.matcher(line);
// For each field
while (mat.find()) {
String match = mat.group();
if (match == null)
break;
if (match.endsWith(",")) { // trim trailing ,
match = match.substring(0, match.length() - 1);
}
/*if (match.startsWith("\"")) { // assume also ends with
match = match.substring(1, match.length() - 1);
}*/
if (match.length() == 0)
match = null;
list.add(match);
}
return list;
}
}
To clarify, the issue arises when attempting to put the TreeMap data of products and value into exportsData. The same applies when attempting to add exportsData to the exportsTable, correlating its key (country) to the exportsData (value). I understand what the error means; I just have no idea how to fix it. Additionally, libraries are not allowed (the purpose is to understand the flow of input data into "rows/columns" and to experiment with trees, HashMaps, etc.).
Additionally, I cannot use a database for this, as doing it manually is a requirement. However, using TreeMaps is of course not a requirement; we are allowed to experiment with the various Collection classes.
I have spent a while trying to get this to work, but I have run out of thoughts and forum pages to read. Eventually it would be ideal to make this cater to larger CSV files with unknown columns. However, for the practice run we have been given the information beforehand, hence the indexing in the code above.
CSV data:
Country,Exports,Value (dollars)
Germany,"motor vehicles, machinery, chemicals","$1,547,000,000,000"
Macedonia,"tobacco, textiles","$3,421,000,000"
Madagascar,"coffee, vanilla, shellfish","$864,800,000"
Malawi,"tea, sugar, cotton, coffee","$1,332,000,000"
Malaysia,"semiconductors, wood","$231,300,000,000"
Namibia,"diamonds, copper, gold, zinc, lead","$4,597,000,000"
Peru,"copper, gold, lead, zinc, tin, coffee","$36,430,000,000"
Rwanda,"coffee, tea, hides, tin ore","$720,000,000"
South Africa,"gold, diamonds, platinum","$97,900,000,000"
United States,"corn, computers, automobiles, medicines","$1,610,000,000,000"
This is my first time using the above so it is prone to beginner errors.
Check the javadoc. To cite: "The map is sorted according to the natural ordering of its keys, or by a Comparator provided at map creation time, depending on which constructor is used." Natural ordering of keys means the key must implement Comparable. Your keys are of type TreeMap, which does not have a natural ordering - it does not implement Comparable. Naturally this leads to a ClassCastException.
If you really must use TreeMap as key for your TreeMap, you must provide a Comparator to TreeMap constructor:
Comparator<TreeMap<String,String>> comparator = implement it;
TreeMap<TreeMap<String,String>,TreeMap<String,String>> exportsData = new TreeMap<>(comparator);
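Purely for illustration, one minimal way to fill in the "implement it" part (an assumption on my part - comparing the maps by their string form, which is deterministic for TreeMaps because their entries are sorted):

```java
import java.util.*;

public class TreeMapKeyDemo {
    public static void main(String[] args) {
        // Illustrative only: order the TreeMap keys by their toString() form.
        Comparator<TreeMap<String, String>> comparator =
                Comparator.comparing(TreeMap::toString);
        TreeMap<TreeMap<String, String>, TreeMap<String, String>> exportsData =
                new TreeMap<>(comparator);

        TreeMap<String, String> products = new TreeMap<>();
        products.put("items", "tea, sugar");
        TreeMap<String, String> value = new TreeMap<>();
        value.put("total", "$1,332,000,000");

        exportsData.put(products, value); // no ClassCastException now
        System.out.println(exportsData.size()); // 1
    }
}
```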
Seeing that your data is coming from a CSV file, I would suggest parsing it into some custom class. It would be much more readable, and it would be easier to implement Comparable, or a Comparator if needed.
Edit: You don't actually need two maps - one for exports and one for value; that complicates things more than needed. Those can be put in a single map. The keys are the values from the first line of the CSV (or other keys, as in your case) and the values are the corresponding values from the parsed line. So you have:
Map<String, String> lineData;
The country may also be part of this map (if you need it). Normally it's this map that would be represented by your custom class, but it looks like your task is to work with collections, so I won't delve into that.
Since you want to map country names to data, you now need another map - the keys will be strings (country names) and the values the map containing the line data from above.
All of that can be stored in a list (you can store anything in a list). I'm leaving the exact implementation to you.
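A minimal sketch of that structure (country name to column name to cell value; the column names are taken from the question's CSV header):

```java
import java.util.*;

public class ExportsTableDemo {
    public static void main(String[] args) {
        // country -> (column name -> cell value); String keys have a natural
        // ordering, so TreeMap works here without any custom Comparator.
        Map<String, Map<String, String>> exportsTable = new TreeMap<>();

        // Note: a fresh inner map per row. Reusing one products/value map
        // across loop iterations, as in the question's code, would make
        // every country share the same data.
        Map<String, String> lineData = new TreeMap<>();
        lineData.put("Exports", "motor vehicles, machinery, chemicals");
        lineData.put("Value (dollars)", "$1,547,000,000,000");
        exportsTable.put("Germany", lineData);

        System.out.println(exportsTable.get("Germany").get("Exports"));
    }
}
```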

How to add list of items to an ArrayList<String> in java?

I have list of words which I need to load to ArrayList< String >
prefix.properties
vocab\: = http://myweb.in/myvocab#
hydra\: = http://www.w3.org/ns/hydra/core#
schema\: = http://schema.org/
"vocab:" is actually "vocab:" .Slash(\) is used to read colon(:) character because it is special character.
Dictionary.java
public class Dictionary {
public static ArrayList<String> prefix = new ArrayList<>();
static {
Properties properties = new Properties();
InputStream input = null;
input = ClassLoader.getSystemResourceAsStream("prefix.properties");
System.out.println(input!=null);
try {
properties.load(input);
} catch (IOException e) {
e.printStackTrace();
}
Set<Map.Entry<Object, Object>> entries = properties.entrySet();
for(Map.Entry<Object, Object> E : entries)
{
prefix.add(E.getKey().toString());
prefix.add(E.getValue().toString());
}
}
}
In Dictionary.java, the ArrayList prefix will contain:
prefix = [
"vocab:",
"http://myweb.in/myvocab#",
"hydra:",
"http://www.w3.org/ns/hydra/core#",
"schema:",
"http://schema.org/"
]
I am querying some data in another class.
For eg:
public class QueryClass
{
public ArrayList<String> queryResult(String findKey)
{
ArrayList<String> result = new ArrayList<String>();
ArrayList<String> prefix = Dictionary.prefix;
Iterator<String> iterator = prefix.iterator();
while (iterator.hasNext())
{
String currentKey = iterator.next()+findKey;
/**
Here my logic to search data with this currentKey
*/
}
return result;
}
}
Problem:
I want to avoid loading into a list this way, because an odd number of elements could end up present, while a .properties file stores data as (key, value) pairs.
The reason I have to load from a separate file: in the future I will have to add more keywords/strings, which is why I put them in prefix.properties.
Is there any alternative way to do this?
Do not re-invent the wheel.
If you can define the file format, then just go for java properties.
You see, the Properties class has a method getProperty(String, String), where the second argument can be used to pass a default value. That method can be used to fetch keys that don't come with values.
I would be really careful about inventing your own format; instead I would look into ways of re-using what is already there. Writing code is similar to building roads: people forget that each new road that is built translates into maintenance effort in the future.
Besides: you add string values to a list of strings by calling list.add(strValue). That is all there is to it.
Edit, on your comment: when "java properties" is not what you are looking for, consider other formats. For example, you could persist your data in some JSON-based format and then use an existing JSON parser. Actually, your data almost looks like JSON already.
I'm not sure why you need to use an ArrayList, but if you want to pass these property keys/values around, there are two better ways:
Use Properties itself.
Convert to a HashMap.
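A sketch of option 2 - converting the Properties to a Map so keys and values stay paired instead of being interleaved in one list (the class name is mine, and loading from a StringReader here just keeps the example self-contained; in your code you would load from prefix.properties as before):

```java
import java.io.StringReader;
import java.util.*;

public class PrefixLoader {
    // Keys and values stay paired, so an odd element count is impossible.
    public static Map<String, String> toMap(Properties props) {
        Map<String, String> map = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            map.put(name, props.getProperty(name));
        }
        return map;
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Same escaping as prefix.properties: "\:" reads as a literal colon in the key.
        props.load(new StringReader(
                "vocab\\: = http://myweb.in/myvocab#\n" +
                "hydra\\: = http://www.w3.org/ns/hydra/core#"));
        Map<String, String> prefixes = toMap(props);
        System.out.println(prefixes.get("vocab:")); // http://myweb.in/myvocab#
    }
}
```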

Java what's the best data structure to search objects by keywords [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Suppose I have a "journal article" class which has variables such as year, author(s), title, journal name, keyword(s), etc.
Variables such as authors and keywords might be declared as String[] authors and String[] keywords.
What's the best data structure to search a group of "journal paper" objects by one or several keywords, or one of several author names, or part of the title?
Thanks!
==========================================================================
Following everybody's help, the test code, realized in the Processing environment, is shown below. Advice is greatly appreciated! Thanks!
ArrayList<Paper> papers = new ArrayList<Paper>();
HashMap<String, ArrayList<Paper>> hm = new HashMap<String, ArrayList<Paper>>();
void setup(){
Paper paperA = new Paper();
paperA.title = "paperA";
paperA.keywords.append("cat");
paperA.keywords.append("dog");
paperA.keywords.append("egg");
//println(paperA.keywords);
papers.add(paperA);
Paper paperC = new Paper();
paperC.title = "paperC";
paperC.keywords.append("egg");
paperC.keywords.append("cat");
//println(paperC.keywords);
papers.add(paperC);
Paper paperB = new Paper();
paperB.title = "paperB";
paperB.keywords.append("dog");
paperB.keywords.append("egg");
//println(paperB.keywords);
papers.add(paperB);
for (Paper p : papers) {
// get a list of keywords for the current paper
StringList keywords = p.keywords;
// go through each keyword of the current paper
for (int i=0; i<keywords.size(); i++) {
String keyword = keywords.get(i);
if ( hm.containsKey(keyword) ) {
// the hashmap already has this keyword:
// get the paper list associated with it (the keyword's "value")
// (renamed so the local list doesn't shadow the global "papers" field)
ArrayList<Paper> keywordPapers = hm.get(keyword);
keywordPapers.add(p); // add the current paper; the list is already in the map, so no put() is needed
} else {
// the hashmap doesn't have this keyword:
// create a new ArrayList to store the papers with this keyword
ArrayList<Paper> keywordPapers = new ArrayList<Paper>();
keywordPapers.add(p); // add the current paper to this ArrayList
hm.put(keyword, keywordPapers); // put the new keyword and its paper list in the hashmap
}
}
}
ArrayList<Paper> paperList = new ArrayList<Paper>();
paperList = hm.get("egg");
for (Paper p : paperList) {
println(p.title);
}
}
void draw(){}
class Paper
{
//===== variables =====
int ID;
int year;
String title;
StringList authors = new StringList();
StringList keywords = new StringList();
String DOI;
String typeOfRef;
String nameOfSource;
String abs; // abstract
//===== constructor =====
//===== update =====
//===== display =====
}
Use a HashMap<String, JournalArticle> data structure.
for example
Map<String, JournalArticle> journals = new HashMap<String, JournalArticle>();
journals.put("keyword1", testJA);
if (journals.containsKey("keyword1"))
{
return journals.get("keyword1");
}
you can put your keywords as the key of String type in this map, however, it only supports "exact-match" kind of search, meaning that you have to use the keyword (stored as key in the Hashmap) in your search.
If you are looking for " like " kind of search, I suggest you save your objects in a database that supports queries for "like".
Edit: on second thought, I think you can do some kind of "like" query (just like the LIKE clause in SQL), but the efficiency is not going to be good, because you iterate through all the keys in the HashMap on every query. If you know regex, you can do all kinds of queries by modifying the following example code (e.g. key.matches(pattern)):
List<JournalArticle> results = new ArrayList<JournalArticle>();
for (String key : journals.keySet())
{
if (key.contains("keyword")) // the keyword only has to be part of the key stored in the HashMap, no longer an exact match
results.add(journals.get(key));
}
return results;
For simple cases you can use a Multimap<String, Article>. There's one in Guava library.
For larger amounts of data Apache Lucene will be a better fit.
I would create a map from a keyword (likewise for author, or title, etc.), to a set of JournalArticles.
Map<String, Set<JournalArticle>> keyWordMap = new HashMap<>();
Map<String, Set<JournalArticle>> authorMap = new HashMap<>();
When you create a new JournalArticle, for each of its key words, you'd add that article to the appropriate set.
JournalArticle ja = new JournalArticle();
for (String keyWord : ja.getKeyWords())
{
if (keyWordMap.containsKey(keyWord) == false)
keyWordMap.put(keyWord, new HashSet<JournalArticle>());
keyWordMap.get(keyWord).add(ja);
}
To do a look up, you'd do something like:
String keyWord = "....";
Set<JournalArticle> matchingSet = keyWordMap.get(keyWord);

Very slow execution

I have a basic method which reads in ~1000 files with ~10,000 lines each from the hard drive. Also, I have an array of String called userDescription which has all the "description words" of the user. I have created a HashMap whose data structure is HashMap<String, HashMap<String, Integer>> which corresponds to HashMap<eachUserDescriptionWords, HashMap<TweetWord, Tweet_Word_Frequency>>.
The file is organized as:
<User=A>\t<Tweet="tweet...">\n
<User=A>\t<Tweet="tweet2...">\n
<User=B>\t<Tweet="tweet3...">\n
....
My method to do this is:
for (File file : tweetList) {
if (file.getName().endsWith(".txt")) {
System.out.println(file.getName());
BufferedReader in;
try {
in = new BufferedReader(new FileReader(file));
String str;
while ((str = in.readLine()) != null) {
// String split[] = str.split("\t");
String split[] = ptnTab.split(str);
String user = ptnEquals.split(split[1])[1];
String tweet = ptnEquals.split(split[2])[1];
// String user = split[1].split("=")[1];
// String tweet = split[2].split("=")[1];
if (tweet.length() == 0)
continue;
if (!prevUser.equals(user)) {
description = userDescription.get(user);
if (description == null)
continue;
if (prevUser.length() > 0 && wordsCount.size() > 0) {
for (String profileWord : description) {
if (wordsCorr.containsKey(profileWord)) {
HashMap<String, Integer> temp = wordsCorr
.get(profileWord);
wordsCorr.put(profileWord,
addValues(wordsCount, temp));
} else {
wordsCorr.put(profileWord, wordsCount);
}
}
}
// wordsCount = new HashMap<String, Integer>();
wordsCount.clear();
}
setTweetWordCount(wordsCount, tweet);
prevUser = user;
}
} catch (IOException e) {
System.err.println("Something went wrong: "
+ e.getMessage());
}
}
}
Here, the method setTweetWordCount counts the word frequency across all the tweets of a single user. The method is:
private void setTweetWordCount(HashMap<String, Integer> wordsCount,
String tweet) {
ArrayList<String> currTweet = new ArrayList<String>(
Arrays.asList(removeUnwantedStrings(tweet)));
if (currTweet.size() == 0)
return;
for (String word : currTweet) {
try {
if (word.equals("") || word.equals(null))
continue;
} catch (NullPointerException e) {
continue;
}
Integer countWord = wordsCount.get(word);
wordsCount.put(word, (countWord == null) ? 1 : countWord + 1);
}
}
The method addValues checks whether wordsCount has words that are already in the giant HashMap wordsCorr. If it does, it increases the count of the word in the original HashMap wordsCorr.
Now, my problem is that no matter what I do, the program is very, very slow. I ran this version on my server, which has fairly good hardware, but it's been 28 hours and only ~450 files have been scanned. I tried to see if I was doing anything repeatedly that might be unnecessary, and I corrected a few such spots. But the program is still very slow.
Also, I have increased the heap size to 1500m, which is the maximum I can go to.
Is there anything I might be doing wrong?
Thank you for your help!
EDIT: Profiling Results
First of all, I really want to thank you for the comments. I have changed some things in my program: I now have precompiled regexes instead of direct String.split() calls, among other optimizations. However, after profiling, my addValues method is taking the most time. So here's my code for addValues. Is there something I should be optimizing here? Oh, and I've also changed my startProcess method a bit.
private HashMap<String, Integer> addValues(
HashMap<String, Integer> wordsCount, HashMap<String, Integer> temp) {
HashMap<String, Integer> merged = new HashMap<String, Integer>();
for (String x : wordsCount.keySet()) {
Integer y = temp.get(x);
if (y == null) {
merged.put(x, wordsCount.get(x));
} else {
merged.put(x, wordsCount.get(x) + y);
}
}
for (String x : temp.keySet()) {
if (merged.get(x) == null) {
merged.put(x, temp.get(x));
}
}
return merged;
}
EDIT 2: Even after trying hard, the program didn't run as expected. I did all the optimization of the "slow" addValues method, but it didn't help. So I went down a different path: creating a word dictionary and assigning an index to each word first, and then doing the processing. Let's see where it goes. Thank you for your help!
Two things come to mind:
You are using String.split(), which uses a regular expression to do the splitting. That's completely oversized. Use one of the many splitXYZ() methods from Apache StringUtils instead.
You are probably creating really huge hash maps. When having very large hash maps, the hash collisions will make the hashmap functions much slower. This can be improved by using more widely spread hash values. See an example over here: Java HashMap performance optimization / alternative
One suggestion (I don't know how much of an improvement you'll get from it) is based on the observation that curTweet is never modified. There is no need for creating a copy. I.e.
ArrayList<String> currTweet = new ArrayList<String>(
Arrays.asList(removeUnwantedStrings(tweet)));
can be replaced with
List<String> currTweet = Arrays.asList(removeUnwantedStrings(tweet));
or you can use the array directly (which will be marginally faster). I.e.
String[] currTweet = removeUnwantedStrings(tweet);
Also,
word.equals(null)
is always false by the definition of the contract of equals. The right way to null-check is:
if (null == word || word.equals(""))
Additionally, you won't need that null-pointer-exception try-catch if you do this. Exception handling is expensive when it happens, so if your word array tends to return lots of nulls, this could be slowing down your code.
More generally though, this is one of those cases where you should profile the code and figure out where the actual bottleneck is (if there is a bottleneck) instead of looking for things to optimize ad-hoc.
You would gain from a few more optimizations:
String.split recompiles the input regex (in string form) to a pattern every time. You should have a single static final Pattern ptnTab = Pattern.compile( "\\t" ), ptnEquals = Pattern.compile( "=" ); and call, e.g., ptnTab.split( str ). The resulting performance should be close to StringTokenizer.
word.equals( "" ) || word.equals( null ): lots of wasted cycles here. If you are actually seeing null words, then you are catching NPEs, which is very expensive. See the response from @trutheality above.
You should allocate the HashMap with a very large initial capacity to avoid all the resizing that is bound to happen.
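For instance (the capacity here is illustrative, not derived from the question's data):

```java
import java.util.*;

public class PresizedMap {
    public static void main(String[] args) {
        // With the default 0.75 load factor, a HashMap rehashes once
        // size exceeds capacity * 0.75. Pre-sizing for an expected
        // ~1,000,000 entries avoids every intermediate resize.
        Map<String, Integer> wordsCount = new HashMap<>(1_400_000, 0.75f);
        wordsCount.put("example", 1);
        System.out.println(wordsCount.size()); // 1
    }
}
```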
split() uses regular expressions, which are not "fast". try using a StringTokenizer or something instead.
Have you thought about using a database instead of Java? Using the data-load tools that come with a DB, you can load the data into tables and do set processing from there. One challenge I see is loading the data into a table, as the fields are not delimited with a common separator like "'" or ":".
You could rewrite addValues like this to make it faster - a few notes:
I have not tested the code, but I think it is equivalent to yours.
I have not tested that it is quicker (but I would be surprised if it wasn't).
I have assumed that wordsCount is larger than temp; if not, exchange them in the code.
I have also replaced all the HashMaps with Maps, which makes no difference for you but makes the code easier to change later on.
private Map<String, Integer> addValues(Map<String, Integer> wordsCount, Map<String, Integer> temp) {
Map<String, Integer> merged = new HashMap<String, Integer>(wordsCount); // puts everything from wordsCount into merged
for (Map.Entry<String, Integer> e : temp.entrySet()) {
Integer countInWords = merged.get(e.getKey()); //the number in wordsCount
Integer countInTemp = e.getValue();
int newCount = countInTemp + (countInWords == null ? 0 : countInWords); //the sum
merged.put(e.getKey(), newCount);
}
return merged;
}
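On Java 8+, a further alternative (not part of the answer above, so treat it as a suggestion) is Map.merge, which mutates one map in place and avoids allocating merged at all:

```java
import java.util.*;

public class MergeCounts {
    // Merge source's counts into target in place; Integer::sum combines
    // counts when a word exists in both maps.
    static void mergeInto(Map<String, Integer> target, Map<String, Integer> source) {
        for (Map.Entry<String, Integer> e : source.entrySet()) {
            target.merge(e.getKey(), e.getValue(), Integer::sum);
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> wordsCount = new HashMap<>();
        wordsCount.put("cat", 2);
        wordsCount.put("dog", 1);
        Map<String, Integer> temp = new HashMap<>();
        temp.put("cat", 3);
        temp.put("egg", 5);
        mergeInto(wordsCount, temp);
        System.out.println(wordsCount.get("cat")); // 5
    }
}
```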

JavaScript objects in Java?

How do I loop over a string of item and create an object based on that?
I currently have this code:
public static Object ParseParams(String string)
{
Object params = new Object();
String[] lines = string.split("\n");
for(String line : lines)
{
String[] splittedLine = line.split("=");
params[splittedLine[0]] = splittedLine[1]; //JavaScript syntax, not Java!
}
return params;
}
The input string is in this format:
param1=value1
param2=value2
foo=bar
How do I fix the problematic line?
Edit
Sometimes the string would look like this:
foo=bar
param=1=hello
param=2=world
Would it be possible with Maps in Java to get the output like this:
foo
bar
param
1
hello
2
world
So the Maps are sometimes nested, and it you would retrieve hello by calling params.get("param").get("1");
It sounds like you want either a Map, or a JSON library.
Maps
public static Map<String, String> ParseParams(String string)
{
Map<String, String> params = new HashMap<String, String>();
String[] lines = string.split("\n");
for(String line : lines)
{
String[] splittedLine = line.split("=", 2); // limit of 2: any further '=' stays inside the value
params.put(splittedLine[0], splittedLine[1]);
}
return params; // Get a param with params.get(key);
}
In general you'll want a HashMap (fast, but not stored in order), but there's also a TreeMap, which is slightly slower but keeps its keys in sorted order (which can be useful sometimes).
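For example, with the keys from the question:

```java
import java.util.*;

public class OrderedParams {
    public static void main(String[] args) {
        // TreeMap keeps keys in their natural (sorted) order,
        // regardless of insertion order.
        Map<String, String> params = new TreeMap<>();
        params.put("param2", "value2");
        params.put("foo", "bar");
        params.put("param1", "value1");
        System.out.println(params.keySet()); // [foo, param1, param2]
    }
}
```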
JSON
The format JavaScript uses for objects is used as a general-purpose storage format called JSON (JavaScript Object Notation). This is only useful for storage / printing / network transmission. Internally, Java JSON libraries use maps (JavaScript interpreters probably do too).
There are several Java APIs, but StackOverflow users seem to recommend Json-lib:
public static JSONObject ParseParams(String string)
{
// Note that we need everything from the other method anyway
return JSONObject.fromObject(ParseParams(string));
}
EDIT:
You're already reaching the point where using a Map is strained. I'd suggest just using a class:
class MyStuff {
String foo;
Map<String, String> params;
}
It is possible to nest Maps, like Map<String, Map<String, String>> or Map<String, Object>, but you really should be using classes for this.
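For the edited, nested format, a rough sketch (the empty-string placeholder key for plain foo=bar lines is my assumption, not something from the question; a custom class would avoid that awkwardness):

```java
import java.util.*;

public class NestedParams {
    // Lines like "param=1=hello" become a nested map, so
    // parse(...).get("param").get("1") returns "hello".
    // Plain "foo=bar" lines are stored under a placeholder inner key ("").
    static Map<String, Map<String, String>> parse(String input) {
        Map<String, Map<String, String>> params = new HashMap<>();
        for (String line : input.split("\n")) {
            String[] parts = line.split("=", 3);
            Map<String, String> inner =
                    params.computeIfAbsent(parts[0], k -> new HashMap<>());
            if (parts.length == 3) {
                inner.put(parts[1], parts[2]); // nested: key, subkey, value
            } else {
                inner.put("", parts[1]);       // flat: placeholder subkey
            }
        }
        return params;
    }

    public static void main(String[] args) {
        Map<String, Map<String, String>> p =
                parse("foo=bar\nparam=1=hello\nparam=2=world");
        System.out.println(p.get("param").get("1")); // hello
        System.out.println(p.get("foo").get(""));    // bar
    }
}
```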
