Null-free "maps": Is a callback solution slower than tryGet()? - java

In comments to "How to implement List, Set, and Map in null free design?", Steven Sudit and I got into a discussion about using a callback, with handlers for "found" and "not found" situations, vs. a tryGet() method, taking an out parameter and returning a boolean indicating whether the out parameter had been populated. Steven maintained that the callback approach was more complex and almost certain to be slower; I maintained that the complexity was no greater and the performance at worst the same.
But code speaks louder than words, so I thought I'd implement both and see what I got. The original question was fairly theoretical with regard to language ("And for argument sake, let's say this language don't even have null") -- I've used Java here because that's what I've got handy. Java doesn't have out parameters, but it doesn't have first-class functions either, so style-wise, it should suck equally for both approaches.
(Digression: As far as complexity goes: I like the callback design because it inherently forces the user of the API to handle both cases, whereas the tryGet() design requires callers to perform their own boilerplate conditional check, which they could forget or get wrong. But having now implemented both, I can see why the tryGet() design looks simpler, at least in the short term.)
First, the callback example:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class CallbackMap<K, V> {
    private final Map<K, V> backingMap;

    public CallbackMap(Map<K, V> backingMap) {
        this.backingMap = backingMap;
    }

    void lookup(K key, Callback<K, V> handler) {
        V val = backingMap.get(key);
        if (val == null) {
            handler.handleMissing(key);
        } else {
            handler.handleFound(key, val);
        }
    }
}

interface Callback<K, V> {
    void handleFound(K key, V value);

    void handleMissing(K key);
}

class CallbackExample {
    private final Map<String, String> map;
    private final List<String> found;
    private final List<String> missing;
    private final Callback<String, String> handler;

    public CallbackExample(Map<String, String> map) {
        this.map = map;
        found = new ArrayList<String>(map.size());
        missing = new ArrayList<String>(map.size());
        handler = new Callback<String, String>() {
            public void handleFound(String key, String value) {
                found.add(key + ": " + value);
            }

            public void handleMissing(String key) {
                missing.add(key);
            }
        };
    }

    void test() {
        CallbackMap<String, String> cbMap = new CallbackMap<String, String>(map);
        for (int i = 0, count = map.size(); i < count; i++) {
            String key = "key" + i;
            cbMap.lookup(key, handler);
        }
        System.out.println(found.size() + " found");
        System.out.println(missing.size() + " missing");
    }
}
Now, the tryGet() example -- as best I understand the pattern (and I might well be wrong):
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

class TryGetMap<K, V> {
    private final Map<K, V> backingMap;

    public TryGetMap(Map<K, V> backingMap) {
        this.backingMap = backingMap;
    }

    boolean tryGet(K key, OutParameter<V> valueParam) {
        V val = backingMap.get(key);
        if (val == null) {
            return false;
        }
        valueParam.value = val;
        return true;
    }
}

class OutParameter<V> {
    V value;
}

class TryGetExample {
    private final Map<String, String> map;
    private final List<String> found;
    private final List<String> missing;
    private final OutParameter<String> out = new OutParameter<String>();

    public TryGetExample(Map<String, String> map) {
        this.map = map;
        found = new ArrayList<String>(map.size());
        missing = new ArrayList<String>(map.size());
    }

    void test() {
        TryGetMap<String, String> tgMap = new TryGetMap<String, String>(map);
        for (int i = 0, count = map.size(); i < count; i++) {
            String key = "key" + i;
            if (tgMap.tryGet(key, out)) {
                found.add(key + ": " + out.value);
            } else {
                missing.add(key);
            }
        }
        System.out.println(found.size() + " found");
        System.out.println(missing.size() + " missing");
    }
}
And finally, the performance test code:
public static void main(String[] args) {
    int size = 200000;
    Map<String, String> map = new HashMap<String, String>();
    for (int i = 0; i < size; i++) {
        String val = (i % 5 == 0) ? null : "value" + i;
        map.put("key" + i, val);
    }
    long totalCallback = 0;
    long totalTryGet = 0;
    int iterations = 20;
    for (int i = 0; i < iterations; i++) {
        {
            TryGetExample tryGet = new TryGetExample(map);
            long tryGetStart = System.currentTimeMillis();
            tryGet.test();
            totalTryGet += (System.currentTimeMillis() - tryGetStart);
        }
        System.gc();
        {
            CallbackExample callback = new CallbackExample(map);
            long callbackStart = System.currentTimeMillis();
            callback.test();
            totalCallback += (System.currentTimeMillis() - callbackStart);
        }
        System.gc();
    }
    System.out.println("Avg. callback: " + (totalCallback / iterations));
    System.out.println("Avg. tryGet(): " + (totalTryGet / iterations));
}
On my first attempt, I got 50% worse performance for callback than for tryGet(), which really surprised me. But, on a hunch, I added some garbage collection, and the performance penalty vanished.
This fits with my instinct, which is that we're basically talking about taking the same number of method calls, conditional checks, etc. and rearranging them. But then, I wrote the code, so I might well have written a suboptimal or subconsciously penalized tryGet() implementation. Thoughts?
Updated: Per comment from Michael Aaron Safyan, fixed TryGetExample to reuse OutParameter.

I would say that neither design makes sense in practice, regardless of the performance. I would argue that both mechanisms are overly complicated and, more importantly, don't take into account actual usage.
Actual Usage
If a user looks up a value in a map and it isn't there, most likely the user wants one of the following:
* To insert some value with that key into the map
* To get back some default value
* To be informed that the value isn't there
Thus I would argue that a better, null-free API would be:
* has(key), which indicates whether the key is present (if one only wishes to check for the key's existence).
* get(key), which reports the value if the key is present and otherwise throws NoSuchElementException.
* get(key, defaultval), which reports the value for the key, or defaultval if the key isn't present.
* setdefault(key, defaultval), which inserts (key, defaultval) if the key isn't present, and returns the value now associated with the key (defaultval if there was no previous mapping, otherwise the previous value).
The only way to get back null is if you explicitly ask for it, as in get(key, null). This API is incredibly simple, and yet it handles the most common map-related tasks (in most use cases that I have encountered).
I should also add that in Java, has() would be called containsKey(), while setdefault() would be called putIfAbsent(). Because get() signals an object's absence via NoSuchElementException, it becomes possible to associate a key with null and treat that as a legitimate association: if get() returns null, it means the key has been associated with the value null, not that the key is absent. (You can also define your API to disallow null values, in which case the functions that add associations would throw IllegalArgumentException when given null.)
Another advantage of this API is that setdefault() only needs to perform the lookup once instead of twice, which would be the case with if (!dict.has(key)) { dict.set(key, val); }. Yet another advantage is that you do not surprise developers who write something like dict.get(key).doSomething() on the assumption that get() always returns a non-null object (because they have never inserted a null value into the dictionary). Instead, they get a NoSuchElementException if there is no value for that key, which is more consistent with the rest of the error checking in Java and much easier to understand and debug than a NullPointerException. A minimal sketch of this API in Java follows.
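Here is one possible rendering of that API, as a rough sketch only, wrapping a standard HashMap. The class name NullFreeMap and the choice to reject null values (so that a single backing lookup suffices) are mine, not the answer's:

import java.util.HashMap;
import java.util.Map;
import java.util.NoSuchElementException;

class NullFreeMap<K, V> {
    private final Map<K, V> backing = new HashMap<K, V>();

    // Values may never be null, so a null from backing.get() always
    // means "absent" -- one lookup is enough for every operation below.
    void set(K key, V value) {
        if (value == null) {
            throw new IllegalArgumentException("null values not allowed");
        }
        backing.put(key, value);
    }

    boolean has(K key) {
        return backing.containsKey(key);
    }

    V get(K key) {
        V val = backing.get(key);
        if (val == null) {
            throw new NoSuchElementException("no value for key: " + key);
        }
        return val;
    }

    V get(K key, V defaultVal) {
        V val = backing.get(key);
        return (val == null) ? defaultVal : val;
    }

    V setdefault(K key, V defaultVal) {
        V val = backing.get(key);
        if (val == null) {
            backing.put(key, defaultVal);
            return defaultVal;
        }
        return val;
    }
}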
Answer To Question
To answer the original question: yes, you are unfairly penalizing the tryGet() version. In your callback-based mechanism you construct the callback object only once and reuse it in all subsequent calls, whereas in your tryGet() example you construct the out-parameter object on every single iteration. Try moving the line
OutParameter out = new OutParameter();
above the for-loop and reusing the out parameter in each iteration, and see if that improves the performance of the tryGet() example.
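Concretely, the loop in TryGetExample.test() would then look like this with the hoisted, reused out parameter (which is what the question's updated code now does via a field):

// One OutParameter allocated up front and reused on every iteration,
// instead of one allocation per lookup.
OutParameter<String> out = new OutParameter<String>();
for (int i = 0, count = map.size(); i < count; i++) {
    String key = "key" + i;
    if (tgMap.tryGet(key, out)) {
        found.add(key + ": " + out.value);
    } else {
        missing.add(key);
    }
}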

David, thanks for taking the time to write this up. I'm a C# programmer, so my Java skills are a bit vague these days. Because of this, I decided to port your code over and test it myself. I found some interesting differences and similarities, which are pretty much worth the price of admission as far as I'm concerned. Among the major differences are:
* I didn't have to implement TryGet because it's built into Dictionary.
* In order to use the native TryGet, instead of inserting nulls to simulate misses, I simply omitted those values. This still means that v = map[k] would have set v to null, so I think it's a proper porting. In hindsight, I could have inserted the nulls and changed (_map.TryGetValue(key, out value)) to (_map.TryGetValue(key, out value) && value != null), but I'm glad I didn't.
* I want to be exceedingly fair. So, to keep the code as compact and maintainable as possible, I used lambda notation, which let me define the callbacks painlessly. This hides much of the complexity of setting up anonymous delegates, and allows me to use closures seamlessly. Ironically, the implementation of Lookup uses TryGet internally.
* Instead of declaring a new type of Dictionary, I used an extension method to graft Lookup onto the standard dictionary, much simplifying the code.
With apologies for the less-than-professional quality of the code, here it is:
using System;
using System.Collections.Generic;
using System.Linq;

namespace ConsoleApplication1
{
    static class CallbackDictionary
    {
        public static void Lookup<K, V>(this Dictionary<K, V> map, K key, Action<K, V> found, Action<K> missed)
        {
            V v;
            if (map.TryGetValue(key, out v))
                found(key, v);
            else
                missed(key);
        }
    }

    class TryGetExample
    {
        private Dictionary<string, string> _map;
        private List<string> _found;
        private List<string> _missing;

        public TryGetExample(Dictionary<string, string> map)
        {
            _map = map;
            _found = new List<string>(_map.Count);
            _missing = new List<string>(_map.Count);
        }

        public void TestTryGet()
        {
            for (int i = 0; i < _map.Count; i++)
            {
                string key = "key" + i;
                string value;
                if (_map.TryGetValue(key, out value))
                    _found.Add(key + ": " + value);
                else
                    _missing.Add(key);
            }
            Console.WriteLine(_found.Count() + " found");
            Console.WriteLine(_missing.Count() + " missing");
        }

        public void TestCallback()
        {
            for (int i = 0; i < _map.Count; i++)
                _map.Lookup("key" + i, (k, v) => _found.Add(k + ": " + v), k => _missing.Add(k));
            Console.WriteLine(_found.Count() + " found");
            Console.WriteLine(_missing.Count() + " missing");
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            int size = 2000000;
            var map = new Dictionary<string, string>(size);
            for (int i = 0; i < size; i++)
                if (i % 5 != 0)
                    map.Add("key" + i, "value" + i);
            long totalCallback = 0;
            long totalTryGet = 0;
            int iterations = 20;
            TryGetExample tryGet;
            for (int i = 0; i < iterations; i++)
            {
                tryGet = new TryGetExample(map);
                long tryGetStart = DateTime.UtcNow.Ticks;
                tryGet.TestTryGet();
                totalTryGet += (DateTime.UtcNow.Ticks - tryGetStart);
                GC.Collect();

                tryGet = new TryGetExample(map);
                long callbackStart = DateTime.UtcNow.Ticks;
                tryGet.TestCallback();
                totalCallback += (DateTime.UtcNow.Ticks - callbackStart);
                GC.Collect();
            }
            Console.WriteLine("Avg. callback: " + (totalCallback / iterations));
            Console.WriteLine("Avg. tryGet(): " + (totalTryGet / iterations));
        }
    }
}
My performance expectations, as I said in the article that inspired this one, would be that neither one is much faster or slower than the other. After all, most of the work is in the searching and adding, not in the simple logic that structures it. In fact, it varied a bit among runs, but I was unable to detect any consistent advantage.
Part of the problem is that I used a low-precision timer and the test was short, so I increased the count by 10x to 2,000,000, and that helped. Now callbacks are about 3% slower, which I do not consider significant. On my fairly slow machine, callbacks took 17,773,437 ticks while TryGet took 17,234,375.
Now, as for code complexity, it's a bit unfair because TryGet is native, so let's just ignore the fact that I had to add a callback interface. At the calling spot, lambda notation did a great job of hiding the complexity. If anything, it's actually shorter than the if/then/else used in the TryGet version, although I suppose I could have used a ternary operator to make it equally compact.
On the whole, I found the C# to be more elegant, and only some of that is due to my bias as a C# programmer. Mainly, I didn't have to define and implement interfaces, which cut down on the plumbing overhead. I also used pretty standard .NET conventions, which seem to be a bit more streamlined than the sort of style favored in Java.

Related

How to use the same hashmap in multiple threads

I have a HashMap that is created for each "mailer" class, and each "agent" class creates a mailer.
My problem is that each of my agents creates a mailer, which in turn creates a new HashMap.
What I'm trying to do is create one HashMap that will be used by all the agents (every agent is a thread).
This is the Agent class:
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;

public class Agent implements Runnable {
    private int id;
    private int n;
    private Mailer mailer;
    private static int counter;
    private List<Integer> received = new ArrayList<Integer>();

    @Override
    public void run() {
        System.out.println("Thread has started");
        n = 10;
        if (counter < n - 1) {
            this.id = ThreadLocalRandom.current().nextInt(0, n + 1);
            counter++;
        }
        Message m = new Message(this.id, this.id);
        this.mailer.getMap().put(this.id, new ArrayList<Message>());
        System.out.println(this.mailer.getMap());
        for (int i = 0; i < n; i++) {
            if (i == this.id) {
                continue;
            }
            this.mailer.send(i, m);
        }
        for (int i = 0; i < n; i++) {
            if (i == this.id) {
                continue;
            }
            if (this.mailer.getMap().get(i) == null) {
                continue;
            } else {
                this.received.add(this.mailer.readOne(this.id).getContent());
            }
        }
        System.out.println(this.id + "" + this.received);
    }
}
This is the Mailer class :
import java.util.HashMap;
import java.util.List;

public class Mailer {
    private HashMap<Integer, List<Message>> map = new HashMap<>();

    public void send(int receiver, Message m) {
        synchronized (map) {
            while (this.map.get(receiver) == null) {
                this.map.get(receiver);
            }
            if (this.map.get(receiver) == null) {
            } else {
                map.get(receiver).add(m);
            }
        }
    }

    public Message readOne(int receiver) {
        synchronized (map) {
            if (this.map.get(receiver) == null) {
                return null;
            } else if (this.map.get(receiver).size() == 0) {
                return null;
            } else {
                Message m = this.map.get(receiver).get(0);
                this.map.get(receiver).remove(0);
                return m;
            }
        }
    }

    public HashMap<Integer, List<Message>> getMap() {
        synchronized (map) {
            return map;
        }
    }
}
I have tried so far:
Creating the mailer object inside the run method in agent.
Going by your own answer to this question (where you made the map static), you've made two mistakes.
do not use static
static means there is one map for the entire JVM you run this on. That is not actually a good thing: now you can't create separate mailers on one JVM in the future, and you've made the code hard to test.
What you want is something else: a way to group a bunch of mailer threads together (all the mailers for one agent), but more discerning than a blanket "ALL mailers in the ENTIRE system belong to the one agent that will ever run".
A trivial way to do this is to pass the map in as a constructor argument. Alternatively, make the map part of the agent, pass the agent to the mailer constructor, and have the mailer ask the agent for the map each time. A sketch of the first option follows.
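For illustration only (the constructor wiring below is mine, not code from the question; it assumes Agent gains a constructor that accepts a Mailer):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Mailer {
    private final Map<Integer, List<Message>> map;

    // The map is supplied by whoever groups these mailers together,
    // rather than being created per mailer or shared JVM-wide via static.
    public Mailer(Map<Integer, List<Message>> map) {
        this.map = map;
    }

    // send(), readOne(), getMap() as before, all operating on the injected map
}

Then, at startup, create the map once and hand the same instance to every agent's mailer:

Map<Integer, List<Message>> shared = new HashMap<>();
for (int i = 0; i < 10; i++) {
    new Thread(new Agent(new Mailer(shared))).start();
}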
this is not thread safe
Thread safety is a crucial concept to get right, because the failure mode when you get it wrong is extremely annoying: the code may or may not work, and the JVM is free to decide whether it works right this moment based on the phase of the moon or the flip of a coin. The JVM is given this room so that it can make full use of the CPU's powers regardless of which CPU and operating system your app is running on.
Your code is not thread safe.
In any given moment, if two threads are both referring to the same field, you've got a problem: you need to ensure that this is done 'safely'. Neither the compiler nor the runtime will throw errors if you fail to do so, but you will get bizarre behaviour, because the JVM is free to give you stale caches, refuse to synchronize things, make ghosts of data appear, and more.
In this case the fix is near-trivial: use java.util.concurrent.ConcurrentHashMap instead; that's about all you'd have to do to make this safe.
Whenever you're interacting with a field whose type isn't conveniently thread-safe, or you're messing with the field itself (one thread assigns a new value to the field, another reads it -- you don't do that here; there is just the one field that always points at the same map, but you are messing with the map's contents), you need synchronized and/or volatile and/or the locks from the java.util.concurrent package, and in general it gets very complicated. Concurrent programming is hard.
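As a small sketch of that fix (the CopyOnWriteArrayList for the per-key message lists is my illustrative choice -- the lists need safe handling too, and other structures would also work):

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.CopyOnWriteArrayList;

public class Mailer {
    // Safe for concurrent access without external synchronized blocks.
    private final ConcurrentMap<Integer, List<Message>> map = new ConcurrentHashMap<>();

    public void send(int receiver, Message m) {
        // Atomically create the receiver's mailbox on first use, then append.
        map.computeIfAbsent(receiver, k -> new CopyOnWriteArrayList<Message>()).add(m);
    }
}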
I was able to solve this by changing the mailer to static in the Agent class

Using Chronicle Map producing garbage while using Streams API

Today I was experimenting with Chronicle Map. Here is a code sample:
package experimental;

import net.openhft.chronicle.core.values.IntValue;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.values.Values;

public class Tmp {
    public static void main(String[] args) {
        try (ChronicleMap<IntValue, User> users = ChronicleMap
                .of(IntValue.class, User.class)
                .name("users")
                .entries(100_000_000)
                .create()) {
            User user = Values.newHeapInstance(User.class);
            IntValue id = Values.newHeapInstance(IntValue.class);
            for (int i = 1; i < 100_000_000; i++) {
                user.setId(i);
                user.setBalance(Math.random() * 1_000_000);
                id.setValue(i);
                users.put(id, user);
                if (i % 100 == 0) {
                    System.out.println(i + ". " +
                            users.values()
                                    .stream()
                                    .max(User::compareTo)
                                    .map(User::getBalance)
                                    .get());
                }
            }
        }
    }

    public interface User extends Comparable<User> {
        int getId();
        void setId(int id);

        double getBalance();
        void setBalance(double balance);

        @Override
        default int compareTo(User other) {
            return Double.compare(getBalance(), other.getBalance());
        }
    }
}
As you can see in the above code, I am just creating a User object and putting it into the Chronicle Map, and after every 100th record I print the User with the maximum balance. Unfortunately, it produces garbage; monitoring it with VisualVM confirmed the allocations. It seems that using streams with Chronicle Map will produce garbage regardless.
So my questions are:
* Does this mean that I should not use the Streams API with Chronicle Map?
* Are there any other solutions/ways of doing this?
* How do I filter/search a Chronicle Map in the proper way? I have use cases other than just putting/getting data into it.
ChronicleMap's entrySet().iterator() (as well as the iterators on keySet() and values()) is implemented so that it dumps all the objects in a segment of the Chronicle Map into memory before iterating over them.
You can inspect how many segments you have by calling map.segments(). You can also configure the segment count during the ChronicleMap construction phase; check out the ChronicleMapBuilder javadoc.
So, during iteration, you should expect approximately numEntries / numSegments entries to be dumped into memory at once, where numEntries is the size of your Chronicle Map.
You can implement streaming processing on a Chronicle Map avoiding creating a lot of garbage, by reusing objects, via Segment Context API:
User[] maxUser = new User[1];
for (int i = 0; i < users.segments(); i++) {
    try (MapSegmentContext<IntValue, User, ?> c = users.segmentContext(i)) {
        c.forEachSegmentEntry((MapEntry<IntValue, User> e) -> {
            User user = e.value().get();
            if (maxUser[0] == null || user.compareTo(maxUser[0]) > 0) {
                // Note that you cannot just assign `maxUser[0] = user`:
                // that object will be reused by the segment context later
                // in the iteration, and its contents will be rewritten.
                // Check out the doc for Data.get().
                if (maxUser[0] == null) {
                    maxUser[0] = Values.newHeapInstance(User.class);
                }
                User newMaxUser = e.value().getUsing(maxUser[0]);
                // assert the object is indeed reused
                assert newMaxUser == maxUser[0];
            }
        });
    }
}
Link to doc for Data.get().
The code of the above example is adapted from here.

Compose variable number of ListenableFuture

I'm quite new to Futures and am stuck on chaining calls and creating a list of objects. I'm using Android; the API min is 19.
I want to code the method getAllFoo() below:
ListenableFuture<List<Foo>> getAllFoo() {
    // ...
}
I have these 2 methods available:
ListenableFuture<Foo> getFoo(int index) {
    // gets a Foo by its index
}

ListenableFuture<Integer> getNbFoo() {
    // gets the total number of Foo objects
}
Method Futures.allAsList() would work nicely here, but my main constraint is that each call to getFoo(int index) cannot occur until the previous one is completed.
As far as I understand it (and tested it), Futures.allAsList() "fans-out" the calls (all the calls start at the same time), so I can't use something like that:
ListenableFuture<List<Foo>> getAllFoo() {
    // ...
    List<ListenableFuture<Foo>> allFutureFoos = new ArrayList<>();
    for (int i = 0; i < size; i++) {
        allFutureFoos.add(getFoo(i));
    }
    ListenableFuture<List<Foo>> allFoos = Futures.allAsList(allFutureFoos);
    return allFoos;
}
I have this kind of (ugly) solution (that works):
// ...
final SettableFuture<List<Foo>> future = SettableFuture.create();
List<Foo> listFoos = new ArrayList<>();
addFooToList(future, 0, nbFoo, listFoos);
// ...
private ListenableFuture<List<Foo>> addFooToList(SettableFuture future, int idx, int size, List<Foo> allFoos) {
    Futures.addCallback(getFoo(idx), new FutureCallback<Foo>() {
        @Override
        public void onSuccess(Foo foo) {
            allFoos.add(foo);
            if ((idx + 1) < size) {
                addFooToList(future, idx + 1, size, allFoos);
            } else {
                future.set(allFoos);
            }
        }

        @Override
        public void onFailure(Throwable throwable) {
            future.setException(throwable);
        }
    });
    return future;
}
How can I implement that elegantly using ListenableFuture?
I found multiple related topics (like this or that), but they use hard-coded transforms and are not based on a variable number of transformations.
How can I compose ListenableFutures and get the same return value as Futures.allAsList(), but by chaining the calls (fan-in)?
Thanks!
As a general rule, it's better to chain derived futures together with transform/catching/whenAllSucceed/whenAllComplete than with manual addListener/addCallback calls. The transformation methods do more for you:
* present fewer opportunities to forget to set an output, thus hanging the program
* propagate cancellation
* avoid retaining memory longer than needed
* do tricks to reduce the chance of stack overflows
Anyway, I'm not sure there's a particularly elegant way to do this, but I suggest something along these lines (untested!):
ListenableFuture<Integer> countFuture = getNbFoo();
return countFuture.transformAsync(
    count -> {
        List<ListenableFuture<Foo>> results = new ArrayList<>();
        ListenableFuture<?> previous = countFuture;
        for (int i = 0; i < count; i++) {
            final int index = i;
            ListenableFuture<Foo> current = previous.transformAsync(
                unused -> getFoo(index),
                directExecutor());
            results.add(current);
            previous = current;
        }
        return allAsList(results);
    },
    directExecutor());

Mapping hash values to a range, with minimal collisions

Context
Hi, I'm working on an assignment for school that asks us to implement a hash table in Java. There are no requirements that collisions be kept to a minimum, but a low collision rate and speed seem to be the two most sought-after qualities in all the reading I've done.
Problem
I'd like some guidance on how to map the output of a hash function to a smaller range, without having >20% of my keys collide (yikes).
In all of the algorithms that I've explored, keys are mapped to the entire range of an unsigned 32 bit integer (or in many cases, 64, even 128 bit). I'm not finding much about this on here, Wikipedia, or in any of the hash-related articles / discussions I've come across.
In terms of the specifics of my implementation, I'm working in Java (mandate of my school), which is problematic since there are no unsigned types to work with. To get around this, I've been using the 64-bit long integer type, then using a bit mask to map back down to 32 bits. Instead of simply truncating, I XOR the top 32 bits with the bottom 32, then perform a bitwise AND to mask out any upper bits that might result in a negative value when I cast it down to a 32 bit integer. After all that, a separate function compresses the resulting hash value down to fit into the bounds of the hash table's inner array.
It ends up looking like:
int hash(String key) {
    long h = 0;
    for (int i = 0; i < key.length(); i++) {
        // do some stuff with each character in the key
    }
    h = h ^ (h >>> 32);            // fold the top 32 bits into the bottom 32
    return (int) (h & 2147483647); // mask off the sign bit before casting down
}
Where the inner-loop depends on the hash function (I've implemented a few: polynomial hashing, FNV1, SuperFastHash, and a custom one tailored to the input data).
They all basically perform horribly. I have yet to see fewer than 20% of keys collide. Even before I compress the hash values down to array indices, none of my hash functions will get me fewer than 10k collisions. My inputs are two text files, each ~220,000 lines. One is English words, the other is random strings of varying length.
My lecture notes recommend the following, for compressing the hashed keys:
(hashed key) % P
Where P is the largest prime < the size of the inner array.
Is this an accepted method of compressing hash values? I have a feeling it isn't, but since performance is so poor even before compression, I have a feeling it's not the primary culprit, either.
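For what it's worth, here is a sketch of that compression step (the helper name and the example prime are mine, chosen purely for illustration):

// Compress a 32-bit hash into an index for a table whose capacity is
// the prime P. Math.floorMod avoids negative indices, since Java ints
// are signed and a hash can be negative.
static int compress(int hash, int primeCapacity) {
    return Math.floorMod(hash, primeCapacity);
}

// Example: a table with 997 slots (997 is prime).
int index = compress("example".hashCode(), 997);

The older idiom (hash & 0x7FFFFFFF) % P achieves the same thing by clearing the sign bit first.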
I don't know if I fully understand your concrete problem, but I'll try to help with hash performance and collisions.
A hash-based container determines which bucket to store a key-value pair in from the key's hash value. Inside each bucket there is a structure (in HashMap's case, a LinkedList) in which the pair is stored.
If the hash value is usually the same, the bucket will usually be the same, so performance degrades badly. Let's see an example:
Consider this class
package hashTest;

import java.util.Hashtable;

public class HashTest {

    public static void main(String[] args) {
        Hashtable<MyKey, String> hm = new Hashtable<>();
        long ini = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++) {
            MyKey a = new HashTest().new MyKey(String.valueOf(i));
            hm.put(a, String.valueOf(i));
        }
        System.out.println(hm.size());
        long fin = System.currentTimeMillis();
        System.out.println("tiempo: " + (fin - ini) + " mls");
    }

    private class MyKey {
        private String str;

        public MyKey(String i) {
            str = i;
        }

        public String getStr() {
            return str;
        }

        @Override
        public int hashCode() {
            return 0;
        }

        @Override
        public boolean equals(Object o) {
            if (o instanceof MyKey) {
                MyKey aux = (MyKey) o;
                if (this.str.equals(aux.getStr())) {
                    return true;
                }
            }
            return false;
        }
    }
}
Note that hashCode in class MyKey always returns 0 as the hash. That is allowed by the hashCode contract (http://docs.oracle.com/javase/7/docs/api/java/lang/Object.html#hashCode()). If we run this program, the result is:
100000
tiempo: 62866 mls
That is very poor performance. Now let's change the MyKey hashCode implementation:
package hashTest;

import java.util.Hashtable;

public class HashTest {

    public static void main(String[] args) {
        Hashtable<MyKey, String> hm = new Hashtable<>();
        long ini = System.currentTimeMillis();
        for (int i = 0; i < 100000; i++) {
            MyKey a = new HashTest().new MyKey(String.valueOf(i));
            hm.put(a, String.valueOf(i));
        }
        System.out.println(hm.size());
        long fin = System.currentTimeMillis();
        System.out.println("tiempo: " + (fin - ini) + " mls");
    }

    private class MyKey {
        private String str;

        public MyKey(String i) {
            str = i;
        }

        public String getStr() {
            return str;
        }

        @Override
        public int hashCode() {
            return str.hashCode() * 31;
        }

        @Override
        public boolean equals(Object o) {
            if (o instanceof MyKey) {
                MyKey aux = (MyKey) o;
                if (this.str.equals(aux.getStr())) {
                    return true;
                }
            }
            return false;
        }
    }
}
Note that only the hashCode in MyKey has changed. Now when we run the code, the result is:
100000
tiempo: 47 mls
The performance is incredibly better now, after only a minor change. It is a very common practice to return your hash code multiplied by a prime number (in this case 31), computed from the same members that you use inside the equals method to determine whether two objects are the same (in this case, only str).
I hope this little example points you toward a solution for your problem.

Why is my HashMap implementation 10 times slower than the JDK's?

I would like to know what makes the difference, and what I should be aware of when writing such code.
* Used the same parameters and the same put() and get() methods when testing
* No printing during the timed section
* Used System.nanoTime() to measure runtime
* Tried it with int keys 1-10 and 10 values, so every single hash returns a unique index, which is the most optimal scenario
* My HashSet implementation, which is based on this map, is almost as fast as the JDK's
Here's my simple implementation:
public MyHashMap(int s) {
    this.TABLE_SIZE = s;
    table = new HashEntry[s];
}

class HashEntry {
    int key;
    String value;

    public HashEntry(int k, String v) {
        this.key = k;
        this.value = v;
    }

    public int getKey() {
        return key;
    }
}

int TABLE_SIZE;
HashEntry[] table;

public void put(int key, String value) {
    int hash = key % TABLE_SIZE;
    while (table[hash] != null && table[hash].getKey() != key)
        hash = (hash + 1) % TABLE_SIZE;
    table[hash] = new HashEntry(key, value);
}

public String get(int key) {
    int hash = key % TABLE_SIZE;
    while (table[hash] != null && table[hash].key != key)
        hash = (hash + 1) % TABLE_SIZE;
    if (table[hash] == null)
        return null;
    else
        return table[hash].value;
}
Here's the benchmark:
public static void main(String[] args) {
    long start = System.nanoTime();
    MyHashMap map = new MyHashMap(11);
    map.put(1, "A");
    map.put(2, "B");
    map.put(3, "C");
    map.put(4, "D");
    map.put(5, "E");
    map.put(6, "F");
    map.put(7, "G");
    map.put(8, "H");
    map.put(9, "I");
    map.put(10, "J");
    map.get(1);
    map.get(2);
    map.get(3);
    map.get(4);
    map.get(5);
    map.get(6);
    map.get(7);
    map.get(8);
    map.get(9);
    map.get(10);
    long end = System.nanoTime();
    System.out.println(end - start + " ns");
}
If you read the documentation of the HashMap class, you see that it implements a hash table based on the hashCode of the keys. This is dramatically more efficient than a brute-force search if the map contains a non-trivial number of entries, assuming a reasonable distribution of keys amongst the "buckets" that it sorts the entries into.
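The gist of that bucketing, as a rough sketch (the JDK's real implementation also spreads the high bits of the hash; this just shows the idea):

// Map a key to a bucket index; assumes the table capacity is a power of two.
int bucketIndex(Object key, int capacity) {
    return (capacity - 1) & key.hashCode();
}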
That said, benchmarking on the JVM is non-trivial and easy to get wrong; if you're seeing big differences with small numbers of entries, it could easily be a benchmarking error rather than the code. A rough shape for a less error-prone manual benchmark is sketched below.
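(The method names and iteration counts here are illustrative only; a harness such as JMH is the more rigorous option.)

import java.util.Map;

class Bench {
    // Warm up first so the JIT compiles the hot paths, then time many
    // rounds and report the average, instead of timing 20 operations once.
    static long averageNanos(Map<Integer, String> map, int rounds) {
        for (int r = 0; r < 10000; r++) {
            runOnce(map);               // warm-up; results discarded
        }
        long start = System.nanoTime();
        for (int r = 0; r < rounds; r++) {
            runOnce(map);
        }
        return (System.nanoTime() - start) / rounds;
    }

    static void runOnce(Map<Integer, String> map) {
        for (int i = 1; i <= 10; i++) {
            map.put(i, "v" + i);
            map.get(i);
        }
    }
}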
When it comes to performance, never assume anything.
Your assumption was "my HashSet implementation, which is based on this, is almost as fast as the JDK's". No, obviously it is not.
That is the tricky part of performance work: doubt everything unless you have measured with great accuracy. Worse, you even measured, and the measurement told you that your implementation is slower; and instead of checking your source, and the source of the thing you are measuring against, you decided that the measuring process must be wrong...
