When should a Spliterator stop splitting?

I understand that there is overhead in setting up the processing of a parallel Stream, and that processing in a single thread is faster if there are few items or the processing of each item is fast.
But, is there a similar threshold for trySplit(), a point where decomposing a problem into smaller chunks is counterproductive? I'm thinking by analogy to a merge sort switching to insertion sort for the smallest chunks.
If so, does the threshold depend on the relative cost of trySplit() and consuming an item in the course of tryAdvance()? Consider a split operation that's a lot more complicated than advancing an array index—splitting a lexically-ordered multiset permutation, for example. Is there a convention for letting clients specify the lower limit for a split when creating a parallel stream, depending on the complexity of their consumer? A heuristic the Spliterator can use to estimate the lower limit itself?
Or, alternatively, is it always safe to let the lower limit of a Spliterator be 1, and let the work-stealing algorithm take care of choosing whether to continue splitting or not?

In general you have no idea how much work is done in the consumer passed to tryAdvance or forEachRemaining. Neither the stream pipeline nor the Fork/Join pool knows this, as it depends on user-supplied code. It can be either much faster or much slower than the splitting procedure. For example, you may have a two-element input where processing each element takes an hour, so splitting even this input is very reasonable.
I usually split the input as much as I can. There are three tricks which can be used to improve the splitting:
If it's hard to split evenly, but you can track (or at least roughly estimate) the size of each sub-part, feel free to split unevenly. The stream implementation will do further splitting of the bigger part. Don't forget to report the SIZED and SUBSIZED characteristics when they actually hold.
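A minimal sketch of reporting these (assuming the position/limit fields of the permutation spliterator shown in the next trick; drop SIZED and SUBSIZED if your size is only an estimate):
@Override
public long estimateSize() {
    return limit - position; // exact here; a rough estimate is also allowed
}

@Override
public int characteristics() {
    // report SIZED/SUBSIZED only if estimateSize() is exact for this spliterator
    // and for every spliterator returned from trySplit()
    return ORDERED | SIZED | SUBSIZED;
}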
Move the hard part of splitting to the next tryAdvance/forEachRemaining call. For example, suppose that you have a known number of permutations and in trySplit you are going to jump to another permutation. Something like this:
public class MySpliterator implements Spliterator<String> {
    private long position;
    private String currentPermutation;
    private final long limit;

    MySpliterator(long position, long limit, String currentPermutation) {
        this.position = position;
        this.limit = limit;
        this.currentPermutation = currentPermutation;
    }

    @Override
    public Spliterator<String> trySplit() {
        if(limit - position <= 1)
            return null;
        long newPosition = (position+limit)>>>1;
        Spliterator<String> prefix =
            new MySpliterator(position, newPosition, currentPermutation);
        this.position = newPosition;
        this.currentPermutation = calculatePermutation(newPosition); // hard part
        return prefix;
    }
    ...
}
Instead, move the hard part to the next tryAdvance call, like this:
@Override
public Spliterator<String> trySplit() {
    if(limit - position <= 1)
        return null;
    long newPosition = (position+limit)>>>1;
    Spliterator<String> prefix =
        new MySpliterator(position, newPosition, currentPermutation);
    this.position = newPosition;
    this.currentPermutation = null; // defer the hard part to tryAdvance
    return prefix;
}

@Override
public boolean tryAdvance(Consumer<? super String> action) {
    if(currentPermutation == null)
        currentPermutation = calculatePermutation(position); // hard part
    ...
}
This way the hard part will also be executed in parallel with prefix processing.
If you have only a few elements left in the current spliterator (for example, fewer than 10) and a split is requested, it's probably good just to advance over half of your remaining elements, collecting them into an array, and then create an array-based spliterator for that prefix (similarly to how it's done in AbstractSpliterator.trySplit()). Here you control all the code, so you can measure in advance how much slower your normal trySplit is than tryAdvance and estimate the threshold at which you should switch to the array-based split.
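Here's a rough sketch of that fallback as an alternative trySplit for the MySpliterator above (the threshold constant is an assumption you'd calibrate by measuring your own trySplit against tryAdvance; requires java.util.Spliterators):
private static final int ARRAY_SPLIT_THRESHOLD = 10; // assumed value, tune by measurement

@Override
public Spliterator<String> trySplit() {
    long remaining = limit - position;
    if (remaining <= 1)
        return null;
    if (remaining < ARRAY_SPLIT_THRESHOLD) {
        // buffer the first half via tryAdvance and hand back an array-backed prefix,
        // similar to what AbstractSpliterator.trySplit() does
        Object[] buffer = new Object[(int) (remaining >>> 1)];
        int count = 0;
        while (count < buffer.length) {
            int idx = count;
            if (!tryAdvance(s -> buffer[idx] = s))
                break;
            count++;
        }
        return Spliterators.spliterator(buffer, 0, count, ORDERED | IMMUTABLE);
    }
    // otherwise do the normal halving split shown earlier
    long newPosition = (position + limit) >>> 1;
    Spliterator<String> prefix = new MySpliterator(position, newPosition, currentPermutation);
    this.position = newPosition;
    this.currentPermutation = null; // defer the hard part, as in the previous trick
    return prefix;
}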

Related

Bellman-Ford improvement: does it work?

I'm trying to improve the Bellman-Ford algorithm's performance and I would like to know if the improvement is correct.
I run the relaxation part not V-1 but V times, and I use a boolean variable that is set to true if any relaxation happened during an iteration of the outer loop. If no relaxation happened in the n-th iteration (n <= V), it returns from the loop with the shortest path, but if it still relaxes in the n = V iteration, that means we have a negative cycle.
I thought this might improve runtime, since sometimes we don't have to iterate V-1 times to find the shortest path and can return earlier, and it's also more elegant than checking for the cycle with a separate block of code.
AdjacencyListALD graph;
int[] distTo;
int[] edgeTo;

public BellmanFord(AdjacencyListALD g)
{
    graph = g;
}

public int findSP(int source, int dest)
{
    // initialization
    distTo = new int[graph.SIZE];
    edgeTo = new int[graph.SIZE];
    for (int i = 0; i < graph.SIZE; i++)
    {
        distTo[i] = Integer.MAX_VALUE;
    }
    distTo[source] = 0;
    // relaxing V-1 times + 1 for checking negative cycle = V times
    for (int i = 0; i < graph.SIZE; i++)
    {
        boolean hasRelaxed = false;
        for (int j = 0; j < graph.SIZE; j++)
        {
            for (int x = 0; x < graph.sources[j].size(); x++)
            {
                int s = j;
                int d = graph.sources[j].get(x).label;
                int w = graph.sources[j].get(x).weight;
                // skip unreachable sources to avoid integer overflow on MAX_VALUE + w
                if (distTo[s] != Integer.MAX_VALUE && distTo[d] > distTo[s] + w)
                {
                    distTo[d] = distTo[s] + w;
                    hasRelaxed = true;
                }
            }
        }
        if (!hasRelaxed)
            return distTo[dest];
    }
    System.out.println("Negative cycle detected");
    return -1;
}
Good comments on the need for testing. That's a given. But it doesn't address the underlying question, whether the OP's modifications to Bellman-Ford constitute an improvement to the algorithm. And the answer is, yes, this is actually a well-known improvement, as G. Bach pointed out in comments.
The OP's observation is that if, in any relaxation iteration, nothing relaxes, then there will be no changes in subsequent iterations and we can therefore just stop. Absolutely correct. There are no outside influences on the values assigned to the vertices. The only thing updating those values is the relaxation step itself. If it finds nothing to do on any iteration there is no way that something to do will materialize out of the aether. Ergo we can terminate.
This doesn't affect the complexity of the algorithm, nor does it help with worst case graphs, but it can reduce actual running time in practice.
As for running the relaxation one more time (|V| times rather than the usual |V|-1), this is just another way of stating the negative-cycle check that normally follows the relaxation step: after the |V|-1 relaxation iterations, we check whether any improvement can still be made, and if so, a negative cycle must exist.
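For comparison, here's what that conventional, separate check looks like, sketched against the OP's fields (the accessor pattern mirrors the question's code):
// Conventional negative-cycle check: after |V|-1 relaxation passes,
// scan all edges once more; any further improvement implies a negative cycle.
boolean hasNegativeCycle() {
    for (int s = 0; s < graph.SIZE; s++) {
        for (int x = 0; x < graph.sources[s].size(); x++) {
            int d = graph.sources[s].get(x).label;
            int w = graph.sources[s].get(x).weight;
            if (distTo[s] != Integer.MAX_VALUE && distTo[d] > distTo[s] + w) {
                return true; // still relaxable, so there is a negative cycle
            }
        }
    }
    return false;
}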
Bottom line: OP's approach is sound. Now, yes, test the code.

How can I evaluate a hash table implementation? (Using HashMap as reference)

Problem:
I need to compare 2 hash table implementations (well basically HashMap with another one) and make a reasonable conclusion.
I am not interested in 100% accuracy but just being in the right direction in my estimation.
I am interested in the difference not only per operation but mainly on the hashtable as a "whole".
I don't have a strict requirement on speed so if the other implementation is reasonably slower I can accept it but I do expect/require that the memory usage be better (since one of the hashtables is backed by primitive table).
What I did so far:
Originally I created my own custom "benchmark" with loops and many calls to hint at GC, to get a feeling for the difference, but I read online that using a standard tool is more reliable/appropriate.
Example of my approach (HashMapInterface is just a wrapper so I can switch among implementations.):
int[] keys = new int[10000000];
String[] values = new String[10000000];
for(int i = 0; i < keys.length; ++i) {
    keys[i] = i;
    values[i] = "" + i;
}
if(operation.equals("put")) {
    runOperation(map, keys, values);
}

public long[] runOperation(HashMapInterface<String> map, int[] keys, String[] values) {
    long min = Long.MAX_VALUE;
    long max = Long.MIN_VALUE;
    long run = 0;
    for(int i = 0; i < 10; ++i) {
        long start = System.currentTimeMillis();
        for(int j = 0; j < keys.length; ++j) {
            map.put(keys[j], values[j]);
        }
        long total = System.currentTimeMillis() - start;
        System.out.println(total/1000d + " seconds");
        if(total < min) {
            min = total;
        }
        if(total > max) {
            max = total;
        }
        run += total;
        map = null;
        map = createNewHashMap();
        hintsToGC();
    }
    return new long[] {min, max, run};
}

public void hintsToGC() {
    for(int i = 0; i < 20; ++i) {
        System.out.print(". ");
        System.gc();
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}
private HashMapInterface<String> createNewHashMap() {
    if(jdk) {
        return new JDKHashMapWrapper<String>();
    }
    else {
        return new AlternativeHashMapWrapper<String>();
    }
}

public class JDKHashMapWrapper implements HashMapInterface<String> {
    HashMap<Integer, String> hashMap;

    JDKHashMapWrapper() {
        hashMap = new HashMap<Integer, String>();
    }

    public String put(Integer key, String value) {
        return hashMap.put(key, value);
    }
    //etc
}
(I want to test put, get, contains and the memory utilization)
Can I be sure by using my approach that I can get reasonable measurements?
If not what would be the most appropriate tool to use and how?
Update:
- I also test with random numbers (also ~10M random numbers) using SecureRandom.
- When the hash table resizes I print the logical size of the hash table/size of the actual table to get the load factor
Update:
For my specific case, where I am also interested in integers, what kind of pitfalls are there with my approach?
UPDATE after #dimo414 comments:
Well at a minimum the hashtable as a "whole" isn't meaningful
I mean how the hashtable behaves under various loads both at runtime and in memory consumption.
Every data structure is a tradeoff of different methods
I agree. My trade-off is an acceptable access penalty for memory improvement
You need to identify what features you're interested in verifying
1) put(key, value);
2) get(key, value);
3) containsKey(key);
4) all the above when having many entries in the hash table
Some key considerations for using hash tables are the number of buckets allocated, the collision resolution strategy, and the shape of your data. Essentially, a hash table takes the key supplied by the application and hashes it to a bucket index smaller than the number of allocated buckets. When two key values hash to the same bucket, the implementation has to resolve the collision and return the right value. For example, one could have a sorted linked list for each bucket, and that list is searched.
If your data happens to have a lot of collisions, then your performance will suffer, because the Hash table implementation will spend too much time resolving the collision. On the other hand, if you have a very large number of buckets, you solve the collision problem at the expense of memory. Also, Java's built-in HashMap implementation will "rehash" if the number of entries gets larger than a certain amount - I imagine this is an expensive operation that is worth avoiding.
Since your key data is the positive integers from 1 to 10M, your test data looks good. I would also ensure that the different hash tables implementations were initialized to the same bucket size for a given test, otherwise it's not a fair comparison. Finally, I would vary the bucket size over a pretty significant range and rerun the tests to see how the implementations changed their behavior.
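For example (a sketch with illustrative numbers; how the alternative implementation exposes its capacity is an assumption), you can pin both implementations to the same starting table so neither gains an advantage from avoiding resizes:
// 1 << 24 = ~16.7M buckets holds 10M entries at a 0.75 load factor with no rehashing
Map<Integer, String> jdkMap = new HashMap<>(1 << 24, 0.75f);
// Hypothetical: use whatever equivalent constructor the alternative implementation provides
// AlternativeHashMapWrapper<String> altMap = new AlternativeHashMapWrapper<>(1 << 24, 0.75f);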
As I understand you are interested in both operations execution time and memory consumption of the maps in the test.
I will start with memory consumption, as this doesn't seem to be answered at all yet. What I propose is to use a small library called Classmexer. I personally use it when I need to get the 100% correct memory consumption of any object. It uses the Java agent approach (it's built on the Instrumentation API), which means that you need to add it as a parameter to the JVM executing your tests:
-javaagent:[PATH_TO]/classmexer.jar
The usage of the Classmexer is very simple. At any point of time you can get the memory consumption in bytes by executing:
MemoryUtil.deepMemoryUsageOf(mapIamInterestedIn, VisibilityFilter.ALL)
Note that with the visibility filter you can specify whether the memory calculation should be done for the object (our map) alone or for the map plus all other objects reachable through references. That's what VisibilityFilter.ALL is for. However, this means that the size you get back includes all objects you used for keys and values. Thus if you have 100 Integer/String entries, the reported size will include those as well.
For the timing aspect I would propose the JMH tool, as it is made for micro-benchmarking. There are plenty of examples online; for example, this article has map-testing examples that can guide you pretty well.
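A minimal JMH benchmark for the put case might look like this (a sketch; the wrapper type and key count mirror the question's code, and the fork/warmup settings are just illustrative):
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Thread)
@Fork(1)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
public class MapPutBenchmark {

    @Param({"10000000"})
    int size;

    int[] keys;
    String[] values;

    @Setup(Level.Iteration)
    public void setUp() {
        keys = new int[size];
        values = new String[size];
        for (int i = 0; i < size; i++) {
            keys[i] = i;
            values[i] = "" + i;
        }
    }

    @Benchmark
    public HashMapInterface<String> jdkPut() {
        HashMapInterface<String> map = new JDKHashMapWrapper(); // wrapper from the question
        for (int i = 0; i < size; i++) {
            map.put(keys[i], values[i]);
        }
        return map; // return the map so the JIT cannot eliminate the work
    }
}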
Note that you should be careful about when you call Classmexer's MemoryUtil, as it will interfere with the timing results if you call it inside the measured section. Furthermore, I am sure there are many other tools similar to Classmexer, but I like it because it's small and simple.
I was just doing something similar to this, and I ended up using the built in profiler in the Netbeans IDE. You can get really detailed info on both CPU and memory usage. I had originally written all my code in Eclipse, but Netbeans has an import feature for bringing in Eclipse projects and it set it all up no problem, if that is possibly your situation too.
For timing, you might also look at the StopWatch class in Apache Commons. It's a much more intuitive way of tracking time on targeted operations, e.g.:
StopWatch myMapTimer = new StopWatch();
HashMap<Integer, Integer> hashMap = new HashMap<>();

myMapTimer.start();
for (int i = 0; i < numElements; i++)
    hashMap.put(i, i);
myMapTimer.stop();

System.out.println(myMapTimer.getTime()); // time will be in milliseconds

String concatenation without allocation in java

Is there a way to concatenate two Strings (not final) without allocating memory?
For example, I have these two Strings:
final String SCORE_TEXT = "SCORE: ";
String score = "1000"; //or int score = 1000;
When I concatenate these two strings, a new String object is created.
font.drawMultiLine(batch, SCORE_TEXT + score, 50f, 670f);//this creates new string each time
Since this is done in the main game loop (executed ~60 times in one second), there are a lot of allocations.
Can I somehow do this without allocation?
The obvious solution is to not recreate the output String on every frame, but only when it changes.
One way to do this is to store it somewhere outside your main loop and update it when a certain event happens, i.e. the "score" actually changes. In your main loop you then just use that pre-created String.
If you can't/or don't want to have this event based approach, you can always store the "previous" score and only concatenate a new String when the previous score is different from the current score.
Depending on how often your score actually changes, this should cut out most reallocations. Unless of course the score changes at 60 fps, in which case this whole point is completely moot because nobody would be able to read the text you're printing.
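A minimal sketch of that caching approach (assuming the question's SCORE_TEXT, font, and batch; the field and method names here are illustrative):
// Cache the rendered text and rebuild it only when the score actually changes.
private int lastScore = -1;
private String scoreText = "";

private void renderScore(int score) {
    if (score != lastScore) {
        lastScore = score;
        scoreText = SCORE_TEXT + score; // the only allocation, and only on a change
    }
    font.drawMultiLine(batch, scoreText, 50f, 670f);
}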
It seems that drawMultiLine accepts not a String but a CharSequence. Thus you can probably implement your own CharSequence which does not actually concatenate the two strings. Here's a draft implementation:
public class ConcatenatedString implements CharSequence {
    final String left, right;
    final int leftLength;

    public ConcatenatedString(String left, String right) {
        this.left = left;
        this.right = right;
        this.leftLength = left.length();
    }

    @Override
    public int length() {
        return leftLength+right.length();
    }

    @Override
    public char charAt(int index) {
        return index < leftLength ? left.charAt(index) : right.charAt(index-leftLength);
    }

    @Override
    public CharSequence subSequence(int start, int end) {
        if(end <= leftLength)
            return left.substring(start, end);
        if(start >= leftLength)
            return right.substring(start-leftLength, end-leftLength);
        return toString().substring(start, end);
    }

    @Override
    public String toString() {
        return left.concat(right);
    }
}
Use it like this:
font.drawMultiLine(batch, new ConcatenatedString(SCORE_TEXT, score), 50f, 670f);
Internally, in your case, drawMultiLine just needs the length and charAt methods. Using ConcatenatedString you create only one new object. In contrast, when you use SCORE_TEXT + score, you create a temporary StringBuilder, which internally creates a char[] array, copies the input characters, resizes the array if necessary, and then creates the final String object, which allocates a new char[] array and copies the characters again. Thus it's likely that ConcatenatedString will be faster.
Didn't understand the question the first time around. Have you tried using the following?
SCORE_TEXT.concat(score);
I don't think you can populate a value without allocating memory for it. The best you can do is create a global String variable, assign the value of SCORE_TEXT + score to it, and use that global variable in the font.drawMultiLine() method.
This way you can minimize the amount of memory allocated, since a new String is created only when the score changes and the same variable is reused again and again.
String is designed to be immutable in Java. Use a StringBuilder instead.
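A minimal sketch of that idea (this relies on drawMultiLine accepting a CharSequence, as noted in the answer above; field names are illustrative):
// One StringBuilder reused every frame: setLength(0) resets it without releasing
// its internal buffer, so the appends below normally cause no new allocations.
private final StringBuilder scoreLine = new StringBuilder(32);

private void drawScore(int score) {
    scoreLine.setLength(0);
    scoreLine.append(SCORE_TEXT).append(score); // append(int) writes digits directly, no temporary String
    font.drawMultiLine(batch, scoreLine, 50f, 670f); // StringBuilder is a CharSequence
}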

Performance difference between assignment and conditional test

This question is specifically geared towards the Java language, but I would not mind feedback about this as a general concept. I would like to know which operation is faster, or whether there is any real difference, between assigning a value to a variable and testing for a value. For this issue we could have a large series of Boolean values that will receive many requests for changes. I would like to know whether testing for the need to change a value is a waste when weighed against simply changing the value during every request.
public static void main(String[] args){
    Boolean array[] = new Boolean[veryLargeValue];
    for(int i = 0; i < array.length; i++) {
        array[i] = randomTrueFalseAssignment;
    }
    for(int i = 400; i < array.length - 400; i++) {
        testAndChange(array, i);
    }
    for(int i = 400; i < array.length - 400; i++) {
        justChange(array, i);
    }
}
This could be the testAndChange method
public static void testAndChange(Boolean[] pArray, int ind) {
    if(pArray[ind])
        pArray[ind] = false;
}
This could be the justChange method
public static void justChange(Boolean[] pArray, int ind) {
    pArray[ind] = false;
}
If we were to end up with the very rare case that every value within the range supplied to the methods were false, would there be a point where one method would eventually become slower than the other? Is there a best practice for issues similar to this?
Edit: I wanted to add this to help clarify the question a bit more. I realize that the data type can be factored into the answer, as larger or more efficient data types can be utilized. I am more focused on the task itself: is the task of a test, "if (aConditionalTest)", slower, faster, or indeterminable without additional information (such as data type) compared to the task of an assignment, "x = aValue"?
As @TrippKinetics points out, there is a semantic difference between the two methods. Because you use Boolean instead of boolean, it is possible that one of the values is a null reference. In that case the first method (with the if-statement) will throw an exception when unboxing, while the second simply assigns values to all the elements in the array.
Assuming you use boolean[] instead of Boolean[]: optimization is an undecidable problem, and there are rare cases where adding an if-statement results in better performance. For instance, most processors use caches, and the if-statement can mean the executed code happens to fall on exactly two cache pages where, without the if, it would span more, resulting in cache misses. You might think you save an assignment instruction, but it comes at the cost of a fetch and a conditional branch (which can break the CPU pipeline), and assigning has more or less the same cost as fetching a value.
In general, however, one can assume that adding the if-statement is useless and will nearly always result in slower code, so you can quite safely state that the test will slow your code down.
More specifically on your question, there are faster ways to set a range to false. For instance using bitvectors like:
long[] data = new long[(veryLargeValue + 0x3f) >>> 6]; // a long holds 64 bits
//assign random values
int from = 400;
int to = veryLargeValue - 400;          // clear the bit range [from, to)
int low = from >>> 6;
int high = to >>> 6;
// assumes the range spans more than one word (low < high)
data[low] &= (1L << (from & 0x3f)) - 1; // keep only the bits below 'from' in the first word
for (int i = low + 1; i < high; i++) {
    data[i] = 0L;
}
data[high] &= -1L << (to & 0x3f);       // keep only the bits at or above 'to' in the last word
The advantage is that a processor can perform operations on 32- or 64-bits at once. Since a boolean is one bit, by storing bits into a long or int, operations are done in parallel.
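The standard library's java.util.BitSet packs bits into a long[] in the same way, so a simpler version of the sketch above (the sizes are illustrative) is:
import java.util.BitSet;

public class ClearRange {
    public static void main(String[] args) {
        int veryLargeValue = 1 << 20;           // illustrative size
        BitSet flags = new BitSet(veryLargeValue);
        flags.set(0, veryLargeValue);           // stand-in for the random assignments
        flags.clear(400, veryLargeValue - 400); // clears the whole range word-by-word internally
    }
}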

Efficient way to implement 'events since x' in Java

I want to be-able to ask an object 'how many events have occurred in the last x seconds' where the x is an argument.
e.g. how many events have occurred in the last 120 seconds..
My approach is linear in the number of events that have occurred, but I wanted to see what the most efficient way (space and time) to achieve this requirement would be:
public class TimeSinceStat {

    private List<DateTime> eventTimes = new ArrayList<>();

    public void apply() {
        eventTimes.add(DateTime.now());
    }

    public int eventsSince(int seconds) {
        DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
        for (int i = 0; i < eventTimes.size(); i++) {
            DateTime dateTime = eventTimes.get(i);
            if (dateTime.compareTo(startTime) > 0)
                return eventTimes.subList(i, eventTimes.size()).size();
        }
        return 0;
    }
}
(PS - i'm using JodaTime for the date/time representation)
Edit:
The key of this algorithm is to find all events that have happened in the last x seconds; the exact start time (e.g. now - 30 seconds) may or may not be in the collection.
Store the DateTimes in a TreeSet and then use tailSet to get the most recent events. This saves you from having to find the starting point by iteration (which is O(n)); it is found by search instead (which is O(log n)).
TreeSet<DateTime> eventTimes;

public int eventsSince(int seconds) {
    return eventTimes.tailSet(DateTime.now().minus(Seconds.seconds(seconds)), true).size();
}
Of course, you could also binary search on your sorted list, but this does the work for you.
Edit
If it's a concern that multiple events could occur at the same DateTime, you can take the exact same approach with a SortedMultiset from Guava:
TreeMultiset<DateTime> eventTimes;

public int eventsSince(int seconds) {
    return eventTimes.tailMultiset(
        DateTime.now().minus(Seconds.seconds(seconds)),
        BoundType.CLOSED
    ).size();
}
Edit x2
Here's a much more efficient approach that leverages the fact that you only log events that happened after all other events. With each event, store the number of events up to that date:
NavigableMap<DateTime, Integer> eventCounts = initEventMap();

public NavigableMap<DateTime, Integer> initEventMap() {
    NavigableMap<DateTime, Integer> map = new TreeMap<>();
    //prime the map to make subsequent operations much cleaner
    map.put(DateTime.now().minus(Seconds.seconds(1)), 0);
    return map;
}

private int totalCount() {
    //you can handle the edge condition here
    return eventCounts.lastEntry().getValue();
}

public void logEvent() {
    eventCounts.put(DateTime.now(), totalCount() + 1);
}
Then getting the count since a date is super efficient, just take the total and subtract the count of events that occurred before that date.
public int eventsSince(int seconds) {
    DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
    return totalCount() - eventCounts.lowerEntry(startTime).getValue();
}
This eliminates the inefficient iteration. It's a constant time lookup and an O(log n) lookup.
If you were implementing a data structure from scratch, and the data are not in sorted order, you'd want to construct a balanced order statistic tree (also see code here). This is just a regular balanced tree with the size of the tree rooted at each node maintained in the node itself.
The size fields enable efficient calculation of the "rank" of any key in the tree. You can do the desired range query by making two O(log n) probes into the tree for the ranks of the min and max range values, finally taking their difference.
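A hedged sketch of that rank query (the balancing logic is omitted, and the size field is assumed to be maintained by insert/delete):
// Node of an order statistic tree: each node caches the size of its subtree.
class Node {
    DateTime key;
    Node left, right;
    int size; // number of nodes in this subtree, maintained on insert/delete
}

// Number of keys strictly less than 'key' in the subtree rooted at 'node'.
static int rank(Node node, DateTime key) {
    if (node == null) return 0;
    if (key.compareTo(node.key) <= 0) {
        return rank(node.left, key);
    }
    int leftSize = node.left == null ? 0 : node.left.size;
    return leftSize + 1 + rank(node.right, key);
}

// Events with from <= time < to: the difference of two O(log n) rank probes
// (assuming the tree is kept balanced).
static int countInRange(Node root, DateTime from, DateTime to) {
    return rank(root, to) - rank(root, from);
}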
The proposed tree and set tail operations are great, except that the tail views will need time to construct, even though all you need is their size. The asymptotic complexity is the same as the OST, but the OST avoids this overhead completely. The difference could be meaningful if performance is very critical.
Of course I'd definitely use the standard library solution first and consider the OST only if the speed turned out to be inadequate.
Since DateTime already implements the Comparable interface, I would recommend storing the data in a TreeMap instead; you can use TreeMap#tailMap to get the portion of the map whose DateTimes fall within the desired time window.
Based on your code:
public class TimeSinceStat {

    //just in case two or more events start at the "same time"
    private NavigableMap<DateTime, Integer> eventTimes = new TreeMap<>();
    //if this class needs to be used in multiple threads, use ConcurrentSkipListMap instead of TreeMap

    public void apply() {
        DateTime dateTime = DateTime.now();
        Integer times = eventTimes.get(dateTime);
        eventTimes.put(dateTime, times == null ? 1 : times + 1);
    }

    public int eventsSince(int seconds) {
        DateTime startTime = DateTime.now().minus(Seconds.seconds(seconds));
        NavigableMap<DateTime, Integer> eventsInRange = eventTimes.tailMap(startTime, true);
        int counter = 0;
        for (Integer time : eventsInRange.values()) {
            counter += time;
        }
        return counter;
    }
}
Assuming the list is sorted, you could do a binary search. Java Collections already provides Collections.binarySearch, and DateTime implements Comparable (according to the JodaTime JavaDoc). binarySearch returns the index of the value you want if it exists in the list; otherwise it returns (-(insertion point) - 1), where the insertion point is the index of the first element greater than the one you searched for. So, all you need to do in your eventsSince method is:
// find the index of the first event strictly after startTime
int index = Collections.binarySearch(eventTimes, startTime);
if (index < 0) {
    index = -(index + 1); // not found: the insertion point is already the first element after startTime
} else {
    // check for dupes: skip over any events that happened exactly at startTime
    while (index < eventTimes.size() && !eventTimes.get(index).isAfter(startTime)) {
        index++;
    }
}
// everything from 'index' to the end happened within the window
return eventTimes.size() - index;
This should be a faster way to do what you want.
