Compressing a list of intervals - java

I need to compress a list of intervals into a smaller list. Let me explain:
For example, I have a list containing the intervals [1,4],[2,5],[5,7],[10,11],[13,20],[19,21], and I want to join the intersecting intervals and return the list [1,7],[10,11],[13,21], i.e. each group of intersecting intervals is transformed into a single longer interval.
For this I wrote this method:
public List compress(List<Interval> intervals) {
for (int j = 0; j < intervals.size(); j++) {
Interval a = intervals.get(j);
int aIndex = j;
for (int i = 1 + aIndex; i < intervals.size(); i++) {
Interval b = intervals.get(i);
if (a.intersects(b)) {
//the union method returns the union of two intervals. For example it returns [1,5] for [1,4] and [2,5]
intervals.add(j, a.union(b));
intervals.remove(j+1);
intervals.remove(i);
}
}
}
return intervals;
}
This seems to work fine for the first pair of intervals that are checked, but it stops there. That is, the final output is a list containing [1,5],[5,7],[10,11],[13,20],[19,21].
I have found that this may be a problem caused by removing elements from a list while iterating over it: https://codereview.stackexchange.com/questions/64011/removing-elements-on-a-list-while-iterating-through-it?newreg=cc3f30e670e24cc2b05cd1fa2492906f
But I have no idea how to get around this.
Please can anyone give me a hint?
Note: sorry if I did anything wrong, as this is my first post to Stack Overflow. Thanks to anyone who will try to help.
UPDATE:
Here is the solution I found after Maraboc proposed to create a copy of the list and manipulate that one.
That seems to work.
public List compress(List<Interval> intervals) {
List<Interval> man = intervals;
for (int j = 0; j < intervals.size(); j++) {
Interval a = intervals.get(j);
int aIndex = j;
for (int i = 1 + aIndex; i < intervals.size(); i++) {
Interval b = intervals.get(i);
if (a.intersects(b)) {
a = a.union(b);
man.add(j,a);
man.remove(j+1);
man.remove(i);
i--;
}
}
}
return intervals;
}
Thank you everyone.

You are actually NOT using an iterator; you are using for loops and selecting elements from the list based on their position, so you do not have to be afraid of the "cannot remove while iterating" issue.

I posted this question first to Code Review Stack Exchange by mistake. They redirected me to this place and the question was put on hold. But before that happened, [Maraboc](https://codereview.stackexchange.com/users/87685/maraboc) helped with an idea. He told me to create a new list and modify that one. I did that and it seems to work. The updated solution is in the updated question above.

Just for the fun of it I took an existing Interval Tree and added a minimise method that seems to work nicely.
/**
* Title: IntervalTree
*
* Description: Implements a static Interval Tree. i.e. adding and removal are not possible.
*
* This implementation uses longs to bound the intervals but could just as easily use doubles or any other linear value.
*
* @author OldCurmudgeon
* @version 1.0
* @param <T> - The Intervals to work with.
*/
public class IntervalTree<T extends IntervalTree.Interval> {
// My intervals.
private final List<T> intervals;
// My center value. All my intervals contain this center.
private final long center;
// My interval range.
private final long lBound;
private final long uBound;
// My left tree. All intervals that end below my center.
private final IntervalTree<T> left;
// My right tree. All intervals that start above my center.
private final IntervalTree<T> right;
public IntervalTree(List<T> intervals) {
if (intervals == null) {
throw new NullPointerException();
}
// Initially, my root contains all intervals.
this.intervals = intervals;
// Find my center.
center = findCenter();
/*
* Builds lefts out of all intervals that end below my center.
* Builds rights out of all intervals that start above my center.
* What remains contains all the intervals that contain my center.
*/
// Lefts contains all intervals that end below my center point.
final List<T> lefts = new ArrayList<>();
// Rights contains all intervals that start above my center point.
final List<T> rights = new ArrayList<>();
long uB = Long.MIN_VALUE;
long lB = Long.MAX_VALUE;
for (T i : intervals) {
long start = i.getStart();
long end = i.getEnd();
if (end < center) {
lefts.add(i);
} else if (start > center) {
rights.add(i);
} else {
// One of mine.
lB = Math.min(lB, start);
uB = Math.max(uB, end);
}
}
// Remove all those not mine.
intervals.removeAll(lefts);
intervals.removeAll(rights);
uBound = uB;
lBound = lB;
// Build the subtrees.
left = lefts.size() > 0 ? new IntervalTree<>(lefts) : null;
right = rights.size() > 0 ? new IntervalTree<>(rights) : null;
// Build my ascending and descending arrays.
/**
* @todo Build my ascending and descending arrays.
*/
}
/*
* Returns a list of all intervals containing the point.
*/
List<T> query(long point) {
// Check my range.
if (point >= lBound) {
if (point <= uBound) {
// Gather all intersecting ones.
List<T> found = intervals
.stream()
.filter((i) -> (i.getStart() <= point && point <= i.getEnd()))
.collect(Collectors.toList());
// Gather others.
if (point < center && left != null) {
found.addAll(left.query(point));
}
if (point > center && right != null) {
found.addAll(right.query(point));
}
return found;
} else {
// To right.
return right != null ? right.query(point) : Collections.<T>emptyList();
}
} else {
// To left.
return left != null ? left.query(point) : Collections.<T>emptyList();
}
}
/**
* Blends the two lists together.
*
* If the ends touch then make them one.
*
* @param a
* @param b
* @return
*/
static List<Interval> blend(List<Interval> a, List<Interval> b) {
// Either empty - return the other.
if (a.isEmpty()) {
return b;
}
if (b.isEmpty()) {
return a;
}
Interval aEnd = a.get(a.size() - 1);
Interval bStart = b.get(0);
ArrayList<Interval> blended = new ArrayList<>();
// Do they meet?
if (aEnd.getEnd() >= bStart.getStart() - 1) {
// Yes! merge them.
// Remove the last.
blended.addAll(a.subList(0, a.size() - 1));
// Add a combined one.
blended.add(new SimpleInterval(aEnd.getStart(), bStart.getEnd()));
// Add all but the first.
blended.addAll(b.subList(1, b.size()));
} else {
// Just join them.
blended.addAll(a);
blended.addAll(b);
}
return blended;
}
static List<Interval> blend(List<Interval> a, List<Interval> b, List<Interval>... more) {
List<Interval> blended = blend(a, b);
for (List<Interval> l : more) {
blended = blend(blended, l);
}
return blended;
}
List<Interval> minimise() {
// Calculate min of left and right.
List<Interval> minLeft = left != null ? left.minimise() : Collections.EMPTY_LIST;
List<Interval> minRight = right != null ? right.minimise() : Collections.EMPTY_LIST;
// My contribution.
long meLeft = minLeft.isEmpty() ? lBound : Math.max(lBound, minLeft.get(minLeft.size() - 1).getEnd());
long meRight = minRight.isEmpty() ? uBound : Math.min(uBound, minRight.get(0).getEnd());
return blend(minLeft,
Collections.singletonList(new SimpleInterval(meLeft, meRight)),
minRight);
}
private long findCenter() {
//return average();
return median();
}
protected long median() {
if (intervals.isEmpty()) {
return 0;
}
// Choose the median of all centers. Could choose just ends etc or anything.
long[] points = new long[intervals.size()];
int x = 0;
for (T i : intervals) {
// Take the mid point.
points[x++] = (i.getStart() + i.getEnd()) / 2;
}
Arrays.sort(points);
return points[points.length / 2];
}
/*
* What an interval looks like.
*/
public interface Interval {
public long getStart();
public long getEnd();
}
/*
* A simple implementation of an interval.
*/
public static class SimpleInterval implements Interval {
private final long start;
private final long end;
public SimpleInterval(long start, long end) {
this.start = start;
this.end = end;
}
@Override
public long getStart() {
return start;
}
@Override
public long getEnd() {
return end;
}
@Override
public String toString() {
return "{" + start + "," + end + "}";
}
}
/**
* Test code.
*
* @param args
*/
public static void main(String[] args) {
/**
* @todo Needs MUCH more rigorous testing.
*/
// Test data.
long[][] data = {
{1, 4}, {2, 5}, {5, 7}, {10, 11}, {13, 20}, {19, 21},};
List<Interval> intervals = new ArrayList<>();
for (long[] pair : data) {
intervals.add(new SimpleInterval(pair[0], pair[1]));
}
// Build it.
IntervalTree<Interval> test = new IntervalTree<>(intervals);
// Check minimise.
List<Interval> min = test.minimise();
System.out.println("Minimise test: ---");
System.out.println(min);
}
}

For your algorithm to work, the intervals must be sorted, say by start.
Then the inner for-i loop can grow a into the longest possible interval:
if (a.intersects(b)) {
a = a.union(b);
intervals.remove(i);
--i; // So we remain at old i value.
}
} // for i
intervals.set(j, a);
The reason for this requirement is that intervals A, B, C might form one long interval ABC when processed in that order, whereas in the order C, B, A they might not be merged at all.
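To make that concrete, here is a minimal sketch of the sort-then-merge approach (not the asker's original code). It assumes the question's Interval type also exposes a getStart() accessor returning an int, which is not shown in the question, and it needs java.util.ArrayList, java.util.Comparator and java.util.List:
public List<Interval> compress(List<Interval> intervals) {
    List<Interval> sorted = new ArrayList<>(intervals);
    // Sort by start so that every mergeable interval is adjacent to its predecessor.
    sorted.sort(Comparator.comparingInt(Interval::getStart)); // getStart() is an assumed accessor
    List<Interval> result = new ArrayList<>();
    for (Interval current : sorted) {
        int last = result.size() - 1;
        if (last >= 0 && result.get(last).intersects(current)) {
            // Grow the previously emitted interval instead of adding a new one.
            result.set(last, result.get(last).union(current));
        } else {
            result.add(current);
        }
    }
    return result;
}
With the question's data this yields [1,7],[10,11],[13,21] regardless of the input order.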

Indeed, the problem is that when you remove an element from the list, all subsequent elements are shifted. Around j, I'm guessing it doesn't change because you insert and then remove an item at the same location. But the removal at position i will shift all elements after it in the list.
What you could be doing, instead of removing the elements, is to put a null value at that position, so that the indices remain the same. You will then have to perform a final pass to remove null elements from the array (and check for nulls before comparing).
You could also run your inner loop backwards (from max i down to j) so that any element that gets shifted after i has already been processed.
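As a hedged sketch of that backwards-loop idea, using the question's Interval type: iterating i downwards means a removal only shifts elements that have already been examined, so no element is skipped. Note that this only fixes the index-shifting problem; as the first answer points out, the list still needs to be sorted by start (or the pass repeated) for chains of three or more intervals to collapse completely.
public List<Interval> compress(List<Interval> intervals) {
    for (int j = 0; j < intervals.size(); j++) {
        Interval a = intervals.get(j);
        for (int i = intervals.size() - 1; i > j; i--) {
            Interval b = intervals.get(i);
            if (a.intersects(b)) {
                a = a.union(b);          // grow a locally
                intervals.remove(i);     // safe: only already-visited indices shift
            }
        }
        intervals.set(j, a);             // write the grown interval back once
    }
    return intervals;
}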

Related

How to increase the probability of a specific number in Random() in Java [duplicate]

I want to choose a random item from a set, but the chance of choosing any item should be proportional to the associated weight
Example inputs:
item weight
---- ------
sword of misery 10
shield of happy 5
potion of dying 6
triple-edged sword 1
So, if I have 4 possible items, the chance of getting any one item without weights would be 1 in 4.
In this case, a user should be 10 times more likely to get the sword of misery than the triple-edged sword.
How do I make a weighted random selection in Java?
I would use a NavigableMap
public class RandomCollection<E> {
private final NavigableMap<Double, E> map = new TreeMap<Double, E>();
private final Random random;
private double total = 0;
public RandomCollection() {
this(new Random());
}
public RandomCollection(Random random) {
this.random = random;
}
public RandomCollection<E> add(double weight, E result) {
if (weight <= 0) return this;
total += weight;
map.put(total, result);
return this;
}
public E next() {
double value = random.nextDouble() * total;
return map.higherEntry(value).getValue();
}
}
Say I have a list of animals dog, cat, horse with probabilities as 40%, 35%, 25% respectively
RandomCollection<String> rc = new RandomCollection<>()
.add(40, "dog").add(35, "cat").add(25, "horse");
for (int i = 0; i < 10; i++) {
System.out.println(rc.next());
}
There is now a class for this in Apache Commons: EnumeratedDistribution
Item selectedItem = new EnumeratedDistribution<>(itemWeights).sample();
where itemWeights is a List<Pair<Item, Double>>, like (assuming Item interface in Arne's answer):
final List<Pair<Item, Double>> itemWeights = new ArrayList<>();
for (Item i: itemSet) {
itemWeights.add(new Pair(i, i.getWeight()));
}
or in Java 8:
itemSet.stream().map(i -> new Pair(i, i.getWeight())).collect(toList());
Note: Pair here needs to be org.apache.commons.math3.util.Pair, not org.apache.commons.lang3.tuple.Pair.
You will not find a framework for this kind of problem, as the requested functionality is nothing more than a simple function. Do something like this:
interface Item {
double getWeight();
}
class RandomItemChooser {
public Item chooseOnWeight(List<Item> items) {
double completeWeight = 0.0;
for (Item item : items)
completeWeight += item.getWeight();
double r = Math.random() * completeWeight;
double countWeight = 0.0;
for (Item item : items) {
countWeight += item.getWeight();
if (countWeight >= r)
return item;
}
throw new RuntimeException("Should never be shown.");
}
}
There is a straightforward algorithm for picking an item at random, where items have individual weights:
calculate the sum of all the weights
pick a random number that is 0 or greater and is less than the sum of the weights
go through the items one at a time, subtracting their weight from your random number until you get the item where the random number is less than that item's weight
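A minimal sketch of those three steps, assuming the same hypothetical Item type with a getWeight() method used in the other answers (needs java.util.List and java.util.Random):
static Item pickWeighted(List<Item> items, Random random) {
    double totalWeight = 0.0;
    for (Item item : items) {
        totalWeight += item.getWeight();            // step 1: sum of all weights
    }
    double r = random.nextDouble() * totalWeight;   // step 2: 0 <= r < total
    for (Item item : items) {
        if (r < item.getWeight()) {                 // step 3: r fell inside this item's slot
            return item;
        }
        r -= item.getWeight();
    }
    return items.get(items.size() - 1);             // guard against floating-point drift
}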
Use an alias method
If you're going to roll a lot of times (as in a game), you should use an alias method.
The code below is a rather long implementation of such an alias method, but that is mostly because of the initialization part. Retrieval of elements is very fast (see the next and applyAsInt methods: they don't loop).
Usage
Set<Item> items = ... ;
ToDoubleFunction<Item> weighter = ... ;
Random random = new Random();
RandomSelector<Item> selector = RandomSelector.weighted(items, weighter);
Item drop = selector.next(random);
Implementation
This implementation:
uses Java 8;
is designed to be as fast as possible (well, at least, I tried to do so using micro-benchmarking);
is totally thread-safe (keep one Random in each thread for maximum performance, use ThreadLocalRandom?);
fetches elements in O(1), unlike what you mostly find on the internet or on StackOverflow, where naive implementations run in O(n) or O(log(n));
keeps the items independent of their weight, so an item can be assigned different weights in different contexts.
Anyways, here's the code. (Note that I maintain an up to date version of this class.)
import static java.util.Objects.requireNonNull;
import java.util.*;
import java.util.function.*;
public final class RandomSelector<T> {
public static <T> RandomSelector<T> weighted(Set<T> elements, ToDoubleFunction<? super T> weighter)
throws IllegalArgumentException {
requireNonNull(elements, "elements must not be null");
requireNonNull(weighter, "weighter must not be null");
if (elements.isEmpty()) { throw new IllegalArgumentException("elements must not be empty"); }
// Array is faster than anything. Use that.
int size = elements.size();
T[] elementArray = elements.toArray((T[]) new Object[size]);
double totalWeight = 0d;
double[] discreteProbabilities = new double[size];
// Retrieve the probabilities
for (int i = 0; i < size; i++) {
double weight = weighter.applyAsDouble(elementArray[i]);
if (weight < 0.0d) { throw new IllegalArgumentException("weighter may not return a negative number"); }
discreteProbabilities[i] = weight;
totalWeight += weight;
}
if (totalWeight == 0.0d) { throw new IllegalArgumentException("the total weight of elements must be greater than 0"); }
// Normalize the probabilities
for (int i = 0; i < size; i++) {
discreteProbabilities[i] /= totalWeight;
}
return new RandomSelector<>(elementArray, new RandomWeightedSelection(discreteProbabilities));
}
private final T[] elements;
private final ToIntFunction<Random> selection;
private RandomSelector(T[] elements, ToIntFunction<Random> selection) {
this.elements = elements;
this.selection = selection;
}
public T next(Random random) {
return elements[selection.applyAsInt(random)];
}
private static class RandomWeightedSelection implements ToIntFunction<Random> {
// Alias method implementation O(1)
// using Vose's algorithm to initialize O(n)
private final double[] probabilities;
private final int[] alias;
RandomWeightedSelection(double[] weights) {
int size = weights.length;
double average = 1.0d / size;
// The final fields must be initialized here: the scaled probability table and the alias table.
this.probabilities = new double[size];
this.alias = new int[size];
int[] small = new int[size];
int smallSize = 0;
int[] large = new int[size];
int largeSize = 0;
// Describe a column as either small (below average) or large (above average).
for (int i = 0; i < size; i++) {
if (weights[i] < average) {
small[smallSize++] = i;
} else {
large[largeSize++] = i;
}
}
// For each column, saturate a small probability to average with a large probability.
while (largeSize != 0 && smallSize != 0) {
int less = small[--smallSize];
int more = large[--largeSize];
probabilities[less] = weights[less] * size;
alias[less] = more;
weights[more] += weights[less] - average;
if (weights[more] < average) {
small[smallSize++] = more;
} else {
large[largeSize++] = more;
}
}
// Flush unused columns.
while (smallSize != 0) {
probabilities[small[--smallSize]] = 1.0d;
}
while (largeSize != 0) {
probabilities[large[--largeSize]] = 1.0d;
}
}
@Override public int applyAsInt(Random random) {
// Call random once to decide which column will be used.
int column = random.nextInt(probabilities.length);
// Call random a second time to decide which will be used: the column or the alias.
if (random.nextDouble() < probabilities[column]) {
return column;
} else {
return alias[column];
}
}
}
}
public class RandomCollection<E> {
private final NavigableMap<Double, E> map = new TreeMap<Double, E>();
private double total = 0;
public void add(double weight, E result) {
if (weight <= 0 || map.containsValue(result))
return;
total += weight;
map.put(total, result);
}
public E next() {
double value = ThreadLocalRandom.current().nextDouble() * total;
return map.ceilingEntry(value).getValue();
}
}
A simple (even naive?), but (as I believe) straightforward method:
/**
* Draws an integer from a given range (excluding the upper limit).
* <p>
* Simulates Python's randrange method.
*
* @param min: the smallest value that can be drawn.
* @param max: the upper limit (exclusive).
* @return The value drawn.
*/
public static int randomInt(int min, int max)
{return (int) (min + Math.random() * (max - min));}
/**
* Tests whether all the inner vectors of a given matrix
* have the same expected length.
* @param matrix: the matrix whose vectors' lengths will be measured.
* @param expectedLength: the length each vector should have.
* @return false if at least one vector has a different length.
*/
public static boolean haveAllVectorsEqualLength(int[][] matrix, int expectedLength){
for(int[] vector: matrix){if (vector.length != expectedLength) {return false;}}
return true;
}
/**
* Draws an integer from a given range
* by weighted values.
*
* @param ticketBlock: matrix with values and weights for the drawing. All its
* vectors should have length two. The weights, instead of percentages, should be
* given as integers according to how rare each value should be drawn, the rarest
* receiving the smallest weight.
* @return The value drawn.
*/
public static int weightedRandomInt(int[][] ticketBlock) throws RuntimeException {
boolean theVectorsHaventAllLengthTwo = !(haveAllVectorsEqualLength(ticketBlock, 2));
if (theVectorsHaventAllLengthTwo)
{throw new RuntimeException("The given matrix has, at least, one vector with length lower or higher than two.");}
// Need to test for duplicates or null values in ticketBlock!
// Raffle urn building:
int raffleUrnSize = 0, urnIndex = 0, blockIndex = 0, repetitionCount = 0;
for(int[] ticket: ticketBlock){raffleUrnSize += ticket[1];}
int[] raffleUrn = new int[raffleUrnSize];
// Raffle urn filling:
while (urnIndex < raffleUrn.length){
do {
raffleUrn[urnIndex] = ticketBlock[blockIndex][0];
urnIndex++; repetitionCount++;
} while (repetitionCount < ticketBlock[blockIndex][1]);
repetitionCount = 0; blockIndex++;
}
return raffleUrn[randomInt(0, raffleUrn.length)];
}

How do you split a long value at every n digits, and then add it to a List in Java

I have converted a String value that consists solely of numbers into a Long value. Now I want to split this Long value every n digits, and add those "sub-Longs" into a List.
Long myLong = Long.valueOf(str).longValue();
I want to split myLong at every nth digit, and add those sections into a List<Long>.
How would I go about doing this?
You would have to use String.substring(...) and a for loop to make this work. I've edited in the rest of the code, with explanatory comments added as well.
public static void main(String[] args) {
// Just for testing.
StringGrouper grouper = new StringGrouper();
// Have tested for 2, 3, 4, and 5
List<Long> output = grouper.makeList(12892374897L, 4);
// Sends out the output. Note that this calls the List's toString method.
System.out.println(output);
}
/**
* @param l
* @param spacing
*/
private List<Long> makeList(long l, int spacing) {
String longStr = String.valueOf(l);
// Size the list by digit count, not by the numeric value, to avoid a huge initial capacity.
List<Long> output = new ArrayList<>(longStr.length() / spacing + 1);
// System.err.println(longStr);
for (int x = 0; x < longStr.length(); x++) {
// When the remainder is 0, that's where the grouping starts and ends
if (x % spacing == 0) {
// If it does not overflow, send it in
if ((x + spacing) < longStr.length()) {
String element = longStr.substring(x, x + spacing);
// System.err.println(element);
output.add(Long.parseLong(element));
} else {
// If it does overflow, put in the remainder
output.add(Long.parseLong(longStr.substring(x, longStr.length())));
}
}
}
return output;
}

How to recursively get all combinations in Java?

What is the best way in Java to write a recursive function that gets all combinations of elements taken from several sets of candidates?
In general the number of candidate sets is not fixed, so a recursive solution seems appropriate for this task. As an example, for the given candidate sets
[1,2]
[3,4]
[5,6,7]
I should get 12 combinations:
[1,3,5] [2,3,5]
[1,4,5] [2,4,5]
[1,3,6] [2,3,6]
[1,4,6] [2,4,6]
[1,3,7] [2,3,7]
[1,4,7] [2,4,7]
The candidate sets are represented as a list of lists: List<List<T>>
I encountered this same problem several years ago. I solved it by iterating through the result list with an odometer.
The number of wheels in the odometer is the number of input sets. The figures on each wheel are the members of the corresponding set. To get the next combination, roll the rightmost odometer wheel. If it has turned all the way around, roll the one to its left, and so on.
For example:
Wheel 0 values: [1,2]
Wheel 1 values: [3,4]
Wheel 2 values: [5,6,7]
Start with odometer reading (1,3,5). Advance to (1,3,6), (1,3,7). Then roll the next wheel as well, to (1,4,5), (1,4,6) and (1,4,7). Continue.
Odometer wheels as indices
Alternatively, you can represent the wheels as indices into the corresponding list.
Wheel 0 values: [0,1]
Wheel 1 values: [0,1]
Wheel 2 values: [0,1,2]
Start with odometer reading (0,0,0). Advance to (0,0,1), (0,0,2). Then roll the next wheel as well, to (0,1,0), (0,1,1) and (0,1,2). Continue. For each reading, translate to the result list by using the odometer wheel readings as indices into the input lists.
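A hedged sketch of this wheels-as-indices variant (plain Java, generic over the element type, needs java.util.ArrayList and java.util.List, and assumes every candidate set is non-empty); counters[k] is the current position of wheel k, and rolling works exactly as described above:
static <T> List<List<T>> combinations(List<List<T>> sets) {
    List<List<T>> result = new ArrayList<>();
    int[] counters = new int[sets.size()];
    while (true) {
        List<T> combo = new ArrayList<>();
        for (int k = 0; k < sets.size(); k++) {
            combo.add(sets.get(k).get(counters[k]));   // translate indices into values
        }
        result.add(combo);
        int wheel = sets.size() - 1;                   // roll the rightmost wheel
        while (wheel >= 0 && ++counters[wheel] == sets.get(wheel).size()) {
            counters[wheel] = 0;                       // wrapped around: roll the wheel to its left
            wheel--;
        }
        if (wheel < 0) {
            return result;                             // every wheel wrapped: all combinations emitted
        }
    }
}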
Odometer wheels as iterators
As another alternative, you can represent the wheels as iterators into the input collections. This is more general than the prior two approaches. It works even if the input collections are not accessible by index. And it's scalable. And this is the approach I used several years ago.
The total number of combinations is the product of the sizes of the candidate sets. Each result set's size is equal to the number of candidate sets.
You don't need recursion for the solution. Just go through each candidate set. In this example, the first has two values, 1 and 2. The first 6 result sets (half of them) get the value 1. The next half get 2.
Onto the next candidate set, there are two values, 3 and 4. But this time, alternate assigning them in groups of 3, rather than 6. So the first 3 result sets get 3, the next 3 sets get 4, the next 3 get 3, and so on.
The next candidate set has three values: 5, 6, and 7. You'll now be rotating which value you assign for each result set (rotating after every single assignment). If there were more candidate sets, or different numbers of values in them, the amount you assign before rotating to the next value would change, but you can figure this out programmatically.
You don't need recursion. Just use the size of the list of sets and then the size of each set. You can keep the results open to the addition of further elements, in case you get more sets to mix in later.
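That said, since the question literally asks for a recursive version, here is a short depth-first backtracking sketch for comparison (my own formulation, not taken from the answers above):
static <T> void combine(List<List<T>> candidates, int depth,
                        List<T> current, List<List<T>> result) {
    if (depth == candidates.size()) {
        result.add(new ArrayList<>(current));   // one full combination assembled
        return;
    }
    for (T value : candidates.get(depth)) {
        current.add(value);                     // choose a value from this candidate set
        combine(candidates, depth + 1, current, result);
        current.remove(current.size() - 1);     // undo the choice (backtrack)
    }
}
// Usage: List<List<Integer>> all = new ArrayList<>();
//        combine(candidateSets, 0, new ArrayList<>(), all);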
Thank you all for your replies.
Andy Thomas, quite interesting idea with odometer. Will give it a try a bit later. For now I've implemented it as ThatOneCloud suggested.
Here's what I've got (for Integer items; it can be generalized if needed):
public List<List<Integer>> makeCombinations(List<List<Integer>> candidates) {
List<List<Integer>> result = new ArrayList<List<Integer>>();
// calculate result size
int size = 1;
for (List<Integer> candidateSet : candidates)
size *= candidateSet.size();
// make result
for (int i = 0; i < size; i++)
result.add(new ArrayList<Integer>());
// fill result
int pos = 1;
for (List<Integer> candidateSet : candidates)
fillPosition(candidateSet, result, countRepeats(candidates, pos++));
// return
return result;
}
public int countRepeats(List<List<Integer>> candidates, int pos) {
int repeats = 1;
for (int i = pos; i < candidates.size(); i++)
repeats *= candidates.get(i).size();
return repeats;
}
public void fillPosition( List<Integer> candidateSet,
List<List<Integer>> result,
int repeats) {
int idx = 0;
while (idx < result.size()) {
for (int item : candidateSet) {
for (int i = 0; i < repeats; i++) {
result.get(idx++).add(item);
}
}
}
}
And here's another version (Odometer, as Andy Thomas suggested)
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
public class Odometer<T> implements Iterable<List<T>> {
private class Wheel {
List<T> values;
int idx = 0;
/**
* Create an odometer wheel from list of values
* @param v
*/
protected Wheel (List<T> v) {
if (v == null) throw new NullPointerException("can't create an instance of Wheel.class with null values");
if (v.isEmpty()) throw new IllegalArgumentException("can't create an instance of Wheel.class with no values");
this.values = v;
}
/**
* Get wheel value
* @return
*/
protected T value() {
return values.get(idx);
}
/**
* switch an odometer wheel one step
* @return TRUE if the wheel has made a full cycle and switched back to the first item
*/
protected boolean next() {
if (idx >= values.size() - 1) {
idx = 0;
return true;
} else {
idx++;
return false;
}
}
}
/**
* list of wheels
*/
private List<Wheel> wheels = new ArrayList<Wheel>();
/**
* Create an odometer from several lists of values
* (each List<T> is a list of values for one odometer wheel)
* @param values
*/
public Odometer(List<List<T>> values) {
for (List<T> v : values)
wheels.add(new Wheel(v));
}
/**
* Get odometer value
* @return
*/
public List<T> get() {
List<T> result = new ArrayList<T>();
for (Wheel wheel : wheels) {
result.add(wheel.value());
}
return result;
}
/**
* Switch to next value
* @return TRUE if the full cycle is finished
*/
public boolean next() {
for (int i = wheels.size() - 1; i >= 0; i--)
if (!wheels.get(i).next()) return false;
return true;
}
/**
* Reset odometer
*/
public void reset() {
for (Wheel wheel : wheels)
wheel.idx = 0;
}
/**
* Iterator
*/
@Override
public Iterator<List<T>> iterator() {
reset();
Iterator<List<T>> it = new Iterator<List<T>>() {
private boolean last = false;
@Override
public boolean hasNext() {
return !last;
}
@Override
public List<T> next() {
List<T> result = get();
last = Odometer.this.next();
return result;
}
@Override
public void remove() {
throw new UnsupportedOperationException();
}
};
return it;
}
public static void main(String [] args) {
List<Integer> l1 = new ArrayList<Integer>(); l1.add(1); l1.add(2);
List<Integer> l2 = new ArrayList<Integer>(); l2.add(3); l2.add(4); l2.add(5);
List<Integer> l3 = new ArrayList<Integer>(); l3.add(6); l3.add(7);
List<List<Integer>> list = new ArrayList<List<Integer>>(); list.add(l1); list.add(l2); list.add(l3);
Odometer<Integer> odometer = new Odometer<Integer>(list);
for (List<Integer> value : odometer) {
System.out.println(value);
}
}
}

Time Complexity of finding a basin

The following algorithm is used to find a basin in a matrix. The whole question is as follows:
A 2-D matrix is given where each cell represents the height of that cell. Water
can flow from a cell with a higher height to a lower one. A basin is a region
where no neighbouring cell (left, right, up, down, diagonal) has a lower height.
You have to find the maximum-size basin block.
I have implemented the code below. I am looking for its time complexity. In my opinion the time complexity is O(n * m), where n and m are the number of rows and columns of the matrix. Please verify.
public final class Basin {
private Basin() {}
private static enum Direction {
NW(-1, -1), N(0, -1), NE(-1, 1), E(0, 1), SE(1, 1), S(1, 0), SW(1, -1), W(-1, 0);
private int rowDelta;
private int colDelta;
Direction(int rowDelta, int colDelta) {
this.rowDelta = rowDelta;
this.colDelta = colDelta;
}
public int getRowDelta() {
return rowDelta;
}
public int getColDelta() {
return colDelta;
}
}
private static class BasinCount {
private int count;
private boolean isBasin;
private int item;
BasinCount(int count, boolean basin, int item) {
this.count = count;
this.isBasin = basin;
this.item = item;
}
};
/**
* Returns the maximum basin.
* If more than one maximum basin exists, returns an arbitrary one of them.
*
* @param m : the input matrix
* @return : returns the basin item and its size.
*/
public static BasinData getMaxBasin(int[][] m) {
if (m.length == 0) { throw new IllegalArgumentException("The matrix should contain at least one element."); }
final boolean[][] visited = new boolean[m.length][m[0].length];
final List<BasinCount> basinCountList = new ArrayList<>();
for (int i = 0; i < m.length; i++) {
for (int j = 0; j < m[0].length; j++) {
if (!visited[i][j]) {
basinCountList.add(scan(m, visited, i, j, m[i][j], new BasinCount(0, true, m[i][j])));
}
}
}
return getMaxBasin(basinCountList);
}
private static BasinData getMaxBasin(List<BasinCount> basinCountList) {
int maxCount = Integer.MIN_VALUE;
int item = 0;
for (BasinCount c : basinCountList) {
if (c.isBasin) {
if (c.count > maxCount) {
maxCount = c.count;
item = c.item;
}
}
}
return new BasinData(item, maxCount);
}
private static BasinCount scan(int[][] m, boolean[][] visited, int row, int col, int item, BasinCount baseCount) {
// array out of index
if (row < 0 || row == m.length || col < 0 || col == m[0].length) return baseCount;
// neighbor "m[row][col]" is lesser than me. now i cannot be the basin.
if (m[row][col] < item) {
baseCount.isBasin = false;
return baseCount;
}
// my neighbor "m[row][col]" is greater than me, thus not to add it to the basin.
if (m[row][col] > item) return baseCount;
// my neighbor is equal to me, but i happen to have visited him already. thus simply return without adding count.
// this is optimistic recursion as described by rolf.
if (visited[row][col]) {
return baseCount;
}
visited[row][col] = true;
baseCount.count++;
for (Direction dir : Direction.values()) {
scan(m, visited, row + dir.getRowDelta(), col + dir.getColDelta(), item, baseCount);
/**
* once we know that the current 'item' is not a basin, we still want to explore the other directions.
* With the commented out code - consider: m3
* If the first 1 to be picked up is "1 # row2, col4." This hits zero, marks basin false and returns.
* Next time it starts with "1 # row 0, col 0". This never encounters zero, because "1 # row2, col4." is visited.
* this gives a false answer.
*/
// if (!baseCount.basin) {
// System.out.println(baseCount.item + "-:-:-");
// return baseCount;
// }
}
return baseCount;
}
Yes, your code (assuming it works; I have not tested it) is O(n * m) in time, and O(n * m) in space.
Complexities cannot be lower than O(n * m), since any cell can be a part of a neighbouring max-basin in the general case, and all must therefore be (in general) examined. Your complexity is O(n * m) due to the two nested for-loops in getMaxBasin, and the fact that visited[i][j] can only be set at a single place (inside scan()), and prohibits later visits of the same cell.
Due to recursion, every time you chain a call to scan(), you are adding to the stack. With a sufficiently long chain of scan() calls, you could run into stack limits. The worst-case scenario is a zig-zag pattern so that the stack ends up containing a scan() call for each and every cell.
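If that zig-zag worst case is a practical concern, the recursion can be replaced by an explicit stack. Below is a hedged sketch (my rewrite, not the poster's code) that keeps scan()'s logic but drives it with a java.util.ArrayDeque, so the JVM call stack stays shallow:
private static BasinCount scanIterative(int[][] m, boolean[][] visited,
                                        int startRow, int startCol, int item,
                                        BasinCount baseCount) {
    Deque<int[]> stack = new ArrayDeque<>();
    stack.push(new int[] {startRow, startCol});
    while (!stack.isEmpty()) {
        int[] cell = stack.pop();
        int row = cell[0], col = cell[1];
        // Same guards as the recursive scan(), just expressed with continue.
        if (row < 0 || row == m.length || col < 0 || col == m[0].length) continue;
        if (m[row][col] < item) { baseCount.isBasin = false; continue; }
        if (m[row][col] > item || visited[row][col]) continue;
        visited[row][col] = true;
        baseCount.count++;
        for (Direction dir : Direction.values()) {
            stack.push(new int[] {row + dir.getRowDelta(), col + dir.getColDelta()});
        }
    }
    return baseCount;
}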

Need help streamlining my Merge Sort implementation [closed]

I am building a class of sortable ArrayLists which extends ArrayList. The goal is to be able to call a sort method on a SortDoubleArray, and have that array be sorted via the method described. I got Quicksort, Insertion Sort, Bubble Sort, and Selection Sort all working as I want. I am having some difficulty with Merge Sort, however.
The sort works, but due to the way the recursion is structured, I am forced to reset the contents of the list to the result of the method applied to itself.
First, here is the tester class. It shows how the other sorts are being implemented. If I did a poor job explaining my issue, hopefully you will see the difference in how the mergeSort() method must be used.
public class SortTester
{
/**
* @param args
*/
public static void main(String[] args)
{
SortDoubleArray list = new SortDoubleArray();
// Code to fill an array with random values.
//list.quickSort();
//list.insertionSort();
//list.selectionSort();
//list.bubbleSort();
list = list.mergeSort();
// Code to print the sorted array.
}
}
Next, here is the SortDoubleArray class. All of the other sorts but insertionSort (to serve as an example of one working the way I want) have been removed for brevity.
public class SortDoubleArray extends ArrayList<Double>
{ // Start of class.
private static final long serialVersionUID = 1271821028912510404L;
/**
* Progresses through the elements one at a time inserting them in their proper place
* via swaps.
*/
public void insertionSort()
{ // Start of insertionSort.
int current = 1;
while (current < size())
{
int i = current;
boolean placeFound = false;
while(i > 0 && !placeFound)
{
if (get(i) < get(i - 1))
{
double temp = get(i);
set(i, get(i - 1));
set(i - 1, temp);
i -= 1;
}
else
{
placeFound = true;
}
}
current += 1;
}
} // End of insertionSort.
/**
* Triggers the recursive mSort method.
* #return
*/
public SortDoubleArray mergeSort()
{ // start of mergeSort.
return mSort(this);
} // End of mergeSort.
/**
* Separates the values each into their own array.
*/
private SortDoubleArray mSort(SortDoubleArray list)
{ // Start of mSort.
if (list.size() <= 1)
{
return list;
}
SortDoubleArray left = new SortDoubleArray();
SortDoubleArray right = new SortDoubleArray();
int middle = list.size() / 2;
for (int i = 0; i < middle; i += 1)
{
left.add(list.get(i));
}
for (int j = middle; j < list.size(); j += 1)
{
right.add(list.get(j));
}
left = mSort(left);
right = mSort(right);
return merge(left, right);
} // End of mSort.
/**
* Merges the separated values back together in order.
*/
private SortDoubleArray merge(SortDoubleArray left, SortDoubleArray right)
{ // Start of merge.
SortDoubleArray result = new SortDoubleArray();
while (left.size() > 0 || right.size() > 0)
{
if (left.size() > 0 && right.size() > 0)
{
if (left.get(0) <= right.get(0))
{
result.add(left.get(0));
left.remove(0);
}
else
{
result.add(right.get(0));
right.remove(0);
}
}
else if (left.size() > 0)
{
result.add(left.get(0));
left.remove(0);
}
else if (right.size() > 0)
{
result.add(right.get(0));
right.remove(0);
}
}
return result;
} // End of merge.
} // End of class.
Please give me some ideas on how I can alter the mergeSort() / mSort() functions within the SortDoubleArray class to have the same implementation as the rest of the sorts.
Thank you!
Given that the mSort and merge methods are correct, how about this?
public void mergeSort()
{ // start of mergeSort.
SortDoubleArray result = mSort(this);
clear();
addAll(result);
} // End of mergeSort.
The relevant line in your test would then be:
list.mergeSort();
Good luck!
Currently your mergeSort() and merge() functions each create new SortDoubleArray objects. Ideally you would do everything in place without creating new arrays; the amount of creating and copying you do will cause quite a performance hit for your algorithm.
So your methods would have prototypes something like this:
private SortDoubleArray mSort(SortDoubleArray list, int startIndex, int length)
private SortDoubleArray merge(SortDoubleArray list,
int leftIndex, int leftlength,
int rightIndex, int rightlength)
Then use ArrayList.set and .get with a temporary variable to do the swapping in-place. This will mean you're only working on a single array and not creating any new unnecessary ones.
Does this help? Let me know if I understood the issue or you need more explanation.
Note that int endIndex can also work instead of int length
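As a hedged illustration of that index-based idea (the method names and the temporary buffer are my own choices, not a prescribed API): these instance methods would replace the existing mergeSort()/mSort()/merge() in SortDoubleArray. The recursion passes index ranges into this, and only the merge step allocates a small buffer, so no extra SortDoubleArray copies are created.
public void mergeSort() {                    // same call style as the other sorts
    mSort(0, size());
}

private void mSort(int startIndex, int endIndex) {   // sorts the half-open range [startIndex, endIndex)
    if (endIndex - startIndex <= 1) {
        return;
    }
    int middle = (startIndex + endIndex) / 2;
    mSort(startIndex, middle);
    mSort(middle, endIndex);
    merge(startIndex, middle, endIndex);
}

private void merge(int startIndex, int middle, int endIndex) {
    List<Double> buffer = new ArrayList<>(endIndex - startIndex);
    int left = startIndex, right = middle;
    while (left < middle && right < endIndex) {
        buffer.add(get(left) <= get(right) ? get(left++) : get(right++));
    }
    while (left < middle) { buffer.add(get(left++)); }
    while (right < endIndex) { buffer.add(get(right++)); }
    for (int i = 0; i < buffer.size(); i++) {
        set(startIndex + i, buffer.get(i));  // copy the merged run back into place
    }
}
The tester can then call list.mergeSort() exactly like the other sorts.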
