Java automatically doubling array size

Trying to initialize my array at 1 and have it double every time its input fills up. This is what I have right now:
int max = 1;
PhoneRecord[] records = new PhoneRecord[max];
int numRecords = 0;
int size = Integer.parseInt(length.records[numRecords]);
if (size >= max) {
    size = 2 * size;
}
But it's clearly broken. Any suggestions or guidance would be great, thanks.

OK, you should use an ArrayList, but several other folks already told you that.
If you still want to use an array, here's how you'd resize it:
// needs: import java.util.Arrays;
int max = 1;
PhoneRecord[] records = new PhoneRecord[max];
int numRecords = 0;

void addRecord(PhoneRecord rec) {
    records[numRecords++] = rec;
    if (numRecords == max) {
        /* out of space, double the array size */
        max *= 2;
        records = Arrays.copyOf(records, max);
    }
}

Why not use an ArrayList? It'll exhibit very similar characteristics automatically.
From the private grow() method:
int newCapacity = oldCapacity + (oldCapacity >> 1);
You can't override the growth behaviour, but unless you really require a doubling due to your app characteristics, I'm sure it'll be sufficient.
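If you do go the ArrayList route, the manual resizing disappears entirely. A minimal sketch, assuming the same PhoneRecord type from the question:
import java.util.ArrayList;
import java.util.List;

List<PhoneRecord> records = new ArrayList<>();

void addRecord(PhoneRecord rec) {
    // ArrayList grows its internal array automatically (roughly 1.5x per grow)
    records.add(rec);
}
records.size() then replaces the hand-maintained numRecords counter.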

Your code only doubles the size variable; it never actually resizes the array.
Try:
records = Arrays.copyOf(records, records.length*2);

Store the optimal solution

I have this code:
public static void main(String[] args) {
    final int[] weights = {20, 40, 10, 30}, costs = {5, 20, 2, 6};
    final int minWeight = 50;
    firstSolution(weights, costs, minWeight);
}

public static void firstSolution(int[] weights, int[] costs, int minWeight) {
    int maxWeight = 0;
    for (final int weight : weights) {
        maxWeight += weight;
    }
    int[] minCost = new int[maxWeight + 1];
    for (int i = 1; i <= maxWeight; i++) {
        minCost[i] = Integer.MAX_VALUE;
    }
    for (int i = 0; i < weights.length; i++) {
        for (int j = maxWeight; j >= weights[i]; j--) {
            if (minCost[j - weights[i]] != Integer.MAX_VALUE) {
                minCost[j] = Math.min(minCost[j], minCost[j - weights[i]] + costs[i]);
            }
        }
    }
    int answer = Integer.MAX_VALUE;
    for (int i = minWeight; i <= maxWeight; i++) {
        answer = Math.min(answer, minCost[i]);
    }
    System.out.println(answer);
}
This code takes as input an array of weights and an array of costs, and it calculates the least possible cost for the given minimum weight. I actually also need to know which items are used in this solution.
For example, with these inputs my optimal solution would be:
Use the item at index 0 (weight = 20, cost = 5) and the item at index 3 (weight = 30, cost = 6).
This gives the minimum cost, which is 11, for a weight of at least the minimum, which is 50 in this case.
The code works and gives me the answer 11, which is the minimum cost, but it doesn't give me the actual items that led to this solution. Could you help with changing the code a bit so it can also determine which items lead to the optimal solution?
When you do the following:
minCost[j] = Math.min(minCost[j], minCost[j - weights[i]] + costs[i]);
you don't know whether the existing solution or the new solution ended up as the better one, so instead you should do:
if (minCost[j - weights[i]] + costs[i] < minCost[j]) {
    minCost[j] = minCost[j - weights[i]] + costs[i];
    // And update the storage of the best solution here.
}
Now, to store a best solution, you only need to record what your last best choice was, and then iterate/recurse backwards to reconstruct the full solution.
For instance, in the above code you know your optimal solution includes item i. So you can simply update your best solution with the following code:
solutions[j] = i;
And then when you're done you can always reconstruct your solution knowing that it was built on solutions[j - weights[solutions[j]]], repeating this backtracking until -1 == solutions[j].
Putting this all together, we get:
while (solutions[W] != -1) {
    // This prints the last item picked and its corresponding weight.
    System.out.println(solutions[W] + ": " + weights[solutions[W]]);
    // This updates the total weight to refer to the
    // optimal sub-solution that we built upon.
    W = W - weights[solutions[W]];
}
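For completeness, here is one way it could look folded back into the original firstSolution. This is a sketch, not the asker's code: the solutions array, bestWeight, and the printed labels are illustrative additions, and solutions is filled with -1 so the backtracking knows where to stop.
// needs: import java.util.Arrays;
public static void firstSolution(int[] weights, int[] costs, int minWeight) {
    int maxWeight = 0;
    for (final int weight : weights) {
        maxWeight += weight;
    }
    int[] minCost = new int[maxWeight + 1];
    int[] solutions = new int[maxWeight + 1]; // last item used to reach each exact weight
    Arrays.fill(solutions, -1);
    for (int i = 1; i <= maxWeight; i++) {
        minCost[i] = Integer.MAX_VALUE;
    }
    for (int i = 0; i < weights.length; i++) {
        for (int j = maxWeight; j >= weights[i]; j--) {
            if (minCost[j - weights[i]] != Integer.MAX_VALUE
                    && minCost[j - weights[i]] + costs[i] < minCost[j]) {
                minCost[j] = minCost[j - weights[i]] + costs[i];
                solutions[j] = i; // remember the choice that improved weight j
            }
        }
    }
    // Pick the cheapest reachable weight that is at least minWeight.
    int answer = Integer.MAX_VALUE, bestWeight = -1;
    for (int i = minWeight; i <= maxWeight; i++) {
        if (minCost[i] < answer) {
            answer = minCost[i];
            bestWeight = i;
        }
    }
    System.out.println("minimum cost: " + answer);
    // Backtrack through the recorded choices to list the items used.
    int w = bestWeight;
    while (w > 0 && solutions[w] != -1) {
        int item = solutions[w];
        System.out.println("item " + item + " (weight = " + weights[item] + ", cost = " + costs[item] + ")");
        w -= weights[item];
    }
}
For the sample inputs this prints a minimum cost of 11 and lists items 3 and 0, matching the expected optimal solution.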

Java: How to generate a random number bigger than x, no maximum?

I just want to generate a random number bigger than x, with no maximum value, to operate on afterwards. I've been searching for answers to my question but none of them match my problem: they all involve a restriction with both minimum and maximum values. I want simple code along these lines:
Random number = new Random();
int x = 0; //the minimum value
int finalNumber;//any positive random number, in this case; if, for example, x were 2, some number bigger than 2.
How can I get finalNumber?
Thanks for taking your time in posting an answer, I would really appreciate it.
Since information is stored in a finite number of bytes, you can't generate a number with "no maximum" in any case.
So, assuming your limit is Integer.MAX_VALUE, you can generate a number in [0, Integer.MAX_VALUE - minimum] and then add minimum to the result.
Eg:
final int MINIMUM = ...
int v = random.nextInt(Integer.MAX_VALUE - MINIMUM) + MINIMUM;
Mind that the upper bound is exclusive, since the nextInt(int) contract specifies that the bound is excluded, and that this requires MINIMUM to be non-negative.
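For instance, reusing the variable names from the question (a sketch, with x standing in for the minimum):
import java.util.Random;

Random number = new Random();
int x = 2; // the minimum value, as in the question's example
// nextInt's bound is exclusive, so this yields a value in [x, Integer.MAX_VALUE - 1]
int finalNumber = number.nextInt(Integer.MAX_VALUE - x) + x;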
Try this:
if (x > 0) {
    return (int) (Math.random() * (Integer.MAX_VALUE - x)) + x;
}
Why not just wrap it in your own class/utility method, say MyMath.Random(), like this:
public class MyMath {
    public static int Random() {
        final int MIN = 0;
        final int MAX = Integer.MAX_VALUE;
        return (int) (Math.random() * MAX) + MIN;
    }

    // overload so you can specify a minimum value
    public static int Random(int MIN) {
        final int MAX = Integer.MAX_VALUE;
        return (int) (Math.random() * (MAX - MIN)) + MIN;
    }
}
You can refer to Integer.MAX_VALUE in the Java docs.
This way you can just do:
int someInt = MyMath.Random();
or
int x = 123;
int someInt = MyMath.Random(x);

Lists partition

I need to take 20% of the list of books and divide them into 5 folds. Currently I have done the following:
List<Integer> nonRatedBooks= allBookIDs;
Collections.shuffle(nonRatedBooks);
nonRatedBooks= nonRatedBooks.subList(0, (int) Math.ceil(nonRatedBooks.size() * 0.2));
int foldSize = (int) Math.ceil((float)nonRatedBooks.size() / 5);
List<List<Integer>> testFolds = Lists.partition(nonRatedBooks, foldSize);
The issue is that when, for example, nonRatedBooks.size() = 6 (after taking the sublist), then foldSize = 2 and testFolds.size() = 3, because Lists.partition divides the list into folds of size 2. How can I make sure there are always 5 folds?
This should work for you:
// get your 20% first
int chunk = nonRatedBooks.size() / 5;
List<List<Integer>> result = new LinkedList<List<Integer>>();
if (chunk < 1) {
    nonRatedBooks.stream().map(Lists::newArrayList).collect(Collectors.toCollection(() -> result));
} else {
    for (int i = 0; i < 5; i++) {
        int endIndex = i < 4 ? (i + 1) * chunk : nonRatedBooks.size();
        result.add(nonRatedBooks.subList(i * chunk, endIndex));
    }
}
Lists.partition is not the best fit for your case, since it breaks the list up by a fixed partition size, so it is the number of partitions that varies.
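If you want that wrapped up so the number of folds is always exact, here is a self-contained sketch using only the JDK; the method name splitIntoFolds is just an illustrative choice, and very small inputs produce some empty folds:
import java.util.ArrayList;
import java.util.List;

static <T> List<List<T>> splitIntoFolds(List<T> items, int folds) {
    List<List<T>> result = new ArrayList<>();
    int chunk = Math.max(items.size() / folds, 1);
    for (int i = 0; i < folds; i++) {
        int start = Math.min(i * chunk, items.size());
        // the last fold absorbs any remainder
        int end = (i == folds - 1) ? items.size() : Math.min((i + 1) * chunk, items.size());
        result.add(new ArrayList<>(items.subList(start, end)));
    }
    return result;
}
Calling splitIntoFolds(nonRatedBooks, 5) always yields exactly 5 lists, e.g. sizes 2, 2, 2, 2, 2 for 10 books or 1, 1, 1, 1, 2 for 6.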
A more bullet-proof way to split an integer value into two parts by a fractional amount needs only one rounded calculation:
int total = ...;
float ratio = ...; // presumably from [0.0 .. 1.0] range
int part = Math.round(total * ratio);
int remaining = total - part;
This makes sure that part + remaining always sums to total. For example, with total = 6 and ratio = 0.2f, part = Math.round(1.2f) = 1 and remaining = 5.

Counting different values in a vector with Aparapi

I want to implement an entropy function in parallel with Aparapi.
In that function I need to count the different keys in a vector, but it doesn't execute correctly.
Assume that we have just 3 different values.
Here is my code:
final int[] V = new int[1024];
// Initialization for V values
final int[] count = new int[3];
Kernel kernel = new Kernel() {
    @Override
    public void run() {
        int gid = getGlobalId();
        count[V[gid]]++;
    }
};
kernel.execute(Range.create(V.length));
kernel.dispose();
After running this code segment, when I print the count[] values it gives me 1, 1, 1.
It seems that count[V[gid]]++ executes just once for each V[gid].
Thanks.
So here is the problem. The ++ operator is actually three operations in one: read the current value, increment it, write the new value. In Aparapi you have potentially 1024 GPU threads running simultaneously. That means they will all read the value, probably at the same time while it is still 0, then increment it to 1, then all 1024 threads will write 1. So it is acting as expected.
What you are trying to do is called a map-reduce. You are just skipping a lot of steps. You need to remember Aparapi is a system with no thread safety, so you have to write your algorithms to accommodate that. That is where map-reduce comes in, and here is how to do one. I just wrote it and added it to the Aparapi repository at its new home, details below.
int size = 1024;
final int count = 3;
final int[] V = new int[size];
// lets fill in V randomly...
for (int i = 0; i < size; i++) {
    // random number either 0, 1, or 2
    V[i] = (int) (Math.random() * 3);
}
// this will hold our values between the phases.
int[][] totals = new int[count][size];

///////////////
// MAP PHASE //
///////////////
final int[][] kernelTotals = totals;
Kernel mapKernel = new Kernel() {
    @Override
    public void run() {
        int gid = getGlobalId();
        int value = V[gid];
        for (int index = 0; index < count; index++) {
            if (value == index)
                kernelTotals[index][gid] = 1;
        }
    }
};
mapKernel.execute(Range.create(size));
mapKernel.dispose();
totals = kernelTotals;

//////////////////
// REDUCE PHASE //
//////////////////
while (size > 1) {
    int nextSize = size / 2;
    final int[][] currentTotals = totals;
    final int[][] nextTotals = new int[count][nextSize];
    Kernel reduceKernel = new Kernel() {
        @Override
        public void run() {
            int gid = getGlobalId();
            for (int index = 0; index < count; index++) {
                nextTotals[index][gid] = currentTotals[index][gid * 2] + currentTotals[index][gid * 2 + 1];
            }
        }
    };
    reduceKernel.execute(Range.create(nextSize));
    reduceKernel.dispose();
    totals = nextTotals;
    size = nextSize;
}
assert size == 1;

/////////////////////////////
// Done, just print it out //
/////////////////////////////
int[] results = new int[3];
results[0] = totals[0][0];
results[1] = totals[1][0];
results[2] = totals[2][0];
System.out.println(Arrays.toString(results));
Keep in mind that while this may seem inefficient, it actually works pretty well on much larger inputs. This algorithm works just fine with size = 1048576.
With the new size the following result was computed on my system in about a second.
[349602, 349698, 349276]
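As a quick sanity check (not part of the original answer), you can recount on the CPU and compare with the reduced totals, assuming the V and results arrays from above:
// plain single-threaded recount for verification
int[] check = new int[3];
for (int value : V) {
    check[value]++;
}
System.out.println(Arrays.toString(check)); // should print the same counts as results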
One final note: you might want to consider moving to the more active project at aparapi.com. It includes several bug fixes and a lot of extra features and performance enhancements over the older library you linked above. It is also in Maven Central with about a dozen releases, so it is easier to use. I just wrote the code in this answer, but decided to also add it to the new Aparapi repository's example section, where you can find it.

Why is my Vector size 0?

As you can see in the screenshot, new_mean's capacity is 0 even though I've created it with an initial capacity of 2, and therefore I'm getting an index out of bounds exception.
Does anyone know what I'm doing wrong?
Update: Here's the code
private static Vector<Double> get_new_mean(
        Tuple<Set<Vector<Double>>, Vector<Double>> cluster,
        Vector<Double> v, boolean is_being_added) {
    Vector<Double> previous_mean = cluster.y;
    int n = previous_mean.size(), set_size = cluster.x.size();
    Vector<Double> new_mean = new Vector<Double>(n);
    if (is_being_added) {
        for (int i = 0; i < n; ++i) {
            double temp = set_size * previous_mean.get(i);
            double updated_mean = (temp + v.get(i)) / (set_size + 1);
            new_mean.set(i, updated_mean);
        }
    } else {
        if (set_size > 1) {
            for (int i = 0; i < n; ++i) {
                double temp = set_size * previous_mean.get(i);
                double updated_mean = (temp - v.get(i)) / (set_size - 1);
                new_mean.set(i, updated_mean);
            }
        } else {
            new_mean = null;
        }
    }
    return new_mean;
}
Capacity is the total number of elements you could store.
Size is the number of elements you have actually stored.
In your code, there is nothing stored in the Vector, so you get an IndexOutOfBoundsException when you try to access element 0.
Use set(int, object) to change an EXISTING element. Use add(int, object) to add a NEW element.
This is explained in the javadoc for Vector: elementCount should be 0 (it's empty), and capacityIncrement is 0 by default, which is only relevant if you're going to go over the capacity you specified (2).
You would need to fill your Vector with null values to make its size equal to the capacity. Capacity is an optimization hint for the collection; it makes no difference to how the collection is used. The collection will automatically grow as you add elements to it, and the capacity will increase. So initializing with a higher capacity simply requires fewer expansions and fewer memory allocations.
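A minimal sketch of the fix described above, assuming the same variables as in get_new_mean (only the is_being_added branch is shown, appending with add instead of calling set on an empty Vector):
Vector<Double> new_mean = new Vector<Double>(n); // capacity n, but size 0
for (int i = 0; i < n; ++i) {
    double temp = set_size * previous_mean.get(i);
    double updated_mean = (temp + v.get(i)) / (set_size + 1);
    new_mean.add(updated_mean); // add() grows the size; set(i, ...) would throw here
}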
