I'm having trouble with a loop that is supposed to add together a number of very small float values to eventually produce a weighted average, like so:
for (int k = 0; k < slopes.size(); k++) {
    if (slopes.get(k).isClimbing() == false) {
        float tempWeight = (slopes.get(k).getzDiff() / highestFallZ);
        weight += tempWeight;
        highestFallX += (slopes.get(k).getEndX() * tempWeight);
    }
    highestFallX = highestFallX / weight;
}
Essentially what it does is produce a weighting from one attribute of an object (the result of which is always between 0 and 1), then modifies another attribute of the same object by that weighting and adds the result to a running tally, which is in the end divided by the sum of the weightings. All values and variables are of the type float.
Now the problem I'm running into is that within just a few steps the running tally (highestFallX) grows exponentially toward -infinity. I've run some diagnostics, and they show that each individual value added was in the range between -1 and -1*10^-5 (after the multiplication with the weighting), and no more than 60 of them were added together, so neither overflow nor underflow should be a problem. For comparison, here is a list of the last value added (LastFallX) and the running tally (HighestFallX) during the first few steps of the loop:
LastFallX: -1.2650555E-4
HighestFallX: -1.2650555E-4
LastFallX: -6.3799386E-4
HighestFallX: -0.25996128
LastFallX: -4.602447E-4
HighestFallX: -87.01444
LastFallX: -0.0020183846
HighestFallX: -16370.462
LastFallX: -4.158747E-5
HighestFallX: -826683.3
From there on it keeps growing exponentially, and hits -infinity within about 10 more loops. The highestFallX variable isn't referenced nor modified by anything else during this loop.
One way of computing a weighted average is:
totalValue += nextValue * nextWeight;
totalWeight += nextWeight;
average = totalValue / totalWeight;
This is prone to overflow in totalValue, as you have seen.
Instead you can also do:
totalWeight += nextWeight;
average += (nextWeight / totalWeight) * (nextValue - average);
In your case, I think that might look like:
for (int k = 0; k < slopes.size(); k++) {
    if (!slopes.get(k).isClimbing()) {
        float tempWeight = (slopes.get(k).getzDiff() / highestFallZ);
        weight += tempWeight;
        float delta = slopes.get(k).getEndX() - highestFallX;
        highestFallX += (tempWeight / weight) * delta;
    }
}
but I'm still trying to work out exactly how your weighted average should work, so I'm a little unsure of that last bit.
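As a quick sanity check with made-up numbers (the values and weights below are illustrative, not from the question), the direct form and the incremental form mean += (w/W)*(x - mean) agree:

```java
public class WeightedAverageCheck {

    // direct form: accumulate the weighted sum, divide once at the end
    static double direct(double[] values, double[] weights) {
        double totalValue = 0, totalWeight = 0;
        for (int i = 0; i < values.length; i++) {
            totalValue += values[i] * weights[i];
            totalWeight += weights[i];
        }
        return totalValue / totalWeight;
    }

    // incremental form: the running value always stays near the average itself
    static double incremental(double[] values, double[] weights) {
        double w = 0, average = 0;
        for (int i = 0; i < values.length; i++) {
            w += weights[i];
            average += (weights[i] / w) * (values[i] - average);
        }
        return average;
    }

    public static void main(String[] args) {
        double[] values  = {2.0, 4.0, 6.0};
        double[] weights = {0.5, 0.25, 0.25};
        System.out.println(direct(values, weights));      // 3.5
        System.out.println(incremental(values, weights)); // ~3.5
    }
}
```

The incremental form never holds a quantity much larger than the average itself, which is what makes it robust against a blowing-up accumulator.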
How precise does it have to be? You could simply drop everything past the third decimal place to make sure it doesn't run into issues with the massive number of digits.
Java, Intellij IDE
Coursera, Computer Science: Programming with a Purpose
Princeton University
My program is not returning any output because n and f[] don't seem to hold any value outside the while loop - I checked it using a print statement. However, when I use the same print statement to print the value of n and f[] inside the while loop, it prints the values. It seems like n and f[] become obsolete outside the while loop.
The question is Shannon entropy. Write a program ShannonEntropy.java that takes a command-line integer m; reads a sequence of integers between 1 and m from standard input; and prints the Shannon entropy to standard output, with 4 digits after the decimal point. The Shannon entropy of a sequence of integers is given by the formula:
H = -(p_1 log2(p_1) + p_2 log2(p_2) + … + p_m log2(p_m))
where p_i denotes the proportion of integers whose value is i. If p_i = 0, then treat p_i log2(p_i) as 0.
If the question is unclear please take a look
It will be great if you can help me out. Thanks in advance.
public class ShannonEntropy {
    public static void main(String[] args) {
        int m = Integer.parseInt(args[0]);
        int[] f = new int[m + 1];
        int n = 0;
        // calculating the frequency by incrementing the array and incrementing n alongside
        while (!StdIn.isEmpty()) {
            int value = StdIn.readInt();
            f[value]++;
            n++;
        }
        double entropy = 0;
        for (int i = 1; i <= m; i++) {
            double p = (double) f[i] / n;
            System.out.println(p);
            if (f[i] > 0)
                entropy -= p * (Math.log(p) / Math.log(2));
        }
        // printing the output
        StdOut.println((double) Math.round(entropy * 10000) / 10000);
    }
}
Hi, I have tested your code. There is indeed a problem with the way you wrote your output code.
Instead of:
StdOut.println((double) Math.round(entropy * 10000) / 10000);
you should use formatted printing, i.e. printf(), to get exactly 4 decimal places. With Math.round(), a value of 1 will be printed as 1.0 instead of the desired 1.0000.
Use this:
StdOut.printf("%.4f\n", entropy);
For more regarding printf(), refer to this link: printf() guide
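A quick way to see the difference (using plain System.out here instead of the StdOut wrapper):

```java
public class PrintfDemo {
    public static void main(String[] args) {
        double entropy = 1.0;
        // rounding and dividing keeps the value a double, so trailing zeros are dropped
        System.out.println((double) Math.round(entropy * 10000) / 10000); // prints 1.0
        // printf pads to exactly 4 decimal places
        System.out.printf("%.4f%n", entropy); // prints 1.0000
    }
}
```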
I'm tasked with a problem that I'm not quite sure how to solve mathematically.
I am trying to create a method that takes an int array as an argument. The length of the array will vary but will never be zero. The values in the array are not important, as the method will overwrite them with the values determined below.
The purpose of the method is to divide a total of 1.0 between each position in the array. This is straightforward enough however an additional complexity is that the division should be biased. The values on the left should be higher than the values on the right (see example output below).
An example would be passing an int array of size 7. I would expect an output similar to:
[.3, .25, .15, .1, .09, .07, 0.04]
where the sum of all the values = 1
I'm using Java but even pseudo code will help!
I'd generate a list of unique random numbers, then normalize it by dividing all of them by their sum.
Then sort and reverse your list.
int n = 7;
// make a list of n unique random numbers
double[] randomValues = new Random().doubles(0, 1).distinct().limit(n).toArray();
// normalize the list and reverse-sort it
double sum = Arrays.stream(randomValues).sum();
List<Double> array = Arrays.stream(randomValues).boxed()
        .map(d -> d / sum)
        .sorted(Comparator.reverseOrder())
        .collect(Collectors.toList());
Your list should now have random values adding up to 1, sorted in descending order.
Caveat:
You might want to actually recalculate the latest value by subtracting the other ones from 1 to minimize rounding errors. It depends on the precision you require.
If you need exact values, you can't work with doubles, work with ints instead (like we do with currencies).
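That caveat might look like this in code (a sketch; `fixLast` and the weights below are made-up names and values, standing in for the normalized list from above):

```java
import java.util.ArrayList;
import java.util.List;

public class FixLastWeight {

    // recompute the last weight as 1 minus the others, absorbing rounding error there
    static void fixLast(List<Double> weights) {
        double head = 0;
        for (int i = 0; i < weights.size() - 1; i++) {
            head += weights.get(i);
        }
        weights.set(weights.size() - 1, 1.0 - head);
    }

    public static void main(String[] args) {
        // hypothetical normalized weights whose sum may be slightly off 1.0
        List<Double> weights = new ArrayList<>(List.of(0.4, 0.3, 0.2, 0.1));
        fixLast(weights);

        double total = 0;
        for (double w : weights) total += w;
        System.out.println(total); // prints 1.0
    }
}
```

After the correction, summing the list in the same order reproduces exactly 1.0, because the subtraction 1.0 - head is exact for head between 0.5 and 2.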
double[] distributeDecreasing(int n) {
    Random random = new Random();
    double[] values = new double[n];
    final double nEps = 0.0001 * n;
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double value = nEps + random.nextDouble();
        sum += value;
        values[i] = value;
    }
    double sumFrom1 = 0.0;
    for (int i = 1; i < n; ++i) {
        values[i] /= sum;
        sumFrom1 += values[i];
    }
    if (n > 0) {
        values[0] = 1.0 - sumFrom1;
    }
    // Arrays.sort has no descending overload for double[], so sort ascending and reverse
    Arrays.sort(values);
    for (int i = 0; i < n / 2; ++i) {
        double tmp = values[i];
        values[i] = values[n - 1 - i];
        values[n - 1 - i] = tmp;
    }
    return values;
}
The bias comes from the decreasing order.
The sum of 1.0 is achieved by dividing each value by the total of the random draws (each between 0 and 1),
plus an epsilon so that no value is zero.
For minimal floating-point error, the first element is corrected to 1.0 minus the sum of the rest.
You didn't specify exactly what bias you are looking for or what distribution, but a straightforward approach for a biased non-uniform distribution would be:
Draw the first number a1 from [0, 1], draw the second number a2 from [0, 1-a1], draw the third number a3 from [0, 1-a1-a2], and so on. Set an as the complement to 1 of the running total, and sort everything at the end.
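A sketch of that stick-breaking idea (the class and method names are made up):

```java
import java.util.Arrays;
import java.util.Random;

public class StickBreaking {

    // split 1.0 into n parts, each drawn from what remains, then sort descending
    static double[] distribute(int n) {
        Random random = new Random();
        double[] parts = new double[n];
        double remaining = 1.0;
        for (int i = 0; i < n - 1; i++) {
            double draw = random.nextDouble() * remaining; // drawn from [0, remaining]
            parts[i] = draw;
            remaining -= draw;
        }
        parts[n - 1] = remaining; // the complement to 1 of the running total

        // sort ascending, then reverse for decreasing order
        Arrays.sort(parts);
        for (int i = 0; i < n / 2; i++) {
            double tmp = parts[i];
            parts[i] = parts[n - 1 - i];
            parts[n - 1 - i] = tmp;
        }
        return parts;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(distribute(7)));
    }
}
```

Note that this gives a different distribution than normalizing uniform draws: early draws tend to take a large bite of the remaining stick, so the spread between the largest and smallest parts is typically wider.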
I have a boolean array of approximately 10,000 elements. I would like to flip the value of elements with a rather low, set probability (roughly 0.1 to 0.01), while knowing the indexes of the changed elements. The code that comes to mind is something like:
int count = 10000;
Random r = new Random();
for (int i = 0; i < count; i++) {
    double x = r.nextDouble();
    if (x < rate) {
        field[i] = !field[i];
        // do something with the index...
    }
}
However, as I do this in a greater loop (inevitably), this is slow. The only other possibility that I can come up with is using quantile function (gaussian math), however I have yet to find any free to use code or library to use. Do you have any good idea how to work around this problem, or any library (standard would be best) that could be used?
Basically, you have set up a binomial model, with n == count and p == rate. The number of flipped values, x, can be approximated by a normal model with mean n*p == count*rate and standard deviation sigma == Math.sqrt(n*p*(1-p)) == Math.sqrt(count * rate * (1-rate)).
You can easily calculate
int x = (int) Math.round(Math.sqrt(count * rate * (1 - rate))
        * r.nextGaussian() + count * rate);
Then you can generate x random numbers in the range using the following code.
Set<Integer> indices = new HashSet<Integer>();
while (indices.size() < x) {
    indices.add(r.nextInt(count));
}
indices will now contain the correct indices, which you can use as you wish.
You'll only have to call nextInt a little more than x times, which should be much less than the count times you had to call it before.
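Put together, with a clamp added so the normal approximation can never produce a count outside [0, count] (a sketch; the class and method names are made up, and `rate` is an assumed parameter):

```java
import java.util.HashSet;
import java.util.Random;
import java.util.Set;

public class FlipSample {

    // draw how many of the count elements to flip, then pick that many distinct indices
    static Set<Integer> pickIndices(int count, double rate, Random r) {
        // normal approximation to Binomial(count, rate): mean n*p, sd sqrt(n*p*(1-p))
        int x = (int) Math.round(Math.sqrt(count * rate * (1 - rate)) * r.nextGaussian()
                + count * rate);
        x = Math.max(0, Math.min(count, x)); // clamp to a valid count

        Set<Integer> indices = new HashSet<>();
        while (indices.size() < x) {
            indices.add(r.nextInt(count));
        }
        return indices;
    }

    public static void main(String[] args) {
        Set<Integer> flipped = pickIndices(10000, 0.05, new Random());
        System.out.println(flipped.size()); // around 500 (= 10000 * 0.05)
    }
}
```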
I've got an issue with an assignment that requires the use of arrays. I need to implement the Sieve of Eratosthenes algorithm and print out all the prime numbers. I'm quite confused, because as far as I can tell my order of operations is correct. Here is the code:
//Declare the array
boolean numbers[] = new boolean[1000];
int y = 0;
//Declare all numbers as true to begin
for (int i = 2; i < 1000; i++) {
    numbers[i] = true;
}
//Run loop that increases i and multiplies it by increasing multiples
for (int x = 2; x < 1000; x++) {
    //A loop for the increasing multiples; keep those numbers below 1000
    //Set any multiple of "x" to false
    for (int n = 2; y < 1000; n++) {
        y = n * x;
        numbers[y] = false;
    }
}
I first set all the numbers in the array to true. The second loop starts "x" at 2; inside it is a nested loop that multiplies "x" by increasing values of "n", as long as the product of that multiplication ("y") is below 1000. Once "y" reaches that maximum, "x" goes up by one and the process repeats until all non-prime numbers are set to false.
That was my logic when I wrote the code, but when I try to run it I get an "ArrayIndexOutOfBoundsException". From what I can tell I set everything to stay below 1000, so it shouldn't be going over the array size.
I know it's probably not the most efficient algorithm, because as "x" increases it will go over numbers it has already crossed out, but it was the simplest one I could think of.
Here:
for (int n = 2; y < 1000; n++) {
    y = n * x;
    numbers[y] = false;
}
you first check that y < 1000, and only then assign and use it. This is the wrong way around: the condition tests the old value of y, so the new index n * x can be 1000 or more when it is used.
Also, you can get away with running the above loop only when x is prime. This won't affect correctness, but should make your code much faster.
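A corrected version might look like this (a sketch, wrapped in a class so it runs standalone): the product is computed in the loop condition before it is used as an index, and the inner loop only runs when x is still marked prime.

```java
public class Sieve {

    static boolean[] sieve(int limit) {
        boolean[] numbers = new boolean[limit];
        for (int i = 2; i < limit; i++) {
            numbers[i] = true;
        }
        for (int x = 2; x < limit; x++) {
            if (!numbers[x]) continue; // only sieve on primes
            // test the index before using it, so it can never exceed the array
            for (int n = 2; n * x < limit; n++) {
                numbers[n * x] = false;
            }
        }
        return numbers;
    }

    public static void main(String[] args) {
        boolean[] primes = sieve(1000);
        for (int i = 2; i < 1000; i++) {
            if (primes[i]) System.out.print(i + " ");
        }
        System.out.println();
    }
}
```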
I have an ArrayList of doubles, and I need to find the average of all the numbers.
The number of Double instances in the ArrayList is not constant; it could be 2, it could be 90.
I have been trying for hours to work out the algorithm myself but could not get it to work in any way.
Do you have any advice? Or maybe you can link me to an existing library to get this average?
Thank you
Create a sum variable:
double sum = 0;
Go through the elements in the list using a for-loop:
for (double d : yourList)
in each iteration, add to sum the current value:
sum += d;
after the loop, to find the average, divide sum with the number of elements in the list:
double avg = sum / yourList.size();
Here's for everyone who thinks this was too simple...
The above solution is actually not perfect. If the first couple of elements in the list are extremely large and the later elements are small, sum += d may stop making a difference toward the end, due to the limited precision of doubles.
The solution is to sort the list before doing the average-computation:
Collections.sort(doublesList);
Now the small elements are added first, which makes sure they are all counted as accurately as possible and contribute to the final sum :-)
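A small demonstration of the effect (the values are contrived to make the rounding visible):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SumOrderDemo {

    static double sum(List<Double> xs) {
        double s = 0;
        for (double d : xs) s += d;
        return s;
    }

    public static void main(String[] args) {
        // one huge value followed by many tiny ones
        List<Double> doublesList = new ArrayList<>();
        doublesList.add(1e17);
        for (int i = 0; i < 1000; i++) {
            doublesList.add(1.0);
        }

        double unsorted = sum(doublesList); // each 1.0 is rounded away against 1e17

        Collections.sort(doublesList);      // smallest first
        double sorted = sum(doublesList);   // the 1.0s accumulate before meeting 1e17

        System.out.println(sorted > unsorted); // prints true
    }
}
```

In the unsorted order, each individual 1.0 is below half a unit in the last place of 1e17 and vanishes; summed first, the thousand 1.0s become 1000.0, which is large enough to survive the final addition.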
If you like streams:
List<Number> list = Arrays.asList(number1, number2, number3, number4);
// calculate the average
double average = list.stream()
.mapToDouble(Number::doubleValue)
.average()
.getAsDouble();
The definition of the average is the sum of all values, divided by the number of values. Loop over the values, add them, and divide the sum by the size of the list.
If by "average between all numbers" you mean the average of all the doubles in your list, you can simply add all the doubles in your list (with a loop) and divide that total by the number of doubles in your list.
Have you tried:
List<Double> list = new ArrayList<Double>();
...
double countTotal = 0;
for (Double number : list)
{
    countTotal += number;
}
double average = countTotal / list.size();
Maybe I haven't understood the question... but isn't it something like this?
double sum = 0.0;
for (double element : list) {
    sum += element;
}
double average = sum / list.size();
If you're worried about overflow as you sum the whole list, you can use a running average. Because it does more operations per element, it will be slower than adding everything together and dividing once.
double average = 0;
for (int x = 0; x < list.size(); x++) {
    average = (average / (x + 1)) * x + list.get(x) / (x + 1);
}