What could be the error in this Java program to compute sine? - java

I have written this code to compute the sine of an angle. It works fine for smaller angles, say up to ±360. But with larger angles it starts giving faulty results. (By larger I mean something in the range of ±720 or ±1080.)
In order to get more accurate results I increased the number of times my loop runs. That gave me better results, but it too had its limitations.
So I was wondering if there is a fault in my logic, or do I need to fiddle with the conditional part of my loop? How can I overcome this shortcoming of my code? The built-in Java sine function gives correct results for all the angles I have tested, so where am I going wrong?
Also, can anyone give me an idea of how to modify the condition of my loop so that it runs until I get a desired decimal precision?
import java.util.Scanner;

class SineFunctionManual
{
    public static void main(String a[])
    {
        System.out.print("Enter the angle for which you want to compute sine : ");
        Scanner input = new Scanner(System.in);
        int degreeAngle = input.nextInt(); // Angle in degree.
        input.close();
        double radianAngle = Math.toRadians(degreeAngle); // Sine computation is done in terms of radian angle
        System.out.println(radianAngle);
        double sineOfAngle = radianAngle, prevVal = radianAngle; // sineOfAngle contains actual result, prevVal contains the next term to be added
        //double fractionalPart = 0.1; // This variable is used to check the answer to a certain number of decimal places, as seen in the for loop
        for (int i = 3; i <= 20; i += 2)
        {
            prevVal = (-prevVal) * ((radianAngle * radianAngle) / (i * (i - 1))); // x^3/3! can be written as ((x^2)/(3*2))*((x^1)/1!), similarly x^5/5! can be written as ((x^2)/(5*4))*((x^3)/3!) and so on. The negative sign is added because each successive term has alternate sign.
            sineOfAngle += prevVal;
            //int iPart = (int) sineOfAngle;
            //fractionalPart = sineOfAngle - iPart; // Extracting the fractional part to check the number of decimal places.
        }
        System.out.println("The value of sin of " + degreeAngle + " is : " + sineOfAngle);
    }
}

The polynomial approximation for sine diverges widely for large positive and large negative values. Remember, sine varies between -1 and 1 over all real numbers. Polynomials, on the other hand, particularly ones of higher order, can't do that.
I would recommend using the periodicity of sine to your advantage.
int degreeAngle = input.nextInt() % 360;
This will give accurate answers, even for very, very large angles, without requiring an absurd number of terms.

The further you get from x = 0, the more terms of the Taylor expansion for sin x you need to get within a particular accuracy of the correct answer. You're stopping at the x^19/19! term, which is fine for small angles. If you want better accuracy for large angles, you'll just need to add more terms.
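A minimal sketch combining both suggestions, reducing the angle first and then looping until the next term drops below a chosen tolerance (the method name and the 1e-10 cutoff are just illustrative choices):

static double sineTaylor(int degreeAngle)
{
    int reduced = degreeAngle % 360;            // use the periodicity of sine
    double x = Math.toRadians(reduced);
    double term = x;                            // current term of the series
    double sum = x;
    for (int i = 3; Math.abs(term) > 1e-10; i += 2)
    {
        term = -term * (x * x) / (i * (i - 1)); // build each term from the previous one
        sum += term;
    }
    return sum;
}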


How is it possible to get a random number with a specific probability?

I wanted to make a random number picker in the range 1-50000.
But I want to do it so that the larger the number, the smaller the probability.
Probability like (1/2*number) or something else.
Can anybody help?
You need a mapping function of some sort. What you get from Random is a few 'primitive' constructs that you can trust do exactly what their javadoc spec says they do:
.nextInt(X) which returns, uniform random (i.e. the probability chart is an exact horizontal line), a randomly chosen number between 0 and X-1 inclusive.
.nextBoolean() which gives you 1 bit of randomness.
.nextDouble(), giving you a mostly uniform random number between 0.0 and 1.0
.nextGaussian() which gives you a random number whose probability chart is a normal (bell) curve with standard deviation = 1.0 and midpoint (average) of 0.0.
For the double-returning methods, you run into some trouble if you want exact precision. Computers aren't magical. As a consequence, if you e.g. write this mapping function to turn nextDouble() into a standard uniformly distributed 6-sided die roll, you'd think: int dieRoll = 1 + (int) (rnd.nextDouble() * 6); would do it. Had double been perfect, you'd be right. But they aren't, so, instead, best case scenario, 4 of 6 die faces are going to come up 750599937895083 times, and the other 2 die faces are going to come up 750599937895082 times. It'll be hard to really notice that, but it is provably imperfect. I assume this kind of tiny deviation doesn't matter to you, but, it's good to be aware that anytime you so much as mention double, inherent tiny errors creep into everything and you can't really stop that from happening.
What you need is some sort of mapping function that takes any amount of such randomly provided data (from those 3 primitives, and really only from nextInt/nextBoolean if you want to avoid the errors that double inherently brings) to produce what you want.
For example, imagine instead the 'primitive' I gave you is a uniform random value between 1 and 6, inclusive, i.e.: A standard 6-sided die roll. And I ask you to come up with a uniform algorithm (as in, each value is equally likely) to produce a number between 2 and 12, inclusive.
Perhaps you might think: Easy, just roll 2 dice and add em up. But that would be incorrect: 7 is far more likely than 12.
Instead, you'd roll one die and just register whether it was even or odd. Then you roll the second die and that's your result, unless the first die was odd, in which case you add 6 to it. If the first die was even and the second die shows 1 (which would give a result of 1, outside the 2 to 12 range), you start the process over again; eventually you're bound to get a usable roll.
That'd be uniform random.
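A quick sketch of that mapping in Java, just to make the rejection step concrete (the method name is illustrative):

static int uniform2to12(java.util.Random rnd) {
    while (true) {
        int first = rnd.nextInt(6) + 1;               // parity picks the half-range
        int second = rnd.nextInt(6) + 1;
        int result = (first % 2 == 0) ? second : second + 6;
        if (result >= 2) return result;               // reject the single out-of-range outcome (a 1)
    }
}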
You can apply the same principle to your question. You need a mathematical function that maps the 'horizontal line' of .nextInt() to whatever curve you want. For example, it sounds like you might generate something uniformly and then take the square root and round it down. You're going to have to draw out, or write, a formula that precisely describes the probability density you want.
Here's an example:
while (true) {
    int v = (int) (50000.0 * Math.abs(r.nextGaussian()));
    if (v >= 1 && v <= 50000) return v;
}
That returns you a roughly normally distributed value, 1 being the most likely, 50000 being the least likely.
One simple formula that will give you a very close approximation to what you want is
Random random = new Random();
int result = (int) Math.pow( 50001, random.nextDouble());
That will give a result in the range 1 - 50000, where the probability of each result is approximately proportional to 1 / result, which is what you asked for.
The reason it works is that the probability of result being any value n within the range is P(n <= 50001^x < n+1), where x is uniformly distributed in [0, 1). That is the probability that x falls between log(n) and log(n+1), where the logs are base 50001. But that probability is proportional to log(1 + 1/n), which is very close to 1/n.
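If you want to sanity-check that claim empirically, a rough sketch like this (the sample size and the printed range are arbitrary) shows the 1/result shape:

java.util.Random random = new java.util.Random();
int[] counts = new int[51];
for (int i = 0; i < 1_000_000; i++) {
    int result = (int) Math.pow(50001, random.nextDouble());
    if (result <= 50) counts[result]++;               // only tally the small results
}
for (int n = 1; n <= 10; n++) {
    System.out.println(n + ": " + counts[n]);         // expect counts[n] to be roughly counts[1] / n
}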

Fitness function

While searching on Google about genetic algorithms, I came across the OneMax problem; my search showed that this is one of the very first problems the genetic algorithm was applied to. However, I am not exactly sure what the OneMax problem is. Can anyone explain?
Any help is appreciated.
The goal of the One-Max problem is to create a binary string of length n where every single gene contains a 1. The fitness function is very simple: you just iterate through your binary string counting the ones. This is what the sum represents in the formula you provided in your post. It is just the number of ones in the binary string. You could also represent the fitness as a percentage, by dividing the number of ones by (n * 0.01). A higher fitness corresponds to a higher percentage. Eventually, at some generation, you will get a string of n ones with a fitness of 100%.
double fitness(List<Integer> chromosome) {
    long ones = chromosome.stream().filter(g -> g == 1).count();
    return ones / (chromosome.size() * 0.01); // number of ones as a percentage of the string length
}

Java: convert float to double preserving decimal point precision

I have a float-based storage of numbers that are decimal by nature. The precision of float is fine for my needs. Now I want to perform some more precise calculations with these numbers using double.
An example:
float f = 0.1f;
double d = f; //d = 0.10000000149011612d
// but I want some code that will convert 0.1f to 0.1d;
Update 1:
I know very well that 0.1f != 0.1d. This question is not about precise decimal calculations. Sadly, the question was downvoted. I will try to explain it again...
Let's say I work with an API that returns float numbers for decimal MSFT stock prices. Believe it or not, this API exists:
interface Stock {
    float[] getDayPrices();
    int[] getDayVolumesInHundreds();
}
It is known that the price of a MSFT share is a decimal number with no more than 5 digits, e.g. 31.455, 50.12, 45.888. Obviously the API does not work with BigDecimal because it would be a big overhead for the purpose to just pass the price.
Let's also say I want to calculate a weighted average of these prices with double precision:
float[] prices = msft.getDayPrices();
int[] volumes = msft.getDayVolumesInHundreds();
double priceVolumeSum = 0.0;
long volumeSum = 0;
for (int i = 0; i < prices.length; i++) {
    double doublePrice = decimalFloatToDouble(prices[i]);
    priceVolumeSum += doublePrice * volumes[i];
    volumeSum += volumes[i];
}
System.out.println(priceVolumeSum / volumeSum);
I need a performant implementation of decimalFloatToDouble.
Currently I use the following code, but I need something more clever:
double decimalFloatToDouble(float f) {
    return Double.parseDouble(Float.toString(f));
}
EDIT: this answer corresponds to the question as initially phrased.
When you convert 0.1f to double, you obtain the same number, the imprecise single-precision representation of the rational 1/10 (which cannot be represented exactly in binary at any precision). The only thing that changes is the behavior of the printing function. The digits that you see, 0.10000000149011612, were already there in the float variable f. They simply were not printed, because Java prints only as many digits as are needed to identify the float uniquely.
Ignore these digits and compute with double as you wish. The problem is not in the conversion, it is in the printing function.
As I understand you, you know that the float is within one float-ulp of an integer number of hundredths, and you know that you're well inside the range where no two integer numbers of hundredths map to the same float. So the information isn't gone at all; you just need to figure out which integer you had.
To get two decimal places, you can multiply by 100, rint/Math.round the result, and multiply by 0.01 to get a close-by double as you wanted. (To get the closest, divide by 100.0 instead.) But I suspect you knew this already and are looking for something that goes a little faster. Try ((9007199254740992 + 100.0 * x) - 9007199254740992) * 0.01 and don't mess with the parentheses. Maybe strictfp that hack for good measure.
You said five significant figures, and apparently your question isn't limited to MSFT share prices. Up to the point where doubles can no longer represent powers of 10 exactly, this isn't too bad. (And maybe it works beyond that threshold too.) The exponent field of a float narrows the needed power of ten down to one of two possibilities, and there are only 256 exponent values. (Except in the case of subnormals.) Getting the right power of ten just needs a conditional, and the rounding trick is straightforward enough.
All of this is all going to be a mess, and I'd recommend you stick with the toString approach for all the weird cases.
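Spelled out as code, the plain round-to-hundredths version of the suggestion above (multiply by 100, rint, divide by 100.0, without the magic-constant trick) might look like this; whether it is fast enough is something to measure:

// Round the widened float to the nearest hundredth. This assumes, as the
// question does, that the original value was an exact number of hundredths
// before it was stored in the float.
static double decimalFloatToDouble(float f) {
    return Math.rint(f * 100.0) / 100.0;
}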
If your goal is to have a double whose canonical representation will match the canonical representation of a float, converting the float to a string and converting the result back to double would probably be the most accurate way of achieving that result, at least when it's possible (I don't know for certain whether Java's double-to-string logic guarantees that there won't be a pair of consecutive double values which report themselves as just above and just below a number with five significant figures).
If your goal is to round to five significant figures a value which is known to have been rounded to five significant figures while in float form, I would suggest that the simplest approach is probably to simply round to five significant figures. If the magnitude of your numbers will be roughly within the range 1E±12, start by finding the largest power of ten smaller than your number, use it to work out the scale factor that puts all five significant digits before the decimal point, multiply your number by that scale factor, round to the nearest unit, and divide by the scale factor again.

Because division is often much slower than multiplication, if performance is critical you might keep a table with powers of ten and their reciprocals. To avoid the possibility of rounding errors, your table should store, for each power of ten, the closest power-of-two double to its reciprocal, and then the closest double to the difference between that first double and the actual reciprocal. Thus, the reciprocal of 100 would be stored as 0.0078125 + 0.0021875; the value n/100 would be computed as n*0.0078125 + n*0.0021875. The first term would never have any round-off error (it multiplies by a power of two), and the second term has precision beyond that needed for the final result, so the final result should be rounded accurately.
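A straightforward (if slower) way to express that round-to-five-significant-figures idea, leaving out the reciprocal-table optimization described above:

// Round a value to five significant figures. Math.pow and the division are
// the slow parts that the reciprocal-table trick above is meant to replace.
static double roundToFiveSigFigs(double value) {
    if (value == 0.0) return 0.0;
    int exponent = (int) Math.floor(Math.log10(Math.abs(value)));
    double scale = Math.pow(10, 4 - exponent);    // puts all five significant digits before the point
    return Math.rint(value * scale) / scale;
}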

Using double up to two decimal places

I want to use double up to just 2 decimal places, i.e. it will be stored up to 2 decimal places, and if two double values are compared then the comparison should be based only on the first 2 decimal places. How do I achieve such a thing? I mean storing, comparison, everything will be based just on the first two decimal places. The remaining places may be different, greater than, less than, it doesn't matter.
EDIT
My values aren't large, say from 0 to 5000 maximum. But I have to multiply by cos A and sin A a lot of times, where the value of A keeps changing during the course of the program.
EDIT
Look, in my program a car is moving at a particular speed, say 12 m/s. Every few minutes the car changes direction, i.e. it chooses a new angle and starts moving in a straight line along that direction. Every time it moves, I have to find out its x and y position on the map, which will be currentX + velocity*cos A and currentY + velocity*sin A. But since this happens often, there will be a lot of cumulative error over time. How do I avoid that?
Comparing floating point values for equality should always use some form of delta/epsilon comparison:
if (Math.abs(value1 - value2) < 0.01)
{
    // considered equal to 2 decimal places
}
Don't use a float (or double). For one, it can't represent all two-decimal-digit numbers. For another, you can get the same (but accurate) effect with an int or long. Just pretend the tens and ones column is really the tenths and hundredths column. You can always divide by 100.0 if you need to output the result to screen, but for comparisons and behind-the-scenes work, integer storage should be fine. You can even get arbitrary precision with BigInteger.
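For example, the fixed-point idea might look like this, with values held as integer hundredths (the variable names are illustrative):

// Keep values as integer hundredths ("pretend the ones column is the
// hundredths column"). Storage and comparison are then exact; divide by
// 100.0 only when you need to display the value.
double computed = 12.3456;                     // some computed value
long hundredths = Math.round(computed * 100);  // 1235, i.e. 12.35
long target = 1235;                            // 12.35 stored directly as hundredths
System.out.println(hundredths == target);      // true: equal to two decimal places
System.out.println(hundredths / 100.0);        // 12.35, for output only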
To retain a value of 2 decimal places, use the BigDecimal class as follows:
private static final int DECIMAL_PLACES = 2;

public static void main(String... args) {
    System.out.println(twoDecimalPlaces(12.222222)); // Prints 12.22
    System.out.println(twoDecimalPlaces(12.599999)); // Prints 12.60
}

private static java.math.BigDecimal twoDecimalPlaces(final double d) {
    return new java.math.BigDecimal(d).setScale(DECIMAL_PLACES,
            java.math.RoundingMode.HALF_UP);
}
To round to two decimal places you can use round
public static double round2(double d) {
    return Math.round(d * 100) / 100.0;
}
This only does round half up.
Note: decimal values in double may not be an exact representation. When you use Double.toString(double) directly or indirectly, it does a small amount of rounding so the number will appear as intended. However if you take this number and perform an operation you may need to round the number again.
it will be stored up to 2 decimal places
Impossible. Floating-point numbers don't have decimal places. They have binary places after the dot.
You have two choices:
(a) don't use floating-point, as per dlev's answer, and specifically use BigDecimal;
(b) set the required precision when doing output, e.g. via DecimalFormat, an SQL column with defined decimal precision, etc.
You should also have a good look at What every computer scientist should know about floating-point.
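If option (b) is enough for you, a minimal DecimalFormat example (the pattern is the only thing that matters here):

// Formatting on output only: the stored double keeps full precision,
// DecimalFormat controls what the user sees.
java.text.DecimalFormat twoPlaces = new java.text.DecimalFormat("0.00");
System.out.println(twoPlaces.format(12.3456)); // prints 12.35 (in a locale that uses '.' as the decimal separator)
System.out.println(twoPlaces.format(12.3));    // prints 12.30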

probability and programming simulation

I'm having some trouble understanding the following result.
I want to know if the following code is actually correct. It stumps me - but that could be due to me misunderstanding the probability involved.
The code should speak for itself, but to clarify: the 'real world' simulation represents two people flipping a coin. When you lose you pay a dollar; when you win you get a dollar. A zero-sum game!
private static Random rnd = new Random();

public static void main(String[] args) {
    int i = 0;
    for (int x = 0; x < 1000000; x++) {
        if (rnd.nextBoolean()) i += 1;
        else i -= 1;
    }
    System.out.println(i);
}
When I run this, however, I get huge swings! While I would expect a large sample like this to converge to 0, I'm seeing swings of around ±4000.
Not only that but increasing the sample size seems to only make the swings higher.
Am I misusing the random function ? :P
I think you're good. The thing to look at is the ratio of the swing to your sample.
4000 out of 1000000 for example is 0.4%
If you increase the sample size, you should expect that ratio to go down.
The results of your experiment should follow a binomial distribution. If the number of trials is N and the probability of success is p = 1/2, then the number of successes N_success (for large enough N) should have a mean of approximately N*p and a standard deviation of sqrt(N*p*(1-p)).

You're actually tracking K = (N_success - N_fail), so N_success = N/2 + K/2. With 1,000,000 trials and K = 4000, we get N_success = 502,000. The expected value is 500,000, with standard deviation sqrt(250,000) = 500. The difference between the observed and expected values of N_success is 2000, or about 4 sigma. That's significant enough to question whether the random number generator is biased. On the other hand, if you're running this test thousands of times, you'd expect a few outliers of this magnitude, and you seem to be seeing both positive and negative values, so in the long run maybe things are OK after all.
You are simulating a one-dimensional random walk. Basically, imagine yourself standing on a line of integers. You begin at point i=0. With equal probability you take a step to the right or the left.
The random walk has a few cool properties and you've touched on my favourite:
Starting at point i=0, as N gets larger, the probability that you will return to that point approaches 1. As you point out - a zero sum game.
However, the expected time it will take you to return there tends to infinity. As you notice, you get some very large swings.
Since the average value should be 0 and the variance after N moves is N (so the standard deviation is sqrt(N)), you would expect about 95% of your simulations to end in the interval (-1.96*sqrt(N), 1.96*sqrt(N)); for N = 1,000,000 that is roughly ±1960.
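A quick way to see that bound in action is to repeat the whole experiment many times and count how often the final position lands inside ±1.96*sqrt(N); the run count below is arbitrary:

java.util.Random rnd = new java.util.Random();
int n = 1_000_000;
int runs = 200;
double bound = 1.96 * Math.sqrt(n);            // about 1960 for N = 1,000,000
int inside = 0;
for (int r = 0; r < runs; r++) {
    int position = 0;
    for (int x = 0; x < n; x++) {
        position += rnd.nextBoolean() ? 1 : -1;
    }
    if (Math.abs(position) <= bound) inside++;
}
System.out.println(inside + " of " + runs + " runs ended within +-" + (int) bound);
// Expect roughly 190 of the 200 runs (about 95%) to land inside the interval.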
