golden ratio calculation with precision - java

I've been given the task of calculating the golden ratio (phi = (1 + sqrt(5)) / 2).
But I need to calculate it with about 50 decimal places, then round the result to 30 decimal places and print it on the console. I am allowed to use BigDecimal and MathContext.
Does anyone have an idea how to calculate it? I am lost right now.
Thanks

I won't try to solve your problem for you!
However, to point you in a promising direction, I suggest you look at the API:
https://docs.oracle.com/javase/9/docs/api/java/math/BigDecimal.html
and specifically at the constructor:
BigDecimal(BigInteger unscaledVal, int scale, MathContext mc)
I believe that if you experiment with these objects you can meet your goal.
Note: sqrt was only added to BigDecimal in Java 9.
Good luck.
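For concreteness, here is a minimal sketch of the kind of thing you could experiment with, assuming Java 9+ so that BigDecimal.sqrt(MathContext) is available (class and variable names are just placeholders):
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;

public class GoldenRatio {
    public static void main(String[] args) {
        // Work with ~50 significant digits (a MathContext counts significant
        // digits, not decimal places), then round the result to 30 decimal places.
        MathContext work = new MathContext(50);

        BigDecimal sqrt5 = new BigDecimal(5).sqrt(work);   // BigDecimal.sqrt needs Java 9+
        BigDecimal phi = BigDecimal.ONE
                .add(sqrt5)
                .divide(new BigDecimal(2), work);

        System.out.println(phi.setScale(30, RoundingMode.HALF_UP));
    }
}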

I found this on the web. It can be used to verify your calculations.
String goldenRatio =
"1.61803398874989484820458683436563811772030917980576286213544862270526046281890" +
"244970720720418939113748475408807538689175212663386222353693179318006076672635";
You can verify the correctness of a regular calculation by:
verifying the length (don't forget the decimal point and the whole-number digits), and
verifying that it matches some initial part of the supplied answer.
For the calculation itself, I used a MathContext with precision = 30 and RoundingMode.HALF_UP.
This may not work for the way you are expected to do it. If you run into problems, folks here will be able to help.
I strongly suggest you post an attempt before asking for any additional help though.
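For example, a rough sketch of those two checks; the "computed" value below is only an illustration of what a precision-30 result looks like, not something you should copy:
public class VerifyGoldenRatio {
    public static void main(String[] args) {
        // Reference digits quoted above (the first block is enough for this check).
        String reference =
            "1.61803398874989484820458683436563811772030917980576286213544862270526046281890";

        // Stand-in for whatever your own calculation produced with
        // precision = 30 and RoundingMode.HALF_UP.
        String computed = "1.61803398874989484820458683437";

        // 1. Length check: 30 significant digits plus the decimal point = 31 characters.
        System.out.println("length: " + computed.length());

        // 2. Prefix check: the final digit may have been rounded up, so compare
        //    everything before it against the reference value.
        System.out.println("matches reference: "
                + reference.startsWith(computed.substring(0, computed.length() - 1)));
    }
}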

Related

How do I avoid rounding errors with doubles?

I'm using Math.sin to do trigonometry in Java with 3-decimal precision. However, when I calculate values that should result in an integer, I get 1.0000000002 instead of 1.
I have tried using
System.out.printf(Locale.ROOT, "%.3f ", v);
which does solve the problem of 1.0000000002 turning into 1.000.
However, when I calculate numbers that should result in 0, I instead get -1.8369701987210297E-16, and
System.out.printf(Locale.ROOT, "%.3f ", v);
prints out -0.000 when I need it to be 0.000.
Any ideas on how to get rid of that negative sign?
Let's start with this:
How do I avoid rounding errors with doubles?
Basically, you can't. They are inherent to numerical calculations using floating point types. Trust me ... or take the time to read this article:
What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.
In this case, the other thing that comes into play is that trigonometric functions are implemented by computing a finite number of steps of an infinite series with finite precision (i.e. floating point) arithmetic. The javadoc for the Math class leaves some "wiggle room" on the accuracy of the math functions. It is worth reading the javadocs to understand the expected error bounds.
Finally, if you are computing (for example) sin π/2 you need to consider how accurate your representation of π/2 is.
So what you should really be asking is how to deal with the rounding error that unavoidably happens.
In this case, what you are really asking is how to make it look to the user of your program as if there isn't any rounding error. There are two approaches to this:
Leave it alone! The rounding errors occur, so we should not lie to the users about it. It is better to educate them. (Honestly, this is high school maths, and even "the pointy haired boss" should understand that arithmetic is inexact.)
Routines like printf do a pretty good job. And the -0.000 displayed in this case is actually a truthful answer. It means that the computed answer rounds to zero to 3 decimal places but is actually negative. This is not actually hard for someone with high school maths to understand. If you explain it.
Lie. Fake it. Put in some special case code to explicitly convert numbers between -0.0005 and zero to exactly zero. The code suggested in a comment
System.out.printf(Locale.ROOT, "%.3f ", Math.round(v * 1000d) / 1000d);
is another way to do the job. But the risk is that the lie could be dangerous in some circumstances. On the other hand, you could say the real mistake is displaying the numbers to 3 decimal places in the first place.
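For instance, a minimal sketch of that special-casing; the -0.0005 threshold is the one mentioned above and only makes sense for 3-decimal output:
import java.util.Locale;

public class ZeroClamp {
    public static void main(String[] args) {
        double v = -1.8369701987210297E-16;

        // Anything between -0.0005 and zero would print as -0.000 at three
        // decimal places, so show it as exactly zero instead.
        if (v > -0.0005 && v < 0) {
            v = 0.0;
        }
        System.out.printf(Locale.ROOT, "%.3f%n", v);   // prints 0.000
    }
}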
Depending on the accuracy you need, you can multiply by X and then divide by X, where X = 10^y and y is the required floating-point precision (number of decimal places).
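For example, a sketch of that scale-and-round idea with y = 3:
public class ScaleRound {
    public static void main(String[] args) {
        double v = Math.sin(Math.PI);             // ~1.22E-16 rather than 0

        double x = Math.pow(10, 3);               // X = 10^y with y = 3
        double rounded = Math.round(v * x) / x;   // multiply, round, divide back

        System.out.println(rounded);              // 0.0
    }
}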

How to get exact value of Trigonometric functions in Java

This is not a duplicate of this, and this.
I am developing a calculator application for Android and I have been searching the web for the past 20-30 days, but I did not find any reasonable answer. I have also studied many papers on floating-point computation.
I have also tried both the Math and StrictMath libraries.
These are the values I have tried:
Math.cos(Math.PI/4) results in 0.7071067811865476, which is correct
Math.cos(Math.PI/2) results in 6.123233995736766E-17; the correct answer is 0
Math.cos(Math.PI) results in -1.0, which is correct
Math.cos((3*Math.PI)/2) results in -1.8369701987210297E-16; the correct answer is 0
Math.cos(Math.PI*2) results in 1.0, which is correct
Math.sin(Math.PI/4) results in 0.7071067811865476, which is correct
Math.sin(Math.PI/2) results in 1, which is correct
Math.sin(Math.PI) results in 1.2246467991473532E-16; the correct answer is 0
Math.sin((3*Math.PI)/2) results in -1, which is correct
Math.sin(Math.PI*2) results in -2.4492935982947064E-16; the correct answer is 0
Math.tan(Math.PI/4) results in 0.999999999999999; the correct answer is 1
Math.tan(Math.PI/2) results in 1.633123935319537E16; the correct answer is NaN
Math.tan(Math.PI) results in -1.2246467991473532E-16; the correct answer is 0
Math.tan((3*Math.PI)/2) results in 5.443746451065123E15; the correct answer is NaN
Math.tan(Math.PI*2) results in -2.4492935982947064E-16; the correct answer is 0
When I tried all these calculations on Google's official calculator (the one included in stock Lollipop), it gave correct answers for everything except tan((3*PI)/2) and tan(PI/2).
When I tried all these calculations on my Casio fx-991 PLUS, all answers were correct.
Now my question is: how do Google's calculator and Casio's calculator manage to get correct answers using the limited floating-point precision of the CPU, and how can I achieve the same output?
I am skeptical of many of the "correct answer" values you give. sin(pi) is 0, but Math.PI is not pi, it is an approximation. Sine of something that is only close to PI shouldn't give you 0. How is the user entering the values? If the user enters a decimal input, with 16 decimal places, he/she should expect to have some results that are off in the 16th decimal place. If a user asks for sin(10^-15) and you change the input to 0 and return a result of 0, then you make it so the user can't compute a numerical derivative for sin x at 0 by computing (sin(10^-15)-sin(0))/(10^-15-0). The same is true if the user enters an approximation like Math.PI and you change the input to pi.
As Bryan Reilly answered, you can round results before presenting them to the user, and this will avoid showing a value like 5*10^-15 instead of 0.
You can shift the inputs to ranges near 0. For example, for values of x greater than pi or less than -pi, you can subtract off a multiple of 2pi to get a value in [-pi,pi]. Then you can use trig identities to reduce the domain further, to [0,pi/2]. For example, if x is in [-pi,-pi/2], then use sin(x) = -sin(x+pi).
If any roundoff errors at all are unacceptable, then perhaps you should make a symbolic calculator instead of a floating point calculator.
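To illustrate the range-reduction idea described above, here is a rough sketch (the method name is made up, and this only shows the mechanics - it does not remove the error that comes from Math.PI not being exactly pi):
public class RangeReduction {
    // Bring x into [-pi, pi] by removing multiples of 2*pi, then use the
    // identity sin(x) = -sin(x + pi) to fold [-pi, -pi/2] into [0, pi/2].
    static double reducedSin(double x) {
        double twoPi = 2 * Math.PI;
        double r = x % twoPi;                     // now in (-2*pi, 2*pi)
        if (r > Math.PI)  r -= twoPi;
        if (r < -Math.PI) r += twoPi;             // now in [-pi, pi]
        if (r < -Math.PI / 2) {
            return -Math.sin(r + Math.PI);        // folded argument lies in [0, pi/2]
        }
        return Math.sin(r);
    }

    public static void main(String[] args) {
        System.out.println(reducedSin(100 * Math.PI + Math.PI / 6)); // close to sin(pi/6) = 0.5
    }
}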
What is most likely is that Google's and Casio's calculators simply round the result to zero if its magnitude is smaller than, say, 1.0E-14.
Something similar can be done if the number is too large.
Floating point inaccuracies are hard to deal with but rounding is the most common way to fix them.
And although it seems like you know what is happening under the hood, this may help you:
http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
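For what it's worth, here is a sketch of that guess; the 1.0E-14 cut-off is just the example figure from above, not something the real calculators are documented to use:
public class TinyResultFilter {
    // Snap results with a very small magnitude to zero before displaying them.
    static double display(double result) {
        return Math.abs(result) < 1.0E-14 ? 0.0 : result;
    }

    public static void main(String[] args) {
        System.out.println(display(Math.cos(Math.PI / 2)));   // 0.0
        System.out.println(display(Math.sin(Math.PI)));       // 0.0
        System.out.println(display(Math.cos(Math.PI / 4)));   // 0.7071067811865476
    }
}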
Maybe they use something like the BigDecimal class. (Sorry, I'm not yet allowed to post this as a simple comment.)
For example, yesterday I created a simple Swing application which converted decimal to unsigned binary and vice versa. The problem is that what the user enters in a text box is a String value to convert, and it can easily exceed even Long.MAX_VALUE.
What I wanted was for the user to enter as many digits as desired in both cases, so I used the BigInteger class, which is similar to BigDecimal but suited to integer values. Using it allows users to enter strings of incredible length, and lets my program output the conversion with as many or even more digits, far exceeding Long.MAX_VALUE.
However, since the Math functions use doubles, we are still limited even with the 'Big' classes, which are initialized with doubles and strings. If you have an application where you can represent numbers as strings (which can have a maximum length of Integer.MAX_VALUE characters), then the 'Big' classes are great. However, if you want to initialize with a double, you are obviously limited by the constraints of double.
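As an illustration of the String-based approach described above (the class name and input value are made up):
import java.math.BigInteger;

public class RadixDemo {
    public static void main(String[] args) {
        // User input arrives as a String, so it is not limited by Long.MAX_VALUE.
        String decimalInput = "123456789012345678901234567890";

        String binary = new BigInteger(decimalInput).toString(2);     // decimal -> binary
        String backToDecimal = new BigInteger(binary, 2).toString();  // binary -> decimal

        System.out.println(binary);
        System.out.println(backToDecimal);
    }
}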

Take Power of Decimal to Maximum Precision

I want to do the following in java:
Math.pow((int),(double))
and keep the decimal precision above 16 digits (much greater than that, actually).
Is this possible? I know it involves using BigDecimal and maybe ln functions, but I'm not sure how to approach this.
Thanks in advance.
EDIT
The reason I am asking is that I am trying to compute pi to an enormous number of digits. Currently I am using the Chudnovsky algorithm. I've tried using a Taylor series for this, but it takes much too long to be practical.
EDIT
Maybe this is a better question: how do you find the square root of a BigDecimal? (Technically the same as the original, since raising to the power .5 is...)
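On Java 9+ you can simply call BigDecimal.sqrt(MathContext); on older JDKs a common approach is Newton's iteration. A rough sketch (not production-hardened - it does not handle zero, negative values, or numbers outside double range used for the seed):
import java.math.BigDecimal;
import java.math.MathContext;

public class BigSqrt {
    // Newton's iteration x_{n+1} = (x_n + a / x_n) / 2 converges to sqrt(a).
    static BigDecimal sqrt(BigDecimal a, MathContext mc) {
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal x = new BigDecimal(Math.sqrt(a.doubleValue()), mc);  // double-precision seed
        for (int i = 0; i < 100; i++) {
            BigDecimal next = a.divide(x, mc).add(x).divide(two, mc);
            if (next.compareTo(x) == 0) {
                break;                                                  // converged at this precision
            }
            x = next;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(sqrt(BigDecimal.valueOf(2), new MathContext(50)));
    }
}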

Double precision values

Just a day ago I participated in the qualification round of Google Code Jam. This was my first experience of such an online coding contest. It was really fun.
There were three problems given, of which I was able to solve two. But one of the problems required working with values that are really huge. I am a Java guy and I thought I would go for a double variable. Unfortunately, the precision of double was not enough either. Moreover, since I attempted this during the closing stage, I did not have time to dig into it much (plus solving one problem is enough to qualify for the next stage).
My question is this: how do I get a precision mechanism that is greater than double? My coding experience is in Java, so it would be great if you could answer along those lines.
Thanks
Java has BigDecimal for arbitrary-precision arithmetic - but it's much, much slower than using double.
It's also possible that the problem in question was supposed to be solved using algebraic transformations, e.g. by working with logarithms.
If the problem requires integers you can use BigInteger.
Also, long is slightly better than double for integers, with 63 bits compared to 53 bits of precision (assuming positive numbers).
You can use arbitrary precision numbers, such as BigDecimal - it's slower but as precise as you specify.
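A quick illustration of the difference, using the classic repeated-0.1 example:
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // double accumulates representation error...
        double d = 0;
        for (int i = 0; i < 10; i++) d += 0.1;
        System.out.println(d);                                // 0.9999999999999999

        // ...while BigDecimal built from strings stays exact.
        BigDecimal b = BigDecimal.ZERO;
        for (int i = 0; i < 10; i++) b = b.add(new BigDecimal("0.1"));
        System.out.println(b);                                // 1.0
    }
}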

Precision nightmare in Java and SQL Server

I've been struggling with a precision nightmare in Java and SQL Server, to the point where I don't know what to do anymore. Personally, I understand the issue and the underlying reason for it, but explaining it to a client halfway across the globe is unfeasible (at least for me).
The situation is this. I have two columns in SQL Server - Qty INT and Price FLOAT. The values are 1250 and 10.8601, so the total is Qty * Price, and the result is 13575.124999999998 (in both Java and SQL Server). That's correct. The issue is this - the client doesn't want to see that; they see that number only as 13575.125, and that's it. In one place they want to see it with 2-decimal precision and in another with 4 decimals. When displayed with 4 decimals the number is correct - 13575.125 - but when displayed with 2 decimals they believe it is wrong - 13575.12 - it should instead be 13575.13!
Help.
Your problem is that you are using floats. On the java side, you need to use BigDecimal, not float or double, and on the SQL side you need to use Decimal(19,4) (or Decimal(19,3) if it helps jump to your precision level). Do not use the Money type because math on the Money type in SQL causes truncation, not rounding. The fact that the data is stored as a float type (which you say is unchangeable) doesn't affect this, you just have to convert it at first opportunity before doing math on it.
In the specific example you give, you need to first get the 4 decimal precision number and put it in a BigDecimal or Decimal(19,4) as the case may be, and then further round it to 2 decimal precision. Then (if you are rounding up) you will get the result you want.
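A small sketch of that two-stage rounding, using the numbers from the question:
import java.math.BigDecimal;
import java.math.RoundingMode;

public class TwoStageRounding {
    public static void main(String[] args) {
        // Convert the float result at the first opportunity, as suggested above.
        double raw = 1250 * 10.8601;                    // 13575.124999999998

        BigDecimal fourDp = BigDecimal.valueOf(raw)
                .setScale(4, RoundingMode.HALF_UP);     // 13575.1250
        BigDecimal twoDp = fourDp
                .setScale(2, RoundingMode.HALF_UP);     // 13575.13

        System.out.println(fourDp);
        System.out.println(twoDp);
    }
}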
Use BigDecimal. float is not an appropriate type to represent money. BigDecimal will handle the rounding properly; float will always produce rounding errors.
For storing monetary amounts, floating-point values are not the way to go. From your description, I would probably handle amounts as long integers whose value is the monetary amount multiplied by 10^5, and use that as the database storage format.
You need to be able to do calculations with amounts that do not lose precision, so here again floating point is not the way to go. If the total sums between debit and credit are off by 1 cent in a ledger, the ledger fails in the eyes of financial people, so make sure your software operates in their problem domain, not yours. If you cannot use existing classes for monetary amounts, you need to build your own class that works with amount * 10^5 and formats to the wanted precision only for input and output purposes.
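A rough sketch of that scaled-long idea (in real code the amount would be parsed straight from text rather than going through a double, and negative amounts would need extra care in the formatting):
public class ScaledAmount {
    static final long SCALE = 100_000;                  // 10^5

    public static void main(String[] args) {
        long price = Math.round(10.8601 * SCALE);       // 1086010
        long qty = 1250;

        long total = qty * price;                       // 1357512500, exact integer math
        System.out.printf("%d.%05d%n", total / SCALE, total % SCALE);   // 13575.12500
    }
}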
Don't use the float datatype for price. You should use "Money" or "SmallMoney".
Here's a reference for MS SQL data types: http://webcoder.info/reference/MSSQLDataTypes.html
Correction: use Decimal(19,4). Thanks Yishai.
I think I see the problem.
10.8601 cannot be represented exactly, so while the rounding to 13575.125 works OK, it's difficult to get it to round to .13 because adding 0.005 just doesn't quite get there. And to make matters worse, 0.005 doesn't have an exact representation either, so you end up just slightly short of 0.13.
Your choices are then either to round twice, once to three digits and then once to 2, or to do a better calculation to start with: using long or a high-precision format, scale by 1000 to turn *.125 into *125 and do the rounding with exact integers.
By the way, it's not entirely correct to repeat one of the endless variations on "floating point is inaccurate" or to say that it always produces errors. The problem is that the format can only represent fractions that can be built by summing negative powers of two. So, of the sequence 0.01 to 0.99, only .25, .50, and .75 have exact representations. Consequently, FP is best used, ironically, by scaling it so that only integer values are used; then it is as accurate as integer arithmetic. Of course, at that point you might as well have just used fixed-point integers to start with.
Be careful, scaling, say, 0.37 to 37 still isn't exact unless rounded. Floating point can be used for monetary values but it's more work than it is worth and typically the necessary expertise isn't available.
The FLOAT data type can't represent fractions accurately because it is base2 instead of base10. (See the convenient link :) http://gregs-blog.com/2007/12/10/dot-net-decimal-type-vs-float-type/).
For financial computations or anything that requires fractions to be represented accurately, the DECIMAL data type must be used.
If you can't fix the underlying database, you can fix the Java side like this:
import java.text.DecimalFormat;

public class Temp {
    public static void main(String[] args) {
        double d = 13575.124999999;

        // Format to 2 decimal places, then parse back to a double.
        DecimalFormat df2 = new DecimalFormat("#.##");
        System.out.println(" 2dp: " + Double.valueOf(df2.format(d)));

        // Format to 4 decimal places.
        DecimalFormat df4 = new DecimalFormat("#.####");
        System.out.println(" 4dp: " + Double.valueOf(df4.format(d)));
    }
}
Although you shouldn't be storing the price as a float in the first place, you can consider converting it to decimal(38, 4), say, or money (note that money has some issues since results of expressions involving it do not have their scale adjusted dynamically), and exposing that in a view on the way out of SQL Server:
SELECT Qty * CONVERT(decimal(38, 4), Price)
So, given that you can't change the database structure (which would probably be the best option, since you are using a non-fixed precision type to represent something that should be fixed/precise, as many others have already discussed), hopefully you can change the code somewhere. On the Java side, I think something like what @andy_boot answered with would work. On the SQL side, you basically need to cast the imprecise value to the highest precision you need and keep casting down from there, basically something like this in the SQL code:
declare @f float,
        @n numeric(20,4),
        @m money;

select @f = 13575.124999999998,
       @n = 13575.124999999998,
       @m = 13575.124999999998

select @f, @n, @m

select cast(@f as numeric(20,4)), cast(cast(@f as numeric(20,4)) as numeric(20,2))
select cast(@f as money), cast(cast(@f as money) as numeric(20,2))
You can also use a DecimalFormat and have it do the rounding.
DecimalFormat df = new DecimalFormat("0.00"); // or "0.0000" for 4 digits
df.setRoundingMode(RoundingMode.HALF_UP);
String displayAmt = df.format(<your value here>);
And I agree with others that you should not be using float as a DB field type to store currency.
If you can't change the database to a fixed decimal datatype, something you might try is rounding by taking truncate((x+.0055)*10000)/10000. Then 1.124999 would "round" to 1.13 and give consistent results. Mathematically this is unreliable, but I think it would work in your case.
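For example, with the total from the question (as noted, this is a workaround rather than a mathematically sound fix):
public class BiasAndTruncate {
    public static void main(String[] args) {
        double x = 13575.124999999998;

        // Add a small bias, truncate at 4 decimals, then display at 2 decimals.
        double adjusted = Math.floor((x + 0.0055) * 10000) / 10000;   // about 13575.1304
        System.out.printf("%.2f%n", adjusted);                        // 13575.13
    }
}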
