I'm supposed to calculate the integral of ln(x) from 1 to e
using Simpson's rule, with 4 sub-intervals.
I surely do not want to do it by hand, so I have tried to write that algorithm in Java.
The formula for Simpson's rule on a single interval [a, b] is (b-a)/6 * [f(a) + 4*f((a+b)/2) + f(b)].
And here is my code:
import java.util.Scanner;
import java.util.Locale;

public class Simpson {
    public static void main(String[] args) {
        Scanner input = new Scanner(System.in).useLocale(Locale.US);
        // e = 2.718281828459045 to copy-paste
        System.out.println("Interval a: ");
        double aInt = input.nextDouble();
        System.out.println("Interval b: ");
        double bInt = input.nextDouble();
        System.out.println("How many sub intervals: ");
        double teilInt = input.nextDouble();
        double intervaldistance = (bInt - aInt) / teilInt;
        System.out.println("h = (" + bInt + "-" + aInt + ") / " + teilInt + " = " + intervaldistance);
        double total = 0;
        System.out.println("");
        double totalSum = 0;
        for (double i = 0; i < teilInt; i++) {
            bInt = aInt + intervaldistance;
            printInterval(aInt, bInt);
            total = prod1(aInt, bInt);
            total = total * prod2(aInt, bInt);
            aInt = bInt;
            System.out.println(total);
            totalSum = totalSum + total;
            total = 0;
        }
        System.out.println("");
        System.out.println("Result: " + totalSum);
    }

    static double prod1(double a, double b) { // first product of Simpson's rule: (b-a) / 6
        double res1 = (b - a) / 6;
        return res1;
    }

    static double prod2(double a, double b) { // second product of Simpson's rule: f(a) + 4*f((a+b)/2) + f(b)
        double res2 = Math.log(a) + 4 * Math.log((a + b) / 2) + Math.log(b);
        return res2;
    }

    static void printInterval(double a, double b) {
        System.out.println("");
        System.out.println("[" + a + "; " + b + "]");
    }
}
Output for 4 sub intervals:
[1.0; 1.4295704571147612]
0.08130646125926948
[1.4295704571147612; 1.8591409142295223]
0.21241421690076787
[1.8591409142295223; 2.2887113713442835]
0.31257532785558795
[2.2887113713442835; 2.7182818284590446]
0.39368288949073565
Result: 0.9999788955063609
Now if I compare my solution with other online calculators (http://www.emathhelp.net/calculators/calculus-2/simpsons-rule-calculator/?f=ln+%28x%29&a=1&b=e&n=4&steps=on), it differs, but I don't see why mine should be wrong.
My solution is 0.9999788955063609, the online solution is 0.999707944567103.
Maybe there is a mistake I made? But I have double-checked everything and couldn't find it.
You may be accumulating rounding error by computing b_n = a_n + interval many times.
Instead you could use a closed-form approach, where you say a_n = a_0 + n*interval, since this only introduces a rounding error once.
I will test with actual numbers to confirm and flesh out the answer in a little bit, but in the meantime you can watch this explanation about accumulation of error from Handmade Hero.
PS. As a bonus, you get to watch an excerpt from Handmade Hero!
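The difference between the two approaches can be sketched like this (a minimal sketch; the step count of one million is my own choice, made large so the drift becomes visible):

```java
public class StepError {
    public static void main(String[] args) {
        double a = 1.0, b = Math.E;
        int n = 1_000_000;
        double h = (b - a) / n;

        // Accumulating: each addition introduces its own rounding error
        double x = a;
        for (int i = 0; i < n; i++) {
            x += h;
        }

        // Closed form: one multiply and one add, so rounding happens only once
        double y = a + n * h;

        System.out.println("accumulated endpoint: " + x + " (error " + Math.abs(x - b) + ")");
        System.out.println("closed-form endpoint: " + y + " (error " + Math.abs(y - b) + ")");
    }
}
```

The accumulated version typically ends up several orders of magnitude further from b than the closed-form version.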
UPDATE: I had a look at your link. While the problem I described above does apply, the difference in precision is small (you'll get the answer 0.9999788955063612 instead). The reason for the discrepancy in your case is that the formula used in your online calculator is a slightly different variant in terms of notation, which treats the interval [a,b] as 2h. In other words, your 4 intervals is equivalent to 8 intervals in their calculation.
If you put 8 rectangles in that webpage you'll get the same result as the (more accurate) number here:
Answer: 0.999978895506362.
See a better explanation of the notation used on that webpage here
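For reference, here is a self-contained sketch of composite Simpson using the closed-form node computation (the function and bounds are hard-coded to this question's integral of ln x over [1, e], whose exact value is 1):

```java
import java.util.function.DoubleUnaryOperator;

public class SimpsonClean {
    // Composite Simpson's rule: (b-a)/6 * [f(a) + 4f(m) + f(b)] on each sub-interval
    static double simpson(DoubleUnaryOperator f, double a, double b, int n) {
        double h = (b - a) / n;
        double sum = 0;
        for (int i = 0; i < n; i++) {
            double x0 = a + i * h;       // closed form: no accumulated rounding
            double x1 = a + (i + 1) * h;
            double mid = (x0 + x1) / 2;
            sum += (x1 - x0) / 6
                    * (f.applyAsDouble(x0) + 4 * f.applyAsDouble(mid) + f.applyAsDouble(x1));
        }
        return sum;
    }

    public static void main(String[] args) {
        // Should print a value very close to 0.999978895506361
        System.out.println(simpson(Math::log, 1.0, Math.E, 4));
    }
}
```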
I moved your delta calculation to the top so that you don't calculate the delta over and over again. You were also not applying the right multipliers for the odd and even terms, as well as not applying the right formula for deltaX, since it has to be: ((b-a)/n) / 3
double deltaX = ((bInt - aInt) / teilInt) / 3;
for (int i = 0; i <= teilInt; i++) { // changed to <= to include the last node
    bInt = aInt + intervaldistance;
    printInterval(aInt, bInt);
    total = prod2(aInt, bInt, i + 1, teilInt); // added the current node index and n; the index is i+1 to work well with the evens and odds
    totalSum += total;
    aInt = bInt;
    System.out.println(total);
}
System.out.println("");
System.out.println("Result: " + (totalSum * deltaX)); // multiplication with deltaX is now here
To account for the right factor of f(x), I changed prod2 to:
static double prod2(double a, double b, int interval, double n) {
    int multiplier = 1; // the two endpoints (first and last node) keep multiplier 1
    if (interval > 1 && interval <= n) {
        // applying the right multiplier to f(x) for the interior nodes: 4, 2, 4, 2, ...
        multiplier = (interval % 2 == 0) ? 4 : 2;
    }
    return multiplier * Math.log(a);
}
Now it yields the correct result.
One of the first things we learn in floating point arithmetic is how rounding error plays a crucial role in summation. Let's say we have an array double[] myArray and we want to find the mean. What we could trivially do is:
double sum = 0.0;
for (int i = 0; i < myArray.length; i++) {
    sum += myArray[i];
}
double mean = sum / myArray.length;
However, we would have rounding error. This error can be reduced using other summation algorithms such as Kahan summation (wiki: https://en.wikipedia.org/wiki/Kahan_summation_algorithm).
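For reference, a minimal Kahan summation sketch (the class and the test values are my own, not part of the question's code; 1e16 + 1 is not representable in a double, so the 1.0s vanish in a naive sum but are rescued by the compensation term):

```java
public class KahanDemo {
    // Classic Kahan compensated summation: carries a running correction
    // term c for the low-order bits lost in each addition
    static double kahanSum(double[] values) {
        double sum = 0.0;
        double c = 0.0; // running compensation for lost low-order bits
        for (double v : values) {
            double y = v - c;    // add back the previously lost bits
            double t = sum + y;  // low-order bits of y may be lost here...
            c = (t - sum) - y;   // ...and are recovered into c
            sum = t;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] values = {1e16, 1, 1, 1, 1, -1e16};
        double naive = 0.0;
        for (double v : values) naive += v;
        System.out.println("naive: " + naive);            // prints naive: 0.0
        System.out.println("kahan: " + kahanSum(values)); // prints kahan: 4.0
    }
}
```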
I have recently discovered Java Streams (refer to: https://docs.oracle.com/javase/8/docs/api/java/util/stream/package-summary.html) and in particular DoubleStream (see: https://docs.oracle.com/javase/8/docs/api/java/util/stream/DoubleStream.html).
With the code:
double sum = DoubleStream.of(myArray).parallel().sum();
double average = sum / myArray.length;
we can get the average of our array. Two advantages are remarkable in my opinion:
More concise code
Faster as it is parallelized
Of course we could also have done something like:
double average = DoubleStream.of(myArray).parallel().average().orElse(Double.NaN); // average() returns an OptionalDouble
but I wanted to stress the summation.
At this point I have a question (which the API docs didn't answer): is this method sum() numerically stable? I have done some experiments and it appears to work fine. However, I am not sure it is at least as good as the Kahan algorithm. Any help really welcome!
The documentation says it:
Returns the sum of elements in this stream. Summation is a special
case of a reduction. If floating-point summation were exact, this
method would be equivalent to:
return reduce(0, Double::sum);
However, since floating-point summation is not exact, the above code
is not necessarily equivalent to the summation computation done by
this method.
Have you considered using BigDecimal to get exact results?
Interesting. So I implemented the Kahan variant by Klein mentioned in the Wikipedia article, and a Stream version of it.
The results are not convincing.
double[] values = new double[10_000];
Random random = new Random();
Arrays.setAll(values, (i) -> Math.atan(random.nextDouble() * Math.PI * 2) * 3E17);

long t0 = System.nanoTime();
double sum1 = DoubleStream.of(values).sum();
long t1 = System.nanoTime();
double sum2 = DoubleStream.of(values).parallel().sum();
long t2 = System.nanoTime();
double sum3 = kleinSum(values);
long t3 = System.nanoTime();
double sum4 = kleinSumAsStream(values);
long t4 = System.nanoTime();
System.out.printf(
        "sequential %f (%d ns)%nparallel %f (%d ns)%nklein %f (%d ns)%nkleinStream %f (%d ns)%n",
        sum1, t1 - t0,
        sum2, t2 - t1,
        sum3, t3 - t2,
        sum4, t4 - t3);
And a non-stream version of the modified Kahan:
public static double kleinSum(double[] input) {
    double sum = 0.0;
    double cs = 0.0;
    double ccs = 0.0;
    for (int i = 0; i < input.length; ++i) {
        double t = sum + input[i];
        double c = Math.abs(sum) >= Math.abs(input[i])
                ? (sum - t) + input[i]
                : (input[i] - t) + sum;
        sum = t;
        t = cs + c;
        double cc = Math.abs(cs) >= Math.abs(c)
                ? (cs - t) + c
                : (c - t) + cs;
        cs = t;
        ccs += cc;
    }
    return sum + cs + ccs;
}
A Stream version:
public static double kleinSumAsStream(double[] input) {
    double[] scc = DoubleStream.of(input)
            .boxed()
            .reduce(new double[3],
                    (sumCsCcs, x) -> {
                        double t = sumCsCcs[0] + x;
                        double c = Math.abs(sumCsCcs[0]) >= Math.abs(x)
                                ? (sumCsCcs[0] - t) + x
                                : (x - t) + sumCsCcs[0];
                        sumCsCcs[0] = t;
                        t = sumCsCcs[1] + c;
                        double cc = Math.abs(sumCsCcs[1]) >= Math.abs(c)
                                ? (sumCsCcs[1] - t) + c
                                : (c - t) + sumCsCcs[1];
                        sumCsCcs[1] = t;
                        sumCsCcs[2] += cc;
                        return sumCsCcs;
                    },
                    (scc1, scc2) -> new double[] {
                            scc2[0] + scc1[0],
                            scc2[1] + scc1[1],
                            scc2[2] + scc1[2]});
    return scc[0] + scc[1] + scc[2];
}
Mind that the timings are only weak evidence; a proper microbenchmark harness should be used.
However one still sees the overhead of a DoubleStream:
sequential 3363280744568882000000,000000 (5083900 ns)
parallel 3363280744568882500000,000000 (4492600 ns)
klein 3363280744568882000000,000000 (1051600 ns)
kleinStream 3363280744568882000000,000000 (3277500 ns)
Unfortunately I did not manage to provoke visible floating point errors with this input, and it's late here.
Using a Stream instead of kleinSum would need a reduction over at least two doubles (sum and correction), so a double[] or, in newer Java, a record (double sum, double cs, double ccs) value.
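A sketch of that record idea (Java 16+; the names are my own, and the plain loop sidesteps the combiner question of a parallel reduce; the {1.0, 1e100, 1.0, -1e100} input is the classic case where first-order Kahan loses the small terms but Klein's second-order variant keeps them):

```java
import java.util.stream.DoubleStream;

public class KleinRecordDemo {
    // Immutable accumulator for Klein's second-order compensated sum
    record Acc(double sum, double cs, double ccs) {
        Acc add(double x) {
            double t = sum + x;
            double c = Math.abs(sum) >= Math.abs(x) ? (sum - t) + x : (x - t) + sum;
            double t2 = cs + c;
            double cc = Math.abs(cs) >= Math.abs(c) ? (cs - t2) + c : (c - t2) + cs;
            return new Acc(t, t2, ccs + cc);
        }
        double value() { return sum + cs + ccs; }
    }

    static double kleinSum(double[] input) {
        Acc acc = new Acc(0, 0, 0);
        for (double x : input) acc = acc.add(x);
        return acc.value();
    }

    public static void main(String[] args) {
        double[] values = {1.0, 1e100, 1.0, -1e100};
        // The true sum is 2.0; Klein recovers it, plain summation loses it
        System.out.println(DoubleStream.of(values).sum() + " vs " + kleinSum(values));
    }
}
```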
A far less magical auxiliary approach is to sort the input by magnitude.
float (used for readability reasons only; double has a precision limit too, as used later) has a 24-bit mantissa (of which 23 bits are stored, and the 24th is an implied 1 for "normal" numbers), so if you have the number 2^24, you simply can't add 1 to it; the smallest increment it has is 2:
float f=1<<24;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
will display
16777216
16777216 <-- 16777216+1+1
16777218 <-- 16777216+2
while summing them in the other direction works
float f=0;
System.out.println(Float.valueOf(f).intValue());
f++;
f++;
System.out.println(Float.valueOf(f).intValue());
f+=2;
System.out.println(Float.valueOf(f).intValue());
f+=1<<24;
System.out.println(Float.valueOf(f).intValue());
produces
0
2
4
16777220 <-- 4+16777216
(Of course the pair of f++s is intentional: 16777219 would not exist, just like 16777217 in the previous case. These are not incomprehensibly huge numbers, yet a simple line such as System.out.println((int)(float)16777219); already prints 16777220.)
The same applies to double too, just there you have 53 bits of precision.
Two things:
the documentation actually suggests this: API Note: Elements sorted by increasing absolute magnitude tend to yield more accurate results
sum() internally ends up in Collectors.sumWithCompensation(), which explicitly states that it is an implementation of Kahan summation. (The GitHub link is JetBrains' mirror because Java uses different source control, which is a bit harder to find and link - but the file is present in your JDK too, inside src.zip, usually located in the lib folder.)
Ordering by magnitude is something like ordering by log(abs(x)), which is a bit uglier in code, but possible:
double t[] = {Math.pow(2, 53), 1, -1, -Math.pow(2, 53), 1};
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
t = DoubleStream.of(t).boxed()
        .sorted((a, b) -> (int) (Math.log(Math.abs(a)) - Math.log(Math.abs(b))))
        .mapToDouble(d -> d)
        .toArray();
System.out.println(DoubleStream.of(t).boxed().collect(Collectors.toList()));
will print an okay order
[9.007199254740992E15, 1.0, -1.0, -9.007199254740992E15, 1.0]
[1.0, -1.0, 1.0, 9.007199254740992E15, -9.007199254740992E15]
So it's nice, but you can actually break it with little effort. (The first few lines show that 2^53 really is the "integer limit" for double, and also "remind" us of the actual value; then the sum with a single +1 ends up being less than 2^53:)
double d = Math.pow(2, 53);
System.out.println(Double.valueOf(d).longValue());
d++;
d++;
System.out.println(Double.valueOf(d).longValue());
d += 2;
System.out.println(Double.valueOf(d).longValue());

double array[] = {Math.pow(2, 53), 1, 1, 1, 1};
for (var i = 0; i < 5; i++) {
    var copy = Arrays.copyOf(array, i + 1);
    d = DoubleStream.of(copy).sum();
    System.out.println(i + ": " + Double.valueOf(d).longValue());
}
produces
9007199254740992
9007199254740992 <-- 9007199254740992+1+1
9007199254740994 <-- 9007199254740992+2
0: 9007199254740992
1: 9007199254740991 <-- that would be 9007199254740992+1 with Kahan
2: 9007199254740994
3: 9007199254740996 <-- "rounding" upwards, just like with (float)16777219 earlier
4: 9007199254740996
TL;DR: you don't need your own Kahan implementation, but use computers with care in general.
I'm writing a function that implements the following expression: (1/n!)*(1!+2!+3!+...+n!).
The function is passed the argument n, and I have to return the above expression as a double, truncated to the 6th decimal place. The issue I'm running into is that the factorial value becomes so large that it becomes infinity (for large values of n).
Here is my code:
public static double going(int n) {
    double factorial = 1.00;
    double result = 0.00, sum = 0.00;
    for (int i = 1; i < n + 1; i++) {
        factorial *= i;
        sum += factorial;
    }
    // Truncate decimals to 6 places
    result = (1 / factorial) * (sum);
    long truncate = (long) Math.pow(10, 6);
    result = result * truncate;
    long value = (long) result;
    return (double) value / truncate;
}
Now, the above code works fine for, say, n = 5 or n = 113, but anything above n = 170 and my factorial and sum expressions become infinity. Is my approach just not going to work due to the exponential growth of the numbers? And what would be a workaround for calculating very large numbers that doesn't impact performance too much? (I believe BigInteger is quite slow, from looking at similar questions.)
You can solve this without evaluating a single factorial.
Your formula simplifies to the considerably simpler, computationally speaking
1!/n! + 2!/n! + 3!/n! + ... + 1
Aside from the first and last terms, a lot of factors actually cancel, which will help the precision of the final result. For example, for 3!/n! you only need to multiply 1/4 through to 1/n. What you must not do is evaluate the factorials and then divide them.
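To see why this matters, here is a small sketch (my own example; n = 175 is chosen because 175! overflows a double, while the cancelled product is still representable):

```java
public class CancelDemo {
    public static void main(String[] args) {
        int n = 175;

        // Naive: evaluate the factorial, then divide - 175! overflows to Infinity
        double nFact = 1.0;
        for (int i = 2; i <= n; i++) nFact *= i;
        double naive = 6.0 / nFact;   // 3! / Infinity == 0.0

        // Cancelled: 3!/n! = 1/4 * 1/5 * ... * 1/n stays inside double range
        double cancelled = 1.0;
        for (int i = 4; i <= n; i++) cancelled /= i;

        System.out.println("naive:     " + naive);     // prints naive:     0.0
        System.out.println("cancelled: " + cancelled); // a tiny but nonzero value
    }
}
```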
If 15 decimal digits of precision is acceptable (which it appears that it is from your question) then you can evaluate this in floating point, adding the small terms first. As you develop the algorithm, you'll notice the terms are related, but be very careful how you exploit that as you risk introducing material imprecision. (I'd consider that as a second step if I were you.)
Here's a prototype implementation. Note that I accumulate all the individual terms in an array first, then I sum them up starting with the smaller terms first. I think it's computationally more accurate to start from the final term (1.0) and work backwards, but that might not be necessary for a series that converges so quickly. Let's do this thoroughly and analyse the results.
private static double evaluate(int n) {
    double terms[] = new double[n];
    double term = 1.0;
    terms[n - 1] = term;
    while (n > 1) {
        terms[n - 2] = term /= n;
        --n;
    }
    double sum = 0.0;
    for (double t : terms) {
        sum += t;
    }
    return sum;
}
You can see how very quickly the first terms become insignificant. I think you only need a few terms to compute the result to the tolerance of a floating point double. Let's devise an algorithm to stop when that point is reached:
The final version. It seems that the series converges so quickly that you don't need to worry about adding small terms first. So you end up with the absolutely beautiful
private static double evaluate_fast(int n) {
    double sum = 1.0;
    double term = 1.0;
    while (n > 1) {
        double old_sum = sum;
        sum += term /= n--;
        if (sum == old_sum) {
            // precision exhausted for the type
            break;
        }
    }
    return sum;
}
As you can see, there is no need for BigDecimal etc., and certainly never a need to evaluate any factorials.
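As a quick sanity check of evaluate_fast above (the answer's method is re-included verbatim so the snippet is self-contained; the hand-computed value for n = 5 is (1!+2!+3!+4!+5!)/5! = 153/120 = 1.275):

```java
public class EvaluateFastCheck {
    private static double evaluate_fast(int n) {
        double sum = 1.0;
        double term = 1.0;
        while (n > 1) {
            double old_sum = sum;
            sum += term /= n--;
            if (sum == old_sum) {
                break; // precision exhausted for the type
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(evaluate_fast(5)); // 1.275, up to the last bit of rounding
    }
}
```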
You could use BigDecimal like this:
import java.math.BigDecimal;
import java.math.RoundingMode;

public static double going(int n) {
    BigDecimal factorial = BigDecimal.ONE;
    BigDecimal sum = BigDecimal.ZERO;
    BigDecimal result;
    for (int i = 1; i < n + 1; i++) {
        factorial = factorial.multiply(new BigDecimal(i));
        sum = sum.add(factorial);
    }
    // 6 decimal places; use RoundingMode.DOWN instead if you want true truncation
    result = sum.divide(factorial, 6, RoundingMode.HALF_EVEN);
    return result.doubleValue();
}
Code 1:
Class 1
import java.text.NumberFormat;

public class testing2 {
    int balance;

    void addInterest(int rate) {
        balance += balance * (rate / 100);
    }

    void display() {
        NumberFormat currency = NumberFormat.getCurrencyInstance();
        System.out.print("The balance is ");
        System.out.print(currency.format(balance));
    }
}
Class 2
import java.util.Random;

public class testing {
    public static void main(String args[]) {
        testing2 aTesting = new testing2();
        Random Myrandom = new Random();
        aTesting.balance = Myrandom.nextInt(501);
        int rate2 = 5;
        System.out.println("Current balance: " + aTesting.balance);
        System.out.println("Current rate: " + rate2);
        aTesting.addInterest(rate2);
        aTesting.display();
        System.out.println();
    }
}
OUTPUT:
Current balance: 327
Current rate: 5
The balance is MYR327.00
Code 2:
Class 1
import java.text.NumberFormat;

public class testing2 {
    double balance;

    void addInterest(double rate) {
        balance += balance * (rate / 100);
    }

    void display() {
        NumberFormat currency = NumberFormat.getCurrencyInstance();
        System.out.print("The balance is ");
        System.out.print(currency.format(balance));
    }
}
Class 2
import java.util.Random;

public class testing {
    public static void main(String args[]) {
        testing2 aTesting = new testing2();
        Random Myrandom = new Random();
        aTesting.balance = Myrandom.nextInt(501);
        double rate2 = 5;
        System.out.println("Current balance: " + aTesting.balance);
        System.out.println("Current rate: " + rate2);
        aTesting.addInterest(rate2);
        aTesting.display();
        System.out.println();
    }
}
OUTPUT:
Current balance: 170.0
Current rate: 5.0
The balance is MYR178.50
CONCLUSION: The first program does not change the final value of the balance, whilst the second program does. What is the reason for this? I only changed the type of the variables from int to double and nothing more.
In the first case you're doing
int balance = 327;
int rate = 5;
balance += balance * (rate / 100);
When you divide an int by an int, the result is also an int, so rate / 100 is 5 / 100, which is 0 (the fractional part of 0.05 is discarded). That gives balance += balance * 0, which is why balance hasn't changed in the first program.
When you change your variables and parameters to double, there is no truncation, so all calculations go as you expect.
If you declare two variables as integers and divide them:
int a = 4;
int b = 5;
System.out.println(a / b);
You'll not get 0.8.
In Java, an integer divided by an integer is always an integer, and since the mathematical result 0.8 is not an integer, the fractional part is discarded to get 0. If you want the result 0.8, you need to make either a or b a double.
In Code 1, your addInterest method gets an argument value of 5 and this happens:
balance += balance*(rate/100);
Since rate and 100 are both integers, the result must be an integer. Therefore, the mathematical result 0.05 is truncated to 0. And as we all know, anything multiplied by 0 is 0. As a result, after the right side is evaluated, the assignment statement looks like this:
balance += 0;
The first program does not change the final value of the balance
whilst the 2nd program does. What is the reason for this ?
It is all about double vs integer arithmetic.
void addInterest(double rate) {
    balance += balance * (rate / 100);
}
In the second program, when you call addInterest(double rate), the value of rate passed from main() is already a double (the type declared in the addInterest method signature), and then balance*(rate/100) will be calculated like below.
When rate is double:
rate/100 = 5.0/100 = 0.05
But when rate is an integer:
rate/100 = 5/100 = 0 (integer division discards the fractional part)
It's because in your first you have:
int rate2 = 5;
and in your second you have:
double rate2 = 5;
You should be using double when dealing with currency type values.
This is a classic problem for beginners. I'm surprised I can't find a really good explanation. This one is a bit helpful:
Why is the result of 1/3 == 0?
In Java, how an operation like division (/) is performed depends on the types of the operands. In a/b, a and b are the operands. If you divide an integer by an integer, it performs integer division. That means taking the whole-number part and discarding the remainder. With that method, (56/100) == 0 but (156/100) == 1; there is no rounding.
To force floating point arithmetic make sure one of the operands is double for example try:
void addInterest(int rate) {
    balance += balance * (rate / 100.0);
}
The compiler will interpret 100.0 as double, then perform the calculation in floating point arithmetic (probably what you were expecting), and then take the integer value of the result to add to balance.
Footnote 1: Never use double for currency values. Use java.math.BigDecimal: there are a number of rounding oddities in double arithmetic that cause problems in financial calculations. double can't represent 0.01 precisely, and accumulating rounding errors inevitably causes some confusion when, for example, (0.01+0.01+0.01+0.01+0.01+0.01) != 0.06.
You can fudge around it with tolerances, but in financial systems of any real size you will eventually go outside them.
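A minimal sketch of the BigDecimal approach for money (note the String constructor: new BigDecimal(0.01) would inherit the double's representation error):

```java
import java.math.BigDecimal;

public class MoneyDemo {
    public static void main(String[] args) {
        // Build 0.01 from a String so the value is exactly one cent
        BigDecimal cent = new BigDecimal("0.01");
        BigDecimal total = BigDecimal.ZERO;
        for (int i = 0; i < 6; i++) {
            total = total.add(cent); // exact decimal addition, no drift
        }
        System.out.println(total);                                        // prints 0.06
        System.out.println(total.compareTo(new BigDecimal("0.06")) == 0); // prints true
    }
}
```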
Footnote 2: Why does Java work like this? Integer arithmetic is useful for many algorithms, and loops performing millions of increments will never 'drift off' like we can see happening after just 6 operations above!
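To illustrate the footnote (a standalone sketch): ten int increments land exactly on 10, while ten additions of 0.1 in double drift off 1.0:

```java
public class DriftDemo {
    public static void main(String[] args) {
        int count = 0;
        double sum = 0.0;
        for (int i = 0; i < 10; i++) {
            count += 1;  // exact: int arithmetic never drifts
            sum += 0.1;  // each addition rounds, and the errors accumulate
        }
        System.out.println(count);      // prints 10
        System.out.println(sum);        // prints 0.9999999999999999
        System.out.println(sum == 1.0); // prints false
    }
}
```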
I want to make a program that uses the Babylonian algorithm to compute the square root of a positive number n as follows :
Make a guess at the answer (you can pick n/2 as your initial guess).
Compute r = n / guess.
Set guess = (guess + r) / 2
Go back to step 2 until the last two guess values are within 1% of each other.
Now, here's my code.
double n = input.nextDouble();
double guess = n / 2;
while ( /* ??? */ )
{
    double r = n / guess;
    guess = (guess + r) / 2;
    System.out.println(guess);
}
How can I get the loop to stop iterating when guess is within 1% of the previous guess? I don't understand the part "guess is within 1% of the previous guess".
This should do the trick:
double n = input.nextDouble();
double guess = n / 2;
double pctDiff = Double.MAX_VALUE;
double lastGuess = guess;
while (Math.abs(pctDiff) >= 0.01)
{
    double r = n / guess;
    guess = (guess + r) / 2;
    pctDiff = (guess - lastGuess) / lastGuess; // normally, multiply by 100, but not necessary here
    lastGuess = guess;
    System.out.println(guess);
}
Store the previous and the current guesses in separate variables. Then simply have an if statement to check how far your currentGuess is from previousGuess.
The algorithm listed in the book Algorithms for computing the square root, with explanations added, is:
public static double sqrt(double c)
{
    if (c < 0) return Double.NaN; // square roots of negatives are complex numbers
    double err = 1e-15;           // decreasing this gives more accuracy and more
                                  // iterations to converge (quadratic convergence)
                                  // toward the actual value
    double t = c;                 // initial positive value
    while (Math.abs(t - c/t) > err * t)
        t = (c/t + t) / 2.0;
    return t;
}
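A quick check of the book's method against Math.sqrt (the method is re-included verbatim so the snippet is self-contained; the sample inputs are my own):

```java
public class SqrtCheck {
    public static double sqrt(double c) {
        if (c < 0) return Double.NaN;
        double err = 1e-15;
        double t = c;                        // initial guess
        while (Math.abs(t - c / t) > err * t)
            t = (c / t + t) / 2.0;           // Newton/Babylonian step
        return t;
    }

    public static void main(String[] args) {
        for (double c : new double[] {2, 10, 12345.678}) {
            System.out.println(sqrt(c) + " vs " + Math.sqrt(c));
        }
    }
}
```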
I am new to Java, and my program is likely nowhere near as efficient as it could be, but here it is:
public class Compute {
    public static void main(String[] args) {
        for (double i = 10000; i <= 100000; i += 10000)
        {
            System.out.println("The value for the series when i = " + i + " is " + e(i));
        }
    }

    public static double e(double input) {
        double e = 0;
        for (double i = 0; i <= input; i++)
        {
            e += 1 / factorial(input);
        }
        return e;
    }

    public static double factorial(double input) {
        double factorial = 1;
        for (int i = 1; i <= input; i++)
        {
            factorial *= i;
        }
        return factorial;
    }
}
I believe this calculates the value e for i = 10000, 20000, ..., and 100000,
where e = 1 + (1/1!) + (1/2!) + ... + (1/i!).
It takes about 47 seconds to do so, but I believe it works.
My issue is, for every i, the result is always 0.0.
I believe this is because whenever the method factorial is called, the return value is too big to be stored, which somehow causes a problem.
What can I do to store the value returned by the method factorial?
Although you can calculate arbitrary precision results with BigDecimal, there is no need to calculate up to 100000! for the series expansion of e. Consider that the 20th term in the series (1/20!) has a magnitude of about 10^-19, so its contribution to the overall total is insignificant.
In other words, the contribution of any terms after the 20th would only change digits after the 19th decimal place.
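A tiny illustration of this (my own snippet): the 21st term, 1/21!, is already below the resolution of a double near 1.0, so adding it changes nothing.

```java
public class TermSize {
    public static void main(String[] args) {
        double factorial = 1.0;
        for (int k = 1; k <= 21; k++) {
            factorial *= k;
        }
        double term = 1.0 / factorial;         // 1/21!, roughly 2e-20
        System.out.println(term);
        System.out.println(1.0 + term == 1.0); // prints true: the term vanishes
    }
}
```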
You should probably use java.math.BigInteger to store the factorial.
Change this
e += 1 / factorial(input);
to
e += 1 / factorial(i);
There is lots to do to speed up the code. Think about (i+1)! vs i!; don't recalculate the whole factorial every time.
Also stop calculating when the answer will change by less than the required precision, as Jim said.