// Lookup table for sigmoid(x) at x = 0.00, 0.01, ..., 10.00
private static double[] sigtab = new double[1001];

static {
    for (int i = 0; i < 1001; i++) {
        double ifloat = i;
        ifloat /= 100;
        sigtab[i] = 1.0 / (1.0 + Math.exp(-ifloat));
    }
}

public static double fast_sigmoid(double x) {
    if (x <= -10)
        return 0.0;
    else if (x >= 10)
        return 1.0;
    else {
        // scale |x| to table units, then interpolate linearly between neighbours
        double normx = Math.abs(x * 100);
        int i = (int) normx;
        double lookup = sigtab[i] + (sigtab[i + 1] - sigtab[i]) * (normx - Math.floor(normx));
        if (x > 0)
            return lookup;
        else // sigmoid(-x) == 1 - sigmoid(x)
            return (1 - lookup);
    }
}
Anyone know why this "fast sigmoid" actually runs slower than the exact version using Math.exp?
You should profile your code, but I'll bet it's the call to Math.floor taking around half your CPU cycles: it delegates to the native method StrictMath.floor(double), incurring JNI overhead on every call.
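Incidentally, the Math.floor call in that interpolation is avoidable anyway: normx is non-negative at that point, so the (int) cast already truncates to the same integer, and normx - i gives the same fractional part. A minimal sketch (it assumes the sigtab table from the question; fastSigmoidNoFloor is just an illustrative name):

public static double fastSigmoidNoFloor(double x) {
    if (x <= -10) return 0.0;
    if (x >= 10) return 1.0;
    double normx = Math.abs(x) * 100;
    int i = (int) normx;             // truncation == floor here, since normx >= 0
    double frac = normx - i;         // fractional part without Math.floor
    double lookup = sigtab[i] + (sigtab[i + 1] - sigtab[i]) * frac;
    return x > 0 ? lookup : 1 - lookup;
}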
It is possible to compute less accurate versions of sigmoid-like functions faster than the exact Math.exp-based implementation. Here's an example for tanh, which should be easy to transform to your function (it looks like the logistic sigmoid, expit(x)).
Two tricks used here are often useful in LUT-based approximations:
Simulate rounding by adding a large constant: the FPU has too few mantissa bits to represent the sum exactly, so x gets rounded to the table's grid.
Make your table step a power of 2 (it means one less multiply per call).
public static float fastTanH(float x) {
    if (x < 0) return -fastTanH(-x);
    if (x > 8) return 1f;
    float xp = TANH_FRAC_BIAS + x;                    // bias trick: rounds x to 1/64 steps
    short ind = (short) Float.floatToRawIntBits(xp);  // low mantissa bits are the table index
    float tanha = TANH_TAB[ind];
    float b = xp - TANH_FRAC_BIAS;                    // the rounded value actually looked up
    x -= b;                                           // residual
    return tanha + x * (1f - tanha * tanha);          // first-order correction: tanh' = 1 - tanh^2
}

private static final int TANH_FRAC_EXP = 6;           // LUT precision == 2 ** -6 == 1/64
private static final int TANH_LUT_SIZE = (1 << TANH_FRAC_EXP) * 8 + 1;
private static final float TANH_FRAC_BIAS =
        Float.intBitsToFloat((0x96 - TANH_FRAC_EXP) << 23);
private static float[] TANH_TAB = new float[TANH_LUT_SIZE];

static {
    for (int i = 0; i < TANH_LUT_SIZE; ++i) {
        TANH_TAB[i] = (float) Math.tanh(i / 64.0);
    }
}
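To get from this back to the question's function, one can use the identity 1/(1 + exp(-x)) = (1 + tanh(x/2)) / 2. A minimal wrapper sketch (the name fastSigmoid is mine, not part of the original answer):

// Logistic sigmoid built on the fastTanH approximation above,
// using 1/(1 + exp(-x)) == (1 + tanh(x/2)) / 2.
public static float fastSigmoid(float x) {
    return 0.5f * (1f + fastTanH(0.5f * x));
}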
Do you mean that looking up an array of double elements and performing some arithmetic should be faster than calculating it on the spot?
Although the CPU only has basic operations, it can handle an exponentiation pretty easily; I'd say in fewer than 5 basic operations.
What you are doing here is somewhat complex and requires actually fetching elements from memory. 64 bits * 1001 entries surely fits in your cache, but cache access time certainly does not match register access time.
This case does not surprise me in the least.
Related
I had a problem where I had to calculate the sum of large powers of the numbers in an array and return the result. For example, for arr = [10, 12, 34, 56] the output should be 10^1 + 12^2 + 34^3 + 56^4. The output could be very large, so we were asked to take it mod 10^10+11 and return that. I did it easily in Python, but in Java I initially used BigInteger and got TLE (time limit exceeded) for half the test cases, so I thought of using Long and calculating the power with modular exponentiation, but then I got wrong output, to be precise all negative values, as it obviously exceeded the limit.
Here is my code using Long and modular exponentiation.
static long power(long x, long y, long p)
{
    long res = 1;      // Initialize result
    x = x % p;         // Update x if it is more than or equal to p

    while (y > 0)
    {
        // If y is odd, multiply x with result
        if ((y & 1) == 1)
            res = (res * x) % p;

        // y must be even now
        y = y >> 1;    // y = y/2
        x = (x * x) % p;
    }
    return res;
}

static long solve(int[] a) {
    // Write your code here
    Long[] arr = new Long[a.length];
    for (int i = 0; i < a.length; i++) {
        arr[i] = setBits(new Long(a[i]));
    }
    Long Mod = new Long("10000000011");
    Long c = new Long(0);
    for (int i = 0; i < arr.length; i++) {
        c += power(arr[i], new Long(i + 1), Mod) % Mod;
    }
    return c % Mod;
}

static long setBits(Long a) {
    Long count = new Long(0);
    while (a > 0) {
        a &= (a - 1);
        count++;
    }
    return count;
}
Then I also tried binary exponentiation, but nothing worked for me. How do I achieve this without using BigInteger, and as easily as I did it in Python?
You have added an extra zero to the value of the mod; it should be 1000000011. With the extra zero the modulus is around 10^10, so intermediate products such as res * x can reach roughly 10^20, which overflows long (maximum about 9.2 * 10^18) and produces the negative values you are seeing. With 1000000011 the products stay below about 10^18 and fit in a long.
Hope this will solve it.
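For what it's worth, a minimal sketch of solve with the corrected modulus and plain long arithmetic throughout (it keeps the poster's set-bits step and reuses the poster's power method; the boxed Long objects are unnecessary):

// Sketch: same structure as the question's solve/setBits, but with the
// corrected modulus and primitive long values only. With this modulus every
// product inside power() stays below ~10^18, within the range of long.
static final long MOD = 1_000_000_011L;

static long solveFixed(int[] a) {
    long c = 0;
    for (int i = 0; i < a.length; i++) {
        long bits = Integer.bitCount(a[i]);           // same as the question's setBits
        c = (c + power(bits, i + 1, MOD)) % MOD;      // power() as defined in the question
    }
    return c;
}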
I was trying to create a program that finds the power of a real number. The problem is that the exponent is a decimal, less than 1 but not negative.
Suppose we have to compute 5^0.76. What I tried was to write 0.76 as 76/100, so it becomes 5^(76/100), i.e. the 100th root of 5 raised to the 76th power.
Here is the code if you want to see what I did:
public class Struct23 {

    public static void main(String[] args) {
        double x = 45;
        int c = 0;
        StringBuffer y = new StringBuffer("0.23");

        // checking whether the number is valid or not
        for (int i = 0; i < y.length(); i++) {
            String subs = y.substring(i, i + 1);
            if (subs.equals(".")) {
                c = c + 1;
            }
        }
        if (c > 1) {
            System.out.println("the input is wrong");
        } else {
            String nep = y.delete(0, 2).toString();
            double store = Double.parseDouble(nep);
            int length = nep.length();
            double rootnum = Math.pow(10, length);
            double skit = power(x, store, rootnum);
            System.out.println(skit);
        }
    }

    static double power(double x, double store, double rootnum) {
        // to find the nth root of number
        double number = Math.pow(x, 1 / rootnum);
        double power = Math.pow(number, store);
        return power;
    }
}
The answer comes out, but the main problem is that I cannot use the pow function to do that, and I also can't use the exp() and log() functions. I can only use +, -, * and /.
Please suggest your ideas. Thanks in advance.
def newtons_sqrt(initial_guess, x, threshold=0.0001):
    guess = initial_guess
    new_guess = (guess + float(x) / guess) / 2
    while abs(guess - new_guess) > threshold:
        guess = new_guess
        new_guess = (guess + float(x) / guess) / 2
    return new_guess

def power(base, exp, threshold=0.00001):
    if exp >= 1:  # first go fast!
        temp = power(base, exp / 2)
        return temp * temp
    else:  # now deal with the fractional part
        low = 0
        high = 1.0
        sqr = newtons_sqrt(base / 2, base)
        acc = sqr
        mid = high / 2
        while abs(mid - exp) > threshold:
            sqr = newtons_sqrt(sqr / 2.0, sqr)
            if mid <= exp:
                low = mid
                acc *= sqr
            else:
                high = mid
                acc *= (1 / sqr)
            mid = (low + high) / 2
        return acc

print newtons_sqrt(1, 8)
print 8 ** 0.5
print power(5, 0.76)
print 5 ** 0.76
I reappropriated most of this answer from https://stackoverflow.com/a/7710097/541038
You could also expand newtons_sqrt into a newtons_nth_root (see the sketch below), but then you have to figure out that 0.76 == 76/100 (which I'm sure isn't too hard really).
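For instance, here is a Java sketch of that idea, computing an integer nth root with Newton's method using only +, -, * and / (the names and the starting guess are mine, not from the linked answer):

// Newton's method for the nth root of a (assumed a > 0, n >= 1):
//   g <- ((n - 1) * g + a / g^(n - 1)) / n
// g^(n - 1) is built up by repeated multiplication, so only +, -, * and /
// are used. 5^0.76 can then be computed as nthRoot(5, 100, 1e-9) raised to
// the 76th power by repeated multiplication.
static double nthRoot(double a, int n, double threshold) {
    double guess = 1 + (a - 1) / n;          // rough starting point
    while (true) {
        double powNm1 = 1.0;                 // guess^(n - 1)
        for (int i = 0; i < n - 1; i++) {
            powNm1 *= guess;
        }
        double next = ((n - 1) * guess + a / powNm1) / n;
        if (next - guess < threshold && guess - next < threshold) {
            return next;                     // |next - guess| < threshold, without Math.abs
        }
        guess = next;
    }
}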
You can convert your number to its complex form and then use de Moivre's formula to compute the nth root of your number using your allowed operations.
In my computer architecture class I just learned that running an algebraic expression involving multiplication through a multiplication circuit can be more costly than running it through an addition circuit, if the number of required multiplications is less than 3, e.g. 3x. If I'm doing this type of computation a few billion times, does it pay off to write it as x + x + x, or does the JIT optimizer take care of this?
I wouldn't expect there to be a huge difference between writing it one way or the other.
The compiler will probably take care of making all of those equivalent.
You can try each method and measure how long it takes; that could give you a good hint for answering your own question.
Here's some code that does the same calculations 10 million times using different approaches: x + x + x, 3*x, and a bit shift followed by a subtraction, (x << 2) - x, which is 4x - x.
They all seem to take approximately the same amount of time, as measured by System.nanoTime.
Sample output for one run (times in nanoseconds):
sum : 594599531
mult : 568783654
shift : 564081012
You can also take a look at this question, which discusses how compiler optimizations can probably handle these and more complex cases: Is shifting bits faster than multiplying and dividing in Java? .NET?
Code:
import java.util.Random;

public class TestOptimization {

    public static void main(String args[]) {
        Random rn = new Random();
        long l1 = 0, l2 = 0, l3 = 0;

        long nano1 = System.nanoTime();
        for (int i = 1; i < 10000000; i++) {
            int num = rn.nextInt(100);
            l1 += sum(num);
        }
        long nano2 = System.nanoTime();
        for (int i = 1; i < 10000000; i++) {
            int num = rn.nextInt(100);
            l2 += mult(num);
        }
        long nano3 = System.nanoTime();
        for (int i = 1; i < 10000000; i++) {
            int num = rn.nextInt(100);
            l3 += shift(num);
        }
        long nano4 = System.nanoTime();

        System.out.println(l1);
        System.out.println(l2);
        System.out.println(l3);
        System.out.println("sum : " + (nano2 - nano1));
        System.out.println("mult : " + (nano3 - nano2));
        System.out.println("shift : " + (nano4 - nano3));
    }

    private static long sum(long x) {
        return x + x + x;
    }

    private static long mult(long x) {
        return 3 * x;
    }

    private static long shift(long x) {
        return (x << 2) - x;
    }
}
I got curious about a rounding algorithm, because in CS we had to emulate an HP35 without using the Math library. We didn't include a rounding algorithm in our final build, but I wanted to do it anyway.
public class Round {

    public static void main(String[] args) {
        /*
         * Rounds by using modulus subtraction
         */
        double a = 1.123599;
        // Should you port this to another method, you can take this as a parameter
        int b = 5;

        double accuracy = Math.pow(10, -b);
        double remainder = a % accuracy;
        if (remainder >= 5 * accuracy / 10) // Divide by ten is important because remainder is smaller than accuracy
            a += accuracy;
        a -= remainder;

        /*
         * Removes round off error done by modulus
         */
        String string = Double.toString(a);
        int index = string.indexOf('.') + b;
        string = string.substring(0, index);
        a = Double.parseDouble(string);

        System.out.println(a);
    }
}
Is this a good algorithm, or are there any better ones? I don't care about the ones defined in the Java API, I just wanted to know how it was done.
[EDIT]
Here's the code I came up with after looking over EJP's answer
public class Round {

    public static void main(String[] args) {
        double a = -1.1234599;
        int b = 5;

        boolean negative = a < 0;
        if (negative) a = -a;

        String string = Double.toString(a);
        char array[] = string.toCharArray();
        int index = string.indexOf('.') + b;
        int i = index;
        int value;
        if (Character.getNumericValue(array[index + 1]) >= 5) {
            for (; i > 0; i--) {
                value = Character.getNumericValue(array[i]);
                if (value != -1) {
                    ++value;
                    String temp = Integer.toString(value);
                    array[i] = temp.charAt(temp.length() - 1);
                    if (value <= 9) break;
                }
            }
        }

        string = "";
        for (int j = 0; j < index + 1; j++) {
            string += array[j];
        }
        a = Double.parseDouble(string);
        if (negative) a = -a;

        System.out.println(a);
    }
}
Floating-point numbers don't have decimal places. They have binary places, and the two are not commensurable. Any attempt to modify a floating-point variable to have a specific number of decimal places is doomed to failure.
You have to do the rounding to a specified number of decimal places after conversion to a decimal radix.
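As one illustration of that point (not the only way to do it), the convert-then-round step can be done with BigDecimal, which works on a decimal representation of the value:

import java.math.BigDecimal;
import java.math.RoundingMode;

// Sketch: round to a given number of decimal places by first converting the
// double to a decimal representation (BigDecimal), then rounding there.
public class DecimalRound {
    public static void main(String[] args) {
        double a = 1.123599;
        int places = 5;
        BigDecimal rounded = BigDecimal.valueOf(a).setScale(places, RoundingMode.HALF_UP);
        System.out.println(rounded);                 // 1.12360
        System.out.println(rounded.doubleValue());   // back to a double: 1.1236
    }
}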
There are different ways to round numbers. The RoundingMode documentation for Java (introduced in 1.5) should give you a brief introduction to the different methods people use.
I know you said you don't have access to the Math functions, but the simplest rounding you can do is:
public static double round(double d)
{
    return Math.floor(d + 0.5);
}
If you don't want to use any Math functions, you could try something like this:
public static double round(double d)
{
    return (long) (d + 0.5);
}
Those two behave differently for negative numbers: the cast truncates toward zero while Math.floor rounds toward negative infinity, so for d = -2.7 the first version gives -3.0 and the second gives -2.
I'm trying to calculate the mean difference of a set of data. I have two (supposedly equivalent) formulas which calculate this, with one being more efficient (O(n)) than the other (O(n^2)).
The problem is that while the inefficient formula gives correct output, the efficient one does not. Just by looking at both formulas I had a hunch that they weren't equivalent, but I wrote it off because the derivation was made by a statistician in a scientific journal. So I'm assuming the problem is my translation. Can anyone help me translate the efficient formula properly?
Inefficient formula (the pairwise definition implemented below):
MD = ( Σ_i Σ_j |x_i - x_j| ) / ( n(n - 1) ), summing over all i != j
Inefficient formula translation (Java):
public static double calculateMeanDifference(ArrayList<Integer> valuesArrayList)
{
    int valuesArrayListSize = valuesArrayList.size();
    int sum = 0;
    for (int i = 0; i < valuesArrayListSize; i++)
    {
        for (int j = 0; j < valuesArrayListSize; j++)
            sum += (i != j ? Math.abs(valuesArrayList.get(i) - valuesArrayList.get(j)) : 0);
    }
    return new Double((sum * 1.0) / (valuesArrayListSize * (valuesArrayListSize - 1)));
}
Efficient derived formula:
MD = ( 2 * Σ_i (i * x_(i)) - n(n + 1) * x̄ ) / ( n(n - 1) / 2 )
where (sorry, don't know how to use MathML on here):
x_(i) = the ith order statistic of the data set
x̄ = the mean of the data set
Efficient derived formula translation (Java):
public static double calculateMean(ArrayList<Integer> valuesArrayList)
{
    double sum = 0;
    int valuesArrayListSize = valuesArrayList.size();
    for (int i = 0; i < valuesArrayListSize; i++)
        sum += valuesArrayList.get(i);
    return sum / (valuesArrayListSize * 1.0);
}

public static double calculateMeanDifference(ArrayList<Integer> valuesArrayList)
{
    double sum = 0;
    double mean = calculateMean(valuesArrayList);
    int size = valuesArrayList.size();
    double rightHandTerm = mean * size * (size + 1);
    double denominator = (size * (size - 1)) / 2.0;
    Collections.sort(valuesArrayList);
    for (int i = 0; i < size; i++)
        sum += (i * valuesArrayList.get(i) - rightHandTerm);
    double meanDifference = (2 * sum) / denominator;
    return meanDifference;
}
My data set consists of a collection of integers each having a value bounded by the set [0,5].
Randomly generating such sets and using the two functions on them gives different results. The inefficient one seems to be the one producing results in line with what is being measured: the absolute average difference between any two values in the set.
Can anyone tell me what's wrong with my translation?
EDIT: I created a simpler implementation that is O(N), provided all your data has values limited to a relatively small set. It sticks to the methodology of the first method and thus gives identical results to it (unlike the derived formula). If it fits your use case, I suggest using this instead of the derived efficient formula, especially since the latter seems to give negative values when N is small.
Efficient, non-derived translation (Java):
public static double calculateMeanDifference3(ArrayList<Integer> valuesArrayList)
{
    HashMap<Integer, Double> valueCountsHashMap = new HashMap<Integer, Double>();
    double size = valuesArrayList.size();
    for (int i = 0; i < size; i++)
    {
        int currentValue = valuesArrayList.get(i);
        if (!valueCountsHashMap.containsKey(currentValue))
            valueCountsHashMap.put(currentValue, new Double(1));
        else
            valueCountsHashMap.put(currentValue, valueCountsHashMap.get(currentValue) + 1);
    }

    double sum = 0;
    for (Map.Entry<Integer, Double> valueCountKeyValuePair : valueCountsHashMap.entrySet())
    {
        int currentValue = valueCountKeyValuePair.getKey();
        Double currentCount = valueCountKeyValuePair.getValue();
        for (Map.Entry<Integer, Double> valueCountKeyValuePair1 : valueCountsHashMap.entrySet())
        {
            int loopValue = valueCountKeyValuePair1.getKey();
            Double loopCount = valueCountKeyValuePair1.getValue();
            sum += (currentValue != loopValue ? Math.abs(currentValue - loopValue) * loopCount * currentCount : 0);
        }
    }
    return new Double(sum / (size * (size - 1)));
}
Your interpretation of sum += (i * valuesArrayList.get(i) - rightHandTerm); is wrong: it should be sum += i * valuesArrayList.get(i);, and then, after your for loop, double meanDifference = ((2 * sum) - rightHandTerm) / denominator;
Both equations yield about the same value, but they are not equal. Still, this should help you a little.
You subtract rightHandTerm on each iteration, so it ends up being subtracted N times instead of once.
The big Sigma in the numerator covers only the i * x_i term, not the right-hand term.
One more note: mean * size == the sum of the values, so you don't have to divide the sum by N and then multiply it back.
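Put together, a sketch of the corrected translation along the lines of this answer (same variable names as the question, reusing the question's calculateMean):

// Sketch: the derived formula with the fix described above. rightHandTerm is
// subtracted once, after the loop, rather than on every iteration.
public static double calculateMeanDifferenceFixed(ArrayList<Integer> valuesArrayList)
{
    double mean = calculateMean(valuesArrayList);
    int size = valuesArrayList.size();
    double rightHandTerm = mean * size * (size + 1);
    double denominator = (size * (size - 1)) / 2.0;
    Collections.sort(valuesArrayList);

    double sum = 0;
    for (int i = 0; i < size; i++)
        sum += i * valuesArrayList.get(i);     // the big Sigma covers only i * x_i

    return ((2 * sum) - rightHandTerm) / denominator;
}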