I need to represent the unit of percent per second using JScience.org's JSR-275 units and measures implementation. I am trying to do the following:
Unit<Dimensionless> PERCENT_PER_SECOND = NonSI.PERCENT.divide(SI.SECOND).asType(Dimensionless.class);
but I am getting a ClassCastException when I try to do that.
The following works, but I'm not sure if there's a better way:
public interface PercentOverTime extends Quantity {}
public static Unit<PercentOverTime> PERCENT_PER_SECOND = new BaseUnit<PercentOverTime>("%/s");
Any thoughts? The closest I could find to this is the question on Cooking Measurements (which is how I saw how to define your own units).
I wrote up this code sample to test out the math here:
public void testUnit() {
    // define two points on a function from t -> %
    // the rate of change between these two points
    // should have unit %/t
    Measure<Double, Dimensionless> p0 = Measure.valueOf(50.0, NonSI.PERCENT);
    Measure<Double, Dimensionless> p1 = Measure.valueOf(20.0, NonSI.PERCENT);
    Measure<Double, Duration> timeDifference = Measure.valueOf(10.0, SI.SECOND);
    // JSR-275 has no Amount, so we have to convert and do the math ourselves
    // these doubles are percents
    double p0Raw = p0.doubleValue(NonSI.PERCENT);
    double p1Raw = p1.doubleValue(NonSI.PERCENT);
    // this duration is in seconds
    double timeDifferenceRaw = timeDifference.doubleValue(SI.SECOND);
    // this is the slope of the secant between the two points
    // so it should be the %/s we want
    double rateSecant = (p1Raw - p0Raw) / timeDifferenceRaw;
    // let's see what we get
    Measure<Double, ?> answer = Measure.valueOf(rateSecant,
            NonSI.PERCENT.divide(SI.SECOND));
    System.out.println(answer);
}
If your original function has time as the independent variable (e.g. in seconds) and a ratio as the dependent variable (e.g. a percent), then the derivative of this function with respect to time will still have time as the independent variable, but will have 'ratio per time' as the dependent variable.
Yes, ratios are dimensionless, so this is a little odd, but you could imagine a graph of the day-over-day percent change in a stock price, and then a graph of the day-over-day change in that percent change.
So what does this print out?
-3.0 %/s
Which is what we expect the rate of change to be for a change from 50 to 20 percent over 10 seconds.
So your unit construction should look like:
Unit<?> magicUnit = NonSI.PERCENT.divide(SI.SECOND);
Dimension magicDimension = Dimension.NONE.divide(Dimension.TIME);
System.out.println(magicUnit + " measures " + magicDimension + " ("
+ magicUnit.getDimension() + ")");
Indeed this prints %/s measures 1/[T] (1/[T]), as we expect.
So we have a Unit and Dimension and can make Measures. What is the Quantity we are measuring? The docs say this about Quantity:
Distinct quantities have usually different physical dimensions; although it is not required nor necessary, for example Torque and Energy have same dimension but are of different nature (vector for torque, scalar for energy).
So while Frequency would seem to be the correct Quantity, it doesn't really express the semantic quantity we seem to be discussing.
In closing, your first line of code doesn't work because in the included model 1/[T] measures the quantity Frequency, not the quantity Dimensionless. So if you don't want to make your own Quantity, use a wildcard Unit<?>. The Dimension you are looking for is None/Time, or %/second if you want the correct scalar multipliers in there. Finally, it's up to you whether you want to make your own Quantity, but it might be worthwhile if you're using this in a lot of places.
It would also be worthwhile to check out the latest developments in the JScience space, since it seems they decided an Amount class (with add, subtract, multiply, divide, pow, etc. methods) was needed. It would be really easy to do all this dimensional analysis with Amount: just take a percent Amount minus a percent Amount, divide by a seconds Amount, and it should work out the units for you.
It has units s^-1, or Hz (SI.HERTZ in JScience speak).
Or Unit<Frequency>.
Percent isn't a unit, but a scalar - so percent per second is only a scalar value per unit time, which doesn't make sense. It's like saying "3 per second". 3 what?
If you incorporate the unit of what you are measuring per unit time that will get you the correct unit.
Related
I have tried a few different ways to write this problem out, and I keep running into issues. I have 5 categories that need to be averaged. Here is a sample of one part of the code:
public double getAvg() {
    String[] Avg = new String[]{Q1, Q2, Q3, Q4};
    int sum = 0;
    for (String input : Avg) {
        sum += Integer.parseInt(input);
    }
    return sum * (10);
}
So there are 4 other similar sections like this. Anyway, I need to take these 5 results and have them post a total using this equation:
(((part1)*(.35))+((part2)*(.15))+((part3)*(.10))+((part4)*(.20))+((part5)*(.20))) = 100%
So, I need to clarify. I am writing a JavaBean that gets an overall average of statistics input by the user. There are 5 sections that need to be evaluated to get an overall average of the given statistics. I have written three convenience methods to get the first three averages (one of these is shown in the example code above). The last 2 values are input by the user.
So what I need is to write a convenience method in the bean that gets the averages in the manner shown above and takes the two additional values input by the user and then spits out an overall average using this rubric:
Value A: 35% of overall average
Value B: 15% of overall average
Value C: 10% of overall average
Value D: 20% of overall average
Value E: 20% of overall average.
The trouble I am having is that I don't know how to factor the results of the three convenience methods into the overall-average convenience method.
Any help is appreciated!
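For concreteness, here is a minimal sketch of the kind of weighted-average convenience method being described; the class and method names here are made up for illustration, and the weights are the rubric values above:

```java
public class OverallAverage {
    // Weights from the rubric: A=35%, B=15%, C=10%, D=20%, E=20%
    static final double[] WEIGHTS = {0.35, 0.15, 0.10, 0.20, 0.20};

    // Combines the three computed section averages and the two
    // user-entered values into one weighted overall average.
    public static double getOverallAverage(double a, double b, double c,
                                           double d, double e) {
        double[] parts = {a, b, c, d, e};
        double total = 0.0;
        for (int i = 0; i < parts.length; i++) {
            total += parts[i] * WEIGHTS[i];
        }
        return total;
    }

    public static void main(String[] args) {
        // If every section scores 100, the overall average is 100
        // (up to floating-point rounding).
        System.out.println(getOverallAverage(100, 100, 100, 100, 100));
    }
}
```

In the actual bean, the first three arguments would come from the existing convenience methods (e.g. `getAvg()`) and the last two from the user's input.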
To give some context: I have been writing a basic Perlin noise implementation in Java, and when it came to implementing seeding, I encountered a bug that I couldn't explain.
In order to generate the same random weight vectors each time for the same seed (no matter which set of coordinates' noise level is queried, or in what order), I generated a new seed (newSeed) based on a combination of the original seed and the coordinates of the weight vector, and used it as the seed for the randomization of the weight vector by running:
rnd.setSeed(newSeed);
weight = new NVector(2);
weight.setElement(0, rnd.nextDouble() * 2 - 1);
weight.setElement(1, rnd.nextDouble() * 2 - 1);
weight.normalize();
Where NVector is a self-made class for vector mathematics.
However, when run, the program generated very bad noise.
After some digging, I found that the first element of each vector was very similar (and so was the first nextDouble() call after each setSeed() call), resulting in the first element of every vector in the vector grid being similar.
This can be proved by running:
Random ran = new Random();
long seed = Long.valueOf(args[0]);
int loops = Integer.valueOf(args[1]);
double avgFirst = 0.0, avgSecond = 0.0, avgThird = 0.0;
double lastfirst = 0.0, lastSecond = 0.0, lastThird = 0.0;
for (int i = 0; i < loops; i++)
{
    ran.setSeed(seed + i);
    double first = ran.nextDouble();
    double second = ran.nextDouble();
    double third = ran.nextDouble();
    avgFirst += Math.abs(first - lastfirst);
    avgSecond += Math.abs(second - lastSecond);
    avgThird += Math.abs(third - lastThird);
    lastfirst = first;
    lastSecond = second;
    lastThird = third;
}
System.out.println("Average first difference.: " + avgFirst/loops);
System.out.println("Average second Difference: " + avgSecond/loops);
System.out.println("Average third Difference.: " + avgSecond/loops);
This finds the average difference between the first, second, and third random numbers generated after setSeed() has been called, over a range of seeds specified by the program's arguments. For me it returned these results:
C:\java Test 462454356345 10000
Average first difference.: 7.44638117976783E-4
Average second Difference: 0.34131692827329957
Average third Difference.: 0.34131692827329957
C:\java Test 46245445 10000
Average first difference.: 0.0017196011123287126
Average second Difference: 0.3416750057190849
Average third Difference.: 0.3416750057190849
C:\java Test 1 10000
Average first difference.: 0.0021601598225344998
Average second Difference: 0.3409914232342002
Average third Difference.: 0.3409914232342002
Here you can see that the first average difference is significantly smaller than the rest, and it seemingly decreases for higher seeds.
As such, by adding a simple dummy call to nextDouble() before setting the weight vector, I was able to fix my Perlin noise implementation:
rnd.setSeed(newSeed);
rnd.nextDouble();
weight.setElement(0, rnd.nextDouble() * 2 - 1);
weight.setElement(1, rnd.nextDouble() * 2 - 1);
Resulting in much better noise.
I would like to know why this poor variation in the first call to nextDouble() occurs (I have not checked other types of randomness), and/or to alert people to this issue.
Of course, it could just be an implementation error on my part, in which case I would be grateful if it were pointed out to me.
The Random class is designed to be a low overhead source of pseudo-random numbers. But the consequence of the "low overhead" implementation is that the number stream has properties that are a long way off perfect ... from a statistical perspective. You have encountered one of the imperfections. Random is documented as being a Linear Congruential generator, and the properties of such generators are well known.
There are a variety of ways of dealing with this. For example, if you are careful you can hide some of the most obvious "poor" characteristics. (But you would be advised to run some statistical tests. You can't see non-randomness in the noise added to your second image, but it could still be there.)
Alternatively, if you want pseudo-random numbers that have guaranteed good statistical properties, then you should be using SecureRandom instead of Random. It has significantly higher overheads, but you can be assured that many "smart people" will have spent a lot of time on the design, testing and analysis of the algorithms.
Finally, it is relatively simple to create a subclass of Random that uses an alternative algorithm for generating the numbers; see link. The problem is that you have to select (or design) and implement an appropriate algorithm.
Calling this an "issue" is debatable. It is a well known and understood property of LCGs, and use of LCGs was a concious engineering choice. People want low overhead PRNGs, but low overhead PRNGs have poor properties. TANSTAAFL.
Certainly, this is not something that Oracle would contemplate changing in Random. Indeed, the reasons for not changing are stated clearly in the javadoc for the Random class.
"In order to guarantee this property, particular algorithms are specified for the class Random. Java implementations must use all the algorithms shown here for the class Random, for the sake of absolute portability of Java code."
This is a known issue: similar seeds will generate similar first few values. Random wasn't really designed to be used this way. You are supposed to create an instance with a good seed and then generate a moderately sized sequence of "random" numbers.
Your current solution is OK, as long as it looks good and is fast enough. You can also consider using hashing/mixing functions, which were designed to solve exactly your problem (and then, optionally, using their output as the seed). For example, see: Parametric Random Function For 2D Noise Generation.
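As a sketch of that idea, here is a stateless 64-bit mixing function (the published SplitMix64 finalizer constants) applied to a seed-plus-coordinates combination. The way the coordinates are folded in here is just an illustrative choice, not a prescription:

```java
public class SeedMixer {
    // SplitMix64 finalizer: thoroughly mixes all 64 bits, so
    // consecutive inputs produce unrelated outputs.
    public static long mix(long z) {
        z = (z ^ (z >>> 30)) * 0xBF58476D1CE4E5B9L;
        z = (z ^ (z >>> 27)) * 0x94D049BB133111EBL;
        return z ^ (z >>> 31);
    }

    // Example: derive a per-cell seed from a base seed and grid coordinates,
    // suitable for passing to rnd.setSeed(...).
    public static long cellSeed(long baseSeed, int x, int y) {
        return mix(baseSeed ^ mix(x * 0x9E3779B97F4A7C15L + y));
    }

    public static void main(String[] args) {
        // Nearby coordinates yield very different seeds.
        System.out.println(Long.toHexString(cellSeed(1L, 0, 0)));
        System.out.println(Long.toHexString(cellSeed(1L, 0, 1)));
    }
}
```

Since the finalizer is a bijection, distinct inputs always map to distinct, well-scattered seeds, which removes the correlation in the first nextDouble() without the dummy call.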
Move your setSeed out of the loop. Java's PRNG is a linear congruential generator, so seeding it with sequential values is guaranteed to give results that are correlated across iterations of the loop.
ADDENDUM
I dashed that off before running out the door to a meeting, and now have time to illustrate what I was saying above.
I've written a little Ruby script which implements Schrage's portable prime modulus multiplicative linear congruential generator. I instantiate two copies of the LCG, both seeded with a value of 1. However, in each iteration of the output loop I reseed the second one based on the loop index. Here's the code:
# Implementation of a Linear Congruential Generator (LCG)
class LCG
  attr_reader :state
  M = (1 << 31) - 1  # Modulus = 2**31 - 1, which is prime

  # constructor requires setting a seed value to use as initial state
  def initialize(seed)
    reseed(seed)
  end

  # users can explicitly reset the seed.
  def reseed(seed)
    @state = seed.to_i
  end

  # Schrage's portable prime modulus multiplicative LCG
  def value
    @state = 16807 * @state % M
    # return the generated integer value AND its U(0,1) mapping as an array
    [@state, @state.to_f / M]
  end
end

if __FILE__ == $0
  # create two instances of LCG, both initially seeded with 1
  mylcg1 = LCG.new(1)
  mylcg2 = LCG.new(1)
  puts " default progression manual reseeding"
  10.times do |n|
    mylcg2.reseed(1 + n)  # explicitly reseed 2nd LCG based on loop index
    printf "%d %11d %f %11d %f\n", n, *mylcg1.value, *mylcg2.value
  end
end
and here's the output it produces:
default progression manual reseeding
0 16807 0.000008 16807 0.000008
1 282475249 0.131538 33614 0.000016
2 1622650073 0.755605 50421 0.000023
3 984943658 0.458650 67228 0.000031
4 1144108930 0.532767 84035 0.000039
5 470211272 0.218959 100842 0.000047
6 101027544 0.047045 117649 0.000055
7 1457850878 0.678865 134456 0.000063
8 1458777923 0.679296 151263 0.000070
9 2007237709 0.934693 168070 0.000078
The columns are iteration number followed by the underlying integer generated by the LCG and the result when scaled to the range (0,1). The left set of columns show the natural progression of the LCG when allowed to proceed on its own, while the right set show what happens when you reseed on each iteration.
I am doing a two-face comparison using the OpenCV FaceRecognizer of LBP type. My question is: how do I calculate the prediction confidence as a percentage? Given the following code (javacv):
int n[] = new int[1];
double p[] = new double[1];
personRecognizer.predict(mat, n, p);
double confidence = p[0];
but the confidence is a double value; how should I convert it into a percentage (%) value of probability?
Is there an existing formula?
Sorry if I didn't state my question clearly. OK, here is the scenario:
I want to compare two face images and get the likeness of the two faces. For example, input John's picture and his classmate Tom's picture, and let's say the likeness is 30%; then input John's picture and his brother Jack's picture, and the likeness comes out at 80%.
These two likeness factors show that Jack looks more like his brother John than Tom does. So the likeness factor in percentage format is what I want: the higher the value, the more alike the two input faces are.
Currently I do this by computing the confidence value of the input using the OpenCV function FaceRecognizer.predict, but the confidence value actually stands for the distance between the inputs in their feature-vector space. So how can I scale the distance (confidence) into the likeness percentage format?
You are digging too deep with your question. Well, according to the OpenCV documentation:
predict()
Predicts a label and associated confidence (e.g. distance) for a given input image
I am not sure what you are looking for here, but the question is not easy to answer. Intra-person face variation (variation of the same person) is vast, and inter-person face variation (faces from different persons) can be more compact (e.g. when both faces are frontal while the second intra-person facial image is a profile), so this is a whole topic that expects an answer of its own.
Probably you should have a ground truth (i.e. some faces with labels already known) and deduce from this set the percentage you want by associating the distances with the labels. Though this is also often inaccurate, as distance does not coincide with our perception of similarity (as mentioned before, inter-person faces can vary a lot).
Edit:
First of all, there is no universal human perception of face similarity. On the other hand, most people would recognize a face that belongs to the same person in various poses and postures. "Most" is the important word here: as you push the limits, human perception starts to diverge, e.g. when asked to recognize a face over the years and the time span becomes quite large (child vs. adolescent vs. old person).
Are you asking to compute the similarity of noses/eyes etc.? If so, I think the best way is to find a set of noses/eyes belonging to the same persons, train on this, and then check your performance on a different set from different persons.
The usual approach, as far as I know, is to train and test using pairs of images comprising positive and negative samples. A positive sample is a pair of images belonging to the same person, while a negative one is an image pair belonging to two different people.
I am not sure what you are asking exactly so maybe you can check out this link.
Hope it helped.
Edit 2:
Well, since you want to convert the distance you are getting into a similarity expressed as a percentage, you can somehow invert the distance to get the similarity. There are some problems arising here, though:
There is a value for an absolute match: dis = 0, or equivalently sim = 100%. But there is no explicit value for a total mismatch (dis = infinite, so sim = 0%), whereas the similarity scale has explicit boundaries, 0% - 100%.
Since the extreme values include 0 and infinity, a smarter conversion than simple inversion is needed.
You can easily assign 1.0 (or 100% similarity) to the absolute match, but what you take as the total mismatch is not clear. You can treat an arbitrarily high distance as 0.0 (since there is no big difference between, e.g., a distance of 10000 and one of 11000), and consider all distance values higher than this to be 0.0 as well.
To find what that value should be, I would suggest comparing two quite distinct images and using the distance between them as the 0.0 point.
Let's suppose that this value is disMax = 250.0; and simMax = 100.0;
then a simple approach could be:
double sim = simMax - simMax/disMax*dis;
which gives a similarity of 100.0 for distance 0 and 0.0 for distance 250. Distances larger than 250 would give negative similarity values, which should be treated as 0.0.
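A minimal sketch of that conversion in Java, with the negative tail clamped to 0.0 as suggested; disMax = 250.0 remains the arbitrary calibration value discussed above:

```java
public class Similarity {
    static final double DIS_MAX = 250.0; // calibrated "total mismatch" distance
    static final double SIM_MAX = 100.0;

    // Linearly maps distance 0..DIS_MAX to similarity 100..0,
    // clamping anything beyond DIS_MAX to 0.
    public static double toPercent(double dis) {
        double sim = SIM_MAX - SIM_MAX / DIS_MAX * dis;
        return Math.max(0.0, sim);
    }

    public static void main(String[] args) {
        System.out.println(toPercent(0.0));   // perfect match: 100%
        System.out.println(toPercent(125.0)); // halfway to the calibration point
        System.out.println(toPercent(400.0)); // beyond disMax, clamped to 0
    }
}
```

The value returned by predict() would be fed straight into toPercent(); recalibrating DIS_MAX against a labeled set of image pairs is still advisable, for the reasons given above.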
I have written this code to compute the sine of an angle. It works fine for smaller angles, say up to ±360. But with larger angles it starts giving faulty results. (By larger, I mean something in the range of ±720 or ±1080.)
In order to get more accurate results I increased the number of times my loop runs. That gave me better results, but that too had its limitations.
So I was wondering: is there a fault in my logic, or do I need to fiddle with the conditional part of my loop? How can I overcome this shortcoming of my code? The built-in Java sine function gives correct results for all the angles I have tested, so where am I going wrong?
Also, can anyone give me an idea of how to modify the condition of my loop so that it runs until I reach a desired decimal precision?
import java.util.Scanner;

class SineFunctionManual
{
    public static void main(String a[])
    {
        System.out.print("Enter the angle for which you want to compute sine : ");
        Scanner input = new Scanner(System.in);
        int degreeAngle = input.nextInt(); // Angle in degrees.
        input.close();
        double radianAngle = Math.toRadians(degreeAngle); // Sine computation is done in radians.
        System.out.println(radianAngle);
        double sineOfAngle = radianAngle, prevVal = radianAngle; // sineOfAngle holds the result, prevVal the next term to be added.
        // double fractionalPart = 0.1; // Used to check the answer to a certain number of decimal places, as seen in the for loop.
        for (int i = 3; i <= 20; i += 2)
        {
            // x^3/3! can be written as ((x^2)/(3*2)) * (x/1!); similarly x^5/5! is
            // ((x^2)/(5*4)) * ((x^3)/3!), and so on. The sign alternates between terms.
            prevVal = (-prevVal) * ((radianAngle * radianAngle) / (i * (i - 1)));
            sineOfAngle += prevVal;
            // int iPart = (int) sineOfAngle;
            // fractionalPart = sineOfAngle - iPart; // Extracting the fractional part to check decimal places.
        }
        System.out.println("The value of sin of " + degreeAngle + " is : " + sineOfAngle);
    }
}
The polynomial approximation for sine diverges widely for large positive and large negative values. Remember, sine varies between -1 and 1 over all real numbers. Polynomials, on the other hand, particularly ones with higher orders, can't do that.
I would recommend using the periodicity of sine to your advantage.
int degreeAngle = input.nextInt() % 360;
This will give accurate answers, even for very, very large angles, without requiring an absurd number of terms.
The further you get from x = 0, the more terms of the Taylor expansion for sin x you need to get within a particular accuracy of the correct answer. You're stopping around the 20th term, which is fine for small angles. If you want better accuracy for large angles, you'll just need to add more terms.
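Putting the two answers together, here is a sketch: reduce the angle using periodicity first, then run the same series. The term count is kept close to the original loop:

```java
public class ReducedSine {
    // Taylor series for sine after reducing the angle to [-180, 180) degrees,
    // where this many terms are more than enough for double precision.
    public static double sinDegrees(int degreeAngle) {
        int reduced = degreeAngle % 360;    // periodicity: sin(x) = sin(x + 360k)
        if (reduced >= 180) reduced -= 360; // center on zero for fastest convergence
        if (reduced < -180) reduced += 360;
        double x = Math.toRadians(reduced);
        double term = x, sum = x;
        for (int i = 3; i <= 21; i += 2) {
            term = -term * x * x / (i * (i - 1)); // same recurrence as the original
            sum += term;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sinDegrees(1110)); // same angle as sin(30 deg)
        System.out.println(Math.sin(Math.toRadians(30)));
    }
}
```

Because the series now only ever sees |x| ≤ π, the truncation error stays far below double precision no matter how large the input angle is.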
How do you calculate a percentage (or average) when you have the dividend but not the divisor?
You have a lot of values, and some of them figure into your average - or percentage - and some of them probably don't. You are not expressing the problem clearly enough for anyone to be able to give you a meaningful answer.
A percentage represents a fraction, one value divided by another (multiplied by 100 to express it in percentage, but that's trivial and not part of the problem). What is the value that represents 100%? And what value are you trying to assign? In what way do you think that the quantity of bonuses should affect the percentage?
Some possible answers:
The total bonus earned by an individual, as compared to her nominal salary. If she earns $50k and her bonus is $20K, that is 20/50 *100 = 40%.
The total bonus earned by an individual, as compared to all the bonuses given out that year. If she received the same $20K, but the company gave out $100K in bonuses, then the percentage is 20/100 * 100 = 20%.
The most recent bonus earned by an individual, as compared to all bonuses awarded to her this year. If she got $5K for her last bonus, and the total was $20K, that's 5/20 * 100 = 25%.
We really don't have enough information to go on; it could be any of these, or something entirely different. It is entirely possible to have a percentage value greater than 100%.
The average of one value is that value (Total number=1).
But this probably means I don't understand your question.
Without knowing the number of years, you need to know something else about the range of possible bonuses, e.g. does it have to be a whole number between 15 and 25%? However, this is largely guessing.
To get an average, you need a total and a count. BTW: In your case you want the geometric average, but you need to know the same things.
If your input is a list of numbers, showing percentage values means you need to compute the total and then see how much of the total each of them is:
For instance, if you have 110, 110, 110, you'll have a total of 330 and each of the values will be shown as 110/330 = 0.33 = 33% of the total.
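A minimal sketch of that computation:

```java
public class PercentOfTotal {
    // Expresses each value as a percentage of the list's total.
    public static double[] percentages(double[] values) {
        double total = 0.0;
        for (double v : values) total += v;
        double[] out = new double[values.length];
        for (int i = 0; i < values.length; i++) {
            out[i] = values[i] / total * 100.0;
        }
        return out;
    }

    public static void main(String[] args) {
        // 110, 110, 110 -> each value is one third of the 330 total.
        for (double p : percentages(new double[]{110, 110, 110})) {
            System.out.println(p);
        }
    }
}
```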
In addition, if I have three decimal values 120, 4420, and 230, how can I get a number less than 1 that represents the average of these 3 values?
You cannot. The average of those 3 numbers will be (120 + 4420 + 230) / 3. That will never be less than one. Maybe you are confused about what average means?
You need to be more specific or give an example. But I will give an answer based off of what I THINK you mean.
You cannot find the average of one lone number. If you were measuring a temperature of 125 degrees every hour you could do it; the answer would obviously be 125. That is the closest thing I can think of to what you are asking. You need to be more specific or the problem cannot be done. Otherwise, use the simple formula sum / count, also known as the mean. That would be 125/1, which is still 125.