How to get the same RSI as Tradingview in Java? - java

I don't understand why, but my RSI is always different from Tradingview's RSI.
I use the same period (14 candles of 15 minutes each), I use the same type of value (close price), and I tried adding the last non-closed candle, but I never get the same RSI.
Tradingview RSI code:
//#version=4
study(title="Relative Strength Index", shorttitle="RSI",
format=format.price, precision=2, resolution="")
len = input(14, minval=1, title="Length")
src = input(close, "Source", type = input.source)
up = rma(max(change(src), 0), len)
down = rma(-min(change(src), 0), len)
rsi = down == 0 ? 100 : up == 0 ? 0 : 100 - (100 / (1 + up / down))
plot(rsi, "RSI", color=#7E57C2)
band1 = hline(70, "Upper Band", color=#787B86)
bandm = hline(50, "Middle Band", color=color.new(#787B86, 50))
band0 = hline(30, "Lower Band", color=#787B86)
fill(band1, band0, color=color.rgb(126, 87, 194, 90), title="Background")
My code with TA-Lib:
MInteger outBegIdx = new MInteger();
MInteger outNbElement = new MInteger();
double[] outReal = new double[array.length-1];
int startIdx = 0;
int endIdx = array.length - 1;
Core core = new Core();
core.rsi(startIdx, endIdx, array, length-1, outBegIdx, outNbElement, outReal);
System.out.println(Arrays.toString(outReal));
return outReal[0];
My custom code without a library:
double av_gain_up_periods = 0;
double av_loss_down_periods = 0;
int gain_count = 0;
int loss_count = 0;
double previous_observation = array[0];
for (int i = 1; i < array.length; i++) {
    if (previous_observation <= array[i]) { // if gain
        double gain = array[i] - previous_observation;
        gain_count++;
        av_gain_up_periods += gain;
    }
    else { // if loss
        double loss = previous_observation - array[i];
        loss_count++;
        av_loss_down_periods += loss;
    }
    previous_observation = array[i];
}
av_gain_up_periods = av_gain_up_periods/gain_count;
av_loss_down_periods = av_loss_down_periods/loss_count;
// CALCULATE RSI
double relative_strength = av_gain_up_periods/av_loss_down_periods;
double relative_strength_index = 100-(100/(1+relative_strength));
// PRINT RESULT
return relative_strength_index;
I can guarantee you that I have 14 closing prices and that they are the same as Tradingview's. The difference is in the calculation.
Related to this issue
Thanks

I think the problem is in the RMA (rolling moving average) calculation. It happens because of a mismatched starting point for the RMA: a different starting point for the RMA will cause a big difference in the calculated RSI compared to Tradingview's RSI. Tradingview uses real-time data and therefore has much older data available as well. My suggestion is to start with 1000 klines.
Here is the Tradingview formula:
len = input(14, minval=1, title="Length")
src = input(close, "Source", type = input.source)
up = rma(max(change(src), 0), len)
down = rma(-min(change(src), 0), len)
rsi = down == 0 ? 100 : up == 0 ? 0 : 100 - (100 / (1 + up / down))
According to this formula, first you need to get the prices and store them somewhere. Each closed candle is either a gain or a loss. After that you have to calculate the change for gains and losses.
Note: for a closer result, get more klines. You need the older data to have a more accurate calculation (for example, 1000 klines).
Now it is time to calculate the RMA.
The RMA formula is:
RMA = alpha * source (i.e. the change) + (1 - alpha) * previous RMA
alpha = 1 / period (for example, 1/14)
At this point we are at the tricky part of the calculation. As you can see in the RMA formula, we need the previous RMA to calculate the current one. Assume that we are using a 1-hour timeframe and store the klines in, let's say, an array. Every array element has a previous element, hence a previous RMA; but what about the element stored in array[0]?
It would also need a previous RMA. Here you will need to calculate the SMA (simple moving average).
SMA: the sum of the prices within a time period divided by the number of periods.
The calculated SMA is used as the RMA of the array's first member.
Now it is time to calculate the RSI. With this method I was able to calculate the RSI almost exactly the same as Tradingview's.
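As an illustration (a sketch of the method described above, not Tradingview's actual source), here is a minimal Java version. It assumes closes is ordered oldest to newest and contains many more than period + 1 values (for example 1000 klines):

// Sketch: RSI with an RMA (alpha = 1/period) seeded by the SMA of the first `period` changes.
public static double rsi(double[] closes, int period) {
    if (closes.length <= period) throw new IllegalArgumentException("need more candles than the period");
    // Seed: simple averages of gains and losses over the first `period` changes (the SMA step).
    double avgGain = 0, avgLoss = 0;
    for (int i = 1; i <= period; i++) {
        double change = closes[i] - closes[i - 1];
        if (change > 0) avgGain += change; else avgLoss -= change;
    }
    avgGain /= period;
    avgLoss /= period;
    // RMA: alpha * change + (1 - alpha) * previous RMA, applied to every later candle.
    double alpha = 1.0 / period;
    for (int i = period + 1; i < closes.length; i++) {
        double change = closes[i] - closes[i - 1];
        avgGain = alpha * Math.max(change, 0) + (1 - alpha) * avgGain;
        avgLoss = alpha * Math.max(-change, 0) + (1 - alpha) * avgLoss;
    }
    if (avgLoss == 0) return 100;
    if (avgGain == 0) return 0;
    return 100 - 100 / (1 + avgGain / avgLoss);
}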
Personally, I wrote a bot in C++ to calculate RSI the way Tradingview does, using the Binance API. You can see my code here:
https://github.com/Mina-Jahan/RSIgnal_bot
I would appreciate hearing your opinion on how to make my code better, if there is a way to improve it.
Thank you.

Related

How to get exact chance of three variables?

I am struggling to calculate the estimated chance for a user to get an item. He has 15 tries (which may vary), a chance to get into a group of items, and then a chance to get the exact item. I can see two ways of doing it, but neither is perfect. Please take a look:
int userTries = 15;//Variable holding number of how many tries user has to get an item
double groupChance = 17;//Chance for user to get any item from the group
double itemChance = 80;//Chance for user to get specific item from the group
double simpleChance = groupChance * itemChance * userTries / 100.0;
int correctionTries = 10000;
int totalPassed = 0;
for (int i = 0; i < correctionTries; i++)
{
    for (int x = 0; x < userTries; x++)
    {
        //Rnd.chance is checking if input chance was rolled, if chance is 17 then this method will return true in 17 tries out of 100
        if (Rnd.chance(groupChance))
            if (Rnd.chance(itemChance))
            {
                totalPassed++;
                break;
            }
    }
}
double iterationChance = (double) totalPassed / (double) correctionTries * 100.0;
System.out.println("simple=" + simpleChance + " iteration=" + iterationChance);
When groupChance and itemChance are low (like 1 & 1), simpleChance gives very good results, but when the chances are high (like 17 & 80) it varies a lot from the iteration result. The problem with the iteration solution is that when I increase one of the chances, the result can actually come out lower because of bad luck in the simulated rolls. I could increase correctionTries to solve that issue, but then the chance would be different when calculating the same values again, and it would also have a significant impact on performance.
Do you know any way to calculate that chance with low performance impact and good estimation that stays the same after calculating it once again?
I assume that groupChance and itemChance are probabilities (in percent) to get into the specific group and to get the specific item within the group.
If so, then the probability to get this specific item is groupChance/100 * itemChance/100 = 0.17 * 0.8 = 0.136 = 13.6%.
It is not clear what simpleChance should be: to get the specific item at least once in 15 tries? Exactly once in 15 tries? To get it 15 times in a row?
If you want to get it 15 times in a row, the chance is (groupChance/100 * itemChance/100) ^ userTries = 0.000000000000101.
If you want to get it at least once in 15 tries, the chance is 1 - (1 - groupChance/100 * itemChance/100) ^ userTries = 0.88839.
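As a quick check, here is a small Java sketch (using the same variable names as the question) that computes these closed-form values:

int userTries = 15;
double groupChance = 17;
double itemChance = 80;
double p = (groupChance / 100.0) * (itemChance / 100.0);     // 0.17 * 0.80 = 0.136
double fifteenInARow = Math.pow(p, userTries);               // ~1.01e-13
double atLeastOnce = 1 - Math.pow(1 - p, userTries);         // ~0.88839
System.out.println("at least once in " + userTries + " tries: " + atLeastOnce);

Unlike the Monte Carlo loop, this is exact and returns the same value every time it is calculated.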

Slow spark application - java

I am trying to create a Spark application that takes a dataset of (lat, long, timestamp) points and increases a cell count if they fall inside a grid cell. The grid consists of 3D cells with lon and lat as the base and time as the z-axis.
I have completed the application and it does what it's supposed to, but it takes hours to scan the whole dataset (~9 GB). My cluster consists of 3 nodes with 4 cores and 8 GB RAM each, and I am currently using 6 executors with 1 core and 2 GB each.
I am guessing that I can optimize the code quite a bit, but is there a big mistake in my code that results in this delay?
//Create a JavaPairRDD with tuple elements. For each String line of lines we split the string
//and assign latitude, longitude and timestamp of each line to sdx,sdy and sdt. Then we check if the data point of
//that line is contained in a cell of the centroids list. If it is then a new tuple is returned
//with key the latitude, Longitude and timestamp (split by ",") of that cell and value 1.
JavaPairRDD<String, Integer> pairs = lines.mapToPair(x -> {
    String sdx = x.split(" ")[2];
    String sdy = x.split(" ")[3];
    String sdt = x.split(" ")[0];
    double dx = Double.parseDouble(sdx);
    double dy = Double.parseDouble(sdy);
    int dt = Integer.parseInt(sdt);
    List<Integer> t = brTime.getValue();
    List<Point2D.Double> p = brCoo.getValue();
    double dist = brDist.getValue();
    int dur = brDuration.getValue();
    for (int timeCounter = 0; timeCounter < t.size(); timeCounter++) {
        for (int cooCounter = 0; cooCounter < p.size(); cooCounter++) {
            double cx = p.get(cooCounter).getX();
            double cy = p.get(cooCounter).getY();
            int ct = t.get(timeCounter);
            String scx = Double.toString(cx);
            String scy = Double.toString(cy);
            String sct = Integer.toString(ct);
            if (dx > (cx - dist) && dx <= (cx + dist)) {
                if (dy > (cy - dist) && dy <= (cy + dist)) {
                    if (dt > (ct - dur) && dt <= (ct + dur)) {
                        return new Tuple2<String, Integer>(scx + "," + scy + "," + sct, 1);
                    }
                }
            }
        }
    }
    return new Tuple2<String, Integer>("Out Of Bounds", 1);
});
Try to use mapPartitions, it's faster (see this example link); the other thing to do is to move this part of the code outside the timeCounter loop.
One of the biggest factors that may contribute to costs in running a Spark map like this relates to data access outside of the RDD context, which means driver interaction. In your case, there are at least 4 accessors of variables where this occurs: brTime, brCoo, brDist, and brDuration. It also appears that you're doing some line parsing via String#split rather than leveraging built-ins. Finally, scx, scy, and sct are all calculated for each loop, though they're only returned if their numeric counterparts pass a series of checks, which means wasted CPU cycles and extra GC.
Without actually reviewing the job plan, it's tough to say whether the above will make performance reach an acceptable level. Check out your history server application logs and see if there are any stages which are eating up your time - once you've identified a culprit there, that's what actually needs optimizing.
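For illustration, here is a rough, untested sketch (my own, assuming the question's broadcast variables and grid semantics) that combines both suggestions: mapPartitionsToPair so the broadcast values are read once per partition, the line split only once, and the string key built only when a point actually matches a cell. In Spark 2.x the function returns an iterator:

JavaPairRDD<String, Integer> pairs = lines.mapPartitionsToPair(iter -> {
    // Read broadcast values once per partition instead of once per record.
    List<Integer> t = brTime.getValue();
    List<Point2D.Double> p = brCoo.getValue();
    double dist = brDist.getValue();
    int dur = brDuration.getValue();
    List<Tuple2<String, Integer>> out = new ArrayList<>();
    while (iter.hasNext()) {
        String[] parts = iter.next().split(" ");   // split each line once
        int dt = Integer.parseInt(parts[0]);
        double dx = Double.parseDouble(parts[2]);
        double dy = Double.parseDouble(parts[3]);
        String key = "Out Of Bounds";
        search:
        for (int ct : t) {
            if (dt <= ct - dur || dt > ct + dur) {
                continue;                          // check the time window before the coordinate loop
            }
            for (Point2D.Double c : p) {
                if (dx > c.getX() - dist && dx <= c.getX() + dist
                        && dy > c.getY() - dist && dy <= c.getY() + dist) {
                    // Build the string key only for an actual match.
                    key = c.getX() + "," + c.getY() + "," + ct;
                    break search;
                }
            }
        }
        out.add(new Tuple2<>(key, 1));
    }
    return out.iterator();
});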
I tried mapPartitionsToPair and also moved the calculations of scx, scy and sct so that they are calculated only if the point passes the conditions. The speed of the application has improved dramatically, down to only 17 minutes! I believe mapPartitionsToPair was the biggest factor. Thanks a lot Mks and bsplosion!

How can I average N images without keeping all images in memory, using OpenCV

I've created a class that attempts to create an average image from any number of images (passed one by one).
This process will run in its own thread, while other threads read in images, do processing, and pass the output to this average object.
Unfortunately, the averaged image gets brighter and brighter with each additional image.
I suspect that there is an error in my averaging function, but I have been unable to find it.
Edit: I was missing the "curImg/curCount" part of the equation. With this correction, the images now come out dark. I am still not left with a good average.
Edit 2: I see that I was downvoted. Is there something I could do to improve this question?
class AverageImage {
    private Vector<Mat> average = new Vector<>();
    private int count = 0;

    public void add(Mat img) {
        count++;
        Vector<Mat> splitImg = new Vector<>();
        Mat convertedImg = img.clone();
        convertedImg.convertTo(convertedImg, CvType.CV_32FC1);
        Core.split(img.clone(), splitImg);
        if (average.isEmpty()) {
            average = splitImg;
        } else {
            // prevAverage * (prevCount/curCount) + curImg/curCount
            for (int i = 0; i < average.size(); i++) {
                Core.multiply(average.get(i), new Scalar((count - 1) / ((double) count)), average.get(i));
                Mat temp = new Mat();
                Core.divide(count, splitImg.get(i), temp);
                Core.add(average.get(i), temp, average.get(i));
            }
        }
    }

    public Mat getAverage() {
        Mat convertedAverage = new Mat();
        Core.merge(average, convertedAverage);
        convertedAverage.convertTo(convertedAverage.clone(), CvType.CV_8UC3);
        return convertedAverage;
    }
}
Minor comment: convertedImg is not being used at all in your code. You can remove it.
Your determination of what is known as the cumulative mean is correct. However, the part that is really messing with you is the divide statement:
Core.divide(count, splitImg.get(i), temp);
Consulting the OpenCV documentation, when you call the variant where the first argument is a scalar, the operation being performed is:
dst(I) = saturate(scale / src(I))
The scale is divided by each source element to produce the output. Therefore, what you are doing is count / splitImg.get(i) when you should actually be doing splitImg.get(i) / count. With this in mind, divide does not support taking an image and dividing it by a coefficient. However, a workaround is to use Core.multiply with the inverse of count:
Core.multiply(splitImg.get(i), new Scalar(1.0 / count), temp);
All you need to do is change the Core.divide statement to the one above and it should work out.
// prevAverage * (prevCount/curCount) + curImg/curCount
for (int i = 0; i < average.size(); i++) {
    Core.multiply(average.get(i), new Scalar((count - 1) / ((double) count)), average.get(i));
    Mat temp = new Mat();
    // Core.divide(count, splitImg.get(i), temp);
    Core.multiply(splitImg.get(i), new Scalar(1.0 / count), temp);
    Core.add(average.get(i), temp, average.get(i));
}
To verify that this is correct, here's the math I wrote out in LaTeX. Given a signal of N values which we call x, we can calculate the mean with the first line of the equations below; x_i denotes the ith value of the signal x. The second line is what happens when we add an additional term to the mean. If you work out the math, it verifies that the equation in your code is correct, but you simply need to fix the Core.divide statement:
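In LaTeX (reconstructed here from the description above):

\bar{x}_N = \frac{1}{N}\sum_{i=1}^{N} x_i

\bar{x}_N = \frac{1}{N}\left(\sum_{i=1}^{N-1} x_i + x_N\right) = \frac{N-1}{N}\,\bar{x}_{N-1} + \frac{x_N}{N}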
The left-most term in the second equation is what you have in the first line of code in the loop, which is correct:
Core.multiply(average.get(i), new Scalar((count - 1) / ((double) count)), average.get(i));
Finally to compute the second term of this equation, we do:
Core.multiply(splitImg.get(i), new Scalar(1.0 / count), temp);
For numerical stability it's probably best to just store the accumulated sum image and divide that by the current N when you need the avg.
Just take care that you don't overflow.
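A minimal sketch of that accumulated-sum idea (my own illustration, assuming 8-bit, 3-channel input frames; the names are illustrative):

// Keep one floating-point sum image and a frame count; divide only when the average is requested.
private Mat sum = null;
private int count = 0;

public void add(Mat img) {
    Mat f = new Mat();
    img.convertTo(f, CvType.CV_32FC3);   // sum in float to avoid 8-bit overflow
    if (sum == null) {
        sum = f;
    } else {
        Core.add(sum, f, sum);
    }
    count++;
}

public Mat getAverage() {
    Mat avg = new Mat();
    Core.multiply(sum, new Scalar(1.0 / count, 1.0 / count, 1.0 / count), avg);
    avg.convertTo(avg, CvType.CV_8UC3);
    return avg;
}

For very long runs, a CV_64FC3 sum gives extra precision headroom.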

Compare graph values or structure

I have an Android application which captures gesture coordinates (3 axes: x, y, z). I need to compare them with coordinates I have in my DB and determine whether they are the same or not.
I also need to add some tolerance, since the accelerometer (the device which captures gestures) is very sensitive. That would be easy, but I also want to treat, for example, a "big circle" drawn in the air the same as a "small circle" drawn in the air, meaning the values would be different but the structure of the graph would be the same, right?
I have heard about translating graph values into bits and then comparing. Is that the right approach? Is there any library for such a comparison?
So far I have just hard-coded it, covering all my requirements except the last one (big circle vs. small circle).
My code now:
private int checkWhetherGestureMatches(byte[] values, String[] refValues) throws IOException {
    int valuesSize = 32;
    int ignorePositions = 4;
    byte[] valuesX = new byte[valuesSize];
    byte[] valuesY = new byte[valuesSize];
    byte[] valuesZ = new byte[valuesSize];
    for (int i = 0; i < valuesSize; i++) {
        int position = i * 3 + ignorePositions;
        valuesX[i] = values[position];
        valuesY[i] = values[position + 1];
        valuesZ[i] = values[position + 2];
    }
    Double[] valuesXprevious = new Double[valuesSize];
    Double[] valuesYprevious = new Double[valuesSize];
    Double[] valuesZprevious = new Double[valuesSize];
    for (int i = 0; i < valuesSize; i++) {
        int position = i * 3 + ignorePositions;
        valuesXprevious[i] = Double.parseDouble(refValues[position]);
        valuesYprevious[i] = Double.parseDouble(refValues[position + 1]);
        valuesZprevious[i] = Double.parseDouble(refValues[position + 2]);
    }
    int incorrectPoints = 0;
    for (int j = 0; j < valuesSize; j++) {
        if (valuesX[j] < valuesXprevious[j] + 20 && valuesX[j] > valuesXprevious[j] - 20
                && valuesY[j] < valuesYprevious[j] + 20 && valuesY[j] > valuesYprevious[j] - 20
                && valuesZ[j] < valuesZprevious[j] + 20 && valuesZ[j] > valuesZprevious[j] - 20) {
        } else {
            incorrectPoints++;
        }
    }
    return incorrectPoints;
}
EDIT:
I found JGraphT, it might work. If you know anything about that already, let me know.
EDIT2:
See these images, they are the same gesture but one is done in a slower motion than another.
Faster one:
Slower one:
I haven't captured images of the same gesture where one is smaller than the other; I might add that later.
If your list of gestures is complex, I would suggest training a neural network which can classify the gestures based on the graph value bits you mentioned. The task is very similar to the classification of handwritten digits, for which there are lots of resources on the net.
The other approach would be to mathematically guess the shape of the gesture, but I doubt it will be useful considering the tolerance of the accelerometer and the fact that users won't draw accurate shapes.
(a) Convert your 3D coordinates into a 2D plane figure. Use matrix transformations.
(b) Normalize your gesture scale, again with matrix transformations.
(c) Normalize the number of points, or use interpolation in the next step.
(d) Calculate the difference between your stored (s) gesture and current (c) gesture as
Sum((Xs[i] - Xc[i])^2 + (Ys[i] - Yc[i])^2), where i = 0 .. number of points.
If the difference is below your predefined precision, the gestures are equal.
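A rough Java sketch of steps (b) and (d), assuming both gestures have already been projected to 2D and resampled to the same number of points:

// Step (b): scale-normalize a gesture so its bounding box has unit size.
static void normalizeScale(double[] xs, double[] ys) {
    double minX = xs[0], maxX = xs[0], minY = ys[0], maxY = ys[0];
    for (int i = 1; i < xs.length; i++) {
        minX = Math.min(minX, xs[i]); maxX = Math.max(maxX, xs[i]);
        minY = Math.min(minY, ys[i]); maxY = Math.max(maxY, ys[i]);
    }
    double scale = Math.max(maxX - minX, maxY - minY);
    if (scale == 0) return;                      // degenerate gesture, nothing to scale
    for (int i = 0; i < xs.length; i++) {
        xs[i] = (xs[i] - minX) / scale;
        ys[i] = (ys[i] - minY) / scale;
    }
}

// Step (d): sum of squared point-to-point differences between stored (s) and current (c) gestures.
static double difference(double[] xs, double[] ys, double[] xc, double[] yc) {
    double sum = 0;
    for (int i = 0; i < xs.length; i++) {
        sum += (xs[i] - xc[i]) * (xs[i] - xc[i]) + (ys[i] - yc[i]) * (ys[i] - yc[i]);
    }
    return sum;
}

If the returned difference is below your chosen threshold, treat the gestures as equal.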
I have used a Java implementation of the Dynamic Time Warping algorithm. The library is called FastDTW.
Unfortunately, from what I understood, they don't support it anymore, though I found a use for it.
https://code.google.com/p/fastdtw/
I can't recall now, but I think I used this one and compiled it myself:
https://github.com/cscotta/fastdtw/tree/master/src/main/java/com/fastdtw/dtw

Using math in solr if expressions

I am trying to use math in an 'if' statement in Solr. What I want to achieve is the following: I have a trapezoid function defined by 4 points on the x-axis: left_minimum, left_optimum, right_optimum, right_maximum.
For every value of the field I want the following outcome (value v, score s, maxScore):
if (v < left_minimum)
    s = 0;
if (v > right_maximum)
    s = 0;
if (v >= left_optimum AND v <= right_optimum)
    s = maxScore;
if (v >= left_minimum AND v < left_optimum)
    s = maxScore * (v - left_minimum) / (left_optimum - left_minimum);
if (v > right_optimum AND v <= right_maximum)
    s = maxScore * (v - right_optimum) / (right_maximum - right_optimum);
The basic idea is to rank results which are "near" the ideal result higher than the results that are too far away.
To achieve this I tried to split my calculation for height into three parts (maxScore is 1.0):
heightWM=product(1.0, map(height,160,170,1,0))
&heightWL=if(height < 160 AND height > 150, product(1.0, div(sub(height,160),10)), 0)
&heightWR=if(height < 180 AND height > 170, product(1.0, div(sub(height,170),10)), 0)
&heightW=sum(heightWL, heightWM, heightWR)
The problem is that Solr doesn't accept 'if' with a mathematical expression, or at least I haven't found out how to do it.
Is there any other possibility to achieve this?
Ok, I actually found a solution or, better, a workaround.
heightWL=if(map(height,160,169,1,0),div(sub(height,160),10),0)
So I am basically mapping all values a with a > x and a < y to 1, and all other values to 0. This way 'if' can actually evaluate the condition, since it works with 1 and 0, and in that case I calculate the sub-value for this range.
