QRS Detection in Java from ECG byte array - java

I read an ECG byte array from a file. Now I want to detect the QRS complex of the ECG data I have read.
How can I achieve this in Java?
I get the byte array from a Lifegain defibrillator (an ECG device). I draw the ECG on Android from these bytes. Now I want to detect the QRS complex (the term used for the calculation of the time and voltage of the wave of one heartbeat). DATA=LeadData":"-284,-127,-122,17,-35,10,32,10,52,16,49,33,38,69,70,58,45,93,47,88,58,90,149,5,82,-12,-4,40,-34,29,-29,5,-4,-17,-13,-29,-13,-4,-9,-9,-10,-20,-15,-22,-32,-25,-23,-2,-15,-7,-13,-19,-17,-28,-27,-27,-33,-20,-16,-13,-20,-10,-22,-20,-19,-28,-15,-19,-22,-21,-9,-3,-6,-8,-6,-11,-8,-8,-5,-10,-5,-6,-9,-4,-6,3,20,3,14,7,11,10,5,11,5,10,2,10,13,14"
Regards,
shah

If the data you have is the data I think you have, you need to use one of the established algorithms to detect the QRS complex.
There are a lot of algorithms out there for detecting a QRS complex; one of the easiest is "A Moving Average based Filtering System with its Application to Real-time QRS Detection" by HC Chen and SW Chen (you can get it at http://www.cinc.org/archives/2003/pdf/585.pdf).
The stages are:
High Pass filtering
Low Pass filtering
Decision making stage
After the low pass stage, the signal has the clear peaks we need to detect the QRS complex. The last stage is the decision making stage; the article gives a formula to implement it.
We need to know when a QRS complex starts, so we need to set a threshold for this. The formula in this implementation is:
threshold = alpha * gamma * peak + (1 - alpha) * threshold
The peak is the local maximum in the current window (we usually scan the signal with a window of width 250), the initial threshold can be the first peak found, alpha is a random value greater than 0 and smaller than 1, and gamma is either 0.15 or 0.20. If the value of the current signal exceeds the threshold, a QRS complex is found.
Here is the source code in Java for low pass, high pass and decision making:
// High pass filter
// y1[n] = 1/M * Sum[m=0, M-1] x[n-m]
// y2[n] = x[n - (M+1)/2]
public static float[] highPass(int[] sig0, int nsamp) {
    float[] highPass = new float[nsamp];
    int M = 5; // M is recommended to be 5 or 7 according to the paper
    float constant = (float) 1 / M;
    for (int i = 0; i < sig0.length; i++) {
        // delayed sample y2, wrapping around the start of the signal
        int y2_index = i - ((M + 1) / 2);
        if (y2_index < 0) {
            y2_index = nsamp + y2_index;
        }
        float y2 = sig0[y2_index];
        // moving average y1 over the last M samples, wrapping around
        float y1_sum = 0;
        for (int j = i; j > i - M; j--) {
            int x_index = j; // i - (i - j) simplifies to j
            if (x_index < 0) {
                x_index = nsamp + x_index;
            }
            y1_sum += sig0[x_index];
        }
        float y1 = constant * y1_sum;
        highPass[i] = y2 - y1;
    }
    return highPass;
}
// Low pass filter: at position n, store the sum of squares of the 30 samples in the window
public static float[] lowPass(float[] sig0, int nsamp) {
    float[] lowPass = new float[nsamp];
    for (int i = 0; i < sig0.length; i++) {
        float sum = 0;
        if (i + 30 < sig0.length) {
            for (int j = i; j < i + 30; j++) {
                float current = sig0[j] * sig0[j];
                sum += current;
            }
        }
        else if (i + 30 >= sig0.length) {
            int over = i + 30 - sig0.length;
            for (int j = i; j < sig0.length; j++) {
                float current = sig0[j] * sig0[j];
                sum += current;
            }
            for (int j = 0; j < over; j++) {
                float current = sig0[j] * sig0[j];
                sum += current;
            }
        }
        lowPass[i] = sum;
    }
    return lowPass;
}
public static int[] QRS(float[] lowPass, int nsamp) {
    int[] QRS = new int[nsamp];
    // initial threshold: the largest value among the first 200 samples
    double threshold = 0;
    for (int i = 0; i < 200; i++) {
        if (lowPass[i] > threshold) {
            threshold = lowPass[i];
        }
    }
    int frame = 250; // window width
    for (int i = 0; i < lowPass.length; i += frame) {
        float max = 0;
        int index = 0;
        if (i + frame > lowPass.length) {
            index = lowPass.length;
        }
        else {
            index = i + frame;
        }
        // find the local maximum (peak) of the current window
        for (int j = i; j < index; j++) {
            if (lowPass[j] > max) max = lowPass[j];
        }
        // mark at most one QRS complex per window
        boolean added = false;
        for (int j = i; j < index; j++) {
            if (lowPass[j] > threshold && !added) {
                QRS[j] = 1;
                added = true;
            }
            else {
                QRS[j] = 0;
            }
        }
        // adapt the threshold for the next window
        double gamma = (Math.random() > 0.5) ? 0.15 : 0.20;
        double alpha = 0.01 + (Math.random() * (0.1 - 0.01));
        threshold = alpha * gamma * max + (1 - alpha) * threshold;
    }
    return QRS;
}
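To tie this back to the question: below is a minimal sketch of how the LeadData string could be parsed and pushed through the three stages above. The parsing helper is my own invention, not part of the paper; note also that QRS() reads the first 200 samples to seed its threshold, so it needs a strip of at least 200 samples, more than the short excerpt quoted in the question.

// Hypothetical driver: parse the comma-separated samples and run the pipeline.
public static int[] detectQrs(String leadData) {
    String[] tokens = leadData.split(",");
    int[] signal = new int[tokens.length];
    for (int i = 0; i < tokens.length; i++) {
        signal[i] = Integer.parseInt(tokens[i].trim());
    }
    int nsamp = signal.length;
    float[] hp = highPass(signal, nsamp); // stage 1
    float[] lp = lowPass(hp, nsamp);      // stage 2
    return QRS(lp, nsamp);                // stage 3: 1 marks a detected QRS complex
}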

Please follow the links below; I think they will help you.
http://www.cinc.org/archives/2008/pdf/0441.pdf
http://carg.site.uottawa.ca/doc/ELG6163GeoffreyGreen.pdf
http://www.eplimited.com/osea13.pdf
http://mirel.xmu.edu.cn/mirel/public/Teaching/QRSdetection.pdf
http://sourceforge.net/projects/ecgtoolkit-cs/files/ecgtoolkit-cs/ecgtoolkit-cs-2_2/

Related

Backpropagation for neural network

I'm trying to implement a simple neural network.
I know there are a lot of libraries already available out there; that is not the point.
My network has only 3 layers:
one input layer
one hidden layer
one output layer
The output layer has 8 neurons, each representing a different class.
I understand how to implement the feedforward algorithm, but I'm really struggling with the backpropagation one.
Here is what I've come up with so far:
private void backPropagation(List<List<Input>> trainingData)
{
    List<Input> trainingSet = new ArrayList<Input>();
    for (int row = 0; row < trainingData.size(); row++) {
        trainingSet = trainingData.get(row);
        // we start by getting the output of the network
        List<Double> outputs = feedFoward(trainingSet);
        // I'm using the Iris dataset, so here the desiredOutput is
        // the species where
        //  1 : setosa
        //  2 : versicolor
        //  3 : virginica
        double desiredOutput = getDesiredOutputFromTrainingSet(trainingSet);
        // We are getting the output neuron that fired the highest result,
        // e.g. with
        // Output layer:
        //  Neuron 1 --> 0.001221513
        //  Neuron 2 --> 0.990516510
        //  Neuron 3 --> 0.452221000
        // the network predicted that the trainingData corresponds to (2) versicolor
        double highestOutput = Collections.max(outputs);
        // What our neuron should aim for
        double target = 0;
        List<Double> deltaOutputLayer = new ArrayList<Double>();
        List<List<Double>> newWeightsOutputLayer = new ArrayList<List<Double>>();
        for (int j = 0; j < outputs.size(); j++) {
            double out = outputs.get(j);
            // Important to do j + 1 because the species classes start at 1
            // (1 : setosa, 2 : versicolor, 3 : virginica)
            if (out == highestOutput && (j + 1) == desiredOutput)
                target = 0.99; // 1
            else
                target = 0.01; // 0
            // chain rule
            double delta = (out - target) * LogisticFonction.sigmoidPrime(out);
            deltaOutputLayer.add(delta);
            // get the new weight value from delta and eta (the learning rate)
            List<Double> newWeights = new ArrayList<Double>();
            for (int weightIndex = 0; weightIndex < _outputLayer.get(j).get_weigths().size(); weightIndex++) {
                double gradient = delta * _outputsAfterActivationHiddenLayer.get(weightIndex);
                double newWeight = _outputLayer.get(j).get_weigths().get(weightIndex) - (_learningRate * gradient);
                newWeights.add(newWeight);
            }
            newWeightsOutputLayer.add(newWeights);
        }
        // hidden layer
        double totalError = 0;
        for (int i = 0; i < _neuronsHiddenLayer.size(); i++) {
            for (int j = 0; j < deltaOutputLayer.size(); j++) {
                double wi = _outputLayer.get(j).get_weigths().get(i);
                double delta = deltaOutputLayer.get(j);
                double partialError = wi * delta;
                totalError += partialError;
            }
            double z = _outputsAfterActivationHiddenLayer.get(i);
            double errorNeuron = LogisticFonction.sigmoidPrime(z);
            List<Double> newWeightsHiddenLayer = new ArrayList<Double>();
            for (int k = 0; k < _neuronsHiddenLayer.get(i).get_weigths().size(); k++) {
                double in = _neuronsHiddenLayer.get(i).get_inputs().get(k);
                double gradient = totalError * errorNeuron * in;
                double oldWeigth = _neuronsHiddenLayer.get(i).get_weigths().get(k);
                double newWeigth = oldWeigth - (_learningRate * gradient);
                _neuronsHiddenLayer.get(i).get_weigths().set(k, newWeigth);
                newWeightsHiddenLayer.add(newWeigth);
            }
        }
        // then update the weights of the output layer with the new values
        for (int i = 0; i < newWeightsOutputLayer.size(); i++) {
            List<Double> newWeigths = newWeightsOutputLayer.get(i);
            _outputLayer.get(i).set_weigths(newWeigths);
        }
    }
}
I've tried testing with the Iris dataset: https://en.wikipedia.org/wiki/Iris_flower_data_set
But my results are very inconsistent, leading me to believe there is a bug in my backpropagation algorithm.
If anyone can see a major flaw, please tell me!
Thanks a lot.
In this part of the code:
if (out == highestOutput && (j + 1) == desiredOutput)
    target = 0.99; //1
else
    target = 0.01; //0
The target output of the neuron is 0.99 only when the condition (out == highestOutput && (j + 1) == desiredOutput) holds. That means you only expect the neuron's output to be 0.99 when the feedforward pass already picks the same neuron as the training example. This is incorrect.
The condition in that part of the code should be only (j + 1) == desiredOutput. Remove the out == highestOutput condition. The target output should be 0.99 for the desiredOutput neuron no matter whether the feedforward pass results in that neuron or not. So this is the corrected code:
if ((j + 1) == desiredOutput)
    target = 0.99; //1
else
    target = 0.01; //0

Formatting of 2D arrays of chars

I have a program which reads in a pair of integers from a file and stores them in a Point class which I have created. The first integer is the x coordinate and the second is the y coordinate on each line of the file. All valid points have x-coordinates in the range [0, 40] and y-coordinates in the range [1, 20].
The input file contains data like this:
I performed some validation checks so that my program ignores invalid or out-of-range data.
I then have to plot a regression line on top of those points. I have to use 2D arrays of chars, and would prefer to do it this way instead of using the Point2D class of Java or some other graphical classes.
"X"s are used to represent the points, "-"s the regression line segments, and "*"s where a line segment and a point are located at the same spot.
The formula used for the regression line is the standard least-squares one: slope = (xySum - n * xMean * yMean) / (xSqSum - n * xMean * xMean), with the line passing through (xMean, yMean).
# Edit #
Below is the code snippet that I got from @sprinter:
initializeArray(charArray);
int xySum = 0;
int xSqSum = 0;
int xSum = 0;
int ySum = 0;
for (Point points : point) {
    xySum += points.getX() * points.getY();
    xSqSum += points.getX() * points.getX();
    xSum += points.getX();
    ySum += points.getY();
}
int xMean = xSum / count;
int yMean = ySum / count;
int n = point.size();
int slope = (xySum - n * xMean * yMean) / (xSqSum - n * xMean * xMean);
for (Point points : point) {
    charArray[points.getX()][points.getY()] = 'X';
}
// plot the regression line
for (int x = 0; x < charArray.length; x++) {
    int y = yMean + slope * (x - xMean); // calculate regression value
    charArray[x][y] = charArray[x][y] == 'X' ? '*' : '-';
}
This is my program's output after I ran sprinter's code:
whereas I want output like this:
Also, this is how I am initializing the charArray:
public static void initializeArray(char[][] charArray) {
    for (int k = 0; k < charArray.length; k++) {
        for (int d = 0; d < charArray[k].length; d++) {
            charArray[k][d] = ' ';
        }
    }
}
This is the new output:
I'm finding it hard to understand what the fillArray function is supposed to do. You could have multiple 'y' values for each 'x' value in your list of points, so I assume you are calling it once for each point. But the regression line has lots of 'x' values that aren't in the list of points, which means you would have to call it once for each regression point. You also don't need to return the array after filling in a value.
Your slope calculation doesn't seem to match the formula at all. This would make more sense to me:
float xySum = 0;
float xSqSum = 0;
float xSum = 0;
float ySum = 0;
for (Point point : points) {
    xySum += point.x * point.y;
    xSqSum += point.x * point.x;
    xSum += point.x;
    ySum += point.y;
}
float xMean = xSum / count;
float yMean = ySum / count;
float n = points.size();
float slope = (xySum - n * xMean * yMean) / (xSqSum - n * xMean * xMean);
I suspect you would be much better off plotting all the points first, then plotting the regression line.
List<Point> points = ...;
// first plot the points
for (Point point : points) {
    array[point.x][point.y] = 'X';
}
// now plot the regression line
for (int x = 0; x < 40; x++) {
    int y = Math.round(yMean + slope * (x - xMean));
    array[x][y] = array[x][y] == 'X' ? '*' : '-';
}
By the way, if you are familiar with Java 8 streams then you could use:
double n = points.size();
double xySum = points.stream().mapToDouble(p -> p.x * p.y).sum();
double xSqSum = points.stream().mapToDouble(p -> p.x * p.x).sum();
double xMean = points.stream().mapToDouble(p -> p.x).sum() / n;
double yMean = points.stream().mapToDouble(p -> p.y).sum() / n;
Finally, your x dimension is first and your y dimension second, so to print you need to iterate through y first, not x:
for (int y = 0; y < 20; y++) {
    for (int x = 0; x < 40; x++) {
        System.out.print(array[x][20 - y - 1]);
    }
    System.out.println();
}

Divide and Conquer Closest Pair Algorithm

I'm trying to create an algorithm that returns the closest pair from randomly generated points. I have finished the algorithm; however, the divide and conquer method is not much faster than the brute-force method. What can I do to optimize the code so that it runs in O(n log n) time?
import java.util.*;
import java.lang.*;
import static java.lang.Math.min;
import static java.lang.StrictMath.abs;

public class closestPair {
    private static Random randomGenerator; // for random numbers

    public static class Point implements Comparable<Point> {
        public long x, y;

        // Constructor
        public Point(long x, long y) {
            this.x = x;
            this.y = y;
        }

        public int compareTo(Point p) {
            // compare this and p and there are three results: >0, ==0, or <0
            if (this.x == p.x) {
                if (this.y == p.y)
                    return 0;
                else
                    return (this.y > p.y) ? 1 : -1;
            }
            else
                return (this.x > p.x) ? 1 : -1;
        }

        public String toString() {
            return " (" + Long.toString(this.x) + "," + Long.toString(this.y) + ")";
        }

        public double distance(Point p) {
            long dx = (this.x - p.x);
            long dy = (this.y - p.y);
            return Math.sqrt(dx * dx + dy * dy);
        }
    }

    public static Point[] plane;
    public static Point[] T;
    public static Point[] Y;
    public static int N; // number of points in the plane

    public static void main(String[] args) {
        // Read in the number of points
        Scanner scan = new Scanner(System.in);
        try {
            System.out.println("How many points in your plane? ");
            N = scan.nextInt();
        }
        catch (Exception ex) {
            ex.printStackTrace();
        }
        scan.close();
        // Create plane of N points.
        plane = new Point[N];
        Y = new Point[N];
        T = new Point[N];
        randomGenerator = new Random();
        for (int i = 0; i < N; ++i) {
            long x = randomGenerator.nextInt(N << 6);
            long y = randomGenerator.nextInt(N << 6);
            plane[i] = new Point(x, y);
        }
        Arrays.sort(plane); // sort points according to compareTo.
        for (int i = 1; i < N; ++i) // make all x's distinct.
            if (plane[i - 1].x >= plane[i].x) plane[i].x = plane[i - 1].x + 1;
        //for (int i = 1; i < N; i++)
        //    if (plane[i-1].y >= plane[i].y) plane[i].y = plane[i-1].y + 1;
        System.out.println(N + " points are randomly created.");
        System.out.println("The first two points are" + plane[0] + " and" + plane[1]);
        System.out.println("The distance of the first two points is " + plane[0].distance(plane[1]));
        long start = System.currentTimeMillis();
        // Compute the minimal distance of any pair of points by exhaustive search.
        double min1 = minDisSimple();
        long end = System.currentTimeMillis();
        System.out.println("The distance of the two closest points by minDisSimple is " + min1);
        System.out.println("The running time for minDisSimple is " + (end - start) + " ms");
        // Compute the minimal distance of any pair of points by divide-and-conquer
        long start1 = System.currentTimeMillis();
        double min2 = minDisDivideConquer(0, N - 1);
        long end1 = System.currentTimeMillis();
        System.out.println("The distance of the two closest points by minDisDivideConquer is " + min2);
        System.out.println("The running time for minDisDivideConquer is " + (end1 - start1) + " ms");
    }

    static double minDisSimple() {
        // A straightforward method for computing the distance
        // of the two closest points in plane[0..N-1].
        double midDis = Double.POSITIVE_INFINITY;
        for (int i = 0; i < N - 1; i++) {
            for (int j = i + 1; j < N; j++) {
                if (plane[i].distance(plane[j]) < midDis) {
                    midDis = plane[i].distance(plane[j]);
                }
            }
        }
        return midDis;
    }

    static void exchange(int i, int j) {
        Point x = plane[i];
        plane[i] = plane[j];
        plane[j] = x;
    }

    static double minDisDivideConquer(int low, int high) {
        // Initialize necessary values
        double minIntermediate;
        double minmin;
        double minDis;
        if (high == low + 1) { // two points
            if (plane[low].y > plane[high].y) exchange(low, high);
            return plane[low].distance(plane[high]);
        }
        else if (high == low + 2) { // three points
            // sort these points by y-coordinate
            if (plane[low].y > plane[high].y) exchange(low, high);
            if (plane[low].y > plane[low + 1].y) exchange(low, low + 1);
            else if (plane[low + 1].y > plane[high].y) exchange(low + 1, high);
            // compute pairwise distances
            double d1 = plane[low].distance(plane[high]);
            double d2 = plane[low].distance(plane[low + 1]);
            double d3 = plane[low + 1].distance(plane[high]);
            return ((d1 < d2) ? ((d1 < d3) ? d1 : d3) : (d2 < d3) ? d2 : d3); // return min(d1, d2, d3)
        } else { // 4 or more points: divide and conquer
            int mid = (high + low) / 2;
            double lowerPartMin = minDisDivideConquer(low, mid);
            double upperPartMin = minDisDivideConquer(mid + 1, high);
            minIntermediate = min(lowerPartMin, upperPartMin);
            // collect the points close enough in x to the dividing line
            int k = 0;
            double x0 = plane[mid].x;
            for (int i = 1; i < N; i++) {
                if (abs(plane[i].x - x0) <= minIntermediate) {
                    k++;
                    T[k] = plane[i];
                }
            }
            minmin = 2 * minIntermediate;
            for (int i = 1; i < k - 1; i++) {
                for (int j = i + 1; j < min(i + 7, k); j++) {
                    double distance0 = abs(T[i].distance(T[j]));
                    if (distance0 < minmin) {
                        minmin = distance0;
                    }
                }
            }
            minDis = min(minmin, minIntermediate);
        }
        return minDis;
    }
}
Use the following version of minDisSimple; caching the distance in a temporary variable avoids computing it twice, so you get more performance.
static double minDisSimple() {
    // A straightforward method for computing the distance
    // of the two closest points in plane[0..N-1].
    double midDis = Double.POSITIVE_INFINITY;
    double temp;
    for (int i = 0; i < N - 1; i++) {
        for (int j = i + 1; j < N; j++) {
            temp = plane[i].distance(plane[j]);
            if (temp < midDis) {
                midDis = temp;
            }
        }
    }
    return midDis;
}
Performance-wise, the simple method is good for a small number of points, but for a larger number of points divide and conquer wins. Try point counts of 10, 100, 1000, 10000, 100000 and 1000000.
One critical aspect of minDisDivideConquer() is that the loop that constructs the auxiliary array T iterates through all N points. Since there are O(N) recursive calls in total, making this pass over all N points every time leads to a complexity of O(N^2), equivalent to that of the simple algorithm.
The loop should only consider the points with indices between low and high. Furthermore, it could be split into two separate loops that start from mid (one going backward, one forward) and break as soon as the checked x-distance is already too large, as in the sketch below.
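Here is a minimal sketch of that fix, reusing the names from the code above; note that, unlike the original, it fills T starting at index 0:

// Build the strip from the sorted range [low..high] only, walking
// outward from mid and stopping once points are too far away in x.
int k = 0;
double x0 = plane[mid].x;
for (int i = mid; i >= low; i--) { // backward from mid
    if (abs(plane[i].x - x0) > minIntermediate) break;
    T[k++] = plane[i];
}
for (int i = mid + 1; i <= high; i++) { // forward from mid
    if (abs(plane[i].x - x0) > minIntermediate) break;
    T[k++] = plane[i];
}
// T[0..k-1] now holds only the candidates near the dividing line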
Another possible improvement for the minDisDivideConquer() method, in the "4 or more points" situation, is to avoid looking at pairs that were already considered in the recursive calls.
If my understanding is correct, the array T contains those points that are close enough on the x axis to the mid point that a pair of points in T might produce a distance smaller than those found in the individual half sets.
However, it is not necessary to look at pairs of points that are both before mid, or both after mid (since those pairs were already considered in the recursive calls).
Thus, a possible optimization is to construct two lists T_left and T_right (instead of T) and check distances only between pairs of points such that one is to the left of mid and the other to the right.
This way, instead of computing |T| * (|T| - 1) / 2 distances, we would only look at |T_left| * |T_right| pairs, with |T_left| + |T_right| = |T|. This value is at most (|T| / 2) * (|T| / 2) = |T|^2 / 4, i.e. around 2 times fewer distances than before (that is the worst case; the actual number of pairs can be much smaller, even zero).
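A minimal sketch of that idea, again reusing the surrounding names (tLeft and tRight here stand for the hypothetical T_left and T_right):

// Pair up only points that straddle the dividing line.
List<Point> tLeft = new ArrayList<>();
List<Point> tRight = new ArrayList<>();
for (int i = low; i <= mid; i++) {
    if (abs(plane[i].x - x0) <= minIntermediate) tLeft.add(plane[i]);
}
for (int i = mid + 1; i <= high; i++) {
    if (abs(plane[i].x - x0) <= minIntermediate) tRight.add(plane[i]);
}
double best = minIntermediate;
for (Point p : tLeft) {
    for (Point q : tRight) {
        double d = p.distance(q);
        if (d < best) best = d;
    }
}
// best now also accounts for cross-boundary pairs; the usual
// sort-by-y / 7-neighbour bound can still be applied on top of this.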

Java Backpropagation Algorithm is very slow

I have a big problem. I'm trying to create a neural network and want to train it with a backpropagation algorithm. I found this tutorial here http://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/ and tried to recreate it in Java. When I use the training data he uses, I get the same results as him.
Without backpropagation my TotalError is nearly the same as his, and when I run the backpropagation 10,000 times like him, I get nearly the same error. But he uses 2 input neurons, 2 hidden neurons and 2 outputs, whereas I'd like to use this neural network for OCR, so I definitely need more neurons. With, for example, 49 input neurons, 49 hidden neurons and 2 output neurons, it takes very long to change the weights enough to get a small error (I believe it takes forever...). I have a learning rate of 0.5. In the constructor of my network, I generate the neurons and give them the same training data as in the tutorial; for testing with more neurons, I gave them random weights, inputs and targets. So can't I use this approach for many neurons, does it just take very long, or is something wrong with my code? Should I increase the learning rate, the bias or the start weights?
Hopefully you can help me.
package de.Marcel.NeuralNetwork;

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.Random;

public class Network {
    private ArrayList<Neuron> inputUnit, hiddenUnit, outputUnit;
    private double[] inHiWeigth, hiOutWeigth;
    private double hiddenBias, outputBias;
    private double learningRate;

    public Network(double learningRate) {
        this.inputUnit = new ArrayList<Neuron>();
        this.hiddenUnit = new ArrayList<Neuron>();
        this.outputUnit = new ArrayList<Neuron>();
        this.learningRate = learningRate;
        generateNeurons(2, 2, 2);
        calculateTotalNetInputForHiddenUnit();
        calculateTotalNetInputForOutputUnit();
    }

    public double calcuteLateTotalError() {
        double e = 0;
        for (Neuron n : outputUnit) {
            e += 0.5 * Math.pow(Math.max(n.getTarget(), n.getOutput()) - Math.min(n.getTarget(), n.getOutput()), 2.0);
        }
        return e;
    }

    private void generateNeurons(int input, int hidden, int output) {
        // generate inputNeurons
        for (int i = 0; i < input; i++) {
            Neuron neuron = new Neuron();
            // for testing give each neuron an input
            if (i == 0) {
                neuron.setInput(0.05d);
            } else if (i == 1) {
                neuron.setOutput(0.10d);
            }
            inputUnit.add(neuron);
        }
        // generate hiddenNeurons
        for (int i = 0; i < hidden; i++) {
            Neuron neuron = new Neuron();
            hiddenUnit.add(neuron);
        }
        // generate outputNeurons
        for (int i = 0; i < output; i++) {
            Neuron neuron = new Neuron();
            if (i == 0) {
                neuron.setTarget(0.01d);
            } else if (i == 1) {
                neuron.setTarget(0.99d);
            }
            outputUnit.add(neuron);
        }
        // generate bias
        hiddenBias = 0.35;
        outputBias = 0.6;
        // generate connections
        double startWeigth = 0.15;
        // generate inHiWeigths
        inHiWeigth = new double[inputUnit.size() * hiddenUnit.size()];
        for (int i = 0; i < inputUnit.size() * hiddenUnit.size(); i += hiddenUnit.size()) {
            for (int x = 0; x < hiddenUnit.size(); x++) {
                int z = i + x;
                inHiWeigth[z] = round(startWeigth, 2, BigDecimal.ROUND_HALF_UP);
                startWeigth += 0.05;
            }
        }
        // generate hiOutWeigths
        hiOutWeigth = new double[hiddenUnit.size() * outputUnit.size()];
        startWeigth += 0.05;
        for (int i = 0; i < hiddenUnit.size() * outputUnit.size(); i += outputUnit.size()) {
            for (int x = 0; x < outputUnit.size(); x++) {
                int z = i + x;
                hiOutWeigth[z] = round(startWeigth, 2, BigDecimal.ROUND_HALF_UP);
                startWeigth += 0.05;
            }
        }
    }

    private double round(double unrounded, int precision, int roundingMode) {
        BigDecimal bd = new BigDecimal(unrounded);
        BigDecimal rounded = bd.setScale(precision, roundingMode);
        return rounded.doubleValue();
    }

    private void calculateTotalNetInputForHiddenUnit() {
        // calculate total net input for each hidden neuron
        for (int s = 0; s < hiddenUnit.size(); s++) {
            double net = 0;
            int x = (inHiWeigth.length / inputUnit.size());
            // calculate toAdd
            for (int i = 0; i < x; i++) {
                int v = i + s * x;
                double weigth = inHiWeigth[v];
                double toAdd = weigth * inputUnit.get(i).getInput();
                net += toAdd;
            }
            // add bias
            net += hiddenBias * 1;
            net = net * -1;
            double output = (1.0 / (1.0 + (double) Math.exp(net)));
            hiddenUnit.get(s).setOutput(output);
        }
    }

    private void calculateTotalNetInputForOutputUnit() {
        // calculate total net input for each output neuron
        for (int s = 0; s < outputUnit.size(); s++) {
            double net = 0;
            int x = (hiOutWeigth.length / hiddenUnit.size());
            // calculate toAdd
            for (int i = 0; i < x; i++) {
                int v = i + s * x;
                double weigth = hiOutWeigth[v];
                double outputOfH = hiddenUnit.get(s).getOutput();
                double toAdd = weigth * outputOfH;
                net += toAdd;
            }
            // add bias
            net += outputBias * 1;
            net = net * -1;
            double output = (double) (1.0 / (1.0 + Math.exp(net)));
            outputUnit.get(s).setOutput(output);
        }
    }

    private void backPropagate() {
        // calculate outputNeuron weight changes
        double[] oldWeigthsHiOut = hiOutWeigth;
        double[] newWeights = new double[hiOutWeigth.length];
        for (int i = 0; i < hiddenUnit.size(); i += 1) {
            double together = 0;
            double[] newOuts = new double[hiddenUnit.size()];
            for (int x = 0; x < outputUnit.size(); x++) {
                int z = x * hiddenUnit.size() + i;
                double weigth = oldWeigthsHiOut[z];
                double target = outputUnit.get(x).getTarget();
                double output = outputUnit.get(x).getOutput();
                double totalErrorChangeRespectOutput = -(target - output);
                double partialDerivativeLogisticFunction = output * (1 - output);
                double totalNetInputChangeWithRespect = hiddenUnit.get(x).getOutput();
                double puttedAllTogether = totalErrorChangeRespectOutput * partialDerivativeLogisticFunction
                        * totalNetInputChangeWithRespect;
                double weigthChange = weigth - learningRate * puttedAllTogether;
                // set new weight
                newWeights[z] = weigthChange;
                together += (totalErrorChangeRespectOutput * partialDerivativeLogisticFunction * weigth);
                double out = hiddenUnit.get(x).getOutput();
                newOuts[x] = out * (1.0 - out);
            }
            for (int t = 0; t < newOuts.length; t++) {
                inHiWeigth[t + i] = (double) (inHiWeigth[t + i] - learningRate * (newOuts[t] * together * inputUnit.get(t).getInput()));
            }
            hiOutWeigth = newWeights;
        }
    }
}
And my Neuron Class:
package de.Marcel.NeuralNetwork;

public class Neuron {
    private double input, output;
    private double target;

    public Neuron() {
    }

    public void setTarget(double target) {
        this.target = target;
    }

    public void setInput(double input) {
        this.input = input;
    }

    public void setOutput(double output) {
        this.output = output;
    }

    public double getInput() {
        return input;
    }

    public double getOutput() {
        return output;
    }

    public double getTarget() {
        return target;
    }
}
Think about it: you have 10,000 propagations through a 49->49->2 network. Between the input layer and the hidden layer alone there are 49 * 49 links to propagate through, so parts of your code are executed about 24 million times (10,000 * 49 * 49). That is going to take time. You could try 100 propagations and see how long that takes, just to get an idea; a rough timing sketch follows below.
There are a few things that can be done to increase performance, like using a plain array instead of an ArrayList, but that is a better topic for the Code Review site. Also, don't expect drastic improvements from it.
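As a rough illustration of the "try 100 propagations" suggestion, here is a minimal timing sketch; the harness is my own, and it assumes backPropagate() is made accessible for the test:

// Hypothetical timing harness: run 100 backpropagation passes and extrapolate.
public static void main(String[] args) {
    Network network = new Network(0.5); // learning rate from the question
    long start = System.nanoTime();
    for (int i = 0; i < 100; i++) {
        network.backPropagate(); // assumes the method is exposed for the test
    }
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    System.out.println("100 passes took " + elapsedMs + " ms");
    System.out.println("Estimated time for 10,000 passes: " + (elapsedMs * 100) + " ms");
}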
Your backpropagation code has a complexity of O(h*o + h^2) * 10000, where h is the number of hidden neurons and o is the number of output neurons. Here's why.
You have a loop that executes for all of your hidden neurons...
for (int i = 0; i < hiddenUnit.size(); i += 1) {
... containing another loop that executes for all the output neurons...
for (int x = 0; x < outputUnit.size(); x++) {
... and an additional inner loop that executes again for all the hidden neurons...
double[] newOuts = new double[hiddenUnit.size()];
for (int t = 0; t < newOuts.length; t++) {
... and you execute all of that ten thousand times. Add on top of this O(i + h + o) [initial object creation] + O(i*h + o*h) [initial weights] + O(h*i) [calculate net inputs] + O(h*o) [calculate net outputs].
No wonder it's taking forever; your code is littered with nested loops. If you want it to go faster, factor these out - for example, combine object creation and initialization - or reduce the number of neurons. But significantly cutting the number of back propagation calls is the best way to make this run faster.

Big buffer causes Android to get out of memory

I've been stuck on this for 3 days. I have a DICOM file that I need to parse with a buffered reader, which returns some information from the header of the document and the raw data for the image. After that, I apply a LUT to the raw data to convert the bytes into grayscale, and then hand the result to Bitmap.createBitmap. This was fine for small images, but now I have to load a 13 MB image: not only does it take ages to open (about 20 seconds), but while applying the LUT Android throws the error "Bitmap 29052480-byte external allocation too large for this process. java.lang.OutOfMemoryError: bitmap size exceeds VM budget". I know there are a lot of threads about this error, but my case is a little unusual, as I only want to open a single image (so it's not about stacking many bitmaps). Here is some code:
refreshBmp:
private void refreshBmp(int windowWidth, int windowCentre) {
    int[] colorArray = process.transformBuffer(myDicomObject.getRawData(),
            myDicomObject.isInverted(), windowWidth, windowCentre,
            myDicomObject.getnBits());
    Bitmap bmp = Bitmap.createBitmap(colorArray,
            myDicomObject.getColumns(), myDicomObject.getRows(),
            Bitmap.Config.ARGB_8888);
    dicomImageView.setImageBitmap(bmp);
}
which calls my LUT:
public int[] transformBuffer(int[] rawData, boolean inverted,
        int windowWidth, int windowCenter, int nBits) {
    System.gc();
    int min = windowCenter - (windowWidth / 2);
    int max = windowCenter + (windowWidth / 2);
    int intGrayscale = (int) Math.pow(2, nBits);
    int intDivisionFactor = nBits - 8;
    double dmin = (double) min;
    double dmax = (double) max;
    double doubleGrayScale = (double) intGrayscale;
    int rawDataLength = rawData.length;
    int[] resultBuffer = new int[rawDataLength];
    lutBuffer = new int[intGrayscale];
    if (inverted) {
        for (int i = 0; i < min; i++) {
            lutBuffer[i] = 255;
        }
        for (int i = min; i < max; i++) {
            double value = doubleGrayScale * ((i - dmin + 1) / (dmax - dmin + 1));
            lutBuffer[i] = (int) (doubleGrayScale - value) >> intDivisionFactor;
        }
        for (int i = max; i < intGrayscale; i++) {
            lutBuffer[i] = 0;
        }
    } else {
        for (int i = 0; i < min; i++) {
            lutBuffer[i] = 0;
        }
        for (int i = min; i < max; i++) {
            double value = ((i - dmin + 1) / (dmax - dmin + 1));
            lutBuffer[i] = (int) (value) << intDivisionFactor;
        }
        for (int i = max; i < intGrayscale; i++) {
            lutBuffer[i] = 255;
        }
    }
    for (int i = 0; i < rawDataLength; i++) {
        int colorValue = lutBuffer[rawData[i]];
        resultBuffer[i] = Color.argb(255, colorValue, colorValue, colorValue);
    }
    System.out.println(resultBuffer.length);
    return resultBuffer;
}
Hopefully someone knows a way to save some memory, especially in the LUT method.
You should try modifying rawData directly, if you don't need it unchanged afterwards. Something like this:
public void transformBuffer(int[] rawData, boolean inverted,
        int windowWidth, int windowCenter, int nBits) {
    /*
     * change rawData in place instead of filling resultBuffer
     */
}

public void methodThatCallsTransformBuffer(...) {
    transformBuffer(data, inverted, ...);
    // now data is transformed
}
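For what it's worth, a minimal sketch of what the in-place version might look like (buildLut is a hypothetical helper wrapping the LUT construction from the original method):

// Sketch: reuse rawData as the output buffer instead of allocating resultBuffer.
public void transformBufferInPlace(int[] rawData, boolean inverted,
        int windowWidth, int windowCenter, int nBits) {
    int[] lut = buildLut(inverted, windowWidth, windowCenter, nBits); // hypothetical helper
    for (int i = 0; i < rawData.length; i++) {
        int colorValue = lut[rawData[i]];
        rawData[i] = Color.argb(255, colorValue, colorValue, colorValue); // overwrite in place
    }
}

This roughly halves the peak allocation of the transform step, since the result no longer lives alongside the raw buffer.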
Also, instead of using lutBuffer, you could compute colorValue for every pixel. It would be a bit slower, but you'll save some memory. This replaces the final loop; note it indexes with rawData[i] and mirrors the two LUT branches above:

// inside the loop over rawData:
int colorValue = 0;
if (inverted) {
    if (rawData[i] < min) {
        colorValue = 255;
    } else if (rawData[i] < max) {
        double value = doubleGrayScale * ((rawData[i] - dmin + 1) / (dmax - dmin + 1));
        colorValue = (int) (doubleGrayScale - value) >> intDivisionFactor;
    }
} else {
    if (rawData[i] >= min && rawData[i] < max) {
        double value = ((rawData[i] - dmin + 1) / (dmax - dmin + 1));
        colorValue = (int) (value) << intDivisionFactor;
    } else if (rawData[i] >= max) {
        colorValue = 255;
    }
}
I decided to answer myself, as I finally found a solution. The only way to avoid this OutOfMemory error with such big bitmap data is post-binning (I'm not sure that is the actual name). It consists of taking every alternate pixel: you read one pixel, skip one, read another, skip one, and keep doing this until you reach the end of the line; then you skip a line and continue the process to the end of the data buffer. Since this halves the resolution in each dimension, the resulting image needs only a quarter of the memory.
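A minimal sketch of that subsampling, assuming the raw data is a row-major buffer (the 2x factor and the method name are illustrative):

// Keep every other pixel on every other row (2x post-binning).
public static int[] binByTwo(int[] raw, int width, int height) {
    int outW = width / 2;
    int outH = height / 2;
    int[] out = new int[outW * outH];
    for (int y = 0; y < outH; y++) {
        for (int x = 0; x < outW; x++) {
            // read one pixel, skip one; skip every other row
            out[y * outW + x] = raw[(2 * y) * width + (2 * x)];
        }
    }
    return out; // a quarter of the original allocation
}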
I hope it could help someone else.
