I know the API long getUidRxBytes(int uid), but this interface cannot give me the network speed of each process. Does anyone know a simple way to get the speed per process?
(My English is not very good.)
Basically, to measure the speed of anything you need two quantities: time and amount.
Here I assume you are calculating bytes/s, so you need to measure how many bytes are transferred every second.
In most cases you will need an algorithm such as:
totalTimeSpent = 0
bytesSent = 0
do
    beforeSendingTime = getCurrentMillisecond
    send n bytes to destination via network
    bytesSent = bytesSent + n
    afterSendingTime = getCurrentMillisecond
    timeSpent = afterSendingTime - beforeSendingTime
    totalTimeSpent = totalTimeSpent + timeSpent
    say: currentSpeed = n / timeSpent
    say: averageSpeed = bytesSent / totalTimeSpent
loop until no data remaining to send
Hope it helps; you need to implement that algorithm in your own development language.
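For example, a minimal Java sketch of that algorithm, assuming the data to send is already in a byte[] named data and out is whatever OutputStream the process writes to (both names are placeholders, not something from the question):

long totalTimeSpent = 0;
long bytesSent = 0;
int chunk = 4096; // arbitrary chunk size
for (int offset = 0; offset < data.length; offset += chunk) {
    int n = Math.min(chunk, data.length - offset);
    long before = System.currentTimeMillis();
    out.write(data, offset, n); // send n bytes to the destination
    long timeSpent = System.currentTimeMillis() - before;
    bytesSent += n;
    totalTimeSpent += timeSpent;
    if (timeSpent > 0)
        System.out.println("current speed: " + (n / timeSpent) + " bytes/ms");
    if (totalTimeSpent > 0)
        System.out.println("average speed: " + (bytesSent / totalTimeSpent) + " bytes/ms");
}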
I'm working on a project using an oximeter. I want to smooth the signal so I can use it to calculate the heart rate. I'm gathering raw data from the microphone and putting it into a new array, let's say sData[].
The signal is really noisy and jumps all over the plot as expected, so I tried smoothing it using a moving average. My main code looks something like this:
writeAudioDataToFile();
for (int a = 0; a < sData.length; a++) {
    tempAvg += Math.abs(sData[a]);
    if (a % numberOfSamplesAveraged == 0) { // Average over numberOfSamplesAveraged samples
        if (myIndex > SAMPLE_SIZE - 1) {
            myIndex = 0;
        }
        tempAvg = tempAvg / numberOfSamplesAveraged;
        if (tempAvg > 500) {
            newDataArray[myIndex] = 500;
        } else {
            newDataArray[myIndex] = tempAvg;
        } // This is supposed to clip the high peaks.
        Log.d("isRecording - Average from " + numberOfSamplesAveraged + " numbers.", "newDataArray[" + myIndex + "] = " + newDataArray[myIndex]);
        tempAvg = 0;
        myIndex++;
    }
}
notifier.notifyObservers();
Log.d("Notifier", "Notifier Fired!");
Thread.sleep(20); // Still experimenting with this value
It looks messy, but the plot (I'm using AndroidPlot, by the way) looks good. It is still so inaccurate, though, that I can't calculate the heart rate from it: it has too much "bounce" in the "high" state. I found on the internet that some kind of filter (maybe an IIR filter) would do the job. So I just want to ask: how can I achieve a nice smooth chart? Is an IIR filter the way to go? Is there any free applet/library to smooth it out?
This is my first question here, so I'm really sorry if it is badly written.
If you need any more information to help me, just ask.
Here is a picture of how my chart looks now (I can't post images since I'm new here):
http://oi62.tinypic.com/2uf7yoy.jpg
This is a lucky one, though; I need smoother output.
The noise that occurs in the measurement has high frequency. You should filter your signal, that is, retain the low-frequency part and suppress the high-frequency part. You can do this with a low-pass filter; it could be, for example, a first-order inertial model. I suggest a pass-band up to ~10 kHz, since people hear sound from roughly 20 Hz to 20 kHz. An appropriate sample time is then 0.0001 s (0.1 ms). The discrete model has the following equation:
y[k] = 0.9048*y[k-1] + 0.09516*u[k-1],
where u is the measured vector (directly from the microphone, the input of our filter),
and y is the vector you want to analyze (so the output of our filter).
As you can see, you can start the calculation at sample number 1 and simply assign 0 to sample number 0. After that, you can plot the y vector.
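For example, a minimal Java sketch of that difference equation, assuming the raw samples are in the sData[] array from the question above:

// First-order low-pass filter: y[k] = 0.9048*y[k-1] + 0.09516*u[k-1]
double[] y = new double[sData.length];
y[0] = 0; // sample number 0 is simply assigned 0
for (int k = 1; k < sData.length; k++) {
    y[k] = 0.9048 * y[k - 1] + 0.09516 * sData[k - 1];
}
// y now holds the filtered signal and can be plotted instead of the raw data.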
I would like to achieve 0.5-1 million remote function calls per second. Let's assume we have one Central computer where computation starts, and one Worker computer which does the computation. There will be many Worker computers in real configuration.
Let's assume for a moment that our task is to calculate a sum of [(random int from 0 to MAX_VAL)*2], PROBLEM_SIZE times
The very naive prototype is
Worker:
//The real function takes 0.070ms to compute.
int compute(int input) {
    return input * 2;
}

void go() {
    try {
        ServerSocket ss = new ServerSocket(socketNum);
        Socket s = ss.accept();
        System.out.println("Listening for " + socketNum);
        DataInput di = new DataInputStream(s.getInputStream());
        OutputStream os = s.getOutputStream();
        byte[] arr = new byte[4];
        ByteBuffer wrap = ByteBuffer.wrap(arr);
        for (; ; ) {
            wrap.clear();
            di.readFully(arr);
            int value = wrap.getInt();
            int output = compute(value);
            wrap.clear();
            byte[] bytes = wrap.putInt(output).array();
            os.write(bytes);
        }
    } catch (IOException e) {
        System.err.println("Exception at " + socketNum);
        e.printStackTrace();
    }
}
Central:
void go() {
    try {
        Socket s = new Socket(ip, socketNum);
        s.setSoTimeout(2000);
        OutputStream os = s.getOutputStream();
        DataInput di = new DataInputStream(s.getInputStream());
        System.out.println("Central socket starting for " + socketNum);
        Random r = new Random();
        byte[] buf = new byte[4];
        ByteBuffer wrap = ByteBuffer.wrap(buf);
        long start = System.currentTimeMillis();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            wrap.clear();
            int value = r.nextInt(10000);
            os.write(wrap.putInt(value).array());
            di.readFully(buf);
            wrap.clear();
            int answer = wrap.getInt();
            sum += answer;
        }
        System.out.println(n + " calls in " + (System.currentTimeMillis() - start) + " ms");
    } catch (SocketTimeoutException ste) {
        System.err.println("Socket timeout at " + socketNum);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
If the ping is 0.150 ms and we run a 1-threaded Worker and a 1-threaded Central, each iteration will take ~0.150 ms. To improve performance, I run N threads on both the Worker and the Central; the n-th thread listens on port 2000+n. After each thread stops, we sum up the results.
Benchmarks
First, I ran the program above on my fellow's school network. Second, I ran it on two Amazon EC2 cluster instances. The gap in the results was very large.
CHUNK_SIZE = 100_000 in all runs.
Fellow's network:
I think 3 years ago it was the top configuration available (Xeon E5645). I believe it is heavily optimized for parallel computations and has a simple LAN topology, since it has only 20 machines.
OS: Ubuntu
Average ping: ~0.165ms
N=1 total time=6 seconds
N=10 total time=9 seconds
N=20 total time=11 seconds
N=32 total time=14 seconds
N=100 total time=21 seconds
N=500 total time=54 seconds
Amazon network:
I ran the program on two Cluster Compute Eight Extra Large Instance (cc2.8xlarge) started in the same Placement Group.
OS: some Amazon flavor of Linux
Average ping: ~0.170ms.
The results were a bit disappointing:
N=1 total time=16 seconds
N=10 total time=36 seconds
N=20 total time=55 seconds
N=32 total time=82 seconds
N=100 total time=250 seconds
N=500 total time=1200 seconds
I ran each configuration 2-4 times; the results were similar, mostly within ±5%.
The Amazon N=1 result makes sense, since 0.170ms per function call = ~6000 calls per second = 100_000 calls per 16 seconds. The 6 seconds for my fellow's network is actually surprising.
I think the maximum number of TCP packets per second on modern networks is around 40-70k per second.
That matches N=100, time=250 seconds: N*CHUNK_SIZE / time = 100 * 100_000 packets / 250 sec = 10_000_000 packets / 250 sec = 40_000 packets/second.
The question is: how did my fellow's network/computer configuration manage to do so well, especially with high N values?
My guess: it is wasteful to put each 4-byte request and 4-byte response into an individual packet, since there is ~40 bytes of overhead. It would be wise to pool all these tiny requests for, say, 0.010ms, put them into one big packet, and then redistribute the requests to the corresponding sockets.
It is possible to implement pooling at the application level, but it seems that my fellow's network/OS is configured to do it.
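For illustration, a rough sketch of what application-level pooling could look like on the Central side, assuming the Worker is changed symmetrically to read and answer BATCH values per round trip (BATCH is an arbitrary value, not part of the code above):

final int BATCH = 256; // requests bundled into one write; assumes n is a multiple of BATCH
byte[] reqBuf = new byte[4 * BATCH];
byte[] respBuf = new byte[4 * BATCH];
ByteBuffer req = ByteBuffer.wrap(reqBuf);
ByteBuffer resp = ByteBuffer.wrap(respBuf);
for (int i = 0; i < n; i += BATCH) {
    req.clear();
    for (int j = 0; j < BATCH; j++) {
        req.putInt(r.nextInt(10000));
    }
    os.write(reqBuf);       // one large write instead of BATCH tiny ones
    di.readFully(respBuf);  // the Worker must answer all BATCH values at once
    resp.clear();
    for (int j = 0; j < BATCH; j++) {
        sum += resp.getInt();
    }
}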
Update: I've played with java.net.Socket.setTcpNoDelay(), it didn't change anything.
The ultimate goal:
I approximate an equation with millions of variables using a very large tree. Currently, a tree with 200_000 nodes fits in RAM, but I am interested in approximating an equation which requires a tree with millions of nodes; that would take a few terabytes of RAM. The basic idea of the algorithm is to take a random path from a node to a leaf and improve the values along it. Currently the program is 32-threaded, and each thread does 15000 iterations per second. I would like to move it to a cluster with the same number of iterations per second.
You may be looking to enable Nagle's algorithm: wikipedia entry.
Here's a link about disabling it that might be helpful: Disabling Nagle's Algorithm in Linux
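In Java this maps to Socket.setTcpNoDelay(): Nagle's algorithm is enabled by default, and setTcpNoDelay(true) is what turns it off, so a quick sanity check could be:

Socket s = new Socket(ip, socketNum);
s.setTcpNoDelay(false); // false = leave Nagle's algorithm enabled (the default)
System.out.println("TCP_NODELAY = " + s.getTcpNoDelay());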
I am trying to measure the performance of our service by putting the data into a HashMap, like:
"X calls came back in Y ms." Below is my code, which is very simple. It starts a timer before hitting the service and, after the response comes back, measures the elapsed time.
private static void serviceCall() {
    histogram = new HashMap<Long, Long>();
    keys = histogram.keySet();
    long total = 10;
    long runs = total;
    while (runs > 0) {
        long start_time = System.currentTimeMillis();
        // hitting the service
        result = restTemplate
                .getForObject("Some URL", String.class);
        long difference = (System.currentTimeMillis() - start_time);
        Long count = histogram.get(difference);
        if (count != null) {
            count++;
            histogram.put(Long.valueOf(difference), count);
        } else {
            histogram.put(Long.valueOf(difference), Long.valueOf(1L));
        }
        runs--;
    }
    for (Long key : keys) {
        Long value = histogram.get(key);
        System.out.println("SERVICE MEASUREMENT, HG data, " + key + ":" + value);
    }
}
Currently the output I am getting is something like this:
SERVICE MEASUREMENT, HG data, 166:1
SERVICE MEASUREMENT, HG data, 40:2
SERVICE MEASUREMENT, HG data, 41:4
SERVICE MEASUREMENT, HG data, 42:1
SERVICE MEASUREMENT, HG data, 43:1
SERVICE MEASUREMENT, HG data, 44:1
which means 1 call came back in 166 ms, 2 calls came back in 40 ms, and likewise for the other lines.
Problem statement:
What I am looking for now is ranges set up like this:
X number of calls came back in between 1 and 10 ms
Y number of calls came back in between 11 and 20 ms
Z number of calls came back in between 21 and 30 ms
P number of calls came back in between 31 and 40 ms
T number of calls came back in between 41 and 50 ms
....
....
I number of calls came back in more than 100 ms
I also need a way to configure the ranges, so that if I need to tweak them in the future, I can do so. How can I achieve this in my current program? Any suggestions would be of great help.
A histogram is a set of data arranged into "bins" of equal size. You should convert your time measurement to a bin and use that bin as the map key. This can be done simply by dividing your time value by the bin size. For example: time / 10L.
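For example, a small sketch of that idea with a configurable bin size and an overflow bucket (BIN_SIZE_MS and MAX_MS are made-up names, chosen roughly to match the ranges in the question; adjust them as needed):

static final long BIN_SIZE_MS = 10;  // width of each range
static final long MAX_MS = 100;      // everything above this goes into one bucket

static long toBin(long millis) {
    if (millis > MAX_MS) {
        return MAX_MS; // the "more than 100 ms" bucket
    }
    return (millis / BIN_SIZE_MS) * BIN_SIZE_MS; // 0, 10, 20, ...
}

Then, in serviceCall(), use toBin(difference) as the map key instead of the raw difference, and each key in the printout represents a whole range rather than a single millisecond value.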
I need to program an Arduino for a project, and I thought I'd add something fancy: a color-changing LED. The LED has a sort of cycle in which it changes colors, which takes about 40 seconds. However, the ultrasonic sensor that makes the LED light up keeps registering the whole time and tells the LED a couple of times a second to turn on again, so the LED never gets the time to change color and stays on its first color.
I have no idea how to fix this. I was trying to give the LED a delay or something, but apparently I did that wrong. The code so far is this:
//Pin which triggers ultrasonic sound.
const int pingPin = 13;
//Pin which delivers time to receive echo using pulseIn().
int inPin = 12;
//Range in cm which is considered safe to enter, anything
//coming within less than 5 cm triggers the red LED.
int safeZone = 10;
//LED pin numbers
int redLed = 3, greenLed = 5;
void setup() {
    //Initialize serial communication
    Serial.begin(9600);
    //Initializing the pin states
    pinMode(pingPin, OUTPUT);
    pinMode(redLed, OUTPUT);
    pinMode(greenLed, OUTPUT);
}
void loop()
{
    //Raw duration in microseconds; cm is that value
    //converted into a distance.
    long duration, cm;
    //Sending the signal, starting with LOW for a clean signal; the 2 stands for the reaction.
    digitalWrite(pingPin, LOW);
    delayMicroseconds(2);
    digitalWrite(pingPin, HIGH);
    //Setting up the input pin, and receiving the duration in
    //microseconds for the sound to bounce off the object in front.
    pinMode(inPin, INPUT);
    duration = pulseIn(inPin, HIGH); //Documentation for pulseIn():
    //http://www.arduino.cc/en/Reference/PulseIn
    //Convert the time into a distance
    cm = microsecondsToCentimeters(duration);
    //Printing the current readings to the serial display
    Serial.print(cm);
    Serial.print("cm");
    Serial.println();
    //If it is farther than 10 cm the red light goes off;
    //otherwise, within that distance, it goes on, mapped from 0 to 255.
    if (cm > 10)
    {
        analogWrite(redLed, 0);
    }
    else {
        analogWrite(redLed, map(cm, 0, 10, 255, 0));
    }
    if (cm > 5)
    {
        analogWrite(greenLed, 0);
    }
    else {
        analogWrite(greenLed, map(cm, 0, 5, 255, 0));
    }
    delay(100);
}
long microsecondsToCentimeters(long microseconds)
{
// The speed of sound is 340 m/s or 29 microseconds per centimeter.
// The ping travels out and back, so to find the distance of the
// object we take half of the distance travelled.
return microseconds / 29 / 2;
}
But I think it still needs some kind of delay. I'm not sure what the sensor I'm using is called, but it has two round transducers, one that sends and one that receives; it measures how long it takes to receive the sound back, and in my code I translate that into cm.
I hope you can help and understand what my problem is, since my knowledge of this language is very poor.
Set a timeout value for pulseIn. Otherwise the program gets stuck in the line duration = pulseIn(inPin, HIGH); and you don't get the chance to send out another ultrasonic pulse if the previous one did not result in an echo.
In this case, the maximum range is 10 cm (20 cm travel distance for the sound pulse) so the timeout value can be set accordingly (s is distance, v is velocity and t is time):
s = v * t => t = s / v = 2 * 0.1 m / 343.2 m/s = 582.8 µs
The speed of sound is assumed to be in dry air at 20 °C.
Allowing for the width of the outgoing pulse of 2 µs the total time would then be 584.8 µs.
Instead of
duration = pulseIn(inPin, HIGH);
use
duration = pulseIn(inPin, HIGH, 585);
Other notes:
The outgoing pulse is very short, intended to be 2 µs.
digitalWrite() is quite slow, on the order of 5 µs so the actual pulse may be longer than 2 µs. Even so, the ultrasonic transducer may not be able to start up in such a short time.
Even if the outgoing pulse is longer than you think it is, it is on the order of a single period (if the ultrasonic transducer operates at 100 kHz the period is 10 µs)
Try to experiment with longer ranges and longer outgoing pulses to be sure this is not the problem.
Use delay(ms) to delay for 40 ms as required. That will stall the Arduino for 40 ms before it gets to process any more data from the ultrasonic sensor.
Problem: move an object along a straight line at a constant speed in the Cartesian coordinate system (x and y only). The update rate is unstable. The movement speed must be close to exact, and the object must arrive very close to the destination. The line's source and destination may be anywhere.
Given: the source and destination coordinates (x0, y0, x1, y1), and a speed of arbitrary value.
An aside: there is an answer on SO regarding this, and it's good; however, it presumes that the total time spent traveling is given.
Here's what I've got:
x0 = 127;
y0 = 127;
x1 = 257;
y1 = 188;
speed = 127;
ostrich.x = x0; // plus some distance along the line
ostrich.y = y0; // plus some distance along the line
// An arbitrarily large value so that each iteration increments the distance a minute amount
SPEED_VAR = 1000;
xDistPerIteration = (x1 - x0) / SPEED_VAR;
yDistPerIteration = (y1 - y0) / SPEED_VAR;
distanceToTravel = Math.sqrt(Math.pow(x1 - x0, 2) + Math.pow(y1 - y0, 2)); // Pythagorean theorem
limitX = limitY = 0; // determines when to stop the while loop

// gets called 40-60 times per second
void update() {
    // Keep incrementing the ostrich's location
    while (limitX < speed && limitY < speed) {
        limitX += Math.abs(xDistPerIteration);
        limitY += Math.abs(yDistPerIteration);
        ostrich.x += xDistPerIteration;
        ostrich.y += yDistPerIteration;
    }
    distanceToTravel -= Math.sqrt(Math.pow(limitX, 2) + Math.pow(limitY, 2));
    if (distanceToTravel <= 0) {
        // ostrich arrived safely at the factory
    }
}
This code gets the job done; however, it takes up 18% of the running time (exclusive) of a CPU-intensive program. It's garbage, both programmatically and in terms of performance. Any ideas on what to do here?
An aside: There is an answer on SO regarding this, and it's good, however it presumes that total time spent traveling is given.
Basic physics to the rescue:
total time spent traveling = distance / speed
By the way, Math.hypot(limitX, limitY) is faster than Math.sqrt(Math.pow(limitX, 2) + Math.pow(limitY, 2)),
though really it's that while loop you should refactor out.
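For example, a rough sketch of that time-based approach, assuming update() can be passed the frame time in seconds (dtSeconds is a made-up parameter; x0, y0, x1, y1, speed and ostrich are from the question):

double dx = x1 - x0, dy = y1 - y0;
double distance = Math.hypot(dx, dy);  // length of the path
double totalTime = distance / speed;   // total time spent traveling
double elapsed = 0;

void update(double dtSeconds) {
    elapsed = Math.min(elapsed + dtSeconds, totalTime);
    double t = elapsed / totalTime;    // fraction of the path covered, 0..1
    ostrich.x = x0 + dx * t;
    ostrich.y = y0 + dy * t;
    // the ostrich has arrived once elapsed reaches totalTime
}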
One thing to improve:
There is no need to compute the square root in each call to the update function; you may use the squared distance instead.
Similarly, Math.abs(xDistPerIteration) and Math.abs(yDistPerIteration) do not change between calls to update, so you may cache those values and get rid of the calls to the absolute-value function in order to save a bit more computing time.
Update gets called 40-60 times per second, right? In other words, once per frame. So why is there a while loop inside it?
Also, doing sqrt once, and pow twice, per frame is unnecessary.
Just let d2 be the distance squared, and stop when limitX*limitX+limitY*limitY exceeds it.
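For example (d2 is computed once, before the movement starts; limitX and limitY are from the question's code):

// Computed once, up front: squared length of the path, no sqrt needed
double d2 = (x1 - x0) * (x1 - x0) + (y1 - y0) * (y1 - y0);

// Inside update(), compare squared values instead of taking a square root:
if (limitX * limitX + limitY * limitY >= d2) {
    // ostrich arrived safely at the factory
}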