Android GraphView Memory Usage For Large Data Set - java

I am relatively new to Android (to give context to my skill level).
Summary (question stated at the bottom): my app uses a lot of CPU, and I think it is because I'm plotting 3000 data points.
Context: I am making an app that takes bytes from a Bluetooth LE service
private void broadcastUpdate(final String action, final BluetoothGattCharacteristic characteristic) {
    final Intent intent = new Intent(action);
    // For all other profiles, writes the data formatted in HEX.
    final byte[] data = characteristic.getValue();
    Log.i(TAG, "data" + characteristic.getValue());
    if (data != null && data.length > 0) {
        final StringBuilder stringBuilder = new StringBuilder(data.length);
        for (byte byteChar : data)
            stringBuilder.append(String.format("%02X ", byteChar));
        Log.d(TAG, String.format("%s", new String(data)));
        // getting cut off when longer, need to push on new line, 0A
        intent.putExtra(EXTRA_DATA, String.format("%s", new String(data)));
    }
    sendBroadcast(intent);
}
In another activity, I append the data to the graph using datapoints. (displayData just displays the data I'm getting onto a textview).
else if (BluetoothLeService.ACTION_DATA_AVAILABLE.equals(action)) {
    // cnt starts at 0
    String data = intent.getStringExtra(mBluetoothLeService.EXTRA_DATA);
    double y = Double.parseDouble(data); // create type:double version of data
    if (cnt == 3000) { // if graph maxes out, regenerate new data
        cnt = 0;       // set the count back to zero
        clearUI();     // clear textView that displays string version
        series.resetData(new DataPoint[] { new DataPoint(cnt + 1, y) });
    } else { // if there is no need to reset the graph
        series.appendData(new DataPoint(cnt + 1, y), false, 3000); // append that data to the series
    }
    displayData(data, y); // display cnt, data (string), and y (double)
    cnt++;
}
Using a serial port tester, sending the number "12.5" every 60ms, the graphing process works fine until I reach about 300 datapoints. After that, the app becomes laggy. Logcat states that 30-34 frames are being skipped. By 3000 datapoints, Logcat states that 62-74 frames are being skipped.
Question: What can I do to structure my data better and/or make the app run smoother?

Related

Smoothing out microcontroller audio for Sphinx library

Good morning folks. I am trying to send audio data from a microphone attached to an ESP32 board over Wi-Fi to my desktop, which runs some Java code. If I play the audio data using Java's AudioSystem library, it is a bit staticky but legible. Switching to the Sphinx-4 library, which converts audio to text, it only sometimes recognizes the words.
This is the first time I've had to work with raw audio data, so it may not even be possible: the board can only read up to 12-bit signals, which means that when converting to 16 bits, every single 12-bit value maps to about 15 16-bit values. It could also be due to the roughly 115 microsecond delay used to downsample to 16 kHz.
How can I smooth out the audio playback enough that it can be easily recognized by the Sphinx-4 library? The current implementation has very small breaks and some noise that I think is throwing it off.
ESP32 Code:
#define BUFFERMAX 8000
#define ONE_SECOND 1000000

int writeBuffer[BUFFERMAX];

void writeAudio() {
    for (int i = 0; i < BUFFERMAX; i = i + 1) {
        // data read in is 12 bits, so map the value to 16 bits (2 bytes)
        sensorValue = map(analogRead(sensorPin), 0, 4096, -32000, 32000);
        // none-to-minimal sound is around -7000, so try to zero out additional noise with an average
        int prevAvg = avg;
        avg = (avg + sensorValue) / 2;
        sensorValue = abs(prevAvg) + sensorValue;
        if (abs(sensorValue) < 1000) { sensorValue = 0; }
        writeBuffer[i] = sensorValue;
        // delay so that 8000 ints (16000 bytes) take one second to record
        delayMicroseconds(delayMicro);
    }
    client.write((byte*)writeBuffer, sizeof(writeBuffer));
}
Java Sphinx:
StreamSpeechRecognizer recognizer = new StreamSpeechRecognizer(configuration);
// Start recognition process pruning previously cached data.
recognizer.startRecognition(socket.getInputStream() );
System.out.print("awaiting command...");
SpeechResult result = recognizer.getResult();
System.out.println(result.getHypothesis().toLowerCase());
Java play audio:
private static void init() throws LineUnavailableException {
    // specifying the audio format
    AudioFormat _format = new AudioFormat(16000.F, // sample rate
            16,    // sample size in bits
            1,     // number of channels
            true,  // signed?
            false  // big endian?
    );
    // creating the DataLine.Info for the speaker format
    DataLine.Info speakerInfo = new DataLine.Info(SourceDataLine.class, _format);
    // getting the mixer for the speaker
    _speaker = (SourceDataLine) AudioSystem.getLine(speakerInfo);
    _speaker.open(_format);
}

_streamIn = socket.getInputStream();
_speaker.start();
byte[] data = new byte[16000];
System.out.println("Waiting for data...");
while (_running) {
    long start = new Date().getTime();
    // checking if data is available to play
    if (_streamIn.available() <= 0)
        continue; // data not available, so loop again
    // count of the data bytes read
    int readCount = _streamIn.read(data, 0, data.length);
    if (readCount > 0 && (readCount % 2) == 0) {
        System.out.println(readCount);
        _speaker.write(data, 0, readCount);
        readCount = 0;
    }
    System.out.println("Time: " + (new Date().getTime() - start));
}

Android - BLE connection parameter and Storing BLE sensor data in SQLite Database

I am developing an Android app that receives data from a BLE sensor at a rate of about 8000 bytes per second.
The connection logic in my app is based upon Google's BluetoothLeGatt sample. It works. I haven't changed anything and don't explicitly set any connection parameters like interval between connection events (I don't think the Android 4.4 APIs support that).
I am testing on two Android phones, both using Android version 4.4.2 and am using the TI BLE sniffer to monitor BLE traffic.
One phone negotiates a 7.5ms interval and exchanges three 20-byte packets per connection event. The other phone negotiates 48.75ms between connection events and exchanges nineteen 20-byte packets per event (the effective data transfer rate per second is about the same).
My problem is that I am trying to log the data from the BLE service activity to a SQLite database as it comes in. The logging works for the phone with the 7.5ms interval. However,
the app locks up on the phone with the 48.75ms interval (in general, that phone's connection is a lot less stable). I assume that's because it has to process 19 packets arriving right on top of each other.
My questions:
1. Is there any way I can make both phones (and any future devices) use the 7.5ms interval, since that seems to work better? Is there a way to control the Minimum/Maximum_CE_Length parameters?
2. Is there a better way to log the data than directly from the BLE service activity? These SQLite Android Developer pages suggest using an AsyncTask, but that doesn't seem appropriate since the data isn't going to the UI thread.
My code snippets:
This is my connection code directly from the BluetoothLeGatt sample
/**
 * Connects to the GATT server hosted on the Bluetooth LE device.
 *
 * @param address The device address of the destination device.
 *
 * @return Return true if the connection is initiated successfully. The connection result
 *         is reported asynchronously through the
 *         {@code BluetoothGattCallback#onConnectionStateChange(android.bluetooth.BluetoothGatt, int, int)}
 *         callback.
 */
public boolean connect(final String address) {
    if (mBluetoothAdapter == null || address == null) {
        Log.w(TAG, "BluetoothAdapter not initialized or unspecified address.");
        return false;
    }
    // Previously connected device. Try to reconnect.
    if (mBluetoothDeviceAddress != null && address.equals(mBluetoothDeviceAddress)
            && mBluetoothGatt != null) {
        Log.d(TAG, "Trying to use an existing mBluetoothGatt for connection.");
        if (mBluetoothGatt.connect()) {
            mConnectionState = STATE_CONNECTING;
            return true;
        } else {
            return false;
        }
    }
    final BluetoothDevice device = mBluetoothAdapter.getRemoteDevice(address);
    if (device == null) {
        Log.w(TAG, "Device not found. Unable to connect.");
        return false;
    }
    // We want to directly connect to the device, so we are setting the autoConnect
    // parameter to false.
    mBluetoothGatt = device.connectGatt(this, false, mGattCallback);
    Log.d(TAG, "Trying to create a new connection.");
    mBluetoothDeviceAddress = address;
    mConnectionState = STATE_CONNECTING;
    return true;
}
My logging code is in the broadcastUpdate function:
private void broadcastUpdate(final String action,
                             final BluetoothGattCharacteristic characteristic) {
    final Intent intent = new Intent(action);
    StringBuilder stringBuilder = new StringBuilder();
    StringBuilder descStringBuilder = new StringBuilder();
    byte[] newData = characteristic.getValue();
    String dataString;
    if (newData != null && newData.length > 0) {
        if (UUID_SENSOR_FFF4.equals(characteristic.getUuid())) {
            totalDataBytes += newData.length;
            // https://stackoverflow.com/questions/8150155/java-gethours-getminutes-and-getseconds
            estimatedTime = System.currentTimeMillis();
            Date timeDiff = new Date(estimatedTime - startTime - 19 * 3600000);
            SimpleDateFormat timeFormat = new SimpleDateFormat("HH:mm:ss.SSS");
            descStringBuilder.append("CHAR_FFF4\n");
            descStringBuilder.append("Total Data: " + totalDataBytes + " Bytes\n");
            descStringBuilder.append("Elapsed Time: " + timeFormat.format(timeDiff) + "\n");
            // read two bytes at a time, swapped to big-endian order;
            // the bounds check avoids reading past the end of an odd-length payload
            for (int i = 0; i + 1 < newData.length; i += 2) {
                byte[] tempArray = { newData[i + 1], newData[i] };
                ByteBuffer wrapper = ByteBuffer.wrap(tempArray);
                short tempShort = wrapper.getShort();
                stringBuilder.append(tempShort);
                stringBuilder.append(", ");
            }
            dataString = stringBuilder.toString();
            values.put(NbmcContract.NmbcDeviceData.COLUMN_TIMESTAMP, estimatedTime);
            values.put(NbmcContract.NmbcDeviceData.COLUMN_DATA_STRING, dataString);
            long newRowId = db.insert(NbmcContract.NmbcDeviceData.TABLE_NAME, null, values);
            descStringBuilder.append("Row ID: " + newRowId + "\n");
        } else {
            descStringBuilder.append(getCharacteristicString(characteristic) + "\nDATA: ");
            // We expect these characteristics to return ASCII strings
            if (DEVICE_NAME_CHAR.equals(characteristic.getUuid()) ||
                MODEL_NUM_CHAR.equals(characteristic.getUuid()) ||
                SERIAL_NUM_CHAR.equals(characteristic.getUuid()) ||
                FIRMWARE_REV_CHAR.equals(characteristic.getUuid()) ||
                HARDWARE_REV_CHAR.equals(characteristic.getUuid()) ||
                SOFTWARE_REV_CHAR.equals(characteristic.getUuid()) ||
                MANUF_NAME_STRING_CHAR.equals(characteristic.getUuid())) {
                for (byte byteChar : newData) {
                    stringBuilder.append(String.format("%c", byteChar));
                }
            } else {
                for (byte byteChar : newData) {
                    stringBuilder.append(String.format("%02X", byteChar));
                }
            }
            dataString = stringBuilder.toString();
        }
        String descString = descStringBuilder.toString();
        intent.putExtra("DESC_STRING", descString);
        UUID uuid = characteristic.getUuid();
        String uuidString = uuid.toString();
        intent.putExtra("CHAR_UUID", uuidString);
        intent.putExtra("EXTRA_DATA", dataString);
    }
    sendBroadcast(intent);
}
1) On API level 21+ you can try bluetoothGatt.requestConnectionPriority(int connectionPriority).
But that is all you can do.
2)
The BTLE stack on Android is not the best, so I would suggest taking the data without any processing and doing the processing after all the data has been received. Please note that the GATT callbacks happen on a random thread that is NOT your UI thread. Two calls can originate from two different threads, and if one writes into the database while another comes along and starts to write as well, you get a mess.
My suggestion:
On every receiving event you copy the byte[] array (as Android may reuse the given array for future data) and send it to the main thread with a Handler. On the main thread you collect the arrays in a List or Collection and once a certain amount has been collected you start a new thread, give it the list and let it enter the data into a database while the main thread creates a new list for new data.
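The copy-then-batch idea above can be sketched in plain Java. This is a minimal illustration, not the poster's code: PacketBatcher, BATCH_SIZE, and the executor-based writer are hypothetical names, and a real Android app would post to the main thread with a Handler and perform the SQLite inserts inside the worker task.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical batching collector: copy each incoming array, collect a
// batch on one thread, hand the full batch to a single writer thread.
class PacketBatcher {
    static final int BATCH_SIZE = 10;
    final AtomicInteger batchesFlushed = new AtomicInteger();
    private final ExecutorService dbWriter = Executors.newSingleThreadExecutor();
    private List<byte[]> batch = new ArrayList<>();

    // Called for every characteristic update (from a single collector thread).
    void onPacket(byte[] data) {
        batch.add(data.clone());        // copy: the stack may reuse 'data'
        if (batch.size() >= BATCH_SIZE) {
            final List<byte[]> toWrite = batch;
            batch = new ArrayList<>();  // fresh list while the old one is written
            dbWriter.execute(() -> {
                // a real app would insert each row of 'toWrite' into SQLite here
                if (!toWrite.isEmpty()) batchesFlushed.incrementAndGet();
            });
        }
    }

    void shutdown() {
        dbWriter.shutdown();
        try {
            dbWriter.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The point of the hand-off is that only one thread ever touches the database, so the concurrent-write mess described above cannot happen.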

JnetPcap: reading from offline file very slow

I'm building a sort of custom version of Wireshark with jnetpcap v1.4r1425. I just want to open offline pcap files and display them in my TableView, which works great except for the speed.
The files I open are around 100 MB with 700k packets.
public ObservableList<Frame> readOfflineFiles1(int numFrames) {
    ObservableList<Frame> frameData = FXCollections.observableArrayList();
    if (numFrames == 0) {
        numFrames = Pcap.LOOP_INFINITE;
    }
    final StringBuilder errbuf = new StringBuilder();
    final Pcap pcap = Pcap.openOffline(FileAddress, errbuf);
    if (pcap == null) {
        System.err.println(errbuf); // error is stored in errbuf, if any
        return null;
    }
    JPacketHandler<StringBuilder> packetHandler = new JPacketHandler<StringBuilder>() {
        public void nextPacket(JPacket packet, StringBuilder errbuf) {
            if (packet.hasHeader(ip)) {
                sourceIpRaw = ip.source();
                destinationIpRaw = ip.destination();
                sourceIp = org.jnetpcap.packet.format.FormatUtils.ip(sourceIpRaw);
                destinationIp = org.jnetpcap.packet.format.FormatUtils.ip(destinationIpRaw);
            }
            if (packet.hasHeader(tcp)) {
                protocol = tcp.getName();
                length = tcp.size();
                int payloadOffset = tcp.getOffset() + tcp.size();
                int payloadLength = tcp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // no copies, by native reference
                info = buffer.toHexdump();
            } else if (packet.hasHeader(udp)) {
                protocol = udp.getName();
                length = udp.size();
                int payloadOffset = udp.getOffset() + udp.size();
                int payloadLength = udp.getPayloadLength();
                buffer.peer(packet, payloadOffset, payloadLength); // no copies, by native reference
                info = buffer.toHexdump();
            }
            if (packet.hasHeader(payload)) {
                infoRaw = payload.getPayload();
                length = payload.size();
            }
            frameData.add(new Frame(packet.getCaptureHeader().timestampInMillis(),
                    sourceIp, destinationIp, protocol, length, info));
            // System.out.print(i + "\n");
            // i = i + 1;
        }
    };
    pcap.loop(numFrames, packetHandler, errbuf);
    pcap.close();
    return frameData;
}
This code is very fast for the first 400k packets or so, but after that it slows down a lot. It needs around 1 minute for the first 400k packets and around 10 minutes for the rest. What is the issue here?
It's not that the list is getting too time-consuming to work with, is it? The list's add method is O(1), isn't it?
I asked about this on the official jnetpcap forums too, but they're not very active.
edit:
Turns out it slows down massively because of the heap usage. Is there a way to reduce this?
As the profiler showed you, you're running low on memory, and that is when it starts to slow down.
Either give the JVM more memory with -Xmx, or don't load all the packets into memory at once.
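One way to avoid holding every row on the heap, assuming the rows are only needed for later display, is to spool them to disk as they arrive instead of accumulating them in the ObservableList. A minimal sketch (FrameSpooler and the CSV layout are hypothetical, not part of jnetpcap; the real handler would call addFrame from nextPacket):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical spooler: each frame becomes one CSV line on disk,
// so memory use stays flat no matter how many packets arrive.
class FrameSpooler implements AutoCloseable {
    private final BufferedWriter out;
    private long rows = 0;

    FrameSpooler(Path file) throws IOException {
        this.out = Files.newBufferedWriter(file);
    }

    void addFrame(long timestampMillis, String src, String dst,
                  String protocol, int length) throws IOException {
        out.write(timestampMillis + "," + src + "," + dst + ","
                + protocol + "," + length);
        out.newLine(); // nothing is retained in memory after this
        rows++;
    }

    long rowCount() { return rows; }

    @Override public void close() throws IOException { out.close(); }
}
```

The TableView could then page rows in from the file on demand rather than keeping 700k Frame objects alive.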

Android to computer FTP resuming upload strange phenomenon

I have a strange phenomenon when resuming a file transfer.
Look at the picture below and you'll see the bad section.
This happens apparently at random, maybe every 10th time.
I'm sending the picture from my Android phone to a Java server over FTP.
What is it that I forgot here?
I see the connection is killed due to a java.net.SocketTimeoutException.
The transfer resumes like this:
Resume at : 287609 Sending 976 bytes more
The byte count is always correct when the file is completely received,
even for the picture below.
I don't know where to start debugging this since it works most of the time.
Any suggestions or ideas would be great; I think I totally missed something here.
The device Sender code (only send loop):
int count = 1;
// Sending N files, looping N times
while (count <= max) {
    String sPath = batchFiles.get(count - 1);
    fis = new FileInputStream(new File(sPath));
    bis = new BufferedInputStream(fis); // wrap the stream before bis is used below
    int fileSize = bis.available();
    out.writeInt(fileSize); // size
    String nextReply = in.readUTF();
    // if the file exists,
    if (nextReply.equals(Consts.SERVER_give_me_next)) {
        count++;
        continue;
    }
    long resumeLong = 0; // skip this many bytes
    int val = 0;
    buffer = new byte[1024];
    if (nextReply.equals(Consts.SERVER_file_exist)) {
        resumeLong = in.readLong();
    }
    // UPDATE FOR #Justin Breitfeller, thanks
    long skiip = bis.skip(resumeLong);
    if (resumeLong != -1) {
        if (!(resumeLong == skiip)) {
            Log.d(TAG, "ERROR skip is not the same as resumeLong");
            skiip = bis.skip(resumeLong);
            if (!(resumeLong == skiip)) {
                Log.d(TAG, "ERROR ABORTING skip is not the same as resumeLong");
                return;
            }
        }
    }
    while ((val = bis.read(buffer, 0, 1024)) > 0) {
        out.write(buffer, 0, val);
        fileSize -= val;
        if (fileSize < 1024) {
            val = (int) fileSize;
        }
    }
    reply = in.readUTF();
    if (reply.equals(Consts.SERVER_file_receieved_ok)) {
        // check if all files are sent
        if (count == max) {
            break;
        }
    }
    count++;
}
The receiver code (very truncated):
//receiving N files, looping N times
while(count < totalNrOfFiles){
int ii = in.readInt(); // File size
fileSize = (long)ii;
String filePath = Consts.SERVER_DRIVE + Consts.PTPP_FILETRANSFER;
filePath = filePath.concat(theBatch.getFileName(count));
File path = new File(filePath);
boolean resume = false;
//if the file exist. Skip if done or resume if not
if(path.exists()){
if(path.length() == fileSize){ // Does the file has same size
logger.info("File size same skipping file:" + theBatch.getFileName(count) );
count++;
out.writeUTF(Consts.SERVER_give_me_next);
continue; // file is OK don't upload it again
}else {
// Resume the upload
out.writeUTF(Consts.SERVER_file_exist);
out.writeLong(path.length());
resume = true;
fileSize = fileSize-path.length();
logger.info("Resume at : " + path.length() +
" Sending "+ fileSize +" bytes more");
}
}else
out.writeUTF("lets go");
byte[] buffer = new byte[1024];
// ***********************************
// RECEIVE FROM PHONE
// ***********************************
int size = 1024;
int val = 0;
bos = new BufferedOutputStream(new FileOutputStream(path,resume));
if(fileSize < size){
size = (int) fileSize;
}
while (fileSize >0) {
val = in.read(buffer, 0, size);
bos.write(buffer, 0, val);
fileSize -= val;
if (fileSize < size)
size = (int) fileSize;
}
bos.flush();
bos.close();
out.writeUTF("file received ok");
count++;
}
Found the error, and the problem was bad logic on my part. Say no more.
I was sending pictures that were being resized just before they were sent.
The problem was that when the resume kicked in after a failed transfer,
the resized picture was not used; instead the code used the original
picture, which had a larger size.
I have now set up a short-lived cache that holds the resized temporary pictures.
Given the complexity of the app I'm making, I simply forgot that the file used during resume was not the same as the original.
With a BufferedOutputStream and a BufferedInputStream, you need to watch out for the following:
Create the BufferedOutputStream before the BufferedInputStream (on both client and server),
and flush just after creating it.
Flush after every write (not just before close).
That worked for me.
Edited
Add sentRequestTime, receivedRequestTime, sentResponseTime, and receivedResponseTime to your packet payload. Use System.nanoTime() for these, run your server and client on the same host, use an ExecutorService to run multiple clients against that server, and plot (received - sent) for both request and response packets as a time-delay chart in Excel (some CSV format). Do this before adding buffered I/O streams and after. You will be pleased to see that your performance has improved dramatically. It made me very happy to plot that graph; it took about 45 minutes.
I have also heard that using custom buffer sizes further improves performance.
Edited again
In my case I am using Object I/O streams. I added a payload of four long variables to the object: I initialize sentRequestTime when I send the packet from the client, receivedRequestTime when the server receives it, and so forth for the response from server to client. I then take the difference between the received and sent times to find the delay for the request and the response. Be careful to run this test on localhost. If you run it between different hardware/devices, their actual clock difference may interfere with your test results, since receivedRequestTime is stamped at the server end and sentRequestTime at the client end; each device stamps its own local time (obviously), and two devices running at exactly the same time to the nanosecond is not possible. If you must run it between different devices, at least make sure you have NTP running to keep them time-synchronized. That said, you are comparing the performance before and after buffered I/O (you don't really care about the actual time delays, right?), so clock drift should not really matter. Comparing a set of results before and after buffering is your actual interest.
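The four-timestamp payload described above might look like this as a plain-Java sketch (TimedPacket and the field names are illustrative, matching the description rather than any real library):

```java
// Hypothetical timing payload: each hop stamps System.nanoTime() into one
// field, and the deltas give the one-way delay for request and response.
class TimedPacket implements java.io.Serializable {
    long sentRequestTime;      // stamped by the client on send
    long receivedRequestTime;  // stamped by the server on receive
    long sentResponseTime;     // stamped by the server on reply
    long receivedResponseTime; // stamped by the client on receive

    long requestDelayNanos()  { return receivedRequestTime - sentRequestTime; }
    long responseDelayNanos() { return receivedResponseTime - sentResponseTime; }
}
```

Writing the two deltas out per packet (e.g. as CSV) gives exactly the before/after chart described above.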
Enjoy!!!

Processing / Reading Serial Port Data Streams Crashing Program

I'm working on a simple program to read a continuous stream of data from a plain old serial port. The program is written in Processing. Performing a simple read of data and dumping into to the console works perfectly fine, but whenever I add any other functionality (graphing,db entry) to the program, the port starts to become de-synchronized and all data from the serial port starts to become corrupt.
The incoming data from the serial port is in the following format :
A [TAB] //start flag
Data 1 [TAB]
Data 2 [TAB]
Data 3 [TAB]
Data 4 [TAB]
Data 5 [TAB]
Data 6 [TAB]
Data 7 [TAB]
Data 8 [TAB]
COUNT [TAB] //count of number of messages sent
Z [CR] //end flag followed by carriage return
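A frame in this format can be validated with a small parser before any further processing. This is only a sketch in plain Java (FrameParser is a hypothetical helper, not part of Processing): it checks the start flag, end flag, and field count described above and rejects anything else.

```java
// Hypothetical parser for the tab-separated frame described above:
// "A" start flag, 8 data fields, a message count, and a "Z" end flag.
class FrameParser {
    /** Returns the 8 data fields, or null if the frame is malformed. */
    static String[] parse(String frame) {
        String[] tokens = frame.trim().split("\t");
        if (tokens.length != 11) return null;                       // A + 8 data + count + Z
        if (!tokens[0].equals("A") || !tokens[10].equals("Z")) return null;
        String[] data = new String[8];
        System.arraycopy(tokens, 1, data, 0, 8);                    // keep only the data fields
        return data;
    }
}
```

Validating by token count and flags, rather than by fixed character offsets, also tolerates variable-width data fields.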
So as stated, if I run the program below and simply have it output to the console, it runs fine without issue for several hours. If I add the graphing functionality or database connectivity, the serial data starts to come in garbled and serial port handler is never able to decode the message correctly again. I've tried all sorts of workarounds to this issue, thinking it is a timing problem but reducing the speed of the serial port doesn't seem to make a change.
If you see the serial port handler, I provide a large buffer just in case the terminating Z character is chopped off. I check to make sure the A and Z characters are in the correct place and in turn that the created "substring" is the correct length. When the program starts to fail, the substring will continuously fail this check until the program just crashes. Any ideas? I've tried several different ways of reading the serial port and am just beginning to wonder if I am missing something stupid here.
// Serial Port Tester
import processing.serial.*;
import processing.net.*;
import org.gwoptics.graphics.graph2D.Graph2D;
import org.gwoptics.graphics.graph2D.traces.ILine2DEquation;
import org.gwoptics.graphics.graph2D.traces.RollingLine2DTrace;
import de.bezier.data.sql.*;

SQLite db;
RollingLine2DTrace r1, r2, r3, r4;
Graph2D g;
Serial mSerialport;               // the serial port
String[] svalues = new String[8]; // string values
int[] values = new int[8];        // int values
int endflag = 90;                 // 'Z'
byte seperator = 13;              // carriage return

class eq1 implements ILine2DEquation {
  public double computePoint(double x, int pos) {
    // data function for graph/plot
    return (values[0] - 32768);
  }
}

void connectDB() {
  db = new SQLite(this, "data.sqlite");
  if (db.connect()) {
    db.query("SELECT name as \"Name\" FROM SQLITE_MASTER where type=\"table\"");
    while (db.next()) {
      println(db.getString("Name"));
    }
  }
}

void setup() {
  size(1200, 1000);
  connectDB();
  println(Serial.list());
  String portName = Serial.list()[3];
  mSerialport = new Serial(this, portName, 115200);
  mSerialport.clear();
  mSerialport.bufferUntil(endflag); // generate serial event when endflag is received
  background(0);
  smooth();
  // graph setup
  r1 = new RollingLine2DTrace(new eq1(), 250, 0.1f);
  r1.setTraceColour(255, 0, 0);
  g = new Graph2D(this, 1080, 500, false);
  g.setYAxisMax(10000);
  g.addTrace(r1);
  g.position.y = 50;
  g.position.x = 100;
  g.setYAxisTickSpacing(500);
  g.setXAxisMax(10f);
}

void draw() {
  background(200);
  //g.draw(); // enable this and the program crashes quickly
}

void serialEvent(Serial mSerialport) {
  byte[] inBuffer = new byte[200];
  mSerialport.readBytesUntil(seperator, inBuffer);
  String inString = new String(inBuffer);
  String subString = "";
  int startFlag = inString.indexOf("A");
  int endFlag = inString.indexOf("Z");
  if (startFlag == 0 && endFlag == 48) {
    subString = inString.substring(startFlag + 1, endFlag);
  } else {
    println("ERROR: BAD MESSAGE DISCARDED!");
    subString = "";
  }
  if (subString.length() == 47) {
    svalues = (splitTokens(subString));
    values = int(splitTokens(subString));
    println(svalues);
    // if (db.connect()) // enable this and the program crashes quickly
    // {
    //   if (svalues[0] != null && svalues[7] != null) {
    //     statement = svalues[7] + ", " + svalues[0] + ", " + svalues[1] + ", " + svalues[2] + ", "
    //               + svalues[3] + ", " + svalues[4] + ", " + svalues[5] + ", " + svalues[6];
    //     db.execute("INSERT INTO rawdata (messageid,press1,press2,press3,press4,light1,light2,io1) VALUES (" + statement + ");");
    //   }
    // }
  }
}
While I'm not familiar with your specific platform, my first thought from reading your problem description is that you still have a timing problem. At 115,200 bps, data is coming in rather quickly: more than 10 characters every millisecond. As such, if you spend precious time opening a database (slow file I/O) or drawing graphics (also potentially slow), you might well not be able to keep up with the data.
As such, it might be a good idea to put the serial port processing on its own thread, interrupt, etc. That might make the multitasking much easier. Again, this is just an educated guess.
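The own-thread idea can be sketched as a producer/consumer pair: the serial thread does nothing but enqueue complete messages, and a worker thread does the slow graphing and database work. A minimal plain-Java illustration (SerialPipeline is a hypothetical name; a Processing sketch would still need to hand drawing back to the animation thread):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical decoupling layer between the serial reader and slow consumers.
class SerialPipeline {
    private final BlockingQueue<String> messages = new ArrayBlockingQueue<>(1024);

    // Called from the serial-reader thread; must stay fast and never block.
    boolean offerMessage(String msg) {
        return messages.offer(msg); // returns false (drop) if the consumer lags
    }

    // Called from the worker thread; blocks until a message is available.
    String takeMessage() throws InterruptedException {
        return messages.take();
    }
}
```

Because offer() never blocks, a slow database insert can at worst drop messages rather than desynchronize the port.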
Also, you say that your program "crashes" when you enable the other operations. Do you mean that the entire process actually crashes, or that you get corrupted data, or both? Is it possible that you are overrunning your 200-byte inBuffer[]? At 115 kbps, it would take only about 20 ms to do so.
