Processing: Label text slow to update via network events - java

I'm working on a sketch that is receiving network events from an external program (specifically, an OpenFrameworks sketch), using the processing.net library.
Inside the draw method, I have the following code to parse the incoming data and assign the appropriate text to a label for display:
void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
Through logging, I have verified that the data is being received properly.
However, I'm experiencing a very annoying bug: the text of my messageText label is VERY slow to update. After a new event has occurred (and logging confirms it), messageText still displays the value of the last event for several seconds.
Anyone have any pointers on how to speed up performance here?
Thanks!
EDIT: Below is the full, complete sketch code:
import processing.net.*; // include the networking library

Server server; // will receive predictions
String messageText;
PFont f;

void setup()
{
  fullScreen();
  //size(200,200);
  server = new Server(this, 5204); // listen on port 5204
  messageText = "NO HAND";
  f = createFont("Arial", 16, true); // Arial, 16 point, anti-aliasing on
}
void draw()
{
  // check for incoming data
  Client client = server.available();
  if (client != null) {
    // check for a full line of incoming data
    String line = client.readStringUntil('\n');
    if (line != null) {
      //println(line);
      int val = int(trim(line)); // extract the predicted class
      //println(val);
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
    }
  }
  // draw
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}
EDIT2: As Kevin pointed out below, my solution is rather hacky. I'm attempting to use the Message Events methods from the networking library, rather than stuffing all my networking code inside the draw() method.
So, I tried implementing the clientEvent method as shown below. However, I think I may be misunderstanding something: even though my original, hacky code seems to work OK now, my new code using this delegate method doesn't seem to work at all. Basically, I have to run my sketch first, which creates a server that my external program connects to. That program then sends out data that's received by my Processing sketch.
Here's what my full sketch looks like. Anyone know where my misunderstanding may be coming from?
import processing.net.*; // include the networking library

Server server; // will receive predictions
Client client;
String messageText;
int dataIn;
PFont f;

void setup() {
  fullScreen(P3D);
  frameRate(600);
  server = new Server(this, 5204); // listen on port 5204
  client = server.available();
  messageText = "NO HAND";
  textAlign(CENTER);
  fill(255);
  f = createFont("Arial", 48, true); // Arial, 48 point, anti-aliasing on
  textFont(f, 120);
}

void draw() {
  // draw
  background(0);
  text(messageText, width/2, height/2);
}

// If there is information available to read
// this event will be triggered
void clientEvent(Client client) {
  String msg = client.readStringUntil('\n');
  // The value of msg will be null until the
  // end of the String is reached
  if (msg != null) {
    int val = int(trim(msg)); // extract the predicted class
    println(val);
    if (val == 1) {
      messageText = "A";
    } else if (val == 2) {
      messageText = "B";
    } else if (val == 3) {
      messageText = "C";
    } else if (val == 4) {
      messageText = "D";
    }
  }
}

So, the answer ended up having to do with the frame rate and renderer used in the project. Since the network update code was called from my sketch's draw() method, how often it ran depended on the frame rate and renderer in use.
After a bit of experimentation and trial and error, changing the sketch to use the FX2D renderer with a frame rate of 600 improved performance to the degree I needed.
void setup()
{
  fullScreen(FX2D);
  frameRate(600);
  ...
}
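As an aside, a minimal sketch of a complementary approach (not part of the original fix): since draw() reads at most one line per call, a backlog can build up whenever events arrive faster than the frame rate. Draining every complete line each frame keeps messageText on the newest event even at the default 60 fps:

void draw() {
  // read every complete line that has arrived since the last frame
  Client client = server.available();
  if (client != null) {
    String line = client.readStringUntil('\n');
    while (line != null) {
      int val = int(trim(line)); // extract the predicted class
      if (val == 1) {
        messageText = "EVENT 1";
      } else if (val == 2) {
        messageText = "EVENT 2";
      } else if (val == 3) {
        messageText = "EVENT 3";
      }
      line = client.readStringUntil('\n'); // null once the buffer is drained
    }
  }
  background(0);
  textFont(f, 64);
  fill(255);
  textAlign(CENTER);
  text(messageText, width/2, height/2);
}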
EDIT:
After a conversation with one of the Processing core team members, I'm considering my networking code correct and complete. Changing the renderer to FX2D significantly improved performance.
In my very specific use case, I'm running the sketch full-screen on a MacBook Pro with a Retina display. Bumping the frame rate up and changing the renderer gave me the performance I required for my quick prototype sketches.

Related

How do you light up the Arduino Mega diode using Java?

I would like to write a GUI in Java with a single button. Pressing the button should light up a diode (LED) connected to the Arduino. I'm using the RXTXcomm.jar library.
For now, I wrote code that finds the COM21 port (that's the port my Arduino is connected to) and opens it. Here's the code:
private String name;
private String portName;
private CommPortIdentifier portIdentifier = null;
private boolean staPort;

private void getPorts() throws PortInUseException {
    List<String> list = new ArrayList<String>();
    CommPortIdentifier serialPortId;
    Enumeration enumComm;
    enumComm = CommPortIdentifier.getPortIdentifiers();
    while (enumComm.hasMoreElements()) {
        serialPortId = (CommPortIdentifier) enumComm.nextElement();
        name = serialPortId.getName();
        if ("COM21".equals(name)) {
            if (serialPortId.isCurrentlyOwned()) {
                System.out.println("Port is open");
            } else {
                serialPortId.open(name, WIDTH);
            }
        } else {
            System.out.println("error");
        }
    }
}
I would like to ask how to now light up a diode connected to, e.g., pin 1. What method should I use? I'm using an Arduino Mega. I found a few posts on this subject, but unfortunately no specific answer matching my problem. I will be grateful for any help, materials, or links.
Understand that you'll need two programs to do this. The first is similar to your Java program. But the second is the program that runs on the Arduino itself.
Here is a link that should give you an idea. The code is repeated below in case the link goes stale:
int led = 13; // Pin 13

void setup()
{
  pinMode(led, OUTPUT); // Set pin 13 as digital out
  // Start up serial connection
  Serial.begin(9600); // baud rate
  Serial.flush();
}

void loop()
{
  String input = "";
  // Read any serial input
  while (Serial.available() > 0)
  {
    input += (char) Serial.read(); // Read in one char at a time
    delay(5); // Delay for 5 ms so the next char has time to be received
  }
  if (input == "on")
  {
    digitalWrite(led, HIGH); // on
  }
  else if (input == "off")
  {
    digitalWrite(led, LOW); // off
  }
}
This is the C code that needs to run on the Arduino. In this case, as you can see, it uses pin 13. You'll need to get an Arduino development environment set up to get this part working. See the Arduino Software page for information on how to set up the Arduino IDE. It will be different from your NetBeans IDE, but the concepts are similar.
After you've got your sketch uploaded to your Arduino, you'll connect to it at 9600 baud, as shown in the Arduino code. Your Java code isn't setting communication parameters like baud rate, so you'll need to update it for that. I found several links for setting the serial communication parameters in RXTX, so take a look around.
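For example, a minimal sketch of that Java side, assuming the RXTX (gnu.io) API and reusing serialPortId from your code; the owner name and timeout values are illustrative, and exception handling is omitted:

// Open the port and match the Arduino's 9600-8-N-1 settings.
SerialPort serialPort = (SerialPort) serialPortId.open("LedToggle", 2000); // owner name, 2 s timeout
serialPort.setSerialPortParams(9600,                  // must match Serial.begin(9600)
        SerialPort.DATABITS_8,
        SerialPort.STOPBITS_1,
        SerialPort.PARITY_NONE);

// Send the commands the Arduino sketch expects.
OutputStream out = serialPort.getOutputStream();
out.write("on".getBytes()); // or "off"
out.flush();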
Good luck. It seems like a lot at first, but it's really not too bad.

take picture when face detected using FaceDetector google-vision

I found the demo code here: https://github.com/googlesamples/android-vision/blob/master/visionSamples/FaceTracker/app/src/main/java/com/google/android/gms/samples/vision/face/facetracker/FaceTrackerActivity.java
My question is: how do I take a picture when a face is detected and save it to the device, and after the first picture, how do I wait 5 seconds before taking the next one when a face is detected again, since we can't save too many pictures to the device?
You have to add a FaceDetectionListener in the Camera API and then call the startFaceDetection() method:
MyFaceDetectionListener fDetectionListener = new MyFaceDetectionListener();
mCamera.setFaceDetectionListener(fDetectionListener);
mCamera.startFaceDetection();
Implement Camera.FaceDetectionListener; you receive the detected faces in the onFaceDetection override method:
private class MyFaceDetectionListener
        implements Camera.FaceDetectionListener {

    @Override
    public void onFaceDetection(Face[] faces, Camera camera) {
        if (faces.length == 0) {
            Log.i(TAG, "No faces detected");
        } else {
            Log.i(TAG, "Faces Detected = " + String.valueOf(faces.length));
            List<Rect> faceRects = new ArrayList<Rect>();
            for (int i = 0; i < faces.length; i++) {
                int left = faces[i].rect.left;
                int right = faces[i].rect.right;
                int top = faces[i].rect.top;
                int bottom = faces[i].rect.bottom;
                Rect uRect = new Rect(left, top, right, bottom);
                faceRects.add(uRect);
            }
            // add function to draw rects on view/surface/canvas
        }
    }
}
For your case, use new Handler().postDelayed(Runnable, long delayMillis) and take the second picture inside the Runnable after 5000 ms.
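A minimal sketch of that throttle (the flag, camera field, and callback names here are hypothetical, not from your code):

// Allow at most one capture every 5 seconds.
private boolean canTakePicture = true;

private void takePictureIfAllowed() {
    if (!canTakePicture) return; // still inside the 5 s window
    canTakePicture = false;
    mCamera.takePicture(null, null, mJpegCallback); // save the JPEG in mJpegCallback
    new Handler(Looper.getMainLooper()).postDelayed(new Runnable() {
        @Override
        public void run() {
            canTakePicture = true; // re-arm after 5 seconds
        }
    }, 5000);
}

Call takePictureIfAllowed() from onFaceDetection() whenever faces.length > 0.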
Please let me know if you have any queries.

Process hosting the camera service has died unexpectedly

I have tried everything and I can't find a reason why my camera app is throwing a dead-service exception.
Here is the case. I'm using an HDR JNI library, which I've already checked and it works fine. It's not a native memory leak, and it's not a JNI problem. So the problem must be in my code:
I'm just waiting for the CaptureResult to return an AE_CONVERGED state, to check whether the sensor has already reached the correct exposure, and then I call my method:
Log.performanceEnd("YUV capture");
Log.d(TAG, "[onImageAvailable] YUV capture, mBurstCount: " + mBurstCount);
Image image = imageReader.acquireNextImage();
if (mBackgroundHandler != null) {
    mBackgroundHandler.post(new YuvCopy(image, mBurstCount));
}
mBurstCount++;
if (mBurstState == BURST_STATE_HDR) {
    switch (mBurstCount) {
        case 1:
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_HIGH);
            break;
        case 2:
            mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AE_EXPOSURE_COMPENSATION, HDR_EXPOSURE_COMPENSATION_VALUE_LOW);
            break;
        case 3:
            //Restore exposure compensation value
            mCaptureCallback = mPhotoCaptureCallback;
            mSettingsManager.setExposureCompensation(mPreviewRequestBuilder);
            mActivity.runOnUiThread(new Runnable() {
                @Override
                public void run() {
                    onPictureCaptured();
                }
            });
            unlockFocus();
            break;
    }
    if (mBurstCount != 3) {
        updatePreviewSession();
    }
    //Finish HDR session
    if (mBurstCount < YUV_BURST_LIMIT) mHdrState = STATE_PICTURE_TAKEN;
}
Here is my YUV method:
/**
* Transform YUV420 to NV21 readable frames
*/
private class YuvCopy implements Runnable {
private final Image mImage;
private final int mPictureIndex;
public YuvCopy(Image image, int index) {
mImage = image;
mPictureIndex = index;
}
#Override
public void run() {
if (mImage != null) {
if (mImage.getWidth() * mImage.getHeight() > 0) {
Image.Plane[] planes = mImage.getPlanes();
long startCopy = System.currentTimeMillis();
int width = mImage.getWidth();
int height = mImage.getHeight();
int ySize = width * height;
ByteBuffer yBuffer = mImage.getPlanes()[0].getBuffer();
ByteBuffer uvBuffer = mImage.getPlanes()[1].getBuffer();
ByteBuffer vuBuffer = mImage.getPlanes()[2].getBuffer();
byte[] mData = new byte[ySize + (ySize / 2)];
yBuffer.get(mData, 0, ySize);
vuBuffer.get(mData, ySize, (ySize / 2) - 1);
mData[mData.length - 1] = uvBuffer.get(uvBuffer.capacity() - 1);
mImage.close();
mHdrCaptureArray[mPictureIndex] = mData;
Log.i(TAG, "[YuvCopy|run] Time to Copy data: " + (System.currentTimeMillis() - startCopy) + "ms");
if (mPictureIndex == YUV_BURST_LIMIT - 1) {
startHdrProcessing();
} else {
mImage.close();
}
}
}
}
I take a total of three photos and then call the merge method of my JNI library. I tried commenting out all the JNI code and it still happens, so I think the problem must be here, in my YUV method, or maybe in the burst HDR call.
Finally, here is my log error when it happens:
01-01 12:30:27.531 21945-21957/com.myCamera W/AudioSystem: AudioFlinger server died!
01-01 12:30:27.532 21945-22038/com.myCamera W/AudioSystem: AudioPolicyService server died!
1-01 12:30:27.903 21945-21978/com.myCamera I/CameraManagerGlobal: Connecting to camera service
01-01 12:30:27.903 21945-21978/com.myCamera E/CameraManagerGlobal: Camera service is unavailable
01-01 12:30:27.903 21945-21978/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Camera service is currently unavailable
01-01 12:30:29.103 21945-21945/com.myCamera W/System.err: android.hardware.camera2.CameraAccessException: Process hosting the camera service has died unexpectedly
Sometimes it takes just 2 photos, and sometimes 300, but in the end it still happens. Also, a lot of the time my whole device becomes almost unresponsive and nothing works properly, so I need to reboot my phone.
Finally, the problem turned out to be a wrong configuration of my ImageReaders. Depending on the hardware level of the phone, the camera allows different combinations of ImageReaders with different sizes for each one.
For example, a device with INFO_SUPPORTED_HARDWARE_LEVEL == FULL doesn't support one JPEG ImageReader configured to the device's maximum size together with another one in YUV format above the current preview size. Anyway, sometimes it can work, and sometimes it fails.
If an application tries to create a session using a set of targets that exceed the limits described in the below tables, one of three possibilities may occur. First, the session may be successfully created and work normally. Second, the session may be successfully created, but the camera device won't meet the frame rate guarantees as described in getOutputMinFrameDuration(int, Size). Or third, if the output set cannot be used at all, session creation will fail entirely, with onConfigureFailed(CameraCaptureSession) being invoked.
Quote from: https://developer.android.com/reference/android/hardware/camera2/CameraDevice.html
That means my device can't have a YUV ImageReader configured to the 4608x3456 size when my JPEG ImageReader is configured to that same size. The YUV stream can only go up to my preview size (1920x1080). You can check all the possible configurations in this link.
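For illustration, a sketch of a combination the guarantee tables do cover for a FULL-level device (a preview-sized YUV stream plus a maximum-sized JPEG stream); the sizes below are the ones from this answer, so query your device's StreamConfigurationMap rather than hard-coding them:

// YUV at preview size + JPEG at maximum size is in the guarantee tables;
// YUV and JPEG both at 4608x3456 is not, so that session may fail.
ImageReader jpegReader = ImageReader.newInstance(4608, 3456, ImageFormat.JPEG, 2);
ImageReader yuvReader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 3);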

Delay on loop without locking the screen

I've been working on an app where I need to get the location of the device. The thing is, I want to try at least 5 or 10 times, with a delay between tries, so I can show an error message on screen after each attempt (FailedLocationMSG). The reason I want this delay is that it sometimes takes a while for the GPS to start up and get the actual location. I've tried many things and I can't make it work. The idea is to have an interface similar to a terminal, displaying a message after each try (Error, trying again 1/5... Error 2/5...). The problem is, I've tried using a Handler and Thread.sleep, but the screen always locks up and I can't see the error message displayed after each try.
This is the method where I get the location:
int breakloop = 0;

private void GetLocation() {
    locationmanager = (LocationManager) getSystemService(Context.LOCATION_SERVICE);
    Criteria cri = new Criteria();
    String provider = locationmanager.getBestProvider(cri, false);
    if (provider != null && !provider.equals("")) {
        // Get location
        final Location location = locationmanager.getLastKnownLocation(provider);
        locationmanager.requestLocationUpdates(provider, 2000, 10, this);
        while (breakloop < 10) {
            breakloop++;
            if (location != null)
                onLocationChanged(location);
            else
                FailedLocationMSG(breakloop);
        }
    } else {
        Toast.makeText(getApplicationContext(), "Provider is null", Toast.LENGTH_LONG).show();
    }
}
You could use a FutureTask and a Callable for your location check and wait for its return value.
public class LocationChecker implements Callable<Location> {

    @Override
    public Location call() throws Exception {
        // do your stuff for location check
        return location;
    }
}
In your method, you do something like:
LocationChecker lc = new LocationChecker();
FutureTask<Location> ft = new FutureTask<Location>(lc);
ExecutorService es = Executors.newCachedThreadPool();
es.execute(ft);

Location loc = null;
int attempts = 0;
while (attempts < 10) {
    if (ft.isDone() && loc != null) {
        es.shutdown();
        break;
    } else if (ft.isDone() && loc == null) {
        ft = new FutureTask<Location>(lc);
        es.execute(ft);
        attempts++;
    }
    loc = ft.get(); // blocks until the current task finishes
}
This will check your location and wait for the result, retrying up to ten times.
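A different sketch, not from this answer: since the goal is a visible message between tries without freezing the screen, a Handler.postDelayed() retry avoids blocking the UI thread entirely. This reuses the question's locationmanager, onLocationChanged, and FailedLocationMSG names, and assumes provider is kept in a field; the rest is illustrative:

// Hypothetical retry loop on the main thread; nothing blocks, so
// FailedLocationMSG() can actually render between attempts.
private final Handler retryHandler = new Handler(Looper.getMainLooper());
private int attempts = 0;

private void tryGetLocation() {
    Location location = locationmanager.getLastKnownLocation(provider);
    if (location != null) {
        onLocationChanged(location);          // success: stop retrying
    } else if (++attempts < 10) {
        FailedLocationMSG(attempts);          // message can render now
        retryHandler.postDelayed(new Runnable() {
            @Override
            public void run() {
                tryGetLocation();             // try again after 2 seconds
            }
        }, 2000);
    }
}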

Failing Android MediaCodec Decoding from Unity3d

I've got a problem similar to the question/answer posed here: https://stackoverflow.com/a/22461014. However, the difference is that I'm trying to decode from the UnityMain thread (Unity's main loop). Calling from Update(), I pass a byte array and a textureId to MediaCodec's decoder.
public void decodeFrameToTexture(BytePointer pixels, int len, int textureID) {
    if (this.textureID != textureID) {
        Log.d(TAG, "TextureID changed: " + textureID);
        this.textureID = textureID;
        SurfaceTexture surfaceTexture = new SurfaceTexture(textureID);
        mSurface = new Surface(surfaceTexture);
        outputSurface = new CodecOutputSurface(width, height, textureID);
    }
... then we do the decoding (basically a copy of the code at http://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt but without the while loop, as this is frame-by-frame. Also copied the CodecOutputSurface and supporting classes basically verbatim).
Finally we have this code:
decoder.releaseOutputBuffer(decoderStatus, info.size != 0 && outputSurface != null /*render*/);
if (outputSurface != null) {
    outputSurface.awaitNewImage();
    outputSurface.drawImage(true);
}
The trouble is, awaitNewImage() always times out without getting a frame, leading back to the problem referenced here, that the onFrameAvailable() callback is never getting called.
For reference, UnityMain does not have a Looper component. When running this code:
Looper looper;
if ((looper = Looper.myLooper()) != null) {
    mEventHandler = new EventHandler(looper);
} else if ((looper = Looper.getMainLooper()) != null) {
    mEventHandler = new EventHandler(looper);
} else {
    mEventHandler = null;
}
the assigned looper from that thread is the MainLooper looper. Any ideas would be appreciated. As stated by @fadden, "the trick is to make sure that frame-available events arrive on a different thread from the one sitting in awaitNewImage()". Given that we're running
mSurfaceTexture.setOnFrameAvailableListener(this);
from UnityMain, I think this satisfies that requirement? The callback should be called from the "main" thread?
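For what it's worth, a hedged sketch (not from the question) of one way to satisfy that requirement explicitly: give the SurfaceTexture its own callback thread, so onFrameAvailable() can fire even while UnityMain is blocked inside awaitNewImage():

// Route frame-available callbacks to a dedicated looper thread.
HandlerThread callbackThread = new HandlerThread("FrameCallbacks");
callbackThread.start();
Handler callbackHandler = new Handler(callbackThread.getLooper());

// The two-argument overload (listener, handler) exists on API 21+;
// with it the callback no longer depends on whichever looper UnityMain sees.
surfaceTexture.setOnFrameAvailableListener(listener, callbackHandler);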
