Kinect for Processing error when using multiple cameras - java

I'm trying to run two Kinect V1 cameras simultaneously in Processing 3. I have gotten to a solution that is not sustainable, and I am trying to make something more stable/reliable.
At the moment, whenever I try to run both cameras simultaneously on a single sketch, I am hit with the error
"Could not claim interface on camera: -3
Failed to open camera subdevice or it is not disabled.Failed to open motor subddevice or it is not disabled.Failed to open audio subdevice or it is not disabled.There are no kinects, returning null"
One camera opens, the other does not. It is not always consistent which camera opens, which leads me to believe there's something tripping over permissions after the objects are created, or when the second object is initialized.
My code is as follows:
import SimpleOpenNI.*;
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
//Imported libraries
//Some might be unnecessary but I don't have time to check
//Better safe than sorry, maybe I'll delete later
Kinect kinect;
Kinect kinect2;
PImage depthImage;
PImage depthImage2;
//Set depth threshold
float minDepth = 996;
float maxDepth = 2493;
float iWidth1 = 0;
float iHeight1 = 0;
float iWidth2 = 0;
float iHeight2 = 0;
//Double check for the number of devices, mostly for troubleshooting
int numDevices = 0;
//control which device is being controlled (in case I want device control)
int deviceIndex = 0;
void setup() {
//set Arbitrary size
size(640, 360);
//Set up window to resize, need to figure out how to keep things centered
surface.setResizable(true);
//not necessary, but good for window management. Window label
surface.setTitle("KINECT 1");
//get number of devices, print to console
numDevices = Kinect.countDevices();
println("number of V1 Kinects = "+numDevices);
//set up depth for the first kinect tracking
kinect = new Kinect(this);
kinect.initDepth();
//Blank Image
depthImage = new PImage(kinect.width, kinect.height);
//set up second window
String [] args = {"2 Frame Test"};
SecondApplet sa = new SecondApplet();
PApplet.runSketch(args, sa);
}
//Draw first window's Kinect Threshold
void draw () {
if ((width/1.7778) < height) {
iWidth1 = width;
iHeight1 = width/1.7778;
} else {
iWidth1 = height*1.7778;
iHeight1 = height;
}
//Raw Image
image(kinect.getDepthImage(), 0, 0, iWidth1, iHeight1);
//Threshold Equation
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImage.pixels[i] = color(255);
} else {
depthImage.pixels[i] = color(1);
}
}
}
public class SecondApplet extends PApplet {
public void settings() {
//arbitrary size
size(640, 360);
kinect2 = new Kinect(this);
kinect2.initDepth();
//Blank Image
depthImage2 = new PImage(kinect2.width, kinect2.height);
}
void draw () {
if ((width/1.7778) < height) {
iWidth2 = width;
iHeight2 = width/1.7778;
} else {
iWidth2 = height*1.7778;
iHeight2 = height;
}
image(kinect2.getDepthImage(), 0, 0, iWidth2, iHeight2);
surface.setResizable(true);
surface.setTitle("KINECT 2");
int[] rawDepth2 = kinect2.getRawDepth();
for (int i=0; i < rawDepth2.length; i++) {
if (rawDepth2[i] >= minDepth && rawDepth2[i] <= maxDepth) {
depthImage2.pixels[i] = color(255);
} else {
depthImage2.pixels[i] = color(1);
}
}
}
}
Curiously, the code prints a confirmation to the console that there are two Kinect devices connected. For some reason, it cannot access both at the same time.
I'm not a very experienced coder, so this code might look amateur. Open to feedback on other parts, but really just looking to solve this problem.
This code returns the error pasted above when there are two Kinect V1's connected to the computer.
Running macOS 11.6.8 on an Intel MacBook Pro.
Using Daniel Shiffman's OpenKinect for Processing as a starting point for the code.
I've run a successful iteration of this code with a slimmed-down version of Daniel Shiffman's Depth Threshold example.
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
Kinect kinect;
// Depth image
PImage depthImg;
// Which pixels do we care about?
// These thresholds can also be found with a variety of methods
float minDepth = 996;
float maxDepth = 2493;
// What is the kinect's angle
float angle;
void setup() {
size(1280, 480);
kinect = new Kinect(this);
kinect.initDepth();
angle = kinect.getTilt();
// Blank image
depthImg = new PImage(kinect.width, kinect.height);
}
void draw() {
// Draw the raw image
image(kinect.getDepthImage(), 0, 0);
// Calibration
//minDepth = map(mouseX,0,width, 0, 4500);
//maxDepth = map(mouseY,0,height, 0, 4500);
// Threshold the depth image
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImg.pixels[i] = color(255);
} else {
depthImg.pixels[i] = color(0);
}
}
// Draw the thresholded image
depthImg.updatePixels();
image(depthImg, kinect.width, 0);
//Comment for Calibration
fill(0);
text("TILT: " + angle, 10, 20);
text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);
//Calibration Text
//fill(255);
//textSize(32);
//text(minDepth + " " + maxDepth, 10, 64);
}
Using this code, I was able to get both cameras operating using the following process:
1. Connect a single Kinect V1 to the computer
2. Open and run the above code
3. Duplicate the sketch file
4. Connect the second Kinect V1 to the computer
5. Open and run the duplicated sketch of the same code
This worked for my purposes and remained stable for an extended period of time. However, this isn't a sustainable solution if anyone other than me wants to utilize this program.
Any help with this problem would be greatly appreciated

Related

color tracking using webcam feed

I am trying to create a color tracking bird flock, using live video from my webcam. I was instructed to use a constructor to create an array of .gifs that could work independently and follow a specific color around the video.
I did some research and this is as far as I got. Now I am getting an error that I don't really understand. For a very early dummy example of the intentions I have with the code, please see this .gif: Flock of birds
import processing.video.*;
import gifAnimation.*;
video = new Movie(); /// This is the line that gives me the error
// class
Birdy [] arrayOfBirds;
int numberOfBirds = 10;
class Birdy
{
//variables
int numberOfBeaks;
String birdName;
color birdColor;
PVector location;
// constructor, allows you to make new Birds in the rest of the code
// A constructor is part of the class
Birdy (int nob, String bname, color bColor, PVector loc) {
numberOfBeaks = nob;
birdName = bname;
birdColor = bColor;
location = loc;
}
//The bird appears
void showBird()
{
fill(birdColor);
textSize(24);
text(birdName, location.x, location.y);
ellipse(location.x, location.y, 20, 20);
}
}
void setup() {
size(640, 480);
//fill the array Of Birds with new Birds
arrayOfBirds = new Birdy[numberOfBirds];
//to make 10 birds and put them in the array
for (int i = 0; i < numberOfBirds; i++)
{
// each new bird needs its own set of parameters, but I will do this when I figure out how to work with this one first!
arrayOfBirds[i]= new Birdy(2, "Tweety "+i, color(255-(i*25), i*25, 255), new PVector(i*40, i*40));
}
}
void draw(int x, int y) {
if (video.available()) {
video.read();
image(video, 0, 0, width, height); // Draw the webcam video onto the screen
int colorX = 0; // X-coordinate of the closest in color video pixel
int colorY = 0; // Y-coordinate of the closest in color video pixel
float closestColor = 500; //we set this to be arbitrarily large; once the program runs, the first pixel it scans will be set to this value
// Search for the closest in color pixel: For each row of pixels in the video image and
// for each pixel in the yth row, compute each pixel's index in the video
background(0);
//show that first bird we called Tweety by calling the showBird() function on Tweety
Tweety.showBird();
//show all the birds in the array by calling the showBird() method on each object in the array
for(int i = 0; i < arrayOfBirds.length; i++){
arrayOfBirds[i].location = new PVector(x,y);
arrayOfBirds[i].showBird();
}
}
setup();
Gif loopingGif;
Capture video;
size(640, 480); // Change size to 320 x 240 if too slow at 640 x 480 // Uses the default video input ---- but I don't think it works
video = new Capture(this, width, height, 30);
video.start();
noStroke();
smooth();
frameRate(10);
loopingGif = new Gif(this, "circle.gif");
String [] animas = {};
video.loadPixels();
int index = 0;
for (int y = 0; y < video.height; y++) {
for (int x = 0; x < video.width; x++) {
// Get the color stored in the pixel
color pixelValue = video.pixels[index];
// Determine the color of the pixel
float colorProximity = abs(red(pixelValue)-27)+abs(green(pixelValue)-162)+abs(blue(pixelValue)-181); //select pixel
// If that value is closer in color value than any previous, then store the
// color proximity of that pixel, as well as its (x,y) location
if (colorProximity < closestColor) {
closestColor = colorProximity;
closestColor=closestColor-10; //Once it "locks" on to an object of color, it wont let go unless something a good bit better (closer in color) comes along
colorY = y;
colorX = x;
}
index++;
}
draw(x,y);
}
image (loopingGif, colorX, colorY);
loopingGif.play();
}
You need to declare your variable by giving it a type:
Movie video = new Movie();
You've got some other weird things going on here. Why are you specifically calling the setup() function? Processing does that for you automatically. You've also got a bunch of code outside of a function at the bottom of your sketch. Maybe you meant to put that code inside the setup() function?
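For reference, a minimal properly-structured sketch looks something like this (a sketch of the structure only, assuming the standard processing.video Capture class and the default camera):
import processing.video.*;
// declare the variable with a type, outside any function
Capture video;
void setup() {
  size(640, 480);
  // initialize once, inside setup()
  video = new Capture(this, width, height, 30);
  video.start();
}
void draw() {
  // read a new frame when one is available, then draw it
  if (video.available()) {
    video.read();
  }
  image(video, 0, 0, width, height);
}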
If you're still getting errors, edit your question to include their exact full text.

How to get clear mask of users in simple-openni?

I am trying to extract the user silhouette and put it above my images. I was able to make a mask and cut the user out of the RGB image, but the contour is messy.
The question is how I can make the mask more precise (to fit the real user). I've tried ERODE-DILATE filters, but they don't do much. Maybe I need some feather filter like in Photoshop. Or I don't know.
Here is my code.
import SimpleOpenNI.*;
SimpleOpenNI context;
PImage mask;
void setup()
{
size(640*2, 480);
context = new SimpleOpenNI(this);
if (context.isInit() == false)
{
exit();
return;
}
context.enableDepth();
context.enableRGB();
context.enableUser();
context.alternativeViewPointDepthToImage();
}
void draw()
{
frame.setTitle(int(frameRate) + " fps");
context.update();
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
for (int x = 0; x < xSize; x++) {
int index = x + y*xSize;
if (userMap[index]>0) {
mask.pixels[index]=color(255, 255, 255);
}
}
}
mask.updatePixels();
image(mask, 0, 0);
mask.filter(DILATE);
mask.filter(DILATE);
PImage rgb = context.rgbImage();
rgb.mask(mask);
image(rgb, context.depthWidth() + 10, 0);
}
It's good you're aligning the RGB and depth streams.
There are a few things that could be improved in terms of efficiency:
No need to reload a black image every single frame (in the draw() loop) since you're modifying all the pixels anyway:
mask = loadImage("black640.jpg"); //just a black image
Also, since you don't need the x,y coordinates as you loop through the user data, you can use a single for loop which should be a bit faster:
for(int i = 0 ; i < numPixels ; i++){
mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
instead of:
for (int y = 0; y < ySize; y++) {
for (int x = 0; x < xSize; x++) {
int index = x + y*xSize;
if (userMap[index]>0) {
mask.pixels[index]=color(255, 255, 255);
}
}
}
Another hacky thing you could do is retrieve the userImage() from SimpleOpenNI instead of the userMap() data, and apply a THRESHOLD filter to it, which in theory should give you the same result as above.
For example:
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
for (int x = 0; x < xSize; x++) {
int index = x + y*xSize;
if (userMap[index]>0) {
mask.pixels[index]=color(255, 255, 255);
}
}
}
could be:
mask = context.userImage();
mask.filter(THRESHOLD);
In terms of filtering, if you want to shrink the silhouette you should ERODE, and blurring should give you a bit of that Photoshop-like feathering.
Note that some filter() calls take arguments (like BLUR), but others, like the ERODE/DILATE morphological filters, don't; you can still roll your own loops to deal with that.
I also recommend having some sort of easy-to-tweak interface (it can be a fancy slider or a simple keyboard shortcut) when playing with filters.
Here's a rough attempt at the refactored sketch with the above comments:
import SimpleOpenNI.*;
SimpleOpenNI context;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 0;
void setup()
{
size(640*2, 480);
context = new SimpleOpenNI(this);
if (context.isInit() == false)
{
exit();
return;
}
context.enableDepth();
context.enableRGB();
context.enableUser();
context.alternativeViewPointDepthToImage();
mask = createImage(640,480,RGB);
}
void draw()
{
frame.setTitle(int(frameRate) + " fps");
context.update();
int[] userMap = context.userMap();
background(0, 0, 0);
//you don't need to keep reloading the image every single frame since you're updating all the pixels below anyway
// mask = loadImage("black640.jpg"); //just a black image
// mask.loadPixels();
// int xSize = context.depthWidth();
// int ySize = context.depthHeight();
// for (int y = 0; y < ySize; y++) {
// for (int x = 0; x < xSize; x++) {
// int index = x + y*xSize;
// if (userMap[index]>0) {
// mask.pixels[index]=color(255, 255, 255);
// }
// }
// }
//a single loop is usually faster than a nested loop and you don't need the x,y coordinates anyway
for(int i = 0 ; i < numPixels ; i++){
mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
//erode
for(int i = 0 ; i < erodeAmt ; i++) mask.filter(ERODE);
//dilate
for(int i = 0 ; i < dilateAmt; i++) mask.filter(DILATE);
//blur
mask.filter(BLUR,blurAmt);
mask.updatePixels();
//preview the mask after you process it
image(mask, 0, 0);
PImage rgb = context.rgbImage();
rgb.mask(mask);
image(rgb, context.depthWidth() + 10, 0);
//print filter values for debugging purposes
fill(255);
text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt,15,15);
}
void keyPressed(){
if(key == 'e') erodeAmt--;
if(key == 'E') erodeAmt++;
if(key == 'd') dilateAmt--;
if(key == 'D') dilateAmt++;
if(key == 'b') blurAmt--;
if(key == 'B') blurAmt++;
//constrain values
if(erodeAmt < 0) erodeAmt = 0;
if(dilateAmt < 0) dilateAmt = 0;
if(blurAmt < 0) blurAmt = 0;
}
Unfortunately I can't test with an actual sensor right now, so please use the concepts explained, but bear in mind the full sketch code isn't tested.
The above sketch (if it runs) should allow you to use keys to control the filter parameters (e/E to decrease/increase erosion, d/D for dilation, b/B for blur). Hopefully you'll get satisfactory results.
When working with SimpleOpenNI in general, I advise recording an .oni file (check out the RecorderPlay example for that) of a person for the most common use case. This will save you some time in the long run when testing and will allow you to work remotely with the sensor detached. One thing to bear in mind: the depth resolution is reduced to half on recordings (but using a usingRecording boolean flag should keep things safe).
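As a rough illustration of that flag idea (untested here; the .oni file name is just a placeholder):
import SimpleOpenNI.*;
SimpleOpenNI context;
boolean usingRecording = true; // toggle between .oni playback and the live sensor
void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  if (usingRecording) {
    // play back a previously recorded session instead of opening the sensor
    context.openFileRecording("user-test.oni");
  }
  context.enableDepth();
}
void draw() {
  context.update();
  image(context.depthImage(), 0, 0);
}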
The last and probably most important point is about the quality of the end result. Your resulting image can't be that much better if the source image isn't easy to work with to begin with. The depth data from the original Kinect sensor isn't great. The Asus sensors feel a wee bit more stable, but the difference is negligible in most cases. If you are going to stick with one of these sensors, make sure you've got a clear background and decent lighting, without too much direct warm light (sunlight, incandescent light bulbs, etc.), since it may interfere with the sensor.
If you want a more accurate user cut and the above filtering doesn't get the results you're after, consider switching to a better sensor like the Kinect V2. The depth quality is much better and the sensor is less susceptible to direct warm light. This may mean you need to use Windows (I see there's a KinectPV2 wrapper available) or openFrameworks (a C++ collection of libraries similar to Processing) with ofxKinectV2.
I've tried the built-in erode/dilate/blur filters in Processing, but they are very inefficient: every time I increment blurAmount in img.filter(BLUR, blurAmount), my FPS decreases by 5 frames.
So I decided to try OpenCV. It is much better in comparison, and the result is satisfactory.
import SimpleOpenNI.*;
import processing.video.*;
import gab.opencv.*;
SimpleOpenNI context;
OpenCV opencv;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 1;
Movie mov;
void setup(){
opencv = new OpenCV(this, 640, 480);
size(640*2, 480);
context = new SimpleOpenNI(this);
if (context.isInit() == false) {
exit();
return;
}
context.enableDepth();
context.enableRGB();
context.enableUser();
context.alternativeViewPointDepthToImage();
mask = createImage(640, 480, RGB);
mov = new Movie(this, "wild.mp4");
mov.play();
mov.speed(5);
mov.volume(0);
}
void movieEvent(Movie m) {
m.read();
}
void draw() {
frame.setTitle(int(frameRate) + " fps");
context.update();
int[] userMap = context.userMap();
background(0, 0, 0);
mask.loadPixels();
for (int i = 0; i < numPixels; i++) {
mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
mask.updatePixels();
opencv.loadImage(mask);
opencv.gray();
for (int i = 0; i < erodeAmt; i++) {
opencv.erode();
}
for (int i = 0; i < dilateAmt; i++) {
opencv.dilate();
}
if (blurAmt>0) {//blur with 0 amount causes error
opencv.blur(blurAmt);
}
mask = opencv.getSnapshot();
image(mask, 0, 0);
PImage rgb = context.rgbImage();
rgb.mask(mask);
image(mov, context.depthWidth() + 10, 0);
image(rgb, context.depthWidth() + 10, 0);
fill(255);
text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}
void keyPressed() {
if (key == 'e') erodeAmt--;
if (key == 'E') erodeAmt++;
if (key == 'd') dilateAmt--;
if (key == 'D') dilateAmt++;
if (key == 'b') blurAmt--;
if (key == 'B') blurAmt++;
//constrain values
if (erodeAmt < 0) erodeAmt = 0;
if (dilateAmt < 0) dilateAmt = 0;
if (blurAmt < 0) blurAmt = 0;
}

Processing in Java - drawn clearing rectangle breaks after calling a method

I am using Processing in Java to perpetually draw a line graph. This requires a clearing rectangle that draws over the old lines to make room for the new part of the graph. Everything works fine, but when I call a method, the clearing stops working as it did before. Basically, the clearing works by drawing a rectangle in front of where the line currently is.
Below are the two main methods involved. The drawGraph function works fine until I call redrawGraph, which redraws the graph based on the zoom. I think the center variable is the cause of the problem, but I cannot figure out why.
public void drawGraph()
{
checkZoom();
int currentValue = seismograph.getCurrentValue();
int lastValue = seismograph.getLastValue();
step = step + zoom;
if(step>offset){
if(restartDraw == true)
{
drawOnGraphics(step-zoom, lastY2, step, currentValue);
image(graphGraphics, 0, 0);
restartDraw = false;
}else{
drawOnGraphics(step-zoom, lastValue, step, currentValue);
image(graphGraphics, 0, 0);
}
} // draw graph (connect last to current point // increase step - x axis
float xClear = step+10; // being clearing area in front of current graph
if (xClear>width - 231) {
xClear = offset - 10; // adjust for far side of the screen
}
graphGraphics.beginDraw();
if (step>graphSizeX+offset) { // draw two clearing rectangles when graph isn't split
// left = bg.get(0, 0, Math.round(step-graphSizeX), height - 200); // capture clearing rectangle from the left side of the background image
// graphGraphics.image(left, 0, 0); // print left clearing rectangle
// if (step+10<width) {
// right = bg.get(Math.round(step+10), 0, width, height - 200); // capture clearing rectangle from the right side of the background image
// // print right clearing rectangle
// }
} else { // draw one clearing rectangle when graph is split
center = bg.get(Math.round(xClear), lastY2, offset, height - 200); // capture clearing rectangle from the center of the background image
graphGraphics.image(center, xClear - 5, 0);// print center clearing rectangle
}
if (step > graphSizeX+offset) { // reset set when graph reaches the end
step = 0+offset;
}
graphGraphics.endDraw();
image(graphGraphics, 0 , 0);
System.out.println("step: " + step + " zoom: " + zoom + " currentValue: "+ currentValue + " lastValue: " + lastValue);
}
private void redrawGraph() //resizes graph when zooming
{
checkZoom();
Object[] data = seismograph.theData.toArray();
clearGraph();
step = offset;
int y2, y1 = 0;
int zoomSize = (int)((width - offset) / zoom);
int tempCount = 0;
graphGraphics.beginDraw();
graphGraphics.strokeWeight(2); // line thickness
graphGraphics.stroke(242,100,66);
graphGraphics.smooth();
while(tempCount < data.length)
{
try
{
y2 = (int)data[tempCount];
step = step + zoom;
if(step > offset && y1 > 0 && step < graphSizeX+offset){
graphGraphics.line(step-zoom, y1, step, y2);
}
y1 = y2;
tempCount++;
lastY2 = y2;
}
catch (Exception e)
{
System.out.println(e.toString());
}
}
graphGraphics.endDraw();
image(graphGraphics, 0, 0);
restartDraw = true;
}
Any help and criticisms are welcome. Thank you for your valuable time.
I'm not sure if that approach is the best. You can try something as simple as this:
// Learning Processing
// Daniel Shiffman
// http://www.learningprocessing.com
// Example: a graph of random numbers
float[] vals;
void setup() {
size(400,200);
smooth();
// An array of random values
vals = new float[width];
for (int i = 0; i < vals.length; i++) {
vals[i] = random(height);
}
}
void draw() {
background(255);
// Draw lines connecting all points
for (int i = 0; i < vals.length-1; i++) {
stroke(0);
strokeWeight(2);
line(i,vals[i],i+1,vals[i+1]);
}
// Slide everything down in the array
for (int i = 0; i < vals.length-1; i++) {
vals[i] = vals[i+1];
}
// Add a new random value
vals[vals.length-1] = random(height);//use seismograph.getCurrentValue(); here instead
}
You can easily do the same using a PGraphics buffer as your code suggests:
// Learning Processing
// Daniel Shiffman
// http://www.learningprocessing.com
// Example: a graph of random numbers
float[] vals;
PGraphics graph;
void setup() {
size(400,200);
graph = createGraphics(width,height);
// An array of random values
vals = new float[width];
for (int i = 0; i < vals.length; i++) {
vals[i] = random(height);
}
}
void draw() {
graph.beginDraw();
graph.background(255);
// Draw lines connecting all points
for (int i = 0; i < vals.length-1; i++) {
graph.stroke(0);
graph.strokeWeight(2);
graph.line(i,vals[i],i+1,vals[i+1]);
}
graph.endDraw();
image(graph,0,0);
// Slide everything down in the array
for (int i = 0; i < vals.length-1; i++) {
vals[i] = vals[i+1];
}
// Add a new random value
vals[vals.length-1] = random(height);//use seismograph.getCurrentValue(); here instead
}
The main idea is to cycle the newest data in an array and simply draw the values from this shifting array. As long as you clear the previous frame (background()) the graph should look ok.
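If that shifting loop ever shows up as a cost, the same slide can be done in one call with System.arraycopy (equivalent behavior, just a micro-optimization):
// slide everything down one slot, then append the newest reading
System.arraycopy(vals, 1, vals, 0, vals.length - 1);
vals[vals.length - 1] = random(height);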

Problems reading data from 3 Kinect cameras

I am trying to read data from multiple Kinect sensors (3 at the moment) and having issues when there's more than 2 devices.
I'm using Daniel Shiffman's OpenKinect Processing wrapper, slightly modified so it allows opening multiple Device instances. Everything works fine with 2 devices. The problem is when I use 3. One Kinect is connected straight into one of the two available USB ports, and the other two are plugged into a USB 2.0 hub (which has its own power adapter).
The devices all initialize successfully:
org.openkinect.Device#1deeb40 initialized
org.openkinect.Device#2c35e initialized
org.openkinect.Device#1cffeb4 initialized
The problem is when I try to get the depth map from the 3rd device: I get an array filled with zeroes. I thought it was the device, but if I swap devices, it's always the 3rd (the last connected) that presents this behaviour.
Here's my code so far:
package librarytests;
import org.openkinect.Context;
import org.openkinect.processing.Kinect;
import processing.core.PApplet;
import processing.core.PVector;
public class PointCloudxN extends PApplet {
// Kinect Library object
int numKinects;// = 3;
Kinect[] kinects;
int[] colours = {color(192,0,0),color(0,192,0),color(0,0,192),color(192,192,0),color(0,192,192),color(192,0,192)};
// Size of kinect image
int w = 640;
int h = 480;
// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];
// Scale up by 200
float factor = 200;
public void setup() {
size(800,600,P3D);
numKinects = Context.getContext().devices();
kinects = new Kinect[numKinects];
for (int i = 0; i < numKinects; i++) {
kinects[i] = new Kinect(this);
kinects[i].start(i);
kinects[i].enableDepth(true);
kinects[i].processDepthImage(false);
}
// Lookup table for all possible depth values (0 - 2047)
for (int i = 0; i < depthLookUp.length; i++) {
depthLookUp[i] = rawDepthToMeters(i);
}
}
public void draw() {
background(0);
translate(width/2,height/2,-50);
rotateY(map(mouseX,0,width,-PI,PI));
rotateX(map(mouseY,0,height,-PI,PI));
int skip = 4;//res
//*
for (int i = 0; i < numKinects; i++) {
Kinect kinect = kinects[i];
int[] depth = kinect.getRawDepth();
//if(frameCount % 60 == 0 && i == 2) println(depth);
if (depth != null) {
// Translate and rotate
for(int x=0; x<w; x+=skip) {
for(int y=0; y<h; y+=skip) {
int offset = x+y*w;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
PVector v = depthToWorld(x,y,rawDepth);
stroke(colours[i]);
// Draw a point
point(v.x*factor,v.y*factor,factor-v.z*factor);
}
}
}
}
//*/
}
public void stop() {
for (int i = 0; i < numKinects; i++) kinects[i].quit();
super.stop();
}
public static void main(String _args[]) {
PApplet.main(new String[] { librarytests.PointCloudxN.class.getName() });
}
// These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
if (depthValue < 2047) {
return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
}
return 0.0f;
}
PVector depthToWorld(int x, int y, int depthValue) {
final double fx_d = 1.0 / 5.9421434211923247e+02;
final double fy_d = 1.0 / 5.9104053696870778e+02;
final double cx_d = 3.3930780975300314e+02;
final double cy_d = 2.4273913761751615e+02;
PVector result = new PVector();
double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue);
result.x = (float)((x - cx_d) * depth * fx_d);
result.y = (float)((y - cy_d) * depth * fy_d);
result.z = (float)(depth);
return result;
}
}
The only major change I've done to Daniel's Kinect class was adding an extra start() method:
public void start(int id) {
context = Context.getContext();
if(context.devices() < 1)
{
System.out.println("No Kinect devices found.");
}
device = context.getDevice(id);
//device.acceleration(this);
device.acceleration(new Acceleration()
{
void Acceleration(){
System.out.println("new Acceleration implementation");
}
public void direction(float x, float y, float z)
{
System.out.printf("Acceleration: %f %f %f\n", x ,y ,z);
}
});
kimg = new RGBImage(p5parent);
dimg = new DepthImage(p5parent);
running = true;
super.start();
}
I've also tried with MaxMSP/Jitter and the jit.freenect external and I get the same behaviour: I can get 2 depth maps, but the 3rd is not updating.
So it seems to be an issue related to the driver, not the wrapper, since the same behaviour is present using 2 different wrappers to libfreenect (Java/Processing and Max), but I am clueless as to why this happens, to be honest.
Has anyone had a similar issue (getting depth feeds from 3 devices) using the OpenKinect/libfreenect Driver ? Any ideas on how I can get past this issue ?
The Kinect is extremely demanding on USB - generally, you can only get one Kinect per USB host controller on your motherboard (most PCs and laptops have two). The only solution I've seen is to buy a PCI-E USB controller and plug the third one into it.
Also, you might get lucky if you reduce the bandwidth requirements by disabling the RGB stream on all the Kinects (I'm blithely assuming you aren't using it, since it wasn't mentioned).
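If the wrapper build you're using exposes an enableRGB() toggle alongside enableDepth() (an assumption; check your modified Kinect.java), making that explicit in the setup loop would look like:
for (int i = 0; i < numKinects; i++) {
  kinects[i] = new Kinect(this);
  kinects[i].start(i);
  kinects[i].enableDepth(true);
  // hypothetical toggle: keep the color stream off to save USB bandwidth
  kinects[i].enableRGB(false);
  kinects[i].processDepthImage(false);
}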

Getting a NullPointerException at seemingly random intervals, not sure why

I'm running an example from a Kinect library for Processing (http://www.shiffman.net/2010/11/14/kinect-and-processing/) and sometimes get a NullPointerException pointing to this line:
int rawDepth = depth[offset];
The depth array is created in this line:
int[] depth = kinect.getRawDepth();
I'm not exactly sure what a NullPointerException is, and much googling hasn't really helped. It seems odd to me that the code runs fine 70% of the time and returns the error unpredictably. Could the hardware itself be affecting it?
Here's the whole example if it helps:
// Daniel Shiffman
// Kinect Point Cloud example
// http://www.shiffman.net
// https://github.com/shiffman/libfreenect/tree/master/wrappers/java/processing
import org.openkinect.*;
import org.openkinect.processing.*;
// Kinect Library object
Kinect kinect;
float a = 0;
// Size of kinect image
int w = 640;
int h = 480;
// We'll use a lookup table so that we don't have to repeat the math over and over
float[] depthLookUp = new float[2048];
void setup() {
size(800,600,P3D);
kinect = new Kinect(this);
kinect.start();
kinect.enableDepth(true);
// We don't need the grayscale image in this example
// so this makes it more efficient
kinect.processDepthImage(false);
// Lookup table for all possible depth values (0 - 2047)
for (int i = 0; i < depthLookUp.length; i++) {
depthLookUp[i] = rawDepthToMeters(i);
}
}
void draw() {
background(0);
fill(255);
textMode(SCREEN);
text("Kinect FR: " + (int)kinect.getDepthFPS() + "\nProcessing FR: " + (int)frameRate,10,16);
// Get the raw depth as array of integers
int[] depth = kinect.getRawDepth();
// We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
int skip = 4;
// Translate and rotate
translate(width/2,height/2,-50);
rotateY(a);
for(int x=0; x<w; x+=skip) {
for(int y=0; y<h; y+=skip) {
int offset = x+y*w;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
PVector v = depthToWorld(x,y,rawDepth);
stroke(255);
pushMatrix();
// Scale up by 200
float factor = 200;
translate(v.x*factor,v.y*factor,factor-v.z*factor);
// Draw a point
point(0,0);
popMatrix();
}
}
// Rotate
a += 0.015f;
}
// These functions come from: http://graphics.stanford.edu/~mdfisher/Kinect.html
float rawDepthToMeters(int depthValue) {
if (depthValue < 2047) {
return (float)(1.0 / ((double)(depthValue) * -0.0030711016 + 3.3309495161));
}
return 0.0f;
}
PVector depthToWorld(int x, int y, int depthValue) {
final double fx_d = 1.0 / 5.9421434211923247e+02;
final double fy_d = 1.0 / 5.9104053696870778e+02;
final double cx_d = 3.3930780975300314e+02;
final double cy_d = 2.4273913761751615e+02;
PVector result = new PVector();
double depth = depthLookUp[depthValue];//rawDepthToMeters(depthValue);
result.x = (float)((x - cx_d) * depth * fx_d);
result.y = (float)((y - cy_d) * depth * fy_d);
result.z = (float)(depth);
return result;
}
void stop() {
kinect.quit();
super.stop();
}
And here are the errors:
processing.app.debug.RunnerException: NullPointerException
at processing.app.Sketch.placeException(Sketch.java:1543)
at processing.app.debug.Runner.findException(Runner.java:583)
at processing.app.debug.Runner.reportException(Runner.java:558)
at processing.app.debug.Runner.exception(Runner.java:498)
at processing.app.debug.EventThread.exceptionEvent(EventThread.java:367)
at processing.app.debug.EventThread.handleEvent(EventThread.java:255)
at processing.app.debug.EventThread.run(EventThread.java:89)
Exception in thread "Animation Thread" java.lang.NullPointerException
at org.openkinect.processing.Kinect.enableDepth(Kinect.java:70)
at PointCloud.setup(PointCloud.java:48)
at processing.core.PApplet.handleDraw(PApplet.java:1583)
at processing.core.PApplet.run(PApplet.java:1503)
at java.lang.Thread.run(Thread.java:637)
You are getting a NullPointerException since the value of the depth array is null. You can see from the source code of the Kinect class that there is a chance of a null value being returned by the getRawDepth() method. It is likely that there is no image available at the time.
The code can be found at:
https://github.com/shiffman/libfreenect/blob/master/wrappers/java/processing/KinectProcessing/src/org/openkinect/processing/Kinect.java
Your code should check if the depth array is null before trying to process it. For example...
int[] depth = kinect.getRawDepth();
if (depth == null) {
// do something here where you handle there being no image
} else {
// We're just going to calculate and draw every 4th pixel (equivalent of 160x120)
int skip = 4;
// Translate and rotate
translate(width/2,height/2,-50);
rotateY(a);
for(int x=0; x<w; x+=skip) {
for(int y=0; y<h; y+=skip) {
int offset = x+y*w;
// Convert kinect data to world xyz coordinate
int rawDepth = depth[offset];
PVector v = depthToWorld(x,y,rawDepth);
stroke(255);
pushMatrix();
// Scale up by 200
float factor = 200;
translate(v.x*factor,v.y*factor,factor-v.z*factor);
// Draw a point
point(0,0);
popMatrix();
}
}
// Rotate
a += 0.015f;
}
I would suggest using a Java Debugger so that you can see the state of the variables at the time the exception is thrown. Some people also like to use log statements to output the values of the variables at different points in the application.
You can then trace the problem back to a point where one of the values is not populated with a non-null value.
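For instance, a single log line just before the failing access makes the null case visible immediately:
int[] depth = kinect.getRawDepth();
// print the state of the array each frame while debugging
println("depth: " + (depth == null ? "null" : "length " + depth.length));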
The null pointer is happening when offset is greater than the length of the array returned by kinect.getRawDepth().
You have a lot of code here, so I'm not going to look at it all. Why can you assume that offset is less than the length of kinect.getRawDepth()?
Edit:
On second thought, #Asaph's comment is probably right.
The NullPointerException happens when depth[offset] does not exist or has not been allocated. Check when depth[offset] is undefined; that is the cause of the NullPointerException.
Check whether offset is greater than the length of kinect.getRawDepth().
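Putting both suggestions together, a defensive read could guard against both a missing frame and an out-of-range index (a sketch of the idea, not tested against the library):
int[] depth = kinect.getRawDepth();
int offset = x + y * w;
if (depth != null && offset < depth.length) {
  int rawDepth = depth[offset];
  PVector v = depthToWorld(x, y, rawDepth);
  // ... draw the point as before ...
}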
