Rendering an Image Using Threads Causes Noisy Pictures - java

I've built my own basic ray tracer.
I've tried to optimize its performance, so I thought of using threads.
When I render without threads, everything looks fine, but when I use threads I get a noisy picture with black stripes on it.
This is the code. For every column of pixels (the outer loop over i), I create a new executor that calculates the pixel colors in that column:
public void renderImage(){
    Camera camera = _scene.get_camera();
    for (int i = 0; i < _imageWriter.getWidth(); ++i){
        final int iFinal = i;
        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.submit(() -> {
            for (int j = 0; j < _imageWriter.getHeight(); ++j){
                ArrayList<Ray> rays = camera.constructRaysThroughPixel(
                        _imageWriter.getNx(), _imageWriter.getNy(),
                        iFinal, j, _scene.get_screenDistance(),
                        _imageWriter.getWidth(), _imageWriter.getHeight()
                );
                Color color = new Color();
                for (Ray ray: rays) {
                    ArrayList<GeoPoint> intersectionPoints = _scene.get_geometries().findIntersections(ray);
                    if (intersectionPoints.isEmpty() == true) {
                        color = color.add(_scene.get_background());
                    }
                    else {
                        GeoPoint closestPoint = getClosestPoint(intersectionPoints);
                        color = color.add(new Color(calcColor(closestPoint, new Ray(camera.get_origin(), closestPoint.point.subtract(camera.get_origin())))));
                    }
                }
                int length = rays.size();
                color = color.scale(1.0/length);
                _imageWriter.writePixel(iFinal, j, color.getColor());
            }
        });
    }
}
This is what I got when I used threads:
This is what I got when I didn't use threads:
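One detail worth checking: each column gets its own single-thread executor that is never shut down, and renderImage() returns before the submitted tasks are guaranteed to have finished. If the image is saved right after renderImage() returns, columns that haven't been rendered yet keep their initial black pixels, which would show up as black stripes. Below is a minimal sketch of waiting for all columns to finish, assuming the same Camera/ImageWriter API as above; renderColumn(i) is hypothetical shorthand for the inner j-loop from the question, and java.util.concurrent imports are assumed:
public void renderImage() throws InterruptedException {
    // One shared pool instead of a new executor per column.
    ExecutorService pool = Executors.newFixedThreadPool(
            Runtime.getRuntime().availableProcessors());
    for (int i = 0; i < _imageWriter.getWidth(); ++i) {
        final int iFinal = i;
        pool.submit(() -> renderColumn(iFinal)); // same work as the j-loop above
    }
    pool.shutdown();                             // stop accepting new tasks
    pool.awaitTermination(1, TimeUnit.HOURS);    // block until every column is written
}
Each task still builds its own Color, so the remaining shared state to double-check is whether camera, the scene geometries, and _imageWriter.writePixel() are safe to call from several threads at once.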

Related

Kinect for Processing error when using multiple cameras

I'm trying to run two Kinect V1 cameras simultaneously in Processing 3. I have gotten to a solution that is not sustainable, and I am trying to make something more stable/reliable.
At the moment, whenever I try to run both cameras simultaneously in a single sketch, I am hit with the error:
Could not claim interface on camera: -3
Failed to open camera subdevice or it is not disabled.
Failed to open motor subddevice or it is not disabled.
Failed to open audio subdevice or it is not disabled.
There are no kinects, returning null
One camera opens, the other does not. It is not always consistent which camera opens, which leads me to believe there's something tripping over permissions after the objects are created, or when the second object is initialized.
My code is as follows:
import SimpleOpenNI.*;
import org.openkinect.freenect.*;
import org.openkinect.freenect2.*;
import org.openkinect.processing.*;
import org.openkinect.tests.*;
//Imported libraries
//Some might be unnecessary but I don't have time to check
//Better safe than sorry, maybe I'll delete later
Kinect kinect;
Kinect kinect2;
PImage depthImage;
PImage depthImage2;
//Set depth threshold
float minDepth = 996;
float maxDepth = 2493;
float iWidth1 = 0;
float iHeight1 = 0;
float iWidth2 = 0;
float iHeight2 = 0;
//Double check for the number of devices, mostly for troubleshooting
int numDevices = 0;
//control which device is being controlled (in case I want device control)
int deviceIndex = 0;
void setup() {
//set Arbitrary size
size(640, 360);
//Set up window to resize, need to figure out how to keep things centered
surface.setResizable(true);
//not necessary, but good for window management. Window label
surface.setTitle("KINECT 1");
//get number of devices, print to console
numDevices = Kinect.countDevices();
println("number of V1 Kinects = "+numDevices);
//set up depth for the first kinect tracking
kinect = new Kinect(this);
kinect.initDepth();
//Blank Image
depthImage = new PImage(kinect.width, kinect.height);
//set up second window
String [] args = {"2 Frame Test"};
SecondApplet sa = new SecondApplet();
PApplet.runSketch(args, sa);
}
//Draw first window's Kinect Threshold
void draw () {
if ((width/1.7778) < height) {
iWidth1 = width;
iHeight1 = width/1.7778;
} else {
iWidth1 = height*1.7778;
iHeight1 = height;
}
//Raw Image
image(kinect.getDepthImage(), 0, 0, iWidth1, iHeight1);
//Threshold Equation
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImage.pixels[i] = color(255);
} else {
depthImage.pixels[i] = color(1);
}
}
}
public class SecondApplet extends PApplet {
public void settings() {
//arbitrary size
size(640, 360);
kinect2 = new Kinect(this);
kinect2.initDepth();
//Blank Image
depthImage2 = new PImage(kinect2.width, kinect2.height);
}
void draw () {
if ((width/1.7778) < height) {
iWidth2 = width;
iHeight2 = width/1.7778;
} else {
iWidth2 = height*1.7778;
iHeight2 = height;
}
image(kinect2.getDepthImage(), 0, 0, iWidth2, iHeight2);
surface.setResizable(true);
surface.setTitle("KINECT 2");
int[] rawDepth2 = kinect2.getRawDepth();
for (int i=0; i < rawDepth2.length; i++) {
if (rawDepth2[i] >= minDepth && rawDepth2[i] <= maxDepth) {
depthImage2.pixels[i] = color(255);
} else {
depthImage2.pixels[i] = color(1);
}
}
}
}
Curiously, the console output confirms that two Kinect devices are connected. For some reason, the sketch cannot access both at the same time.
I'm not a very experienced coder, so this code might look amateurish. I'm open to feedback on other parts, but really just looking to solve this problem.
This code returns the error pasted above when there are two Kinect V1's connected to the computer.
Running macOS 11.6.8 on an Intel MacBook Pro.
Using Daniel Shiffman's OpenKinect for Processing library as a starting point for the code.
I've run a successful iteration of this code with a slimmed-down version of Daniel Shiffman's Depth Threshold example.
import org.openkinect.freenect.*;
import org.openkinect.processing.*;
Kinect kinect;
// Depth image
PImage depthImg;
// Which pixels do we care about?
// These thresholds can also be found with a variety of methods
float minDepth = 996;
float maxDepth = 2493;
// What is the kinect's angle
float angle;
void setup() {
size(1280, 480);
kinect = new Kinect(this);
kinect.initDepth();
angle = kinect.getTilt();
// Blank image
depthImg = new PImage(kinect.width, kinect.height);
}
void draw() {
// Draw the raw image
image(kinect.getDepthImage(), 0, 0);
// Calibration
//minDepth = map(mouseX,0,width, 0, 4500);
//maxDepth = map(mouseY,0,height, 0, 4500);
// Threshold the depth image
int[] rawDepth = kinect.getRawDepth();
for (int i=0; i < rawDepth.length; i++) {
if (rawDepth[i] >= minDepth && rawDepth[i] <= maxDepth) {
depthImg.pixels[i] = color(255);
} else {
depthImg.pixels[i] = color(0);
}
}
// Draw the thresholded image
depthImg.updatePixels();
image(depthImg, kinect.width, 0);
//Comment for Calibration
fill(0);
text("TILT: " + angle, 10, 20);
text("THRESHOLD: [" + minDepth + ", " + maxDepth + "]", 10, 36);
//Calibration Text
//fill(255);
//textSize(32);
//text(minDepth + " " + maxDepth, 10, 64);
}
Using this code, I was able to get both cameras operating using the following process:
Connect a single Kinect v1 to the computer
Open and run the above code
Duplicate the sketch file
Connect the second Kinect V1 to the computer
Open and run the duplicated sketch of the same code
This worked for my purposes and remained stable for an extended period of time. However, this isn't a sustainable solution if anyone other than me wants to utilize this program.
Any help with this problem would be greatly appreciated
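One hedged thought rather than a confirmed fix: in the two-window sketch above, kinect2 is created inside SecondApplet.settings(), so it is bound to the second PApplet rather than the main one. It might be worth initialising both devices from the main sketch's setup() and letting the second window only display the result, roughly like this. This is only a sketch reusing calls already shown above; whether the library lets one sketch own two Kinect objects, and how it decides which physical device each one gets, are assumptions to verify against the library's documentation and examples:
void setup() {
    size(640, 360);
    surface.setResizable(true);
    surface.setTitle("KINECT 1");
    numDevices = Kinect.countDevices();
    println("number of V1 Kinects = " + numDevices);
    // both devices created and initialised by the main sketch
    kinect = new Kinect(this);
    kinect.initDepth();
    kinect2 = new Kinect(this);
    kinect2.initDepth();
    depthImage = new PImage(kinect.width, kinect.height);
    depthImage2 = new PImage(kinect2.width, kinect2.height);
    // the second window only draws kinect2's images, it doesn't open any device itself
    String[] args = {"2 Frame Test"};
    PApplet.runSketch(args, new SecondApplet());
}
SecondApplet.settings() would then shrink to just size(640, 360), and its draw() would keep using kinect2 as before.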

How can I output multiple images with a for loop in Processing?

I am making a program that should display 10 images next to each other using the loadImages(int i) function, which is called from void setup(). The problem is that it only shows the 10th picture and not the ones before it (1-9). I know it is probably only a minor modification to the code, but I don't get it. Thanks in advance!
import java.util.Random;

Random Random = new Random();
PImage img;
int[] cakes = new int[10];
int W, H;

void setup() {
    for (int i = 0; i <= cakes.length; i++) {
        img = loadImages(i);
    }
    W = img.width;
    H = img.height;
    surface.setSize(10 * W, 2 * H);
}

void mouseClicked() {
    scramble(cakes);
}

PImage loadImages(int i) {
    return loadImage("images/" + i + "_128x128.png");
}

void draw() {
    background(255);
    image(img, 0, 0);
}

void scramble(int[] a) {
    for (int i = 0; i < a.length; i++) {
        int rd0 = Random.nextInt(i+1);
        int rd1 = Random.nextInt(i+1);
        int temp = a[rd0];
        a[rd0] = a[rd1];
        a[rd1] = temp;
    }
}
EDIT: as pointed out by @Rabbid76, it would be MUCH BETTER to avoid loading the images at every iteration of the draw loop. Consider this carefully.
You can get your images in a loop. As you guessed, it's only a minor modification:
void draw() {
    background(255);
    for (int i = 0; i < 10; i++) {
        image(loadImages(i), i*128, 0); // i * 128 to draw the image side by side, or else you'll only see the last one you draw
    }
}
Have fun!
You have to create an array of images:
PImage[] img = new PImage[10];
Load the images to the array:
void setup() {
    for (int i = 0; i < img.length; i++) {
        img[i] = loadImages(i);
    }
    // [...]
}
Finally, draw the array of images, e.g.:
void draw() {
    background(255);
    for (int i = 0; i < img.length; i++) {
        image(img[i], i*W, 0);
    }
}
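Putting the pieces together with the sizing code from the question, setup() could look roughly like this (a sketch only; it assumes all ten files load successfully and share the same dimensions):
PImage[] img = new PImage[10];
int W, H;

void setup() {
    for (int i = 0; i < img.length; i++) {
        img[i] = loadImages(i);          // same helper as in the question
    }
    W = img[0].width;
    H = img[0].height;
    surface.setSize(10 * W, 2 * H);
}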

Redraw a selected area of pixels around the current pixel in Processing

I'm new to Processing. I'm trying to change the color (or another parameter, like hue or saturation) of the pixels around every pixel.
Nothing changes, instead of the desired result. Please help, anybody.
PImage imgg;

void setup() {
    size(250,166);
    imgg = loadImage("input.jpg");
    loadPixels();
    image(imgg,0,0);
}

void draw() {
    for (int i = 0; i < imgg.width; i++) {
        for (int j = 0; j < imgg.height; j++) {
            //get the brightness value of the current pixel
            int Bright_coeff = int(brightness(pixels[j*imgg.width+i]));
            //calculate the area around the current pixel
            int Bright_dist = Bright_coeff/10 ;
            //area around that pixel will update its color
            for (int x = 0; x < imgg.width; x++ ){
                for (int y = 0; y < imgg.height; y++){
                    //check if the distance between iterating pixels and current pixel from above cycle is less than Bright_dist
                    if ( dist(x, y, i, j)<Bright_dist ){
                        color qwerty = color(random(1,255),random(1,255),random(1,255)) ;
                        pixels[y*imgg.width+x] = qwerty;
                        updatePixels();
                    } else {
                        updatePixels();
                    }
                }
            }
        }
    }
}
loadPixels() loads the pixel data of the current display window.
It has to be called after the image is drawn to the window by image():
PImage imgg;

void setup() {
    size(128,128);
    imgg = loadImage("input.jpg");
    image(imgg,0,0);
    loadPixels();
}
The display is only updated once, after draw() has finished executing. updatePixels() sets the pixel data for the display window, so it is sufficient to call it once at the end of draw():
void draw() {
    // [...]
    updatePixels();
}
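Applied to the sketch from the question, the structure could look roughly like this (a sketch only; the per-pixel logic is unchanged, including its cost of scanning the whole image for every pixel, and it assumes the window and input.jpg have the same dimensions):
PImage imgg;

void setup() {
    size(250, 166);
    imgg = loadImage("input.jpg");
    image(imgg, 0, 0);
    loadPixels();                       // after image(), so the window's pixels are current
}

void draw() {
    for (int i = 0; i < imgg.width; i++) {
        for (int j = 0; j < imgg.height; j++) {
            int brightDist = int(brightness(pixels[j * imgg.width + i])) / 10;
            for (int x = 0; x < imgg.width; x++) {
                for (int y = 0; y < imgg.height; y++) {
                    if (dist(x, y, i, j) < brightDist) {
                        pixels[y * imgg.width + x] = color(random(1, 255), random(1, 255), random(1, 255));
                    }
                }
            }
        }
    }
    updatePixels();                     // once, at the end of draw()
}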

How to get a clear mask of users in simple-openni?

I am trying to extract the user silhouette and put it on top of my images. I was able to make a mask and cut the user out of the RGB image, but the contour is messy.
The question is how I can make the mask more precise (so it fits the real user). I've tried ERODE/DILATE filters, but they don't do much. Maybe I need some feather filter like in Photoshop, I don't know.
Here is my code.
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;

void setup()
{
    size(640*2, 480);
    context = new SimpleOpenNI(this);
    if (context.isInit() == false)
    {
        exit();
        return;
    }
    context.enableDepth();
    context.enableRGB();
    context.enableUser();
    context.alternativeViewPointDepthToImage();
}

void draw()
{
    frame.setTitle(int(frameRate) + " fps");
    context.update();
    int[] userMap = context.userMap();
    background(0, 0, 0);
    mask = loadImage("black640.jpg"); //just a black image
    int xSize = context.depthWidth();
    int ySize = context.depthHeight();
    mask.loadPixels();
    for (int y = 0; y < ySize; y++) {
        for (int x = 0; x < xSize; x++) {
            int index = x + y*xSize;
            if (userMap[index]>0) {
                mask.pixels[index]=color(255, 255, 255);
            }
        }
    }
    mask.updatePixels();
    image(mask, 0, 0);
    mask.filter(DILATE);
    mask.filter(DILATE);
    PImage rgb = context.rgbImage();
    rgb.mask(mask);
    image(rgb, context.depthWidth() + 10, 0);
}
It's good you're aligning the RGB and depth streams.
There are a few things that could be improved in terms of efficiency:
No need to reload a black image every single frame (in the draw() loop) since you're modifying all the pixels anyway:
mask = loadImage("black640.jpg"); //just a black image
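Instead, the mask can be created once in setup() and reused, which is what the refactored sketch further down does:
mask = createImage(640, 480, RGB);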
Also, since you don't need the x,y coordinates as you loop through the user data, you can use a single for loop which should be a bit faster:
for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
instead of:
for (int y = 0; y < ySize; y++) {
    for (int x = 0; x < xSize; x++) {
        int index = x + y*xSize;
        if (userMap[index]>0) {
            mask.pixels[index]=color(255, 255, 255);
        }
    }
}
Another hacky thing you could do is retrieve the userImage() from SimpleOpenNI, instead of the userData() and apply a THRESHOLD filter to it, which in theory should give you the same result as above.
For example:
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
    for (int x = 0; x < xSize; x++) {
        int index = x + y*xSize;
        if (userMap[index]>0) {
            mask.pixels[index]=color(255, 255, 255);
        }
    }
}
could be:
mask = context.userImage();
mask.filter(THRESHOLD);
In terms of filtering, if you want to shrink the silhouette you should ERODE, and blurring should give you a bit of that Photoshop-like feathering.
Note that some filter() calls take arguments (like BLUR), but others, like the ERODE/DILATE morphological filters, don't; you can still roll your own loops to deal with that.
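For example, repeated single-pass calls (the refactored sketch below does exactly this):
for (int i = 0; i < erodeAmt; i++) mask.filter(ERODE);
for (int i = 0; i < dilateAmt; i++) mask.filter(DILATE);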
I also recommend having some sort of easy-to-tweak interface (it can be a fancy slider or a simple keyboard shortcut) when playing with filters.
Here's a rough attempt at the refactored sketch with the above comments:
import SimpleOpenNI.*;
SimpleOpenNI context;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 0;
void setup()
{
size(640*2, 480);
context = new SimpleOpenNI(this);
if (context.isInit() == false)
{
exit();
return;
}
context.enableDepth();
context.enableRGB();
context.enableUser();
context.alternativeViewPointDepthToImage();
mask = createImage(640,480,RGB);
}
void draw()
{
frame.setTitle(int(frameRate) + " fps");
context.update();
int[] userMap = context.userMap();
background(0, 0, 0);
//you don't need to keep reloading the image every single frame since you're updating all the pixels below anyway
// mask = loadImage("black640.jpg"); //just a black image
// mask.loadPixels();
// int xSize = context.depthWidth();
// int ySize = context.depthHeight();
// for (int y = 0; y < ySize; y++) {
// for (int x = 0; x < xSize; x++) {
// int index = x + y*xSize;
// if (userMap[index]>0) {
// mask.pixels[index]=color(255, 255, 255);
// }
// }
// }
//a single loop is usually faster than a nested loop and you don't need the x,y coordinates anyway
for(int i = 0 ; i < numPixels ; i++){
mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
//erode
for(int i = 0 ; i < erodeAmt ; i++) mask.filter(ERODE);
//dilate
for(int i = 0 ; i < dilateAmt; i++) mask.filter(DILATE);
//blur
mask.filter(BLUR,blurAmt);
mask.updatePixels();
//preview the mask after you process it
image(mask, 0, 0);
PImage rgb = context.rgbImage();
rgb.mask(mask);
image(rgb, context.depthWidth() + 10, 0);
//print filter values for debugging purposes
fill(255);
text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt,15,15);
}
void keyPressed(){
if(key == 'e') erodeAmt--;
if(key == 'E') erodeAmt++;
if(key == 'd') dilateAmt--;
if(key == 'D') dilateAmt++;
if(key == 'b') blurAmt--;
if(key == 'B') blurAmt++;
//constrain values
if(erodeAmt < 0) erodeAmt = 0;
if(dilateAmt < 0) dilateAmt = 0;
if(blurAmt < 0) blurAmt = 0;
}
Unfortunately I can't test with an actual sensor right now, so please use the concepts explained, but bear in mind the full sketch code isn't tested.
The above sketch (if it runs) should allow you to use keys to control the filter parameters (e/E to decrease/increase erosion, d/D for dilation, b/B for blur). Hopefully you'll get satisfactory results.
When working with SimpleOpenNI in general, I advise recording an .oni file (check out the RecorderPlay example for that) of a person for the most common use case. This will save you some time in the long run when testing and will allow you to work remotely with the sensor detached. One thing to bear in mind: the depth resolution is reduced to half on recordings (but using a usingRecording boolean flag should keep things safe).
The last and probably most important point is about the quality of the end result. Your resulting image can't be much better if the source image isn't easy to work with to begin with. The depth data from the original Kinect sensor isn't great. The Asus sensors feel a wee bit more stable, but the difference is still negligible in most cases. If you are going to stick with one of these sensors, make sure you've got a clear background and decent lighting, without too much direct warm light (sunlight, incandescent bulbs, etc.), since it may interfere with the sensor.
If you want a more accurate user cut and the above filtering doesn't get the results you're after, consider switching to a better sensor like the Kinect V2. The depth quality is much better and the sensor is less susceptible to direct warm light. This may mean you need to use Windows (I see there's a KinectPV2 wrapper available) or openFrameworks (a C++ collection of libraries similar to Processing) with ofxKinectV2.
I've tried the built-in erode/dilate/blur filters in Processing, but they are very inefficient: every time I increment blurAmount in img.filter(BLUR, blurAmount), my FPS decreases by 5 frames.
So I decided to try OpenCV. It is much better in comparison, and the result is satisfactory.
import SimpleOpenNI.*;
import processing.video.*;
import gab.opencv.*;
SimpleOpenNI context;
OpenCV opencv;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 1;
Movie mov;
void setup(){
opencv = new OpenCV(this, 640, 480);
size(640*2, 480);
context = new SimpleOpenNI(this);
if (context.isInit() == false) {
exit();
return;
}
context.enableDepth();
context.enableRGB();
context.enableUser();
context.alternativeViewPointDepthToImage();
mask = createImage(640, 480, RGB);
mov = new Movie(this, "wild.mp4");
mov.play();
mov.speed(5);
mov.volume(0);
}
void movieEvent(Movie m) {
m.read();
}
void draw() {
frame.setTitle(int(frameRate) + " fps");
context.update();
int[] userMap = context.userMap();
background(0, 0, 0);
mask.loadPixels();
for (int i = 0; i < numPixels; i++) {
mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
mask.updatePixels();
opencv.loadImage(mask);
opencv.gray();
for (int i = 0; i < erodeAmt; i++) {
opencv.erode();
}
for (int i = 0; i < dilateAmt; i++) {
opencv.dilate();
}
if (blurAmt>0) {//blur with 0 amount causes error
opencv.blur(blurAmt);
}
mask = opencv.getSnapshot();
image(mask, 0, 0);
PImage rgb = context.rgbImage();
rgb.mask(mask);
image(mov, context.depthWidth() + 10, 0);
image(rgb, context.depthWidth() + 10, 0);
fill(255);
text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}
void keyPressed() {
if (key == 'e') erodeAmt--;
if (key == 'E') erodeAmt++;
if (key == 'd') dilateAmt--;
if (key == 'D') dilateAmt++;
if (key == 'b') blurAmt--;
if (key == 'B') blurAmt++;
//constrain values
if (erodeAmt < 0) erodeAmt = 0;
if (dilateAmt < 0) dilateAmt = 0;
if (blurAmt < 0) blurAmt = 0;
}

Simple edge detection method in Java

I am working on a method in Java to do some simple edge detection. I want to take the difference of two color intensities: one at a pixel and the other at the pixel directly below it. The picture I am using comes out entirely black no matter what threshold I pass to the method. I am not sure whether my method is simply not computing what I need it to, but I am at a loss as to what I should be tracing to find the issue.
Here is my method thus far:
public void edgeDetection(double threshold)
{
    Color white = new Color(1,1,1);
    Color black = new Color(0,0,0);
    Pixel topPixel = null;
    Pixel lowerPixel = null;
    double topIntensity;
    double lowerIntensity;
    for (int y = 0; y < this.getHeight()-1; y++) {
        for (int x = 0; x < this.getWidth(); x++) {
            topPixel = this.getPixel(x,y);
            lowerPixel = this.getPixel(x,y+1);
            topIntensity = (topPixel.getRed() + topPixel.getGreen() + topPixel.getBlue()) / 3;
            lowerIntensity = (lowerPixel.getRed() + lowerPixel.getGreen() + lowerPixel.getBlue()) / 3;
            if (Math.abs(topIntensity - lowerIntensity) < threshold)
                topPixel.setColor(white);
            else
                topPixel.setColor(black);
        }
    }
}
new Color(1,1,1) calls the Color(int,int,int) constructor of Color which takes values between 0 and 255. So your Color white is still basically black (well, very dark grey, but not enough to notice).
If you want to use the Color(float,float,float) constructor, you need float literals:
Color white = new Color(1.0f,1.0f,1.0f);
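For reference, with that one change applied, the method from the question would look like this (a sketch only, assuming the same Pixel/getPixel API as in the question; dividing by 3.0 instead of 3 is a minor extra tweak to avoid integer division):
public void edgeDetection(double threshold)
{
    Color white = new Color(1.0f, 1.0f, 1.0f);   // float constructor: 1.0f means full brightness
    Color black = new Color(0, 0, 0);
    for (int y = 0; y < this.getHeight() - 1; y++) {
        for (int x = 0; x < this.getWidth(); x++) {
            Pixel topPixel = this.getPixel(x, y);
            Pixel lowerPixel = this.getPixel(x, y + 1);
            double topIntensity = (topPixel.getRed() + topPixel.getGreen() + topPixel.getBlue()) / 3.0;
            double lowerIntensity = (lowerPixel.getRed() + lowerPixel.getGreen() + lowerPixel.getBlue()) / 3.0;
            if (Math.abs(topIntensity - lowerIntensity) < threshold)
                topPixel.setColor(white);
            else
                topPixel.setColor(black);
        }
    }
}
A second answer below takes a different approach, comparing each pixel's colorDistance() to its right and bottom neighbours instead of using the average intensity: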
public void edgeDetection(int edgeDist)
{
    Pixel leftPixel = null;
    Pixel rightPixel = null;
    Pixel bottomPixel = null;
    Pixel[][] pixels = this.getPixels2D();
    Color rightColor = null;
    boolean black;
    for (int row = 0; row < pixels.length; row++)
    {
        for (int col = 0; col < pixels[0].length; col++)
        {
            black = false;
            leftPixel = pixels[row][col];
            if (col < pixels[0].length-1)
            {
                rightPixel = pixels[row][col+1];
                rightColor = rightPixel.getColor();
                if (leftPixel.colorDistance(rightColor) > edgeDist)
                    black = true;
            }
            if (row < pixels.length-1)
            {
                bottomPixel = pixels[row+1][col];
                if (leftPixel.colorDistance(bottomPixel.getColor()) > edgeDist)
                    black = true;
            }
            if (black)
                leftPixel.setColor(Color.BLACK);
            else
                leftPixel.setColor(Color.WHITE);
        }
    }
}
