Java Invert Image Alpha

I have a situation where I need to invert the alpha channel of a VolatileImage.
My current implementation is the obvious one, but it is very slow:
public BufferedImage invertImage(VolatileImage v) {
    BufferedImage b = new BufferedImage(v.getWidth(), v.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    Graphics g = b.getGraphics();
    g.drawImage(v, 0, 0, null);
    for (int i = 0; i < b.getWidth(); i++) {
        for (int j = 0; j < b.getHeight(); j++) {
            Color c = new Color(b.getRGB(i, j), true);
            c = new Color(c.getRed(), c.getGreen(), c.getBlue(), 255 - c.getAlpha());
            b.setRGB(i, j, c.getRGB());
        }
    }
    return b;
}
This works fine, but is painfully slow. I have large images and need this to be fast. I have messed around with AlphaComposite but to no avail - as far as I understand, this is not really a compositing problem.
Given that 255 - x is equivalent to x ^ 0xff for 0 <= x < 256, can I not do an en-masse XOR over the alpha channel somehow?

After a lot of googling, I came across the DataBuffer classes being used as views into BufferedImages:
DataBufferByte buf = (DataBufferByte) b.getRaster().getDataBuffer();
byte[] values = buf.getData();
for (int i = 0; i < values.length; i += 4) values[i] = (byte) (values[i] ^ 0xff);
This inverts the alpha values of the BufferedImage in place (you do not need to draw it back; altering the array values alters the buffered image itself). For TYPE_4BYTE_ABGR the alpha byte is the first of each four-byte pixel, hence the stride of 4.
My tests show this method is about 20 times faster than jazzbassrob's improvement, which was about 1.5 times faster than my original method.
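Putting it together, here is a minimal sketch of the whole approach as one method (the class and method names are mine, not from the thread). Note that the cast to DataBufferByte, and the alpha byte coming first, are only valid because the image is created as TYPE_4BYTE_ABGR:

import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.awt.image.VolatileImage;

public final class AlphaInverter {
    // Copies v into an ABGR BufferedImage and inverts the alpha channel in place.
    public static BufferedImage invertAlpha(VolatileImage v) {
        BufferedImage b = new BufferedImage(v.getWidth(), v.getHeight(),
                BufferedImage.TYPE_4BYTE_ABGR);
        Graphics g = b.getGraphics();
        g.drawImage(v, 0, 0, null);
        g.dispose();
        // Safe only because the image is TYPE_4BYTE_ABGR, where each pixel
        // is stored as four bytes in A, B, G, R order.
        byte[] data = ((DataBufferByte) b.getRaster().getDataBuffer()).getData();
        for (int i = 0; i < data.length; i += 4) {
            data[i] ^= (byte) 0xff; // 255 - a == a ^ 0xff for a in 0..255
        }
        return b;
    }
}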

You should be able to speed it up by avoiding all the getters and the constructor inside the loop:
for (int i = 0; i < b.getWidth(); i++) {
    for (int j = 0; j < b.getHeight(); j++) {
        b.setRGB(i, j, b.getRGB(i, j) ^ 0xFF000000);
    }
}

Related

BufferedImage unexpectedly changing color

I have the following code, which creates a grayscale BufferedImage and then sets a random color for each pixel.
import java.awt.image.BufferedImage;

public class Main {
    public static void main(String[] args) {
        BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        int correct = 0, error = 0;
        for (int i = 0; i < right.getWidth(); i++) {
            for (int j = 0; j < right.getHeight(); j++) {
                int average = (int) (Math.random() * 255);
                int color = (0xff << 24) | (average << 16) | (average << 8) | average;
                right.setRGB(i, j, color);
                if (color != right.getRGB(i, j)) {
                    error++;
                } else {
                    correct++;
                }
            }
        }
        System.out.println(correct + ", " + error);
    }
}
Weird behaviour occurs in approximately 25-30% of the pixels: I set a color and, immediately afterwards, the pixel has a different value than the one I set. Am I setting colors the wrong way?
Here is your solution: avoid getRGB and use the Raster (faster and easier than getRGB) or, even better, the DataBuffer (fastest, but you have to handle the encoding yourself):
import java.awt.image.BufferedImage;

public class Main
{
    public static void main(String[] args)
    {
        BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        int correct = 0, error = 0;
        for (int x = 0; x < right.getWidth(); x++)
            for (int y = 0; y < right.getHeight(); y++)
            {
                int average = (int) (Math.random() * 255);
                right.getRaster().setSample(x, y, 0, average);
                if (average != right.getRaster().getSample(x, y, 0)) error++;
                else correct++;
            }
        System.out.println(correct + ", " + error);
    }
}
In your case getRGB is terrible, because the underlying encoding is an array of bytes (8 bits per pixel), and getRGB forces you to go through packed ARGB values. The raster does all the conversion work for you.
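For completeness, a minimal sketch of the DataBuffer route mentioned above, assuming the image really is TYPE_BYTE_GRAY (one byte per pixel, so the cast and the signed-byte masking below only hold for that type); the class name is illustrative:

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class GrayViaDataBuffer {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        // TYPE_BYTE_GRAY backs the raster with one byte per pixel, row by row.
        byte[] data = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        int correct = 0, error = 0;
        for (int i = 0; i < data.length; i++) {
            int average = (int) (Math.random() * 255);
            data[i] = (byte) average;      // write the gray sample directly
            int readBack = data[i] & 0xff; // mask to undo Java's signed bytes
            if (readBack == average) correct++; else error++;
        }
        System.out.println(correct + ", " + error);
    }
}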
I think your issue has to do with the image type (third parameter for BufferedImage constructor). If you change the type to BufferedImage.TYPE_INT_ARGB, then you will get 100% correct results.
Looking at the documentation for BufferedImage.getRGB(int, int), there is a conversion step whenever the image's color model is not the default one:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace. Color conversion takes place if this default model does not match the image ColorModel.
So you're probably seeing the mismatches due to the conversion.
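A small sketch (the class name is mine) that should make the conversion visible: write a packed sRGB gray with setRGB, then compare the raw raster sample with what getRGB hands back:

import java.awt.image.BufferedImage;

public class ConversionDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        int gray = 100; // an sRGB gray level
        int packed = (0xff << 24) | (gray << 16) | (gray << 8) | gray;
        img.setRGB(0, 0, packed);
        // The raster holds the value after conversion into the image's gray color space...
        int sample = img.getRaster().getSample(0, 0, 0);
        // ...and getRGB converts back, which may not round-trip exactly.
        int back = img.getRGB(0, 0) & 0xff;
        System.out.println("stored sample: " + sample + ", getRGB gray: " + back);
    }
}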
Wild guess:
Remove (0xff << 24) |, which sets the alpha channel, i.e. how opaque the color is. If transparency were applied as a yes/no depending on whether average is below or above 128, around 25% of pixels could end up with the wrong color mapping (very wild guess).

How to get a clear mask of users in simple-openni?

I am trying to extract the user silhouette and put it above my images. I was able to make a mask and cut the user out of the RGB image, but the contour is messy.
The question is how I can make the mask more precise (so it fits the real user). I've tried ERODE-DILATE filters, but they don't do much. Maybe I need some feather filter like in Photoshop; I don't know.
Here is my code.
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask = loadImage("black640.jpg"); //just a black image
  int xSize = context.depthWidth();
  int ySize = context.depthHeight();
  mask.loadPixels();
  for (int y = 0; y < ySize; y++) {
    for (int x = 0; x < xSize; x++) {
      int index = x + y*xSize;
      if (userMap[index] > 0) {
        mask.pixels[index] = color(255, 255, 255);
      }
    }
  }
  mask.updatePixels();
  image(mask, 0, 0);
  mask.filter(DILATE);
  mask.filter(DILATE);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
}
It's good that you're aligning the RGB and depth streams.
There are a few things that could be improved in terms of efficiency:
No need to reload a black image every single frame (in the draw() loop) since you're modifying all the pixels anyway:
mask = loadImage("black640.jpg"); //just a black image
Also, since you don't need the x,y coordinates as you loop through the user data, you can use a single for loop which should be a bit faster:
for (int i = 0; i < numPixels; i++) {
  mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
}
instead of:
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index] > 0) {
      mask.pixels[index] = color(255, 255, 255);
    }
  }
}
Another, hackier thing you could do is retrieve the userImage() from SimpleOpenNI instead of the userMap() and apply a THRESHOLD filter to it, which in theory should give you the same result as above.
For example:
int[] userMap = context.userMap();
background(0, 0, 0);
mask = loadImage("black640.jpg"); //just a black image
int xSize = context.depthWidth();
int ySize = context.depthHeight();
mask.loadPixels();
for (int y = 0; y < ySize; y++) {
  for (int x = 0; x < xSize; x++) {
    int index = x + y*xSize;
    if (userMap[index] > 0) {
      mask.pixels[index] = color(255, 255, 255);
    }
  }
}
could be:
mask = context.userImage();
mask.filter(THRESHOLD);
In terms of filtering, if you want to shrink the silhouette you should ERODE, and blurring should give you a bit of that Photoshop-like feathering.
Note that some filter() calls take arguments (like BLUR), while others, like the ERODE/DILATE morphological filters, don't; you can still roll your own loops to apply those repeatedly.
I also recommend having some sort of easy-to-tweak interface (it can be a fancy slider or a simple keyboard shortcut) when playing with filters.
Here's a rough attempt at the refactored sketch with the above comments:
import SimpleOpenNI.*;

SimpleOpenNI context;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 0;

void setup()
{
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false)
  {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
}

void draw()
{
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  //you don't need to keep reloading the image every single frame since you're updating all the pixels below anyway
  // mask = loadImage("black640.jpg"); //just a black image
  // mask.loadPixels();
  // int xSize = context.depthWidth();
  // int ySize = context.depthHeight();
  // for (int y = 0; y < ySize; y++) {
  //   for (int x = 0; x < xSize; x++) {
  //     int index = x + y*xSize;
  //     if (userMap[index] > 0) {
  //       mask.pixels[index] = color(255, 255, 255);
  //     }
  //   }
  // }
  //a single loop is usually faster than a nested loop and you don't need the x,y coordinates anyway
  mask.loadPixels();
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  //write the pixels back before filtering
  mask.updatePixels();
  //erode
  for (int i = 0; i < erodeAmt; i++) mask.filter(ERODE);
  //dilate
  for (int i = 0; i < dilateAmt; i++) mask.filter(DILATE);
  //blur
  mask.filter(BLUR, blurAmt);
  //preview the mask after you process it
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(rgb, context.depthWidth() + 10, 0);
  //print filter values for debugging purposes
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed(){
  if (key == 'e') erodeAmt--;
  if (key == 'E') erodeAmt++;
  if (key == 'd') dilateAmt--;
  if (key == 'D') dilateAmt++;
  if (key == 'b') blurAmt--;
  if (key == 'B') blurAmt++;
  //constrain values
  if (erodeAmt < 0) erodeAmt = 0;
  if (dilateAmt < 0) dilateAmt = 0;
  if (blurAmt < 0) blurAmt = 0;
}
Unfortunately I can't test with an actual sensor right now, so please use the concepts explained, but bear in mind that the full sketch code isn't tested.
The above sketch (if it runs) should allow you to use keys to control the filter parameters (e/E to decrease/increase erosion, d/D for dilation, b/B for blur). Hopefully you'll get satisfactory results.
When working with SimpleOpenNI in general, I advise recording an .oni file (check out the RecorderPlay example for that) of a person for the most common use case. This will save you some time in the long run when testing and will allow you to work remotely with the sensor detached. One thing to bear in mind: the depth resolution is reduced to half on recordings (but using a usingRecording boolean flag should keep things safe).
The last and probably most important point is about the quality of the end result. Your resulting image can't be much better if the source image isn't easy to work with to begin with. The depth data from the original Kinect sensor isn't great. The Asus sensors feel a wee bit more stable, but the difference is negligible in most cases. If you are going to stick with one of these sensors, make sure you've got a clear background and decent lighting, without too much direct warm light (sunlight, incandescent light bulbs, etc.), since it may interfere with the sensor.
If you want a more accurate user cut and the above filtering doesn't get the results you're after, consider switching to a better sensor like the KinectV2. The depth quality is much better and the sensor is less susceptible to direct warm light. This may mean you need to use Windows (I see there's a KinectPV2 wrapper available) or openFrameworks (a C++ collection of libraries similar to Processing) with ofxKinectV2.
I've tried the built-in erode-dilate-blur in Processing, but they are very inefficient: every time I increment blurAmount in img.filter(BLUR, blurAmount), my FPS decreases by 5 frames.
So I decided to try OpenCV. It is much better in comparison, and the result is satisfactory.
import SimpleOpenNI.*;
import processing.video.*;
import gab.opencv.*;

SimpleOpenNI context;
OpenCV opencv;
PImage mask;
int numPixels = 640*480;
int dilateAmt = 1;
int erodeAmt = 1;
int blurAmt = 1;
Movie mov;

void setup(){
  opencv = new OpenCV(this, 640, 480);
  size(640*2, 480);
  context = new SimpleOpenNI(this);
  if (context.isInit() == false) {
    exit();
    return;
  }
  context.enableDepth();
  context.enableRGB();
  context.enableUser();
  context.alternativeViewPointDepthToImage();
  mask = createImage(640, 480, RGB);
  mov = new Movie(this, "wild.mp4");
  mov.play();
  mov.speed(5);
  mov.volume(0);
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  frame.setTitle(int(frameRate) + " fps");
  context.update();
  int[] userMap = context.userMap();
  background(0, 0, 0);
  mask.loadPixels();
  for (int i = 0; i < numPixels; i++) {
    mask.pixels[i] = userMap[i] > 0 ? color(255) : color(0);
  }
  mask.updatePixels();
  opencv.loadImage(mask);
  opencv.gray();
  for (int i = 0; i < erodeAmt; i++) {
    opencv.erode();
  }
  for (int i = 0; i < dilateAmt; i++) {
    opencv.dilate();
  }
  if (blurAmt > 0) { //blur with 0 amount causes error
    opencv.blur(blurAmt);
  }
  mask = opencv.getSnapshot();
  image(mask, 0, 0);
  PImage rgb = context.rgbImage();
  rgb.mask(mask);
  image(mov, context.depthWidth() + 10, 0);
  image(rgb, context.depthWidth() + 10, 0);
  fill(255);
  text("erodeAmt: " + erodeAmt + "\tdilateAmt: " + dilateAmt + "\tblurAmt: " + blurAmt, 15, 15);
}

void keyPressed() {
  if (key == 'e') erodeAmt--;
  if (key == 'E') erodeAmt++;
  if (key == 'd') dilateAmt--;
  if (key == 'D') dilateAmt++;
  if (key == 'b') blurAmt--;
  if (key == 'B') blurAmt++;
  //constrain values
  if (erodeAmt < 0) erodeAmt = 0;
  if (dilateAmt < 0) dilateAmt = 0;
  if (blurAmt < 0) blurAmt = 0;
}

How to threshold a BufferedImage in Java

I have read my original image as a BufferedImage in Java and then, following some operations, I am trying to threshold my image to either high (255) or low (0). But when I save my image (actually, I overwrite it with the new values), the pixel values are not only 0 and 255; some neighbouring values appear, and I don't understand why.
READING MY IMAGE
File input = new File("/../Screenshots/1.jpg");
BufferedImage image = ImageIO.read(input);
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        Color c = new Color(image.getRGB(i, j));
        powerspectrum[i][j] = (int) ((c.getRed() * 0.299)
                + (c.getGreen() * 0.587) + (c.getBlue() * 0.114));
    }
}
THRESHOLDING MY IMAGE
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        if (gradient[i][j] <= upperthreshold
                && gradient[i][j] >= lowerthreshold)
            spaces[i][j] = 255;
        else
            spaces[i][j] = 0;
        Color gradColor = new Color(spaces[i][j], spaces[i][j], spaces[i][j]);
        image.setRGB(i, j, gradColor.getRGB());
    }
}
SAVING MY IMAGE
File gradoutput = new File("/../Screenshots/3_GradThresh.jpg");
ImageIO.write(image, "jpg", gradoutput);
I don't know how to cut off the other intensity values.
I suspect this is because JPG is a lossy format. When you save a JPG to disk, it performs compression that alters pixel values. Try working with a bitmap (BMP) to see if that removes these neighbouring gray values.
+1 on the JPG compression issue. In image processing we use PNG (the best lossless compression format) or TIFF (worst-case scenario).
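A minimal sketch of the lossless save, reusing the image and output path from the question (only the format changes):

// PNG is lossless, so thresholded 0/255 values survive the round trip to disk.
File gradoutput = new File("/../Screenshots/3_GradThresh.png");
ImageIO.write(image, "png", gradoutput);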
By the way, the setRGB/getRGB methods have terrible performance. The fastest option is to modify the DataBuffer directly, but then you have to handle each type of image encoding yourself. An alternative (but slower) solution is to use the Raster; then you don't have to worry about the encoding.
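For illustration, a minimal sketch (not the answerer's code) of the thresholding step done through the Raster, reusing the question's gradient array and threshold names; the helper name is hypothetical:

import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

// Hypothetical helper: thresholds a gradient array into a single-band grayscale
// image via the raster, avoiding per-pixel Color objects and getRGB/setRGB conversions.
static BufferedImage threshold(int[][] gradient, int lowerthreshold, int upperthreshold) {
    int w = gradient.length, h = gradient[0].length;
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    WritableRaster raster = out.getRaster();
    for (int i = 0; i < w; i++) {
        for (int j = 0; j < h; j++) {
            int v = (gradient[i][j] >= lowerthreshold && gradient[i][j] <= upperthreshold) ? 255 : 0;
            raster.setSample(i, j, 0, v); // band 0 is the gray channel
        }
    }
    return out;
}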

Reading a black/white image in Java with TYPE_USHORT_GRAY

I have the following code to read a black-and-white picture in Java.
BufferedImage image = ImageIO.read(new File(path));
BufferedImage img = new BufferedImage(image.getWidth(null), image.getHeight(null), BufferedImage.TYPE_USHORT_GRAY);
Graphics g = img.createGraphics();
g.drawImage(image, 0, 0, null);
g.dispose();
int w = img.getWidth();
int h = img.getHeight();
int[][] array = new int[w][h];
for (int j = 0; j < w; j++) {
    for (int k = 0; k < h; k++) {
        array[j][k] = img.getRGB(j, k);
        System.out.print(array[j][k]);
    }
}
As you can see, I have set the type of the BufferedImage to TYPE_USHORT_GRAY, and I expect to see numbers between 0 and 255 in the 2D array, but instead I see -1 and other large integers. Can anyone highlight my mistake, please?
As already mentioned in the comments and answers, the mistake is using the getRGB() method, which converts your pixel values to the packed int format in the default sRGB color space (TYPE_INT_ARGB). In this format, -1 is the same as 0xffffffff, which means pure white.
To access your unsigned short pixel data directly, try:
int w = img.getWidth();
int h = img.getHeight();

DataBufferUShort buffer = (DataBufferUShort) img.getRaster().getDataBuffer(); // Safe cast as img is of type TYPE_USHORT_GRAY

// Conveniently, the buffer already contains the data array
short[] arrayUShort = buffer.getData();

// Access it like:
int grayPixel = arrayUShort[x + y * w] & 0xffff;

// ...or alternatively, if you like to re-arrange the data to a 2-dimensional array:
int[][] array = new int[w][h];
// Note: I switched the loop order to access pixels in a more natural order
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        array[x][y] = buffer.getElem(x + y * w);
        System.out.print(array[x][y]);
    }
}

// Access it like:
grayPixel = array[x][y];
PS: It's probably still a good idea to look at the second link provided by @blackSmith, for proper color to gray conversion. ;-)
A BufferedImage of type TYPE_USHORT_GRAY, as its name says, stores pixels using 16 bits (the size of a short is 16 bits). The range 0..255 covers only 8 bits, so the stored values may be well beyond 255.
And BufferedImage.getRGB() does not return these 16 pixel data bits but quoting from its javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
getRGB() will always return the pixel in RGB format regardless of the type of the BufferedImage.
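If you do want values in the familiar 0..255 range, one option (a sketch, assuming the TYPE_USHORT_GRAY image img from the question) is to scale each 16-bit sample down to 8 bits via the raster:

import java.awt.image.BufferedImage;
import java.awt.image.Raster;

// Sketch: scale 16-bit gray samples down to the 0..255 range.
static int[][] toEightBit(BufferedImage img) {
    Raster raster = img.getRaster();
    int w = img.getWidth(), h = img.getHeight();
    int[][] out = new int[w][h];
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int gray16 = raster.getSample(x, y, 0); // 0..65535 for TYPE_USHORT_GRAY
            out[x][y] = gray16 >> 8;                // keep the high byte: 0..255
        }
    }
    return out;
}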

Android: how to execute this for loop faster than it is now?

I am going to set pixels of my Bitmap at some specific points.
For that I am using a for loop. But because it scans the whole image, it takes time.
What alternative can help me execute it faster?
The for loop is as below:
public void drawLoop(){
    int ANTILAISING_TOLERANCE = 100;
    for(int x = 0; x < mask.getWidth(); x++){
        for(int y = 0; y < mask.getHeight(); y++){
            g = (mask.getPixel(x,y) & 0x0000FF00) >> 8;
            r = (mask.getPixel(x,y) & 0x00FF0000) >> 16;
            b = (mask.getPixel(x,y) & 0x000000FF);
            if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE)
                colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size()-1;
    System.out.println("Position in drawFunction is: "+position);
}
Please help me with that.
Thanks.
I also had this problem.
My program checks every pixel on the bitmap, testing whether the green (RGB) value is higher than the red and blue values, on a bitmap of 3264 x 2448 (the Samsung Galaxy S2 camera size).
It takes 3 seconds to scan and check the whole bitmap, which is pretty fast if you ask me.
This is my code:
try {
    decoder_image = BitmapRegionDecoder.newInstance("yourfilepath", false);
} catch (IOException e) {
    e.printStackTrace();
}
example filepath: /mnt/sdcard/DCIM/Camera/image.jpg
try {
    final int width = decoder_image.getWidth();
    final int height = decoder_image.getHeight();
    // Divide the bitmap into 1100x1100 sized chunks and process it.
    // This makes sure that the app will not be "overloaded"
    int wSteps = (int) Math.ceil(width / 1100.0);
    int hSteps = (int) Math.ceil(height / 1100.0);
    Rect rect = new Rect();
    for (int h = 0; h < hSteps; h++) {
        for (int w = 0; w < wSteps; w++) {
            int w2 = Math.min(width, (w + 1) * 1100);
            int h2 = Math.min(height, (h + 1) * 1100);
            rect.set(w * 1100, h * 1100, w2, h2);
            mask = decoder_image.decodeRegion(rect, null);
            try {
                int bWidth = mask.getWidth();
                int bHeight = mask.getHeight();
                int[] pixels = new int[bWidth * bHeight];
                mask.getPixels(pixels, 0, bWidth, 0, 0, bWidth, bHeight);
                for (int y = 0; y < bHeight; y++) {
                    for (int x = 0; x < bWidth; x++) {
                        int index = y * bWidth + x;
                        int r = (pixels[index] >> 16) & 0xff; //bitwise shifting
                        int g = (pixels[index] >> 8) & 0xff;
                        int b = pixels[index] & 0xff;
                        if (Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE)
                            colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
                    }
                }
            } finally {
                mask.recycle();
            }
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size()-1;
    System.out.println("Position in drawFunction is: "+position);
} finally {
    decoder_image.recycle();
}
I also cut the bitmap into chunks, because the Samsung Galaxy S2 does not have enough memory to scan the whole bitmap at once.
Hope this helped.
Edit:
I just noticed (my fault) that the question was about setting pixels, not only reading them. I am now trying to make my code fit yours; I have already changed some of it and am working on it at the moment.
Edit 2:
Made an adjustment to the code; I hope this works.
Don't forget to change "yourfilepath" at the top of the code.
Just a suggestion to cut the loop count in half. You should try it with your images and see if it works.
Idea: by assuming that the next pixel is the same as the current pixel, we only analyse the current pixel and apply the result to both the current and the next pixel.
Drawback: you have a 50% chance of one distorted pixel at each color boundary.
Example: Turn color 1 into 3
Original: 1 1 1 1 1 2 2 2 2 2 2 1 1 1
After for loop: 3 3 3 3 3 3 2 2 2 2 2 2 3 3 (Only 7 loops are executed. But color 2 shifted by 1 pixel.)
With the original logic, 14 loops would be executed.
for(int x = 0; x < mask.getWidth(); x++){
    for(int y = 0; y < mask.getHeight() - 1; y += 2) { // Change point 1
        g = (mask.getPixel(x,y) & 0x0000FF00) >> 8;
        r = (mask.getPixel(x,y) & 0x00FF0000) >> 16;
        b = (mask.getPixel(x,y) & 0x000000FF);
        if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE) {
            colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
            colored.setPixel(x, y+1, (colored.getPixel(x, y) & 0xFFFF0000)); // Change point 2: apply the same result to the next pixel
        }
    }
}
iDroid,
You've got a very tough situation here. Whenever you do pixel-by-pixel operations, things get a little cumbersome, so a bunch of minor optimizations are key. I'm not certain how much impact they will have on your overall process, but I know these general habits have saved me a LOT of time when optimizing code.
public void drawLoop(){
    int ANTILAISING_TOLERANCE = 100;
    //EDIT: Moving this outside the loop is FAR better.
    // It saves you a method call, and the number doesn't change in the loop anyway.
    int maskHeight = mask.getHeight();
    //EDIT: Reverse the loops. Comparisons against 0 are faster than against any other number,
    // and this saves you a ton of method calls.
    for(int x = mask.getWidth(); --x >= 0 ; ){
        for(int y = maskHeight; --y >= 0 ; ){
            //EDIT: Saves you 2 method calls for the same result.
            int atPixel = mask.getPixel(x, y);
            g = (atPixel & 0x0000FF00) >> 8;
            r = (atPixel & 0x00FF0000) >> 16;
            b = (atPixel & 0x000000FF);
            if(Math.abs(sR-r) < ANTILAISING_TOLERANCE && Math.abs(sG-g) < ANTILAISING_TOLERANCE && Math.abs(sB-b) < ANTILAISING_TOLERANCE)
                colored.setPixel(x, y, (colored.getPixel(x, y) & 0xFFFF0000));
        }
    }
    imageView.setImageBitmap(colored);
    coloreBitmap.add(colored.copy(Config.ARGB_8888, true));
    position = coloreBitmap.size()-1;
    System.out.println("Position in drawFunction is: "+position);
}
Aside from that, anything else will create "lossy" behavior but will have far higher yields.
Hope this helps,
FuzzicalLogic
