I am working on an assignment using Sobel edge detection on an image. I am currently struggling with the gradient operation. I am receiving a "bad operand types for binary operator '*'" error when compiling. I think it may be because I defined all of my neighbouring pixels as single-letter Color variables, and I'm not sure what my next step should be. Any help would be greatly appreciated! Thank you in advance!
public static BufferedImage sobelEdgeDetect(BufferedImage input) {
int img_width = input.getWidth();
int img_height = input.getHeight();
BufferedImage output_img = new BufferedImage(
img_width, img_height, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < img_width; x++) {
for (int y = 0; y < img_height; y++) {
Color color_at_pos = new Color(input.getRGB(x, y));
int red = color_at_pos.getRed();
int green = color_at_pos.getGreen();
int blue = color_at_pos.getBlue();
int average = (red + green + blue) / 3;
Color A,B,C,D,F,G,H,I;
if(x-1 > 0 && y+1 < img_height){
A = new Color (input.getRGB(x-1,y+1));
} else {
A = Color.BLACK;
}
if(y+1 < img_height){
B = new Color (input.getRGB(x,y+1));
} else {
B = Color.BLACK;
}
if(x+1 < img_width && y+1 < img_height){
C = new Color (input.getRGB(x+1,y+1));
} else {
C = Color.BLACK;
}
if(x-1 > 0){
D = new Color (input.getRGB(x-1,y));
} else {
D = Color.BLACK;
}
if(x+1 < img_width){
F = new Color (input.getRGB(x+1,y));
} else {
F = Color.BLACK;
}
if(x-1 > 0 && y-1 > 0){
G = new Color (input.getRGB(x-1,y-1));
} else {
G = Color.BLACK;
}
if(y-1 > 0){
H = new Color (input.getRGB(x,y-1));
} else {
H = Color.BLACK;
}
if(x+1 > img_width && y-1 > 0){
I = new Color (input.getRGB(x+1,y-1));
} else {
I = Color.BLACK;
}
int gx = (-A + (-2*D) + -G + C + (2*F)+ I);
int gy = (A + (2*B) + C + (-G) + (-2*H) + (-I));
int result = (int)math.sqrt((gx*gx) + (gy*gy));
if (average < 0) {
average = 0;
} else if (average > 255) {
average = 255;
}
Color average_color = new Color(average, average, average);
output_img.setRGB(x, y, average_color.getRGB());
}
}
return output_img;
}
The problem lies in the handling of the Colors, here:
int gx = (-A + (-2*D) + -G + C + (2*F)+ I);
int gy = (A + (2*B) + C + (-G) + (-2*H) + (-I));
This won't work, because you cannot apply arithmetic operators like * or - to Color objects.
To get the gradient you have to either
handle each color channel separately, or
handle the image in grayscale.
I can't tell you which one is right for your assignment.
Handle each color channel separately:
With this approach you compute the gradient for each color channel on its own to detect edges for that channel:
//red:
int redGx = (-A.getRed() + (-2*D.getRed()) + -G.getRed() + C.getRed() + (2*F.getRed())+ I.getRed());
int redGy = ...
//green:
int greenGx = (-A.getGreen()...
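Written out in full, the per-channel gradients could look roughly like this; it is only a sketch completing the pattern above (green and blue follow the red channel exactly, and the magnitude line is my addition):
// Red channel, same Sobel coefficients as the gx / gy lines in the question
int redGx = -A.getRed() - 2 * D.getRed() - G.getRed()
          +  C.getRed() + 2 * F.getRed() + I.getRed();
int redGy =  A.getRed() + 2 * B.getRed() + C.getRed()
          -  G.getRed() - 2 * H.getRed() - I.getRed();
int redEdge = (int) Math.sqrt(redGx * redGx + redGy * redGy);
// Green and blue work the same way with getGreen() / getBlue(),
// and the three per-channel edge values can then be combined or written separately.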
Handle the image as grayscale:
int gx = (-toGray(A) + (-2*toGray(D)) + -toGray(G) + toGray(C) + (2*toGray(F)) + toGray(I));
int gy = ...
You have to provide the toGray method yourself / average the colors:
static int toGray(Color col){
return (col.getRed() + col.getGreen() + col.getBlue()) / 3;
}
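Putting the grayscale variant into the loop from the question, a minimal sketch could look like this (A-I, x, y and output_img are the names from your code; clamping the magnitude and writing it instead of the plain average are my assumptions about what the assignment wants):
// Gradients from the grayscale neighbour values (same kernels as the question's gx / gy)
int gx = -toGray(A) - 2 * toGray(D) - toGray(G)
       +  toGray(C) + 2 * toGray(F) + toGray(I);
int gy =  toGray(A) + 2 * toGray(B) + toGray(C)
       -  toGray(G) - 2 * toGray(H) - toGray(I);
// Gradient magnitude, clamped to the valid 0..255 range (note Math.sqrt with a capital M)
int magnitude = (int) Math.sqrt(gx * gx + gy * gy);
if (magnitude > 255) {
    magnitude = 255;
}
// Write the edge strength as a gray pixel
Color edgeColor = new Color(magnitude, magnitude, magnitude);
output_img.setRGB(x, y, edgeColor.getRGB());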
I currently have a script that takes a screencap of an area and searches that area for every color value. However, I want the script to stop running once a specific color is not found in any area of the image. My current script stops the moment that a pixel is not the correct color, which is not what I want.
import java.awt.*;
import java.awt.image.BufferedImage;
public class Main {
public static void main(String args[]) throws AWTException {
int i = 0;
while (i < 1){
BufferedImage image = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
BufferedImage image2 = new Robot().createScreenCapture(new Rectangle(70, 102,200,222));
for (int y = 0; y < image2.getHeight(); y++) {
for (int x = 0; x < image2.getWidth(); x++) {
Color pixcolor = new Color(image2.getRGB(x, y));
int red = pixcolor.getRed();
int green = pixcolor.getGreen();
int blue = pixcolor.getBlue();
System.out.println("Red = " + red);
System.out.println("Green = " + green);
System.out.println("Blue = " + blue);
if (red == 253 && green == 222 && blue == 131){
continue;
}
else {
System.out.println(x);
System.out.println(y);
i ++;
System.exit(1);
}
}
}
}
}
}
Would something like this work? Basically I just use a boolean isFound to remember
whether the color was found in the picture; if it is not found during a full scan, the while loop ends.
import java.awt.*;
import java.awt.image.BufferedImage;
public class Main {
public static void main(String args[]) throws AWTException {
boolean isFound = true; // before this was Boolean isNotFound = true;
while (isFound) {
BufferedImage image = new Robot().createScreenCapture(new Rectangle(Toolkit.getDefaultToolkit().getScreenSize()));
BufferedImage image2 = new Robot().createScreenCapture(new Rectangle(70, 102, 200, 222));
isFound = false; // assume the color is absent until a matching pixel is found
for (int y = 0; y < image2.getHeight(); y++) {
for (int x = 0; x < image2.getWidth(); x++) {
Color pixcolor = new Color(image2.getRGB(x, y));
int red = pixcolor.getRed();
int green = pixcolor.getGreen();
int blue = pixcolor.getBlue();
System.out.println("Red = " + red);
System.out.println("Green = " + green);
System.out.println("Blue = " + blue);
if (red == 253 && green == 222 && blue == 131) {
isFound = true;
break;
}
}
if (isFound) break;
}
}
}
}
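If it helps readability, the scan could also be pulled out into a small helper method; this is just a sketch, and containsColor is a name I made up:
// Returns true if any pixel in the image exactly matches the target color
static boolean containsColor(BufferedImage img, Color target) {
    int targetRGB = target.getRGB();
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            if (img.getRGB(x, y) == targetRGB) {
                return true;
            }
        }
    }
    return false;
}
The loop body then collapses to isFound = containsColor(image2, new Color(253, 222, 131));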
I have created a sample SWT application. I am uploading a few images into the application. I have to resize all images that are larger than 16x16 (width x height) and save them to a separate location.
For this reason I am scaling each image and saving the scaled image to my destination location. Below is the piece of code I am using to do that.
I use getImageData() to get the image data, and the ImageLoader save() method to save it.
final Image mySampleImage = ImageResizer.scaleImage(img, 16, 16);
final ImageLoader imageLoader = new ImageLoader();
imageLoader.data = new ImageData[] { mySampleImage.getImageData() };
final String fileExtension = inputImagePath.substring(inputImagePath.lastIndexOf(".") + 1);
if ("GIF".equalsIgnoreCase(fileExtension)) {
imageLoader.save(outputImagePath, SWT.IMAGE_GIF);
} else if ("PNG".equalsIgnoreCase(fileExtension)) {
imageLoader.save(outputImagePath, SWT.IMAGE_PNG);
}
imageLoader.save(outputImagePath, SWT.IMAGE_GIF); is throwing the exception below when I try to save a few specific images (GIF or PNG format).
org.eclipse.swt.SWTException: Unsupported color depth
at org.eclipse.swt.SWT.error(SWT.java:4533)
at org.eclipse.swt.SWT.error(SWT.java:4448)
at org.eclipse.swt.SWT.error(SWT.java:4419)
at org.eclipse.swt.internal.image.GIFFileFormat.unloadIntoByteStream(GIFFileFormat.java:427)
at org.eclipse.swt.internal.image.FileFormat.unloadIntoStream(FileFormat.java:124)
at org.eclipse.swt.internal.image.FileFormat.save(FileFormat.java:112)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:218)
at org.eclipse.swt.graphics.ImageLoader.save(ImageLoader.java:259)
at mainpackage.ImageResizer.resize(ImageResizer.java:55)
at mainpackage.ImageResizer.main(ImageResizer.java:110)
Let me know if there is any other way to do the same, or any way to resolve this issue.
I finally found a solution by referring to this existing Eclipse bug: Unsupported color depth.
In the code below I create a PaletteData with RGB values and update my ImageData.
My updateImagedata() method takes the scaled image and returns the proper updated ImageData if the image depth is 32 or higher.
private static ImageData updateImagedata(Image image) {
ImageData data = image.getImageData();
if (!data.palette.isDirect && data.depth <= 8)
return data;
// compute a histogram of color frequencies
HashMap<RGB, ColorCounter> freq = new HashMap<>();
int width = data.width;
int[] pixels = new int[width];
int[] maskPixels = new int[width];
for (int y = 0, height = data.height; y < height; ++y) {
data.getPixels(0, y, width, pixels, 0);
for (int x = 0; x < width; ++x) {
RGB rgb = data.palette.getRGB(pixels[x]);
ColorCounter counter = (ColorCounter) freq.get(rgb);
if (counter == null) {
counter = new ColorCounter();
counter.rgb = rgb;
freq.put(rgb, counter);
}
counter.count++;
}
}
// sort colors by most frequently used
ColorCounter[] counters = new ColorCounter[freq.size()];
freq.values().toArray(counters);
Arrays.sort(counters);
// pick the most frequently used 256 (or fewer), and make a palette
ImageData mask = null;
if (data.transparentPixel != -1 || data.maskData != null) {
mask = data.getTransparencyMask();
}
int n = Math.min(256, freq.size());
RGB[] rgbs = new RGB[n + (mask != null ? 1 : 0)];
for (int i = 0; i < n; ++i)
rgbs[i] = counters[i].rgb;
if (mask != null) {
rgbs[rgbs.length - 1] = data.transparentPixel != -1 ? data.palette.getRGB(data.transparentPixel)
: new RGB(255, 255, 255);
}
PaletteData palette = new PaletteData(rgbs);
ImageData newData = new ImageData(width, data.height, 8, palette);
if (mask != null)
newData.transparentPixel = rgbs.length - 1;
for (int y = 0, height = data.height; y < height; ++y) {
data.getPixels(0, y, width, pixels, 0);
if (mask != null)
mask.getPixels(0, y, width, maskPixels, 0);
for (int x = 0; x < width; ++x) {
if (mask != null && maskPixels[x] == 0) {
pixels[x] = rgbs.length - 1;
} else {
RGB rgb = data.palette.getRGB(pixels[x]);
pixels[x] = closest(rgbs, n, rgb);
}
}
newData.setPixels(0, y, width, pixels, 0);
}
return newData;
}
To find the index of the closest palette color:
static int closest(RGB[] rgbs, int n, RGB rgb) {
int minDist = 256*256*3;
int minIndex = 0;
for (int i = 0; i < n; ++i) {
RGB rgb2 = rgbs[i];
int da = rgb2.red - rgb.red;
int dg = rgb2.green - rgb.green;
int db = rgb2.blue - rgb.blue;
int dist = da*da + dg*dg + db*db;
if (dist < minDist) {
minDist = dist;
minIndex = i;
}
}
return minIndex;
}
The ColorCounter class:
class ColorCounter implements Comparable<ColorCounter> {
RGB rgb;
int count;
public int compareTo(ColorCounter o) {
return o.count - count;
}
}
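For context, this is roughly how the conversion slots into the save code from the question; just a sketch, reusing ImageResizer.scaleImage, img and the path variables from above:
final Image mySampleImage = ImageResizer.scaleImage(img, 16, 16);
final ImageLoader imageLoader = new ImageLoader();
// Reduce the image to an 8-bit indexed palette first, so the GIF writer accepts the depth
imageLoader.data = new ImageData[] { updateImagedata(mySampleImage) };
final String fileExtension = inputImagePath.substring(inputImagePath.lastIndexOf(".") + 1);
if ("GIF".equalsIgnoreCase(fileExtension)) {
    imageLoader.save(outputImagePath, SWT.IMAGE_GIF);
} else if ("PNG".equalsIgnoreCase(fileExtension)) {
    imageLoader.save(outputImagePath, SWT.IMAGE_PNG);
}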
Basically, what I need to do is take a 2D array of bit flags and produce a list of 2D rectangles that fill the entire area with the minimum number of shapes required to perfectly cover the space. I am doing this to convert a 2D top-down monochrome image of a map into rectangle shapes that perfectly represent the passed-in image, which will then be used to generate a platform in a 3D world. I need to minimize the total number of shapes, because each shape will become a separate object, and flooding the map with one-unit squares for each pixel would be highly inefficient for that engine.
So far I have read in the image, processed it, and filled a two dimensional array of booleans which tells me if the pixel should be filled or unfilled, but I am unsure of the most efficient approach of continuing.
Here is what I have so far, as reference, if you aren't following:
public static void main(String[] args) {
File file = new File(args[0]);
BufferedImage bi = null;
try {
bi = ImageIO.read(file);
} catch (IOException ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
if (bi != null) {
int[] rgb = bi.getRGB(0, 0, bi.getWidth(), bi.getHeight(), new int[bi.getWidth() * bi.getHeight()], 0, bi.getWidth());
Origin origin = new Origin(bi.getWidth() / 2, bi.getHeight() / 2);
boolean[][] flags = new boolean[bi.getWidth()][bi.getHeight()];
for (int y = 0; y < bi.getHeight(); y++) {
for (int x = 0; x < bi.getWidth(); x++) {
int index = y * bi.getWidth() + x;
int color = rgb[index];
int type = color == Color.WHITE.getRGB() ? 1 : (color == Color.RED.getRGB() ? 2 : 0);
if (type == 2) {
origin = new Origin(x, y);
}
flags[x][y] = type != 1;
}
}
List<Rectangle> list = new ArrayList();
//Fill list with rectangles
}
}
White represents no land; black or red represents land. The check for the red pixel marks the origin position of the map, which is just for convenience; the rectangles will be offset by the origin position if it is found.
Edit: The processing script does not need to be fast. The produced list of rectangles will be dumped, and that dump is what will be imported and used later, so the image processing does not need to be particularly optimized.
I also just realized that expecting a 'perfect' solution is expecting too much, since finding exactly the fewest rectangles would qualify as a 'knapsack problem' of the multidimensionally constrained variety, so an algorithm that simply produces a small number of rectangles will suffice.
Here is a reference image for completion:
Edit 2: It doesn't look like this is such an easy thing to answer, given no feedback yet, but I have started making progress. I am sure I am still missing something that would vastly reduce the number of rectangles. Here is the updated code:
static int mapWidth;
static int mapHeight;
public static void main(String[] args) {
File file = new File(args[0]);
BufferedImage bi = null;
System.out.println("Reading image...");
try {
bi = ImageIO.read(file);
} catch (IOException ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
if (bi != null) {
System.out.println("Complete!");
System.out.println("Interpreting image...");
mapWidth = bi.getWidth();
mapHeight = bi.getHeight();
int[] rgb = bi.getRGB(0, 0, mapWidth, mapHeight, new int[mapWidth * mapHeight], 0, mapWidth);
Origin origin = new Origin(mapWidth / 2, mapHeight / 2);
boolean[][] flags = new boolean[mapWidth][mapHeight];
for (int y = 0; y < mapHeight; y++) {
for (int x = 0; x < mapWidth; x++) {
int index = y * mapWidth + x;
int color = rgb[index];
int type = color == Color.WHITE.getRGB() ? 1 : (color == Color.RED.getRGB() ? 2 : 0);
if (type == 2) {
origin = new Origin(x, y);
}
flags[x][y] = type != 1;
}
}
System.out.println("Complete!");
System.out.println("Processing...");
//Get Rectangles to fill space...
List<Rectangle> rectangles = getRectangles(flags, origin);
System.out.println("Complete!");
float rectangleCount = rectangles.size();
float totalCount = mapHeight * mapWidth;
System.out.println("Total units: " + (int)totalCount);
System.out.println("Total rectangles: " + (int)rectangleCount);
System.out.println("Rectangle reduction factor: " + ((1 - rectangleCount / totalCount) * 100.0) + "%");
System.out.println("Dumping data...");
try {
file = new File(file.getParentFile(), file.getName() + "_Rectangle_Data.txt");
if(file.exists()){
file.delete();
}
file.createNewFile();
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(file)));
for(Rectangle rect: rectangles){
bw.write(rect.x + "," + rect.y + "," + rect.width + ","+ rect.height + "\n");
}
bw.flush();
bw.close();
} catch (Exception ex) {
Logger.global.log(Level.SEVERE, null, ex);
}
System.out.println("Complete!");
}else{
System.out.println("Error!");
}
}
public static void clearRange(boolean[][] flags, int xOff, int yOff, int width, int height) {
for (int y = yOff; y < yOff + height; y++) {
for (int x = xOff; x < xOff + width; x++) {
flags[x][y] = false;
}
}
}
public static boolean checkIfFilled(boolean[][] flags, int xOff, int yOff, int width, int height) {
for (int y = yOff; y < yOff + height; y++) {
for (int x = xOff; x < xOff + width; x++) {
if (!flags[x][y]) {
return false;
}
}
}
return true;
}
public static List<Rectangle> getRectangles(boolean[][] flags, Origin origin) {
List<Rectangle> rectangles = new ArrayList();
for (int y = 0; y < mapHeight; y++) {
for (int x = 0; x < mapWidth; x++) {
if (flags[x][y]) {
int maxWidth = 1;
int maxHeight = 1;
//The search size is limited to 400x400 so it will complete some time this century.
Loop:
for (int w = Math.min(400, mapWidth - x); w > 1; w--) {
for (int h = Math.min(400, mapHeight - y); h > 1; h--) {
if (w * h > maxWidth * maxHeight) {
if (checkIfFilled(flags, x, y, w, h)) {
maxWidth = w;
maxHeight = h;
break Loop;
}
}
}
}
//Search also in the opposite direction
Loop:
for (int h = Math.min(400, mapHeight - y); h > 1; h--) {
for (int w = Math.min(400, mapWidth - x); w > 1; w--) {
if (w * h > maxWidth * maxHeight) {
if (checkIfFilled(flags, x, y, w, h)) {
maxWidth = w;
maxHeight = h;
break Loop;
}
}
}
}
rectangles.add(new Rectangle(x - origin.x, y - origin.y, maxWidth, maxHeight));
clearRange(flags, x, y, maxWidth, maxHeight);
}
}
}
return rectangles;
}
My current code's search for larger rectangles is limited to 400x400 to speed up testing, and it outputs 17,979 rectangles, which is a 99.9058% reduction compared to treating each pixel as a 1x1 square (19,095,720 pixels). So far so good.
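Since the goal is a perfect cover, one thing worth adding is a sanity check that re-rasterizes the produced rectangles and compares them against the original flags. This is not part of the code above, just a sketch that reuses mapWidth, mapHeight, Origin and the origin-offset rectangles; it needs a copy of the flags taken before getRectangles clears them:
// Returns true if the rectangles cover exactly the cells that were set in 'original'
// ('original' must be a copy of the flags taken before getRectangles cleared them)
public static boolean coversExactly(boolean[][] original, List<Rectangle> rectangles, Origin origin) {
    boolean[][] covered = new boolean[mapWidth][mapHeight];
    for (Rectangle rect : rectangles) {
        // The rectangles were stored relative to the origin, so shift them back
        for (int y = rect.y + origin.y; y < rect.y + origin.y + rect.height; y++) {
            for (int x = rect.x + origin.x; x < rect.x + origin.x + rect.width; x++) {
                if (covered[x][y]) {
                    return false; // two rectangles overlap
                }
                covered[x][y] = true;
            }
        }
    }
    for (int y = 0; y < mapHeight; y++) {
        for (int x = 0; x < mapWidth; x++) {
            if (original[x][y] != covered[x][y]) {
                return false; // a land cell was missed, or empty space was covered
            }
        }
    }
    return true;
}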
I am using an Android phone running Ice Cream Sandwich, and I am testing the face-detection behaviour on it, referring to the sample code available from Google.
While debugging on my phone, getMaxNumDetectedFaces() returns 3, so my phone supports face detection.
However, the following code is not working:
public void onFaceDetection(Face[] faces, Camera face_camera1) {
// TODO Auto-generated method stub
if(faces.length>0)
{
Log.d("FaceDetection","face detected:" +faces.length + "Face 1 location X:"+faces[0].rect.centerX()+"Y:"+faces[0].rect.centerY());
}
}
Here faces.length returns zero. Please suggest how I can solve this.
I worked with face detection some time ago. When I was working on it, onFaceDetection didn't work for me, so I found another way to do it.
I worked with PreviewCallback instead. This callback receives each preview frame, and you can use it to recognize faces. The only problem here is the format: the default preview format is NV21, and you can change it with setPreviewFormat(int), but that didn't work for me either, so I had to do the conversion myself to get the Bitmap type that FaceDetector accepts. Here is my code:
public PreviewCallback mPreviewCallback = new PreviewCallback(){
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Size size = camera.getParameters().getPreviewSize();
Bitmap mfoto_imm = this.getBitmapFromNV21(data, size.width, size.height, true); //here I get the Bitmap from getBitmapFromNV21 that is the conversion method
Bitmap mfoto= mfoto_imm.copy(Bitmap.Config.RGB_565, true);
imagen.setImageBitmap(mfoto);
int alto= mfoto.getHeight();
int ancho= mfoto.getWidth();
int count;
canvas= new Canvas(mfoto);
dibujo.setColor(Color.GREEN);
dibujo.setAntiAlias(true);
dibujo.setStrokeWidth(8);
canvas.drawBitmap(mfoto, matrix, dibujo);
FaceDetector mface= new FaceDetector(ancho,alto,1);
FaceDetector.Face [] face= new FaceDetector.Face[1];
count = mface.findFaces(mfoto, face);
PointF midpoint = new PointF();
int fpx = 0;
int fpy = 0;
if (count > 0) {
face[count-1].getMidPoint(midpoint); // take the last detected face (count - 1)
fpx = (int) midpoint.x; // middle point of the face in x
fpy = (int) midpoint.y; // middle point of the face in y
}
canvas.drawCircle(fpx, fpy, 10, dibujo); // here I draw a circle on the middle of the face
imagen.invalidate();
}
and here are the conversion methods.
public Bitmap getBitmapFromNV21(byte[] data, int width, int height, boolean rotated) {
Bitmap bitmap = null;
int[] pixels = new int[width * height];
// Convert the NV21 array to RGB pixels
this.yuv2rgb(pixels, data, width, height, rotated);
if(rotated)
{
bitmap = Bitmap.createBitmap(pixels, height, width, Bitmap.Config.RGB_565);
}
else
{
bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
}
return bitmap;
}
public void yuv2rgb(int[] out, byte[] in, int width, int height, boolean rotated)
throws NullPointerException, IllegalArgumentException
{
final int size = width * height;
if(out == null) throw new NullPointerException("buffer 'out' == null");
if(out.length < size) throw new IllegalArgumentException("buffer 'out' length < " + size);
if(in == null) throw new NullPointerException("buffer 'in' == null");
if(in.length < (size * 3 / 2)) throw new IllegalArgumentException("buffer 'in' length != " + in.length + " < " + (size * 3/ 2));
// YCrCb
int Y, Cr = 0, Cb = 0;
int Rn = 0, Gn = 0, Bn = 0;
for(int j = 0, pixPtr = 0, cOff0 = size - width; j < height; j++) {
if((j & 0x1) == 0)
cOff0 += width;
int pixPos = height - 1 - j;
for(int i = 0, cOff = cOff0; i < width; i++, cOff++, pixPtr++, pixPos += height) {
// Get Y
Y = 0xff & in[pixPtr]; // mask with 0xff because of the sign
// Get Cr y Cb
if((pixPtr & 0x1) == 0) {
Cr = in[cOff];
if(Cr < 0) Cr += 127; else Cr -= 128;
Cb = in[cOff + 1];
if(Cb < 0) Cb += 127; else Cb -= 128;
Bn = Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
Gn = - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
Rn = Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
}
int R = Y + Rn;
if(R < 0) R = 0; else if(R > 255) R = 255;
int B = Y + Bn;
if(B < 0) B = 0; else if(B > 255) B = 255;
int G = Y + Gn;
if(G < 0) G = 0; else if(G > 255) G = 255; // At this point you could apply a filter to the separate components, for example swap two components or remove one
int rgb = 0xff000000 | (R << 16) | (G << 8) | B; // Depending on the 'rotated' flag the output buffer is filled with or without the rotation applied
if(rotated)
out[pixPos] = rgb;
else
out[pixPtr] = rgb;
}
}
}
};
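For completeness, the callback above still has to be registered with the camera. Here is a minimal sketch of that wiring; mCamera, the SurfaceHolder and the place you call this from are assumptions about the rest of your code:
// Call this once the preview surface is ready (e.g. from surfaceCreated)
private void startPreviewWithCallback(SurfaceHolder holder) throws IOException {
    mCamera = Camera.open();
    mCamera.setPreviewDisplay(holder);            // preview frames are drawn to your SurfaceView
    mCamera.setPreviewCallback(mPreviewCallback); // every frame is also delivered to onPreviewFrame
    mCamera.startPreview();
}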
setPreviewFormat(int) doesn't work on some devices, but maybe you can try it and create the Bitmap without the conversion.
I hope this helps.
I have:
BufferedImage image;
//few lines of code
public void stateChanged(ChangeEvent e)
{
for (int i = 0; i < image.getWidth(); i++) {
for (int j = 0; j < image.getHeight(); j++)
{
Color color = new Color(image.getRGB(i, j));
int r, g, b;
val = sliderBrightness.getValue();
r = color.getRed() + val;
g = color.getGreen() + val;
b = color.getBlue() + val;
}
}
I have no idea how to solve this problem. What should I modify so that the image reacts to the JSlider brightness value?
As shown here, use java.awt.image.RescaleOp to adjust the image's color bands as a function of the slider's position. Despite the name, AlphaTest, the example uses the constructor that applies "to all color (but not alpha) components in a BufferedImage."
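A rough sketch of what that could look like; image, sliderBrightness and border are the fields from your code, and originalImage is an untouched copy I am assuming you keep so repeated slider events don't compound the change:
public void stateChanged(ChangeEvent e) {
    float offset = sliderBrightness.getValue();
    // Scale factor 1 keeps the colors as they are; the offset shifts all color bands (but not alpha)
    RescaleOp brighten = new RescaleOp(1f, offset, null);
    BufferedImage adjusted = brighten.filter(originalImage, null);
    border.setIcon(new ImageIcon(adjusted.getScaledInstance(350, 350, Image.SCALE_SMOOTH)));
    border.repaint();
}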
public void stateChanged(ChangeEvent e)
{
for (int x = 0; x < image.getWidth(); x++) {
for (int y = 0; y < image.getHeight(); y++)
{
Color color = new Color(image.getRGB(x, y));
int r, g, b;
val = sliderBrightness.getValue();
r = checkColorRange(color.getRed() + val);
g = checkColorRange(color.getGreen() + val);
b = checkColorRange(color.getBlue() + val);
color = new Color(r, g, b);
image.setRGB(x, y, color.getRGB());
}
}
// Update the displayed image once, after all pixels have been adjusted
border.setIcon(new ImageIcon(image.getScaledInstance(350, 350, Image.SCALE_SMOOTH)));
border.repaint();
}
public int checkColorRange(int newColor){
if(newColor > 255){
newColor = 255;
} else if (newColor < 0) {
newColor = 0;
}
return newColor;
}
Also you should use x and y, instead of i and j, for clarity.