I wrote a simple application that shows the RGB values of a touched color from an image.
The problem is that every time I touch my image, one of the RGB values is 255.
For example, where I should get #F0F0F0 I get #FFF0F0 or #F0FFF0.
Here's my code:
iv = (ImageView) findViewById(R.id.imageView1);
mTextLog = (TextView) findViewById(R.id.textView3);
iv.setOnTouchListener(new OnTouchListener() {
int x = 0, y = 0;
float fx, fy;
public boolean onTouch(View v, MotionEvent event) {
ImageView imageView = ((ImageView)v);
Bitmap bitmap = ((BitmapDrawable)imageView.getDrawable()).getBitmap();
// note: these four return values are computed but never used
v.getWidth();
v.getHeight();
bitmap.getWidth();
bitmap.getHeight();
// map the touch position from the view (656x721) to the bitmap (960x1029)
fx = ((event.getX()/656)*960);
fy = ((event.getY()/721)*1029);
if(fx > 950) fx = 950;
if(fy > 1000) fy = 1000;
if(fx < 32) fx = 32;
x = Math.round(fx);
y = Math.round(fy);
if(fx > 0 && fy > 0){
int pixel = bitmap.getPixel(x, y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
if(redValue > 255) redValue = 255;
if(redValue < 0) redValue = 0;
if(greenValue > 255) greenValue = 255;
if(greenValue < 0) greenValue = 0;
if(blueValue > 255) blueValue = 255;
if(blueValue < 0) blueValue = 0;
br = (byte) redValue;
bg = (byte) greenValue;
bb = (byte) blueValue;
tv2.setText("Red: " + redValue + " Green: " + greenValue + " Blue: " + blueValue);
RelativeLayout rl = (RelativeLayout) findViewById(R.id.idd);
rl.setBackgroundColor(pixel);
}
return true;
}
});
Another problem: when I move my finger across the screen, changing the background color works fine, but when I try to send the value to my microcontroller via Bluetooth there's a problem.
E.g. if I touch a black color two times, it sends first black, then blue. :O
It happens only when I return true from this onTouch method.
I would be grateful for any help.
And by the way, sorry for my English.
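(Incidentally, the hardcoded 656/960 and 721/1029 mapping can be computed at runtime from the getWidth()/getHeight() calls that are currently unused; a minimal sketch, assuming the ImageView stretches the bitmap over its whole area, e.g. scaleType fitXY:)
float scaleX = (float) bitmap.getWidth() / v.getWidth();   // e.g. 960 / 656
float scaleY = (float) bitmap.getHeight() / v.getHeight(); // e.g. 1029 / 721
int x = Math.round(event.getX() * scaleX);
int y = Math.round(event.getY() * scaleY);
// clamp so bitmap.getPixel(x, y) can never go out of bounds
x = Math.max(0, Math.min(x, bitmap.getWidth() - 1));
y = Math.max(0, Math.min(y, bitmap.getHeight() - 1));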
I see some conflicts (strange!) in your code. First one: if you calculate the colors correctly, there is no need to check whether the values are greater than 255 or less than 0, I mean these lines:
if(redValue > 255) redValue = 255;
if(redValue < 0) redValue = 0;
if(greenValue > 255) greenValue = 255;
if(greenValue < 0) greenValue = 0;
if(blueValue > 255) blueValue = 255;
if(blueValue < 0) blueValue = 0;
And (maybe) the color type of the image you are working on is different from what you expect; as your example mentioned black then blue, maybe the code treats the last byte as alpha, or vice versa.
My suggestion is to change the image and have another try, or use the following code to check which byte represents the alpha value.
If the image is image/jpeg, the alpha should be 0xFF (255) all the time.
byte[] hcolor;
hcolor=ByteBuffer.allocate(4).putInt(pixel).array();
out(hcolor[0]); // print it however you like; the first byte represents the alpha, so it should be 255
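(Equivalently, Android's Color class can read the alpha channel straight from the pixel int; a one-line sketch:)
int alpha = Color.alpha(pixel); // top byte of the AARRGGBB int; 255 for an opaque JPEG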
Just have a try and come back and report; we need to fix it.
I changed a few lines and the problem still exists.
int pixel = bitmap.getPixel(x, y);
int redValue = Color.red(pixel);
int blueValue = Color.blue(pixel);
int greenValue = Color.green(pixel);
hcolor = ByteBuffer.allocate(4).putInt(pixel).array();
br = (byte) redValue;
bg = (byte) greenValue;
bb = (byte) blueValue;
//tv2.setText("Red: " + redValue + " Green: " + greenValue + " Blue: " + blueValue);
tv2.setText("A: " + hcolor[0] + " R: " + hcolor[1] + " G: " + hcolor[2] + " B: " + hcolor[3]);
My TextView shows me e.g. "A: -1 R: -1 G: 25 B: -7" on a pink color.
It should be impossible to have two of the three RGB values equal to -1 if I touched somewhere other than white.
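(For what it's worth, a -1 here is an artifact of Java's signed byte type: the byte 0xFF prints as -1, so "A: -1 R: -1" really means alpha 255 and red 255. Masking with & 0xFF recovers the unsigned channel value; a sketch:)
tv2.setText("A: " + (hcolor[0] & 0xFF) + " R: " + (hcolor[1] & 0xFF)
        + " G: " + (hcolor[2] & 0xFF) + " B: " + (hcolor[3] & 0xFF));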
Here's the code I use to send the color to the microcontroller.
"H" is my "key" that the uC understands as "an RGB value follows".
if(mConnectedThread != null){
mConnectedThread.write(h);
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
mConnectedThread.write(br);
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
mConnectedThread.write(bg);
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
mConnectedThread.write(bb);
try {
Thread.sleep(5);
} catch (InterruptedException e) {
e.printStackTrace();
}
}
When I return "false" from this onTouch method, nothing wrong happens.
I just want to touch and move my finger to change the color of an RGB lamp in real time.
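(One hedged suggestion: with return true you receive a stream of ACTION_MOVE events, and the four single-byte writes with 5 ms sleeps in between can interleave across consecutive frames, which could explain black being followed by a stray blue. Sending the whole frame in one call may keep it intact; a sketch, assuming h/br/bg/bb are bytes and your ConnectedThread has, or can be given, a write(byte[]) overload:)
if (mConnectedThread != null) {
    byte[] frame = { h, br, bg, bb }; // key byte followed by the three channels
    mConnectedThread.write(frame);    // one write instead of four separated by sleeps
}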
Here's the image I'm using in the application: click
Could you help me? Can you tell me what I'm doing wrong? My code has two for loops that run through a whole 157x127-pixel image, in which I detect black and white through the RGB values; in turn I have two counters, one for each of those colors.
As I understand it, the 157x127 size gives 19,939 pixels, all of which should be black or white; but adding the counters gives me less (14,218 white + 5,643 black = 19,861, i.e. 78 pixels unaccounted for). Where am I wrong, or is my logic wrong?
Here is the image:
The application itself does not crash.
Code below:
BitmapDrawable drawable = (BitmapDrawable) imgviewREXMorphologEX.getDrawable();
Bitmap bitmap = drawable.getBitmap();
int Width = bitmap.getWidth();
int Height = bitmap.getHeight();
int contadorB = 0;
int contadorN = 0;
for (int x = 0; x <= Width - 1; x++) {
int xi = x;
for (int y = 0; y <= Height - 1; y++) {
int yi = y;
int coordenada = bitmap.getPixel(xi, yi);// pixel
double r = Color.red(coordenada);
double g = Color.green(coordenada);
double b = Color.blue(coordenada);
if (r == 255 && g == 255 && b == 255) {
contadorB++;
} else if (r == 0 && g == 0 && b == 0) {
contadorN++;
}
txtviewcolor3.setText(contadorB + " - " + contadorN + " - " + (contadorB + contadorN) + " - " + (Width * Height)); // note: this runs once per pixel; it only needs to run once, after both loops
}
}
Add an else clause to your if statements, and see whether there are actually some other colours in the image.
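(For instance, a minimal sketch of that extra branch:)
int contadorOtros = 0; // declared next to contadorB and contadorN
// inside the loop, after the existing if / else if:
} else {
    contadorOtros++; // neither pure white nor pure black, e.g. gray edge pixels
}
If contadorB + contadorN + contadorOtros adds up to Width * Height, the 78 missing pixels are simply other colors.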
Your image is not just black or white!
See here a magnified section (800%, bottom right):
There are some pixels that are gray (brighter and darker ones). Depending on the purpose, some limits can help (e.g. r < MIN && g < MIN && b < MIN), or use the formula to determine the brightness of an RGB color compared to some limit(s).
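(A sketch of the thresholded version; the cutoff MIN is an assumption to be tuned by eye:)
final int MIN = 32; // assumed tolerance for "almost black" / "almost white"
double brightness = 0.299 * r + 0.587 * g + 0.114 * b; // standard luma weighting
if (brightness >= 255 - MIN) {
    contadorB++; // near-white
} else if (brightness <= MIN) {
    contadorN++; // near-black
}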
I am working on an assignment using Sobel edge detection on an image. I am currently struggling with the operation for the gradient. I am getting a "bad operand types for binary operator '*'" error when compiling. I think it may be because I defined all of my pixels as single-letter Color variables, and I'm not sure what my next step should be. Any help would be greatly appreciated! Thank you in advance!
public static BufferedImage sobelEdgeDetect(BufferedImage input) {
int img_width = input.getWidth();
int img_height = input.getHeight();
BufferedImage output_img = new BufferedImage(
img_width, img_height, BufferedImage.TYPE_INT_RGB);
for (int x = 0; x < img_width; x++) {
for (int y = 0; y < img_height; y++) {
Color color_at_pos = new Color(input.getRGB(x, y));
int red = color_at_pos.getRed();
int green = color_at_pos.getGreen();
int blue = color_at_pos.getBlue();
int average = (red + green + blue) / 3;
Color A,B,C,D,F,G,H,I;
if(x-1 > 0 && y+1 < img_height){
A = new Color (input.getRGB(x-1,y+1));
} else {
A = Color.BLACK;
}
if(y+1 < img_height){
B = new Color (input.getRGB(x,y+1));
} else {
B = Color.BLACK;
}
if(x+1 < img_width && y+1 < img_height){
C = new Color (input.getRGB(x+1,y+1));
} else {
C = Color.BLACK;
}
if(x-1 > 0){
D = new Color (input.getRGB(x-1,y));
} else {
D = Color.BLACK;
}
if(x+1 < img_width){
F = new Color (input.getRGB(x+1,y));
} else {
F = Color.BLACK;
}
if(x-1 > 0 && y-1 > 0){
G = new Color (input.getRGB(x-1,y-1));
} else {
G = Color.BLACK;
}
if(y-1 > 0){
H = new Color (input.getRGB(x,y-1));
} else {
H = Color.BLACK;
}
if(x+1 > img_width && y-1 > 0){ // (note: this condition looks like it should be x+1 < img_width)
I = new Color (input.getRGB(x+1,y-1));
} else {
I = Color.BLACK;
}
int gx = (-A + (-2*D) + -G + C + (2*F)+ I);
int gy = (A + (2*B) + C + (-G) + (-2*H) + (-I));
int result = (int)math.sqrt((gx*gx) + (gy*gy)); // (note: Java's class is Math, capital M; result is also never used below)
if (average < 0) {
average = 0;
} else if (average > 255) {
average = 255;
}
Color average_color = new Color(average, average, average);
output_img.setRGB(x, y, average_color.getRGB());
}
}
return output_img;
}
The problem lies within the handling of Colors, here:
int gx = (-A + (-2*D) + -G + C + (2*F)+ I);
int gy = (A + (2*B) + C + (-G) + (-2*H) + (-I));
this won't work, because Java has no arithmetic operators for Color objects; you cannot negate or multiply a Color.
To get the gradient you have to either
handle each color channel separately
handle the image in grayscale
I can't tell you which one would work for you.
Handle each color channel separately:
using this approach you handle each color channel separately to detect edges in that channel
//red:
int redGx = (-A.getRed() + (-2*D.getRed()) + -G.getRed() + C.getRed() + (2*F.getRed())+ I.getRed());
int redGy = ...
//green:
int greenGx = (-A.getGreen()...
handle as gray
int redGx = (toGray(A) + (-2*toGray(D)) + -toGray(G) + toGray(C) + (2*toGray(F))+ toGray(I));
int redGy = ...
you have to provide the toGray method yourself / average the colors
static int toGray(Color col){
return (col.getGreen() + col.getRed() + col.getBlue()) / 3;
}
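Putting the grayscale route together with the question's neighbours A..I, a minimal sketch of the remaining steps (my own composition, untested against the assignment):
int gx = -toGray(A) - 2*toGray(D) - toGray(G) + toGray(C) + 2*toGray(F) + toGray(I);
int gy =  toGray(A) + 2*toGray(B) + toGray(C) - toGray(G) - 2*toGray(H) - toGray(I);
int magnitude = (int) Math.sqrt((double) (gx * gx + gy * gy)); // Math with a capital M
if (magnitude > 255) magnitude = 255; // clamp before building the output color
output_img.setRGB(x, y, new Color(magnitude, magnitude, magnitude).getRGB());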
I'm trying to analyze an image-based 3-digit number captcha from an online resource. The numbers do not move at all. I use BufferedImage's getSubimage(...) method to extract each number from the captcha. I have saved (0-9) for each of the ones, tens and hundreds places. (So 30 numbers in total.)
I read the bytes of the online image into a byte[] and then create a BufferedImage object like this:
BufferedImage captcha = ImageIO.read(new ByteArrayInputStream(captchaBytes));
Then I compare this image to a list of images on my drive:
BufferedImage[] nums = new BufferedImage[10];
//Load images into the array here... The code is removed.
for(int i = 0; i < nums.length; i++) {
double x;
System.out.println(x = bufferedImagesEqualConfidence(nums[i], firstNumberImage));
if(x > 0.98) {
System.out.println("equal to image " + i + ".jpeg");
isNewEntry = false;
break;
}
}
This is how I compare two images:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
double difference = 0;
int pixels = img1.getWidth() * img1.getHeight();
if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
for (int x = 0; x < img1.getWidth(); x++) {
for (int y = 0; y < img1.getHeight(); y++) {
int rgbA = img1.getRGB(x, y);
int rgbB = img2.getRGB(x, y);
int redA = (rgbA >> 16) & 0xff;
int greenA = (rgbA >> 8) & 0xff;
int blueA = (rgbA) & 0xff;
int redB = (rgbB >> 16) & 0xff;
int greenB = (rgbB >> 8) & 0xff;
int blueB = (rgbB) & 0xff;
difference += Math.abs(redA - redB);
difference += Math.abs(greenA - greenB);
difference += Math.abs(blueA - blueB);
}
}
} else {
return 0.0;
}
return 1-((difference/(double)pixels) / 255.0);
}
The image is loaded completely from an HttpURLConnection object wrapped in my own HttpGet object. And so I do: byte[] captchaBytes = hg.readAndGetBytes(); which I know works, because when I save BufferedImage captcha = ImageIO.read(new ByteArrayInputStream(captchaBytes)); it saves as a valid image on my drive.
However, even though two images are actually the same, the result shows they are not similar at all. BUT, when I first save the image I downloaded from the online resource, re-read it, and compare, it shows they are equal. This is what I do when I say I save it and re-read it:
File temp = new File("temp.jpeg");
ImageIO.write(secondNumberImage, "jpeg", temp);
secondNumberImage = ImageIO.read(temp);
Image format: JPEG
I know this may have something to do with compression from ImageIO.write(...), but how can I make it so that I don't have to save the image?
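(Incidentally, if the round-trip through JPEG encoding really were required, it could at least be done in memory rather than on disk; a small sketch of the same save-and-re-read using streams:)
ByteArrayOutputStream baos = new ByteArrayOutputStream();
ImageIO.write(secondNumberImage, "jpeg", baos); // re-encode in memory instead of to temp.jpeg
secondNumberImage = ImageIO.read(new ByteArrayInputStream(baos.toByteArray()));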
The problem was within my bufferedImagesEqualConfidence method. Simply comparing whole RGB values for exact equality was not enough. I had to compare the individual R/G/B values and measure how far apart they are.
My initial bufferedImagesEqualConfidence that didn't work was:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
int similarity = 0;
int pixels = img1.getWidth() * img1.getHeight();
if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
for (int x = 0; x < img1.getWidth(); x++) {
for (int y = 0; y < img1.getHeight(); y++) {
if (img1.getRGB(x, y) == img2.getRGB(x, y)) {
similarity++;
}
}
}
} else {
return 0.0;
}
return similarity / (double)pixels;
}
(Source: Java Compare one BufferedImage to Another)
The bufferedImagesEqualConfidence that worked is:
static double bufferedImagesEqualConfidence(BufferedImage img1, BufferedImage img2) {
double difference = 0;
int pixels = img1.getWidth() * img1.getHeight();
if (img1.getWidth() == img2.getWidth() && img1.getHeight() == img2.getHeight()) {
for (int x = 0; x < img1.getWidth(); x++) {
for (int y = 0; y < img1.getHeight(); y++) {
int rgbA = img1.getRGB(x, y);
int rgbB = img2.getRGB(x, y);
int redA = (rgbA >> 16) & 0xff;
int greenA = (rgbA >> 8) & 0xff;
int blueA = (rgbA) & 0xff;
int redB = (rgbB >> 16) & 0xff;
int greenB = (rgbB >> 8) & 0xff;
int blueB = (rgbB) & 0xff;
difference += Math.abs(redA - redB);
difference += Math.abs(greenA - greenB);
difference += Math.abs(blueA - blueB);
}
}
} else {
return 0.0;
}
// note: 'difference' sums three channels per pixel, so dividing by (pixels * 3 * 255.0)
// would bound this to [0,1]; as written the scale is compressed, but it still works as a threshold
return 1-((difference/(double)pixels) / 255.0);
}
(Source: Image Processing in Java)
I guess that to find the similarity between two images you have to compare the individual R/G/B values for each pixel, with some tolerance, rather than testing the whole RGB value for exact equality; JPEG recompression perturbs pixel values slightly, so exact matches almost never occur.
I am trying to get the difference between two images (same size). I found this code:
BufferedImage img1 = null;
BufferedImage img2 = null;
try{
URL url1 = new URL("http://rosettacode.org/mw/images/3/3c/Lenna50.jpg");
URL url2 = new URL("http://rosettacode.org/mw/images/b/b6/Lenna100.jpg");
img1 = ImageIO.read(url1);
img2 = ImageIO.read(url2);
} catch (IOException e) {
e.printStackTrace();
}
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
System.err.println("Error: Images dimensions mismatch");
System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
for (int j = 0; j < width1; j++) {
int rgb1 = img1.getRGB(i, j);
int rgb2 = img2.getRGB(i, j);
int r1 = (rgb1 >> 16) & 0xff;
int g1 = (rgb1 >> 8) & 0xff;
int b1 = (rgb1 ) & 0xff;
int r2 = (rgb2 >> 16) & 0xff;
int g2 = (rgb2 >> 8) & 0xff;
int b2 = (rgb2 ) & 0xff;
diff += Math.abs(r1 - r2);
diff += Math.abs(g1 - g2);
diff += Math.abs(b1 - b2);
}
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
System.out.println("diff percent: " + (p * 100.0)); `
It works fine for the two images given in the URLs, but when I changed the images I got this exception:
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: Coordinate out of bounds!
at sun.awt.image.ByteInterleavedRaster.getDataElements(ByteInterleavedRaster.java:299)
at java.awt.image.BufferedImage.getRGB(BufferedImage.java:871)
at Main.main(Main.java:77)
I changed the code to:
File sourceimage1 = new File("C:\\lo.jpg");
File sourceimage2 = new File("C:\\lo1.jpg");
img1 = ImageIO.read(sourceimage1);
img2 = ImageIO.read(sourceimage2);
The two images are black and white, and their dimensions are smaller than the two previous images (Lenna50 and Lenna100).
lo.jpg and lo1.jpg are the same image, to test the algorithm; they are also black and white.
How can I change the code to make it work for any image dimensions?
Swap the i and j in the following code, as I have done below:
int rgb1 = img1.getRGB(j, i);
int rgb2 = img2.getRGB(j, i);
Your error clearly says that while reading the RGB point in the line img1.getRGB(i, j); it is going out of bounds of the image's RGB array. Check the values of i and j inside your inner for loop and see whether you are doing something wrong. As Hirak already pointed out, maybe you are not initializing your variables properly, and that is why it is going beyond the height or width.
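The original snippet only appeared to work because Lenna50 and Lenna100 are square (512x512), so the transposed coordinates stayed in bounds. A sketch of the loops with the variables named after the axes, so that getRGB cannot be transposed:
for (int y = 0; y < height1; y++) {
    for (int x = 0; x < width1; x++) {
        int rgb1 = img1.getRGB(x, y); // getRGB takes (x, y): column first, then row
        int rgb2 = img2.getRGB(x, y);
        // ...extract and diff the channels exactly as before...
    }
}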
I am using an Android phone with the ICE CREAM SANDWICH version. On it, I am checking the face-detection behaviour, referring to the code available on Google.
While debugging on my phone, int getMaxNumDetectedFaces() returns 3, so my phone is supported for this.
But the following code is not working:
public void onFaceDetection(Face[] faces, Camera face_camera1) {
// TODO Auto-generated method stub
if(faces.length>0)
{
Log.d("FaceDetection","face detected:" +faces.length + "Face 1 location X:"+faces[0].rect.centerX()+"Y:"+faces[0].rect.centerY());
}
}
In this, faces.length returns zero. Please give me some suggestions to solve this error.
I had worked with FaceDetection some time ago. When I was working on that, onFaceDetection didn't work for me, so I found another way to work on it.
I worked with PreviewCallback. This method gives you each preview frame, and you can use it to recognize faces. The only problem here is the format: the default format is NV21, and you can change it with setPreviewFormat(int), but that didn't work for me either, so I had to do the conversion to get the Bitmap type that FaceDetector receives. Here is my code:
public PreviewCallback mPreviewCallback = new PreviewCallback(){
@Override
public void onPreviewFrame(byte[] data, Camera camera) {
Camera.Size size = camera.getParameters().getPreviewSize();
Bitmap mfoto_imm = this.getBitmapFromNV21(data, size.width, size.height, true); //here I get the Bitmap from getBitmapFromNV21 that is the conversion method
Bitmap mfoto= mfoto_imm.copy(Bitmap.Config.RGB_565, true);
imagen.setImageBitmap(mfoto);
int alto= mfoto.getHeight();
int ancho= mfoto.getWidth();
int count;
canvas= new Canvas(mfoto);
dibujo.setColor(Color.GREEN);
dibujo.setAntiAlias(true);
dibujo.setStrokeWidth(8);
canvas.drawBitmap(mfoto, matrix, dibujo);
FaceDetector mface= new FaceDetector(ancho,alto,1);
FaceDetector.Face [] face= new FaceDetector.Face[1];
count = mface.findFaces(mfoto, face);
PointF midpoint = new PointF();
int fpx = 0;
int fpy = 0;
if (count > 0) {
face[count-1].getMidPoint(midpoint); // take the last detected face, at index count - 1
fpx= (int)midpoint.x; // middle pint of the face in x.
fpy= (int)midpoint.y; // middle point of the face in y.
}
canvas.drawCircle(fpx, fpy, 10, dibujo); // here I draw a circle on the middle of the face
imagen.invalidate();
}
}
and here are the conversion methods.
public Bitmap getBitmapFromNV21(byte[] data, int width, int height, boolean rotated) {
Bitmap bitmap = null;
int[] pixels = new int[width * height];
// Convert the array
this.yuv2rgb(pixels, data, width, height, rotated);
if(rotated)
{
bitmap = Bitmap.createBitmap(pixels, height, width, Bitmap.Config.RGB_565);
}
else
{
bitmap = Bitmap.createBitmap(pixels, width, height, Bitmap.Config.RGB_565);
}
return bitmap;
}
public void yuv2rgb(int[] out, byte[] in, int width, int height, boolean rotated)
throws NullPointerException, IllegalArgumentException
{
final int size = width * height;
if(out == null) throw new NullPointerException("buffer 'out' == null");
if(out.length < size) throw new IllegalArgumentException("buffer 'out' length < " + size);
if(in == null) throw new NullPointerException("buffer 'in' == null");
if(in.length < (size * 3 / 2)) throw new IllegalArgumentException("buffer 'in' length != " + in.length + " < " + (size * 3/ 2));
// YCrCb
int Y, Cr = 0, Cb = 0;
int Rn = 0, Gn = 0, Bn = 0;
for(int j = 0, pixPtr = 0, cOff0 = size - width; j < height; j++) {
if((j & 0x1) == 0)
cOff0 += width;
int pixPos = height - 1 - j;
for(int i = 0, cOff = cOff0; i < width; i++, cOff++, pixPtr++, pixPos += height) {
// Get Y
Y = 0xff & in[pixPtr]; // 0xff masks off the sign of the byte
// Get Cr y Cb
if((pixPtr & 0x1) == 0) {
Cr = in[cOff];
if(Cr < 0) Cr += 127; else Cr -= 128;
Cb = in[cOff + 1];
if(Cb < 0) Cb += 127; else Cb -= 128;
Bn = Cb + (Cb >> 1) + (Cb >> 2) + (Cb >> 6);
Gn = - (Cb >> 2) + (Cb >> 4) + (Cb >> 5) - (Cr >> 1) + (Cr >> 3) + (Cr >> 4) + (Cr >> 5);
Rn = Cr + (Cr >> 2) + (Cr >> 3) + (Cr >> 5);
}
int R = Y + Rn;
if(R < 0) R = 0; else if(R > 255) R = 255;
int B = Y + Bn;
if(B < 0) B = 0; else if(B > 255) B = 255;
int G = Y + Gn;
if(G < 0) G = 0; else if(G > 255) G = 255; // At this point the code could apply some filter to the separate components of the image, for example swapping two components or removing one
int rgb = 0xff000000 | (R << 16) | (G << 8) | B; // Depending on the 'rotated' flag, the output buffer is filled with or without the rotation
if(rotated)
out[pixPos] = rgb;
else
out[pixPtr] = rgb;
}
}
}
};
setPreviewFormat(int) doesn't work on some devices, but maybe you can try to create the Bitmap without using the conversion.
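(For the original onFaceDetection route, it is also worth checking that detection was actually started; the usual ordering is shown below as a sketch, where mCamera is your open Camera and myFaceDetectionListener is a hypothetical listener instance:)
mCamera.setFaceDetectionListener(myFaceDetectionListener); // hypothetical listener variable
mCamera.startPreview();
mCamera.startFaceDetection(); // has no effect unless it is called after startPreview()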
I hope this helps you.