Get Pixel Color around an image - java

I have an image, and I figured out how to use robot and getPixelColor() to grab the color of a certain pixel. The image is a character that I'm controlling, and I want robot to scan around the image constantly, and tell me if the pixels around it equal a certain color. Is this at all possible? Thanks!

Myself, I'd use the Robot to extract an image that's just a little larger than the "character", and then analyze the BufferedImage obtained. The details of course will depend on the details of your program. Probably the quickest approach would be to get the BufferedImage's Raster, then get its DataBuffer, then get its data, and analyze the array returned.
For example,
// screenRect is a Rectangle that contains your "character"
// + however many pixels around your character that you desire
BufferedImage img = robot.createScreenCapture(screenRect);
int[] imgData = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
// now that you've got the image ints, you can analyze them as you wish.
// All I've done below is get rid of the alpha value and display the ints.
for (int i = 0; i < screenRect.height; i++) {
    for (int j = 0; j < screenRect.width; j++) {
        int index = i * screenRect.width + j;
        int imgValue = imgData[index] & 0xffffff;
        System.out.printf("%06x ", imgValue);
    }
    System.out.println();
}
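If the goal is simply to know whether any pixel around the character matches a particular color, a small follow-up sketch on the same imgData array could look like the following (the target value 0x00ff00 is only a placeholder for whatever color you are watching for):
// Hypothetical follow-up: scan the captured pixels for a target color.
int target = 0x00ff00; // placeholder RGB value; substitute the color you need
boolean found = false;
for (int value : imgData) {
    if ((value & 0xffffff) == target) {
        found = true;
        break;
    }
}
System.out.println("Target color present around character: " + found);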

Related

Pixel relocation, showing side-by-side

I want to read individual pixels from one image and "relocate" them to another image. I basically want to simulate how it would be if I grabbed pixel by pixel from one image and "move" them to a blank canvas. Turning the pixels I grab from the original image white.
This is what I have right now; I'm able to read the pixels from the image and create a copy of it (which comes out saturated for some reason).
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageTest
{
    public static void main(String args[]) throws IOException
    {
        //create buffered image object img
        File oldImgFile = new File("/path/to/image/shrek4life.jpg");
        BufferedImage oldImg = null;
        BufferedImage newImg = null;
        try{
            oldImg = ImageIO.read(oldImgFile);
        }catch(IOException e){}

        newImg = new BufferedImage(oldImg.getWidth(), oldImg.getHeight(), BufferedImage.TYPE_INT_ARGB);
        File f = null;
        try
        {
            for(int i = 0; i < oldImg.getWidth(); i++){
                for(int j = 0; j < oldImg.getHeight(); j++){
                    //get the rgb color of the old image and store it in the new
                    Color c = new Color(oldImg.getRGB(i, j));
                    int r = c.getRed();
                    int g = c.getGreen();
                    int b = c.getBlue();
                    int col = (r << 16) | (g << 8) | b;
                    newImg.setRGB(i, j, col);
                }
            }
            //write image
            f = new File("newImg.jpg");
            ImageIO.write(newImg, "jpg", f);
        }catch(IOException e){
            System.out.println("Error: " + e);
        }
    }//main() ends here
}//class ends here
And I would like to basically slow the process down and display it happening, but I'm not sure how to do that. Would I need to use threads to accomplish this? I'm somewhat new to threading, but I think I would need multiple threads to handle the painting of both pictures.
First of all, I would like to mention that you are working in a very inefficient way. You are creating a Color, decomposing the pixel into its channels, and moving it to the new image with a bit-shift. It is easier (and more efficient) to work directly with the integer the whole time.
I will assume the image "/path/to/image/shrek4life.jpg" has an ARGB color space. I recommend ensuring this, because if the old image does not have this color space you should convert it.
When you create the new image, you create it with an ARGB color space, so each channel is expressed in one byte of the int: the first byte for alpha, the second for red, the third for green and the last one for blue.
I think you forgot the alpha channel when you moved the old image's pixel into the new image.
With this explanation in mind, I think you can change your code to increase the efficiency, like this:
for (int i = 0; i < oldImg.getWidth(); i++) {
    for (int j = 0; j < oldImg.getHeight(); j++) {
        int pixel = oldImg.getRGB(i, j);
        newImg.setRGB(i, j, pixel);
        // If you'd like more control over the pixels, or to print them,
        // you can decompose the pixel using Color as you already do.
        // But once you fully understand the process I recommend removing it.
        Color c = new Color(pixel);
        // Print the color or do whatever you like
    }
}
About how to display the process of pixel relocation:
In process:
You can print each changed pixel as a number along with its position in the image (discouraged): System.out.println("pixel " + pixel + " X:" + i + " Y:" + j);
Use this tutorial on Baeldung to print an image. I suggest drawing a rectangle with the color of the pixel and waiting for a key press (Enter, for example) using Scanner. After the key is pressed, you can load the next pixel, and so on.
If a single rectangle with just one pixel gives too little information, I suggest adding an array of rectangles to draw several pixels at a time. You can even print an image and watch the process pixel by pixel, using Scanner to mark each step.
As @haraldK suggests, you can use Swing to display the relocation, driving the updates with a Swing Timer.
Post process:
Save the image to a file. To improve the speed of the process, I suggest saving only every few pixels (10 - 100).
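As a rough illustration of the Swing approach (this is not the original code; the class name RelocationDemo, the RelocationPanel class and PIXELS_PER_TICK are made up for this sketch), a javax.swing.Timer can move a small batch of pixels per tick, whiten them in the source copy, and repaint both images side by side:

import java.awt.Graphics;
import java.awt.GridLayout;
import java.awt.image.BufferedImage;
import javax.swing.JFrame;
import javax.swing.JPanel;
import javax.swing.Timer;

public class RelocationDemo {

    static final int PIXELS_PER_TICK = 200; // batch of pixels moved per animation step

    public static void show(BufferedImage oldImg) {
        BufferedImage source = copy(oldImg);
        BufferedImage target = new BufferedImage(
                oldImg.getWidth(), oldImg.getHeight(), BufferedImage.TYPE_INT_RGB);

        JPanel left = new RelocationPanel(source);
        JPanel right = new RelocationPanel(target);

        JFrame frame = new JFrame("Pixel relocation");
        frame.setLayout(new GridLayout(1, 2));
        frame.add(left);
        frame.add(right);
        frame.setSize(source.getWidth() * 2 + 40, source.getHeight() + 60);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);

        final int total = source.getWidth() * source.getHeight();
        final int[] next = {0}; // index of the next pixel to relocate
        Timer timer = new Timer(30, e -> {
            for (int n = 0; n < PIXELS_PER_TICK && next[0] < total; n++, next[0]++) {
                int x = next[0] % source.getWidth();
                int y = next[0] / source.getWidth();
                target.setRGB(x, y, source.getRGB(x, y)); // move the pixel over
                source.setRGB(x, y, 0xffffff);            // turn the original white
            }
            left.repaint();
            right.repaint();
            if (next[0] >= total) {
                ((Timer) e.getSource()).stop();
            }
        });
        timer.start();
    }

    private static BufferedImage copy(BufferedImage src) {
        BufferedImage c = new BufferedImage(src.getWidth(), src.getHeight(),
                BufferedImage.TYPE_INT_RGB);
        c.getGraphics().drawImage(src, 0, 0, null);
        return c;
    }

    // Simple panel that just paints one image.
    static class RelocationPanel extends JPanel {
        private final BufferedImage img;
        RelocationPanel(BufferedImage img) { this.img = img; }
        @Override
        protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            g.drawImage(img, 0, 0, null);
        }
    }
}

You would call show(oldImg) on the event dispatch thread (for example via SwingUtilities.invokeLater); adjusting PIXELS_PER_TICK and the timer delay controls how slowly the relocation plays out, so no extra threads are needed.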

Loop and change pixel values in Mat OpenCV Android

I am now building a project based on the sample color blob tracking method. I used bounding rectangles around the contours to indicate the blobs. Now I want to improve this algorithm by using an error correction method. What I do now is simply sum up the pixels in the rect region using the elemsum method, calculate the average intensity, and set it as the new blob detection parameter in each frame. However, the problem is that it is not accurate, since pixels outside the contour but inside the bounding rect are counted as well, and the result is poor.
In order to solve the problem, I used another, straightforward way: loop through each pixel in the rectangle region (which is a submat), and set all pixel values out of range to the desired (or previous) HSV scalar. Then sum up all the pixels again and calculate the average intensity. This is much more accurate and easily solves the problem. The problem is that the program runs too slow on the phone (around 1 frame per second), though the result is accurate.
I found some sources online on how to do it in C++ using Mat.forEach. I do not want to go the NDK route, and I would like to know if there is a more efficient way to do it in Java (Android).
UPDATE:
It turned out I can solve the problem by simply reducing the sampling rate. Instead of calculating the average intensity of all pixels, just a small number of them does the job. My code:
for (int i = 0; i < bounding_rect_hsv.rows(); i += 10) {
    for (int j = 0; j < bounding_rect_hsv.cols(); j += 10) {
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30) {
                data[k] = new_hsvColor.val[k];
            }
        }
        bounding_rect_hsv.put(i, j, data); // Puts element back into matrix
    }
}
My source code:
Rect rect = Imgproc.boundingRect(points);

// draw enclosing rectangle (all same color, but you could use variable i to make them unique)
Imgproc.rectangle(original_frame, new Point(rect.x, rect.y),
        new Point(rect.x + rect.width, rect.y + rect.height), new Scalar(255, 0, 0, 255), 3);

// Todo: use the bounding rectangle to calculate average intensity (turn the pixels outside the contour to new_hsvColor)
// Just changing the boundary values would be enough
bounding_rect_rgb = original_frame.submat(rect);
Imgproc.cvtColor(bounding_rect_rgb, bounding_rect_hsv, Imgproc.COLOR_RGB2HSV_FULL);

// Todo: change the logic so that pixels outside the contour will be changed to new_hsvColor
for (int i = 0; i < bounding_rect_hsv.rows(); i++) {
    for (int j = 0; j < bounding_rect_hsv.cols(); j++) {
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30)
                data[k] = new_hsvColor.val[k];
        }
        bounding_rect_hsv.put(i, j, data); // Puts element back into matrix
    }
}
If you want to compute the mean value of the pixels inside a contour you can simply:
Create a mask using drawContours with a filled thickness (CV_FILLED) and color Scalar(255), drawn on a CV_8UC1 image initialized to black (Scalar(0)) with the same size as the original image.
Use mean to compute the mean of the pixels under the mask.
You also don't need to convert every region (Rect) to HSV; you can convert the whole image once and then access the desired region directly in the HSV image.
In the general case where you want to sum the pixel values of many rectangular regions, you may prefer to compute the integral image and obtain each sum as the difference of the values at the bottom-right and top-left corners of the rectangle.
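A minimal sketch of those mask-and-mean steps in Java OpenCV (assuming the usual org.opencv imports, that hsv_frame is the whole frame already converted to HSV, and that contours is the List<MatOfPoint> produced by the blob detector; those names are only for this example):
// hsv_frame: full frame already converted to HSV (convert once per frame)
// contours:  List<MatOfPoint> from the blob detector; contourIdx picks one blob
Mat mask = Mat.zeros(hsv_frame.size(), CvType.CV_8UC1);
Imgproc.drawContours(mask, contours, contourIdx, new Scalar(255), -1); // thickness -1 = filled

// Mean H, S and V computed only over the pixels inside the contour
Scalar meanHsv = Core.mean(hsv_frame, mask);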

Fast way to manipulate each pixel from ImagePlus

Hello, I need to access every pixel of an ImagePlus for image analysis.
Because of the huge number of images to process, I was wondering if there are particularly efficient ways/methods to access and/or modify each pixel of an ImagePlus.
The only idea I naturally came up with is double for-looping through the image matrix, which takes several dozen seconds on a 1000x1000 image.
Here is my code:
ImagePlus Iorg = IJ.openImage("Demo1.png");
int[] pix = Iorg.getPixel(5, 5);
if (Iorg.getSlice() != 1) {
    System.exit(0);
}
for (int w = 0; w < Iorg.getDimensions()[0]; w++) {
    for (int h = 0; h < Iorg.getDimensions()[1]; h++) {
        System.out.println(w + " x " + h);
        // DO what needs to be done
    }
}
Any idea?
Since the image is uchar, what you want to do is the equivalent of
if (selected_pixel == 255)
    selected_pixel = 1;
else
    selected_pixel = 0;
You can create a mask; that would be easier. I don't know the Java ImagePlus API, but in MATLAB it is mask = image == 255;.
Try to use those kinds of matrix operations according to your needs. I'm sure such methods exist somewhere inside the library (if ImagePlus is an image processing library).
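In ImageJ itself, a common fast route is to work on the ImageProcessor's backing pixel array (via ij.process.ImageProcessor) instead of calling getPixel() for every coordinate. A rough sketch of the 255-to-1 mapping above, assuming an 8-bit image so that getPixels() returns a byte[]:
ImagePlus imp = IJ.openImage("Demo1.png");
ImageProcessor ip = imp.getProcessor();

// For an 8-bit image the backing array is a byte[]; editing it directly
// avoids a method call per pixel.
byte[] pixels = (byte[]) ip.getPixels();
for (int i = 0; i < pixels.length; i++) {
    int v = pixels[i] & 0xff;              // unsigned value 0..255
    pixels[i] = (byte) (v == 255 ? 1 : 0); // the 255 -> 1, else 0 mapping
}
imp.updateAndDraw(); // refresh the display if the image is being shown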

Speed up looking through matrix

I have code
public static void program() throws Exception {
    BufferedImage input = null;
    long start = System.currentTimeMillis();
    while ((System.currentTimeMillis() - start) / 1000 < 220) {
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                input = robot.createScreenCapture(new Rectangle(3 + i * 40, 127 + j * 40, 40, 40));
                if ((input.getRGB(6, 3) > -7000000) && (input.getRGB(6, 3) < -5000000)) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }
}
On a webpage there's a 12x6 matrix in which images randomly spawn. Some are bad, some are good.
I'm looking for a better way to check for good images. At the moment, the RGB color at location (6, 3) differs between good and bad images.
I'm taking a screenshot of every box (40 x 40) and looking at the pixel at location (6, 3).
I don't know how to explain my code any better.
EDIT:
Picture of the webpage. External links ok?
http://i.imgur.com/B5Ev1Y0.png
I'm not sure what exactly the bottleneck is in your code, but I have a hunch it might be the repeated calls to robot.createScreenCapture.
You could try calling robot.createScreenCapture once on the entire matrix (i.e. a large rectangle that covers all the smaller rectangles you are interested in) outside your nested loops. Then look up the pixel values at the points you are interested in, using x and y offsets into that single capture for each sub-rectangle you are inspecting.
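A rough sketch of that idea, reusing the coordinates from the code in the question (one capture of the whole 12x6 grid per pass, then getRGB at the (6, 3) offset inside each 40x40 cell):
// Capture the whole 12x6 matrix once per pass instead of 72 small captures.
// Cell (i, j) starts at (3 + i*40, 127 + j*40) on screen, so the grid's
// top-left corner is at (43, 167) and it spans 480 x 240 pixels.
Rectangle matrixRect = new Rectangle(3 + 40, 127 + 40, 12 * 40, 6 * 40);
BufferedImage grid = robot.createScreenCapture(matrixRect);

for (int i = 1; i < 13; i++) {
    for (int j = 1; j < 7; j++) {
        int x = (i - 1) * 40 + 6; // offset of the probed pixel inside the capture
        int y = (j - 1) * 40 + 3;
        int rgb = grid.getRGB(x, y);
        if (rgb > -7000000 && rgb < -5000000) { // same color test as before
            robot.mouseMove(10 + i * 40, 137 + j * 40);
            robot.mousePress(InputEvent.BUTTON1_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_MASK);
        }
    }
}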

image processing

I want to change the value of the pixels in an image, for which I need to store the image as a matrix. How can I perform this job? Please guide.
BufferedImage image = ImageIO.read(..);
image.setRGB(x, y, rgb);
Check the documentation of BufferedImage
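For example, a minimal sketch (the file name is just a placeholder, and the usual java.awt.image / javax.imageio imports are assumed) that copies the packed ARGB values into a 2D int matrix and writes one back:
BufferedImage image = ImageIO.read(new File("input.png")); // placeholder file name

// Copy the packed ARGB values into a [height][width] matrix.
int[][] matrix = new int[image.getHeight()][image.getWidth()];
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        matrix[y][x] = image.getRGB(x, y);
    }
}

// Modify whatever values you need, then write them back the same way.
matrix[0][0] = 0xffff0000; // e.g. make the top-left pixel opaque red
image.setRGB(0, 0, matrix[0][0]);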
Using image.setRGB is extremely slow.
You can use the Catalano Framework.
Example:
FastBitmap fb = new FastBitmap(bufferedImage);
int x = fb.getRed(0, 0);
// If you prefer to retrieve the whole matrix, you can do that too.
int[][][] image = new int[fb.getHeight()][fb.getWidth()][3];
fb.toArrayRGB(image);
First, read the image into a BufferedImage:
BufferedImage image = ImageIO.read(new File("..."));
Then loop over it like a 2D matrix and set the RGB values:
for (int i = 0; i < image.getWidth(); i++) {
    for (int j = 0; j < image.getHeight(); j++) {
        image.setRGB(i, j, rgb); // rgb is the packed color value you want to write
    }
}
An image is a 2D representation of data (pixel info).
2D means the x and y directions; in an image these are generally treated as rows and columns.
To change a pixel's value, we have to get its location in those rows and columns.
Getting a pixel's location is like a class teacher addressing an unknown student by seating position (e.g. 2nd bench, 3rd person).
In the same way, we address a pixel by its row and column location.
