I have this code:
public static void program() throws Exception {
    BufferedImage input = null;
    long start = System.currentTimeMillis();
    // Keep scanning the grid for 220 seconds
    while ((System.currentTimeMillis() - start) / 1000 < 220) {
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                // Capture one 40x40 cell of the 12x6 grid
                input = robot.createScreenCapture(new Rectangle(3 + i * 40, 127 + j * 40, 40, 40));
                // "Good" images have an RGB value in this range at pixel (6, 3)
                if ((input.getRGB(6, 3) > -7000000) && (input.getRGB(6, 3) < -5000000)) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }
}
On a webpage there's a 12x6 matrix in which images randomly spawn. Some are bad, some are good.
I'm looking for a better way to check for good images. At the moment I rely on the fact that, at location (6, 3), good images have a different RGB color than bad ones.
I take a screenshot of every 40x40 box and look at the pixel at location (6, 3).
I don't know how to explain my code any better.
EDIT:
Picture of the webpage. External links ok?
http://i.imgur.com/B5Ev1Y0.png
I'm not sure what exactly the bottleneck is in your code, but I have a hunch it might be the repeated calls to robot.createScreenCapture.
You could try calling robot.createScreenCapture once on the entire matrix (i.e. a large rectangle covering all the smaller rectangles you are interested in) outside your nested loops, and then look up the pixel values you need by applying x and y offsets for each sub-rectangle you are inspecting.
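For example, a rough sketch of that idea, reusing the grid geometry and the color range from the code above (adjust the numbers if your layout differs):

import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

public static void scanBoard(Robot robot) {
    // One capture covering the whole 12x6 grid (the top-left cell starts at 43, 167 in your code)
    Rectangle grid = new Rectangle(43, 167, 12 * 40, 6 * 40);
    BufferedImage board = robot.createScreenCapture(grid);
    for (int i = 0; i < 12; i++) {
        for (int j = 0; j < 6; j++) {
            // Same pixel (6, 3) inside each 40x40 cell as in your per-cell capture
            int rgb = board.getRGB(i * 40 + 6, j * 40 + 3);
            if (rgb > -7000000 && rgb < -5000000) {
                robot.mouseMove(grid.x + i * 40 + 7, grid.y + j * 40 + 10);
                robot.mousePress(InputEvent.BUTTON1_MASK);
                robot.mouseRelease(InputEvent.BUTTON1_MASK);
            }
        }
    }
}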
I want to create a genetic algorithm that recreates images. I have written the program for this in Processing, but the images that evolve are nothing close to the input image.
I believe the problem is my fitness function. I have tried many things: changing the polygon types that are part of the DNA, using both crossover and a single parent, and several fitness functions: histogram comparison across all channels, pixel comparison, and brightness comparison (for black-and-white images).
public void calcFitness(PImage tar) {
    tar.loadPixels();
    image.loadPixels();
    int brightness = 0;
    for (int i = 0; i < image.pixels.length; i++) {
        brightness += Math.abs(parent.brightness(tar.pixels[i]) - parent.brightness(image.pixels[i]));
    }
    fitness = 1.0 / (Math.pow(1 + brightness, 2) / 2);
}
public void calculateFitness() {
    int[] rHist = new int[256], gHist = new int[256], bHist = new int[256];
    image.loadPixels();
    // Calculate red histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int red = image.pixels[i] >> 16 & 0xFF;
        rHist[red]++;
    }
    // Calculate green histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int green = image.pixels[i] >> 8 & 0xFF;
        gHist[green]++;
    }
    // Calculate blue histogram
    for (int i = 0; i < image.pixels.length; i++) {
        int blue = image.pixels[i] & 0xFF;
        bHist[blue]++;
    }
    // Compare the target histograms with the current ones
    for (int i = 0; i < 256; i++) {
        double totalDiff = 0;
        totalDiff += Math.pow(main.rHist[i] - rHist[i], 2) / 2;
        totalDiff += Math.pow(main.gHist[i] - gHist[i], 2) / 2;
        totalDiff += Math.pow(main.bHist[i] - bHist[i], 2) / 2;
        fitness += Math.pow(1 + totalDiff, -1);
    }
}
public void evaluate() {
    int totalFitness = 0;
    for (int i = 0; i < POPULATION_SIZE; i++) {
        population[i].calcFitness(target);
        //population[i].calculateFitness();
        totalFitness += population[i].fitness;
    }
    if (totalFitness > 0) {
        for (int i = 0; i < POPULATION_SIZE; i++) {
            population[i].prob = population[i].fitness / totalFitness;
        }
    }
}
public void selection() {
    SmartImage[] newPopulation = new SmartImage[POPULATION_SIZE];
    for (int i = 0; i < POPULATION_SIZE; i++) {
        DNA child;
        DNA parentA = pickOne();
        DNA parentB = pickOne();
        child = parentA.crossover(parentB);
        child.mutate(mutationRate);
        newPopulation[i] = new SmartImage(parent, child, target.width, target.height);
    }
    population = newPopulation;
    generation++;
}
What I expect is to get a general shape and color similar to my target image, but all I get is random polygons with random colors and alphas.
The code looks fine at first glance. You should first check that your code is capable of converging to a target at all, for example by feeding it a target image that was generated by your algorithm from a random genome (or a very simple image that your algorithm should be able to recreate easily).
You are using the SAD (sum of absolute differences) metric between pixels to calculate fitness. You could try SSD (sum of squared differences), as you already do in the histogram method, but between pixels or blocks; that will heavily penalize large differences, so the surviving images won't be too different from the target. You could also try a more perceptual color space like HSV, so the images will be closer visually even if they are farther apart in RGB space.
I think comparing the histogram of the entire image may be too lax, since many different images produce the same histogram. Comparing individual pixels may be too strict: the image needs to be aligned very precisely to get low differences, so everything gets a low fitness value unless you are very lucky, and convergence will be too slow. I would recommend comparing histograms between overlapping blocks, and not using all 256 levels; about 16 levels or so is enough (or use some kind of overlap between bins).
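Just to illustrate the block-histogram idea (the method name, block size, step and the brightness-only binning are placeholder choices, and parent is the same PApplet reference your calcFitness uses), a sketch could look like this:

double blockHistogramFitness(PImage tar, PImage img, int blockSize, int step) {
    // step < blockSize makes the blocks overlap
    tar.loadPixels();
    img.loadPixels();
    double fit = 0;
    for (int by = 0; by + blockSize <= img.height; by += step) {
        for (int bx = 0; bx + blockSize <= img.width; bx += step) {
            // 16-level brightness histograms for this block in both images
            int[] hTar = new int[16], hImg = new int[16];
            for (int y = by; y < by + blockSize; y++) {
                for (int x = bx; x < bx + blockSize; x++) {
                    int idx = y * img.width + x;
                    hTar[(int) (parent.brightness(tar.pixels[idx]) / 16)]++;
                    hImg[(int) (parent.brightness(img.pixels[idx]) / 16)]++;
                }
            }
            double diff = 0;
            for (int b = 0; b < 16; b++) {
                diff += Math.pow(hTar[b] - hImg[b], 2);
            }
            fit += 1.0 / (1.0 + diff);
        }
    }
    return fit;
}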
Read about Histograms of Oriented Gradients (HOG) and other similar techniques to get ideas for improving your fitness function. I took an online course about object recognition in images, Coursera - Deteccion de Objetos by the University of Barcelona, but it's in Spanish; I'm pretty sure you can find similar study material in English.
Edit: before trying something more complex, a good first step would be computing the SAD or SSD on the average of each overlapping block (which has a similar effect to strongly blurring the reference and generated images and then comparing pixels, but is faster). The fitness function should be resilient to small changes: an image that is shifted by a few pixels, or that is very similar once low-level detail is discarded, should get much better fitness than a very different image, and I think blurring has that effect.
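A rough sketch of that block-average idea, in the same style as your calcFitness (blockSize is an arbitrary choice, and the blocks here don't overlap for simplicity; stepping by less than blockSize would make them overlap):

public void calcBlockFitness(PImage tar) {
    int blockSize = 8;   // arbitrary; tune as needed
    tar.loadPixels();
    image.loadPixels();
    double totalDiff = 0;
    for (int by = 0; by < image.height; by += blockSize) {
        for (int bx = 0; bx < image.width; bx += blockSize) {
            // Average brightness of this block in the target and the candidate
            float sumTar = 0, sumImg = 0;
            int count = 0;
            for (int y = by; y < Math.min(by + blockSize, image.height); y++) {
                for (int x = bx; x < Math.min(bx + blockSize, image.width); x++) {
                    int idx = y * image.width + x;
                    sumTar += parent.brightness(tar.pixels[idx]);
                    sumImg += parent.brightness(image.pixels[idx]);
                    count++;
                }
            }
            // SSD between block averages heavily penalizes large local differences
            double diff = (sumTar - sumImg) / count;
            totalDiff += diff * diff;
        }
    }
    fitness = 1.0 / (1.0 + totalDiff);
}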
I have a sequence of images for which I want to calculate the median image (as to remove moving elements). Intuitively, hard-coding a loop to go through all the pixels would have a gross running time, as well as fairly large memory usage. Is there a way to easily do this in OpenCV? (I'm not interested in averaging, I need to do a median). I'm writing this for Android (using OpenCV4Android) so obviously computing power is limited.
As far as I know, there is no OpenCV function that creates a median image from a sequence of images. I needed the same feature a couple of years ago and had to implement it myself. It is relatively slow, because for each pixel you need to extract the relevant pixel from multiple images (inefficient memory access) and calculate the median (also a time-consuming process).
Possible ways to increase efficiency are:
There is no need to compute the median from all the images; a small subset will be enough.
You can use more efficient algorithms for finding the median of small groups. For example, I used an algorithm that can efficiently find the median of a group of nine values.
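Since you're on OpenCV4Android (Java), a plain-Java sketch of the per-pixel median over a small subset of frames might look like this; it assumes each frame has already been copied into a byte[] of 8-bit grayscale pixels (e.g. via Mat.get(0, 0, data)), and the method name is only illustrative:

import java.util.Arrays;
import java.util.List;

static byte[] medianImage(List<byte[]> frames, int width, int height) {
    byte[] result = new byte[width * height];
    int n = frames.size();
    int[] values = new int[n];
    for (int p = 0; p < width * height; p++) {
        // Gather this pixel from every frame (this is the slow, cache-unfriendly part)
        for (int f = 0; f < n; f++) {
            values[f] = frames.get(f)[p] & 0xFF;   // unsigned byte value
        }
        Arrays.sort(values);
        result[p] = (byte) values[n / 2];
    }
    return result;
}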
If the mean is ok:
// Accumulate in 64-bit floats to avoid 8-bit overflow, then convert back
Mat result = Mat::zeros(listImages[0].size(), CV_64FC3);
Mat converted;
for (size_t i = 0; i < listImages.size(); i++) {
    listImages[i].convertTo(converted, CV_64FC3);
    result += converted;
}
result /= listImages.size();
result.convertTo(result, CV_8UC3);
EDIT:
This quick pseudo-median should do the trick:
// The following algorithm retains, for each pixel, the value closest to the mean
// Computing the mean (accumulate in 64-bit floats, then convert back to 8-bit)
Mat tmpResult = Mat::zeros(listImages[0].size(), CV_64FC3);
Mat converted;
for (size_t i = 0; i < listImages.size(); i++) {
    listImages[i].convertTo(converted, CV_64FC3);
    tmpResult += converted;
}
tmpResult /= listImages.size();
tmpResult.convertTo(tmpResult, CV_8UC3);
// We will now, for each pixel, retain the value closest to the mean
// Initializing the result with the first image
Mat result(listImages[0].clone());
Mat diff1, diff2, minDiff;
for (size_t i = 1; i < listImages.size(); i++) {
    // Computing the differences mean/newImage and mean/lastResult
    absdiff(tmpResult, listImages[i], diff1);
    absdiff(tmpResult, result, diff2);
    // If a pixel of the new image is closer to the mean, it replaces the old one
    min(diff1, diff2, minDiff);
    // Keep the old pixels that are still closest to the mean
    result = result & ~(minDiff - diff2);
    // Take the new pixels
    result += listImages[i] & (minDiff - diff2);
}
However, the classic one should also be pretty fast. It is O(nb^2 * w * h), where nb is the number of images and w, h their width and height. The approach above is O(nb * w * h), but with more operations on Mats.
The code for the classical one (almost all of the computation happens in native code):
Mat tmp;
// Sort the pixels across the images: the first Mat ends up with the lowest values
// and the last one with the highest
for (size_t i = 0; i < listImages.size(); i++) {
    for (size_t j = i + 1; j < listImages.size(); j++) {
        listImages[i].copyTo(tmp);
        min(listImages[i], listImages[j], listImages[i]);
        max(listImages[j], tmp, listImages[j]);
    }
}
// The median is the middle image
Mat result = listImages[listImages.size() / 2];
Hello, I need to access every pixel of an ImagePlus for image analysis.
Because of the huge number of images to process, I was wondering if there are particularly efficient ways/methods to access and/or modify each pixel of an ImagePlus.
The only idea I naturally came up with is double for-looping through the image matrix, which takes several dozen seconds on a 1000x1000 image.
Here is my code:
ImagePlus Iorg = IJ.openImage("Demo1.png");
int[] pix = Iorg.getPixel(5, 5);
if (Iorg.getSlice() != 1) {
    System.exit(0);
}
for (int w = 0; w < Iorg.getDimensions()[0]; w++) {
    for (int h = 0; h < Iorg.getDimensions()[1]; h++) {
        System.out.println(w + " x " + h);
        // Do what needs to be done
    }
}
Any idea?
Since the images are uchar, what you want to do is the equivalent of:
if (selected_pixel == 255)
    selected_pixel = 1;
else
    selected_pixel = 0;
You can create a mask; that would be easier. I don't know the Java ImagePlus API, but in MATLAB it is mask = image == 255;.
Try to use that kind of matrix operation according to your needs. I'm sure such methods exist somewhere in the library (if ImagePlus is an image-processing library).
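If I understand the ImageJ API correctly, something along these lines should be the Java equivalent, assuming an 8-bit grayscale image; working on the processor's raw pixel array is also much faster than calling getPixel(x, y) in a loop:

import ij.IJ;
import ij.ImagePlus;
import ij.process.ImageProcessor;

ImagePlus Iorg = IJ.openImage("Demo1.png");
ImageProcessor ip = Iorg.getProcessor();
byte[] pixels = (byte[]) ip.getPixels();   // one flat, row-major array for 8-bit images
byte[] mask = new byte[pixels.length];
for (int i = 0; i < pixels.length; i++) {
    // Equivalent of mask = image == 255 in MATLAB
    mask[i] = (byte) ((pixels[i] & 0xFF) == 255 ? 1 : 0);
}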
I want to look within a certain region of an image to see if the selected pixels have changed color; how would I go about doing this? (I'm trying to check for movement.)
I was thinking I could do something like this:
public int[] rectanglePixels(BufferedImage img, Rectangle Range) {
    int[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
    int[] boxColors;
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            boxColors = pixels[(x & Range.width) * Range.x + (y & Range.height) * Range.y * width];
        }
    }
    return boxColors;
}
Maybe use that to extract the colors from the region? I'm not sure if I'm doing that right, but after that, should I re-run this method, compare the two arrays for similarities, and if the number of similarities reaches some threshold, declare that the image has changed?
One approach to detecting movement is to analyze pixel color variation over the entire image, or a subimage, at distinct times (n, n-1, n-2, ...). This assumes a fixed camera. You might use two thresholds:
The threshold of color channel variation that defines two pixels as distinct.
The threshold on the number of distinct pixels between images needed to consider that there is movement. In other words: if two images of the same scene at times n and n-1 differ in just 10 pixels, is it real movement or just noise?
Below is an example showing how to count the distinct pixels between two images, given a color channel threshold.
for (int y = 0; y < imageA.getHeight(); y++) {
    for (int x = 0; x < imageA.getWidth(); x++) {
        redA = imageA.getIntComponent0(x, y);
        greenA = imageA.getIntComponent1(x, y);
        blueA = imageA.getIntComponent2(x, y);
        redB = imageB.getIntComponent0(x, y);
        greenB = imageB.getIntComponent1(x, y);
        blueB = imageB.getIntComponent2(x, y);
        if (
            Math.abs(redA - redB) > colorThreshold ||
            Math.abs(greenA - greenB) > colorThreshold ||
            Math.abs(blueA - blueB) > colorThreshold
        ) {
            distinctPixels++;
        }
    }
}
However, there are Marvin plug-ins to do so. Check this source code example: it detects and displays regions containing "movement".
There are more sophisticated approaches that determine/subtract background for this purpose or deal with camera movements. I guess you should start from the simplest scenario and then go to more complex ones.
You should use BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) unless you really want to play around with the loops and extra arrays.
Comparing the two values against a threshold would serve as a good indicator. Perhaps you could calculate an average for each array to determine its overall color and compare the two? If you don't want a threshold value, just use .hashCode().
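For example, roughly like this (range, previousAvg and threshold are placeholders you would define yourself):

// Grab the region of interest in one call and compare its average channel value
// with the average from the previous capture.
int w = range.width, h = range.height;
int[] current = new int[w * h];
img.getRGB(range.x, range.y, w, h, current, 0, w);
long sum = 0;
for (int rgb : current) {
    sum += (rgb >> 16 & 0xFF) + (rgb >> 8 & 0xFF) + (rgb & 0xFF);
}
double avg = sum / (double) (w * h * 3);
boolean changed = Math.abs(avg - previousAvg) > threshold;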
I have an image, and I figured out how to use robot and getPixelColor() to grab the color of a certain pixel. The image is a character that I'm controlling, and I want robot to scan around the image constantly, and tell me if the pixels around it equal a certain color. Is this at all possible? Thanks!
Myself, I'd use the Robot to capture an image that's just a little larger than the "character", and then analyze the BufferedImage obtained. The details will of course depend on your program. Probably the quickest approach is to get the BufferedImage's Raster, then its DataBuffer, then its data array, and analyze the returned array.
For example,
// screenRect is a Rectangle that contains your "character"
// plus however much of the surrounding area you want to inspect
BufferedImage img = robot.createScreenCapture(screenRect);
int[] imgData = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
// Now that you've got the image ints, you can analyze them as you wish.
// All I've done below is mask off the alpha value and display the ints.
for (int i = 0; i < screenRect.height; i++) {
    for (int j = 0; j < screenRect.width; j++) {
        int index = i * screenRect.width + j;
        int imgValue = imgData[index] & 0xffffff;
        System.out.printf("%06x ", imgValue);
    }
    System.out.println();
}