Java: Checking if image moved

I want to look within a certain position in an image to see if the selected pixels have changed in color; how would I go about doing this? (I'm trying to check for movement.)
I was thinking I could do something like this:
public int[] rectanglePixels(BufferedImage img, Rectangle range) {
    // grab the backing pixel array (assumes an int-based image such as TYPE_INT_RGB)
    int[] pixels = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
    int[] boxColors = new int[range.width * range.height];
    for (int y = 0; y < range.height; y++) {
        for (int x = 0; x < range.width; x++) {
            // copy the pixel at (range.x + x, range.y + y) into the result
            boxColors[x + y * range.width] = pixels[(range.x + x) + (range.y + y) * img.getWidth()];
        }
    }
    return boxColors;
}
Maybe use that to extract the colors from the position? I'm not sure if I'm doing that right, but after that, should I re-run this method and compare the two arrays? And if the number of differences reaches some threshold, declare that the image has changed?

One approach to detect movement is to analyze pixel color variation over the entire image, or a subimage, at distinct times (n, n-1, n-2, ...). This assumes a fixed camera. You might use two thresholds:
The threshold of color channel variation that defines that two pixels are distinct.
The threshold of distinct pixels between the images needed to conclude there is movement. In other words: two images of the same scene at times n and n-1 differ in just 10 pixels. Is that real movement or just noise?
Below is an example showing how to count the distinct pixels between two images, given a color channel threshold.
int distinctPixels = 0;
for (int y = 0; y < imageA.getHeight(); y++) {
    for (int x = 0; x < imageA.getWidth(); x++) {
        // read the RGB channels of both images at (x, y)
        int redA = imageA.getIntComponent0(x, y);
        int greenA = imageA.getIntComponent1(x, y);
        int blueA = imageA.getIntComponent2(x, y);
        int redB = imageB.getIntComponent0(x, y);
        int greenB = imageB.getIntComponent1(x, y);
        int blueB = imageB.getIntComponent2(x, y);
        // a pixel is "distinct" if any channel differs by more than the threshold
        if (Math.abs(redA - redB) > colorThreshold ||
            Math.abs(greenA - greenB) > colorThreshold ||
            Math.abs(blueA - blueB) > colorThreshold) {
            distinctPixels++;
        }
    }
}
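To apply the second threshold, compare that count against a pixel-count threshold of your choosing (pixelCountThreshold below is an illustrative name):
boolean movement = distinctPixels > pixelCountThreshold; // threshold #2: enough pixels changed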
However, there are Marvin plug-ins that do this. Check this source code example. It detects and displays regions containing "movement".
There are more sophisticated approaches that determine/subtract background for this purpose or deal with camera movements. I guess you should start from the simplest scenario and then go to more complex ones.

You should use BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) unless you really want to play around with the loops and extra arrays.
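For example, a minimal sketch that grabs a rectangular region in one call (img and range are assumed from the question's code):
int[] boxColors = img.getRGB(range.x, range.y, range.width, range.height, null, 0, range.width);
Passing null lets getRGB allocate the result array; the last argument is the scanline stride of that array.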

Comparing two values through a threshold would serve as a good indicator. Perhaps you could calculate averages for each array to determine overall color and compare the two? If you do not want a threshold value, just use Arrays.hashCode() to test for exact equality.
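A minimal sketch of the averaging idea, assuming two equal-length arrays of packed ARGB pixels (such as those returned by BufferedImage.getRGB):
static boolean hasChanged(int[] a, int[] b, int threshold) {
    long[] sumA = new long[3];
    long[] sumB = new long[3];
    // accumulate red, green and blue separately
    for (int p : a) { sumA[0] += (p >> 16) & 0xFF; sumA[1] += (p >> 8) & 0xFF; sumA[2] += p & 0xFF; }
    for (int p : b) { sumB[0] += (p >> 16) & 0xFF; sumB[1] += (p >> 8) & 0xFF; sumB[2] += p & 0xFF; }
    // compare the per-channel averages against the threshold
    for (int c = 0; c < 3; c++) {
        if (Math.abs(sumA[c] / a.length - sumB[c] / b.length) > threshold) {
            return true;
        }
    }
    return false;
}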

Related

Trying to get multiple images out of a single image

I've been stuck on something recently.
What I want to do is get multiple sub-images out of 1 big image.
Take this example: I have a frame of 128x128 pixels in which all the images need to fit.
I'm putting all the BufferedImages inside a list and scaling all those images to 128x128.
The image below shows that I need 4 sub-images from that image, so at the end I have 4 images of 128x128 each.
Or, if you have an image of 128x384, it will give 3 sub-images going from top to bottom.
https://i.stack.imgur.com/RsCkf.png
I know there is a function called
BufferedImage.getSubimage(int x, int y, int w, int h);
But the problem is that I can't figure out what math I need to implement.
What I tried is: if the height or width is higher than 200, divide it by 2, but that never worked for me.
I'm not sure I fully understand what you are asking, but I think what you want is something like this:
First, loop over the image in both dimensions.
Then compute the size of the tile (the smaller value of 128 and (image dimension - start pos)). This is to make sure you don't try to fetch a tile out of bounds. If your images are always a multiple of 128 in any dimension, you could just skip this step and just use 128 (just make sure you validate that input images follow this assumption).
If you only want tiles of exactly 128x128, you could instead skip the remainder when the tile would be smaller than 128x128; I'm not sure what your requirement is here. Anyway, I'll leave that to you. :-)
Finally, get the subimage of that size and coordinates and store in the list.
Code:
BufferedImage image = ...;
int tileSize = 128;
List<BufferedImage> tiles = new ArrayList<>();
for (int y = 0; y < image.getHeight(); y += tileSize) {
    // clamp the tile height at the bottom edge
    int h = Math.min(tileSize, image.getHeight() - y);
    for (int x = 0; x < image.getWidth(); x += tileSize) {
        // clamp the tile width at the right edge
        int w = Math.min(tileSize, image.getWidth() - x);
        tiles.add(image.getSubimage(x, y, w, h));
    }
}

Loop and change pixel values in Mat OpenCV Android

I am now building a project based on the sample color blob tracking method. I used bounding rectangles around the contours to indicate the blobs. Now I want to improve this algorithm by using an error correction method. What I do now is simply sum up the pixels in the rect region using the sumElems method, calculate the average intensity, and set it as the new blob detection parameter in each frame. However, this is not accurate, since pixels outside the contour but inside the bounding rect are counted as well, and the result is poor.
In order to solve the problem, I used another, straightforward way: loop through each pixel in the rectangle region (which is a submat) and set all pixel values out of range to the desired (or previous) HSV scalar, then sum up all the pixels again and calculate the average intensity. This is much more accurate and easily solves the problem. The downside is that the program runs far too slowly on the phone (around 1 frame per second), even though the result is accurate.
I found some sources online on how to do it in C++ using Mat.forEach. I do not want to go down the NDK route, and I would like to know if there is a more efficient way to do it in Java (Android).
UPDATE:
It turned out I can solve the problem by simply reducing the sampling rate. Instead of calculating the average intensity over all pixels, a small number of them does the job. My code:
for (int i = 0; i < bounding_rect_hsv.rows(); i += 10) {      // sample every 10th row
    for (int j = 0; j < bounding_rect_hsv.cols(); j += 10) {  // and every 10th column
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30) {
                data[k] = new_hsvColor.val[k];
            }
        }
        bounding_rect_hsv.put(i, j, data); // put the element back into the matrix
    }
}
My source code:
Rect rect = Imgproc.boundingRect(points);
// draw enclosing rectangle (all the same color, but you could use variable i to make them unique)
Imgproc.rectangle(original_frame, new Point(rect.x, rect.y),
        new Point(rect.x + rect.width, rect.y + rect.height),
        new Scalar(255, 0, 0, 255), 3);
// TODO: use the bounding rectangle to calculate average intensity
// (turning the pixels outside the contour to new_hsvColor; just changing the boundary values would be enough)
bounding_rect_rgb = original_frame.submat(rect);
Imgproc.cvtColor(bounding_rect_rgb, bounding_rect_hsv, Imgproc.COLOR_RGB2HSV_FULL);
// TODO: change the logic so that pixels outside the contour are changed to new_hsvColor
for (int i = 0; i < bounding_rect_hsv.rows(); i++) {
    for (int j = 0; j < bounding_rect_hsv.cols(); j++) {
        double[] data = bounding_rect_hsv.get(i, j);
        for (int k = 0; k < 3; k++) {
            if (data[k] > new_hsvColor.val[k] + 30 || data[k] < new_hsvColor.val[k] - 30) {
                data[k] = new_hsvColor.val[k];
            }
        }
        bounding_rect_hsv.put(i, j, data); // put the element back into the matrix
    }
}
If you want to compute the mean value of the pixels inside a contour you can simply:
Create a mask, using drawContours with a filled thickness (CV_FILLED, i.e. -1) and color Scalar(255), on a black (Scalar(0)) CV_8UC1 image of the same size as the original image.
Use mean to compute the mean of the pixels under the mask, as in the sketch below.
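A minimal sketch in OpenCV for Java (contours, contourIdx and hsvImage are assumed from the surrounding code):
Mat mask = Mat.zeros(hsvImage.size(), CvType.CV_8UC1);
// thickness -1 means the contour is filled rather than outlined
Imgproc.drawContours(mask, contours, contourIdx, new Scalar(255), -1);
// mean of the pixels under the mask only
Scalar meanHsv = Core.mean(hsvImage, mask);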
You also don't need to convert to HSV every region (Rect), but you can convert the whole image once, and then access the desired region directly on the HSV image.
In the general case, when you want to sum the pixel values of a lot of rectangular regions, you may prefer to compute the integral image first; the sum over any rectangle is then a combination of the integral values at its four corners (bottom-right - top-right - bottom-left + top-left), as sketched below.
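A minimal sketch of that lookup (gray is assumed to be a single-channel Mat and rect the region of interest):
Mat integral = new Mat();
Imgproc.integral(gray, integral); // result is (rows + 1) x (cols + 1)
double sum = integral.get(rect.y + rect.height, rect.x + rect.width)[0]
           - integral.get(rect.y, rect.x + rect.width)[0]
           - integral.get(rect.y + rect.height, rect.x)[0]
           + integral.get(rect.y, rect.x)[0];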

Java rotation of pixel array

I have tried to make an algorithm in Java to rotate a 2-D pixel array (not restricted to 90 degrees). The only problem I have is that the end result leaves me with dots/holes within the image.
Here is the code :
for (int x = 0; x < width; x++) {
    for (int y = 0; y < height; y++) {
        int xp = (int) (nx + Math.cos(rotation) * (x - width / 2)
                + Math.cos(rotation + Math.PI / 2) * (y - height / 2));
        int yp = (int) (ny + Math.sin(rotation) * (x - width / 2)
                + Math.sin(rotation + Math.PI / 2) * (y - height / 2));
        int pixel = pixels[x + y * width];
        Main.pixels[xp + yp * Main.WIDTH] = pixel;
    }
}
'Main.pixels' is an array connected to a canvas display; this is what is displayed on the monitor.
'pixels', and the function itself, are within a sprite class. The sprite class grabs the pixels from a '.png' image at the initialization of the program.
I've tried looking at 'rotation matrix' solutions, but they are too complicated for me. I have noticed that as the image gets closer to 45 degrees, it is somewhat stretched. What is going wrong? And what is the correct code that adds the pixels to a larger-scale array (e.g. Main.pixels[])?
It needs to be Java, and close to the code format above. I am not looking for complex examples, simply because I will not understand them. Simple and straight to the point is what I am looking for.
How I'd like the question to be answered:
Your formula is wrong because ...
Do this and the effect will be ...
Simplify this ...
I'd recommend ...
I'm sorry if I'm asking too much, but I have looked for an answer to this question that I can understand and use, only to always be given either a rotation of 90 degrees or an example in another programming language.
You are pushing the pixels forward, and not every pixel is hit by the discretized rotation map. You can get rid of the gaps by calculating the source of each pixel instead.
Instead of
for each pixel p in the source
pixel q = rotate(p, theta)
q.setColor(p.getColor())
try
for each pixel q in the image
pixel p = rotate(q, -theta)
q.setColor(p.getColor())
This will still have visual artifacts. You can improve on this by interpolating instead of rounding the coordinates of the source pixel p to integer values.
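For instance, a minimal sketch of the backward mapping with bilinear interpolation; the names (src, dst, w, h, cx, cy, theta) are illustrative, not taken from the question:
double c = Math.cos(-theta), s = Math.sin(-theta);
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        // rotate the destination coordinate back into the source image
        double sx = cx + (x - cx) * c - (y - cy) * s;
        double sy = cy + (x - cx) * s + (y - cy) * c;
        int x0 = (int) Math.floor(sx);
        int y0 = (int) Math.floor(sy);
        if (x0 < 0 || y0 < 0 || x0 + 1 >= w || y0 + 1 >= h) continue; // source outside image
        double fx = sx - x0, fy = sy - y0;
        // blend the four surrounding source pixels (one gray channel shown for brevity)
        double top = (1 - fx) * (src[x0 + y0 * w] & 0xFF) + fx * (src[x0 + 1 + y0 * w] & 0xFF);
        double bot = (1 - fx) * (src[x0 + (y0 + 1) * w] & 0xFF) + fx * (src[x0 + 1 + (y0 + 1) * w] & 0xFF);
        int v = (int) Math.round((1 - fy) * top + fy * bot);
        dst[x + y * w] = 0xFF000000 | (v << 16) | (v << 8) | v;
    }
}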
Edit: Your rotation formulas looked odd, but they appear ok after using trig identities like cos(r+pi/2) = -sin(r) and sin(r+pi/2)=cos(r). They should not be the cause of any stretching.
To avoid holes you can:
compute the source coordinate from the destination (just reverse your current computation); this is the same as Douglas Zare's answer
use bilinear or better filtering
use less than a single-pixel step; usually 0.75 pixel is enough to cover the holes, but you need to use floats instead of ints, which sometimes is not possible (due to performance, a missing implementation, or other reasons)
Distortion
If your image gets distorted, then you do not have the aspect ratio correctly applied, so the x-pixel size differs from the y-pixel size. You need to add a scale to one axis so it matches the device/transforms applied. A few hints:
Are the source image and destination image separate (not in place)? That is, Main.pixels and pixels must not be the same buffer; otherwise you would be overwriting some pixels before their use, which could be another cause of distortion.
I just realized you have cos,cos and sin,sin in the rotation formula, which is non-standard, so you may have the angle delta wrongly signed somewhere.
Just to be sure, here is an example of bullet #1 (the reverse mapping) with the standard rotation formula, written in Java:
float c = (float) Math.cos(-rotation);
float s = (float) Math.sin(-rotation);
int x0 = Main.width / 2;
int y0 = Main.height / 2;
int x1 = width / 2;
int y1 = height / 2;
for (int a = 0, y = 0; y < Main.height; y++) {
    for (int x = 0; x < Main.width; x++, a++) {
        // coordinate inside dst image, biased to the rotation center
        int xp = x - x0;
        int yp = y - y0;
        // rotate inverse
        int xx = (int) (xp * c - yp * s);
        int yy = (int) (xp * s + yp * c);
        // coordinate inside src image
        xp = xx + x1;
        yp = yy + y1;
        if ((xp >= 0) && (xp < width) && (yp >= 0) && (yp < height)) {
            Main.pixels[a] = pixels[xp + yp * width]; // copy pixel
        } else {
            Main.pixels[a] = 0; // out-of-range source pixel is black
        }
    }
}

Speed up looking through matrix

I have code
public static void program() throws Exception {
    BufferedImage input = null;
    long start = System.currentTimeMillis();
    while ((System.currentTimeMillis() - start) / 1000 < 220) {
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                input = robot.createScreenCapture(new Rectangle(3 + i * 40, 127 + j * 40, 40, 40));
                if ((input.getRGB(6, 3) > -7000000) && (input.getRGB(6, 3) < -5000000)) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }
}
On a webpage there's a 12x6 matrix in which images randomly spawn. Some are bad, some are good.
I'm looking for a better way to check for good images. At the moment, the RGB color at location (6, 3) of a good image differs from that of the bad images.
I'm taking a screenshot of every box (40 x 40) and looking at the pixel at location (6, 3).
I don't know how to explain my code any better.
EDIT:
Picture of the webpage. External links ok?
http://i.imgur.com/B5Ev1Y0.png
I'm not sure what exactly the bottleneck is in your code, but I have a hunch it might be the repeated calls to robot.createScreenCapture.
You could try calling robot.createScreenCapture on the entire matrix (i.e. a large rectangle that covers all the smaller rectangles you are interested in) outside your nested loops, and then look up the pixel values at the points you are interested in using offsets for the x and y coordinates for the sub rectangles you are inspecting.
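A minimal sketch of that idea, reusing the layout constants from the question's code:
// one capture covering the whole 12x6 grid of 40x40 boxes
BufferedImage screen = robot.createScreenCapture(new Rectangle(3 + 40, 127 + 40, 12 * 40, 6 * 40));
for (int i = 1; i < 13; i++) {
    for (int j = 1; j < 7; j++) {
        // pixel (6, 3) inside box (i, j), relative to the big capture
        int rgb = screen.getRGB((i - 1) * 40 + 6, (j - 1) * 40 + 3);
        if (rgb > -7000000 && rgb < -5000000) {
            robot.mouseMove(10 + i * 40, 137 + j * 40);
            robot.mousePress(InputEvent.BUTTON1_MASK);
            robot.mouseRelease(InputEvent.BUTTON1_MASK);
        }
    }
}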

Generating Box2D body(collision map) from tilemap efficiently

I am working on a platformer game that will use tile maps, though I don't know if that is a good idea!
I've made a neat tile map editor with tools for setting a spawn point etc., but now that I want to play-test the game after editing the map (and for future use), I of course need to integrate physics, which I've done with the Box2D that comes with libGDX.
I am creating a method that builds a collision map from the tile map, which records whether each tile is collidable or not.
So I came up with this idea:
Loop through the map, and when we find a colliding tile, loop through its neighbor tiles to see if they're colliding too; continue until a non-colliding tile is found, at which point we set the width and height of the colliding rectangle.
After we've got a bunch of rectangles, I sort them from biggest area to smallest so we get the biggest pieces first, then add each rectangle to the final list only if no rectangle already in that list contains it, so I don't have overlapping bodies.
But you know, code tells more than 1000 words, right?
public void createBody() {
    List<Rectangle> allRects = new ArrayList<Rectangle>();
    for (int x = 0; x < info.getWidth(); x++) {
        for (int y = 0; y < info.getHeight(); y++) {
            if (tiles[x][y].getInfo().isColliding()) {
                int width = 1;
                int height = 1;
                // loop through neighbor tiles horizontally
                for (int i = 0; i < info.getWidth() - x; i++) {
                    if (!tiles[x + i][y].getInfo().isColliding()) {
                        // tile is not colliding, so width is i, the current x offset
                        width = i;
                        break;
                    }
                }
                // only if width is bigger than zero can the rect have any tiles...
                if (width > 0) {
                    boolean breakingBad = false;
                    // loop through neighbor tiles vertically
                    for (int j = 0; j < info.getHeight() - y; j++) {
                        // loop through neighbors horizontally
                        for (int i = 0; i < width; i++) {
                            // check if tile is not colliding
                            if (!tiles[x + i][y + j].getInfo().isColliding()) {
                                // and if so, set height to j, the current y offset
                                height = j;
                                // breaking bad, aka leaving both loops
                                breakingBad = true;
                                break;
                            }
                        }
                        if (breakingBad) {
                            break;
                        }
                    }
                }
                if (width * height > 0) {
                    allRects.add(new Rectangle(x, y, width, height));
                }
            }
        }
    }
    Collections.sort(allRects, new Comparator<Rectangle>() {
        @Override
        public int compare(Rectangle o1, Rectangle o2) {
            Integer o1Square = o1.width * o1.height;
            Integer o2Square = o2.width * o2.height;
            return o2Square.compareTo(o1Square);
        }
    });
    List<Rectangle> finalRects = new ArrayList<Rectangle>();
    mainloop:
    for (Rectangle rect : allRects) {
        for (Rectangle finalRect : finalRects) {
            if (finalRect.contains(rect)) {
                continue mainloop;
            }
        }
        finalRects.add(rect);
    }
    for (Rectangle rect : finalRects) {
        PolygonShape polyShape = new PolygonShape();
        polyShape.setAsBox((float) rect.getWidth() / 2, (float) rect.getHeight() / 2,
                Vector2.tmp.set((float) rect.getCenterX(), (float) rect.getCenterY()), 0f);
        mapBody.createFixture(polyShape, 1);
        polyShape.dispose();
    }
}
However, this still seems pretty inefficient, because for some reason it still creates smaller fixtures than would be possible, for example in the upper right corner.
It also creates single fixtures in the corners of the center rectangle, and I can't figure out why!
Is the whole idea inefficient? Should I use another method, create collision maps manually, or what would be the best approach?
Originally each tile was its own fixture, which caused weird bugs on their edges, as expected.
First off, a custom tile mapping tool is a great idea on the surface, but you're reinventing the wheel.
libGDX has built-in support for TMX maps.
http://libgdx.badlogicgames.com/nightlies/docs/api/com/badlogic/gdx/maps/tiled/TmxMapLoader.html
Instead of using your homebrew editor, you can use a full-featured editor such as Tiled - http://www.mapeditor.org/
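Loading such a map is then a one-liner; a minimal sketch (the path and unit scale are illustrative):
TiledMap map = new TmxMapLoader().load("maps/level1.tmx");
OrthogonalTiledMapRenderer renderer = new OrthogonalTiledMapRenderer(map, 1 / 32f); // 32 px per world unit assumed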
So once you have a better system in place for your maps, I would look at this from an object-oriented perspective. Since you want to use Box2D physics, each collidableTile HAS A body. So all you need to do is assign a physics body to each collidableTile and set its size according to your standard tile size.
Don't forget that there is a difference between the Box2D world and your game screen: Box2D is measured in metric units, while your screen is measured in pixels, so you need to do some math to set positions and sizes properly. If you want a set of tiles to share a body, you may want to pass the body in as a parameter when you construct each collidableTile, and then adjust the size of the body based on how many adjacent tiles you can find. More complex shapes for the physics body will take more work. A sketch of the single-tile case follows.
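A minimal sketch of one static body per colliding tile (TILE_SIZE in pixels and PPM, pixels per meter, are assumed project constants; world is your Box2D World):
BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyDef.BodyType.StaticBody;
// center the body on tile (x, y), converting pixels to meters
bodyDef.position.set((x + 0.5f) * TILE_SIZE / PPM, (y + 0.5f) * TILE_SIZE / PPM);
Body body = world.createBody(bodyDef);
PolygonShape shape = new PolygonShape();
shape.setAsBox(TILE_SIZE / 2f / PPM, TILE_SIZE / 2f / PPM); // half-extents
body.createFixture(shape, 0f); // density 0 is fine for static bodies
shape.dispose();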
You can also save resources by letting those bodies 'sleep', where Box2D runs a reduced simulation on them until it detects a collision. If you're only using Box2D for collision detection on terrain, you may want to consider other options, like using a shape library to detect intersections and then stopping downward acceleration on your player character's body while there is contact, or something similar.
