I'm trying to scan a bar code for black and white lines (across the image from left to right). Can anyone help me in doing this? There are 95 bits in my bar code image and I want to scan across just once and get the values of those colors scanned with the .getRed, .getGreen, .getBlue methods.
I'm not sure if I started out right, but correct me if I'm wrong:
//Image already loaded in above code
//Scan Array
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        // ...
    }
}
I was told that the code above scans the entire image, not just a single row from left to right. Any help?
Edit:
Black lines would give (0, 0, 0) which would then equal to 1.
White lines would give (255, 255, 255) which would then equal to 0.
Don't try to do this yourself. Use Zebra Crossing.
https://github.com/zxing/zxing/
You'd need to scan across a row, find contrast, find a threshold, build candidate lists of valid width predictions and possible bar codes, and sort them... it is an enormous headache.
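That said, if this is a learning exercise, the single-row scan itself can be sketched as follows: read one row (say the middle one) and threshold each pixel's average luminance to a bit, black giving 1 and white 0 as in your edit. This is only the very first step of the pipeline described above, not a decoder; `image` stands for your already-loaded BufferedImage, and the synthetic image in main is just for demonstration:

```java
import java.awt.Color;
import java.awt.image.BufferedImage;

public class RowScan {

    // Threshold one row of the image into bits: dark pixel -> 1, light pixel -> 0.
    static int[] scanRow(BufferedImage image, int row) {
        int[] bits = new int[image.getWidth()];
        for (int x = 0; x < image.getWidth(); x++) {
            Color c = new Color(image.getRGB(x, row));
            // Simple average luminance; below 128 counts as "black".
            int luminance = (c.getRed() + c.getGreen() + c.getBlue()) / 3;
            bits[x] = luminance < 128 ? 1 : 0;
        }
        return bits;
    }

    public static void main(String[] args) {
        // Synthetic 4-pixel-wide "barcode": black, white, black, white columns.
        BufferedImage image = new BufferedImage(4, 2, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < 2; y++) {
            image.setRGB(0, y, 0x000000);
            image.setRGB(1, y, 0xFFFFFF);
            image.setRGB(2, y, 0x000000);
            image.setRGB(3, y, 0xFFFFFF);
        }
        int[] bits = scanRow(image, image.getHeight() / 2);
        System.out.println(java.util.Arrays.toString(bits)); // [1, 0, 1, 0]
    }
}
```

In a real 95-bit barcode image each bit spans several pixels, so you would still need to measure bar widths and map runs of pixels to modules, which is where ZXing earns its keep.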
I've been stuck on something recently.
What I want to do is get multiple sub-images out of 1 big image.
So take this example. I have a frame of 128x128 pixels that all the images need to fit in.
I'm putting all the BufferedImages inside a list and scaling all those images to 128x128.
The image at the link below shows that I need 4 sub-images from that image, so at the end I have 4 separate 128x128 images.
Or if you have a 128x384 image, it will give 3 sub-images, going from top to bottom.
https://i.stack.imgur.com/RsCkf.png
I know there is a function called
BufferedImage.getSubimage(int x, int y, int w, int h);
But the problem is that I can't figure out what math I need to implement.
What I tried is: if the height or width is larger than 200, divide it by 2, but that never worked for me.
I'm not sure I fully understand what you are asking, but I think what you want is something like this:
First, loop over the image in both dimensions.
Then compute the size of the tile (the smaller value of 128 and (image dimension - start pos)). This is to make sure you don't try to fetch a tile out of bounds. If your images are always a multiple of 128 in each dimension, you could skip this step and just use 128 (just make sure you validate that input images follow this assumption).
If you only want tiles of exactly 128x128, you could also just skip the remainder when the tile is less than 128x128; I'm not sure what your requirement is here. Anyway, I'll leave that to you. :-)
Finally, get the subimage of that size and coordinates and store in the list.
Code:
BufferedImage image = ...;
int tileSize = 128;
List<BufferedImage> tiles = new ArrayList<>();

for (int y = 0; y < image.getHeight(); y += tileSize) {
    int h = Math.min(tileSize, image.getHeight() - y);
    for (int x = 0; x < image.getWidth(); x += tileSize) {
        int w = Math.min(tileSize, image.getWidth() - x);
        tiles.add(image.getSubimage(x, y, w, h));
    }
}
I was working on my game today and I found that the tops of my trees have a weird texture problem: they overlap each other with a black box. It is only the tops of the trees, and each top is split into 9 blocks, each with its own image. The 9 images are transparent, each is 32x32, and I've tried it a bunch of different ways with no luck. Does anyone know what the problem with the texture is? This isn't a generation question but an OpenGL/Slick2D question about textures. Here's a screenshot of the problem: Screenshot
EDIT: Here's a piece of the rendering code.
for (int x = (int) (World.instance.camera.getX() / Block.WIDTH); x < width; x++)
{
    for (int y = (int) (World.instance.camera.getY() / Block.HEIGHT); y < height; y++)
    {
        try
        {
            if (blocks[x][y] != Block.AIR.getId())
            {
                g.drawImage(textureCache.get(blocks[x][y]), x * Block.WIDTH, y * Block.HEIGHT);
            }
        }
        catch (Exception ex)
        {
            // Swallows out-of-range indices at the edge of the camera view.
        }
    }
}
Looking at your code, it seems that you are only drawing a single image at each 32x32 square. So if tree A is in front of tree B, but tree A only partly fills a square, then tree A is the one listed in your blocks array and therefore retrieved from your "texture cache"; and not tree B. So tree A is all that is drawn.
To resolve this, your blocks structure would need to be three dimensional - basically, for each 32x32 square, you'd need some kind of "stack" of references to all the images whose corresponding object is found in that square. Then when you draw that square, draw all of the images in order, from the back to the front.
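A minimal sketch of that layered structure (the names `blocks` and `textureCache` mirror the question, but storing a `Deque` of ids per cell is my own assumption): each cell keeps its images in back-to-front order, and the render loop draws every entry rather than just one.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LayeredBlocks {

    // Flatten a grid of per-cell id stacks into a draw order:
    // within each cell, ids are emitted back to front.
    static List<Integer> renderOrder(Deque<Integer>[][] blocks) {
        List<Integer> order = new ArrayList<>();
        for (Deque<Integer>[] column : blocks) {
            for (Deque<Integer> cell : column) {
                for (int id : cell) {
                    order.add(id); // stand-in for g.drawImage(textureCache.get(id), ...)
                }
            }
        }
        return order;
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        Deque<Integer>[][] blocks = new ArrayDeque[1][1];
        blocks[0][0] = new ArrayDeque<>();
        blocks[0][0].addLast(2); // tree B's top tile, drawn first (behind)
        blocks[0][0].addLast(1); // tree A's top tile, drawn last (in front)
        System.out.println(renderOrder(blocks)); // [2, 1]
    }
}
```

The key point is the ordering, not the container: any per-cell list drawn in insertion order gives you painter's-algorithm layering within a tile.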
I have this code:
public static void program() throws Exception {
    BufferedImage input = null;
    long start = System.currentTimeMillis();
    while ((System.currentTimeMillis() - start) / 1000 < 220) {
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                input = robot.createScreenCapture(new Rectangle(3 + i * 40, 127 + j * 40, 40, 40));
                if ((input.getRGB(6, 3) > -7000000) && (input.getRGB(6, 3) < -5000000)) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }
}
On a webpage there's a 12x6 matrix where images randomly spawn. Some are bad, some are good.
I'm looking for a better way to check for good images. At the moment, at location (6,3) of a good image, the RGB color is different from bad images.
I'm taking a screenshot of every box (40x40) and looking at the pixel at location (6,3).
I don't know how to explain my code any better.
EDIT:
Picture of the webpage. External links ok?
http://i.imgur.com/B5Ev1Y0.png
I'm not sure what exactly the bottleneck is in your code, but I have a hunch it might be the repeated calls to robot.createScreenCapture.
You could try calling robot.createScreenCapture on the entire matrix (i.e. a large rectangle that covers all the smaller rectangles you are interested in) outside your nested loops, and then look up the pixel values at the points you are interested in using offsets for the x and y coordinates for the sub rectangles you are inspecting.
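A sketch of that single-capture idea, reusing the coordinates and threshold from your code; `robot` is assumed to be a `java.awt.Robot` you already have, and the probe logic is unchanged from the original:

```java
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.image.BufferedImage;

public class GridScan {

    // Probe pixel (6, 3) of box (i, j) inside a single full-grid capture.
    // The full capture starts at screen (43, 167), i.e. (3 + 1*40, 127 + 1*40).
    static int probe(BufferedImage grid, int i, int j) {
        return grid.getRGB((i - 1) * 40 + 6, (j - 1) * 40 + 3);
    }

    // Same "good image" heuristic as the original code.
    static boolean isGood(int rgb) {
        return rgb > -7000000 && rgb < -5000000;
    }

    // One createScreenCapture call per pass instead of 72.
    static void scanOnce(Robot robot) {
        BufferedImage grid = robot.createScreenCapture(new Rectangle(43, 167, 12 * 40, 6 * 40));
        for (int i = 1; i < 13; i++) {
            for (int j = 1; j < 7; j++) {
                if (isGood(probe(grid, i, j))) {
                    robot.mouseMove(10 + i * 40, 137 + j * 40);
                    robot.mousePress(InputEvent.BUTTON1_MASK);
                    robot.mouseRelease(InputEvent.BUTTON1_MASK);
                }
            }
        }
    }

    public static void main(String[] args) {
        // Headless demo on a synthetic image instead of a real screen capture.
        BufferedImage grid = new BufferedImage(480, 240, BufferedImage.TYPE_INT_RGB);
        grid.setRGB(6, 3, 0xA5A5A5); // paint the probe pixel of box (1, 1)
        System.out.println(isGood(probe(grid, 1, 1))); // true
    }
}
```

Each `createScreenCapture` call is a relatively expensive round trip to the windowing system, so collapsing 72 small captures into one big one per pass is usually the biggest single win here.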
I want to look at a certain position in an image to see if the selected pixels have changed in color. How would I go about doing this? (I'm trying to check for movement.)
I was thinking I could do something like this:
public int[] rectanglePixels(BufferedImage img, Rectangle range) {
    int[] boxColors = new int[range.width * range.height];
    for (int y = 0; y < range.height; y++) {
        for (int x = 0; x < range.width; x++) {
            boxColors[y * range.width + x] = img.getRGB(range.x + x, range.y + y);
        }
    }
    return boxColors;
}
Maybe use that to extract the colors from the position? I'm not sure if I'm doing that right, but after that, should I re-run this method, compare the two arrays, and if the number of differing pixels reaches some threshold, declare that the image has changed?
One approach to detecting movement is analyzing pixel color variation in the entire image, or a subimage, at distinct times (n, n-1, n-2, ...). This assumes a fixed camera. You might have two thresholds:
The threshold of color channel variation that defines that two pixels are distinct.
The threshold of distinct pixels between the images needed to consider that there is movement. In other words: two images of the same scene at times n and n-1 differ in just 10 pixels. Is that real movement or just noise?
Below is an example showing how to count the distinct pixels in an image, given a color channel threshold.
for (int y = 0; y < imageA.getHeight(); y++) {
    for (int x = 0; x < imageA.getWidth(); x++) {
        redA = imageA.getIntComponent0(x, y);
        greenA = imageA.getIntComponent1(x, y);
        blueA = imageA.getIntComponent2(x, y);
        redB = imageB.getIntComponent0(x, y);
        greenB = imageB.getIntComponent1(x, y);
        blueB = imageB.getIntComponent2(x, y);

        if (Math.abs(redA - redB) > colorThreshold ||
            Math.abs(greenA - greenB) > colorThreshold ||
            Math.abs(blueA - blueB) > colorThreshold) {
            distinctPixels++;
        }
    }
}
However, there are Marvin plug-ins to do so. Check this source code example. It detects and displays regions containing "movements", as shown in the image below.
There are more sophisticated approaches that determine/subtract background for this purpose or deal with camera movements. I guess you should start from the simplest scenario and then go to more complex ones.
You should use BufferedImage.getRGB(startX, startY, w, h, rgbArray, offset, scansize) unless you really want to play around with the loops and extra arrays.
Comparing two values through a threshold would serve as a good indicator. Perhaps you could calculate averages for each array to determine a color and compare the two? If you do not want a threshold value, just use .hashCode();
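A hedged sketch of that bulk getRGB approach (the rectangle, threshold value, and method name are made up for illustration): fetch the region's pixels from each frame in a single call, then count how many differ by more than a per-channel threshold.

```java
import java.awt.Rectangle;
import java.awt.image.BufferedImage;

public class RegionDiff {

    // Count pixels inside 'range' whose color differs between two frames
    // by more than 'threshold' on any one channel.
    static int countChanged(BufferedImage a, BufferedImage b, Rectangle range, int threshold) {
        // One bulk fetch per frame instead of per-pixel getRGB calls.
        int[] pa = a.getRGB(range.x, range.y, range.width, range.height, null, 0, range.width);
        int[] pb = b.getRGB(range.x, range.y, range.width, range.height, null, 0, range.width);
        int changed = 0;
        for (int i = 0; i < pa.length; i++) {
            int ra = (pa[i] >> 16) & 0xFF, ga = (pa[i] >> 8) & 0xFF, ba = pa[i] & 0xFF;
            int rb = (pb[i] >> 16) & 0xFF, gb = (pb[i] >> 8) & 0xFF, bb = pb[i] & 0xFF;
            if (Math.abs(ra - rb) > threshold
                    || Math.abs(ga - gb) > threshold
                    || Math.abs(ba - bb) > threshold) {
                changed++;
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        BufferedImage f1 = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        BufferedImage f2 = new BufferedImage(4, 4, BufferedImage.TYPE_INT_RGB);
        f2.setRGB(1, 1, 0xFFFFFF); // one pixel "moved"
        System.out.println(countChanged(f1, f2, new Rectangle(0, 0, 4, 4), 30)); // 1
    }
}
```

You would then declare movement when `countChanged` exceeds your second threshold, as the answer above describes.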
For college, we have been given an assignment where, given an image, we have to identify the "figures", their color, and the amount of "pixel-groups" inside them. Let me explain:
The image above has one figure (an image can contain multiple figures, but let us forget about that for now).
The background color of the canvas is the pixel at 0,0 (in this case, yellow)
The border color of the figure is black (it can be any color other than the canvas' background color).
The figure's background color is white (it can also be the same as the canvas' background color).
A figure can only have one background color.
There are two pixel groups in the figure. One is a pool of blue pixels, and the other is a pool of red with some green inside. As you can see, the color of a pixel group's pixels doesn't matter (it is just different from the figure's background color). What matters is that they're in contact (even diagonally). So despite having two different colors, that group is considered just one anyway.
As you can see, the border can be as irregular as you wish. It only has, however, one color.
It is known that a pixel group will not touch the border.
I was told that a pixel group's colors can be any color except the figure's background color. I assume it can then be the same as the figure's border color (black).
We have been given a class capable of taking images and converting them to a matrix (each element being an integer representing the color of the pixel).
And that's it. I'm doing it with Java.
WHAT HAVE I DONE SO FAR
Iterate through each pixel in the matrix
If I find a pixel that is different from the background color, I will assume it belongs to the border of the figure. I will call this pixel initialPixel from now on.
Note that the initialPixel in the image I provided is that black pixel in the top-left corner of the figure. I made a sharp cut there purposefully to illustrate it.
My mission now is to find the background color of the figure (in this case white).
But I'm having a great deal of trouble finding that background color (white). This is the closest method I came up with, which worked for some cases - but not with this image:
Since I know the color of the border, I could find the first different color to the south of the initialPixel. It sounded like a good idea, and it did work sometimes, but it would not work with the image provided: it returns yellow in this case, since initialPixel is quite far away from the figure's contents.
Assuming I did find the figure's background color (white), my next task would be to realize that there exist two pixel groups within the figure. This one seems easier:
Since I now know the figure's background color (white), I can try iterating through each pixel within the figure and, if I find one that does not belong to the border and is not part of the figure's background, I can already tell there is one pixel group. I can begin a recursive function to find all pixels related to such group and "flag" them so that in the future iterations I can completely ignore such pixels.
WHAT I NEED
Yes, my problem is how to find the figure's background color (keep in mind it can be the same as the whole image's background color - for now it is yellow, but it could be white as well) based on what I described before.
I don't need any code - I'm just having trouble coming up with a proper algorithm. The fact that the border can have such weird irregular lines is killing me.
Or even better: have I been doing it wrong all along? Maybe I shouldn't have focused so much on that initialPixel at all. Maybe a different kind of initial method would have worked? Are there any documents/examples about topics like this? I realize there is a lot of research on "computer vision" and such, but I can't find much about this particular problem.
SOME CODE
My function to retrieve a vector with all the figures:
*Note: Figure is just a class that contains some values like the background color and the number of elements.
public Figure[] getFiguresFromImage(Image image) {
    Figure[] tempFigures = new Figure[100];
    int numberOfFigures = 0;
    matrixOfImage = image.getMatrix();
    int imageBackgroundColor = matrixOfImage[0][0];
    int pixel = 0;

    for (int y = 0; y < matrixOfImage.length; ++y) {
        for (int x = 0; x < matrixOfImage[0].length; ++x) {
            pixel = matrixOfImage[y][x];
            if (!exploredPixels[y][x]) {
                // This pixel has not been evaluated yet
                if (pixel != imageBackgroundColor) {
                    // This pixel is different than the background color.
                    // Since it is a new pixel, I assume it is the initial pixel of a new figure.
                    // Get the figure based on the initial pixel found.
                    tempFigures[numberOfFigures] = retrieveFigure(y, x);
                    ++numberOfFigures;
                }
            }
        }
    }
    // ** Do some work here after getting my figures **
    return null;
}
Then, clearly, the function retrieveFigure(y,x) is what I am being unable to do.
Notes:
For learning purposes, I should not be using any external libraries.
A good way to solve this problem is to treat the image as a graph, where there is one node ('component' in this answer) for each color-filled area.
Here is one way to implement this approach:
Mark all pixels as unvisited.
For each pixel, if the pixel is unvisited, perform the flood fill algorithm on it. During the flood fill mark each connected pixel as visited.
Now you should have a list of solid color areas in your image (or 'components'), so you just have to figure out how they are connected to each other:
Find the component that has pixels adjacent to the background color component - this is your figure border. Note that you can find the background color component by finding the component with the 0,0 pixel.
Now find the components with pixels adjacent to the newly found 'figure border' component. There will be two such components - pick the one that isn't the background (i.e. the one that doesn't contain the 0,0 pixel). This is your figure background.
To find the pixel groups, simply count the number of components with pixels adjacent to the figure background component (ignoring of course the figure border component)
Advantages of this approach:
runs in O(# pixels) time.
easy to understand and implement.
doesn't assume the background color and figure background color are different.
To make sure you understand how iterating through the components and their neighbors might work, here's example pseudocode for the final step (counting the pixel groups). Note that since a pixel group never touches the border, the components to flag are those adjacent to the figure background, not to the border:

List<Component> allComponents;   // created in step 2
Component background;            // found in step 3 (the component containing pixel 0,0)
Component figureBorder;          // found in step 4
Component figureBackground;      // found in step 5

for each Component c in allComponents:
    if c == background or c == figureBorder or c == figureBackground:
        continue
    for each Pixel pixel in c.pixelList:
        for each Pixel neighbor in pixel.neighbors:
            if neighbor.getComponent() == figureBackground:
                c.isPixelGroup = true

int numPixelGroups = 0
for each Component c in allComponents:
    if c.isPixelGroup:
        numPixelGroups++
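Since external libraries are out, here is a minimal sketch of steps 1-2, the flood-fill labeling, on the kind of int matrix your image class produces (4-connectivity, with an explicit stack to avoid recursion depth problems on large areas; all names are illustrative):

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;

public class Components {

    // Label each 4-connected region of equal color with an id (0, 1, 2, ...).
    static int[][] label(int[][] matrix) {
        int rows = matrix.length, cols = matrix[0].length;
        int[][] labels = new int[rows][cols];
        for (int[] row : labels) Arrays.fill(row, -1); // -1 = unvisited
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        int next = 0;
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (labels[y][x] != -1) continue; // already part of a component
                // Iterative flood fill from (y, x).
                int color = matrix[y][x];
                Deque<int[]> stack = new ArrayDeque<>();
                stack.push(new int[]{y, x});
                labels[y][x] = next;
                while (!stack.isEmpty()) {
                    int[] p = stack.pop();
                    for (int[] d : dirs) {
                        int ny = p[0] + d[0], nx = p[1] + d[1];
                        if (ny >= 0 && ny < rows && nx >= 0 && nx < cols
                                && labels[ny][nx] == -1 && matrix[ny][nx] == color) {
                            labels[ny][nx] = next;
                            stack.push(new int[]{ny, nx});
                        }
                    }
                }
                next++;
            }
        }
        return labels;
    }

    public static void main(String[] args) {
        // 0 = canvas, 1 = figure border, 2 = figure background
        int[][] img = {
            {0, 0, 0, 0, 0},
            {0, 1, 1, 1, 0},
            {0, 1, 2, 1, 0},
            {0, 1, 1, 1, 0},
            {0, 0, 0, 0, 0},
        };
        int[][] labels = label(img);
        System.out.println(labels[0][0] + " " + labels[1][1] + " " + labels[2][2]); // 0 1 2
    }
}
```

Once you have the label matrix, building the component list and their adjacencies (steps 3-6 and the pseudocode above) is just a second pass over neighboring cells with different labels.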
Try this code:
import java.util.Scanner;
import java.awt.image.BufferedImage;
import java.io.*;
import javax.imageio.ImageIO;

class Analyzer {
    private int pixdata[][];
    private int rgbdata[][];
    private BufferedImage image;
    int background_color;
    int border_color;
    int imagebg_color;

    private void populateRGB() {
        rgbdata = new int[image.getWidth()][image.getHeight()];
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                rgbdata[i][j] = image.getRGB(i, j);
            }
        }

        int howmanydone = 0;
        int prevcolor, newcolor;
        prevcolor = rgbdata[0][0];
        /*
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                System.out.print(rgbdata[i][j]);
            }
            System.out.println("");
        }*/
        for (int i = 0; i < image.getWidth(); i++) {
            for (int j = 0; j < image.getHeight(); j++) {
                newcolor = rgbdata[i][j];
                if ((howmanydone == 0) && (newcolor != prevcolor)) {
                    background_color = prevcolor;
                    border_color = newcolor;
                    prevcolor = newcolor;
                    howmanydone = 1;
                }
                if ((newcolor != prevcolor) && (howmanydone == 1)) {
                    imagebg_color = newcolor;
                }
            }
        }
    }

    public Analyzer() { background_color = 0; border_color = 0; imagebg_color = 0; }

    public int background() { return background_color; }

    public int border() { return border_color; }

    public int imagebg() { return imagebg_color; }

    public int analyze(String filename, String what) throws IOException {
        image = ImageIO.read(new File(filename));
        pixdata = new int[image.getHeight()][image.getWidth()];
        populateRGB();
        if (what.equals("background")) return background();
        if (what.equals("border")) return border();
        if (what.equals("image-background")) return imagebg();
        else return 0;
    }
}

public class ImageAnalyze {
    public static void main(String[] args) {
        Analyzer an = new Analyzer();
        String imageName;
        Scanner scan = new Scanner(System.in);
        System.out.print("Enter image name:");
        imageName = scan.nextLine();
        try {
            // "border", "image-background", and "background" will get you different colors.
            int a = an.analyze(imageName, "border");
            System.out.printf("Color bg: %x", a);
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }
}
The color returned is in ARGB format; you will need to extract R, G, and B from it.
There is a bug in this code; I'm working on an implementation using a finite state machine. In the first state you're on the image background, so (0,0) gives the background color. When the color first changes, that new color is the border color. The third state is when you're inside the border and the color changes again: that is the figure's background color.
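For reference, here is my reading of that three-state scan as a sketch (row-major order; not tested against the assignment's images, and it still assumes the first color change met is the border and the next new color is the figure's background):

```java
public class ColorScan {

    // States: 0 = on canvas background, 1 = seen border color, 2 = seen figure background.
    // Returns {backgroundColor, borderColor, figureBackgroundColor}.
    static int[] scan(int[][] rgb) {
        int background = rgb[0][0];
        int border = background, figureBg = background;
        int state = 0;
        for (int y = 0; y < rgb.length; y++) {
            for (int x = 0; x < rgb[0].length; x++) {
                int c = rgb[y][x];
                if (state == 0 && c != background) {
                    border = c;          // first color change: the border
                    state = 1;
                } else if (state == 1 && c != background && c != border) {
                    figureBg = c;        // next new color inside: the figure background
                    state = 2;
                }
            }
        }
        return new int[]{background, border, figureBg};
    }

    public static void main(String[] args) {
        int[][] img = {
            {0, 0, 0, 0, 0},
            {0, 1, 1, 1, 0},
            {0, 1, 2, 1, 0},
            {0, 1, 1, 1, 0},
        };
        int[] colors = scan(img);
        System.out.println(colors[0] + " " + colors[1] + " " + colors[2]); // 0 1 2
    }
}
```

Note this heuristic can still misfire if a pixel group's color is encountered before the figure background in scan order, which is one reason the flood-fill/component answer above is more robust.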