To improve my knowledge of imaging and get some hands-on experience with the topic, I decided to create a license plate recognition algorithm on the Android platform.
The first step is detection, for which I decided to implement a recent paper titled "A Robust and Efficient Approach to License Plate Detection". The paper presents its idea very well and uses quite simple techniques to achieve detection. Apart from some details lacking in the paper, I implemented the bilinear downsampling, the conversion to grayscale, and the edge detection plus adaptive thresholding described in Sections 3A, 3B.1, and 3B.2.
Unfortunately, I am not getting the output the paper presents in, e.g., figures 3 and 6.
The image I use for testing is as follows:
The grayscale (and downsampled) version looks fine (see the bottom of this post for the actual implementation); I used a well-known weighting of the RGB components to produce it (the paper does not mention how, so I took a guess).
Next is the initial edge detection using the Sobel-style filter the paper outlines. This produces an image similar to the ones presented in figure 6 of the paper.
And finally, to remove the "weak edges", they apply adaptive thresholding using a 20x20 window. Here is where things go wrong.
As you can see, it does not function properly, even though I am using their stated parameter values. Additionally, I have tried:
Changing the Beta parameter.
Using a 2D int array instead of Bitmap objects to simplify creating the integral image.
Using a higher Gamma parameter so the initial edge detection allows more "edges".
Changing the window to e.g. 10x10.
Yet none of the changes made an improvement; it keeps producing images like the one above. My question is: what am I doing differently from what is outlined in the paper, and how can I get the desired output?
Code
The (cleaned) code I use:
public int[][] toGrayscale(Bitmap bmpOriginal) {
    int width = bmpOriginal.getWidth();
    int height = bmpOriginal.getHeight();

    // color information
    int R, G, B;
    int pixel;

    int[][] greys = new int[width][height];

    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            pixel = bmpOriginal.getPixel(x, y);
            R = Color.red(pixel);
            G = Color.green(pixel);
            B = Color.blue(pixel);
            int gray = (int) (0.2989 * R + 0.5870 * G + 0.1140 * B);
            greys[x][y] = gray;
        }
    }
    return greys;
}
The code for edge detection:
private int[][] detectEdges(int[][] detectionBitmap) {
    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;

    int[][] edges = new int[width][height];

    // Loop over all pixels in the bitmap
    int c1 = 0; // counts edges clipped to Gamma
    int c2 = 0; // counts edges kept as-is
    for (int y = 0; y < height; y++) {
        for (int x = 2; x < width - 2; x++) {
            // Calculate d0 for each pixel
            int p0 = detectionBitmap[x][y];
            int p1 = detectionBitmap[x - 1][y];
            int p2 = detectionBitmap[x + 1][y];
            int p3 = detectionBitmap[x - 2][y];
            int p4 = detectionBitmap[x + 2][y];

            int d0 = Math.abs(p1 + p2 - 2 * p0) + Math.abs(p3 + p4 - 2 * p0);

            if (d0 >= Gamma) {
                c1++;
                edges[x][y] = Gamma;
            } else {
                c2++;
                edges[x][y] = d0;
            }
        }
    }
    return edges;
}
The code for adaptive thresholding. The SAT implementation is taken from here:
private int[][] AdaptiveThreshold(int[][] detectionBitmap) {
    // Create the integral image
    processSummedAreaTable(detectionBitmap);

    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;

    int[][] binaryImage = new int[width][height];
    int white = 0;
    int black = 0;

    int h_w = 20; // The window size
    int half = h_w / 2;

    // Loop over all pixels in the bitmap
    for (int y = half; y < height - half; y++) {
        for (int x = half; x < width - half; x++) {
            // Sum the window around the current pixel
            int sum = 0;
            for (int k = -half; k < half - 1; k++) {
                for (int j = -half; j < half - 1; j++) {
                    sum += detectionBitmap[x + k][y + j];
                }
            }
            if (detectionBitmap[x][y] >= (sum / (h_w * h_w)) * Beta) {
                binaryImage[x][y] = 255;
                white++;
            } else {
                binaryImage[x][y] = 0;
                black++;
            }
        }
    }
    return binaryImage;
}
/**
 * Process given matrix into its summed area table (in-place).
 * O(MN) time, O(1) extra space.
 * @param matrix source matrix
 */
private void processSummedAreaTable(int[][] matrix) {
    int rowSize = matrix.length;
    int colSize = matrix[0].length;
    for (int i = 0; i < rowSize; i++) {
        for (int j = 0; j < colSize; j++) {
            matrix[i][j] = getVal(i, j, matrix);
        }
    }
}

/**
 * Helper method for processSummedAreaTable.
 * @param row current row number
 * @param col current column number
 * @param matrix source matrix
 * @return sub-matrix sum
 */
private int getVal(int row, int col, int[][] matrix) {
    int leftSum;    // sub matrix sum of left matrix
    int topSum;     // sub matrix sum of top matrix
    int topLeftSum; // sub matrix sum of top left matrix
    int curr = matrix[row][col]; // current cell value

    /* top left value is itself */
    if (row == 0 && col == 0) {
        return curr;
    }
    /* top row */
    else if (row == 0) {
        leftSum = matrix[row][col - 1];
        return curr + leftSum;
    }
    /* left-most column */
    if (col == 0) {
        topSum = matrix[row - 1][col];
        return curr + topSum;
    } else {
        leftSum = matrix[row][col - 1];
        topSum = matrix[row - 1][col];
        topLeftSum = matrix[row - 1][col - 1]; // overlap between leftSum and topSum
        return curr + leftSum + topSum - topLeftSum;
    }
}
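For reference, the usual way a summed-area table is queried (a minimal sketch, not taken from the paper): once the table is built, any window sum comes from four lookups instead of an inner loop. The [x][y] indexing is an assumption matching the arrays above.

// Sketch: O(1) window sum over [x0..x1] x [y0..y1] from a summed-area table `sat`
// (assumes the window lies fully inside the image).
private int windowSum(int[][] sat, int x0, int y0, int x1, int y1) {
    int total = sat[x1][y1];
    if (x0 > 0) total -= sat[x0 - 1][y1];
    if (y0 > 0) total -= sat[x1][y0 - 1];
    if (x0 > 0 && y0 > 0) total += sat[x0 - 1][y0 - 1];
    return total;
}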
Marvin provides an approach to find text regions. Perhaps it can be a starting point for you:
Find Text Regions in Images:
http://marvinproject.sourceforge.net/en/examples/findTextRegions.html
This approach was also used in this question:
How do I separates text region from image in java
Using your image I got this output:
Source Code:
package textRegions;

import static marvin.MarvinPluginCollection.findTextRegions;

import java.awt.Color;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;

public class FindVehiclePlate {
    public FindVehiclePlate() {
        MarvinImage image = MarvinImageIO.loadImage("./res/vehicle.jpg");
        image = findText(image, 30, 20, 100, 170);
        MarvinImageIO.saveImage(image, "./res/vehicle_out.png");
    }

    public MarvinImage findText(MarvinImage image, int maxWhiteSpace, int maxFontLineWidth, int minTextWidth, int grayScaleThreshold) {
        List<MarvinSegment> segments = findTextRegions(image, maxWhiteSpace, maxFontLineWidth, minTextWidth, grayScaleThreshold);
        for (MarvinSegment s : segments) {
            if (s.height >= 10) {
                s.y1 -= 20;
                s.y2 += 20;
                image.drawRect(s.x1, s.y1, s.x2 - s.x1, s.y2 - s.y1, Color.red);
                image.drawRect(s.x1 + 1, s.y1 + 1, (s.x2 - s.x1) - 2, (s.y2 - s.y1) - 2, Color.red);
                image.drawRect(s.x1 + 2, s.y1 + 2, (s.x2 - s.x1) - 4, (s.y2 - s.y1) - 4, Color.red);
            }
        }
        return image;
    }

    public static void main(String[] args) {
        new FindVehiclePlate();
    }
}
Related
(Please don't mark this question as unclear - I spent a lot of time posting it. ;) )
Okay, I am trying to make a simple 2D Java game engine as a learning project, and part of it is rendering a filled polygon as a feature.
I am creating this algorithm myself, and I really can't figure out what I am doing wrong.
My thought process is something like this:
Loop through every line, count the outline points in that line, then get the X location of every point in that line.
Then loop through the line again, this time checking whether the x in the loop is inside one of the spans in the points array; if so, draw it.
Disclaimer: the Polygon class is another type of mesh, and its draw method returns an int array with lines drawn through each vertex.
Disclaimer 2: I've tried other people's solutions, but none really helped me and none really explained it properly (and just copying one is not the point of a learning project).
The draw methods are called once per frame.
FilledPolygon:
@Override
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];

    for (int y = 0; y < height; y++) {
        // count the outline pixels on this row
        int count = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                count++;
            }
        }

        // collect their x positions
        int[] points = new int[count];
        int current = 0;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                points[current] = x;
                current++;
            }
        }

        // fill between pairs of outline pixels
        if (count >= 2) {
            int num = count;
            if (count % 2 != 0)
                num--;
            for (int i = 0; i < num; i += 2) {
                for (int x = points[i]; x < points[i + 1]; x++) {
                    filled[x + y * width] = 0xffffffff;
                }
            }
        }
    }
    return filled;
}
The Polygon class simply uses Bresenham's line algorithm and has nothing to do with the problem.
The game class:
@Override
public void load() {
    obj = new EngineObject();
    obj.addComponent(new MeshRenderer(new FilledPolygon(new int[][] {
        {0, 0},
        {60, 0},
        {0, 60},
        {80, 50}
    })));
    ((MeshRenderer) (obj.getComponent(MeshRenderer.class))).color = CYAN;
    obj.transform.position.Y = 100;
}
The expected result is to get this shape filled in (it was created using the Polygon mesh):
The actual result of using the FilledPolygon mesh:
Your code seems to have several problems, but I will not focus on those.
Your approach, based on drawing the outline and then filling the "inside" runs, cannot work in the general case: the outlines join at the vertices and intersections, and the outside-edge-inside-edge-outside alternation breaks down in an unrecoverable way (you can't know which segments to fill by just looking at a row).
You'd better use a standard polygon filling algorithm. You will find many descriptions on the Web.
For a simple but somewhat inefficient solution, work as follows (a sketch in Java appears after this list):
process all lines between the minimum and maximum ordinates; let Y be the current ordinate;
loop over the edges;
assign every vertex a positive or negative sign depending on whether y ≥ Y or y < Y (mind the asymmetry!);
whenever the endpoints of an edge have different signs, compute the intersection between the edge and the line;
you will get an even number of intersections; sort them horizontally;
draw between every other pair of intersections.
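A minimal sketch of that procedure, assuming a pixel buffer laid out like the one in the question (the class and helper names are made up):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ScanlineFill {
    // Fill the polygon (xs[i], ys[i]) into a width*height ARGB pixel buffer.
    public static void fill(int[] pixels, int width, int height,
                            double[] xs, double[] ys, int color) {
        int n = xs.length;
        int yMin = Math.max(0, (int) Math.ceil(min(ys)));
        int yMax = Math.min(height - 1, (int) Math.floor(max(ys)));
        // Process all lines between the minimum and maximum ordinates.
        for (int y = yMin; y <= yMax; y++) {
            List<Double> crossings = new ArrayList<Double>();
            for (int i = 0; i < n; i++) { // loop over the edges
                double x0 = xs[i], y0 = ys[i];
                double x1 = xs[(i + 1) % n], y1 = ys[(i + 1) % n];
                // Different "signs" (y >= Y vs. y < Y): the edge crosses the line.
                // The asymmetry makes a vertex lying exactly on Y count only once.
                if ((y0 >= y) != (y1 >= y)) {
                    crossings.add(x0 + (y - y0) * (x1 - x0) / (y1 - y0));
                }
            }
            Collections.sort(crossings); // even number of intersections
            for (int i = 0; i + 1 < crossings.size(); i += 2) {
                int from = Math.max(0, (int) Math.ceil(crossings.get(i)));
                int to = Math.min(width - 1, (int) Math.floor(crossings.get(i + 1)));
                for (int x = from; x <= to; x++) {
                    pixels[x + y * width] = color; // draw between every other pair
                }
            }
        }
    }

    private static double min(double[] a) {
        double m = a[0];
        for (double v : a) m = Math.min(m, v);
        return m;
    }

    private static double max(double[] a) {
        double m = a[0];
        for (double v : a) m = Math.max(m, v);
        return m;
    }
}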
You can get a more efficient solution by keeping track of which edges cross the current line in a so-called "active list". Check the algorithms known as "scanline fill".
Note that you imply that pixels[] has the same width*height size as filled[]. Based on the mangled output, I would say that they are just not the same.
Otherwise, if you just want to fill a single span per scanline (assuming everything is convex), that code is overcomplicated: simply look for the two endpoints and loop between them:
public int[] draw() {
    int[] pixels = new Polygon(verts).draw();
    int[] filled = new int[width * height];

    for (int y = 0; y < height; y++) {
        // find the leftmost outline pixel on this row
        int left = -1;
        for (int x = 0; x < width; x++) {
            if (pixels[x + y * width] == 0xffffffff) {
                left = x;
                break;
            }
        }
        if (left >= 0) {
            // find the rightmost outline pixel on this row
            int right = left;
            for (int x = width - 1; x > left; x--) {
                if (pixels[x + y * width] == 0xffffffff) {
                    right = x;
                    break;
                }
            }
            // fill the whole span between them
            for (int x = left; x <= right; x++) {
                filled[x + y * width] = 0xffffffff;
            }
        }
    }
    return filled;
}
However, this kind of approach relies on having the entire polygon in the view, which may not always be the case in real life.
I'm currently working on a project with multiple objects (tangible objects).
I have a function called renderFrame(), executed each frame, that:
takes a pixel array as an argument
iterates through all objects and asks them if they should change a specific pixel
The problem is that every frame, a new pixel array must be generated and initialized with these values.
To do this, I simply use a loop :
int[] pixels = new int[4 * WIDTH * HEIGHT];
for (int i = 0; i < pixels.length - 3; i += 4) {
    pixels[i]     = 0;   // red
    pixels[i + 1] = 0;   // green
    pixels[i + 2] = 0;   // blue
    pixels[i + 3] = 255; // alpha
}
Each time I call renderFrame(), is it better (in terms of calculation speed) to create a new array and fill it with my for() loop, or to create a copy of a static pixel "model" array like the following one?
private static int[] pixelsD = new int[4 * WIDTH * HEIGHT];

public void initializeArray() {
    for (int i = 0; i < pixelsD.length - 3; i += 4) {
        pixelsD[i]     = 0;   // red
        pixelsD[i + 1] = 0;   // green
        pixelsD[i + 2] = 0;   // blue
        pixelsD[i + 3] = 255; // alpha
    }
}

public void renderFrame() {
    int[] pixels = Arrays.copyOf(pixelsD, 4 * WIDTH * HEIGHT);
}
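For what it's worth, here is a rough way to compare the two. This is only a sketch with made-up sizes; a trustworthy measurement would use a harness like JMH. Note that new int[n] is already zero-filled in Java, so in both variants only the alpha bytes actually need writing.

import java.util.Arrays;

public class PixelInitBench {
    static final int WIDTH = 1920, HEIGHT = 1080; // hypothetical frame size
    static final int[] TEMPLATE = new int[4 * WIDTH * HEIGHT];
    static {
        for (int i = 3; i < TEMPLATE.length; i += 4) TEMPLATE[i] = 255; // opaque alpha
    }

    // Variant 1: allocate (all zeros) and write only the alpha channel.
    static int[] byLoop() {
        int[] pixels = new int[4 * WIDTH * HEIGHT];
        for (int i = 3; i < pixels.length; i += 4) pixels[i] = 255;
        return pixels;
    }

    // Variant 2: bulk-copy a prebuilt template; copyOf compiles to an
    // intrinsic memcpy-like copy, typically at least as fast as a loop.
    static int[] byCopy() {
        return Arrays.copyOf(TEMPLATE, TEMPLATE.length);
    }

    public static void main(String[] args) {
        for (String name : new String[] {"loop", "copy"}) {
            long t0 = System.nanoTime();
            int[] last = null;
            for (int i = 0; i < 100; i++) {
                last = name.equals("loop") ? byLoop() : byCopy();
            }
            System.out.println(name + ": " + (System.nanoTime() - t0) / 1e6
                    + " ms, len=" + last.length);
        }
    }
}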
In my 2D game I'm using graphic tools to create nice, smooth terrain represented by black color:
A simple algorithm written in Java looks for black color every 15 pixels, creating the following set of lines (gray):
As you can see, some places are mapped very badly while others are pretty good. In other cases sampling every 15 pixels would not even be necessary, e.g. where the terrain is flat.
What's the best way to convert this curve to a set of points [lines], using as few points as possible?
Sampling every 15 pixels = 55 FPS; every 10 pixels = 40 FPS.
The following algorithm does that job, sampling from right to left and printing an array that can be pasted into code:
public void loadMapFile(String path) throws IOException {
    File mapFile = new File(path);
    image = ImageIO.read(mapFile);

    boolean black;
    System.out.print("{ ");
    int[] lastPoint = {0, 0};

    for (int x = image.getWidth() - 1; x >= 0; x -= 15) {
        for (int y = 0; y < image.getHeight(); y++) {
            black = image.getRGB(x, y) == -16777216; // 0xFF000000, opaque black
            if (black) {
                lastPoint[0] = x;
                lastPoint[1] = y;
                System.out.print("{" + x + ", " + y + "}, ");
                break;
            }
        }
    }
    System.out.println("}");
}
I'm developing on Android, using Java and AndEngine.
This problem is nearly identical to digitizing a signal (such as sound), where the basic law is that any frequency in the input that is too high for the sampling rate will not be reflected in the digitized output. So the concern is that if you check every 30 pixels and then test the middle, as bmorris591 suggests, you might miss a 7-pixel hole between the sampling points. This suggests that if there are 10-pixel features you cannot afford to miss, you need to scan every 5 pixels: your sample rate should be twice the highest frequency present in the signal.
One thing that can help improve your algorithm is a better y-dimension search. Currently you search for the intersection between sky and terrain linearly, but a binary search should be faster:
int y = image.getHeight() / 2; // Start searching from the middle of the image
int yIncr = y / 2;
while (yIncr > 0) {
    if (image.getRGB(x, y) == -16777216) {
        // We hit the terrain, go towards the sky
        y -= yIncr;
    } else {
        // We hit the sky, go towards the terrain
        y += yIncr;
    }
    yIncr = yIncr / 2;
}
// Make sure y is on the first terrain point: move y up or down a few pixels.
// Only one of the following two loops will execute, and only one or two iterations max.
while (image.getRGB(x, y) != -16777216) y++;
while (image.getRGB(x, y - 1) == -16777216) y--;
Other optimizations are possible. If you know that your terrain has no cliffs, then you only need to search the window from lastY - maxDropoff to lastY + maxDropoff (sketched below). Also, if your terrain can never be as tall as the entire bitmap, you don't need to search the top of the bitmap either. This should free up some CPU cycles for higher-resolution x-scanning of the terrain.
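A sketch of that windowed search, where lastY and maxDropoff are hypothetical names for the previous column's surface height and the largest allowed height change between columns:

// Only scan the vertical window around the previous column's surface.
int yStart = Math.max(0, lastY - maxDropoff);
int yEnd = Math.min(image.getHeight() - 1, lastY + maxDropoff);
for (int y = yStart; y <= yEnd; y++) {
    if (image.getRGB(x, y) == -16777216) { // first terrain pixel in this column
        lastY = y;
        break;
    }
}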
I propose to find the border points that lie on the boundary between white and dark pixels. After that, we can digitize those points. To do that, we define a DELTA which specifies which points to skip and which to add to the result list.
DELTA = 3, Number of points = 223
DELTA = 5, Number of points = 136
DELTA = 10, Number of points = 70
Below I have put the source code, which displays the image and looks for the border points. I hope you will be able to read it and find a way to solve your problem.
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class Program {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("/home/michal/Desktop/FkXG1.png"));
        PathFinder pathFinder = new PathFinder(10);
        List<Point> borderPoints = pathFinder.findBorderPoints(image);

        System.out.println(Arrays.toString(borderPoints.toArray()));
        System.out.println(borderPoints.size());

        JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(new ImageBorderPanel(image, borderPoints));
        frame.pack();
        frame.setMinimumSize(new Dimension(image.getWidth(), image.getHeight()));
        frame.setVisible(true);
    }
}

class PathFinder {
    private int maxDelta = 3;

    public PathFinder(int delta) {
        this.maxDelta = delta;
    }

    public List<Point> findBorderPoints(BufferedImage image) {
        int width = image.getWidth();
        int[][] imageInBytes = convertTo2DWithoutUsingGetRGB(image);
        int[] borderPoints = findBorderPoints(width, imageInBytes);
        List<Integer> indexes = dwindlePoints(width, borderPoints);

        List<Point> points = new ArrayList<Point>(indexes.size());
        for (Integer index : indexes) {
            points.add(new Point(index, borderPoints[index]));
        }
        return points;
    }

    private List<Integer> dwindlePoints(int width, int[] borderPoints) {
        List<Integer> indexes = new ArrayList<Integer>(width);
        indexes.add(0); // keep the first column's x index (not its y-value)

        int delta = 0;
        for (int index = 1; index < width; index++) {
            delta += Math.abs(borderPoints[index - 1] - borderPoints[index]);
            if (delta >= maxDelta) {
                indexes.add(index);
                delta = 0;
            }
        }
        return indexes;
    }

    private int[] findBorderPoints(int width, int[][] imageInBytes) {
        int[] borderPoints = new int[width];
        int black = Color.BLACK.getRGB();

        for (int y = 0; y < imageInBytes.length; y++) {
            int maxX = imageInBytes[y].length;
            for (int x = 0; x < maxX; x++) {
                int color = imageInBytes[y][x];
                if (color == black && borderPoints[x] == 0) {
                    borderPoints[x] = y; // first black pixel from the top in this column
                }
            }
        }
        return borderPoints;
    }

    private int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;

        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }
        return result;
    }
}

class ImageBorderPanel extends JPanel {
    private static final long serialVersionUID = 1L;
    private BufferedImage image;
    private List<Point> borderPoints;

    public ImageBorderPanel(BufferedImage image, List<Point> borderPoints) {
        this.image = image;
        this.borderPoints = borderPoints;
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(image, 0, 0, null);

        Graphics2D graphics2d = (Graphics2D) g;
        g.setColor(Color.YELLOW);
        for (Point point : borderPoints) {
            graphics2d.fillRect(point.x, point.y, 3, 3);
        }
    }
}
In my source code I have used example from this question:
Java - get pixel array from image
The most efficient solution (in terms of the number of points required) would be to allow variable spacing between points along the X axis. This way, a large flat part requires very few points/samples, while complex terrain uses more.
In 3D mesh processing, there is a nice mesh simplification algorithm named "quadric edge collapse", which you can adapt to your problem.
Here is the idea, translated to your problem - it actually gets much simpler than the original 3D algorithm (a sketch in Java follows the list):
Represent your curve with way too many points.
For each point, measure the error (i.e. difference to the smooth terrain) if you remove it.
Remove the point that gives the smallest error.
Repeat until you have reduced the number of points far enough or errors get too large.
To be more precise regarding step 2: given points P, Q, R, the error of Q is the difference between the approximation of your terrain by the two straight lines P->Q and Q->R and its approximation by the single line P->R.
Note that when a point is removed, only its neighbors need their error values updated.
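A minimal greedy sketch of that loop in Java; it uses the perpendicular distance from Q to the line P->R as a stand-in for the error measure described above, and the class and method names are made up:

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class CurveSimplifier {
    // Repeatedly remove the interior point whose removal changes the curve
    // the least, until only maxPoints remain.
    public static List<Point> simplify(List<Point> curve, int maxPoints) {
        List<Point> pts = new ArrayList<>(curve);
        while (pts.size() > Math.max(2, maxPoints)) {
            int cheapest = -1;
            double smallest = Double.MAX_VALUE;
            // This sketch recomputes every error each round for simplicity;
            // a production version would update only the removed point's neighbors.
            for (int i = 1; i < pts.size() - 1; i++) {
                double e = error(pts.get(i - 1), pts.get(i), pts.get(i + 1));
                if (e < smallest) {
                    smallest = e;
                    cheapest = i;
                }
            }
            pts.remove(cheapest);
        }
        return pts;
    }

    // Perpendicular distance from q to the line through p and r.
    private static double error(Point p, Point q, Point r) {
        double dx = r.x - p.x, dy = r.y - p.y;
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(q.x - p.x, q.y - p.y);
        return Math.abs((q.x - p.x) * dy - (q.y - p.y) * dx) / len;
    }
}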
I've been having trouble with an image interpolation method in Processing. This is the code I've come up with; I'm aware that it will throw an out-of-bounds exception, since the outer loops go further than the original image, but how can I fix that?
PImage nearestneighbor(PImage o, float sf) {
    PImage out = createImage((int)(sf * o.width), (int)(sf * o.height), RGB);

    o.loadPixels();
    out.loadPixels();

    for (int i = 0; i < sf * o.height; i++) {
        for (int j = 0; j < sf * o.width; j++) {
            int y = round((o.width * i) / sf);
            int x = round(j / sf);
            out.pixels[(int)((sf * o.width * i) + j)] = o.pixels[(y + x)];
        }
    }
    out.updatePixels();
    return out;
}
My idea was to divide both components that represent a point in the scaled image by the scale factor and round the result in order to obtain the nearest neighbor.
To get rid of the IndexOutOfBoundsException, try caching the results of (int)(sf*o.width) and (int)(sf*o.height).
Additionally you might want to make sure that x and y don't leave the bounds, e.g. by using Math.min(...) and Math.max(...).
Finally, it should be int y = round(i / sf) * o.width; since you want to get the pixel in the original scale and then multiply by the original width. Example: assume a 100x100 image and a scaling factor of 1.2. The scaled height would be 120, and thus the highest value for i would be 119. Now, round((119 * 100) / 1.2) yields round(9916.66) = 9917. On the other hand, round(119 / 1.2) * 100 yields round(99.16) * 100 = 9900 - a 17-pixel difference.
Btw, the variable name y might be misleading here, since it's not the y coordinate but the index of the pixel at the coordinates (0,y), i.e. the first pixel at height y.
Thus your code might look like this:
int scaledWidth = (int)(sf * o.width);
int scaledHeight = (int)(sf * o.height);

PImage out = createImage(scaledWidth, scaledHeight, RGB);

o.loadPixels();
out.loadPixels();

for (int i = 0; i < scaledHeight; i++) {
    for (int j = 0; j < scaledWidth; j++) {
        // clamp to the last valid row/column; o.height and o.width themselves
        // would already be out of bounds
        int y = Math.min(round(i / sf), o.height - 1) * o.width;
        int x = Math.min(round(j / sf), o.width - 1);
        out.pixels[(scaledWidth * i) + j] = o.pixels[y + x];
    }
}
out.updatePixels();
I am developing a game in Java just for fun. It is a ball-based brick-breaking game of sorts.
Here is a level: when the ball hits one of the orange bricks, I would like to create a chain reaction that explodes all other bricks that are NOT gray (unbreakable) and are within reach of the brick being exploded.
So it would clear out everything in this level except the gray bricks.
I am thinking I should ask the brick being exploded for the bricks to its LEFT, RIGHT, UP, and DOWN, and then start the same process on those cells.
//NOTE TO SELF: read up on Enums and List
When an explosive cell is hit by the ball, it calls explodeMyAdjecentCells():
// This is in the Cell class
public void explodeMyAdjecentCells() {
    exploded = true;
    ballGame.breakCell(x, y, imageURL[thickness - 1][0]);

    cellBlocks.explodeCell(getX() - getWidth(), getY());
    cellBlocks.explodeCell(getX() + getWidth(), getY());
    cellBlocks.explodeCell(getX(), getY() - getHeight());
    cellBlocks.explodeCell(getX(), getY() + getHeight());

    remove();
    ballGame.playSound("src\\ballgame\\Sound\\cellBrakes.wav", 100.0f, 0.0f, false, 0.0d);
}
// This is the CellHandler->(CellBlocks)
public void explodeCell(int _X, int _Y) {
    for (int c = 0; c < cells.length; c++) {
        if (cells[c] != null && !cells[c].hasExploded()) {
            if (cells[c].getX() == _X && cells[c].getY() == _Y) {
                int type = cells[c].getThickness();
                if (type != 7 && type != 6 && type != 2) {
                    cells[c].explodeMyAdjecentCells();
                }
            }
        }
    }
}
It successfully removes all my adjacent cells. But in the explodeMyAdjecentCells() method, I have this line of code:
ballGame.breakCell(x, y, imageURL[thickness - 1][0]);
This line tells the ParticleHandler to create 25 small images (particles) of the exploded cell. Though all my cells are removed, the ParticleHandler does not create particles for all the removed cells.
The problem was solved just now, and it was really stupid. I had set the ParticleHandler to create at most 1500 particles. My god, how did I not see that! The fix was simply:
private int particleCellsMax = 1500; // old value
private int particleCellsMax = 2500; // new value
Thanks for all the help, people. I will upload the source for creating the particles just for fun, in case anyone needs it.
The source code for splitting an image into parts was taken from:
Kalani's Tech Blog
// Particle Handler
public void breakCell(int _X, int _Y, String URL) {
    File file = new File(URL);
    try {
        FileInputStream fis = new FileInputStream(file);
        BufferedImage image = ImageIO.read(fis);

        int rows = 5;
        int columns = 5;
        int parts = rows * columns;
        int partWidth = image.getWidth() / columns;
        int partHeight = image.getHeight() / rows;

        // split the cell image into rows x columns sub-images
        int count = 0;
        BufferedImage imgs[] = new BufferedImage[parts];
        for (int x = 0; x < columns; x++) {
            for (int y = 0; y < rows; y++) {
                imgs[count] = new BufferedImage(partWidth, partHeight, image.getType());
                Graphics2D g = imgs[count++].createGraphics();
                g.drawImage(image, 0, 0, partWidth, partHeight,
                        partWidth * y, partHeight * x,
                        partWidth * y + partWidth, partHeight * x + partHeight, null);
                g.dispose();
            }
        }

        // spawn one particle per sub-image at its position within the cell
        int numParts = imgs.length;
        int c = 0;
        for (int iy = 0; iy < rows; iy++) {
            for (int ix = 0; ix < columns; ix++) {
                if (c < numParts) {
                    Image imagePart = Toolkit.getDefaultToolkit().createImage(imgs[c].getSource());
                    createCellPart(_X + ((image.getWidth() / columns) * ix),
                            _Y + ((image.getHeight() / rows) * iy), c, imagePart);
                    c++;
                } else {
                    break;
                }
            }
        }
    } catch (IOException io) {
    }
}
You could consider looking at this in a more OO way, using 'tell, don't ask'. You would have a Brick class which knows its own colour and its adjacent bricks. You would tell the first brick to explode; if it is orange (consider using an enum for the colour rather than bare numbers), it tells its adjacent bricks to 'chain react', and each of those decides what to do: explode and propagate in the case of an orange brick, or do nothing in the case of a grey brick.
I know it's quite different from what you're doing currently, but it should hopefully give you a better-structured program. A sketch of the idea follows.
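Here is a minimal sketch of that structure (all names are made up, and a real explode() would also hook into your board, particles, and sound):

import java.util.ArrayList;
import java.util.List;

public class Brick {
    public enum BrickColor { GRAY, ORANGE }

    private final BrickColor color;
    private final List<Brick> adjacent = new ArrayList<>();
    private boolean exploded;

    public Brick(BrickColor color) { this.color = color; }

    public void addAdjacent(Brick other) { adjacent.add(other); }

    // The ball only tells the brick it was hit; the brick decides the rest.
    public void explode() {
        if (exploded || color == BrickColor.GRAY) return; // grey = unbreakable
        exploded = true;
        // ... remove from the board, spawn particles, play a sound ...
        for (Brick b : adjacent) {
            b.explode(); // chain reaction; the 'exploded' flag stops cycles
        }
    }
}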
I would imagine a method that recursively gets all touching cells of a similar color.
Then you can operate on that list (of all touching cells) pretty easily and break all the ones that haven't been broken yet.
Also note that your getAdjentCell() method has side effects (it does the breaking), which isn't very intuitive given the name.
// I agree with Matt that color (or type) should probably be an enum,
// or at least a class. int isn't very descriptive
public enum CellType { GRAY, RED, ORANGE }

public class Cell {
    ....
    public final CellType type;

    /**
     * Recursively find all adjacent cells that have the same type as this one.
     */
    public List<Cell> getTouchingSimilarCells() {
        return getTouchingSimilarCells(new HashSet<Cell>());
    }

    // The visited set stops the recursion from bouncing between two
    // neighbouring cells forever.
    private List<Cell> getTouchingSimilarCells(Set<Cell> visited) {
        List<Cell> result = new ArrayList<Cell>();
        if (!visited.add(this)) {
            return result; // already collected
        }
        result.add(this);
        for (Cell c : getAdjecentCells()) {
            if (c != null && c.type == this.type) {
                result.addAll(c.getTouchingSimilarCells(visited));
            }
        }
        return result;
    }

    /**
     * Get the 4 adjacent cells (above, below, left and right).<br/>
     * NOTE: a cell may be null in the list if it does not exist.
     */
    public List<Cell> getAdjecentCells() {
        List<Cell> result = new ArrayList<Cell>();
        result.add(cellBlock(this.getX() + 1, this.getY()));
        result.add(cellBlock(this.getX() - 1, this.getY()));
        result.add(cellBlock(this.getX(), this.getY() + 1));
        result.add(cellBlock(this.getX(), this.getY() - 1));
        return result;
    }
}
}