Terrain curve to array of points - java

In my 2D game I'm using graphics tools to create nice, smooth terrain, represented by the black color:
A simple algorithm written in Java looks for the black color every 15 pixels, creating the following set of lines (gray):
As you can see, some places are mapped very badly while others are pretty good. In some cases sampling every 15 pixels would not even be necessary, e.g. where the terrain is flat.
What's the best way to convert this curve to a set of points [lines], using as few points as possible?
Sampling every 15 pixels gives 55 FPS; every 10 pixels, 40 FPS.
The following algorithm does that job, sampling from right to left and printing an array that can be pasted into code:
public void loadMapFile(String path) throws IOException {
    File mapFile = new File(path);
    image = ImageIO.read(mapFile);
    System.out.print("{ ");
    int[] lastPoint = {0, 0};
    for (int x = image.getWidth() - 1; x >= 0; x -= 15) {
        for (int y = 0; y < image.getHeight(); y++) {
            boolean black = image.getRGB(x, y) == -16777216;
            if (black) {
                lastPoint[0] = x;
                lastPoint[1] = y;
                System.out.print("{" + x + ", " + y + "}, ");
                break;
            }
        }
    }
    System.out.println("}");
}
I'm developing on Android, using Java and AndEngine.

This problem is nearly identical to the digitization of a signal (such as sound), where the basic law is that any frequency in the input that is too high for the sampling rate will not be reflected in the digitized output. So the concern is that if you check every 30 pixels and then test the middle, as bmorris591 suggests, you might miss a 7-pixel hole between the sampling points. This suggests that if there are 10-pixel features you cannot afford to miss, you need to scan every 5 pixels: your sample rate should be twice the highest frequency present in the signal.
One thing that can help improve your algorithm is a better y-dimension search. Currently you search for the intersection between sky and terrain linearly, but a binary search should be faster:
int y = image.getHeight() / 2; // Start searching from the middle of the image
int yIncr = y / 2;
while (yIncr > 0) {
    if (image.getRGB(x, y) == -16777216) {
        // We hit the terrain, go towards the sky
        y -= yIncr;
    } else {
        // We hit the sky, go towards the terrain
        y += yIncr;
    }
    yIncr = yIncr / 2;
}
// Make sure y is on the first terrain point: move y up or down a few pixels.
// Only one of the following two loops will execute, and only for one or two iterations at most.
while (image.getRGB(x, y) != -16777216) y++;
while (image.getRGB(x, y - 1) == -16777216) y--;
Other optimizations are possible. If you know that your terrain has no cliffs, then you only need to search the window from lastY+maxDropoff to lastY-maxDropoff. Also, if your terrain can never be as tall as the entire bitmap, you don't need to search the top of the bitmap either. This should help to free some CPU cycles you can use for higher-resolution x-scanning of the terrain.
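The windowed search described above can be sketched like this (a minimal sketch using a plain int array in place of the bitmap; `findSurface`, `lastY` and `maxDropoff` are illustrative names, not AndEngine or AWT API):

```java
public class WindowedScan {
    // Same constant as -16777216 in the question's code.
    static final int BLACK = 0xFF000000;

    // Scan one column for the terrain surface, but only inside the band
    // [lastY - maxDropoff, lastY + maxDropoff], assuming no cliffs taller
    // than maxDropoff between adjacent x samples. columnPixels[y] stands in
    // for image.getRGB(x, y).
    static int findSurface(int[] columnPixels, int lastY, int maxDropoff) {
        int from = Math.max(0, lastY - maxDropoff);
        int to = Math.min(columnPixels.length - 1, lastY + maxDropoff);
        for (int y = from; y <= to; y++) {
            if (columnPixels[y] == BLACK) return y;
        }
        return -1; // surface left the window: fall back to a full scan
    }
}
```

The fallback matters: if the terrain does drop out of the window (or the column is all sky), the caller must rescan the full column rather than trust the `-1`.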

I propose to find the border points that lie on the boundary between white and dark pixels. After that we can decimate those points. To do that, we define a DELTA that specifies which points to skip and which to add to the result list.
DELTA = 3, Number of points = 223
DELTA = 5, Number of points = 136
DELTA = 10, Number of points = 70
Below I have put the source code, which draws the image and looks for the border points. I hope you will be able to read it and find a way to solve your problem.
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class Program {

    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("/home/michal/Desktop/FkXG1.png"));
        PathFinder pathFinder = new PathFinder(10);
        List<Point> borderPoints = pathFinder.findBorderPoints(image);
        System.out.println(Arrays.toString(borderPoints.toArray()));
        System.out.println(borderPoints.size());

        JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(new ImageBorderPanel(image, borderPoints));
        frame.pack();
        frame.setMinimumSize(new Dimension(image.getWidth(), image.getHeight()));
        frame.setVisible(true);
    }
}
class PathFinder {

    private int maxDelta = 3;

    public PathFinder(int delta) {
        this.maxDelta = delta;
    }

    public List<Point> findBorderPoints(BufferedImage image) {
        int width = image.getWidth();
        int[][] imageInBytes = convertTo2DWithoutUsingGetRGB(image);
        int[] borderPoints = findBorderPoints(width, imageInBytes);
        List<Integer> indexes = dwindlePoints(width, borderPoints);
        List<Point> points = new ArrayList<Point>(indexes.size());
        for (Integer index : indexes) {
            points.add(new Point(index, borderPoints[index]));
        }
        return points;
    }

    private List<Integer> dwindlePoints(int width, int[] borderPoints) {
        List<Integer> indexes = new ArrayList<Integer>(width);
        indexes.add(0); // always keep the first column's point
        int delta = 0;
        for (int index = 1; index < width; index++) {
            delta += Math.abs(borderPoints[index - 1] - borderPoints[index]);
            if (delta >= maxDelta) {
                indexes.add(index);
                delta = 0;
            }
        }
        return indexes;
    }
    private int[] findBorderPoints(int width, int[][] imageInBytes) {
        int[] borderPoints = new int[width];
        int black = Color.BLACK.getRGB();
        for (int y = 0; y < imageInBytes.length; y++) {
            int maxX = imageInBytes[y].length;
            for (int x = 0; x < maxX; x++) {
                int color = imageInBytes[y][x];
                if (color == black && borderPoints[x] == 0) {
                    borderPoints[x] = y;
                }
            }
        }
        return borderPoints;
    }
    private int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;

        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }
        return result;
    }
}
class ImageBorderPanel extends JPanel {

    private static final long serialVersionUID = 1L;
    private BufferedImage image;
    private List<Point> borderPoints;

    public ImageBorderPanel(BufferedImage image, List<Point> borderPoints) {
        this.image = image;
        this.borderPoints = borderPoints;
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(image, 0, 0, null);
        Graphics2D graphics2d = (Graphics2D) g;
        g.setColor(Color.YELLOW);
        for (Point point : borderPoints) {
            graphics2d.fillRect(point.x, point.y, 3, 3);
        }
    }
}
In my source code I have used example from this question:
Java - get pixel array from image

The most efficient solution (with respect to points required) would be to allow for variable spacing between points along the X axis. This way, a large flat part would require very few points/samples and complex terrains would use more.
In 3D mesh processing, there is a nice mesh simplification algorithm named "quadric edge collapse", which you can adapt to your problem.
Here is the idea, translated to your problem - it actually gets much simpler than the original 3D algorithm:
Represent your curve with way too many points.
For each point, measure the error (i.e. difference to the smooth terrain) if you remove it.
Remove the point that gives the smallest error.
Repeat until you have reduced the number of points far enough or errors get too large.
To be more precise regarding step 2: Given points P, Q, R, the error of Q is the difference between the approximation of your terrain by two straight lines, P->Q and Q->R, and the approximation of your terrain by just one line P->R.
Note that when a point is removed only its neighbors need an update of their error value.
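A minimal sketch of steps 2 and 3, using perpendicular distance from Q to the line through P and R as the error measure (`CurveSimplifier` and its method names are made up for illustration; a production version would update only the neighbors' errors after each removal, as noted above):

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

public class CurveSimplifier {

    // Perpendicular distance from q to the line through p and r:
    // the error introduced by replacing P->Q->R with P->R (step 2).
    static double error(Point p, Point q, Point r) {
        double dx = r.x - p.x, dy = r.y - p.y;
        double len = Math.hypot(dx, dy);
        if (len == 0) return Math.hypot(q.x - p.x, q.y - p.y);
        return Math.abs(dy * q.x - dx * q.y
                + (double) r.x * p.y - (double) r.y * p.x) / len;
    }

    // Repeatedly remove the interior point whose removal costs least,
    // stopping once the cheapest removal exceeds maxError (steps 3-4).
    static List<Point> simplify(List<Point> points, double maxError) {
        List<Point> pts = new ArrayList<>(points);
        while (pts.size() > 2) {
            int best = -1;
            double bestErr = Double.MAX_VALUE;
            for (int i = 1; i < pts.size() - 1; i++) {
                double e = error(pts.get(i - 1), pts.get(i), pts.get(i + 1));
                if (e < bestErr) { bestErr = e; best = i; }
            }
            if (bestErr > maxError) break;
            pts.remove(best);
        }
        return pts;
    }
}
```

Flat stretches (collinear points) have zero removal error, so they collapse first, which is exactly the behavior the question asks for.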


Java Draw on a image with jframe

So I have looked at these questions:
Change the color of each pixel in an image java
How to convert a Drawable to a Bitmap?
How to open an image to draw on it
So I have a program that takes screenshots in a while loop and then checks the color of the image per pixel, which works fine, but the problem is that it detects all the color in the image. I used this answer: Java colour detection
But when taking this image, for example,
I only want to see the green square, not the loose pixels, and then draw something over it so I can see whether my program understands that it only needs to target that square.
So what would be the best way to approach this problem?
Pseudocode:
Step 1: Detect the color
Step 2: Save the position of the pixel
Step 3: Whenever the same pixel is found, look at the distance between them
Step 4: If the distance is below a certain amount, look at the third upcoming pixel location
Step 5: If step 4 fails, look at the last found pixel and repeat step 2
Step 6: If it's a row of multiple pixels, save the locations and start looking at the same X but a different Y to verify it (but it should also work for circles)
Is this the best way to address this problem? Or is there an external Java library that can detect this for you?
Update:
So after doing some programming I came up with this. It is probably not the best way to do it in any way, but it seems to find the yellow square:
package src;

import java.awt.Robot;
import java.awt.image.BufferedImage;
import java.io.File;
import java.util.*;
import java.awt.*;
import javax.swing.*;
import java.util.List;
import java.util.Arrays;

public class ColorDectection {

    private static ArrayList<ArrayList<Integer>> yellowList = new ArrayList<ArrayList<Integer>>();
    private static ArrayList<Integer> tmpYellowList = new ArrayList<Integer>();
    private static int tmpYellowPixelX = -1;

    /*
     * @param img Image
     * @param w   int
     * @param h   int
     * Convert the normal image to a BufferedImage.
     * Width and height equal the screen size because the image is always full screen.
     * Then loop over all the pixels.
     */
    public void detectColorInImage(Image img, int w, int h) {
        BufferedImage conImg = convertImage(img);
        for (int i = 0; i < w; i++)
            for (int j = 0; j < h; j++)
                selectColor(conImg, i, j);
        handleYellowList();
    }

    /*
     * @param img    BufferedImage
     * @param pixelX int
     * @param pixelY int
     * Get the int RGB of the pixel at the location,
     * convert the RGB to HSB,
     * and if a color is found, give the pixel X and Y to the corresponding function.
     */
    private void selectColor(BufferedImage img, int pixelX, int pixelY) {
        int rgb = img.getRGB(pixelX, pixelY);
        float hsb[] = new float[3];
        int r = (rgb >> 16) & 0xFF;
        int g = (rgb >> 8) & 0xFF;
        int b = rgb & 0xFF;
        Color.RGBtoHSB(r, g, b, hsb);
        if (hsb[1] < 0.1 && hsb[2] > 0.9) whiteFound();
        else if (hsb[2] < 0.1) blackFound();
        else {
            float deg = hsb[0] * 360;
            if (deg >= 30 && deg < 90) yellowFound(pixelX, pixelY);
            else if (deg >= 90 && deg < 150) greenFound();
            else if (deg >= 150 && deg < 210) cyanFound();
            else if (deg >= 210 && deg < 270) blueFound();
            else if (deg >= 270 && deg < 330) magentaFound();
            else redFound();
        }
    }

    private void handleYellowList() {
        // System.out.println(yellowList);
        // Step 1: Check if there is an item in the array
        // Step 2: Save the position of the pixel
        // Step 3: Whenever the same pixel is found, look at the distance between them
        // Step 4: If the distance is below a certain amount, look at the third upcoming pixel location
        // Step 5: If step 4 fails, look at the last found pixel and repeat step 2
        // Step 6: If it's a row of multiple pixels, save the locations and start looking at the same X but a different Y to verify it (but it should also work for circles)
    }

    private void blackFound() {
        //
    }

    private void whiteFound() {
        //
    }

    /*
     * @param pixelX int
     * @param pixelY int
     * If this is NOT the first time in the loop && pixelX and the tmp pixel are not the same,
     * then set tmpYellowPixelX to pixelX:
     * 1. add pixelY to the list
     * 2. if the temp list is smaller than the threshold, don't add it to the main list, because it is probably a lone pixel
     * 3. add the new pixelY
     * 4. create a new ArrayList, because tmpYellowPixelX and pixelX were not the same, so we must be on a new row
     * Else if this is the first iteration:
     * 1. set tmpYellowPixelX to pixelX
     * 2. add pixelY to tmpYellowList
     * Else just add pixelY to tmpYellowList.
     */
    private void yellowFound(int pixelX, int pixelY) {
        if (tmpYellowPixelX != pixelX && tmpYellowPixelX != -1) {
            tmpYellowPixelX = pixelX;
            tmpYellowList.add(pixelY);
            if (tmpYellowList.size() > 50) {
                yellowList.add(tmpYellowList);
                System.out.println(pixelX);
                System.out.println(tmpYellowList.size());
            }
            tmpYellowList = new ArrayList<Integer>();
        } else if (tmpYellowPixelX != pixelX && tmpYellowPixelX == -1) {
            tmpYellowPixelX = pixelX;
            tmpYellowList.add(pixelY);
        } else {
            tmpYellowList.add(pixelY);
        }
    }

    private void greenFound() {
        //
    }

    private void cyanFound() {
        //
    }

    private void blueFound() {
        //
    }

    private void magentaFound() {
        //
    }

    private void redFound() {
        //
    }

    private static BufferedImage convertImage(Image img) {
        if (img instanceof BufferedImage) return (BufferedImage) img;
        BufferedImage bimage = new BufferedImage(img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_ARGB);
        Graphics2D bGr = bimage.createGraphics();
        bGr.drawImage(img, 0, 0, null);
        bGr.dispose();
        return bimage;
    }
}
I still need to make the yellowFound function modular so I can use it for all the colors, plus do more proper testing.

Implementing a License plate detection algorithm

To improve my knowledge of imaging and get some experience working with the topics, I decided to create a license plate recognition algorithm on the Android platform.
The first step is detection, for which I decided to implement a recent paper titled "A Robust and Efficient Approach to License Plate Detection". The paper presents its idea very well and uses quite simple techniques to achieve detection. Despite some details lacking in the paper, I implemented the bilinear downsampling, the conversion to grayscale, and the edging + adaptive thresholding described in Sections 3A, 3B.1, and 3B.2.
Unfortunately, I am not getting the output this paper presents in e.g. figure 3 and 6.
The image I use for testing is as follows:
The grayscale (and downsampled) version looks fine (see the bottom of this post for the actual implementation); I used a well-known combination of the RGB components to produce it (the paper does not mention how, so I took a guess).
Next is the initial edge detection using the outlined Sobel filter. This produces an image similar to the ones presented in figure 6 of the paper.
And finally, to remove the "weak edges", they apply adaptive thresholding using a 20x20 window. Here is where things go wrong.
As you can see, it does not function properly, even though I am using their stated parameter values. Additionally, I have tried:
Changing the beta parameter.
Using a 2D int array instead of Bitmap objects to simplify creating the integral image.
Trying a higher Gamma parameter so the initial edge detection allows more "edges".
Changing the window to e.g. 10x10.
Yet none of the changes made an improvement; it keeps producing images like the one above. My question is: what am I doing differently from what is outlined in the paper, and how can I get the desired output?
Code
The (cleaned) code I use:
public int[][] toGrayscale(Bitmap bmpOriginal) {
    int width = bmpOriginal.getWidth();
    int height = bmpOriginal.getHeight();
    int[][] greys = new int[width][height];
    // scan through all pixels
    for (int x = 0; x < width; ++x) {
        for (int y = 0; y < height; ++y) {
            // get pixel color
            int pixel = bmpOriginal.getPixel(x, y);
            int R = Color.red(pixel);
            int G = Color.green(pixel);
            int B = Color.blue(pixel);
            int gray = (int) (0.2989 * R + 0.5870 * G + 0.1140 * B);
            greys[x][y] = gray;
        }
    }
    return greys;
}
The code for edge detection:
private int[][] detectEdges(int[][] detectionBitmap) {
    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;
    int[][] edges = new int[width][height];
    // Loop over all pixels in the bitmap
    int c1 = 0;
    int c2 = 0;
    for (int y = 0; y < height; y++) {
        for (int x = 2; x < width - 2; x++) {
            // Calculate d0 for each pixel
            int p0 = detectionBitmap[x][y];
            int p1 = detectionBitmap[x - 1][y];
            int p2 = detectionBitmap[x + 1][y];
            int p3 = detectionBitmap[x - 2][y];
            int p4 = detectionBitmap[x + 2][y];
            int d0 = Math.abs(p1 + p2 - 2 * p0) + Math.abs(p3 + p4 - 2 * p0);
            if (d0 >= Gamma) {
                c1++;
                edges[x][y] = Gamma;
            } else {
                c2++;
                edges[x][y] = d0;
            }
        }
    }
    return edges;
}
The code for adaptive thresholding (the SAT implementation is taken from here):
private int[][] AdaptiveThreshold(int[][] detectionBitmap) {
    // Create the integral image
    processSummedAreaTable(detectionBitmap);
    int width = detectionBitmap.length;
    int height = detectionBitmap[0].length;
    int[][] binaryImage = new int[width][height];
    int white = 0;
    int black = 0;
    int h_w = 20; // The window size
    int half = h_w / 2;
    // Loop over all pixels in the bitmap
    for (int y = half; y < height - half; y++) {
        for (int x = half; x < width - half; x++) {
            // Calculate d0 for each pixel
            int sum = 0;
            for (int k = -half; k < half - 1; k++) {
                for (int j = -half; j < half - 1; j++) {
                    sum += detectionBitmap[x + k][y + j];
                }
            }
            if (detectionBitmap[x][y] >= (sum / (h_w * h_w)) * Beta) {
                binaryImage[x][y] = 255;
                white++;
            } else {
                binaryImage[x][y] = 0;
                black++;
            }
        }
    }
    return binaryImage;
}
/**
 * Process given matrix into its summed area table (in-place).
 * O(MN) time, O(1) space
 * @param matrix source matrix
 */
private void processSummedAreaTable(int[][] matrix) {
    int rowSize = matrix.length;
    int colSize = matrix[0].length;
    for (int i = 0; i < rowSize; i++) {
        for (int j = 0; j < colSize; j++) {
            matrix[i][j] = getVal(i, j, matrix);
        }
    }
}
/**
 * Helper method for processSummedAreaTable.
 * @param row current row number
 * @param col current column number
 * @param matrix source matrix
 * @return sub-matrix sum
 */
private int getVal(int row, int col, int[][] matrix) {
    int leftSum;    // sub-matrix sum of left matrix
    int topSum;     // sub-matrix sum of top matrix
    int topLeftSum; // sub-matrix sum of top-left matrix
    int curr = matrix[row][col]; // current cell value
    /* top-left value is itself */
    if (row == 0 && col == 0) {
        return curr;
    }
    /* top row */
    else if (row == 0) {
        leftSum = matrix[row][col - 1];
        return curr + leftSum;
    }
    /* left-most column */
    if (col == 0) {
        topSum = matrix[row - 1][col];
        return curr + topSum;
    }
    else {
        leftSum = matrix[row][col - 1];
        topSum = matrix[row - 1][col];
        topLeftSum = matrix[row - 1][col - 1]; // overlap between leftSum and topSum
        return curr + leftSum + topSum - topLeftSum;
    }
}
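For reference, the point of a summed-area table is that, once it has been built from the original image, the sum over any window is four lookups rather than an inner double loop. A hedged sketch (`windowSum` is an illustrative helper, not from the paper; `sat` must be the table computed from the untransformed pixel values):

```java
public class SatWindowSum {
    // Sum of the inclusive rectangle [r0..r1] x [c0..c1], read from a
    // summed-area table in O(1). sat[i][j] must hold the sum of all
    // pixels at or above-and-left of (i, j) in the ORIGINAL image.
    static int windowSum(int[][] sat, int r0, int c0, int r1, int c1) {
        int total = sat[r1][c1];
        if (r0 > 0) total -= sat[r0 - 1][c1];           // strip above
        if (c0 > 0) total -= sat[r1][c0 - 1];           // strip to the left
        if (r0 > 0 && c0 > 0) total += sat[r0 - 1][c0 - 1]; // re-add overlap
        return total;
    }
}
```

Note this only works against a table built from the original values; summing over a matrix that has already been converted in-place to its SAT counts every pixel many times.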
Marvin provides an approach to find text regions. Perhaps it can be a starting point for you:
Find Text Regions in Images:
http://marvinproject.sourceforge.net/en/examples/findTextRegions.html
This approach was also used in this question:
How do I separates text region from image in java
Using your image I got this output:
Source Code:
package textRegions;

import static marvin.MarvinPluginCollection.findTextRegions;

import java.awt.Color;
import java.util.List;

import marvin.image.MarvinImage;
import marvin.image.MarvinSegment;
import marvin.io.MarvinImageIO;

public class FindVehiclePlate {

    public FindVehiclePlate() {
        MarvinImage image = MarvinImageIO.loadImage("./res/vehicle.jpg");
        image = findText(image, 30, 20, 100, 170);
        MarvinImageIO.saveImage(image, "./res/vehicle_out.png");
    }

    public MarvinImage findText(MarvinImage image, int maxWhiteSpace, int maxFontLineWidth, int minTextWidth, int grayScaleThreshold) {
        List<MarvinSegment> segments = findTextRegions(image, maxWhiteSpace, maxFontLineWidth, minTextWidth, grayScaleThreshold);
        for (MarvinSegment s : segments) {
            if (s.height >= 10) {
                s.y1 -= 20;
                s.y2 += 20;
                image.drawRect(s.x1, s.y1, s.x2 - s.x1, s.y2 - s.y1, Color.red);
                image.drawRect(s.x1 + 1, s.y1 + 1, (s.x2 - s.x1) - 2, (s.y2 - s.y1) - 2, Color.red);
                image.drawRect(s.x1 + 2, s.y1 + 2, (s.x2 - s.x1) - 4, (s.y2 - s.y1) - 4, Color.red);
            }
        }
        return image;
    }

    public static void main(String[] args) {
        new FindVehiclePlate();
    }
}

Algorithm to get all pixels between color border?

I have a long PNG file containing many sprites in a row, but their width/height varies slightly. However, all sprites have a fixed 1px blue border around them.
However, between sprites the borders connect to each other, 2px wide (just border touching border), see this:
But at the bottom of the sprites, one pixel point is missing.
Is there an existing algorithm that can get all the pixels between a color border like this, including the border, when given the pixels?
Or any other ideas on how to grab all the sprites from one file like this and give them a fixed size?
I took your image and transformed it to match your description.
In plain text: I went from left to right and identified vertical lines that might indicate the start or end of an image, using a tracker variable to decide which is which.
I approached it like this in Java:
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.awt.image.Raster;
import java.io.File;
import java.io.IOException;

public class PixelArtSizeFinder {

    public static void main(String[] args) throws IOException {
        File imageFile = new File("pixel_boat.png");
        BufferedImage image = ImageIO.read(imageFile);
        int w = image.getWidth();
        int h = image.getHeight();
        System.out.format("Size: %dx%d%n", w, h);
        Raster data = image.getData();

        int objectsFound = 0;
        int startObjectWidth = 0;
        int endObjectWidth = 0;
        boolean scanningObject = false;

        for (int x = 0; x < w; x++) {
            boolean verticalLineContainsOnlyTransparentOrBorder = true;
            for (int y = 0; y < h; y++) {
                int[] pixel = data.getPixel(x, y, new int[4]);
                if (isOther(pixel)) {
                    verticalLineContainsOnlyTransparentOrBorder = false;
                }
            }
            if (verticalLineContainsOnlyTransparentOrBorder) {
                if (scanningObject) {
                    endObjectWidth = x;
                    System.out.format("Object %d: %d-%d (%dpx)%n",
                            objectsFound,
                            startObjectWidth,
                            endObjectWidth,
                            endObjectWidth - startObjectWidth);
                } else {
                    objectsFound++;
                    startObjectWidth = x;
                }
                scanningObject ^= true; // toggle
            }
        }
    }

    private static boolean isTransparent(int[] pixel) {
        return pixel[3] == 0;
    }

    private static boolean isBorder(int[] pixel) {
        return pixel[0] == 0 && pixel[1] == 187 && pixel[2] == 255 && pixel[3] == 255;
    }

    private static boolean isOther(int[] pixel) {
        return !isTransparent(pixel) && !isBorder(pixel);
    }
}
and the result was
Size: 171x72
Object 1: 0-27 (27px)
Object 2: 28-56 (28px)
Object 3: 57-85 (28px)
Object 4: 86-113 (27px)
Object 5: 114-142 (28px)
Object 6: 143-170 (27px)
I don't know if an algorithm or function already exists for this, but here is what you can do:
since the boats are all similar and you want all the pixels between two blue pixels, you can use something like this:
for all i in vertical pixels
    for all j in horizontal pixels
        if pixel(i,j) == blue then
            j = j + 1
            while pixel(i,j) != blue do
                save this pixel in an array, for example
                j = j + 1
            end while
        end if
    end for
end for
This is just an idea and surely not the most optimal, but you can use it and refine it to make it better ;)
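Translated to Java, the pseudocode above might look roughly like this (a sketch; `BetweenBorders` and `pixelsBetween` are illustrative names, and pixels are assumed to be packed ARGB ints):

```java
import java.util.ArrayList;
import java.util.List;

public class BetweenBorders {
    // For each row, collect the {row, col} positions strictly between a
    // blue border pixel and the next blue pixel, as in the pseudocode.
    static List<int[]> pixelsBetween(int[][] image, int blue) {
        List<int[]> result = new ArrayList<>();
        for (int i = 0; i < image.length; i++) {
            for (int j = 0; j < image[i].length; j++) {
                if (image[i][j] == blue) {
                    j++;
                    // bounds check guards rows whose border is incomplete
                    while (j < image[i].length && image[i][j] != blue) {
                        result.add(new int[] {i, j});
                        j++;
                    }
                }
            }
        }
        return result;
    }
}
```

The bounds check in the inner while is the one thing the pseudocode omits: without it, a row that ends before the closing blue pixel runs off the array.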

Get color of each pixel of an image using BufferedImages

I am trying to get every single color of every single pixel of an image.
My idea was following:
BufferedImage image = ImageIO.read(this.getClass().getResource("image.png"));
int[] pixels = ((DataBufferInt) image.getRaster().getDataBuffer()).getData();
Is that right? I can't even check what the "pixels" array contains, because I get the following error:
java.awt.image.DataBufferByte cannot be cast to java.awt.image.DataBufferInt
I just would like to receive the color of every pixel in an array. How do I achieve that?
import java.io.*;
import java.awt.*;
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;

public class GetPixelColor {

    public static void main(String args[]) throws IOException {
        File file = new File("your_file.jpg");
        BufferedImage image = ImageIO.read(file);

        // Getting pixel color by position x and y
        int x = 0, y = 0; // coordinates of the pixel to inspect
        int clr = image.getRGB(x, y);
        int red = (clr & 0x00ff0000) >> 16;
        int green = (clr & 0x0000ff00) >> 8;
        int blue = clr & 0x000000ff;
        System.out.println("Red Color value = " + red);
        System.out.println("Green Color value = " + green);
        System.out.println("Blue Color value = " + blue);
    }
}
Of course, you have to add a for loop to cover all the pixels.
The problem (also with the answer that was linked from the first answer) is that you hardly ever know what exact type your buffered image will be after reading it with ImageIO. It could contain a DataBufferByte or a DataBufferInt. You may deduce it in some cases via BufferedImage#getType(), but in the worst case, it has type TYPE_CUSTOM, and then you can only fall back to some instanceof tests.
However, you can convert your image into a BufferedImage that is guaranteed to have a DataBufferInt with ARGB values - namely with something like
public static BufferedImage convertToARGB(BufferedImage image)
{
    BufferedImage newImage = new BufferedImage(
        image.getWidth(), image.getHeight(),
        BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = newImage.createGraphics();
    g.drawImage(image, 0, 0, null);
    g.dispose();
    return newImage;
}
Otherwise, you can call image.getRGB(x,y), which may perform the required conversions on the fly.
BTW: Note that obtaining the data buffer of a BufferedImage may degrade painting performance, because the image can no longer be "managed" and kept in VRAM internally.
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

public class Main {

    public static void main(String[] args) throws IOException {
        BufferedImage bufferedImage = ImageIO.read(new File("norris.jpg"));
        int height = bufferedImage.getHeight(), width = bufferedImage.getWidth();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int RGBA = bufferedImage.getRGB(x, y);
                int alpha = (RGBA >> 24) & 255;
                int red = (RGBA >> 16) & 255;
                int green = (RGBA >> 8) & 255;
                int blue = RGBA & 255;
            }
        }
    }
}
Assuming the buffered image represents an image with 8-bit RGBA color components packed into integer pixels, I searched for "RGBA color space" on Wikipedia and found the following:
In the byte-order scheme, "RGBA" is understood to mean a byte R,
followed by a byte G, followed by a byte B, and followed by a byte A.
This scheme is commonly used for describing file formats or network
protocols, which are both byte-oriented.
With simple Bitwise and Bitshift you can get the value of each color and the alpha value of the pixel.
The other ordering scheme of RGBA is also very interesting:
In the word-order scheme, "RGBA" is understood to represent a complete
32-bit word, where R is more significant than G, which is more
significant than B, which is more significant than A. This scheme can
be used to describe the memory layout on a particular system. Its
meaning varies depending on the endianness of the system.
Your image's data buffer contains byte[] pixels, not int[] pixels. Try this: Java - get pixel array from image
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class ImageUtil {

    public static Color[][] loadPixelsFromImage(File file) throws IOException {
        BufferedImage image = ImageIO.read(file);
        Color[][] colors = new Color[image.getWidth()][image.getHeight()];
        for (int x = 0; x < image.getWidth(); x++) {
            for (int y = 0; y < image.getHeight(); y++) {
                colors[x][y] = new Color(image.getRGB(x, y));
            }
        }
        return colors;
    }

    public static void main(String[] args) throws IOException {
        Color[][] colors = loadPixelsFromImage(new File("image.png"));
        System.out.println("Color[0][0] = " + colors[0][0]);
    }
}
I know this has already been answered, but the answers given are a bit convoluted and could use improvement.
The simple idea is to just loop through every (x,y) pixel in the image, and get the color of that pixel.
BufferedImage image = MyImageLoader.getSomeImage();
for (int x = 0; x < image.getWidth(); x++) {
    for (int y = 0; y < image.getHeight(); y++) {
        Color pixel = new Color(image.getRGB(x, y));
        // Do something with pixel color here :)
    }
}
You could then perhaps wrap this method in a class, and implement Java's Iterable API.
import java.util.Iterator;

class IterableImage implements Iterable<Color> {

    private BufferedImage image;

    public IterableImage(BufferedImage image) {
        this.image = image;
    }

    @Override
    public Iterator<Color> iterator() {
        return new Itr();
    }

    private final class Itr implements Iterator<Color> {

        private int x = 0, y = 0;

        @Override
        public boolean hasNext() {
            return x < image.getWidth() && y < image.getHeight();
        }

        @Override
        public Color next() {
            // Read the current pixel first, then advance, so that (0, 0)
            // is included and we never read past the last pixel.
            Color color = new Color(image.getRGB(x, y));
            x += 1;
            if (x >= image.getWidth()) {
                x = 0;
                y += 1;
            }
            return color;
        }
    }
}
The usage of which might look something like the following
BufferedImage image = MyImageLoader.getSomeImage();
for (Color color : new IterableImage(image)) {
    // Do something with color here :)
}

Importing A Sprite from a sprite sheet

Well, I have been watching a couple of YouTube videos on how to take sprites from a sprite sheet (8x8), and I really liked the tutorial by DesignsByZepher. However, the method he uses results in him importing a sprite sheet and then changing the colors to in-code selected colours.
http://www.youtube.com/watch?v=6FMgQNDNMJc displaying the sheet
http://www.youtube.com/watch?v=7eotyB7oNHE for the color rendering
The code that I have made from watching his video is:
package exikle.learn.game.gfx;

import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;

public class SpriteSheet {

    public String path;
    public int width;
    public int height;
    public int[] pixels;

    public SpriteSheet(String path) {
        BufferedImage image = null;
        try {
            image = ImageIO.read(SpriteSheet.class.getResourceAsStream(path));
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (image == null) { return; }
        this.path = path;
        this.width = image.getWidth();
        this.height = image.getHeight();
        pixels = image.getRGB(0, 0, width, height, null, 0, width);
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = (pixels[i] & 0xff) / 64;
        }
    }
}
^ This is the code where the image gets imported.
package exikle.learn.game.gfx;

public class Colours {

    public static int get(int colour1, int colour2, int colour3, int colour4) {
        return (get(colour4) << 24) + (get(colour3) << 16)
                + (get(colour2) << 8) + get(colour1);
    }

    private static int get(int colour) {
        if (colour < 0)
            return 255;
        int r = colour / 100 % 10;
        int g = colour / 10 % 10;
        int b = colour % 10;
        return r * 36 + g * 6 + b;
    }
}
^ and this is the code which I think deals with all the colors, but I'm kind of confused about it.
My question is: how do I remove the color modifier and just import and display the sprite sheet as is, i.e. with the colors it already has?
So you're fiddling with the Minicraft source, I see. The thing about Notch's code is that he substantially limited himself technically in this game. What the engine is doing is basically saying every sprite/tile can have 4 colors (from the grey-scaled spritesheet), he generates his own color palette that he retrieves colors from and sets accordingly during rendering. I can't remember exactly how many bits per channel he set and such.
However, you obviously are very new to programming and imo there's nothing better than fiddling with and analyzing other people's code.. that is, if you actually can do so. The Screen class is where the rendering takes place and hence it's what uses the spritesheet and therefore gives color accordingly to whatever tile you tell it to get. Markus is quite clever, despite poorly written code (which is completely forgiven as he did have 48 hours to make the damned thing ;))
If you want to just display the spritesheet as is, you can either rewrite the render function or overload it to something like this... (in class Screen)
public void render() {
    for (int y = 0; y < h; y++) {
        if (y >= sheet.h) continue; // prevent going out of bounds on y-axis
        for (int x = 0; x < w; x++) {
            if (x >= sheet.w) continue; // prevent going out of bounds on x-axis
            pixels[x + y * w] = sheet.pixels[x + y * sheet.w];
        }
    }
}
This will just put whatever of the sheet it can fit into the screen for rendering (it's a really simple piece of code, but it should work). The next step will be copying the pixels over to the actual raster for display, which I'm sure you can handle. (If you have copy-pasted all of the Minicraft source code, or some other slightly modified source, you might want to change some things about that as well.)
All the cheers!
The basic change would be to replace the get(int) method...
private static int get(int colour) {
    //if (colour < 0)
    //    return 255;
    //int r = colour / 100 % 10;
    //int g = colour / 10 % 10;
    //int b = colour % 10;
    //return r * 36 + g * 6 + b;
    return colour;
}
I'd also get rid of
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = (pixels[i] & 0xff) / 64;
}
from the SpriteSheet constructor.
But to be honest, wouldn't it be easier to simply use BufferedImage#getSubimage?
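For reference, slicing a sheet of fixed-size tiles with getSubimage might look like this (a sketch; `SheetSlicer` is an illustrative name, and it assumes the sheet dimensions are exact multiples of the tile size):

```java
import java.awt.image.BufferedImage;

public class SheetSlicer {
    // Cut a sheet of fixed-size tiles into sprites with getSubimage.
    // Note: the returned sub-images SHARE the sheet's pixel data, so
    // copy them first if you intend to modify individual sprites.
    static BufferedImage[] slice(BufferedImage sheet, int tileSize) {
        int cols = sheet.getWidth() / tileSize;
        int rows = sheet.getHeight() / tileSize;
        BufferedImage[] tiles = new BufferedImage[cols * rows];
        for (int row = 0; row < rows; row++) {
            for (int col = 0; col < cols; col++) {
                tiles[row * cols + col] =
                        sheet.getSubimage(col * tileSize, row * tileSize,
                                          tileSize, tileSize);
            }
        }
        return tiles;
    }
}
```

For an 8x8-tile sheet like the one in the question, `slice(sheet, 8)` would return the sprites in row-major order.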
