I have the following code, which creates a grayscale BufferedImage and then sets each pixel to a random color.
import java.awt.image.BufferedImage;

public class Main {
    public static void main(String[] args) {
        BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        int correct = 0, error = 0;
        for (int i = 0; i < right.getWidth(); i++) {
            for (int j = 0; j < right.getHeight(); j++) {
                int average = (int) (Math.random() * 255);
                int color = (0xff << 24) | (average << 16) | (average << 8) | average;
                right.setRGB(i, j, color);
                if (color != right.getRGB(i, j)) {
                    error++;
                } else {
                    correct++;
                }
            }
        }
        System.out.println(correct + ", " + error);
    }
}
Weird behaviour occurs in approximately 25-30% of the pixels: I set a color, and immediately afterwards the pixel has a different value than the one I just set. Am I setting colors the wrong way?
Here is your solution: avoid getRGB and use the Raster (faster and easier than getRGB), or even better the DataBuffer (fastest, but you have to handle the encoding yourself):
import java.awt.image.BufferedImage;

public class Main {
    public static void main(String[] args) {
        BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        int correct = 0, error = 0;
        for (int x = 0; x < right.getWidth(); x++) {
            for (int y = 0; y < right.getHeight(); y++) {
                int average = (int) (Math.random() * 255);
                // write and read the raw sample for band 0 (the gray channel)
                right.getRaster().setSample(x, y, 0, average);
                if (average != right.getRaster().getSample(x, y, 0)) {
                    error++;
                } else {
                    correct++;
                }
            }
        }
        System.out.println(correct + ", " + error);
    }
}
In your case getRGB is terrible, because the image is encoded as an array of bytes (8 bits per pixel), yet getRGB forces you to go through packed 32-bit RGB values. The raster does all the conversion work for you.
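If you want the DataBuffer route, something like this should work (a minimal sketch; TYPE_BYTE_GRAY is backed by a DataBufferByte with one byte per pixel in row-major order):

import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;

public class Main {
    public static void main(String[] args) {
        BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_BYTE_GRAY);
        // TYPE_BYTE_GRAY stores one byte per pixel, row by row
        byte[] data = ((DataBufferByte) right.getRaster().getDataBuffer()).getData();
        for (int y = 0; y < right.getHeight(); y++) {
            for (int x = 0; x < right.getWidth(); x++) {
                int average = (int) (Math.random() * 255);
                data[y * right.getWidth() + x] = (byte) average;
            }
        }
    }
}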
I think your issue has to do with the image type (the third parameter of the BufferedImage constructor). If you change the type to BufferedImage.TYPE_INT_ARGB, you will get 100% correct results.
Looking at the documentation for BufferedImage.getRGB(int,int), a conversion takes place whenever the image's color model is not the default one:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace. Color conversion takes place if this default model does not match the image ColorModel.
So you're probably seeing the mismatches due to the conversion.
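A minimal sketch of the suggested fix (only the image type changes; with TYPE_INT_ARGB the packed pixel is stored as-is, so no conversion happens on read-back):

BufferedImage right = new BufferedImage(100, 100, BufferedImage.TYPE_INT_ARGB);
int color = (0xff << 24) | (200 << 16) | (200 << 8) | 200;
right.setRGB(0, 0, color);
System.out.println(color == right.getRGB(0, 0)); // prints true: no color conversion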
Wild guess:
Remove (0xff << 24) |, which sets the alpha channel, i.e. how opaque the color is. If transparency gets applied depending on whether average is below or above 128, roughly 25% of the pixels could end up with the wrong color mapping (a very wild guess).
Related
Taking part in a Coursera course, I've been trying to use steganography to hide an image in another. This means I've tried to store the "main" picture's RGB values on 6 bits and the "second" picture's values on the last 2 bits.
I'm merging these two values to create a joint picture, and have also coded a class to parse the joint picture and recover the original images.
Image recovery has not been successful, although it seems (from other examples provided within the course) that the parser is working fine. I suppose that saving the pictures after modification using ImageIO.write somehow modifies the RGB values I have carefully set in the code. :D
public static BufferedImage mergeImage(BufferedImage original,
        BufferedImage message, int hide) {
    // hide is the number of bits on which the second image is hidden
    if (original != null) {
        int width = original.getWidth();
        int height = original.getHeight();
        BufferedImage output = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                int pix_orig = original.getRGB(i, j);
                int pix_msg = message.getRGB(i, j);
                int pixel = setpixel(pix_orig, pix_msg, hide);
                output.setRGB(i, j, pixel);
            }
        }
        return output;
    }
    return null;
}

public static int setpixel(int pixel_orig, int pixel_msg, int hide) {
    int bits = (int) Math.pow(2, hide);
    Color orig = new Color(pixel_orig);
    Color msg = new Color(pixel_msg);
    int red = ((orig.getRed() / bits) * bits); //+ (msg.getRed() / (256/bits));
    if (red % 4 != 0) {
        counter += 1;
    }
    int green = ((orig.getGreen() / bits) * bits) + (msg.getGreen() / (256 / bits));
    int blue = ((orig.getBlue() / bits) * bits) + (msg.getBlue() / (256 / bits));
    int pixel = new Color(red, green, blue).getRGB();
    return pixel;
}
This is the code I use for setting the RGB values of the merged picture. As you can see, I have commented out the part of the code belonging to red, to check whether the main picture can actually be saved on 6 bits, assuming I take int hide = 2.
However, if I make the same check in the parsing part of the code:
public static BufferedImage parseImage(BufferedImage input, int hidden) {
    // hidden is the number of bits on which the second image is hidden
    if (input != null) {
        int width = input.getWidth();
        int height = input.getHeight();
        BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                int pixel = input.getRGB(i, j);
                pixel = setpixel(pixel, hidden);
                output.setRGB(i, j, pixel);
            }
        }
        return output;
    }
    return null;
}

public static int setpixel(int pixel, int hidden) {
    int bits = (int) Math.pow(2, hidden);
    Color c = new Color(pixel);
    if (c.getRed() % 4 != 0) {
        counter += 1;
    }
    int red = (c.getRed() - (c.getRed() / bits) * bits) * (256 / bits);
    int green = (c.getGreen() - (c.getGreen() / bits) * bits) * (256 / bits);
    int blue = (c.getBlue() - (c.getBlue() / bits) * bits) * (256 / bits);
    pixel = new Color(red, green, blue).getRGB();
    return pixel;
}
I get ~100k pixels where the R value has a remainder when divided by four.
I suspect there is some problem with ImageIO.write.
I know the question is going to be vague, but:
1) Can someone confirm this?
2) What can I do to get this code working?
Thanks a lot!
JPEG has lossy compression, which means some pixels will effectively be modified when reloading the image. This isn't a fault of ImageIO.write, it's how the format works. If you want to embed your data directly to pixel values, you want to save the image to a lossless format, such as BMP or PNG.
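For example (a minimal sketch; the output file name is illustrative, and mergeImage/original/message are from the question's code):

import javax.imageio.ImageIO;
import java.io.File;

// PNG is lossless, so the carefully set low-order bits survive the round trip
BufferedImage merged = mergeImage(original, message, 2);
ImageIO.write(merged, "png", new File("merged.png")); // "jpg" would corrupt the hidden bits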
There is this image comparison code I am supposed to modify to highlight/point out the differences between two images. Is there a way to modify this code so that it highlights the differences in the images? If not, any suggestion on how to go about it would be greatly appreciated.
int width1 = img1.getWidth(null);
int width2 = img2.getWidth(null);
int height1 = img1.getHeight(null);
int height2 = img2.getHeight(null);
if ((width1 != width2) || (height1 != height2)) {
    System.err.println("Error: Images dimensions mismatch");
    System.exit(1);
}
long diff = 0;
for (int i = 0; i < height1; i++) {
    for (int j = 0; j < width1; j++) {
        int rgb1 = img1.getRGB(j, i);
        int rgb2 = img2.getRGB(j, i);
        int r1 = (rgb1 >> 16) & 0xff;
        int g1 = (rgb1 >> 8) & 0xff;
        int b1 = (rgb1) & 0xff;
        int r2 = (rgb2 >> 16) & 0xff;
        int g2 = (rgb2 >> 8) & 0xff;
        int b2 = (rgb2) & 0xff;
        diff += Math.abs(r1 - r2);
        diff += Math.abs(g1 - g2);
        diff += Math.abs(b1 - b2);
    }
}
double n = width1 * height1 * 3;
double p = diff / n / 255.0;
return (p * 100.0);
This solution did the trick for me. It highlights differences, and has the best performance out of the methods I've tried. (Assumptions: images are the same size. This method hasn't been tested with transparencies.)
Average time to compare a 1600x860 PNG image 50 times (on same machine):
JDK7 ~178 milliseconds
JDK8 ~139 milliseconds
Does anyone have a better/faster solution?
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    // convert images to pixel arrays...
    final int w = img1.getWidth(),
              h = img1.getHeight(),
              highlight = Color.MAGENTA.getRGB();
    final int[] p1 = img1.getRGB(0, 0, w, h, null, 0, w);
    final int[] p2 = img2.getRGB(0, 0, w, h, null, 0, w);
    // compare img1 to img2, pixel by pixel. If different, highlight img1's pixel...
    for (int i = 0; i < p1.length; i++) {
        if (p1[i] != p2[i]) {
            p1[i] = highlight;
        }
    }
    // save img1's pixels to a new BufferedImage, and return it...
    // (may require TYPE_INT_ARGB)
    final BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    out.setRGB(0, 0, w, h, p1, 0, w);
    return out;
}
Usage:
import javax.imageio.ImageIO;
import java.io.File;

ImageIO.write(
        getDifferenceImage(
                ImageIO.read(new File("a.png")),
                ImageIO.read(new File("b.png"))),
        "png",
        new File("output.png"));
Some inspiration...
What I would do is set each pixel in the output to be the difference between the corresponding pixels of the two input images. The difference calculated in your original code is based on the L1 norm, also called the sum of absolute differences. In any case, write a method that takes in your two images and returns an image of the same size, where each location is set to the difference of the pair of pixels at that same location in the inputs. Basically, this gives you an indication as to which pixels are different: the whiter the pixel, the greater the difference between the two corresponding locations.
I'm also going to assume you're using the BufferedImage class, as getRGB() methods are used and you are bit-shifting to access individual channels. In other words, make a method that looks like this:
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    int width1 = img1.getWidth(); // Change - getWidth() and getHeight() for BufferedImage
    int width2 = img2.getWidth(); // take no arguments
    int height1 = img1.getHeight();
    int height2 = img2.getHeight();
    if ((width1 != width2) || (height1 != height2)) {
        System.err.println("Error: Images dimensions mismatch");
        System.exit(1);
    }
    // NEW - Create output BufferedImage of type RGB
    BufferedImage outImg = new BufferedImage(width1, height1, BufferedImage.TYPE_INT_RGB);
    // Modified - Changed to int as pixels are ints
    int diff;
    int result; // Stores output pixel
    for (int i = 0; i < height1; i++) {
        for (int j = 0; j < width1; j++) {
            int rgb1 = img1.getRGB(j, i);
            int rgb2 = img2.getRGB(j, i);
            int r1 = (rgb1 >> 16) & 0xff;
            int g1 = (rgb1 >> 8) & 0xff;
            int b1 = (rgb1) & 0xff;
            int r2 = (rgb2 >> 16) & 0xff;
            int g2 = (rgb2 >> 8) & 0xff;
            int b2 = (rgb2) & 0xff;
            diff = Math.abs(r1 - r2); // Change
            diff += Math.abs(g1 - g2);
            diff += Math.abs(b1 - b2);
            diff /= 3; // Change - Ensure result is between 0 - 255
            // Make the difference image grayscale
            // (the RGB components are all the same)
            result = (diff << 16) | (diff << 8) | diff;
            outImg.setRGB(j, i, result); // Set result
        }
    }
    // Now return
    return outImg;
}
To call this method, simply do:
outImg = getDifferenceImage(img1, img2);
This is assuming that you are calling this within a method of your class. Have fun and good luck!
Just to note that the answer from @NickGrealy can be made 10 times faster if you don't need to keep the first image and can modify it in place.
Example:
// img1 will be updated in place with the changes from img2
// (assumes both images are TYPE_4BYTE_ABGR: 4 bytes per pixel, A-B-G-R order)
public static BufferedImage getDifferenceImage(BufferedImage img1, BufferedImage img2) {
    byte[] magenta = {-1, 0, -1}; // the B, G, R bytes of magenta
    byte[] buff1 = ((DataBufferByte) img1.getRaster().getDataBuffer()).getData();
    byte[] buff2 = ((DataBufferByte) img2.getRaster().getDataBuffer()).getData();
    // compare one byte per pixel (offset 1 = the blue sample)
    for (int i = 1; i < buff1.length; i += 4) {
        if (buff1[i] != buff2[i]) {
            System.arraycopy(magenta, 0, buff1, i, 3);
        }
    }
    return img1;
}
I needed a fast approach to use on a potentially large number of images for visual regression checking.
It runs in under 2 ms on my machine, and in my case img1 is already saved on disk, so I don't need to preserve it; I'm only interested in having the differences written into the buffered image so I can write it to a new location for further inspection.
I have a situation where I need to invert the alpha channel of a VolatileImage.
My current implementation is the obvious one, but it is very slow:
public BufferedImage invertImage(VolatileImage v) {
    BufferedImage b = new BufferedImage(v.getWidth(), v.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
    Graphics g = b.getGraphics();
    g.drawImage(v, 0, 0, null);
    for (int i = 0; i < b.getWidth(); i++) {
        for (int j = 0; j < b.getHeight(); j++) {
            Color c = new Color(b.getRGB(i, j), true); // true = keep the alpha component
            c = new Color(c.getRed(), c.getGreen(), c.getBlue(), 255 - c.getAlpha());
            b.setRGB(i, j, c.getRGB());
        }
    }
    return b;
}
This works fine, but is painfully slow. I have large images and need this to be fast. I have messed around with AlphaComposite but to no avail; this is not really a compositing problem as far as I understand.
Given that 255 - x is equivalent to x ^ 0xff for 0 <= x < 256, can I not do an en-masse XOR over the alpha channel somehow?
After a lot of googling, I came across the DataBuffer classes being used as views into BufferedImages:
DataBufferByte buf = (DataBufferByte) b.getRaster().getDataBuffer();
byte[] values = buf.getData();
// for TYPE_4BYTE_ABGR the alpha byte is the first of every 4-byte pixel
for (int i = 0; i < values.length; i += 4) values[i] = (byte) (values[i] ^ 0xff);
This inverts the alpha values of the BufferedImage in place (you do not need to draw anything back; altering the array values alters the buffered image itself).
My tests show this method is about 20 times faster than jazzbassrob's improvement, which was about 1.5 times faster than my original method.
You should be able to speed it up by avoiding all the getters and the constructor inside the loop:
for (int i = 0; i < b.getWidth(); i++) {
    for (int j = 0; j < b.getHeight(); j++) {
        b.setRGB(i, j, b.getRGB(i, j) ^ 0xFF000000); // flip only the alpha byte
    }
}
In my 2D game I'm using graphic tools to create nice, smooth terrain represented by black color:
A simple algorithm written in Java looks for the black color every 15 pixels, creating the following set of lines (gray):
As you can see, there are some places that are mapped very badly and some that are pretty good. In other cases it would not even be necessary to sample every 15 pixels, e.g. if the terrain is flat.
What's the best way to convert this curve to a set of points [lines], using as few points as possible?
Sampling every 15 pixels = 55 FPS, 10 pixels = 40 FPS
The following algorithm does that job, sampling from right to left and outputting an array that can be pasted into code:
public void loadMapFile(String path) throws IOException {
    File mapFile = new File(path);
    image = ImageIO.read(mapFile);
    boolean black;
    System.out.print("{ ");
    int[] lastPoint = {0, 0};
    for (int x = image.getWidth() - 1; x >= 0; x -= 15) {
        for (int y = 0; y < image.getHeight(); y++) {
            black = image.getRGB(x, y) == -16777216; // opaque black in packed ARGB
            if (black) {
                lastPoint[0] = x;
                lastPoint[1] = y;
                System.out.print("{" + x + ", " + y + "}, ");
                break;
            }
        }
    }
    System.out.println("}");
}
I'm developing on Android, using Java and AndEngine.
This problem is nearly identical to the problem of digitizing a signal (such as sound), where the basic law is that any frequency in the input that is too high for the sampling rate will not be reflected in the digitized output. So the concern is that if you check every 30 pixels and then test the middle, as bmorris591 suggests, you might miss a 7-pixel hole between the sampling points. This suggests that if there are 10-pixel features you cannot afford to miss, you need to scan every 5 pixels: your sample rate should be twice the highest frequency present in the signal.
One thing that can help improve your algorithm is a better y-dimension search. Currently you search linearly for the intersection between sky and terrain, but a binary search should be faster:
int y = image.getHeight() / 2; // Start searching from the middle of the image
int yIncr = y / 2;
while (yIncr > 0) {
    if (image.getRGB(x, y) == -16777216) {
        // We hit the terrain, go towards the sky
        y -= yIncr;
    } else {
        // We hit the sky, go towards the terrain
        y += yIncr;
    }
    yIncr = yIncr / 2;
}
// Make sure y is on the first terrain point: move y up or down a few pixels.
// Only one of the following two loops will execute, and only one or two iterations max.
while (image.getRGB(x, y) != -16777216) y++;
while (image.getRGB(x, y - 1) == -16777216) y--;
Other optimizations are possible. If you know that your terrain has no cliffs, then you only need to search the window from lastY+maxDropoff to lastY-maxDropoff. Also, if your terrain can never be as tall as the entire bitmap, you don't need to search the top of the bitmap either. This should help to free some CPU cycles you can use for higher-resolution x-scanning of the terrain.
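A sketch of that windowed search (illustrative names: lastY is the surface found in the previous column, maxDropoff the assumed limit on height change between sampled columns):

// Scan only a band around the previous column's surface height
int from = Math.max(0, lastY - maxDropoff);
int to = Math.min(image.getHeight() - 1, lastY + maxDropoff);
int surfaceY = -1; // stays -1 if no terrain is found in the window
for (int y = from; y <= to; y++) {
    if (image.getRGB(x, y) == -16777216) { // first black pixel = surface
        surfaceY = y;
        break;
    }
}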
I propose to find the border points which lie on the boundary between white and dark pixels, and then digitize those points. To do that, we define DELTA, which specifies which points to skip and which to add to the result list.
DELTA = 3, Number of points = 223
DELTA = 5, Number of points = 136
DELTA = 10, Number of points = 70
Below I have put the source code, which displays the image and looks for the points. I hope you will be able to read it and find a way to solve your problem.
import java.awt.Color;
import java.awt.Dimension;
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.imageio.ImageIO;
import javax.swing.JFrame;
import javax.swing.JPanel;
public class Program {
    public static void main(String[] args) throws IOException {
        BufferedImage image = ImageIO.read(new File("/home/michal/Desktop/FkXG1.png"));
        PathFinder pathFinder = new PathFinder(10);
        List<Point> borderPoints = pathFinder.findBorderPoints(image);
        System.out.println(Arrays.toString(borderPoints.toArray()));
        System.out.println(borderPoints.size());
        JFrame frame = new JFrame();
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.getContentPane().add(new ImageBorderPanel(image, borderPoints));
        frame.pack();
        frame.setMinimumSize(new Dimension(image.getWidth(), image.getHeight()));
        frame.setVisible(true);
    }
}
class PathFinder {
    private int maxDelta = 3;

    public PathFinder(int delta) {
        this.maxDelta = delta;
    }

    public List<Point> findBorderPoints(BufferedImage image) {
        int width = image.getWidth();
        int[][] imageInBytes = convertTo2DWithoutUsingGetRGB(image);
        int[] borderPoints = findBorderPoints(width, imageInBytes);
        List<Integer> indexes = dwindlePoints(width, borderPoints);
        List<Point> points = new ArrayList<Point>(indexes.size());
        for (Integer index : indexes) {
            points.add(new Point(index, borderPoints[index]));
        }
        return points;
    }

    private List<Integer> dwindlePoints(int width, int[] borderPoints) {
        List<Integer> indexes = new ArrayList<Integer>(width);
        indexes.add(borderPoints[0]);
        int delta = 0;
        for (int index = 1; index < width; index++) {
            delta += Math.abs(borderPoints[index - 1] - borderPoints[index]);
            if (delta >= maxDelta) {
                indexes.add(index);
                delta = 0;
            }
        }
        return indexes;
    }

    private int[] findBorderPoints(int width, int[][] imageInBytes) {
        int[] borderPoints = new int[width];
        int black = Color.BLACK.getRGB();
        for (int y = 0; y < imageInBytes.length; y++) {
            int maxX = imageInBytes[y].length;
            for (int x = 0; x < maxX; x++) {
                int color = imageInBytes[y][x];
                if (color == black && borderPoints[x] == 0) {
                    borderPoints[x] = y;
                }
            }
        }
        return borderPoints;
    }
    private int[][] convertTo2DWithoutUsingGetRGB(BufferedImage image) {
        final byte[] pixels = ((DataBufferByte) image.getRaster().getDataBuffer()).getData();
        final int width = image.getWidth();
        final int height = image.getHeight();
        final boolean hasAlphaChannel = image.getAlphaRaster() != null;
        int[][] result = new int[height][width];
        if (hasAlphaChannel) {
            final int pixelLength = 4;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += (((int) pixels[pixel] & 0xff) << 24); // alpha
                argb += ((int) pixels[pixel + 1] & 0xff); // blue
                argb += (((int) pixels[pixel + 2] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 3] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        } else {
            final int pixelLength = 3;
            for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel += pixelLength) {
                int argb = 0;
                argb += -16777216; // 255 alpha
                argb += ((int) pixels[pixel] & 0xff); // blue
                argb += (((int) pixels[pixel + 1] & 0xff) << 8); // green
                argb += (((int) pixels[pixel + 2] & 0xff) << 16); // red
                result[row][col] = argb;
                col++;
                if (col == width) {
                    col = 0;
                    row++;
                }
            }
        }
        return result;
    }
}
class ImageBorderPanel extends JPanel {
    private static final long serialVersionUID = 1L;
    private BufferedImage image;
    private List<Point> borderPoints;

    public ImageBorderPanel(BufferedImage image, List<Point> borderPoints) {
        this.image = image;
        this.borderPoints = borderPoints;
    }

    @Override
    public void paintComponent(Graphics g) {
        super.paintComponent(g);
        g.drawImage(image, 0, 0, null);
        Graphics2D graphics2d = (Graphics2D) g;
        g.setColor(Color.YELLOW);
        for (Point point : borderPoints) {
            graphics2d.fillRect(point.x, point.y, 3, 3);
        }
    }
}
In my source code I have used example from this question:
Java - get pixel array from image
The most efficient solution (with respect to the number of points required) would be to allow variable spacing between points along the X axis. That way, a large flat part would require very few points/samples, while complex terrain would use more.
In 3D mesh processing, there is a nice mesh simplification algorithm named "quadric edge collapse", which you can adapt to your problem.
Here is the idea, translated to your problem - it actually gets much simpler than the original 3D algorithm:
Represent your curve with way too many points.
For each point, measure the error (i.e. difference to the smooth terrain) if you remove it.
Remove the point that gives the smallest error.
Repeat until you have reduced the number of points far enough or errors get too large.
To be more precise regarding step 2: Given points P, Q, R, the error of Q is the difference between the approximation of your terrain by two straight lines, P->Q and Q->R, and the approximation of your terrain by just one line P->R.
Note that when a point is removed only its neighbors need an update of their error value.
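A sketch of that greedy loop in Java (illustrative code, not the 3D original; it uses the perpendicular distance from Q to the line P->R as the error measure for step 2):

import java.awt.Point;
import java.util.List;

// Greedily drop the interior point whose removal distorts the curve least
static void simplify(List<Point> pts, int targetSize, double maxError) {
    while (pts.size() > targetSize) {
        int candidate = -1;
        double smallest = Double.MAX_VALUE;
        for (int i = 1; i < pts.size() - 1; i++) {
            double e = distanceToLine(pts.get(i - 1), pts.get(i), pts.get(i + 1));
            if (e < smallest) { smallest = e; candidate = i; }
        }
        if (smallest > maxError) break; // every remaining point matters
        pts.remove(candidate);          // only the neighbours' errors change
    }
}

// Perpendicular distance from q to the line through p and r
static double distanceToLine(Point p, Point q, Point r) {
    double dx = r.x - p.x, dy = r.y - p.y;
    double len = Math.hypot(dx, dy);
    if (len == 0) return Math.hypot(q.x - p.x, q.y - p.y);
    return Math.abs(dy * q.x - dx * q.y + (double) r.x * p.y - (double) r.y * p.x) / len;
}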
Well, I have been watching a couple of YouTube videos on how to take sprites from a sprite sheet (8x8), and I really liked the tutorial by DesignsByZepher. However, the method he uses results in him importing a sprite sheet and then changing the colors to in-code selected colours.
http://www.youtube.com/watch?v=6FMgQNDNMJc for displaying the sheet
http://www.youtube.com/watch?v=7eotyB7oNHE for the color rendering
The code that I have made from watching his videos is:
package exikle.learn.game.gfx;

import java.awt.image.BufferedImage;
import java.io.IOException;
import javax.imageio.ImageIO;

public class SpriteSheet {
    public String path;
    public int width;
    public int height;
    public int[] pixels;

    public SpriteSheet(String path) {
        BufferedImage image = null;
        try {
            image = ImageIO.read(SpriteSheet.class.getResourceAsStream(path));
        } catch (IOException e) {
            e.printStackTrace();
        }
        if (image == null) { return; }
        this.path = path;
        this.width = image.getWidth();
        this.height = image.getHeight();
        pixels = image.getRGB(0, 0, width, height, null, 0, width);
        for (int i = 0; i < pixels.length; i++) {
            pixels[i] = (pixels[i] & 0xff) / 64;
        }
    }
}
^ This is the code where the image gets imported.
package exikle.learn.game.gfx;

public class Colours {
    public static int get(int colour1, int colour2, int colour3, int colour4) {
        return (get(colour4) << 24) + (get(colour3) << 16)
                + (get(colour2) << 8) + get(colour1);
    }

    private static int get(int colour) {
        if (colour < 0)
            return 255;
        int r = colour / 100 % 10;
        int g = colour / 10 % 10;
        int b = colour % 10;
        return r * 36 + g * 6 + b;
    }
}
^ and the code which I think deals with all the colors, but I'm kinda confused about this.
My question is: how do I remove the color modifier and just import and display the sprite sheet as is, i.e. with the colors it already has?
So you're fiddling with the Minicraft source, I see. The thing about Notch's code is that he substantially limited himself technically in this game. What the engine basically does is say that every sprite/tile can have 4 colors (from the grey-scaled spritesheet); he generates his own color palette that he retrieves colors from and sets accordingly during rendering. I can't remember exactly how many bits per channel he set and such.
However, you obviously are very new to programming and imo there's nothing better than fiddling with and analyzing other people's code.. that is, if you actually can do so. The Screen class is where the rendering takes place and hence it's what uses the spritesheet and therefore gives color accordingly to whatever tile you tell it to get. Markus is quite clever, despite poorly written code (which is completely forgiven as he did have 48 hours to make the damned thing ;))
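To illustrate the palette idea (a sketch of my own, not Notch's exact code): the Colours.get above packs each channel into 6 levels, producing a 216-colour palette index of r * 36 + g * 6 + b, which the renderer later expands back into displayable RGB roughly like this:

// Expand a 0..215 palette index back to a packed RGB int
static int paletteToRGB(int index) {
    int r = (index / 36) % 6; // 6 levels per channel, 0..5
    int g = (index / 6) % 6;
    int b = index % 6;
    // scale each 0..5 level up to 0..255
    return (r * 255 / 5) << 16 | (g * 255 / 5) << 8 | (b * 255 / 5);
}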
If you want to just display the spritesheet as is, you can either rewrite the render function or overload it to something like this (in the Screen class):
public void render() {
    for (int y = 0; y < h; y++) {
        if (y >= sheet.h) continue; // prevent going out of bounds on the y-axis
        for (int x = 0; x < w; x++) {
            if (x >= sheet.w) continue; // prevent going out of bounds on the x-axis
            pixels[x + y * w] = sheet.pixels[x + y * sheet.w];
        }
    }
}
This will just put whatever of the sheet it can fit into the screen for rendering (it's a really simple piece of code, but should work), the next step will be copying the pixels over to the actual raster for display, which I'm sure you can handle. (If you have copy-pasted all of the minicraft source code or some other slightly modified source code, you might want to change some things about that as well.)
All the cheers!
The basics would be to replace the get(int) method...
private static int get(int colour) {
    //if (colour < 0)
    //    return 255;
    //int r = colour / 100 % 10;
    //int g = colour / 10 % 10;
    //int b = colour % 10;
    //return r * 36 + g * 6 + b;
    return colour;
}
I'd also get rid of
for (int i = 0; i < pixels.length; i++) {
    pixels[i] = (pixels[i] & 0xff) / 64;
}
from the SpriteSheet constructor.
But to be honest, wouldn't it be easier to simply use BufferedImage#getSubimage?
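For example (a sketch; 8x8 tiles assumed as in the question, and tileX/tileY are whatever tile coordinates you want):

BufferedImage sheet = ImageIO.read(SpriteSheet.class.getResourceAsStream(path));
// getSubimage shares pixel data with the sheet, so this is cheap
BufferedImage sprite = sheet.getSubimage(tileX * 8, tileY * 8, 8, 8);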