How to convert an array of bytes into an Image in Java SE

What is the right way to convert a raw array of bytes into an Image in Java SE?
The array consists of bytes, where every three bytes represent one pixel, one byte per RGB component.
Can anybody suggest a code sample?
Thanks,
Mike

You can do it using the Raster class. This is better because it does not require iterating over and copying byte arrays.
byte[] raw = new byte[width * height * 3]; // raw bytes of our image
DataBuffer buffer = new DataBufferByte(raw, raw.length);
// The most difficult part of the AWT API for me to learn: a sample model describing
// 3 interleaved bytes per pixel, rows of width*3 bytes, red at offset 2 (B,G,R order)
SampleModel sampleModel = new ComponentSampleModel(DataBuffer.TYPE_BYTE, width, height, 3, width * 3, new int[]{2, 1, 0});
Raster raster = Raster.createRaster(sampleModel, buffer, null);
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_3BYTE_BGR);
image.setData(raster);
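A variation on the same idea (a sketch only; `width`, `height`, and the B,G,R byte order are assumed as above) is to build a WritableRaster and hand it straight to the BufferedImage constructor, which wraps the byte array instead of copying it the way setData does:

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

public class RawBgrImage {
    // Wraps raw B,G,R bytes (3 per pixel, row by row) without copying them.
    public static BufferedImage wrap(byte[] raw, int width, int height) {
        DataBufferByte buffer = new DataBufferByte(raw, raw.length);
        // pixelStride 3, scanlineStride width*3, red at offset 2 (B,G,R order)
        WritableRaster raster = Raster.createInterleavedRaster(
                buffer, width, height, width * 3, 3, new int[]{2, 1, 0}, null);
        ColorModel cm = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[]{8, 8, 8}, false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
        return new BufferedImage(cm, raster, false, null);
    }
}
```

Because the raster shares the byte array, later writes to `raw` show up in the image.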

Assuming you know the height and width of the image:
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for (int r = 0; r < height; r++) {
    for (int c = 0; c < width; c++) {
        int index = (r * width + c) * 3; // 3 bytes per pixel
        int red   = colors[index]     & 0xFF;
        int green = colors[index + 1] & 0xFF;
        int blue  = colors[index + 2] & 0xFF;
        int rgb = (red << 16) | (green << 8) | blue;
        img.setRGB(c, r, rgb);
    }
}
Roughly. This assumes the pixel data is stored row by row, and that the length of colors is 3 * width * height (which should be valid).

folkyatina's approach works if your RGB values are in B,G,R order, but if they are in R,G,B order I have found the following code to work:
DataBuffer rgbData = new DataBufferByte(rgbs, rgbs.length);
WritableRaster raster = Raster.createInterleavedRaster(
        rgbData, width, height,
        width * 3,          // scanlineStride
        3,                  // pixelStride
        new int[]{0, 1, 2}, // bandOffsets
        null);
ColorModel colorModel = new ComponentColorModel(
        ColorSpace.getInstance(ColorSpace.CS_sRGB),
        new int[]{8, 8, 8}, // bits
        false,              // hasAlpha
        false,              // isPreMultiplied
        ComponentColorModel.OPAQUE,
        DataBuffer.TYPE_BYTE);
return new BufferedImage(colorModel, raster, false, null);

There is a setRGB variant which accepts an int array of packed ARGB values:
BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
int[] raw = new int[data.length / 3]; // one packed int per 3-byte pixel
for (int i = 0; i < data.length / 3; i++) {
    raw[i] = 0xFF000000 |
             ((data[3 * i]     & 0xFF) << 16) |
             ((data[3 * i + 1] & 0xFF) << 8)  |
             (data[3 * i + 2] & 0xFF);
}
img.setRGB(0, 0, width, height, raw, 0, width);
The performance characteristics are similar to CoderTao's solution.

Assuming your raw data is a 1-D array holding an encoded image (PNG, JPEG, BMP, ...):
byte[] imageBytes = new byte[1024];
// decode into a BufferedImage
BufferedImage bufferedImage = ImageIO.read(new ByteArrayInputStream(imageBytes));
// if you want to do some operations on the image, like resizing,
// you can use the net.coobird.thumbnailator library
BufferedImage image = Thumbnails.of(bufferedImage).forceSize(WIDTH, HEIGHT)
        .outputFormat("bmp").asBufferedImage();
Note that ImageIO.read only understands registered image file formats; for the raw RGB samples in the question it returns null.

Related

Java: dynamically create a png image from BufferedImage

I want to dynamically create an image, and the created image must meet some requirements.
The created image should be a PNG, and it must behave exactly like a PNG loaded from a file.
It's for creating a texture to use in LWJGL.
When I load a PNG image from a file and have a BufferedImage, I can use the following code for my texture
(the Texture constructor is designed for use with loaded images):
public class Texture {
    public Texture(BufferedImage bi) {
        width = bi.getWidth();
        height = bi.getHeight();
        System.out.println(bi.toString());
        int[] pixels_raw = new int[width * height];
        pixels_raw = bi.getRGB(0, 0, width, height, null, 0, width);
        ByteBuffer pixels = BufferUtils.createByteBuffer(width * height * 4);
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                int pixel = pixels_raw[i * width + j]; // This is the error line.
                pixels.put((byte) ((pixel >> 16) & 0xFF)); // red
                pixels.put((byte) ((pixel >> 8) & 0xFF));  // green
                pixels.put((byte) (pixel & 0xFF));         // blue
                pixels.put((byte) ((pixel >> 24) & 0xFF)); // alpha
            }
        }
        pixels.flip();
        id = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, id);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
}
But when I try to create an image dynamically, without loading anything from a file, I get an ArrayIndexOutOfBoundsException on the marked line of the above code (see the comment in the code).
Of course it has something to do with the bits per pixel of the created BufferedImage. I tried changing the image type of my BufferedImage, and changing the array size when initializing the pixels_raw array, but I still get array exceptions. So the above constructor only works when I pass a BufferedImage instance which comes from a loaded PNG. When I pass in a BufferedImage I created dynamically with the code below, it throws the exception mentioned above.
public class TextDrawer {
    public BufferedImage drawText(String text, Font font, Color color) {
        BufferedImage graphicsGetterBi = new BufferedImage(1, 1, BufferedImage.TYPE_INT_ARGB);
        Graphics g = graphicsGetterBi.getGraphics();
        Graphics2D g2 = (Graphics2D) g;
        Rectangle2D bounds = font.getStringBounds(text, 0, text.length(), g2.getFontRenderContext());
        BufferedImage bi = new BufferedImage((int) bounds.getWidth(), (int) bounds.getHeight(), BufferedImage.TYPE_INT_ARGB);
        System.out.println("Created the image. \n");
        g2.setColor(color);
        g2.setFont(font);
        g2.drawString(text, 0, 0);
        return bi;
    }
}
Instead of int pixel = pixels_raw[i * width + j]; it should be int pixel = pixels_raw[j * width + i]; (since j is the row index here; i * height + j would also stay in bounds, but transposes the image). Consider an image of width = 2x and height = x: the array size is 2x^2, while the maximum index the original code requests is (2x - 1) * 2x + (x - 1) = 4x^2 - x - 1, which is more than 2x^2 for x > 1.
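To make the row-major layout concrete, here is a small sketch (the helper name is illustrative) of how getRGB packs pixels, with pixel (x, y) landing at index y * width + x:

```java
import java.awt.image.BufferedImage;

public class RowMajor {
    // getRGB(0, 0, w, h, null, 0, w) returns one packed ARGB int per pixel,
    // row by row: pixel (x, y) sits at index y * w + x.
    public static int pixelAt(int[] pixels, int x, int y, int width) {
        return pixels[y * width + x];
    }
}
```

Iterating with the column index multiplied by width, as in the question, reads past the end of the array whenever the image is wider than it is tall.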

Best way to create Image from palette Color array + indice byte array?

I'm developing a Java component for displaying some videos. For each frame of the video, my decoder gives me a Color[256] palette plus a width*height byte array of pixel indices. Here's how I create my BufferedImage right now:
byte[] iArray = new byte[width * height * 3];
int j = 0;
for (byte i : this.lastFrameData) {
    iArray[j]     = (byte) this.currentPalette[i & 0xFF].getRed();
    iArray[j + 1] = (byte) this.currentPalette[i & 0xFF].getGreen();
    iArray[j + 2] = (byte) this.currentPalette[i & 0xFF].getBlue();
    j += 3;
}
DataBufferByte dbb = new DataBufferByte(iArray, iArray.length);
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] { 8, 8, 8 }, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
return new BufferedImage(cm, Raster.createInterleavedRaster(dbb, width, height, width * 3, 3, new int[] { 0, 1, 2 }, null), false, null);
This works, but it looks ugly and I'm sure there is a better way. So what would be the fastest way to create the BufferedImage?
Edit: I've tried using the setRGB method directly on my BufferedImage, but it resulted in worse performance than the above.
Thanks
I would do this:
int[] imagePixels = new int[width * height];
int j = 0;
for (byte i : this.lastFrameData) {
    // keep the components as ints: storing them in (signed) bytes would
    // sign-extend in the shifts below and corrupt values above 127
    int r = this.currentPalette[i & 0xFF].getRed();
    int g = this.currentPalette[i & 0xFF].getGreen();
    int b = this.currentPalette[i & 0xFF].getBlue();
    imagePixels[j] = 0xFF000000 | (r << 16) | (g << 8) | b;
    j++;
}
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
result.setRGB(0, 0, width, height, imagePixels, 0, width);
return result;
It may be faster; I haven't tested it yet.
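One more option, offered only as an untested sketch (the names here are illustrative): since the frame data is palette-indexed to begin with, an IndexColorModel can wrap the index bytes directly, skipping the per-pixel palette lookup loop entirely:

```java
import java.awt.Color;
import java.awt.image.*;

public class PaletteImage {
    // Wraps palette indices directly; no per-pixel RGB expansion needed.
    public static BufferedImage fromIndices(byte[] indices, Color[] palette,
                                            int width, int height) {
        // Build the red/green/blue lookup tables the IndexColorModel needs
        byte[] r = new byte[palette.length];
        byte[] g = new byte[palette.length];
        byte[] b = new byte[palette.length];
        for (int i = 0; i < palette.length; i++) {
            r[i] = (byte) palette[i].getRed();
            g[i] = (byte) palette[i].getGreen();
            b[i] = (byte) palette[i].getBlue();
        }
        IndexColorModel icm = new IndexColorModel(8, palette.length, r, g, b);
        DataBufferByte db = new DataBufferByte(indices, indices.length);
        // Single-band raster: one index byte per pixel, rows of `width` bytes
        WritableRaster raster = Raster.createInterleavedRaster(
                db, width, height, width, 1, new int[]{0}, null);
        return new BufferedImage(icm, raster, false, null);
    }
}
```

For video, only the lookup tables change between palette swaps, so this also avoids reallocating the pixel array each frame.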

How to set RGB pixel in a BufferedImage to display a 16-bit-depth PNG?

I am trying to read and show a PNG file.
I have no problem dealing with images with 8-bit depth.
I proceed as follow:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Then I read the 3*8=24 bits of each pixel, save them in an array of byte data and put them in the image with:
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        result.setRGB(x, y, ((data[x * 3] & 0xff) << 16)
                + ((data[x * 3 + 1] & 0xff) << 8)
                + (data[x * 3 + 2] & 0xff));
The problem is now with 16-bit-depth images. Of course data is bigger now: it contains 48 bits, divided into 6 bytes, for each RGB triple, and from the debugger data has the values I expect.
How can I set the RGB pixels? Do I have to change the BufferedImage declaration? Maybe with:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_565_RGB);
Many thanks in advance!
P.S.: following the PNG standard, the image has color type 2 (RGB without alpha).
Maybe I'll have to use http://docs.oracle.com/javase/7/docs/api/java/awt/image/ColorModel.html
@haraldK has pointed in the right direction. I'm providing some working code, taken from the PNGReader of the icafe Java image library.
if (bitsPerPixel == 16) {
    if (interlace_method == NON_INTERLACED)
        spixels = generate16BitRGBPixels(compr_data, false);
    else
        spixels = generate16BitRGBInterlacedPixels(compr_data, false);
    int[] off = {0, 1, 2}; // band offsets, we have 3 bands
    int numOfBands = 3;
    boolean hasAlpha = false;
    int trans = Transparency.OPAQUE;
    int[] nBits = {16, 16, 16};
    if (alpha != null) { // deal with single color transparency
        off = new int[] {0, 1, 2, 3}; // band offsets, we have 4 bands
        numOfBands = 4;
        hasAlpha = true;
        trans = Transparency.TRANSLUCENT;
        nBits = new int[] {16, 16, 16, 16};
    }
    db = new DataBufferUShort(spixels, spixels.length);
    raster = Raster.createInterleavedRaster(db, width, height, width * numOfBands, numOfBands, off, null);
    cm = new ComponentColorModel(colorSpace, nBits, hasAlpha, false, trans, DataBuffer.TYPE_USHORT);
}
return new BufferedImage(cm, raster, false, null);
Here is the generate16BitRGBPixels() method:
private short[] generate16BitRGBPixels(byte[] compr_data, boolean fullAlpha) throws Exception {
    int bytesPerPixel = fullAlpha ? 8 : 6;
    byte[] pixBytes;
    bytesPerScanLine = width * bytesPerPixel;
    // Now inflate the data.
    pixBytes = new byte[height * bytesPerScanLine];
    // Wrap the InflaterInputStream with a BufferedInputStream to speed up reading
    BufferedInputStream bis = new BufferedInputStream(new InflaterInputStream(new ByteArrayInputStream(compr_data)));
    apply_defilter(bis, pixBytes, height, bytesPerPixel, bytesPerScanLine);
    short[] spixels = null;
    if (alpha != null) { // deal with single color transparency
        spixels = new short[width * height * 4];
        short redMask   = (short) ((alpha[1] & 0xff) | (alpha[0] & 0xff) << 8);
        short greenMask = (short) ((alpha[3] & 0xff) | (alpha[2] & 0xff) << 8);
        short blueMask  = (short) ((alpha[5] & 0xff) | (alpha[4] & 0xff) << 8);
        for (int i = 0, index = 0; i < pixBytes.length; index += 4) {
            short red   = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            short green = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            short blue  = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            spixels[index]     = red;
            spixels[index + 1] = green;
            spixels[index + 2] = blue;
            if (spixels[index] == redMask && spixels[index + 1] == greenMask && spixels[index + 2] == blueMask) {
                spixels[index + 3] = (short) 0x0000;
            } else {
                spixels[index + 3] = (short) 0xffff;
            }
        }
    } else {
        spixels = ArrayUtils.toShortArray(pixBytes, true);
    }
    return spixels;
}
and the ArrayUtils.toShortArray() method:
public static short[] toShortArray(byte[] data, int offset, int len, boolean bigEndian) {
    ByteBuffer byteBuffer = ByteBuffer.wrap(data, offset, len);
    if (bigEndian) {
        byteBuffer.order(ByteOrder.BIG_ENDIAN);
    } else {
        byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    }
    ShortBuffer shortBuf = byteBuffer.asShortBuffer();
    short[] array = new short[shortBuf.remaining()];
    shortBuf.get(array);
    return array;
}
If you want to create an image with 16 bits per sample (or 48 bits per pixel), there is no BufferedImage.TYPE_... constant for that. TYPE_USHORT_565_RGB creates an image with 16 bits per pixel, with samples of 5 (red), 6 (green) and 5 (blue) bits respectively. I think these USHORT RGB values are leftovers from a time when some computers actually had the option of a 16-bit display (aka "Thousands of colors").
What you need to do, to actually create an image with 16 bits per sample, is:
ColorModel cm;
WritableRaster raster;
BufferedImage result = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
The raster is created from a data buffer of type DataBufferUShort with either 3 banks and a BandedSampleModel with 3 bands, or use a single bank and a PixelInterleavedSampleModel with a pixelStride of 3, scanLineStride of 3 * width and bandOffsets {0, 1, 2}.
Here's a full sample, using interleaved sample model:
ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorModel cm = new ComponentColorModel(sRGB, false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_USHORT, w, h, 3, null);
BufferedImage rgb = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
PS: With the data buffer exposed, you can access the short samples directly to manipulate the pixels. This is much faster than using BufferedImage.getRGB(...)/setRGB(...), and it keeps the original 16-bit-per-sample precision. BufferedImage.getRGB(...) converts the pixel values to a 32-bit pixel with 8 bits per sample, and thus loses the extra precision.
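For example, the direct access mentioned above might look like this (a sketch; the raster is assumed to be a 3-band interleaved TYPE_USHORT raster like the one created earlier):

```java
import java.awt.image.*;

public class UShortAccess {
    // Expose the backing short[] of a TYPE_USHORT interleaved raster and
    // write one pixel's three 16-bit samples without going through setRGB.
    public static void setPixel(WritableRaster raster, int x, int y,
                                int r16, int g16, int b16) {
        short[] data = ((DataBufferUShort) raster.getDataBuffer()).getData();
        int base = (y * raster.getWidth() + x) * 3; // 3 interleaved samples per pixel
        data[base]     = (short) r16;
        data[base + 1] = (short) g16;
        data[base + 2] = (short) b16;
    }
}
```

Note the casts: Java's short is signed, but DataBufferUShort interprets the stored bits as unsigned 0..65535 values.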

Java byte Image Manipulation

I need to create a simple demo for image manipulation in Java. My code is Swing based. I don't have to do anything complex, just show that the image has changed in some way. I have the image read as a byte[]. Is there any way I can manipulate this byte array, without corrupting the bytes, to show some very simple manipulation? I don't wish to use paint() etc. Is there anything I can do directly to the byte[] array to show some change?
Edit:
I am reading the jpg image as a ByteArrayInputStream using the Apache IO library. The bytes are read OK, and I can confirm it by writing them back as a jpeg.
You can try converting your RGB image to grayscale. If the image has 3 bytes per pixel represented as RedGreenBlue, you can use the following formula: y = 0.299*r + 0.587*g + 0.114*b.
To be clear, iterate over the byte array and replace the colors. Here is an example:
byte[] newImage = new byte[rgbImage.length];
for (int i = 0; i < rgbImage.length; i += 3) {
    // mask with 0xFF so the signed bytes are treated as unsigned 0..255 values
    newImage[i] = (byte) ((rgbImage[i] & 0xFF) * 0.299
            + (rgbImage[i + 1] & 0xFF) * 0.587
            + (rgbImage[i + 2] & 0xFF) * 0.114);
    newImage[i + 1] = newImage[i];
    newImage[i + 2] = newImage[i];
}
UPDATE:
The code above assumes a raw RGB image; if you need to process a JPEG file you can do this:
try {
    BufferedImage inputImage = ImageIO.read(new File("input.jpg"));
    BufferedImage outputImage = new BufferedImage(
            inputImage.getWidth(), inputImage.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < inputImage.getWidth(); x++) {
        for (int y = 0; y < inputImage.getHeight(); y++) {
            int rgb = inputImage.getRGB(x, y);
            int blue = 0x0000ff & rgb;
            int green = 0x0000ff & (rgb >> 8);
            int red = 0x0000ff & (rgb >> 16);
            int lum = (int) (red * 0.299 + green * 0.587 + blue * 0.114);
            outputImage.setRGB(x, y, lum | (lum << 8) | (lum << 16));
        }
    }
    ImageIO.write(outputImage, "jpg", new File("output.jpg"));
} catch (IOException e) {
    e.printStackTrace();
}

Does .Net PixelFormat have Java equivalent?

What in Java matches C#'s PixelFormat members?
I.e., does Format24bppRgb match BufferedImage.TYPE_INT_RGB?
Here is my code. I have an image whose .NET PixelFormat is Format32bppArgb. I am creating a BufferedImage like this:
int sizeBytes = width * height * 4; // 4 bytes per pixel
DataBufferByte dataBuffer = new DataBufferByte(myImageBytes, sizeBytes);
WritableRaster raster = Raster.createInterleavedRaster(dataBuffer,
        width,
        height,
        width * 4,             // scanlineStride
        4,                     // pixelStride
        new int[]{0, 1, 2, 3}, // bandOffsets
        null);                 // location
java.awt.image.ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
        new int[]{8, 8, 8, 8}, // bits
        true,                  // hasAlpha
        false,                 // isPreMultiplied
        ComponentColorModel.TRANSLUCENT, DataBuffer.TYPE_BYTE);
BufferedImage result = new BufferedImage(colorModel, raster, false, null);
After I create the BufferedImage, the red and blue colors are swapped in it.
Next, I tried to create an image as follows:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_4BYTE_ABGR);
WritableRaster r = result.getRaster();
int[] pixels = byteToInt(bytes);
r.setPixels(0, 0, width, height, pixels); // ! Here an exception occurs, because after converting the byte array to ints the width becomes too long.
The byte array was converted by this method:
private int[] byteToInt(byte[] pixels) {
    int[] ints = new int[pixels.length / 3];
    int byteIdx = 0;
    for (int pixel = 0; pixel < ints.length; pixel++) {
        int red = pixels[byteIdx++] & 0xFF;
        int green = pixels[byteIdx++] & 0xFF;
        int blue = pixels[byteIdx++] & 0xFF;
        ints[pixel] = (red << 16) | (green << 8) | blue;
    }
    return ints;
}
The colors look fine now, but I get this exception:
java.lang.ArrayIndexOutOfBoundsException: 27600
    at sun.awt.image.ByteInterleavedRaster.setPixels(ByteInterleavedRaster.java:1106)
If I use a smaller width (e.g. width / 3) the colors look fine, but the picture itself shrinks.
BufferedImage is definitely a good place to start. Many of the values in PixelFormat match up to values in BufferedImage: they each have 24-bit and 32-bit RGB/ARGB values, both have 5-5-5 and 5-6-5 combinations, etc.
If you're having trouble, post some code and we'll have a look at it and try to help. The thing I would recommend is to play around with the byte ordering (in the pixel ints) until you get the result you expect; try drawing the BufferedImage onto a GUI object like a JPanel so you can see what it looks like.
If you've got an int[] array for your pixel values, this is the code I usually use for displaying the array as an image:
int[] pixels;
ColorModel model = new DirectColorModel(32, 0x00ff0000, 0x0000ff00, 0x000000ff, 0xff000000);
Image image = new JLabel().createImage(new MemoryImageSource(width, height, model, pixels, 0, width));
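Concretely, on the byte-ordering point: .NET's Format32bppArgb stores each pixel as the bytes B,G,R,A in memory (a packed ARGB int written little-endian), so the questioner's first attempt should work once the bandOffsets map red to offset 2 and blue to offset 0. A sketch (the class and method names are illustrative, and this is untested against real .NET output):

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.*;

public class DotNetArgb {
    // .NET Format32bppArgb pixels are laid out B,G,R,A in memory,
    // so map red -> offset 2, green -> 1, blue -> 0, alpha -> 3.
    public static BufferedImage fromBgraBytes(byte[] bgra, int width, int height) {
        DataBufferByte db = new DataBufferByte(bgra, bgra.length);
        WritableRaster raster = Raster.createInterleavedRaster(
                db, width, height, width * 4, 4, new int[]{2, 1, 0, 3}, null);
        ColorModel cm = new ComponentColorModel(
                ColorSpace.getInstance(ColorSpace.CS_sRGB),
                new int[]{8, 8, 8, 8}, true, false,
                Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
        return new BufferedImage(cm, raster, false, null);
    }
}
```

This also avoids the byteToInt copy entirely, since the raster reads the .NET bytes in place.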

Categories

Resources