In the WritableRaster class, there is a method:
public void setPixels(int x,
int y,
int w,
int h,
int[] iArray)
where iArray is supposed to hold the pixel information.
For a pixel with values 233 (r), 244 (g), 10 (b), how should it be stored in iArray?
Will it be iArray[0] = 233, iArray[1] = 244, iArray[2] = 10,
or a single packed value such as iArray[0] = 23324410?
Short answer: it should be iArray[0] = 233, iArray[1] = 244, iArray[2] = 10, one sample per array element, not a single packed value.
However, the number of elements per pixel depends on the data backing the WritableRaster. Consider the following example, where we retrieve the WritableRaster of two BufferedImages: one containing RGB data and the other ARGB data. When we retrieve pixel values, we can see that the array has length 3 for the RGB data and length 4 for the ARGB data.
Code:
public static void main(String[] args) {
    BufferedImage rgbImage = new BufferedImage(8, 8,
            BufferedImage.TYPE_INT_RGB);
    WritableRaster rgbRaster = rgbImage.getRaster();

    BufferedImage argbImage = new BufferedImage(8, 8,
            BufferedImage.TYPE_INT_ARGB);
    WritableRaster argbRaster = argbImage.getRaster();

    rgbImage.setRGB(0, 0, new Color(255, 125, 1, 16).getRGB());
    argbImage.setRGB(0, 0, new Color(255, 125, 1, 16).getRGB());

    int[] rgb = rgbRaster.getPixel(0, 0, (int[]) null);
    int[] argb = argbRaster.getPixel(0, 0, (int[]) null);

    System.out.print("rgb:");
    for (int i = 0; i < rgb.length; ++i)
        System.out.print(" " + rgb[i]);
    System.out.print("\nargb:");
    for (int i = 0; i < argb.length; ++i)
        System.out.print(" " + argb[i]);
}
Output:
rgb: 255 125 1
argb: 255 125 1 16
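The per-pixel element count can also be queried directly, without fetching a pixel. A minimal sketch (the class name is arbitrary) using the raster's band count:

```java
import java.awt.image.BufferedImage;

public class BandCount {
    public static void main(String[] args) {
        BufferedImage rgb = new BufferedImage(8, 8, BufferedImage.TYPE_INT_RGB);
        BufferedImage argb = new BufferedImage(8, 8, BufferedImage.TYPE_INT_ARGB);
        // getNumBands() is the number of samples per pixel, i.e. the number of
        // array elements that setPixels/getPixel use for each pixel.
        System.out.println("rgb bands:  " + rgb.getRaster().getNumBands());  // 3
        System.out.println("argb bands: " + argb.getRaster().getNumBands()); // 4
    }
}
```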
EDIT 3:
int rRows = result.length;
int rColums = result[0].length;
BufferedImage img = new BufferedImage(rColums, rRows, BufferedImage.TYPE_BYTE_GRAY);
for (int r = 0; r < rRows; r++) {
    for (int t = 0; t < result[r].length; t++) {
        img.setRGB(t, r, result[r][t]);
    }
}
EDIT2:
Created the image like so:
BufferedImage img = new BufferedImage(rColums, rRows, BufferedImage.TYPE_BYTE_GRAY);
private static int[][] convertToArray(BufferedImage inputImage) {
    final byte[] pixels = ((DataBufferByte) inputImage.getRaster().getDataBuffer()).getData();
    final int width = inputImage.getWidth();
    final int height = inputImage.getHeight();
    System.out.println("height: " + height + ", width: " + width);
    int[][] result = new int[height][width];
    for (int pixel = 0, row = 0, col = 0; pixel < pixels.length; pixel++) {
        int argb = pixels[pixel];
        result[row][col] = argb;
        col++;
        if (col == width) {
            col = 0;
            row++;
        }
    }
    return result;
}
Edit:
I've realized what I'm trying to ask is how to go from signed grayscale to unsigned grayscale. As I said, adding 256 didn't work for me, and it would still seem to leave the image too dark, since it won't raise the +127 signed values to 255 unsigned (hopefully I've expressed that correctly).
As per the title, I have an int[][] extracted from a BufferedImage via
((DataBufferByte) inputImage.getRaster().getDataBuffer()).getData()
The array values range from -128 to 127. The problem is that when I attempt to reconstruct the image from the int[][] by passing it to a BufferedImage, it comes out too dark, more like a black-and-white (mostly black) image.
I saw a suggestion to add 256 to each sub-zero value of the byte[] produced by the DataBufferByte when converting the byte[] to an int[][], but this actually produces a totally black image, and I don't really get the logic of it. Wouldn't you want to shift the entire scale over by 128, rather than just the negative numbers?
When you're writing your conversion, you have a signed byte, and you need to convert it to an ARGB int:
int unsignedByte = pixels[pixel] & 0xff;
That isn't quite finished, because we still need an ARGB/grayscale value:
argb = (unsignedByte << 16) + (unsignedByte << 8) + unsignedByte;
I've ignored the A part and just added the RGB components. There is quite a bit of documentation on this, though.
Here is a complete example you can play with.
import java.awt.image.*;
import java.awt.*;
import javax.swing.*;
import java.util.HashSet;
import java.util.Set;

public class GrayScales {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY);
        Graphics g = img.createGraphics();
        g.setColor(new Color(128, 128, 128));
        g.fillOval(32, 32, 192, 192);
        g.setColor(new Color(255, 255, 255));
        g.fillOval(96, 96, 64, 64);
        g.dispose();

        BufferedImage dup = new BufferedImage(256, 256, BufferedImage.TYPE_BYTE_GRAY);
        byte[] pixels = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
        Set<Integer> ints = new HashSet<>();
        for (int i = 0; i < 256; i++) {
            for (int j = 0; j < 256; j++) {
                int px = pixels[j * 256 + i] & 0xff;
                ints.add(px);
                int rgb = (px << 16) + (px << 8) + px;
                dup.setRGB(i, j, rgb);
            }
        }
        System.out.println(ints);

        JFrame frame = new JFrame("compare");
        JLabel a = new JLabel(new ImageIcon(img));
        JLabel b = new JLabel(new ImageIcon(dup));
        frame.add(a, BorderLayout.EAST);
        frame.add(b, BorderLayout.WEST);
        frame.pack();
        frame.setVisible(true);
    }
}
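Regarding the original confusion about "adding 256" versus masking: for negative values they are the same operation. Masking with & 0xff reinterprets the byte's bit pattern as an unsigned 0..255 value; it does not shift the whole scale. A tiny sketch (class name is arbitrary):

```java
public class UnsignedBytes {
    public static void main(String[] args) {
        byte bright = (byte) 200; // stored as -56 in a signed byte
        byte dark = 100;          // values up to 127 are unchanged

        // Masking keeps the low 8 bits: -56 & 0xff == 200, same as -56 + 256.
        System.out.println(bright & 0xff); // 200
        System.out.println(dark & 0xff);   // 100

        // Shifting everything by 128 instead would turn mid-gray into white
        // and black into mid-gray, which is not the unsigned reinterpretation.
    }
}
```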
The code below generates a GIF image that’s half red and half blue.
How can I get one that’s half red and half transparent?
I’ve tried using the IndexColorModel constructor that takes a transparent pixel index as a parameter, and also changing the image type to BufferedImage.TYPE_INT_ARGB in the call to the BufferedImage constructor, but neither has worked for me.
int[] pixels = new int[90000];
for (int x = 0; x < 300; x++) {
    for (int y = 0; y < 300; y++) {
        pixels[(300 * y) + x] = (x < y) ? 1 : 0;
    }
}

Color oneColor = Color.red;
Color anotherColor = Color.blue;
byte[] redMap = {(byte) (oneColor.getRed()), (byte) (anotherColor.getRed())};
byte[] greenMap = {(byte) (oneColor.getGreen()), (byte) (anotherColor.getGreen())};
byte[] blueMap = {(byte) (oneColor.getBlue()), (byte) (anotherColor.getBlue())};
IndexColorModel colorModel = new IndexColorModel(1, 2, redMap, greenMap, blueMap);

MemoryImageSource mis = new MemoryImageSource(300, 300, colorModel, pixels, 0, 300);
Image image = Toolkit.getDefaultToolkit().createImage(mis);
BufferedImage bufferedImage = new BufferedImage(300, 300, BufferedImage.TYPE_INT_RGB);
bufferedImage.getGraphics().drawImage(image, 0, 0, null);

try {
    ImageIO.write(bufferedImage, "gif", new File("example.gif"));
} catch (IOException e) {
    e.printStackTrace();
}
Turns out that BufferedImage.TYPE_BYTE_INDEXED is more appropriate for this case. This code does the trick:
Color oneColor = Color.blue;
Color anotherColor = Color.red;
byte[] redMap = {(byte) (oneColor.getRed()), (byte) (anotherColor.getRed())};
byte[] greenMap = {(byte) (oneColor.getGreen()), (byte) (anotherColor.getGreen())};
byte[] blueMap = {(byte) (oneColor.getBlue()), (byte) (anotherColor.getBlue())};
IndexColorModel colorModel = new IndexColorModel(1, 2, redMap, greenMap, blueMap, 0);
int transparency = colorModel.getTransparency();
int transparentPixel = colorModel.getTransparentPixel();
System.out.println("colorModel.getTransparency(): " + transparency);
System.out.println("colorModel.getTransparentPixel(): " + transparentPixel);
BufferedImage bufferedImage = new BufferedImage(300, 300, BufferedImage.TYPE_BYTE_INDEXED, colorModel);
WritableRaster writableRaster = bufferedImage.getRaster();
for (int x = 0; x < 300; x++) {
    for (int y = 0; y < 300; y++) {
        int[] fill = new int[1];
        Arrays.fill(fill, (x < y) ? 0 : 1); // LUT index 0 (the transparent entry) or 1
        writableRaster.setSamples(x, y, 1, 1, 0, fill);
    }
}
In my experience, I changed the transparency of colors with an alpha component. For instance, transparent red looks like this:
Color transparentRed = new Color(255, 0, 0, alpha);
Maybe try setting an alpha for your redMap, blueMap, and greenMap.
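If the target format supports full alpha (PNG does; GIF does not), an alternative to the indexed approach is a TYPE_INT_ARGB image, whose pixels default to fully transparent. A sketch under that assumption (the output file name is arbitrary):

```java
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

public class HalfTransparent {
    public static void main(String[] args) throws IOException {
        // TYPE_INT_ARGB pixels start as 0x00000000, i.e. fully transparent.
        BufferedImage img = new BufferedImage(300, 300, BufferedImage.TYPE_INT_ARGB);
        int red = Color.RED.getRGB(); // 0xFFFF0000, opaque red
        for (int x = 0; x < 300; x++) {
            for (int y = 0; y < 300; y++) {
                if (x >= y) {
                    img.setRGB(x, y, red); // leave the other half transparent
                }
            }
        }
        // PNG preserves the alpha channel; GIF would need the indexed approach above.
        ImageIO.write(img, "png", new File("half-red.png"));
    }
}
```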
Suppose I want a 25% or 31% gray color in Java. The following code illustrates the problem:
BufferedImage image = new BufferedImage(2, 2, BufferedImage.TYPE_BYTE_GRAY);
image.setRGB(0, 0, new Color(0, 0, 0).getRGB());
image.setRGB(1, 0, new Color(50, 50, 50).getRGB());
image.setRGB(0, 1, new Color(100, 100, 100).getRGB());
image.setRGB(1, 1, new Color(255, 255, 255).getRGB());
Raster raster = image.getData();
double[] data = raster.getPixels(0, 0, raster.getWidth(), raster.getHeight(), (double[]) null);
System.out.println(Arrays.toString(data));
The output shows the obvious fact that RGB relates to gray density non-linearly:
[0.0, 8.0, 32.0, 255.0]
So, how to create color of a given density?
UPDATE
I have tried the methods proposed by @icza and @hlg, and also one more that I found myself:
double[] data;
Raster raster;
BufferedImage image = new BufferedImage(1, 1, BufferedImage.TYPE_BYTE_GRAY);
float[] grays = {0, 0.25f, 0.5f, 0.75f, 1};
ColorSpace linearRGB = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
ColorSpace GRAY = ColorSpace.getInstance(ColorSpace.CS_GRAY);
Color color;
int[] rgb;

for (int i = 0; i < grays.length; ++i) {
    System.out.println("\n\nShould be " + (grays[i] * 100) + "% gray");

    color = new Color(linearRGB, new float[] {grays[i], grays[i], grays[i]}, 1f);
    image.setRGB(0, 0, color.getRGB());
    raster = image.getData();
    data = raster.getPixels(0, 0, 1, 1, (double[]) null);
    System.out.println("data by CS_LINEAR_RGB (hlg method) = " + Arrays.toString(data));

    color = new Color(GRAY, new float[] {grays[i]}, 1f);
    image.setRGB(0, 0, color.getRGB());
    raster = image.getData();
    data = raster.getPixels(0, 0, 1, 1, (double[]) null);
    System.out.println("data by CS_GRAY = " + Arrays.toString(data));

    rgb = getRGB(Math.round(grays[i] * 255));
    color = new Color(rgb[0], rgb[1], rgb[2]);
    image.setRGB(0, 0, color.getRGB());
    raster = image.getData();
    data = raster.getPixels(0, 0, 1, 1, (double[]) null);
    System.out.println("data by icza method = " + Arrays.toString(data));
}
and all gave different results!
Should be 0.0% gray
data by CS_LINEAR_RGB (hlg method) = [0.0]
data by CS_GRAY = [0.0]
data by icza method = [0.0]
Should be 25.0% gray
data by CS_LINEAR_RGB (hlg method) = [63.0]
data by CS_GRAY = [64.0]
data by icza method = [36.0]
Should be 50.0% gray
data by CS_LINEAR_RGB (hlg method) = [127.0]
data by CS_GRAY = [128.0]
data by icza method = [72.0]
Should be 75.0% gray
data by CS_LINEAR_RGB (hlg method) = [190.0]
data by CS_GRAY = [192.0]
data by icza method = [154.0]
Should be 100.0% gray
data by CS_LINEAR_RGB (hlg method) = [254.0]
data by CS_GRAY = [254.0]
data by icza method = [255.0]
Now I wonder which one is correct?
UPDATE 2
Sorry, gray/white percentage should be, of course, reversed.
When converting an RGB color to grayscale, the following weights are used:
0.2989, 0.5870, 0.1140
Source: Converting RGB to grayscale/intensity
And on Wikipedia: http://en.wikipedia.org/wiki/Grayscale
So formally:
gray = 0.2989*R + 0.5870*G + 0.1140*B
Basically what you need is the inverse of this function. You need to find R, G and B values which give the result gray value you are looking for. Since there are 3 parameters in the equation, in most of the cases there are lots of RGB values which will result in the gray value you are looking for.
Just think of it: an RGB color with high R component and none of G and B gives a gray, there may be another RGB color with some G component and none of R and B which gives the same gray color, so there are multiple possible RGB solutions to the desired gray color.
The Algorithm
Here is one possible solution. It tries to make the first of the RGB components as big as possible, so that multiplying it by its weight gives back the gray value. If that component would "overflow" beyond 255, it is clamped to 255, the gray is decreased by the amount the component's maximum can "represent", and the same is tried for the next component with the remaining gray amount.
Here I use a gray input range of 0..255. If you want to specify it in percent, just convert it like gray = 255*percent/100.
private static double[] WEIGHTS = { 0.2989, 0.5870, 0.1140 };

public static int[] getRGB(int gray) {
    int[] rgb = new int[3];
    for (int i = 0; i < 3; i++) {
        rgb[i] = (int) (gray / WEIGHTS[i]);
        if (rgb[i] < 256)
            return rgb; // Successfully "distributed" all of gray, return it
        // Not quite there, clamp it...
        rgb[i] = 255;
        // And distribute the remainder over the rest of the RGB components:
        gray -= (int) (255 * WEIGHTS[i]);
    }
    return rgb;
}
To verify it, use the following method:
public static int toGray(int[] rgb) {
    double gray = 0;
    for (int i = 0; i < 3; i++)
        gray += rgb[i] * WEIGHTS[i];
    return (int) gray;
}
Test:
for (int gray = 0; gray <= 255; gray += 50) {
    int[] rgb = getRGB(gray);
    System.out.printf("Input: %3d, Output: %3d, RGB: %3d, %3d, %3d\n",
            gray, toGray(rgb), rgb[0], rgb[1], rgb[2]);
}
Test Output:
Input: 0, Output: 0, RGB: 0, 0, 0
Input: 50, Output: 49, RGB: 167, 0, 0
Input: 100, Output: 99, RGB: 255, 40, 0
Input: 150, Output: 150, RGB: 255, 126, 0
Input: 200, Output: 200, RGB: 255, 211, 0
Input: 250, Output: 250, RGB: 255, 255, 219
The results show what we expected based on the algorithm: the R component is "filled" first; once it reaches 255, the G component gets "filled", and last the B component gets used.
The huge differences are due to the gamma encoding in sRGB (Wikipedia). sRGB is the default color space used in the Color constructor. If you set your colors using a linear RGB color space instead, the grey values are not distorted:
ColorSpace linearRGB = ColorSpace.getInstance(ColorSpace.CS_LINEAR_RGB);
Color grey50 = new Color(linearRGB, new float[]{50f/255,50f/255,50f/255}, 1f);
Color grey100 = new Color(linearRGB, new float[]{100f/255,100f/255,100f/255}, 1f);
Color grey255 = new Color(linearRGB, new float[]{1f,1f,1f}, 1f);
However, when setting the pixel by using Color.getRGB and BufferedImage.setRGB, the linear grayscale values are converted to sRGB and back. Thus they are gamma encoded and decoded, yielding rounding errors depending on the chosen color space.
These errors can be avoided by setting the raw pixel data behind the gray scale color model directly:
WritableRaster writable = image.getRaster();
writable.setPixel(0,0, new int[]{64});
Note that you have to round the percentage values; e.g. for 25% you cannot store 63.75. If you need more precision, use TYPE_USHORT_GRAY instead of TYPE_BYTE_GRAY.
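To illustrate the precision point, a small sketch (the 1x1 image and class name are arbitrary): TYPE_USHORT_GRAY stores samples in 0..65535, so 25% still rounds, but to a step of 1/65535 instead of 1/255.

```java
import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

public class UShortGray {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_USHORT_GRAY);
        WritableRaster raster = img.getRaster();

        // 25% of the 16-bit range: 0.25 * 65535 = 16383.75, rounds to 16384.
        int quarter = Math.round(0.25f * 65535);
        raster.setPixel(0, 0, new int[] { quarter });

        // The raw sample survives unchanged; no sRGB round trip is involved.
        System.out.println(raster.getPixel(0, 0, (int[]) null)[0]); // 16384
    }
}
```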
A color has a specific luminance that you want to preserve when the color is grayed.
The luminance might be computed with one of the common weightings:
Y = 0.2989*R + 0.5870*G + 0.1140*B
Y = 0.2126*R + 0.7152*G + 0.0722*B
So new Color(Y, Y, Y) corresponds to the gray value with the same luminance.
Graying to a specific percentage is an interpolation.
Color grayed(Color color, int perc) {
    double percGrayed = perc / 100.0;
    double percColored = 1.0 - percGrayed;
    double[] weights = { 0.2989, 0.5870, 0.1140 };
    double[] rgb = { color.getRed(), color.getGreen(), color.getBlue() };
    // Determine luminance:
    double y = 0.0;
    for (int i = 0; i < 3; ++i) {
        y += weights[i] * rgb[i];
    }
    // Interpolate between (R, G, B) and (Y, Y, Y):
    for (int i = 0; i < 3; ++i) {
        rgb[i] *= percColored;
        rgb[i] += y * percGrayed;
    }
    return new Color((int) rgb[0], (int) rgb[1], (int) rgb[2]);
}
Color grayedColor = grayed(color, 30); // 30% grayed.
I'm developing a Java component for displaying videos. For each frame of the video, my decoder gives me a Color[256] palette plus a width*height array of byte palette indices. Here's how I create my BufferedImage right now:
byte[] iArray = new byte[width * height * 3];
int j = 0;
for (byte i : this.lastFrameData) {
    iArray[j] = (byte) this.currentPalette[i & 0xFF].getRed();
    iArray[j + 1] = (byte) this.currentPalette[i & 0xFF].getGreen();
    iArray[j + 2] = (byte) this.currentPalette[i & 0xFF].getBlue();
    j += 3;
}
DataBufferByte dbb = new DataBufferByte(iArray, iArray.length);
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB),
        new int[] { 8, 8, 8 }, false, false, Transparency.OPAQUE, DataBuffer.TYPE_BYTE);
return new BufferedImage(cm,
        Raster.createInterleavedRaster(dbb, width, height, width * 3, 3, new int[] { 0, 1, 2 }, null),
        false, null);
This works, but it looks ugly and I'm sure there is a better way. So what would be the fastest way to create the BufferedImage?
Edit: I've tried using the setRGB method directly on my BufferedImage, but it resulted in worse performance than the above.
Thanks
I would do this:
int[] imagePixels = new int[width * height];
int j = 0;
for (byte i : this.lastFrameData) {
    // Keep the components as ints: casting them to byte before shifting
    // would sign-extend values above 127 and corrupt the packed pixel.
    int r = this.currentPalette[i & 0xFF].getRed();
    int g = this.currentPalette[i & 0xFF].getGreen();
    int b = this.currentPalette[i & 0xFF].getBlue();
    imagePixels[j] = 0xFF000000 | (r << 16) | (g << 8) | b;
    j++;
}
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
result.setRGB(0, 0, width, height, imagePixels, 0, width);
return result;
It may be faster; I haven't tested it yet.
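If setRGB is still the bottleneck, one common trick (a sketch, not benchmarked here) is to write straight into the int[] that backs a TYPE_INT_ARGB image, skipping the per-call color-model conversion. Note that grabbing the data buffer may disable some internal acceleration for that image:

```java
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferInt;

public class DirectPixels {
    public static void main(String[] args) {
        int width = 4, height = 4;
        BufferedImage img = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);

        // TYPE_INT_ARGB is backed by one int per pixel in row-major order, so a
        // frame can be written with plain array stores instead of setRGB calls.
        int[] data = ((DataBufferInt) img.getRaster().getDataBuffer()).getData();
        for (int i = 0; i < data.length; i++) {
            data[i] = 0xFF000000 | (200 << 16) | (100 << 8) | 50; // opaque (200, 100, 50)
        }

        System.out.printf("%08X%n", img.getRGB(0, 0)); // FFC86432
    }
}
```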
I am trying to read and show a PNG file.
I have no problem dealing with images with 8-bit depth.
I proceed as follow:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
Then I read the 3*8 = 24 bits of each pixel, save them in a byte array data, and put them into the image with:
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        result.setRGB(x, y, ((data[x * 3 + 0] & 0xff) << 16)
                + ((data[x * 3 + 1] & 0xff) << 8)
                + ((data[x * 3 + 2] & 0xff)));
The problem is now with 16-bit depth images. Of course, data is bigger now: it contains 48 bits, divided into 6 bytes, for each RGB triple. From the debugger, data has the values I expect.
How can I set the RGB pixel? Do I have to change the BufferedImage declaration? Maybe with:
BufferedImage result = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_565_RGB);
Many thanks in advance!
P.S.: following the PNG standard, the image has color type 2 (RGB without alpha).
Maybe I'll have to use http://docs.oracle.com/javase/7/docs/api/java/awt/image/ColorModel.html
@haraldK has pointed in the right direction. I'm providing some working code, taken from the "PNGReader" of the "icafe" Java image library.
if (bitsPerPixel == 16) {
    if (interlace_method == NON_INTERLACED)
        spixels = generate16BitRGBPixels(compr_data, false);
    else
        spixels = generate16BitRGBInterlacedPixels(compr_data, false);

    int[] off = {0, 1, 2}; // band offsets, we have 3 bands
    int numOfBands = 3;
    boolean hasAlpha = false;
    int trans = Transparency.OPAQUE;
    int[] nBits = {16, 16, 16};
    if (alpha != null) { // Deal with single-color transparency
        off = new int[] {0, 1, 2, 3}; // band offsets, we have 4 bands
        numOfBands = 4;
        hasAlpha = true;
        trans = Transparency.TRANSLUCENT;
        nBits = new int[] {16, 16, 16, 16};
    }
    db = new DataBufferUShort(spixels, spixels.length);
    raster = Raster.createInterleavedRaster(db, width, height, width * numOfBands, numOfBands, off, null);
    cm = new ComponentColorModel(colorSpace, nBits, hasAlpha, false, trans, DataBuffer.TYPE_USHORT);
}
return new BufferedImage(cm, raster, false, null);
Here is the generate16BitRGBPixels() method:
private short[] generate16BitRGBPixels(byte[] compr_data, boolean fullAlpha) throws Exception {
    int bytesPerPixel = 0;
    byte[] pixBytes;

    if (fullAlpha)
        bytesPerPixel = 8;
    else
        bytesPerPixel = 6;

    bytesPerScanLine = width * bytesPerPixel;

    // Now inflate the data.
    pixBytes = new byte[height * bytesPerScanLine];

    // Wrap the InflaterInputStream in a BufferedInputStream to speed up reading
    BufferedInputStream bis = new BufferedInputStream(new InflaterInputStream(new ByteArrayInputStream(compr_data)));

    apply_defilter(bis, pixBytes, height, bytesPerPixel, bytesPerScanLine);

    short[] spixels = null;

    if (alpha != null) { // Deal with single-color transparency
        spixels = new short[width * height * 4];
        short redMask = (short) ((alpha[1] & 0xff) | (alpha[0] & 0xff) << 8);
        short greenMask = (short) ((alpha[3] & 0xff) | (alpha[2] & 0xff) << 8);
        short blueMask = (short) ((alpha[5] & 0xff) | (alpha[4] & 0xff) << 8);

        for (int i = 0, index = 0; i < pixBytes.length; index += 4) {
            short red = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            short green = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            short blue = (short) ((pixBytes[i++] & 0xff) << 8 | (pixBytes[i++] & 0xff));
            spixels[index] = red;
            spixels[index + 1] = green;
            spixels[index + 2] = blue;
            if (spixels[index] == redMask && spixels[index + 1] == greenMask && spixels[index + 2] == blueMask) {
                spixels[index + 3] = (short) 0x0000; // transparent
            } else {
                spixels[index + 3] = (short) 0xffff; // opaque
            }
        }
    } else {
        spixels = ArrayUtils.toShortArray(pixBytes, true);
    }

    return spixels;
}
and the ArrayUtils.toShortArray() method:
public static short[] toShortArray(byte[] data, int offset, int len, boolean bigEndian) {
    ByteBuffer byteBuffer = ByteBuffer.wrap(data, offset, len);
    if (bigEndian) {
        byteBuffer.order(ByteOrder.BIG_ENDIAN);
    } else {
        byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    }
    ShortBuffer shortBuf = byteBuffer.asShortBuffer();
    short[] array = new short[shortBuf.remaining()];
    shortBuf.get(array);
    return array;
}
If you want to create an image with 16 bits per sample (or 48 bits per pixel), there is no BufferedImage.TYPE_... constant for that. TYPE_USHORT_565_RGB creates an image with 16 bits per pixel, with samples of 5 (red), 6 (green) and 5 (blue) bits respectively. I think these USHORT RGB values are leftovers from a time when some computers actually had the option of a 16-bit display (aka "Thousands of colors").
What you need to do, to actually create an image with 16 bits per sample, is:
ColorModel cm;         // a ComponentColorModel with 16 bits per sample
WritableRaster raster; // a DataBuffer.TYPE_USHORT raster, interleaved or banded
BufferedImage result = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
The raster is created from a data buffer of type DataBufferUShort with either 3 banks and a BandedSampleModel with 3 bands, or use a single bank and a PixelInterleavedSampleModel with a pixelStride of 3, scanLineStride of 3 * width and bandOffsets {0, 1, 2}.
Here's a full sample, using interleaved sample model:
ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
ColorModel cm = new ComponentColorModel(sRGB, false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
WritableRaster raster = Raster.createInterleavedRaster(DataBuffer.TYPE_USHORT, w, h, 3, null);
BufferedImage rgb = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);
PS: With the data buffer exposed, you can access the short samples directly, to manipulate the pixels. This is much faster than using BufferedImage.getRGB(...)/setRGB(...), and will keep the original 16 bit per sample precision. BufferedImage.getRGB(...) will convert the pixel values to 32 bit pixel/8 bit per sample, and thus lose the extra precision.
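As a concrete sketch of that direct access (the 2x2 size, sample value, and class name are arbitrary):

```java
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.BufferedImage;
import java.awt.image.ColorModel;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
import java.awt.image.DataBufferUShort;
import java.awt.image.Raster;
import java.awt.image.WritableRaster;

public class SixteenBitAccess {
    public static void main(String[] args) {
        int w = 2, h = 2;
        ColorSpace sRGB = ColorSpace.getInstance(ColorSpace.CS_sRGB);
        ColorModel cm = new ComponentColorModel(sRGB, false, false,
                Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
        WritableRaster raster = Raster.createInterleavedRaster(
                DataBuffer.TYPE_USHORT, w, h, 3, null);
        BufferedImage img = new BufferedImage(cm, raster, cm.isAlphaPremultiplied(), null);

        // The backing array holds w * h * 3 unsigned 16-bit samples, interleaved R, G, B.
        short[] samples = ((DataBufferUShort) raster.getDataBuffer()).getData();
        samples[0] = (short) 0xFFFF; // red sample of pixel (0, 0), full intensity

        // Reading through the raster keeps the full 16-bit precision:
        System.out.println(raster.getSample(0, 0, 0)); // 65535
    }
}
```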