How would I get a BufferedImage from an OpenGL window? - java

I'm coding a Java LWJGL game, and everything's going along great, except that I can't figure out a way to create a BufferedImage of the current game area. I've searched the internet and browsed all of the OpenGL functions, and I'm getting nowhere... Anyone have any ideas? Here's all I have so far, but it only makes a blank .png:
if(Input.getKeyDown(Input.KEY_F2)) {
    try {
        String fileName = "screenshot-" + Util.getSystemTime(false);
        File imageToSave = new File(MainComponent.screenshotsFolder, fileName + ".png");
        int duplicate = 0;
        while(true) {
            duplicate++;
            if(imageToSave.exists() == false) {
                imageToSave.createNewFile();
                break;
            }
            imageToSave = new File(MainComponent.screenshotsFolder, fileName + "_" + duplicate + ".png");
        }
        imageToSave.createNewFile();
        // Create a buffered image:
        BufferedImage image = new BufferedImage(MainComponent.WIDTH, MainComponent.HEIGHT, BufferedImage.TYPE_INT_ARGB);
        // Write the new buffered image to file:
        ImageIO.write(image, "png", imageToSave);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

You never actually write something into your BufferedImage.
Read the Buffer
You can use glReadPixels to access the selected buffer. (I assume WIDTH and HEIGHT are your OpenGL context dimensions.)
FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData);
imageData.rewind();
Use whatever parameters suit your needs best; I just picked floats randomly.
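If you would rather avoid the float-to-int conversion later on, reading into a ByteBuffer with GL_UNSIGNED_BYTE works just as well. This is only a sketch of that alternative, assuming the same WIDTH and HEIGHT and LWJGL's BufferUtils:
// Alternative: read the pixels as unsigned bytes instead of floats
ByteBuffer imageBytes = BufferUtils.createByteBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, imageBytes);
imageBytes.rewind();
// Java bytes are signed, so mask each channel to get a 0..255 value:
// int r = imageBytes.get() & 0xFF;
The rest of this answer sticks with the float variant from above.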
Set the Image Data
You already figured out how to create and save your image, but in between you should also set some content on the image. You can do this with BufferedImage.setRGB(). (Note that I don't use naming as careful as yours, to keep this example concise.)
// create image
BufferedImage image = new BufferedImage(
    WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
);
// set content
image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
// save it
File outputfile = new File("Screenshot.png");
try {
    ImageIO.write(image, "png", outputfile);
} catch (IOException e) {
    e.printStackTrace();
}
The trickiest part is now getting the rgbArray. The problems are that
OpenGL gives you three values per pixel (in this case, i.e. using GL11.GL_RGB), while the BufferedImage expects one packed value.
OpenGL counts the rows from bottom to top, while BufferedImage counts from top to bottom.
Calculate one Integer from three Floats
To get rid of problem one you have to calculate the integer value which corresponds to the three numbers you get.
I will show this with a simple example, the color red which is (1.0f, 0.0f, 0.0f) in your FloatBuffer.
For the integer value it might be easy to think of numbers in hex values, as you might know from CSS where it's very common to name colors with those. Red would be #ff0000 in CSS or in Java of course 0xff0000.
Colors in RGB with integers are usually represented from 0 to 255 (or 00 to ff in hex), while you use 0 to 1 with floats or doubles. So first you have to map them to the correct range by simply multiplying the values by 255 and casting them to integers:
int r = (int)(fR * 255);
Now you can think of the hex value as just putting those numbers next to each other:
rgb = 255 0 0 = ff 00 00
To achieve this you can bit-shift the integer values. Since one hex digit (0-f) is 4 bits long, you have to shift the value of green 8 bits to the left (two hex digits) and the value of red 16 bits. After that you can simply add them up.
int rgb = (r << 16) + (g << 8) + b;
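Wrapped up as a small helper (the name packRGB is mine, just a sketch of the idea above), the whole conversion from three floats to one packed integer looks like this:
// Pack three 0..1 float channels into one 0xRRGGBB integer (helper name is mine)
private static int packRGB(float fR, float fG, float fB) {
    int r = (int) (fR * 255);
    int g = (int) (fG * 255);
    int b = (int) (fB * 255);
    return (r << 16) + (g << 8) + b;
}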
Getting from BottomUp to TopDown
I know the terminology bottom-up -> top-down is not correct here, but it was catchy.
To access 2D data in a 1D array you usually use some formula (this case row-major order) like
int index = offset + (y - yOffset) * stride + (x - xOffset);
Since you want to have the complete image the offsets can be left out and the formula simplified to
int index = y * stride + x;
Of course the stride is simply the WIDTH, i.e. the length of one row.
The problem you now face is that OpenGL uses the bottom row as row 0 while the BufferedImage uses the top row as row 0. To get rid of that problem just invert y:
int index = ((HEIGHT - 1) - y) * WIDTH + x;
Filling the int[]-array with the Buffer's Data
Now you know how to calculate the rgb value, the correct index and you have all data you need. Let's fill the int[]-array with those information.
int[] rgbArray = new int[WIDTH * HEIGHT];
for(int y = 0; y < HEIGHT; ++y) {
    for(int x = 0; x < WIDTH; ++x) {
        int r = (int)(imageData.get() * 255) << 16;
        int g = (int)(imageData.get() * 255) << 8;
        int b = (int)(imageData.get() * 255);
        int i = ((HEIGHT - 1) - y) * WIDTH + x;
        rgbArray[i] = r + g + b;
    }
}
Note three things about this little piece of code.
The size of the array. Obviously it's just WIDTH * HEIGHT and not WIDTH * HEIGHT * 3 as the buffer's size was.
Since OpenGL uses row-major order, you have to use the column value (x) as the inner loop for this 2D array (and of course there are other ways to write this, but this seemed to be the most intuitive one).
Accessing imageData with imageData.get() is probably not the safest way to do it, but since the calculations are carefully done it should do the job just fine. Just remember to flip() or rewind() the buffer before calling get() the first time!
Putting it all together
So with all the information available now we can just put a method saveScreenshot() together.
private void saveScreenshot() {
    // read current buffer
    FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
    GL11.glReadPixels(
        0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData
    );
    imageData.rewind();

    // fill rgbArray for BufferedImage
    int[] rgbArray = new int[WIDTH * HEIGHT];
    for(int y = 0; y < HEIGHT; ++y) {
        for(int x = 0; x < WIDTH; ++x) {
            int r = (int)(imageData.get() * 255) << 16;
            int g = (int)(imageData.get() * 255) << 8;
            int b = (int)(imageData.get() * 255);
            int i = ((HEIGHT - 1) - y) * WIDTH + x;
            rgbArray[i] = r + g + b;
        }
    }

    // create and save image
    BufferedImage image = new BufferedImage(
        WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
    );
    image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
    File outputfile = getNextScreenFile();
    try {
        ImageIO.write(image, "png", outputfile);
    } catch (IOException e) {
        e.printStackTrace();
        System.err.println("Can not save screenshot!");
    }
}

private File getNextScreenFile() {
    // create image name
    String fileName = "screenshot_" + getSystemTime(false);
    File imageToSave = new File(fileName + ".png");

    // check for duplicates
    int duplicate = 0;
    while(imageToSave.exists()) {
        imageToSave = new File(fileName + "_" + ++duplicate + ".png");
    }
    return imageToSave;
}

// format the time
public static String getSystemTime(boolean getTimeOnly) {
    SimpleDateFormat dateFormat = new SimpleDateFormat(
        getTimeOnly ? "HH-mm-ss" : "yyyy-MM-dd'T'HH-mm-ss"
    );
    return dateFormat.format(new Date());
}
I also uploaded a very simple full working example.

Related

How to get 0..255 colors from a picture in Java? [duplicate]

This question already has answers here: reading black/white image in java with TYPE_USHORT_GRAY (2 answers). Closed 2 years ago.
I have tried to grayscale an already black-white-gray picture and it became black.
When I try to grayscale a picture with Java, I do it like this:
// This turns the image data to grayscale and returns the data
private static RealMatrix imageData(File picture) {
    try {
        BufferedImage image = ImageIO.read(picture);
        int width = image.getWidth();
        int height = image.getHeight();
        RealMatrix data = MatrixUtils.createRealMatrix(height * width, 1);
        // Convert to grayscale
        int countRows = 0;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                // Turn image to grayscale
                int p = image.getRGB(x, y);
                int r = (p >> 16) & 0xff;
                int g = (p >> 8) & 0xff;
                int b = p & 0xff;
                // calculate average and save
                int avg = (r + g + b) / 3;
                data.setEntry(countRows, 0, avg);
                countRows++;
            }
        }
        return data;
    } catch (Exception e) {
        e.printStackTrace();
        return null;
    }
}
The problem as I see it is that p is a 32-bit value and I only want an 8-bit value. Even if the picture is already grayscaled, the p value is still a 32-bit value. That causes trouble for me.
So if I grayscale a gray picture, it will become black, or at least darker.
And I want 0..255 values for p, which is a 32-bit integer value.
Do you have any suggestions on how to read pictures as if they were 8-bit?
It's for image classification.
To summarize:
I need help getting each pixel of a picture in 0..255 format.
One way is to grayscale it, but how can I verify whether the picture is already grayscaled?
Update:
I have tried to read a picture as if it were 8-bit values. It works. Then I try to save the picture with the same values. The picture becomes very dark.
I have a MATLAB example I want to show.
First I read my picture:
image = imread("subject01.normal");
And then I save the picture.
imwrite(uint8(image), "theSameImage.gif")
If I try with a minimal Java code snippet for reading an image:
private static void imageData(File picture) {
    try {
        BufferedImage image = ImageIO.read(picture);
        int width = image.getWidth();
        int height = image.getHeight();
        DataBuffer buffer = image.getRaster().getDataBuffer();
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int p = buffer.getElem(x + y * width);
                image.setRGB(x, y, p);
            }
        }
        File output = new File(picture.getName());
        ImageIO.write(image, "gif", output);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
I will get this picture:
So even if there is a marked answer in this question, it's still not going to help you.
You mentioned that the image you're reading is a grayscale image, and that getRGB returns values like 23241 and 23551.
That means your image uses a CS_GRAY ColorSpace, not an RGB color space. You can confirm this by calling getType(), which would return TYPE_USHORT_GRAY.
That means that your p value is a gray-level in range 0 - 65535. Since you want the result to be a double in range 0 - 255, you need to calculate:
double avg = p * 255.0 / 65535.0;
Unless you're 100% sure the input image will always be grayscale, you should check the type in the code and handle the p value accordingly.
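A minimal sketch of such a check (the helper name grayValue is mine): for a TYPE_USHORT_GRAY image it reads the raw 16-bit sample from the raster and scales it, otherwise it falls back to averaging the RGB channels as you already do:
// Sketch: one 0..255 gray value per pixel, handling 16-bit grayscale images
private static double grayValue(BufferedImage image, int x, int y) {
    if (image.getType() == BufferedImage.TYPE_USHORT_GRAY) {
        int p = image.getRaster().getSample(x, y, 0); // raw sample, 0 - 65535
        return p * 255.0 / 65535.0;
    }
    int rgb = image.getRGB(x, y);
    int r = (rgb >> 16) & 0xff;
    int g = (rgb >> 8) & 0xff;
    int b = rgb & 0xff;
    return (r + g + b) / 3.0;
}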

Draw Image using CMYK values

I want to convert a buffered image from RGBA format to CMYK format without using auto conversion tools or libraries, so I tried to extract the RGBA values from individual pixels that I got using BufferedImage.getRGB(), and here is what I've done so far:
BufferedImage img = ImageIO.read(new File("image path"));
int R, G, B, pixel, A;
float Rc, Gc, Bc, K, C, M, Y;
int height = img.getHeight();
int width = img.getWidth();
for(int y = 0; y < height; y++){
    for(int x = 0; x < width; x++){
        pixel = img.getRGB(x, y);
        // I shifted the int bytes to get RGBA values
        A = (pixel >> 24) & 0xff;
        R = (pixel >> 16) & 0xff;
        G = (pixel >> 8) & 0xff;
        B = (pixel) & 0xff;
        Rc = (float) ((float) R / 255.0);
        Gc = (float) ((float) G / 255.0);
        Bc = (float) ((float) B / 255.0);
        // Equations I found on the internet to get CMYK values
        K = 1 - Math.max(Bc, Math.max(Rc, Gc));
        C = (1 - Rc - K) / (1 - K);
        Y = (1 - Bc - K) / (1 - K);
        M = (1 - Gc - K) / (1 - K);
    }
}
Now that I've extracted them, I want to draw or construct an image using these values. Can you tell me of a method or a way to do this, because I don't think BufferedImage.setRGB() would work? Also, when I printed the values of C, Y and M, some of them had a NaN value. Can someone tell me what that means and how to deal with it?
While it is possible, converting RGB to CMYK without a proper color profile will not produce the best results. For better performance and higher color fidelity, I really recommend using an ICC color profile (see ICC_Profile and ICC_ColorSpace classes) and ColorConvertOp. :-)
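For reference, a minimal sketch of that profile-based route could look like the following; the profile file name is made up, you would point it at a real CMYK ICC profile you have on disk:
// Sketch: RGB -> CMYK via an ICC profile and ColorConvertOp (profile path is hypothetical)
ICC_Profile cmykProfile = ICC_Profile.getInstance(new FileInputStream("path/to/your-cmyk-profile.icc"));
ICC_ColorSpace cmykColorSpace = new ICC_ColorSpace(cmykProfile);
ColorConvertOp rgbToCmyk = new ColorConvertOp(cmykColorSpace, null);
BufferedImage rgbImage = ImageIO.read(new File("image path"));
BufferedImage cmykImage = rgbToCmyk.filter(rgbImage, null); // dest image is created in the CMYK color space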
Anyway, here's how to do it using your own conversion. The important part is creating a CMYK color space, and a ColorModel and BufferedImage using that color space (you could also load a CMYK color space from an ICC profile as mentioned above, but the colors would probably look more off, as it uses different calculations than you do).
public static void main(String[] args) throws IOException {
    BufferedImage img = ImageIO.read(new File(args[0]));
    int height = img.getHeight();
    int width = img.getWidth();

    // Create a color model and image in CMYK color space (see custom class below)
    ComponentColorModel cmykModel = new ComponentColorModel(CMYKColorSpace.INSTANCE, false, false, Transparency.TRANSLUCENT, DataBuffer.TYPE_BYTE);
    BufferedImage cmykImg = new BufferedImage(cmykModel, cmykModel.createCompatibleWritableRaster(width, height), cmykModel.isAlphaPremultiplied(), null);
    WritableRaster cmykRaster = cmykImg.getRaster();

    int R, G, B, pixel;
    float Rc, Gc, Bc, K, C, M, Y;
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            pixel = img.getRGB(x, y);

            // Now, as cmykImg already is in CMYK color space, you could actually just invoke
            //cmykImg.setRGB(x, y, pixel);
            // and the method would perform automatic conversion to the dest color space (CMYK)
            // But, here you go... (I just cleaned up your code a little bit):
            R = (pixel >> 16) & 0xff;
            G = (pixel >> 8) & 0xff;
            B = (pixel) & 0xff;

            Rc = R / 255f;
            Gc = G / 255f;
            Bc = B / 255f;

            // Equations I found on the internet to get CMYK values
            K = 1 - Math.max(Bc, Math.max(Rc, Gc));
            if (K == 1f) {
                // All black (this is where you would get NaN values I think)
                C = M = Y = 0;
            }
            else {
                C = (1 - Rc - K) / (1 - K);
                M = (1 - Gc - K) / (1 - K);
                Y = (1 - Bc - K) / (1 - K);
            }

            // ...and store the CMYK values (as bytes in 0..255 range) in the raster
            cmykRaster.setDataElements(x, y, new byte[] {(byte) (C * 255), (byte) (M * 255), (byte) (Y * 255), (byte) (K * 255)});
        }
    }

    // You should now have a CMYK buffered image
    System.out.println("cmykImg: " + cmykImg);
}
// A simple and not very accurate CMYK color space
// Full source at https://github.com/haraldk/TwelveMonkeys/blob/master/imageio/imageio-core/src/main/java/com/twelvemonkeys/imageio/color/CMYKColorSpace.java
final static class CMYKColorSpace extends ColorSpace {
    static final ColorSpace INSTANCE = new CMYKColorSpace();
    final ColorSpace sRGB = getInstance(CS_sRGB);

    private CMYKColorSpace() {
        super(ColorSpace.TYPE_CMYK, 4);
    }

    public static ColorSpace getInstance() {
        return INSTANCE;
    }

    public float[] toRGB(float[] colorvalue) {
        return new float[]{
            (1 - colorvalue[0]) * (1 - colorvalue[3]),
            (1 - colorvalue[1]) * (1 - colorvalue[3]),
            (1 - colorvalue[2]) * (1 - colorvalue[3])
        };
    }

    public float[] fromRGB(float[] rgbvalue) {
        // NOTE: This is essentially the same equation you use, except
        // this is slightly optimized, and values are already in range [0..1]
        // Compute CMY
        float c = 1 - rgbvalue[0];
        float m = 1 - rgbvalue[1];
        float y = 1 - rgbvalue[2];

        // Find K
        float k = Math.min(c, Math.min(m, y));

        // Convert to CMYK values
        return new float[]{(c - k), (m - k), (y - k), k};
    }

    public float[] toCIEXYZ(float[] colorvalue) {
        return sRGB.toCIEXYZ(toRGB(colorvalue));
    }

    public float[] fromCIEXYZ(float[] colorvalue) {
        return sRGB.fromCIEXYZ(fromRGB(colorvalue));
    }
}
PS: Your question talks about RGBA and CMYK, but your code just ignores the alpha value, so I did the same. If you really wanted to, you could just keep the alpha value as-is and have a CMYK+A image, to allow alpha-compositing in CMYK color space. I'll leave that as an exercise. ;-)

Handling of java.awt.Color while saving file (JPG) with ImageIO.write

Taking part in a Coursera course, I've been trying to use steganography to hide an image in another. This means I've tried to store the "main" picture's RGB values on 6 bits and the "second" picture's values on the last 2 bits.
I'm merging these two values to create a joint picture, and have also coded a class to parse the joint picture, and recover the original images.
Image recovery has not been successful, although it seems (from other examples provided within the course) that the parser is working fine. I suppose that saving the pictures after modification, using ImageIO.write somehow modifies the RGB values I have carefully set in the code. :D
public static BufferedImage mergeImage(BufferedImage original,
        BufferedImage message, int hide) {
    // hide is the num of bits on which the second image is hidden
    if (original != null) {
        int width = original.getWidth();
        int height = original.getHeight();
        BufferedImage output = new BufferedImage(width, height,
                BufferedImage.TYPE_INT_RGB);
        for (int i = 0; i < width; i++) {
            for (int j = 0; j < height; j++) {
                int pix_orig = original.getRGB(i, j);
                int pix_msg = message.getRGB(i, j);
                int pixel = setpixel(pix_orig, pix_msg, hide);
                output.setRGB(i, j, pixel);
            }
        }
        return output;
    }
    return null;
}

public static int setpixel(int pixel_orig, int pixel_msg, int hide) {
    int bits = (int) Math.pow(2, hide);
    Color orig = new Color(pixel_orig);
    Color msg = new Color(pixel_msg);
    int red = ((orig.getRed() / bits) * bits); //+ (msg.getRed() / (256/bits));
    if (red % 4 != 0){
        counter += 1;
    }
    int green = ((orig.getGreen() / bits) * bits) + (msg.getGreen() / (256/bits));
    int blue = ((orig.getBlue() / bits) * bits) + (msg.getBlue() / (256/bits));
    int pixel = new Color(red, green, blue).getRGB();
    return pixel;
}
This is the code I use for setting the RGB values of the merged picture. As you can see, I have commented out the part of the code belonging to red, to check whether the main picture can actually be saved on 6 bits, assuming I take int hide = 2.
However, if I make the same checks in the parsing part of the code:
public static BufferedImage parseImage(BufferedImage input, int hidden){
    // hidden is the num of bits on which the second image is hidden
    if (input != null){
        int width = input.getWidth();
        int height = input.getHeight();
        BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
        for(int i = 0; i < width; i++){
            for(int j = 0; j < height; j++){
                int pixel = input.getRGB(i, j);
                pixel = setpixel(pixel, hidden);
                output.setRGB(i, j, pixel);
            }
        }
        return output;
    }
    return null;
}

public static int setpixel(int pixel, int hidden){
    int bits = (int) Math.pow(2, hidden);
    Color c = new Color(pixel);
    if (c.getRed() % 4 != 0){
        counter += 1;
    }
    int red = (c.getRed() - (c.getRed()/bits)*bits) * (256/bits);
    int green = (c.getGreen() - (c.getGreen()/bits)*bits) * (256/bits);
    int blue = (c.getBlue() - (c.getBlue()/bits)*bits) * (256/bits);
    pixel = new Color(red, green, blue).getRGB();
    return pixel;
}
I get ~100k pixels where the R value has a remainder if divided by four.
I suspect there's some problem with how ImageIO.write handles the values.
I know the question is going to be vague, but
1) Can someone confirm this
2) What can I do to get this code working?
Thanks a lot!
JPEG has lossy compression, which means some pixels will effectively be modified when reloading the image. This isn't a fault of ImageIO.write, it's how the format works. If you want to embed your data directly to pixel values, you want to save the image to a lossless format, such as BMP or PNG.
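If you want to convince yourself that the PNG round trip really is lossless, a quick sketch like this (the file name is made up) writes the merged image, reads it back and compares every pixel; with JPEG you would see differences, with PNG you should see none:
// Sketch: verify that PNG preserves the exact pixel values
BufferedImage merged = mergeImage(original, message, 2);
File file = new File("merged.png");
ImageIO.write(merged, "png", file);
BufferedImage reloaded = ImageIO.read(file);
for (int x = 0; x < merged.getWidth(); x++) {
    for (int y = 0; y < merged.getHeight(); y++) {
        if (merged.getRGB(x, y) != reloaded.getRGB(x, y)) {
            System.out.println("Pixel changed at " + x + ", " + y);
        }
    }
}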

How to create BufferedImage for 32 bits per sample, 3 samples image data

I am trying to create a BufferedImage from some image data which is a byte array. The image is RGB format with 3 samples per pixel - R, G, and B and 32 bits per sample (for each sample, not all 3 samples).
Now I want to create a BufferedImage from this byte array. This is what I have done:
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] {32, 32, 32}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_INT);
Object tempArray = ArrayUtils.toNBits(bitsPerSample, pixels, samplesPerPixel*imageWidth, endian == IOUtils.BIG_ENDIAN);
WritableRaster raster = cm.createCompatibleWritableRaster(imageWidth, imageHeight);
raster.setDataElements(0, 0, imageWidth, imageHeight, tempArray);
BufferedImage bi = new BufferedImage(cm, raster, false, null);
The above code works with 24 bits per sample RGB image but not 32 bits per sample. The generated image is garbage which is shown on the right of the image. It is supposed to be like the left side of the image.
Note: the only image reader on my machine which can read this image is ImageMagick. All the others show similar results as the garbage one to the right of the following image.
The ArrayUtils.toNBits() just translates the byte array to int array with correct endianess. I'm sure this one is correct as I have cross checked with other methods to generate the same int array.
I guess the problem might arise from the fact I am using all the 32 bits int to represent the color which would contain negative values. Looks like I need long data type, but there is no DataBuffer type for long.
Instances of ComponentColorModel created with transfer types
DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT
have pixel sample values which are treated as unsigned integral
values.
The above quote is from Java document for ComponentColorModel. This means the 32 bit sample does get treated as unsigned integer value. Then the problem could be somewhere else.
Has anybody met a similar problem and got a workaround, or may I have done something wrong here?
Update 2: The "real" problem lies in the fact that, when a 32 bit sample is used, the algorithm in ComponentColorModel shifts 1 to the left 0 times (1<<0), since a shift on an int is always within 0~31 inclusive. This is not the expected value. To solve this problem (and actually shift left 32 times), the only thing that needs to be done is to change the 1 from int to long type, as 1L, as shown in the fix below.
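You can see the int/long difference with a two-line check; on an int the shift distance is taken modulo 32, so 1 << 32 silently becomes 1 << 0:
System.out.println(1 << 32);   // prints 1, because the shift distance is masked to 0 for int
System.out.println(1L << 32);  // prints 4294967296, the value the color model actually needs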
Update: From the answer by HaraldK and the comments, we have finally agreed that the problem comes from Java's ComponentColorModel, which does not handle 32 bit samples correctly. The proposed fix by HaraldK works for my case too. The following is my version:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;

public class Int32ComponentColorModel extends ComponentColorModel {
    //
    public Int32ComponentColorModel(ColorSpace cs, boolean alpha) {
        super(cs, alpha, false, alpha ? Transparency.TRANSLUCENT : Transparency.OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();

        if (normComponents == null || normComponents.length < numComponents + normOffset) {
            normComponents = new float[numComponents + normOffset];
        }

        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ipixel[c] / ((float) ((1L << getComponentSize(c)) - 1));
                }
                break;
            default: // I don't think we can ever come this far. Just in case!!!
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }

        return normComponents;
    }
}
Update:
This seems to be a known bug: ComponentColorModel.getNormalizedComponents() does not handle 32-bit TYPE_INT, reported 10 (TEN!) years ago, against Java 5.
The upside is that Java is now partly open-sourced. We can now propose a patch, and with some luck it will be evaluated for Java 9 or so... :-P
The bug proposes the following workaround:
Subclass ComponentColorModel and override getNormalizedComponents() to properly handle 32 bit per sample TYPE_INT data by dividing the incoming pixel value by 'Math.pow(2, 32) - 1' when dealing with this data, rather than using the erroneous bit shift. (Using a floating point value is ok, since getNormalizedComponents() converts everything to floating point anyway).
My fix is a little different, but the basic idea is the same (feel free to optimize as you see fit :-)):
private static class TypeIntComponentColorModel extends ComponentColorModel {
    public TypeIntComponentColorModel(final ColorSpace cs, final boolean alpha) {
        super(cs, alpha, false, alpha ? TRANSLUCENT : OPAQUE, DataBuffer.TYPE_INT);
    }

    @Override
    public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
        int numComponents = getNumComponents();

        if (normComponents == null) {
            normComponents = new float[numComponents + normOffset];
        }

        switch (transferType) {
            case DataBuffer.TYPE_INT:
                int[] ipixel = (int[]) pixel;
                for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
                    normComponents[nc] = ((float) (ipixel[c] & 0xffffffffl)) / ((float) ((1l << getComponentSize(c)) - 1));
                }
                break;
            default:
                throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
        }

        return normComponents;
    }
}
Consider the code below. If run as-is, for me it displays a mostly black image, with the upper right quarter white overlaid with a black circle. If I change the datatype to TYPE_USHORT (uncomment the transferType line), it displays half/half white and a linear gradient from black to white, with an orange circle in the middle (as it should).
Using ColorConvertOp to convert to a standard type seems to make no difference.
public class Int32Image {
    public static void main(String[] args) {
        // Define dimensions and layout of the image
        int w = 300;
        int h = 200;
        int transferType = DataBuffer.TYPE_INT;
        // int transferType = DataBuffer.TYPE_USHORT;

        ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), false, false, Transparency.OPAQUE, transferType);
        WritableRaster raster = colorModel.createCompatibleWritableRaster(w, h);
        BufferedImage image = new BufferedImage(colorModel, raster, false, null);

        // Start with linear gradient
        if (raster.getTransferType() == DataBuffer.TYPE_INT) {
            DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
            int[] data = buffer.getData();
            for (int y = 0; y < h; y++) {
                int value = (int) (y * 0xffffffffL / h);
                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        }
        else if (raster.getTransferType() == DataBuffer.TYPE_USHORT) {
            DataBufferUShort buffer = (DataBufferUShort) raster.getDataBuffer();
            short[] data = buffer.getData();
            for (int y = 0; y < h; y++) {
                short value = (short) (y * 0xffffL / h);
                for (int x = 0; x < w; x++) {
                    int offset = y * w * 3 + x * 3;
                    data[offset] = value;
                    data[offset + 1] = value;
                    data[offset + 2] = value;
                }
            }
        }

        // Paint something (in color)
        Graphics2D g = image.createGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, w / 2, h);
        g.setColor(Color.ORANGE);
        g.fillOval(100, 50, w - 200, h - 100);
        g.dispose();

        System.out.println("image = " + image);
        // image = new ColorConvertOp(null).filter(image, new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB));

        JFrame frame = new JFrame();
        frame.add(new JLabel(new ImageIcon(image)));
        frame.pack();
        frame.setLocationRelativeTo(null);
        frame.setVisible(true);
    }
}
To me, this seems to suggest that there's something wrong with the ColorModel using transferType TYPE_INT. But I'd be happy to be wrong. ;-)
Another thing you could try, is to scale the values down to 16 bit, use a TYPE_USHORT raster and color model, and see if that makes a difference. I bet it will, but I'm too lazy to try. ;-)
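Just to illustrate what I mean by scaling down (a sketch only; intSamples stands for whatever int[] holds your unsigned 32-bit samples): keeping the upper 16 bits of each sample gives you values a TYPE_USHORT raster can hold.
// Sketch: scale unsigned 32-bit samples down to 16 bit for a TYPE_USHORT raster
short[] shortSamples = new short[intSamples.length];
for (int i = 0; i < intSamples.length; i++) {
    long unsigned = intSamples[i] & 0xFFFFFFFFL; // treat the int as unsigned
    shortSamples[i] = (short) (unsigned >>> 16); // keep the most significant 16 bits
}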

DCT2 of png using jTransforms

What I'm trying to do is to compute 2D DCT of an image in Java and then save the result back to file.
Read file:
coverImage = readImg(coverPath);
private BufferedImage readImg(String path) {
    BufferedImage destination = null;
    try {
        destination = ImageIO.read(new File(path));
    } catch (IOException e) {
        e.printStackTrace();
    }
    return destination;
}
Convert to float array:
cover = convertToFloatArray(coverImage);
private float[] convertToFloatArray(BufferedImage source) {
    securedImage = (WritableRaster) source.getData();
    float[] floatArray = new float[source.getHeight() * source.getWidth()];
    floatArray = securedImage.getPixels(0, 0, source.getWidth(), source.getHeight(), floatArray);
    return floatArray;
}
Run the DCT:
runDCT(cover, coverImage.getHeight(), coverImage.getWidth());
private void runDCT(float[] floatArray, int rows, int cols) {
    dct = new FloatDCT_2D(rows, cols);
    dct.forward(floatArray, false);
    securedImage.setPixels(0, 0, cols, rows, floatArray);
}
And then save it as image:
convertDctToImage(securedImage, coverImage.getHeight(), coverImage.getWidth());
private void convertDctToImage(WritableRaster secured, int rows, int cols) {
    coverImage.setData(secured);
    File file = new File(securedPath);
    try {
        ImageIO.write(coverImage, "png", file);
    } catch (IOException ex) {
        Logger.getLogger(DCT2D.class.getName()).log(Level.SEVERE, null, ex);
    }
}
But what I get is: http://kyle.pl/up/2012/05/29/dct_stack.png
Can anyone tell me what I'm doing wrong? Or maybe I don't understand something here?
This is a piece of code that works for me:
//reading image
BufferedImage image = javax.imageio.ImageIO.read(new File(filename));

//width * 2, because DoubleFFT_2D needs 2x more space - for Real and Imaginary parts of complex numbers
double[][] brightness = new double[image.getHeight()][image.getWidth() * 2];

//convert colored image to grayscale (brightness of each pixel)
WritableRaster raster = image.getRaster();
int[] dataElements = new int[image.getWidth()]; // assumes an int-packed image, e.g. TYPE_INT_RGB
for ( int y = 0; y < image.getHeight(); y++ ) {
    raster.getDataElements( 0, y, image.getWidth(), 1, dataElements );
    for ( int x = 0; x < image.getWidth(); x++ ) {
        //notice x and y swapped - it's JTransforms format of arrays
        brightness[y][x] = brightnessRGB(dataElements[x]);
    }
}

//do FT (not FFT, because FFT is only for images with width and height being 2**N)
//DoubleFFT_2D writes data to the same array - to brightness
new DoubleFFT_2D(image.getHeight(), image.getWidth()).realForwardFull(brightness);

//visualising frequency domain
BufferedImage fd = new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
for ( int y = 0; y < image.getHeight(); y++ ) {
    for ( int x = 0; x < image.getWidth(); x++ ) {
        //we calculate complex number vector length (sqrt(Re**2 + Im**2)). But these lengths are too big to
        //fit in the 0 - 255 scale of colors. So I divide by 223. Instead of "223", you may want to choose
        //another factor, which would make your frequency domain look best
        int power = (int) (Math.sqrt(Math.pow(brightness[y][2 * x], 2) + Math.pow(brightness[y][2 * x + 1], 2)) / 223);
        power = power > 255 ? 255 : power;
        //draw a grayscale color on image "fd"
        fd.setRGB(x, y, new Color(power, power, power).getRGB());
    }
}
draw(fd);
The resulting image should look like a big black space in the middle with white spots in all four corners. Usually people visualise the FD so that zero frequency appears in the center of the image. So, if you need the classical FD (one that looks like a star for real-life images), you need to upgrade the "fd.setRGB(x, y..." part a bit:
int w2 = img.getWidth() / 2;
int h2 = img.getHeight() / 2;
int newX = x + w2 >= img.getWidth() ? x - w2 : x + w2;
int newY = y + h2 >= img.getHeight() ? y - h2 : y + h2;
fd.setRGB(newX, newY, new Color(power, power, power).getRGB());
brightnessRGB and draw methods for the lazy:
public static int brightnessRGB(int rgb) {
    int r = (rgb >> 16) & 0xff;
    int g = (rgb >> 8) & 0xff;
    int b = rgb & 0xff;
    return (r + g + b) / 3;
}

private static void draw(BufferedImage img) {
    JLabel picLabel = new JLabel(new ImageIcon(img));
    JPanel jPanelMain = new JPanel();
    jPanelMain.add(picLabel);
    JFrame jFrame = new JFrame();
    jFrame.add(jPanelMain);
    jFrame.pack();
    jFrame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    jFrame.setVisible(true);
}
I know I'm a bit late, but I just did all that for my program. So let it be here for those who'll get here from googling.
