I've got a byte array storing 16-bit pixel data from an already-deconstructed DICOM file. What I need to do now is convert/export that pixel data somehow into a TIFF file format. I'm using the imageio-tiff-3.3.2.jar plugin to handle the tiff conversion/header data. But now I need to pack that image data array into a BufferedImage of the original image dimensions so it can be exported to TIFF. But it seems that BufferedImage doesn't support 16-bit images. Is there a way around this problem, such as an external library? Is there another way I can pack that image data into a TIFF image of the original DICOM dimensions? Keep in mind, this process has to be completely lossless. I've looked around and tried out some things for the last few days, but so far nothing has worked for me.
Let me know if you have any questions or if there's anything I can do to clear up any confusion.
EDIT: Intended and Current image
Given your input data of a raw byte array containing unsigned 16-bit image data, here are two ways to create a BufferedImage.
The first one will be slower, as it involves copying the byte array into a short array. It will also need twice the amount of memory. The upside is that it creates a standard TYPE_USHORT_GRAY BufferedImage, which may be faster to display and may be more compatible.
private static BufferedImage createCopyUsingByteBuffer(int w, int h, byte[] rawBytes) {
short[] rawShorts = new short[rawBytes.length / 2];
ByteBuffer.wrap(rawBytes)
// .order(ByteOrder.LITTLE_ENDIAN) // Depending on the data's endianness
.asShortBuffer()
.get(rawShorts);
DataBuffer dataBuffer = new DataBufferUShort(rawShorts, rawShorts.length);
int stride = 1;
WritableRaster raster = Raster.createInterleavedRaster(dataBuffer, w, h, w * stride, stride, new int[] {0}, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
A variant that is much faster to create (the previous version takes 4-5x longer), but results in a TYPE_CUSTOM image that might be slower to display (it does seem to perform reasonably in my tests, though). It uses very little extra memory, as it does no copying/conversion of the input data at creation time.
Instead, it uses a custom sample model, that has DataBuffer.TYPE_USHORT as transfer type, but uses DataBufferByte as data buffer.
private static BufferedImage createNoCopy(int w, int h, byte[] rawBytes) {
DataBuffer dataBuffer = new DataBufferByte(rawBytes, rawBytes.length);
int stride = 2;
SampleModel sampleModel = new MyComponentSampleModel(w, h, stride);
WritableRaster raster = Raster.createWritableRaster(sampleModel, dataBuffer, null);
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_GRAY), false, false, Transparency.OPAQUE, DataBuffer.TYPE_USHORT);
return new BufferedImage(colorModel, raster, colorModel.isAlphaPremultiplied(), null);
}
private static class MyComponentSampleModel extends ComponentSampleModel {
public MyComponentSampleModel(int w, int h, int stride) {
super(DataBuffer.TYPE_USHORT, w, h, stride, w * stride, new int[] {0});
}
@Override
public Object getDataElements(int x, int y, Object obj, DataBuffer data) {
if ((x < 0) || (y < 0) || (x >= width) || (y >= height)) {
throw new ArrayIndexOutOfBoundsException("Coordinate out of bounds!");
}
// Simplified, as we only support TYPE_USHORT
int numDataElems = getNumDataElements();
int pixelOffset = y * scanlineStride + x * pixelStride;
short[] sdata;
if (obj == null) {
sdata = new short[numDataElems];
}
else {
sdata = (short[]) obj;
}
for (int i = 0; i < numDataElems; i++) {
sdata[i] = (short) (data.getElem(0, pixelOffset) << 8 | data.getElem(0, pixelOffset + 1));
// If little endian, swap the element order, like this:
// sdata[i] = (short) (data.getElem(0, pixelOffset + 1) << 8 | data.getElem(0, pixelOffset));
}
return sdata;
}
}
If your image looks strange after this conversion, try flipping the endianness, as commented in the code.
And finally, some code to exercise the above:
public static void main(String[] args) {
int w = 1760;
int h = 2140;
byte[] rawBytes = new byte[w * h * 2]; // This will be your input array, 7532800 bytes
ShortBuffer buffer = ByteBuffer.wrap(rawBytes)
// .order(ByteOrder.LITTLE_ENDIAN) // Try swapping the byte order to see sharp edges
.asShortBuffer();
// Let's make a simple gradient, from black UL to white BR
int max = 65535; // Unsigned short max value
for (int y = 0; y < h; y++) {
double v = max * y / (double) h;
for (int x = 0; x < w; x++) {
buffer.put((short) Math.round((v + max * x / (double) w) / 2.0));
}
}
final BufferedImage image = createNoCopy(w, h, rawBytes);
// final BufferedImage image = createCopyUsingByteBuffer(w, h, rawBytes);
SwingUtilities.invokeLater(new Runnable() {
@Override
public void run() {
JFrame frame = new JFrame("Test");
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
frame.add(new JScrollPane(new JLabel(new ImageIcon(image))));
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
});
}
Here's what the output should look like (scaled down to 1/10th):
The easiest thing to do is to create a BufferedImage of type TYPE_USHORT_GRAY, which is the type to use for 16-bit grayscale data.
public BufferedImage convert(short[] array, final int width, final int height) {
    BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_USHORT_GRAY);
    short[] sb = ((DataBufferUShort) image.getRaster().getDataBuffer()).getData();
    System.arraycopy(array, 0, sb, 0, array.length);
    return image;
}
Then you can use javax.imageio to save your image as a TIFF or a PNG. I think that the TwelveMonkeys project provides better TIFF support for ImageIO, but you have to check first.
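For example, a minimal sketch of the save step (the method and error handling are placeholders), assuming a TIFF ImageWriter is registered via the TwelveMonkeys imageio-tiff plugin on the classpath:
public static void saveAsTiff(BufferedImage image, File file) throws IOException {
    // ImageIO.write returns false when no writer for the requested format is registered.
    if (!ImageIO.write(image, "TIFF", file)) {
        throw new IOException("No TIFF writer available - is the TIFF plugin on the classpath?");
    }
}
The write itself is lossless for a TYPE_USHORT_GRAY image; just make sure the plugin is actually picked up, otherwise write() silently returns false.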
[EDIT] In your case, because you deal with huge DICOM images that cannot be stored in a regular BufferedImage, you have to create your own type, using the Unsafe class to allocate the DataBuffer.
Create a new class DataBufferLongShort that will allocate the needed array/DataBuffer using the Unsafe class. Then you can use long indexes instead of int.
Create a new class DataBuffer that extends the classical DataBuffer in order to add a type TYPE_LONG_USHORT.
Then you can create the ColorModel with the new DataBuffer.
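To make the first step more concrete, here is a rough, hypothetical sketch of an off-heap DataBuffer allocated with sun.misc.Unsafe (the class shape and method names are made up; note that the standard Raster and SampleModel classes still index with int, so this only pays off once long-based access is added on top, as described above):
import java.awt.image.DataBuffer;
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class DataBufferLongShort extends DataBuffer {
    private static final Unsafe UNSAFE = getUnsafe();
    private final long address;  // start of the off-heap block
    private final long longSize; // number of 16-bit elements, may exceed Integer.MAX_VALUE

    public DataBufferLongShort(long size) {
        // DataBuffer's own size field is an int, so it is clamped here; real indexing uses long.
        super(TYPE_USHORT, (int) Math.min(size, Integer.MAX_VALUE));
        this.longSize = size;
        this.address = UNSAFE.allocateMemory(size * 2);
    }

    public int getElem(long i) {
        if (i < 0 || i >= longSize) throw new IndexOutOfBoundsException(Long.toString(i));
        return UNSAFE.getShort(address + i * 2) & 0xFFFF;
    }

    public void setElem(long i, int val) {
        if (i < 0 || i >= longSize) throw new IndexOutOfBoundsException(Long.toString(i));
        UNSAFE.putShort(address + i * 2, (short) val);
    }

    @Override
    public int getElem(int bank, int i) {
        return getElem((long) i);
    }

    @Override
    public void setElem(int bank, int i, int val) {
        setElem((long) i, val);
    }

    public void free() {
        // Off-heap memory is not garbage collected; the caller must release it explicitly.
        UNSAFE.freeMemory(address);
    }

    private static Unsafe getUnsafe() {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            return (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }
}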
Related
Taking part in a Coursera course, I've been trying to use steganography to hide an image in another. This means I've tried to store the "main" picture's RGB values on 6 bits and the "second" picture's values on the last 2 bits.
I'm merging these two values to create a joint picture, and have also coded a class to parse the joint picture, and recover the original images.
Image recovery has not been successful, although it seems (from other examples provided within the course) that the parser is working fine. I suppose that saving the pictures after modification using ImageIO.write somehow modifies the RGB values I have carefully set in the code. :D
public static BufferedImage mergeImage(BufferedImage original,
BufferedImage message, int hide) {
// hide is the number of bits on which the second image is hidden
if (original != null) {
int width = original.getWidth();
int height = original.getHeight();
BufferedImage output = new BufferedImage(width, height,
BufferedImage.TYPE_INT_RGB);
for (int i = 0; i < width; i++) {
for (int j = 0; j < height; j++) {
int pix_orig = original.getRGB(i, j);
int pix_msg = message.getRGB(i, j);
int pixel = setpixel(pix_orig, pix_msg, hide);
output.setRGB(i, j, pixel);
}
}
return output;
}
return null;
}
public static int setpixel(int pixel_orig, int pixel_msg, int hide) {
int bits = (int) Math.pow(2, hide);
Color orig = new Color(pixel_orig);
Color msg = new Color(pixel_msg);
int red = ((orig.getRed() / bits) * bits); //+ (msg.getRed() / (256/bits));
if (red % 4 != 0){
counter+=1;
}
int green = ((orig.getGreen() / bits) * bits) + (msg.getGreen() / (256/bits));
int blue = ((orig.getBlue() / bits) * bits) + (msg.getBlue() / (256/bits));
int pixel = new Color(red, green, blue).getRGB();
return pixel;
}
This is the code I use for setting the RGB values of the merged picture. As you can see, I have commented out the part of the code belonging to red to check whether the main picture can actually be saved on 6 bits, assuming I take int hide = 2.
However, if I make the same checks in the parsing part of the code:
public static BufferedImage parseImage(BufferedImage input, int hidden){
// hidden is the num of bits on which the second image is hidden
if (input != null){
int width = input.getWidth();
int height = input.getHeight();
BufferedImage output = new BufferedImage(width, height, BufferedImage.TYPE_INT_RGB);
for(int i=0;i<width;i++){
for(int j=0;j<height;j++){
int pixel = input.getRGB(i, j);
pixel = setpixel(pixel,hidden);
output.setRGB(i, j, pixel);
}
}
return output;
}
return null;
}
public static int setpixel(int pixel, int hidden){
int bits = (int) Math.pow(2,hidden);
Color c = new Color(pixel);
if (c.getRed() % 4 != 0){
counter+=1;
}
int red = (c.getRed() - (c.getRed()/bits)*bits)*(256/bits);
int green = (c.getGreen() - (c.getGreen()/bits)*bits)*(256/bits);
int blue = (c.getBlue() - (c.getBlue()/bits)*bits)*(256/bits);
pixel = new Color(red,green,blue).getRGB();
return pixel;
}
I get ~100k pixels where the R value has a non-zero remainder when divided by four.
I suspect there's some problem with ImageIO.write.
I know the question is going to be vague, but:
1) Can someone confirm this?
2) What can I do to get this code working?
Thanks a lot!
JPEG uses lossy compression, which means some pixels will effectively be modified when the image is reloaded. This isn't a fault of ImageIO.write; it's how the format works. If you want to embed your data directly in pixel values, you should save the image in a lossless format, such as BMP or PNG.
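For instance, a one-liner sketch of the lossless save, assuming output is the merged BufferedImage returned by mergeImage (the file name is just a placeholder):
// PNG is lossless, so the low-order bits carrying the hidden image survive the round trip to disk.
ImageIO.write(output, "png", new File("merged.png"));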
I am trying to create a BufferedImage from some image data which is a byte array. The image is in RGB format with 3 samples per pixel (R, G, and B) and 32 bits per sample (for each sample, not all 3 samples combined).
Now I want to create a BufferedImage from this byte array. This is what I have done:
ColorModel cm = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), new int[] {32, 32, 32}, false, false, Transparency.OPAQUE, DataBuffer.TYPE_INT);
Object tempArray = ArrayUtils.toNBits(bitsPerSample, pixels, samplesPerPixel*imageWidth, endian == IOUtils.BIG_ENDIAN);
WritableRaster raster = cm.createCompatibleWritableRaster(imageWidth, imageHeight);
raster.setDataElements(0, 0, imageWidth, imageHeight, tempArray);
BufferedImage bi = new BufferedImage(cm, raster, false, null);
The above code works with a 24 bits per sample RGB image, but not with 32 bits per sample. The generated image is garbage (shown on the right of the linked image); it is supposed to look like the left side.
Note: the only image reader on my machine which can read this image is ImageMagick. All the others show results similar to the garbage one on the right of the image.
The ArrayUtils.toNBits() call just translates the byte array to an int array with the correct endianness. I'm sure this part is correct, as I have cross-checked it with other methods that generate the same int array.
I guess the problem might arise from the fact that I am using all 32 bits of an int to represent the color, which would produce negative values. It looks like I need a long data type, but there is no DataBuffer type for long.
Instances of ComponentColorModel created with transfer types DataBuffer.TYPE_BYTE, DataBuffer.TYPE_USHORT, and DataBuffer.TYPE_INT have pixel sample values which are treated as unsigned integral values.
The above quote is from the Java documentation for ComponentColorModel. This means the 32-bit sample does get treated as an unsigned integral value, so the problem could be somewhere else.
Has anybody met a similar problem and found a workaround, or have I done something wrong here?
Update 2: The "real" problem lies in the fact that when a 32-bit sample is used, ComponentColorModel ends up shifting 1 to the left 0 times (1 << 0), since shift distances on an int are always taken mod 32 (within 0~31 inclusive). This is not the expected value. To solve this problem (and actually shift left 32 times), the only thing that needs to be done is to change the 1 from int to long (1L), as shown in the fix below.
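A quick illustration of the int shift behaviour (just a demonstration, not the fix itself):
// Shift distances on an int are taken mod 32, so 1 << 32 silently becomes 1 << 0.
System.out.println(1 << 32);        // prints 1
System.out.println((1 << 32) - 1);  // prints 0 - the bogus divisor
System.out.println((1L << 32) - 1); // prints 4294967295 - the divisor we actually want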
Update: from the answer by HaraldK and the comments, we have finally agreed that the problem is coming from Java's ComponentColorModel which is not handling 32 bit sample correctly. The proposed fix by HaraldK works for my case too. The following is my version:
import java.awt.Transparency;
import java.awt.color.ColorSpace;
import java.awt.image.ComponentColorModel;
import java.awt.image.DataBuffer;
public class Int32ComponentColorModel extends ComponentColorModel {
//
public Int32ComponentColorModel(ColorSpace cs, boolean alpha) {
super(cs, alpha, false, alpha ? Transparency.TRANSLUCENT : Transparency.OPAQUE, DataBuffer.TYPE_INT);
}
@Override
public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
int numComponents = getNumComponents();
if (normComponents == null || normComponents.length < numComponents + normOffset) {
normComponents = new float[numComponents + normOffset];
}
switch (transferType) {
case DataBuffer.TYPE_INT:
int[] ipixel = (int[]) pixel;
for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
normComponents[nc] = ipixel[c] / ((float) ((1L << getComponentSize(c)) - 1));
}
break;
default: // I don't think we can ever come this far. Just in case!!!
throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
}
return normComponents;
}
}
Update:
This seems to be a known bug: ComponentColorModel.getNormalizedComponents() does not handle 32-bit TYPE_INT, reported 10 (TEN!) years ago, against Java 5.
The upside is that Java is now partly open-sourced. We can now propose a patch, and with some luck it will be evaluated for Java 9 or so... :-P
The bug proposes the following workaround:
Subclass ComponentColorModel and override getNormalizedComponents() to properly handle 32 bit per sample TYPE_INT data by dividing the incoming pixel value by 'Math.pow(2, 32) - 1' when dealing with this data, rather than using the erroneous bit shift. (Using a floating point value is ok, since getNormalizedComponents() converts everything to floating point anyway).
My fix is a little different, but the basic idea is the same (feel free to optimize as you see fit :-)):
private static class TypeIntComponentColorModel extends ComponentColorModel {
public TypeIntComponentColorModel(final ColorSpace cs, final boolean alpha) {
super(cs, alpha, false, alpha ? TRANSLUCENT : OPAQUE, DataBuffer.TYPE_INT);
}
@Override
public float[] getNormalizedComponents(Object pixel, float[] normComponents, int normOffset) {
int numComponents = getNumComponents();
if (normComponents == null) {
normComponents = new float[numComponents + normOffset];
}
switch (transferType) {
case DataBuffer.TYPE_INT:
int[] ipixel = (int[]) pixel;
for (int c = 0, nc = normOffset; c < numComponents; c++, nc++) {
normComponents[nc] = ((float) (ipixel[c] & 0xFFFFFFFFL)) / ((float) ((1L << getComponentSize(c)) - 1));
}
break;
default:
throw new UnsupportedOperationException("This method has not been implemented for transferType " + transferType);
}
return normComponents;
}
}
Consider the code below. If run as is, for me it displays a mostly black image, with the upper right quarter white, overlaid with a black circle. If I change the data type to TYPE_USHORT (uncomment the transferType line), it displays half white and half a linear gradient from black to white, with an orange circle in the middle (as it should).
Using ColorConvertOp to convert to a standard type seems to make no difference.
public class Int32Image {
public static void main(String[] args) {
// Define dimensions and layout of the image
int w = 300;
int h = 200;
int transferType = DataBuffer.TYPE_INT;
// int transferType = DataBuffer.TYPE_USHORT;
ColorModel colorModel = new ComponentColorModel(ColorSpace.getInstance(ColorSpace.CS_sRGB), false, false, Transparency.OPAQUE, transferType);
WritableRaster raster = colorModel.createCompatibleWritableRaster(w, h);
BufferedImage image = new BufferedImage(colorModel, raster, false, null);
// Start with linear gradient
if (raster.getTransferType() == DataBuffer.TYPE_INT) {
DataBufferInt buffer = (DataBufferInt) raster.getDataBuffer();
int[] data = buffer.getData();
for (int y = 0; y < h; y++) {
int value = (int) (y * 0xffffffffL / h);
for (int x = 0; x < w; x++) {
int offset = y * w * 3 + x * 3;
data[offset] = value;
data[offset + 1] = value;
data[offset + 2] = value;
}
}
}
else if (raster.getTransferType() == DataBuffer.TYPE_USHORT) {
DataBufferUShort buffer = (DataBufferUShort) raster.getDataBuffer();
short[] data = buffer.getData();
for (int y = 0; y < h; y++) {
short value = (short) (y * 0xffffL / h);
for (int x = 0; x < w; x++) {
int offset = y * w * 3 + x * 3;
data[offset] = value;
data[offset + 1] = value;
data[offset + 2] = value;
}
}
}
// Paint something (in color)
Graphics2D g = image.createGraphics();
g.setColor(Color.WHITE);
g.fillRect(0, 0, w / 2, h);
g.setColor(Color.ORANGE);
g.fillOval(100, 50, w - 200, h - 100);
g.dispose();
System.out.println("image = " + image);
// image = new ColorConvertOp(null).filter(image, new BufferedImage(image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_ARGB));
JFrame frame = new JFrame();
frame.add(new JLabel(new ImageIcon(image)));
frame.pack();
frame.setLocationRelativeTo(null);
frame.setVisible(true);
}
}
To me, this seems to suggest that there's something wrong with the ColorModel using transferType TYPE_INT. But I'd be happy to be wrong. ;-)
Another thing you could try is to scale the values down to 16 bits, use a TYPE_USHORT raster and color model, and see if that makes a difference. I bet it will, but I'm too lazy to try. ;-)
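A rough sketch of that idea (hedged: intSamples is a hypothetical array holding one unsigned 32-bit value per sample; keeping only the 16 most significant bits is of course lossy):
short[] shortSamples = new short[intSamples.length];
for (int i = 0; i < intSamples.length; i++) {
    // Treat the int as unsigned and keep the top 16 bits of each sample.
    shortSamples[i] = (short) ((intSamples[i] & 0xFFFFFFFFL) >>> 16);
}
DataBuffer dataBuffer = new DataBufferUShort(shortSamples, shortSamples.length);
// ...then build a TYPE_USHORT interleaved raster and ComponentColorModel as usual.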
I'm coding a Java LWJGL game, and everything's going along great, except whenever I try to figure out a way to create a BufferedImage of the current game area. I've searched the internet, browsed all of the OpenGL functions, and I am getting nowhere... Anyone have any ideas? Here's all I have so far, but it only makes a blank .png:
if(Input.getKeyDown(Input.KEY_F2)) {
try {
String fileName = "screenshot-" + Util.getSystemTime(false);
File imageToSave = new File(MainComponent.screenshotsFolder, fileName + ".png");
int duplicate = 0;
while(true) {
duplicate++;
if(imageToSave.exists() == false) {
imageToSave.createNewFile();
break;
}
imageToSave = new File(MainComponent.screenshotsFolder, fileName + "_" + duplicate + ".png");
}
imageToSave.createNewFile();
// Create a buffered image:
BufferedImage image = new BufferedImage(MainComponent.WIDTH, MainComponent.HEIGHT, BufferedImage.TYPE_INT_ARGB);
// Write the new buffered image to file:
ImageIO.write(image, "png", imageToSave);
} catch (IOException e) {
e.printStackTrace();
}
}
You never actually write something into your BufferedImage.
Read the Buffer
You can use glReadPixels to access the selected buffer. (I assume WIDTH and HEIGHT are your OpenGL context dimensions.)
FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData);
imageData.rewind();
Use whatever parameters suit your needs best, I just picked floats randomly.
Set the Image Data
You already figured out how to create and save your image, but in between you should also set some content in the image. You can do this with BufferedImage.setRGB(). (Note that I don't use naming as good as yours, to keep this example concise.)
// create image
BufferedImage image = new BufferedImage(
WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
);
// set content
image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
// save it
File outputfile = new File("Screenshot.png");
try {
ImageIO.write(image, "png", outputfile);
} catch (IOException e) {
e.printStackTrace();
}
The most tricky part is now getting the rgbArray. The problems are that
OpenGL gives you three values (in this case, i.e. using GL11.GL_RGB), while the BufferedImage expects one value.
OpenGL counts the rows from bottom to top while BufferedImage counts from top to bottom.
Calculate one Integer from three Floats
To get rid of problem one, you have to calculate the single integer value that corresponds to the three numbers you get.
I will show this with a simple example, the color red which is (1.0f, 0.0f, 0.0f) in your FloatBuffer.
For the integer value it might be easy to think of numbers in hex values, as you might know from CSS where it's very common to name colors with those. Red would be #ff0000 in CSS or in Java of course 0xff0000.
Colors in RGB with integers are usually represented from 0 to 255 (or 00 to ff in hex), while you use 0 to 1 with floats or doubles. So first you have to map them to the correct range by simply multiplying the values by 255 and casting them to integers:
int r = (int)(fR * 255);
Now you can think of the hex value as just putting those numbers next to each other:
rgb = 255 0 0 = ff 00 00
To achieve this you can bit-shift the integer values. Since one hex digit (0-f) is 4 bits long, you have to shift the value of green 8 bits to the left (two hex digits) and the value of red 16 bits. After that you can simply add them up.
int rgb = (r << 16) + (g << 8) + b;
Getting from BottomUp to TopDown
I know the terminology bottom-up -> top-down is not correct here, but it was catchy.
To access 2D data in a 1D array you usually use some formula (in this case, row-major order) like
int index = offset + (y - yOffset) * stride + (x - xOffset);
Since you want to have the complete image the offsets can be left out and the formula simplified to
int index = y * stride + x;
Of course the stride is simply WIDTH, i.e. the row length (one more than the maximum achievable x value).
The problem you now face is that OpenGL uses the bottom row as row 0 while the BufferedImage uses the top row as row 0. To get rid of that problem just invert y:
int index = ((HEIGHT - 1) - y) * WIDTH + x;
Filling the int[]-array with the Buffer's Data
Now you know how to calculate the rgb value, the correct index and you have all data you need. Let's fill the int[]-array with those information.
int[] rgbArray = new int[WIDTH * HEIGHT];
for(int y = 0; y < HEIGHT; ++y) {
for(int x = 0; x < WIDTH; ++x) {
int r = (int)(imageData.get() * 255) << 16;
int g = (int)(imageData.get() * 255) << 8;
int b = (int)(imageData.get() * 255);
int i = ((HEIGHT - 1) - y) * WIDTH + x;
rgbArray[i] = r + g + b;
}
}
Note three things about this little piece of code.
The size of the array. Obviously it's just WIDTH * HEIGHT and not WIDTH * HEIGHT * 3 as the buffer's size was.
Since OpenGL uses row-major order, you have to use the column value (x) as the inner loop for this 2D array (and of course there are other ways to write this, but this seemed to be the most intuitive one).
Accessing imageData with imageData.get() is probably not the safest way to do it, but since the calculations are carefully done it should do the job just fine. Just remember to flip() or rewind() the buffer before calling get() the first time!
Putting it all together
So with all the information available now we can just put a method saveScreenshot() together.
private void saveScreenshot() {
// read current buffer
FloatBuffer imageData = BufferUtils.createFloatBuffer(WIDTH * HEIGHT * 3);
GL11.glReadPixels(
0, 0, WIDTH, HEIGHT, GL11.GL_RGB, GL11.GL_FLOAT, imageData
);
imageData.rewind();
// fill rgbArray for BufferedImage
int[] rgbArray = new int[WIDTH * HEIGHT];
for(int y = 0; y < HEIGHT; ++y) {
for(int x = 0; x < WIDTH; ++x) {
int r = (int)(imageData.get() * 255) << 16;
int g = (int)(imageData.get() * 255) << 8;
int b = (int)(imageData.get() * 255);
int i = ((HEIGHT - 1) - y) * WIDTH + x;
rgbArray[i] = r + g + b;
}
}
// create and save image
BufferedImage image = new BufferedImage(
WIDTH, HEIGHT, BufferedImage.TYPE_INT_RGB
);
image.setRGB(0, 0, WIDTH, HEIGHT, rgbArray, 0, WIDTH);
File outputfile = getNextScreenFile();
try {
ImageIO.write(image, "png", outputfile);
} catch (IOException e) {
e.printStackTrace();
System.err.println("Can not save screenshot!");
}
}
private File getNextScreenFile() {
// create image name
String fileName = "screenshot_" + getSystemTime(false);
File imageToSave = new File(fileName + ".png");
// check for duplicates
int duplicate = 0;
while(imageToSave.exists()) {
imageToSave = new File(fileName + "_" + ++duplicate + ".png");
}
return imageToSave;
}
// format the time
public static String getSystemTime(boolean getTimeOnly) {
SimpleDateFormat dateFormat = new SimpleDateFormat(
getTimeOnly?"HH-mm-ss":"yyyy-MM-dd'T'HH-mm-ss"
);
return dateFormat.format(new Date());
}
I also uploaded a very simple full working example.
I am getting an int array from a PNG image. How can I convert this into a BufferedImage or create a new PNG file?
int[] pixel = new int[w1*h1];
int i = 0;
for (int xx = 0; xx < h1; xx++) {
for (int yy = 0; yy < w1; yy++) {
pixel[i] = img.getRGB(yy, xx);
i++;
}
}
If you have an array of integers which are packed RGB values, this is the Java code to save it to a file:
int width = 100;
int height = 100;
int[] rgbs = buildRaster(width, height);
DataBuffer rgbData = new DataBufferInt(rgbs, rgbs.length);
WritableRaster raster = Raster.createPackedRaster(rgbData, width, height, width,
new int[]{0xff0000, 0xff00, 0xff},
null);
ColorModel colorModel = new DirectColorModel(24, 0xff0000, 0xff00, 0xff);
BufferedImage img = new BufferedImage(colorModel, raster, false, null);
String fname = "/tmp/whatI.png";
ImageIO.write(img, "png", new File(fname));
System.out.println("wrote to "+fname);
The reason for the array {0xff0000, 0xff00, 0xff} is that the RGB bytes are packed with blue in the least significant byte. If you pack your ints differently, alter that array, as shown below.
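For example, if your ints were packed the other way around, with red in the least significant byte (a hypothetical BGR-style packing), the corresponding lines would become:
// Red in the least significant byte, blue in the most significant one.
WritableRaster raster = Raster.createPackedRaster(rgbData, width, height, width,
        new int[]{0xff, 0xff00, 0xff0000},
        null);
ColorModel colorModel = new DirectColorModel(24, 0xff, 0xff00, 0xff0000);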
You can rebuild the image manually; this is, however, a pretty expensive operation.
BufferedImage image = new BufferedImage(64, 64, BufferedImage.TYPE_INT_RGB);
Graphics g = image.getGraphics();
for(int i = 0; i < pixels.size(); i++)
{
g.setColor(new java.awt.Color(pixels.get(i).getRed(), pixels.get(i).getGreen(), pixels.get(i).getBlue()));
g.fillRect(pixels.get(i).getxPos(), pixels.get(i).getyPos(), 1, 1);
}
try
{
ImageIO.write(image, "PNG", new File("imageName.png"));
}
catch(IOException error)
{
error.printStackTrace();
}
I formatted your image array into an object; this is personal preference though (of course you could use an int array with this model as well). Keep in mind that you can always add alpha there as well, as sketched below.
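A hedged sketch of the alpha variant (assuming your pixel objects also expose a getAlpha() accessor returning 0-255, which is hypothetical here):
BufferedImage image = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB); // ARGB instead of RGB
Graphics g = image.getGraphics();
for (int i = 0; i < pixels.size(); i++) {
    // The 4-argument Color constructor takes the alpha component last.
    g.setColor(new java.awt.Color(pixels.get(i).getRed(), pixels.get(i).getGreen(),
            pixels.get(i).getBlue(), pixels.get(i).getAlpha()));
    g.fillRect(pixels.get(i).getxPos(), pixels.get(i).getyPos(), 1, 1);
}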
Try the ImageIO class, which can read an image object from a byte stream (as long as the bytes are in an encoded format such as PNG or JPEG) and then write it out in a particular format.
try {
BufferedImage bufferedImage = ImageIO.read(new ByteArrayInputStream(yourBytes));
ImageIO.write(bufferedImage, "png", new File("out.png"));
} catch (IOException e) {
e.printStackTrace();
}
I've been trying for two days to find a way to perfectly convert a CMYK image to an RGB one in Java. I went through a lot of different ways to do it, all found on the Web, some of them on Stack Overflow, but I couldn't find one that would do it simply and without the awful color fading that is typical of such conversions. I know that tools like Photoshop or IrfanView do it perfectly in two clicks, but I wanted it to be Java coded.
Well, long story short, I found a way, and here it is.
Thank you for your feedback.
Whome, I tried your way but it gave me either inverted or very strange colors whether I saved the image using ImageIO.write() or JAI.create().
haraldk, I haven't tried your code yet. I had a look at it and it does not seem straightforward to me. I'll give it a try later.
Meanwhile, allow me to post my own way, which is actually made up of other people's ways (this guy: https://stackoverflow.com/a/9470843/2435757 and that other guy: http://www.coderanch.com/t/485449/java/java/RGB-CMYK-Image, among others). It works, although, since a new BufferedImage is created, information such as the resolution or the compression method (for a TIFF image) is lost and must be reset, which this method does not do (I think the only non-JRE lib required here is Apache XML Graphics Commons):
BufferedImage img = null;
try {
img = ImageIO.read(new File("cmyk.jpg"));
} catch (IOException e) {}
ColorSpace cmyk = DeviceCMYKColorSpace.getInstance();
int w = img.getWidth(), h = img.getHeight();
BufferedImage image = null;
byte[] buffer = ((DataBufferByte) img.getRaster().getDataBuffer()).getData();
int pixelCount = buffer.length;
byte[] new_data = new byte[pixelCount / 4 * 3];
float lastC = -1, lastM = -1, lastY = -1, lastK = -1;
float C, M, Y, K;
float[] rgb = new float[3];
// loop through each pixel changing CMYK values to RGB
int pixelReached = 0;
for (int i = 0 ; i < pixelCount ; i += 4) {
C = (buffer[i] & 0xff) / 255f;
M = (buffer[i + 1] & 0xff) / 255f;
Y = (buffer[i + 2] & 0xff) / 255f;
K = (buffer[i + 3] & 0xff) / 255f;
if (lastC == C && lastM == M && lastY == Y && lastK == K) {
//use existing values if not changed
} else { //work out new
rgb = cmyk.toRGB(new float[] {C, M, Y, K});
//cache values
lastC = C;
lastM = M;
lastY = Y;
lastK = K;
}
new_data[pixelReached++] = (byte) (rgb[0] * 255);
new_data[pixelReached++] = (byte) (rgb[1] * 255);
new_data[pixelReached++] = (byte) (rgb[2] * 255);
}
// turn data into RGB image
image = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
int[] l_bandoff = {0, 1, 2};
PixelInterleavedSampleModel l_sm = new PixelInterleavedSampleModel(DataBuffer.TYPE_BYTE, w, h, 3, w * 3, l_bandoff); // the sample model type must match the DataBufferByte used below
image.setData(new ByteInterleavedRaster(l_sm, new DataBufferByte(new_data, new_data.length), new Point(0, 0)));
// write
ImageIO.write(image, "jpg", new File("rgb.jpg"));
The above code gives me excellent results for both JPEG and TIFF images, although I happened to get a very strange result with a particular image.
Here is another, much simpler and straightforward, way by JMagick:
ImageInfo info = new ImageInfo("cmyk.tif");
MagickImage image = new MagickImage(info);
image.transformRgbImage(ColorspaceType.CMYKColorspace);
image.setFileName("rgb.tif");
image.writeImage(info);
Couldn't be shorter, could it? Also works like a charm for both JPEG and TIFF.
And no, haraldk, I didn't use any reference to a color profile. That seems quite weird to me too. I can only assume that both ways use a default color profile and that I've been lucky enough for it to work fine in all cases so far.
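For the record, if you ever do want an explicit profile rather than the default conversion, here is a hedged sketch (the .icc file name is just a placeholder for whatever CMYK profile you have at hand); the resulting ColorSpace can then be used with toRGB() exactly like DeviceCMYKColorSpace above:
import java.awt.color.ColorSpace;
import java.awt.color.ICC_ColorSpace;
import java.awt.color.ICC_Profile;
import java.io.FileInputStream;
import java.io.IOException;

public static ColorSpace loadCmykColorSpace() throws IOException {
    // Build a ColorSpace from a real CMYK ICC profile instead of relying on a default one.
    try (FileInputStream in = new FileInputStream("USWebCoatedSWOP.icc")) {
        return new ICC_ColorSpace(ICC_Profile.getInstance(in));
    }
}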
I am waiting for your feedback on this.
Cheers.
PS: I would be more than glad to give you links to the images I use, but Stack Overflow says I'm not reliable enough :-) In another post maybe, if you require them.
What SO answers did you try and find not working properly?
Did any of them give this example code? Does it create color fading? Would you please share a link to an example image that shows the problem?
/**
* ImageIO cannot read CMYK JPEGs; it throws IIOException (Unsupported Image Type).
* This method tries to read a CMYK image.
* @param file
* @return image TYPE_4BYTE_ABGR
* @throws Exception
*/
public static BufferedImage readCMYKImage(File file) throws Exception {
Iterator<ImageReader> readers = ImageIO.getImageReadersByFormatName("JPEG");
ImageReader reader = null;
while(readers.hasNext()) {
reader = readers.next();
if(reader.canReadRaster())
break;
}
FileInputStream fis = new FileInputStream(file);
try {
ImageInputStream input = ImageIO.createImageInputStream(fis);
reader.setInput(input); // original CMYK-jpeg stream
Raster raster = reader.readRaster(0, null); // read image raster
BufferedImage image = new BufferedImage(raster.getWidth(), raster.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
image.getRaster().setRect(raster);
return image;
} finally {
try { fis.close(); } catch(Exception ex) {}
}
}