Grayscale PNG writing from BufferedImage using PNGJ - highest-level approach - java

I just want to, at first, use PNGJ at the highest possible level to write a grayscale PNG with bit-depth 8.
I am working from a BufferedImage. Here's a snippet of the code used:
BufferedImage scaledBI;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
...
// cols, rows, bitdepth=8, alpha=false, grayscale=true, indexed=false
ImageInfo imi = new ImageInfo(scaledBI.getWidth(), scaledBI.getHeight(), 8, false, true, false);
PngWriter pngw = new PngWriter(baos, imi);
pngw.setCompLevel(3);
DataBufferByte db = (DataBufferByte) scaledBI.getRaster().getDataBuffer();
byte[] dbbuf = db.getData();
ImageLineByte line = new ImageLineByte(imi, dbbuf);
for (int row = 0; row < imi.rows; row++) {
    pngw.writeRow(line, row);
}
pngw.end();
return baos.toByteArray();
The original image is a palm print scan in JPEG 2000 (j2k). The resulting PNG looks like the background of the j2k, with some vertical gray lines of different shades.
I tried the same buffered image data with ImageIO:
ImageIO.write(scaledBI, "png", baos);
and com.pngencoder.PngEncoder:
new PngEncoder().withBufferedImage(scaledBI).withCompressionLevel(3).toStream(baos);
which both render the image properly.
The long-term aim is to improve on the speed offered by ImageIO and PngEncoder (the latter doesn't support grayscale).
I am stuck on just trying to run PNGJ with all default settings. I looked at the snippets in the PNGJ wiki and the test code, but I couldn't find a simple grayscale example without scanline or chunk manipulation.
Looking at the PNGJ code, it seems we go properly through block deflation row after row. What am I missing here?

My problem sprang from my misunderstanding of what ImageLineByte stores: as the name suggests, it deals with a single line, not the entire image data. I got confused by the fact that it references the image info. My output was fine once I used an ImageLineByte per line:
for (int row = 0, offset = 0; row < imi.rows; row++) {
    int newOffset = offset + imi.cols;
    byte[] lineBuffer = Arrays.copyOfRange(dbbuf, offset, newOffset);
    ImageLineByte line = new ImageLineByte(imi, lineBuffer);
    pngw.writeRow(line, row);
    offset = newOffset;
}
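As a side note, Arrays.copyOfRange allocates a fresh buffer for every row. Since the ImageLineByte(imi, lineBuffer) constructor appears to wrap the array you pass in rather than copy it (an assumption worth checking against the PNGJ source), a single reusable row buffer should work too:
// Sketch: reuse one row buffer instead of allocating one per row.
// Assumes the ImageLineByte constructor wraps the passed array and that
// writeRow() consumes the scanline before returning.
byte[] lineBuffer = new byte[imi.cols]; // 8-bit grayscale: 1 byte per pixel
ImageLineByte line = new ImageLineByte(imi, lineBuffer);
for (int row = 0; row < imi.rows; row++) {
    System.arraycopy(dbbuf, row * imi.cols, lineBuffer, 0, imi.cols);
    pngw.writeRow(line, row);
}
pngw.end();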
Sorry for the trouble.

Related

How to store an image into a FITS file

This is now solved; see the code below, which works.
The initial question was:
I need to store an image, received in a byte[] array, into a FITS file. I'm using Java 8 and the fits.jar library (nom.tam.fits).
The following code seems to work in first instance but when I open the fits file in ds9, it displays a narrow band instead of the image.
In the following code, imageData is the byte[] array for the image
(one byte per pixel):
Fits f = new Fits();
// convert the unsigned bytes to ints
int[] uBytes = new int[imageData.length];
for (int i = 0; i < imageData.length; i++) {
    uBytes[i] = imageData[i] & 0xff;
}
int[] dims = new int[]{imageSize_x, imageSize_y};
int[][] data = (int[][]) ArrayFuncs.curl(uBytes, dims);
BasicHDU h = Fits.makeHDU(data);
f.addHDU(h);
OutputStream os = new FileOutputStream("testImage.fits");
BufferedDataOutputStream s = new BufferedDataOutputStream(os);
f.write(s);
f.close();
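To sanity-check the result outside ds9, the file can be read back with the same library and the dimensions inspected. A minimal sketch (API names are from nom.tam.fits as I recall them, so treat them as an assumption):
Fits in = new Fits("testImage.fits");
BasicHDU hdu = in.readHDU();              // first HDU holds the image
int[][] img = (int[][]) hdu.getKernel();  // the raw data array
System.out.println(img.length + " x " + img[0].length);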
I must confess I was a bit silly, since I had already managed to solve the same problem when I tried to display the image with JavaFX:
JavaFX. Displaying an image from byte[]

Convert image from byte array to bitmap with planar 4:2:0 YUV full scale pixel format

Update: I contacted the vendor of the device and they let me know it is using the planar 4:2:0 YUV full-scale pixel format. Upon researching I found there seem to be three major formats for YUV 4:2:0: I420, J420 and YV12.
I was excited because there were constants for this image format in the Android YuvImage class; when running my code, however, I got the following exception:
java.lang.IllegalArgumentException: only support ImageFormat.NV21 and ImageFormat.YUY2 for now
Well, that's a bummer.
After that I learned about the differences between YUV420 and NV21:
I tried to write a simple function to interleave the two chroma planes, as shown in the NV21 pixel format image.
public static void convertYUV420ToNV21(byte[] data_yuv420, byte[] data_yuv_N21) {
    int idx = (int) (data_yuv_N21.length * (2.0f / 3.0f)); // start of the chroma data
    int j = idx;
    int chroma_plane_end = idx + ((data_yuv_N21.length - idx) / 2);
    for (int i = idx; i < chroma_plane_end; i++) {
        data_yuv_N21[j] = data_yuv420[i];
        j += 2;
    }
    j = idx + 1;
    for (int i = chroma_plane_end; i < data_yuv_N21.length; i++) {
        data_yuv_N21[j] = data_yuv420[i];
        j += 2;
    }
}
However, the result still looks the same as with my original code.
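For reference, here is a cleaner version of what I was attempting, written as a sketch under the assumption that the source is I420 (full Y plane, then the U plane, then the V plane, tightly packed with no padding). Note that NV21 interleaves V before U, so the two chroma planes are swapped relative to my function above:
// Sketch: I420 (Y + U + V planes) -> NV21 (Y + interleaved V/U)
public static byte[] i420ToNv21(byte[] i420, int width, int height) {
    int ySize = width * height;
    int chromaSize = ySize / 4;
    byte[] nv21 = new byte[ySize + 2 * chromaSize];
    System.arraycopy(i420, 0, nv21, 0, ySize); // the Y plane is identical
    int uOffset = ySize;                       // U plane follows Y in I420
    int vOffset = ySize + chromaSize;          // V plane follows U
    for (int i = 0; i < chromaSize; i++) {
        nv21[ySize + 2 * i] = i420[vOffset + i];     // NV21 stores V first,
        nv21[ySize + 2 * i + 1] = i420[uOffset + i]; // then U
    }
    return nv21;
}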
One possible reason I was thinking about was the size of the byte array (1,843,200). I read that for YUV420 the depth of one pixel is 12 bits. The camera resolution is 1280x720, which is 921,600 pixels, or 1,382,400 bytes; the actual byte array is a third larger than that. I read there might be some padding between the planes, but I'm stuck on how to find out about that.
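A quick size check for the candidate formats (plain arithmetic; the observed buffer size matches the packed 16-bpp formats exactly, which also lines up with the YUYV cameras mentioned in the original post below):
int w = 1280, h = 720;
int pixels = w * h;               // 921,600
int yuv420Bytes = pixels * 3 / 2; // 1,382,400 (12 bpp: I420/YV12/NV21)
int yuy2Bytes = pixels * 2;       // 1,843,200 (16 bpp: YUY2/YUYV) -- the observed size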
The YuvImage class has a strides parameter in its constructor, but I'm not sure how to use it even after reading the Android developer documentation.
Any clues?
Original Post:
I'm having the following problem: I'm trying to access the camera of a device where there is no information provided on what type of camera or image format is used. The only information provided is on how to retrieve a byte array containing the video stream output.
I found out however that the resolution is 1280x720 and the byte array size is 1843200. By googling I stumbled across cameras with the exact same size and dimensions using YUYV and similar pixel formats.
Based on that knowledge I wrote the code below:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuv = new YuvImage(data, ImageFormat.YUY2, 1280, 720, null);
yuv.compressToJpeg(new Rect(0, 0, 1280, 720), 100, out);
byte[] bytes = out.toByteArray();
bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.length);
if (bitmap != null) {
    ImageView cameraImageView = (ImageView) findViewById(R.id.imageView);
    cameraImageView.setImageBitmap(bitmap);
}
The BitmapFactory.decodeByteArray function returned a valid bitmap, but when displaying it I saw the image had a green tint and purple spots, probably something related to the color channels?
Sample image: (omitted)
Is there a way how to find out the exact pixel format/ encoding that has been used? I'm not sure what other things to try from here on out.
Any advice is appreciated, thanks!
Try this:
/**
 * Save YUV image data (NV21 or YUY2) as JPEG to a file.
 */
public static boolean saveYUYToJPEG(byte[] imageData, File saveTo, int format, int quality,
        int width, int height, int rotation, boolean flipHorizontally) {
    FileOutputStream fileOutputStream = null;
    try {
        fileOutputStream = new FileOutputStream(saveTo);
        YuvImage yuvImg = new YuvImage(imageData, format, width, height, null);
        ByteArrayOutputStream jpegOutput = new ByteArrayOutputStream(imageData.length);
        yuvImg.compressToJpeg(new Rect(0, 0, width, height), quality, jpegOutput);
        Bitmap yuvBitmap = BitmapFactory.decodeByteArray(jpegOutput.toByteArray(), 0, jpegOutput.size());
        Matrix imageMatrix = new Matrix();
        if (rotation != 0) {
            imageMatrix.postRotate(rotation);
        }
        if (flipHorizontally) {
            imageMatrix.postScale(-1, 1); // mirror around the vertical axis
        }
        yuvBitmap = Bitmap.createBitmap(yuvBitmap, 0, 0, yuvBitmap.getWidth(), yuvBitmap.getHeight(), imageMatrix, true);
        yuvBitmap.compress(CompressFormat.JPEG, quality, fileOutputStream);
    } catch (FileNotFoundException e) {
        return false;
    } finally {
        if (fileOutputStream != null) {
            try { fileOutputStream.close(); } catch (IOException ignored) { }
        }
    }
    return true;
}
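Called, for example, like this (the buffer name, path and values are only examples):
saveYUYToJPEG(nv21Data, new File("/sdcard/frame.jpg"), ImageFormat.NV21, 90, 1280, 720, 0, false);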

Writing javafx.scene.image.Image to file?

How would I go about writing a javafx.scene.image.Image to a file? I know you can use ImageIO on BufferedImages, but is there any way to do it with a JavaFX Image?
Just convert it to a BufferedImage first, using javafx.embed.swing.SwingFXUtils:
Image image = ... ; // javafx.scene.image.Image
String format = ... ;
File file = ... ;
ImageIO.write(SwingFXUtils.fromFXImage(image, null), format, file);
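Filled in with concrete values, as a sketch (the file names are just examples); note that SwingFXUtils lives in the javafx.embed.swing package (the javafx.swing module on JavaFX 9+), which must be on your class or module path:
Image image = new Image("file:input.png");
File file = new File("output.png");
ImageIO.write(SwingFXUtils.fromFXImage(image, null), "png", file);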
Almost 3 years later, I now have the knowledge to answer this myself. Yes, the original answer is also valid, but it involves first converting the image to a BufferedImage, and I ideally wanted to avoid Swing entirely. While this outputs the raw RGBA version of the image, that's good enough for what I needed to do. I could actually just use raw BGRA, since I was writing the software that opens the result, but since GIMP can't open that I figured I'd convert it to RGBA.
Image img = new Image("file:test.png");
int width = (int) img.getWidth();
int height = (int) img.getHeight();
PixelReader reader = img.getPixelReader();
byte[] buffer = new byte[width * height * 4];
WritablePixelFormat<ByteBuffer> format = PixelFormat.getByteBgraInstance();
reader.getPixels(0, 0, width, height, format, buffer, 0, width * 4);
try (BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("test.data"))) {
    // swizzle BGRA -> RGBA, one 4-byte pixel at a time
    for (int count = 0; count < buffer.length; count += 4) {
        out.write(buffer[count + 2]); // R
        out.write(buffer[count + 1]); // G
        out.write(buffer[count]);     // B
        out.write(buffer[count + 3]); // A
    }
    out.flush();
} catch (IOException e) {
    e.printStackTrace();
}
JavaFX has no built-in method to do this.
To solve this problem, I implemented a very small (< 20KiB) library for writing PNG files: https://github.com/Glavo/SimplePNG
Usage:
Image img = new Image("path-to-image.jpg");
try (PNGWriter writer = new PNGWriter(Files.newOutputStream(Path.of("output.png")))) {
writer.write(PNGJavaFXUtils.asArgbImage(img));
}
// Or you can use the shortcut:
// PNGJavaFXUtils.writeImage(img, Path.of("output.png"));
It has no dependencies and works on a JRE that has only java.base, so it avoids the dependency on Java AWT (java.desktop).

How to get byte[] from javafx imageView?

How do I get a byte[] from the JavaFX Image/ImageView class? I want to store my image as a Blob in my database. This is the method I use for it:
public PreparedStatement prepareQuery(HSQLDBConnector connector) {
    try {
        Blob logoBlob = connector.connection.createBlob();
        logoBlob.setBytes(0, logo.getImage()); // stuck here
        for (int i = 0, a = 1; i < data.length; i++, a++) {
            connector.prepStatCreateProfile.setString(a, data[i]);
        }
        // store LOB
        connector.prepStatCreateProfile.setBlob(11, logoBlob);
    } catch (SQLException ex) {
        ex.printStackTrace();
    }
    return connector.prepStatCreateProfile;
}
Is there a way to convert from my current objects (ImageView/Image) into a byte[]? Or should I start thinking about using another class for my image, or alternatively point to the location with a reference and work with paths/URLs?
Try this one:
BufferedImage bImage = SwingFXUtils.fromFXImage(logo.getImage(), null);
ByteArrayOutputStream s = new ByteArrayOutputStream();
ImageIO.write(bImage, "png", s);
byte[] res = s.toByteArray();
s.close(); //especially if you are using a different output stream.
It should work, depending on the logo class.
You need to specify a format while writing and reading, and as far as I remember BMP is not supported, so you will end up with a PNG byte array in the database.
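The resulting array can then go straight into the prepared statement (the column index is taken from the question; PreparedStatement.setBytes is plain JDBC, so no Blob object is strictly needed):
connector.prepStatCreateProfile.setBytes(11, res);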
A pure JavaFX solution sketch (you will have to fill in the missing parts):
Image i = logo.getImage();
PixelReader pr = i.getPixelReader();
int width = (int) i.getWidth();
int height = (int) i.getHeight();
WritablePixelFormat<IntBuffer> wf = PixelFormat.getIntArgbInstance();
int[] buffer = new int[width * height]; // one int per ARGB pixel
pr.getPixels(0, 0, width, height, wf, buffer, 0, width);
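To get from that int[] to the byte[] a Blob wants, one straightforward (uncompressed) option is to pack the ARGB ints with a java.nio.ByteBuffer; a sketch:
// Pack the ARGB ints into bytes (4 bytes per pixel, big-endian ARGB order)
ByteBuffer bb = ByteBuffer.allocate(buffer.length * 4);
bb.asIntBuffer().put(buffer);
byte[] bytes = bb.array();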
Lorenzo's answer is correct; this answer just examines efficiency and portability aspects.
Depending on the image type and storage requirements, it may be efficient to convert the image to a compressed format for storage, for example:
ByteArrayOutputStream byteOutput = new ByteArrayOutputStream();
ImageIO.write(SwingFXUtils.fromFXImage(fxImage, null), "png", byteOutput);
Blob logoBlob = connector.connection.createBlob();
logoBlob.setBytes(1, byteOutput.toByteArray()); // JDBC Blob positions are 1-based
Another advantage of converting to a common format like PNG before persisting the image is that other programs which deal with the database would be able to read the image without trying to convert it from a JavaFX-specific byte array storage format.

Java: saving image as JPEG skew problem

I am trying to save an image as JPEG. The code below works fine when the image width is a multiple of 4, but the image is skewed otherwise. It has something to do with padding. When I was debugging, I was able to save the image correctly as a bitmap by padding each row with 0s; however, this did not work out with the JPEG.
The main point to remember is that my image is represented as a BGR (blue, green, red; 1 byte each) byte array which I receive from a native call.
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
if (data.length != 3 * x * y)
{
    // 3 bytes per pixel
    return false;
}
// create buffered image from raw data
DataBufferByte buffer = new DataBufferByte(data, 3 * x * y);
ComponentSampleModel csm = new ComponentSampleModel(DataBuffer.TYPE_BYTE, x, y, 3, 3 * x, new int[]{0, 1, 2});
WritableRaster raster = Raster.createWritableRaster(csm, buffer, new Point(0, 0));
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_BGR); // because Windows goes the wrong way...
buff_image.setData(raster);
// save the BufferedImage as a JPEG
try
{
    File file = new File(file_name);
    FileOutputStream out = new FileOutputStream(file);
    JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
    JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
    param.setQuality(1.0f, false);
    encoder.setJPEGEncodeParam(param);
    encoder.encode(buff_image);
    out.close();
    // or JDK 1.4+:
    // ImageIO.write(buff_image, "JPEG", out);
}
catch (Exception ex)
{
    // write permissions on "file_name"
    return false;
}
I also looked at creating the JPEG in C++, but there was even less material on that; it is still an option, though.
Any help greatly appreciated.
Leon
Thanks for your suggestions, but I have managed to work it out.
To capture the image I was using WINGDIAPI HBITMAP WINAPI CreateDIBSection in C++, and OpenGL would draw to that bitmap. Unbeknownst to me, padding was added to the bitmap automatically when the width was not a multiple of 4, so Java was incorrectly interpreting the byte array.
The correct way to interpret the bytes is:
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
int x_padding = x % 4;
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_RGB);
int val;
for (int j = 0; j < y; j++)
{
    for (int i = 0; i < x; i++)
    {
        val = ( data[(i + j * x) * 3 + j * x_padding + 2] & 0xff) +
              ((data[(i + j * x) * 3 + j * x_padding + 1] & 0xff) << 8) +
              ((data[(i + j * x) * 3 + j * x_padding + 0] & 0xff) << 16);
        buff_image.setRGB(i, j, val);
    }
}
// save the BufferedImage as a JPEG
try
{
    File file = new File(file_name);
    FileOutputStream out = new FileOutputStream(file);
    JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
    JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
    param.setQuality(1.0f, false);
    encoder.setJPEGEncodeParam(param);
    encoder.encode(buff_image);
    out.close();
}
catch (Exception ex)
{
    return false;
}
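For reference, the x % 4 shortcut works because, for 24-bpp DIBs, the pad happens to equal width % 4; the general rule is that each scanline is padded up to a 4-byte boundary. A small sketch of the general computation:
// General DIB scanline stride: each row is padded to a 4-byte boundary.
static int dibStride(int widthPixels, int bytesPerPixel) {
    int rowBytes = widthPixels * bytesPerPixel;
    return (rowBytes + 3) & ~3; // round up to the next multiple of 4
}
// For 3 bytes per pixel, dibStride(x, 3) - 3 * x equals x % 4.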
The JPEG standard is extremely complex. I am thinking it may be an issue with padding the output of the DCT somehow. The encoder subsamples the chroma (e.g. to YCbCr 4:2:2) and then runs one DCT per channel (Y, Cb, and Cr) on each "macroblock" or "minimum coded unit", depending on your context; JPEG usually uses 8x8 blocks. When it is at an edge and there are not enough pixels, it clamps the edge value, "drags it across", and does a DCT on that.
I am not sure if this helps, but it sounds like a non-standard-conforming file. I suggest you use JPEGsnoop to find out more. There are also several explanations of how JPEG compression works.
One possibility is that the sample rate may be encoded incorrectly. It might be something exotic, such as 4:2:1; then you might be pulling twice as many X samples as there really are, thus distorting the image.
It is an image I captured from the screen.
Maybe the Screen Image class will be easier to use.
