This is now solved; see the code below.
The initial question was:
I need to store an image, received in a byte[] array, into a FITS file. I'm using Java 8 and the fits.jar library (nom.tam.fits).
The following code seems to work at first, but when I open the FITS file in ds9, it displays a narrow band instead of the image.
In the following code, imageData is the byte[] array for the image (one byte per pixel):
Fits f = new Fits();
int[] uBytes = new int[imageData.length]; // imageData is the byte[] with the pixels
for (int i = 0; i < imageData.length; i++) {
    uBytes[i] = imageData[i] & 0xff;
}
int[] dims = new int[]{imageSize_x, imageSize_y};
int[][] data = (int[][]) ArrayFuncs.curl(uBytes, dims);
BasicHDU h = Fits.makeHDU(data);
f.addHDU(h);
OutputStream os = new FileOutputStream("testImage.fits");
BufferedDataOutputStream s = new BufferedDataOutputStream(os);
f.write(s);
f.close();
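One thing worth checking if the output still looks like a narrow band: nom.tam.fits indexes 2-D data as [rows][columns], so the dims passed to ArrayFuncs.curl would plausibly need to be {imageSize_y, imageSize_x} rather than {imageSize_x, imageSize_y}. A minimal sketch under that assumption (not the confirmed fix, just how the pieces would fit):
// Sketch, assuming row-major pixel data and a [rows][columns] target array
int[] uBytes = new int[imageData.length];
for (int i = 0; i < imageData.length; i++) {
    uBytes[i] = imageData[i] & 0xff;   // promote unsigned byte to int
}
int[][] pixels = (int[][]) ArrayFuncs.curl(uBytes, new int[]{imageSize_y, imageSize_x});

Fits f = new Fits();
f.addHDU(Fits.makeHDU(pixels));
try (BufferedDataOutputStream out =
        new BufferedDataOutputStream(new FileOutputStream("testImage.fits"))) {
    f.write(out);
}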
I must confess I was a bit silly, since I had already managed to solve the same problem when trying to display the image with JavaFX:
JavaFX. Displaying an image from byte[]
To start, I just want to use PNGJ at the highest possible level to write a grayscale PNG with bit depth 8.
I am working from a BufferedImage. Here's a snippet of the code used:
BufferedImage scaledBI;
ByteArrayOutputStream baos = new ByteArrayOutputStream();
...
ImageInfo imi = new ImageInfo(scaledBI.getWidth(), scaledBI.getHeight(), 8, false, true, false);
PngWriter pngw = new PngWriter(baos, imi);
pngw.setCompLevel(3);
DataBufferByte db = (DataBufferByte) scaledBI.getRaster().getDataBuffer();
byte[] dbbuf = db.getData();
ImageLineByte line = new ImageLineByte(imi, dbbuf);
for (int row = 0; row < imi.rows; row++) {
pngw.writeRow(line, row);
}
pngw.end();
return baos.toByteArray();
The original image is in j2k and is a palm print scan. The PNG looks similar to the background of the j2k with some vertical gray lines of different shades.
I tried the same buffered image data with ImageIO:
ImageIO.write(scaledBI, "png", baos);
and com.pngencoder.PngEncoder:
new PngEncoder().withBufferedImage(scaledBI).withCompressionLevel(3).toStream(baos);
which both render the image properly.
The long term aim is to improve on the speed offered by ImageIO and PngEncoder (which doesn't support grayscale).
I am stuck on just trying to run PNGJ with all default settings. I looked at the snippets in PNGJ Wiki Snippets and the test code but I couldn't find a simple example with grayscale without scanline or chunk manipulation.
Looking at the PNGJ code it seems like we are properly going through block deflation row after row. What am I missing here?
My problem sprang from my misunderstanding of what ImageLineByte stores: as the name suggests, it deals with a single line, not the entire image data. I got confused by the fact that it references the image info. My output was fine once I used an ImageLineByte for each line:
for (int row = 0, offset = 0; row < imi.rows; row++) {
    int newOffset = offset + imi.cols;
    // copy just this row's samples out of the raster buffer
    byte[] lineBuffer = Arrays.copyOfRange(dbbuf, offset, newOffset);
    ImageLineByte line = new ImageLineByte(imi, lineBuffer);
    pngw.writeRow(line, row);
    offset = newOffset;
}
Sorry for the trouble.
How would I go about writing a javafx.scene.image.Image to a file? I know you can use ImageIO on BufferedImages, but is there any way to do it with a JavaFX Image?
Just convert it to a BufferedImage first, using javafx.embed.swing.SwingFXUtils:
Image image = ... ; // javafx.scene.image.Image
String format = ... ;
File file = ... ;
ImageIO.write(SwingFXUtils.fromFXImage(image, null), format, file);
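For example, a filled-in version assuming a PNG target (the file names here are placeholders):
// Hypothetical example: save a JavaFX Image as a PNG via SwingFXUtils + ImageIO
Image image = new Image("file:input.jpg");
File file = new File("output.png");
ImageIO.write(SwingFXUtils.fromFXImage(image, null), "png", file);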
Almost 3 years later, I now have the knowledge to answer this myself. The original answer is also valid, but it involves first converting the image to a BufferedImage, and I ideally wanted to avoid Swing entirely. While this outputs the raw RGBA version of the image, that's good enough for what I needed to do. I could actually have used raw BGRA, since I was writing the software that opens the result, but since GIMP can't open that I figured I'd convert it to RGBA.
Image img = new Image("file:test.png");
int width = (int) img.getWidth();
int height = (int) img.getHeight();
PixelReader reader = img.getPixelReader();
byte[] buffer = new byte[width * height * 4];
WritablePixelFormat<ByteBuffer> format = PixelFormat.getByteBgraInstance();
reader.getPixels(0, 0, width, height, format, buffer, 0, width * 4);
try {
BufferedOutputStream out = new BufferedOutputStream(new FileOutputStream("test.data"));
for(int count = 0; count < buffer.length; count += 4) {
out.write(buffer[count + 2]);
out.write(buffer[count + 1]);
out.write(buffer[count]);
out.write(buffer[count + 3]);
}
out.flush();
out.close();
} catch(IOException e) {
e.printStackTrace();
}
JavaFX has no built-in method to do this.
To solve this problem, I implemented a very small (< 20KiB) library for writing PNG files: https://github.com/Glavo/SimplePNG
Usage:
Image img = new Image("path-to-image.jpg");
try (PNGWriter writer = new PNGWriter(Files.newOutputStream(Path.of("output.png")))) {
writer.write(PNGJavaFXUtils.asArgbImage(img));
}
// Or you can use the shortcut:
// PNGJavaFXUtils.writeImage(img, Path.of("output.png"));
It has no dependencies and works on a JRE that provides only java.base; it avoids the dependency on Java AWT (java.desktop).
How do I get a byte[] from the JavaFX Image/ImageView class? I want to store my image as a Blob in my database. This is the method I use for it:
public PreparedStatement prepareQuery(HSQLDBConnector connector) {
try {
Blob logoBlob = connector.connection.createBlob();
logoBlob.setBytes(0,logo.getImage());//stuck here
for (int i = 0, a = 1; i < data.length; i++, a++) {
connector.prepStatCreateProfile.setString(a, data[i]);
}
//store LOB
connector.prepStatCreateProfile.setBlob(11, logoBlob);
} catch (SQLException ex) {
ex.printStackTrace();
}
return connector.prepStatCreateProfile;
}
Is there a way to convert my current object (ImageView / Image) into a byte[]? Or should I start thinking about using another class for my image, or alternatively point to the location with a reference and work with paths/URLs?
try this one:
BufferedImage bImage = SwingFXUtils.fromFXImage(logo.getImage(), null);
ByteArrayOutputStream s = new ByteArrayOutputStream();
ImageIO.write(bImage, "png", s);
byte[] res = s.toByteArray();
s.close(); //especially if you are using a different output stream.
This should work, depending on what the logo class is.
You need to specify a format when writing and reading; as far as I remember BMP is not supported, so you will end up with a PNG byte array in the database.
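To get the image back out later, those PNG bytes can be decoded straight into a JavaFX Image again (a sketch, assuming res holds the bytes read back from the database):
// Decode the stored PNG bytes back into a JavaFX Image
Image restored = new Image(new ByteArrayInputStream(res));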
Pure JavaFX solution trace (you will still have to fill in the missing points):
Image i = logo.getImage();
PixelReader pr = i.getPixelReader();
int w = (int) i.getWidth();
int h = (int) i.getHeight();
WritablePixelFormat<IntBuffer> wf = PixelFormat.getIntArgbInstance();
int[] buffer = new int[w * h]; // one ARGB int per pixel
pr.getPixels(0, 0, w, h, wf, buffer, 0, w);
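To finish the trace, the int[] still has to become a byte[] before it can go into the Blob; one hedged way is to pack the ARGB ints through a ByteBuffer:
// Pack each ARGB int into 4 big-endian bytes (illustrative finishing step)
ByteBuffer bb = ByteBuffer.allocate(buffer.length * 4);
bb.asIntBuffer().put(buffer);
byte[] bytes = bb.array();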
Lorenzo's answer is correct, this answer just examines efficiency and portability aspects.
Depending on the image type and storage requirements, it may be efficient to convert the image to a compressed format for storage, for example:
ByteArrayOutputStream byteOutput = new ByteArrayOutputStream();
ImageIO.write(SwingFXUtils.fromFXImage(fxImage, null), "png", byteOutput);
Blob logoBlob = connector.connection.createBlob();
logoBlob.setBytes(1, byteOutput.toByteArray()); // Blob positions are 1-based
Another advantage of converting to a common format like PNG before persisting the image is that other programs which deal with the database can read the image without having to convert it from a JavaFX-specific byte-array storage format.
I am creating an application which will read image bytes/pixels/data from a .bmp image and store them in a byte/char/int/etc. array.
Now, from this array, I want to subtract 10 (in decimal) from the data stored at the 10th index of the array.
I am able to successfully store the image information in the array, but when I try to write the array back to a .bmp image, the resulting image is not viewable.
This is the piece of code where I tried to do so. (In this code, I am not yet subtracting 10 from the 10th index of the array.)
public class Test1 {
public static void main(String[] args) throws IOException{
File inputFile = new File("d://test.bmp");
FileReader inputStream = new FileReader("d://test.bmp");
FileOutputStream outputStream = new FileOutputStream("d://test1.bmp");
/*
* Create byte array large enough to hold the content of the file.
* Use File.length to determine size of the file in bytes.
*/
char fileContent[] = new char[(int)inputFile.length()];
for(int i = 0; i < (int)inputFile.length(); i++){
fileContent[i] = (char) inputStream.read();
}
for(int i = 0; i < (int)inputFile.length(); i++){
outputStream.write(fileContent[i]);
}
}
}
Instead of char[], use byte[].
Here's a modified version of your code which works:
public class Test {
public static void main(String[] args) throws IOException {
File inputFile = new File("someinputfile.bmp");
FileOutputStream outputStream = new FileOutputStream("outputfile.bmp");
/*
* Create byte array large enough to hold the content of the file.
* Use File.length to determine size of the file in bytes.
*/
byte fileContent[] = new byte[(int)inputFile.length()];
new FileInputStream(inputFile).read(fileContent);
for(int i = 0; i < (int)inputFile.length(); i++){
outputStream.write(fileContent[i]);
}
outputStream.close();
}
}
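On Java 7+ the same copy can be written more compactly with java.nio.file.Files (a sketch, not part of the original answer):
// Read the whole file into a byte[] and write it back out
byte[] fileContent = Files.readAllBytes(Paths.get("someinputfile.bmp"));
Files.write(Paths.get("outputfile.bmp"), fileContent);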
To make your existing code work you should replace the FileReader with a FileInputStream. According to the FileReader javadoc:
FileReader is meant for reading streams of characters. For reading streams of raw bytes, consider using a FileInputStream.
Modifying your sample as below
public static void main(String[] args) throws IOException
{
File inputFile = new File("d://test.bmp");
FileInputStream inputStream = new FileInputStream("d://test.bmp");
FileOutputStream outputStream = new FileOutputStream("d://test1.bmp");
/*
* Create byte array large enough to hold the content of the file.
* Use File.length to determine size of the file in bytes.
*/
byte fileContent[] = new byte[(int)inputFile.length()];
for(int i = 0; i < (int)inputFile.length(); i++){
fileContent[i] = (byte) inputStream.read();
}
inputStream.close();
for(int i = 0; i < (int)inputFile.length(); i++){
outputStream.write(fileContent[i]);
}
outputStream.flush();
outputStream.close();
}
This works for me to create a copy of the original image.
Though, as mentioned in the comments above, this is probably not the correct approach for what you are trying to achieve.
Others have pointed out errors in your code (mostly using char instead of byte); however, even if you fix that, you will probably still end up with a non-loadable image if you change the value of the 10th byte in the file.
This is because a .bmp file starts with a header containing information about the file (color depth, dimensions, ... see BMP file format) before any actual image data. Specifically, the 10th byte is part of a 4-byte integer storing the offset of the actual image data (the pixel array). Subtracting 10 from this value will probably make the offset point at the wrong place in the file, and an image loader doing bounds checking will consider the file invalid.
What you really want to do is load the image as an image and manipulate the pixel values directly. Something like this:
BufferedImage originalImage = ImageIO.read(new File("d://test.bmp"));
int rgb = originalImage.getRGB(10, 0);
originalImage.setRGB(10, 0, rgb >= 10 ? rgb - 10 : 0);
ImageIO.write(originalImage, "bmp", new File("d://test1.bmp"));
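If you do want to stay at the raw-byte level instead, a hedged sketch that first reads the pixel-array offset from the header (bytes 10-13, little-endian) and then modifies a byte inside the pixel data rather than the header:
// Sketch: patch a byte inside the pixel array, leaving the BMP header intact
byte[] fileContent = Files.readAllBytes(Paths.get("d://test.bmp"));
int pixelDataOffset = (fileContent[10] & 0xff)
        | (fileContent[11] & 0xff) << 8
        | (fileContent[12] & 0xff) << 16
        | (fileContent[13] & 0xff) << 24;
int target = pixelDataOffset + 10;             // 10th byte of the actual pixel data
int value = fileContent[target] & 0xff;
fileContent[target] = (byte) Math.max(0, value - 10);
Files.write(Paths.get("d://test1.bmp"), fileContent);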
I am trying to save an image as a JPEG. The code below works fine when the image width is a multiple of 4, but the image is skewed otherwise. It has something to do with padding. When I was debugging I was able to save the image correctly as a bitmap by padding each row with 0s, but this did not work out with the JPEG.
The main point to remember is that my image is represented as a BGR (blue, green, red, 1 byte each) byte array which I receive from a native call.
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
if (data.length != 3*x*y)
{
// 3 bytes per pixel
return false;
}
// create buffered image from raw data
DataBufferByte buffer = new DataBufferByte(data, 3*x*y);
ComponentSampleModel csm = new ComponentSampleModel(DataBuffer.TYPE_BYTE, x, y, 3, 3*x, new int[]{0,1,2} );
WritableRaster raster = Raster.createWritableRaster(csm, buffer, new Point(0,0));
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_BGR); // because windows goes the wrong way...
buff_image.setData(raster);
//save the BufferedImage as a jpeg
try
{
File file = new File(file_name);
FileOutputStream out = new FileOutputStream(file);
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
param.setQuality(1.0f, false);
encoder.setJPEGEncodeParam(param);
encoder.encode(buff_image);
out.close();
// or JDK 1.4
// ImageIO.write(image, "JPEG", out);
}
catch (Exception ex)
{
// Write permissions on "file_name"
return false;
}
I also looked at creating the JPEG in C++, but there was even less material on that; it is still an option, though.
Any help greatly appreciated.
Leon
Thanks for your suggestions, but I have managed to work it out.
To capture the image I was using WINGDIAPI HBITMAP WINAPI CreateDIBSection in C++, and OpenGL would draw to that bitmap. Unbeknown to me, padding was automatically added to the bitmap when the width was not a multiple of 4.
Therefore Java was incorrectly interpreting the byte array.
The correct way to interpret the bytes is:
byte[] data = captureImage(OpenGLCanvas.getLastFocused().getViewId(), x, y);
int x_padding = x % 4; // per-row padding, in bytes, for a 24bpp DIB row
BufferedImage buff_image = new BufferedImage(x, y, BufferedImage.TYPE_INT_RGB);
int val;
for (int j = 0; j < y; j++)
{
for (int i = 0; i < x; i++)
{
val = ( data[(i + j*x)*3 + j*x_padding + 2]& 0xff) +
((data[(i + j*x)*3 + j*x_padding + 1]& 0xff) << 8) +
((data[(i + j*x)*3 + j*x_padding + 0]& 0xff) << 16);
buff_image.setRGB(i, j, val);
}
}
//save the BufferedImage as a jpeg
try
{
File file = new File(file_name);
FileOutputStream out = new FileOutputStream(file);
JPEGImageEncoder encoder = JPEGCodec.createJPEGEncoder(out);
JPEGEncodeParam param = encoder.getDefaultJPEGEncodeParam(buff_image);
param.setQuality(1.0f, false);
encoder.setJPEGEncodeParam(param);
encoder.encode(buff_image);
out.close();
}
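Note that the com.sun.image.codec.jpeg classes are internal and were dropped from later JDKs; an equivalent save step with the public API would be something like:
// Same save step using the public ImageIO API instead of the internal JPEG codec
ImageIO.write(buff_image, "jpg", new File(file_name));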
The JPEG standard is extremely complex. I am thinking it may be an issue with padding the output of the DCT somehow. The DCT transforms the content from YCbCr 4:2:2 into signal space, with one DCT per channel: Y, Cb, and Cr. The DCT is done per "macroblock" or "minimum coded unit", depending on your context; JPEG usually uses 8x8 blocks. When there are not enough pixels at an edge, it clamps the edge value, "drags it across", and does the DCT on that.
I am not sure if this helps, but it sounds like a non-standard-conforming file. I suggest you use JPEGsnoop to find out more. There are also several explanations of how JPEG compression works.
One possibility is that the sample rate may be encoded incorrectly. It might be something exotic such as 4:2:1, so you might be pulling twice as many X samples as there really are, thus distorting the image.
It is an image I capture from the screen.
Maybe the Screen Image class will be easier to use.