Help on adding plug-in to Java ImageWriter - java

I am trying to save a BufferedImage as a PNM file. I have already installed JAI (Java Advanced Imaging) and have the PNMWriter plug-in imported. However, I don't know how to register it so my ImageWriter can write .pnm files. When I run ImageIO.getWriterFormatNames() to get the possible format names, only the standard ones (png, bmp, jpg, ...) come up. What do I need to do to make the PNM writer available?

I implemented this myself for my software. It was only 30 lines of source code and I did not want to add Java Advanced Imaging for something that can be solved so easily. Here is my solution:
public static void write(BufferedImage image, OutputStream stream) throws IOException
{
    /*
     * Write the file header: magic number, width, height, max color value.
     */
    int imageWidth = image.getWidth();
    int imageHeight = image.getHeight();
    stream.write('P');
    stream.write('6');
    stream.write('\n');
    stream.write(Integer.toString(imageWidth).getBytes());
    stream.write(' ');
    stream.write(Integer.toString(imageHeight).getBytes());
    stream.write('\n');
    stream.write(Integer.toString(255).getBytes());
    stream.write('\n');
    /*
     * Write each row of pixels as binary RGB triplets.
     */
    for (int y = 0; y < imageHeight; y++)
    {
        for (int x = 0; x < imageWidth; x++)
        {
            int pixel = image.getRGB(x, y);
            int b = (pixel & 0xff);
            int g = ((pixel >> 8) & 0xff);
            int r = ((pixel >> 16) & 0xff);
            stream.write(r);
            stream.write(g);
            stream.write(b);
        }
    }
    stream.flush();
}
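A minimal usage sketch (the output file name is just an example, and image is assumed to be the BufferedImage to save); note that this writes the binary P6 (PPM) variant of the PNM family:

try (OutputStream out = new BufferedOutputStream(new FileOutputStream("image.ppm"))) {
    write(image, out);
}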

Alternatively, use JAI (the JAI class) rather than the standard ImageIO:
JAI.create("ImageWrite", renderedImage, file, "pnm");

Related

Alpha channel ignored when using ImageIO.read()

I'm currently having an issue with alpha channels when reading PNG files with ImageIO.read(...)
fileInputStream = new FileInputStream(path);
BufferedImage image = ImageIO.read(fileInputStream);

// Just copying the pixel data into an integer array
int width = image.getWidth();
int height = image.getHeight();
int[] pixels = new int[width * height];
image.getRGB(0, 0, width, height, pixels, 0, width);
However, when trying to read values from the pixel array by bit shifting as seen below, the alpha channel is always returning -1
int a = (pixels[i] & 0xff000000) >> 24;
int r = (pixels[i] & 0xff0000) >> 16;
int g = (pixels[i] & 0xff00) >> 8;
int b = (pixels[i] & 0xff);
//a = -1, the other channels are fine
By Googling the problem I understand that the BufferedImage type needs to be defined as below to allow for the alpha channel to work:
BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
But ImageIO.read(...) returns a BufferedImage without giving the option to specify the image type. So how can I do this?
Any help is much appreciated.
Thanks in advance
I think your "int unpacking" code might be wrong.
I used (pixel >> 24) & 0xff (where pixel is the ARGB value of a specific pixel) and it worked fine.
I compared this with the results of java.awt.Color and they matched.
I "stole" the "extraction" code directly from java.awt.Color; this is yet another reason I tend not to perform these operations by hand: it's too easy to screw them up.
And my awesome test code...
BufferedImage image = ImageIO.read(new File("BYO image"));
int width = image.getWidth();
int height = image.getHeight();
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int pixel = image.getRGB(x, y);
        //value = 0xff000000 | rgba;
        int a = (pixel >> 24) & 0xff;
        Color color = new Color(pixel, true);
        System.out.println(x + "x" + y + " = " + color.getAlpha() + "; " + a);
    }
}
nb: Before someone points out that this is inefficient, I wasn't going for efficiency, I was going for quick to write.
You may also want to have a look at How to convert get.rgb(x,y) integer pixel to Color(r,g,b,a) in Java?, which I also used to validate my results.
I think the problem is that you're using arithmetic shift (>>) instead of logical shift (>>>). Thus 0xff000000 >> 24 becomes 0xffffffff (i.e. -1)
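A quick sketch of the difference, using a fully opaque pixel as in the question:

int pixel = 0xff336699;                       // alpha byte is 0xff
int arithmetic = (pixel & 0xff000000) >> 24;  // >> sign-extends, so this is -1
int logical = (pixel & 0xff000000) >>> 24;    // >>> shifts in zeros, so this is 255
int shiftThenMask = (pixel >> 24) & 0xff;     // shifting first and then masking also gives 255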

Get RGB of JPG in Android [duplicate]

This question already has an answer here: Can't import javax.imageio.ImageIO in Android application (1 answer). Closed 5 years ago.
First of all, thanks for your time. I have a jar library which is included as a library in my Android application.
This jar, among other things, is able to get the RGB values from a jpg image. This works perfectly in my Java application, but when I run it in my Android application it does not work, because ImageIO.read(File file) (and BufferedImage) is not implemented on Android.
I read something about using the Bitmap class but I could not find out how to apply it.
Could you help me with the method you find here below?
public static int[][][] getImageRgb(BufferedImage image) {
    int width = image.getWidth();
    int height = image.getHeight();
    int[][][] rgb = new int[height][width][3];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            int pixel = image.getRGB(j, i);
            rgb[i][j] = getPixelRgb(pixel);
        }
    }
    return rgb;
}
where getPixelRgb is a helper method that does this:
public static int[] getPixelRgb(int pixel) {
    // int alpha = (pixel >> 24) & 0xff;
    int red = (pixel >> 16) & 0xff;
    int green = (pixel >> 8) & 0xff;
    int blue = (pixel) & 0xff;
    return new int[]{red, green, blue};
}
I really do not know how to adapt these methods for Android.
I look forward to hearing from you.
Thanks a lot.
What you need is Bitmap.getPixel, from the official docs:
int getPixel (int x, int y)
Returns the Color at the specified location.
You can create a Bitmap from a resource in res/drawable folder or if you're downloading the image, you need to first save it to the device storage.
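A minimal sketch of the same method rewritten against Android's Bitmap (assuming the image has already been decoded, e.g. with BitmapFactory.decodeFile); Bitmap.getPixel returns the same packed ARGB int as BufferedImage.getRGB, so the existing getPixelRgb helper works unchanged:

public static int[][][] getImageRgb(Bitmap bitmap) {
    int width = bitmap.getWidth();
    int height = bitmap.getHeight();
    int[][][] rgb = new int[height][width][3];
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            // getPixel(x, y) returns a packed ARGB color int
            int pixel = bitmap.getPixel(j, i);
            rgb[i][j] = getPixelRgb(pixel);
        }
    }
    return rgb;
}

For larger images, Bitmap.getPixels(...) copies whole rows in one call and avoids the per-pixel overhead.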

Using Graphics to Draw a BufferedImage with AlphaBlending Java

I do a lot of game programming in my free time, and am currently working on a game engine library. Previous to this point I have made customized per game engines built straight into the application, however, to challenge my logical skills even further, I decided I wanted to make an engine that I could literally use with any game that I write, kind of like a plugin.
Before this point, I have been pulling textures in using a BufferedImage, using getRGB to pull out the pixel[], and by hand writing the texture int[] over the background int[] at the (X,Y) position where the Renderable resided. Then, when everything was written to the master int[], I would make a new BufferedImage, use setRGB with the master int[], and use a BufferStrategy and its Graphics to drawImage the BufferedImage. I liked this method because I felt like I had complete control over the way things were rendered, but I don't think it was very efficient.
Here is a look at the way I used to write to the master int[]
public void draw(Render render, int x, int y, float alphaMul){
    for(int i = 0; i < render.Width; i++){
        int xPix = i + x;
        if(Width - 1 < xPix || xPix < 0) continue;
        for(int j = 0; j < render.Height; j++){
            int yPix = j + y;
            if(Height - 1 < yPix || yPix < 0) continue;

            int srcARGB = render.Pixels[i + j * render.Width];
            int dstARGB = Pixels[xPix + yPix * Width];

            int srcAlpha = (int)((0xFF & (srcARGB >> 24)) * alphaMul);
            int srcRed = 0xFF & (srcARGB >> 16);
            int srcGreen = 0xFF & (srcARGB >> 8);
            int srcBlue = 0xFF & (srcARGB);

            int dstAlpha = 0xFF & (dstARGB >> 24);
            int dstRed = 0xFF & (dstARGB >> 16);
            int dstGreen = 0xFF & (dstARGB >> 8);
            int dstBlue = 0xFF & (dstARGB);

            float srcAlphaF = srcAlpha / 255.0f;
            float dstAlphaF = dstAlpha / 255.0f;

            // standard "source over" compositing
            int outAlpha = (int)((srcAlphaF + dstAlphaF * (1 - srcAlphaF)) * 255);
            int outRed = (int)(srcRed * srcAlphaF) + (int)(dstRed * (1 - srcAlphaF));
            int outGreen = (int)(srcGreen * srcAlphaF) + (int)(dstGreen * (1 - srcAlphaF));
            int outBlue = (int)(srcBlue * srcAlphaF) + (int)(dstBlue * (1 - srcAlphaF));

            int outARGB = (outAlpha << 24) | (outRed << 16) | (outGreen << 8) | outBlue;
            Pixels[xPix + yPix * Width] = outARGB;
        }
    }
}
I have recently found out that it may be much faster if, using drawImage, I loop through all of the Renderables and draw them as BufferedImages at their respective (X,Y) positions. But I do not know how to do alpha blending that way. So my questions are: how would I go about getting the results that I want, and would it be beneficial in resources and time over my previous method?
Thanks
-Craig
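For reference, a minimal sketch of how a per-image alpha multiplier could be applied when drawing with Graphics2D (assuming each Render is available as a BufferedImage named textureImage and the master surface is a BufferedImage named masterImage; both names are placeholders):

Graphics2D g = masterImage.createGraphics();
// SRC_OVER with an extra alpha multiplier performs the same "source over" blend as the manual loop above
g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alphaMul));
g.drawImage(textureImage, x, y, null);
g.dispose();

Java2D can often accelerate drawImage calls, so this tends to be faster than per-pixel compositing, but the only way to be sure for a particular engine is to profile both.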

output image is black in android

I first tried to convert a jpg into an array of RGB values, and then tried to convert the same array back into a jpg.
picw = selectedImage.getWidth();
pich = selectedImage.getHeight();
int[] pix = new int[picw * pich];
selectedImage.getPixels(pix, 0, picw, 0, 0, picw, pich);
int R, G, B;
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        R = (pix[index] >> 16) & 0xff;
        G = (pix[index] >> 8) & 0xff;
        B = pix[index] & 0xff;
        pix[index] = (R << 16) | (G << 8) | B;
    }
}
Until this point everything is fine (I checked by logging the array), but when I create a bitmap to compress it to jpg, the output is a black image.
Bitmap bmp = Bitmap.createBitmap(pix, picw, pich, Bitmap.Config.ARGB_8888);
File folder = Environment.getExternalStoragePublicDirectory(Environment.DIRECTORY_DOWNLOADS);
File file = new File(folder, "Wonder.jpg");
FileOutputStream fileOutputStream = null;
try {
    fileOutputStream = new FileOutputStream(file);
    bmp.compress(Bitmap.CompressFormat.JPEG, 100, fileOutputStream);
} catch (FileNotFoundException e) {
    e.printStackTrace();
} finally {
    if (fileOutputStream != null) {
        try {
            fileOutputStream.close();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Please help me move further. Thanks.
First let me explain how this data is stored per pixel:
Each pixel has 32 bits of data storing a value for alpha, red, green and blue. Each of these values is just 8 bits (one byte). (There are a lot of other formats for storing color information, but the one you specified is ARGB_8888.)
In this format, white is 0xffffffff and black is 0xff000000.
So, like I said in the comments, the alpha seems to be missing. A red pixel without any alpha, like 0x00ff0000, is not going to be visible.
Alpha can be added by first storing it:
A = (pix[index] >> 24) & 0xff;
Although the value is probably going to be 255 (because JPEG doesn't have alpha), I think it would be wise to use it like this in case you decide to use another format that does have alpha.
Then you should put the alpha back in:
pix[index] = (A << 24) | (R << 16) | (G << 8) | B;
This should write the exact same value to pix[index] which it already contains, not changing anything. But it will leave you with the original image instead of just black.
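Putting both fragments together, the loop from the question would look something like this:

int A, R, G, B;
for (int y = 0; y < pich; y++) {
    for (int x = 0; x < picw; x++) {
        int index = y * picw + x;
        A = (pix[index] >> 24) & 0xff;
        R = (pix[index] >> 16) & 0xff;
        G = (pix[index] >> 8) & 0xff;
        B = pix[index] & 0xff;
        // repack with the alpha byte so the resulting bitmap is not fully transparent
        pix[index] = (A << 24) | (R << 16) | (G << 8) | B;
    }
}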

Java byte Image Manipulation

I need to create a simple demo for image manipulation in Java. My code is Swing based. I don't have to do anything complex, just show that the image has changed in some way. I have the image read as a byte[]. Is there any way I can manipulate this byte array, without corrupting the bytes, to show some very simple manipulation? I don't wish to use paint() etc. Is there anything that I can do directly to the byte[] array to show some change?
edit:
I am reading the jpg image as a ByteArrayInputStream using the Apache IO library. The bytes are read OK and I can confirm it by writing them back as a jpeg.
You can try to convert your RGB image to grayscale. If the image has 3 bytes per pixel represented as RedGreenBlue, you can use the following formula: y = 0.299*r + 0.587*g + 0.114*b.
To be clear, iterate over the byte array and replace the colors. Here is an example:
byte[] newImage = new byte[rgbImage.length];
for (int i = 0; i < rgbImage.length; i += 3) {
    // mask with 0xff so the signed bytes are treated as unsigned 0-255 values
    newImage[i] = (byte) ((rgbImage[i] & 0xff) * 0.299
            + (rgbImage[i + 1] & 0xff) * 0.587
            + (rgbImage[i + 2] & 0xff) * 0.114);
    newImage[i + 1] = newImage[i];
    newImage[i + 2] = newImage[i];
}
UPDATE:
The above code assumes a raw RGB image; if you need to process a JPEG file you can do this:
try {
    BufferedImage inputImage = ImageIO.read(new File("input.jpg"));
    BufferedImage outputImage = new BufferedImage(
            inputImage.getWidth(), inputImage.getHeight(),
            BufferedImage.TYPE_INT_RGB);
    for (int x = 0; x < inputImage.getWidth(); x++) {
        for (int y = 0; y < inputImage.getHeight(); y++) {
            int rgb = inputImage.getRGB(x, y);
            int blue = 0x0000ff & rgb;
            int green = 0x0000ff & (rgb >> 8);
            int red = 0x0000ff & (rgb >> 16);
            int lum = (int) (red * 0.299 + green * 0.587 + blue * 0.114);
            outputImage.setRGB(x, y, lum | (lum << 8) | (lum << 16));
        }
    }
    ImageIO.write(outputImage, "jpg", new File("output.jpg"));
} catch (IOException e) {
    e.printStackTrace();
}
