Get RGBA pixels from ARGB BufferedImage? - java

Is there a simple way to get an RGBA int[] from an ARGB BufferedImage? I need it converted for OpenGL, but I don't want to have to iterate through the pixel array and convert it myself.

OpenGL 1.2+ supports a GL_BGRA pixel format and reversed packed pixels.
On the surface BGRA does not sound like what you want, but let me explain.
Calls like glTexImage2D (...) do what is known as pixel transfer, which involves packing and unpacking image data. During the process of pixel transfer, data conversion may be performed, special alignment rules may be followed, etc. The data conversion step is what we are particularly interested in here; you can transfer pixels in a number of different layouts besides the obvious RGBA component order.
If you reverse the byte order (e.g. data type = GL_UNSIGNED_INT_8_8_8_8_REV) together with a GL_BGRA format, you will effectively transfer ARGB pixels without any real effort on your part.
Example glTexImage2D (...) call:
glTexImage2D (..., GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, image);
The usual use-case for _REV packed data types is handling endian differences between different processors, but it also comes in handy when you want to reverse the order of components in an image (since there is no such thing as GL_ARGB).
Do not convert things for OpenGL - it is perfectly capable of doing this by itself.
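Putting that together, here is a minimal sketch of uploading an ARGB BufferedImage without any conversion pass. It assumes LWJGL bindings (BufferUtils and the static GL imports below are LWJGL names); any OpenGL 1.2+ binding exposes the same constants.

import java.awt.image.BufferedImage;
import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL12.GL_BGRA;
import static org.lwjgl.opengl.GL12.GL_UNSIGNED_INT_8_8_8_8_REV;

// Uploads a BufferedImage's packed ARGB pixels as-is; GL reorders components
// during pixel transfer, so no per-pixel conversion is done in Java.
static int uploadArgbTexture(BufferedImage image) {
    int w = image.getWidth(), h = image.getHeight();
    int[] argb = image.getRGB(0, 0, w, h, null, 0, w); // packed ARGB ints

    IntBuffer pixels = BufferUtils.createIntBuffer(argb.length);
    pixels.put(argb).flip();

    int tex = glGenTextures();
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    // With GL_BGRA + GL_UNSIGNED_INT_8_8_8_8_REV, bits 0-7 of each int are read
    // as blue, 8-15 as green, 16-23 as red and 24-31 as alpha: exactly Java's ARGB.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
    return tex;
}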

To convert between ARGB and RGBA you can apply bitwise shifts (byte rotations, really), which is fast and concise. In Java:
rgba = (argb << 8) | (argb >>> 24); // rotate left 8 bits: ARGB -> RGBA
argb = (rgba >>> 8) | (rgba << 24); // rotate right 8 bits: RGBA -> ARGB
If you have any further questions, this topic should give you a more in-depth answer on converting between RGBA and ARGB.
Also, if you'd like to learn more about Java's bitwise operators, check out this link.
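Put together, a minimal sketch of converting a whole pixel array in place (the helper name is mine, not from the linked topic):

// Converts packed ARGB pixels (as returned by BufferedImage.getRGB) to RGBA in place.
static void argbToRgba(int[] pixels) {
    for (int i = 0; i < pixels.length; i++) {
        pixels[i] = Integer.rotateLeft(pixels[i], 8); // ARGB -> RGBA
    }
}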

Related

How to make pixels with a certain RGB range transparent

I have a fixed size frame buffer (1200x720 RGBA efficiently converted from YUV) in a java byte array.
I would like to set a certain shade of a color (white in my case, regardless of its alpha value) to fully transparent.
Currently I am doing this on the CPU by traversing the byte array and zeroing the pixel if its RGB values are above 0xC8. This somewhat works, but is obviously extremely slow (>1 sec/frame) for a live stream.
I've been researching ways to do this via GPU/OpenGL on Android and I see mentions of alpha testing, blending, and color keying. It seems the alpha test is not useful here, since it relies on the alpha information rather than the RGB values.
Any idea how to do this on Android using OpenGL/java?
It seems the alpha test is not useful here
The logic for an alpha-test is implemented in the fragment shader, so rather than testing alpha just change the test to implement a check on the RGB value. The technique here is generic and 100% flexible. The underlying operation you are looking for is fragment shaders which trigger the discard operation when the color key matches.
Alternatively you can use the same conditional check but, rather than calling discard, just set the output color to vec4(0.0) and use blending to avoid modifying the framebuffer for that fragment. On the whole I would expect this to be more efficient; discard tends to have odd performance side-effects.
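A minimal sketch of such a color-keyed discard as an OpenGL ES 2.0 fragment shader, embedded here as a Java string (the uniform/varying names are illustrative, and 0.784 approximates the 0xC8 threshold from the question):

// Pass this to GLES20.glShaderSource() when building the fragment shader.
static final String COLOR_KEY_FRAGMENT_SHADER =
    "precision mediump float;\n" +
    "uniform sampler2D uTexture;\n" +
    "varying vec2 vTexCoord;\n" +
    "void main() {\n" +
    "    vec4 color = texture2D(uTexture, vTexCoord);\n" +
    "    // 0xC8 / 255 ~= 0.784: drop near-white fragments\n" +
    "    if (all(greaterThan(color.rgb, vec3(0.784)))) {\n" +
    "        discard;\n" +
    "    }\n" +
    "    gl_FragColor = color;\n" +
    "}\n";

For the blending variant, replace the discard with gl_FragColor = vec4(0.0); and render with standard alpha blending enabled.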
You could also create a custom RenderScript kernel to convert those pixels; you can use it to convert from YUV as well, so you only process the pixels in the buffer once.

DCT implementation

I'm trying to implement an image compression algorithm based on DCT for color JPEG. I'm a newbie in image processing, so I need some help. What I need is clarification of the algorithm.
I'm using DCT implementation from here
So, here is the algorithm as I understood it:
Load an image into a BufferedImage using ImageIO.
Create 3 matrices (1 for each channel: red, green, blue):
int rgb = bufferedImage.getRGB(i, j);
int red = (rgb >> 16) & 0xFF;
int green = (rgb >> 8) & 0xFF;
int blue = rgb & 0xFF;
Pad each matrix so its dimensions are multiples of 8 (where 8 is the size of the DCT matrix, N)
Split each matrix into chunks of size 8x8 (result: splittedImage); see the sketch after these steps
Perform forwardDCT on matrices from splittedImage (result: dctImage).
Perform quantization on matrices from dctImage (result: quantizedImage)
Here I don't know what to do. I can:
merge the quantizedImage matrices into one matrix, mergedImage, convert it into a Vector, and perform the compressImage method on it.
or convert the small matrices from quantizedImage into Vectors, perform the compressImage method on each, and then merge them into one matrix.
So, at this point I have 3 matrices for the red, green and blue channels. Then I convert those matrices into one RGB matrix, create a new BufferedImage, and use the setRGB method to set the pixel values. Then I save the image to a file.
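For reference, here is a sketch of the channel-extraction, padding, and 8x8 splitting steps for one channel (the helper name and the zero padding are my assumptions, not part of the original algorithm):

import java.awt.image.BufferedImage;

// Extracts one channel (shift = 16 for red, 8 for green, 0 for blue),
// pads it to a multiple of 8 and splits it into 8x8 blocks.
static int[][][] toBlocks(BufferedImage img, int shift) {
    final int N = 8;
    int w = img.getWidth(), h = img.getHeight();
    int pw = (w + N - 1) / N * N, ph = (h + N - 1) / N * N; // padded size

    int[][] channel = new int[ph][pw]; // padding area stays 0; replicating
    for (int y = 0; y < h; y++)        // edge pixels gives fewer artifacts
        for (int x = 0; x < w; x++)
            channel[y][x] = (img.getRGB(x, y) >> shift) & 0xFF;

    int[][][] blocks = new int[(ph / N) * (pw / N)][N][N];
    int b = 0;
    for (int by = 0; by < ph; by += N)
        for (int bx = 0; bx < pw; bx += N, b++)
            for (int y = 0; y < N; y++)
                for (int x = 0; x < N; x++)
                    blocks[b][y][x] = channel[by + y][bx + x];
    return blocks;
}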
Extra questions:
Is it better to convert RGB into YCbCr and perform DCT on Y, Cb and Cr?
The Javadoc of the compressImage method says that it's Run-Length Encoding, not Huffman Encoding. So will the compressed image open in an image viewer? Or should I use Huffman Encoding according to the JPEG specification? And is there any open-source Huffman Encoding implementation in Java?
If you want to follow the implementation steps, I suggest reading:
http://www.amazon.com/Compressed-Image-File-Formats-JPEG/dp/0201604434/ref=sr_1_1?ie=UTF8&qid=1399765722&sr=8-1&keywords=compressed+image+file+formats
In regard to your questions:
1) The JPEG standard knows nothing about color spaces and does not care whether you use RGB, YCbCr, or CMYK. There are several JPEG file formats (e.g., JFIF, EXIF, Adobe) that specify the color spaces, usually YCbCr.
The reason for using YCbCr is that it follows the JPEG trend of concentrating information. There tends to be more useful information in the Y component than in the Cb or Cr components. Using YCbCr, you can sample 4 Ys (or even 16) for every Cb and Cr. That reduces the amount of data to be compressed by half (with 4:2:0 sampling, four pixels need 4 Y + 1 Cb + 1 Cr = 6 samples instead of 12).
Note that the JPEG file formats place limits on sampling (the JPEG standard itself allows ratios such as 2:3 that most implementations do not support).
2) The DCT coefficients are run-length encoded and then Huffman (or arithmetic) encoded. You have to use both.
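As a rough illustration of the run-length stage, here is a simplified sketch (real JPEG encodes (zero-run, magnitude-category) pairs, caps runs at 15 with a special ZRL code, and Huffman-codes the result; none of that is shown here):

import java.util.ArrayList;
import java.util.List;

// Simplified zero-run-length coding of a block's 63 AC coefficients,
// given in zigzag order. Each entry is {runOfZeros, value}; {0, 0} marks
// end-of-block (EOB) when the block ends in zeros.
static List<int[]> runLengthEncodeAc(int[] zigzag) {
    List<int[]> out = new ArrayList<>();
    int run = 0;
    for (int i = 1; i < zigzag.length; i++) { // index 0 is the DC coefficient
        if (zigzag[i] == 0) {
            run++;
        } else {
            out.add(new int[] { run, zigzag[i] });
            run = 0;
        }
    }
    if (run > 0) out.add(new int[] { 0, 0 }); // EOB
    return out;
}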

Detecting if a BufferedImage contains transparent pixels

I'm trying to optimise a rendering engine in Java so that it does not draw objects which are covered up by 'solid' child objects drawn in front of them, i.e. the parent is occluded by its children.
I want to know if an arbitrary BufferedImage I load in from a file contains any transparent pixels, as this affects my occlusion testing.
I've found I can use BufferedImage.getColorModel().hasAlpha() to find if the image supports alpha, but in the case that it does, it doesn't tell me if it definitely contains non-opaque pixels.
I know I could loop over the pixel data & test each one's alpha value & return as soon as I discover a non-opaque pixel, but I was wondering if there's already something native I could use, a flag that is set internally perhaps? Or something a little less intensive than iterating through pixels.
Any input appreciated, thanks.
Unfortunately, you will have to loop through each pixel (until you find a transparent pixel) to be sure.
If you don't need to be 100% sure, you could of course test only some pixels, where you think transparency is most likely to occur.
By looking at various images, I think you'll find that most images that have transparent parts contain transparency along the edges. This optimization will help in many common cases.
Unfortunately, I don't think that there's an optimization that can be done in one of the most common cases, the one where the color model allows transparency, but there really are no transparent pixels... You really need to test every pixel in this case, to know for sure.
Accessing the alpha values in their "native representation" (through the Raster/DataBuffer/SampleModel classes) is going to be faster than using BufferedImage.getRGB(x, y) and masking out the alpha value.
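A minimal sketch of that early-exit scan (assumes 8-bit alpha samples, and falls back to getRGB when there is no separate alpha raster, e.g. for indexed images):

import java.awt.image.BufferedImage;
import java.awt.image.WritableRaster;

// Returns true if the image actually contains at least one non-opaque pixel.
static boolean hasTransparency(BufferedImage image) {
    if (!image.getColorModel().hasAlpha()) return false;
    WritableRaster alpha = image.getAlphaRaster(); // null for e.g. IndexColorModel
    if (alpha != null) {
        for (int y = 0; y < alpha.getHeight(); y++)
            for (int x = 0; x < alpha.getWidth(); x++)
                if (alpha.getSample(x, y, 0) != 0xFF) return true;
        return false;
    }
    for (int y = 0; y < image.getHeight(); y++)
        for (int x = 0; x < image.getWidth(); x++)
            if ((image.getRGB(x, y) >>> 24) != 0xFF) return true;
    return false;
}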
I'm pretty sure you'll need to loop through each pixel and check for an Alpha value.
The best alternative I can offer is to write a custom method for reading the pixel data, i.e. your own Raster. Within this class, as you're reading the pixel data from the source file into the data buffer, you can check for alpha values as you go. Of course, this isn't much help if you're using a built-in image-reading class, and it involves a lot more effort.

Do certain image file types always correspond with certain BufferedImage constant types?

The BufferedImage class in Java contains a getType() method which returns an integer correlating with a BufferedImage constant type describing how the image is encoded (you can look at the BufferedImage source to determine which number corresponds to which constant). For instance, if it returns the integer corresponding to BufferedImage.TYPE_3BYTE_BGR, then the BufferedImage is an 8-bit RGB image with no alpha, with blue, green, and red each stored in one byte.
Some of these image types seem to correlate with certain properties of a particular format. For instance, TYPE_BYTE_INDEXED says that it is created from "a 256-color 6/6/6 color cube palette". This sounds a lot like GIF images, which use palettes of up to 256 colors.
Curious, I scanned several hundred photos on my hard drive and converted each of them to a BufferedImage, using ImageIO.read(File file), then called BufferedImage.getType() on them. I determined that there were only a few BufferedImage types that were generated from certain image types. The results were as follows:
JPG: TYPE_3BYTE_BGR, TYPE_BYTE_GRAY
PNG: TYPE_3BYTE_BGR, TYPE_BYTE_GRAY, TYPE_4BYTE_ABGR
GIF: TYPE_BYTE_INDEXED
While it looks like both JPGs and PNGs shared some similar BufferedImage constant types, only a PNG in my test resulted in a TYPE_4BYTE_ABGR, and every GIF resulted in a TYPE_BYTE_INDEXED.
I'm not too familiar with image formats and it's true that my sample size isn't all that large. So I figured I'd ask: assuming that an image is properly formatted, do certain image types always result in BufferedImages with certain constant types? To provide a specific example, would a properly formatted GIF image always correspond to TYPE_BYTE_INDEXED? Or is it possible for all properly formatted images to correspond with all of the BufferedImage constant types?
[Do] certain image types always result in BufferedImage with certain constant types?
As in your other question: no, there is no direct relationship between the BufferedImage types and file formats.
Or is it possible for all properly formatted images to correspond with all of the BufferedImage constant types?
Basically, yes. Of course, a color image would lose information if converted to gray, a 16-bit-per-sample image would lose precision if converted to 8 bits per sample, etc.
However, different file formats have different ways of storing pixels and colors, and usually a certain BufferedImage type more closely represents the "layout" used in the file format.
Let's use your GIF example:
The storage "layout" of a GIF (before applying LZW compression) is normally closest to that of TYPE_BYTE_INDEXED, so that is usually the "cheapest" conversion to do in Java. For GIFs with up to 16 colors, TYPE_BYTE_BINARY would work just as well. And it's always possible for a GIF to be decoded into TYPE_4BYTE_ABGR or TYPE_INT_ARGB (or even TYPE_3BYTE_BGR or TYPE_INT_RGB if no transparent color).
In other words, the type of image depends on the decoder, and in some cases (like for the ImageIO API) the user.
To summarize, what you have found, is that the GIF plugin for ImageIO (GIFImageReader) by default will decode a GIF with more than 16 colors to TYPE_BYTE_INDEXED. Using a different decoder/framework may yield different results.
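For example, with ImageIO the caller can ask the decoder for a specific destination type, as long as the reader supports it. A sketch (error handling omitted):

import java.awt.image.BufferedImage;
import java.io.File;
import java.util.Iterator;
import javax.imageio.ImageIO;
import javax.imageio.ImageReadParam;
import javax.imageio.ImageReader;
import javax.imageio.ImageTypeSpecifier;
import javax.imageio.stream.ImageInputStream;

// Decodes the first image in the file, preferring the given BufferedImage type
// (e.g. BufferedImage.TYPE_4BYTE_ABGR) if the reader can produce it.
static BufferedImage readAs(File file, int preferredType) throws Exception {
    try (ImageInputStream input = ImageIO.createImageInputStream(file)) {
        ImageReader reader = ImageIO.getImageReaders(input).next();
        reader.setInput(input);
        ImageReadParam param = reader.getDefaultReadParam();
        for (Iterator<ImageTypeSpecifier> it = reader.getImageTypes(0); it.hasNext(); ) {
            ImageTypeSpecifier type = it.next();
            if (type.getBufferedImageType() == preferredType) {
                param.setDestinationType(type);
                break;
            }
        }
        return reader.read(0, param);
    }
}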
A little bit of history that might enlighten the curious reader:
The types of BufferedImages were not modeled to correspond to image formats. They were modeled to correspond to display hardware. An image having the same pixel layout as the display hardware is always going to be faster to display. Other layouts would first need to go through some kind of conversion. Now, with modern display hardware being very fast, this is of course less of a concern, but in "ancient" times it was important.
Incidentally, many "ancient" image formats were created ad hoc, or for specific applications running on specific display hardware. Because of this, the pixel layout of the display hardware was often used in the file format. Again, because no conversion was needed, it was the fastest/simplest thing to implement.
So, yes, there is a relationship. It's just not a direct "given A => B" relationship, it's a "given A and C => B".

Determine if an image is b/w or colored in Java

Is there an efficient way to determine whether an image is grayscale or color? By efficient I mean something other than reading all the pixels of the image and examining every single RGB value.
For example, in Python there is a function in the Imaging library called 'getcolors' that returns a hash of pairs { (R, G, B) -> count } for the whole image, and I just have to iterate over that hash looking for a single colored entry.
UPDATE:
For future readers of this post: I implemented a solution that reads the image pixel by pixel (as @npinti suggested in his link) and it seems to be fast enough for me (it shouldn't take you more than 10 minutes to implement). It seems the Python implementation of the pixel-by-pixel approach is really bad (inefficient and slow).
If you are using a BufferedImage, this previous SO post should prove helpful.
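For reference, the pixel-by-pixel check the update describes is short in Java; a sketch (returns early at the first colored pixel):

import java.awt.image.BufferedImage;

// Returns true if every pixel has R == G == B, i.e. the image is grayscale.
static boolean isGrayscale(BufferedImage image) {
    for (int y = 0; y < image.getHeight(); y++) {
        for (int x = 0; x < image.getWidth(); x++) {
            int rgb = image.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            if (r != g || g != b) return false; // found a colored pixel
        }
    }
    return true;
}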
