Why does the grayscale image have 3 channels? - java

I am performing operations on a grayscale image, and the resultant image of these operations has the same extension as the input image: for example, if the input image is .jpg or .png, the output image is .jpg or .png respectively.
I am converting the image to grayscale as follows:
Imgproc.cvtColor(mat, grayscale, Imgproc.COLOR_BGR2GRAY);
and I am checking the channel count using:
.channels()
The problem is that when I want to know how many channels the image contains, despite it being a grayscale image, I always get a channel count of 3!
Kindly let me know why that is happening.

The depth (or better, color depth) is the number of bits used to represent a color value. A color depth of 8 usually means 8 bits per channel (so you have 256 color values, or rather shades of gray, per channel, from 0 to 255), and 3 channels then mean that one pixel value is composed of 3*8 = 24 bits.
However, this also depends on nomenclature. Usually you will say
"The color depth is 8 bits per channel",
but you could also say
"The color depth of the image is 32 bits"
and then mean 8 bits per RGBA channel, or
"The image has a color depth of 24 bits"
and mean 8 bits each for the R, G and B channels.
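To make the nomenclature concrete, here is a small sketch in plain Java, using standard BufferedImage types as stand-ins for the three conventions above (the class and method names are just for illustration):

```java
import java.awt.image.BufferedImage;

public class ColorDepthDemo {
    // Returns the total bits per pixel for a given BufferedImage type.
    static int bitsPerPixel(int imageType) {
        return new BufferedImage(1, 1, imageType).getColorModel().getPixelSize();
    }

    public static void main(String[] args) {
        // 3 channels x 8 bits = 24-bit color depth
        System.out.println(bitsPerPixel(BufferedImage.TYPE_3BYTE_BGR)); // 24
        // 4 channels (RGBA) x 8 bits = 32-bit color depth
        System.out.println(bitsPerPixel(BufferedImage.TYPE_INT_ARGB));  // 32
        // a single 8-bit gray channel
        System.out.println(bitsPerPixel(BufferedImage.TYPE_BYTE_GRAY)); // 8
    }
}
```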

The grayscale image has three channels because technically it is not a grayscale image. It is a color image with the same value in all three channels (R, G, B) of every pixel; that is why it visually looks grayscale.
To check the channels in the image (with PIL in Python), use:
img.getbands()
(In OpenCV, the equivalent is the channels() method you are already calling.)
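The same idea can be demonstrated in plain Java, without OpenCV (class and method names here are made up for the demo): an image whose three channels are always equal looks gray but is still stored with three bands.

```java
import java.awt.image.BufferedImage;

public class GrayLookalike {
    // Builds an RGB image that *looks* grayscale: r == g == b for every pixel.
    static BufferedImage makeGrayLookingRgb(int w, int h) {
        BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int v = (x + y) % 256; // same value packed into all three channels
                img.setRGB(x, y, (v << 16) | (v << 8) | v);
            }
        }
        return img;
    }

    public static void main(String[] args) {
        BufferedImage rgb = makeGrayLookingRgb(4, 4);
        BufferedImage gray = new BufferedImage(4, 4, BufferedImage.TYPE_BYTE_GRAY);
        // Visually gray, but still stored with 3 bands:
        System.out.println(rgb.getRaster().getNumBands());  // 3
        System.out.println(gray.getRaster().getNumBands()); // 1
    }
}
```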

Related

Java image Tile Processing, byte binary image

I'm processing a large TIFF image in Java. My goal is just to read the pixel values and calculate the ink area.
The TIFF is a black & white, binary (bilevel) image.
So I assumed the pixel values are either 0 or 1, with white being 1 and black being 0.
Some sample files behave that way, but some don't: in some files 0 is black and 1 is white.
Is that possible?
The common pattern is that the 0=black, 1=white files were processed in Windows Photo Editor, and the 0=white, 1=black files were processed in the EskoArtwork imaging engine.
Can the definition of an image pixel change depending on the engine?
Absolutely. This is specified in the TIFF Tag PhotometricInterpretation. An excerpt is:
IFD Image
Code 262 (hex 0x0106)
Name PhotometricInterpretation
LibTiff name TIFFTAG_PHOTOMETRIC
Type SHORT
Count 1
Default None
Description
The color space of the image data.
The specification considers these values baseline:
0 = WhiteIsZero. For bilevel and grayscale images: 0 is imaged as white.
1 = BlackIsZero. For bilevel and grayscale images: 0 is imaged as black.
...
You could have found this easily by searching for "bilevel tiff tag"; it comes up as the second result on Google.
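Once the tag value is known, normalizing the data is trivial. A sketch (the class, method, and constant names are made up for illustration; reading tag 262 itself is left to whatever TIFF library you use):

```java
public class BilevelNormalizer {
    static final int WHITE_IS_ZERO = 0; // PhotometricInterpretation = 0
    static final int BLACK_IS_ZERO = 1; // PhotometricInterpretation = 1

    // Normalizes bilevel samples so that afterwards 0 always means black
    // and 1 always means white, regardless of the tag value.
    static int[] normalize(int[] samples, int photometric) {
        int[] out = new int[samples.length];
        for (int i = 0; i < samples.length; i++) {
            out[i] = (photometric == WHITE_IS_ZERO) ? 1 - samples[i] : samples[i];
        }
        return out;
    }

    public static void main(String[] args) {
        // In a WhiteIsZero file, sample 0 is white, so it becomes 1 after normalizing.
        int[] normalized = normalize(new int[]{0, 1, 1}, WHITE_IS_ZERO);
        System.out.println(java.util.Arrays.toString(normalized)); // [1, 0, 0]
    }
}
```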

Kernel Image Processing with a buffered image

So I have a given 3x3 array mask[][] and a BufferedImage. I must loop through every position [row][column] of the buffered image and place the middle element of the mask at the current image position. Then, for each color component, I must multiply the value of the color component in the image that falls under the mask by the corresponding mask value, and add all nine products together. Lastly, I combine the three color values to get the RGB color of the pixel.
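The steps described above could be sketched like this (a straightforward, unoptimized version; clamping coordinates at the borders is one of several common choices for edge handling, and the class name is made up):

```java
import java.awt.image.BufferedImage;

public class Convolve3x3 {
    // Applies a 3x3 kernel to each color component of the image.
    // Edge pixels are handled by clamping coordinates to the image border.
    static BufferedImage apply(BufferedImage src, float[][] mask) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                float r = 0, g = 0, b = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int px = Math.min(Math.max(x + i, 0), w - 1);
                        int py = Math.min(Math.max(y + j, 0), h - 1);
                        int rgb = src.getRGB(px, py);
                        float m = mask[j + 1][i + 1];
                        r += ((rgb >> 16) & 0xFF) * m; // accumulate weighted red
                        g += ((rgb >> 8) & 0xFF) * m;  // ... green
                        b += (rgb & 0xFF) * m;         // ... blue
                    }
                }
                int rr = Math.min(Math.max(Math.round(r), 0), 255);
                int gg = Math.min(Math.max(Math.round(g), 0), 255);
                int bb = Math.min(Math.max(Math.round(b), 0), 255);
                dst.setRGB(x, y, (rr << 16) | (gg << 8) | bb);
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(3, 3, BufferedImage.TYPE_INT_RGB);
        img.setRGB(1, 1, 0x123456);
        float[][] identity = {{0, 0, 0}, {0, 1, 0}, {0, 0, 0}};
        // The identity kernel leaves the image unchanged:
        System.out.println(Integer.toHexString(apply(img, identity).getRGB(1, 1) & 0xFFFFFF));
    }
}
```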

Trying to read a PNG and xuggler

I'm using Xuggler to put a transparent mask in a video.
I'm trying to open a valid PNG using ImageIO.read(), but when it renders, there's always a white background in my picture.
This is my code for reading:
url = new URL(stringUrl);
imagem = ImageIO.read(url);
boolean hasAlpha = imagem.getColorModel().hasAlpha();
This boolean is always false.
And in Xuggler, when I do the render:
mediaReader
.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
What am I doing wrong?
The "problem" with your original image file is that it is a 24-bit RGB (3-channel) PNG with a tRNS chunk.
The tRNS chunk specifies a color (255, 255, 255, or white, in this case) that is to be replaced with fully transparent pixels. In effect, applying this transparency works like a bit mask, where every pixel is either fully transparent or fully opaque. However, being an "ancillary" (optional, or non-critical) chunk according to the PNG specification, PNG decoders may choose to ignore this information.
Unfortunately, it seems the default PNGImageReader (the "PNG plugin") for ImageIO does not apply this bit mask, and instead keeps the image in fully opaque RGB (which is still spec-compliant behavior).
Your second image file is a 32-bit RGBA PNG (4 channels, including alpha). In this case, any reader must (and does) decode all channels, including the full 8-bit alpha channel. ImageIO's PNGImageReader does decode this with the expected transparency.
PS: It is probably possible to "fix" the transparency of your first image, using the metadata.
The metadata for your image contains the following information in the native PNG format:
<javax_imageio_png_1.0>
....
<tRNS>
<tRNS_RGB red="255" green="255" blue="255"/>
</tRNS>
</javax_imageio_png_1.0>
After parsing this, you could convert the image by first creating a new 4-channel image (like TYPE_4BYTE_ABGR), copying the R, G and B channels to this new image, and finally setting the alpha channel to either 0 (for all pixels that match the transparent color) or 255 (for all others).
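A rough sketch of that conversion (the class and method names are placeholders, and it uses getRGB/setRGB for brevity; the key color would come from the parsed tRNS metadata):

```java
import java.awt.image.BufferedImage;

public class TrnsApplier {
    // Copies an opaque RGB image into an ARGB image, making every pixel that
    // matches the tRNS color key fully transparent and everything else opaque.
    static BufferedImage applyColorKey(BufferedImage src, int keyRgb) {
        int w = src.getWidth(), h = src.getHeight();
        BufferedImage dst = new BufferedImage(w, h, BufferedImage.TYPE_4BYTE_ABGR);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = src.getRGB(x, y) & 0xFFFFFF;
                int alpha = (rgb == keyRgb) ? 0 : 0xFF;
                dst.setRGB(x, y, (alpha << 24) | rgb);
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        BufferedImage src = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
        src.setRGB(0, 0, 0xFFFFFF); // matches the tRNS color -> transparent
        src.setRGB(1, 0, 0x336699); // anything else stays opaque
        BufferedImage out = applyColorKey(src, 0xFFFFFF);
        System.out.println((out.getRGB(0, 0) >>> 24) + " " + (out.getRGB(1, 0) >>> 24)); // 0 255
    }
}
```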
PPS: You probably want to perform these operations directly on the Rasters, as using the BufferedImage.getRGB(..)/setRGB(..) methods will convert the values to an sRGB color profile, which may not exactly match the RGB value from the tRNS chunk.
There you go! :-)

Libary/method for loading a HDR/high-bittage image and accessing its pixel values?

Short;
I need to get the value of a specific pixel from a supplied high color depth image.
Details:
I am currently using Processing to make a Slit-scanning program.
Essentially, I am using a greyscale image to pick frames from an animation, and using pixels from those frames to make a new image.
For example if the greyscale image has a black pixel, it takes the same pixel in the first frame, and adds it to an image.
If it's a white pixel, it does the same with the last frame.
Anything in between, naturally, picks the frames in between.
The gist is, if supplied a horizontal gradient and a video of a sunset, you'd have the start of the sunset on the left, slowly transitioning to the end on the right.
My problem is, when using Processing, I seem to be only able to get greyscale values of
0-255 using the default library.
Black = 0
White = 255
This limits me to using only 256 frames for the source animation, or putting up with a pixelated, unsmooth end image.
I really need to be able to supply, and thus get, pixel values in a much bigger range.
Say,
Black = 0
White = 65535
Is there any Java lib that can do this? That I can supply, say, a HDR Tiff or TGA image file, and be able to read the full range of color out of it?
Thanks,
OK, I found a great library for this:
https://code.google.com/p/pngj/
Supports the full PNG feature set - including 16 bit greyscale or full color images.
Allows me to retrieve rows from a image, then pixels from those rows.
PngReader pngr = new PngReader(new File(filename));
IImageLineSet<? extends IImageLine> tmrows = pngr.readRows();
ImageLineInt neededline = (ImageLineInt) tmrows.getImageLine(y);
int value;
if (neededline.imgInfo.greyscale) {
    // get the right pixel for grayscale
    value = neededline.getScanline()[x];
} else {
    // get the right pixel for RGB (red sample)
    value = neededline.getScanline()[x * 3];
}
You simply multiply by 3 because the scanline consists of RGBRGBRGB (and so on) for a full-color image without alpha.
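As a side note, plain ImageIO can also round-trip 16-bit grayscale PNGs via TYPE_USHORT_GRAY, if you'd rather avoid an extra dependency. A minimal sketch (class and method names made up), assuming your source PNG really is 16-bit grayscale:

```java
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import javax.imageio.ImageIO;

public class SixteenBitGray {
    // Writes a 16-bit grayscale PNG and reads it back, showing that the
    // full 0..65535 sample range survives the round trip.
    static int roundTripSample(int value) {
        try {
            BufferedImage img = new BufferedImage(1, 1, BufferedImage.TYPE_USHORT_GRAY);
            img.getRaster().setSample(0, 0, 0, value);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(img, "png", out);
            BufferedImage back = ImageIO.read(new ByteArrayInputStream(out.toByteArray()));
            return back.getRaster().getSample(0, 0, 0);
        } catch (java.io.IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Well above the 8-bit ceiling of 255:
        System.out.println(roundTripSample(40000));
    }
}
```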

YUV image processing

I'm developing an Android app that, for every YUV image passed from the camera, randomly picks 10 pixels from it and checks whether they are red or blue.
I know how to do this for RGB images, but not for the YUV format.
I cannot convert it pixel by pixel into an RGB image because of runtime constraints.
I'm assuming you're using the Camera API's preview callbacks, where you get a byte[] array of data for each frame.
First, you need to select which YUV format you want to use. NV21 is required to be supported, and YV12 is required since Android 3.0. None of the other formats are guaranteed to be available. So NV21 is the safest choice, and also the default.
Both of these formats are YUV 4:2:0 formats; the color information is subsampled by 2x in both dimensions, and the layout of the image data is fairly different from the standard interleaved RGB format. FourCC.org's NV21 description, as one source, has the layout information you want - first the Y plane, then the UV data interleaved. But since the two color planes are only 1/4 of the size of the Y plane, you'll have to decide how you want to upsample them - the simplest is nearest neighbor. So if you want pixel (x,y) from the image of size (w, h), the nearest neighbor approach is:
Y = image[ y * w + x ];
V = image[ w * h + (y/2) * w + (x/2) * 2 ];
U = image[ w * h + (y/2) * w + (x/2) * 2 + 1 ];
(with integer division; in NV21 the half-resolution V/U plane is interleaved, so its row stride is still w bytes)
More sophisticated upsampling (bilinear, cubic, etc) for the chroma channels can be used as well, but what's suitable depends on the application.
Once you have the YUV pixel, you'll need to interpret it. If you're more comfortable operating in RGB, you can use these JPEG conversion equations at Wikipedia to get the RGB values.
Or, you can just use large positive values of V (Cr) to indicate red, especially if U (Cb) is small.
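Putting the indexing and the conversion together, a sketch for NV21 (the class and method names are made up, integer division is intentional, and the constants are the full-range BT.601/JPEG ones mentioned above):

```java
public class Nv21Pixel {
    // Extracts Y, U, V for pixel (x, y) from an NV21 frame of size (w, h).
    // NV21: full-resolution Y plane first, then one half-resolution plane
    // of interleaved V/U pairs (row stride w bytes).
    static int[] yuvAt(byte[] frame, int w, int h, int x, int y) {
        int luma = frame[y * w + x] & 0xFF;
        int chromaBase = w * h + (y / 2) * w + (x / 2) * 2;
        int v = frame[chromaBase] & 0xFF;
        int u = frame[chromaBase + 1] & 0xFF;
        return new int[]{luma, u, v};
    }

    // JPEG-style (full-range BT.601) YUV -> RGB conversion.
    static int[] toRgb(int y, int u, int v) {
        int r = clamp(Math.round(y + 1.402f * (v - 128)));
        int g = clamp(Math.round(y - 0.344f * (u - 128) - 0.714f * (v - 128)));
        int b = clamp(Math.round(y + 1.772f * (u - 128)));
        return new int[]{r, g, b};
    }

    static int clamp(int c) { return Math.min(Math.max(c, 0), 255); }

    public static void main(String[] args) {
        // Tiny synthetic 2x2 NV21 frame: four Y bytes, then one V/U pair.
        byte[] frame = {10, 20, 30, 40, 50, 60};
        int[] yuv = yuvAt(frame, 2, 2, 0, 0);
        int[] rgb = toRgb(yuv[0], yuv[1], yuv[2]);
        System.out.println(rgb[0] + "," + rgb[1] + "," + rgb[2]);
    }
}
```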
From the answer of Reuben Scratton in
Converting YUV->RGB(Image processing)->YUV during onPreviewFrame in android?
You can make the camera preview use RGB format instead of YUV.
Try this:
Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);
YUV is just another colour space. You can define red in YUV space just as you can in RGB space. A simple calculation suggests an RGB value of 255,0,0 (red) should appear as something like 76,84,255 in YUV space, so just look for something close to that.
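To verify that figure, a minimal full-range BT.601 RGB-to-YUV sketch (class and method names made up; rounding can shift each component by one or so):

```java
public class RgbToYuv {
    // Full-range BT.601 RGB -> YUV (the JPEG convention).
    static int[] toYuv(int r, int g, int b) {
        int y = clamp(Math.round( 0.299f * r + 0.587f * g + 0.114f * b));
        int u = clamp(Math.round(-0.169f * r - 0.331f * g + 0.500f * b + 128));
        int v = clamp(Math.round( 0.500f * r - 0.419f * g - 0.081f * b + 128));
        return new int[]{y, u, v};
    }

    static int clamp(int c) { return Math.min(Math.max(c, 0), 255); }

    public static void main(String[] args) {
        int[] yuv = toYuv(255, 0, 0); // pure red
        System.out.println(yuv[0] + "," + yuv[1] + "," + yuv[2]); // close to 76,84,255
    }
}
```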
