I'm processing large TIFF images in Java. My goal is simply to read the pixel values and calculate the ink area.
The TIFF is a black-and-white (bilevel) binary image.
So I assumed the pixel values are either 0 or 1, with white = 1 and black = 0.
But only some sample files behave that way; others don't.
In some files 0 is black and 1 is white.
Is that possible?
The pattern I see is that the files where 0 is black and 1 is white were processed in the Windows photo editor,
while the files where 0 is white and 1 is black were processed in the EskoArtwork imaging engine.
Can the meaning of a pixel value change depending on the engine that wrote the file?
Absolutely. This is specified in the TIFF Tag PhotometricInterpretation. An excerpt is:
IFD: Image
Code: 262 (hex 0x0106)
Name: PhotometricInterpretation
LibTiff name: TIFFTAG_PHOTOMETRIC
Type: SHORT
Count: 1
Default: None
Description: The color space of the image data.
The specification considers these values baseline:
0 = WhiteIsZero. For bilevel and grayscale images: 0 is imaged as white.
1 = BlackIsZero. For bilevel and grayscale images: 0 is imaged as black.
...
You could have found this easily by searching for "bilevel tiff tag"; it's the second result on Google.
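To handle both conventions, you can inspect the tag before interpreting pixel values. A minimal sketch, assuming the JDK's built-in TIFF plugin (Java 9+) and its native metadata format `javax_imageio_tiff_image_1.0`; the class and method names here are mine:

```java
import java.io.File;
import javax.imageio.ImageIO;
import javax.imageio.ImageReader;
import javax.imageio.stream.ImageInputStream;
import org.w3c.dom.Node;

public class TiffPhotometric {
    // Returns the PhotometricInterpretation value of the first image in a
    // TIFF file (0 = WhiteIsZero, 1 = BlackIsZero), or -1 if tag 262 is
    // not found in the metadata tree.
    static int photometricInterpretation(File file) throws Exception {
        try (ImageInputStream in = ImageIO.createImageInputStream(file)) {
            ImageReader reader = ImageIO.getImageReaders(in).next();
            reader.setInput(in);
            Node root = reader.getImageMetadata(0)
                              .getAsTree("javax_imageio_tiff_image_1.0");
            Node ifd = root.getFirstChild(); // TIFFIFD element
            for (Node f = ifd.getFirstChild(); f != null; f = f.getNextSibling()) {
                Node num = f.getAttributes().getNamedItem("number");
                if (num != null && "262".equals(num.getNodeValue())) {
                    // TIFFField -> TIFFShorts -> TIFFShort value="..."
                    Node value = f.getFirstChild().getFirstChild();
                    return Integer.parseInt(
                            value.getAttributes().getNamedItem("value").getNodeValue());
                }
            }
            return -1;
        }
    }
}
```

With that value in hand you know whether to count the 0 bits or the 1 bits as ink.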
I'm using Xuggler to put a transparent mask in a video.
I'm trying to open a valid PNG using ImageIO.read(), but when it renders, there's always a white background behind my picture.
This is my code for reading:
url = new URL(stringUrl);
imagem = ImageIO.read(url);
boolean hasAlpha = imagem.getColorModel().hasAlpha();
This boolean is always false.
And in Xuggler, when I render:
mediaReader
.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);
What am I doing wrong?
The "problem" with your original image file is that it is a 24-bit RGB (3-channel) PNG with a tRNS chunk.
The tRNS chunk specifies a color (255, 255, 255, or white, in this case) that is to be replaced with fully transparent pixels. In effect, applying this transparency works like a bit mask, where every pixel is either fully transparent or fully opaque. However, being an "ancillary" (optional, or non-critical) chunk according to the PNG specification, PNG decoders may choose to ignore this information.
Unfortunately, it seems like the default PNGImageReader (the "PNG plugin") for ImageIO does not apply this bit mask, and instead keeps the image in fully opaque RGB (which is still spec compliant behavior).
Your second image file is a 32 bit RGBA (4 channels incl. alpha). In this case, any reader must (and does) decode all channels, including the full 8 bit alpha channel. ImageIO's PNGImageReader does decode this with the expected transparency.
PS: It is probably possible to "fix" the transparency of your first image, using the meta data.
The meta data for your image contains the following information in the native PNG format:
<javax_imageio_png_1.0>
....
<tRNS>
<tRNS_RGB red="255" green="255" blue="255"/>
</tRNS>
</javax_imageio_png_1.0>
After parsing this, you could convert the image by first creating a new 4-channel image (like TYPE_4BYTE_ABGR), copying the R, G and B channels to this new image, and finally setting the alpha channel to either 0 (for all pixels that match the transparent color) or 255 (for all others).
PPS: You probably want to perform these operations directly on the Rasters, as using the BufferedImage.getRGB(..)/setRGB(..) methods will convert the values to an sRGB color profile, which may not exactly match the RGB value from the tRNS chunk.
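Putting the PS and PPS together, here is a minimal sketch of that mask conversion (the class and method names are mine; for brevity it uses getRGB/setRGB, so the sRGB caveat above applies, and a raster-level version would be safer):

```java
import java.awt.image.BufferedImage;

public class TrnsFix {
    // Applies a tRNS-style bit mask: every pixel that matches the chunk's
    // transparent color (tr, tg, tb) becomes fully transparent; all other
    // pixels become fully opaque.
    static BufferedImage applyTrns(BufferedImage src, int tr, int tg, int tb) {
        BufferedImage dst = new BufferedImage(
                src.getWidth(), src.getHeight(), BufferedImage.TYPE_4BYTE_ABGR);
        for (int y = 0; y < src.getHeight(); y++) {
            for (int x = 0; x < src.getWidth(); x++) {
                int rgb = src.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF;
                int g = (rgb >> 8) & 0xFF;
                int b = rgb & 0xFF;
                int alpha = (r == tr && g == tg && b == tb) ? 0x00 : 0xFF;
                dst.setRGB(x, y, (alpha << 24) | (rgb & 0xFFFFFF));
            }
        }
        return dst;
    }
}
```

For the image in question you would call applyTrns(imagem, 255, 255, 255) with the values parsed from the tRNS_RGB element.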
There you go! :-)
I am performing operations on a grayscale image, and the resulting image has the same extension as the input image. For example, if the input image is .jpg or .png, the output image is .jpg or .png respectively.
I am converting the image to grayscale as follows:
Imgproc.cvtColor(mat, grayscale, Imgproc.COLOR_BGR2GRAY);
and I am checking the channel count using:
.channels()
The problem is that when I want to know how many channels the image contains, despite it being a grayscale image, I always get a channel count of 3!
Could you please let me know why that is happening?
The depth (or better, color depth) is the number of bits used to represent a color value. A color depth of 8 usually means 8 bits per channel (so you have 256 color values, or better, shades of grey, per channel, from 0 to 255), and 3 channels then mean one pixel value is composed of 3 × 8 = 24 bits.
However, this also depends on nomenclature. Usually you would say
"Color depth is 8 bits per channel"
but you could also say
"The color depth of the image is 32 bits"
and then mean 8 bits per RGBA channel, or
"The image has a color depth of 24 bits"
and mean 8 bits per R, G and B channel.
The grayscale image has three channels because technically it is not a grayscale image: it is a color image with the same value in all three channels (R, G, B) at every pixel, so visually it looks grayscale.
To check the channels in the image (in PIL/Pillow), use:
img.getbands()
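The same distinction can be seen with Java's BufferedImage, as a sketch (the class and method names are mine): a TYPE_BYTE_GRAY image really is single-channel, while a TYPE_3BYTE_BGR image whose pixels all have R == G == B merely looks grayscale but still reports three bands.

```java
import java.awt.image.BufferedImage;

public class ChannelCount {
    // Number of raster bands (channels) backing the image.
    static int channels(BufferedImage img) {
        return img.getRaster().getNumBands();
    }

    public static void main(String[] args) {
        BufferedImage gray = new BufferedImage(4, 4, BufferedImage.TYPE_BYTE_GRAY);
        BufferedImage bgr  = new BufferedImage(4, 4, BufferedImage.TYPE_3BYTE_BGR);
        System.out.println(channels(gray)); // 1: truly single-channel
        System.out.println(channels(bgr));  // 3: even if every pixel is "grey"
    }
}
```

In OpenCV the situation is analogous: Mat.channels() reports 3 until you actually run the image through cvtColor with COLOR_BGR2GRAY and then query the destination Mat.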
I am using iText to generate PDFs containing images. For CMYK JPEG images, I get PDFs of nearly double the size of the image used, but for the RGB version of the same image, the PDF is nearly the same size as the image.
I would like to know the exact reason for the increase in the PDF's size. Please note that the PDF contains only the image and a few text comments.
I've taken a CMYK JPEG image from Wikipedia with a file size of 714 KByte.
I've created a PDF file with nothing but this image. The result has a file size of 1.06 MB, of which 714 KByte is the original image and 373 KByte is the color space info that is needed when you embed a CMYK image. Together that's about 1.06 MB, which means the overhead of the actual PDF objects is really small.
I guess you are overlooking the fact that PDF expects ICC-based color space info along with CMYK JPEG images. I didn't see any other abnormal results when testing with the image I found on Wikipedia.
Short:
I need to get the value of a specific pixel from a supplied high color depth image.
Details:
I am currently using Processing to make a Slit-scanning program.
Essentially, I am using a greyscale image to pick frames from an animation, and using pixels from those frames to make a new image.
For example, if the greyscale image has a black pixel, it takes the same pixel from the first frame and adds it to an image.
If it's a white pixel, it does the same with the last frame.
Anything in between, naturally, picks the frames in between.
The gist is, if supplied a horizontal gradient and a video of a sunset, you'd have the start of the sunset on the left, slowly transitioning to the end on the right.
My problem is that, when using Processing, I seem to be able to get greyscale values of only 0-255 using the default library:
Black = 0
White = 255
This limits me to using only 256 frames for the source animation, or putting up with a pixelated, unsmooth end image.
I really need to be able to supply, and thus read, pixel values in a much bigger range.
Say,
Black = 0
White = 65535
Is there any Java lib that can do this? One where I can supply, say, an HDR TIFF or TGA image file and read the full range of color out of it?
Thanks,
OK, found a great library for this:
https://code.google.com/p/pngj/
It supports the full PNG feature set, including 16-bit greyscale and full-color images.
It allows me to retrieve rows from an image, then pixels from those rows.
// PngReader, IImageLineSet, IImageLine and ImageLineInt come from PNGJ
PngReader pngr = new PngReader(new File(filename));
IImageLineSet<? extends IImageLine> tmrows = pngr.readRows();
ImageLineInt neededline = (ImageLineInt) tmrows.getImageLine(y);
int value;
if (neededline.imgInfo.greyscale) {
    // greyscale: one sample per pixel
    value = neededline.getScanline()[x];
} else {
    // RGB: three interleaved samples (R, G, B) per pixel
    value = neededline.getScanline()[x * 3];
}
You simply multiply by 3 as the scanline consists of RGBRGBRGB (etc) for a full color image without alpha.
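With 16-bit grey values available, the frame-selection step of the slit-scan becomes a simple proportional mapping. A sketch (the class and method names are mine, not part of PNGJ):

```java
public class FrameMap {
    // Maps a 16-bit grey value (0..65535) proportionally onto a frame
    // index in [0, frameCount - 1]: black picks the first frame, white
    // the last, and intermediate greys the frames in between.
    static int frameFor(int grey16, int frameCount) {
        return (int) ((long) grey16 * (frameCount - 1) / 65535);
    }
}
```

This gives 65536 distinct levels to pick frames with, instead of the 256 available from 8-bit greyscale.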
Why don't two images with the same width and height have the same file size? As I understand it, they both have the same number of pixels, so why can one weigh more than the other?
In the Bitmap format (files with extension .bmp):
The header size can differ. (The header stores the file format, image size, image color type, and similar additional information.)
The size of one pixel can differ: 1 bit/pixel for black-and-white images, 8 bits/pixel for images with at most 256 colors, 24 bits/pixel for standard images, and 32 bits/pixel for images with transparency information (although .bmp files almost never carry transparency information, .png files often do).
In the JPEG, PNG, and other formats, the two points above also apply. Additionally:
The image data is compressed before being stored (for example, .jpg, .png).
The file may contain layer or animation information (for example, .gif).
Because pixels can have different sizes.
A pixel can be 1 bit (black and white), 8, 16, 24, 32 bits, and even more.
So two images with the same width (480 px) and height (640 px) but different pixel sizes have different file sizes,
i.e. 480 × 640 × 32-bit pixels != 480 × 640 × 1-bit pixels
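The arithmetic above is easy to check directly; a minimal sketch (the class and method names are mine):

```java
public class RawSize {
    // Uncompressed image data size in bytes: width * height * bitsPerPixel / 8.
    // Headers, palettes, and compression (for JPEG/PNG) change the actual
    // file size, but this is the raw pixel payload.
    static long rawBytes(int width, int height, int bitsPerPixel) {
        return (long) width * height * bitsPerPixel / 8;
    }

    public static void main(String[] args) {
        System.out.println(rawBytes(480, 640, 32)); // 1228800 bytes (~1.2 MB)
        System.out.println(rawBytes(480, 640, 1));  // 38400 bytes (~37.5 KB)
    }
}
```

So at identical dimensions, the 32-bit image carries 32 times as much raw pixel data as the 1-bit one, before compression even enters the picture.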