Does anyone know how I can get the exact values from a JPG image? For example: I have created an image that only contains values from 0 - 255, and the image is grayscale, which means the RGB values are exactly the same. On the other hand, I know exactly the value of each pixel at each position, but when I use getRGB() I'm getting values that do not match the original values. E.g.:
I have a '0' at position [0][0] but getRGB() is returning the decimal number '13'; at the next position [0][1] I have a '1' but getRGB() is returning '13' too... I know these values may be caused by the compression of the image, but does anyone have an idea of what adjustment I can make to get the correct values?
I would appreciate any help.
JPEG uses a lossy compression algorithm: it sacrifices data precision to achieve better compression. So you can't get back the exact RGB values.
If you need to get back the exact RGB values, use PNG image format which uses lossless compression algorithms.
Also note what BufferedImage.getRGB() returns, quoting from the javadoc:
Returns an integer pixel in the default RGB color model (TYPE_INT_ARGB) and default sRGB colorspace.
So if you have gray values in the range 0..255, the RGB values returned by getRGB() will be in the range 0..16777215 (16777215 = 0xffffff). The int returned by getRGB() is not the gray value itself but contains the packed RGB components (and the alpha component if the image has transparency).
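To recover a 0-255 gray level from that packed int, mask out one of the channels. A minimal sketch, assuming the image was saved as a lossless PNG (the file name is just a placeholder):

import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class GrayValue {
    public static void main(String[] args) throws Exception {
        BufferedImage img = ImageIO.read(new File("gray.png")); // placeholder file name

        int argb = img.getRGB(0, 0);   // packed as 0xAARRGGBB
        int r = (argb >> 16) & 0xFF;
        int g = (argb >> 8) & 0xFF;
        int b = argb & 0xFF;

        // In a true grayscale image r == g == b, so any one channel is the gray level.
        System.out.println("gray at (0,0): " + r);
    }
}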
I want to extract the pixel values from a jpeg image using getRGB(). The problem is that the getRGB() method returns different values each time I invoke it after setRGB(). After going through some solutions on the internet, many suggested using .png images instead of .jpg. Now, is there a way to extract pixel values in the range 0-255 from a jpg image?
I'm currently working on comparing a filtered image to its original (unfiltered) image using the SSIM (Structural similarity) index using Java.
My research brought me to a mathematical formula where the average, variance, covariance and the dynamic range of the two BufferedImages are needed.
Calculating the average and the variance was not a big problem for me; however, I can't figure out a way to get the number of bits per pixel needed to calculate the dynamic range, or the covariance value. Is this something I can obtain from the BufferedImage?
BufferedImage has a getColorModel() method, and in the returned ColorModel object there is a getPixelSize() method which returns the number of bits per pixel described by that ColorModel.
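A minimal sketch of how that can feed into the dynamic range L used by SSIM; using component 0 assumes all components have the same bit depth:

// Dynamic range L for SSIM, taken from the image's ColorModel.
static double dynamicRange(java.awt.image.BufferedImage img) {
    int bitsPerComponent = img.getColorModel().getComponentSize(0); // e.g. 8
    return Math.pow(2, bitsPerComponent) - 1;                       // 255 for 8-bit data
}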
I am performing operations on a grayscale image, and the resulting image of these operations has the same extension as the input image. For example, if the input image is .jpg or .png, the output image is .jpg or .png respectively.
and I am converting the image into grayscale as follows:
Imgproc.cvtColor(mat, grayscale, Imgproc.COLOR_BGR2GRAY);
and I am checking the channels count using:
.channels()
The problem is that when I want to know how many channels the image contains, despite it being a grayscale image, I always receive number of channels = 3!
Kindly let me know why that is happening.
The depth (or better, color depth) is the number of bits used to represent a color value. A color depth of 8 usually means 8 bits per channel (so you have 256 color values, or rather shades of grey, per channel, from 0 to 255), and 3 channels then mean one pixel value is composed of 3*8=24 bits.
However, this also depends on nomenclature. Usually you will say
"Color depth is 8-bits per channel"
but you also could say
"The color depth of the image is 32-bits"
and then mean 8 bits per RGBA channel or
"The image has a color depth of 24-bits"
and mean 8 bits each for the R, G and B channels.
The grayscale image has three channels because technically it is not a grayscale image. It is a colored image with the same values for all the three channels (r, g, b) in every pixel. Therefore, visually it looks like a grayscale image.
To check the channels in the image, use-
img.getbands()
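In the OpenCV Java bindings used in the question, the same thing happens because imread() loads files as 3-channel BGR by default, even when the file on disk is single-channel. A minimal sketch (file names are placeholders):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class ChannelsDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);   // load the OpenCV native library

        Mat color = Imgcodecs.imread("input.jpg");      // 3-channel BGR, imread's default
        Mat gray = new Mat();
        Imgproc.cvtColor(color, gray, Imgproc.COLOR_BGR2GRAY);
        System.out.println(gray.channels());            // 1

        Imgcodecs.imwrite("gray.jpg", gray);
        System.out.println(Imgcodecs.imread("gray.jpg").channels());                             // 3 again
        System.out.println(Imgcodecs.imread("gray.jpg", Imgcodecs.IMREAD_GRAYSCALE).channels()); // 1
    }
}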
I'm developing an Android app that, for every YUV image passed from the camera, randomly picks 10 pixels from it and checks whether they are red or blue.
I know how to do this for RGB images, but not for YUV format.
I cannot convert it pixel by pixel into an RGB image because of the runtime constraints.
I'm assuming you're using the Camera API's preview callbacks, where you get a byte[] array of data for each frame.
First, you need to select which YUV format you want to use. NV21 is required to be supported, and YV12 is required since Android 3.0. None of the other formats are guaranteed to be available. So NV21 is the safest choice, and also the default.
Both of these formats are YUV 4:2:0 formats; the color information is subsampled by 2x in both dimensions, and the layout of the image data is fairly different from the standard interleaved RGB format. FourCC.org's NV21 description, as one source, has the layout information you want - first the Y plane, then the UV data interleaved. But since the two color planes are only 1/4 of the size of the Y plane, you'll have to decide how you want to upsample them - the simplest is nearest neighbor. So if you want pixel (x,y) from the image of size (w, h), the nearest neighbor approach is:
Y = image[ y * w + x ];
U = image[ w * h + floor(y/2) * w + 2 * floor(x/2) + 1 ];
V = image[ w * h + floor(y/2) * w + 2 * floor(x/2) + 0 ];
(In NV21 the V and U samples are interleaved, so each chroma row is w bytes wide; that is why the chroma row stride is w and the horizontal offset is doubled.)
More sophisticated upsampling (bilinear, cubic, etc) for the chroma channels can be used as well, but what's suitable depends on the application.
Once you have the YUV pixel, you'll need to interpret it. If you're more comfortable operating in RGB, you can use these JPEG conversion equations at Wikipedia to get the RGB values.
Or, you can just use large positive values of V (Cr) to indicate red, especially if U (Cb) is small.
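Putting the indexing and the conversion together, here is a minimal sketch; the method name, thresholds and the "is it red" test are illustrative assumptions, not part of the Camera API:

// Assumes an NV21 buffer: w*h luma bytes, then interleaved V/U (Cr first), 2x2 subsampled.
static boolean isRoughlyRed(byte[] nv21, int w, int h, int x, int y) {
    int Y = nv21[y * w + x] & 0xFF;
    int chromaBase = w * h + (y / 2) * w + 2 * (x / 2);  // nearest-neighbor chroma sample
    int V = nv21[chromaBase] & 0xFF;                     // Cr
    int U = nv21[chromaBase + 1] & 0xFF;                 // Cb

    // JPEG/JFIF YCbCr -> RGB conversion
    int r = clamp((int) (Y + 1.402 * (V - 128)));
    int g = clamp((int) (Y - 0.344136 * (U - 128) - 0.714136 * (V - 128)));
    int b = clamp((int) (Y + 1.772 * (U - 128)));

    // Crude redness test: the red channel clearly dominates the other two.
    return r > 100 && r > 2 * g && r > 2 * b;
}

static int clamp(int v) { return Math.max(0, Math.min(255, v)); }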
From the answer of Reuben Scratton in
Converting YUV->RGB(Image processing)->YUV during onPreviewFrame in android?
You can make the camera preview use RGB format instead of YUV.
Try this:
Camera.Parameters.setPreviewFormat(ImageFormat.RGB_565);
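For context, that call normally goes on a Camera.Parameters instance obtained from the open camera; a minimal sketch of the usual setup (RGB_565 previews are not guaranteed on every device, so a real app should check getSupportedPreviewFormats() first):

import android.graphics.ImageFormat;
import android.hardware.Camera;

Camera camera = Camera.open();
Camera.Parameters params = camera.getParameters();
params.setPreviewFormat(ImageFormat.RGB_565);
camera.setParameters(params);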
YUV is just another colour space. You can define red in YUV space just as you can in RGB space. A simple calculator suggests an RGB value of 255,0,0 (red) should appear as something like 76,84,255 in YUV space so just look for something close to that.
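For reference, the JPEG equations give Y = 0.299 * 255 ≈ 76, U = 128 - 0.169 * 255 ≈ 85 and V = 128 + 0.5 * 255 ≈ 255 (clamped) for pure red, which is where a figure like 76,84,255 comes from; the exact U value depends on the coefficient set and rounding.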
Here is what I read about it, but I can't understand exactly what it does:
One way to implement rubber-banding is to draw in XOR mode. You set XOR mode by calling the setXORMode() method for a graphics context and passing a color to it — usually the background color. In this mode the pixels are not written directly to the screen. The color in which you are drawing is combined with the color of the pixel currently displayed together with a third color that you specify, by exclusive ORing them together, and the resultant pixel color is written to the screen. The third color is usually set to be the background color, so the color of the pixel that is written is the result of the following operation:
resultant_Color = foreground_color^background_color^current_color
I know how XORing works, but I don't know what the above paragraph means. Please elucidate it for me.
It applies an XOR mask to the RGB values, just like a regular XOR applies a bit mask. The color you pass to setXORMode() is XORed with your current drawing color and with the color of the pixel already on screen. If the pixel on screen is the XOR color you passed in, the result is exactly your drawing color; if it is some other color, you get a mixed, inverted-looking color. Because XOR is its own inverse, drawing the same thing a second time restores the original pixels, which is what makes it useful for rubber-banding.
Just write some code and try it, and it will be immediately evident what happens.
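A minimal sketch of exactly that experiment on an off-screen image (the colors are arbitrary):

import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;

public class XorDemo {
    public static void main(String[] args) {
        BufferedImage img = new BufferedImage(10, 10, BufferedImage.TYPE_INT_RGB);
        Graphics g = img.getGraphics();

        g.setColor(Color.BLUE);          // the color currently "on screen"
        g.fillRect(0, 0, 10, 10);

        g.setColor(Color.RED);           // foreground (drawing) color
        g.setXORMode(Color.WHITE);       // the third color, usually the background
        g.fillRect(0, 0, 10, 10);        // pixel becomes foreground ^ xorColor ^ current
        System.out.printf("after one pass:   %06x%n", img.getRGB(5, 5) & 0xFFFFFF);

        g.fillRect(0, 0, 10, 10);        // drawing the same shape again restores the blue
        System.out.printf("after two passes: %06x%n", img.getRGB(5, 5) & 0xFFFFFF);
        g.dispose();
    }
}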