OpenCV Adaptive Thresholding in Java - java

The javadoc gives the signature of the adaptive threshold function as
adaptiveThreshold(src, dst, maxValue, adaptiveMethod, thresholdType, blockSize, C)
I need to choose values for blockSize and C (the offset) automatically for a given image. I take a color image, convert it to grayscale, and then apply adaptive thresholding as a pre-processing step for OCR.
Currently I hardcode the values for blockSize and C, see what gives the better result, and settle on those values. Is there a way to find the best (or better) values for these parameters, so that given a grayscale image my algorithm knows what good values for blockSize and C would be?
PS: The adaptive threshold method that I am using is ADAPTIVE_THRESH_MEAN_C.
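For reference, a sketch of the pipeline described above with the OpenCV Java bindings (src stands for the input color Mat; the blockSize of 11 and C of 2 are just examples of the kind of hardcoded values I mean, not recommendations):

import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

Mat gray = new Mat();
Mat binary = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);       // color input -> grayscale
Imgproc.adaptiveThreshold(gray, binary, 255,
        Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY,
        11, 2);                                             // hardcoded blockSize and C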

Related

Thresholding in luminance image

I have an image that contains an illuminated region. First I crop the area I want to process and then convert it to a binary image. I use Otsu's thresholding, but it gives a bad result for this problem. I tried adaptive thresholding, but that method depends on the block size and C parameters (the OpenCV method). What should I do to get a good result for this problem?
Original image (cropped to the area of interest):
Otsu thresholding result:
Adaptive thresholding is not suitable for your case. If you simply want to create a binary image with a black background and white text (or vice versa), and you have a tightly cropped area, you can just do the following steps (a rough Java sketch of them is shown below):
1. Convert the image to grayscale.
2. Normalize the image (ignore the darkest and lightest 1% of pixels).
3. Apply a fixed threshold (somewhere between 0.3 and 0.7).
4. Apply some morphological operations such as erosion, dilation, opening, and closing to eliminate noise.
Adaptive thresholding is used for uneven luminance, i.e. when there is a light gradient across the board, which is not present in your example.
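A rough sketch of those steps with the OpenCV Java bindings (src stands for the cropped color input; step 2 is simplified here, since Core.normalize with NORM_MINMAX stretches the full min/max rather than ignoring the darkest/lightest 1% of pixels):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

Mat gray = new Mat(), bin = new Mat();
Imgproc.cvtColor(src, gray, Imgproc.COLOR_BGR2GRAY);                     // 1: grayscale
Core.normalize(gray, gray, 0, 255, Core.NORM_MINMAX);                    // 2: normalize (simplified)
Imgproc.threshold(gray, bin, 0.5 * 255, 255, Imgproc.THRESH_BINARY);     // 3: fixed threshold in the 0.3-0.7 range
Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(3, 3));
Imgproc.morphologyEx(bin, bin, Imgproc.MORPH_OPEN, kernel);              // 4: remove small specks
Imgproc.morphologyEx(bin, bin, Imgproc.MORPH_CLOSE, kernel);             //    fill small holes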

Java Splash Screen scale factor

Java seems to scale the splash screen passed as a JVM switch, e.g.
java -splash:splash_file.png
depending on the size of the monitor.
In the source I can see a reference to a natively calculated scale factor. Does anybody know how this scaling factor is calculated?
I would assume it's calculated in the standard way for graphics, which takes an image of a given size in an unbounded "world" (world coordinates), transforms it to a normalized device (think unit square), and then transforms it again to screen coordinates. The transformations consist of a translation and a scaling of the points.
Given the splash screen's window in the world (the way it should appear without translation or scaling), the normalized (x,y) values are obtained as follows:
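The formulas were shown as images in the original answer; presumably they were the usual window-to-normalized-device mapping, something like:

x_n = (x - x_min) * 1 / (x_max - x_min)
y_n = (y - y_min) * 1 / (y_max - y_min)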
The first part is the translation and the second the scale factor. This reduces the image to be contained in a 1 x 1 square so all the (x,y) values are fractional.
To go from the normalized to screen coordinate system the values are calculated as follows:
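Again presumably the standard viewport mapping, something like:

x_s = x_n * (sx_max - sx_min) + sx_min
y_s = y_n * (sy_max - sy_min) + sy_min

where (sx_min, sx_max) and (sy_min, sy_max) are the bounds of the target area in screen coordinates.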
These operations are typically done efficiently with the help of translation and scaling matrix multiplications. Rotation can also be applied.
This is actually a low-level view of how you could take images, shapes, etc. drawn any way you like and present them consistently across any sized screen. I'm not sure exactly how it's done in the example you give, but it would likely be some variation of this. See the beginning of this presentation for a visual representation.
This value is actually both JDK-implementation-dependent and architecture-dependent.
I browsed the OpenJDK code, and for a lot of architectures, it's simply hardcoded to 1.
For example, in the Windows bindings, you'll find:
SPLASHEXPORT char*
SplashGetScaledImageName(const char* jarName, const char* fileName,
                         float *scaleFactor)
{
    *scaleFactor = 1;
    return NULL;
}
The scaleFactor value is then stored into a struct that is accessed via JNI through the _GetScaleFactor method.

How to calculate and display a laplacian filtered image

I would like to know how to calculate and display a Laplacian-filtered image, for an example Laplacian filter like the one below:
-1 6 -1
6 -20 6
-1 6 -1
Please help me with this. Thank you. I appreciate any help I can get.
Assuming that you are able to scan the image pixel by pixel: create a result image of the same size as the original, read the neighbours from the original image, and write into the result (reading and writing the same image in place would corrupt the neighbouring values):

for (int i = 1; i < Original_Image.rows - 1; i++)      // skip the first and last rows to stay in range
{
    for (int j = 1; j < Original_Image.cols - 1; j++)  // skip the first and last columns
    {
        Result_Image(i,j) = -20*Original_Image(i,j)
            + 6*(Original_Image(i-1,j) + Original_Image(i+1,j) + Original_Image(i,j-1) + Original_Image(i,j+1))
            - 1*(Original_Image(i-1,j-1) + Original_Image(i+1,j+1) + Original_Image(i+1,j-1) + Original_Image(i-1,j+1));
    }
}
You can achieve this by using OpenCV for Java or similar image-processing libraries for Java if Java itself doesn't support scanning images pixel by pixel.
Note: this algorithm can be inefficient; the libraries have useful built-in functions for filtering images.
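For example, with the OpenCV Java bindings, a sketch of the same filter using the built-in convolution function (src stands for the grayscale input Mat; the kernel values are the ones from the question, and a floating-point destination depth avoids clipping the negative responses before rescaling for display):

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

Mat kernel = new Mat(3, 3, CvType.CV_32F);
kernel.put(0, 0,
        -1,   6, -1,
         6, -20,  6,
        -1,   6, -1);
Mat filtered = new Mat();
Imgproc.filter2D(src, filtered, CvType.CV_32F, kernel);         // convolve with the Laplacian-like kernel
Core.normalize(filtered, filtered, 0, 255, Core.NORM_MINMAX);   // rescale for display
filtered.convertTo(filtered, CvType.CV_8U);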
You asked about Java, but in case you meant something more basic I will try to answer more generally.
Given the filter coefficients (you have an approximation of the Laplacian filter), the way to apply it to an image is convolution (assuming the filter is LSI - Linear Spatially Invariant).
The convolution can be computed directly (with loops) or in the frequency domain (using the convolution theorem).
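Written out for a filter h and an image f, the direct form is roughly:

g(i, j) = sum over (m, n) of h(m, n) * f(i - m, j - n)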
There are a few things to consider (real-world problems):
The convolution "asks" for pixels beyond the image, so boundary conditions have to be imposed ("pixels" beyond the image are zero, a constant, the nearest neighbor, etc.).
The result of the convolution is bigger than the input image due to the filter transient, so again a sensible decision has to be made (usually cropping back to the input size).
If you're limited to fixed-point math, proper scaling should be applied after the operation (a rule of thumb, meant to preserve the image mean, says the sum of all filter coefficients should be 1, hence the need for scaling).
Good Luck.

Fast Adaptive Threshold for Canny Edge Detector in Android

According to my research, the Canny edge detector is very useful for detecting the edges of an image. After putting a lot of effort into it, I found that an OpenCV function can do it:
Imgproc.Canny(Mat image, Mat edges, double threshold1, double threshold2)
But for the low and high thresholds, I know that different images need different values, so is there any fast adaptive method that can automatically assign the low and high thresholds for a given image?
This is relatively easy to do. Check out this older SO post on the subject.
A quick way is to compute the mean and standard deviation of the current image and use the mean plus/minus one standard deviation as the two thresholds.
The example in C++ would be something like:
Mat img = ...;
Scalar mu, sigma;
meanStdDev(img, mu, sigma);
Mat edges;
Canny(img, edges, mu.val[0] - sigma.val[0], mu.val[0] + sigma.val[0]);
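With the Android/Java bindings mentioned in the question, the same idea would look roughly like this (gray stands for the grayscale input Mat; Core.meanStdDev fills two MatOfDouble with the per-channel mean and standard deviation):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfDouble;
import org.opencv.imgproc.Imgproc;

MatOfDouble mu = new MatOfDouble();
MatOfDouble sigma = new MatOfDouble();
Core.meanStdDev(gray, mu, sigma);                    // per-channel mean and standard deviation
double mean = mu.toArray()[0];
double std  = sigma.toArray()[0];
Mat edges = new Mat();
Imgproc.Canny(gray, edges, mean - std, mean + std);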
Another method is to compute the median of the image and target a ratio above and below the median (e.g., 0.66*medianValue and 1.33*medianValue).
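A sketch of that median variant in Java (the median of an 8-bit grayscale image can be read off a 256-bin histogram; the 0.66/1.33 ratios are the ones suggested above, and gray again stands for the grayscale input Mat):

// needs org.opencv.core.{Mat, MatOfInt, MatOfFloat}, org.opencv.imgproc.Imgproc
Mat hist = new Mat();
Imgproc.calcHist(java.util.Arrays.asList(gray), new MatOfInt(0), new Mat(),
        hist, new MatOfInt(256), new MatOfFloat(0f, 256f));
double total = gray.rows() * gray.cols();
double seen = 0;
int median = 0;
for (int i = 0; i < 256; i++) {                      // walk the bins until half the pixels are covered
    seen += hist.get(i, 0)[0];
    if (seen >= total / 2) { median = i; break; }
}
Mat edges = new Mat();
Imgproc.Canny(gray, edges, 0.66 * median, 1.33 * median);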
Hope that helps!
OpenCV has an adaptive threshold function.
With OpenCV4Android it is like this:
Imgproc.adaptiveThreshold(src, dst, maxValue, adaptiveMethod, thresholdType, blockSize, C);
An example:
Imgproc.adaptiveThreshold(mInput, mInput, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY_INV, 15, 4);
As for how to choose the parameters, you have to read the docs for more details. Choosing the right threshold for each image is a whole different question.

Determine if an image is b/w or colored in Java

Is there an efficient way to determine whether an image is grayscale or in color? By efficient I mean something other than reading all the pixels of the image and checking every single RGB value.
For example, in Python there is a function in the Imaging library called 'getcolors' that returns a hash of pairs { (R G B) -> counter } for the whole image, and I just have to iterate over that hash looking for a single colored entry.
UPDATE:
For future readers of this post: I implemented a solution reading the image pixel by pixel (as #npinti suggested in his link) and it seems to be fast enough for me (you should take the time to implement it; it won't take you more than 10 minutes). It seems the Python implementation of the pixel-by-pixel approach is really bad (inefficient and slow).
If you are using a BufferedImage, this previous SO post should prove helpful.
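For reference, a minimal sketch of the pixel-by-pixel check described in the update, using a plain BufferedImage (it bails out at the first colored pixel, which is what usually makes it fast enough in practice):

import java.awt.image.BufferedImage;

static boolean isGrayscale(BufferedImage img) {
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int r = (rgb >> 16) & 0xFF;
            int g = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            if (r != g || g != b) {
                return false;   // found a colored pixel, stop early
            }
        }
    }
    return true;
}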
