Downscale image for MNIST - java

I'm trying to solve the MNIST classification problem on Android devices. I already have a trained model; now I want to be able to recognize a single digit in a photo.
After taking a photo I do some pre-processing before passing the image to the model.
Here's an example of the original image:
After that I make it black-and-white only, so it starts looking like this:
Please don't pay attention to the changes in dimensions - they were introduced by the way I take screenshots; in the app both images still have the same size.
After casting it to BW colors I extract the number's blob, downscale it to 20x20 (respecting the aspect ratio) and then add padding around it to make it fit the MNIST 28x28 size. The final result is the following:
Notice that I upscaled the image to show the problem. And the problem is the following: after downscaling, a lot of useful information gets lost. Sometimes whole edges of the number are gone. Is there any way to avoid this? Maybe I can somehow make the white lines thicker before downscaling?
P.S. I use the Catalano framework for image processing.
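For reference, the downscale-and-pad step described above can be sketched in plain Java (AWT only, not the Catalano API; the method name and the bilinear interpolation choice are illustrative):

import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;

// Fit the extracted digit blob into 20x20 preserving the aspect ratio,
// then centre it on a 28x28 black canvas, MNIST-style.
static BufferedImage toMnistInput(BufferedImage blob) {
    double scale = 20.0 / Math.max(blob.getWidth(), blob.getHeight());
    int w = (int) Math.round(blob.getWidth() * scale);
    int h = (int) Math.round(blob.getHeight() * scale);

    BufferedImage out = new BufferedImage(28, 28, BufferedImage.TYPE_BYTE_GRAY);
    Graphics2D g = out.createGraphics();
    g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
            RenderingHints.VALUE_INTERPOLATION_BILINEAR);
    g.drawImage(blob, (28 - w) / 2, (28 - h) / 2, w, h, null);
    g.dispose();
    return out;
}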
EDIT: After applying the suggested filter from the answer, here's what I get:

I'm not sure about the framework you've mentioned, but one thing that can help here is to apply some morphological operations to the original image before going for MNIST-style normalization.
Namely, you can do an erosion as follows (I'm showing the approach in Python; there should be analogues in the framework you use, as the operations are pretty standard).
import numpy as np
import cv2

xx = cv2.imread('6.jpg')  # your original image of 6
# a large all-ones structuring element; tune its size to how thick you want the stroke
kernel = np.ones((20, 20), np.uint8)
# eroding the light background thickens the dark stroke of the digit
erosion = cv2.erode(xx, kernel, iterations=2)
cv2.imwrite('6A.jpg', erosion)  # this will be used as a replacement for the original image
This will produce something that looks like this. Then, if you binarize the new image (say, threshold at gray intensity 150) and do the resize followed by padding, you should get something like this one, which is more robust.
Note also that you need to center the image (against its center of mass) at the very last stage, before feeding it to any classifier.
The end result, by MNIST's standards, is as follows (physical dimensions 28x28).
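Since the question is about Java, here is roughly the same erosion using OpenCV's Java bindings (a sketch, not the Catalano API; file names follow the Python snippet above):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class Thicken {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat src = Imgcodecs.imread("6.jpg");
        // 20x20 rectangular structuring element, as in the Python version
        Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_RECT, new Size(20, 20));
        Mat eroded = new Mat();
        // anchor (-1,-1) = kernel centre; 2 iterations, as in the Python version
        Imgproc.erode(src, eroded, kernel, new Point(-1, -1), 2);
        Imgcodecs.imwrite("6A.jpg", eroded);
    }
}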


image enhancement of plots and other line diagrams

I am looking for library routines for the image enhancement of (scientific) plots and diagrams. Typical examples are shown in
http://www.jcheminf.com/content/pdf/1758-2946-4-11.pdf
and Figure 3 of http://en.wikipedia.org/wiki/Anti-aliasing
These have the features that:
They usually use a very small number of primitives (line, character, circle, rectangle)
They are usually monochrome (black/white) or have a very small number of block colours
The originals have no gradients or patterns.
I wish to reconstruct the primitives and am looking for an algorithm to restore clean lines in the image before the next stage of analysis (which may include line detection and OCR). The noise often comes from:
use of JPGs (the noise is often seen close to the original primitive)
antialiasing
I require Free/Open Source solutions and would ideally like existing Java libraries. If there are any which already do some of the job of reconstructing lines, that would be a bonus! For character recognition I would be happy to isolate each character at this stage and defer OCR, though pointers to that would also be appreciated.
UPDATE:
I am surprised that even with a bounty there have been no substantive replies to the question. I am therefore investigating it myself. I still invite answers but they should go beyond my own answer.
ANSWER TO OWN QUESTION
Since there have been no answers after nearly a week, here is what I now plan:
I found mention of the Canny edge-detection algorithm on another SO post and then found:
http://www.tomgibara.com/computer-vision/canny-edge-detector
from Tom Gibara.
This is very easy to use in default mode and the main program is:
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public static void main(String[] args) throws Exception {
    File file = new File("c.bmp");

    // create the detector
    CannyEdgeDetector detector = new CannyEdgeDetector();

    // adjust its parameters as desired
    detector.setLowThreshold(0.5f);
    detector.setHighThreshold(1f);

    // apply it to an image
    BufferedImage img = ImageIO.read(file);
    detector.setSourceImage(img);
    detector.process();

    // write the edge image out as a PNG
    BufferedImage edges = detector.getEdgesImage();
    ImageIO.write(edges, "png", new File("c.png"));
}
Here ImageIO reads and writes bitmaps. The unprocessed image is read as a 24-bit BMP (ImageIO seems to fail with a lower colour range). The parameter values are Gibara's out-of-the-box defaults.
The edge detection is very impressive and outlines all the lines and characters; the source bitmap is converted to an image of the edges.
So now I have two tasks:
fit straight lines to the outlines, which are essentially clean "tramlines". I expect this to be straightforward for clean diagrams. I'd be grateful for any mention of Java libraries to fit line primitives to outlines.
recognize the characters. Gibara has done an excellent job of separating them and so this is an exercise of recognising the individual glyphs. I can use the outlines to isolate the individual pixel maps for each glyph and then pass these to JavaOCR. Alternatively the outlines may be good enough to recognize the characters directly. I do NOT know what the font is, but most characters are in the 32-255 range and I believe I can build up heuristic maps.
See How do I properly load a BufferedImage in java? for loading bitmaps in Java
Java Library
OpenCV is the go-to library for computer vision tasks like this. There are Java bindings here: http://code.google.com/p/javacv/. OpenCV covers everything from basic image processing filters to high-level object and motion detection algorithms.
Line Detection
For detecting straight lines, try the Hough Transform. The OpenCV Tutorials have a good explanation: http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/hough_lines/hough_lines.html#how-does-it-work
The classical Hough transform outputs infinite lines, but OpenCV also implements a variant called the Probabilistic Hough Transform that outputs line segments. It should give what you need. The original academic paper is here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.34.9440&rep=rep1&type=pdf
Once you detect line segments, you might want to detect linked line segments and join them together. For your simple images, you will probably do just fine with a brute-force comparison of all segment endpoints. If you detect more than one endpoint within a small radius, say 2 pixels, join them together to make sure your lines are continuous. You can also measure the angle between joined line segments to detect polygons.
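For example, with the OpenCV Java bindings (a sketch; parameter values are illustrative starting points, and the input is assumed to be the binary edge image produced earlier):

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class FindSegments {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat edges = Imgcodecs.imread("c.png", Imgcodecs.IMREAD_GRAYSCALE);
        Mat lines = new Mat();
        // rho = 1px, theta = 1 degree, 50 votes, min length 30px, max gap 5px
        Imgproc.HoughLinesP(edges, lines, 1, Math.PI / 180, 50, 30, 5);
        for (int i = 0; i < lines.rows(); i++) {
            double[] a = lines.get(i, 0); // {x1, y1, x2, y2}
            for (int j = i + 1; j < lines.rows(); j++) {
                double[] b = lines.get(j, 0);
                // brute-force endpoint comparison: treat segments as linked
                // when an endpoint of one is within 2px of an endpoint of the other
                if (Math.hypot(a[2] - b[0], a[3] - b[1]) <= 2) {
                    System.out.println("segments " + i + " and " + j + " join");
                }
            }
        }
    }
}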
Circle Detection
There is another version of the Hough transform that can detect circles, explained here: http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/hough_circle/hough_circle.html#hough-circle
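Continuing the previous sketch (same imports; Imgproc.HOUGH_GRADIENT is the OpenCV 3.x constant name, and the parameter values are again illustrative):

// HoughCircles runs its own edge detection internally,
// so it takes the grayscale source rather than a pre-computed edge image
Mat gray = Imgcodecs.imread("c.bmp", Imgcodecs.IMREAD_GRAYSCALE);
Mat circles = new Mat();
Imgproc.HoughCircles(gray, circles, Imgproc.HOUGH_GRADIENT,
        1,      // dp: accumulator resolution = image resolution
        20,     // minimum distance between detected centres
        100,    // upper Canny threshold used internally
        30,     // accumulator threshold; lower finds more (possibly false) circles
        5, 50); // min/max radius in pixels
for (int i = 0; i < circles.cols(); i++) {
    double[] c = circles.get(0, i); // {centre_x, centre_y, radius}
    System.out.printf("circle at (%.0f, %.0f), r = %.0f%n", c[0], c[1], c[2]);
}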
I wish to reconstruct the primitives and am looking for an algorithm to restore clean lines in the image before the next stage of analysis (which may include line detection and OCR).
Have you looked at jaitools? (http://code.google.com/p/jaitools/)
They have an API for vectorizing graphics which is quite fast and flexible; see the API and docs here: http://jaitools.org/

Java image library - turn grid image into array

If I have an image of a table of boxes, with some coloured in, is there an image processing library that can help me turn this into an array?
Thanks
You can use a thresholding function to binarize the image into dark/light pixels so dark pixels are 0 and light ones are 1.
Then you would want to clean up image artifacts using dilation and erosion functions to remove noise (all of these are well defined on Wikipedia).
Finally, if you know where the boxes are, you can just get the value in the center of each box to determine the array value, or possibly use an area near the center and take the prevailing value (i.e. more 0's is a filled-in square, more 1's is an empty square).
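As a minimal sketch of that center-sampling step in plain Java (assuming a regular grid whose geometry you already know; the names and the threshold parameter are illustrative):

import java.awt.image.BufferedImage;

// For each box, sample the pixel at its centre:
// dark => 0 (filled-in box), light => 1 (empty box).
static int[][] gridToArray(BufferedImage img, int rows, int cols, int threshold) {
    int[][] grid = new int[rows][cols];
    int boxW = img.getWidth() / cols;
    int boxH = img.getHeight() / rows;
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++) {
            int rgb = img.getRGB(c * boxW + boxW / 2, r * boxH + boxH / 2);
            int gray = (((rgb >> 16) & 0xFF) + ((rgb >> 8) & 0xFF) + (rgb & 0xFF)) / 3;
            grid[r][c] = gray < threshold ? 0 : 1;
        }
    }
    return grid;
}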
If you are scanning these boxes and there is a lot of variation in the position of the boxes, you will have to perform some level of image registration using known points, or fiducials.
As far as what tools to use, I'd recommend first trying this manually using a tool like ImageJ, which has a UI and can also be used programmatically since it is written entirely in Java.
Other good libraries for this include OpenCV and the Java Advanced Imaging API.
Your results will definitely vary depending on the input images and how consistently lit and positioned they are.
The best way to see how it will do for your data is to apply these processing steps manually and see where your threshold value should be and how much dilating/eroding you need to get consistent results.

Scalable clipping mask

I need to clip variable-sized images into puzzle-shaped pieces like this (not squares): http://www.fernando.com.ar/jquery-puzzle/
I have considered the possibility of doing this with a PHP library like Cairo or GD, but I have little to no experience with these libraries, and see no immediate solution for creating a clipping mask that scales dynamically for different-sized images.
I'm looking for guidance/tips on which server-side programming language to use to accomplish this task, and preferably an approach to this problem.
You can create an image using GD with the size of the puzzle piece, and then copy the full image onto it with the right cropping to get the right part of the image.
Then you can just dynamically color in every part of the piece you want to remove with a distinct color (e.g. #0f0) and then use imagecolorallocatealpha to make that color transparent. Do it for each piece and you have your server-side image pieces.
However, if I were you I would create the clipping mask of each puzzle piece in advance in the distinct color. That would make two images per connection (one with the "circle" connector sticking out and one where this circle connector fits in). That way you can just copy these masks onto the image to create nice edges quickly.
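The question is about PHP, but the pre-made-mask idea itself is language-agnostic; here is the same technique sketched in Java AWT purely for illustration (AlphaComposite.DstIn keeps the photo only where the mask is opaque):

import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Scale the photo to the piece size, then keep only the pixels
// where the pre-drawn puzzle-piece mask is opaque.
static BufferedImage clipToPiece(BufferedImage photo, BufferedImage mask, int w, int h) {
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
    Graphics2D g = out.createGraphics();
    g.drawImage(photo, 0, 0, w, h, null);  // the source image, scaled to the piece size
    g.setComposite(AlphaComposite.DstIn);  // destination-in: the mask's alpha clips the photo
    g.drawImage(mask, 0, 0, w, h, null);   // the mask, rescaled to any piece size
    g.dispose();
    return out;
}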
GD is quite complicated; I've heard very good things about ImageMagick, for which there is a PHP extension and lots of documentation on php.net. However, not all web servers have it installed by default.
http://www.php.net/manual/en/book.imagick.php
If you choose to do it using PHP with GD then the code here may help:
http://php.amnuts.com/index.php?do=view&id=15&file=class.imagemask.php
Essentially what you need to do with GD is to start with a mask at a particular size and then use the imagecopyresampled function to copy the mask image resource to a larger or smaller size. To see what I mean, check out the _getMaskImage method of the class shown at the URL above. A working example of the output can be seen at:
http://php.amnuts.com/demos/image-mask/
The problem with doing it via GD, as far as I can tell, is that you need to work a pixel at a time if you want to achieve varying opacity levels, so processing a large image could take a few seconds. With ImageMagick this may not be the case.

OpenGL ES 1.1/2.0 shaders to compare images on Android

I'm developing software that compares images, and I need it to be fast! Currently I compare them using plain C, but it's too slow.
I want to compare them using shaders and a couple of GL surfaces (textures) - using C rather than Java, though that doesn't change the situation much - and get back a list of the changed parts, but I really don't know where to start.
Basically I want to do something like using SIMD NEON instructions to compare pixel colors and check for changes (well, I only need to check the first color component of each pixel, e.g. red; these are photos, so it's unrealistic that it wouldn't change), but instead of NEON instructions I want to use pixel shaders to do the comparison and get the list of changed parts back.
Moreover, if possible, I'd like to run the comparison in parallel on the same image by splitting it into blocks :)
Can someone give me a hint?
Note: I know that I can't output a list of things directly, but using a third texture as output is fine for me (if I put two ushorts on the texture indicating x and y I'm OK, with a uint at the end of the texture reporting the number of changed pixels).
OpenGL ES 1.1 doesn't have shaders, and the best route I can think of for what you want to do ends with a 50% reduction in colour precision. The issues are:
without extensions there's additive blending, but not subtractive. No problem, just upload the second of your textures with all colour values inverted.
OpenGL clamps output colours to the range [0, 1], and without extensions you're limited to one byte per channel. So you'd need to upload textures with 7-bit colour channels to ensure you get correct results within the 8 bits coming back.
Shaders would allow a slightly circuitous route around that, because you can add or subtract or do whatever you want, and can split up the results. If you're sending two three-channel 24-bit images in to get one four-channel 32-bit image out, there's obviously enough space to fit 9 bits per source channel, even though you'll have to divide the data oddly and reconstruct it later.
In practice you're going to pay quite a lot for uploading and downloading images from the GPU, so NEON might be a better choice not just to avoid packing peculiarities. Assuming the Android kit supplies the same compiler intrinsics as the iPhone kit (likely, since they'll both include GCC), this page has a bit of an introduction showing how to convert an image to greyscale. So it's not exactly what you're looking for, but it's image processing in C using NEON so it should be a good start.
In both cases you're likely to end up with an image of the differences, rather than a simple count and list. A count is a concurrent operation, whichever way you think about it, so it isn't really something you'd do in GL or via NEON. You'd need to inspect the final image to work it out.

Appending to an Image File

I have written a program that takes a 'photo' and, for every pixel, inserts an image from a range of other photos. The image chosen is the photo whose average colour is closest to the original pixel from the photograph.
I have done this by first averaging the RGB values of every pixel in each 'stock' image and then converting that to CIE LAB, so I could calculate how 'close' it is to the pixel in question in terms of human perception of the colour.
I have then compiled an image where each pixel in the original 'photo' has been replaced with the 'closest' stock image.
It works nicely and the effect is good. However, the stock image size is 300x300 pixels, and even with the virtual machine flags "-Xms2048m -Xmx2048m" (which, yes, I know is ridiculous), on a 555x540 image I can only draw the stock images scaled down to 50px before I get an out-of-memory error.
So basically I am trying to think of solutions. Firstly, I think the image effect itself may be improved by averaging every four pixels (a 2x2 square) of the original image into a single pixel and then replacing this pixel with an image; that way the small photos will be more visible in the final print. This should also allow me to draw the stock images at a greater size. Does anyone have any experience in this sort of image manipulation? If so, what tricks have you discovered to produce a nice image?
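For what it's worth, the 2x2 averaging described above is straightforward; a minimal sketch in plain Java (assuming even dimensions for brevity):

import java.awt.image.BufferedImage;

// Average each 2x2 block of the source into one output pixel,
// halving both dimensions.
static BufferedImage average2x2(BufferedImage src) {
    BufferedImage out = new BufferedImage(src.getWidth() / 2, src.getHeight() / 2,
            BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < out.getHeight(); y++) {
        for (int x = 0; x < out.getWidth(); x++) {
            int r = 0, g = 0, b = 0;
            for (int dy = 0; dy < 2; dy++) {
                for (int dx = 0; dx < 2; dx++) {
                    int rgb = src.getRGB(2 * x + dx, 2 * y + dy);
                    r += (rgb >> 16) & 0xFF;
                    g += (rgb >> 8) & 0xFF;
                    b += rgb & 0xFF;
                }
            }
            out.setRGB(x, y, ((r / 4) << 16) | ((g / 4) << 8) | (b / 4));
        }
    }
    return out;
}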
Ultimately I think the way to avoid the memory errors would be to repeatedly save the image to disk, appending the next line of images to the file whilst continually removing the already-rendered images from memory. How can this be done? Is it similar to appending to a normal file?
Any help in this last matter would be greatly appreciated.
Thanks,
Alex
I suggest looking into the Java Advanced Imaging (JAI) API. You're probably using BufferedImage right now, which does keep everything in memory: source images as well as output images. This is known as "immediate mode" processing. When you call a method to resize the image, it happens immediately. As a result, you're still keeping the stock images in memory.
With JAI, there are two benefits you can take advantage of.
Deferred mode processing.
Tile computation.
Deferred mode means that the output images are not computed right when you call methods on the images. Instead, a call to resize an image creates a small "operator" object that can do the resizing later. This lets you construct chains, trees, or pipelines of operations. So, your work would build a tree of operations like "crop, resize, composite" for each stock image. The nice part is that the operations are just command objects so you aren't consuming all the memory while you build up your commands.
This API is pull-based. It defers computation until some output action pulls pixels from the operators. This saves time and memory by avoiding needless pixel operations.
For example, suppose you need an output image that is 2048x2048 pixels, scaled up from a 512x512 crop out of a source image that's 1600x512 pixels. Obviously, it doesn't make sense to scale up the entire 1600x512 source image, just to throw away 2/3 of the pixels. Instead, the scaling operator will have a "region of interest" (ROI) based on its output dimensions. The scaling operator projects the ROI onto the source image and only computes those pixels.
The commands must eventually get evaluated. This happens in a few situations, mostly relating to output of the final image. So, asking for a BufferedImage to display the output on the screen will force all the commands to evaluate. Similarly, writing the output image to disk will force evaluation.
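As a rough sketch of the deferred style (the "fileload", "scale" and "filestore" operation names come from the standard JAI operation registry; file names and scale factors are assumed):

import java.awt.image.renderable.ParameterBlock;
import javax.media.jai.Interpolation;
import javax.media.jai.JAI;
import javax.media.jai.RenderedOp;

public class DeferredScale {
    public static void main(String[] args) {
        // Nothing is computed here: "fileload" and "scale" only build operator nodes.
        RenderedOp stock = JAI.create("fileload", "stock.jpg");
        ParameterBlock pb = new ParameterBlock();
        pb.addSource(stock);
        pb.add(50f / stock.getWidth());   // x scale factor
        pb.add(50f / stock.getHeight());  // y scale factor
        pb.add(0f).add(0f);               // x/y translation
        pb.add(Interpolation.getInstance(Interpolation.INTERP_BILINEAR));
        RenderedOp scaled = JAI.create("scale", pb);

        // Pixels are pulled, tile by tile, only at this point.
        JAI.create("filestore", scaled, "out.jpg", "JPEG");
    }
}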
In some cases, you can keep the second benefit of JAI, which is tile based rendering. Whereas BufferedImage does all its work right away, across all pixels, tile rendering just operates on rectangular sections of the image at a time.
Using the example from before, the 2048x2048 output image will get broken into tiles. Suppose these are 256x256; then the entire image gets broken into 64 tiles. The JAI operator objects know how to work a tile at a time. So, scaling the 512x512 section of the source image really happens 64 times, on 64x64 source pixels at a time.
Computing a tile at a time means looping across the tiles, which would seem to take more time. However, two things work in your favor when doing tile computation. First, tiles can be evaluated on multiple threads concurrently. Second, the transient memory usage is much, much lower than immediate mode computation.
All of which is a long-winded explanation for why you want to use JAI for this type of image processing.
A couple of notes and caveats:
You can defeat tile based rendering without realizing it. Anywhere you've got a BufferedImage in the workstream, it cannot act as a tile source or sink.
If you render to disk using the JAI or JAI Image I/O operators for JPEG, then you're in good shape. If you try to use the JDK's built-in image classes, you'll need all the memory. (Basically, avoid mixing the two types of image manipulation. Immediate mode and deferred mode don't mix well.)
All the fancy stuff with ROIs, tiles, and deferred mode is transparent to the program. You just make API calls on the JAI class. You only deal with the machinery if you need more control over things like tile sizes, caching, and concurrency.
Here's a suggestion that might be useful:
Try segregating the two main tasks into individual programs. Your first task is to decide which images go where, and that can be a simple mapping from coordinates to filenames, which can be represented as lines of text:
0,0,image123.jpg
0,1,image542.jpg
.....
After that task is done (and it sounds like you have it well handled), then you can have a separate program handle the compilation.
This compilation could be done by appending to an image, but you probably don't want to mess around with file formats yourself. It's better to let your programming environment do it by using a Java Image object of some sort. A Java array holds at most about 2x10^9 elements, so with one element per pixel the maximum height and width will be around sqrt(2x10^9). From this number, dividing by the number of images you have across and down, you will get the overall pixels per sub-image allowed, and can paint them into the appropriate places.
Every time you 'append', are you perhaps implicitly creating a new object with one more pixel to replace the old one (i.e., a parallel to the classic problem of repeatedly appending to a String instead of using a StringBuilder)?
If you post the portion of your code that does the storing and appending, someone will probably help you find an efficient way of recoding it.
