Java GifSequenceWriter GIF quality

I have created a Java app that uses the GifSequenceWriter class to build an animated GIF from a PNG sequence, but I noticed that the GIF quality is pretty bad. Is there an option to set the maximum quality, i.e. the full 256 colors, for the output GIF? By default it seems to use only about 128 colors at best (or even fewer!), which is almost unusable: shadows show terrible color banding, no fine detail is preserved, and so on.
I tested the same PNG sequence in a separate GIF-making tool (a small Windows application) and its output is, compared to the output from GifSequenceWriter, very nice: shadows keep all their fine detail and the color banding is almost invisible. So I guess there should be some IIOMetadataNode .setAttribute() value inside the GifSequenceWriter class that deals with output quality, but I was not able to identify it myself...
Or maybe it is some attribute on imageWriteParam? I searched a bit more, and it seems that the actual "quality" parameter - at least for JPEG images - can be set via imageWriteParam.setCompressionQuality(float someFloatValueHere), but when I try that it throws an exception, so I guess there is no compression-quality parameter for GIF... or is there?
Does anyone have any idea how to solve this?
BTW, yes, I have searched the internet - no luck (I cannot even find a list of the available .setAttribute() options/values anywhere; strange, isn't it?)
EDIT
So after reading the link with the attribute values provided by #Marco13, I have implemented this code in GifSequenceWriter:
IIOMetadataNode localColorTableNode = getNode(root, "LocalColorTable");
localColorTableNode.setAttribute("sizeOfLocalColorTable", "256");
localColorTableNode.setAttribute("sortFlag", "FALSE");
IIOMetadataNode colorTableEntryNode = getNode(localColorTableNode, "ColorTableEntry");
// setAttribute() only accepts Strings, so the int values must be converted
colorTableEntryNode.setAttribute("index", String.valueOf(someIntA));
colorTableEntryNode.setAttribute("red",   String.valueOf(someIntB));
colorTableEntryNode.setAttribute("green", String.valueOf(someIntC));
colorTableEntryNode.setAttribute("blue",  String.valueOf(someIntD));
But I have no clue what int values someIntA/someIntB/someIntC/someIntD should be - any ideas? Am I going in the right direction at all? I feel I may be close (just maybe!), but I need a small kick... maybe I am misunderstanding something here and need an explanation... or just the right values straight away? :)
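For reference, here is a rough sketch of how those values could be filled in - assuming each frame has been quantized to an indexed image first, so that it carries an IndexColorModel to copy the palette from (the variable names are illustrative):

import java.awt.image.BufferedImage;
import java.awt.image.IndexColorModel;
import javax.imageio.metadata.IIOMetadataNode;

// Sketch: copy the palette of an indexed frame into the LocalColorTable
// metadata node, one ColorTableEntry per palette slot. Assumes the frame
// was converted to BufferedImage.TYPE_BYTE_INDEXED beforehand.
IndexColorModel icm = (IndexColorModel) indexedFrame.getColorModel();
int size = icm.getMapSize(); // at most 256 for a GIF; the metadata format
                             // only accepts powers of two (2, 4, ..., 256)

IIOMetadataNode localColorTable = new IIOMetadataNode("LocalColorTable");
localColorTable.setAttribute("sizeOfLocalColorTable", String.valueOf(size));
localColorTable.setAttribute("sortFlag", "FALSE");

for (int i = 0; i < size; i++) {
    IIOMetadataNode entry = new IIOMetadataNode("ColorTableEntry");
    entry.setAttribute("index", String.valueOf(i));
    entry.setAttribute("red",   String.valueOf(icm.getRed(i)));
    entry.setAttribute("green", String.valueOf(icm.getGreen(i)));
    entry.setAttribute("blue",  String.valueOf(icm.getBlue(i)));
    localColorTable.appendChild(entry);
}

That said, the banding usually comes from the quantization step itself rather than from the metadata: if each PNG frame is reduced to a well-chosen 256-color palette before it is handed to the writer, the writer can keep that palette instead of falling back to a generic one.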

Related

Downscale image for MNIST

I'm trying to solve the MNIST classification problem on Android devices. I already have a trained model; now I want to be able to recognize a single digit in a photo.
After taking a photo, I do some pre-processing before passing the image to the model.
Here's an example of original image:
After that I convert it to black-and-white only, so it looks like this:
Please don't pay attention to the difference in dimensions - it was introduced by the way I make screenshots; in the app both images have the same size.
After converting it to black-and-white I extract the digit's blob, downscale it to 20x20 (respecting the aspect ratio) and then add padding around it to fit the MNIST 28x28 size. The final result is the following:
Notice that I upscaled the image to show the problem, which is the following: after downscaling, a lot of useful information gets lost; sometimes whole edges of the digit are gone. Is there any way to avoid this? Maybe I can somehow make the white lines thicker before downscaling?
P.S. I use the Catalano Framework for image processing.
EDIT: After applying the filter suggested in the answer, here's what I get:
I'm not sure about the framework you've mentioned, but one thing that can help here is to apply some morphological operations to the original image before going for the MNIST-style normalization.
Namely, one can do an erosion as follows (I'm writing the approach in Python; there should be analogues in the framework you use, as the operations are pretty standard).
import numpy as np
import cv2

xx = cv2.imread('6.jpg')              # your original image of the digit 6
kernel = np.ones((20, 20), np.uint8)  # 20x20 structuring element for the erosion
erosion = cv2.erode(xx, kernel, iterations=2)
cv2.imwrite('6A.jpg', erosion)        # this will be used as a replacement for the original image
This will produce something that looks like this. Then, if you binarize the new image (say, threshold at gray intensity 150) and do the resize followed by padding, you should get something like this one, which is more robust.
Note also that you need to center the image (with respect to its center of mass) at the very last stage, before feeding it to any classifier.
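Since the question is about Android (so Java), here is a rough plain-Java sketch of that centering step, assuming white strokes (high values) on a black background in a non-empty TYPE_BYTE_GRAY image:

import java.awt.image.BufferedImage;

// Sketch: shift a 28x28 grayscale digit so its center of mass lands in the
// middle of the frame, the way MNIST digits are prepared.
static BufferedImage centerByMass(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    double sumX = 0, sumY = 0, mass = 0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int v = img.getRaster().getSample(x, y, 0);
            sumX += x * v;
            sumY += y * v;
            mass += v;
        }
    }
    // offset that moves the center of mass to the image center
    int dx = (int) Math.round(w / 2.0 - sumX / mass);
    int dy = (int) Math.round(h / 2.0 - sumY / mass);
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_BYTE_GRAY);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int sx = x - dx, sy = y - dy; // source pixel for this output pixel
            if (sx >= 0 && sx < w && sy >= 0 && sy < h) {
                out.getRaster().setSample(x, y, 0, img.getRaster().getSample(sx, sy, 0));
            }
        }
    }
    return out;
}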
The end result, by MNIST's standards, is as follows (physical dimensions 28x28).

How to make xml file for side faces in face detection

I have been working on face detection, and I am able to detect frontal faces, like everyone else, using the Haar cascade XML files. My next task is to detect side (non-frontal) faces. I am working in OpenCV. The profileface XML cannot detect side faces with accuracy, so I feel the only option left is to make my own XML file that can detect side faces. Can anyone help me out?
Thanks
Did you try combining the frontal and profile face detection?
I was working on this as well, and the result was actually pretty good.
You also need to specify the min and max frame size as accurately as possible.
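For illustration, here is a rough sketch of that combination using recent OpenCV Java bindings - the cascade paths, the input frame, and the face-size bounds are placeholders to tune for your setup:

import org.opencv.core.Mat;
import org.opencv.core.MatOfRect;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.objdetect.CascadeClassifier;

// Sketch: run the frontal and profile cascades over the same grayscale
// frame and merge the hits afterwards.
CascadeClassifier frontal = new CascadeClassifier("haarcascade_frontalface_default.xml");
CascadeClassifier profile = new CascadeClassifier("haarcascade_profileface.xml");

Mat gray = Imgcodecs.imread("frame.jpg", Imgcodecs.IMREAD_GRAYSCALE);
Size minFace = new Size(60, 60);    // smallest face you expect in the frame
Size maxFace = new Size(400, 400);  // largest face you expect in the frame

MatOfRect frontalHits = new MatOfRect();
MatOfRect profileHits = new MatOfRect();
frontal.detectMultiScale(gray, frontalHits, 1.1, 3, 0, minFace, maxFace);
profile.detectMultiScale(gray, profileHits, 1.1, 3, 0, minFace, maxFace);
// ...merge frontalHits.toArray() and profileHits.toArray(), dropping
// rectangles that overlap heavily

Note that, as far as I know, the bundled profile cascade is trained on one side only, so running it a second time on a horizontally flipped frame catches faces turned the other way.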
Unfortunately I did not find a side-face Haar cascade, so it looks like you need to train your own.
If you just want to test this out, you don't actually need that many pictures of faces.
You do need a lot of negatives, because OpenCV provides a function to generate positive pictures from a single image of the face plus a bunch of negative pictures.
To get the negative pictures, you could simply take a video of the backgrounds where you want to detect the faces and then extract all the frames from the video file; from only 3 minutes of video you get over 2000 images.
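For instance, a minimal sketch of that extraction with the newer OpenCV Java bindings (the file names are placeholders):

import org.opencv.core.Mat;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.videoio.VideoCapture;

// Sketch: dump every frame of a background video to disk, to be used as
// negative samples for the cascade training.
VideoCapture cap = new VideoCapture("background.mp4");
Mat frame = new Mat();
int i = 0;
while (cap.read(frame)) {
    Imgcodecs.imwrite(String.format("negatives/%05d.jpg", i++), frame);
}
cap.release();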
For the training I would recommend keeping the size of all pictures very small, because otherwise it will take practically forever to train the cascade file.
Maybe you can look at OpenCV's Cascade Classifier Training guide. I didn't try it myself, but I offer it as a reference:
http://docs.opencv.org/2.4/doc/user_guide/ug_traincascade.html
And there is some Q&A about the training here:
http://www.computer-vision-software.com/blog/2009/11/faq-opencv-haartraining/

Best resolution resizing image using ImgScalr

I am very new to the ImgScalr API. I need to resize my images for different views, one of them being a mobile view and the second a thumbnail view.
I have made use of the resize method, but have a doubt: out of the multiple options available, which variant of the resize method is best for resizing the image while keeping the proper aspect ratio (so that the image doesn't become blurred)?
One thing I noticed was that every resize method takes a targetSize argument. How does specifying this field make sure that the aspect ratio of the image is not affected?
What should the ideal arguments to the resize method be, given that I need to generate a roughly 2 KB thumbnail view from an input image of around 2 MB?
I am a bit confused because of the lack of documentation and examples.
imgscalr author here - I definitely understand the confusion. The code base itself (if you happen to glance at it on GitHub) is almost 50% comments, in case you are curious how the library works, but from a usage perspective you are right: I didn't put a lot of time into examples.
Hopefully I can hit some highlights quickly for you...
Aspect Ratios
A core design tenet of imgscalr is to always honor the aspect ratio - so if you pass in 200x1 (some ridiculous dimension, as an example) it will calculate the minimum dimensions that fit within those 'target' dimensions.
This is handy if you always want your thumbnails inside a certain box, like 200x200 - just pass that in and imgscalr will determine a final width/height that won't be bigger than that (possibly something like 200x127 or 78x200).
Quality
By default the library takes what is called a 'balanced' approach to quality: it considers the delta in dimension change, as well as whether you are scaling up or down, and chooses the most appropriate trade-off (speed vs. quality).
You can force it to always scale as quickly as possible (a good idea for scaling-up operations) or force it to always use high or ultra quality (a good idea if you want really crisp thumbnails, or for other operations that drastically reduce the image resolution where you want the result to still look decent).
On top of that, you can also ask the library to apply additional filtering to the image (called Image Ops). I ship some handy defaults out of the box, like the anti-aliasing one, for when you get jagged edges on a lot of the source material you are scaling (common when scaling screenshots of desktops and other things with straight diagonal lines).
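Putting those together, a typical 'crisp thumbnail' call looks something like this (a sketch; src is assumed to be a BufferedImage you loaded elsewhere):

import java.awt.image.BufferedImage;
import org.imgscalr.Scalr;

// Sketch: highest-quality fit inside a 200x200 box, honoring the aspect
// ratio, with the bundled anti-aliasing op applied afterwards.
BufferedImage thumbnail = Scalr.resize(src,
        Scalr.Method.ULTRA_QUALITY,  // or QUALITY / BALANCED / SPEED
        Scalr.Mode.AUTOMATIC,        // honor the aspect ratio within 200x200
        200, 200,
        Scalr.OP_ANTIALIAS);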
Overall
The library is meant to be as simple as possible to use, something no harder than:
BufferedImage thumbnail = Scalr.resize(src, 128);
will get you started... all the other operations around quality, fitting, modes, ops, etc. are just additional things you can choose to use if you decide the result isn't quite what you wanted.
Hope that helps!

OpenGL ES 1.1/2.0 shaders to compare images on Android

I'm developing a piece of software that compares images, and I need it to be fast! Currently I compare them using plain C, but it's too slow.
I want to compare them using shaders and a couple of GL surfaces (textures), using C rather than Java - not that this changes the situation much - and get back a list of the changed parts, but I really don't know where to start.
Basically I want to do something like using SIMD NEON instructions to compare pixel colors and check for changes (well, I only need to check the first color component of each pixel, e.g. only red... these are photos, so it is unrealistic that it wouldn't change), but instead of NEON instructions I want to use pixel shaders to do the comparison and get the list of changed parts back.
Moreover, if possible, I want to run the comparison in parallel on the same image by splitting it into blocks :)
Can anyone give me a hint?
Note: I know that I can't output a list of results directly, but writing to a third texture as output is fine for me (if I can put two ushorts into the texture that indicate x and y, I'm OK, with a uint at the end of the texture reporting the number of changed pixels).
OpenGL ES 1.1 doesn't have shaders, and the best route I can think of for what you want to do there ends with a 50% reduction in colour precision. The issues are:
without extensions there is additive blending, but not subtractive. No problem: just upload the second of your textures with all colour values inverted.
OpenGL clamps output colours to the range [0, 1], and without extensions you're limited to one byte per channel. So you'd need to upload textures with 7-bit colour channels to ensure the correct results fit within the 8 bits coming back.
Shaders allow a slightly circuitous route around that, because you can add or subtract or do whatever you want, and you can split up the results. If you send two three-channel 24-bit images in to get a four-channel 32-bit image out, there is obviously enough space to fit the 9 bits per source channel that a signed difference of two 8-bit values needs, even though you'll have to divide the data oddly and reconstruct it later.
In practice you're going to pay quite a lot for uploading images to and downloading them from the GPU, so NEON might be a better choice, and not just to avoid the packing peculiarities. Assuming the Android kit supplies the same compiler intrinsics as the iPhone kit (likely, since they'll both include GCC), this page has a bit of an introduction showing how to convert an image to greyscale. It's not exactly what you're looking for, but it is image processing in C using NEON, so it should be a good start.
In both cases you're likely to end up with an image of the differences, rather than a simple count and list. A count is a reduction operation, whatever way you think about it, so it isn't really something you'd do in GL or via NEON; you'd need to inspect the final image to work it out.
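As a starting point for the shader route on ES 2.0, the core is just a fragment shader that samples both photos and writes out the difference; here is a sketch of the source, held as a Java string to feed to GLES20.glShaderSource (the program, texture, and framebuffer setup is omitted):

// Sketch: per-pixel difference of the red channels of two textures,
// rendered into an offscreen framebuffer that you read back afterwards.
String DIFF_FRAGMENT_SHADER =
        "precision mediump float;\n" +
        "uniform sampler2D uTexA;   // first photo\n" +
        "uniform sampler2D uTexB;   // second photo\n" +
        "varying vec2 vTexCoord;\n" +
        "void main() {\n" +
        "    float a = texture2D(uTexA, vTexCoord).r;\n" +
        "    float b = texture2D(uTexB, vTexCoord).r;\n" +
        "    gl_FragColor = vec4(abs(a - b)); // changed pixels light up\n" +
        "}\n";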

Android fuzzy / faded fonts possible?

So I am developing a very simple app, mostly for personal use, and am looking for a simple solution to a simple problem.
In its simplest form, I am looking for a way to display a line of text with just one or two words blurred out. Basically I am looking to blur text beyond readability while still hinting at what is hidden - a kind of knowledge/memory app to help memorize definitions by prompting with a few key words.
I am having trouble finding a simple way to accomplish this. Am I just missing an attribute for blurring text?
I have thought about:
overriding, say, the TextView's onDraw, but that seems like overkill, and I am unsure whether there are methods available to easily blur text.
using toHtml and trying out the new CSS3 blur effects, but I don't think that is a reasonable solution, and I am not sure the Android platform supports all of CSS3, if any of it.
the simplest and most desirable solution, in my book, was to find a font file (TTF, OTF, etc.), derived from a common font, that is already blurred as I described, and alternate it with the non-blurred version of that font to achieve the desired effect.
making the described font myself, but that just plain requires too much time on my part, and the outcome is not necessarily good :)
I have thought about some alternative ways to simulate this effect, but they all result in fading the text, which is undesirable, since I want some visual prompt indicating the length of the obscured text.
Any ideas? It's been years since I developed in Java, and I am unsure what is available and what the Android OS supports.
I haven't looked into using these properties for only part of the text, but TextView has some possibly useful properties related to text shadows. Using something like the following XML attributes, you could hide the actual text and show only a blurred shadow.
android:textColor - #0000 (fully transparent so that the crisp text is not shown)
android:shadowColor - #FFFF (or whatever color you want to appear)
android:shadowDx - 0 (the shadow is in the same horizontal position as the text)
android:shadowDy - 0 (the shadow is in the same vertical position as the text)
android:shadowRadius - Depends on how much you want to blur. A very small non-zero value, such as 0.001, will be sharp. Larger values blur more, and at some point the shadow becomes illegible.
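For what it's worth, the same trick can also be applied from code; a sketch, assuming a TextView with a hypothetical id, inside an Activity:

import android.graphics.Color;
import android.widget.TextView;

// Sketch: hide the glyphs themselves and show only their blurred shadow.
TextView tv = (TextView) findViewById(R.id.secretWord); // hypothetical id
tv.setTextColor(Color.TRANSPARENT);  // the crisp text becomes invisible
tv.setShadowLayer(
        8f,            // blur radius: larger values are less legible
        0f, 0f,        // zero offset keeps the shadow exactly on the text
        Color.BLACK);  // the color that actually shows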
