I found that there is a variable, int[] stateCount, in the FinderPatternFinder class that helps check for possible finder patterns in a QR code. I am thinking that changing this variable could help detect/locate the finder patterns.
Any ideas on how to detect/decode a color inverted QR code in Java with ZXing?
I'm not a Java programmer, but since I have managed to alter the zxing source to scan inverted codes in my iOS project, maybe my implementation will help you.
Invert pixels - zxing
Not sure what you mean by color inverted. Do you mean swapping light and dark? In theory, you should be able to take an image, extract the luminance, and invert it, e.g., 255 - pixel_luminance. Note that the quiet zone (the surrounding white) needs to be inverted as well, i.e., into surrounding black. Even so, it's possible this won't work: the zxing heuristics are not always symmetric. You can give it a shot.
Note that zxing only extracts luminance. Two colors of very different hues but the same luminance are indistinguishable to the detectors/decoders.
In any case, mucking with stateCount is probably not going to help. At that point, the image is purely black and white, not even grayscale. You want to take any variations/distortions in your image into account ahead of this and leave this code untouched.
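For what it's worth, here's a minimal sketch of the invert-the-luminance approach in plain Java. It assumes the zxing core and javase jars are on the classpath, and the file name inverted-qr.png is just a placeholder:

    import com.google.zxing.BinaryBitmap;
    import com.google.zxing.LuminanceSource;
    import com.google.zxing.MultiFormatReader;
    import com.google.zxing.Result;
    import com.google.zxing.client.j2se.BufferedImageLuminanceSource;
    import com.google.zxing.common.HybridBinarizer;

    import javax.imageio.ImageIO;
    import java.awt.image.BufferedImage;
    import java.io.File;

    public class InvertedQrDemo {
        public static void main(String[] args) throws Exception {
            BufferedImage image = ImageIO.read(new File("inverted-qr.png"));

            // Invert every pixel (255 - value per channel) before handing the
            // image to zxing; note the quiet zone gets inverted along with it.
            BufferedImage inverted = new BufferedImage(
                    image.getWidth(), image.getHeight(), BufferedImage.TYPE_INT_RGB);
            for (int y = 0; y < image.getHeight(); y++) {
                for (int x = 0; x < image.getWidth(); x++) {
                    int rgb = image.getRGB(x, y);
                    int r = 255 - ((rgb >> 16) & 0xFF);
                    int g = 255 - ((rgb >> 8) & 0xFF);
                    int b = 255 - (rgb & 0xFF);
                    inverted.setRGB(x, y, (r << 16) | (g << 8) | b);
                }
            }

            LuminanceSource source = new BufferedImageLuminanceSource(inverted);
            BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(source));
            Result result = new MultiFormatReader().decode(bitmap);
            System.out.println(result.getText());
        }
    }

Newer zxing versions also expose LuminanceSource.invert(), which wraps the source without copying pixels; if your version has it, that's the cleaner route.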
I have a big image like this one:
Can I make it so in Android that if somebody clicks on "Pest", only Pest changes its background color?
My goal is to have a separate onClickListener for each county so that each can change its own background.
Can I achieve this in Android?
Thank you!
Firstly, this seems more like a graphic design issue.
Secondly, it is definitely doable. I would suggest using some form of Photoshop to split the bigger image into sub-images. Once you have your 10 images (a random number, but I suppose you get the idea) that together form the bigger image (that should 100% be achievable with Photoshop-like software), you can apply an onClickListener to each image. This will produce the effect you are looking for.
I believe you know how to do that and might not need the code (as I have seen in your comments). All you need is that Photoshop-like software, which will do all the work for you.
Meanwhile, I will quickly search for the best software to crop those images. If you have Photoshop, that should prove quite good.
Maybe this (there are tons out there):
https://graphicdesign.stackexchange.com/questions/3205/i-need-to-slice-one-image-into-50-different-images-automatically
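If the listener wiring turns out to be the part you need after all, here's a hypothetical sketch. It assumes each county slice ends up as its own ImageView stacked in a FrameLayout over the base map; every id and drawable name below (activity_map, county_pest, county_pest_red) is made up:

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.ImageView;

    public class MapActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_map);

            // One listener per county slice; repeat for the other counties.
            final ImageView pest = (ImageView) findViewById(R.id.county_pest);
            pest.setOnClickListener(v ->
                    // Swap in the pre-recolored copy exported from the editor.
                    pest.setImageResource(R.drawable.county_pest_red));
        }
    }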
I'm trying to optimise a rendering engine in Java to not draw objects which are covered up by 'solid' child objects drawn in front of them, i.e. the parent is occluded by its children.
I want to know whether an arbitrary BufferedImage I load from a file contains any transparent pixels, as this affects my occlusion testing.
I've found I can use BufferedImage.getColorModel().hasAlpha() to find if the image supports alpha, but in the case that it does, it doesn't tell me if it definitely contains non-opaque pixels.
I know I could loop over the pixel data & test each one's alpha value & return as soon as I discover a non-opaque pixel, but I was wondering if there's already something native I could use, a flag that is set internally perhaps? Or something a little less intensive than iterating through pixels.
Any input appreciated, thanks.
Unfortunately, you will have to loop through each pixel (until you find a transparent pixel) to be sure.
If you don't need to be 100% sure, you could of course test only some pixels, where you think transparency is most likely to occur.
By looking at various images, I think you'll find that most images that have transparent parts contain transparency along the edges. This optimization will help in many common cases.
Unfortunately, I don't think there's an optimization for one of the most common cases: the one where the color model allows transparency but there really are no transparent pixels... In that case you really do need to test every pixel to know for sure.
Accessing the alpha values in its "native representation" (through the Raster/DataBuffer/SampleModel classes) is going to be faster than using BufferedImage.getRGB(x, y) and mask out the alpha values.
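To illustrate, here's a sketch of such a scan; it assumes 8-bit alpha samples (the common case) and falls back to getRGB when the image has no separate alpha raster:

    import java.awt.image.BufferedImage;
    import java.awt.image.Raster;

    public final class AlphaCheck {
        // Returns true if any pixel is non-opaque. Reads alpha through the
        // raster instead of getRGB(x, y), avoiding per-pixel color-model work.
        public static boolean hasTransparentPixel(BufferedImage image) {
            if (!image.getColorModel().hasAlpha()) {
                return false; // the color model cannot even store alpha
            }
            Raster alpha = image.getAlphaRaster();
            if (alpha == null) {
                // No separate alpha band (e.g. IndexColorModel); fall back
                // to the slower getRGB path and check the packed alpha byte.
                for (int y = 0; y < image.getHeight(); y++) {
                    for (int x = 0; x < image.getWidth(); x++) {
                        if ((image.getRGB(x, y) >>> 24) < 255) {
                            return true;
                        }
                    }
                }
                return false;
            }
            for (int y = 0; y < image.getHeight(); y++) {
                for (int x = 0; x < image.getWidth(); x++) {
                    if (alpha.getSample(x, y, 0) < 255) {
                        return true; // found a non-opaque pixel, stop early
                    }
                }
            }
            return false;
        }
    }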
I'm pretty sure you'll need to loop through each pixel and check its alpha value.
The best alternative I can offer is to write a custom method for reading the pixel data, i.e. your own Raster. Within this class, as you read the pixel data from the source file into the data buffer, you can check the alpha values as you go. Of course, this isn't much help if you're using a built-in image-reading class, and it involves a lot more effort.
I want to replace a color in an image. For example, I want to turn all blue colors to red without any distortion of the shape. When I try this by iterating over every pixel and swapping its color, the swapped area turns into a flat shape.
example1 input: http://www.tutorialwiz.com/tutorials/changing_color/images/original.jpg
example1 output: http://www.tutorialwiz.com/tutorials/changing_color/images/3.jpg
example2, input and output together: http://www.digital-photography-school.com/wp-content/uploads/2009/07/before-after.jpg
The usual way to do this is to use an RGBImageFilter. See the docs for FilteredImageSource for how to use such a filter.
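For illustration, here's a sketch of such a filter. It rotates blue-ish hues to red while keeping each pixel's saturation and brightness, so the shading (and therefore the shape) survives; the 0.5-0.75 hue window used for "blue" is an assumption you'd tune:

    import java.awt.Color;
    import java.awt.Image;
    import java.awt.Toolkit;
    import java.awt.image.FilteredImageSource;
    import java.awt.image.RGBImageFilter;

    public final class BlueToRedFilter extends RGBImageFilter {
        public BlueToRedFilter() {
            canFilterIndexColorModel = true; // may filter palette entries directly
        }

        @Override
        public int filterRGB(int x, int y, int rgb) {
            float[] hsb = Color.RGBtoHSB(
                    (rgb >> 16) & 0xFF, (rgb >> 8) & 0xFF, rgb & 0xFF, null);
            if (hsb[0] > 0.5f && hsb[0] < 0.75f) { // roughly blue hues
                // Move the hue to red but keep saturation and brightness,
                // so the original shading is preserved.
                int red = Color.HSBtoRGB(0f, hsb[1], hsb[2]);
                return (rgb & 0xFF000000) | (red & 0x00FFFFFF); // keep alpha
            }
            return rgb;
        }

        public static Image apply(Image source) {
            return Toolkit.getDefaultToolkit().createImage(
                    new FilteredImageSource(source.getSource(), new BlueToRedFilter()));
        }
    }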
The problem with what you are asking is that a strict color replacement will flatten the shape of the image. What you need to do is get a little information on what the shape is, i.e. find boundaries and shading along the boundaries. My suggestion to you is:
Use edge detection to separate out blobs.
Find a blob that fits your demands, i.e. has a tolerant likelihood of being [color]-ish.
Identify the blobs and any filters applied to them.
Recolor the blob and reshade it.
IMO: That is the only way you're going to get the result you're looking for.
Some notes: finding a blob may require some testing to determine how much tolerance you have for blobs and shapes [Connected Regions].
Another thing: the shading and relative coloring will require you to allow some deviation from the color you want.
I don't fully understand the question.
But from the example given, I see that the process of changing blue into red failed: instead of becoming red, it became gray. Is that gray what you mean by flat?
If so, it looks like a color channel error.
To change blue to red, we have to change (0, 0, 255) to (255, 0, 0).
I'm not sure if that is the problem; maybe a code snippet would help to evaluate what the real problem is.
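For reference, a plain red/blue channel swap would look like the sketch below; if your output comes out gray, the channels are probably being averaged or zeroed somewhere instead of swapped:

    import java.awt.image.BufferedImage;

    // Swap the red and blue channels in place, preserving per-pixel shading
    // (unlike replacing every blue-ish pixel with one flat constant color).
    static void swapRedBlue(BufferedImage image) {
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int argb = image.getRGB(x, y);
                int swapped = (argb & 0xFF00FF00)           // keep alpha + green
                        | ((argb & 0x000000FF) << 16)       // blue into red slot
                        | ((argb >> 16) & 0x000000FF);      // red into blue slot
                image.setRGB(x, y, swapped);
            }
        }
    }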
Here’s my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is a crate (or more than one) in front of the camera lens or not. The scene can change from “clear” to “there is a crate in front of the lens” and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the “empty” scene, and 2-3 images with one or more crates.
Do you know a straightforward way to tackle this task? I found OpenCV, but isn't this framework too bulky for such a simple task? I'm new to the field of computer vision. Is this generally a hard task, or is it simple and robust to detect whether there's an obstacle in front of the cam in live feeds? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black and white image, whereby edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of pixels in the image. The theory here is that a high frequency value in the histogram in or around one bucket is indicative of a vertical edge, which could be the edge of a crate.
You could also consider a second histogram to measure pixels on each row of the image.
Obviously this is a fairly simple approach and is highly dependent on "simple" input; i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
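To make the histogram step concrete, here's a small sketch; it assumes the edge detector produced a grayscale image where edges are dark, and the darkness threshold of 128 is an assumption to tune:

    import java.awt.image.BufferedImage;

    // Count dark pixels per column of an edge-detected grayscale image.
    // A tall spike in the result suggests a vertical edge, possibly the
    // side of a crate.
    static int[] columnHistogram(BufferedImage edges) {
        int[] counts = new int[edges.getWidth()];
        for (int x = 0; x < edges.getWidth(); x++) {
            for (int y = 0; y < edges.getHeight(); y++) {
                int gray = edges.getRGB(x, y) & 0xFF; // any channel of a gray pixel
                if (gray < 128) {
                    counts[x]++;
                }
            }
        }
        return counts;
    }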
You don't need a full-blown computer-vision library to detect whether there is a crate in front of the camera or not. You can just take a snapshot and make a color histogram (simple). To capture the snapshot, take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
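As a sketch of the color-histogram comparison (the 4x4x4 bucket size and the final decision threshold are assumptions to tune):

    import java.awt.image.BufferedImage;

    // Bucket each pixel's RGB into a coarse 4x4x4 histogram; comparing the
    // empty-scene histogram to the current one gives a cheap change signal.
    static int[] histogram(BufferedImage img) {
        int[] bins = new int[64];
        for (int y = 0; y < img.getHeight(); y++) {
            for (int x = 0; x < img.getWidth(); x++) {
                int rgb = img.getRGB(x, y);
                int r = ((rgb >> 16) & 0xFF) >> 6; // 0..3
                int g = ((rgb >> 8) & 0xFF) >> 6;
                int b = (rgb & 0xFF) >> 6;
                bins[(r << 4) | (g << 2) | b]++;
            }
        }
        return bins;
    }

    // Sum of per-bucket differences; compare against a tuned threshold.
    static long histogramDistance(int[] a, int[] b) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum;
    }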
Lots of variables here including any possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which OpenCV has and also Intel Performance Primitives have as well) to look for the outline of the shape of interest. If you then kinda know where the box will be, you can perhaps sum pixels in the region of interest. If the box can appear anywhere in the field of view, this is more challenging.
This is not something you should start in Java. When I had this kind of problem I would start with Matlab (or the OpenCV library) or something similar, see if the solution works there, and then port it to Java.
To answer your question: I did something similar by XOR-ing the 'reference' image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right mean a large difference) or just summing the remaining visible pixels and comparing them with a threshold. XOR is not really precise, but it is fast.
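Here's a Java sketch of that differencing idea; it uses per-channel absolute differences rather than a literal XOR (same purpose, slightly more robust to noise), and both thresholds are assumptions to tune:

    import java.awt.image.BufferedImage;

    // Compare the current frame against the empty-scene reference (same size
    // assumed) and flag a crate if enough pixels changed noticeably.
    static boolean crateLikelyPresent(BufferedImage reference, BufferedImage current) {
        int changed = 0;
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                int a = reference.getRGB(x, y);
                int b = current.getRGB(x, y);
                int diff = Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
                         + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
                         + Math.abs((a & 0xFF) - (b & 0xFF));
                if (diff > 3 * 32) {
                    changed++; // this pixel differs from the empty scene
                }
            }
        }
        // Flag a crate if more than 5% of the pixels changed.
        return changed > reference.getWidth() * reference.getHeight() * 0.05;
    }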
My point is, it took me 2 hours to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution didn't work, each additional algorithm (already available in Mat-/Scilab) would have taken another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just tools that don't matter, then drop them and use Scilab or some other Matlab clone - prototyping and fine-tuning will be much faster.
There are 2 parts involved in object detection. One is feature extraction; the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find algorithms to extract these features from your crate image, then compare those features against your training sample images.
I am looking for the best way to detect an image within another image. I have a small image and would like to find the location that it appears within a larger image - which will actually be screen captures. Conceptually, it is like a 'Where's Waldo?' sort of search in the larger image.
Are there any efficient/quick ways to accomplish this? Speed is more important than memory.
Edit:
The 'inner' image may not always have the same scale but will have the same rotation.
It is not safe to assume that the image will be perfectly contained within the other, pixel for pixel.
Wikipedia has an article on Template Matching, with sample code.
(While that page doesn't handle changed scales, it has links to other styles of matching, for example Scale invariant feature transform)
If rotation also had to be catered for, the Generalised Hough Transform can be used.
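To get a feel for what plain template matching does, here's a naive sum-of-absolute-differences sketch; it's O(W·H·w·h), so a real implementation would downscale or use the optimizations from that page:

    import java.awt.image.BufferedImage;

    // Slide the template over the screenshot and keep the offset with the
    // smallest sum of absolute per-channel differences.
    static int[] findTemplate(BufferedImage haystack, BufferedImage needle) {
        int bestX = -1, bestY = -1;
        long bestScore = Long.MAX_VALUE;
        for (int oy = 0; oy <= haystack.getHeight() - needle.getHeight(); oy++) {
            for (int ox = 0; ox <= haystack.getWidth() - needle.getWidth(); ox++) {
                long score = 0;
                // Bail out of this offset early once it can't beat the best.
                for (int y = 0; y < needle.getHeight() && score < bestScore; y++) {
                    for (int x = 0; x < needle.getWidth(); x++) {
                        int a = haystack.getRGB(ox + x, oy + y);
                        int b = needle.getRGB(x, y);
                        score += Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF))
                               + Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF))
                               + Math.abs((a & 0xFF) - (b & 0xFF));
                    }
                }
                if (score < bestScore) {
                    bestScore = score;
                    bestX = ox;
                    bestY = oy;
                }
            }
        }
        return new int[] { bestX, bestY }; // top-left corner of the best match
    }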
You can treat this as a substring problem, where the characters in the alphabet are pixels and your string is the image. You would also need to use a special character, in a similar vein to a line break, to denote the image boundary.
The algorithm you want is on wikipedia: http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
Update: If you cannot assume that the image is perfectly contained within the other, pixel for pixel, then this approach will not work.
There are other, more complicated algorithms based on the same dynamic programming concept as the above, but I won't go into them unless it's necessary.