How to replace a color in an image? - java

I want to replace a color in an image, for example turn all blue colors to red without any distortion of the shape. When I try this by iterating over every pixel and swapping colors, the swap works, but the swapped area turns into a flat shape.
example1 input : http://www.tutorialwiz.com/tutorials/changing_color/images/original.jpg
example1 output: http://www.tutorialwiz.com/tutorials/changing_color/images/3.jpg
example2: input output together : http://www.digital-photography-school.com/wp-content/uploads/2009/07/before-after.jpg

The usual way to do this is to use an RGBImageFilter. See the docs for FilteredImageSource for how to use such a filter.
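For example, a minimal sketch of that approach, assuming a plain red/blue channel swap is the replacement you want:
import java.awt.Image;
import java.awt.Toolkit;
import java.awt.image.FilteredImageSource;
import java.awt.image.RGBImageFilter;

public class SwapBlueRed {
    // Swap the red and blue channels of every pixel. Because green and
    // the relative channel intensities are untouched, the shading survives.
    public static Image swapBlueToRed(Image src) {
        RGBImageFilter filter = new RGBImageFilter() {
            @Override
            public int filterRGB(int x, int y, int argb) {
                int a = argb & 0xFF000000;
                int r = (argb >> 16) & 0xFF;
                int g = (argb >> 8) & 0xFF;
                int b = argb & 0xFF;
                return a | (b << 16) | (g << 8) | r; // red <-> blue
            }
        };
        return Toolkit.getDefaultToolkit().createImage(
                new FilteredImageSource(src.getSource(), filter));
    }
}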

The problem with what you are asking is that a strict color replacement will flatten the shape in the image. What you need to do is recover a little information about the shape, i.e. find the boundaries and the shading along them. My suggestion:
1. Use edge detection to separate out blobs.
2. Find a blob that fits your demands, i.e. one that is plausibly [color]-ish within some tolerance.
3. Identify the blobs and any filters applied to them.
4. Recolor the blob and reshade it (see the sketch below).
IMO, that is the only way you're going to get the result you're looking for.
Some notes: finding a blob may require some testing to determine how much tolerance you have for blobs and shapes [Connected Regions].
Another thing: the shading and relative coloring will require you to allow some deviation from the target color.
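To make the "recolor and reshade" step concrete, here is a small sketch that shifts hue while leaving saturation and brightness (which carry the shading) untouched; the blue-ish hue range is an assumption you would tune per image:
import java.awt.Color;

// Recolor blue-ish pixels to red without flattening the shading:
// only the hue channel changes, saturation and brightness stay.
static int shiftBlueToRed(int argb) {
    int a = argb & 0xFF000000;
    int r = (argb >> 16) & 0xFF, g = (argb >> 8) & 0xFF, b = argb & 0xFF;
    float[] hsb = Color.RGBtoHSB(r, g, b, null);
    float hueDegrees = hsb[0] * 360f;
    if (hueDegrees > 180f && hueDegrees < 280f) { // "blue-ish" (assumed range)
        hsb[0] = 0f;                              // red hue
    }
    return a | (Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]) & 0x00FFFFFF);
}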

I don't fully understand the question.
But from the example given, I see that the process of changing blue into red failed: instead of becoming red, it becomes gray. Is gray what you mean by "flat"?
If so, it looks like a color channel error.
To change blue to red, you have to change (0,0,255) to (255,0,0), i.e. move the value from the blue channel to the red channel.
I'm not sure if that is the problem; a code snippet would help to evaluate what the real problem is.

Related

How to find the prominent color of an image?

I tried to get the prominent color of an image using this example. It works perfectly for plain images. But when I give it an image like this,
it gives me the color RGB(0,4,7) (nearly black). How can I fix this?
In fact you are not looking for the dominant color, but rather for the dominant range of color. So you are looking for the biggest cluster in the RGB histogram. Take a look at the papers of Arnaud LeTrotter (at the time he was working at the LSIS laboratory in Marseille, France); this is exactly what he did. He added some tolerance to the main color detection in order to find the main range.
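A minimal sketch of that "biggest cluster" idea, assuming coarse fixed-size RGB bins rather than LeTrotter's actual clustering (a binsPerChannel of around 8 is an assumption to tune):
import java.awt.image.BufferedImage;

// Quantize each channel into coarse bins and return the center of the
// most populated bin, i.e. the dominant *range* of color.
static int dominantColor(BufferedImage img, int binsPerChannel) {
    int step = 256 / binsPerChannel;
    int[] counts = new int[binsPerChannel * binsPerChannel * binsPerChannel];
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            int rgb = img.getRGB(x, y);
            int r = ((rgb >> 16) & 0xFF) / step;
            int g = ((rgb >> 8) & 0xFF) / step;
            int b = (rgb & 0xFF) / step;
            counts[(r * binsPerChannel + g) * binsPerChannel + b]++;
        }
    }
    int best = 0;
    for (int i = 1; i < counts.length; i++)
        if (counts[i] > counts[best]) best = i;
    int r = best / (binsPerChannel * binsPerChannel);
    int g = (best / binsPerChannel) % binsPerChannel;
    int b = best % binsPerChannel;
    return ((r * step + step / 2) << 16)
         | ((g * step + step / 2) << 8)
         |  (b * step + step / 2);
}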

Detecting if a BufferedImage contains transparent pixels

I'm trying to optimise a rendering engine in Java to not draw objects which are covered up by 'solid' child objects drawn in front of them, i.e. the parent is occluded by its children.
I want to know if an arbitrary BufferedImage I load from a file contains any transparent pixels, as this affects my occlusion testing.
I've found I can use BufferedImage.getColorModel().hasAlpha() to find out if the image supports alpha, but in the case that it does, it doesn't tell me whether the image definitely contains non-opaque pixels.
I know I could loop over the pixel data, test each pixel's alpha value and return as soon as I discover a non-opaque pixel, but I was wondering if there's already something native I could use, a flag that is set internally perhaps? Or something a little less intensive than iterating through pixels.
Any input appreciated, thanks.
Unfortunately, you will have to loop through each pixel (until you find a transparent pixel) to be sure.
If you don't need to be 100% sure, you could of course test only some pixels, where you think transparency is most likely to occur.
By looking at various images, I think you'll find that most images that have transparent parts contain transparency along the edges. This optimization will help in many common cases.
Unfortunately, I don't think there's an optimization for one of the most common cases: the one where the color model allows transparency, but there really are no transparent pixels. You really do need to test every pixel in this case to know for sure.
Accessing the alpha values in their "native representation" (through the Raster/DataBuffer/SampleModel classes) is going to be faster than using BufferedImage.getRGB(x, y) and masking out the alpha value.
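A sketch of that early-exit loop, reading alpha through the image's alpha raster rather than getRGB (the comparison against 255 assumes 8-bit alpha samples):
import java.awt.image.BufferedImage;
import java.awt.image.Raster;

// Bail out on the first non-opaque pixel found. Uses the alpha raster
// directly instead of unpacking full ARGB values via getRGB.
static boolean containsNonOpaquePixels(BufferedImage img) {
    if (!img.getColorModel().hasAlpha()) return false; // no alpha channel at all
    Raster alpha = img.getAlphaRaster();
    if (alpha == null) return false;
    for (int y = 0; y < alpha.getHeight(); y++) {
        for (int x = 0; x < alpha.getWidth(); x++) {
            if (alpha.getSample(x, y, 0) < 255) return true;
        }
    }
    return false;
}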
I'm pretty sure you'll need to loop through each pixel and check its alpha value.
The best alternative I can offer is to write a custom method for reading the pixel data, i.e. your own Raster. Within this class, as you read the pixel data from the source file into the data buffer, you can check the alpha values as you go. Of course, this isn't much help if you're using a built-in image-reading class, and it involves a lot more effort.

Using a semi transparent texture to "mask" out a part of the background

I am attempting to create a "hole in the fog" effect. I have a background grid image; overlapped onto that is a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of the fog that will show the area that is currently in view, i.e. to "mask" a part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (sorry, hard to see; mostly transparent):
What I want as a final Product:
I have tried:
Stencil buffer: I got this fully working except for one thing: I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different combinations of parameters, and many other methods alongside it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started with some parameters that I found on this website: here. I used glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA), as this seemed to be what I was looking for, but this is what ended up happening as a result. (It's hard to tell what is happening here, but there is fog covering the grid in the background. The mask ends up as a fully opaque black blob when it's supposed to be a transparent part of the fog.)
Some previous code:
glEnable(GL_BLEND); // Not actually called here; it is called in the init function, as blending is needed throughout the rendering cycle.
renderFogTexture(delta, 0.55f); // Renders the fog texture over the background; 0.55f is the transparency of the image.
glBlendFunc(GL_ZERO, GL11.GL_ONE_MINUS_SRC_ALPHA); // The combination I tried from one of the many websites I visited today.
renderFogCircles(delta); // Draws one (or more) of the mask images to remove the fog in key places.
(I would have posted more code, but after trying many things I removed some of the old code as it was getting very cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
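Put together, the three steps might look like this in LWJGL-style Java (renderMask and renderFog are hypothetical stand-ins for your own draw calls, and static GL imports are assumed):
glClearColor(0f, 0f, 0f, 0f);           // Step 1: clear framebuffer alpha to 0
glClear(GL_COLOR_BUFFER_BIT);

glColorMask(false, false, false, true); // Step 2: write only to alpha
renderMask();                           // mask leaves alpha 1 where fog should remain
glColorMask(true, true, true, true);    // restore RGB writes

glEnable(GL_BLEND);                     // Step 3: fog blended against dest alpha
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
renderFog();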
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably kind of confusing if you haven't used it before; you'll probably have to spend some time studying the man page. You'll do something like binding the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enabling multitexturing, and then combining them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha by the circle texture (to zero out the alpha in the circle region), and then blend this combined texture into the background in a single pass. This is probably the more conceptually straightforward approach, but I'm not sure whether you're using shaders.
I'm not sure there's a way to do this where you draw the fog and the mask in separate passes; since they both have their own alpha values, it will be difficult to combine them to get the right color result.
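A sketch of the fragment shader that single-pass approach describes (the sampler and varying names are assumptions, written as a GLSL string for a Java shader loader):
// Hypothetical fragment shader: fog alpha multiplied by the mask's
// alpha, so the circle region ends up fully see-through.
static final String FOG_FRAGMENT_SHADER =
    "uniform sampler2D fog;" +
    "uniform sampler2D mask;" +
    "varying vec2 uv;" +
    "void main() {" +
    "    vec4 fogColor = texture2D(fog, uv);" +
    "    gl_FragColor = vec4(fogColor.rgb, fogColor.a * texture2D(mask, uv).a);" +
    "}";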

decode inverted QR code zxing

I found that there is a variable called "int[] stateCount" in the FinderPatternFinder class that helps check for any possible finder pattern in a QR code. I am thinking that, in order to detect/locate the finder patterns, making changes to this variable would be helpful.
Any ideas on how to detect/decode a color inverted QR code in Java with ZXing?
I'm not a Java programmer, but since I have managed to alter the ZXing source to implement scanning of inverted codes in my iOS project, maybe my implementation will help you:
Invert pixels - zxing
Not sure what you mean by color-inverted. Do you mean swapping light and dark? In theory, you should be able to take an image, extract the luminance, and invert it, e.g., 255 - pixel_luminance. Note that the quiet zone (the surrounding white) needs to be inverted as well, i.e., it becomes surrounding black. And it's possible this won't work anyway; the ZXing heuristics are not always symmetric. You can give it a shot, but it may not work.
Note that zxing only extracts luminance. Two colors of very different hues but the same luminance are indistinguishable to the detectors/decoders.
In any case, mucking with stateCount is probably not going to help. At that point the image is purely black and white, not even grayscale. You want to take any variations/distortions in your image into account ahead of this step and leave this code untouched.
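If you want to try the luminance inversion suggested above, a plain-Java sketch (inverting the whole frame before handing it to ZXing) might look like this:
import java.awt.image.BufferedImage;

// Flip every pixel of the candidate image. Remember the quiet zone:
// the source must have a dark border so the inverted result ends up
// with the standard light quiet zone around the code.
static BufferedImage invert(BufferedImage src) {
    BufferedImage out = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < src.getHeight(); y++) {
        for (int x = 0; x < src.getWidth(); x++) {
            int rgb = src.getRGB(x, y);
            int r = 255 - ((rgb >> 16) & 0xFF);
            int g = 255 - ((rgb >> 8) & 0xFF);
            int b = 255 - (rgb & 0xFF);
            out.setRGB(x, y, (r << 16) | (g << 8) | b);
        }
    }
    return out;
}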

Find an Image within an Image

I am looking for the best way to detect an image within another image. I have a small image and would like to find the location that it appears within a larger image - which will actually be screen captures. Conceptually, it is like a 'Where's Waldo?' sort of search in the larger image.
Are there any efficient/quick ways to accomplish this? Speed is more important than memory.
Edit:
The 'inner' image may not always have the same scale but will have the same rotation.
It is not safe to assume that the image will be perfectly contained within the other, pixel for pixel.
Wikipedia has an article on Template Matching, with sample code.
(While that page doesn't handle changed scales, it has links to other styles of matching, for example Scale invariant feature transform)
If rotation also had to be catered for, the Generalised Hough Transform can be used.
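For reference, a naive template-matching sketch in Java (exhaustive sum of squared differences; fine for small images, though real uses would prune harder or search an image pyramid for speed):
import java.awt.image.BufferedImage;

// Slide the small image over every offset in the big one and keep the
// offset with the lowest squared pixel difference. The inner loop
// aborts early once the running score exceeds the best so far.
static int[] findTemplate(BufferedImage big, BufferedImage small) {
    long bestScore = Long.MAX_VALUE;
    int bestX = -1, bestY = -1;
    for (int oy = 0; oy <= big.getHeight() - small.getHeight(); oy++) {
        for (int ox = 0; ox <= big.getWidth() - small.getWidth(); ox++) {
            long score = 0;
            for (int y = 0; y < small.getHeight() && score < bestScore; y++) {
                for (int x = 0; x < small.getWidth(); x++) {
                    int p = big.getRGB(ox + x, oy + y);
                    int q = small.getRGB(x, y);
                    int dr = ((p >> 16) & 0xFF) - ((q >> 16) & 0xFF);
                    int dg = ((p >> 8) & 0xFF) - ((q >> 8) & 0xFF);
                    int db = (p & 0xFF) - (q & 0xFF);
                    score += (long) dr * dr + (long) dg * dg + (long) db * db;
                }
            }
            if (score < bestScore) { bestScore = score; bestX = ox; bestY = oy; }
        }
    }
    return new int[] { bestX, bestY }; // top-left corner of the best match
}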
You can treat this as a substring problem, where the characters in the alphabet are pixels and your string is the image. You would also need to use a special character, in a similar vein to a line break, to denote the image boundary.
The algorithm you want is on wikipedia: http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm
Update: If you cannot assume that the image is perfectly contained within the other, pixel for pixel, then this approach will not work.
There are other, more complicated algorithms based on the same dynamic programming concept as the above, but I won't go into them unless it's necessary.
