I have two images, a background and a foreground, with different lighting conditions and colour tones. Each of them contains a person. I also have an alpha mask for the person in the foreground image. I want to blend the person (cropped out using the mask) from the foreground with the background image so that the final image looks as if the person from the foreground were standing next to the person in the background, i.e. a realistic composite blend. I already have the segmentation mask and I am able to get the cropped-out person from the foreground image. It looks like we need to ensure proper illumination, saturation and colour matching to obtain a natural blended feel. It seems we can easily do that manually in Photoshop (link1, link2). How can I achieve similar results programmatically, given two arbitrary background and foreground images with a person in each?
The following are the approaches I tried; each of them has certain issues and works only with certain types of images.
1. OpenCV seamless clone - works pretty well for plain, similar-contrast images. Issues: heavy, smudging, and cannot be generalised to all types of images.
2. Photorealistic style transfer - works great, but it is heavy, needs special training and works only with a few classes of images. Artefacts may still be present.
3. Fast color transfer (pyimagesearch) - works for plain backgrounds. Background colour averaging causes colours to spill over into the image ROI, so it loses its natural feel.
I also tried normal alpha blending, Laplacian blending, histogram matching, etc. Additionally, I experimented with techniques like CLAHE, gamma correction, histogram equalisation and edge blending to adjust and improve the output of the blending. Unfortunately, none of these combinations gave the desired result. (NB: the output should not look as if a layer has been stacked or pasted on top of the background image; we need a seamless, natural blend.)
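For reference, below is roughly the kind of lightweight approach I have been experimenting with (a stripped-down Java sketch, not my actual code): shift the mean and standard deviation of each colour channel of the masked person towards the background's statistics, then alpha-blend using the mask. The names naiveBlend and channelStats are just illustrative, and the int[][] mask is assumed to hold per-pixel alpha values in 0..255.

import java.awt.image.BufferedImage;

// Sketch: match each colour channel's mean/std inside the mask to the background,
// then alpha-composite the corrected foreground over the background.
static BufferedImage naiveBlend(BufferedImage fg, BufferedImage bg, int[][] mask) {
    int w = fg.getWidth(), h = fg.getHeight();
    double[] fgMean = new double[3], fgStd = new double[3];
    double[] bgMean = new double[3], bgStd = new double[3];
    channelStats(fg, mask, true, fgMean, fgStd);   // statistics of the person region
    channelStats(bg, mask, false, bgMean, bgStd);  // statistics of the background region
    BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int f = fg.getRGB(x, y), b = bg.getRGB(x, y), rgb = 0;
            double a = mask[y][x] / 255.0;             // 1 = person, 0 = background
            for (int c = 0; c < 3; c++) {
                int shift = 16 - 8 * c;                // R, G, B
                double fv = (f >> shift) & 0xFF;
                double bv = (b >> shift) & 0xFF;
                // move the foreground channel towards the background's colour statistics
                double t = (fv - fgMean[c]) * bgStd[c] / Math.max(fgStd[c], 1e-3) + bgMean[c];
                int v = (int) Math.round(a * t + (1 - a) * bv);  // plain alpha blend
                rgb |= Math.min(255, Math.max(0, v)) << shift;
            }
            out.setRGB(x, y, rgb);
        }
    }
    return out;
}

// Helper: per-channel mean and standard deviation over pixels where (mask > 127) == inside.
static void channelStats(BufferedImage img, int[][] mask, boolean inside, double[] mean, double[] std) {
    double[] sum = new double[3], sq = new double[3];
    long n = 0;
    for (int y = 0; y < img.getHeight(); y++) {
        for (int x = 0; x < img.getWidth(); x++) {
            if ((mask[y][x] > 127) != inside) continue;
            int p = img.getRGB(x, y);
            n++;
            for (int c = 0; c < 3; c++) {
                double v = (p >> (16 - 8 * c)) & 0xFF;
                sum[c] += v;
                sq[c] += v * v;
            }
        }
    }
    for (int c = 0; c < 3; c++) {
        mean[c] = sum[c] / Math.max(n, 1);
        std[c] = Math.sqrt(Math.max(sq[c] / Math.max(n, 1) - mean[c] * mean[c], 0.0));
    }
}

This gives a rough colour match without the background spill of whole-image averaging, but it still looks pasted on under difficult lighting, which is exactly the problem.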
Is this possible only using AI models, or is there any lightweight method for performing automatic image compositing in Java using some standard libraries (something that works adaptively for any two images in general)?
We are trying to achieve something similar to the following methods:
https://people.csail.mit.edu/yichangshih/portrait_web/
https://arxiv.org/abs/1709.09828
https://www.cs.cornell.edu/~kb/publications/egsr18_harmonization.pdf
Finally found the answer. Here is the Caffe implementation of Deep Color Harmonization: https://github.com/wasidennis/DeepHarmonization. It seems to be a lightweight network and works pretty fast. Still trying to figure out how to convert it to a mobile-friendly format like TensorFlow Lite ...
Hope it helps somebody ...
Related
I'm currently trying to mimic Gouraud shading in JavaFX.
Specifically, I want to be able to apply vertex colors to textured polygons.
The problem is, I'd rather not have to generate unique textures for all of the used vertex color / texture combinations, due to resource limitations.
My solution is to set up the model without vertex coloring, then overlay faces on top with (relatively) small transparent textures that create the intended look.
However, my problem is that I cannot seem to find any information or techniques for procedurally making a texture which, when overlaid, will create the intended effect.
Generating the "colored texture" uses the original texture and multiplies its RGB values by brightness values which are unique for each pixel. My problem is that I need to figure out how to mimic this process by just overlaying one face upon another.
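For clarity, this is the per-pixel process I mean, done directly on a texture with JavaFX's pixel API; it is a rough sketch, and the brightness array is hypothetical, standing in for the per-pixel shading values derived from the vertex colors:

import javafx.scene.image.Image;
import javafx.scene.image.PixelReader;
import javafx.scene.image.PixelWriter;
import javafx.scene.image.WritableImage;

// Build a "colored texture" by multiplying each texel's RGB by a per-pixel brightness (0.0..1.0).
// This is the effect I want to reproduce by overlaying one face on another instead.
static WritableImage multiplyTexture(Image texture, double[][] brightness) {
    int w = (int) texture.getWidth(), h = (int) texture.getHeight();
    WritableImage out = new WritableImage(w, h);
    PixelReader in = texture.getPixelReader();
    PixelWriter pw = out.getPixelWriter();
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int argb = in.getArgb(x, y);
            double k = brightness[y][x];
            int r = (int) (((argb >> 16) & 0xFF) * k);
            int g = (int) (((argb >> 8) & 0xFF) * k);
            int b = (int) ((argb & 0xFF) * k);
            pw.setArgb(x, y, (argb & 0xFF000000) | (r << 16) | (g << 8) | b); // keep original alpha
        }
    }
    return out;
}

Doing this for every vertex-color / texture combination is exactly what I would like to avoid because of the resource cost.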
I have tried changing the BlendMode, but no matter what I set it to, both MeshViews completely disappear. I do not know why this happens.
I have two pixel arrays, foreground and lighting. When I draw a white radial gradient (blurry circle) onto the lighting array, I want it to make the foreground visible - much like a torch in terraria/starbound. However, I also want to be able to mix different colors of lighting, rather than be stuck with black to white.
So how do I manipulate the pixel arrays so that they 'multiply' (I believe it's called)? Or is there an easier way using RGBA which I have not been able to get to work (flickering black on white or image getting ever more posterized)?
So far most responses regarding alpha channels / opacity have used Java libraries, which I want to refrain from using for this project.
Any help much appreciated!
If you want to know how blending modes (such as "multiply") work in image editing programs at the pixel-value level, read this: How does Photoshop blend two images together?
Note that if you want to make the image lighter, you need a mode like "screen", because "multiply" makes it darker.
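If you want to do it yourself on plain int ARGB arrays (no libraries), the per-channel arithmetic is roughly the following; channel values are 0..255, multiply darkens, screen lightens. The method names here are just for illustration:

// Blend one foreground pixel with one light-map pixel, channel by channel.
// multiply: out = a * b / 255                         (darkens)
// screen:   out = 255 - (255 - a) * (255 - b) / 255   (lightens)
static int blendPixel(int fgArgb, int lightArgb, boolean screen) {
    int out = 0xFF000000; // keep the result fully opaque
    for (int shift = 0; shift <= 16; shift += 8) { // blue, green, red
        int a = (fgArgb >> shift) & 0xFF;
        int b = (lightArgb >> shift) & 0xFF;
        int v = screen ? 255 - ((255 - a) * (255 - b)) / 255
                       : (a * b) / 255;
        out |= v << shift;
    }
    return out;
}

// Typical use: darken the foreground by the lighting array (a torch-style light map),
// where the lighting array is white near the light and black elsewhere.
static void applyLighting(int[] foreground, int[] lighting) {
    for (int i = 0; i < foreground.length; i++)
        foreground[i] = blendPixel(foreground[i], lighting[i], false);
}

Colored light then comes for free: a bluish pixel in the lighting array multiplies the red and green channels down more than the blue one.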
I have developed an Android app for color comparison, and I have successfully completed the app except for one problem: illumination. My reference charts are stored on the SD card as JPEG images, and I need to compare them with the images I take from the camera. I am getting output, but it depends on the illumination, so now I am planning to normalize the bitmaps. How do I normalize a bitmap? I am comparing images using a naive similarity method.
Please also suggest a good way to solve the illumination problem; I have been searching for methods for the last two weeks.
Try the Catalano Framework. Here is the article where you can learn how to use it.
I implemented Image Normalization using this approach.
FastBitmap fb = new FastBitmap("image.jpg"); // load the image into Catalano's FastBitmap wrapper
fb.toGrayscale();                            // the normalization filter operates on grayscale images
ImageNormalization in = new ImageNormalization();
in.applyInPlace(fb);                         // normalizes the intensity to a target mean and variance
Here is a link to a similar question about luminosity, so I won't repeat everything.
Color is perceptual, not an absolute. Take a red car and park it under a street light and the car will appear to be orange. Obviously the color of the paint has not changed, only your perception of the color has changed.
Any time you take a photo of color, the light used to illuminate the scene will change the results you get. Most cameras have a white-balance control, which most people leave on auto. The auto control looks for something to call white, then shifts the image to make that white look white. This does not mean the rest of the colors will be correct.
Take something like a colorful stuffed animal (or a few) and photograph it outside in sunlight, under an incandescent bulb, under fluorescent light, and in a dark room with the camera flash. How similar are the colors? If you have Photoshop, look at the color curves in Photoshop.
To match color, you need an objective standard, such as a color card included in the photo. The software then brightness- and color-corrects the image so the known card matches its reference values, and measures the other colors against those known standards.
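As a rough sketch of that idea (simple per-channel scaling, not proper color science): if you know the average RGB the card shows in the photo and the RGB it is supposed to have, scale each channel so the card lands on its reference value and apply the same scale to every pixel. The names here are only illustrative:

// Correct an image using a known reference patch (e.g. a grey or white card).
// measured*  = average RGB of the card as photographed
// reference* = the RGB the card is supposed to have
static int[] correctToReference(int[] pixelsArgb,
                                double measuredR, double measuredG, double measuredB,
                                double referenceR, double referenceG, double referenceB) {
    double kr = referenceR / measuredR, kg = referenceG / measuredG, kb = referenceB / measuredB;
    int[] out = new int[pixelsArgb.length];
    for (int i = 0; i < pixelsArgb.length; i++) {
        int p = pixelsArgb[i];
        int r = Math.min(255, (int) Math.round(((p >> 16) & 0xFF) * kr));
        int g = Math.min(255, (int) Math.round(((p >> 8) & 0xFF) * kg));
        int b = Math.min(255, (int) Math.round((p & 0xFF) * kb));
        out[i] = (p & 0xFF000000) | (r << 16) | (g << 8) | b;
    }
    return out;
}

Two photos of the same chart corrected this way can then be compared on a much more even footing than the raw camera output.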
I am attempting to create a "hole in the fog" effect. I have a background grid image; overlaid on that I have a "fog" texture that I use to show that certain areas are not in view. I am attempting to cut a chunk out of the "fog" to show the area that is currently in view. In other words, I am trying to "mask" part of the fog off the screen.
I made some images to help explain what I am after:
Background:
"Mask Image" (The full transparency has to be on the inside and not the outer rim for what I am going to use it for):
Fog (Sorry, hard to see.. Mostly Transparent):
What I want as a final Product:
I have tried:
Stencil-Buffer: I got this fully working except for one fact... I wasn't able to figure out how to retain the fading transparency of the "mask" image.
glBlendFunc: I have tried many different versions of the parameters and many other methods with it (glColorMask, glBlendEquation, glBlendFuncSeparate). I started by using some parameters that I found on this website: here. I used "glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);" as this seemed to be what I was looking for, but this is what ended up happening as a result: (It is hard to tell what is happening here, but there is fog covering the grid in the background. The mask is just ending up as a fully opaque black blob when it is supposed to be a transparent part of the fog.)
Some previous code:
glEnable(GL_BLEND); // Not actually called here; it is called in the program's init function, since blending is needed throughout the rendering cycle.
renderFogTexture(delta, 0.55f); // Renders the fog texture over the background; the 0.55f is the transparency of the image.
glBlendFunc(GL_ZERO, GL11.GL_ONE_MINUS_SRC_ALPHA); // The parameters I tried from one of the many websites I visited today.
renderFogCircles(delta); // Draws one (or more) of the mask images to remove the fog in key places.
(I would have posted more code, but after trying many things I started removing old code as it was getting very cluttered; I "backed it up" in block comments.)
This is doable, provided that you're not doing anything with the alpha of the framebuffer currently.
Step 1: Make sure that the alpha of the framebuffer is cleared to zero. So your glClearColor call needs to set the alpha to zero. Then call glClear as normal.
Step 2: Draw the mask image before you draw the "fog". As Tim said, once you blend with your fog, you can't undo that. So you need the mask data there first.
However, you also need to render the mask specially. You only want the mask to modify the framebuffer's alpha. You don't want it to mess with the RGB color. To do that, use this function: glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE). This turns off writes to the RGB part of the color; thus, only the alpha will be modified.
Your mask texture seems to have zero where it is visible and one where it isn't. However, the algorithm needs the opposite, so you should either fix your texture or use a glTexEnv mode that will effectively flip the alpha.
After this step, your framebuffer should have an alpha of 0 where we want to see the fog, and an alpha of 1 where we don't.
Also, don't forget to undo the glColorMask call after rendering the mask. You need to get those colors back.
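Putting steps 1 and 2 together, it might look roughly like this (renderFogCircles is your existing mask-drawing call; note that the background pass must not write 1s into the destination alpha, or the fog will be masked out everywhere):

// Step 1: clear the framebuffer with alpha = 0
glClearColor(0f, 0f, 0f, 0f);
glClear(GL_COLOR_BUFFER_BIT);

// ... render the background grid here ...

// Step 2: write the mask into the framebuffer's alpha channel only
glColorMask(false, false, false, true); // RGB writes off, alpha writes on
renderFogCircles(delta);                // your mask-drawing call
glColorMask(true, true, true, true);    // restore color writes before rendering the fog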
Step 3: Render the fog. That's easy enough; to make the masking work, you need a special blend mode. Like this one:
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE_MINUS_DST_ALPHA, GL_DST_ALPHA, GL_ZERO, GL_ONE);
The separation between the RGB and A blend portions is important. You don't want to change the framebuffer's alpha (just in case you want to render more than one layer of fog).
And you're done.
The approach you're currently taking will not work, as once you draw the fog over the whole screen, there's no way to 'erase' it.
If you're using fixed pipeline:
You can use multitexturing (glTexEnv) with the fixed pipeline to combine the fog and circle textures in a single pass. This function is probably kind of confusing if you haven't used it before, so you'll probably have to spend some time studying the man page. You'll do something like bind the fog to glActiveTexture 0 and the mask to glActiveTexture 1, enable multitexturing, and then combine them with glTexEnv. I don't remember exactly the right parameters for this.
If you're using shaders:
Use a multitexturing shader where you multiply the fog alpha with the circle texture (to zero out the alpha in the circle region), and then blend this combined texture into the background in a single pass. This is probably a more conceptually easy approach, but not sure if you're using shaders.
I'm not sure there's a way you can do this where you draw the fog and the mask in separate passes, as they both have their own alpha values, it will be difficult to combine them to get the right color result.
I need to clip variable-sized images into puzzle-shaped pieces like this (not squares): http://www.fernando.com.ar/jquery-puzzle/
I have considered the possibility of doing this with a PHP library like Cairo or GD, but I have little to no experience with these libraries and see no immediate solution for creating a clipping mask that scales dynamically for different image sizes.
I'm looking for guidance/tips on which server-side programming language to use to accomplish this task, and preferably an approach to the problem.
You can create an image using GD with the size of the puzzle piece, and then copy the full image onto it with the right cropping to get the right part of the image.
Then you can dynamically color in every part of the piece you want to remove with a distinct color (e.g. #0f0) and then use imagecolorallocatealpha to make that color transparent. Do this for each piece and you have your server-side image pieces.
However, if I were you I would create the clipping mask for each puzzle piece in advance in the distinct color. That would make two images per connection (one with the "circle" connector sticking out and one into which this circle connector fits). That way you can just copy these masks onto the image to create nice edges quickly.
GD is quite complicated. I've heard very good things about ImageMagick, for which there is a PHP extension and lots of documentation on php.net. However, not all web servers have it installed by default.
http://www.php.net/manual/en/book.imagick.php
If you choose to do it using PHP with GD then the code here may help:
http://php.amnuts.com/index.php?do=view&id=15&file=class.imagemask.php
Essentially what you need to do with GD is start with a mask at a particular size and then use the imagecopyresampled function to copy the mask image resource at a larger or smaller size. To see what I mean, check out the _getMaskImage method of the class shown at the URL above. A working example of the output can be seen at:
http://php.amnuts.com/demos/image-mask/
The problem with doing it via GD, as far as I can tell, is that you need to do it a pixel at a time if you want to achieve varying opacity levels, so processing a large image could take a few seconds. With ImageMagick this may not be the case.