I would like to create a Mat with a grid on a transparent background that I can lay on top of other Mats. I am struggling with both the transparency part and the overlaying.
Mat image = imread("pic.jpg");
Mat grid = new Mat(image.size(), CvType.CV_8UC4, new Scalar(0, 0, 0, 0));
for (//times)
// draw grid with: line(grid, ... )
grid.copyTo(image);
First of all, the grid Mat is not transparent at all, it is black. Isn't Scalar constructed like this?
new Scalar(Blue, Green, Red, Alpha)
Also, how do I overlay one image with another? This just overwrites it.
Here is a sample program written in C++, but it should be very analogous in Java:
cv::Mat input = cv::imread("../inputData/Lenna.png");
cv::Mat inputBGRA;
cv::cvtColor(input, inputBGRA, cv::COLOR_BGR2BGRA);
cv::Mat gridSolid = cv::Mat(input.size(), inputBGRA.type(), cv::Scalar(0,0,0,0));
cv::Mat gridMask = cv::Mat(input.size(), CV_8UC1, cv::Scalar(0));
cv::Mat gridAlpha = cv::Mat(input.size(), inputBGRA.type(), cv::Scalar(0,0,0,0));
cv::line(gridSolid, cv::Point(0,0), cv::Point(512,512), cv::Scalar(0,255,0,255), 10);
cv::line(gridSolid, cv::Point(0,512), cv::Point(512,0), cv::Scalar(0,255,0,255), 10);
cv::line(gridMask, cv::Point(0,0), cv::Point(512,512), cv::Scalar(255), 10); // single channel
cv::line(gridMask, cv::Point(0,512), cv::Point(512,0), cv::Scalar(255), 10); // single channel
// copy and use the mask. copying eliminates the original values where the mask is set
cv::Mat outputCopy = inputBGRA.clone();
gridSolid.copyTo(outputCopy,gridMask);
// here set the scalar alpha value to less than 255
// both lines use different alpha values
cv::line(gridAlpha, cv::Point(0,0), cv::Point(512,512), cv::Scalar(0,255,0,120), 10);
cv::line(gridAlpha, cv::Point(0,512), cv::Point(512,0), cv::Scalar(0,255,0,180), 10);
cv::Mat outputWeightSum = inputBGRA.clone();
//cv::addWeighted(inputBGRA, 0.5, gridAlpha, 0.5, 0, outputWeightSum);
// manually add weighted sum PER ALPHA VALUE:
for(int y=0; y<outputWeightSum.rows; ++y)
for(int x=0; x<outputWeightSum.cols; ++x)
{
// the bigger the alpha value, the less of the original image is kept at that pixel
cv::Vec4b imgPix = outputWeightSum.at<cv::Vec4b>(y,x);
cv::Vec4b gridPix = gridAlpha.at<cv::Vec4b>(y,x);
// use alpha channel for blending
float blendpart = (float)gridPix[3]/(float)255;
// set pixel value to blended value
outputWeightSum.at<cv::Vec4b>(y,x) = blendpart * gridPix + (1.0f-blendpart) * imgPix;
}
In fact, you don't need the alpha channel in this example, but if you have more complex "grids" with differing alpha values, it might be nice.
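Since the question is in Java, here is a hedged sketch of the copy-with-mask variant using the Java bindings (OpenCV 3+ assumed, so imread lives in Imgcodecs; the file name and the single diagonal line are just placeholders for your grid):
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class GridOverlay {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat input = Imgcodecs.imread("pic.jpg");                     // BGR input
        Mat inputBGRA = new Mat();
        Imgproc.cvtColor(input, inputBGRA, Imgproc.COLOR_BGR2BGRA);

        // solid grid and a single-channel mask of the same lines
        Mat gridSolid = new Mat(input.size(), inputBGRA.type(), new Scalar(0, 0, 0, 0));
        Mat gridMask = new Mat(input.size(), CvType.CV_8UC1, new Scalar(0));
        Imgproc.line(gridSolid, new Point(0, 0), new Point(input.cols(), input.rows()),
                new Scalar(0, 255, 0, 255), 10);
        Imgproc.line(gridMask, new Point(0, 0), new Point(input.cols(), input.rows()),
                new Scalar(255), 10);                                // repeat for every grid line

        // copy the grid onto the image only where the mask is set
        Mat output = inputBGRA.clone();
        gridSolid.copyTo(output, gridMask);

        Imgcodecs.imwrite("pic_with_grid.png", output);
    }
}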
I get these results:
method: copy:
method: blend with alpha channel:
Related
I have an inputMat (RGBA format). I want to keep only the transparent pixels and set them to white. All the other pixels (which are consequently non-transparent) should be changed to transparent.
Beginning of my Java code :
Mat inputMat = new Mat();
Utils.bitmapToMat(bitmap, inputMat);
How can I do what I want to do? (Answers in all languages - not only Java - accepted!)
Thanks!
That is the idea.
Mat inputMat = new Mat();
Utils.bitmapToMat(bitmap, inputMat);
Split the image into channels:
List<Mat> channels = new ArrayList<>(4);
Core.split(inputMat, channels);
Get alpha channel:
Mat alpha = channels.get(3);
Invert alpha channel:
Core.bitwise_not(alpha,alpha);
Make new list of channels:
List<Mat> channelsOut = new ArrayList<>();
channelsOut.add(alpha);
channelsOut.add(alpha);
channelsOut.add(alpha);
channelsOut.add(alpha);
Merge them back into an image:
Mat outputMat = new Mat();
Core.merge(channelsOut,outputMat);
OpenCV's Mat class has a setTo method that takes a mask argument.
OpenCV has the split procedure that can separate color planes (channels).
Mats support comparison.
OpenCV Mats support "normal" math expressions.
In Python with NumPy, the same logic can be written directly on the array (assuming the image was loaded with its alpha channel, e.g. cv2.imread(path, cv2.IMREAD_UNCHANGED)); a Java sketch using the Mat operations above follows after the Python example.
assert input.shape[2] == 4, "it's not a four-channel picture"
alpha = input[..., 3] # select alpha plane
assert ((alpha == 0) | (alpha == 255)).all(), "assuming alpha values to be binary"
mask = (alpha == 0) # boolean array representing the transparent pixels
# change input; if you want a new array, copy it or create an empty one of the same shape
input[mask] = (255, 255, 255, 255) # white, opaque
# ~mask inverts the mask
input[~mask] = (0, 0, 0, 0) # set transparent, clear color information
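For a Java version along the lines of the Mat operations listed above (build a mask from the alpha channel and use setTo), a hedged sketch could look like this; inputMat is assumed to be the CV_8UC4 RGBA Mat from Utils.bitmapToMat, with binary alpha as in the assert above:
// sketch: mask of the originally transparent pixels (alpha == 0)
List<Mat> channels = new ArrayList<>(4);
Core.split(inputMat, channels);
Mat alpha = channels.get(3);
Mat transparentMask = new Mat();
Core.inRange(alpha, new Scalar(0), new Scalar(0), transparentMask);

// transparent pixels become opaque white
inputMat.setTo(new Scalar(255, 255, 255, 255), transparentMask);

// every other pixel becomes fully transparent
Mat opaqueMask = new Mat();
Core.bitwise_not(transparentMask, opaqueMask);
inputMat.setTo(new Scalar(0, 0, 0, 0), opaqueMask);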
I am developing an application to detect the lesion area; for this I am using grabcut to detect the ROI and remove the background from the image. However, on some images it does not work well: it fails to properly identify the borders of the region of interest. Watershed can identify the edges better for this type of work, but I am having difficulty making the transition from grabcut to watershed. Before running grabcut, the user uses a touch event to mark a rectangle around the area of interest (the wound area) to help the algorithm, as in the image below.
However, using other wound images, segmentation is not good, showing flaws in ROI detection.
Image using grabcut in app
Image using watershed in desktop
This is the code:
private fun extractForegroundFromBackground(coordinates: Coordinates, currentPhotoPath: String): String {
// TODO: Provide complex object that has both path and extension
val width = bitmap?.getWidth()!!
val height = bitmap?.getHeight()!!
val rgba = Mat()
val gray_mat = Mat()
val threeChannel = Mat()
Utils.bitmapToMat(bitmap, gray_mat)
cvtColor(gray_mat, rgba, COLOR_RGBA2RGB)
cvtColor(rgba, threeChannel, COLOR_RGB2GRAY)
threshold(threeChannel, threeChannel, 100.0, 255.0, THRESH_OTSU)
val rect = Rect(coordinates.first, coordinates.second)
val fg = Mat(rect.size(), CvType.CV_8U)
erode(threeChannel, fg, Mat(), Point(-1.0, -1.0), 10)
val bg = Mat(rect.size(), CvType.CV_8U)
dilate(threeChannel, bg, Mat(), Point(-1.0, -1.0), 5)
threshold(bg, bg, 1.0, 128.0, THRESH_BINARY_INV)
val markers = Mat(rgba.size(), CvType.CV_8U, Scalar(0.0))
Core.add(fg, bg, markers)
val marker_tempo = Mat()
markers.convertTo(marker_tempo, CvType.CV_32S)
watershed(rgba, marker_tempo)
marker_tempo.convertTo(markers, CvType.CV_8U)
val imgBmpExit = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565)
Utils.matToBitmap(markers, imgBmpExit)
image.setImageBitmap(imgBmpExit)
// Run the grab cut algorithm with a rectangle (for subsequent iterations with touch-up strokes,
// flag should be Imgproc.GC_INIT_WITH_MASK)
//Imgproc.grabCut(srcImage, firstMask, rect, bg, fg, iterations, Imgproc.GC_INIT_WITH_RECT)
// Create a matrix of 0s and 1s, indicating whether individual pixels are equal
// or different between "firstMask" and "source" objects
// Result is stored back to "firstMask"
//Core.compare(mark, source, mark, Core.CMP_EQ)
// Create a matrix to represent the foreground, filled with white color
val foreground = Mat(srcImage.size(), CvType.CV_8UC3, Scalar(255.0, 255.0, 255.0))
// Copy the foreground matrix to the first mask
srcImage.copyTo(foreground, mark)
// Create a red color
val color = Scalar(255.0, 0.0, 0.0, 255.0)
// Draw a rectangle using the coordinates of the bounding box that surrounds the foreground
rectangle(srcImage, coordinates.first, coordinates.second, color)
// Create a new matrix to represent the background, filled with black color
val background = Mat(srcImage.size(), CvType.CV_8UC3, Scalar(0.0, 0.0, 0.0))
val mask = Mat(foreground.size(), CvType.CV_8UC1, Scalar(255.0, 255.0, 255.0))
// Convert the foreground's color space from BGR to gray scale
cvtColor(foreground, mask, Imgproc.COLOR_BGR2GRAY)
// Separate out regions of the mask by comparing the pixel intensity with respect to a threshold value
threshold(mask, mask, 254.0, 255.0, Imgproc.THRESH_BINARY_INV)
// Create a matrix to hold the final image
val dst = Mat()
// copy the background matrix onto the matrix that represents the final result
background.copyTo(dst)
val vals = Mat(1, 1, CvType.CV_8UC3, Scalar(0.0))
// Replace all 0 values in the background matrix given the foreground mask
background.setTo(vals, mask)
// Add the sum of the background and foreground matrices by applying the mask
Core.add(background, foreground, dst, mask)
// Save the final image to storage
Imgcodecs.imwrite(currentPhotoPath + "_tmp.png", dst)
// Clean up used resources
firstMask.release()
source.release()
//bg.release()
//fg.release()
vals.release()
dst.release()
return currentPhotoPath
}
Output:
How do I update the code to use watershed instead of grabcut?
A description of how to apply the watershed algorithm in OpenCV is here, although it is in Python. The documentation also contains some potentially useful examples. Since you already have a binary image, all that's left is to apply the Euclidean Distance Transform (EDT) and the watershed function. So instead of Imgproc.grabCut(srcImage, firstMask, rect, bg, fg, iterations, Imgproc.GC_INIT_WITH_RECT), you would have:
Mat dist = new Mat();
Imgproc.distanceTransform(srcImage, dist, Imgproc.DIST_L2, Imgproc.DIST_MASK_3); // use L2 for Euclidean Distance
Mat markers = Mat.zeros(dist.size(), CvType.CV_32S);
Imgproc.watershed(dist, markers); // apply watershed to the resultant image from the EDT
Mat mark = Mat.zeros(markers.size(), CvType.CV_8U);
markers.convertTo(mark, CvType.CV_8UC1);
Imgproc.threshold(mark, firstMask, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU); // threshold the result to get a binary image
The thresholding step is described here. Also, optionally, before you apply Imgproc.watershed, you may want to apply some morphological operations to the result of the EDT, e.g. dilation or erosion:
Imgproc.dilate(dist, dist, Mat.ones(3, 3, CvType.CV_8U));
If you're not familiar with morphological operations when it comes to processing binary images, the OpenCV documentation contains some good, quick examples.
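For reference, the fuller marker-based pipeline from the linked tutorial can be sketched in Java roughly as below (OpenCV 3+ for connectedComponents). This is a sketch, not a drop-in: it assumes srcImage is the 8-bit BGR source, bw is its binarized CV_8UC1 version (e.g. the Otsu result), and the 0.5 cut on the normalized distance is only a starting point to tune:
Mat dist = new Mat();
Imgproc.distanceTransform(bw, dist, Imgproc.DIST_L2, 3);
Core.normalize(dist, dist, 0.0, 1.0, Core.NORM_MINMAX);

// sure foreground: peaks of the distance transform
Mat sureFg = new Mat();
Imgproc.threshold(dist, sureFg, 0.5, 1.0, Imgproc.THRESH_BINARY);
Mat sureFg8u = new Mat();
sureFg.convertTo(sureFg8u, CvType.CV_8U, 255.0);

// sure background: dilation of the binary image
Mat sureBg = new Mat();
Imgproc.dilate(bw, sureBg, Mat.ones(3, 3, CvType.CV_8U), new Point(-1, -1), 3);

// unknown region: background minus foreground
Mat unknown = new Mat();
Core.subtract(sureBg, sureFg8u, unknown);

// label the sure-foreground blobs, shift labels so sure background is 1 and unknown is 0
Mat markers = new Mat();
Imgproc.connectedComponents(sureFg8u, markers); // CV_32S labels
Core.add(markers, new Scalar(1.0), markers);
markers.setTo(new Scalar(0.0), unknown);

// watershed runs on the original color image and fills the markers in place
Imgproc.watershed(srcImage, markers);

// boundaries are labelled -1; convert and threshold as above to get a binary mask
Mat mark = new Mat();
markers.convertTo(mark, CvType.CV_8U);
Imgproc.threshold(mark, firstMask, 0, 255, Imgproc.THRESH_BINARY + Imgproc.THRESH_OTSU);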
Hope this helps!
I have the following image:
I would like to detect the red rectangle using cv::inRange method and HSV color space.
int H_MIN = 0;
int H_MAX = 10;
int S_MIN = 70;
int S_MAX = 255;
int V_MIN = 50;
int V_MAX = 255;
cv::cvtColor( input, imageHSV, cv::COLOR_BGR2HSV );
cv::inRange( imageHSV, cv::Scalar( H_MIN, S_MIN, V_MIN ), cv::Scalar( H_MAX, S_MAX, V_MAX ), imgThreshold0 );
I already created dynamic trackbars in order to change the values for HSV, but I can't get the desired result.
Any suggestion for best values (and maybe filters) to use?
In OpenCV's HSV space the hue range is [0, 180], and red wraps around it, so you need H values both in [0, 10] and in [170, 180].
Try this:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat3b bgr = imread("path_to_image");
Mat3b hsv;
cvtColor(bgr, hsv, COLOR_BGR2HSV);
Mat1b mask1, mask2;
inRange(hsv, Scalar(0, 70, 50), Scalar(10, 255, 255), mask1);
inRange(hsv, Scalar(170, 70, 50), Scalar(180, 255, 255), mask2);
Mat1b mask = mask1 | mask2;
imshow("Mask", mask);
waitKey();
return 0;
}
Your previous result:
Result adding range [170, 180]:
Another interesting approach, which only needs to check a single range, is:
invert the BGR image
convert to HSV
look for cyan color
This idea has been proposed by fmw42 and kindly pointed out by Mark Setchell. Thank you very much for that.
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
Mat3b bgr = imread("path_to_image");
Mat3b bgr_inv = ~bgr;
Mat3b hsv_inv;
cvtColor(bgr_inv, hsv_inv, COLOR_BGR2HSV);
Mat1b mask;
inRange(hsv_inv, Scalar(90 - 10, 70, 50), Scalar(90 + 10, 255, 255), mask); // Cyan is 90
imshow("Mask", mask);
waitKey();
return 0;
}
When working with dominant colors such as red, blue, green, and yellow, analyzing the two color channels of the LAB color space keeps things simple. All you need to do is apply a suitable threshold on either of the two color channels.
1. Detecting Red color
Background :
The LAB color space represents:
the brightness value in the image in the primary channel (L-channel)
while colors are expressed in the two remaining channels:
the color variations between red and green are expressed in the secondary channel (A-channel)
the color variations between yellow and blue are expressed in the third channel (B-channel)
Code :
import cv2
img = cv2.imread('red.png')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# Perform Otsu threshold on the A-channel
th = cv2.threshold(lab[:,:,1], 127, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
Result:
I have placed the LAB-converted image and the thresholded image beside each other.
2. Detecting Blue color
Now let's see how to detect the blue color.
Sample image:
Since I am working with blue color:
Analyze the B-channel (since it expresses blue color better)
Perform inverse threshold to make the blue region appear white
(Note: the code changes below compared to the one above)
Code :
import cv2
img = cv2.imread('blue.jpg')
# convert to LAB color space
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
# Perform inverse Otsu threshold on the B-channel
th = cv2.threshold(lab[:,:,2], 127, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
Result:
Again, stacking the LAB and final image:
Conclusion :
Similar processing can be performed for green and yellow colors, as sketched below.
Moreover, segmenting a range of one of these dominant colors is also much simpler.
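As a hedged sketch of that, here is green detection written with the Java bindings; green sits on the low side of the A-channel, so the Otsu threshold is inverted (the file name is a placeholder):
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import java.util.ArrayList;
import java.util.List;

public class GreenLab {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat img = Imgcodecs.imread("green.png");

        // convert to LAB and take the A-channel (red-green axis)
        Mat lab = new Mat();
        Imgproc.cvtColor(img, lab, Imgproc.COLOR_BGR2Lab);
        List<Mat> labChannels = new ArrayList<>(3);
        Core.split(lab, labChannels);

        // inverse Otsu threshold so the (low-A) green region comes out white
        Mat th = new Mat();
        Imgproc.threshold(labChannels.get(1), th, 127, 255,
                Imgproc.THRESH_BINARY_INV + Imgproc.THRESH_OTSU);

        Imgcodecs.imwrite("green_mask.png", th);
    }
}
For yellow, the same idea applies to the B-channel with the plain (non-inverted) Otsu threshold.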
So I've got a question involving linear gradients in Java, using multiple colors. I am looking to get an RGB color at any point along a gradient. Creating the gradient and painting it is easy, and I can get the fractions and colors for those colors I set.
The issue I have is that I want to get an RGB color at any point along the gradient.
To break it down, an example application would be the creation and display of a gradient in some JPanel with a size of 255 (maxSize = 255, see below). Depending on the size of that JPanel (maxSize), the interpolation would be different (a larger maxSize would result in more interpolated values).
I would like to be able to grab the RGB value at any location along the gradient.
You could almost equate it to being able to do the following...
grab the RGB value based on the location in the gradient
RGB_Values = p.getColorByGradientLocation(float locationInGradient);
or
grab the RGB value based on a specific value somewhere between Point2D start and Point2D end
RGB_Values = p.getColorByValue(float value);
e.g. the gradient setup code:
Point2D start = new Point2D.Float(0, 0);
Point2D end = new Point2D.Float(0, maxSize);
Color[] colors = {n number of colors};
dist[i] = (float) i / (float) colors.length; // equally distributes the colors
p = new LinearGradientPaint(start, end, dist, colors, CycleMethod.NO_CYCLE);
Many Thanks
Thanks to Nolo for the suggestion that helped me figure out a method to do this. This is still in progress and I may find a better way to do it, but for now this works.
You need to paint the LinearGradientPaint to a panel, then paint the panel to an image (without displaying it).
BufferedImage bi = new BufferedImage(xSize, ySize, BufferedImage.TYPE_INT_RGB);
Graphics2D g = bi.createGraphics();
panel.print(g);
You need to set both the image and the panel to a width of 1, and the height to match the gradient end (the width of 1 is for speed).
To read the gradient's colors you can then grab just one pixel from each row, using image.getRGB(x, y) to get the pixel value and bit shifts to extract the components, e.g.:
int rgb = bi.getRGB(0, i);
int r = (rgb >> 16) & 0xFF;
int green = (rgb >> 8) & 0xFF;
int b = rgb & 0xFF;
Color newC = new Color(r, green, b);
If you loop through the image height using the above, you can get all the color values for a gradient you create.
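Putting the fragments together, here is a minimal self-contained sketch; it paints the LinearGradientPaint straight into a 1-pixel-wide BufferedImage (skipping the JPanel, which is assumed to be equivalent here) and then samples one pixel per row:
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.LinearGradientPaint;
import java.awt.MultipleGradientPaint.CycleMethod;
import java.awt.geom.Point2D;
import java.awt.image.BufferedImage;

public class GradientSampler {
    public static void main(String[] args) {
        int maxSize = 255;

        // 1-pixel-wide image that will hold the rendered gradient
        BufferedImage bi = new BufferedImage(1, maxSize, BufferedImage.TYPE_INT_RGB);
        Graphics2D g2 = bi.createGraphics();

        Point2D start = new Point2D.Float(0, 0);
        Point2D end = new Point2D.Float(0, maxSize);
        Color[] colors = {Color.RED, Color.GREEN, Color.BLUE};     // any number of colors
        float[] dist = new float[colors.length];
        for (int i = 0; i < colors.length; i++) {
            dist[i] = (float) i / (float) (colors.length - 1);     // stops spread from 0.0 to 1.0
        }
        LinearGradientPaint p = new LinearGradientPaint(start, end, dist, colors, CycleMethod.NO_CYCLE);

        g2.setPaint(p);
        g2.fillRect(0, 0, 1, maxSize);
        g2.dispose();

        // one pixel per row gives the color at each position along the gradient
        for (int i = 0; i < maxSize; i++) {
            int rgb = bi.getRGB(0, i);
            int r = (rgb >> 16) & 0xFF;
            int green = (rgb >> 8) & 0xFF;
            int b = rgb & 0xFF;
            Color sampled = new Color(r, green, b);
            System.out.println(i + " -> " + sampled);
        }
    }
}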
:-)
I have some png files that I am applying a color to. The color changes depending on a user selection. I change the color via 3 RGB values set from another method. The png files are a random shape with full transparency outside the shape. I don't want to modify the transparency, only the RGB value. Currently, I'm setting the RGB values pixel by pixel (see code below).
I've come to realize this is incredibly slow and possibly just not efficient enough to do in an application. Is there a better way I could do this?
Here is what I am currently doing. You can see that the pixel array is enormous for an image that takes up a decent part of the screen:
public void foo(Component component, ComponentColor compColor, int userColor) {
int h = component.getImages().getHeight();
int w = component.getImages().getWidth();
mBitmap = component.getImages().createScaledBitmap(component.getImages(), w, h, true);
int[] pixels = new int[h * w];
//Get all the pixels from the image
mBitmap[index].getPixels(pixels, 0, w, 0, 0, w, h);
//Modify the pixel array to the color the user selected
pixels = changeColor(compColor, pixels);
//Set the image to use the new pixel array
mBitmap[index].setPixels(pixels, 0, w, 0, 0, w, h);
}
public int[] changeColor(ComponentColor compColor, int[] pixels) {
int red = compColor.getRed();
int green = compColor.getGreen();
int blue = compColor.getBlue();
int alpha;
for (int i=0; i < pixels.length; i++) {
alpha = Color.alpha(pixels[i]);
if (alpha != 0) {
pixels[i] = Color.argb(alpha, red, green, blue);
}
}
return pixels;
}
Have you looked at the functions available in Bitmap? Something like extractAlpha sounds like it might be useful. You can also look at the way functions like that are implemented in Android to see how you could adapt it to your particular case, if it doesn't exactly meet your needs.
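To make the extractAlpha idea concrete, here is a hedged sketch of the usual recolor-by-mask pattern; src is assumed to be the ARGB_8888 bitmap loaded from the PNG, and red/green/blue are the user-selected components:
// extract the shape's alpha mask and redraw it with the user's color
Bitmap alphaMask = src.extractAlpha();
Bitmap tinted = Bitmap.createBitmap(src.getWidth(), src.getHeight(), Bitmap.Config.ARGB_8888);
Canvas canvas = new Canvas(tinted);
Paint paint = new Paint();
paint.setColor(Color.rgb(red, green, blue));   // color comes from the paint
canvas.drawBitmap(alphaMask, 0, 0, paint);     // alpha comes from the mask
This keeps the per-pixel transparency (including anti-aliased edges) without touching individual pixels in Java code.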
The answer that worked for me was a write-up Square did here: Transparent jpegs
They provide a faster code snippet for doing this exact thing. I tried extractAlpha and it didn't work for me, but Square's solution did. Just modify their solution so that it changes the color bits instead of the alpha bits.
i.e.
pixels[x] = (pixels[x] & 0xFF000000) | (color & 0x00FFFFFF);
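Applied to the changeColor loop from the question, that looks roughly like this (a sketch; color is assumed to be the user's color packed into an int, e.g. Color.rgb(red, green, blue)):
public int[] changeColor(int color, int[] pixels) {
    for (int x = 0; x < pixels.length; x++) {
        // keep the original alpha byte, replace only the RGB bytes;
        // fully transparent pixels keep alpha 0, so recoloring them is harmless
        pixels[x] = (pixels[x] & 0xFF000000) | (color & 0x00FFFFFF);
    }
    return pixels;
}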