I am implementing an algorithm in Java which selects a portion of an image as a marker.
My problems are:
1) After selecting the marker area, how do I get the mean value of the marker color in RGB, given that the pixels differ slightly in color?
2) How can I find the marker value, meaning the threshold value for the color, based on the previous marker selection?
Please provide an algorithm and, if possible, an implementation in Java.
Thanks in advance.
I'm not sure what you tried, and where you're stuck, but here goes:
To get a mean color, your best bet is to find the median value for the three channels (R, G, and B) separately and use that as the mean. Due to specific qualities of the RGB color space, the mean is very vulnerable to outliers; the median much less so.
I assume you want to select all colors that are similar to your marker color. To do that, you could select all pixels whose color is within a small Euclidean distance of the median RGB color computed above.
If this does not work for you, you could look into alternative color spaces, but I think the above should be enough.
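A minimal sketch of that approach (class and method names are my own; pixels are assumed to be packed 0xRRGGBB ints, e.g. from BufferedImage.getRGB):

```java
import java.util.Arrays;

public class MarkerColor {
    // Median of each RGB channel over the selected marker pixels.
    public static int[] medianColor(int[] pixels) {
        int n = pixels.length;
        int[] r = new int[n], g = new int[n], b = new int[n];
        for (int i = 0; i < n; i++) {
            r[i] = (pixels[i] >> 16) & 0xFF;
            g[i] = (pixels[i] >> 8) & 0xFF;
            b[i] = pixels[i] & 0xFF;
        }
        Arrays.sort(r); Arrays.sort(g); Arrays.sort(b);
        return new int[] { r[n / 2], g[n / 2], b[n / 2] };
    }

    // True if a pixel is within `threshold` Euclidean distance of the marker color.
    public static boolean matches(int pixel, int[] marker, double threshold) {
        int dr = ((pixel >> 16) & 0xFF) - marker[0];
        int dg = ((pixel >> 8) & 0xFF) - marker[1];
        int db = (pixel & 0xFF) - marker[2];
        return Math.sqrt(dr * dr + dg * dg + db * db) <= threshold;
    }
}
```

The threshold is the part you'd tune against your own marker selections.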
Related
I've read a few topics here, and according to the answers there is no exact solution. Anyway, let's assume we have an RGB color picker (0-255, 0-255, 0-255) and two colors, one original unmixed color and another mixed one. How do I subtract exactly to find which color was added? Does it actually work as
z - y = x ?
Are there any established formulas?
Another question: if I apply a CIELAB transformation to get hue, saturation, and brightness, how do I then subtract colors in that space?
You mean additive colour mixing?
In this case, just the light is added, so it is simply addition and subtraction of light intensities, and RGB is fine. But you need a linear colour space, so you need to "unapply" gamma, add or subtract, and then apply gamma again.
See https://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation for the formulas to apply and unapply gamma: C is the channel (R, G, or B), C_linear is the value in linear space (where you can add and subtract intensities), and C_srgb is the channel value as we use on computers. Note: you should divide and multiply by 255 to normalize values to the range 0 to 1.
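A sketch of those formulas in Java (names are mine; the constants come from the sRGB specification linked above):

```java
public class SrgbMix {
    // sRGB -> linear ("unapply" gamma); c is a channel value normalized to [0, 1].
    public static double toLinear(double c) {
        return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    }

    // linear -> sRGB (apply gamma again).
    public static double toSrgb(double c) {
        return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
    }

    // Additive mix of two 8-bit channel values: normalize, linearize,
    // add intensities, clip, and re-encode to sRGB.
    public static int addChannels(int a, int b) {
        double sum = toLinear(a / 255.0) + toLinear(b / 255.0);
        return (int) Math.round(255.0 * toSrgb(Math.min(1.0, sum)));
    }
}
```

Subtraction works the same way, just with a difference of the linear intensities clamped at 0.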
For normal colour mixing (paints, inks, dyes, etc.) this is complex; in such cases the CIE xyz space is preferred. In that space, the result of mixing lies on the line between the two original chromaticities. Unfortunately, the model does not tell you where on that line the result will be. Usually more data about each colour is needed (often, instead of the RGB triplet, a vector of about 60 values, one every 5 nm).
I've been working on an Image Processing project for a while, which consists of a way to measure and classify some types of sugar on the production line by their color. Until now, my biggest concern has been finding and implementing the appropriate mathematical techniques to calculate the distance between two colors (a reference color and the color being analysed), and then turning this value into something more meaningful, such as an industry-standard measure.
Given that, I'm trying to figure out how to reliably extract the average color value from an image, since the frame captured by a video camera may contain noise or dirt in the sugar (most likely almost-black dots).
Language: Java with OpenCV library.
Current solution: before taking the average image value, I'm applying the fastNlMeansDenoisingColored function provided by OpenCV. It removes some white dots, at the cost of blurring finer details. I couldn't remove the black dots with it (not shown in the following images).
From there, I'm using the org.opencv.core.Core.mean function to compute the mean value of the array elements independently for each channel, so that I have a scalar value to use in my calculations.
I tried some image thresholding filters to get rid of the black and white dots and then calculated the mean with a mask, which kind of works too. I also tried to find a weighted-average function that could return scalar values, but without success.
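In plain Java terms (a simplified sketch without OpenCV, corresponding to Core.mean with a mask; the names and the brightness band are illustrative values to tune), the masked mean amounts to:

```java
public class MaskedMean {
    // Mean RGB over pixels whose brightness lies in [lo, hi]; near-black dirt
    // and near-white specks fall outside the band and are excluded.
    // pixels are packed 0xRRGGBB ints.
    public static double[] maskedMean(int[] pixels, int lo, int hi) {
        long sr = 0, sg = 0, sb = 0;
        int count = 0;
        for (int p : pixels) {
            int r = (p >> 16) & 0xFF, g = (p >> 8) & 0xFF, b = p & 0xFF;
            int brightness = (r + g + b) / 3;
            if (brightness < lo || brightness > hi) continue; // masked out
            sr += r; sg += g; sb += b; count++;
        }
        if (count == 0) return new double[] { 0, 0, 0 };
        return new double[] { (double) sr / count,
                              (double) sg / count,
                              (double) sb / count };
    }
}
```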
I don't know if these pre-processing techniques are robust enough for such an application, since the mean values can vary easily. Am I on the right track? Would you suggest a better way to get a reliable value that represents my sugar's color?
I have written my own function plotter in Java, and it works quite well.
All you have to do is iterate over the width (in pixels) of the panel and calculate the y-value, then plot the points onto the screen as a polyline, and that's it.
But here comes my problem: there is a scale factor between the number of pixels and the values I want to plot.
For example, say I'm at the 304th iteration (iterating over the width of the plot panel). I calculate the corresponding x-value for this pixel position (304) by the rule of three, which gives me 1.45436. Then I calculate the sine of this value, which is a transcendental number. Then I use the rule of three again to determine which y-pixel this value corresponds to. Here I have to round, because a pixel coordinate is an integer, and that is where my data loss occurs. This loss may give me the following result:
This doesn't look very nice. If I play around with resizing the window, I sometimes get a smooth result.
How can I fix this problem? I've never actually seen such plots in any other function plotter.
If you do this in Java, you might consider composing your data points into a Path2D. That has floating-point coordinates, and the drawing engine will take care of smoothing things out. You might have to disable stroke control, though.
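A minimal sketch of that idea (names are mine; the mapping assumes y values in [-1, 1], as for sin):

```java
import java.awt.geom.Path2D;

public class SmoothPlot {
    // Build the polyline of sin(x) in *double* coordinates. No rounding to
    // integer pixels happens here; the renderer antialiases the subpixel path.
    public static Path2D.Double sinePath(int widthPx, int heightPx,
                                         double xMin, double xMax) {
        Path2D.Double path = new Path2D.Double();
        for (int px = 0; px < widthPx; px++) {
            // rule of three for x, kept as a double
            double x = xMin + (xMax - xMin) * px / (widthPx - 1);
            double y = Math.sin(x);
            // map y in [-1, 1] to pixel coordinates (0 at the top), still double
            double py = (1.0 - y) * 0.5 * (heightPx - 1);
            if (px == 0) path.moveTo(px, py); else path.lineTo(px, py);
        }
        return path;
    }
}
```

When drawing, pass the path to Graphics2D.draw after enabling antialiasing and pure stroke control, e.g. setRenderingHint(RenderingHints.KEY_ANTIALIASING, RenderingHints.VALUE_ANTIALIAS_ON) and setRenderingHint(RenderingHints.KEY_STROKE_CONTROL, RenderingHints.VALUE_STROKE_PURE).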
I need to create a heatmap for android google maps. I have geolocation and points that have negative and positive weight attributed to them that I would like to visually represent. Unlike the majority of heatmaps, I want these positive and negative weights to destructively interfere; that is, when two points are close to each other and one is positive and the other is negative, the overlap of them destructively interferes, effectively not rendering areas that cancel out completely.
I plan on using the Android Google Maps TileOverlay/TileProvider classes, which have the job of creating/rendering tiles based on a given location and zoom level. (I don't have an issue with this part.)
How should I go about rendering these tiles? I plan on using Java's Graphics class, but the best approach I can think of is going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel. This seems very inefficient, however, and I was looking for suggestions on a better approach.
Edit: I've considered everything from using a non-Android Google Map inside a WebView, to using a TileOverlay, to using a GroundOverlay. What I am now considering is keeping a large two-dimensional array of "squares." Each square would have a longitude, a latitude, and a total +/- weight. When a new data point is added, instead of rendering it exactly where it is, it is added to the "square" it falls in: the point's weight is added to the square's total, and I then use the GoogleMap Polygon object to render the square on the map. The ratio of + points to - points determines the rendered color, with a ratio close to 1:1 being clear, >1 being blue (cold point), and <1 being red (hot point).
Edit: a.k.a. clustering the data into small regional groups
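The bucketing described in the edit can be sketched like this (plain Java, names mine; the cell size in degrees is an illustrative parameter):

```java
import java.util.HashMap;
import java.util.Map;

public class GridCluster {
    final double cellSizeDeg;
    // cell key -> accumulated signed weight; + and - points cancel here
    final Map<Long, Double> cells = new HashMap<>();

    GridCluster(double cellSizeDeg) { this.cellSizeDeg = cellSizeDeg; }

    // Pack the (row, col) cell indices into one map key.
    static long key(int row, int col) {
        return ((long) row << 32) | (col & 0xFFFFFFFFL);
    }

    // Add a data point: snap it to its grid cell and accumulate its weight.
    void add(double lat, double lng, double weight) {
        int row = (int) Math.floor(lat / cellSizeDeg);
        int col = (int) Math.floor(lng / cellSizeDeg);
        cells.merge(key(row, col), weight, Double::sum);
    }
}
```

Each non-empty cell would then become one Polygon, colored by the sign and magnitude of its total.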
I suggest trying exactly what you proposed:
"going through each pixel, calculating what color it should be based on the surrounding data points, and rendering that pixel."
Even if it is slow, it will work. There are not too many tiles on the screen, there are not too many pixels in each tile, and all of this is done on a background thread.
All of this is still followed by translating the Bitmap into a byte[]. The byte[] is a representation of a PNG or JPG file, so it's not a simple pixel dump of the Bitmap. This last operation takes some time too, and may well require more processing power than your whole algorithm.
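The per-pixel computation could look roughly like this (names mine; a Gaussian falloff is one assumption for "based on the surrounding data points", and the red/blue mapping follows your description):

```java
public class TileShader {
    // Color for one tile pixel: sum the Gaussian-weighted signed influences of
    // nearby points. Opposite signs cancel, leaving the pixel transparent.
    // points[i] = { x, y, weight } in the tile's pixel coordinate system.
    public static int pixelColor(double[][] points, double px, double py,
                                 double radius) {
        double sum = 0;
        for (double[] p : points) {
            double dx = px - p[0], dy = py - p[1];
            sum += p[2] * Math.exp(-(dx * dx + dy * dy) / (2 * radius * radius));
        }
        // magnitude -> opacity, sign -> hue; |sum| near 0 renders transparent
        int alpha = (int) Math.min(255, Math.round(Math.abs(sum) * 255));
        int rgb = sum >= 0 ? 0xFF0000 : 0x0000FF; // red = hot, blue = cold
        return (alpha << 24) | rgb;               // packed ARGB
    }
}
```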
Edit (moved from comment):
What you describe in your edit sounds like simple clustering on LatLng. I can't say whether it's a better or worse idea, but it's worth a try.
I need to detect all the red pixels in an image using Java. What's the best way to do this?
Simply assuming a pixel is red when its RGB red value is > 200 isn't good enough (see this table).
So is there a better way to do this? Or is there some red-color RGB algorithm?
Take a look at YCrCb color space.
Simple algorithm: convert your RGB image to YCrCb, extract the Cr (red-difference) channel, and apply a threshold.
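A plain-Java sketch of that algorithm for a single pixel (names mine; the conversion uses the standard JPEG YCbCr coefficients, and the threshold is a value to tune):

```java
public class RedCr {
    // JPEG-style Cr (red-difference) channel, range 0-255; any gray maps to 128,
    // pure red to 255, pure blue and green well below 128.
    public static int cr(int r, int g, int b) {
        int v = (int) Math.round(128 + 0.5 * r - 0.418688 * g - 0.081312 * b);
        return Math.max(0, Math.min(255, v));
    }

    // Threshold on Cr: the higher the threshold, the stricter the "red".
    public static boolean isRed(int r, int g, int b, int threshold) {
        return cr(r, g, b) > threshold;
    }
}
```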
Convert RGB to HSL, and threshold the hue (H) component.
As you suggested, you probably want to do the comparison in HSB space. You'll want to define an appropriate range for each of the three values based on your expectations.
You can use Color.RGBtoHSB to get the values from a given color.
http://docs.oracle.com/javase/7/docs/api/java/awt/Color.html#RGBtoHSB%28int%2C%20int%2C%20int%2C%20float%5B%5D%29
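For example (a sketch; the hue, saturation, and brightness cutoffs are assumptions you would tune for your images):

```java
import java.awt.Color;

public class RedDetect {
    // Red hues sit near 0 (and wrap around near 1 on the hue circle).
    public static boolean isRed(int r, int g, int b) {
        float[] hsb = Color.RGBtoHSB(r, g, b, null);
        float hue = hsb[0], sat = hsb[1], bri = hsb[2];
        boolean redHue = hue < 0.05f || hue > 0.95f; // within ~18 degrees of pure red
        return redHue && sat > 0.4f && bri > 0.2f;   // skip washed-out / dark pixels
    }
}
```

The saturation and brightness floors are what keep grays, whites, and near-black pixels from being counted as red even when their hue happens to be 0.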