I've seen several questions about skin colour already, but I haven't found an exact skin colour range in RGBA. Can someone tell me the min-max RGBA colours, for example for a middle-aged European man?
My code is like this:
// Here should be the skin min-max RGBA range
static CvScalar min = cvScalar(0, 0, 130, 0);    // BGR-A
static CvScalar max = cvScalar(140, 110, 255, 0); // BGR-A
public static void main(String[] args) {
    // read image
    IplImage orgImg = cvLoadImage("colordetectimage.jpg");
    // create binary image of original size
    IplImage imgThreshold = cvCreateImage(cvGetSize(orgImg), 8, 1);
    // apply thresholding
    cvInRangeS(orgImg, min, max, imgThreshold);
    // smooth filter - median
    cvSmooth(imgThreshold, imgThreshold, CV_MEDIAN, 13);
    // save
    cvSaveImage("threshold.jpg", imgThreshold);
}
So I need to just specify rgba values here.
There is no real answer to this. Skin tone is highly variable (even for middle-aged Caucasian men), and when you then throw in lighting effects (overexposure, underexposure, low light, colored incident light), you can't distinguish skin from non-skin based solely on the RGB values of pixels.
My advice would be to pick a sensible first approximation, and then tweak the parameters. And don't expect to be able to accurately detect skin without taking the context into account ... somehow.
You could take your first approximation by looking at one of the color charts that you get from Googling "skin tone color chart rgb". Take your pick ...
@Stephen C what do you mean by "pick a sensible first approximation, and then tweak the parameters"? Just picking one color from a palette, and then what?
(Ignoring the issue of lighting effects.) Since colors in RGB define a 3-D space, I propose that there is a region of that space where the colors are "skin" tones. The problem is to get a reasonable approximation of that region, you could start with a color chart of "skin" tones, plot them in 3-D and figure out a simple 3-D hull that encloses them ... and doesn't enclose colors that don't look right to your eye. Then try your "skin detection" based on the hull and adjust accordingly.
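A minimal sketch of that idea, using an axis-aligned box in RGB space as the simplest possible "hull". The bounds below are illustrative guesses, not measured skin tones - the whole point is that you would tune them against a color chart:

```java
public class SkinBox {
    // Illustrative bounds only - tune them against a real skin-tone chart.
    static final int R_MIN = 130, R_MAX = 255;
    static final int G_MIN = 60,  G_MAX = 180;
    static final int B_MIN = 40,  B_MAX = 140;

    // True if the pixel falls inside the box approximating the "skin" region.
    static boolean isSkin(int r, int g, int b) {
        return r >= R_MIN && r <= R_MAX
            && g >= G_MIN && g <= G_MAX
            && b >= B_MIN && b <= B_MAX;
    }
}
```

A real hull would be a more complex 3-D shape than a box, but a box is the easiest thing to start tweaking.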
Please note, I am a complete beginner in computer vision and OpenCV(Java).
My objective is to identify parking signs, and to draw bounding boxes around them. My problem is that the four signs from the top (with red borders) were not identified (see last image). I am also noticing that the Canny edge detection does not capture the edges of these four signs (see second image). I have tried with other images, and got the same results. My approach is as follows:
Load the image and convert it to gray scale
Pre-process the image by applying bilateralFilter and Gaussian blur
Execute Canny edge detection
Find all contours
Calculate the perimeter with arcLength and approximate the contour with approxPolyDP
If the approximated figure has 4 points, assume it is a rectangle and add the contour
Finally, draw the contours that have exactly 4 points.
Mat filtered = new Mat();
Mat edges = new Mat(src.size(), CvType.CV_8UC1);
Imgproc.cvtColor(src, edges, Imgproc.COLOR_RGB2GRAY);
Imgproc.bilateralFilter(edges, filtered, 11, 17, 17);
org.opencv.core.Size s = new Size(5, 5);
Imgproc.GaussianBlur(filtered, filtered, s, 0);
Imgproc.Canny(filtered, filtered, 170, 200);

List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
Imgproc.findContours(filtered, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);

List<MatOfPoint> rectangleContours = new ArrayList<MatOfPoint>();
for (MatOfPoint contour : contours) {
    MatOfPoint2f dst = new MatOfPoint2f();
    contour.convertTo(dst, CvType.CV_32F);
    double perimeter = Imgproc.arcLength(dst, true);
    double approximationAccuracy = 0.02 * perimeter;
    MatOfPoint2f approx = new MatOfPoint2f();
    Imgproc.approxPolyDP(dst, approx, approximationAccuracy, true);
    if (approx.total() == 4) {
        rectangleContours.add(contour);
        Toast.makeText(reactContext.getApplicationContext(), "Rectangle detected" + approx.total(), Toast.LENGTH_SHORT).show();
    }
}
Imgproc.drawContours(src, rectangleContours, -1, new Scalar(0, 255, 0), 5);
Very happy to get advice on how I could resolve this issue, even if it implies changing my strategy.
What about starting with OCR, Tesseract, in order to recognize the big "P" and other parking-related text patterns?
(The use of Toast suggests Android: How can I use Tesseract in Android?
General Tesseract for Java: https://www.geeksforgeeks.org/tesseract-ocr-with-java-with-examples/ )
Another example, in Python, but see the preprocessing and other tricks and ideas for making the letters recognizable when the image has gradients, lower contrast, small fonts etc.: How to obtain the best result from pytesseract?
Also, there could be filtering by color, since the colors of the signs are known. The conversion to grayscale removes that valuable information, so finding the edges is OK, but the colors can still be used. E.g. split the image into its B, G, R channels and use each channel as a grayscale image, possibly boosting it. The red and blue borders would stand out.
It seems the contrast around the red borders is too low, and the blue signs are brighter compared to the black contour. If not splitting, some of the color channels could be amplified anyway before converting to grayscale, like the red one.
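The channel-boost idea can be sketched without OpenCV at all (in OpenCV you would use Core.split and Mat arithmetic instead). Given a packed ARGB pixel, as returned by e.g. BufferedImage.getRGB, extract one channel and amplify it before thresholding; the gain values here are arbitrary illustrations:

```java
public class ChannelBoost {
    // Extract the red channel of a packed ARGB pixel and amplify it,
    // clamping to the valid 0..255 range.
    static int boostedRed(int argb, double gain) {
        int r = (argb >> 16) & 0xFF;
        int boosted = (int) Math.round(r * gain);
        return Math.min(255, boosted);
    }
}
```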
You could also search for big yellow/blue regions with low contrast, then check for text found inside them ("P" etc.). Tesseract has a function returning the boxes of the text that was found.
Also, once you find a sign somewhere, or a bar of signs and their directions, you could search around there, vertically/horizontally.
You may try HoughLines as well; that may find the black border around the signs.
Calculate the perimeter with arcLength and approximate the contour with approxPolyDP
If the approximated figure has 4 points, assume it is a rectangle and add the contour
IMO finding exactly 4 points (even after simplification of the polygon) is hard and may not be enough evidence on its own; there are also rounded corners etc. if contours are compared directly.
The angles between the vertices and the distances matter - are the lines parallel (within some precision), etc.
The process could be iterative: gradually reduce the polygon detail, checking the area and perimeter, until the number of vertices reaches 4 (or about that). If the area and perimeter don't change much after the polygon approximation (simplifying the rounded corners etc.) while the number of points in the contour gets reduced, it is probably a good fit (the acceptable ratio has to be found experimentally). I'd also try a comparison to the bounding box and the convex hull measurements etc.
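A hedged sketch of the "angles and distances matter" check: given four approximated corner points in contour order, test whether opposite sides are roughly parallel and of similar length. The tolerance values are arbitrary placeholders to be tuned:

```java
public class RectCheck {
    // points: {x0,y0, x1,y1, x2,y2, x3,y3} in contour order.
    // Returns true if opposite sides are roughly parallel and equally long.
    static boolean looksLikeRectangle(double[] p, double angleTol, double lenTol) {
        double[] vx = new double[4], vy = new double[4], len = new double[4];
        for (int i = 0; i < 4; i++) {
            int j = (i + 1) % 4;
            vx[i] = p[2 * j] - p[2 * i];
            vy[i] = p[2 * j + 1] - p[2 * i + 1];
            len[i] = Math.hypot(vx[i], vy[i]);
        }
        for (int i = 0; i < 2; i++) {
            int j = i + 2; // opposite side
            // normalized cross product ~ sin(angle between the two sides)
            double cross = Math.abs(vx[i] * vy[j] - vy[i] * vx[j]) / (len[i] * len[j]);
            if (cross > angleTol) return false;
            if (Math.abs(len[i] - len[j]) / Math.max(len[i], len[j]) > lenTol) return false;
        }
        return true;
    }
}
```

This is deliberately tolerant: a slightly skewed quadrilateral (a rectangle seen in perspective) still passes if the tolerances allow it.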
If you only need to detect the parking signs, then treat this problem as a classic object detection problem (just like face detection). For the best results, you will need to use deep learning based convolutional neural network models.
To start with, you can train the YOLO model, which will give you much better results than anything you have tried with OpenCV. You need at least 500 images, and you need to annotate them. This tutorial is a kick-start tutorial on YOLO. Give it a try.
Like YOLO, there are many other models, and all of them can be trained using a similar process. So if you want to deploy your model on Android, I recommend choosing a TensorFlow-based model. Train it on your PC and integrate the trained, serialized model into your app.
I am trying to program a visualisation of the Mandelbrot set in Java, and there are a couple of things that I am struggling to program. I realize that questions around this topic have been asked a lot and there is a lot of documentation online, but a lot of it seems very complicated and I am relatively new to programming.
The first issue
The first issue I have is to do with zooming in on the fractal. My goal is to make an "infinite" zoom on the fractal (of course not infinite, as far as a regular computer allows it regarding calculation time and precision). The approach I am currently going for is the following on a timer:
Draw the set using some number of iterations on the range (-2, 2) on the real axis and (-2, 2) on the imaginary axis.
Change those ranges to zoom in.
Redraw that section of the set with the number of iterations.
It's the second step that I struggle with. This is my current code:
for (int Py = beginY; Py < endY; Py++) {
    for (int Px = beginX; Px < endX; Px++) {
        double x0 = map(Px, 0, height, -2, 2);
        double y0 = map(Py, 0, width, -2, 2);
Px and Py are the coordinates of the pixels in the image. The image is 1000x1000. The map function takes a number, in this case Px or Py, with a range of (0, 1000) and divides it evenly over the range (-2, 2), so it returns the corresponding value in that range.
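The map function described is presumably the usual linear rescale; for reference, a sketch of what it would look like (the signature is inferred from how the question calls it):

```java
public class Mapper {
    // Linearly rescales 'value' from the range [inMin, inMax]
    // to the range [outMin, outMax].
    static double map(double value, double inMin, double inMax,
                      double outMin, double outMax) {
        return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
    }
}
```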
I think that in order to zoom in, I'll have to change the -2 and 2 values in some way in the timer, but whatever I try, it doesn't seem to work. The zoom always ends up slowing down after a while, or it ends up zooming in on a part that is inside the set, so not on the borders. I tried multiplying the values by some scale factor every timer tick, but that doesn't really produce the result I was looking for.
Now I have two questions about this issue.
Is this the right approach to visualizing the set and zooming in (draw, change range, redraw)?
If it is, how do I zoom in properly on an area that is interesting and that will keep zooming in properly even after running for a minute?
The second issue
Of course when visualizing something, you need to get some actual visual thing. In this case I want to color the set in a way similar to what you see here: (https://upload.wikimedia.org/wikipedia/commons/f/fc/Mandel_zoom_08_satellite_antenna.jpg).
My guess is that you have to use the number of iterations a pixel went through before breaking out of the loop to give it some color value. However, I only really know how to do this with a black and white color scheme. I tried making a color array that holds the same number of different gray colors as the maximum number of iterations, starting from black and ending in white. Here is my code:
Color[] colors = new Color[maxIterations + 2];
for (int i = 0; i < colors.length; i++) {
    colors[i] = new Color((int) map(i, 0, maxIterations + 2, 0, 255),
                          (int) map(i, 0, maxIterations + 2, 0, 255),
                          (int) map(i, 0, maxIterations + 2, 0, 255));
}
I then just filled in the amount of iterations in the array and assigned that color to the pixel. I have two questions about this:
Will this also work as we zoom into the fractal in the previously described manner?
How can I add my own color scheme in this, like in the picture? I've read some things about "linear interpolation" but I don't really understand what it is and in what way it can help me.
It sounds like you've made a good start.
Re the first issue: I believe there are ways to automatically choose an "interesting" portion of the set to zoom in on, but I don't know what they are. And I'm quite sure it involves more than just applying some linear function to your current bounding rectangle, which is what it sounds like you're doing.
So you could try to find out what these methods are (might get mathematically complicated), but if you're new to programming, you'll probably find it easier to let the user choose where to zoom. This is also more fun in the beginning, since you can run your program repeatedly and explore a new part of the set each time.
A simple way to do this is to let the user draw a rectangle over the image, and use your map function to convert the pixel coordinates of the drawn rectangle to the new real and imaginary coordinates of your zoom area.
You could also combine both approaches: once you've found somewhere you find interesting by manually selecting the zoom area, you can set this as your "final destination", and have the code gradually and smoothly zoom into it, to create a nice movie.
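That gradual zoom can be sketched as shrinking the current view bounds toward the chosen destination by a fixed fraction each timer tick. The 5% rate and the field names are illustrative, and only one axis is shown; the imaginary axis works identically:

```java
public class Zoom {
    // Current and destination view bounds on one axis.
    double min, max, destMin, destMax;

    Zoom(double min, double max, double destMin, double destMax) {
        this.min = min; this.max = max;
        this.destMin = destMin; this.destMax = destMax;
    }

    // Move 5% of the remaining distance toward the destination each tick.
    // Because the step is proportional to the remaining distance, the zoom
    // looks smooth and keeps a visually constant speed instead of stalling.
    void tick() {
        double rate = 0.05;
        min += (destMin - min) * rate;
        max += (destMax - max) * rate;
    }
}
```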
It will always get gradually slower though, as you start using ever more precise coordinates, until you reach the limits of precision with double and it becomes a pixellated mess. From there, if you want to zoom further, you'll have to look into arbitrary-precision arithmetic with BigDecimal - and it will continue to get slower and slower.
Re the second issue: starting off by calculating a value of numIterations / maxIterations (i.e. between 0 and 1) for each pixel is the right idea (I think this is basically what you're doing).
From there, there are all sorts of ways to convert this value to a colour, it's time to get creative!
A simple one is to have an array of a few very different colours. E.g. if you had white (0.0), red (0.25), green (0.5), blue (0.75), black (1.0), then if your calculated number was exactly one of the ones listed, you'd use the corresponding colour. If it's somewhere between, you blend the colours, e.g. for 0.3 you'd take:
((0.5-0.3)*red + (0.3-0.25)*green) / (0.5 - 0.25)
= 0.8*red + 0.2*green
Taking a weighted average of two colours is something I'll leave as an exercise ;)
(hint: take separate averages of the r, g, and b values. Playing with the alpha values could maybe also work).
Another one, if you want to get more mathsy, is to take an equation for a spiral and use it to calculate a point on a plane in HSB colour space (you can keep the brightness at some fixed value, say 1). In fact, any curve in 2D or 3D which you know how to write as an equation of one real variable can be used this way to give you smoothly changing colours, if you interpret the coordinates as points in some colour space.
Hope that's enough to keep you going! Let me know if it's not clear.
I'm trying to find a way to identify an archery target and all of its rings on a photo which might be made of different perspectives:
My goal is to identify the target and later on also where the arrows hit the target to automatically count their score. Presumptions are as follows:
The camera's position is not fixed and might change
The archery target might also move or rotate slightly
The target might be of different size and have different amount of circles
There might be many holes (sometimes big scratches) in the target
I have already tried OpenCV to find contours, but even with preprocessing (grayscale -> blur (-> threshold) -> edge detection) I still find a few hundred contours which are all distracted by the arrows or other obstacles (holes) on the target, so it is impossible to find a nice circular line. Using Hough to find circles doesn't work either, as it gives me weird results: Hough will only find perfect circles, not ellipses.
With preprocessing the image this is my best result so far:
I was thinking about ellipse and circle fitting, but as I don't know the radius, position and pose of the target, this might be a very CPU-consuming task. Another thought was about recognition from a template, but the position and rotation of the target change often.
Now I have the idea to follow every line on the image to check if it is a curve and then guess which curves belong together to form a circle/ellipse (ellipse because of the perspective). The problem is that the lines might be intersected by arrows or holes in a short distance so the line would be too short to check if it is a curve. With the smaller circles on the target the chance is high that it isn't recognised at all. Also, as you can see, circle 8, 7 and 6 have no clear line on the left side.
I think it is not necessary to do perspective correction to achieve this task, as long as I can clearly identify all the rings in the target.
I googled a long time and found some theses, which are all not exactly focused on this specific task and also too mathematical for me to understand.
Is it by any chance possible to achieve this task? Could you share with me an idea how to solve this problem? Anything is very appreciated.
I'm doing this in Java, but the programming language is secondary. Please let me know if you need more details.
for starters see
Detecting circles and shots from paper target.
If you are using a standardized target as in the image (btw. I use these same ones for my bow :) ) then do not cut off the color. You can select the regions of blue, red and yellow pixels to ease up the detection. See:
footprint fitting
From that you need to fit the circles. But as you have perspective, the objects are neither circles nor axis-aligned ellipses. You have 2 options:
Perspective correction
Use the bottom-right table rectangle area as a marker (or the whole target). It is a rectangle with a known aspect ratio, so measure it on the image and construct a transformation that changes the image so it becomes a rectangle again. There are tons of resources about this: 3D scene reconstruction, so google/read/implement. The basics are based just on de-skew + scaling.
Approximate circles by ellipses (not axis aligned!)
so fit ellipses to the found edges instead of circles. This will not be as precise, but still close enough. See:
ellipse fitting
[Edit1] sorry did not have time/mood for this for a while
As you were unable to adapt my approach yourself, here it is:
remove noise
you need to recolor your image to remove noise and ease up the rest... I convert it to HSV and detect your 4 colors (circles + paper) by simple thresholding, then recolor the image to 4 colors (circles, paper, background) back in RGB space.
fill the gaps
in some temp image I fill the gaps in the circles created by arrows and stuff. It is simple: just scan pixels from opposite sides of the image (in each line/row) and stop on hitting the selected circle color (you need to go from the outer circles to the inner ones so as not to overwrite the previous ones...). Now just fill the space between these two points with the selected circle color. (I start with paper, then blue, red, and yellow last):
now you can use the linked approach
So find the avg point of each color; that is the approximate circle center. Then do a histogram of the radii and choose the most frequent one. From here just cast lines out of the circle, find where the circle really stops, and compute the ellipse semi-axes from it, also updating the center (that handles the perspective distortions). To visually check, I render a cross and circle for each circle into the image from #1:
As you can see, it is pretty close. If you need an even better match, then cast more lines (not just the 90-degree H,V lines) to obtain more points, and compute the ellipse algebraically or fit it by approximation (second link).
C++ code (for explanations look into first link):
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
// pic2 - temp
DWORD c0;
int x,y,i,j,n,m,r,*hist;
int x0,y0,rx,ry; // ellipse
const int colors[4]=// color sequence from center
{
0x00FFFF00, // RGB yellow
0x00FF0000, // RGB red
0x000080FF, // RGB blue
0x00FFFFFF, // RGB white
};
// init output as source image and resize temp to same size
pic1=pic0;
pic2=pic0; pic2.clear(0);
// recolor image (in HSV space -> RGB) to avoid noise and select target pixels
pic1.rgb2hsv();
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
{
color c;
int h,s,v;
c=pic1.p[y][x];
h=c.db[picture::_h];
s=c.db[picture::_s];
v=c.db[picture::_v];
if (v>100) // bright enough pixels?
{
i=25; // threshold
if (abs(h- 40)+abs(s-225)<i) c.dd=colors[0]; // RGB yellow
else if (abs(h-250)+abs(s-165)<i) c.dd=colors[1]; // RGB red
else if (abs(h-145)+abs(s-215)<i) c.dd=colors[2]; // RGB blue
else if (abs(h-145)+abs(s- 10)<i) c.dd=colors[3]; // RGB white
else c.dd=0x00000000; // RGB black means unselected pixels
} else c.dd=0x00000000; // RGB black
pic1.p[y][x]=c;
}
pic1.save("out0.png");
// fit ellipses:
pic1.bmp->Canvas->Pen->Width=3;
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
m=(pic1.xs+pic1.ys)*2;
hist=new int[m]; if (hist==NULL) return;
for (j=3;j>=0;j--)
{
// select color per pass
c0=colors[j];
// fill the gaps with H,V lines into temp pic2
for (y=0;y<pic1.ys;y++)
{
for (x= 0;(x<pic1.xs)&&(pic1.p[y][x].dd!=c0);x++); x0=x;
for (x=pic1.xs-1;(x> x0)&&(pic1.p[y][x].dd!=c0);x--);
for (;x0<x;x0++) pic2.p[y][x0].dd=c0;
}
for (x=0;x<pic1.xs;x++)
{
for (y= 0;(y<pic1.ys)&&(pic1.p[y][x].dd!=c0);y++); y0=y;
for (y=pic1.ys-1;(y> y0)&&(pic1.p[y][x].dd!=c0);y--);
for (;y0<y;y0++) pic2.p[y0][x].dd=c0;
}
if (j==3) continue; // do not continue for border
// avg point (possible center)
x0=0; y0=0; n=0;
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
if (pic2.p[y][x].dd==c0)
{ x0+=x; y0+=y; n++; }
if (!n) continue; // no points found
x0/=n; y0/=n; // center
// histogram of radius
for (i=0;i<m;i++) hist[i]=0;
n=0;
for (y=0;y<pic2.ys;y++)
for (x=0;x<pic2.xs;x++)
if (pic2.p[y][x].dd==c0)
{
r=sqrt(((x-x0)*(x-x0))+((y-y0)*(y-y0))); n++;
hist[r]++;
}
// select most frequent radius (biggest histogram peak)
for (r=0,i=0;i<m;i++)
if (hist[r]<hist[i])
r=i;
// cast lines from possible center to find edges (and recompute rx,ry)
for (x=x0-r,y=y0;(x>= 0)&&(pic2.p[y][x].dd==c0);x--); rx=x; // scan left
for (x=x0+r,y=y0;(x<pic2.xs)&&(pic2.p[y][x].dd==c0);x++); // scan right
x0=(rx+x)>>1; rx=(x-rx)>>1;
for (x=x0,y=y0-r;(y>= 0)&&(pic2.p[y][x].dd==c0);y--); ry=y; // scan up
for (x=x0,y=y0+r;(y<pic2.ys)&&(pic2.p[y][x].dd==c0);y++); // scan down
y0=(ry+y)>>1; ry=(y-ry)>>1;
i=10;
pic1.bmp->Canvas->MoveTo(x0-i,y0);
pic1.bmp->Canvas->LineTo(x0+i,y0);
pic1.bmp->Canvas->MoveTo(x0,y0-i);
pic1.bmp->Canvas->LineTo(x0,y0+i);
//rx=r; ry=r;
pic1.bmp->Canvas->Ellipse(x0-rx,y0-ry,x0+rx,y0+ry);
}
pic2.save("out1.png");
pic1.save("out2.png");
pic1.bmp->Canvas->Pen->Width=1;
pic1.bmp->Canvas->Brush->Style=bsSolid;
delete[] hist;
I am implementing an algorithm in Java which selects a portion of an image as a marker.
My problem is
1) After selecting the marker area, how do I get the mean value of the marker color in RGB, over the pixels with only a small difference in color?
2) How can I find the marker value, meaning the threshold value for the color, based on the previous marker selection?
Please provide an algorithm and, if possible, an implementation in Java.
Thanks in advance.
I'm not sure what you tried, and where you're stuck, but here goes:
To get a mean color your best bet is to try to find the median value for the three channels (R, G and B) separately and use that as the mean. Due to specific qualities of the RGB color space, the mean is very vulnerable to outliers, the median less so.
I assume you want to select all colors that are similar to your marker color. To do that, you could select all pixels whose color is within a small Euclidean distance of the median RGB color selected above.
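A minimal sketch of both steps: the per-channel median as the representative marker color, then a Euclidean-distance test against it. The threshold value would be chosen by experiment; everything here (class and method names included) is illustrative:

```java
import java.util.Arrays;

public class MarkerColor {
    // Median of each channel over the selected marker pixels (r[i], g[i], b[i]).
    static int[] medianColor(int[] r, int[] g, int[] b) {
        return new int[] { median(r), median(g), median(b) };
    }

    static int median(int[] v) {
        int[] s = v.clone();
        Arrays.sort(s);
        return s[s.length / 2];
    }

    // True if (r,g,b) lies within 'threshold' Euclidean distance
    // of the median marker color.
    static boolean matches(int[] median, int r, int g, int b, double threshold) {
        int dr = r - median[0], dg = g - median[1], db = b - median[2];
        return Math.sqrt(dr * dr + dg * dg + db * db) <= threshold;
    }
}
```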
If this does not work for you you could look into alternative colorspaces. But I think the above should be enough.
I have a color, which I only know at runtime. Using this color I want to create two new colors: one very bright and one non-bright version of the color.
So to clarify, say I have the color red; I want to create the hex value for a "light red" color, and a "dark red" color.
How would I go about doing this? My code is written in Java using GWT.
Convert the colours to the HSB/HSV (Hue-Saturation-Brightness/Value) space and adjust the Brightness up for lighter and down for darker. Then convert back again. In Java:
import java.awt.Color;
float hsbVals[] = Color.RGBtoHSB( originalColour.getRed(),
originalColour.getGreen(),
originalColour.getBlue(), null );
Color highlight = Color.getHSBColor( hsbVals[0], hsbVals[1], 0.5f * ( 1f + hsbVals[2] ));
Color shadow = Color.getHSBColor( hsbVals[0], hsbVals[1], 0.5f * hsbVals[2] );
The HSB space is designed for this kind of operation.
The essential point is that you only need to vary the Brightness term to get the lightening/darkening effect you want. You'll have to experiment with how much you lighten/darken.
The above code shifts the Brightness to half-way towards white for the highlight and half-way to black for the shadow. (I used this code to create a highlighted border effect on a button.)
See: http://en.wikipedia.org/wiki/HSL_and_HSV and http://www.acasystems.com/en/color-picker/faq-hsb-hsv-color.htm
Edit: According to the comments, the java.awt.Color class can't be used in GWT. Since the only parts of the Color class we're using are the HSV to RGB and RGB to HSV conversions, as you're using GWT you could instead google for an implementation of those algorithms: Google HSV RGB conversion algorithm. For example:
javascripter.net
cs.rit.edu/~ncs
rapidtables.com (RGB to HSV)
rapidtables.com (HSV to RGB)
StackOverflow: Algorithm to convert RGB to HSV and HSV to RGB?
There are at least two decent solutions to this, one better (more 'proper', anyway) than the other. It depends on what you want to use the colour for, or a tradeoff against short and simple code.
Using a colour space that models brightness
The problem is your colours are probably specified as RGB (ie, amounts of red, green and blue, reflecting your monitor.) The best way to change a colour's brightness is to specify your colours in a different colour space where brightness is one component, such as HSB - hue (the 'colour'), saturation ('amount' of the colour) and brightness (self-explanatory, I think!)
This Wikipedia article on HSL and HSV color models explains far more than you probably want to know :)
Have a look at this HSB demo.
The point is, once your colours are specified in a different space where one component is brightness, changing the brightness is easy because you can increase or decrease that component as you wish, in the same way you might increase or decrease the amount of blue in a RGB colour. Java, I think, has some colour conversion functions built in - some googling found this page with a handy example of Color.RGBtoHSB() and going back again with Color.HSBtoRGB.
Blending with white or black
This is hackier, but effective in most situations, and most code I've written that needs to get two versions of a colour (for a gradient, for example) for something unimportant like a UI background uses this sort of method. The logic is that a colour will be brighter as it gets closer to white (RGB 255,255,255) and darker as it gets closer to black (RGB 0,0,0). So to brighten something, blend with white by, say, 25%. You can blend between two colours by taking a proportion of one colour, and the inverse of that proportion of the other, for each channel / component.
The following is untested, and is a conversion of Delphi code I have used to do the same thing (the code is taken from memory, and on top of that I haven't used Java for years and don't remember the syntax and classes well, so I don't expect this to compile but you should be able to get an idea):
Color Blend(Color clOne, Color clTwo, float fAmount) {
    float fInverse = 1.0f - fAmount;
    // I had to look up getting colour components in Java. Google is good :)
    float[] afOne = new float[3];
    clOne.getColorComponents(afOne);
    float[] afTwo = new float[3];
    clTwo.getColorComponents(afTwo);
    float[] afResult = new float[3];
    afResult[0] = afOne[0] * fAmount + afTwo[0] * fInverse;
    afResult[1] = afOne[1] * fAmount + afTwo[1] * fInverse;
    afResult[2] = afOne[2] * fAmount + afTwo[2] * fInverse;
    return new Color(afResult[0], afResult[1], afResult[2]);
}
And you'd probably use it like:
Color clBrighter = Blend(Color.red, Color.white, 0.25f);
You might want to add some safety code, such as clamping each component to the range 0..1 (getColorComponents returns floats in that range), or checking that fAmount is truly in the range 0..1.
The Java Color documentation looks like the Color class has all sorts of useful methods. (Edit: I just noticed you said you're using gwt not awt - I haven't used it and have no idea what classes from standard Java are included. This should point you in the right direction anyway.) It's possible this is not the cleanest way in Java - that'll be due to my lack of knowledge of the classes and methods these days - but it should be enough to get you well down the track. Hope that helps!
I don't know in which format you have the color (I tried to see if GWT uses colors... but they rely heavily on CSS, so they don't have specific properties).
Anyway, if you have one value for each component (red, green, blue), and each value ranges between 0 and 255 (this is standard), then apply this algorithm:
for each component
multiply the original value by a factor (let's say 1.1, 10% brighter)
convert the float/double value to int
if this value exceeds 255, cut it to 255
Then you'll have a new color (a new three component tuple).
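Those steps, as a small sketch (the 1.1 factor is just the example's 10%; the class and method names are made up):

```java
public class Brighten {
    // Multiply each RGB component by 'factor' and clamp to 0..255.
    static int[] brighten(int r, int g, int b, double factor) {
        return new int[] {
            Math.min(255, (int) (r * factor)),
            Math.min(255, (int) (g * factor)),
            Math.min(255, (int) (b * factor))
        };
    }
}
```

Note that a pure multiplicative factor can never brighten pure black (0 * 1.1 is still 0), which is one reason the blend-with-white approach in another answer is sometimes preferred.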
Hexa colors
If you have colors in the web format:
RRGGBB
RR - two hexa digits for red
GG - two hexa digits for green
BB - two hexa digits for blue
you'll need to convert them to int and back to hexa:
Hex string to int
Integer.parseInt("AB", 16); // returns 171
Int to hex string
Integer.toHexString(171); // returns "ab"
Since you are using GWT, you should do your color calculations using HSL rather than RGB, as it's more intuitive, and the result can be applied as a style color directly to your components.
Your initial color is "red" is defined as "color: hsl(0,100%, 50%)", see http://www.w3.org/TR/css3-color/#hsl-color for more on style colors.
To get a light red, all you need is to increase the L (lightness) component, so a light red would be "color: hsl(0,100%, 75%)". To get a dark red, decrease the L component, "color: hsl(0,100%, 25%)"
To apply your color, just set the color using
component.getElement().getStyle().setColor("hsl(0,100%, 25%)")
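If you build those hsl() strings in several places, a tiny helper keeps the format consistent (the class and method names are made up for illustration; hue is 0-360, the percentages 0-100):

```java
public class Hsl {
    // Builds a CSS hsl() color string for use with setColor().
    static String hsl(int hue, int saturation, int lightness) {
        return "hsl(" + hue + "," + saturation + "%, " + lightness + "%)";
    }
}
```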
Just add the following function to your code. It will return the hex value for a lighter or darker color, as per your requirement.
Pass two arguments:
(1) the hex value of your selected color.
(2) how much lighter or darker you want it (e.g. if you want a 10% lighter shade, pass 0.1 as the second argument; if you want 40% darker, pass -0.4, a negative value, as the second argument).
So if you want to find a 20% lighter shade of red, call it as below:
String lightred = convert("ff0000", 0.2);
public static String convert(String hex, double num) {
    String rgb = "#", temp;
    int i;
    double c, cd;
    for (i = 0; i < 3; i++) {
        c = Integer.parseInt(hex.substring(i * 2, (i * 2) + 2), 16);
        c = Math.min(Math.max(0, c + (255 * num)), 255);
        // round up any fractional part
        cd = c - (int) c;
        if (cd > 0) { c = (int) c + 1; }
        temp = Integer.toHexString((int) c);
        if (temp.length() < 2) {
            temp = "0" + temp; // pad single hex digits with a leading zero
        }
        rgb += temp;
    }
    return rgb;
}