For an assignment I am making a board game (in Java). This board game has a map with multiple fields/lands that have to be used. Units can be placed on them and can move; other things are also placed on them.
For the map I use one image. I looked online for solutions, but the only ones I found were for grid games (such as chess or checkers), and the map of this game cannot be divided into simple squares. I tried this, but the field shapes are too different to make that work.
I had a few faint ideas as to how to work this out, but I can't quite put them into code examples and have no clue if they would work, or how.
The ideas I had:
Make some invisible buttons and bind them to specific coordinates in the picture. The problem I had with this solution was that it also had to be able to display things placed on it. It would also be very inconvenient if not all of the field was clickable.
I have an 'overlay' image with the outlines of all the fields and the 'insides' removed. I made this overlay so I could add a faint color overlay over the board. Would it be possible to use this in any kind of way?
First I thought of cutting out all the loose fields and putting them together to form the one image. Only, I don't know how I would do this. Not just where to place them, but also: how can I make sure that the elements are always in the same place relative to each other, and that my board doesn't break when the screen/resolution size changes?
I am using JavaFX for the graphical elements in my game.
If there are any suggestions of something I haven't thought of myself, those are also very welcome.
If it's sufficient to retrieve the color of the pixel where the mouse was clicked, then you can do that fairly easily. If you know the image is displayed in the image view unscaled and uncropped, then all you need is:
imageView.setOnMouseClicked(e -> {
    Color color = imageView.getImage().getPixelReader().getColor((int) e.getX(), (int) e.getY());
    // ...
});
More generally, you may need to map the image view coordinates to the image coordinates:
imageView.setOnMouseClicked(e -> {
    double viewX = e.getX();
    double viewY = e.getY();
    double viewW = imageView.getBoundsInLocal().getWidth();
    double viewH = imageView.getBoundsInLocal().getHeight();
    // If no viewport is set, fall back to the full image:
    Rectangle2D viewport = imageView.getViewport();
    if (viewport == null) {
        Image image = imageView.getImage();
        viewport = new Rectangle2D(0, 0, image.getWidth(), image.getHeight());
    }
    double imgX = viewport.getMinX() + viewX * viewport.getWidth() / viewW;
    double imgY = viewport.getMinY() + viewY * viewport.getHeight() / viewH;
    Color color = imageView.getImage().getPixelReader().getColor((int) imgX, (int) imgY);
    // ...
});
Once you have the color you can do some simple analysis to see if it approximately matches the color of various items in your image, e.g. check the hue component, or check if the "distance" from a fixed color is suitably small.
A typical implementation of that might look like:
// choose a color based on what is in your image:
private final Color FIELD_GREEN = Color.rgb(10, 200, 10);

private double distance(Color c1, Color c2) {
    double deltaR = c1.getRed() - c2.getRed();
    double deltaG = c1.getGreen() - c2.getGreen();
    double deltaB = c1.getBlue() - c2.getBlue();
    return Math.sqrt(deltaR * deltaR + deltaG * deltaG + deltaB * deltaB);
}

private boolean colorsApproximatelyEqual(Color c1, Color c2, double tolerance) {
    return distance(c1, c2) < tolerance;
}
And then back in the handler you can do:
if (colorsApproximatelyEqual(color, FIELD_GREEN, 0.1)) {
    // process click on field...
}
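As a sketch of the hue-based variant mentioned above (the 20-degree tolerance is just an illustrative value, not something taken from your image):
private boolean huesApproximatelyEqual(Color c1, Color c2, double toleranceDegrees) {
    double deltaHue = Math.abs(c1.getHue() - c2.getHue());
    // hue is an angle, so it wraps around at 360 degrees
    deltaHue = Math.min(deltaHue, 360 - deltaHue);
    return deltaHue < toleranceDegrees;
}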
Whether or not this is a viable approach depends on the nature of the image map. If the coloring in the map is too complex (or objects are not easily distinguishable by color), then you will likely need to place other elements in the scene graph and register handlers on each of them, as you describe in the question.
I am using androidplot with PanZoom attached:
PanZoom.attach(plot);
So I can zoom in and out as expected.
What I want next is to draw a circle at a given point.
Right now I use the stroke width to set the size of the circle.
But when I zoom in and out, the circle size remains the same, although it should scale with the zoom level. So if I could zoom in infinitely, at some zoom level the circle should cover the whole screen.
But it does not.
How can I achieve this?
I was thinking about increasing the stroke width of the circle according to the zoom level, but I was able to get neither the zoom level nor the domain values at the left and right edges of the plot.
EDIT:
In the xml folder I create a file, e.g. circle.xml:
<?xml version="1.0" encoding="utf-8"?>
<config
fillPaint.color="#00000000"
linePaint.color="#00000000"
linePaint.strokeWidth="0dp"
pointLabelFormatter.textPaint.color="#FFFFFF"
vertexPaint.color="#371cd1d4"
vertexPaint.strokeWidth="20dp"/>
and in Java:
sigmaLabelFormatter = new LineAndPointFormatter();
sigmaLabelFormatter.setPointLabelFormatter(new PointLabelFormatter());
sigmaLabelFormatter.configure(activity.getApplicationContext(), R.xml.circle);
sigmaLabelFormatter.setPointLabelFormatter(null);
Zoom only affects the drawing of XYSeries data; if you draw directly on the canvas, it will be drawn exactly where you specify on the canvas regardless of pan/zoom.
One thing you can do, though, is to make a series to represent your circle and draw the circle there. This will make the circle respond to both pan and zoom actions. The tricky part will be picking enough points to ensure that the circle appears smooth at your highest supported zoom level.
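As a minimal sketch of that idea (cx, cy and r are placeholders for your circle's center and radius in plot units, and the formatter colors are arbitrary):
List<Number> xVals = new ArrayList<>();
List<Number> yVals = new ArrayList<>();
int steps = 360; // more steps -> smoother circle at high zoom levels
for (int i = 0; i <= steps; i++) {
    double angle = 2 * Math.PI * i / steps;
    xVals.add(cx + r * Math.cos(angle));
    yVals.add(cy + r * Math.sin(angle));
}
XYSeries circle = new SimpleXYSeries(xVals, yVals, "circle");
plot.addSeries(circle, new LineAndPointFormatter(Color.CYAN, null, null, null));
Because the series is defined in domain/range units, it is scaled by pan and zoom just like the rest of the plot data.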
I found a solution to my problem, since I finally managed to get the domain values on the left and right side of the plot, or to be more precise, to get the width of the plot.
Since the dimensions of the plot are in meters, I do the calculation as follows:
private float calculateDP(float px) {
    // convert raw pixels to density-independent pixels
    return px / (densityDpi / densityDefault);
}

private float pixelsPerMeter(float value) {
    // width of the plot's drawing area in pixels vs. its domain width in meters
    float w = plot.getGraph().getWidgetDimensions().canvasRect.width();
    float w2 = plot.getBounds().getWidth().floatValue();
    return value * (w / w2);
}

private void init(Activity activity) {
    densityDpi = activity.getResources().getDisplayMetrics().densityDpi;
    densityDefault = android.util.DisplayMetrics.DENSITY_DEFAULT;
}

private void onCreate() {
    // ... lots of stuff
    labelFormatter.getVertexPaint().setStrokeWidth(calculateDP(pixelsPerMeter(2)));
}
Maybe it helps somebody out there...
I just started to prototype with libGDX to understand how it works.
I want to create a grid (like a chess board), and when I click/touch a box of the grid, it should change its image.
I've found a good tutorial, but it only uses the keyboard listener, and on the web I can't find a good example that clarifies these mechanics for me.
What I don't understand is essentially: what to use to render the boxes (so far I've only used SpriteBatch and ShapeRenderer) and how to detect when and which box was clicked (I don't think calculating coordinates is a good way to go; I imagine the best way is to add a click listener to each box to determine when it is clicked, but I don't know how to code this).
Thanks for any suggestions; if you have an example, it would help me a lot.
Image image = new Image();
image.addListener(new ClickListener() {
    @Override
    public void clicked(InputEvent event, float x, float y) {
        System.out.println("You clicked an image...");
    }
});
Now we can add this image to something like a Table or directly to the Stage.
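For completeness, a minimal sketch of the surrounding setup (the listener only fires if the stage actually receives input events):
Stage stage = new Stage(new ScreenViewport());
Gdx.input.setInputProcessor(stage); // route touch/click events to the stage
stage.addActor(image);              // or add it to a Table cell instead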
As dtx12 mentioned, you should look into Scene2D. You probably want to set up a grid using a table, like so:
Table chessTable = new Table();
int boardHeight = 8;
int boardWidth = 8;
for (int y = 0; y < boardHeight; y++)
{
    for (int x = 0; x < boardWidth; x++)
    {
        // Check if (x + y) is divisible by two to make the checker pattern and add the cell to the table.
        if ((x + y) % 2 == 0)
            chessTable.add(blackImage);
        else
            chessTable.add(whiteImage);
    }
    // Add a new row to the table
    chessTable.row();
}
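Note that a Scene2D actor can only sit in one place at a time, so in practice you would create a fresh Image per cell. A rough sketch combining that with the click detection from above (blackTexture and whiteTexture are assumed to be Textures you have already loaded):
for (int y = 0; y < boardHeight; y++) {
    for (int x = 0; x < boardWidth; x++) {
        final int gridX = x;
        final int gridY = y;
        Image cell = new Image((x + y) % 2 == 0 ? blackTexture : whiteTexture);
        cell.addListener(new ClickListener() {
            @Override
            public void clicked(InputEvent event, float localX, float localY) {
                System.out.println("Clicked cell " + gridX + "," + gridY);
                // swap the cell's image here, e.g. via cell.setDrawable(...)
            }
        });
        chessTable.add(cell);
    }
    chessTable.row();
}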
Scene2d is the best fit for your purposes; check the documentation.
https://github.com/libgdx/libgdx/wiki/Scene2d
You may add an ActorGestureListener to the created actor, for example.
For rendering something using ShapeRenderer you may override the actor's draw method and apply the matrix to it. But it would be better to use a simple image with a rectangle instead of ShapeRenderer if you only need to draw boxes. Pick whichever variant you like more.
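If you do go the ShapeRenderer route, a rough sketch of such an actor could look like this (BoxActor is just an illustrative name; the sprite batch has to be paused because SpriteBatch and ShapeRenderer cannot draw at the same time):
public class BoxActor extends Actor {
    private final ShapeRenderer shapes = new ShapeRenderer();

    @Override
    public void draw(Batch batch, float parentAlpha) {
        batch.end(); // pause the sprite batch before using the ShapeRenderer
        shapes.setProjectionMatrix(batch.getProjectionMatrix());
        shapes.setTransformMatrix(batch.getTransformMatrix());
        shapes.begin(ShapeRenderer.ShapeType.Filled);
        shapes.setColor(Color.WHITE);
        shapes.rect(getX(), getY(), getWidth(), getHeight());
        shapes.end();
        batch.begin(); // resume the sprite batch for the remaining actors
    }
}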
I am using libgdx for a simple 3D game, and I need to check whether a model was clicked.
This is my code:
public int getObject(int screenX, int screenY) {
    Ray ray = cam.getPickRay(screenX, screenY);
    int result = -1;
    float distance = -1;
    for (int i = 0; i < rooms.size; ++i) {
        final GameObject instance = rooms.get(i);
        instance.transform.getTranslation(position);
        position.add(instance.center);
        final float len = ray.direction.dot(position.x - ray.origin.x, position.y - ray.origin.y, position.z - ray.origin.z);
        if (len < 0f)
            continue;
        float dist2 = position.dst2(ray.origin.x + ray.direction.x * len, ray.origin.y + ray.direction.y * len, ray.origin.z + ray.direction.z * len);
        if (distance >= 0f && dist2 > distance)
            continue;
        if (dist2 <= instance.radius * instance.radius) {
            result = i;
            distance = dist2;
        }
    }
    return result;
}
It only sometimes works.
This is my model:
http://www6.zippyshare.com/v/97501566/file.html
What am I doing wrong?
Any help for me?
I am new to libgdx.
When I press 1 it lights up, but when I press 2, 1 lights up too (instead of 2)...
I didn't fully analyze your code, nor do I know what "it only sometimes works" actually means (e.g. in which circumstances doesn't it work?), but you're using a bounding sphere for detecting whether the object has been clicked or not.
Assuming your calculations are correct (as I said, I didn't check them in depth) you still can have false positives or negatives since the only shape which is perfectly represented by a bounding sphere is ... well... a sphere.
That might be the source for click detection to work "sometimes".
If that is the case and you want more accurate detection, you should either use different bounding volumes, bounding volume hierarchies, or a rendering-based approach (i.e. render the object id into some buffer, which would allow for pixel-perfect selection).
UPDATE:
From your post update it seems that bounding spheres are not the problem here, since they should not overlap, unless your data is wrong - which you should check/debug.
So the problem might actually lie in your calculations. From the documentation it looks like the ray you get is projected into the scene (i.e. into world space), so you'd need to transform your objects' center into world space as well.
You're currently only applying the position but ignore rotation and scale thus the resulting position might be wrong. I'm sure there's some built-in transform code, so instead of transforming manually you should use that. Please check the docs on how to do that.
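A rough sketch of what that could look like (assuming instance.center is defined in the model's local space; this would replace the getTranslation/add lines in your loop):
// transform the local center by the full model transform, so rotation and
// scale are taken into account as well, not just the translation:
position.set(instance.center).mul(instance.transform);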
I'm trying to write a function which can generate colors between two colors based on a given value. An example will explain it better...
Input ..
X : 1
Y : 0.5
Z : 0
The user gives any set of color:value pairs, then enters a number (say 0.75). I then have to generate a color which is a blend of Y and Z in proportion (based on their values and the input value). I was thinking of the following approach:
Find the colors which surround the value; for 0.75 these will be 0.5 and 1.
Mix those two colors somehow, based on the value, and generate the new color.
I'm completely lost as to how to generate the colors, and whether there are any libraries for this.
UPDATE:
It is part of a bigger project I'm working on. Let's say we have...
1 : X
0 : Y
and the user inputs 0.25
I would like to have something like...
(X*0.25 + Y*0.75)
as it's nearer to Y, hence the higher proportion for Y. If the user inputs 0.5, the output should be
(X*0.5 + Y*0.5)
and so on. I have no idea how to do this with RGB colors.
P.S.: The question is not language-specific, but I'm doing this in Java.
You have to blend each color channel (red, green and blue) separately, like this:
Color x, y;       // set by you
float blending;   // set by you
float inverse_blending = 1 - blending;

float red   = x.getRed()   * blending + y.getRed()   * inverse_blending;
float green = x.getGreen() * blending + y.getGreen() * inverse_blending;
float blue  = x.getBlue()  * blending + y.getBlue()  * inverse_blending;

// note that if I pass float values they have to be in the range of 0.0-1.0
// and not in 0-255 like the ones I get returned by the getters.
Color blended = new Color(red / 255, green / 255, blue / 255);
So far for the color example. Generally if you want a linear interpolation between two values you have to do the following:
var firstValue;
var secondValue;
var interpolation;
var interpolated = firstValue * interpolation +
                   secondValue * (1 - interpolation);
But since you have Color objects in your case, you cannot interpolate the whole object in one step; you have to interpolate each relevant value on its own. You may have to interpolate the alpha channel as well; I don't know that, since you didn't mention it, but for completeness I include it in this answer.
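For the alpha channel the interpolation looks just like the other channels; a small sketch continuing the example above (java.awt.Color, where getAlpha() returns 0-255 like the other getters):
float alpha = x.getAlpha() * blending + y.getAlpha() * inverse_blending;
Color blendedWithAlpha = new Color(red / 255, green / 255, blue / 255, alpha / 255);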
A color is a point in a three-dimensional space. The exact coordinates used depend on what's called a "color space", of which there are several: RGB, HSV, and so on. So to compute a color in between two given colors, get those two colors in the same color space, and compute a third point between those two along the line in 3d-space between them.
The simplest way to do this would be simply to do a linear interpolation for each of the three values of the color space (R, G, and B, for example). But there's a further complication: the coordinate values are often not linear, so you have to linearize them first (for example, TV colors are encoded with a gamma of about 2.2). Depending on your application, incorrectly assuming linearity might work OK anyway, especially if the starting colors are already close.
(As mentioned by luk2302, add a fourth coordinate for alpha if necessary).
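A small sketch of what a gamma-aware channel interpolation could look like (using a simple power curve with gamma 2.2 as an approximation; real sRGB uses a slightly different piecewise function):
static double lerpChannel(double a, double b, double t, double gamma) {
    double linA = Math.pow(a / 255.0, gamma);  // decode 0-255 value to linear light
    double linB = Math.pow(b / 255.0, gamma);
    double lin = linA * (1 - t) + linB * t;    // interpolate in linear space
    return Math.pow(lin, 1.0 / gamma) * 255.0; // re-encode for display
}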
You could use java.awt.Color by doing something like this:
public Color mixColors(Color color1, Color color2, double percent) {
    double inverse_percent = 1.0 - percent;
    int redPart   = (int) (color1.getRed()   * percent + color2.getRed()   * inverse_percent);
    int greenPart = (int) (color1.getGreen() * percent + color2.getGreen() * inverse_percent);
    int bluePart  = (int) (color1.getBlue()  * percent + color2.getBlue()  * inverse_percent);
    return new Color(redPart, greenPart, bluePart);
}
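For example, to get the weighting from the question (0.25 of the first color and 0.75 of the second; the colors here are just placeholders):
Color mixed = mixColors(Color.RED, Color.BLUE, 0.25);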
I was implementing the Gouraud shading algorithm, and when I had calculated the point intensity on an edge I didn't know what to do with it. I tried to solve this problem like this:
private int getPointRGB(double intensity)
{
    float[] hsb = null;
    double newCrRed;
    double newCrGr;
    double newCrBlue;
    int nRGB;

    // crRed, crGr, crBlue - primary components of edge RGB
    newCrRed  = intensity * crRed;
    newCrGr   = intensity * crGr;
    newCrBlue = intensity * crBlue;

    hsb  = Color.RGBtoHSB((int) newCrRed, (int) newCrGr, (int) newCrBlue, null);
    nRGB = Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]);

    return (nRGB);
}
Am I right?
If none of the default color choosers is satisfactory, you can create your own custom chooser panel, as discussed in How to Use Color Choosers: Creating a Custom Chooser Panel. For example, you could implement the CIE 1976 color space.
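A bare-bones sketch of such a panel, assuming javax.swing.colorchooser and leaving the actual CIE 1976 conversion out (the class and variable names are illustrative):
import javax.swing.Icon;
import javax.swing.colorchooser.AbstractColorChooserPanel;

public class Cie1976ChooserPanel extends AbstractColorChooserPanel {
    @Override
    protected void buildChooser() {
        // add sliders/fields for L*, u*, v* here and push changes with
        // getColorSelectionModel().setSelectedColor(convertedColor);
    }

    @Override
    public void updateChooser() {
        // refresh the controls from getColorFromModel() when the selection changes
    }

    @Override
    public String getDisplayName() { return "CIE 1976"; }

    @Override
    public Icon getSmallDisplayIcon() { return null; }

    @Override
    public Icon getLargeDisplayIcon() { return null; }
}
It would then be registered with colorChooser.addChooserPanel(new Cie1976ChooserPanel());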