LiquidFun rendering particles - java

I'm using LiquidFun, a particle-based physics engine built on Box2D, to simulate water. My problem is rendering the particles with a specific color.
What is the purpose of setting the particle color in its particle definition when you also have to set the color the particles are rendered with on the ParticleDebugRenderer?
public void createWater(float x, float y){
    ParticleDef def = new ParticleDef();
    def.color.set(Color.RED); // set particle color
    def.flags.add(ParticleDef.ParticleType.b2_tensileParticle);
    def.flags.add(ParticleDef.ParticleType.b2_colorMixingParticle);
    def.position.set(x, y);
    int index = system.createParticle(def);
}
ParticleDebugRenderer:
pdr = new ParticleDebugRenderer(Color.BLUE, maxParticles); //set as BLUE
If I set the particle to RED, it is still rendered in blue because the ParticleDebugRenderer is set to BLUE.

Looking at the source code, we can find two renderers:
ParticleDebugRenderer.java and ColorParticleRenderer.java
The difference in the code is that ColorParticleRenderer gets its color from the ParticleSystem, while ParticleDebugRenderer gets its color from its constructor.
The difference in use is that ColorParticleRenderer is used whenever we are not debugging, and ParticleDebugRenderer is the one to use when we want to debug particles. We use it for debugging because we don't want to change colors in the ParticleSystem definition:
There may be several ParticleSystems created from one definition, so changing the color in the definition would be pointless.
It is easier to change one line of drawing code than one line of the definition (you avoid saying: "oh, I forgot that I changed the color in the definition").
Your confusion comes from the fact that you are using ParticleDebugRenderer when you are not debugging, so you assign the same color twice.
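When you are not debugging, switch to ColorParticleRenderer so the color set in the ParticleDef is actually used. A minimal sketch, assuming its constructor only takes the maximum particle count (since it reads each particle's color from the ParticleSystem); check the constructor in your LiquidFun port:
// assumption: ColorParticleRenderer reads colors from the ParticleSystem,
// so no color is passed to its constructor
ColorParticleRenderer particleRenderer = new ColorParticleRenderer(maxParticles);
// then render the system with it instead of pdr, using the same render call
// you already make for the ParticleDebugRenderer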

Get color of pixel for JavaFX window

Note: I'm using TornadoFX and Kotlin, but it's based on JavaFX with some Kotlin additions, which is why I'm mentioning JavaFX; this seems more related to JavaFX than to TornadoFX.
I'm trying to get the color of a specific location on the JavaFX window (the scene).
The reason is that for my 2D game, I'm trying to build a map. For example, if I touch black, then stop moving in that direction (so it's the border), or if it's red, then lose a life (an obstacle). Rather than hardcoding that (I've got no idea how I'd do that, since I don't want the map to just be a square), I'm trying to get the pixels and read their color.
Note that since this is part of the hit-detection system, it will be run 100+ times a second, so I need a solution that doesn't take too much time.
Also, note that I'm not trying to get a pixel from an image, but from the window the user sees. (Just clarifying so that no one misunderstands.)
EDIT: I just realized that I could maybe use the image and get the color from that... although if I zoom in on the image to make the map larger, I'm confused about how I could do it then.
You can use the AWT Robot API to do this. To get the colour at the current mouse position:
// uses java.awt.MouseInfo, java.awt.Robot and java.awt.Color;
// note that Robot's constructor throws AWTException, so handle or declare it
int xValue = MouseInfo.getPointerInfo().getLocation().x;
int yValue = MouseInfo.getPointerInfo().getLocation().y;
Robot robot = new Robot();
Color color = robot.getPixelColor(xValue, yValue);
To get the colour as the cursor moves, use a setOnMouseMoved listener:
Robot robot = new Robot(); // create it once, outside the handler
yourViewNode.setOnMouseMoved(event -> {
    int xValue = MouseInfo.getPointerInfo().getLocation().x; // or (int) event.getScreenX()
    int yValue = MouseInfo.getPointerInfo().getLocation().y; // or (int) event.getScreenY()
    Color color = robot.getPixelColor(xValue, yValue);
});
Then you can compare the colour and act on it. If you only want the colour when the user clicks, listen for the right or left click and use the same code in that listener to get the colour at that moment.
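A hypothetical sketch of that click case, assuming the robot instance from above and exact java.awt.Color constants for the map's border and obstacle colours:
yourViewNode.setOnMouseClicked(event -> {
    Color clicked = robot.getPixelColor((int) event.getScreenX(), (int) event.getScreenY());
    if (clicked.equals(Color.BLACK)) {
        // border: stop moving in that direction
    } else if (clicked.equals(Color.RED)) {
        // obstacle: lose a life
    }
});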
I have used this solution in my bot-creator project to provide tools that take action depending on the colour at the current position.

Libgdx have two different views drawn with own coordinate systems?

Take a look at this screenshot:
I want to split my UI into two parts, a controls area and a game draw area, but I don't want the controls to overlap the draw area; I want the (0,0) of the game draw area to start just above the controls area.
Is that possible to do with libGDX?
As you said, you want each view to have its own coordinate system, so the screen-splitting technique answered by Julian will do the trick, but that is not all you need to use it properly.
To make it fully work, you should have two separate OrthographicCameras, one for the game draw view and another for the control view. I also suggest creating two Viewports, one mapped to each camera. In my experience, when working with multiple cameras, always create an associated Viewport for each; it makes future changes easier (e.g. adapting to any screen resolution) and helps with debugging things like touch handling and positions.
So, combining the splitting technique with camera/viewport management, you will have a robust system for working on each area independently.
Code
I provide the following code as it is used and working in my game, with variable names changed to fit your needs. It's in Kotlin, but should be easy enough to read as Java.
First, initialize things for the game area.
// create a camera
gameAreaCamera = OrthographicCamera()
gameAreaCamera.setToOrtho(false, GAME_WIDTH, GAMEVIEW_HEIGHT)
gameAreaCamera.update()
// create a viewport associated with camera
gameAreaViewport = ExtendViewport(GAME_WIDTH, GAMEVIEW_HEIGHT, gameAreaCamera)
Next, for the control area.
// create a camera
controlAreaCamera = OrthographicCamera()
controlAreaCamera.setToOrtho(false, GAME_WIDTH, CONTROLVIEW_HEIGHT)
controlAreaCamera.update()
// create a viewport associated with camera
controlAreaViewport = ExtendViewport(GAME_WIDTH, CONTROLVIEW_HEIGHT, controlAreaCamera)
PS: Notice the width and height of each view; they are set so each view occupies the area you intend.
Now you should have something like this in your render() method.
override fun render() {
    // clear screen
    Gdx.gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT)
    // draw game area
    drawGameArea()
    // draw control area
    drawControlArea()
}
For drawGameArea(), assume sb is the SpriteBatch you maintain in the current class:
private fun drawGameArea() {
    // the following two calls tell the system to operate against the game area's camera
    // set the projection matrix
    sb.projectionMatrix = gameAreaCamera.combined
    // set gl viewport
    Gdx.gl.glViewport(0, 0, GAME_WIDTH, GAMEVIEW_HEIGHT)
    // draw your stuff here...
    sb.begin()
    ...
    sb.end()
}
The same goes for drawControlArea():
private fun drawControlArea() {
    // the following two calls tell the system to operate against the control area's camera
    // set the projection matrix
    sb.projectionMatrix = controlAreaCamera.combined
    // set gl viewport
    Gdx.gl.glViewport(0, GAMEVIEW_HEIGHT, GAME_WIDTH, CONTROLVIEW_HEIGHT)
    // draw your stuff here...
    sb.begin()
    ...
    sb.end()
}
Note that with Gdx.gl.glViewport() we supply the target rectangular area to draw into.
The Viewports are not used directly here; they mainly tell the system which screen-resizing strategy to use to fit your game's graphics onto the screen, and they also help with debugging. You can read more here.
The two options you will use most frequently are ExtendViewport and FitViewport. If you want the game to cover the entire screen without affecting the aspect ratio and with no black bars (on the left or right side), ExtendViewport is likely what you want. If you want a similar effect but with black bars, so the game screen looks the same for every player (giving no advantage to players with wide screens), then FitViewport is your choice.
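To make that choice concrete, a minimal sketch (in Java rather than the Kotlin above, and assuming the cameras and world-size constants defined earlier) of constructing either option:
// assumes com.badlogic.gdx.utils.viewport.* imports
Viewport extended = new ExtendViewport(GAME_WIDTH, GAMEVIEW_HEIGHT, gameAreaCamera); // fills the whole area, no black bars
Viewport fitted = new FitViewport(GAME_WIDTH, GAMEVIEW_HEIGHT, gameAreaCamera);      // keeps the world size fixed, adds black bars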
Checking Hit On UI
I guess you will need this, so I should include it too.
Whenever you need to check whether a UI element is clicked (or tapped) by the user, you can put the following in its update() method to check whether the touch position is within the object's bounding area.
The following code works well with ExtendViewport, but it should work the same with other Viewports, since it doesn't require anything specific to ExtendViewport; it's generic.
fun update(dt: Float, cam: Camera, viewport: Viewport) {
    // convert screen coordinates to world coordinates
    val location = Vector3(Gdx.input.x.toFloat(), Gdx.input.y.toFloat(), 0f)
    cam.unproject(location, viewport.screenX.toFloat(), viewport.screenY.toFloat(), viewport.screenWidth.toFloat(), viewport.screenHeight.toFloat())
    if (Gdx.input.isTouched &&
            boundingRect.contains(location.x, location.y)) {
        // do something here...
    }
}
You can use two cameras and position them as shown in your image. I think this post could give you a hint: Split-Screen in LibGDX

LinearGradientPaint without coordinates

I have a bunch of values and appropriate colours:
0 Black
20 Dark Grey
50 Light Grey
100 White
I want to create a LinearGradientPaint to demonstrate that gradient. I can easily calculate the fractions, but the LinearGradientPaint also requires starting (X, Y) and ending (X, Y) coordinates.
Is there a way I can apply the linear gradient paint to an arbitrarily sized rectangle without knowing the rectangle's size at the point at which the paint is created?
No, there is not a way to do this.
You will have to create this object at the time you paint the rectangle. If you want to save on object creation, my suggestion would be to cache the paint when you create it, along with the starting and ending points used to create it. If the rectangle is still in the same place the next time you paint it, you can reuse the same paint object; otherwise, you will need to create a new one at the new location.
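As a concrete illustration, a minimal caching sketch along those lines (the class and field names are hypothetical); it assumes a horizontal gradient across the rectangle and uses the question's values, scaled to 0..1, as fractions:
import java.awt.Color;
import java.awt.LinearGradientPaint;
import java.awt.Paint;
import java.awt.Rectangle;
import java.awt.geom.Point2D;

class GradientCache {
    private LinearGradientPaint cachedPaint;
    private Rectangle cachedBounds;

    Paint paintFor(Rectangle r) {
        // recreate the paint only when the target rectangle has moved or resized
        if (cachedPaint == null || !r.equals(cachedBounds)) {
            float[] fractions = {0f, 0.2f, 0.5f, 1f}; // 0, 20, 50, 100
            Color[] colors = {Color.BLACK, Color.DARK_GRAY, Color.LIGHT_GRAY, Color.WHITE};
            cachedPaint = new LinearGradientPaint(
                    new Point2D.Float(r.x, r.y),           // left edge
                    new Point2D.Float(r.x + r.width, r.y), // right edge
                    fractions, colors);
            cachedBounds = new Rectangle(r);
        }
        return cachedPaint;
    }
}
In your painting code you would then call g2.setPaint(cache.paintFor(rect)) before filling the rectangle.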

Collision Detection With a Gapped Sprite LibGDX

I need an object to be able to go through a gap in a sprite but collide with any part of it. The sprite is similar to that below:
\\\\\\\\\\\\\\\\\\\\\\\\.............///////////////////////////////
The white in the image between the two branches (the dots) is transparent, not white.
I've seen countless tutorials where a rectangle is drawn around the sprite, but as you can probably see, that won't work in this case. Using area overlap could be a possibility, but I have no idea how to find the area of the sprite; if I could, I could use something like this, as I've seen in another post:
public static boolean testIntersection(Shape shape, Branches branches){
    // assumes Branches is (or can provide) a java.awt.Shape
    Area shapeArea = new Area(shape);
    shapeArea.intersect(new Area(branches));
    return !shapeArea.isEmpty();
}
Spawning them separately isn't an option, as the x must be random but the distance apart must remain the same.
Any help or ideas would be greatly appreciated.
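One hypothetical sketch (not from the original post): since the two branches sit a fixed distance apart and only the x is random, you could keep two com.badlogic.gdx.math.Rectangle bounds derived from that single random x and test the moving object against both; the numbers and names here are made up:
import com.badlogic.gdx.math.Rectangle;

public class GappedBranches {
    private static final float GAP = 80f; // assumed fixed gap width

    private final Rectangle left;
    private final Rectangle right;

    public GappedBranches(float randomX, float y, float branchWidth, float branchHeight) {
        left = new Rectangle(randomX, y, branchWidth, branchHeight);
        right = new Rectangle(randomX + branchWidth + GAP, y, branchWidth, branchHeight);
    }

    public boolean collidesWith(Rectangle objectBounds) {
        // the object passes cleanly only if it overlaps neither branch
        return left.overlaps(objectBounds) || right.overlaps(objectBounds);
    }
}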

drawing layers using java graphics API

I'm doing a simulator project that tests several A* based algorithms and show how they work and their results.
The algorithms are all multi-agent and run on a grid map environment.
I used a JPanel for the grid, which contains a two-dimensional array of Cells, where each Cell is a custom class that extends the Component class and uses the paint method to draw the things I need inside each cell.
For the drawing inside the cell I use methods such as Graphics.fillRect or Graphics.drawImage to fill each cell with a certain color or icon.
I'm using a special Icon for the start position and goal position of every agent on the grid.
My problem is that I want to be able to draw more than one item on the same cell.
For example I want to be able to show the path of one of the agents by painting the cells along the path in a special color and the path might go through a start position of a different agent, so I want to be able to fill the cell with the color and have an icon drawn on top.
In another example I want to be able to mix two colors using alpha blending.
If I use Graphics.fillRect() with one color that has alpha and then use it again with a different color with an alpha value, it won't work, since the last fillRect() overrides the first call.
Is there a way I can achieve what I need using the same Cell Component I created or should I implement it differently?
Perhaps there is a better solution to this problem?
I would really appreciate any advice on this matter.
If you draw a rectangle with 50% alpha and then draw another one, the second one will override it instead of blending with it.
It depends on the mode. This convenient utility shows the result of blending different colors using the modes defined in AlphaComposite. The available source code may offer some insights for your project.
Addendum:
the stuff I was trying to composite was on the same Component.
The example cited does exactly this, as does this example. If AlphaComposite does not meet your requirements, you can always vary hue, saturation and/or value; this example composes a color table based on saturation.
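As a rough illustration of compositing layers within a single cell (my own sketch, not the linked examples; it uses a Swing JComponent with paintComponent(), whereas your Cell extends Component and overrides paint(), but the same Graphics2D calls apply):
import java.awt.AlphaComposite;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.Graphics2D;
import javax.swing.JComponent;

class LayeredCell extends JComponent {
    @Override
    protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g.create();
        // SRC_OVER blends each new layer over whatever is already drawn
        g2.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f));
        g2.setColor(Color.RED);                  // e.g. the agent's path colour
        g2.fillRect(0, 0, getWidth(), getHeight());
        g2.setColor(Color.BLUE);                 // second translucent layer blends with the first
        g2.fillRect(0, 0, getWidth(), getHeight());
        g2.setComposite(AlphaComposite.SrcOver); // back to fully opaque for the icon
        // g2.drawImage(startIcon, 0, 0, this);  // hypothetical icon drawn on top
        g2.dispose();
    }
}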
