Currently I have static references to all my sprites, and I load and initialize them in the onCreateResources method of SimpleBaseGameActivity. But now I have to override the onAreaTouched listener on sprites, and the only way I can override it is while initializing the sprite. However, I have a static method that creates the atlas and texture region for every sprite, and I use these sprites in my scene class, where I want to override onAreaTouched. I can registerTouchArea for that specific sprite in my scene, so that part works, but I want to override onAreaTouched in a way that preserves code reusability.
Here is how I am currently creating and loading sprites.
defaultCageSprite = createAndLoadSimpleSprite("bg.png", this, 450, 444);
And this is my method createAndLoadSimpleSprite:
public static Sprite createAndLoadSimpleSprite(String name,
        SimpleBaseGameActivity activity, int width, int height) {
    BitmapTextureAtlas atlasForBGSprite = new BitmapTextureAtlas(
            activity.getTextureManager(), width, height);
    TextureRegion backgroundSpriteTextureRegion = BitmapTextureAtlasTextureRegionFactory
            .createFromAsset(atlasForBGSprite, activity, name, 0, 0);
    Sprite sprite = new Sprite(0, 0, backgroundSpriteTextureRegion,
            activity.getVertexBufferObjectManager());
    activity.getTextureManager().loadTexture(atlasForBGSprite);
    return sprite;
}
Now, how can I override onAreaTouched for some sprites without losing code reusability?
Is there any reason you need to load the textures at runtime? The normal way is to load all the required textures onto a single atlas while the application is loading, so that you can quickly use them later.
As for the code reusability, Todilo's idea about enums seems to be pretty much what you need. Say, for example, that you have two kinds of objects: objects that disappear when you touch them and objects that fly up when you touch them. You enumerate both categories and put a piece of code into the touch-event handling code that checks whether the object should disappear or fly up.
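A minimal sketch of that idea (GameObject, getTouchBehaviour(), disappear() and flyUp() are placeholder names, not AndEngine API):

// Tag each game object with a touch behaviour and branch on it in one place.
enum TouchBehaviour { DISAPPEAR, FLY_UP }

void handleTouch(GameObject touched) {
    switch (touched.getTouchBehaviour()) {
        case DISAPPEAR:
            disappear(touched);
            break;
        case FLY_UP:
            flyUp(touched);
            break;
    }
}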
If you don't know what the objects should do on touch before running the application, there is a more dynamic way of achieving the same result: create two lists at runtime and put a reference to each object into one of the lists according to what it should do when touched. Then, in the touch-event handling, do something like this:
if (disappearList.contains(touchedObject)) {
    disappear(touchedObject);
}
if (flyUpList.contains(touchedObject)) {
    flyUp(touchedObject);
}
Too bad AndEngine does not allow users to set listeners on sprites; it would make things a bit easier.
EDIT:
Added an explanation of the use of BlackPawnTextureBuilder:
Your atlas must be of type BuildableBitmapTextureAtlas; then you add all textures like this:
BitmapTextureAtlasTextureRegionFactory.createFromAsset(buildableAtlas, this, "image.png");
and after that
try {
    this.buildableAtlas.build(new BlackPawnTextureBuilder<IBitmapTextureAtlasSource, BitmapTextureAtlas>(1));
} catch (final TextureAtlasSourcePackingException e) {
    Debug.e(e);
}
I don't know whether this will work for animated sprites or not; you will have to try it.
Also, there is no overriding of onTouch; you will have to do that in the onAreaTouched method. One example of such a condition is:
if (pSceneMotionEvent.getAction() == MotionEvent.ACTION_DOWN
        && disappearList.contains(pTouchArea)) {
    disappear();
}
Are you sure you don't want more functionality than overriding onTouch? How about creating a class that inherits from Sprite for those sprites that need onAreaTouched overridden, and whatever else they need?
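For example, here is a rough sketch of such a subclass. The TouchHandler callback interface is hypothetical (you would define it yourself), and the onAreaTouched signature is the usual AndEngine GLES2 one, so check it against your version:

public class TouchableSprite extends Sprite {

    // Hypothetical callback interface, not part of AndEngine.
    public interface TouchHandler {
        boolean onTouched(TouchableSprite sprite, TouchEvent event);
    }

    private final TouchHandler handler;

    public TouchableSprite(float x, float y, TextureRegion region,
            VertexBufferObjectManager vbom, TouchHandler handler) {
        super(x, y, region, vbom);
        this.handler = handler;
    }

    @Override
    public boolean onAreaTouched(TouchEvent event, float localX, float localY) {
        // Delegate to the pluggable handler so each sprite can react
        // differently while the atlas/texture loading code stays shared.
        return handler != null && handler.onTouched(this, event);
    }
}

Your createAndLoadSimpleSprite method could then take a TouchHandler parameter and construct a TouchableSprite, so each scene supplies its own touch logic without duplicating the loading code.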
I have a Canvas in JavaFX, and I would like to draw, say, a circle or a polygon with the stroke inside.
This is what I would usually do:
class MyClass {
    fun draw(ctx: GraphicsContext, points: List<Point>) {
        ctx.beginPath()
        points.forEach {
            ctx.lineTo(it.x, it.y)
        }
        ctx.closePath()
        ctx.fill()
        ctx.stroke()
    }
}
But the problem is, this method draws the shape with the stroke centered on the path, like on the left of this picture:
I need it to draw the stroke inside the path.
I know that this is possible with JavaFX Shapes via their javafx.scene.shape.StrokeType parameter, but I would like to avoid using them, as they need to be manually added to the Node as children (and manually removed if I need to change something).
Also, I know that I can use ctx.clip() to imitate the desired effect. But this method, if used frequently (which is my case), causes unacceptable lag in the application.
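For reference, the clip-based workaround looks roughly like this (a sketch in Java; strokeWidth is the desired inside-stroke width):

// Clip to the path, then stroke with double the width so the outer half
// of the stroke falls outside the clip and is discarded.
void drawWithInnerStroke(GraphicsContext ctx, List<Point2D> points, double strokeWidth) {
    ctx.save();
    ctx.beginPath();
    for (Point2D p : points) {
        ctx.lineTo(p.getX(), p.getY());
    }
    ctx.closePath();
    ctx.fill();
    ctx.clip();                        // restrict further drawing to the path's interior
    ctx.setLineWidth(strokeWidth * 2);
    ctx.stroke();
    ctx.restore();                     // drop the clip again
}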
I have searched everywhere for a Canvas analog of the StrokeType parameter, but haven't found anything.
Take a look at this screenshot:
I want to split my UI into two parts: controls and game draw area. I don't want the controls to overlap the draw area; I want the (0,0) of the game draw area to start above the controls area.
Is that possible to do with libGDX?
As you said, you want each view to have its own coordinate system, so the screen-splitting technique from Julian's answer will do the trick, but that's not all there is to using it properly.
To make it fully work, you should have two separate OrthographicCameras: one for the game draw view and another for the control view. I also suggest creating two Viewports, one mapped to each camera. In my experience, when working with multiple cameras, always create an associated Viewport for each. It is better for changes that could be introduced in the future (i.e. adapting to any screen resolution) and for debugging purposes like checking touches, positions, etc.
So by combining the splitting technique with camera/viewport management, you will have a robust system for working on each area independently.
Code
The following code is used and working in my game, with variable names changed to fit your needs. It's in Kotlin, but it should be relatively easy to read it as Java.
First, you initialize things for the game area.
// create a camera
gameAreaCamera = OrthographicCamera()
gameAreaCamera.setToOrtho(false, GAME_WIDTH, GAMEVIEW_HEIGHT)
gameAreaCamera.update()
// create a viewport associated with camera
gameAreaViewport = ExtendViewport(GAME_WIDTH, GAMEVIEW_HEIGHT, gameAreaCamera)
Next, for control area stuff.
// create a camera
controlAreaCamera = OrthographicCamera()
controlAreaCamera.setToOrtho(false, GAME_WIDTH, CONTROLVIEW_HEIGHT)
controlAreaCamera.update()
// create a viewport associated with camera
controlAreaViewport = ExtendViewport(GAME_WIDTH, CONTROLVIEW_HEIGHT, controlAreaCamera)
PS: Notice the width and height of each view; they are set so that each camera covers only its intended area.
Now you should have something like this in your render() method:
override fun render() {
// clear screen
Gdx.gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT)
// draw game area
drawGameArea()
// draw control area
drawControlArea()
}
For your drawGameArea(), assume sb is a SpriteBatch that you maintain in the current class:
private fun drawGameArea() {
    // the following two calls tell the system to operate against the game area's camera
    // set the projection matrix
    sb.projectionMatrix = gameAreaCamera.combined
    // set the GL viewport to the game area's rectangle
    Gdx.gl.glViewport(0, 0, GAME_WIDTH, GAMEVIEW_HEIGHT)

    // draw your stuff here...
    sb.begin()
    ...
    sb.end()
}
The same goes for drawControlArea():
private fun drawControlArea() {
    // the following two calls tell the system to operate against the control area's camera
    // set the projection matrix
    sb.projectionMatrix = controlAreaCamera.combined
    // set the GL viewport to the control area's rectangle
    Gdx.gl.glViewport(0, GAMEVIEW_HEIGHT, GAME_WIDTH, CONTROLVIEW_HEIGHT)

    // draw your stuff here...
    sb.begin()
    ...
    sb.end()
}
Note Gdx.gl.glViewport(): we supply it with the target rectangular area to draw things in.
Viewports are not directly used here; rather, a Viewport tells the system which screen-resizing strategy to use to fit your game's graphics onto the screen, and it helps with debugging. You can read more here.
The two options you will use most often are ExtendViewport and FitViewport. If you want the game to cover the entire screen without affecting the aspect ratio and with no black bars (on the left or right side), ExtendViewport is likely what you want. If you want a similar effect but with black bars, so the game screen is the same for every player (giving no advantage to players with wide screens), then FitViewport is your choice.
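Whichever you pick, remember to forward window-size changes to both viewports in resize(). A sketch in Java (the exact arguments depend on how you lay out the two areas):

// Keep both viewports in sync with the window; 'true' recenters each camera.
@Override
public void resize(int width, int height) {
    gameAreaViewport.update(width, height, true);
    controlAreaViewport.update(width, height, true);
}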
Checking Hit On UI
I guess you will need this, so I should include it too.
Whenever you need to check whether a UI element has been clicked (or tapped) by the user, you can put the following in its corresponding update() method to check whether the touch position is within the bounds of the object's bounding area.
The following code works safely with ExtendViewport, but it should work the same with other Viewports, as it doesn't require any information specific to ExtendViewport; it's generic.
fun update(dt: Float, cam: Camera, viewport: Viewport) {
    // convert screen coordinates to world coordinates
    val location = Vector3(Gdx.input.x.toFloat(), Gdx.input.y.toFloat(), 0f)
    cam.unproject(location, viewport.screenX.toFloat(), viewport.screenY.toFloat(),
            viewport.screenWidth.toFloat(), viewport.screenHeight.toFloat())

    // boundingRect is this UI element's bounding Rectangle in world coordinates
    if (Gdx.input.isTouched &&
            boundingRect.contains(location.x, location.y)) {
        // do something here...
    }
}
You can use two cameras and position them as shown in your image. I think this post could give you a hint: Split-Screen in LibGDX
I'm starting to create my world with Box2D in libGDX, and I have to create shapes for different game objects. The tutorial I read said I should dispose of my shapes when I'm done using them.
So I started keeping references like this:
private CircleShape circle;
private PolygonShape ground;
private PolygonShape wall;
private PolygonShape box;
//...
//(getters)
And disposing of my objects like this:
@Override
public void dispose()
{
    circle.dispose();
    ground.dispose();
    wall.dispose();
    box.dispose();
    world.dispose();
}
I decided to change this to a list for expansion, but the problem is that elsewhere in my code I'm adding bodies on click, so I need to allow access to some shapes from external classes. I could create extra shapes and give access to my list, but I don't like the idea of creating a giant list of disposable objects.
A solution would be to create a ShapeManager object with an internal list of shapes. I could dispose of this object, and it would wrap the shape constructors, letting me return an already existing shape if it fits the need.
However, this solution seems too heavy. Why did Box2D (or libGDX) make shapes objects that need to be disposed? Is there a class like the one I described already included in libGDX? Is there a better solution?
You can dispose of a shape as soon as you have finished defining your body with it; the fixture keeps its own copy of the shape data.
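In other words, something like this (a sketch using the standard libGDX Box2D API; world is your Box2D World):

// Create the body, attach a fixture (Box2D copies the shape's data into
// the fixture), then dispose of the shape immediately.
CircleShape circle = new CircleShape();
circle.setRadius(0.5f);

BodyDef bodyDef = new BodyDef();
bodyDef.type = BodyDef.BodyType.DynamicBody;
Body body = world.createBody(bodyDef);

FixtureDef fixtureDef = new FixtureDef();
fixtureDef.shape = circle;
fixtureDef.density = 1f;
body.createFixture(fixtureDef);

circle.dispose(); // safe: the fixture keeps its own copy of the shape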
I have been trying to solve this problem for two days and I have given up trying to find an existing solution.
I have started learning libGDX and finished a couple of tutorials. Now I have tried to use everything I have learned to create a simple side-scrolling game. I know there are libGDX examples of this, but I haven't found one that incorporates Box2D with scene2d and actors, as well as tiled maps.
My main problem is with the cameras.
You need a camera for the Stage (which, as far as I know, is used for the projection matrix of the SpriteBatch passed to the actors' draw() method; if this is wrong, please correct me), and you need a camera for the TileMapRenderer when calling its render() method. Also, in some of the tutorials there is an OrthographicCamera in the GameScreen, which is used where needed.
I have tried passing an OrthographicCamera object to methods, and I have tried using the camera from the Stage and the camera from the TileMapRenderer everywhere.
Ex.
OrthographicCamera ocam = new OrthographicCamera(FRUSTUM_WIDTH, FRUSTUM_HEIGHT);
stage.setCamera(ocam); // In the other cases I replace ocam with stage.getCamera() or the one I use for the TileMapRenderer
tileMapRenderer.render(ocam);
stage.getSpriteBatch().setProjectionMatrix(ocam.combined); // I am not sure if this is needed
I have also tried to use different cameras everywhere.
After trying all of this, I haven't noted exactly what happens in which case, but I will list what happens:
There is nothing on the screen (probably the camera is away from the stuff that is drawn).
I can see the tiled map and the outlines from the debug renderer (I use debug rendering too, but I don't think it interferes with the cameras), but the sprite of the actor is not visible (probably off screen).
I can see everything I should, but when I try to move the Actor and the Camera, which is supposed to follow him, the sprite moves faster than the body (the green debug square).
So my main questions are:
I don't understand what happens when you have multiple cameras. "Through" which one do you actually see on the monitor?
Should I use multiple cameras, and how?
Also, I thought that I should mention that I am using OpenGL ES 2.0.
I am sorry for the long question, but I thought I should describe it in detail, since it's a bit complicated for me.
You actually see through all of them at the same time. They might look at completely different worlds, but all of them render their point of view to the screen.
You can use several cameras, or just one. If you use only one, you need to make sure that you update the projection matrix correctly between drawing the TiledMap, your Stage with Actors, and maybe the optional Box2DDebugRenderer.
I'd use an extra camera for the Box2DDebugRenderer, because you can easily throw it away later. I assume you use a conversion factor to convert meters to pixels and the other way around; having a 1:1 ratio wouldn't be very good. I always used something between 1 m = 16 px and 1 m = 128 px.
So you initialize it this way, and use that one for your debugging renderer:
OrthographicCamera physicsDebugCam = new OrthographicCamera(
        Gdx.graphics.getWidth() / Constants.PIXELS_PER_METER,
        Gdx.graphics.getHeight() / Constants.PIXELS_PER_METER);
For your TiledMapRenderer you may use an extra camera as well, but that one will work in screen-coordinates only, so no conversion:
OrthographicCamera tiledMapCam = new OrthographicCamera(Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
The TiledMap will always be rendered at (0, 0), so you need to use the camera to move around on the map. It will probably follow a body, so you can update it via:
tiledMapCam.position.set(body.getPosition().x * Constants.PIXELS_PER_METER,
        body.getPosition().y * Constants.PIXELS_PER_METER, 0);
Or in case it follows an Actor:
tiledMapCam.position.set(actor.getX(), actor.getY(), 0);
I actually haven't used scene2d together with Box2D yet, because I didn't need to interact very much with my game objects. You would need to implement a custom PhysicsActor here, which extends Actor and bridges scene2d and Box2D by having a Body as a property. It would have to set the Actor's position, rotation, etc. based on the Body at every update step. You have several options here: you may reuse the tiledMapCam and work in screen coordinates, in which case you need to remember to multiply by Constants.PIXELS_PER_METER whenever you update your actor; or you may use another camera with the same viewport as the physicsDebugCam, in which case no conversion is needed, but I'm not sure whether that might interfere with some scene2d-specific things.
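A minimal sketch of such a PhysicsActor, assuming the Constants.PIXELS_PER_METER factor from above:

// An Actor that mirrors its Box2D Body every frame.
public class PhysicsActor extends Actor {

    private final Body body;

    public PhysicsActor(Body body) {
        this.body = body;
    }

    @Override
    public void act(float delta) {
        super.act(delta);
        // Copy the physics state into the actor, converting meters to
        // pixels and radians to degrees.
        setPosition(body.getPosition().x * Constants.PIXELS_PER_METER,
                body.getPosition().y * Constants.PIXELS_PER_METER);
        setRotation(body.getAngle() * MathUtils.radiansToDegrees);
    }
}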
For a ParallaxBackground you may use another camera as well; for UI you can use another stage and another camera again... or reuse others by resetting them correctly. It's your choice, but I think several cameras do not influence performance much. Less resetting and fewer conversions might even improve it.
After everything is set up, you just need to render everything using the correct cameras, drawing every "layer"/"view" on top of the others: first a ParallaxBackground, then your TiledMap, then your entity Stage, then your Box2D debugging view, then your UI Stage.
In general, remember to call spriteBatch.setProjectionMatrix(cam.combined); and to call cam.update() after you have changed anything about your camera.
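Put together, the layered rendering might look roughly like this (a sketch: the renderer and stage field names are assumptions, the ParallaxBackground pass is omitted, and the tiled map call uses the setView-style API rather than render(cam)):

@Override
public void render(float delta) {
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    // 1. tiled map, in screen coordinates
    tiledMapCam.update();
    tiledMapRenderer.setView(tiledMapCam);
    tiledMapRenderer.render();

    // 2. entity stage (scene2d actors)
    entityStage.act(delta);
    entityStage.draw();

    // 3. Box2D debug view, in meters
    physicsDebugCam.update();
    debugRenderer.render(world, physicsDebugCam.combined);

    // 4. UI stage on top
    uiStage.act(delta);
    uiStage.draw();
}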
I'm making a Java shoot-em-up game for Android phones. I've got 20-odd enemies in the game that each have a few unique behaviors, but certain behaviors are reused by most of them. I need to model bullets, explosions, asteroids, etc., and other things that all act a bit like enemies too. My current design favors composition over inheritance and represents game objects a bit like this:
// Generic game object
class Entity
{
    // Current position
    Vector2d position;

    // Regular frame update behaviour
    Behaviour updateBehaviour;
    // Collision behaviour
    Behaviour collideBehaviour;

    // What the entity looks like
    Image image;
    // How to display the entity
    Renderer renderer;

    // Whether the entity is dead and should be deleted
    boolean dead;
}
abstract class Renderer { abstract void draw(Canvas c); }
abstract class Behaviour { abstract void update(Entity e); }
To just draw whatever is stored as the entity's image, you can attach a simple renderer, e.g.:
class SimpleRenderer extends Renderer
{
    void draw(Canvas c)
    {
        // just draw the image
    }
}
To make the entity fly about randomly each frame, just attach a behavior like this:
class RandomlyMoveBehaviour extends Behaviour
{
    void update(Entity e)
    {
        // Add random direction vector to e.position
    }
}
Or add more complex behaviour like waiting until the player is close before homing in:
class SleepAndHomeBehaviour extends Behaviour
{
    Entity target;
    boolean homing;

    void init(Entity t) { target = t; }

    void update(Entity e)
    {
        if (/* distance between target and e < 50 pixels */)
        {
            homing = true;
            // move towards target...
        }
        else
        {
            homing = false;
        }
    }
}
I'm really happy with this design so far. It's nice and flexible in that you can, e.g., modularize the latter class: you could supply the "sleep" behavior and the "awake" behavior, saying something like new WaitUntilCloseBehaviour(player, 50 /* pixels */, new MoveRandomlyBehaviour(), new HomingBehaviour()). This makes it really easy to make new enemies.
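A sketch of that composed behaviour (assuming Vector2d has a distance() method):

// Delegates to one of two sub-behaviours depending on the distance to a target.
class WaitUntilCloseBehaviour extends Behaviour
{
    private final Entity target;
    private final float triggerDistance;
    private final Behaviour farBehaviour;
    private final Behaviour closeBehaviour;

    WaitUntilCloseBehaviour(Entity target, float triggerDistance,
            Behaviour farBehaviour, Behaviour closeBehaviour)
    {
        this.target = target;
        this.triggerDistance = triggerDistance;
        this.farBehaviour = farBehaviour;
        this.closeBehaviour = closeBehaviour;
    }

    void update(Entity e)
    {
        boolean close = e.position.distance(target.position) < triggerDistance;
        (close ? closeBehaviour : farBehaviour).update(e);
    }
}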
The only part that's bothering me is how the behaviors and the renderers communicate. At the moment, Entity contains an Image object that a Behaviour can modify if it chooses to. For example, one behavior could switch the object between a sleep image and an awake image, and the renderer would just draw the image. I'm not sure how this will scale, though, e.g.:
What about a turret-like enemy that faces a certain direction? I guess I could add a rotation field to Entity that Behaviour and Renderer can both modify/read.
What about a tank where the body and the gun have separate directions? Now the renderer needs access to two rotations from somewhere and the two images to use. You don't really want to bloat the Entity class with this if there is only one tank.
What about an enemy that glows as its gun recharges? You'd really want to store the recharge time in the Behaviour object, but then the Renderer class cannot see it.
I'm having trouble thinking of ways to model the above so the renderers and the behaviors can be kept somewhat separate. The best approach I can think of is to have the behavior objects contain the extra state and the renderer object; the behavior objects then call the renderer's draw method and pass on the extra state (e.g. rotation) if they want to.
You could then, e.g., have a tank-like Behaviour object that wants a tank-like Renderer, where the latter asks for the two images and two rotations to draw with. If you wanted your tank to be just a plain image, you would write a Renderer subclass that ignores the rotations.
Can anyone think of any alternatives? I really want simplicity. As it's a game, efficiency may be a concern as well if, e.g., drawing a single 5x5 enemy image, when I have 50 enemies flying around at 60 fps, involves many layers of function calls.
The composition design is a valid one, as it allows you to mix and match behaviours and renderers.
In the game we're toying with, we've added a "databag" that contains basic information (in your case, the position and the dead/alive status) and variable data that is set/unset by the behaviour and collision subsystems. The renderer can then use these data (or not, if not needed). This works well and allows for neat effects, such as setting a "target" for a given graphical effect.
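A rough sketch of such a databag (names are illustrative):

import java.util.HashMap;
import java.util.Map;

// A string-keyed bag of per-entity data shared between behaviours and renderers.
class DataBag {

    private final Map<String, Object> data = new HashMap<String, Object>();

    void set(String key, Object value) {
        data.put(key, value);
    }

    @SuppressWarnings("unchecked")
    <T> T get(String key, T defaultValue) {
        Object value = data.get(key);
        if (value == null) {
            // as noted below: log the miss and fall back to a renderer-defined default
            return defaultValue;
        }
        return (T) value;
    }
}

A behaviour would call something like entity.databag.set("TankRotation", 45f), and a renderer would read it back with entity.databag.get("TankRotation", 0f).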
A few problems:
The Renderer may ask for data that the behaviour did not set. In our case, the event is logged and default values (defined in the renderer) are used.
It's a little bit harder to check for the needed information beforehand (i.e. what data should be in the databag for Renderer A? What data is set by Behaviour B?). We try to keep the docs up to date, but we're thinking of recording the sets/gets done by the classes and generating a doc page...
Currently we're using a HashMap for the databag, but this is on a PC, not an iPhone. I don't know whether performance will be good enough; if not, another structure could be better.
Also, in our case, we've decided on a set of specialized renderers. For example, if the entity possesses non-empty shield data, the ShieldRenderer displays the representation... In your case, the tank could possess two renderers linked to two (initialization-defined) pieces of data:
Renderer renderer1 = new RotatedImage("Tank.png", "TankRotation");
Renderer renderer2 = new RotatedImage("Turret.png", "TurretRotation");
with "TankRotation" and "TurretRotation" set by the behaviour. and the renderer simply rotating the image before displaying it at the position.
image.rotate(entity.databag.getData(variable));
Hope this helps.
Regards
Guillaume
The design you're going with looks good to me. This chapter on components may help you.