LibGDX touch differences between emulator & phone - java

While trying to develop a small game with LibGDX, I have noticed a behavior that I can't explain: when I run my game on the Android Studio emulator (Nexus 5X, API 28) and on my phone (Samsung Galaxy S20), touch is handled differently:
- on the emulator, the touch (emulated with a click) is detected as expected
- on my phone, the touch is detected ~50-100 pixels higher than expected
My guess is that it has something to do with the way LibGDX handles the phone's resolution and/or aspect ratio.
Here is a sample of the type of code I have:
LevelScreen(final PixGame game) {
    this.game = game;
    mTouchPoint = new Vector3();

    camera = new OrthographicCamera();
    camera.setToOrtho(false, 720, 1280);
    FitViewport viewport = new FitViewport(720, 1280, camera);
    Gdx.gl.glViewport(0, 0, 720, 1280);
    viewport.update(game.ScreenWidth, game.ScreenHeight, true);
    mStage = new Stage(viewport);

    InputProcessor processor1 = new InputAdapter() {
        @Override
        public boolean touchDown(int x, int y, int pointer, int button) {
            game.mTouchUp = false;
            return false;
        }

        @Override
        public boolean touchUp(int x, int y, int pointer, int button) {
            game.mTouchUp = true;
            return false;
        }
    };
    InputMultiplexer inputMultiplexer = new InputMultiplexer(processor1, mStage);
    Gdx.input.setInputProcessor(inputMultiplexer);
    Gdx.graphics.setContinuousRendering(false);
}
@Override
public void render(float delta) {
    if (game.mTouchUp) {
        mTouchPoint.set(Gdx.input.getX(), Gdx.input.getY(), 0);
        camera.unproject(mTouchPoint);
        game.mTouchUp = false;
        Gdx.app.debug("TOUCH", "Touchpoint coordinates : " + mTouchPoint);
    }
}
I have removed all the game logic from the code above; this is just the way I detect and handle touch events (probably not the smartest way, but hey, I'm not a pro developer).
I have a PNG image as a background, and if I touch the same spot on the emulator and on my phone, I get the following results:
EMULATOR : Touchpoint coordinates : (90.0,1103.7681,0.0)
PHONE : Touchpoint coordinates : (95.33334,1054.3251,0.0)
The X coordinate is about the same (within the margin of error of my finger), but the Y coordinate always has a 50-100 px gap. I have tested many times at different positions on the screen, and the behavior is always the same.
Any idea what I should do to handle this? Maybe the difference in screen ratio between the two (16:9 vs 16:10)? Many thanks for the help.

After several tries, I've figured out what was happening: the problem was indeed coming from the aspect ratio, which differs between the two devices (16:9 vs 16:10).
In order to neutralize that, I have replaced the line
FitViewport viewport = new FitViewport(720, 1280, camera);
with
Viewport viewport = new ScalingViewport(Scaling.stretch, 720, 1280, camera);
Touch coordinates are now handled the same way on the two devices.
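As a side note (not part of the original fix, just a possible alternative): when a FitViewport letterboxes the screen, converting touches through the viewport instead of the camera also accounts for the black bars. A minimal sketch, reusing the fields from the question:
mTouchPoint.set(Gdx.input.getX(), Gdx.input.getY(), 0);
viewport.unproject(mTouchPoint); // screen -> world, including any letterbox offset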

Related

Move a shape to the place where my finger touches

@Override
public void create() {
    batch = new SpriteBatch();
    shape = new ShapeRenderer();
    velocity = new Vector2(100, 0);
    position = new Rectangle(0, 5, 100, 100);
    font = new BitmapFont();
    font.setColor(Color.BLACK);
    font.getData().scale(3f);
    camera = new OrthographicCamera();
    confCamera();
}

@Override
public void render() {
    if (Gdx.input.isTouched()) {
        position.x = Gdx.input.getX() - position.width/2;
        position.y = Gdx.input.getY() - position.height/2;
    }

    shape.begin(ShapeRenderer.ShapeType.Filled);
    shape.setColor(Color.BLACK);
    shape.rect(position.x, position.y, position.width, position.height);
    shape.end();
}
It's simple code, but I don't understand the Y axis: my shape moves like a mirror image. If I touch the top, my shape goes to the bottom; if I touch the bottom, my shape goes to the top. How do I fix it?
By default, LibGDX renders using a coordinate system where 0 is at the bottom of the screen and the Y coordinate grows as you go up.
Also, when you read input coordinates (touches, moves, ...) you get screen coordinates, but when you render your graphics you are using "world" coordinates. These are two different coordinate systems, so to convert from screen to world you have to call camera.unproject(). It should look like:
Vector3 touchPos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
camera.unproject(touchPos);
and then use touchPos.x and touchPos.y.
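Applied to the render() method from the question, a minimal sketch (reusing the camera and position fields shown above) could look like this:
if (Gdx.input.isTouched()) {
    Vector3 touchPos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0);
    camera.unproject(touchPos); // screen -> world; this also flips the Y axis
    position.x = touchPos.x - position.width / 2;
    position.y = touchPos.y - position.height / 2;
}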
A similar question was asked here, so you can find more answers there:
Using unProject correctly in Java Libgdx

Libgdx: Drawing lots of particles

What is the best way to draw lots of particles (circles) moving in the background in LibGDX?
200 particles running in the background is what I can get out of my app; anything above that makes my app stutter. I've actually tested an app where it's possible to run up to 200,000 particles in the background without sacrificing fps. This is my Game class, in short:
public Array<Particles> particlesArray;
SpriteBatch batch;
OrthographicCamera camera;
Texture sParticlesTexture;

public void create() {
    camera = new OrthographicCamera();
    camera.setToOrtho(false, 1080, 1920);
    batch = new SpriteBatch();

    Pixmap pixmap = new Pixmap(Particles.MAX_PARTICLE_RADIUS*2, Particles.MAX_PARTICLE_RADIUS*2, Pixmap.Format.RGBA4444);
    pixmap.setColor(Color.WHITE);
    pixmap.fillCircle(pixmap.getWidth() / 2, pixmap.getHeight() / 2, Particles.MAX_PARTICLE_RADIUS);
    sParticlesTexture = new Texture(pixmap);
    pixmap.dispose();

    size = random(2, Particles.MAX_PARTICLE_RADIUS+1);
    for (int i = 0; i < 200; i++) {
        particlesArray.add(new Particles(random(size, width-size),
                random(0, height),
                0,
                random(0.15f*height, 0.25f*height)*0.15f*size,
                size));
    }
}

public void render(float deltaTime) {
    Gdx.gl.glClearColor(0, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    // update camera and draw in camera
    camera.update();
    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    drawFallingObjects(particlesArray, batch);
    batch.end();

    moveParticles(particlesArray, deltaTime);
}

public <T extends Objects> void drawFallingObjects(Array<T> objects, SpriteBatch batch) {
    for (T item : objects) {
        item.draw(batch);
    }
}

public void moveParticles(Array<Particles> particlesArray, float deltaTime) {
    for (Particles item : particlesArray) {
        size = random(2, Particles.MAX_PARTICLE_RADIUS+1);
        item.move(deltaTime);
        // re-spawn particles once they fall out of the screen
        if (item.y + item.mDiameter < 0) {
            item.x = random(size, width-size);
            item.y = height + 20;
            item.vy = random(0.15f*height, 0.25f*height)*0.15f*size;
            item.mDiameter = size;
        }
    }
}
And this is my Particles class:
import com....sParticlesTexture;

public class Particles {
    public static int MAX_PARTICLE_RADIUS = 4;

    public Particles(float x, float y, float vx, float vy, float mDiameter) {
        super(x, y, vx, vy, mDiameter);
        radius = mDiameter / 2;
    }

    @Override
    public void draw(SpriteBatch batch) {
        batch.draw(sParticlesTexture, x - radius, y - radius, mDiameter, mDiameter);
    }

    @Override
    public void move(float deltaTime) {
        y -= ceil(vy * deltaTime);
        x += ceil(vx * deltaTime);
    }

    public void dispose() {
        sParticlesTexture.dispose();
    }
}
All Particles objects use one and the same texture, which improves things a lot compared to creating hundreds of different textures. So what can be done now? I've googled a lot. What would help in my case? A framebuffer, a shader? And how should I implement these in my game? What about CpuSpriteBatch?
I also came across the particle system from LibGDX, but it doesn't work differently from what I do.
First of all, have a look at ParticleEffect, which is much more efficient: https://github.com/libgdx/libgdx/wiki/2D-ParticleEffects
If you are not trying to get that kind of effect and want to use a lot of particles, you may not want to perform such a large number of calculations in Java. Rather, use the NDK and calculate the values from C/C++.
As Nabin said, libgdx already has a particle system in place which is tuned to be efficient. Libgdx also has a tool called the 2D Particle Editor which allows you to view and edit particles before you add them to your application. A guide on the editor can be found on the libgdx site and gamedevelopment.blog.
From the code samples you provided, I think you could also possibly use a shader to create the same effect. The bonus is that it is all done on the GPU. Some example shaders can be found on Shadertoy, and there are guides on shaders from GamesFromScratch and a GLSL Shader Tutorial for Libgdx.
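For reference, loading and drawing an effect exported from the 2D Particle Editor only takes a few lines. This is a minimal sketch, not from the answers above; the file names "effects/particles.p" and the "effects" image folder are placeholders:
// Load an effect exported from the 2D Particle Editor (file names are placeholders).
ParticleEffect effect = new ParticleEffect();
effect.load(Gdx.files.internal("effects/particles.p"), Gdx.files.internal("effects"));
effect.setPosition(540, 960);
effect.start();

// In render(), between batch.begin() and batch.end():
effect.draw(batch, Gdx.graphics.getDeltaTime());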

libgdx drawing an image at touch location

I'm just trying to get libgdx to create a picture wherever I touch the screen.
Here's what I have, which isn't doing anything:
SpriteBatch batch;
Texture img;

@Override
public void create () {
    batch = new SpriteBatch();
    img = new Texture("badlogic.jpg");
}

@Override
public void render () {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
}
public class MyInputProcessor implements InputProcessor {
    public boolean touchDown (int x, int y, int pointer, int button) {
        batch.begin();
        batch.draw(img, Gdx.input.getX(), Gdx.input.getY());
        batch.end();
        return true;
    }
    ... (the rest of the input methods)
If you can't tell, I don't really know what I'm doing yet. I think it has to do with batch.draw() being in the touchDown method instead of in render(), but I can't figure out from research how to do it a different way either.
Or maybe this is all wrong too. The point is I'm doing this to learn, so hopefully the correct answer will help me understand some important things about Java in general.
LibGDX, like basically all game engines, re-renders the entire scene every time render() is called. render() is called repeatedly at the frame rate of the game (typically 60fps if you don't have a complex and unoptimized game). The first drawing-related thing you usually do in the render() method is to clear the screen, which you have already done with Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);. Then you re-draw the whole scene with whatever changes there might be since the last frame.
You are trying to draw something with the batch outside of the render method. In this case, you are doing it when there is a touch down. But since you are doing this only when there is a touch down, the object will appear and disappear on the next call to render(), so it will only be on screen for 1/60th of a second. If you want to do this with an input processor, you need to set some variable to true to indicate the render method should draw it, and other variables to indicate where to draw it. Then in the render() method, you draw the stuff if the variable is true.
Secondly, the x and y that an input processor gets do not necessarily (and usually don't) correspond with the x and y in OpenGL. This is because OpenGL has its own coordinate system that is not necessarily sized exactly the same as the screen's coordinate system. The screen has (0,0) in the top left with the Y axis going down, and the width and height of the screen matching the number of actual pixels on the screen. OpenGL has (0,0) in the center of the screen with the Y axis going up, and the width and height of the screen being 2 regardless of the actual screen pixels.
But the OpenGL coordinate system is modified with projection matrices. The LibGDX camera classes make this simpler. For 2D drawing, you need an OrthographicCamera. You set the width and height of the OpenGL world using the camera, and can also position the camera. Then you pass the camera's calculated matrices to the SpriteBatch for it to position the scene in OpenGL space.
So to get an input coordinate into your scene's coordinates, you need to use that camera to convert the coordinates.
Finally, LibGDX cannot magically know that it should pass input commands to any old input processor you have created. You have to tell it which InputProcessor it should use with a call to Gdx.input.setInputProcessor().
So to fix up your class:
SpriteBatch batch;
Texture img;
boolean isTouchDown;
final Vector3 touchPosition = new Vector3();
OrthographicCamera camera;

@Override
public void create () {
    batch = new SpriteBatch();
    img = new Texture("badlogic.jpg");
    camera = new OrthographicCamera();
    Gdx.input.setInputProcessor(new MyInputProcessor()); // Tell LibGDX what to pass input to
}

@Override
public void resize (int width, int height) {
    // Set the camera's size in relation to screen or window size.
    // In a real game you would do something more sophisticated or
    // use a Viewport class to manage the camera's size to make your
    // game resolution-independent.
    camera.viewportWidth = width;
    camera.viewportHeight = height;
    camera.update(); // re-calculate the camera's matrices
}

@Override
public void render () {
    Gdx.gl.glClearColor(1, 0, 0, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    batch.setProjectionMatrix(camera.combined); // pass camera's matrices to batch
    batch.begin();
    if (isTouchDown) { // Only draw this while the screen is touched.
        batch.draw(img, touchPosition.x, touchPosition.y);
    }
    batch.end();
}

public class MyInputProcessor implements InputProcessor {
    public boolean touchDown (int x, int y, int pointer, int button) {
        isTouchDown = true;
        touchPosition.set(x, y, 0); // Put screen touch coordinates into vector
        camera.unproject(touchPosition); // Convert the screen coordinates to world coordinates
        return true;
    }

    public boolean touchUp (int screenX, int screenY, int pointer, int button) {
        isTouchDown = false;
        return true;
    }

    //... (the rest of the input methods)
}

Scaling to screen using libgdx on Android

I'm creating a simple game for Android using libgdx. I've come to the issue of different screen sizes on different devices, yet haven't found any concrete documentation on how to deal with this problem.
I think I'm supposed to use an OrthographicCamera? An example of code I have so far is:
private OrthographicCamera camera;

public void create() {
    batch = new SpriteBatch();
    texture = new Texture(Gdx.files.internal("data/cube.png"));
    texture.setFilter(TextureFilter.Linear, TextureFilter.Linear);
    camera = new OrthographicCamera(1280, 720);
    sprite = new Sprite(texture);
    sprite.setOrigin(0, 0);
    sprite.setPosition(1280/2, 600);
}

public void render() {
    Gdx.gl.glClearColor(0.204f, 0.255f, 0.255f, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    sprite.draw(batch);
    batch.end();
}
Am I going along the right lines? I don't have any other devices to test on and my emulators are causing me issues.
In case you haven't already done so, you should upgrade your LibGDX version to the latest release, which is 1.0.0. In this version the so-called Viewport has been introduced.
You can find some screenshots and code snippets and everything you need to know here.
Basically you will have to decide on a certain strategy (your question sounds like you are interested in either ScreenViewport or StretchViewport) and then let that viewport manage your camera.
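As an illustration (not from the original answer; the 1280 x 720 world size is just an assumption), letting a StretchViewport manage the camera could look roughly like this:
private OrthographicCamera camera;
private Viewport viewport;

@Override
public void create() {
    camera = new OrthographicCamera();
    viewport = new StretchViewport(1280, 720, camera); // world size is an assumption
}

@Override
public void resize(int width, int height) {
    viewport.update(width, height); // recalculates the camera for the new screen size
}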
What I do in my libGDX projects is override the resize method and set the OrthographicCamera to the size of the screen, using the built-in method setToOrtho(boolean yDown), which centers the camera on the current screen size and takes a parameter indicating whether you want the y-axis pointing down or not:
@Override
public void resize(int width, int height) {
    camera.setToOrtho(false);
}
This, however, will not change the size of your textures. If you want to rescale your textures as well, then rather than setting them to an absolute size (e.g. 15 pixels), I would recommend setting them to a percentage of the screen size.
Another, sometimes more effective, method is to work out the correct sizes for a certain reference resolution (e.g. 800 x 480), then work out the scale factor in width and height and apply that to your sprites, e.g.:
@Override
public void resize(int width, int height) {
    super.resize(width, height);
    camera.setToOrtho(false);
    // The following assumes the sprites are normally correctly scaled
    // for a screen size of 800 x 480; change to whatever you need.
    // widthChange and heightChange should both be float class variables.
    widthChange = width / 800f;
    heightChange = height / 480f;
}
public void render() {
    Gdx.gl.glClearColor(0.204f, 0.255f, 0.255f, 1);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);

    batch.setProjectionMatrix(camera.combined);
    batch.begin();
    // Rather than using sprite.draw(batch), use:
    batch.draw(sprite, sprite.getX() * widthChange, sprite.getY() * heightChange,
            sprite.getWidth() * widthChange, sprite.getHeight() * heightChange);
    batch.end();
}

AndEngine GLES 2 - black screen, no errors

I am writing a game for Android using AndEngine GLES 2. Everything was working properly - I had a background image, there were sprites moving around and even some music - until recently I tried something new (I wanted to be able to switch between two different scenes) when the display turned black.
I could still execute the game and there were no error shown. All log entries I made during the game were shown, even the music was playing so I knew the game was running "properly", but I couldn't see any image. Nothing. All black.
So I thought changing everything back to how it was before this "error" appeared would do the trick. But the screen is still black.
I even tried commenting everything out but the background image - nothing.
Now if it is not too much to ask, could anyone please look over this short piece of code and tell me what is wrong there?
These are the variables I use:
private SmoothCamera camera;
private BitmapTextureAtlas bitmapTextureAtlas;
private Scene scene;
private Sprite background;
The EngineOptions I never changed, so they should be alright.
@Override
public EngineOptions onCreateEngineOptions() {
    float positionX = 80f;   // horizontal (x) position of the camera
    float positionY = 280f;  // vertical (y) position of the camera
    float velocityX = 200f;  // velocity of the horizontal camera movement
    float velocityY = 200f;  // velocity of the vertical camera movement
    float zoomFactor = 1f;   // the camera's zoom factor (standard := 1)
    this.camera = new SmoothCamera(positionX, positionY, this.getWindowManager().getDefaultDisplay().getWidth(), this.getWindowManager().getDefaultDisplay().getHeight(), velocityX, velocityY, zoomFactor);

    EngineOptions options = new EngineOptions(true, ScreenOrientation.LANDSCAPE_SENSOR, new RatioResolutionPolicy(this.camera.getWidth(), this.camera.getHeight()), this.camera);
    return options;
}
Here I create the TextureAtlas and load a background image.
@Override
protected void onCreateResources() {
    // create the TextureAtlas
    BitmapTextureAtlasTextureRegionFactory.setAssetBasePath("gfx/");
    this.bitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST);

    // background
    this.background = new Sprite(0, 0, BitmapTextureAtlasTextureRegionFactory.createTiledFromAsset(this.bitmapTextureAtlas, this, "background.png", 0, 0, 1, 1), this.getVertexBufferObjectManager());
    this.mEngine.getTextureManager().loadTexture(this.bitmapTextureAtlas);
}
And finally the Scene is instantiated and the background gets attached.
@Override
protected Scene onCreateScene() {
    this.scene = new Scene();
    this.scene.attachChild(this.background);
    return this.scene;
}
Now why would this small Activity not show anything? I forgot to mention: it's a SimpleBaseGameActivity.
Well, since AndEngine GLES2 does not run on the emulator, I have to use my phone (Samsung Galaxy GIO) and can't test the app on another device.
Did anyone stumble upon a similar problem?
Any help is really much appreciated and thank you for your time !
Christoph
I think the problem is here:
this.bitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 1024, 1600, TextureOptions.NEAREST);
The dimensions of the Atlas are supposed to be powers of 2.
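A likely fix (a sketch, assuming 2048 is tall enough for your background) is to round the atlas height up to the next power of two:
// 1600 is not a power of two; 2048 is the next power of two that fits
this.bitmapTextureAtlas = new BitmapTextureAtlas(this.getTextureManager(), 1024, 2048, TextureOptions.NEAREST);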
