I'm using the camera anchor in ARCore to create a static object in the scene.
float scaleFactor = 1.0f;
camera.getPose().toMatrix(cameraAnchorMatrix, 0);
// Update and draw the model and its shadow.
Matrix.rotateM(cameraAnchorMatrix, 0, 110, 0f, 1f, 0f);
virtualObject.updateModelMatrix(cameraAnchorMatrix, scaleFactor / 10);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba);
However, rotating the object sometimes makes it invisible, and translating it doesn't seem to work at all. I'm also more or less guessing the values for the rotation. In addition, the object is only visible from the top; how can I make it look more natural? (It's an arrow that's supposed to show a direction.)
How can I move the object to the bottom left corner of the screen and rotate it from left to right?
This is how it looks at the moment. I want to move the arrow down and to the left and also tilt it forward. Then it should be able to rotate left and right. Thank you for your help.
Solved it with the following code:
camera.getPose().compose(Pose.makeTranslation(0.37f, -0.17f, -1f)).extractTranslation().toMatrix(cameraAnchorMatrix, 0);
This makes the object appear 'behind' the camera and moves it to the bottom-left. Then you can rotate the object with the angle value:
Matrix.rotateM(cameraAnchorMatrix, 0, 230 - directionChange, 0f, 1f, 0f);
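Putting the pieces together, the per-frame update looks roughly like this (just a sketch assembled from the snippets above; directionChange is the angle in degrees that you compute yourself, and the other variables are the same as in the standard ARCore sample code):
float scaleFactor = 1.0f;
// Rebuild the model matrix from the camera pose each frame: offset to the bottom-left,
// keep only the translation, then apply the yaw rotation around the vertical axis.
camera.getPose()
        .compose(Pose.makeTranslation(0.37f, -0.17f, -1f))
        .extractTranslation()
        .toMatrix(cameraAnchorMatrix, 0);
Matrix.rotateM(cameraAnchorMatrix, 0, 230 - directionChange, 0f, 1f, 0f);
virtualObject.updateModelMatrix(cameraAnchorMatrix, scaleFactor / 10);
virtualObject.draw(viewmtx, projmtx, colorCorrectionRgba);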
I am currently trying to snap a crosshair sprite to my game's cursor using libGDX. The game is a top-down view:
Texture crosshair_text = new Texture(Gdx.files.internal("data/crosshair1.png"));
this.crosshair = new Sprite(crosshair_text, 0, 0, 429, 569);
//...
@Override
public void render() {
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
cam.update();
batch.setProjectionMatrix(cam.combined);
batch.begin();
//.. sprites that scale based on camera (top-down view)
batch.end();
//draws 'ui' elements
floatingBatch.begin();
//...
//snap to cursor
this.crosshair.setPosition( Gdx.input.getX(), (Gdx.graphics.getHeight()-Gdx.input.getY()) );
//transform
this.crosshair.setScale(1/cam.zoom);
//draw
this.crosshair.draw(floatingBatch);
floatingBatch.end();
}
Sorry if there are errors that I didn't catch; this isn't an exact copy of my code. The problems here are that (1) the sprite doesn't snap to the correct position, and (2) the crosshair sprite lags behind the current position of the mouse on the screen. Can anyone give me insight on how to fix either of these two issues?
Your position might not be correct because the screen location isn't necessarily the right location to draw onto with your OrthographicCamera; try unprojecting the coordinates first. E.g.:
Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0); // raw screen coordinates; unproject() flips the y-axis for you
cam.unproject(mousePos); //Unproject it to get the correct camera position
this.crosshair.setPosition(mousePos.x, mousePos.y); //Set the position
You also need to make sure your floating batch uses the camera's projection matrix; add the following code:
floatingBatch.setProjectionMatrix(cam.combined);
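Putting both parts together, the relevant piece of render() might look roughly like this (a sketch only, using the same field names as in your snippet):
floatingBatch.setProjectionMatrix(cam.combined); // keep the UI batch in sync with the camera
floatingBatch.begin();
Vector3 mousePos = new Vector3(Gdx.input.getX(), Gdx.input.getY(), 0); // raw screen coordinates
cam.unproject(mousePos); // convert screen coordinates to world coordinates
this.crosshair.setPosition(mousePos.x, mousePos.y);
this.crosshair.setScale(1 / cam.zoom); // keep the crosshair a constant on-screen size
this.crosshair.draw(floatingBatch);
floatingBatch.end();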
Instead of drawing a Sprite at the mouse position, you could change the cursor image:
Gdx.input.setCursorImage(cursorPixMap, hotspotX, hotspotY);
Where cursorPixMap is a Pixmap of the new cursor image, and hotspotX and hotspotY are the "origin" of the Pixmap/cursor image. In your case that would be the center of the crosshair.
So basically, Gdx.input.getX() and Gdx.input.getY() then return the current position of that hotspot.
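For example (a rough sketch; the file name is just the one from your question, and the hotspot is placed at the center of the image):
Pixmap cursorPixmap = new Pixmap(Gdx.files.internal("data/crosshair1.png"));
// Hotspot at the center of the crosshair image
Gdx.input.setCursorImage(cursorPixmap, cursorPixmap.getWidth() / 2, cursorPixmap.getHeight() / 2);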
I suggest reading the wiki article about the cursor.
I've searched all around Google and this website for information about this problem, but cannot solve it.
I'm a newbie in game development and libGDX, and cannot find a well-explained solution on how to make my game work on all the various screen sizes.
Would you kindly help me?
Thanks
When using the newest libgdx version, you will find the Viewport class...
The viewport describes the transformation of the coordinate system of the screen (being the pixels from 0,0 in the lower left corner to e.g. 1280,768 in the upper right corner (depending on the device)) to the coordinate system of your game and scene.
The Viewport class offers different ways to do that transformation. It can stretch your scene coordinate system to exactly fit the screen coordinate system, which might change the aspect ratio and, for example, "stretch" your images or buttons.
It's also possible to fit the scene coordinate system, keeping its aspect ratio, into the screen, which might produce black borders, e.g. when you have developed the game for 4:3 screens and now run it on 16:10 displays.
The (in my opinion) best option is to fit the scene into the screen by matching either the longest or the shortest edge.
This way, you can have a screen/window coordinate system from (0,0) to (1280,768) and create your game coordinate system from, say, (0,0) to (16,10) in landscape mode. When matching the longest edge, the lower left corner of the screen will be (0,0) and the lower right will be (16,0); on devices with a different aspect ratio, the y-values of the upper corners might differ a bit.
When matching the shortest edge, your scene coordinates will always be shown from (x,0) to (x,10), but the right edge might not have an x value of exactly 16, since device resolutions differ.
When using that method, you might have to reposition some buttons or UI elements if they are supposed to be rendered at the top or the right edge.
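A minimal sketch of that setup (the names and the 16x10 world size are just examples):
OrthographicCamera camera = new OrthographicCamera();
// ExtendViewport matches one edge and extends the world on the other axis;
// FitViewport would instead keep the aspect ratio and add black borders.
ExtendViewport viewport = new ExtendViewport(16, 10, camera);

// in resize(int width, int height):
viewport.update(width, height, true); // true also centers the camera

// before drawing each frame:
batch.setProjectionMatrix(camera.combined);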
Hope it helps...
I suffered from this problem too, but in the end I found a working solution for drawing anything with a SpriteBatch or Stage in libGDX. We can do this using an OrthographicCamera.
First, choose one constant resolution that is best for your game. Here I have taken 1280x720 (landscape).
class ScreenTest implements Screen {
final float appWidth = 1280, screenWidth = Gdx.graphics.getWidth();
final float appHeight = 720, screenHeight = Gdx.graphics.getHeight();
OrthographicCamera camera;
SpriteBatch batch;
Stage stage;
Texture img1;
Image img2;
public ScreenTest() {
camera = new OrthographicCamera();
camera.setToOrtho(false, appWidth, appHeight);
batch = new SpriteBatch();
batch.setProjectionMatrix(camera.combined);
img1 = new Texture("your_image1.png");
img2 = new Image(new Texture("your_image2.png"));
img2.setPosition(0, 0); // drawing from (0,0)
stage = new Stage(new StretchViewport(appWidth, appHeight, camera));
stage.addActor(img2);
}
@Override
public void render(float delta) {
batch.begin();
batch.draw(img1, 0, 0);
batch.end();
stage.act(delta);
stage.draw();
// Also You can get touch input according to your Screen.
if (Gdx.input.isTouched()) {
System.out.println(" X " + Gdx.input.getX() * (appWidth / screenWidth));
System.out.println(" Y " + Gdx.input.getY() * (appHeight / screenHeight));
}
}
// ...
}
Run this code at any resolution and it will adjust to that resolution without any problems.
I am using Libgdx.
I want to simulate fog in my game using a Pixmap, but I have a problem when generating the "fogless" circle. First, I make a Pixmap filled with black (it is slightly transparent). After filling it I want to draw a filled circle onto it, but the result is not what I expected.
this.pixmap = new Pixmap(640, 640, Format.LuminanceAlpha);
Pixmap.setBlending(Blending.None); // disable Blending
this.pixmap.setColor(0, 0, 0, 0.9f);
this.pixmap.fill();
//this.pixmap.setColor(0, 0, 0, 0);
this.pixmap.fillCircle(200, 200, 100);
this.pixmapTexture = new Texture(pixmap, Format.LuminanceAlpha, false);
In the render() method:
public void render() {
mapRenderer.render();
batch.begin();
batch.draw(pixmapTexture, 0, 0);
batch.end();
}
If I use Format.Alpha when creating the Pixmap and the Texture, I don't see the more translucent circle either.
Here is a screenshot of my problem (image omitted).
Could somebody help me? What should I do, or what should I initialize, before drawing a fully transparent circle? Thanks.
UPDATE
I have found the answer to my problem: I have to disable blending to avoid it.
Now my code:
FrameBuffer fbo = new FrameBuffer(Format.RGBA8888, 620, 620, false);
Texture tex = EnemyOnRadar.assetManager.get("data/sugar.png", Texture.class);
batch.begin();
// others
batch.end();
fbo.begin();
batch.setColor(1,1,1,0.7f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.draw( tex, 100, 100);
batch.end();
fbo.end();
But I don't see the circle (it's a PNG image with a transparent background and a white filled circle).
I am not sure if this works for you, but I'll share it anyway:
You could use FrameBuffers and do the following:
Draw everything you want to draw on screen.
End your SpriteBatch, begin your FrameBuffer, and begin the SpriteBatch again.
Draw the fog, which fills the whole "screen" (the FrameBuffer) with a black, non-transparent color.
Draw the "fogless" circle as a white circle at the position where you want to remove the fog.
Set the FrameBuffer's alpha channel (transparency) to 0.7 or something like that.
End the SpriteBatch and the FrameBuffer to draw it to the screen.
What happens? You draw the normal scene, without fog. You create a "virtual screen", fill it with black and overdraw the black with a white circle. Then you set a transparency on this "virtual screen" and overdraw your real screen with it. The part of the screen under the white circle appears bright, while the black rest makes your scene darker.
Something to read: 2D Fire effect with libgdx, more or less the same as fog.
My question to this: Libgdx lighting without box2d
EDIT: another Tutorial.
Let me know if it helps!
EDIT: Some Pseudocode:
In create:
fbo = new FrameBuffer(Format.RGBA8888, width, height, false);
In render:
fbo.begin();
Gdx.gl.glClearColor(0f, 0f, 0f, 1f); // Set the clear color to black, non-transparent
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT); // Clear the "virtual screen" with the clear color
spriteBatch.begin(); // Start the SpriteBatch
// Draw the filled circles somehow // Draw your Circle Texture as a white, not transparent Texture
spriteBatch.end(); // End the spritebatch
fbo.end(); // End the FrameBuffer
spriteBatch.begin(); // start the spriteBatch, which now draws to the real screen
// draw your textures, sprites, whatever
spriteBatch.setColor(1f, 1f, 1f, 0.7f); // Sets a global alpha on the SpriteBatch; it may also apply to the stuff you have already drawn. If so, just call spriteBatch.end() before and then spriteBatch.begin() again.
spriteBatch.draw(fbo.getColorBufferTexture(), 0, 0); // draws the FBO's color texture to the screen (it may appear flipped vertically)
spriteBatch.end();
tell me if it works
OK, I have this code:
@Override
public void render() {
// do not update game world when paused
if (!paused) {
// Update game world by the time that has passed
// since last render frame
worldController.update(Gdx.graphics.getDeltaTime());
}
// Sets the clear screen color to: Cornflower Blue
Gdx.gl.glClearColor(0x64/255.0f, 0x95/255.0f, 0xed/255.0f,
0xff/255.0f);
// Clears the screen
Gdx.gl.glClear(GL10.GL_COLOR_BUFFER_BIT);
// Render game world to screen
worldRenderer.render();
}
And it draws a light blue background onto the screen. I am attempting to create a gradient that goes from a dark blue at the top to a light blue towards the bottom. Is there a simple way to do this? I'm new to libGDX and OpenGL, so I'm trying to learn from a book, but I can't seem to find the answer to this one. I've heard of drawing a big square and giving the vertices different colors, but I'm unsure of how to do this.
In libGDX, the ShapeRenderer object has a filledRect() method that takes arguments for its position and size as well as four colors. Those colors are interpolated into a four-corner gradient. If you want a vertical gradient, just make the top corners one color and the bottom corners another color. Something like this:
shapeRenderer.filledRect(x, y, width, height, lightBlue, lightBlue, darkBlue, darkBlue);
From the API for ShapeRenderer:
The 4 color parameters specify the color for the bottom left, bottom right, top right and top left corner of the rectangle, allowing you to create gradients.
It seems the ShapeRenderer.filledRect method has been removed in later libGDX versions. Now the way to do this is as follows:
shapeRenderer.set(ShapeRenderer.ShapeType.Filled);
shapeRenderer.rect(
x,
y,
width,
height,
Color.DARK_GRAY,
Color.DARK_GRAY,
Color.LIGHT_GRAY,
Color.LIGHT_GRAY
);
The parameters of the rect method work the same way as those of filledRect used to, as in Kevin Workman's answer.
There are some further details worth bearing in mind before committing to ShapeRenderer. I for one will be sticking with stretching and tinting a Texture.
private Color topCol = new Color(0xd0000000);
private Color btmCol = new Color(0xd0000000);
@Override
public void render(float delta) {
...
batch.end(); //Must end your "regular" batch first.
myRect.setColor(Color.YELLOW); // Must be called, I don't see yellow, but nice to know.
myRect.begin(ShapeRenderer.ShapeType.Filled); //Everyone else was saying call `set`.
//Exception informed me I needed `begin`. Adding `set` after was a NOP.
myRect.rect(
10, 400,
//WORLD_H - 300, // WORLD_H assumed 1920. But ShapeRenderer uses actual pixels.
420,
300,
btmCol, btmCol, topCol, topCol
);
myRect.end();
I was hoping to change transparency dynamically as player health declines. The btmCol and topCol had no effect on transparency, hence I'll stick to Textures. Translating pixel space is no biggie, but this is much more than the proffered single or double line above.
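Two things I would check here (assumptions on my part, not something I verified against the code above): ShapeRenderer does not enable GL blending by itself, so the alpha in those colors is ignored unless blending is turned on around the draw call, and the Color(int) constructor reads the packed value as RGBA8888, meaning the alpha sits in the lowest byte rather than the highest. A sketch:
Gdx.gl.glEnable(GL20.GL_BLEND); // assumption: alpha only takes effect once blending is enabled
Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
myRect.begin(ShapeRenderer.ShapeType.Filled);
myRect.rect(10, 400, 420, 300,
        new Color(0, 0, 0, 0.8f), new Color(0, 0, 0, 0.8f),   // bottom-left, bottom-right
        new Color(0, 0, 0, 0.3f), new Color(0, 0, 0, 0.3f));  // top-right, top-left
myRect.end();
Gdx.gl.glDisable(GL20.GL_BLEND);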
I want to create a camera moving above a tiled plane. The camera is supposed to move in the XY-plane only and to look straight down all the time. With an orthogonal projection I expect a pseudo-2D renderer.
My problem is, that I don't know how to translate the camera. After some research it seems to me, that there is nothing like a "camera" in OpenGL and I have to translate the whole world. Changing the eye-position and view center coordinates in the Matrix.setLookAtM-function just leads to distorted results.
Translating the whole MVP-Matrix does not work either.
I'm running out of ideas now; do I have to translate every single vertex every frame directly in the vertex buffer? That does not seem plausible to me.
I derived GLSurfaceView and implemented the following functions to setup and update the scene:
public void onSurfaceChanged(GL10 unused, int width, int height) {
GLES20.glViewport(0, 0, width, height);
float ratio = (float) width / height;
// Setup the projection Matrix for an orthogonal view
Matrix.orthoM(mProjMatrix, 0, -ratio, ratio, -1, 1, 3, 7);
}
public void onDrawFrame(GL10 unused) {
// Draw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
//Setup the camera
float[] camPos = { 0.0f, 0.0f, -3.0f }; //no matter what else I put in here the camera seems to point
float[] lookAt = { 0.0f, 0.0f, 0.0f }; // to the coordinate center and distorts the square
// Set the camera position (View matrix)
Matrix.setLookAtM( vMatrix, 0, camPos[0], camPos[1], camPos[2], lookAt[0], lookAt[1], lookAt[2], 0f, 1f, 0f);
// Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
//rotate the viewport
Matrix.setRotateM(mRotationMatrix, 0, getRotationAngle(), 0, 0, -1.0f);
Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0);
//I also tried to translate the viewport here
// (and several other places), but I could not find any solution
//draw the plane (actually a simple square right now)
mPlane.draw(mMVPMatrix);
}
Changing the eye-position and view center coordinates in the "LookAt"-function just leads to distorted results.
If you got this from the android tutorial, I think they have a bug in their code. (made a comment about it here)
Try the following fixes:
Use setLookAtM to point to where you want the camera to be.
In the shader, change the gl_Position line
from: " gl_Position = vPosition * uMVPMatrix;"
to: " gl_Position = uMVPMatrix * vPosition;"
I'd think the //rotate the viewport section should be removed as well, as it is not rotating the camera properly. You can change the camera's orientation in the setLookAtM function.
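For reference, a rough sketch of what onDrawFrame could then look like, assuming hypothetical fields camX and camY that you update each frame to move across the plane, plus the matrix fields from the question:
// Sketch only: translate the camera in the XY plane while it keeps looking straight at the plane.
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
Matrix.setLookAtM(vMatrix, 0,
        camX, camY, -3.0f,   // eye: same z-offset as before, now offset in x/y
        camX, camY, 0.0f,    // center: the point on the plane directly in front of the eye
        0f, 1f, 0f);         // up vector
Matrix.multiplyMM(mMVPMatrix, 0, mProjMatrix, 0, vMatrix, 0);
mPlane.draw(mMVPMatrix); // with gl_Position = uMVPMatrix * vPosition in the vertex shader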