I recently started using Java 3D for game development.
I am currently working on an assignment, a simple racing game to be specific.
What you are seeing in this screenshot is just a box to which I have given a road texture, and another box representing a car (which will be replaced by an external model later).
This is another screenshot from a different angle.
As you can see, I am not able to render the whole road. I have been trying to render the whole path for a long time, but somehow the Java 3D engine does not want to render the whole road; it stops rendering at some point.
What can I do to get around this? I want to render the whole distance, so the whole road will be visible.
Help would be greatly appreciated. :)
I'm not seeing any screens either, but you're probably looking for View.setBackClipDistance(double distance), which sets the distance beyond which objects are no longer rendered.
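A minimal sketch of where that call might go, assuming a SimpleUniverse-based setup (the class name and distance value are just illustrative):

```java
import com.sun.j3d.utils.universe.SimpleUniverse;
import javax.media.j3d.View;

public class ClipDistanceExample {
    public static void main(String[] args) {
        SimpleUniverse universe = new SimpleUniverse();

        // Push the back clipping plane further out so distant geometry
        // (the far end of the road) is still rendered; the default is small.
        View view = universe.getViewer().getView();
        view.setBackClipDistance(1000.0);

        // ... add the scene branch group, set up the view transform, etc. ...
    }
}
```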
I'm working on a simple game with libGDX and want the main character to stay fixed in the center of the screen while the world moves when I press a button, and I was wondering how to do it. I was thinking of adding a physics body to the world that contains the other bodies and applying impulses to it when the button is pressed; is this possible in libGDX? And if it is, can I apply other impulses or forces to the bodies contained in the world's body? I think this approach would be the best for me if it is possible, because I have to work a lot with physics, but if you have other ideas, please tell me.
There's no need to think about applying forces to all the non-character objects; that's just going to get messy very quickly.
The simple solution is to move your camera so that it always looks at your character. So your game loop may look something like:
Process input
Update physics, character and other entity positions.
Move camera to point at character's new position.
Render
This way, you can update your game world without having to think about the camera at all. Then, when it comes to rendering, you can position your camera and render your graphics without needing to know anything about the game physics. It keeps the physics and rendering relatively independent, and makes it much easier to change things in the future.
For example, you may later decide that you want the camera to follow your character for the most part, but then follow a baddy whilst it is their turn. This is now easy to do: you just specify the character / entity to look at in your game logic, and then position the camera to look at whatever target that is, before you render.
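As a rough libGDX-flavoured sketch of that loop (the Player type and viewport size are placeholders, not anything from your project):

```java
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;

public class CameraFollowExample {
    private final OrthographicCamera camera = new OrthographicCamera(800, 480);
    private final SpriteBatch batch = new SpriteBatch();

    // Called once per frame, after input and physics have updated the player.
    public void render(Player player) {
        // Point the camera at the character's new position.
        camera.position.set(player.x, player.y, 0);
        camera.update();

        // Render everything relative to the camera.
        batch.setProjectionMatrix(camera.combined);
        batch.begin();
        // ... draw the world and the character here ...
        batch.end();
    }

    // Placeholder for whatever entity the camera should follow.
    public static class Player {
        public float x, y;
    }
}
```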
I'm currently developing a 2D RPG with LWJGL, and am still in the engine stage of development. I've got a lot of the tech I want created, but one of my big problems is fixing the camera on the player. All the solutions I've seen involve moving the world and keeping the player still, which can work, but it seems apparent that this can cause calculation issues if not closely monitored. Normally, I'd write a system where I wouldn't have to worry about it, but I refuse, because I eventually intend to add multiplayer capability, where a moving world would be unplayable.
Is there a way to affix the camera to an object or point that can move WITHOUT using translate to move the world around? Also, I'd like to avoid Slick if possible. That would require me to rework much of my game engine as it currently stands.
Whenever you project a 3D scene onto a 2D screen, you need to transform everything according to the point of view of the observer (the so-called camera or view).
I guess you can't escape from this. What you usually do is have a Camera object which holds the position and rotation used to build the view matrix, which is passed to the vertices of your scene through a uniform to the shaders. Passing transformation matrices to shaders is the norm, so you shouldn't feel burdened by it. You can always premultiply it with the perspective matrix.
You must move the whole world to match the position of your camera simply because you need to transform everything in your scene as it is seen from that point of view; otherwise, how could you then project it onto your screen? There is no "move the camera, keep the world still" concept.
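To make that concrete, here is a minimal sketch of a translation-only view matrix uploaded as a uniform. It assumes LWJGL 2's GL20 bindings and a uniform named u_view (both assumptions; in LWJGL 3 the upload call would be glUniformMatrix4fv):

```java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL20.*;

public class ViewMatrixExample {
    // Build a column-major 4x4 view matrix that translates the world by the
    // negative camera position (the same thing as "moving the camera").
    public static FloatBuffer viewMatrix(float camX, float camY, float camZ) {
        float[] m = {
            1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            -camX, -camY, -camZ, 1
        };
        FloatBuffer buffer = BufferUtils.createFloatBuffer(16);
        buffer.put(m).flip();
        return buffer;
    }

    // Upload it to the shader's "u_view" uniform (the name is an assumption).
    public static void uploadView(int shaderProgram, float camX, float camY, float camZ) {
        int location = glGetUniformLocation(shaderProgram, "u_view");
        glUniformMatrix4(location, false, viewMatrix(camX, camY, camZ));
    }
}
```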
Move the world visually; it's how every other RPG does it. Don't move the actual world's location, though.
Draw everything but the UI normally, then translate it all according to the player's position (i.e. glTranslatef(-player.x, -player.y, 0)). This is all done in the render method. In networked multiplayer, the viewport is translated for that specific player (i.e. Bob's screen is translated based on Bob's position, Jane's is translated based on Jane's position). Should you instead want single-screen multiplayer, you will probably have to use multiple framebuffers (one per player) and use them as viewports.
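A rough sketch of that draw order using the fixed-function pipeline (the draw calls themselves are placeholders):

```java
import static org.lwjgl.opengl.GL11.*;

public class ScrollingRenderer {
    // Draw the world offset by the player's position, then the UI unoffset.
    public void render(float playerX, float playerY) {
        glPushMatrix();
        // Shift everything so the player ends up at the origin.
        glTranslatef(-playerX, -playerY, 0f);
        // ... draw tiles, entities and the player here ...
        glPopMatrix();

        // ... draw the UI here, without the translation ...
    }
}
```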
I would like to develop a game on the Android platform. I have about a year of experience with Java and have also used the OpenGL library in C++. I also programmed Minesweeper and Connect Four in Java. Basically, here's the type of game I want to create:
Pressing the screen would make your character go up on the screen, and releasing it would make it go down. I know there are games like this already, but it doesn't matter to me; it's my current goal.
The structure of both games I programmed was quite simple: it was only a GridLayout. This game wouldn't fit in any predefined layout. Also, I have absolutely no idea how to test for character/environment collisions. I'm also wondering what would be the easiest/fastest way to draw the "collision" environment. I assume it would be with OpenGL, but from what I know, it would still take a long time and wouldn't be that easy.
I've been trying to find a tutorial about this but obviously, I've been unsuccessful.
PS: I already know the basics to make an Android app so you shouldn't need to worry about that.
Think about each segment of what you're trying to achieve individually.
First off, you could probably read up on libgdx: http://code.google.com/p/libgdx/
It's a great Android game engine which will do a lot of the work for you.
For the player, think of it as just incrementing the player's y position by a few pixels while the screen is pressed, and decrementing it otherwise.
For the map, you'd probably need some sort of 2D polygon-based collision for the upper and lower collidable environments; libgdx has a physics library built in, but I'm not sure how good its support is for polygon-based collision. And finally, just create the map, make it wider than your game screen, and move the camera along as the player moves.
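A small sketch of the press-to-rise, release-to-fall movement using libGDX's input polling (the speed value and class names are made up):

```java
import com.badlogic.gdx.Gdx;

public class PlayerMovement {
    private float y;                          // player's vertical position in pixels
    private static final float SPEED = 200f;  // pixels per second, arbitrary

    // Call once per frame from the game loop.
    public void update(float deltaSeconds) {
        if (Gdx.input.isTouched()) {
            y += SPEED * deltaSeconds;   // finger down: move up
        } else {
            y -= SPEED * deltaSeconds;   // finger up: fall back down
        }
        // ... clamp y against the level's upper/lower bounds here ...
    }

    public float getY() {
        return y;
    }
}
```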
I'm developing a 2D side-scrolling game using AndEngine.
I'm using their SVG extension (I'm using vector graphics).
But I discovered a strange and ugly effect while moving my player (specifically while the camera is chasing the player, i.e. while the camera is changing its position).
The images of my sprites just look different; they appear blurred, or there is an effect as if the images were moving (not changing their position, just a jittery effect that is really hard to explain or name properly). Hopefully this image may explain it a bit:
It shows more or less how it looks in the game, where:
a) the "FIRST" image shows the square while the player is moving (the camera isn't); the image looks as it should
b) the "SECOND" image is the same image, but with the strange effect (which looks like the image moving/blurring while the camera moves [chasing the player])
A friend of mine told me that it might be a hardware problem:
"the blurring that you notice is actually a hardware problem. Some phones "smooth" the content on the screen to give a nicer feel to applications. I don't know if it's the screen or the graphics processor, but it doesn't occur on my wife's Samsung Captivate. It happens on my Atrix and Xoom though. It's really noticable on the Xoom due to the large screen size."
But it seems there is a way to prevent it, since I have tested many similar games where the camera chases the player, and I could not notice such an effect.
Is there a way to turn this off in code?
I'm grateful for the previous answers; unfortunately, the problem still exists.
So far, I have tried:
casting the values passed to setCenter (which is executed in updateChaseEntity) to int
testing the game with PNG images instead of the SVG extension and vector graphics
different TextureOptions
hardwareAcceleration
If someone has a different idea of what may cause this strange effect, I would be really grateful for help. Thank you.
Some devices (Xperia Play) bleed everywhere when trying to draw things that are moving quickly. For example a red icon on the application list leaves a blur behind it. You could try hardwareAcceleration in the manifest (on and off) to see if it makes a difference.
You'd probably get the same effect even if you weren't using SVG.
When your player is moving to the right and the camera begins chasing him, all other sprites except the player move to the left. Try printing the absolute coordinates of your "blurring" sprite (or some of its anchor points) to the log. The X coordinate of the sprite should decrease linearly. If you notice it increasing at times, that could be the cause of the blur.
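For example, a quick way to log that each frame, assuming the GLES2 branch of AndEngine and a reference to the sprite in question (names are placeholders):

```java
import android.util.Log;
import org.andengine.engine.handler.IUpdateHandler;
import org.andengine.entity.sprite.Sprite;

public class SpritePositionLogger implements IUpdateHandler {
    private final Sprite sprite;

    public SpritePositionLogger(Sprite sprite) {
        this.sprite = sprite;
    }

    @Override
    public void onUpdate(float pSecondsElapsed) {
        // While the camera chases the player to the right, x should decrease
        // smoothly; any jump upwards hints at the source of the jitter.
        Log.d("SpritePos", "x=" + sprite.getX() + " y=" + sprite.getY());
    }

    @Override
    public void reset() {
    }
}
```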
Hope this will help.
It sounds like it's due to the camera moving in fractional (floating-point) increments, making the SVG components rest on non-integer boundaries, so the renderer's anti-aliasing makes this visible. Try moving the camera in integer increments by casting the camera values to int.
I'm not familiar with this engine, but I wonder why you would use vector graphics for a pixelated art style. I'd be surprised if your character in the screenshot is really vector art... maybe it's a texture imported into the SVG? Back in the day I attempted to use Flash a few times and made the same mistake... I'm not saying it's not possible, but Flash and other vector software aren't intended for creating pixel art. There is a reason why most Flash games have a similar look.
The best way to debug it is to try a different-looking sprite.
Maybe it is just the slow response time of your device's display.
I'm also an AndEngine developer, and I have never seen such behavior.
Sometimes jittering can be fixed by using FixedStepEngine; it might help.
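If it helps, a minimal sketch of swapping in a FixedStepEngine, assuming the GLES2 branch of AndEngine (60 steps per second is an arbitrary choice):

```java
import org.andengine.engine.Engine;
import org.andengine.engine.FixedStepEngine;
import org.andengine.engine.options.EngineOptions;
import org.andengine.ui.activity.SimpleBaseGameActivity;

public abstract class FixedStepActivity extends SimpleBaseGameActivity {
    @Override
    public Engine onCreateEngine(EngineOptions pEngineOptions) {
        // Update the world in fixed 1/60 s steps regardless of frame rate,
        // which often smooths out camera/entity jitter.
        return new FixedStepEngine(pEngineOptions, 60);
    }
}
```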
If you can post your code maybe we can better help you.
Yesterday I spent the entire day trying to solve a problem, and it is still unsolved. I searched on Google etc. for every combination of words I could imagine to find a solution, but without success. :(
The problem boils down to the following:
I started programming with GLES20 (not GLES10!) on Android. Matrices and objects are handled quite differently there, so there aren't any methods like push or pop matrix.
I want to rotate a globe/sphere just by touching and moving my fingers. The touch handling etc. works fine, but the rotation itself never does. Every time I rotate first around the x-axis and then around the y-axis, the rotation is still computed around the object's local-space axes and not around two global axes. This happens whatever I do... :/
Some people have been looking for a solution to the same problem, but mostly for GLES10 or completely different programming languages, never GLES20 and Java.
I will also post some code later, when I get access to a computer.
Perhaps someone already understands what my problem is.
Thank you so much! :)
Chrise
In OpenGL ES 2.0 there aren't any of the fixed-function matrix operations, and you're supposed to take care of your own matrices. So, to get the effect of pushing and popping matrices, when you're drawing you copy your matrix to a temporary one, do whatever transformation on it, and pass the temporary matrix to the vertex shader.
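A rough Android sketch of that idea using android.opengl.Matrix; the uniform name and rotation angles are placeholders:

```java
import android.opengl.GLES20;
import android.opengl.Matrix;

public class MatrixStackExample {
    // The "current" matrix you would normally have pushed.
    private final float[] modelMatrix = new float[16];
    // Scratch copy that plays the role of the pushed/temporary matrix.
    private final float[] tempMatrix = new float[16];

    public MatrixStackExample() {
        Matrix.setIdentityM(modelMatrix, 0);
    }

    public void draw(int program, float angleX, float angleY) {
        // "Push": copy the current matrix into a temporary one.
        System.arraycopy(modelMatrix, 0, tempMatrix, 0, 16);

        // Apply this draw call's transformations to the copy only.
        Matrix.rotateM(tempMatrix, 0, angleX, 1f, 0f, 0f);
        Matrix.rotateM(tempMatrix, 0, angleY, 0f, 1f, 0f);

        // Hand the temporary matrix to the vertex shader.
        int location = GLES20.glGetUniformLocation(program, "u_Model");
        GLES20.glUniformMatrix4fv(location, 1, false, tempMatrix, 0);

        // ... issue the draw call; modelMatrix itself stays untouched ("pop") ...
    }
}
```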