I have a flame ParticleEffect as the exhaust from a rocket. While the rocket moves slowly the flame looks nice, but when the rocket starts moving very fast the flame can't really keep up, because its particle speed is relative to the world and not to the rocket. The result is blips that disappear from the screen within a fraction of a second.
Can I update the emitter velocity during runtime, or can I set some kind of velocity point of reference to the particle effect (also during runtime)?
Thanks for any help!
It'd be a nice idea to encapsulate your particle effect inside an actor (like here).
Unfortunately the documentation is currently somewhat lacking, but you can always look at the source.
Looking at the source, the velocity value seems to be read-only. So the answer to your question appears to be no.
But to fulfill your requirements, I'd suggest you create 2 or 3 (or more) particle effects tuned for different velocity ranges (much easier). You can then swap the entire effect at runtime.
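A minimal sketch of that swap, assuming libGDX (which the ParticleEffect class suggests); the class name, speed thresholds and field names are all made up for illustration:

    import com.badlogic.gdx.graphics.g2d.ParticleEffect;
    import com.badlogic.gdx.graphics.g2d.SpriteBatch;

    public class RocketExhaust {
        private static final float SLOW_MAX = 50f;   // hypothetical tuning thresholds
        private static final float FAST_MIN = 150f;

        private final ParticleEffect slow, medium, fast;
        private ParticleEffect current;

        public RocketExhaust(ParticleEffect slow, ParticleEffect medium, ParticleEffect fast) {
            this.slow = slow;
            this.medium = medium;
            this.fast = fast;
            this.current = slow;
        }

        public void update(float speed, float x, float y, float delta) {
            // pick the effect authored for the rocket's current speed range
            ParticleEffect next = speed < SLOW_MAX ? slow
                                : speed < FAST_MIN ? medium : fast;
            if (next != current) {
                current = next;
                current.reset();  // restart emitters so stale particles don't pop in
            }
            current.setPosition(x, y);
            current.update(delta);
        }

        public void draw(SpriteBatch batch) {
            current.draw(batch);
        }
    }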
Hope this helps.
I'm currently working on my 2D game project (Java), but so far any kind of game logic or AI has been crudely implemented. For instance, say I need to randomly position a bunch of sprites along the top of the screen; I'd use the Random class to do this, simply calling Random.nextInt( size of x axis on which to spawn );. Although this does work, I'd be interested to hear how I should really be going about this kind of thing.
As a second scenario (this is why I put AI in the title, although it's not really AI), say I want to have my characters randomly blink in a life-like fashion. What I'd do here is use the Random class to calculate a % (say 20% chance) of blinking and call it every second.
Any suggestions on how I should really be going about this would be greatly appreciated.
Google for a paper titled "Steering Behaviors" by Craig Reynolds. It addresses just this, and you'll find great ideas to start with, specifically some nice ideas for giving groups of sprites the appearance of 'intelligent' movement. The key to his different behaviors, i.e. flocking, etc., is making properties of any given sprite dependent on those of some other sprite. You could even go so far as to say that any given sprite will blink only if its two neighbors have just blinked, or something along those lines.
Hope this helps!
Are you using an OOP (object-oriented) approach? If not, you should definitely look into it. It's really simple with Java and can speed up your development time and neaten your code.
I would make a sprite class and give it functions such as actionSpawn or actionMove (I like to start my "action" functions with the word action so they are easily identifiable). In such a function you would encapsulate the Random.nextInt call to set the sprite's x and/or y position.
You could use the same approach to make them blink.
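A minimal sketch of that idea; the Sprite class and the action* names just follow the naming convention above and are not from any library:

    import java.util.Random;

    public class Sprite {
        private static final Random RNG = new Random();

        private int x, y;
        private boolean blinking;

        // place the sprite at a random x along the top edge of the screen
        public void actionSpawn(int screenWidth) {
            x = RNG.nextInt(screenWidth);
            y = 0;
        }

        // call once per second: roughly a 20% chance to blink
        public void actionBlink() {
            blinking = RNG.nextInt(100) < 20;
        }
    }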
I'm making a small 2D game and I would like to know how jumping and standing on objects should work, theoretically.
For jumping, should there be some kind of gravity? Should I use collision detection?
A good example of what I want to understand is the jumping in Mario.
I think for jumping it would be best to create some simple model which uses "game time". I would personally create a physical model based on gravity, but you can go for whatever you want.
If you want your character to "stand" on an object, you will have to implement some collision detection. A good start is to approximate each object with one or more circles (or lines), compute collisions between those, and when the approximations collide, determine the final state with some more precise method.
In terms of should there be gravity and collision detection - personally for such a thing I'd say yes and yes! It doesn't have to be complicated but once you have those things in place the pseudocode for the "main loop" becomes relatively simple:
if not colliding with object underneath, apply gravity
if user presses jump key and not colliding with surface //i.e. if we're in the air already, don't jump again
apply upward velocity
That is of course oversimplified and there are other corner cases to deal with (like making sure that when you're coming down after a jump, you don't end up embedded in, or potentially going through, the floor).
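As a rough Java sketch of that loop (all constants are made-up tuning values, and the collision check is left as a stub):

    public class Player {
        static final float GRAVITY = 980f;      // px/s^2, hypothetical value
        static final float JUMP_SPEED = -420f;  // negative = up if y grows downward

        float y, velocityY;
        boolean onGround;

        void update(float delta, boolean jumpPressed) {
            if (!onGround) {
                velocityY += GRAVITY * delta;   // apply gravity while airborne
            } else if (jumpPressed) {
                velocityY = JUMP_SPEED;         // only jump when standing on something
                onGround = false;
            }
            y += velocityY * delta;
            // collision detection goes here: on hitting the floor, snap y to its
            // surface, zero velocityY and set onGround = true, so the player
            // neither sinks into nor tunnels through it
        }
    }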
You might want to take a look at Greenfoot, which handles a lot of things like all the Java2D stuff, collision detection etc. for you. Even if you don't stick with it, it'd be a good platform for building a prototype or playing around with the ideas discussed here.
Standing on objects implies collision detection; you can approximate the characters and the environment objects with primitive shapes (rectangles, circles, etc.).
To check whether two shapes are colliding, check the axes one by one (the X and Y axes in your case). Here is an explanation of the "separating axis theorem" (Section 1).
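For axis-aligned rectangles, that axis-by-axis check boils down to a few comparisons; a sketch with illustrative parameter names:

    // Axis-by-axis overlap test for two axis-aligned rectangles; a 2D
    // special case of the separating axis theorem.
    static boolean overlaps(float ax, float ay, float aw, float ah,
                            float bx, float by, float bw, float bh) {
        boolean xOverlap = ax < bx + bw && bx < ax + aw;  // X axis does not separate
        boolean yOverlap = ay < by + bh && by < ay + ah;  // Y axis does not separate
        return xOverlap && yOverlap;  // colliding only if no axis separates them
    }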
Yesterday I came across Craig Reynolds' Boids, and subsequently figured that I'd give implementing a simple 2D version in Java a go.
I've put together a fairly basic setup based closely on Conrad Parker's notes.
However, I'm getting some rather bizarre (in my opinion) behaviour. Currently, my boids move reasonably quickly into a rough grid or lattice, and proceed to twitch on the spot. By that I mean they move around a little and rotate very frequently.
Currently, I have implemented:
Alignment
Cohesion
Separation
Velocity limiting
Initially, my boids are randomly distributed across the screen area (slightly different to Parker's method), and their velocities are all directed towards the centre of the screen area (note that randomly initialised velocities give the same result). Changing the velocity limit value only changes how quickly the boids move into this pattern, not formation of the pattern.
As I see it, this could be:
A consequence of the parameters I'm using (right now my code is as described in Parker's pseudocode; I have not yet tried areas of influence defined by an angle and a radius as described by Reynolds.)
Something I need to implement but am not aware of.
Something I am doing wrong.
The expected behaviour would be something more along the lines of a two dimensional version of what happens in the applet on Reynolds' boids page, although right now I haven't implemented any way to keep the boids on screen.
Has anyone encountered this before? Any ideas about the cause and/or how to fix it? I can post a .gif of the behaviour in question if it helps.
Perhaps your weighting for the separation rule is too strong, causing all the boids to move as far away from all neighboring boids as they can. There are various constants in my pseudocode which act as weights: /100 in rule 1 and /8 in rule 3 (and an implicit *1 in rule 2); these can be tweaked, which is often useful for modelling different behaviors such as closely-swarming insects or gliding birds.
Also the arbitrary |distance| < 100 in the separation rule should be modified to match the units of your simulation; this rule should only apply to boids within close proximity, basically to avoid collisions.
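For reference, here is a sketch of that separation rule in Java with the radius and weight pulled out as tunable parameters; it assumes a Boid class with x/y fields, and all names are illustrative:

    import java.util.List;

    static float[] separation(Boid self, List<Boid> boids, float radius, float weight) {
        float cx = 0, cy = 0;
        for (Boid other : boids) {
            if (other == self) continue;
            float dx = other.x - self.x, dy = other.y - self.y;
            if (Math.sqrt(dx * dx + dy * dy) < radius) {
                cx -= dx;  // steer away from each neighbour inside the radius
                cy -= dy;
            }
        }
        // weight is implicitly 1 in the original pseudocode;
        // lower it to weaken separation
        return new float[] { cx * weight, cy * weight };
    }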
Have fun!
If they can see everyone, they will all try to move with the average velocity of the whole flock; if each boid can only see some of the others, separate groups can form. And if the initial velocities are randomly distributed, that average will be close to zero.
If you confine them to a rectangle (either repulsing them from the walls, or teleporting them to the other side when they get close) and the separation is too strong, they will be pushed away from the walls, either by the walls themselves or by boids that were just teleported, which will in turn be pushed back to the other side (and push, and be pushed, again).
So try tighter cohesion, limited sight, more space, and distribute them in clusters (pick a random point and place several boids at small random distances from it), not uniformly or normally.
I encountered this problem as well. I solved it by making sure that the method for updating each boid's velocity added the new velocity onto the old, instead of resetting it. Essentially, what's happening is this: The boids are trying to move away from each other but can't accelerate (because their velocities are being reset instead of increasing, as they should), thus the "twitching". Your method for updating velocities should look like
    def set_velocity(self, dxdy):
        # accumulate the steering delta onto the existing velocity
        # rather than overwriting it
        self.velocity = (self.velocity[0] + dxdy[0], self.velocity[1] + dxdy[1])
where velocity and dxdy are 2-tuples.
I wonder if you have a problem with collision rectangles. If you implemented something based on overlapping rectangles (like, say, this), you can end up with the behaviour you describe when two rectangles are close enough that any movement causes them to intersect. (Or even worse if one rectangle can end up totally inside another.)
One solution to this problem is to make sure each boid only looks in a forwards direction. Then you avoid the situation where A cannot move because B is too close in front, but B cannot move because A is too close behind.
A quick check is to actually paint all of your collision rectangles and colour any intersecting ones a different colour. It often gives a clue as to the cause of the stopping and twitching.
Here's my task, which I want to solve with as little effort as possible (preferably with Qt & C++ or Java): I want to use webcam video input to detect whether there is a crate (or more than one) in front of the camera lens or not. The scene can change from "clear" to "there is a crate in front of the lens" and back while the cam feeds its video signal to my application. For prototype testing/learning I have 2-3 images of the "empty" scene, and 2-3 images with one or more crates.
Do you know a straightforward way to tackle this task? I found OpenCV, but isn't that framework too bulky for such a simple task? I'm new to the field of computer vision. Is this generally a hard task, or is it simple and robust to detect whether there's an obstacle in front of the cam in live feeds? Your expert opinion is deeply appreciated!
Here's an approach I've heard of, which may yield some success:
Perform edge detection on your image to translate it into a black and white image, whereby edges are shown as black pixels.
Now create a histogram to record the frequency of black pixels in each vertical column of pixels in the image. The theory here is that a high frequency value in the histogram in or around one bucket is indicative of a vertical edge, which could be the edge of a crate.
You could also consider a second histogram to measure pixels on each row of the image.
Obviously this is a fairly simple approach and is highly dependent on "simple" input; i.e. plain boxes with "hard" edges against a blank background (preferably a background that contrasts heavily with the box).
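A sketch of that column histogram in Java, assuming edges is a binary edge image in which black pixels mark edges:

    import java.awt.image.BufferedImage;

    static int[] columnEdgeHistogram(BufferedImage edges) {
        int[] histogram = new int[edges.getWidth()];
        for (int x = 0; x < edges.getWidth(); x++) {
            for (int y = 0; y < edges.getHeight(); y++) {
                // count black pixels per column; a tall spike suggests a
                // vertical edge, such as the side of a crate
                if ((edges.getRGB(x, y) & 0xFFFFFF) == 0x000000) {
                    histogram[x]++;
                }
            }
        }
        return histogram;
    }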
You don't need a full-blown computer vision library to detect whether there is a crate in front of the camera. You can just take a snapshot and make a color histogram (simple). To capture the snapshot, take a look here:
http://msdn.microsoft.com/en-us/library/dd742882%28VS.85%29.aspx
Lots of variables here including any possible changes in ambient lighting and any other activity in the field of view. Look at implementing a Canny edge detector (which OpenCV has and also Intel Performance Primitives have as well) to look for the outline of the shape of interest. If you then kinda know where the box will be, you can perhaps sum pixels in the region of interest. If the box can appear anywhere in the field of view, this is more challenging.
This is not something you should start in Java. When I had this kind of problems I would start with Matlab (OpenCV library) or something similar, see if the solution would work there and then port it to Java.
To answer your question: I did something similar by XOR-ing the 'reference' image (no crate, in your case) with the current image, then either working on the histogram (clustered pixels at the right end mean a large difference) or just summing the remaining visible pixels and comparing the total against a threshold. XOR is not really precise, but it is fast.
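A rough Java equivalent of that difference-and-threshold idea (the threshold is a made-up tuning parameter):

    import java.awt.image.BufferedImage;

    static boolean crateDetected(BufferedImage reference, BufferedImage current,
                                 int countThreshold) {
        int changed = 0;
        for (int y = 0; y < reference.getHeight(); y++) {
            for (int x = 0; x < reference.getWidth(); x++) {
                // XOR leaves zero where the frames agree; any non-zero
                // result marks a changed pixel
                if (((reference.getRGB(x, y) ^ current.getRGB(x, y)) & 0xFFFFFF) != 0) {
                    changed++;
                }
            }
        }
        return changed > countThreshold;  // a large change suggests a crate appeared
    }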
My point is, it took me two hours to install Scilab and the toolkits and write a proof of concept. It would have taken me two days in Java, and if the first solution didn't work, each additional algorithm (already available in Mat-/Scilab) would have cost another few hours. IMHO you are approaching the problem from the wrong angle.
If Java/C++ really are just tools that don't matter here, then drop them and use Scilab or some other Matlab clone - prototyping and fine-tuning will be much faster.
There are two parts involved in object detection. One is feature extraction; the other is similarity calculation. Some obvious features of the crate are its geometry, edges, texture, etc.
So you can find algorithms to extract these features from your crate images, then compare those features against the ones extracted from your training sample images.
So I'm currently working on some FPS game programming in OpenGL (JOGL, more specifically) just for fun and I wanted to know what would be the recommended way to create an FPS-like camera?
At the moment I basically have a vector for the direction the player is facing, which will be added to the current player position upon pressing the "w" or forward key. The negative of that vector is of course used for the "s" or backward key. For "a", left, and "d", right I use the normal of the direction vector. (I am aware that this would let the player fly, but that is not a problem at the moment)
Upon moving the mouse, the direction vector will be rotated using trigonometry and matrices. All vectors are, of course, normalized for easy speed control.
Is this the common and/or good way or is there an easier/better way?
The way I have always seen it done is using two angles, yaw and pitch. The two axes of mouse movement correspond to changes in these angles.
You can calculate the forward vector easily with a spherical-to-rectangular coordinate transformation. (pitch=latitude=φ, yaw=longitude=θ)
You can use a fixed up vector (say (0,0,1)) but this means you can't look directly upwards or downwards. (Most games solve this by allowing you to look no steeper than 89.999 degrees.)
The right vector is then the cross product of the forward and up vectors. It will always be parallel to the ground plane since the up vector is always perpendicular to the ground plane.
Left/right strafe keys then use the +/-right vector. For a forward vector parallel to the ground plane, you can take the cross product of the right and the up vectors.
As for the GL part, you can simply use gluLookAt() using the player's origin, the origin plus the forward vector and the up vector.
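Put together in JOGL (assumed from the question), the camera update might look like the sketch below; GLU's gluLookAt takes the eye point, a target point (eye + forward) and the up vector:

    import com.jogamp.opengl.glu.GLU;

    void applyCamera(GLU glu, float px, float py, float pz, double yaw, double pitch) {
        // spherical-to-rectangular conversion of the view direction
        // (pitch = latitude, yaw = longitude, z is up)
        float fx = (float) (Math.cos(pitch) * Math.cos(yaw));
        float fy = (float) (Math.cos(pitch) * Math.sin(yaw));
        float fz = (float) Math.sin(pitch);
        // fixed up vector (0, 0, 1): fine as long as pitch stays below +/-90 degrees
        glu.gluLookAt(px, py, pz,
                      px + fx, py + fy, pz + fz,
                      0f, 0f, 1f);
    }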
Oh and please, please add an "invert mouse" option.
Edit: Here's an alternative solution, from another question, which gets rid of the 89.9 problem: it involves building the right vector first (with no pitch information) and then forward and up.
Yes, that's essentially the way I have always seen it done.
Yeah, but in the end you will want to add various other attributes to the camera. To spell it out: keep it tidy if you want to mimic Quake or CS. In the end you might have bobbing, FoV changes, motion filtering, network lag compensation and more.
Cameras are actually one of the more difficult parts of a good game to get right. That's why developers are usually content with a seriously dull, fixed 1st/3rd-person camera.
You could use Quaternions for your camera rotation. Although I have not tried it myself, they are useful for avoiding gimbal lock.