I have a side-scrolling platformer-type game with procedurally generated terrain, see below. The pink circle is a dynamic body for the player, the green area is a single static body, and each of the tiles is a fixture.
The player can move left and right and jump, via linear impulses. Now, most of the time this works fine, but every so often when I'm jumping around, the player body suddenly falls partially through the ground body and gets stuck:
I can't discern any pattern to this, it just seems to happen at random. The degree to which the player falls through also varies quite a bit, sometimes you just clip a little bit into the terrain and other times you fall through 15 or 20 tiles.
I've found threads on here with similar problems of bodies ignoring collisions, one suggestion was to increase the velocityIterations and positionIterations arguments of the World.step() method. I've been trying that and it doesn't seem to matter. Another suggestion was to set the player body as a bullet. Again, tried it and it did nothing. So, any other ideas?
You should be doing something to coalesce your boxes so that you can cover the filled areas with fewer objects. As it is, it's very inefficient, and each gap is an opportunity for something to go wrong and allow the player to get stuck inside, as you report.
You should definitely look into computing the boundary and forming chain shapes along it. This approach also means you won't need to coalesce the boxes. You'll still want a way to handle ending up on the wrong side of a chain shape, but it's a cleaner approach overall, and besides, time-of-impact (TOI) calculations on chains work far better than on tons of separate little boxes.
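To make that concrete, here is a minimal sketch assuming JBox2D (the Java port of Box2D; the C++ API is near-identical). The method name and the pre-traced boundaryVertices are illustrative; you would produce the vertices with your own edge-tracing pass over the tiles:

import org.jbox2d.collision.shapes.ChainShape;
import org.jbox2d.common.Vec2;
import org.jbox2d.dynamics.Body;
import org.jbox2d.dynamics.BodyDef;
import org.jbox2d.dynamics.BodyType;
import org.jbox2d.dynamics.World;

public class TerrainChain {
    // One static body whose surface is a single chain shape,
    // instead of hundreds of individual tile fixtures.
    static Body buildTerrainBoundary(World world, Vec2[] boundaryVertices) {
        BodyDef bd = new BodyDef();
        bd.type = BodyType.STATIC;
        Body ground = world.createBody(bd);

        ChainShape chain = new ChainShape();
        // createLoop closes the polygon; use createChain for an open surface.
        chain.createLoop(boundaryVertices, boundaryVertices.length);
        ground.createFixture(chain, 0.0f); // density is ignored for static bodies
        return ground;
    }
}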
Consider this: when a fast-moving object slams into a group of static boxes, even if the object is marked as a bullet, the system tries to ensure, for each box the object is about to penetrate, that it does not penetrate that box individually, computing this sequentially. But this may still leave the dynamic object penetrating in the end, because they are all separate boxes! A chain shape is one single shape and does not have this problem, though you should still have a backup plan for when the dynamic object somehow makes its way to the wrong location.
Because of these nuances, it is potentially useful to have your static geometry both wrapped in a chain shape and space-filled with fixtures, and you can detect "stuck" states by looking for very obvious patterns in events (for instance, if on every single time step you get huge impulses from multiple static fixtures onto one dynamic body) and handle them accordingly.
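A rough sketch of such a detector, again assuming JBox2D; the impulse and step-count thresholds are made-up numbers you would have to tune:

import org.jbox2d.callbacks.ContactImpulse;
import org.jbox2d.callbacks.ContactListener;
import org.jbox2d.collision.Manifold;
import org.jbox2d.dynamics.Body;
import org.jbox2d.dynamics.BodyType;
import org.jbox2d.dynamics.contacts.Contact;

// Rough stuck-detector: if the player keeps receiving large corrective
// impulses from static fixtures on many consecutive steps, assume it is
// embedded in the terrain and teleport it out.
public class StuckDetector implements ContactListener {
    private final Body player;          // the dynamic player body
    private float impulseThisStep = 0f;
    private int suspiciousSteps = 0;

    public StuckDetector(Body player) { this.player = player; }

    @Override public void postSolve(Contact contact, ContactImpulse impulse) {
        Body a = contact.getFixtureA().getBody();
        Body b = contact.getFixtureB().getBody();
        boolean involvesPlayer = (a == player || b == player);
        Body other = (a == player) ? b : a;
        if (involvesPlayer && other.getType() == BodyType.STATIC) {
            for (int i = 0; i < impulse.count; i++) {
                impulseThisStep += impulse.normalImpulses[i];
            }
        }
    }

    // Call once per World.step(); returns true when the player looks stuck.
    public boolean endStep() {
        suspiciousSteps = (impulseThisStep > 50f) ? suspiciousSteps + 1 : 0;
        impulseThisStep = 0f;
        return suspiciousSteps > 10; // tune both thresholds for your game
    }

    @Override public void beginContact(Contact contact) {}
    @Override public void endContact(Contact contact) {}
    @Override public void preSolve(Contact contact, Manifold oldManifold) {}
}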
I'm trying to find out whether a scanned PDF form contains a signature (like making sure a check is signed).
The problem domain:
I will be receiving document packages (multi-page PDFs with multiple forms). I have already put together document-package classifiers that check the package for all documents and scale the images to a common size. After that, I know where the signatures should be and can scan that area of the document specifically. What I'm looking for is the best approach to making sure there is a signature present. I've considered just checking for a base threshold of dark pixels, but that seems so clumsy. The trouble with signatures is that they are not really writing, more of a personal mark.
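Roughly, the baseline I have in mind would be something like this in Java (the 128 luminance cutoff, and whatever ratio threshold I'd compare the result against, are arbitrary values to tune):

import java.awt.image.BufferedImage;

public class SignatureCheck {
    // Fraction of "ink" pixels inside the expected signature box;
    // compare against a tuned threshold as a cheap presence test.
    static double inkRatio(BufferedImage page, int x, int y, int w, int h) {
        int dark = 0;
        for (int j = y; j < y + h; j++) {
            for (int i = x; i < x + w; i++) {
                int rgb = page.getRGB(i, j);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                int luminance = (r + g + b) / 3;
                if (luminance < 128) dark++; // arbitrary darkness cutoff
            }
        }
        return dark / (double) (w * h);
    }
}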
The only thing I can come up with is a machine learning method to look for "loopiness"? But I'm not all that familiar with machine learning and don't even know where to start with something like that. Any suggestions for practical approaches would be very appreciated.
I'm coding this in Java, if that's helpful at all.
What you asked is very broad, so there isn't a lot of specific information we can give you. However, I can point you to some helpful links:
http://java-ml.sourceforge.net/ -- a library you can download that has lots of useful algorithms and other code to include in your program
https://www.youtube.com/playlist?list=PLiaHhY2iBX9hdHaRr6b7XevZtgZRa1PoU -- a series that explains neural networks (something you might want to look into for your machine learning)
A big tip for your algorithm: instead of looking at exactly how long all of the loops and strokes are, look at their relative distances.
"Relative distances from what?" you say. Well this is where the next tip comes in handy: instead of keeping track of the lines, keep track of the tips of the loops and the order of these points. If you then take the distance between all of them (relatively of course which means to set one of the lengths to zero). Along to keeping track of the distances, you should also keep track of the angles. You would calculate the angle ABC by taking the distance between (A,B), (B,C), and (A,C) (A,B, and C being coordinates on the xy plane) which creates a triangle between the points which allows you to use trigonometry to calculate the angle.
(I am assuming for all of this that you are also trying to detect whose signature it is, because it doesn't really complicate things much at all.) When trying to match the detected signature against the stored signatures to see if they are the "same," don't require the distances and angles to be exact. Give a margin of error (like a percentage range above and below). Here is a tip: make the margin of error rather large, so that a poorly written signature will still be detected. This raises the chances of more than one stored signature being picked up. Luckily, there is a simple solution: have the program run the algorithm again on the signatures that were found, but with a smaller margin of error, and continue decreasing the margin until only one signature remains.
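A sketch of that narrowing loop; the Signature class, the feature layout, and the start/shrink factors are all illustrative assumptions, and features are assumed to be positive, normalized lengths and angles:

import java.util.ArrayList;
import java.util.List;

public class SignatureMatcher {
    // Placeholder for whatever feature bundle you extract
    // (relative distances and angles between loop tips).
    static class Signature { double[] features; }

    // Hypothetical comparison: true if every feature agrees within +/-margin.
    static boolean matches(Signature a, Signature b, double margin) {
        for (int i = 0; i < a.features.length; i++) {
            double lo = b.features[i] * (1 - margin);
            double hi = b.features[i] * (1 + margin);
            if (a.features[i] < lo || a.features[i] > hi) return false;
        }
        return true;
    }

    // Re-run the match with a tighter margin until one candidate remains.
    static Signature narrowDown(Signature scanned, List<Signature> stored) {
        List<Signature> candidates = new ArrayList<>(stored);
        double margin = 0.5;                  // start generous: +/-50%
        while (candidates.size() > 1 && margin > 0.01) {
            List<Signature> next = new ArrayList<>();
            for (Signature s : candidates) {
                if (matches(scanned, s, margin)) next.add(s);
            }
            if (next.isEmpty()) break;        // keep the last non-empty set
            candidates = next;
            margin *= 0.8;                    // tighten and try again
        }
        return candidates.isEmpty() ? null : candidates.get(0);
    }
}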
I am hoping you already have ideas for detecting where the actual signature is, but do check for the difference in darkness of the pixels, and make sure the mark is fairly continuous. Also note that signatures are commonly signed in black or blue, or sometimes red and other fancy colors.
So, I'm desperately trying to understand ray tracing, and I THINK I have it...? What I think it is: you have a camera location and rotation, and then you create a simulated entity that moves with a velocity given by a 0-1 (float/double) vector per coordinate, over a short amount of time, and in the end, with a lot of collision checking, you find and return a location depending on how you handle the collision data? That's just my theory of how it works; my question is, did I get it right?
Depending on the amount of time you can invest in this, one idea is to get a working implementation, such as pbrt, and try to understand it from a higher level: feed it an example scene (also downloadable) and debug the code to see how it goes about parsing the file, creating the scene and its geometry, then sampling the image region and creating rays, and finally evaluating the value for those rays.
You could easily skip the details so as not to feel overwhelmed.
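If it helps to see the core idea in code: a ray tracer does not step a simulated entity along in small time increments; it solves for ray-geometry intersections analytically. A minimal Java sketch of the classic ray-sphere test (names and array layout are illustrative):

public class TinyRay {
    // Returns distance t along the ray to the sphere, or -1 on a miss.
    // Ray: origin o + t * direction d (d normalized). Sphere: center c, radius r.
    static double hitSphere(double[] o, double[] d, double[] c, double r) {
        double ox = o[0] - c[0], oy = o[1] - c[1], oz = o[2] - c[2];
        double b = 2 * (ox * d[0] + oy * d[1] + oz * d[2]);
        double cc = ox * ox + oy * oy + oz * oz - r * r;
        double disc = b * b - 4 * cc;   // quadratic 'a' is 1: d is normalized
        if (disc < 0) return -1;
        double t = (-b - Math.sqrt(disc)) / 2;
        return (t > 0) ? t : -1;
    }
}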
What would be the best algorithm in terms of speed for locating an object in a field?
The field consists of 18 by 18 squares with side length 30.48 cm. The robot is placed in square (0,0), and its job is to reach the light source while avoiding obstacles along the way. To locate the light source, the robot does a 360-degree turn to find the angle with the highest light reading and then travels towards the source. It can reliably detect a light source from 100 cm.
The way I'm implementing this at present is by storing the information about each tile in a 2D array. The possible values of the tiles are unexplored (the default), blocked (there's an obstacle), and empty (there's nothing there). I'm thinking of using DFS, where the children are at position (i+3,j) or (i,j+3). However, considering that I will be doing a rotation to locate the angle with the highest light reading at each child, I think there may be an algorithm that can locate the light source faster than DFS. Also, I will only be travelling in the x and y directions, since the robot will be using the grid lines on the floor to make corrections to its x and y positions.
I would appreciate it if a fast and reliable algorithm could be suggested to accomplish this task.
This is a really broad question, and I'm not an expert so my answer is based on "first principles" thinking rather than experience in the field.
(I'm assuming that your robot has generally unobstructed line of sight and movement; i.e. it is in an open area with scattered obstacles, not in a maze.)
The problem is interpreting the information that you get back from a 360 degree scan.
If the robot sees the light source, then traversing a route to the light source is either trivial, or a "simple" maze walking task.
The difficulty is when you don't see the source. It might mean that the source is not within the circle of visibility. But it could also mean that the light is behind an obstacle. And unfortunately, a simple sensor like you are describing cannot distinguish these two cases.
If your sensor system allowed you to see the obstacles, you could plot the locations of the "shadow" regions (regions behind obstacles), and use that to keep track of the places that are left to search. So your strategy would be to visit a small number of locations and do a scan at each, then methodically "tidy up" a small number of areas that were in shadow.
But since you cannot easily tell where the shadow areas are, you need an algorithm that (ultimately) searches everywhere. DFS is a general strategy that searches everywhere, but it does so by (in effect) looking in the nooks and crannies first. A better strategy is to do a breadth-first search, and only visit the nooks and crannies if the wide-scale scans didn't find the light source.
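To illustrate, a breadth-first sweep over scan points spaced three squares apart (roughly the 100 cm sensor range over 30.48 cm squares) might look like the sketch below. scanFindsLight stands in for your 360-degree scan, and actually driving between scan points is a separate path-planning problem left out here:

import java.util.ArrayDeque;
import java.util.Queue;

public class GridSearch {
    static final int N = 18;
    static final int UNEXPLORED = 0, BLOCKED = 1, EMPTY = 2;

    // Visits scan points in breadth-first order, spaced 3 cells apart,
    // so wide areas near the start are swept before distant corners.
    static int[] searchForLight(int[][] grid) {
        boolean[][] seen = new boolean[N][N];
        Queue<int[]> frontier = new ArrayDeque<>();
        frontier.add(new int[]{0, 0});
        seen[0][0] = true;
        while (!frontier.isEmpty()) {
            int[] cell = frontier.poll();
            if (scanFindsLight(cell[0], cell[1])) return cell;
            int[][] steps = {{3, 0}, {-3, 0}, {0, 3}, {0, -3}};
            for (int[] s : steps) {
                int x = cell[0] + s[0], y = cell[1] + s[1];
                if (x >= 0 && x < N && y >= 0 && y < N
                        && !seen[x][y] && grid[x][y] != BLOCKED) {
                    seen[x][y] = true;
                    frontier.add(new int[]{x, y});
                }
            }
        }
        return null; // searched everywhere without seeing the light
    }

    // Stand-in for the robot's 360-degree scan at (x, y).
    static boolean scanFindsLight(int x, int y) { return false; }
}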
"I would appreciate it if a fast and reliable algorithm could be suggested to accomplish this task."
I think you are going to need to develop one yourself. (Isn't this the point of the problem / task / competition?)
Although it may not look like it, this is more like a maze-following problem than anything else. I suppose this is some kind of challenge or contest situation, where there's always a path from start to target, but suppose for a moment there isn't. One of the successful results for a robot navigating toward a beacon fully surrounded by obstacles would be a report describing a closed path of obstacles surrounding the signal. If there's no such closed path, then you can find a hole somewhere; this is why it looks like maze following.
So the basic algorithm I'd choose is to start with a spiraling-inward traversal, sweeping out a path narrow enough that you're sure to see the beacon if one is present. If there are no obstacles (a degenerate case), this finds the target in minimal time. (Hint: each turn reduces the number of cells your sensor can cover per step.)
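A sketch of generating that inward, counter-clockwise spiral as a list of grid waypoints. The stride of 3 matches a sensor range of about three squares; obstacle handling is left out, and the stride is assumed to divide the ring sizes reasonably:

import java.util.ArrayList;
import java.util.List;

public class SpiralSweep {
    // Visit order for an inward counter-clockwise spiral over an n x n grid:
    // along the bottom, up the right, back along the top, down the left,
    // then shrink the ring and repeat.
    static List<int[]> spiral(int n, int stride) {
        List<int[]> order = new ArrayList<>();
        int left = 0, right = n - 1, bottom = 0, top = n - 1;
        while (left <= right && bottom <= top) {
            for (int x = left; x <= right; x += stride) order.add(new int[]{x, bottom});
            for (int y = bottom + stride; y <= top; y += stride) order.add(new int[]{right, y});
            for (int x = right - stride; x >= left; x -= stride) order.add(new int[]{x, top});
            for (int y = top - stride; y > bottom; y -= stride) order.add(new int[]{left, y});
            left += stride; right -= stride; bottom += stride; top -= stride;
        }
        return order;
    }
}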
Take the spiral traversal to be counter-clockwise. What you have then is related to the rule for solving mazes by keeping your right hand on the wall and following the generated path. In this case, you have the complication that, while the start of the maze is on the boundary, the end may not be. It's possible for the right-hand-touching path to fail in such a situation. Detecting this situation requires looking for "cavities" in the region swept out by adjacency to the wall.
I may have a problem imagining the best solution for a collision-detection-related problem. I'm writing a 2D top-down game in Java with many objects that could collide. I am planning to create a multi-resolution map, with specific objects in specific resolutions of map squares, so that I can get around the O(n²) problem and narrow down the objects in an area that could be colliding.
I have to keep a list of all objects that are residing in each map square. However, since many or sometimes all objects are moving, I have to keep these lists updated all the time.
I guess using every render cycle of an object to update the map-square lists will be quite resource-consuming and will probably destroy the advantage I gained by using multi-resolution maps to narrow down the number of objects that could be colliding with another object.
My question is now: how do I keep track of all objects and file them into the corresponding map squares? Is there an easy way, or should I maybe choose another concept for the collision detection?
I might have forgotten some details; if there is more information I should provide, please reply.
Thanks in advance
Best regards
I assume that you have non-moving objects and moving objects. The static objects need to register themselves in the multi-resolution map only once. The moving objects only have to check whether they now belong to another square, and only when they have actually moved.
Depending on how you do the actual collision detection, you may only need to recheck once a moving object has traveled a certain distance. That is the case when you check collisions against objects in the current square and all surrounding squares.
The squares far away from the player's square usually also don't need to be checked on every single physics pass; there is usually no action going on there anyway.
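A minimal sketch of that bookkeeping, using a uniform grid rather than a full multi-resolution map; the names are illustrative, and positions are assumed to stay inside the grid bounds:

import java.util.HashSet;
import java.util.Set;

public class SpatialGrid {
    private final float cellSize;
    private final Set<Entity>[][] cells;

    // Hypothetical entity with a position and its last-known cell index.
    public static class Entity {
        float x, y;
        int cellX = -1, cellY = -1;
    }

    @SuppressWarnings("unchecked")
    public SpatialGrid(int w, int h, float cellSize) {
        this.cellSize = cellSize;
        cells = new Set[w][h];
        for (int i = 0; i < w; i++)
            for (int j = 0; j < h; j++)
                cells[i][j] = new HashSet<>();
    }

    // Call after an entity moves; it re-registers only if it actually
    // crossed into a different cell, so per-frame cost stays near zero
    // for objects that move around within one square.
    public void update(Entity e) {
        int cx = (int) (e.x / cellSize);
        int cy = (int) (e.y / cellSize);
        if (cx == e.cellX && cy == e.cellY) return; // still in the same square
        if (e.cellX >= 0) cells[e.cellX][e.cellY].remove(e);
        cells[cx][cy].add(e);
        e.cellX = cx;
        e.cellY = cy;
    }
}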
I'm creating a grid based game in Java and I want to implement game recording and playback. I'm not sure how to do this, although I've considered 2 ideas:
Several times every second, I'd record the entire game state. To play it back, I'd write a renderer that reads the states and creates a visual representation from them. With this, however, I'd likely have a large save file, and any playback attempt would likely have noticeable lag.
I could also write every key press and mouse click into the save file. This would give me a smaller file and could play back with less lag. However, the slightest error at the start of the game (for example, shooting 1 millisecond later) would result in a vastly different game state several minutes into the game.
What, then, is the best way to implement game playback?
Edit: I'm not sure exactly how deterministic my game is, so I'm not sure the entire game can be pieced together exactly by recording only keystrokes and mouse clicks.
A good playback mechanism is not something that can simply be added to a game without major difficulties. The best approach is to design the game's infrastructure with playback in mind. The command pattern can be used to achieve such an infrastructure.
For example:
public interface Command {
    void execute();
}

public class MoveRightCommand implements Command {
    private final Grid theGrid;
    private final Player thePlayer;

    public MoveRightCommand(Player player, Grid grid) {
        this.theGrid = grid;
        this.thePlayer = player;
    }

    public void execute() {
        // Move the player one cell to the right.
        thePlayer.modifyPosition(0, 1, 0, 0);
    }
}
The command can then be pushed onto an execution queue, both when the user presses a key or moves the mouse, and, with no input trigger at all, by the playback mechanism. The command object can carry a timestamp (relative to the beginning of the recording) for precise playback...
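For example, a recorder that wraps each Command with a timestamp could look like the following sketch; in a real game you would serialize the queue to disk after recording and load it back for playback:

import java.util.ArrayDeque;
import java.util.Queue;

// Wraps each Command with its time (ms since recording started) so the
// same queue can be fed either by live input or by a loaded replay.
public class CommandRecorder {
    private static class Timestamped {
        final long timeMs;
        final Command command;
        Timestamped(long t, Command c) { timeMs = t; command = c; }
    }

    private final Queue<Timestamped> recorded = new ArrayDeque<>();
    private final long startMs = System.currentTimeMillis();

    // Live play: execute immediately and remember when it happened.
    public void executeAndRecord(Command c) {
        recorded.add(new Timestamped(System.currentTimeMillis() - startMs, c));
        c.execute();
    }

    // Playback: call once per frame with the elapsed replay time;
    // executes every command whose timestamp has come due.
    public void playUpTo(long elapsedMs) {
        while (!recorded.isEmpty() && recorded.peek().timeMs <= elapsedMs) {
            recorded.poll().command.execute();
        }
    }
}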
Shawn Hargreaves had a recent post on his blog about how they implemented replays in MotoGP. It goes over several different approaches and their pros and cons.
http://blogs.msdn.com/shawnhar/archive/2009/03/20/motogp-replays.aspx
Assuming that your game is deterministic, it might be sufficient to record the users' inputs (option 2). However, you would need to make sure that you recognize correct and consistent times for these events, such as when each was recognized by the server. I'm not sure how you handle events in the grid.
My worry is that if you don't have a mechanism that can uniformly reference timed events, there might be a problem with the way your code handles distributed users.
Consider a game like Halo 3 on the XBOX 360 for example - each client records his view of the game, including server-based corrections.
Why not record several times a second and then compress your output, or perhaps do this:
recordInitialState();
// ... main loop, runs 30 times a second:
recordChangeInState(previousState, currentState);
// ...
If you only record the change in state with a timestamp (each change is small, and if there is no change, you record nothing), you should end up with reasonable file sizes.
There is no need to save everything in the scene for every frame. Save changes incrementally and use some good interpolation techniques. I would not really use a command-pattern-based approach, but rather check every game object at a fixed rate and see whether it has changed any attribute. If there is a change, record it in some compact encoding, and the replay won't even become that big.
How you approach this will depend greatly on the language you are using for your game, but in general terms there are many approaches, depending on whether you want to use a lot of storage or can accept some delay. It would be helpful if you could share what sacrifices you are willing to make.
That said, the best approach may be to just save the input from the user, as was mentioned, and at the same time store the positions of all the actors/sprites in the game, which is as simple as saving direction, velocity, and tile x,y. Or, if everything can be deterministic, ignore the actors/sprites, since you can recompute their information.
Knowing how non-deterministic your game is would also be useful for giving a better suggestion.
If there is a great deal of dynamic motion, such as in a crash derby, then you may want to save information each frame, as you should be updating the players/actors at a fixed framerate anyway.
I would simply say that the best way to record a replay of a game depends entirely on the nature of the game. Being grid-based isn't the issue; the issue is how predictable behaviour is following a state change, how often there are new inputs to the system, whether there is random data being injected at any point, etc. You can store an entire chess game just by recording each move in turn, but that wouldn't work for a first-person shooter where there are no clear turns. You could store a first-person shooter by noting the exact time of each input, but that won't work for an RPG, where the result of an input might be modified by the result of a random dice roll. Even the seemingly foolproof idea of taking a snapshot as often as possible isn't good enough if important information appears instantaneously and doesn't persist in any capturable form.
Interestingly this is very similar to the problem you get with networking. How does one computer ensure that another computer is made aware of the game state, without having to send that entire game state at an impractically high frequency? The typical approach ends up being a bespoke mixture of event notifications and state updates, which is probably what you'll need here.
I did this once by borrowing an idea from video compression: keyframes and intermediate frames. Basically, every few seconds you save the complete state of the world. Then, once per game update, you save all the changes to the world state that have happened since the last game update. The details (how often do you save keyframes? what exactly counts as a 'change to the world state'?) will depend on what sort of game information you need to preserve.
In our case, the world consisted of many, many game objects, most of which were holding still at any given time, so this approach saved us a lot of time and memory in recording the positions of objects that weren't moving. In yours the tradeoffs might be different.
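A sketch of that keyframe/delta scheme; GameState, writeFull, and writeDiffAgainst are hypothetical, and the keyframe interval is a number to tune:

import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical state object that knows how to serialize itself fully
// or as a diff against the previous update.
interface GameState {
    void writeFull(DataOutputStream out) throws IOException;
    void writeDiffAgainst(GameState previous, DataOutputStream out) throws IOException;
}

public class ReplayWriter {
    static final int KEYFRAME_INTERVAL = 120; // e.g. every 2 s at 60 updates/s
    private int frame = 0;

    // Every KEYFRAME_INTERVAL updates, write the full state; otherwise
    // write only what changed since the previous update.
    public void recordFrame(GameState current, GameState previous,
                            DataOutputStream out) throws IOException {
        if (frame % KEYFRAME_INTERVAL == 0) {
            out.writeByte(0);                        // marker: keyframe
            current.writeFull(out);
        } else {
            out.writeByte(1);                        // marker: delta frame
            current.writeDiffAgainst(previous, out);
        }
        frame++;
    }
}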