I've created a Voxelizer for .obj models which is working pretty well so far. However, it only turns the surface of the model into voxels and doesn't fill it up. And filling it up afterwards is very important for further exporting and optimization.
I was thinking about options to fill up the space but just couldn't put my finger on an efficient algorithm that does it.
This is what the inside of a "cat" looks like, exported as .obj again.
Are there any fast algorithms for detecting enclosed space within a voxel shape?
My voxels are being stored using either a
List<Voxel> //Voxel contains 4 integers for x,y,z,rgb OR
Map<int[], java.awt.Color>.
I would need an algorithm that works really efficiently with one of these.
Here's the best approach I've come up with so far:
1) It starts from the assumption that you only have one volume: no separate polygon meshes. That's easy to check. If it doesn't hold true, extra steps would be required, but the process would still be the same.
2) Take any voxel in the grid which is not part of the shell. Ray-trace from it to the closest point on the mesh to figure out whether it belongs to an inside or an outside volume.
3) Recursively flood to all the neighbours which are not in the shell either, until there's nowhere left to grow. When that happens you have recognized either the outside or the inside of your mesh (depending on where you started).
4) Finding the opposite volume is a simple subtraction operation: vol_2 = all_voxels - shell - vol_1. If vol_2 is empty, your shell is open and the flood operation leaked through the gaps.
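Here is a minimal Java sketch of steps 2-4, assuming the List<Voxel> has first been converted into a boolean occupancy grid padded with one empty layer on every side. The padding lets the flood start from a corner that is guaranteed to be outside, a common substitute for the ray trace in step 2; the class and method names are made up for illustration.

import java.util.ArrayDeque;
import java.util.Deque;

public class VoxelFill {

    // shell[x][y][z] == true marks a surface voxel produced by the voxelizer.
    // Returns a mask of interior voxels: everything that is neither shell nor
    // reachable from the outside corner (0,0,0) of the padded grid.
    public static boolean[][][] findInterior(boolean[][][] shell) {
        int sx = shell.length, sy = shell[0].length, sz = shell[0][0].length;
        boolean[][][] outside = new boolean[sx][sy][sz];

        // Iterative flood fill; a recursive version would overflow the stack on large grids.
        Deque<int[]> stack = new ArrayDeque<>();
        outside[0][0][0] = true;
        stack.push(new int[] {0, 0, 0});
        int[][] dirs = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};

        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            for (int[] d : dirs) {
                int x = p[0] + d[0], y = p[1] + d[1], z = p[2] + d[2];
                if (x < 0 || y < 0 || z < 0 || x >= sx || y >= sy || z >= sz) continue;
                if (outside[x][y][z] || shell[x][y][z]) continue;
                outside[x][y][z] = true;
                stack.push(new int[] {x, y, z});
            }
        }

        // Step 4: interior = all voxels - shell - outside. If this is empty, the shell leaks.
        boolean[][][] interior = new boolean[sx][sy][sz];
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++)
                    interior[x][y][z] = !shell[x][y][z] && !outside[x][y][z];
        return interior;
    }
}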
I'm searching for a fast way to fix the perspective of a picture, in Java or any other language. Currently I really don't have any idea how to do it, nor have I found anything useful on Google.
Input:
Point[4] , Color[][]
Output:
Perspective-Fixed Color[][]
By perspective fixing, I mean the kind done in Photoshop. Just like:
I'd appreciate it if you could explain how the code works, since I want to understand the logic.
The simple solution is to just remap coordinates from the original to the final image, copying pixels from one coordinate space to the other and rounding off as necessary. This may result in some pixels being copied several times adjacent to each other, and other pixels being skipped, depending on whether you're stretching or shrinking (or both) in either dimension. Make sure your copying iterates through the destination space, so all pixels there are covered even if they're painted more than once, rather than through the source, which may skip pixels in the output.
The better solution involves calculating the corresponding source coordinate without rounding, and then using its fractional position between pixels to compute an appropriate average of the (typically) four pixels surrounding that location. This is essentially a filtering operation, so you lose some resolution -- but the result looks a LOT better to the human eye; it does a much better job of retaining small details and avoids creating straight-line artifacts which humans find objectionable.
Note that the same basic approach can be used to remap flat images onto any other shape, including 3D surface mapping.
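As a rough illustration of the two ideas above (iterate over the destination and sample the source with bilinear filtering), here is a hedged Java sketch. It blends the four corner points bilinearly rather than computing a true projective transform, so it only approximates Photoshop-style perspective correction; the corner ordering (top-left, top-right, bottom-right, bottom-left) and the [row][column] indexing of Color[][] are assumptions.

import java.awt.Color;
import java.awt.Point;

public class PerspectiveRemap {

    // Iterates over the destination image and pulls each pixel from the source quad,
    // so every output pixel is covered exactly once.
    public static Color[][] remap(Color[][] src, Point[] quad, int outW, int outH) {
        Color[][] out = new Color[outH][outW];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                double u = outW > 1 ? x / (double) (outW - 1) : 0.0;
                double v = outH > 1 ? y / (double) (outH - 1) : 0.0;
                // Bilinear blend of the quad corners (TL, TR, BR, BL) -> source coordinates.
                double sx = (1-u)*(1-v)*quad[0].x + u*(1-v)*quad[1].x
                          + u*v*quad[2].x + (1-u)*v*quad[3].x;
                double sy = (1-u)*(1-v)*quad[0].y + u*(1-v)*quad[1].y
                          + u*v*quad[2].y + (1-u)*v*quad[3].y;
                out[y][x] = sample(src, sx, sy);
            }
        }
        return out;
    }

    // Averages the four source pixels surrounding (sx, sy), weighted by the fractional position.
    private static Color sample(Color[][] src, double sx, double sy) {
        int w = src[0].length, h = src.length;
        int x0 = (int) Math.floor(sx), y0 = (int) Math.floor(sy);
        double fx = sx - x0, fy = sy - y0;
        x0 = Math.max(0, Math.min(x0, w - 1));
        y0 = Math.max(0, Math.min(y0, h - 1));
        int x1 = Math.min(x0 + 1, w - 1), y1 = Math.min(y0 + 1, h - 1);
        Color c00 = src[y0][x0], c10 = src[y0][x1], c01 = src[y1][x0], c11 = src[y1][x1];
        return new Color(
            blend(c00.getRed(),   c10.getRed(),   c01.getRed(),   c11.getRed(),   fx, fy),
            blend(c00.getGreen(), c10.getGreen(), c01.getGreen(), c11.getGreen(), fx, fy),
            blend(c00.getBlue(),  c10.getBlue(),  c01.getBlue(),  c11.getBlue(),  fx, fy));
    }

    // Interpolates horizontally along the top and bottom pixel pairs, then vertically.
    private static int blend(int a, int b, int c, int d, double fx, double fy) {
        double top = a + (b - a) * fx, bottom = c + (d - c) * fx;
        return (int) Math.round(top + (bottom - top) * fy);
    }
}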
So I'm doing the project of an introduction to Java course and it seems that I chose something that goes way beyond what I'm able to do. :P
Any help would be greatly appreciated. This is what I'm having problems with:
You have a cursor that is controlled by a player (goes forward or turns 90°) which leaves a colored line as it goes. If you manage to go over your own line and close a polygon of any shape (only right angles though), its surface changes color into the color of your line.
I can detect when this situation arises but I am kind of lost as to how to actually fill the polygon that was just closed. I can't seem to imagine an algorithm that would cover every possible case.
I looked at the scanline fill algorithm but I think it would start having problems by the time some polygons have already been filled in on the map.
The Floodfill algorithm would be perfect if I had a way of finding a point inside the polygon, but, as there are many different possibilities, I can't think of a general rule for this.
I'm using a 2D array of integers where each color is represented by a number.
Does anyone have an idea on how to approach this problem?
If you can detect the situation then this can be solved in a very simple manner. The question is which point to choose as the starting point for the flood fill. The simple answer is: try all of them. Of course it only makes sense to start with points adjacent to the one where your cursor is located, so you will have at most 8 points to check. Even better, at least 2 of them are definitely painted already if the current point closes a polygon.
So you have 8 points to check. Launch the flood fill up to 8 times, once from each of those points.
Two things which you probably should keep in mind:
You should try filling the area in a cloned version of your field, so you can revert if the flood fill does not find a polygon.
When launching the flood fill a second time and afterwards, you should reuse this cloned version of your field to see whether a point was already filled there. This lets you check every point at most once and makes your 8 flood fills almost as fast as 1.
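A rough Java sketch of that procedure, assuming the field is the 2D int grid from the question, EMPTY marks unpainted cells, and the border of the array is the edge of the playing area (a fill that reaches it has escaped and is therefore not inside a polygon). Those names and conventions are assumptions for illustration, not part of the original answer.

import java.util.ArrayDeque;
import java.util.Deque;

public class PolygonCapture {
    static final int EMPTY = 0; // assumed value of an unpainted cell

    // Tries a flood fill from each of the 8 cells around the cursor. A fill that
    // never reaches the border of the field is inside the newly closed polygon.
    static boolean fillClosedPolygon(int[][] field, int cursorX, int cursorY, int lineColor) {
        int[][] work = deepCopy(field); // failed attempts only ever touch this clone
        for (int dy = -1; dy <= 1; dy++) {
            for (int dx = -1; dx <= 1; dx++) {
                if (dx == 0 && dy == 0) continue;
                int sx = cursorX + dx, sy = cursorY + dy;
                if (!inBounds(field, sx, sy) || work[sy][sx] != EMPTY) continue;
                if (floodFill(work, sx, sy, lineColor)) {
                    floodFill(field, sx, sy, lineColor); // repeat on the real field to keep it
                    return true;
                }
                // The fill leaked to the border: 'work' keeps those cells marked, so the
                // next attempt will not re-check them (at most one full pass over the grid).
            }
        }
        return false;
    }

    // Fills the EMPTY region containing (x, y) with 'color'.
    // Returns false if the region touches the edge of the field, i.e. it is not enclosed.
    static boolean floodFill(int[][] field, int x, int y, int color) {
        boolean enclosed = true;
        Deque<int[]> stack = new ArrayDeque<>();
        field[y][x] = color;
        stack.push(new int[] {x, y});
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            if (p[0] == 0 || p[1] == 0 || p[0] == field[0].length - 1 || p[1] == field.length - 1)
                enclosed = false; // reached the border of the playing field
            for (int[] d : dirs) {
                int nx = p[0] + d[0], ny = p[1] + d[1];
                if (inBounds(field, nx, ny) && field[ny][nx] == EMPTY) {
                    field[ny][nx] = color;
                    stack.push(new int[] {nx, ny});
                }
            }
        }
        return enclosed;
    }

    static boolean inBounds(int[][] f, int x, int y) {
        return y >= 0 && y < f.length && x >= 0 && x < f[0].length;
    }

    static int[][] deepCopy(int[][] f) {
        int[][] copy = new int[f.length][];
        for (int i = 0; i < f.length; i++) copy[i] = f[i].clone();
        return copy;
    }
}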
Check this question on using Graphics2D and Polygon to fill an arbitrary polygon: java swing : Polygon fill color problem
Finding out whether a point is inside or outside a polygon: http://en.wikipedia.org/wiki/Point_in_polygon
Make sure you use double buffering. If you set individual pixels and don't use double buffering, the component may redraw after every pixel is set.
I have a simple game that uses a 3D grid representation, something like:
Blocks grid[10][10][10];
The person in the game is represented by a point and a sight vector:
double x,y,z, dx,dy,dz;
I draw the grid with 3 nested for loops:
for(...) for(...) for(...)
draw(grid[i][j][k]);
The obvious problem with this is that when the size of the grid grows into the hundreds, the fps drops dramatically.
Blocks that are hidden by other blocks in the grid don't need to be rendered
Blocks that are not within the person's field of vision also don't need to be rendered (i.e. blocks behind the person)
My question is, given a grid[][][], a person's x,y,z, and a sight vector dx,dy,dz, how could I figure out which blocks need to be rendered and which don't?
I looked into JMonkeyEngine, a 3D game engine, a while back and looked at some of the techniques it employs. From what I remember, they use something called culling. They build a tree structure of everything that exists in the 'world'. The idea then is that you have a subset of this tree that represents the visible objects at any given time; in other words, these are the things that need to be rendered. Say, for example, that I have a room with objects in it. The room is a node on the tree and the objects in the room are children of that node. If I am outside of the room, then I prune (remove) this branch of the tree, which means I don't render it. The reason this works so well is that I don't have to evaluate EVERY object in the world to see if it should be rendered; instead I quickly prune whole portions of the world that I know shouldn't be rendered.
Even better, then when I step inside the room, I trim the entire rest of the world from the tree and then only render the room and all its descendants.
I think a lot of the design decisions that the JMonkeyEngine team made were based on things in David Eberly's book, 3D Game Engine Design. I don't know the technical details of how to implement an approach like this, but I bet this book would be a great starting point for you.
Here is an interesting article on some different culling algorithms:
View Frustum Culling
Back-face Culling
Cell-based occlusion culling
PVS-based arbitrary geometry occlusion culling
Others
First you need a spatial partitioning structure; if you are using uniform block sizes, probably the most effective structure will be an octree. Then you will need to write an algorithm that can calculate whether a box is on a particular side of (or intersecting) a plane. Once you have that you can work out which leaf nodes of the octree are inside the six sides of your view frustum - that's view culling. Also using the octree you can determine which blocks occlude others (sometimes called frustum masking), but get the first part working first.
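For the box-against-plane part, here is a small Java sketch of the usual positive/negative-vertex test; the Plane and Box classes and the "normal points to the inside" convention are assumptions for illustration.

public class FrustumCulling {

    // Plane in the form normal . p + d = 0, with the normal pointing towards the inside.
    static class Plane {
        double nx, ny, nz, d;
        Plane(double nx, double ny, double nz, double d) { this.nx = nx; this.ny = ny; this.nz = nz; this.d = d; }
    }

    // Axis-aligned box given by its min and max corners (e.g. an octree node).
    static class Box {
        double minX, minY, minZ, maxX, maxY, maxZ;
    }

    // Returns +1 if the box is entirely on the positive (inside) side of the plane,
    // -1 if entirely on the negative side, 0 if it intersects the plane.
    static int classify(Box b, Plane p) {
        // Corner farthest along the plane normal ("positive vertex") and its opposite.
        double px = p.nx >= 0 ? b.maxX : b.minX;
        double py = p.ny >= 0 ? b.maxY : b.minY;
        double pz = p.nz >= 0 ? b.maxZ : b.minZ;
        double qx = p.nx >= 0 ? b.minX : b.maxX;
        double qy = p.ny >= 0 ? b.minY : b.maxY;
        double qz = p.nz >= 0 ? b.minZ : b.maxZ;

        if (p.nx * qx + p.ny * qy + p.nz * qz + p.d > 0) return  1; // even the nearest corner is inside
        if (p.nx * px + p.ny * py + p.nz * pz + p.d < 0) return -1; // even the farthest corner is outside
        return 0;
    }

    // A node is visible unless it lies completely outside one of the six frustum planes;
    // a node classified -1 can be culled together with all of its children.
    static boolean inFrustum(Box b, Plane[] frustumPlanes) {
        for (Plane p : frustumPlanes)
            if (classify(b, p) == -1) return false;
        return true;
    }
}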
It sounds like you're going for a minecraft-y type thing.
Take a look at this android minecraft level renderer.
The points to note are:
You only have to draw the faces of blocks that are shared with transparent blocks, e.g. don't bother drawing the faces between two opaque blocks - the player will never see them (a sketch of this check follows the list below).
You'll probably want to batch up your visible block geometry into chunklets (and stick it into a VBO) and determine visibility on a per-chunklet basis. Finding exactly which blocks can be seen will probably take longer than just flinging the VBO at the gpu and accepting the overdraw.
A flood-fill works pretty well to determine which chunklets are visible - limit the fill using the view frustum, view direction (if you're facing in the +ve x direction, don't flood in the -ve direction), and simple analyses of chunklet data (e.g.: if an entire face of a chunklet is opaque, don't flood through that face)
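As a sketch of the first point, this is roughly what the face check might look like in Java; the AIR id and the FaceConsumer sink are hypothetical stand-ins for however the chunklet geometry is actually batched into a VBO.

public class ChunkMesher {
    static final int AIR = 0; // assumed id for an empty/transparent block

    // Emits a face only where a solid block borders a transparent one, so faces
    // buried between two opaque blocks are never sent to the GPU.
    static void buildFaces(int[][][] blocks, FaceConsumer out) {
        int sx = blocks.length, sy = blocks[0].length, sz = blocks[0][0].length;
        int[][] dirs = {{1,0,0}, {-1,0,0}, {0,1,0}, {0,-1,0}, {0,0,1}, {0,0,-1}};
        for (int x = 0; x < sx; x++)
            for (int y = 0; y < sy; y++)
                for (int z = 0; z < sz; z++) {
                    if (blocks[x][y][z] == AIR) continue;
                    for (int f = 0; f < 6; f++) {
                        int nx = x + dirs[f][0], ny = y + dirs[f][1], nz = z + dirs[f][2];
                        boolean neighborTransparent =
                                nx < 0 || ny < 0 || nz < 0 || nx >= sx || ny >= sy || nz >= sz
                                || blocks[nx][ny][nz] == AIR;
                        if (neighborTransparent) out.addFace(x, y, z, f);
                    }
                }
    }

    // Hypothetical sink that would append the face's vertices to a chunklet's VBO data.
    interface FaceConsumer { void addFace(int x, int y, int z, int face); }
}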
Yesterday I came across Craig Reynolds' Boids, and subsequently figured that I'd give implementing a simple 2D version in Java a go.
I've put together a fairly basic setup based closely on Conrad Parker's notes.
However, I'm getting some rather bizarre (in my opinion) behaviour. Currently, my boids move reasonably quickly into a rough grid or lattice, and proceed to twitch on the spot. By that I mean they move around a little and rotate very frequently.
Currently, I have implemented:
Alignment
Cohesion
Separation
Velocity limiting
Initially, my boids are randomly distributed across the screen area (slightly different to Parker's method), and their velocities are all directed towards the centre of the screen area (note that randomly initialised velocities give the same result). Changing the velocity limit value only changes how quickly the boids move into this pattern, not formation of the pattern.
As I see it, this could be:
A consequence of the parameters I'm using (right now my code is as described in Parker's pseudocode; I have not yet tried areas of influence defined by an angle and a radius as described by Reynolds.)
Something I need to implement but am not aware of.
Something I am doing wrong.
The expected behaviour would be something more along the lines of a two dimensional version of what happens in the applet on Reynolds' boids page, although right now I haven't implemented any way to keep the boids on screen.
Has anyone encountered this before? Any ideas about the cause and/or how to fix it? I can post a .gif of the behaviour in question if it helps.
Perhaps your weighting for the separation rule is too strong, causing all the boids to move as far away from all neighboring boids as they can. There are various constants in my pseudocode which act as weights: /100 in rule 1 and /8 in rule 3 (and an implicit *1 in rule 2); these can be tweaked, which is often useful for modelling different behaviors such as closely-swarming insects or gliding birds.
Also the arbitrary |distance| < 100 in the separation rule should be modified to match the units of your simulation; this rule should only apply to boids within close proximity, basically to avoid collisions.
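To make those weights easy to tweak, here is a compact Java sketch of the three rules with the constants pulled out, loosely following that pseudocode. The Boid fields, the update order, and the chosen values are assumptions and only a starting point, not a definitive implementation; note that the contributions are added onto the current velocity rather than replacing it.

import java.util.List;

class Boid {
    double x, y, vx, vy;

    static final double COHESION_WEIGHT   = 1.0 / 100.0; // the /100 in rule 1
    static final double SEPARATION_RADIUS = 10.0;        // stand-in for the arbitrary "|distance| < 100"
    static final double ALIGNMENT_WEIGHT  = 1.0 / 8.0;   // the /8 in rule 3

    // One update step combining cohesion, separation and alignment.
    void step(List<Boid> others) {
        double cx = 0, cy = 0, ax = 0, ay = 0, sx = 0, sy = 0;
        int n = 0;
        for (Boid b : others) {
            if (b == this) continue;
            cx += b.x;  cy += b.y;   // centre of mass for cohesion
            ax += b.vx; ay += b.vy;  // average velocity for alignment
            double dx = b.x - x, dy = b.y - y;
            if (Math.hypot(dx, dy) < SEPARATION_RADIUS) { // separation: back away from close boids
                sx -= dx;
                sy -= dy;
            }
            n++;
        }
        if (n > 0) {
            vx += ((cx / n) - x) * COHESION_WEIGHT + sx + ((ax / n) - vx) * ALIGNMENT_WEIGHT;
            vy += ((cy / n) - y) * COHESION_WEIGHT + sy + ((ay / n) - vy) * ALIGNMENT_WEIGHT;
        }
        // velocity limiting would go here before the position update
        x += vx;
        y += vy;
    }
}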
Have fun!
If the boids can see everyone, they will all try to move with the average velocity. If they can only see some of the others, separate groups can form.
And if they are randomly distributed, that average velocity will be close to zero.
If you confine them to a rectangle (and either repel them from the walls or teleport them to the other side when they get close) and the separation is too strong, they will be pushed away from the walls (by the walls themselves, or by boids who were just teleported and who will then be pushed to the other side, and push and be pushed again).
So try tighter cohesion, limited sight, more space, and distribute them in clusters (pick a random point and place several of them at small random distances from it) rather than uniformly or normally.
I encountered this problem as well. I solved it by making sure that the method for updating each boid's velocity added the new velocity onto the old, instead of resetting it. Essentially, what's happening is this: The boids are trying to move away from each other but can't accelerate (because their velocities are being reset instead of increasing, as they should), thus the "twitching". Your method for updating velocities should look like
def set_velocity(self, dxdy):
    self.velocity = (self.velocity[0] + dxdy[0], self.velocity[1] + dxdy[1])
where velocity and dxdy are 2-tuples.
I wonder if you have a problem with collision rectangles. If you implemented something based on overlapping rectangles (like, say, this), you can end up with the behaviour you describe when two rectangles are close enough that any movement causes them to intersect. (Or even worse if one rectangle can end up totally inside another.)
One solution to this problem is to make sure each boid only looks in a forwards direction. Then you avoid the situation where A cannot move because B is too close in front, but B cannot move because A is too close behind.
A quick check is to actually paint all of your collision rectangles and colour any intersecting ones a different colour. It often gives a clue as to the cause of the stopping and twitching.
In weka I load an arff file. I can view the relationship between attributes using the visualize tab.
However I can't understand the meaning of the jitter slider. What is its purpose?
You can find the answer in the mailing list archives:
The jitter function in the Visualize panel just adds artificial random noise to the coordinates of the plotted points in order to spread the data out a bit (so that you can see points that might have been obscured by others).
I don't know weka, but generally jitter is a term for the variation of a periodic signal to some reference interval. I'm guessing the slider allows you to set some range or threshold below which data points are treated as being regular, or to modify the output to introduce some variation. The wikipedia entry can give you some background.
Update: from this pdf, the jitter slider is for this purpose:
“Jitter” option to deal with nominal attributes (and to detect “hidden” data points)
Based on the accompanying slide it looks like it introduces some variation in the visualisation, perhaps to show when two data points overlap.
Update 2: This google books extract (to Data mining By Ian H. Witten, Eibe Frank) seems to confirm my guess:
[jitter] is a random displacement applied to X and Y values to separate points that lie on top of one another. Without jitter, 1000 instances at the same data point would look just the same as 1 instance
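In code, the idea is just a small random displacement added to each plotted coordinate before drawing. A minimal Java illustration (not Weka's actual implementation) might look like:

import java.util.Random;

public class JitterDemo {
    // Adds a small random displacement to a plotted point so instances with
    // identical attribute values no longer collapse onto a single pixel.
    static double[] jitter(double x, double y, double amount, Random rng) {
        return new double[] {
            x + (rng.nextDouble() - 0.5) * amount,
            y + (rng.nextDouble() - 0.5) * amount
        };
    }
}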
I don't know the products you mention, but jittering generally means randomising the sample positions. Eg, in ray tracing you would normally render a ray though each pixel on the screen. Jittering adds a random offset to each ray to reduce issues caused by regular aliasing.