Hide Rubik's Cube Internal Wirings - java

In the context of a Java/OpenGL application I am drawing a big wireframe-only (unfilled) black cube composed of 27 smaller cubes. To do that I wrote the following code:
for (int x = 1; x <= 3; x++) {
    for (int y = 1; y <= 3; y++) {
        for (int z = 1; z <= 3; z++) {
            wireCube(x - 2, 2 - y, 2 - z);
        }
    }
}
The wireCube method is implemented using GL11.glBegin(GL11.GL_LINE_LOOP);
Using the right call to gluPerspective to define the projection and the correct call to gluLookAt to position the "camera", I am able to display my big cube as needed, and I am very happy with that!
My new problem is now: how do I modify this code in order to "hide" all the wires that are inside the big cube? To help visualize the scene, these wires are the ones that are usually drawn as dashed lines when learning 3D geometry at school.
Thanks in advance for help
Manu

Enable depth testing (glEnable(GL_DEPTH_TEST)) and put quads on the surfaces of the cubes.
To draw a quad, use glBegin(GL_QUADS) followed by the four vertices and the glEnd() call.
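For example, a rough sketch in the same GL11 style as the question (the face coordinates and the white fill colour are placeholders, not taken from the original code):
// Enable depth testing once during setup, and clear depth together with colour each frame.
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
// One filled face of a unit cube. Drawing it in the background colour keeps it
// invisible but still writes depth, so wires behind it fail the depth test.
GL11.glColor3f(1f, 1f, 1f);
GL11.glBegin(GL11.GL_QUADS);
GL11.glVertex3f(0f, 0f, 1f);
GL11.glVertex3f(1f, 0f, 1f);
GL11.glVertex3f(1f, 1f, 1f);
GL11.glVertex3f(0f, 1f, 1f);
GL11.glEnd();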

Draw all your cubes with black polygons (or disable color output: glColorMask(false, false, false, false);): this fills the depth buffer.
Then draw your lines. The ones hidden by the polygons will not appear. There will be z-fighting though, so use glDepthFunc(GL_LEQUAL).
If you want to draw the "invisible" lines dashed, this won't be enough. You can draw them again with glDepthFunc(GL_GREATER).
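A minimal sketch of the basic two-pass idea, assuming a hypothetical drawAllSolidCubes() helper that draws the six faces of every cube with GL_QUADS, alongside the existing wireCube drawing:
// Pass 1: fill the depth buffer with the cube faces, without writing colour.
GL11.glColorMask(false, false, false, false);
drawAllSolidCubes();                 // hypothetical helper: GL_QUADS for every face
GL11.glColorMask(true, true, true, true);
// Pass 2: draw the wireframe; lines at the same depth as a face still pass.
GL11.glDepthFunc(GL11.GL_LEQUAL);
drawAllWireCubes();                  // the existing GL_LINE_LOOP drawing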
Another solution is to draw the edges of polygons that face the camera with a solid line and the others with a dashed line. That is a simple dot product test (camDir · faceNorm).

Use glPolygonOffset() to push the filled polygons back (or pull the wireframe forward) so that lines and faces with the same coordinates do not fight over the depth buffer.
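For example (the offset values 1.0, 1.0 are just typical choices, and drawAllSolidCubes/drawAllWireCubes are hypothetical helpers):
// Push the filled faces slightly away from the viewer so coplanar lines win the depth test.
GL11.glEnable(GL11.GL_POLYGON_OFFSET_FILL);
GL11.glPolygonOffset(1.0f, 1.0f);
drawAllSolidCubes();                        // filled faces (hypothetical helper)
GL11.glDisable(GL11.GL_POLYGON_OFFSET_FILL);
drawAllWireCubes();                         // the wireframe on top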

Related

How to properly combine two camera view matrices?

Basically, I have a 3D hexagonal tile map (think something like a simplified Civ 5 map). It is optimized to use a single large mesh to reduce draw calls and easily allow for some cool Civ 5 features (terrain continuity and uv texture bleeding).
I want to support wraparound maps in my game, and so was brainstorming ideas on how to best do this.
For example, if the main camera is approaching the far east of the map, then I can simply perform the translation to the far west by doing:
if (camera.x >= MAP_WIDTH)
    camera.translate(0, 0, y);
However, by doing this, there will be a brief timespan in which the player will see the "end" of the board before the translation. I want to eliminate this.
The first idea I had to solve this problem was to basically just modify the above code as follows:
if (camera.x + camera.viewportWidth >= MAP_WIDTH)
    camera.translate(0, 0, y);
However, this has the side effect of a "jump" during the translation that feels unnatural.
My final solution, and the subject of the question:
I have three cameras, my main camera, one to the far east, and one to the far west. I basically want to "combine" the matrices of these cameras to render the map outside of its actual bounds.
Basically, if the camera is a certain distance from the world bounds, I want to draw the scene from the other side of the world in the following location. So, for example, this is the pseudo code of what I want to do:
int MAP_WIDTH = 25;
float viewportSize = 10f;
float mainCamX = 24f;
float mainCamY = 15f;
Matrix4 cbnd = camera.combined;

if (camX >= MAP_WIDTH)
    camX = 0;
else if (camX < 0)
    camX = MAP_WIDTH - camX;

if (camX + viewportSize >= MAP_WIDTH)
    cbnd = combineMatrices(mainCam.combined, westCam.combined);

modelBatch.setProjectionMatrix(cbnd);
modelBatch.begin();
//Draw map model
//Draw unit models.
modelBatch.end();
modelBatch.setProjectionMatrix(mainCam.combined);
But I am unsure of how to appropriately combine matrices, and am new to the concept of matrices in general.
Can somebody give me a hand in combining these matrices?
Sounds too complicated. Here is my idea:
Say you can display 10x10 fields on screen
and you have a map of 100x100 fields.
Just increase your map to 110x110 and, in that extra space, repeat your first (zeroth) rows and columns.
That way you can scroll smoothly, and when the camera reaches, for example, the rightmost position you have on the map, you just return it to the 0 X position. The same goes for vertical movement.
So the idea is to duplicate the leftmost part of the map (one screen width wide) and the topmost part of the map (one screen height tall) at the right/bottom of the map respectively, as in the sketch below.
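A rough sketch of that padding idea in Java (Tile, tiles, screenTiles, MAP_HEIGHT, tileWidth and the libGDX-style camera.position are assumed names, not taken from the question):
// Padded map: MAP_WIDTH + screenTiles columns; the extra columns repeat columns 0, 1, ...
Tile[][] padded = new Tile[MAP_WIDTH + screenTiles][MAP_HEIGHT];
for (int x = 0; x < padded.length; x++) {
    for (int y = 0; y < MAP_HEIGHT; y++) {
        padded[x][y] = tiles[x % MAP_WIDTH][y];
    }
}
// When the camera has scrolled a full map width to the right, snap it back by exactly
// one map width; visually nothing changes, because the duplicated strip shows the same tiles.
if (camera.position.x >= MAP_WIDTH * tileWidth) {
    camera.position.x -= MAP_WIDTH * tileWidth;
}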

Creating an isometric staggered map with libGDX

I am learning to create a simple game with libGDX and have already had some success.
Now I would like to know the best way to create a simple isometric staggered world.
Even with some camera troubles, I have already implemented a map, rendered it, and used a cam.
What I want to know is: should I use Box2D and include my map to get boundaries? Or should I just render a 2-dimensional array for a map (this one is really slow after using more than 100x100 tiles)?
Thanks for a rough overview or for some links to this information. =)
(opened on https://gamedev.stackexchange.com/questions/140750/creating-a-isomatricstagged-map-with-libgdx as well)
Not sure what you want to do with Box2D, but an array should not be slow for just 100x100 tiles. You do need to draw only what is needed, though, otherwise your loop will draw thousands of off-screen assets. Calculating this for a staggered map is pretty straightforward:
Tile's X position in the 2D array = worldX / tileWidth;
Tile's Y position in the 2D array = worldY / (tileHeight / 2); // since the tiles overlap by half on the Y axis
So if that worldX/worldY is the bottom-left corner or the center of your camera, you can alter your draw loop to draw only what is on the screen. You calculate which tile would go in the top-left corner and which in the bottom-right corner and iterate through:
Gridpoint topLeft;     // calculate based on camera position
Gridpoint bottomRight; // calculate based on camera position
for (int y = bottomRight.y; y <= topLeft.y; y++)
{
    for (int x = topLeft.x; x <= bottomRight.x; x++)
    {
        drawTile(x, y);
    }
}
Now you are able to have a 1024x1024 map without any hassle at all. But this does require a lot of memory, since the large map array is stored. This is why you should keep only the necessary data within a Tile object; for example, instead of a Texture, store an int referencing a specific texture.
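A sketch of how topLeft/bottomRight could be derived from a libGDX OrthographicCamera (tileWidth, tileHeight, mapWidth, mapHeight and drawTile are assumed names, not from the question):
// Convert the camera's visible world rectangle into a range of tile indices (staggered layout).
float halfW = camera.viewportWidth  * camera.zoom / 2f;
float halfH = camera.viewportHeight * camera.zoom / 2f;

int firstX = Math.max(0, (int) ((camera.position.x - halfW) / tileWidth) - 1);
int lastX  = Math.min(mapWidth  - 1, (int) ((camera.position.x + halfW) / tileWidth) + 1);
int firstY = Math.max(0, (int) ((camera.position.y - halfH) / (tileHeight / 2f)) - 1);
int lastY  = Math.min(mapHeight - 1, (int) ((camera.position.y + halfH) / (tileHeight / 2f)) + 1);

for (int y = firstY; y <= lastY; y++) {
    for (int x = firstX; x <= lastX; x++) {
        drawTile(x, y);   // hypothetical draw method for one tile
    }
}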

How to detect colored rectangles in an image?

I'm trying to write an AI maze solver program. To do this, I will draw 2-color mazes in GIMP with red being walls and blue being background or floor. Then I will export from GIMP as a png and use ImageIO.read() to get a BufferedImage object of the maze. Finally, I will assign Rectangle hitboxes to walls and store them in an ArrayList so I can use .intersect() to check for sprite contact with walls. I can work with it from here.
However, there is one thing I want to be able to do for my program that I don't know how to do: Once I have stored my image as a BufferedImage, how can I detect the red parts (all the exact same RGB shade of red) and create matching Rectangles?
Notes:
Mazes will always be of fixed size (1000x1000 pixels).
There is a fixed starting point for each maze.
The red areas will always form straight rectangles. The Rectangle objects which I create are just used as hitboxes so I can use .intersect(), never drawn or anything like that.
Rectangles that are created will be stored in an ArrayList.
Example Maze: (a simple one)
What I want to be able to do: (green areas being where the java.awt.Rectangles are created and stored into ArrayList)
I will provide a quite naive way of solving the problem (not fully implemented, just so you get the idea).
Have a list of all rectangles, List<Rectangle> mazeRectangles, where every rectangle will be stored, and of course the image, BufferedImage image;
Now we will iterate over all pixels until we find one with the right colour.
Every time we find a rectangle, we skip all x values for the width of that rectangle:
//iterate over every pixel..
//iterate over every pixel..
for (int y = 0; y < image.getHeight(); y++) {
    for (int x = 0; x < image.getWidth(); x++) {
        //check if the current pixel has the maze colour
        if (isMazeColour(image.getRGB(x, y))) {
            Rectangle rect = findRectangle(x, y);
            x += rect.width;
        }
    }
}
Your method for checking the colour:
public boolean isMazeColour(int colour){
    // here you should actually check for a range of colours, since you can
    // never expect to get a nicely encoded image..
    return colour == Color.RED.getRGB();
}
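As that comment says, a real PNG export may not be one exact shade; a tolerance-based variant might look like this (the threshold of 30 is an arbitrary assumption):
public boolean isMazeColour(int colour) {
    Color c = new Color(colour);
    Color target = Color.RED;
    // accept anything "close enough" to pure red, channel by channel
    return Math.abs(c.getRed()   - target.getRed())   < 30
        && Math.abs(c.getGreen() - target.getGreen()) < 30
        && Math.abs(c.getBlue()  - target.getBlue())  < 30;
}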
The interesting part is the findRectangle method..
We see if there is already a Rectangle which contains our coordinates. If so return it, otherwise create a new Rectangle, add it to the list and return it.
If we have to create a new Rectangle, we will first find its width. The annoying part about this is that you'll still have to check every pixel for the rest of the rectangle, since you might have a configuration like this:
+++++++
+++++++
###
###
where # and + are separate boxes. So we first find the width:
public Rectangle findRectangle(int x, int y){
    // this could be optimized. You could keep a separate collection where
    // you remove rectangles from, once your cursor is below that rectangle
    for(Rectangle rectangle : mazeRectangles){
        if(rectangle.contains(x, y)){
            return rectangle;
        }
    }
    //find the width of the Rectangle
    int xD = 0;
    while(x + xD + 1 < image.getWidth() && isMazeColour(image.getRGB(x + xD + 1, y))){
        xD++;
    }
    int yD = 0; //todo: find height of rect..
    Rectangle toReturn = new Rectangle(x, y, xD, yD);
    mazeRectangles.add(toReturn);
    return toReturn;
}
I didn't implement the yD part, since it's a bit messy and I am a little lazy, but you'd need to iterate over y and check each row (so two nested loops).
Note that this algorithm might result in overlapping Rectangles. If you don't want that, when finding xD, check for each pixel whether it is already contained in another Rectangle, and only expand xD as long as you are not inside one.
Another thing: you might end up with strange artefacts at the border of your rectangles, due to the interpolation of colours between red and blue. Maybe you want to check for Rectangles that are too small (like only 1 pixel wide) and get rid of them.
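For the missing yD part, a rough sketch of that nested scan could look like this (it assumes the width xD has already been found as above, and would replace the int yD = 0; line):
// Grow the rectangle downwards while every pixel in the next row,
// across the full width xD, still has the maze colour.
int yD = 0;
boolean rowMatches = true;
while (rowMatches && y + yD + 1 < image.getHeight()) {
    for (int i = 0; i <= xD; i++) {
        if (!isMazeColour(image.getRGB(x + i, y + yD + 1))) {
            rowMatches = false;
            break;
        }
    }
    if (rowMatches) {
        yD++;
    }
}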
Last year, someone asked about a more general case for solving a maze. They had one additional complexity in that there were multiple paths, but the "correct" path through an intersection was straight.
Python: solve "n-to-n" maze
The solution provided solves the maze by ray-casting. Starting at the beginning of a path, it projects lines down the path in all directions. Then it sorts the list, chooses the longest line, and uses that to calculate the next starting point. Now it repeats projecting lines in all directions except the direction it came from, because the backtrack could be longer than the forward progress, which would just bounce the solution around in the longest leg of the maze.
If you are certain your angles are always 90 degrees, you could modify the code accordingly.

Painting and rotating a polygon around a point in Java swing

I'm just trying to paint multiple unfilled triangles rotating around a central point in Java: paint one triangle, rotate the points by a certain amount, and paint another one.
int rad = 10; //Radius between the triangles
int num = 20; //Number of triangles
for (int i = 0; i < num; i++) {
    // (250,250) would be the center
    int[] xPoints = {250, 175, 325}; //X points of the first triangle
    int[] yPoints = {250, 100, 100}; //Y points of the first triangle
    g.drawPolygon(xPoints, yPoints, 3); //Paint the shape
}
Of course my code only prints the first triangle, as I'm unsure how to rotate the points. I've searched around and found some trig, but I don't really understand it. Is there a simple way to rotate each point? Thanks.
Is there a simple way to rotate each point?
Use an AffineTranform that does the geometry for you.
Some examples can be seen in posts tagged affinetransform. Particularly those of mine, Trashgod, MadProgrammer & HovercraftFullOfEels (my apologies if I forgot someone who has done some nice examples).
The Graphics2D object contains an AffineTransform and has a rotate call that applies a rotation about a given point.
When using this, you often (not always) want to save a copy of the transform first and then restore it so the next use of g has the original transform rather than a pre- or post-multiplied version:
AffineTransform savedTransform = g.getTransform();
g.rotate(theta, x_center_of_rotation, y_center_of_rotation);
g.setTransform(savedTransform);
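Putting it together for the triangles in the question, a minimal sketch (it assumes g is the Graphics passed to paintComponent and that the triangles should be spread evenly around a full circle, which is my own assumption):
// Needs java.awt.Graphics2D and java.awt.geom.AffineTransform.
Graphics2D g2 = (Graphics2D) g;
int num = 20;                               // number of triangles
int[] xPoints = {250, 175, 325};            // the first triangle from the question
int[] yPoints = {250, 100, 100};
AffineTransform saved = g2.getTransform();  // remember the original transform
for (int i = 0; i < num; i++) {
    g2.drawPolygon(xPoints, yPoints, 3);
    g2.rotate(2 * Math.PI / num, 250, 250); // rotate subsequent triangles about (250, 250)
}
g2.setTransform(saved);                     // restore so later painting is unaffected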

Own implementation of Phong illumination with ray casting

I am trying to write a program in Java from scratch that renders a sphere with the ray casting technique and Phong illumination, but I am a bit lost.
I understand the concept behind the Phong equation coefficients, but I don't understand how to get the vector values, or what the relation of all this is to ray casting.
So let's say I want to render the sphere in the middle of my screen, and I have its position and radius, so (cx, cy, r). Where exactly do I start now? How exactly do I get the vector values? My idea is as follows (pseudocode):
int cx = window width / 2
int cy = window height / 2
int r = 30;
for (i = 0 -> window height) {
    for (j = 0 -> window width) {
        if ((j - cx)^2 + (i - cy)^2 < r^2) {
            // point inside
            Color c = phong(arguments..)
            draw pixel j, i with color c
        }
    }
}
But I have no idea if this is correct or not, and if it is, how do I get the vector values; for starters, the normal?
Could you point me in the right direction? I have tried googling a lot with no success. Thank you in advance.
The vectors for calculating the normal usually come from a tessellation (approximation) of the real geometrical object. So you break the sphere up into, say, triangles. Then each triangle (p1, p2, p3) has its own normal vector, (p2 - p1) × (p3 - p1).
The Phong shading method is an interpolation which then (ideally) blurs over the lines that give away the fact that you're drawing triangles instead of a true sphere. It doesn't help with the corners around the sides (the silhouette), though. :(
For the tessellation, one way is to approximate the sphere with Bezier surface patches, which can then be subdivided to suitably small sizes and simplified to triangles. My question over here explores doing this work to draw a teapot (mostly surfaces of revolution, not unlike spheres).
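As a small illustration of that cross-product normal (plain double[3] vectors are used only to keep the sketch self-contained):
// Face normal of a triangle (p1, p2, p3): the cross product of two edge vectors, normalized.
static double[] faceNormal(double[] p1, double[] p2, double[] p3) {
    double[] u = {p2[0] - p1[0], p2[1] - p1[1], p2[2] - p1[2]};
    double[] v = {p3[0] - p1[0], p3[1] - p1[1], p3[2] - p1[2]};
    double[] n = {u[1] * v[2] - u[2] * v[1],
                  u[2] * v[0] - u[0] * v[2],
                  u[0] * v[1] - u[1] * v[0]};
    double len = Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    return new double[]{n[0] / len, n[1] / len, n[2] / len}; // unit normal for the Phong terms
}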
