Java 3D - Build 3D models dynamically

I'm trying to build a 3D model by dynamically building 3D models and translating them to where I need them.
I'm starting with a basic model, trying to achieve what is in the picture below. I want to dynamically build two cylinders, where the X,Y,Z of the TOP of my first cylinder is the same X,Y,Z as the BOTTOM of the second cylinder, like the picture below:
For now I have this code:
public static void main(String[] args) throws FileNotFoundException, IOException {
    int height = 10;
    int radius = 1;
    int angle = 0;

    BranchGroup objRoot = new BranchGroup();
    Cylinder cylinder;
    Vector3f last_coordinates = new Vector3f(0f, 0f, 0f);
    TransformGroup transf_group_cylinder = null;

    //---- Working ok -----/
    // build cylinder
    cylinder = new Cylinder(radius, height);
    transf_group_cylinder = createTransformGroup_Cylinder(new Vector3f(0f, 0f, 0f), angle);
    transf_group_cylinder.addChild(cylinder);
    objRoot.addChild(transf_group_cylinder);

    last_coordinates = calculateLastPoint(height / 2, angle);
    System.out.println(last_coordinates);
    //----------------------------//

    // build 2nd cylinder
    cylinder = new Cylinder(radius, height);
    transf_group_cylinder = createTransformGroup_Cylinder(last_coordinates, Math.PI / 2);
    transf_group_cylinder.addChild(cylinder);
    objRoot.addChild(transf_group_cylinder);

    OBJWriter objWriter = new OBJWriter("myObj.obj");
    objWriter.writeNode(objRoot);
    objWriter.close();
}

private static Vector3f calculateLastPoint(int height, int angle) {
    float x = (float) (height * Math.sin(angle));
    float y = (float) (height * Math.cos(angle));
    return new Vector3f(0, x, y);
}

private static TransformGroup createTransformGroup_Cylinder(
        Vector3f last_coordinates, double angle) {
    TransformGroup transf_group = new TransformGroup();

    // position the model
    Transform3D transform_origin = new Transform3D();
    transform_origin.setTranslation(new Vector3f(0, 0, 0));

    // set model in horizontal position
    Transform3D transf_horizontal = new Transform3D();
    transf_horizontal.rotZ(Math.PI / 2);
    transform_origin.mul(transf_horizontal);

    // rotate object
    Transform3D angleRotation = new Transform3D();
    angleRotation.rotX(angle);
    transform_origin.mul(angleRotation);

    Transform3D transform_xyz = new Transform3D();
    transform_xyz.setTranslation(last_coordinates);
    transform_origin.mul(transform_xyz); // set Transform for
    transf_group.setTransform(transform_origin);
    return transf_group;
}
With this code I'm achieving this:
My first cylinder is OK, but my second cylinder is not placed in the proper place.
I can pass any value for the size and the angle, so I need to calculate these two values dynamically.
Can someone help solve this translation problem?
Thank you in advance.

The first step here is to give each cylinder a different color so you can see which one is which.
Next: when you create a cylinder, it is centered at the origin. Since you want to chain them at their end points, you need to move them accordingly: first, move the cylinder (or its transformation matrix) by minus half its height to virtually move the "cylinder origin" to its end.
The next step is to apply the same transformation matrix to the end point (this time plus a full cylinder height), so it lines up with the actual end point.
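A minimal sketch of that idea, assuming Java 3D's Transform3D and the javax.vecmath Vector3f/Point3f classes; startPoint, height, angle and transformGroup are placeholder names for this example:
    Transform3D placement = new Transform3D();
    placement.setTranslation(startPoint);          // where the bottom cap should sit
    Transform3D rot = new Transform3D();
    rot.rotX(angle);
    placement.mul(rot);                            // placement = T(startPoint) * R(angle)

    // The top end of this cylinder, useful as the start point of the next one:
    Point3f top = new Point3f(0f, height, 0f);
    placement.transform(top);                      // = startPoint + R * (0, height, 0)

    // The Cylinder primitive is centred on its origin, so shift it up by half
    // its height before applying the placement transform.
    Transform3D shift = new Transform3D();
    shift.setTranslation(new Vector3f(0f, height / 2f, 0f));
    placement.mul(shift);                          // bottom cap now ends up at startPoint
    transformGroup.setTransform(placement);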
That said, I would suggest you create a helper function that can create a cylinder between two points. This would allow you to say:
Point endPoint = cylinder(new Point(-1,.5,0), new Point(0,.5,0))
cylinder(endPoint, new Point(0,-.5,0))
...
or even create a helper function which accepts a list of points and creates all the cylinders between them.
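For illustration, here is a rough sketch of such a two-point helper, assuming the same Java 3D classes as in the question (Cylinder, Transform3D, TransformGroup, BranchGroup and the javax.vecmath types); the method name createCylinderBetween and its parameters are made up for this example:
    // Builds a cylinder from 'start' to 'end', attaches it to objRoot and
    // returns the end point so the next cylinder can be chained onto it.
    private static Vector3f createCylinderBetween(BranchGroup objRoot,
            Vector3f start, Vector3f end, float radius) {
        Vector3f axis = new Vector3f();
        axis.sub(end, start);                      // direction and length of the cylinder
        float height = axis.length();

        // A Java 3D Cylinder is centred on its local origin, so place it at the midpoint.
        Vector3f mid = new Vector3f();
        mid.add(start, end);
        mid.scale(0.5f);

        // Rotation that maps the cylinder's local Y axis onto the start->end axis.
        Vector3f yAxis = new Vector3f(0f, 1f, 0f);
        Vector3f rotAxis = new Vector3f();
        rotAxis.cross(yAxis, axis);
        float angle = yAxis.angle(axis);

        Transform3D t = new Transform3D();
        if (rotAxis.length() > 1e-6f) {            // parallel/anti-parallel axes are left unrotated
            rotAxis.normalize();
            t.setRotation(new AxisAngle4f(rotAxis, angle));
        }
        t.setTranslation(mid);

        TransformGroup tg = new TransformGroup(t);
        tg.addChild(new Cylinder(radius, height));
        objRoot.addChild(tg);
        return end;
    }
Each call returns the end point, so the main method could chain cylinders by feeding that value into the next call.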

Related

Transform cartesian pixel-data-array to lat/lon pixel-data-array

I have an image (basically, I get raw image data as 1024x1024 pixels) and the position in lat/lon of the center pixel of the image.
Each pixel represents the same fixed pixel scale in meters (e.g. 30m per pixel).
Now, I would like to draw the image onto a map which uses the coordinate reference system "EPSG:4326" (WGS84).
When I draw the image by defining just its corners in lat/lon, based on an "image size in pixels * pixel scale" calculation and converting the distances from the center point into lat/lon coordinates for each corner, the image is not correctly drawn onto the map.
By "not correctly drawn" I mean that the image seems to be shifted and that the contents of the image are not at the map locations where I expected them to be.
I suppose this is the case because I "mix" a pixel-scaled image and the "EPSG:4326" coordinate reference system.
Now, with the information I have given, can I transform the whole pixel matrix from the fixed pixel scale base to a new pixel matrix in the "EPSG:4326" coordinate reference system using GeoTools?
Of course, the transformation must depend on the center position in lat/lon that I have been given, and on the pixel scale.
I wonder if using something like this would point me into the correct direction:
MathTransform transform = CRS.findMathTransform(DefaultGeocentricCRS.CARTESIAN, DefaultGeographicCRS.WGS84, true);
DirectPosition2D srcDirectPosition2D = new DirectPosition2D(DefaultGeocentricCRS.CARTESIAN, degreeLat.getDegree(), degreeLon.getDegree());
DirectPosition2D destDirectPosition2D = new DirectPosition2D();
transform.transform(srcDirectPosition2D, destDirectPosition2D);
double transX = destDirectPosition2D.x;
double transY = destDirectPosition2D.y;
int kmPerPixel = mapImage.getWidth() / 1024; // It is known to me that my map is 1024x1024 km ...
double x = zeroPointX + ((transX * 0.001) * kmPerPixel);
double y = zeroPointY + (((transX * -1) * 0.001) * kmPerPixel);
(got this code from another SO thread and already modified it a little bit, but still wonder if this is the correct starting point for my problem.)
I only suppose that my original image coordinate reference system is of the type DefaultGeocentricCRS.CARTESIAN. Can someone confirm this?
And from here on, is this the correct start to use Geotools for this kind of problem solving, or am I on the complete wrong path?
Additionally, I would like to add that this will be used in quite a dynamic system: my image updates at about 10 Hz, and the transformations have to be performed correspondingly often.
Again, is this initial thought of mine leading to a solution, or do you have other solutions for solving my problem?
Thank you very much,
Kiamur
This is not as simple as it might sound. You are essentially trying to define an area on a sphere (technically an ellipsoid) using a flat square, so there is no single "correct" way to do it and you will always end up with some distortion. Without knowing exactly where your image came from there is no way to answer this exactly, but the following code provides you with 3 different possible answers:
The first two make use of GeoTools' GeodeticCalculator to calculate the corner points using bearings and distances. These are the blue "square" and the green "square" above. The blue one calculates the corners directly, while the green one calculates the edges and infers the corners from the intersections (that's why it is squarer).
final int width = 1024, height = 1024;
GeometryFactory gf = new GeometryFactory();
Point centre = gf.createPoint(new Coordinate(0, 51));
WKTWriter writer = new WKTWriter();

// direct method
GeodeticCalculator calc = new GeodeticCalculator(DefaultGeographicCRS.WGS84);
calc.setStartingGeographicPoint(centre.getX(), centre.getY());
double height2 = height / 2.0;
double width2 = width / 2.0;
double dist = Math.sqrt(height2 * height2 + width2 * width2);
double bearing = 45.0;
Coordinate[] corners = new Coordinate[5];
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, dist * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    corners[i] = new Coordinate(corner.getX(), corner.getY());
    bearing += 90.0;
}
corners[4] = corners[0];
Polygon bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));

double[] edges = new double[4];
bearing = 0;
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, height2 * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    if (i % 2 == 0) {
        edges[i] = corner.getY();
    } else {
        edges[i] = corner.getX();
    }
    bearing += 90.0;
}
corners[0] = new Coordinate(edges[1], edges[0]);
corners[1] = new Coordinate(edges[1], edges[2]);
corners[2] = new Coordinate(edges[3], edges[2]);
corners[3] = new Coordinate(edges[3], edges[0]);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
Another way to do this is to transform the centre point into a projection that is "flatter", use simple addition to calculate the corners, and then reverse the transformation. To do this we can use the AUTO projection defined by the OGC WMS specification to generate an Orthographic projection centred on our point; this gives the red "square", which is very similar to the blue one.
String code = "AUTO:42003," + centre.getX() + "," + centre.getY();
// System.out.println(code);
CoordinateReferenceSystem auto = CRS.decode(code);
// System.out.println(auto);

MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84, auto);
MathTransform rtransform = CRS.findMathTransform(auto, DefaultGeographicCRS.WGS84);

Point g = (Point) JTS.transform(centre, transform);
width2 *= 1000.0;
height2 *= 1000.0;
corners[0] = new Coordinate(g.getX() - width2, g.getY() - height2);
corners[1] = new Coordinate(g.getX() + width2, g.getY() - height2);
corners[2] = new Coordinate(g.getX() + width2, g.getY() + height2);
corners[3] = new Coordinate(g.getX() - width2, g.getY() + height2);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
bbox = (Polygon) JTS.transform(bbox, rtransform);
System.out.println(writer.write(bbox));
Which solution to use is a matter of taste and depends on where your image came from, but I suspect that either the red or the blue one will be best. If you need to do this at 10 Hz then you will need to test them for speed, but I suspect that transforming the images will be the bottleneck.
Once you have your bounding box set up to your satisfaction, you can convert your (unreferenced) image to a georeferenced coverage using:
GridCoverageFactory factory = CoverageFactoryFinder.getGridCoverageFactory(null);
GridCoverage2D gc = factory.create("name", image,
        new ReferencedEnvelope(bbox.getEnvelopeInternal(), DefaultGeographicCRS.WGS84));

String fileName = "myImage.tif";
AbstractGridFormat format = GridFormatFinder.findFormat(fileName);
File out = new File(fileName);
GridCoverageWriter writer = format.getWriter(out);
try {
    writer.write(gc, null);
    writer.dispose();
} catch (IllegalArgumentException | IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}

Java Rotation Matrices & OBB

Hi, I am attempting to build OBBs into my 3D Java game using LWJGL. Currently I am just attempting to rotate the OBB around using Matrix4fs and testing it by rendering the points. When I render it with its x,y,z = 0,0,0 and its angle on the x axis = 1, it rotates fine. But when I move it up on the y axis, say to y = 5, the rotation no longer goes around the center.
I tried fixing this with translation but it doesn't work. I'm also wondering if there is a way to access OpenGL's push/pop and rotate methods to get those values for my points, because OpenGL's rotate does it perfectly.
This is my OBB class:
public OBB(float x, float y, float z, float angleX, float angleY, float angleZ,
        float sizeX, float sizeY, float sizeZ) {
    this.x = x;
    this.y = y;
    this.z = z;
    this.angleX = angleX;
    this.angleY = angleY;
    this.angleZ = angleZ;
    this.sizeX = sizeX;
    this.sizeY = sizeY;
    this.sizeZ = sizeZ;

    posUBR = new Vector3f(x - sizeX, y + sizeY, z + sizeZ); // UpperBackRight
    posUBL = new Vector3f(x - sizeX, y + sizeY, z - sizeZ); // UpperBackLeft
    posUFL = new Vector3f(x + sizeX, y + sizeY, z - sizeZ); // UpperForLeft
    posUFR = new Vector3f(x + sizeX, y + sizeY, z + sizeZ); // UpperForRight
    posLBR = new Vector3f(x - sizeX, y - sizeY, z + sizeZ); // LowerBackRight
    posLBL = new Vector3f(x - sizeX, y - sizeY, z - sizeZ); // LowerBackLeft
    posLFL = new Vector3f(x + sizeX, y - sizeY, z - sizeZ); // LowerForLeft
    posLFR = new Vector3f(x + sizeX, y - sizeY, z + sizeZ); // LowerForRight

    posUBR = rotMatrix(posUBR);
    posUBL = rotMatrix(posUBL);
    posUFL = rotMatrix(posUFL);
    posUFR = rotMatrix(posUFR);
    posLBR = rotMatrix(posLBR);
    posLBL = rotMatrix(posLBL);
    posLFL = rotMatrix(posLFL);
    posLFR = rotMatrix(posLFR);
}
This is my rotation method:
public Vector3f rotMatrix(Vector3f point) {
    Matrix4f rotationMatrix = new Matrix4f();
    rotationMatrix.m00 = point.x;
    rotationMatrix.m10 = point.y;
    rotationMatrix.m20 = point.z;
    rotationMatrix.translate(new Vector3f(-x, -y, -z));
    rotationMatrix.rotate(angleX, new Vector3f(1, 0, 0));
    rotationMatrix.rotate(angleY, new Vector3f(0, 1, 0));
    rotationMatrix.rotate(angleZ, new Vector3f(0, 0, 1));
    rotationMatrix.translate(new Vector3f(x, y, -z));
    return new Vector3f(rotationMatrix.m00, rotationMatrix.m10, rotationMatrix.m20);
}

public void rotate() {
    posUBR = rotMatrix(posUBR);
    posUBL = rotMatrix(posUBL);
    posUFL = rotMatrix(posUFL);
    posUFR = rotMatrix(posUFR);
    posLBR = rotMatrix(posLBR);
    posLBL = rotMatrix(posLBL);
    posLFL = rotMatrix(posLFL);
    posLFR = rotMatrix(posLFR);
}
My render function is a bit long to put in here but it basically renders a cube.
Sorry all, all I needed to do was add this set-to-origin function:
public void setToOrigin() {
    posUBR = new Vector3f(0 - sizeX, 0 + sizeY, 0 + sizeZ);
    posUBL = new Vector3f(0 - sizeX, 0 + sizeY, 0 - sizeZ);
    posUFL = new Vector3f(0 + sizeX, 0 + sizeY, 0 - sizeZ);
    posUFR = new Vector3f(0 + sizeX, 0 + sizeY, 0 + sizeZ);
    posLBR = new Vector3f(0 - sizeX, 0 - sizeY, 0 + sizeZ);
    posLBL = new Vector3f(0 - sizeX, 0 - sizeY, 0 - sizeZ);
    posLFL = new Vector3f(0 + sizeX, 0 - sizeY, 0 - sizeZ);
    posLFR = new Vector3f(0 + sizeX, 0 - sizeY, 0 + sizeZ);
}
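For reference, here is a minimal sketch of rotating a corner about the box centre directly, assuming LWJGL 2's org.lwjgl.util.vector Matrix4f/Vector3f/Vector4f classes and the x, y, z and angleX/angleY/angleZ fields from the OBB class above; rotateAboutCentre is a made-up name:
    // Treat the corner as a point and apply translate-to-origin, rotate, translate-back.
    public Vector3f rotateAboutCentre(Vector3f corner) {
        Matrix4f m = new Matrix4f();                    // starts as identity
        m.translate(new Vector3f(x, y, z));             // 3) move the centre back into place
        m.rotate(angleX, new Vector3f(1, 0, 0));        // 2) rotate about the three axes
        m.rotate(angleY, new Vector3f(0, 1, 0));
        m.rotate(angleZ, new Vector3f(0, 0, 1));
        m.translate(new Vector3f(-x, -y, -z));          // 1) move the centre to the origin

        Vector4f p = new Vector4f(corner.x, corner.y, corner.z, 1f);
        Vector4f out = new Vector4f();
        Matrix4f.transform(m, p, out);                  // apply the combined matrix to the point
        return new Vector3f(out.x, out.y, out.z);
    }
Because translate/rotate post-multiply (as in OpenGL), the translation to the origin is applied to the point first, then the rotations, then the translation back, which keeps the rotation centred on the box.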

Rotate object to face point

I would like to rotate an object to face a point, which I'm having a bit of trouble with.
I'm starting with an object that has its base at zero and is aligned on the y axis.
I would like to rotate it so that the top of the object faces the destination.
My process so far is to:
Given axis A
find the distance between my position and my look position: D
create a direction vector: V = D.normalize()
find the right vector: R = A cross D
find the up vector: U = D cross R
find the angle between up and direction: ANGLE = acos((U dot D) / (U.length * D.length))
rotate by angle scaled by direction on each axis
Here is the code representation of that. I'm not sure what exactly is wrong with it; I've worked it out on paper, and to my knowledge this approach should work, but the results are completely incorrect when drawn. If anyone sees any flaws and could point me in the right direction, that would be great.
Vector3 distance = new Vector3(from.x, from.y, from.z).sub(to.x, to.y, to.z);
final Vector3 axis = new Vector3(0, 1, 0);
final Vector3 direction = distance.clone().normalize();
final Vector3 right = (axis.clone().cross(direction));
final Vector3 up = (distance.clone().cross(right));
float angle = (float) Math.acos((up.dot(direction)/ (up.length() * direction.length())));
bondObject.rotateLocal(angle, direction.x , direction.y, direction.z);
The basic idea here is as follows.
Determine which way the object is facing: directionA
Determine which way the object should be facing: directionB
Determine the angle between those directions: rotationAngle
Determine the rotation axis: rotationAxis
Here's the modified code.
Vector3 distance = new Vector3(from.x, from.y, from.z).sub(to.x, to.y, to.z);

if (distance.length() < DISTANCE_EPSILON)
{
    // exit - don't do any rotation
    // distance is too small for the rotation to be numerically stable
}

// Don't actually need to call normalize for directionA - just doing it to indicate
// that this vector must be normalized.
final Vector3 directionA = new Vector3(0, 1, 0).normalize();
final Vector3 directionB = distance.clone().normalize();

float rotationAngle = (float) Math.acos(directionA.dot(directionB));

if (Math.abs(rotationAngle) < ANGLE_EPSILON)
{
    // exit - don't do any rotation
    // angle is too small for the rotation to be numerically stable
}

final Vector3 rotationAxis = directionA.clone().cross(directionB).normalize();

// rotate object about rotationAxis by rotationAngle
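The last step depends on your math library; with the rotateLocal call from the question (assuming it takes an angle followed by the axis components), it would presumably be:
bondObject.rotateLocal(rotationAngle, rotationAxis.x, rotationAxis.y, rotationAxis.z);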

Is there a way to add on to the points of a shape/or a way to grab the perimeter points? [duplicate]

I have made a transform and rendered a Polygon object with it (mesh is of type Polygon):
at.setToTranslation(gameObject.position.x, gameObject.position.y);
at.rotate(Math.toRadians(rotation));
at.scale(scale, scale);
g2d.setTransform(at);
g2d.fillPolygon(mesh);
Now I want to return the exact mesh I rendered so that I can do collision checks on it. The only problem is that if I return mesh, it returns the un-transformed mesh. So I tried applying the transform to the Polygon object (mesh) like so:
mesh = (Polygon)at.createTransformedShape(mesh);
but unfortunately at.createTransformedShape() returns a Shape that can only be cast to Path2D.Double. So if anyone knows how to convert a Path2D.Double to a Polygon, or knows another way to apply the transformations to the mesh, please help.
If AffineTransform#createTransformedShape doesn't provide the desired result for Polygons (as seems to be the case), you can split the Polygon into points, transform each point, and combine them into a new Polygon. Try:
// Polygon mesh
// AffineTransform at
int[] x = mesh.xpoints;
int[] y = mesh.ypoints;
int[] rx = new int[x.length];
int[] ry = new int[y.length];
for (int i = 0; i < mesh.npoints; i++) {
    Point2D p = new Point2D.Double(x[i], y[i]);
    at.transform(p, p);                    // transforms p in place
    rx[i] = (int) Math.round(p.getX());    // Polygon needs int coordinates
    ry[i] = (int) Math.round(p.getY());
}
mesh = new Polygon(rx, ry, mesh.npoints);

CatmullRomSplines and other smooth paths

I've been looking into getting an object on a two-dimensional plane to follow a smooth curve defined by several control points. From what I've found, I'm looking for a Catmull-Rom spline.
I've been using LibGDX for my project, and it has its own Catmull-Rom spline implementation, but I'm having trouble wrapping my head around how it works, as I've had trouble finding documentation or other source code that uses the LibGDX Catmull-Rom spline.
I'm looking for either an explanation of the LibGDX Catmull-Rom spline implementation or another way to implement a smooth path through control points, whether with Catmull-Rom splines or another method. All I need is the ability to generate a path and get back the x and y coordinates of a point on that path. If anyone has any suggestions or pointers, it would be appreciated. Thanks.
The libGDX Path classes (including CatmullRomSpline) are suitable for both 2D and 3D, so when creating a CatmullRomSpline you must specify which vector type (Vector2 or Vector3) to use:
CatmullRomSpline<Vector2> path = new CatmullRomSpline<Vector2>(controlpoints, continuous);
For example:
float w = Gdx.graphics.getWidth();
float h = Gdx.graphics.getHeight();
Vector2 cp[] = new Vector2[]{
new Vector2(0, 0), new Vector2(w * 0.25f, h * 0.5f), new Vector2(0, h), new Vector2(w*0.5f, h*0.75f),
new Vector2(w, h), new Vector2(w * 0.75f, h * 0.5f), new Vector2(w, 0), new Vector2(w*0.5f, h*0.25f)
};
CatmullRomSpline<Vector2> path = new CatmullRomSpline<Vector2>(cp, true);
Now you can get the location on the path (ranging from 0 to 1) using the valueAt method:
Vector2 position = new Vector2();
float t = a_value_between_0_and_1;
path.valueAt(position, t);
For example:
Vector2 position = new Vector2();
float t = 0;
public void render() {
    t = (t + Gdx.graphics.getDeltaTime()) % 1f;
    path.valueAt(position, t);
    // Now you can use the position vector
}
Here's an example: https://github.com/libgdx/libgdx/blob/master/tests/gdx-tests/src/com/badlogic/gdx/tests/PathTest.java
