Build a 3D surface plot using XYZ coordinates with jzy3d - Java

I've been searching for a way to send a list of coordinates (x, y, z) to jzy3d, but without success.
The only way I found is to use a "builder" with a list of Coord3d and a "tesselator", but it doesn't actually work.
I don't really understand what the Tesselator is for, in fact.
Here is the code I tried:
public Chart getChart() {
    List<Coord3d> coordinates = new ArrayList<Coord3d>();
    for (int i = 0; i < 200; i++)
        coordinates.add(new Coord3d(5, 10, 15));

    Tesselator tesselator = new Tesselator() {
        @Override
        public AbstractComposite build(float[] x, float[] y, float[] z) {
            return null;
        }
    };
    tesselator.build(coordinates);
    org.jzy3d.plot3d.primitives.Shape surface = (Shape) Builder.build(coordinates, tesselator);
    /*
    // Define a function to plot
    Mapper mapper = new Mapper() {
        public double f(double x, double y) {
            return 10 * Math.sin(x / 10) * Math.cos(y / 20) * x;
        }
    };

    // Define range and precision for the function to plot
    Range range = new Range(-150, 150);
    int steps = 50;

    // Create the object to represent the function over the given range
    org.jzy3d.plot3d.primitives.Shape surface =
        (Shape) Builder.buildOrthonormal(new OrthonormalGrid(range, steps, range, steps), mapper);
    surface.setColorMapper(new ColorMapper(new ColorMapRainbow(),
        surface.getBounds().getZmin(), surface.getBounds().getZmax(),
        new Color(1, 1, 1, .5f)));
    surface.setWireframeDisplayed(true);
    surface.setWireframeColor(Color.BLACK);
    surface.setFace(new ColorbarFace(surface));
    surface.setFaceDisplayed(true);
    surface.setFace2dDisplayed(true); // opens a colorbar on the right part of the display
    */
    // Create a chart
    Chart chart = new Chart("swing");
    chart.getScene().getGraph().add(surface);
    return chart;
}
Could someone please tell me how to feed my graph with many XYZ coordinates so that I get a 3D surface plot like this one:
[image: example 3D surface plot (source: free.fr)]

A tesselator builds polygons out of a list of points. Jzy3d provides two base tesselators: the OrthonormalTesselator, which expects points standing on a regular grid, and the DelaunayTesselator, which accepts unstructured input points. The second one does not always work well; that is not a problem with its implementation, but mainly with the fact that it is hard to decide how points should be connected to form polygons in 3D. You may find some discussions about it on the Jzy3d wiki and discussion groups.
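For scattered points, the simplest route is usually to hand the whole list to the Delaunay builder and let it do the tesselation. A minimal sketch, assuming a jzy3d version whose Builder exposes buildDelaunay(List<Coord3d>); the z values here are just a made-up function:

// uses org.jzy3d.maths.Coord3d, org.jzy3d.plot3d.builder.Builder,
// org.jzy3d.plot3d.primitives.Shape
List<Coord3d> coordinates = new ArrayList<Coord3d>();
for (int x = 0; x < 20; x++)
    for (int y = 0; y < 20; y++)
        // note: the points must not all be identical (as in the question's loop
        // adding Coord3d(5, 10, 15) 200 times), or there is nothing to triangulate
        coordinates.add(new Coord3d(x, y, (float) (Math.sin(x / 5.0) * Math.cos(y / 5.0))));

Shape surface = (Shape) Builder.buildDelaunay(coordinates);
surface.setWireframeDisplayed(true);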
To manually build polygons, here's what you should do:
// Build a polygon list
double[][] distDataProp = new double[][] {
    { .25, .45, .20 },
    { .56, .89, .45 },
    { .60, .30, .70 }
};
List<Polygon> polygons = new ArrayList<Polygon>();
for (int i = 0; i < distDataProp.length - 1; i++) {
    for (int j = 0; j < distDataProp[i].length - 1; j++) {
        Polygon polygon = new Polygon();
        polygon.add(new Point(new Coord3d(i, j, distDataProp[i][j])));
        polygon.add(new Point(new Coord3d(i, j + 1, distDataProp[i][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j + 1, distDataProp[i + 1][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j, distDataProp[i + 1][j])));
        polygons.add(polygon);
    }
}
// Create the 3d object
Shape surface = new Shape(polygons);
surface.setColorMapper(new ColorMapper(new ColorMapRainbow(),
    surface.getBounds().getZmin(), surface.getBounds().getZmax(),
    new org.jzy3d.colors.Color(1, 1, 1, 1f)));
surface.setWireframeDisplayed(true);
surface.setWireframeColor(org.jzy3d.colors.Color.BLACK);

chart = new Chart();
chart.getScene().getGraph().add(surface);


Array to List converting and type casting

Motive:
I'm creating an Android application that does the following:
Reads input points (of type Point, a custom class).
Calculates the outermost points.
Draws a polygon (which needs input of type GeoPoint from the osmdroid library) with those points as the vertices (a convex polygon).
Problem:
I got the algorithm to work, but there is a problem converting the data types.
The input is a Point array (Point is a custom class).
geoPoints = new Point[7];
geoPoints[0] = new Point(new GeoPoint(8.180992, 77.336551));
But the osmdroid function that draws the polygon needs List < GeoPoint > as input.
List<GeoPoint> gPoints;
polygon.setPoints(gPoints);
I managed to do the conversion in one direction in the Point() constructor, like this:
public Point(GeoPoint geoPoint) {
    this.x = geoPoint.getLatitude();
    this.y = geoPoint.getLongitude();
}
But I have no idea how to work out the conversion in the return direction.
So when I run the code, I get the following error:
error: incompatible types: inference variable T has incompatible bounds
equality constraints: GeoPoint
lower bounds: Point
where T is a type-variable:
T extends Object declared in method <T>asList(T...)
Calling Function:
private void callingFunction() {
    geoPoints = new Point[7];
    geoPoints[0] = new Point(new GeoPoint(8.180992, 77.336551));
    geoPoints[1] = new Point(new GeoPoint(8.183966, 77.340353));
    geoPoints[2] = new Point(new GeoPoint(8.179836, 77.336105));
    geoPoints[3] = new Point(new GeoPoint(8.178744, 77.339179));
    geoPoints[4] = new Point(new GeoPoint(8.182155, 77.341925));
    geoPoints[5] = new Point(new GeoPoint(8.181655, 77.339318));
    geoPoints[6] = new Point(new GeoPoint(8.182155, 77.341925));

    polygon = new Polygon(); // see note below
    polygon.setFillColor(Color.parseColor("#80FFE082"));
    polygon.setStrokeColor(Color.parseColor("#FFD54F"));
    polygon.setStrokeWidth(5f);

    ConvexHull cx = new ConvexHull(geoPoints);
    Point C1[] = cx.getConvexHull();
    polygon.setPoints(Arrays.asList(C1)); // this is the line that fails

    map.getOverlayManager().add(polygon);
    map.invalidate();
}
Note:
Using only a List or only an array is not possible, because
polygon.setPoints(geoPoints); // needs List<GeoPoint> in callingFunction
Arrays.sort(points); // requires an array of Points in ConvexHull
I've provided everything above. How can I solve this problem? Any help would be really useful. Thanks in advance.
I'm not sure if I got you right, but basically your problem boils down to the fact that you have an array of Point objects but need a list of GeoPoint objects instead, right? And there is no relation between Point and GeoPoint; one does not extend the other.
So why not translate the Point objects back to GeoPoint objects before drawing the polygon?
You could map the types in a stream:
ConvexHull cx = new ConvexHull(geoPoints);
Point C1[] = cx.getConvexHull();
polygon.setPoints(Arrays.stream(C1)
        .map(p -> new GeoPoint(p.getX(), p.getY()))
        .collect(Collectors.toList()));
Or use a for loop:
ConvexHull cx = new ConvexHull(geoPoints);
Point C1[] = cx.getConvexHull();
List<GeoPoint> points = new ArrayList<GeoPoint>(C1.length);
for (int i = 0; i < C1.length; ++i) {
    points.add(new GeoPoint(C1[i].getX(), C1[i].getY()));
}
polygon.setPoints(points);
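As a side note, if you can edit the custom Point class, you could mirror the converting constructor with a (hypothetical) toGeoPoint() method, so the translation logic lives in one place:

// Hypothetical counterpart to the existing Point(GeoPoint) constructor;
// x holds the latitude and y the longitude, as in that constructor.
public GeoPoint toGeoPoint() {
    return new GeoPoint(x, y);
}

The loop body then becomes points.add(C1[i].toGeoPoint());.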

How to draw a series of connected lines in libGDX?

I'm using polygon on the ShapeRenderer, and the problem is that the user should be able to add a vertex whenever they want; the vertices are not set by me but by the user. What I did is, whenever the user adds a point, I add it to an ArrayList.
ArrayList<Float> v = new ArrayList<Float>();

public void ontouch(float screenX, float screenY) {
    v.add(screenX);
    v.add(screenY);
}
Then I have this problem when I try to render a polygon on a ShapeRenderer:
for (int i = 0; i < v.size(); i++) {
    float[] vertices = new float[v.size()];
    vertices[i - 1] = v.get(i - 1);
    vertices[i] = v.get(i);
}
sr.polygon(v);
But I just get errors.
I am trying to achieve something like this; if you know a different way of doing this, that would be really helpful. By the way, I'm also using Box2D, and this does not need to have collision; it's just a visual for the user.
The way I would personally do it is by having a LinkedList of Vector2 objects. A Vector2 stores two floats, so for every click you take the x and y coordinates and make a new Vector2. By storing them in the LinkedList, you can retrieve the points at any time in the correct order, so you can connect them with lines.
LinkedList<Vector2> v = new LinkedList<Vector2>();

public void ontouch(float screenX, float screenY) {
    v.add(new Vector2(screenX, screenY)); // add Vector2 into LinkedList
}
How you want to draw the lines or connect the points is up to you.
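For instance, a minimal sketch that connects the stored points in order, assuming sr is the ShapeRenderer from the question:

// Walk the list once, drawing a segment between each consecutive pair
sr.begin(ShapeRenderer.ShapeType.Line);
Vector2 prev = null;
for (Vector2 point : v) {
    if (prev != null) {
        sr.line(prev.x, prev.y, point.x, point.y);
    }
    prev = point;
}
sr.end();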
Another way is to keep only the two most recent points that were clicked and throw the others away. This would mean storing the lines instead of the points. If the lines are objects, you can do this:
Vector2 previousPoint;
Vector2 currentPoint;
ArrayList<MyLineClass> lines = new ArrayList<MyLineClass>();

public void ontouch(float screenX, float screenY) {
    if (currentPoint == null) {
        // first click: nothing to connect yet
        currentPoint = new Vector2(screenX, screenY);
    } else {
        previousPoint = currentPoint;
        currentPoint = new Vector2(screenX, screenY);
        lines.add(new MyLineClass(previousPoint, currentPoint));
    }
}
I wrote this off the cuff but I believe this example should work.
EDIT:
Good thing libGDX is open source. If you want to use an array of float numbers, the method simply takes x and y coordinates in alternating order. So for each index:
0 = x1
1 = y1
2 = x2
3 = y2
4 = x3
5 = y3
etc.
It's an odd method, but I suppose it works.
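So a sketch of feeding the collected points to it, assuming v and sr from the question (polygon() also expects at least 3 points, i.e. 6 floats, and an even count):

// Unbox the collected coordinates into the alternating x/y array
// that ShapeRenderer.polygon(float[]) expects
float[] vertices = new float[v.size()];
for (int i = 0; i < v.size(); i++) {
    vertices[i] = v.get(i);
}
if (vertices.length >= 6) { // polygon() rejects fewer than 3 points
    sr.begin(ShapeRenderer.ShapeType.Line);
    sr.polygon(vertices);
    sr.end();
}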

Can't find out how to transform Tango Point Cloud

I'm working with the Google Tango right now, and so far everything has worked fine, but I've stumbled upon a few strange things.
There is a class ScenePoseCalculator in the Tango examples, with a method toOpenGlCameraPose. When using it, however, the OpenGL camera is not set up correctly: when I move forward, the camera moves backwards, and left and right are swapped as well.
But the most difficult thing is transforming the PointCloud correctly. I do the following:
Create a Vector3f array:
// The vertex buffer holds three floats per point
int numPoints = mPointCloud.getGeometry().getVertices().capacity() / 3;
Vector3f[] vertices = new Vector3f[numPoints];
for (int i = 0; i < numPoints; i++) {
    vertices[i] = new Vector3f(
            mPointCloud.getGeometry().getVertices().get(i * 3),
            mPointCloud.getGeometry().getVertices().get(i * 3 + 1),
            mPointCloud.getGeometry().getVertices().get(i * 3 + 2));
}
In the PointCloud Callback of the TangoListener I do:
private void updateXYZIJ() {
    try {
        if (pcm.getLatestXyzIj().xyzCount < 10) return;

        TangoCoordinateFramePair fp = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);

        TangoXyzIjData xyzIj = pcm.getLatestXyzIj();
        depthArray = new float[xyzIj.xyzCount * 3];
        xyzIj.xyz.get(depthArray);

        com.projecttango.rajawali.Pose p =
                ScenePoseCalculator.toDepthCameraOpenGlPose(
                        tangoProvider.getTango().getPoseAtTime(
                                xyzIj.timestamp, fp), this.setupExtrinsics());
        Pose cloudPose = this.rajawaliPoseToJMEPoseCLOUD(p);

        currentPointCloudData = new PointCloudUpdate(
                new Date(),
                depthArray,
                xyzIj.xyzCount,
                xyzIj.ijRows,
                xyzIj.ijCols,
                0,
                cloudPose,
                null
        );

        // inform listeners
        for (PointCloudListener listener : this.pointsListeners) {
            listener.acceptMessage(tangoProvider, this.currentPointCloudData);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
To transform the PointCloud I do:
this.transformation = new Matrix4f();
this.transformation.setRotationQuaternion(this.referencePose.orientation);
this.transformation.setTranslation(this.referencePose.place);

absVertices = new Vector3f[vertices.length];
for (int i = 0; i < vertices.length; i++) {
    // Copy the vertex, then rotate and translate it in place
    absVertices[i] = new Vector3f(
            vertices[i].x,
            vertices[i].y,
            vertices[i].z);
    transformation.rotateVect(absVertices[i]);
    transformation.translateVect(absVertices[i]);
}
But whatever I do, it won't look right; I've tried everything. The point clouds get stacked on top of each other or look as if they were placed without any sense.
Hope someone knows more...
For everyone still looking to find out how to do this: I just managed it in jMonkeyEngine 3. The secret is that one has to use the INVERSE of the rotation quaternion that the Tango delivers.
Okay, here is how it works:
Get depth data with the callback:
public void onXyzIjAvailable(TangoXyzIjData xyzIj)
Create an array of vertices from the xyz array:
// Create a new array, big enough to hold all vertices
depthArray = new float[xyzIj.xyzCount * 3];
xyzIj.xyz.get(depthArray);

// Create vertices
Vector3f[] vertices = new Vector3f[xyzIj.xyzCount];
for (int i = 0; i < xyzIj.xyzCount; i++) {
    vertices[i] = new Vector3f(
            depthArray[i * 3],
            depthArray[i * 3 + 1],
            depthArray[i * 3 + 2]);
}
Use the ScenePoseCalculator from Rajawali3d:
com.projecttango.rajawali.Pose p = ScenePoseCalculator
        .toDepthCameraOpenGlPose(tangoProvider.getTango().getPoseAtTime(
                xyzIj.timestamp, framePair_SS_D), this.setupExtrinsics());
The device extrinsics must be set up before that (the setupExtrinsics() call above), and remember to do it inside a try/catch block.
After that, get the rotation quaternion and position vector from the Tango pose:
Pose cloudPose = this.rajawaliPoseToJMEPoseCLOUD(p);
That method is defined here:
private Pose rajawaliPoseToJMEPoseCLOUD(com.projecttango.rajawali.Pose p) {
    Pose pose = new Pose(
            new Vector3f(
                    (float) p.getPosition().x,
                    (float) p.getPosition().y,
                    (float) p.getPosition().z),
            new Quaternion(
                    (float) p.getOrientation().x,
                    (float) p.getOrientation().y,
                    (float) p.getOrientation().z,
                    (float) p.getOrientation().w));
    return pose;
}
Then put the vertices in a node and transform that node with the pose data of the corresponding XYZij data. Note the inverse() on the rotation:
meshNode.setLocalRotation(pcu.referencePose.orientation.inverse());
meshNode.setLocalTranslation(pcu.referencePose.place);
Hope it helps someone. Took me 4 days to figure it out x(
Cheers

Can't texture model properly

My Blender 3D object, which was exported with triangulated faces and UVs written, doesn't apply the texture properly.
It looks like this:
My render code:
Color.white.bind();
texture.bind();
glBegin(GL_TRIANGLES);
for (ObjFace face : model.faces) {
    float[] vertex1 = model.vertices[face.indices[0] - 1];
    float[] vertex2 = model.vertices[face.indices[1] - 1];
    float[] vertex3 = model.vertices[face.indices[2] - 1];
    float[] normal1 = model.normals[face.normals[0] - 1];
    float[] normal2 = model.normals[face.normals[1] - 1];
    float[] normal3 = model.normals[face.normals[2] - 1];
    float[] tex1 = model.texCoords[face.texCoords[0] - 1];
    float[] tex2 = model.texCoords[face.texCoords[1] - 1];
    float[] tex3 = model.texCoords[face.texCoords[2] - 1];

    glNormal3f(normal1[0], normal1[1], normal1[2]);
    glTexCoord2f(tex1[0], tex1[1]);
    glVertex3f(vertex1[0], vertex1[1], vertex1[2]);

    glNormal3f(normal2[0], normal2[1], normal2[2]);
    glTexCoord2f(tex2[0], tex2[1]);
    glVertex3f(vertex2[0], vertex2[1], vertex2[2]);

    glNormal3f(normal3[0], normal3[1], normal3[2]);
    glTexCoord2f(tex3[0], tex3[1]);
    glVertex3f(vertex3[0], vertex3[1], vertex3[2]);
}
glEnd();
The parsing code:
for (int i = 0; i < lines.length; ++i) {
    String[] spaced = lines[i].split(" ");
    if (lines[i].startsWith("v ")) {
        float[] vertices = new float[3];
        vertices[0] = parseFloat(spaced[1]);
        vertices[1] = parseFloat(spaced[2]);
        vertices[2] = parseFloat(spaced[3]);
        verticesArray.add(vertices);
    } else if (lines[i].startsWith("vn ")) {
        float[] normals = new float[3];
        normals[0] = parseFloat(spaced[1]);
        normals[1] = parseFloat(spaced[2]);
        normals[2] = parseFloat(spaced[3]);
        normalsArray.add(normals);
    } else if (lines[i].startsWith("vt ")) {
        float[] texCoords = new float[2];
        texCoords[0] = parseFloat(spaced[1]);
        texCoords[1] = parseFloat(spaced[2]);
        texCoordsArray.add(texCoords);
    } else if (lines[i].startsWith("f ")) {
        // "f" entries are v/vt/vn index triplets
        int[] faceIndices = new int[3];
        int[] faceNormals = new int[3];
        int[] faceTextureCoords = new int[3];
        faceIndices[0] = parseInt(spaced[1].split("/")[0]);
        faceIndices[1] = parseInt(spaced[2].split("/")[0]);
        faceIndices[2] = parseInt(spaced[3].split("/")[0]);
        faceNormals[0] = parseInt(spaced[1].split("/")[2]);
        faceNormals[1] = parseInt(spaced[2].split("/")[2]);
        faceNormals[2] = parseInt(spaced[3].split("/")[2]);
        faceTextureCoords[0] = parseInt(spaced[1].split("/")[1]);
        faceTextureCoords[1] = parseInt(spaced[2].split("/")[1]);
        faceTextureCoords[2] = parseInt(spaced[3].split("/")[1]);
        faceArray.add(new ObjFace(faceIndices, faceNormals, faceTextureCoords));
    }
}
Although I'm not sure whether the problem could be with my Blender export.
Thanks.
EDIT: Updated pic after I made the texture image's width and height powers of two.
Edit 2: I tried a simple box to make sure that it wasn't the model that was messing up, and tested face culling. On the box, culling the back faces fixes the problem to some extent; on the cup, however, it makes little difference.
Edit 3: I included a video to demonstrate what I think is the problem: the triangular glitch seems to be caused by overlapping triangles, like where the handle is in front of the actual cup. youtube vid
Is that another instance of the "textures whose sizes are not a power of 2" problem? LWJGL will extend your texture to a power-of-two size, meaning your UV coordinates in [0...1] will be wrong; they should be in [0...0.5783], because the rest is LWJGL padding added to reach the power-of-two size. Can't find a reference...
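If that is the cause, the workaround is to rescale the UVs by the padding ratio. A sketch against the render loop above, where originalWidth/paddedWidth and originalHeight/paddedHeight are assumed values from whatever texture loader is in use:

// Hypothetical: if the loader padded the image from, say, 592x592 up to
// 1024x1024, the UVs must shrink by the same ratio
float uScale = (float) originalWidth / paddedWidth;
float vScale = (float) originalHeight / paddedHeight;
glTexCoord2f(tex1[0] * uScale, tex1[1] * vScale);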
IIRC, in GLES 2.0 the facing of a triangle is indicated by its winding order (clockwise or counter-clockwise). I'm not sure exactly what GL does when you set a normal per vertex (that's for lighting) and none for the triangle; I'm not sure you can set one for a triangle at all, meaning facing is computed from the vertex positions plus winding. What makes me think this is the problem is the fact that all your quads are half rendered (one triangle out of two).
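A quick way to test the winding theory, as a sketch with standard legacy-GL calls matching the immediate-mode code above:

// Cull back faces before drawing; if half of each quad disappears or
// reappears, the triangle winding in the exported OBJ is inconsistent
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW); // OBJ exporters normally emit counter-clockwise faces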

Is there a way to add on to the points of a shape/or a way to grab the perimeter points? [duplicate]

I have made a transform and rendered a Polygon object with it (mesh is of type Polygon):
at.setToTranslation(gameObject.position.x, gameObject.position.y);
at.rotate(Math.toRadians(rotation));
at.scale(scale, scale);
g2d.setTransform(at);
g2d.fillPolygon(mesh);
Now I want to return the exact mesh I rendered so that I can do collision checks on it. The only problem is that if I return mesh, it returns the untransformed mesh. So I tried applying the transform to the Polygon object (mesh) like so:
mesh = (Polygon)at.createTransformedShape(mesh);
But unfortunately at.createTransformedShape() returns a Shape that can only be cast to Path2D.Double. So if anyone knows how to convert a Path2D.Double to a Polygon, or knows another way to apply the transformations to the mesh, please help.
If AffineTransform#createTransformedShape doesn't provide the desired result for Polygons (as seems to be the case), you can split the Polygon into points, transform each point, and combine them into a new Polygon. Try:
// Polygon mesh
// AffineTransform at
int[] x = mesh.xpoints;
int[] y = mesh.ypoints;
int[] rx = new int[x.length];
int[] ry = new int[y.length];
for (int i = 0; i < mesh.npoints; i++) {
    Point2D p = new Point2D.Double(x[i], y[i]);
    at.transform(p, p); // transform the point in place
    rx[i] = (int) Math.round(p.getX());
    ry[i] = (int) Math.round(p.getY());
}
mesh = new Polygon(rx, ry, mesh.npoints);
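As a side note, for collision checks you may not need a Polygon back at all: the Shape returned by createTransformedShape already offers contains() and intersects(). A sketch (mouseX/mouseY are assumed coordinates):

// The transformed Shape supports hit-testing directly
Shape transformed = at.createTransformedShape(mesh);
boolean hit = transformed.contains(mouseX, mouseY);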
