I'm working with the Google Tango right now, and so far everything has worked fine, but I stumbled upon a few strange things.
There is a class ScenePoseCalculator in the Tango examples, and it has a method toOpenGlCameraPose. When I use it, however, the OpenGL camera is not set up correctly: when I move forward, the camera moves backwards, and left and right are swapped as well.
But the most difficult thing is to transform the PointCloud correctly. I do the following:
Create a Vector3f array:
// Note: capacity() counts floats, so there are capacity() / 3 points
Vector3f[] vertices = new Vector3f[mPointCloud.getGeometry().getVertices().capacity() / 3];
for (int i = 0; i < vertices.length; i++) {
    vertices[i] = new Vector3f(
            mPointCloud.getGeometry().getVertices().get(i * 3),
            mPointCloud.getGeometry().getVertices().get(i * 3 + 1),
            mPointCloud.getGeometry().getVertices().get(i * 3 + 2));
}
In the PointCloud Callback of the TangoListener I do:
private void updateXYZIJ() {
    try {
        TangoXyzIjData xyzIj = pcm.getLatestXyzIj();
        if (xyzIj.xyzCount < 10) return; // ignore nearly empty clouds

        TangoCoordinateFramePair fp = new TangoCoordinateFramePair(
                TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
                TangoPoseData.COORDINATE_FRAME_DEVICE);

        // Copy the raw xyz floats out of the native buffer
        depthArray = new float[xyzIj.xyzCount * 3];
        xyzIj.xyz.get(depthArray);

        // Pose of the depth camera at the time the cloud was captured
        com.projecttango.rajawali.Pose p =
                ScenePoseCalculator.toDepthCameraOpenGlPose(
                        tangoProvider.getTango().getPoseAtTime(
                                xyzIj.timestamp, fp), this.setupExtrinsics());
        Pose cloudPose = this.rajawaliPoseToJMEPoseCLOUD(p);

        currentPointCloudData = new PointCloudUpdate(
                new Date(),
                depthArray,
                xyzIj.xyzCount,
                xyzIj.ijRows,
                xyzIj.ijCols,
                0,
                cloudPose,
                null
        );

        // Inform listeners
        for (PointCloudListener listener : this.pointsListeners) {
            listener.acceptMessage(tangoProvider, this.currentPointCloudData);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
}
To transform the PointCloud I do:
this.transformation = new Matrix4f();
this.transformation.setRotationQuaternion(this.referencePose.orientation);
this.transformation.setTranslation(this.referencePose.place);
absVertices = new Vector3f[vertices.length];
for (int i = 0; i < vertices.length; i++) {
    // Copy the vertex, then rotate and translate the copy in place
    absVertices[i] = new Vector3f(vertices[i].x, vertices[i].y, vertices[i].z);
    transformation.rotateVect(absVertices[i]);
    transformation.translateVect(absVertices[i]);
}
But whatever I do, it won't look right; I've tried everything. The point clouds get stacked on top of each other or look as if they were placed without any sense.
Hope someone knows more...
For everyone still looking to find out how to do this: I just managed it in jMonkeyEngine3. The secret is that one has to use the INVERSE of the rotation quaternion that the Tango delivers.
Okay, here is how it works:
Get the depth data with the callback:
public void onXyzIjAvailable(TangoXyzIjData xyzIj)
Create an array of vertices from the xyz array:
// Create new array, big enough to hold all vertices
depthArray = new float[xyzIj.xyzCount * 3];
xyzIj.xyz.get(depthArray);
// Create vertices
Vector3f[] vertices = new Vector3f[xyzIj.xyzCount];
for (int i = 0; i < xyzIj.xyzCount; i++) {
    vertices[i] = new Vector3f(
            depthArray[i * 3],
            depthArray[i * 3 + 1],
            depthArray[i * 3 + 2]);
}
Use the ScenePoseCalculator from Rajawali3d:
com.projecttango.rajawali.Pose p = ScenePoseCalculator
        .toDepthCameraOpenGlPose(tangoProvider.getTango().getPoseAtTime(
                xyzIj.timestamp, framePair_SS_D), this.setupExtrinsics());
The device extrinsics must be queried before that, but remember to do it inside a try/catch block.
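In case it helps, here is roughly what a setupExtrinsics() can look like. This is only a sketch modeled on the DeviceExtrinsics helper from the Tango examples; the constructor's argument order is an assumption of mine, so check it against your version of com.projecttango.rajawali:
// Sketch: query the static IMU-to-device/camera poses once (timestamp 0.0
// means "latest") and wrap them in the examples' DeviceExtrinsics helper.
private DeviceExtrinsics setupExtrinsics() {
    TangoCoordinateFramePair fp = new TangoCoordinateFramePair();
    fp.baseFrame = TangoPoseData.COORDINATE_FRAME_IMU;

    fp.targetFrame = TangoPoseData.COORDINATE_FRAME_DEVICE;
    TangoPoseData imuToDevice = tangoProvider.getTango().getPoseAtTime(0.0, fp);

    fp.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_COLOR;
    TangoPoseData imuToColor = tangoProvider.getTango().getPoseAtTime(0.0, fp);

    fp.targetFrame = TangoPoseData.COORDINATE_FRAME_CAMERA_DEPTH;
    TangoPoseData imuToDepth = tangoProvider.getTango().getPoseAtTime(0.0, fp);

    return new DeviceExtrinsics(imuToDevice, imuToColor, imuToDepth);
}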
After that, extract the rotation quaternion and the position vector from the Tango pose:
Pose cloudPose = this.rajawaliPoseToJMEPoseCLOUD(p);
That method is defined here:
private Pose rajawaliPoseToJMEPoseCLOUD(com.projecttango.rajawali.Pose p) {
    // Note: jME's Quaternion constructor takes (x, y, z, w)
    Pose pose = new Pose(
            new Vector3f(
                    (float) p.getPosition().x,
                    (float) p.getPosition().y,
                    (float) p.getPosition().z),
            new Quaternion(
                    (float) p.getOrientation().x,
                    (float) p.getOrientation().y,
                    (float) p.getOrientation().z,
                    (float) p.getOrientation().w));
    return pose;
}
Then put the vertices in a node and transform that node with the pose data of the corresponding XYZij data:
meshNode.setLocalRotation(pcu.referencePose.orientation.inverse());
meshNode.setLocalTranslation(pcu.referencePose.place);
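In case it's useful, this is roughly how the vertices end up under meshNode; a minimal sketch using jME3's Mesh and BufferUtils, where the geometry name and pointMaterial are placeholders of mine:
Mesh mesh = new Mesh();
mesh.setMode(Mesh.Mode.Points);
mesh.setBuffer(VertexBuffer.Type.Position, 3, BufferUtils.createFloatBuffer(vertices));
mesh.updateBound();
Geometry geom = new Geometry("pointCloud", mesh); // name is arbitrary
geom.setMaterial(pointMaterial);                  // assumes an existing material
meshNode.attachChild(geom);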
Hope it helps someone. Took me 4 days to figure it out x(
Cheers
I am developing an application that uses a map. I want to show a polygon with a "hole" in it, in Java on Android. I searched but, unfortunately, could not find a solution. I think my problem is that I cannot set the correct fillColor. Can someone help me?
My result:
I want the hole's color to be transparent.
My code:
List<ArrayList<LatLng>> multiLatLon;
...
// Draw polygon hole
for (int i = 0; i < multiLatLon.size(); i++) {
    poly = new PolygonOptions();
    for (int j = 0; j < multiLatLon.get(i).size(); j++) {
        mop.position(multiLatLon.get(i).get(j));
        poly.add(multiLatLon.get(i).get(j));
        Marker m = mMap.addMarker(mop);
    }
    poly.fillColor(R.color.colorOcher); // note: this passes a resource id, not a color int
    Polygon polygon = mMap.addPolygon(poly);
}
Let me know if you need more info.
Solution:
...
poly = new PolygonOptions();
poly.fillColor(ColorUtils.setAlphaComponent(Color.BLUE, 128)); // 128 = 50% alpha
for (int i = 0; i < multiLatLon.size(); i++) {
    if (i == 0) {
        poly.addAll(multiLatLon.get(i));  // first ring: outer geometry
    } else {
        poly.addHole(multiLatLon.get(i)); // remaining rings: holes
    }
}
mMap.addPolygon(poly);
In my case, I know that the first point array (multiLatLon.get(0)) defines the polygon geometry, while the others are the polygon's holes.
Note: I used addAll to get rid of one for loop.
I think the solution you're looking for is the addHole method in the PolygonOptions class.
Give that method the points (as an Iterable<LatLng>) where you want a hole, and you should be good to go.
I don't know exactly where the values of your hole are in your code, but basically you just call it like this:
poly = new PolygonOptions();
// set the polygon's attributes
//...
//Iterable<LatLng> hole = //whatever contains the hole
poly.addHole(hole);
I'm using a polygon on a ShapeRenderer, and the problem is that the user should be able to add vertices whenever they want; the vertices are not set by me but by the user. What I did is, whenever the user adds a point, add it to an ArrayList.
ArrayList<Float> v = new ArrayList<Float>();

public void ontouch(int screenX, int screenY) {
    v.add((float) screenX);
    v.add((float) screenY);
}
And then I have this problem when I try to render the polygon with a ShapeRenderer:
for (int i = 0; i < v.size(); i++) {
    float[] vertices = new float[v.size()];
    vertices[i - 1] = v.get(i - 1);
    vertices[i] = v.get(i);
}
sr.polygon(v);
But I just get errors.
I am trying to achieve something like this; if you know a different way of doing it, that would be really helpful. By the way, I'm also using Box2D, and this doesn't need collision; it's just a visual for the user.
The way I would personally do it is by having a LinkedList of Vector2 objects. A Vector2 stores two floats, so for every click you take the x and y coordinates and make a new Vector2. By storing the points in a LinkedList you can retrieve them at any time, in the correct order, so you can connect them with lines.
LinkedList<Vector2> v = new LinkedList<Vector2>();

public void ontouch(int screenX, int screenY) {
    v.add(new Vector2(screenX, screenY)); // append the point to the LinkedList
}
How you want to draw the lines or connect the points is up to you.
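For example, a minimal sketch that connects consecutive points with lines, assuming sr is your ShapeRenderer and its projection matrix is already set up:
sr.begin(ShapeRenderer.ShapeType.Line);
for (int i = 1; i < v.size(); i++) {
    Vector2 a = v.get(i - 1);
    Vector2 b = v.get(i);
    sr.line(a.x, a.y, b.x, b.y); // connect each point to the previous one
}
sr.end();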
Another way is to keep only the two most recent points that were clicked and throw the others away. That would mean storing the lines instead of the points. If the lines are objects, you can do this:
Vector2 previousPoint;
Vector2 currentPoint;
ArrayList<MyLineClass> lines = new ArrayList<MyLineClass>();

public void ontouch(int screenX, int screenY) {
    if (currentPoint == null) {
        // First click: nothing to connect yet
        currentPoint = new Vector2(screenX, screenY);
    } else {
        previousPoint = currentPoint;
        currentPoint = new Vector2(screenX, screenY);
        lines.add(new MyLineClass(previousPoint, currentPoint));
    }
}
I wrote this off the cuff but I believe this example should work.
EDIT:
Good thing LibGDX is open source. If you want to use an array of floats, the method simply expects x and y coordinates in alternating order. So, for each index:
0 = x1
1 = y1
2 = x2
3 = y2
4 = x3
5 = y3
etc.
It's an odd method, but I suppose it works.
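So, sticking with the LinkedList<Vector2> approach from above, here is a sketch of flattening the points into that alternating order for ShapeRenderer.polygon; toVertexArray is a helper name I made up, and polygon() needs at least 3 points:
// Hypothetical helper: flatten the points into x1,y1,x2,y2,... order
float[] toVertexArray(List<Vector2> points) {
    float[] vertices = new float[points.size() * 2];
    int i = 0;
    for (Vector2 p : points) {
        vertices[i++] = p.x;
        vertices[i++] = p.y;
    }
    return vertices;
}

// Usage: polygon() throws if there are fewer than 3 points
if (v.size() >= 3) {
    sr.begin(ShapeRenderer.ShapeType.Line);
    sr.polygon(toVertexArray(v));
    sr.end();
}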
My Blender 3D object, exported with triangulated faces and UVs written, doesn't apply the texture properly.
It looks like this:
My render code:
Color.white.bind();
texture.bind();
glBegin(GL_TRIANGLES);
for (ObjFace face : model.faces) {
    // OBJ indices are 1-based, hence the -1
    float[] vertex1 = model.vertices[face.indices[0] - 1];
    float[] vertex2 = model.vertices[face.indices[1] - 1];
    float[] vertex3 = model.vertices[face.indices[2] - 1];
    float[] normal1 = model.normals[face.normals[0] - 1];
    float[] normal2 = model.normals[face.normals[1] - 1];
    float[] normal3 = model.normals[face.normals[2] - 1];
    float[] tex1 = model.texCoords[face.texCoords[0] - 1];
    float[] tex2 = model.texCoords[face.texCoords[1] - 1];
    float[] tex3 = model.texCoords[face.texCoords[2] - 1];
    glNormal3f(normal1[0], normal1[1], normal1[2]);
    glTexCoord2f(tex1[0], tex1[1]);
    glVertex3f(vertex1[0], vertex1[1], vertex1[2]);
    glNormal3f(normal2[0], normal2[1], normal2[2]);
    glTexCoord2f(tex2[0], tex2[1]);
    glVertex3f(vertex2[0], vertex2[1], vertex2[2]);
    glNormal3f(normal3[0], normal3[1], normal3[2]);
    glTexCoord2f(tex3[0], tex3[1]);
    glVertex3f(vertex3[0], vertex3[1], vertex3[2]);
}
glEnd();
The parsing code:
for (int i = 0; i < lines.length; ++i) {
    String[] spaced = lines[i].split(" ");
    if (lines[i].startsWith("v ")) {
        float[] vertices = new float[3];
        vertices[0] = parseFloat(spaced[1]);
        vertices[1] = parseFloat(spaced[2]);
        vertices[2] = parseFloat(spaced[3]);
        verticesArray.add(vertices);
    } else if (lines[i].startsWith("vn ")) {
        float[] normals = new float[3];
        normals[0] = parseFloat(spaced[1]);
        normals[1] = parseFloat(spaced[2]);
        normals[2] = parseFloat(spaced[3]);
        normalsArray.add(normals);
    } else if (lines[i].startsWith("vt ")) {
        float[] texCoords = new float[2];
        texCoords[0] = parseFloat(spaced[1]);
        texCoords[1] = parseFloat(spaced[2]);
        texCoordsArray.add(texCoords);
    } else if (lines[i].startsWith("f ")) {
        // Assumes fully specified "f v/vt/vn v/vt/vn v/vt/vn" triangles
        int[] faceIndices = new int[3];
        int[] faceNormals = new int[3];
        int[] faceTextureCoords = new int[3];
        faceIndices[0] = parseInt(spaced[1].split("/")[0]);
        faceIndices[1] = parseInt(spaced[2].split("/")[0]);
        faceIndices[2] = parseInt(spaced[3].split("/")[0]);
        faceNormals[0] = parseInt(spaced[1].split("/")[2]);
        faceNormals[1] = parseInt(spaced[2].split("/")[2]);
        faceNormals[2] = parseInt(spaced[3].split("/")[2]);
        faceTextureCoords[0] = parseInt(spaced[1].split("/")[1]);
        faceTextureCoords[1] = parseInt(spaced[2].split("/")[1]);
        faceTextureCoords[2] = parseInt(spaced[3].split("/")[1]);
        faceArray.add(new ObjFace(faceIndices, faceNormals, faceTextureCoords));
    }
}
Then again, I'm not sure whether it might be a problem with my Blender export.
Thanks.
EDIT: Updated pic after I made the texture image's width and height powers of two.
Edit 2: I tried a simple box to make sure that it wasn't the model that was screwing up, and tested face culling. On the box, culling the back faces fixes the problem to some extent; on the cup, however, it makes little difference.
Edit 3: I included a video to demonstrate what I think is the problem. I think the triangular glitch is caused by overlapping triangles, for example where the handle is in front of the actual cup: youtube vid
Is this another instance of the "texture dimensions that are not a power of two" problem? LWJGL will extend your texture to a power-of-two size, meaning your UV coordinates in [0...1] will be wrong; they should be in [0...0.5783], because the rest is LWJGL padding added to reach a power of two. I can't find a reference...
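If that is the cause, a possible workaround (a sketch, assuming the Slick-util Texture class, whose getWidth()/getHeight() should return the image-to-padded-texture ratio) is to rescale the UVs at draw time:
// Sketch: scale UVs by the ratio of the real image to the padded texture
float uScale = texture.getWidth();   // e.g. ~0.5783 after power-of-two padding
float vScale = texture.getHeight();
glTexCoord2f(tex1[0] * uScale, tex1[1] * vScale);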
IIRC, in GLES20 the facing of a triangle is indicated by its winding (clockwise or counter-clockwise). I'm not sure exactly what GL will do if you set a normal per vertex (that's for lighting) and none for the triangle; I'm not sure you can set one for a triangle at all, meaning the facing will be derived from vertex positions plus winding. What makes me think this is the problem is the fact that all your quads are half rendered (one triangle out of two).
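Regarding the culling mentioned in Edit 2, this is the usual legacy-GL setup; a sketch that assumes the Blender export uses counter-clockwise winding for front faces:
glEnable(GL_CULL_FACE); // discard faces pointing away from the camera
glCullFace(GL_BACK);
glFrontFace(GL_CCW);    // assumption: CCW winding from the Blender export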
I recently started learning OpenGL. I wanted to advance from the manually written cube and use models exported from Blender. The easiest way seemed to be parsing .obj files, so I made a parser.
It works, but not quite well. I stumbled upon a problem when I wanted to learn about lighting: the reflections and shadows weren't right at all; they made no sense in relation to where the light was. So I rendered in wireframe mode and, to my surprise, there were some extra faces that weren't supposed to be there.
Screenshot: http://img5.imageshack.us/img5/5937/582ff91c155b466495b2b02.png
I then generated a .obj file from the data I parsed and compared it to the original .obj file with a diff tool, and nothing seems to be wrong. There are some missing zeros, like 0.01 instead of 0.0100, but nothing else.
Here is my parser: http://pastebin.com/Aw8mdhJ9
Here is where I use the parsed data: pastebin.com/5Grt1WGf
And this is my toFloatBuffer:
public static FloatBuffer makeFloatBuffer(float[] arr) {
    ByteBuffer bb = ByteBuffer.allocateDirect(arr.length * 4);
    bb.order(ByteOrder.nativeOrder());
    FloatBuffer fb = bb.asFloatBuffer();
    fb.put(arr);
    fb.position(0);
    return fb;
}
After asking around on the internet, I found lots of different opinions. With a bit more study I figured out the problem: the normals were per face, and for OpenGL they need to be per vertex.
To fix this I rearranged the parsed data and made new indices. The drawback is that I am using more indices now, and with the GL_UNSIGNED_SHORT limitation in OpenGL ES my vertex count becomes more limited (see the note after the code).
Here is the code I used to fix the arrays:
private static void makeThingsWork() {
    int n_points = faces.length * 3;
    short count = 0;
    int j = 0;
    float[] fixedvertices = new float[n_points];
    float[] fixednormals = new float[n_points];
    short[] fixedfaces = new short[faces.length];
    // Duplicate each vertex per face so every vertex gets its own normal
    for (int i = 0; i < n_points; i += 3) {
        j = i / 3;
        fixedvertices[i] = vertices[faces[j] * 3];
        fixedvertices[i + 1] = vertices[faces[j] * 3 + 1];
        fixedvertices[i + 2] = vertices[faces[j] * 3 + 2];
        fixednormals[i] = normals[normalIndices[j] * 3];
        fixednormals[i + 1] = normals[normalIndices[j] * 3 + 1];
        fixednormals[i + 2] = normals[normalIndices[j] * 3 + 2];
        // The new index buffer is simply 0, 1, 2, ...
        fixedfaces[i / 3] = count;
        count++;
    }
    vertices = fixedvertices;
    normals = fixednormals;
    faces = fixedfaces;
}
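One thing worth noting: after this expansion the index buffer is just 0, 1, 2, ..., n-1, so the indices can be dropped entirely and glDrawArrays used instead, which sidesteps the GL_UNSIGNED_SHORT index-value limit. A sketch, assuming an Android GLES20 renderer and the arrays above:
// The expanded vertices are already in draw order, so no index buffer is
// needed; the count is the number of vertices (floats / 3).
GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertices.length / 3);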
I've been searching for a way to feed a list of coordinates (x, y, z) to jzy3d, but without success.
The only way I found is to use a Builder with a list of Coord3d and a Tesselator, but it doesn't actually work.
I don't really understand what the Tesselator is for, in fact.
Here is the code I tried:
public Chart getChart() {
    List<Coord3d> coordinates = new ArrayList<Coord3d>();
    for (int i = 0; i < 200; i++)
        coordinates.add(new Coord3d(5, 10, 15));

    Tesselator tesselator = new Tesselator() {
        @Override
        public AbstractComposite build(float[] x, float[] y, float[] z) {
            return null;
        }
    };
    tesselator.build(coordinates);
    org.jzy3d.plot3d.primitives.Shape surface =
            (Shape) Builder.build(coordinates, tesselator);

    /*/ Define a function to plot
    Mapper mapper = new Mapper() {
        public double f(double x, double y) {
            return 10 * Math.sin(x / 10) * Math.cos(y / 20) * x;
        }
    };*/

    // Define range and precision for the function to plot
    // Range range = new Range(-150, 150);
    // int steps = 50;

    // Create the object to represent the function over the given range.
    // org.jzy3d.plot3d.primitives.Shape surface = (Shape) Builder.buildOrthonormal(new OrthonormalGrid(range, steps, range, steps), mapper);
    // surface.setColorMapper(new ColorMapper(new ColorMapRainbow(), surface.getBounds().getZmin(), surface.getBounds().getZmax(), new Color(1, 1, 1, .5f)));
    // surface.setWireframeDisplayed(true);
    // surface.setWireframeColor(Color.BLACK);
    // surface.setFace(new ColorbarFace(surface));
    // surface.setFaceDisplayed(true);
    // surface.setFace2dDisplayed(true); // opens a colorbar on the right part of the display

    // Create a chart
    Chart chart = new Chart("swing");
    chart.getScene().getGraph().add(surface);
    return chart;
}
Could someone please tell me how to feed my graph with many XYZ coordinates so that I can get a 3D surface plot like this one:
(source: free.fr)
A tesselator builds polygons out of a list of points. Jzy3d provides two base tesselators: one that expects points standing on a regular grid (the OrthonormalTesselator) and one that supports unstructured points as input (the DelaunayTesselator). The second one does not always work well: that is not a problem with its implementation, but mainly the fact that it is difficult to decide how points should be grouped to form polygons in 3D. You can find discussions about this on the Jzy3d wiki and in the discussion groups.
To manually build polygons, here's what you should do:
// Build a polygon list
double[][] distDataProp = new double[][] {{.25, .45, .20}, {.56, .89, .45}, {.6, .3, .7}};
List<Polygon> polygons = new ArrayList<Polygon>();
for (int i = 0; i < distDataProp.length - 1; i++) {
    for (int j = 0; j < distDataProp[i].length - 1; j++) {
        Polygon polygon = new Polygon();
        polygon.add(new Point(new Coord3d(i, j, distDataProp[i][j])));
        polygon.add(new Point(new Coord3d(i, j + 1, distDataProp[i][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j + 1, distDataProp[i + 1][j + 1])));
        polygon.add(new Point(new Coord3d(i + 1, j, distDataProp[i + 1][j])));
        polygons.add(polygon);
    }
}

// Create the 3d object
Shape surface = new Shape(polygons);
surface.setColorMapper(new ColorMapper(new ColorMapRainbow(),
        surface.getBounds().getZmin(), surface.getBounds().getZmax(),
        new org.jzy3d.colors.Color(1, 1, 1, 1f)));
surface.setWireframeDisplayed(true);
surface.setWireframeColor(org.jzy3d.colors.Color.BLACK);

chart = new Chart();
chart.getScene().getGraph().add(surface);
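Displaying the result is then a single call; a sketch, assuming jzy3d's ChartLauncher utility:
// Opens the chart in a window with mouse interaction attached
ChartLauncher.openChart(chart);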