I am using the Proj.4 Java library that can be found here.
I am pretty much unsure how to implement the equivalent of the following Proj4js code with it:
// include the library
<script src="lib/proj4js-combined.js"></script> //adjust the path for your server
//or else use the compressed version
// creating source and destination Proj4js objects
// once initialized, these may be re-used as often as needed
var source = new Proj4js.Proj('EPSG:4326'); //source coordinates will be in Longitude/Latitude, WGS84
var dest = new Proj4js.Proj('EPSG:4141'); //destination coordinates (the original example used EPSG:3785, global spherical Mercator in meters, see http://spatialreference.org/ref/epsg/3785/)
// transforming point coordinates
var p = new Proj4js.Point(-76.0,45.0); //any object will do as long as it has 'x' and 'y' properties
Proj4js.transform(source, dest, p); //do the transformation. x and y are modified in place
//p.x and p.y now hold the coordinates in the destination projection
I am new to the whole subject of projections and I really want to understand what I am doing. I need to convert my coordinates from WGS84 to EPSG:4141, but the Proj.4 Java library is not documented at all and I can't figure out how to use it.
Is anyone familiar with this?
Unfortunately, the library is still not well documented, so for those still searching for a solution:
// imports (the classes below live in the org.osgeo.proj4j package of the linked library)
import org.osgeo.proj4j.BasicCoordinateTransform;
import org.osgeo.proj4j.CRSFactory;
import org.osgeo.proj4j.CoordinateReferenceSystem;
import org.osgeo.proj4j.ProjCoordinate;

CRSFactory factory = new CRSFactory();
CoordinateReferenceSystem srcCrs = factory.createFromName("EPSG:4326");
CoordinateReferenceSystem dstCrs = factory.createFromName("EPSG:4141");
BasicCoordinateTransform transform = new BasicCoordinateTransform(srcCrs, dstCrs);
// Note these are x, y so lng, lat
ProjCoordinate srcCoord = new ProjCoordinate(-76.0, 45.0);
ProjCoordinate dstCoord = new ProjCoordinate();
// Writes result into dstCoord
transform.transform(srcCoord, dstCoord);
Source code at https://github.com/Proj4J/proj4j if you need to figure anything else out.
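For reference, ProjCoordinate exposes public x and y fields, so once the transform has run you can read the result directly, e.g.:
// dstCoord now holds the transformed coordinates, in the units of the destination CRS
System.out.println(dstCoord.x + ", " + dstCoord.y);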
Related
I saved a tensorflow model using tf.saved_model.builder.SavedModelBuilder.
However, when I try to make predictions in Java, it returns the same results most of the time (for fc8, the AlexNet layer before softmax). In some cases it produces genuinely different results, and those are most likely correct, so from that I assume that the training itself is OK.
Has anyone else experienced this? Does anyone have an idea what's wrong?
My Java implementation:
Tensor image = constructAndExecuteGraphToNormalizeImage(imageBytes);
Tensor result = s.runner().feed("input_tensor", image).feed("Placeholder_1",t).fetch("fc8/fc8").run().get(0);
private static Tensor constructAndExecuteGraphToNormalizeImage(byte[] imageBytes) {
    try (Graph g = new Graph()) {
        TF.GraphBuilder b = new TF.GraphBuilder(g);
        // Some constants, adapted for AlexNet from the pre-trained model example at:
        // https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip
        //
        // - The model expects images scaled to 227x227 pixels.
        // - The colors, represented as R, G, B in 1 byte each, are converted to
        //   float using (value - mean) / scale.
        final int H = 227;
        final int W = 227;
        final float mean = 117f;
        final float scale = 1f;

        // Since the graph is being constructed once per execution here, we can use a constant for the
        // input image. If the graph were to be re-used for multiple input images, a placeholder would
        // have been more appropriate.
        final Output input = b.constant("input", imageBytes);
        final Output output =
            b.div(
                b.sub(
                    b.resizeBilinear(
                        b.expandDims(
                            b.cast(b.decodeJpeg(input, 3), DataType.FLOAT),
                            b.constant("make_batch", 0)),
                        b.constant("size", new int[] {H, W})),
                    b.constant("mean", mean)),
                b.constant("scale", scale));
        try (Session s = new Session(g)) {
            return s.runner().fetch(output.op().name()).run().get(0);
        }
    }
}
I am assuming that there is no random operation left in your graph, such as dropout. (Seems to be the case, since you often get the same results).
Alas, some operations in tensorflow seem to be non-deterministic, such as reductions and convolutions. We have to live with the fact that tensorflow's nets are stochastic beasts: their performance can be approached statistically but their outputs are non-deterministic.
(I believe some other frameworks such as Theano go farther than tensorflow in proposing deterministic operations.)
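One thing still worth double-checking is the dropout assumption above: if a dropout placeholder did survive the export, feeding it a keep probability of 1.0 at inference makes that part of the forward pass deterministic. A minimal sketch, with a hypothetical placeholder name ("keep_prob") that you would replace with whatever your exported graph actually uses:
// Sketch only: "keep_prob" is a hypothetical name; inspect the exported graph for the real one (if any).
try (Tensor keepProb = Tensors.create(1.0f)) {   // org.tensorflow.Tensors
    Tensor result = s.runner()
            .feed("input_tensor", image)
            .feed("Placeholder_1", t)
            .feed("keep_prob", keepProb)         // disables dropout for this run
            .fetch("fc8/fc8")
            .run()
            .get(0);
    // ... use result ...
}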
I have an image (basically, I get raw image data as 1024x1024 pixels) and the position in lat/lon of the center pixel of the image.
Each pixel represents the same fixed pixel scale in meters (e.g. 30m per pixel).
Now, I would like to draw the image onto a map which uses the coordinate reference system "EPSG:4326" (WGS84).
When I draw it by defining just the corners of the image in lat/lon, based on an "image size in pixels * pixel scale" calculation and on converting the distances from the center point into lat/lon coordinates for each corner, the image is not drawn correctly onto the map.
By "not correctly drawn" I mean that the image seems to be shifted and that the contents of the image are not at the map locations where I expected them to be.
I suppose this is the case because I "mix" a pixel-scaled image with an "EPSG:4326" coordinate reference system.
Now, with the information I have given, can I transform the whole pixel matrix from its fixed pixel scale into a new pixel matrix in the "EPSG:4326" coordinate reference system, using GeoTools?
Of course, the transformation must depend on the center position in lat/lon that I have been given, and on the pixel scale.
I wonder if using something like this would point me into the correct direction:
MathTransform transform = CRS.findMathTransform(DefaultGeocentricCRS.CARTESIAN, DefaultGeographicCRS.WGS84, true);
DirectPosition2D srcDirectPosition2D = new DirectPosition2D(DefaultGeocentricCRS.CARTESIAN, degreeLat.getDegree(), degreeLon.getDegree());
DirectPosition2D destDirectPosition2D = new DirectPosition2D();
transform.transform(srcDirectPosition2D, destDirectPosition2D);
double transX = destDirectPosition2D.x;
double transY = destDirectPosition2D.y;
int kmPerPixel = mapImage.getWidth() / 1024; // It is known to me that my map is 1024x1024 km ...
double x = zeroPointX + ((transX * 0.001) * kmPerPixel);
double y = zeroPointY + (((transY * -1) * 0.001) * kmPerPixel);
(got this code from another SO thread and already modified it a little bit, but still wonder if this is the correct starting point for my problem.)
I only suppose that my original image coordinate reference system is of the type DefaultGeocentricCRS.CARTESIAN. Can someone confirm this?
And from here on, is this the correct start to use Geotools for this kind of problem solving, or am I on the complete wrong path?
Additionally, I would like to add that this would be used in a quite dynamic system: my image updates at about 10Hz and the transformations have to be performed accordingly often.
Again, is this initial thought of mine leading to a solution, or do you have other solutions for solving my problem?
Thank you very much,
Kiamur
This is not as simple as it might sound. You are essentially trying to define an area on a sphere (ellipsoid technically) using a flat square. As such there is no "correct" way to do it, so you will always end up with some distortion. Without knowing exactly where your image came from there is no way to answer this exactly but the following code provides you with 3 different possible answers:
The first two make use of GeoTools' GeodeticCalculator to calculate the corner points using bearings and distances. These are the blue "square" and the green "square" above. The blue is calculating the corners directly while the green calculates the edges and infers the corners from the intersections (that's why it is squarer).
final int width = 1024, height = 1024;
GeometryFactory gf = new GeometryFactory();
Point centre = gf.createPoint(new Coordinate(0, 51));
WKTWriter writer = new WKTWriter();

// direct method
GeodeticCalculator calc = new GeodeticCalculator(DefaultGeographicCRS.WGS84);
calc.setStartingGeographicPoint(centre.getX(), centre.getY());
double height2 = height / 2.0;
double width2 = width / 2.0;
double dist = Math.sqrt(height2 * height2 + width2 * width2);
double bearing = 45.0;
Coordinate[] corners = new Coordinate[5];
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, dist * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    corners[i] = new Coordinate(corner.getX(), corner.getY());
    bearing += 90.0;
}
corners[4] = corners[0];
Polygon bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));

double[] edges = new double[4];
bearing = 0;
for (int i = 0; i < 4; i++) {
    calc.setDirection(bearing, height2 * 1000.0);
    Point2D corner = calc.getDestinationGeographicPoint();
    if (i % 2 == 0) {
        edges[i] = corner.getY();
    } else {
        edges[i] = corner.getX();
    }
    bearing += 90.0;
}
corners[0] = new Coordinate(edges[1], edges[0]);
corners[1] = new Coordinate(edges[1], edges[2]);
corners[2] = new Coordinate(edges[3], edges[2]);
corners[3] = new Coordinate(edges[3], edges[0]);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
System.out.println(writer.write(bbox));
Another way to do this is to transform the centre point into a projection that is "flatter", use simple addition to calculate the corners, and then reverse the transformation. To do this we can use the AUTO projection defined by the OGC WMS specification to generate an Orthographic projection centred on our point; this gives the red "square", which is very similar to the blue one.
String code = "AUTO:42003," + centre.getX() + "," + centre.getY();
// System.out.println(code);
CoordinateReferenceSystem auto = CRS.decode(code);
// System.out.println(auto);
MathTransform transform = CRS.findMathTransform(DefaultGeographicCRS.WGS84,
auto);
MathTransform rtransform = CRS.findMathTransform(auto,DefaultGeographicCRS.WGS84);
Point g = (Point)JTS.transform(centre, transform);
width2 *=1000.0;
height2 *= 1000.0;
corners[0] = new Coordinate(g.getX()-width2,g.getY()-height2);
corners[1] = new Coordinate(g.getX()+width2,g.getY()-height2);
corners[2] = new Coordinate(g.getX()+width2,g.getY()+height2);
corners[3] = new Coordinate(g.getX()-width2,g.getY()+height2);
corners[4] = corners[0];
bbox = gf.createPolygon(corners);
bbox = (Polygon)JTS.transform(bbox, rtransform);
System.out.println(writer.write(bbox));
Which solution to use is a matter of taste and depends on where your image came from, but I suspect that either the red or the blue will be best. If you need to do this at 10Hz then you will need to test them for speed, but I suspect that transforming the images will be the bottleneck.
Once you have your bounding box set up to your satisfaction, you can convert your (unreferenced) image to a georeferenced coverage using:
GridCoverageFactory factory = CoverageFactoryFinder.getGridCoverageFactory(null);
GridCoverage2D gc = factory.create("name", image, new ReferencedEnvelope(bbox.getEnvelopeInternal(),DefaultGeographicCRS.WGS84));
String fileName = "myImage.tif";
AbstractGridFormat format = GridFormatFinder.findFormat(fileName);
File out = new File(fileName);
GridCoverageWriter writer = format.getWriter(out);
try {
    writer.write(gc, null);
    writer.dispose();
} catch (IllegalArgumentException | IOException e) {
    // TODO Auto-generated catch block
    e.printStackTrace();
}
I am trying to battle my way through learning Java and Bullet physics all in one go. Quite possibly a little too much to do all at once, but I like a challenge.
So far, I've learned how to import g3db objects, apply bullet physics to them and interact with them on the screen by using the following code:
assets = new AssetManager();
assets.load("globe.g3db", Model.class);
assets.load("crate.g3db", Model.class);
assets.finishLoading();
Model model = assets.get("globe.g3db", Model.class);
ModelInstance inst = new ModelInstance(model);
inst.transform.trn(0, 20, 0);
btRigidBody body;
btSphereShape sh = new btSphereShape(1);
sh.calculateLocalInertia(1, new Vector3(0,0,0));
body = new btRigidBody(new btRigidBody.btRigidBodyConstructionInfo(3, new btDefaultMotionState(inst.transform), sh));
body.setUserValue(Minstances.size);
body.proceedToTransform(inst.transform);
motionState = new MyMotionState();
motionState.transform = inst.transform;
body.setMotionState(motionState);
dynamicsWorld.addRigidBody(body );
Minstances.add(inst);
This works fine: if I set it above the ground, it falls and comes to rest on the ground. However, when it moves about it slides rather than rolls.
Is there an easy fix?
To allow rolling of a physical body, you need to calculate its local inertia and provide it to the construction info. In your code you're almost doing it right.
The method
btCollisionShape.calculateLocalInertia(float mass, Vector3 inertia)
indeed calculates local inertia but stores it in its second argument 'Vector3 inertia'. No changes are applied to btCollisionShape itself.
After obtaining the vector of inertia you need to pass it to
btRigidBodyConstructionInfo(float mass, btMotionState motionState, btCollisionShape collisionShape, Vector3 localInertia)
as the last argument.
The correct code should look like this:
btRigidBody body;
btSphereShape sh = new btSphereShape(1);
Vector3 inertia = new Vector3(0,0,0);
sh.calculateLocalInertia(1, inertia);
body = new btRigidBody(new btRigidBody.btRigidBodyConstructionInfo(3, new btDefaultMotionState(inst.transform), sh, inertia));
Local inertia is not required to perform the simulation, but without it, all forces applied to bodies are treated as central forces and therefore cannot affect angular speed.
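To illustrate the difference, here is a small sketch of my own (assuming the libGDX Bullet wrapper and the body constructed with the non-zero inertia above): with proper local inertia an off-centre force produces torque and therefore rolling, while a central force only translates the body.
Vector3 push = new Vector3(10f, 0f, 0f);
body.activate();                                   // wake the body in case it is sleeping
body.applyCentralForce(push);                      // acts through the centre of mass: no spin
body.applyForce(push, new Vector3(0f, 0.5f, 0f));  // applied above the centre: adds torque, so the sphere rolls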
You need to read out and set the rotation as well; right now you are just reading/applying the translation.
Create a new quaternion, get the x, y, z, w values and apply them to your object.
Something like this (C++ code, but you can easily do the same in Java):
btTransform trans;
Quat myquat;
yourBulletDynamicObject->getMotionState()->getWorldTransform(trans);
btQuaternion rot = trans.getRotation();
myquat.w = rot.w();
myquat.x = rot.x();
myquat.y = rot.z();
myquat.z = rot.y();
set myquat back on your object.
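In the question's Java/libGDX setup, a rough equivalent (a sketch, assuming the gdx-bullet wrapper's Matrix4-based API and the body/inst objects from the question) is to read the full world transform back from the motion state and apply it to the ModelInstance, since it carries the rotation as well as the translation:
Matrix4 worldTrans = new Matrix4();                    // com.badlogic.gdx.math.Matrix4
body.getMotionState().getWorldTransform(worldTrans);   // read translation + rotation from Bullet
inst.transform.set(worldTrans);                        // apply both to the rendered model

// If you specifically need the rotation as a quaternion:
Quaternion rot = new Quaternion();                     // com.badlogic.gdx.math.Quaternion
worldTrans.getRotation(rot);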
I have a line segment that represents a direction and magnitude (length); when I draw the segment it works as it should. The value getAimArmsRotation() is pulled from another class that contains a touchpad value.
if (player.getFacingRight()) {
lOriginX = (player.getPosition().x + Player.SIZEw/2);
lOriginY = (player.getPosition().y + Player.SIZEh/1.5f);
//lEndX = lOriginX + (float)Math.cos((player.getAimArmsRotation())/57) * 15f;
//lEndY = lOriginY + (float)Math.sin((player.getAimArmsRotation())/57) * 15f;
laserO = new Vector2(lOriginX, lOriginY);
laserE = new Vector2(lEndX, lEndY);
However, if I use the vectors or floats from this calculation and apply them to a model's velocity vector, it does not move the model along the line segment as I would think it should.
EDIT: Sorry, I meant to attach this picture when I created the question. Fig 1 is how my line segment looks; when I set the velocity values that make up the line segment on my object, it moves in the direction that Fig 2 shows.
getAimArmsRotation() is just a method that sets a sprite's rotation with a value from the touchpad in another class. I don't think the values should matter, since these floats are what I have used to give the line segment its length and direction; I would think that giving an object a velocity of these x and y floats would give it the same direction as the line?
Thanks for the DV, jerks.
I wasn't taking into account the origin position of the object when trying to send it along the desired path. I was only using the LineEnd values; I needed to give the object its origin point to correctly calculate the trajectory or path.
for (GameObject go : gObjects) {
    if (go.getType() == PROJECTILE_ID) {
        go.getVelocity().x = player.getLineEndX() - player.getLineOrgX();
        go.getVelocity().y = player.getLineEndY() - player.getLineOrgY();
        System.out.println(go.getVelocity().x);
        System.out.println(go.getVelocity().y);
    }
}
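Inside the same loop, if the projectile should always travel at the same speed, a small refinement is possible (a sketch of my own, assuming the velocity is a libGDX Vector2 and using a hypothetical speed constant): the end-minus-origin difference already encodes the direction, but its magnitude is the line's length, so normalizing and rescaling keeps the direction while fixing the speed.
float speed = 300f; // hypothetical speed in world units per second
Vector2 dir = new Vector2(player.getLineEndX() - player.getLineOrgX(),
                          player.getLineEndY() - player.getLineOrgY());
go.getVelocity().set(dir.nor().scl(speed)); // same direction as the aim line, fixed magnitude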
I have a problem with processing digital signals. I am trying to detect fingertips, similar to the solution presented here: Hand and finger detection using JavaCV.
However, I am not using JavaCV but OpenCV for Android, which is slightly different.
I have managed to do all the steps presented in the tutorial except the filtering of convex hulls and convexity defects. This is how my image looks:
Here is an image in another resolution:
As you can clearly see, there are too many yellow points (convex hull points) and also too many red points (convexity defects). Sometimes between two yellow points there is no red point, which is quite strange (how are convex hulls calculated?).
What I need is to create a similar filtering function to the one in the link provided before, but using the data structures of OpenCV.
Convex hulls are of type MatOfInt ...
Convexity defects are of type MatOfInt4 ...
I also created some additional data structures, because OpenCV uses different types containing the same data in different methods...
convexHullMatOfInt = new MatOfInt();
convexHullPointArrayList = new ArrayList<Point>();
convexHullMatOfPoint = new MatOfPoint();
convexHullMatOfPointArrayList = new ArrayList<MatOfPoint>();
Here is what I have done so far, but it is not working well. The problem is probably that I am converting the data in a wrong way:
Creating convex hulls and convexity defects:
public void calculateConvexHulls()
{
    convexHullMatOfInt = new MatOfInt();
    convexHullPointArrayList = new ArrayList<Point>();
    convexHullMatOfPoint = new MatOfPoint();
    convexHullMatOfPointArrayList = new ArrayList<MatOfPoint>();

    try {
        // Calculate convex hulls
        if (aproximatedContours.size() > 0)
        {
            Imgproc.convexHull(aproximatedContours.get(0), convexHullMatOfInt, false);

            for (int j = 0; j < convexHullMatOfInt.toList().size(); j++)
                convexHullPointArrayList.add(aproximatedContours.get(0).toList().get(convexHullMatOfInt.toList().get(j)));

            convexHullMatOfPoint.fromList(convexHullPointArrayList);
            convexHullMatOfPointArrayList.add(convexHullMatOfPoint);
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        Log.e("Calculate convex hulls failed.", "Details below");
        e.printStackTrace();
    }
}
public void calculateConvexityDefects()
{
    mConvexityDefectsMatOfInt4 = new MatOfInt4();

    try {
        Imgproc.convexityDefects(aproximatedContours.get(0), convexHullMatOfInt, mConvexityDefectsMatOfInt4);

        if (!mConvexityDefectsMatOfInt4.empty())
        {
            mConvexityDefectsIntArrayList = new int[mConvexityDefectsMatOfInt4.toArray().length];
            mConvexityDefectsIntArrayList = mConvexityDefectsMatOfInt4.toArray();
        }
    } catch (Exception e) {
        Log.e("Calculate convexity defects failed.", "Details below");
        e.printStackTrace();
    }
}
Filtering:
public void filterCalculatedPoints()
{
    ArrayList<Point> tipPts = new ArrayList<Point>();
    ArrayList<Point> foldPts = new ArrayList<Point>();
    ArrayList<Integer> depths = new ArrayList<Integer>();
    fingerTips = new ArrayList<Point>();

    for (int i = 0; i < mConvexityDefectsIntArrayList.length / 4; i++)
    {
        tipPts.add(contours.get(0).toList().get(mConvexityDefectsIntArrayList[4*i]));
        tipPts.add(contours.get(0).toList().get(mConvexityDefectsIntArrayList[4*i+1]));
        foldPts.add(contours.get(0).toList().get(mConvexityDefectsIntArrayList[4*i+2]));
        depths.add(mConvexityDefectsIntArrayList[4*i+3]);
    }

    int numPoints = foldPts.size();
    for (int i = 0; i < numPoints; i++) {
        if ((depths.get(i).intValue()) < MIN_FINGER_DEPTH)
            continue;

        // look at fold points on either side of a tip
        int pdx = (i == 0) ? (numPoints - 1) : (i - 1);
        int sdx = (i == numPoints - 1) ? 0 : (i + 1);

        int angle = angleBetween(tipPts.get(i), foldPts.get(pdx), foldPts.get(sdx));
        if (angle >= MAX_FINGER_ANGLE) // angle between finger and folds too wide
            continue;

        // this point is probably a fingertip, so add to list
        fingerTips.add(tipPts.get(i));
    }
}
Results (white points - fingertips after filtering):
Could you help me to write proper function for filtering?
UPDATE 14.08.2013
I use the standard OpenCV function for contour approximation. I have to change the approximation value with resolution changes and with the hand-to-camera distance, which is quite hard to do. If the resolution is smaller, a finger consists of fewer pixels, so the approximation value should be lower. The same goes for the distance. Keeping it high will result in completely losing the finger. So I think approximation is not a good approach to resolving the problem, although a small value could be useful to speed up the calculations:
Imgproc.approxPolyDP(frame, frame, 2 , true);
If I use high values, then the result is like in the image below, which would only be good if the distance and resolution never changed. Also, I am quite surprised that the default methods for hull points and defect points don't have useful arguments to pass (minimum angle, distance, etc.)...
The image below presents the effect that I would like to achieve always, independently of resolution or hand-to-camera distance. Also, I don't want to see any yellow points when I close my palm...
To sum everything up, I would like to know:
how to filter the points
how I can make a resolution- and distance-independent approximation which will always work
whether someone knows of or has some materials (graphical representations, explanations) about the data structures used in OpenCV; I would be happy to read them. (Mat, MatOfInt, MatOfPoint, MatOfPoint2, MatOfPoint4 etc.)
The convex hull at low resolution can be used to identify the position of the hand as a whole; it is not useful for fingers, but it does provide a region of interest and an appropriate scale.
The higher-resolution analysis should then be applied to your approximated contour. It is easy to skip any points that do not pass the "length and angle" criteria from the last two, though you may wish to "average in" instead of "skip entirely".
Your code example is a single pass of calculating convexity defects and then removing them; that is a logic error. You need to remove points as you go: (a) it is faster and simpler to do everything in one pass, and (b) it avoids removing points in a first pass and having to add them back later, because any removal changes the previous calculations.
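To make the one-pass idea concrete, here is a rough sketch of my own (not code from the linked references; imports from org.opencv.core and java.util assumed): it walks the convexity defects once and applies the depth and angle tests as it goes, reusing the question's MIN_FINGER_DEPTH / MAX_FINGER_ANGLE constants and angleBetween() helper, and assuming the defect indices refer to the same contour that convexHull() was computed on. Note that MIN_FINGER_DEPTH is treated here as a value in pixels.
public List<Point> findFingerTips(MatOfPoint contour, MatOfInt4 defects) {
    List<Point> contourPts = contour.toList();
    int[] d = defects.toArray();               // blocks of 4: start, end, farthest point, fixed-point depth
    List<Point> tips = new ArrayList<Point>();

    for (int i = 0; i < d.length; i += 4) {
        Point tip = contourPts.get(d[i]);      // candidate fingertip (defect start point)
        Point fold = contourPts.get(d[i + 2]); // fold (farthest point) of this defect
        float depth = d[i + 3] / 256.0f;       // OpenCV stores the depth with 8 fractional bits

        if (depth < MIN_FINGER_DEPTH)          // too shallow: drop it right here, nothing to undo later
            continue;

        // fold of the previous defect (wrapping around), so the tip sits between two folds
        Point prevFold = contourPts.get(d[(i + d.length - 4) % d.length + 2]);
        if (angleBetween(tip, prevFold, fold) >= MAX_FINGER_ANGLE)
            continue;                          // angle between finger and folds too wide

        tips.add(tip);
    }
    return tips;
}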
This basic technique is very simple and so works for a basic open palm. It doesn't intrinsically understand a hand or a gesture though, so tuning the scale, angle and length parameters is only ever going to get you "so far".
References to Techniques:
filter length and angle "Convexity defect"
Simen Andresen blog http://simena86.github.io/blog/2013/08/12/hand-tracking-and-recognition-with-opencv/
Kinect SDK based C# Library with added finger direction detection
http://candescentnui.codeplex.com/
http://blog.candescent.ch/2011/11/improving-finger-detection.html
"Self-growing and organized neural gas" (SGONG)
Prof Nikos Papamarkos http://www.papamarkos.gr/uploaded-files/Hand%20gesture%20recognition%20using%20a%20neural%20network%20shape%20fitting%20technique.pdf
Commercial product
David Holz & Michael Buckwald founders of "Leap Motion" http://www.engadget.com/2013/03/11/leap-motion-michael-buckwald-interview/
I think you missed this point:
Hull creation and defect analysis are speeded up by utilizing a low-polygon approximation of the contour rather than the original.
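Regarding the resolution/distance independence asked about above, one common approach (my suggestion, not something stated in the quoted answer) is to derive the approxPolyDP epsilon from the contour's own perimeter instead of using a fixed constant, so the approximation scales with the apparent size of the hand:
// 'contour' is assumed to be the hand contour as a MatOfPoint.
MatOfPoint2f contour2f = new MatOfPoint2f(contour.toArray());
double epsilon = 0.01 * Imgproc.arcLength(contour2f, true);   // 1% of the perimeter; tune this factor
MatOfPoint2f approx2f = new MatOfPoint2f();
Imgproc.approxPolyDP(contour2f, approx2f, epsilon, true);
MatOfPoint approx = new MatOfPoint(approx2f.toArray());       // back to MatOfPoint for convexHull()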