Matching an image against a series of templates - Java

I am working on an application that matches a photo taken with the mobile phone camera against a series of images saved in a database. The following Java code works fine for matching one image with one template. Please help me develop this program so it matches against several templates and returns the best match. I am using Android Studio to develop the application.
Thank you
import org.bytedeco.javacv.*;
import org.bytedeco.javacpp.*;
import org.bytedeco.javacpp.indexer.FloatIndexer;
import static org.bytedeco.javacpp.opencv_core.*;
import static org.bytedeco.javacpp.opencv_imgproc.*;
import static org.bytedeco.javacpp.opencv_highgui.*;
import static org.bytedeco.javacpp.opencv_imgcodecs.*;
//import static org.bytedeco.javacpp.opencv_calib3d.*;
//import static org.bytedeco.javacpp.opencv_objdetect.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
public class TemplateMatching {
public static void main(String[] args) throws Exception {
footPrint(args);
} //main method
public static void footPrint(String[] args){
//read in image default colors
Mat sourceColor = imread("F:\\image_processing\\Foot_Print_Temp_Match\\img.jpg", CV_LOAD_IMAGE_COLOR);//this image should be captured by the camera
Mat sourceGrey = new Mat(sourceColor.size(), CV_8UC1);
cvtColor(sourceColor, sourceGrey, COLOR_BGR2GRAY);
//load in template in grey
Mat template1 = imread("F:\\image_processing\\templ.jpg",CV_LOAD_IMAGE_GRAYSCALE);//the template should be loaded from the database
//Size for the result image
Size size = new Size(sourceGrey.cols()-template1.cols()+1, sourceGrey.rows()-template1.rows()+1);
Mat result = new Mat(size, CV_32FC1);//32 bit floating point signed depth in one channel
matchTemplate(sourceGrey, template1, result, TM_CCORR_NORMED) ;//Template matching function
DoublePointer minVal= new DoublePointer();
DoublePointer maxVal= new DoublePointer();
Point min = new Point();
Point max = new Point();
minMaxLoc(result, minVal, maxVal, min, max, null);
rectangle(sourceColor,new Rect(max.x(),max.y(),template1.cols(),template1.rows()), randColor(), 2, 0, 0);
imshow("Original marked", sourceColor);
imshow("Template", template1);
waitKey(0);
destroyAllWindows();
}
public static Scalar randColor(){
int b,g,r;
b= ThreadLocalRandom.current().nextInt(0, 255 + 1);
g= ThreadLocalRandom.current().nextInt(0, 255 + 1);
r= ThreadLocalRandom.current().nextInt(0, 255 + 1);
return new Scalar (b,g,r,0);
}
public static List<Point> getPointsFromMatAboveThreshold(Mat m, float t){
List<Point> matches = new ArrayList<Point>();
FloatIndexer indexer = m.createIndexer();
for (int y = 0; y < m.rows(); y++) {
for (int x = 0; x < m.cols(); x++) {
if (indexer.get(y,x)>t) {
System.out.println("(" + x + "," + y +") = "+ indexer.get(y,x));
matches.add(new Point(x, y));
}
}
}
return matches;
}
} //end of class TemplateMatching

I have an application that basically does this. Look at MatchingService and OpenCVUtils.
Basically, it should match one template, record the score, and move on to the next template; keep a list of scores associated with the template that produced each score, then just take the maximum. A sketch of that loop is shown after the matchScore method below.
public static float matchScore(Mat src,Mat tmp){
Size size = new Size(src.cols()-tmp.cols()+1, src.rows()-tmp.rows()+1);
Mat result = new Mat(size, CV_32FC1);
matchTemplate(src, tmp, result, TM_CCORR_NORMED);
DoublePointer minVal= new DoublePointer();
DoublePointer maxVal= new DoublePointer();
Point min = new Point();
Point max = new Point();
FloatIndexer fi = result.createIndexer();
minMaxLoc(result, minVal, maxVal, min, max, null);
return fi.get(max.y(),max.x());
}
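To tie it together, here is a minimal sketch of the loop described above; bestMatchIndex is a hypothetical helper name, and the templates are assumed to already be loaded (e.g. from your database) into a List<Mat>:
// Runs matchScore against every template and returns the index of the
// template with the highest score, or -1 if the list is empty.
public static int bestMatchIndex(Mat source, List<Mat> templates) {
    int bestIndex = -1;
    float bestScore = Float.NEGATIVE_INFINITY;
    for (int i = 0; i < templates.size(); i++) {
        float score = matchScore(source, templates.get(i));
        if (score > bestScore) {
            bestScore = score;
            bestIndex = i;
        }
    }
    return bestIndex;
}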

Related

How to show images in a large frequency in JavaFX?

My application generates heatmap images as fast as the CPU can (around 30-60 per second) and I want to display them in a single "live heatmap". In AWT/Swing, I could just paint them into a JPanel which worked like a charm.
Recently, I switched to JavaFX and want to achieve the same here. At first, I tried a Canvas, which was slow but okay-ish, yet had a severe memory leak that caused the application to crash. Now I have tried the ImageView component, which is apparently far too slow, as the image gets quite laggy (calling ImageView.setImage on every new iteration). As far as I understand, setImage does not guarantee that the image is actually displayed when the call returns.
I am getting the impression that I am on the wrong track and am using those components in a manner they are not made for. How can I display my 30-60 images per second?
EDIT: A very simple test application. You will need the JHeatChart library.
Note that on a desktop machine I get around 70-80 FPS and the visualization is okay and fluid, but on the smaller Raspberry Pi (my target machine) I get around 30 FPS and an extremely stuttering visualization.
package sample;
import javafx.application.Application;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
import org.tc33.jheatchart.HeatChart;
import java.awt.*;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.util.LinkedList;
public class Main extends Application {
ImageView imageView = new ImageView();
final int scale = 15;
@Override
public void start(Stage primaryStage) {
Thread generator = new Thread(() -> {
int col = 0;
LinkedList<Long> fps = new LinkedList<>();
while (true) {
fps.add(System.currentTimeMillis());
double[][] matrix = new double[48][128];
for (int i = 0; i < 48; i++) {
for (int j = 0; j < 128; j++) {
matrix[i][j] = col == j ? Math.random() : 0;
}
}
col = (col + 1) % 128;
HeatChart heatChart = new HeatChart(matrix, 0, 1);
heatChart.setShowXAxisValues(false);
heatChart.setShowYAxisValues(false);
heatChart.setLowValueColour(java.awt.Color.black);
heatChart.setHighValueColour(java.awt.Color.white);
heatChart.setAxisThickness(0);
heatChart.setChartMargin(0);
heatChart.setCellSize(new Dimension(1, 1));
long currentTime = System.currentTimeMillis();
fps.removeIf(elem -> currentTime - elem > 1000);
System.out.println(fps.size());
imageView.setImage(SwingFXUtils.toFXImage((BufferedImage) scale(heatChart.getChartImage(), scale), null));
}
});
VBox box = new VBox();
box.getChildren().add(imageView);
Scene scene = new Scene(box, 1920, 720);
primaryStage.setScene(scene);
primaryStage.show();
generator.start();
}
public static void main(String[] args) {
launch(args);
}
private static Image scale(Image image, int scale) {
BufferedImage res = new BufferedImage(image.getWidth(null) * scale, image.getHeight(null) * scale,
BufferedImage.TYPE_INT_ARGB);
AffineTransform at = new AffineTransform();
at.scale(scale, scale);
AffineTransformOp scaleOp =
new AffineTransformOp(at, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
return scaleOp.filter((BufferedImage) image, res);
}
}
Your code updates the UI from a background thread, which is definitely not allowed. You need to ensure you update from the FX Application Thread. You also want to try to "throttle" the actual UI updates to occur no more than once per JavaFX frame rendering. The easiest way to do this is with an AnimationTimer, whose handle() method is invoked each time a frame is rendered.
Here's a version of your code which does that:
import java.awt.Dimension;
import java.awt.Image;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.util.LinkedList;
import java.util.concurrent.atomic.AtomicReference;
import org.tc33.jheatchart.HeatChart;
import javafx.animation.AnimationTimer;
import javafx.application.Application;
import javafx.embed.swing.SwingFXUtils;
import javafx.scene.Scene;
import javafx.scene.image.ImageView;
import javafx.scene.layout.VBox;
import javafx.stage.Stage;
public class Main extends Application {
ImageView imageView = new ImageView();
final int scale = 15;
@Override
public void start(Stage primaryStage) {
AtomicReference<BufferedImage> image = new AtomicReference<>();
Thread generator = new Thread(() -> {
int col = 0;
LinkedList<Long> fps = new LinkedList<>();
while (true) {
fps.add(System.currentTimeMillis());
double[][] matrix = new double[48][128];
for (int i = 0; i < 48; i++) {
for (int j = 0; j < 128; j++) {
matrix[i][j] = col == j ? Math.random() : 0;
}
}
col = (col + 1) % 128;
HeatChart heatChart = new HeatChart(matrix, 0, 1);
heatChart.setShowXAxisValues(false);
heatChart.setShowYAxisValues(false);
heatChart.setLowValueColour(java.awt.Color.black);
heatChart.setHighValueColour(java.awt.Color.white);
heatChart.setAxisThickness(0);
heatChart.setChartMargin(0);
heatChart.setCellSize(new Dimension(1, 1));
long currentTime = System.currentTimeMillis();
fps.removeIf(elem -> currentTime - elem > 1000);
System.out.println(fps.size());
image.set((BufferedImage) scale(heatChart.getChartImage(), scale));
}
});
VBox box = new VBox();
box.getChildren().add(imageView);
Scene scene = new Scene(box, 1920, 720);
primaryStage.setScene(scene);
primaryStage.show();
generator.setDaemon(true);
generator.start();
AnimationTimer animation = new AnimationTimer() {
@Override
public void handle(long now) {
BufferedImage img = image.getAndSet(null);
if (img != null) {
imageView.setImage(SwingFXUtils.toFXImage(img, null));
}
}
};
animation.start();
}
public static void main(String[] args) {
launch(args);
}
private static Image scale(Image image, int scale) {
BufferedImage res = new BufferedImage(image.getWidth(null) * scale, image.getHeight(null) * scale,
BufferedImage.TYPE_INT_ARGB);
AffineTransform at = new AffineTransform();
at.scale(scale, scale);
AffineTransformOp scaleOp = new AffineTransformOp(at, AffineTransformOp.TYPE_NEAREST_NEIGHBOR);
return scaleOp.filter((BufferedImage) image, res);
}
}
Using the AtomicReference to wrap the buffered image ensures that it is safely shared between the two threads.
On my machine, this generates about 130 images per second; note that not all are displayed, as only the latest one is shown each time the JavaFX graphics framework displays a frame (which is typically throttled at 60fps).
If you want to ensure you show all images that are generated, i.e. you throttle the image generation by the JavaFX framerate, then you can use a BlockingQueue to store the images:
// AtomicReference<BufferedImage> image = new AtomicReference<>();
// Size of the queue is a trade-off between memory consumption
// and smoothness (essentially works as a buffer size)
BlockingQueue<BufferedImage> image = new ArrayBlockingQueue<>(5);
// ...
// image.set((BufferedImage) scale(heatChart.getChartImage(), scale));
try {
image.put((BufferedImage) scale(heatChart.getChartImage(), scale));
} catch (InterruptedException exc) {
Thread.currentThread().interrupt();
}
and
@Override
public void handle(long now) {
BufferedImage img = image.poll();
if (img != null) {
imageView.setImage(SwingFXUtils.toFXImage(img, null));
}
}
The code is pretty inefficient, as you generate a new matrix, a new HeatChart, etc., on every iteration. This creates many objects on the heap that are quickly discarded, which can make the GC run too often, particularly on a small-memory machine. That said, I ran this with the maximum heap size set to 64 MB (-Xmx64m), and it still performed fine. You may be able to optimize the code, but with the AnimationTimer as shown above, generating images more quickly will not put any additional stress on the JavaFX framework. I would recommend investigating the mutability of HeatChart (i.e. setZValues()) to avoid creating too many objects, and/or using PixelBuffer to write data directly to the image view (this would need to be done on the FX Application Thread).
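For the first suggestion, here is a minimal sketch, assuming HeatChart.setZValues() replaces the chart's data as its name suggests: construct the chart and matrix once outside the loop and only refill them on each iteration, so the only per-frame allocation left is the rendered image itself.
// Created once, outside the while (true) loop:
double[][] matrix = new double[48][128];
HeatChart heatChart = new HeatChart(matrix, 0, 1);
heatChart.setShowXAxisValues(false);
heatChart.setShowYAxisValues(false);
heatChart.setLowValueColour(java.awt.Color.black);
heatChart.setHighValueColour(java.awt.Color.white);
heatChart.setAxisThickness(0);
heatChart.setChartMargin(0);
heatChart.setCellSize(new Dimension(1, 1));

// Inside the loop: refill the existing matrix instead of allocating a new one...
for (int i = 0; i < 48; i++) {
    for (int j = 0; j < 128; j++) {
        matrix[i][j] = col == j ? Math.random() : 0;
    }
}
col = (col + 1) % 128;
// ...and hand the same array back so the chart definitely sees the new values.
heatChart.setZValues(matrix);
image.set((BufferedImage) scale(heatChart.getChartImage(), scale));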
Here's a different example, which (almost) completely minimizes object creation, using one off-screen int[] array to compute data and one on-screen int[] array to display it. There are a few low-level threading details to ensure the on-screen array is only seen in a consistent state. The on-screen array underlies a PixelBuffer, which in turn is used for a WritableImage.
This class generates the image data:
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;
import java.util.function.Consumer;
public class ImageGenerator {
private final int width;
private final int height;
// Keep two copies of the data: one which is not exposed
// that we modify on the fly during computation;
// another which we expose publicly.
// The publicly exposed one can be viewed only in a complete
// state if operations on it are synchronized on this object.
private final int[] privateData ;
private final int[] publicData ;
private final long[] frameTimes ;
private int currentFrameIndex ;
private final AtomicLong averageGenerationTime ;
private final ReentrantLock lock ;
private static final double TWO_PI = 2 * Math.PI;
private static final double PI_BY_TWELVE = Math.PI / 12; // 15 degrees
public ImageGenerator(int width, int height) {
super();
this.width = width;
this.height = height;
privateData = new int[width * height];
publicData = new int[width * height];
lock = new ReentrantLock();
this.frameTimes = new long[100];
this.averageGenerationTime = new AtomicLong();
}
public void generateImage(double angle) {
// compute in private data copy:
int minDim = Math.min(width, height);
int minR2 = minDim * minDim / 4;
for (int x = 0; x < width; x++) {
int xOff = x - width / 2;
int xOff2 = xOff * xOff;
for (int y = 0; y < height; y++) {
int index = x + y * width;
int yOff = y - height / 2;
int yOff2 = yOff * yOff;
int r2 = xOff2 + yOff2;
if (r2 > minR2) {
privateData[index] = 0xffffffff; // white
} else {
double theta = Math.atan2(yOff, xOff);
double delta = Math.abs(theta - angle);
if (delta > TWO_PI - PI_BY_TWELVE) {
delta = TWO_PI - delta;
}
if (delta < PI_BY_TWELVE) {
int green = (int) (255 * (1 - delta / PI_BY_TWELVE));
privateData[index] = (0xff << 24) | (green << 8); // green, fading away from center
} else {
privateData[index] = 0xff << 24; // black
}
}
}
}
// copy computed data to public data copy:
lock.lock();
try {
System.arraycopy(privateData, 0, publicData, 0, privateData.length);
} finally {
lock.unlock();
}
frameTimes[currentFrameIndex] = System.nanoTime() ;
int nextIndex = (currentFrameIndex + 1) % frameTimes.length ;
if (frameTimes[nextIndex] > 0) {
averageGenerationTime.set((frameTimes[currentFrameIndex] - frameTimes[nextIndex]) / frameTimes.length);
}
currentFrameIndex = nextIndex ;
}
public void consumeData(Consumer<int[]> consumer) {
lock.lock();
try {
consumer.accept(publicData);
} finally {
lock.unlock();
}
}
public long getAverageGenerationTime() {
return averageGenerationTime.get() ;
}
}
And here's the UI:
import java.nio.IntBuffer;
import javafx.animation.AnimationTimer;
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.control.Label;
import javafx.scene.image.ImageView;
import javafx.scene.image.PixelBuffer;
import javafx.scene.image.PixelFormat;
import javafx.scene.image.PixelWriter;
import javafx.scene.image.WritableImage;
import javafx.scene.layout.BorderPane;
import javafx.stage.Stage;
public class AnimationApp extends Application {
private final int size = 400 ;
private IntBuffer buffer ;
@Override
public void start(Stage primaryStage) throws Exception {
// background image data generation:
ImageGenerator generator = new ImageGenerator(size, size);
// Generate new image data as fast as possible:
Thread thread = new Thread(() -> {
while( true ) {
long now = System.currentTimeMillis() ;
double angle = 2 * Math.PI * (now % 10000) / 10000 - Math.PI;
generator.generateImage(angle);
}
});
thread.setDaemon(true);
thread.start();
generator.consumeData(data -> buffer = IntBuffer.wrap(data));
PixelFormat<IntBuffer> format = PixelFormat.getIntArgbPreInstance() ;
PixelBuffer<IntBuffer> pixelBuffer = new PixelBuffer<>(size, size, buffer, format);
WritableImage image = new WritableImage(pixelBuffer);
BorderPane root = new BorderPane(new ImageView(image));
Label fps = new Label("FPS: ");
root.setTop(fps);
Scene scene = new Scene(root);
primaryStage.setScene(scene);
primaryStage.setTitle("Give me a ping, Vasili. ");
primaryStage.show();
AnimationTimer animation = new AnimationTimer() {
@Override
public void handle(long now) {
// Update image, ensuring we only see the underlying
// data in a consistent state:
generator.consumeData(data -> {
pixelBuffer.updateBuffer(pb -> null);
});
long aveGenTime = generator.getAverageGenerationTime() ;
if (aveGenTime > 0) {
double aveFPS = 1.0 / (aveGenTime / 1_000_000_000.0);
fps.setText(String.format("FPS: %.2f", aveFPS));
}
}
};
animation.start();
}
public static void main(String[] args) {
Application.launch(args);
}
}
For a version that doesn't rely on the JavaFX 13 PixelBuffer, you can modify this class to use a PixelWriter (as I understand it, this won't be quite as efficient, but it works just as smoothly in this example):
// generator.consumeData(data -> buffer = IntBuffer.wrap(data));
PixelFormat<IntBuffer> format = PixelFormat.getIntArgbPreInstance() ;
// PixelBuffer<IntBuffer> pixelBuffer = new PixelBuffer<>(size, size, buffer, format);
// WritableImage image = new WritableImage(pixelBuffer);
WritableImage image = new WritableImage(size, size);
PixelWriter pixelWriter = image.getPixelWriter() ;
and
AnimationTimer animation = new AnimationTimer() {
@Override
public void handle(long now) {
// Update image, ensuring we only see the underlying
// data in a consistent state:
generator.consumeData(data -> {
// pixelBuffer.updateBuffer(pb -> null);
pixelWriter.setPixels(0, 0, size, size, format, data, 0, size);
});
long aveGenTime = generator.getAverageGenerationTime() ;
if (aveGenTime > 0) {
double aveFPS = 1.0 / (aveGenTime / 1_000_000_000.0);
fps.setText(String.format("FPS: %.2f", aveFPS));
}
}
};

ALOS Satellite Product to PNG conversion issue (missing rotation)

I'm trying to export a PNG quicklook of an ALOS AVNIR-2 product using the BEAM Java APIs. The following picture shows the RGB preview of the product, as it appears in the BEAM GUI.
As you can see, the image is not upright, due to its geocoding. I've developed a very simple Java program to export the quicklook of the product.
public static void main(String[] args) {
String[] rgbBandNames = new String[3];
rgbBandNames[0] = "radiance_3";
rgbBandNames[1] = "radiance_2";
rgbBandNames[2] = "radiance_1";
try {
//Product inputProduct = ProductIO.readProduct(args[0]);
Product inputProduct = ProductIO.readProduct("C:\\nfsdata\\VOL-ALAV2A152763430-O1B2R_U");
Band[] produtBands = inputProduct.getBands();
Band[] rgbBands = new Band[3];
int n = 0;
for (Band band : produtBands) {
if (band.getName().equals(rgbBandNames[0])) {
rgbBands[0] = band;
} else if (band.getName().equals(rgbBandNames[1])) {
rgbBands[1] = band;
} else if (band.getName().equals(rgbBandNames[2])) {
rgbBands[2] = band;
}
n = n + 1;
}
ImageInfo outImageInfo = ProductUtils.createImageInfo(rgbBands, true, ProgressMonitor.NULL);
BufferedImage outImage = ProductUtils.createRgbImage(rgbBands, outImageInfo, ProgressMonitor.NULL);
ImageIO.write(outImage, "PNG", new File(inputProduct.getName() + ".png"));
} catch (IOException e) {
System.err.println("Error: " + e.getMessage());
}
}
The program works, but every PNG image I get from it is an upright PNG image, like the following.
Now, I know that it is not possible to have geocoding information inside a PNG image. I need only to reproduce the same "rotation" of the image.
Any idea?
I managed to solve my problem. In other words, I managed to extract the quicklook from an ALOS AV2 O1B2R_U product, rotated according to the geocoding information of the product (see the image below).
The reason for this is that the ALOS AV2 O1B2R_U products have the geocoding rotation already applied to the raster. As a consequence, in order to successfully export a quicklook, the rotation must be retrieved from the native raster and applied to the output image.
For future reference, I'd like to recap and share my solution with the community. This is my main class:
import com.bc.ceres.core.ProgressMonitor;
import java.awt.Graphics2D;
import java.awt.Image;
import java.awt.Point;
import java.awt.geom.AffineTransform;
import java.awt.image.AffineTransformOp;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;
import org.esa.beam.framework.dataio.ProductIO;
import org.esa.beam.framework.datamodel.Band;
import org.esa.beam.framework.datamodel.ImageInfo;
import org.esa.beam.framework.datamodel.MapGeoCoding;
import org.esa.beam.framework.datamodel.Product;
import org.esa.beam.util.ProductUtils;
public static void main(String[] args) throws IOException {
String inputProductPath = "path\\to\\input\\product";
String outputProductPath = "path\\to\\output\\image";
// Read the source product.
Product inputProduct = ProductIO.readProduct(inputProductPath);
// Extract the RGB bands.
String[] bandNames = new String[3];
Band[] bandData = new Band[3];
bandNames[0] = "radiance_3";
bandNames[1] = "radiance_2";
bandNames[2] = "radiance_1";
for (Band band : inputProduct.getBands()) {
for (int i = 0; i < bandNames.length; i++) {
if (band.getName().equalsIgnoreCase(bandNames[i])) {
bandData[i] = band;
}
}
}
// Generate quicklook image.
ImageInfo outImageInfo = ProductUtils.createImageInfo(bandData, true, ProgressMonitor.NULL);
BufferedImage outImage = ProductUtils.createRgbImage(bandData, outImageInfo, ProgressMonitor.NULL);
outImage = resize(outImage, WIDTH, 1200);
// Extract the orientation.
double orientation;
if (inputProduct.getGeoCoding() != null) {
orientation = -((MapGeoCoding) inputProduct.getGeoCoding()).getMapInfo().getOrientation();
} else {
orientation = 0.0;
}
outImage = rotate(outImage, orientation);
// Write image.
ImageIO.write(outImage, "PNG", new File(outputProductPath));
}
Once the rotation angle of the quicklook has been extracted from the source product (see the above code), it must be applied to the output image (BufferedImage). In the above code, two simple image manipulation functions are employed: resize(...) and rotate(...), see below for their definition.
/**
* Resizes the image {@code tgtImage} by setting one of its dimensions
* (width or height, specified via {@code tgtDimension}) to {@code tgtSize}
* and dynamically calculating the other one in order to preserve the aspect
* ratio.
*
* @param tgtImage The image to be resized.
* @param tgtDimension The selected dimension: {@code ImageUtil.WIDTH} or
* {@code ImageUtil.HEIGHT}.
* @param tgtSize The new value for the selected dimension.
*
* @return The resized image.
*/
public static BufferedImage resize(BufferedImage tgtImage, short tgtDimension, int tgtSize) {
int newWidth = 0, newHeight = 0;
if (HEIGHT == tgtDimension) {
newHeight = tgtSize;
newWidth = (tgtImage.getWidth() * tgtSize) / tgtImage.getHeight();
} else {
newHeight = (tgtImage.getHeight() * tgtSize) / tgtImage.getWidth();
newWidth = tgtSize;
}
Image tmp = tgtImage.getScaledInstance(newWidth, newHeight, Image.SCALE_SMOOTH);
BufferedImage outImage = new BufferedImage(newWidth, newHeight, BufferedImage.TYPE_INT_ARGB);
Graphics2D g2d = outImage.createGraphics();
g2d.drawImage(tmp, 0, 0, null);
g2d.dispose();
return outImage;
}
/**
* Rotates the image {@code tgtImage} by {@code tgtAngle} degrees clockwise.
*
* @param tgtImage The image to be rotated.
* @param tgtAngle The rotation angle (expressed in degrees).
*
* @return The rotated image.
*/
public static BufferedImage rotate(BufferedImage tgtImage, double tgtAngle) {
int w = tgtImage.getWidth();
int h = tgtImage.getHeight();
AffineTransform t = new AffineTransform();
t.setToRotation(Math.toRadians(tgtAngle), w / 2d, h / 2d);
Point[] points = {
new Point(0, 0),
new Point(w, 0),
new Point(w, h),
new Point(0, h)
};
// Transform to destination rectangle.
t.transform(points, 0, points, 0, 4);
// Get destination rectangle bounding box
Point min = new Point(points[0]);
Point max = new Point(points[0]);
for (int i = 1, n = points.length; i < n; i++) {
Point p = points[i];
double pX = p.getX(), pY = p.getY();
// Update min/max x
if (pX < min.getX()) {
min.setLocation(pX, min.getY());
}
if (pX > max.getX()) {
max.setLocation(pX, max.getY());
}
// Update min/max y
if (pY < min.getY()) {
min.setLocation(min.getX(), pY);
}
if (pY > max.getY()) {
max.setLocation(max.getX(), pY);
}
}
// Determine new width, height
w = (int) (max.getX() - min.getX());
h = (int) (max.getY() - min.getY());
// Determine required translation
double tx = min.getX();
double ty = min.getY();
// Append required translation
AffineTransform translation = new AffineTransform();
translation.translate(-tx, -ty);
t.preConcatenate(translation);
AffineTransformOp op = new AffineTransformOp(t, null);
BufferedImage outImage = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
op.filter(tgtImage, outImage);
return outImage;
}

Java - array of objects not storing values (scope?)

I'm trying to get an array of objects to render into a gamescreen. I've looked around and found some similar questions asked by others, but I can't seem to apply the answers they've gotten to my program.
The problem seems to occur in the nested for-loops. In trying to solve this issue, I've gotten Java NullPointerExceptions at the first line within those for loops.
package com.frfanizz.agility;
import com.badlogic.gdx.Gdx;
import com.badlogic.gdx.Input.Keys;
import com.badlogic.gdx.Screen;
import com.badlogic.gdx.graphics.GL30;
import com.badlogic.gdx.graphics.OrthographicCamera;
import com.badlogic.gdx.graphics.g2d.SpriteBatch;
public class GameScreen implements Screen {
AgilityGame game;
OrthographicCamera camera;
SpriteBatch batch;
Hero hero;
Spark[] sparkArray;
int totalSparks;
public GameScreen(AgilityGame game) {
this.game = game;
camera = new OrthographicCamera();
camera.setToOrtho(true, 1920, 1080);
batch = new SpriteBatch();
hero = new Hero(60,540);
//Variables to set spark array
int screenX = 1400;
int screenY = 1200;
int numOfRow = 11;
int numOfCol = 8;
float spacingRow = screenY/(numOfRow + 1);
float spacingCol = screenX/(numOfCol + 1);
int totalSparks = numOfRow*numOfCol;
//Spark array
Spark[] sparkArray = new Spark[totalSparks];
//setting bounds for sparks
int index = 0;
for (int i=0;i<numOfCol;i++) {
for (int j=0; j<numOfRow;j++) {
//sparkArray[index] = new Spark();
sparkArray[index].bounds.x = (float) (60 + spacingCol + ((i)*spacingCol));
sparkArray[index].bounds.y = (float) (spacingRow + ((j-0.5)*spacingRow));
index++;
}
}
}
@Override
public void render(float delta) {
Gdx.gl.glClearColor(1F, 1F, 1F, 1F);
Gdx.gl.glClear(GL30.GL_COLOR_BUFFER_BIT);
camera.update();
generalUpdate();
batch.setProjectionMatrix(camera.combined);
batch.begin();
//Rendering code
batch.draw(Assets.spriteBackground, 0, 0);
batch.draw(hero.image,hero.bounds.x,hero.bounds.y);
for (int i=0; i<totalSparks; i++) {
batch.draw(sparkArray[i].image,sparkArray[i].bounds.x,sparkArray[i].bounds.y);
}
batch.end();
}
//Other gamescreen methods
}
The classes Hero and Spark are as follows:
package com.frfanizz.agility;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.math.Rectangle;
public class Hero {
public Sprite image;
public Rectangle bounds;
public Hero(int spawnX, int spawnY) {
image = Assets.spriteHero;
image.flip(false, true);
bounds = new Rectangle(spawnX - 16, spawnY - 16, 32, 32);
}
}
and:
package com.frfanizz.agility;
import com.badlogic.gdx.graphics.g2d.Sprite;
import com.badlogic.gdx.math.Rectangle;
public class Spark {
public Sprite image;
public Rectangle bounds;
public Spark () {
image = Assets.spriteSpark;
image.flip(false, true);
bounds = new Rectangle();
bounds.height = 32;
bounds.width = 32;
}
}
Within the for loops, I've printed the values of sparkArray, so based on the other questions whose answers I've read, I think I have an issue with values rather than references.
(Here are the questions I (unsuccessfully) referenced:
Java. Array of objects ,
Java NullPointerException with objects in array)
Thanks in advance!
Uncomment the line
//sparkArray[index] = new Spark();
Your array of Spark objects contains null values until you put instances of Spark into it.
Consider using a two-dimensional array of Spark, if it's appropriate to what you're doing.
int numOfRow = 11;
int numOfCol = 8;
Spark[][] sparkArray = new Spark[numOfRow][numOfCol];
//setting bounds for sparks
for (int row=0; row<numOfRow; row++) {
for (int col=0; col<numOfCol; col++) {
sparkArray[row][col] = new Spark();
sparkArray[row][col].bounds.x = 1.23f; // example of usage
}
}
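For reference, here is the original one-dimensional loop from the question with the construction line uncommented, which is the minimal fix described above:
int index = 0;
for (int i = 0; i < numOfCol; i++) {
    for (int j = 0; j < numOfRow; j++) {
        sparkArray[index] = new Spark(); // was commented out: each slot must hold a real Spark before its bounds are touched
        sparkArray[index].bounds.x = 60 + spacingCol + (i * spacingCol);
        sparkArray[index].bounds.y = (float) (spacingRow + ((j - 0.5) * spacingRow));
        index++;
    }
}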

Conversion of Latitude/Longitude into image coordinates (Pixel coordinates) on a simple cylindrical projection

I have taken an image from Google Earth whose latitude/longitude is known at all four corners. I am capturing latitudes/longitudes using a GPS sensor, and I have to convert these captured latitudes/longitudes to image coordinates (pixel coordinates) using Java. I will use the image coordinates to simulate a vehicle moving on a static map (the image taken from Google Earth).
I found this formula and tried to implement it
Determine the left-most longitude in your 1653x1012 image (X)
Determine the east-most longitude in your 1653x1012 image (Y)
Determine Longitude-Diff (Z = Y - X)
Determine north-most latitude in your 1653x1012 image (A)
Determine south-most latitude in your 1653x1012 image (B)
Determine Latitude-Diff (C = A - B)
Given a Latitude and Longitude, to determine which pixel they clicked on:
J = Input Longitude
K = Input Latitude
Calculate X-pixel
XPixel = CInt(((Y - J) / CDbl(Z)) * 1653)
Calculate Y-pixel
YPixel = CInt(((A - K) / CDbl(C)) * 1012)
This is the code I used.
import java.awt.geom.Point2D;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.util.ArrayList;
import java.util.List;
public class LatLongService {
private static LatLongService latLangService;
private BufferedReader reader = null;
private String st;
private LatLongService() {
try {
reader = new BufferedReader(new FileReader(new File(
"resources/GPS_lat_long_2.txt")));
} catch (Exception e) {
e.printStackTrace();
}
}
public static LatLongService getInstance() {
if (latLangService == null)
latLangService = new LatLongService();
return latLangService;
}
public List<Point2D> readLatLongList() {
List<Point2D> pointList = new ArrayList<Point2D>();
StringBuffer xStr;
StringBuffer yStr = new StringBuffer();
try {
while ((st = reader.readLine()) != null) {
xStr = new StringBuffer(st.substring(0, st.indexOf(',')));
yStr = new StringBuffer(st.substring(st.indexOf(',') + 2,
st.length()));
Point2D pt = new Point2D.Double(
new Double(xStr.toString()).doubleValue(), new Double(
yStr.toString()).doubleValue());
pointList.add(pt);
}
} catch (Exception e) {
e.printStackTrace();
try {
reader.close();
} catch (Exception e2) {
e.printStackTrace();
}
}
return pointList;
}
public List<Point2D> convertLatLongToCoord(List<Point2D> coordinate) {
List<Point2D> latLong = new ArrayList<Point2D>();
double westMostLong = -79.974642;
double eastMostLong = -79.971244;
double longDiff = eastMostLong - westMostLong; // (rightmost_longitude -
// leftmost_longitude)
double northMostLat = 39.647556;
double southMostLat = 39.644675;
double latDiff = northMostLat - southMostLat; // (topmost_latitude -
// bottommost_latitude)
for (Point2D coord : coordinate) {
double j = coord.getY();
double k = coord.getX();
double XPixel = (((eastMostLong - j) / longDiff) * 1653);
double YPixel = (((northMostLat - k) / latDiff) * 1012);
Point2D actualCoord = new Point2D.Double(XPixel, YPixel);
latLong.add(actualCoord);
}
return latLong;
}
}
Some of the GPS lat/long I got from GPS sensors
Input Latitude Input Longitude
(39.64581, -79.97168)
(39.64651, -79.97275)
(39.646915, -79.97342)
(39.646538, -79.97279)
(Image: http://i59.tinypic.com/nbqkk3.png)
The red line in the picture shows the path followed when GPS coordinates were taken by sensor.
However, when I use this formula to convert the lat/long coordinates to pixel coordinates, the results are not consistent, as you can see in the output below:
Image X Image Y
(212.0977045, 613.3120444)
(732.6127134, 367.4251996)
(1058.542672, 225.1620965)
(752.0712184, 357.5897258)
The variation in the X, Y (pixel) coordinates is too large. So when I try to move a vehicle based on the pixel coordinates, the vehicle does not follow the red line, or even stay near it. The vehicle moves either above or below the red line, but not on it.
For smooth movement of the vehicle based on pixel coordinates, ideally I expect the conversion from lat/long to image coordinates to be something like this:
Required Image X Required Image Y
(1290, 409)
(1289, 409)
(1288, 409)
(1287, 409)
But I am getting this
Image X Image Y
(212.0977045, 613.3120444)
(732.6127134, 367.4251996)
(1058.542672, 225.1620965)
(752.0712184, 357.5897258)
I hope I am able to convey my problem.
Latitude and Longitude are not distances.
http://geography.about.com/cs/latitudelongitude/a/latlong.htm
I recently worked on an Arduino project that was using GPS. I followed the minigeo API approach, which converts latitude and longitude into northing and easting (UTM).
Using the library at the link below you can do this conversion:
http://www.ibm.com/developerworks/library/j-coordconvert/
Then get the maximum easting and northing and calculate the scale:
private synchronized void scale() {
int w = 800;
int h = 600;
this.scale = Math.min(
w / (maxEasting - minEasting),
h / (maxNorthing - minNorthing));
oEasting = minEasting;
oNorthing = minNorthing;
}
Then convert to X and Y:
private int applyScale(double km) {
return (int) (km * scale);
}
private int convertX(double easting) {
return applyScale(easting - oEasting);
}
private int convertY(double northing, int height) {
return 600/*height*/ - applyScale(northing - oNorthing);
}
Source:Minigeo
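Putting those pieces together, here is a minimal, self-contained sketch of the easting/northing-to-pixel flow; the min/max easting and northing values are assumed to have already been computed from the converted GPS points (the class and field names are hypothetical):
public class UtmToPixel {
    private final int width = 800, height = 600; // target image size
    private final double scale, oEasting, oNorthing;

    public UtmToPixel(double minEasting, double maxEasting,
                      double minNorthing, double maxNorthing) {
        // Same idea as scale() above: fit the UTM extent into the image.
        scale = Math.min(width / (maxEasting - minEasting),
                         height / (maxNorthing - minNorthing));
        oEasting = minEasting;
        oNorthing = minNorthing;
    }

    public int convertX(double easting) {
        return (int) ((easting - oEasting) * scale);
    }

    public int convertY(double northing) {
        // Flip the axis so that north ends up at the top of the image.
        return height - (int) ((northing - oNorthing) * scale);
    }
}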
Here is code that compiles and answers your question as follows:
[1] (39.64581,-79.97168) -> 102,363
[2] (39.64651,-79.97275) -> 354,217
[3] (39.646915,-79.97342) -> 512,133
[4] (39.646538,-79.97279) -> 363,212
[5] (39.646458,-79.97264) -> 328,228
You might have interchanged the x-y coordinates. In this case, x == longitude, y == latitude.
import java.util.*;
import java.awt.geom.*;
public class LatLong {
private int imageW, imageH;
private final static double west = -79.974642, north = 39.647556,
east = -79.971244, south = 39.644675;
public LatLong (int w, int h) {
imageW = w;
imageH = h;
}
public List<Point2D> convertLatLongToCoord (List<Point2D> coordinate) {
List<Point2D> latLong = new ArrayList<Point2D>();
for (Point2D coord : coordinate) {
double x = coord.getY(), px = imageW * (x-east) / (west-east),
y = coord.getX(), py = imageH * (y-north)/(south-north);
latLong.add (new Point2D.Double(px,py));
}
return latLong;
}
public static void main (String[] args) {
double[] latit = {39.64581, 39.64651, 39.646915, 39.646538, 39.646458},
longit = {-79.97168, -79.97275, -79.97342, -79.97279, -79.97264};
List<Point2D> pointList = new ArrayList<Point2D>();
for (int i = 0 ; i < latit.length ; i++)
pointList.add (new Point2D.Double(latit[i], longit[i]));
List<Point2D> pixels = new LatLong (800,600).convertLatLongToCoord (pointList);
for (int i = 0 ; i < latit.length ; i++)
System.out.println ("[" + (i+1) + "]\t(" + latit[i] + "," + longit[i] + ") -> " +
(int) (pixels.get(i).getX()) + "," + (int) (pixels.get(i).getY()));
}}

Problems with OpenCV ellipse detection in Java

We tried to convert the C++ code from "Detection of coins (and fit ellipses) on an image" into Java. When we start the program with the parameters
2 PathToThePicture
it crashes with this error:
OpenCV Error: Bad argument (Array should be CvMat or IplImage) in cvGetSize, file
..\..\..\..\opencv\modules\core\src\array.cpp, line 1238
Exception in thread "main" java.lang.RuntimeException: ..\..\..\..\opencv\modules
\core\src\array.cpp:1238: error: (-5) Array should be CvMat or IplImage in function
cvGetSize
at com.googlecode.javacv.cpp.opencv_core.cvGetSize(Native Method)
at DetectEllipse.main(DetectEllipse.java:65)
Here is the converted Java-Code:
import static com.googlecode.javacv.cpp.opencv_core.CV_FILLED;
import static com.googlecode.javacv.cpp.opencv_core.CV_RGB;
import static com.googlecode.javacv.cpp.opencv_core.CV_WHOLE_SEQ;
import static com.googlecode.javacv.cpp.opencv_core.cvCreateImage;
import static com.googlecode.javacv.cpp.opencv_core.cvCreateMemStorage;
import static com.googlecode.javacv.cpp.opencv_core.cvDrawContours;
import static com.googlecode.javacv.cpp.opencv_core.cvGetSize;
import static com.googlecode.javacv.cpp.opencv_core.cvPoint;
import static com.googlecode.javacv.cpp.opencv_core.cvScalar;
import static com.googlecode.javacv.cpp.opencv_core.cvXorS;
import static com.googlecode.javacv.cpp.opencv_core.cvZero;
import static com.googlecode.javacv.cpp.opencv_highgui.cvLoadImage;
import static com.googlecode.javacv.cpp.opencv_highgui.cvSaveImage;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_CHAIN_APPROX_SIMPLE;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_RETR_CCOMP;
import static com.googlecode.javacv.cpp.opencv_imgproc.CV_THRESH_BINARY;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvContourArea;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvDilate;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvFindContours;
import static com.googlecode.javacv.cpp.opencv_imgproc.cvThreshold;
import com.googlecode.javacpp.Loader;
import com.googlecode.javacv.cpp.opencv_core.CvContour;
import com.googlecode.javacv.cpp.opencv_core.CvMemStorage;
import com.googlecode.javacv.cpp.opencv_core.CvRect;
import com.googlecode.javacv.cpp.opencv_core.CvScalar;
import com.googlecode.javacv.cpp.opencv_core.CvSeq;
import com.googlecode.javacv.cpp.opencv_core.IplImage;
public class DetectEllipse{
public static final double M_PI = 3.14159265358979323846;
public static final double MIN_AREA = 100.00;
public static final double MAX_TOL = 100.00;
private static int[] array = { 0 };
//
// We need this to be high enough to get rid of things that are too small
// too
// have a definite shape. Otherwise, they will end up as ellipse false
// positives.
//
//
// One way to tell if an object is an ellipse is to look at the relationship
// of its area to its dimensions. If its actual occupied area can be
// estimated
// using the well-known area formula Area = PI*A*B, then it has a good
// chance of
// being an ellipse.
//
// This value is the maximum permissible error between actual and estimated
// area.
//
public static void main(String[] args) {
IplImage src = cvLoadImage(args[1], 0);
// the first command line parameter must be file name of binary
// (black-n-white) image
if (Integer.parseInt(args[0]) == 2) {
IplImage dst = cvCreateImage(cvGetSize(src), 8, 3);
CvMemStorage storage = cvCreateMemStorage(0);
CvSeq contour = new CvContour();
// maybe: = new CvSeq(0)
cvThreshold(src, src, 1, 255, CV_THRESH_BINARY);
//
// Invert the image such that white is foreground, black is
// background.
// Dilate to get rid of noise.
//
cvXorS(src, cvScalar(255, 0, 0, 0), src, null);
cvDilate(src, src, null, 2);
cvFindContours(src, storage, contour,
Loader.sizeof(CvContour.class), CV_RETR_CCOMP,
CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
cvZero(dst);
for (; contour.flags() != 0; contour = contour.h_next()) {
// if not working: use contour.isNull()
double actual_area = Math.abs(cvContourArea(contour,
CV_WHOLE_SEQ, 0));
if (actual_area < MIN_AREA)
continue;
//
// FIXME:
// Assuming the axes of the ellipse are vertical/perpendicular.
//
CvRect rect = ((CvContour) contour).rect();
int A = rect.width() / 2;
int B = rect.height() / 2;
double estimated_area = Math.PI * A * B;
double error = Math.abs(actual_area - estimated_area);
if (error > MAX_TOL)
continue;
System.out.printf("center x: %d y: %d A: %d B: %d\n", rect.x()
+ A, rect.y() + B, A, B);
CvScalar color = CV_RGB(
tangible.RandomNumbers.nextNumber() % 255,
tangible.RandomNumbers.nextNumber() % 255,
tangible.RandomNumbers.nextNumber() % 255);
cvDrawContours(dst, contour, color, color, -1, CV_FILLED, 8,
cvPoint(0, 0));
}
cvSaveImage("coins.png", dst, array);
}
}
}
Can anyone help us? Thanks in advance!
Probably cvGetSize(src) is causing the crash. That happens when src is null.
In other words, the image was not loaded/found (maybe the path is wrong?).
In the future, you can avoid such problems by testing if the image was loaded successfully:
IplImage src = cvLoadImage(args[1], 0);
if (src == null)
{
System.out.println("!!! Unable to load image: " + args[1]);
return;
}
