I want to create a Google-Maps-esque object from an image. The images will be architects' building plans scanned in, probably to something like a JPEG. In a native Android app I want an object I can drop pins on and then click the pins for an info bubble, etc.
I know Google provides functionality that lets you upload and create floor plans, but this isn't really what I want. The end result would be perfect, but I can use a scanner to create the plans myself.
What sort of technology can be recommended, or do I just implement a coordinate system and overlay it?
Google Maps uses a tiling system because the map can be very big. For example, a Morton curve can recursively subdivide the surface; it's similar to a quadtree. In relational databases there are spatial indexes, usually an R-tree or a Hilbert curve. If you can use a database, you can use the point datatype, or you can write your own R-tree. When you have many overlapping pins, a quadtree may be the better choice.
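The Morton (Z-order) indexing mentioned above can be sketched in a few lines. This is a minimal bit-interleaving example for 16-bit tile coordinates, not tied to any particular map library:

```java
// Sketch: Morton (Z-order) encoding of 2D tile coordinates.
// Interleaving the bits of x and y yields a single index that keeps
// nearby tiles numerically close, the property tiling/quadtree schemes exploit.
public class MortonCode {
    // Spread the lower 16 bits of v so there is a zero bit between each bit.
    static long part1By1(long v) {
        v &= 0xFFFF;
        v = (v | (v << 8)) & 0x00FF00FF;
        v = (v | (v << 4)) & 0x0F0F0F0F;
        v = (v | (v << 2)) & 0x33333333;
        v = (v | (v << 1)) & 0x55555555;
        return v;
    }

    public static long encode(int x, int y) {
        return part1By1(x) | (part1By1(y) << 1);
    }
}
```

Sorting pins by this index groups spatially close pins together, which is also how a quadtree cell maps to a contiguous index range.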
Related
is user inside volume OpenGL ES Java Android
I have an OpenGL renderer that shows airspaces.
I need to calculate whether my location, already converted to a float[3], is inside any of many volumes.
I also want to calculate the distance to the nearest volume.
The volumes are arbitrary shapes extruded along the z axis.
What is the most efficient algorithm to do that?
I don't want to use an external library.
What you have here is a nearest neighbor search problem. Since your meshes are constant and won't change, you should probably use a space partitioning algorithm. It's a big topic but, in short, you generally need to use a tree structure and sort all the objects into the various tree nodes. You'll need to pre-calculate the tree itself. There are plenty of books and tutorials on the net about space partitioning, and you could also look at the source code of, for example, id Software products like Doom, Quake, etc. to see how these algorithms (BSP, at least) are used. The efficiency of each algorithm depends on what you have and what you need. Using BSP trees, for example, you'll have the objects sorted from nearest to farthest, so you can quickly get the one you need.
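As a concrete starting point before any tree acceleration: because the volumes are extruded along z, the inside test reduces to a 2D point-in-polygon check plus a z-range check, and the horizontal distance to a volume reduces to point-to-segment distances over its footprint edges. A minimal sketch, assuming each volume is stored as a footprint polygon plus a [zMin, zMax] range (class and field names are illustrative):

```java
// Sketch: containment and outline distance for a shape extruded along z.
public class ExtrudedVolume {
    final float[] xs, ys;      // footprint polygon vertices
    final float zMin, zMax;    // extrusion range

    public ExtrudedVolume(float[] xs, float[] ys, float zMin, float zMax) {
        this.xs = xs; this.ys = ys; this.zMin = zMin; this.zMax = zMax;
    }

    // Even-odd rule: count edge crossings of a ray shot in the +x direction.
    public boolean contains(float px, float py, float pz) {
        if (pz < zMin || pz > zMax) return false;
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean crosses = (ys[i] > py) != (ys[j] > py);
            if (crosses && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i]) {
                inside = !inside;
            }
        }
        return inside;
    }

    // Horizontal distance from (px, py) to the footprint outline.
    public float outlineDistance(float px, float py) {
        float best = Float.MAX_VALUE;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            best = Math.min(best, segmentDistance(px, py, xs[j], ys[j], xs[i], ys[i]));
        }
        return best;
    }

    static float segmentDistance(float px, float py, float ax, float ay, float bx, float by) {
        float dx = bx - ax, dy = by - ay;
        float len2 = dx * dx + dy * dy;
        float t = len2 == 0 ? 0 : ((px - ax) * dx + (py - ay) * dy) / len2;
        t = Math.max(0, Math.min(1, t));
        float cx = ax + t * dx, cy = ay + t * dy;
        return (float) Math.hypot(px - cx, py - cy);
    }
}
```

A BSP tree or quadtree over the footprints then only decides which volumes are worth running these per-edge tests on.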
I am creating a pseudo-turn-based online strategy browser game where many people play in the same world over a long period (months). For this, I want to have a map of 64000x64000 tiles = 4 billion tiles. I need about 6-10 bytes of data per tile, making a total of around 30GB of data for storing the map.
Each tile should have properties such as type (water, grass, desert, mountain), resource (wood, cows, gold) and playerBuilt (road, building).
The client will only ever need access to about 100x100 tiles at the same time.
I have the map handling on the client side under control. The problem I'm faced with is how to store, retrieve and modify information from this map on the server side.
Required functionality:
Create, store, and modify 64000x64000 tilemap.
Show 100x100 part of the map to the client.
Make modifications on the map such as roads, buildings, and depleted resources.
What I have considered so far:
Procedural generation: procedurally generating whichever part of the map is needed on the fly, making sure that, given the same seed, it always generates the same map. The main problem I have with this is that there will be modifications to the map during the game. Note: less than 1% of the tiles would be modified during the game, so it could be possible to store the modifications with their coordinates in a separate structure and load them on top of the procedural generation.
Databases: Generating the map at the start of the game and storing it in a database. A friend advised me against this for such a huge tile map and told me that I'd probably want to store it in memory instead.
Keeping it all in memory on the server side: Keeping it in memory in a data structure. Seems like a nice way to do it if the map was smaller but for 4 billion tiles that would be a lot to keep in memory.
I was planning on using java+mysql for back-end for this project. I'm still in early phases and open to change technology if needed.
My question is: Which of the three approaches above seem viable and/or are there other ways to do it which I have not considered?
Depends on:
how much RAM you have (or your players/users have)
whether most of the tile map is empty (sparse); the opposite is dense
whether there is a default terrain (like empty, or water)
If sparse, use a hashmap instead of a 2D array.
If dense it will be much more challenging, and you may need a database or some special data structures plus a cache.
You may detect hot zones and keep them in memory for a while; dead zones (no players there, no activity...) can be stored in the database and read on demand.
You may also load data in several passes: first just the terrain, then other objects... each layer could be stored in a different way. For example, the terrain could be a Perlin-noise-generated layer plus another layer that can be modified.
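The sparse case above combines naturally with the asker's procedural-generation option: keep the base terrain a pure function of (seed, x, y) and store only the modified tiles in a hashmap. A minimal sketch (the hash-based stand-in generator and all names are illustrative; a real map would use Perlin noise or similar):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: procedural base terrain + sparse modification overlay.
// The base terrain is a pure function of (seed, x, y); the <1% of tiles
// that players change live in a hash map keyed by packed coordinates.
public class TileMap {
    public enum Terrain { WATER, GRASS, DESERT, MOUNTAIN }

    private final long seed;
    private final Map<Long, Terrain> modified = new HashMap<>();

    public TileMap(long seed) { this.seed = seed; }

    private static long key(int x, int y) {
        return ((long) x << 32) | (y & 0xFFFFFFFFL);
    }

    // Deterministic stand-in for real terrain generation: the same seed
    // and coordinates always yield the same tile, so nothing is stored.
    public Terrain baseTerrain(int x, int y) {
        long h = seed ^ (x * 0x9E3779B97F4A7C15L) ^ (y * 0xC2B2AE3D27D4EB4FL);
        h ^= h >>> 33;
        h *= 0xFF51AFD7ED558CCDL;
        h ^= h >>> 33;
        return Terrain.values()[(int) Math.floorMod(h, 4L)];
    }

    // Overlay lookup: a modification wins over the generated base tile.
    public Terrain get(int x, int y) {
        Terrain t = modified.get(key(x, y));
        return t != null ? t : baseTerrain(x, y);
    }

    public void set(int x, int y, Terrain t) {
        modified.put(key(x, y), t);
    }
}
```

The `modified` map is also the only thing that needs to be persisted to the database, which keeps the stored state far below 30GB.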
I am creating my own ray-tracer for fun and learning. One of the features I want to add is the ability to use SVG files as textures directly.
The simple and straightforward way to do this would be to render the SVG to another, more lookup-friendly raster format first and feed that as a regular texture to be used during ray tracing. However, I don't want to do that.
Instead I want to actually "trace" the SVG itself directly. So I would like to know: are there any SVG libraries for Java with an API that would lend itself to being used in this manner? It would need some call that takes a float point2D[] as input and returns a float colorRGBA[] as output.
If not what would be the best approach to do this?
I don't know much about Java libraries, but most likely they do not suit you too well. The main reasons are:
Most libraries are meant to render pictures and are unsuitable for random lookup.
More importantly, SVG texture data does not filter naturally all that well. We know how to build good mipmaps of raster images, and filtering them is easy, which reduces the pressure on your ray tracer's supersampling.
Then there is the complexity of SVG itself: something like SVG filters (blur) will be prohibitively expensive to calculate in a random-sampling context.
Now, if we sidestep the third point, which is indeed quite a hard problem as it really requires you to do rasterization or something else out of the ordinary, there are algorithmic options:
You can actually ray trace the SVG in 2D. This would probably work out well for you, as you're writing a ray tracer anyway. All you need to do is shoot rays inside the 2D model and see whether your sample point is inside a shape or not: shoot a ray in an arbitrary direction and count intersections. Put simply, the ray will intersect the shape's outline an odd number of times if you're inside the shape.
Image 1: Intersection testing. Glancing hits must be excluded (most tracers consider those a miss anyway for this reason, even in 3D).
Pairing this tracing with a BSP tree or a quadtree should make it sufficiently performant. All you need is to implement shader support similar to your standard ray tracer's, and you can handle alpha and gradients plus some of the filters like noise. But still no luck with blurs without a lot of sampling.
You can also use a raster texture as a precomputed mipmap result, and only ask a standard library to render a small view box once you reach a mipmap level that does not exist yet. This would naturally work better for you, and by caching the data you can reduce the number of render calls. Without the caching it might be too expensive to use. But you can try, if the library supports clipping your SVG. This may not be as easy as it sounds.
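The lazy-mipmap caching idea can be sketched independently of any SVG library: tiles keyed by (level, x, y) are rendered on first access through a callback, which in practice would rasterize a clipped SVG view box, and then served from a cache. Everything here is an illustrative skeleton, not a real renderer:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch: render-on-demand tile cache for lazily built mipmap levels.
// The renderer callback stands in for "rasterize this SVG view box".
public class LazyTileCache {
    public static final class TileKey {
        final int level, x, y;
        TileKey(int level, int x, int y) { this.level = level; this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof TileKey)) return false;
            TileKey k = (TileKey) o;
            return level == k.level && x == k.x && y == k.y;
        }
        @Override public int hashCode() { return (level * 31 + x) * 31 + y; }
    }

    private final Map<TileKey, int[]> cache = new HashMap<>();
    private final Function<TileKey, int[]> renderer;
    private int renderCalls = 0;

    public LazyTileCache(Function<TileKey, int[]> renderer) { this.renderer = renderer; }

    // Render the tile's pixels on first use; afterwards serve from cache.
    public int[] tile(int level, int x, int y) {
        return cache.computeIfAbsent(new TileKey(level, x, y), k -> {
            renderCalls++;
            return renderer.apply(k);
        });
    }

    public int renderCallCount() { return renderCalls; }
}
```

Repeated texture lookups in the same region then stop hitting the expensive SVG renderer entirely.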
You can also use your 3D ray tracer for this, shooting rays at the SVG head-on instead. All you need to do is triangulate the SVG and then reuse your normal tracing logic. How to describe Bézier curves as triangles is described in this NVIDIA publication, so your changes might be minimal.
Hope this helps, even if it's not a "use this library" answer. There is a reason why you do not see this implemented very often.
Is it possible to analyse an image and determine the position of a car inside it?
If so, how would you approach this problem?
I'm working with a relatively small dataset (50-100 images), and most of them will look similar to the following examples:
I'm mostly interested in only detecting vertical coordinates, not the actual shape of the car. For example, this is the area I want to highlight as my final output:
You could try OpenCV, which has an object detection API, but you would need to "train" it by supplying it with a large set of images that contain cars.
http://docs.opencv.org/modules/objdetect/doc/objdetect.html
http://robocv.blogspot.co.uk/2012/02/real-time-object-detection-in-opencv.html
http://blog.davidjbarnes.com/2010/04/opencv-haartraining-object-detection.html
The second link above shows an example of detecting and creating a bounding box around the object; you could use that as a basis for what you want to do.
http://www.behance.net/gallery/Vehicle-Detection-Tracking-and-Counting/4057777
Various papers:
http://cbcl.mit.edu/publications/theses/thesis-masters-leung.pdf
http://cseweb.ucsd.edu/classes/wi08/cse190-a/reports/scheung.pdf
Various image databases:
http://cogcomp.cs.illinois.edu/Data/Car/
http://homepages.inf.ed.ac.uk/rbf/CVonline/Imagedbase.htm
http://cbcl.mit.edu/software-datasets/CarData.html
1) Your first and second images have two cars in them.
2) If you only have 50-100 images, I can almost guarantee that classifying them all by hand will be faster than writing or adapting an algorithm to recognize cars and deliver coordinates.
3) If you're determined to do this with computer vision, I'd recommend OpenCV. Tutorial here: http://docs.opencv.org/doc/tutorials/tutorials.html
You can use OpenCV's latent SVM detector to detect the car and plot a bounding box around it:
http://docs.opencv.org/modules/objdetect/doc/latent_svm.html
There is no need to train a new model using Haar cascades, as there is already a trained model for cars:
https://github.com/Itseez/opencv_extra/tree/master/testdata/cv/latentsvmdetector/models_VOC2007
This is a supervised machine learning problem. You will need to use an API that features learning algorithms, as colinsmith suggested, or do some research and write one of your own. Python is pretty good for machine learning (it's what I use, personally) and has some nice tools like scikit-learn: http://scikit-learn.org/stable/
I'd suggest you look into Haar classifiers. Since you mentioned you have a set of 50-100 images, you can use it to build a training dataset for the classifier and then classify your images.
You can also look into the SURF and SIFT algorithms for this problem.
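Since the asker only needs vertical coordinates, here is a library-free baseline that sidesteps training entirely: sum the horizontal-gradient energy per row and report the band of rows that clears a threshold, on the assumption that a car against a fairly uniform road concentrates edges in its rows. This is a rough heuristic sketch; the threshold factor is an illustrative guess, not a tuned value, and it is no substitute for a trained detector:

```java
// Sketch: find the vertical band of high edge energy in a grayscale image.
public class VerticalBand {
    // gray: grayscale image as [row][col] intensities 0..255.
    // Returns {topRow, bottomRow} of the band whose per-row edge energy
    // exceeds thresholdFactor times the mean row energy, or null if none.
    public static int[] find(int[][] gray, double thresholdFactor) {
        int h = gray.length, w = gray[0].length;
        double[] energy = new double[h];
        double mean = 0;
        for (int y = 0; y < h; y++) {
            // Sum of absolute horizontal differences along this row.
            for (int x = 1; x < w; x++) {
                energy[y] += Math.abs(gray[y][x] - gray[y][x - 1]);
            }
            mean += energy[y] / h;
        }
        int top = -1, bottom = -1;
        for (int y = 0; y < h; y++) {
            if (energy[y] > mean * thresholdFactor) {
                if (top < 0) top = y;
                bottom = y;
            }
        }
        return top < 0 ? null : new int[] { top, bottom };
    }
}
```

With only 50-100 images, eyeballing the results of something this simple is cheap, and it gives a sanity baseline to compare any trained detector against.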
I'm developing an indoor navigation program for Android, and I'm stuck right at the beginning:
How do you represent a map in Java?
I would prefer a way that allows me to apply Dijkstra's algorithm easily.
Note: I need the program to know the size of each room and where the entrances and exits are.
Edit: I'm looking for an object to use in the BL (business logic), not the UI.
Android has a built-in GUI element called a MapView that you can add to a screen in your application. It is a Google map and will require you to obtain an API key for your application. You can add overlays to it and locate points on the map based on latitude and longitude (I think it supports other methods as well), and these points could be used for Dijkstra's algorithm.
Hope this helps, good luck.
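For the business-logic side of the question, a plain adjacency-map graph is enough to run Dijkstra: rooms become nodes and doorways become weighted edges, and room sizes and entrance positions can hang off the nodes as extra fields. A minimal sketch with illustrative names:

```java
import java.util.AbstractMap;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

// Sketch: rooms as graph nodes, doorways as weighted edges, plain Dijkstra.
public class IndoorMap {
    private final Map<String, Map<String, Double>> adj = new HashMap<>();

    // A doorway connects two rooms; weight is the walking distance.
    public void addDoor(String roomA, String roomB, double distance) {
        adj.computeIfAbsent(roomA, k -> new HashMap<>()).put(roomB, distance);
        adj.computeIfAbsent(roomB, k -> new HashMap<>()).put(roomA, distance);
    }

    // Classic Dijkstra; returns the shortest walking distance, or infinity
    // if the rooms are not connected.
    public double shortestDistance(String from, String to) {
        Map<String, Double> dist = new HashMap<>();
        PriorityQueue<Map.Entry<String, Double>> pq =
                new PriorityQueue<>(Map.Entry.comparingByValue());
        dist.put(from, 0.0);
        pq.add(new AbstractMap.SimpleEntry<>(from, 0.0));
        while (!pq.isEmpty()) {
            Map.Entry<String, Double> cur = pq.poll();
            String room = cur.getKey();
            double d = cur.getValue();
            // Skip stale queue entries superseded by a shorter path.
            if (d > dist.getOrDefault(room, Double.POSITIVE_INFINITY)) continue;
            if (room.equals(to)) return d;
            for (Map.Entry<String, Double> e
                    : adj.getOrDefault(room, Collections.<String, Double>emptyMap()).entrySet()) {
                double nd = d + e.getValue();
                if (nd < dist.getOrDefault(e.getKey(), Double.POSITIVE_INFINITY)) {
                    dist.put(e.getKey(), nd);
                    pq.add(new AbstractMap.SimpleEntry<>(e.getKey(), nd));
                }
            }
        }
        return Double.POSITIVE_INFINITY;
    }
}
```

Keeping this graph separate from any UI component is exactly what the question's edit asks for: the MapView (or a custom View) only draws whatever this model reports.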
I think what you are describing is an extended topological map (where you have an estimate of each node's absolute location). So you basically need a component that can draw a graph with locations attached to the nodes. You might also want the ability to draw additional things (like heat maps for Wi-Fi connectivity, Bluetooth devices, or whatever you are using to estimate your position and gather information about the environment). You are also dealing with the SLAM problem.
I don't know of any existing component that can do the visual task for that, but maybe you'll find something with Google. I wouldn't recommend it, however. I would just create my own component (using a TextView and drawing my stuff on that). It is much more flexible, not that much of an effort, and you can choose the functionality and data format, which can be very helpful in such cases...
Once you have all the rest running (which is a lot!), you can still decide to look for some fancy component that can zoom, or that even integrates Google Maps.