Detecting if multiple bent lines intersect - java

I'm in the process of making a swimlane diagram but can't come up with a good algorithm to automatically lay out the lines that connect the nodes in the diagram. What I essentially want is this.
However, I don't have any protection against lines overlapping or intersecting right now and it sometimes gets very messy.
Does anyone know a way to detect if a line will intersect ANY of the lines already drawn?
One idea that I've come up with is to store the paths in an array or table and check the entire table every time a new line is slated to be drawn, but that does not seem efficient.
I'm doing this in JavaScript and Java through GWT, so maybe there is an easy way to solve this using one of the tools provided by these languages?
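For reference, the brute-force check I have in mind would look roughly like this (just a sketch; the Segment class and the list of already-drawn segments are stand-ins of my own, nothing from GWT). I know java.awt.geom.Line2D.linesIntersect does this test, but java.awt isn't available in GWT client code, so it would have to be a hand-rolled orientation check:

    import java.util.ArrayList;
    import java.util.List;

    // Brute-force idea: keep every drawn segment and test each new segment
    // against all of them before drawing it.
    public class DrawnSegments {

        public static class Segment {
            final double x1, y1, x2, y2;
            Segment(double x1, double y1, double x2, double y2) {
                this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
            }
        }

        private final List<Segment> drawn = new ArrayList<Segment>();

        public boolean intersectsAny(Segment s) {
            for (Segment t : drawn) {
                if (segmentsIntersect(s, t)) return true;
            }
            return false;
        }

        public void add(Segment s) {
            drawn.add(s);
        }

        // Sign of the cross product of (b - a) and (c - a).
        private static int orientation(double ax, double ay, double bx, double by,
                                       double cx, double cy) {
            double cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
            return cross > 0 ? 1 : cross < 0 ? -1 : 0;
        }

        // Proper intersection test: the endpoints of each segment must lie on
        // opposite sides of the other. Collinear overlaps are ignored here.
        private static boolean segmentsIntersect(Segment a, Segment b) {
            int o1 = orientation(a.x1, a.y1, a.x2, a.y2, b.x1, b.y1);
            int o2 = orientation(a.x1, a.y1, a.x2, a.y2, b.x2, b.y2);
            int o3 = orientation(b.x1, b.y1, b.x2, b.y2, a.x1, a.y1);
            int o4 = orientation(b.x1, b.y1, b.x2, b.y2, a.x2, a.y2);
            return o1 != o2 && o3 != o4;
        }
    }

Each bent connector would contribute one Segment per straight piece, so the check is O(n) per new segment over everything drawn so far, which is exactly the inefficiency I'm worried about.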

If your real goal is to minimize line intersections, there are several algorithms that try to accomplish this in diagrams. Check this link for an example. There are also algorithms used for auto-routing in electronic design automation that are applied to these kinds of diagrams, such as the Lee algorithm or the A* algorithm.
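To give an idea of why the Lee algorithm is popular here: it is just a breadth-first wave expansion over a routing grid, so it always finds a shortest rectilinear route around cells that are already occupied. A stripped-down sketch (the grid encoding and method names are my own, not from any particular library):

    import java.util.ArrayDeque;
    import java.util.Arrays;

    // Lee-style maze router: BFS over a grid where blocked[r][c] marks cells
    // already occupied by nodes or existing connectors. Returns the length of
    // the shortest route, or -1 if the target cannot be reached.
    public final class LeeRouter {

        private static final int[][] STEPS = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};

        public static int route(boolean[][] blocked, int sr, int sc, int tr, int tc) {
            int rows = blocked.length, cols = blocked[0].length;
            int[][] dist = new int[rows][cols];
            for (int[] row : dist) Arrays.fill(row, -1);

            ArrayDeque<int[]> queue = new ArrayDeque<int[]>();
            dist[sr][sc] = 0;
            queue.add(new int[]{sr, sc});

            while (!queue.isEmpty()) {
                int[] cell = queue.poll();
                if (cell[0] == tr && cell[1] == tc) return dist[tr][tc];
                for (int[] step : STEPS) {
                    int r = cell[0] + step[0], c = cell[1] + step[1];
                    if (r >= 0 && r < rows && c >= 0 && c < cols
                            && !blocked[r][c] && dist[r][c] == -1) {
                        dist[r][c] = dist[cell[0]][cell[1]] + 1;
                        queue.add(new int[]{r, c});
                    }
                }
            }
            return -1;
        }
    }

Backtracking from the target along strictly decreasing distances yields the actual polyline; A* is the same loop with a priority queue ordered by distance plus a heuristic such as the Manhattan distance to the target.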
I don't know whether the tools you are using are flexible enough to implement these kinds of algorithms; usually you need to implement your own heuristic for the specific type of diagram, but I hope these links are enough to give you some good ideas.
Minimizing the number of line crossings in a graph is an NP-hard problem; check this link (about the crossing number) for more information.
Good luck.

Related

Efficient 2D Radial Gravity Layout (Java)

I'm after an efficient 2D mapping algorithm, and I've tried a number of implementations, but they all seem lacking. I'm hoping the stackoverflow world can help out with some pointers to existing, tried-n-tested algorithms I could learn from.
My goal is to display articles based on the genre of writing; for the prototype, I am using Philosophy, Programming, Politics and Poetry, since those are the only four styles of writing I have.
Each article is weighted based on each category, and the home view will have each category as a header in each corner. The articles are then laid out in word-cloud-like format, with "artificial gravity" placing each item as-near-as-possible to its main category (or between its main categories), without overlapping.
Currently, I am using an inefficient algorithm which stores arrays of rectangles to perform a hit-test-and-search every time an article is added to the view (with A* search patterns to find empty space to fill). By approximating a single destination for all articles of the same weight, and by using a round-robin queue to pick off articles from each pool, I can achieve fresh results (arrays are sorted by weight, then timestamp), with positioning-by-relevance ("artificial gravity").
However, using A* to blindly search seems really wasteful, even with heuristics to make each article check the points closest to its target marks first. I need a more efficient way to iterate over a 2D space.
I'm wondering if a Linked-List approach might work better; rather than go searching blindly in all directions for empty space, I can just iterate through connected nodes to ask each one if it has either a) nearby free space, or b) other connected nodes to ask (and always ask the closest node first).
If there are any better algorithms available, or critiques on my methods, any and all help would surely be appreciated.
I am using GWT Elemental + Java in this GUI, but any 2D mapping algorithm in any language will surely help.
[EDIT (request for more details)]: The main problem here is the amount of work each new addition performs; it produces noticeable glitches in the UI thread, especially when there is almost no space left, as I am searching many points in a given radius for enough free space to fit the article.
If I cut the algorithm off too soon, I get blank spots that could have been filled. If I let it run too long, the UI glitches pretty badly, and I'm sure users will hate it.
What is the fastest / most efficient way to store and modify collections of 2D space?
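For context, the placement loop I have now is essentially this shape (heavily simplified; java.awt.Rectangle is only for illustration, since in GWT client code it would be a plain x/y/width/height holder, and the ring-shaped probing here stands in for my A* search):

    import java.awt.Rectangle;
    import java.util.ArrayList;
    import java.util.List;

    // Simplified version of what I do now: keep the rectangles already placed
    // and probe candidate positions on growing rings around the target until
    // one fits without overlapping anything.
    final class RectanglePlacer {
        private final List<Rectangle> placed = new ArrayList<Rectangle>();

        private boolean overlapsAny(Rectangle candidate) {
            for (Rectangle r : placed) {
                if (r.intersects(candidate)) return true;
            }
            return false;
        }

        Rectangle place(int targetX, int targetY, int w, int h, int maxRadius) {
            for (int radius = 0; radius <= maxRadius; radius += 4) {
                for (int angle = 0; angle < 360; angle += 15) {
                    int x = targetX + (int) (radius * Math.cos(Math.toRadians(angle)));
                    int y = targetY + (int) (radius * Math.sin(Math.toRadians(angle)));
                    Rectangle candidate = new Rectangle(x, y, w, h);
                    if (!overlapsAny(candidate)) {
                        placed.add(candidate);
                        return candidate;
                    }
                }
            }
            return null; // ran out of search radius; this is where the UI stalls
        }
    }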
You haven't provided enough information to say what would make an algorithm "better." Faster? Produces layouts that are "nicer" by some metric for quality? Able to handle bigger data sources?
There is certainly nothing wrong with arrays, nor with A*. If they are giving acceptable results with the size of problem you are trying to solve, how can they be "wasteful?" Linked data structures are worthwhile only if they reduce cost of frequently needed operations.
If you sharpen the problem, you're more likely to get a useful answer.
At any rate, there is an enormous literature on "graph layout" and "graph drawing." Try searching on these terms. If you can represent your desired layout as a collection of nodes and edges, these might apply. Many are based on simulated spring systems, which seems akin to what you are doing.
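For what it's worth, the spring-system family described in the graph-drawing literature usually boils down to a loop like the one below (a bare-bones force-directed sketch, not tuned to your weighted-corners setup; the classes, constants, and edge representation are placeholders of mine):

    import java.util.List;

    // Bare-bones force-directed step: springs pull connected nodes together,
    // a pairwise repulsive force pushes all nodes apart. Repeat until stable.
    final class ForceNode {
        double x, y;   // position
        double fx, fy; // force accumulated during the current iteration
    }

    final class ForceLayout {
        static final double SPRING = 0.01;     // attraction along edges
        static final double REPULSION = 500.0; // pairwise repulsion strength
        static final double STEP = 0.85;       // damping applied to each move

        static void iterate(List<ForceNode> nodes, List<int[]> edges) {
            for (ForceNode n : nodes) { n.fx = 0; n.fy = 0; }

            // Repulsion between every pair of nodes (keeps items from overlapping).
            for (int i = 0; i < nodes.size(); i++) {
                for (int j = i + 1; j < nodes.size(); j++) {
                    ForceNode a = nodes.get(i), b = nodes.get(j);
                    double dx = a.x - b.x, dy = a.y - b.y;
                    double d2 = Math.max(dx * dx + dy * dy, 0.01);
                    double d = Math.sqrt(d2);
                    double f = REPULSION / d2;
                    a.fx += f * dx / d; a.fy += f * dy / d;
                    b.fx -= f * dx / d; b.fy -= f * dy / d;
                }
            }

            // Spring attraction along edges (each edge is a pair of node indices).
            for (int[] e : edges) {
                ForceNode a = nodes.get(e[0]), b = nodes.get(e[1]);
                double dx = b.x - a.x, dy = b.y - a.y;
                a.fx += SPRING * dx; a.fy += SPRING * dy;
                b.fx -= SPRING * dx; b.fy -= SPRING * dy;
            }

            // Move every node a damped step along its accumulated force.
            for (ForceNode n : nodes) {
                n.x += STEP * n.fx;
                n.y += STEP * n.fy;
            }
        }
    }

In your case the four category corners could act as fixed anchor nodes, with each article connected to them by springs whose stiffness reflects its category weights.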

How to approach writing algorithm from a complex research paper

I thought of writing a piece of software which does Alpha Compositing. I didn't want ready-made code off the internet, so I tried to find research papers and other sources to understand the mathematical algorithms, and then started to implement them.
But I got lost very quickly. So my question is:
How should I approach these papers to extract the necessary details in order to write an algorithm based on them? Is there a specific set of steps that works well?
Desired answer:
Read ...
Extract ...
Understand ...
Implement ...
Note: This question is not limited to alpha compositing, so a more generalised approach will be helpful. I have tagged Java and C++ because those are the languages I want to use for the image processing.
What have I done so far?
This is not a homework question, but it is of course better to say what I know. I have read the Wikipedia article on alpha compositing and a few closely related image-compositing research papers. But I'm stuck on the next step to take in order to go from understanding to implementation.
Wikipedia
Technical Memo, Image compositing
I'd recommend reading articles with complex formulas with a pencil and paper. Work through the math involved until you have a good grasp on it. Then, you'll be ready to code.
Start with identifying the steps needed to perform your algorithm on some image data. Include all of the steps from loading the image itself into memory all the way through the complex calculations that you may need to perform. Then structure that list into pseudocode. Once you have that, it should be rather easy to code up.
Write pseudocode. Ideally, the authors of the research papers would have done this, but often they don't. Write pseudocode in some simple language like Matlab or possibly Python, and hack away at writing a working implementation based on the pseudocode.
If you understand some parts of the algorithm but not others, then turn your pseudocode into real code for the parts you understand, and leave comments for the places you don't.
The section from The Pragmatic Programmer on "Tracer Bullets" basically describes this idea. You want to quickly hack together something that takes your data into some form of an output, and then iterate on the body of the code to get it to slowly resemble the algorithm you're trying to produce.
My answer is necessarily somewhat vague. There's no magic bullet for something like this.
Have you implemented any image processing algorithms? Maybe start with something a little simpler, like desaturation/color intensification, reversal (side to side and upside down), rotating, scaling, and compositing images through a mask.
Once you have those figured out, you will be in a very good position to do an alpha composite.
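To make the first of those exercises concrete, a desaturation pass is only a few lines with java.awt.image.BufferedImage (a quick sketch using the plain average of the channels; a luminance-weighted average is the more usual choice):

    import java.awt.image.BufferedImage;

    // Desaturate an image in place by replacing each pixel's RGB with its average.
    public final class Desaturate {
        public static void apply(BufferedImage img) {
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int argb = img.getRGB(x, y);
                    int a = (argb >>> 24) & 0xFF;
                    int r = (argb >>> 16) & 0xFF;
                    int g = (argb >>> 8) & 0xFF;
                    int b = argb & 0xFF;
                    int gray = (r + g + b) / 3;
                    img.setRGB(x, y, (a << 24) | (gray << 16) | (gray << 8) | gray);
                }
            }
        }
    }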
I agree that academic papers seem to go out of their way to make implementation details muddy and uncertain. I find that large amounts of simplification to what is written is needed to begin to perform a practical implementation. In their haste to be general, writers excessively parameterize every aspect. To build useful, reliable software, it is necessary to start with something simple which actually works so that it can be a framework to add features. To do that, it is necessary to throw away 80–90 percent of the academic generality. Often much can be done with a raft of symbolic constants, but abandoning generality (say for four and five dimensional images) doesn't really lose anything in practice.
My suggestion is to first write the algorithm in Matlab to make sure that you understand all the steps, and then try to implement it in C++ or Java.
To add to the good suggestions above, try to write your pseudocode in simple modules (object-oriented style) so that you have a deep understanding of each part of your code without losing the big picture. Writing everything in a procedural way is fine at the beginning, but as the code grows it can become hard to keep up with everything you are trying to do.
This example cites one of the seminal works on the topic: Compositing Digital Images by Porter & Duff. The class java.awt.AlphaComposite implements the same rules.
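To show how little code the paper's core rule needs once the notation is unwound, here is a sketch of the "over" operator for two ARGB pixels with straight (non-premultiplied) alpha. The packing matches the ints returned by BufferedImage.getRGB; this is only an illustration, not the java.awt.AlphaComposite implementation:

    // Porter-Duff "over" for one pixel with straight (non-premultiplied) alpha.
    // src is composited over dst; pixels are packed ARGB ints, as returned by
    // BufferedImage.getRGB.
    public final class PorterDuffOver {

        public static int over(int srcArgb, int dstArgb) {
            double as = ((srcArgb >>> 24) & 0xFF) / 255.0;
            double ad = ((dstArgb >>> 24) & 0xFF) / 255.0;
            double ao = as + ad * (1.0 - as);          // resulting alpha
            if (ao == 0.0) return 0;                   // fully transparent result

            int out = (int) Math.round(ao * 255.0) << 24;
            for (int shift = 16; shift >= 0; shift -= 8) {   // R, G, B channels
                double cs = ((srcArgb >>> shift) & 0xFF) / 255.0;
                double cd = ((dstArgb >>> shift) & 0xFF) / 255.0;
                double co = (cs * as + cd * ad * (1.0 - as)) / ao;
                out |= (int) Math.round(co * 255.0) << shift;
            }
            return out;
        }
    }

With premultiplied alpha the division disappears and every channel, alpha included, follows the same c_o = c_s + c_d * (1 - a_s) form, which is how Porter & Duff state it.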

Java library for creating straight skeleton?

I have as input a 2D polygon with holes, and I need to find its straight skeleton, like in the picture:
(source: cgal.org)
Maybe there is a good Java library for it?
And if not, can you point me to the good explanation of the algorithm, so I could implement it myself? (I haven't found good resources on Google)
I wrote this a little while back. Not sure if it's robust enough.
https://github.com/twak/campskeleton
(edited for 2018...)
See http://www.sable.mcgill.ca/~dbelan2/roofs/roofs.html which contains an applet.
You may be able to use the JTS Topology Suite. It is a very capable library that I've used on a number of projects - never for straight skeleton, but it may be possible.
Edit:
Ah. I see that "Straight Skeleton" is a technical term. The wikipedia article references several algorithms. Have you looked at those?
As I understand it, you have a (convex?) polygon. From it, you subtract 1 or more (potentially non-convex) polygons. You want to turn the result into a set of polygons without holes. Are there extra rules that you're trying to apply?
I have a hard time coming up with a set of rules from the example that you provided. The outer polygons are non-convex; so it doesn't seem like you're trying to find a convex set to represent the result (which is a relatively common task).
If you could use the breakdown shown below, the algorithm is pretty simple. Can you clarify?
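In the meantime, if you do experiment with the JTS route mentioned above, constructing a polygon with holes is straightforward, and a negative buffer gives a crude inward offset to play with. To be clear, that is not the straight skeleton itself, just a way to start probing the geometry (this sketch uses the older com.vividsolutions package names):

    import com.vividsolutions.jts.geom.Geometry;
    import com.vividsolutions.jts.io.WKTReader;

    // Build a polygon with a hole from WKT and shrink it inward a little.
    // This is only an offset experiment, not a straight-skeleton computation.
    public class JtsOffsetDemo {
        public static void main(String[] args) throws Exception {
            String wkt = "POLYGON ((0 0, 100 0, 100 60, 0 60, 0 0), "
                       + "(40 20, 60 20, 60 40, 40 40, 40 20))";
            Geometry polygonWithHole = new WKTReader().read(wkt);
            Geometry inset = polygonWithHole.buffer(-5.0); // inward offset by 5 units
            System.out.println(inset);
        }
    }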
Can I ask what your purpose is for finding the straight skeleton? Is it personal or commercial? I would be interested in knowing how you are using it to solve real-time problems. I do have a Java library that does that. My algorithm is listed here: http://web.stcloudstate.edu/rsarnath/skeleton/definition.htm

Image Comparison Techniques with Java

I'm looking for several methods to compare two images to see how similar they are. Currently I plan to have percentages as the 'similarity index' end-result. My program outline is something like this:
User selects 2 images to compare.
With a button, the images are compared using several different methods.
At the end, each method will have a percentage next to it indicating how similar the images are based on that method.
I've done a lot of reading lately and some of the stuff I've read seems to be incredibly complex and advanced and not for someone like me with only about a year's worth of Java experience. So far I've read about:
The Fourier Transform - I'm finding this rather confusing to implement in Java, but apparently the Java Advanced Imaging API has a class for it, though I'm not sure how to convert the output to an actual result.
SIFT algorithm - seems incredibly complex
Histograms - probably the easiest out of all mentioned so far
Pixel grabbing - seems viable, but if there's a considerable amount of variation between the two images it doesn't look like it's going to produce any sort of accurate result. I might be wrong?
I also have the idea of pre-processing an image using a Sobel filter first, then comparing it. Problem is the actual comparing part.
So yeah, I'm looking to see if anyone has ideas for comparing images in Java, hoping that there are people here who have done similar projects before. I just want to get some input on viable comparison techniques that aren't too hard to implement in Java.
Thanks in advance
Fourier Transform - This can be used to efficiently compute the cross-correlation, which will tell you how to align the two images and how similar they are when they are optimally aligned.
Sift descriptors - These can be used to compare local features. They are often used for correspondence analysis and object recognition. (See also SURF)
Histograms - The normalized cross-correlation often yields good results for comparing images on a global level (see the sketch after this answer). But since you are just comparing color distributions, you could end up declaring an outdoor scene with lots of snow as similar to an indoor scene with lots of white wallpaper...
Pixel grabbing - No idea what this is...
You can get a good overview from this paper. Another field you might want to look into is content-based image retrieval (CBIR).
Sorry for not being Java specific. HTH.
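Since you want a percentage per method, here is roughly what the histogram option could look like in plain Java (a naive sketch: grayscale histograms compared by histogram intersection rather than the normalized cross-correlation mentioned above; multiply the result by 100 for your similarity index):

    import java.awt.image.BufferedImage;

    // Naive similarity in [0, 1]: normalized grayscale histograms compared by
    // histogram intersection (the sum of per-bucket minima).
    public final class HistogramSimilarity {

        private static double[] grayHistogram(BufferedImage img) {
            double[] hist = new double[256];
            int pixels = img.getWidth() * img.getHeight();
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    int r = (rgb >>> 16) & 0xFF, g = (rgb >>> 8) & 0xFF, b = rgb & 0xFF;
                    hist[(r + g + b) / 3]++;
                }
            }
            for (int i = 0; i < hist.length; i++) hist[i] /= pixels; // normalize
            return hist;
        }

        public static double similarity(BufferedImage a, BufferedImage b) {
            double[] ha = grayHistogram(a), hb = grayHistogram(b);
            double score = 0;
            for (int i = 0; i < ha.length; i++) score += Math.min(ha[i], hb[i]);
            return score; // 1.0 = identical distributions, 0.0 = completely disjoint
        }
    }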
As a better alternative to simple pixel grabbing, try SSIM. It does require that your images are essentially of the same object from the same angle, however. It's useful if you're comparing images that have been compressed with different algorithms, for example (e.g. JPEG vs JPEG2000). Also, it's a fairly simple approach that you should be able to implement reasonably quickly to see some results.
I don't know of a Java implementation, but there's a C++ implementation using OpenCV. You could try to re-use that (through something like javacv) or just write it from scratch. The algorithm itself isn't that complicated anyway, so you should be able to implement it directly.
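To see numbers quickly before committing to a full port, a heavily simplified single-window version of SSIM over grayscale values can be written directly. The real algorithm slides a small (often Gaussian-weighted) window across the images and averages the local scores, so treat this only as a rough approximation:

    import java.awt.image.BufferedImage;

    // Global (single-window) SSIM over grayscale intensities. The real SSIM
    // computes this per local window and averages the window scores; this
    // simplified variant treats the whole image as one window.
    public final class GlobalSsim {
        private static final double C1 = Math.pow(0.01 * 255, 2);
        private static final double C2 = Math.pow(0.03 * 255, 2);

        private static double[] gray(BufferedImage img) {
            double[] v = new double[img.getWidth() * img.getHeight()];
            int i = 0;
            for (int y = 0; y < img.getHeight(); y++) {
                for (int x = 0; x < img.getWidth(); x++) {
                    int rgb = img.getRGB(x, y);
                    v[i++] = (((rgb >>> 16) & 0xFF) + ((rgb >>> 8) & 0xFF) + (rgb & 0xFF)) / 3.0;
                }
            }
            return v;
        }

        // Both images must have identical dimensions. Result is roughly in [-1, 1].
        public static double ssim(BufferedImage a, BufferedImage b) {
            double[] x = gray(a), y = gray(b);
            int n = x.length;
            double mx = 0, my = 0;
            for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
            mx /= n; my /= n;

            double vx = 0, vy = 0, cov = 0;
            for (int i = 0; i < n; i++) {
                vx += (x[i] - mx) * (x[i] - mx);
                vy += (y[i] - my) * (y[i] - my);
                cov += (x[i] - mx) * (y[i] - my);
            }
            vx /= n; vy /= n; cov /= n;

            return ((2 * mx * my + C1) * (2 * cov + C2))
                 / ((mx * mx + my * my + C1) * (vx + vy + C2));
        }
    }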

Random maps/graphs and OSM

Just wondering if you have any suggestions here. I need a lot of sample maps/graphs to test my shortest-path search solution (I was told I should have >100 of them). My code is supposed to work in a simulator, which uses OpenStreetMap maps of urban settings, limiting the total number of junctions to a few thousand. The problem is, there are only two or three maps provided with the simulator. The way I see it, I have a few choices here:
Write my own random graph generator. Possibly lots of work (do you think? --I've never done it before) and reinventing the wheel.
Use an off-the-shelf solution. I'm not aware of any that would generate map-like graphs for me (well, at least I didn't find one in JUNG :-) ).
Grab them from OSM in some automated way. I don't really intend to go and pick out 100+ urban maps myself that satisfy the <15000-node requirement, and I don't think that would be easy to automate either.
I would assume that option 3 would be tough to do. Any advice on an off-the-shelf solution, or comments about writing my own? I'm not an experienced programmer by any measure, but I could probably manage given a few days.
The first thought:
You have a known problem and you need to test its solution. Generate lots of test data, find the correct solutions with a verified algorithm, then run your algorithm against the generated data set and compare the results. (Or just download a verified Dijkstra implementation; I believe implementing the algorithm itself is your task.)
The second thought:
A randomly generated data set is not the best way to test an algorithm. You need to think about the cases where your algorithm can fail and create corresponding tests: for example, a graph with one node, a graph with cycles, a linear graph (N1---N2---N3-...-Nn), and a complete graph with the maximum number of nodes. I think if you create these four tests plus two or three small random tests, it will be enough to be sure that your algorithm is implemented correctly.
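If you do end up generating graphs yourself, one cheap way to get road-network-like (roughly planar) graphs is to place junctions on a jittered grid and keep each edge to a neighbouring junction with some probability; a sketch (all sizes and probabilities are arbitrary choices of mine):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;

    // Rough "urban map" generator: junctions sit on a jittered grid, and each
    // edge to the left/upper neighbour is kept with some probability so the
    // resulting road network looks irregular rather than perfectly regular.
    public final class GridMapGenerator {

        public static final class Edge {
            public final int from, to;
            Edge(int from, int to) { this.from = from; this.to = to; }
        }

        public final double[][] coords;            // coords[i] = {x, y} of junction i
        public final List<Edge> edges = new ArrayList<Edge>();

        public GridMapGenerator(int rows, int cols, double keepProbability, long seed) {
            Random rnd = new Random(seed);
            coords = new double[rows * cols][2];
            for (int r = 0; r < rows; r++) {
                for (int c = 0; c < cols; c++) {
                    int id = r * cols + c;
                    coords[id][0] = c * 100 + rnd.nextGaussian() * 15; // jittered x
                    coords[id][1] = r * 100 + rnd.nextGaussian() * 15; // jittered y
                    if (c > 0 && rnd.nextDouble() < keepProbability)
                        edges.add(new Edge(id, id - 1));       // edge to left neighbour
                    if (r > 0 && rnd.nextDouble() < keepProbability)
                        edges.add(new Edge(id, id - cols));    // edge to upper neighbour
                }
            }
        }
    }

Dropping edges can disconnect the graph, so either keep the probability fairly high (around 0.8) or extract the largest connected component before using the result as a test map.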
