I am designing a flight simulation program and I am looking for ideas on how to properly implement such a requirement.
Kindly look at my picture below. The points represent locations.
The idea is this: I want to create a data structure that best represents such a scenario in Java, so that:
When I am at Point 1, how far am I from the last point, Point 8?
Points 2, 3 and 5 are at the same distance from Point 8.
From Point 1, I could traverse to Point 3, then Point 6, then 7, then 8, which would equate to 4 steps.
When I am at Point 0, I could traverse to Point 4, then 5, then 7, and then reach Point 8, which would also equate to 4 steps.
I just want to assist users in finding different routes.
Is this possible, and which Java data structure would best fit this requirement? Also, any design ideas on how to implement this?
Sorry if my question seems vague; I am just trying to get as much information as I can to properly handle such a requirement.
What you have is a weighted graph, where the weights represent the distances between the nodes (which is very common). You could easily implement this yourself (and it's a great way to learn!), but there is also a lot of Java source code for this out there.
Of course, this is not a Java data structure. It's simply a data structure (or a concept), used by everyone, everywhere.
Calculating steps and distances is very easy once you've implemented a weighted graph.
There is a massive amount of documentation on all of this, especially here on Stack Overflow.
This is a Shortest Path problem, a common graph problem. The usual way to represent the data is an adjacency list or an adjacency matrix:
An adjacency list keeps, for every node, all 1-step-reachable destinations. This is often implemented as a linked list. It's the proper choice if your graph is relatively sparse (i.e. few destinations per node).
An adjacency matrix is used for (very) dense graphs. Basically, you keep an NxN matrix of values (weights/costs, or yes/no booleans). Then distances[i][j] represents the cost of travelling from i to j. An unavailable arc has a cost of INF (or some error value).
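If you go the adjacency-list route, a minimal Java sketch might look like this (Graph, Edge and the method names are my own, not any standard API):

import java.util.*;

// A minimal weighted graph as an adjacency list: node -> outgoing edges.
class Graph {
    // An edge to a destination node with a distance (the weight).
    static class Edge {
        final int to;
        final double distance;
        Edge(int to, double distance) { this.to = to; this.distance = distance; }
    }

    private final Map<Integer, List<Edge>> adjacency = new HashMap<>();

    // Undirected graph: store each edge in both directions.
    void addEdge(int a, int b, double distance) {
        adjacency.computeIfAbsent(a, k -> new ArrayList<>()).add(new Edge(b, distance));
        adjacency.computeIfAbsent(b, k -> new ArrayList<>()).add(new Edge(a, distance));
    }

    List<Edge> neighbors(int node) {
        return adjacency.getOrDefault(node, Collections.emptyList());
    }
}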
The problem itself is generally solved by Dijkstra's Algorithm.
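For completeness, a sketch of Dijkstra over that structure (it assumes the hypothetical Graph/Edge classes from the snippet above and returns the shortest distance from a start node to every reachable node):

import java.util.*;

class Dijkstra {
    static Map<Integer, Double> shortestDistances(Graph graph, int start) {
        Map<Integer, Double> dist = new HashMap<>();
        dist.put(start, 0.0);
        // Queue entries are {distance, node}, ordered by distance (closest first).
        PriorityQueue<double[]> queue = new PriorityQueue<>(Comparator.comparingDouble(e -> e[0]));
        queue.add(new double[] {0.0, start});
        while (!queue.isEmpty()) {
            double[] entry = queue.poll();
            double d = entry[0];
            int node = (int) entry[1];
            if (d > dist.getOrDefault(node, Double.POSITIVE_INFINITY)) continue; // stale entry
            for (Graph.Edge e : graph.neighbors(node)) {
                double candidate = d + e.distance;
                if (candidate < dist.getOrDefault(e.to, Double.POSITIVE_INFINITY)) {
                    dist.put(e.to, candidate); // found a shorter route to e.to
                    queue.add(new double[] {candidate, e.to});
                }
            }
        }
        return dist;
    }
}

Counting steps instead of distance is the same algorithm with every edge weight set to 1 (or simply a BFS).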
I'm building a game tree search for a card game similar to bridge, so I started out by building a double-dummy solver for bridge. I'm trying to further optimize the current game tree search algorithm, which is based on alpha-beta pruning enhanced with MTD-f: https://en.wikipedia.org/wiki/MTD-f (all my current code is in Java).
I've been reading M. Ginsberg's paper on "partition search". It can be found in many places on the internet, e.g. https://www.aaai.org/Papers/AAAI/1996/AAAI96-034.pdf
According to Ginsberg, he gained an order of magnitude in performance by this.
After having read the paper multiple times, I think I more or less understand it; at least the idea is fairly simple, but working it out isn't.
I'm not sure I fully understand how the complexity of the intersection in the algorithm is avoided by doing null-window searches, although it isn't hard to see that it converts a complex problem into a binary 0-or-1 problem.
Ginsberg goes on to note in his bridge example
"although the details of the approximation functions and data structures are dependent on the rules of bridge and a discussion of their implementation is outside the scope of this paper"
I totally fail to see what sort of data structures can be used to implement this, e.g. the first line of the algorithm:
if there is an entry (S,[x,y],z) with p elemOf S return (z,S)
Where
"entry" refers to an entry in some form of "transposition table"
p is a position (as one typically has it in a game tree)
S is a set of positions!
[x,y] is the possible outcome interval, similar to alpha and beta
This somehow requires a transposition table where the keys are sets of positions?!
How does one determine elemOf?
...
Similarly for the partitioning system, referred to as (P, R, C) in the paper: how does one go about building the "approximating functions" for these?
I seem unable to find implementations of this idea, or data structures to support it... Has anyone tried? Is it really worth all the effort?
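For what it's worth, the closest thing I can picture (pure speculation on my part, all names mine) is storing S intensionally rather than as an explicit set: positions are collapsed to an equivalence signature (in the spirit of the rank equivalence mentioned in the P.S. below), and elemOf becomes a signature lookup:

import java.util.*;

// Speculative sketch: a partition-search entry (S, [x, y], z) where S is
// represented by a signature function over positions instead of an explicit
// set, so "p elemOf S" becomes "p's signature equals the entry's key".
class PartitionTable {
    // Stand-in for a game position; in bridge this would hold the hands,
    // tricks taken, the player to move, and so on.
    interface Position {
        // Abstraction: positions with equal signatures are treated as
        // game-equivalent (e.g. after collapsing rank-equivalent cards).
        String signature();
    }

    static class Entry {
        final int lower, upper; // the [x, y] window
        final int value;        // z
        Entry(int lower, int upper, int value) {
            this.lower = lower; this.upper = upper; this.value = value;
        }
    }

    private final Map<String, Entry> table = new HashMap<>();

    Entry lookup(Position p) { return table.get(p.signature()); }

    void store(Position p, int lower, int upper, int value) {
        table.put(p.signature(), new Entry(lower, upper, value));
    }
}

I have no idea whether this is anywhere near what Ginsberg actually did, which is exactly my question.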
P.S.
The only reference to something similar I have found is the master's thesis of S. Kupferschmid at: http://gki.informatik.uni-freiburg.de/theses/diploma_de.html
(It is entirely in German, though, and it only defines an equivalence relationship between cards that takes a Skat particularity into account. For something like bridge this is similar to "rank equivalence", i.e. if you have the 6 and 4 of the same suit and the 5 of that suit has been played, then the 6 and 4 are equivalent for the outcome of the game.)
I need some advice on an approach I may need to take to solve a gaming problem. The puzzle (NxN) consists of positive numbers stored in a two-dimensional array. For simplicity, I'll list a simple example:
2 1 2 2
1 3 2 1
1 0 2 1
3 1 2 0
So the starting point is (0,0) => 2 and the goal location is (3,3) => 0.
The number at an array location tells you how far to move: from (0,0) => 2 you can move to either (0,2) or (2,0), and so on (the allowed moves are left, right, up, or down).
So you end up with a solution like this, for example: (0,2) => (2,2) => (2,0) => (3,0) => (3,3).
So my question is: what sort of algorithm should I be looking into, and have any of you done something similar to this?
You have plenty of solutions here:
A* algorithm
Dijkstra
Depth-first
Breadth-first
The first two will give you an optimal solution if one exists. A* is typically faster than Dijkstra if the heuristic is well chosen. Breadth-first will also give you an optimal solution. Depth-first may give you non-optimal solutions for this problem.
The main difference between A* and Dijkstra is that A* uses a heuristic, namely a function that tries to estimate whether one move is better than another.
The main difference between depth-first and breadth-first is the order in which they explore the space of solutions. Breadth-first will start by looking for all solutions of length 1 then all solutions of length 2, etc, while depth-first will fully explore an entire path until it either cannot go any further or finds a solution.
A* and Dijkstra are typically implemented in an imperative style and are probably more sophisticated than the other two, especially A*. Breadth-first is also naturally expressed in an imperative style. Depth-first is generally expressed recursively, which can be a problem if your solutions can exceed a length of several thousand moves (depending on the size of your stack, you will generally only be able to make 7-10k recursive calls before you get a StackOverflowError).
To sum up:
A* is generally the most efficient of the algorithms listed above
A* is the most difficult to implement
Dijkstra is a special case of A* (one with no heuristic) that is simpler to implement but potentially less efficient
Breadth-first is straightforward to implement and is resilient to long solutions
Depth-first is straightforward to implement but it is limited by the length of the longest possible path if it is implemented recursively
All these algorithms except depth-first guarantee an optimal solution
Code example:
I found this Scala implementation of A* in one of my repositories. Might help.
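For the jump puzzle in this question specifically, a minimal breadth-first sketch in Java could look like this (the grid is the one from the question; all names are mine):

import java.util.*;

class JumpPuzzleBfs {
    // BFS: returns the minimum number of moves from (0,0) to (n-1,n-1),
    // or -1 if the goal is unreachable. Each move costs one step, so the
    // first time we reach the goal is optimal.
    static int solve(int[][] grid) {
        int n = grid.length;
        int[] dr = {1, -1, 0, 0}, dc = {0, 0, 1, -1}; // down, up, right, left
        boolean[][] seen = new boolean[n][n];
        Deque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[] {0, 0, 0}); // row, col, steps
        seen[0][0] = true;
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            int r = cur[0], c = cur[1], steps = cur[2];
            if (r == n - 1 && c == n - 1) return steps;
            int jump = grid[r][c];
            if (jump == 0) continue; // a non-goal 0 is a dead end
            for (int d = 0; d < 4; d++) {
                int nr = r + dr[d] * jump, nc = c + dc[d] * jump;
                if (nr >= 0 && nr < n && nc >= 0 && nc < n && !seen[nr][nc]) {
                    seen[nr][nc] = true;
                    queue.add(new int[] {nr, nc, steps + 1});
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {2, 1, 2, 2},
            {1, 3, 2, 1},
            {1, 0, 2, 1},
            {3, 1, 2, 0}
        };
        System.out.println(solve(grid)); // prints 3: (0,0) -> (2,0) -> (3,0) -> (3,3)
    }
}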
I have some grid search algorithms (best-first, breadth-first, depth-first) implemented in Object Pascal (Delphi) here, which you could easily adapt to Java if this were a classic grid search:
https://github.com/Zoomicon/GridSearchDemo/tree/master/Object%20Pascal/algorithms
You can try the GridSearchDemo application to see how those algorithms behave when searching a grid with a start point, a target point, and obstacles in various grid cells (you can set them):
https://github.com/Zoomicon/GridSearchDemo/releases
In general, I prefer the A* algorithm, which is an example of a Best-First algorithm (https://en.wikipedia.org/wiki/Best-first_search)
In your case, this is not really a grid but a graph, since you seem to have jump links to other cells (or at least that is how you explain the numbers in your question, although you call them "how far" at first).
I have written a program in Java to solve this problem. It uses the A* algorithm with the Manhattan and Hamming heuristic functions. Whether to use the Hamming or the Manhattan distance is up to you, but Manhattan is better.
Here is my code in java: 8-puzzle
By the way, it's not an easy approach, and many instances can't be solved (for the 8-puzzle, half of all start configurations are unsolvable).
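For reference, the Manhattan heuristic is just the sum, over all tiles, of each tile's horizontal plus vertical distance from its goal square. A small sketch (assuming the goal layout 1..8 row by row with the blank, encoded as 0, in the last square):

class ManhattanHeuristic {
    // Sum over all tiles of |row - goalRow| + |col - goalCol|; the blank is skipped.
    static int manhattan(int[][] board) {
        int h = 0;
        for (int r = 0; r < 3; r++) {
            for (int c = 0; c < 3; c++) {
                int tile = board[r][c];
                if (tile == 0) continue;
                int goalRow = (tile - 1) / 3, goalCol = (tile - 1) % 3;
                h += Math.abs(r - goalRow) + Math.abs(c - goalCol);
            }
        }
        return h;
    }
}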
I am currently working on a project involving a mixture of travelling salesman and shortest path. It goes as follows:
I am given a set of 9 vertices, all with positive coordinates in 2-space, (x, y), where x and y are positive real numbers. I am then asked to compute the shortest path a travelling salesman can take that traverses all the points. In essence, the graph is simply a collection of nodes with coordinates, and I have to arrange the edges such that there is a connection from node 1 to node 9 that is the shortest possible one. It is assumed that I start at node 1 (x1, y1), need to visit each node once and only once, and will end at node 9, specified by (x9, y9).
I am writing the program in Java, although the professor said that the program can be written in any language.
I started by writing a Node class, which represents each node and has fields x and y for the coordinates. However, I am completely stumped as to how to perform the optimization.
I would greatly appreciate any assistance with respect to this problem.
Thank you very much, and I am excited to become a member of this community!
Euclidean TSP remains NP-hard, so just go for whatever algorithm for general TSP you want; for 9 vertices, the brute-force solution compares 7! = 5040 permutations of the 7 intermediate vertices (since you start at 1 and end at 9, as per the problem description), so all you would need to do is generate all permutations of {2,...,8} and then just check the lengths of those paths.
If you want something slightly fancier, go for the dynamic programming approach of Held and Karp (found independently by Bellman), which you can read up on in lots of places online, so I won't be posting how it works.
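A minimal brute-force sketch in Java (all names are mine; nodes[0] and nodes[8] play the roles of the fixed endpoints 1 and 9, and coordinates are double[] pairs):

class BruteForceTspPath {
    static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }

    // nodes[0] is the fixed start, nodes[n-1] the fixed end; permute the rest.
    static double shortest(double[][] nodes) {
        int n = nodes.length;
        int[] mid = new int[n - 2];
        for (int i = 0; i < mid.length; i++) mid[i] = i + 1;
        return permute(nodes, mid, 0);
    }

    // Recursively generate all permutations of the intermediate nodes by
    // swapping, keeping the shortest total path length seen.
    private static double permute(double[][] nodes, int[] mid, int k) {
        if (k == mid.length) {
            double total = dist(nodes[0], nodes[mid[0]]);
            for (int i = 0; i + 1 < mid.length; i++)
                total += dist(nodes[mid[i]], nodes[mid[i + 1]]);
            return total + dist(nodes[mid[mid.length - 1]], nodes[nodes.length - 1]);
        }
        double best = Double.POSITIVE_INFINITY;
        for (int i = k; i < mid.length; i++) {
            int tmp = mid[k]; mid[k] = mid[i]; mid[i] = tmp; // fix mid[k]
            best = Math.min(best, permute(nodes, mid, k + 1));
            tmp = mid[k]; mid[k] = mid[i]; mid[i] = tmp;     // undo the swap
        }
        return best;
    }
}

Keeping track of the best permutation itself (not just its length) is a small extension.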
I have a graph in which I have two nodes (0 and 6), and I have to cut the fewest edges possible so that they aren't connected. For example, in this graph,
with nodes 0 and 6 as the endpoints, the fewest edges I have to cut are 2-7 and 3-7.
My idea was to find the shortest path between the two using BFS. I found one (0-2-7-6), but then I don't know how to find the other (0-3-7-6). Even then, I have no idea how to choose the edges to cut.
It would be nice if someone could give me some pointers on this matter.
This problem seems very similar to that of finding articulation nodes within a graph. An articulation point (or cut vertex) is a node whose removal would cause the graph to be split in two; these nodes sit on the boundaries between biconnected components.
The discovery of articulation nodes in a graph is a largely solved problem, and you can find more details about it here: http://en.wikipedia.org/wiki/Biconnected_component
It seems to me that the way you would like to approach a problem like this in general would be something along these lines:
1. Find all articulation points
2. Do a bfs from each node and determine articulation points along the path
3. Split graph at the articulation point, choosing the side with minimal edges
4. Continue until the two nodes are not connected
In the above example, 7 is the only articulation point, so you would snip the edges between 7 and nodes 2 and 3, since there are only two edges between 7 and the 0-4 subgraph, and three edges between 7 and the 5, 6, 8 subgraph.
There is a more established algorithm for doing this (read: one that I didn't come up with) called Karger's algorithm, which can solve your problem, albeit in O(n^2) time.
That algorithm works by effectively joining adjacent nodes together until there are only two nodes left, and then counting the number of edges between those two. That number of edges is the minimum number of cuts required to split the graph.
To apply Karger's algorithm to your problem, you would just need the caveat that you should always be joining nodes into the two nodes you want to split. Additionally, in order to be able to go back to the original edges you need to cut, you should keep a reference to which nodes the edges originally belonged to.
There's a great visualization of Karger's algorithm here: http://en.wikipedia.org/wiki/Karger%27s_algorithm
What you want is a min s-t cut. The usual way to find one in a general graph is to run an algorithm like push relabel with source 0 and sink 6, which computes a min s-t cut as a byproduct of computing a maximum flow.
Karger's algorithm finds a min cut, i.e., it minimizes over s and t as well as cuts. Since s and t are fixed for you, Karger's is not appropriate. An articulation vertex is a vertex whose removal disconnects the graph. You are interested in removing edges, not vertices.
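If you want a concrete starting point, here is a compact Edmonds-Karp sketch in Java (BFS augmenting paths, a simpler alternative to push relabel). With unit capacities on your unweighted edges, the returned max flow equals the number of edges in a min s-t cut:

import java.util.*;

class MaxFlowMinCut {
    // Edmonds-Karp on an n x n capacity matrix. For an unweighted graph,
    // put capacity 1 on each edge (in both directions if undirected).
    static int maxFlow(int[][] capacity, int s, int t) {
        int n = capacity.length;
        int[][] residual = new int[n][n];
        for (int i = 0; i < n; i++) residual[i] = capacity[i].clone();
        int flow = 0;
        while (true) {
            // BFS for a shortest augmenting path in the residual graph.
            int[] parent = new int[n];
            Arrays.fill(parent, -1);
            parent[s] = s;
            Deque<Integer> queue = new ArrayDeque<>();
            queue.add(s);
            while (!queue.isEmpty() && parent[t] == -1) {
                int u = queue.poll();
                for (int v = 0; v < n; v++)
                    if (parent[v] == -1 && residual[u][v] > 0) {
                        parent[v] = u;
                        queue.add(v);
                    }
            }
            if (parent[t] == -1) return flow; // no augmenting path left
            // Find the bottleneck capacity along the path, then push flow.
            int bottleneck = Integer.MAX_VALUE;
            for (int v = t; v != s; v = parent[v])
                bottleneck = Math.min(bottleneck, residual[parent[v]][v]);
            for (int v = t; v != s; v = parent[v]) {
                residual[parent[v]][v] -= bottleneck;
                residual[v][parent[v]] += bottleneck;
            }
            flow += bottleneck;
        }
    }
}

To recover the actual edges to cut, do one more BFS from s in the final residual graph: the original edges that go from the reachable side to the unreachable side form the min cut.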
As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble, I've got a genetic algorithm working that can evolve a set of rules (handled by boolean values) in order to find a good solution through a maze.
That being said, the GA alone is okay, but I'd like to beef it up with a neural network, even though I have no real working knowledge of neural networks (no formal theoretical CS education). After doing a bit of reading on the subject, I found that a neural network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as
1 0 0 1 0 1 0 1 0 1 1 1 0 0...
How could I use a Neural Network (I'm assuming MLP?) to train and improve my genome?
In addition, as I know nothing about neural networks, I've been looking into implementing some form of reinforcement learning using my maze matrix (a 2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me:
(from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm)
1. Set the parameter γ, and the environment reward matrix R
2. Initialize matrix Q as zero matrix
3. For each episode:
* Select random initial state
* Do while not reach goal state
o Select one among all possible actions for the current state
o Using this possible action, consider to go to the next state
o Get maximum Q value of this next state based on all possible actions
o Compute Q(state, action) = R(state, action) + γ · max[Q(next state, all actions)]
o Set the next state as the current state
End Do
End For
The big problem for me is implementing the reward matrix R, and understanding what exactly the Q matrix is and how to get the Q values. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-learning algorithm?
If someone could help out by explaining what I would need to do to implement this, preferably in Java (although C# would be nice too), possibly with some source code examples, it'd be appreciated.
As noted in some comments, your question indeed involves a large set of background knowledge and topics that can hardly be eloquently covered on Stack Overflow. However, what we can try here is to suggest approaches to get around your problem.
First of all: what does your GA do? I see a set of binary values; what are they? I see them as either:
bad: a sequence of 'turn right' and 'turn left' instructions. Why is this bad? Because you're basically doing a random, brute-force attempt at solving your problem. You're not evolving a genotype: you're refining random guesses.
better: every gene (location in the genome) represents a feature that will be expressed in the phenotype. There should not be a 1-to-1 mapping between genome and phenotype!
Let me give you an example: in our brain there are 10^13ish neurons. But we have only around 10^9 genes (yes, it's not an exact value, bear with me for a second). What does this tell us? That our genotype does not encode every neuron. Our genome encodes the proteins that will then go and make the components of our body.
Hence, evolution works on the genotype by selecting features of the phenotype. If I were to have 6 fingers on each hand, and if that made me a better programmer, making me have more kids because I'm more successful in life, well, my genotype would then be selected by evolution because it contains the capability to give me a more fit body (yes, there is a pun there, given the average geekiness-to-reproducibility ratio of most people around here).
Now, think about your GA: what is it that you are trying to accomplish? Are you sure that evolving rules would help? In other words, how would you perform in a maze? What is the most successful thing that can help you: having a different body, or having a memory of the right path to get out? Perhaps you might want to reconsider your genotype and have it encode memorization abilities: maybe encode in the genotype how much data can be stored and how fast your agents can access it, and then measure fitness in terms of how fast they get out of the maze.
Another (weaker) approach could be to encode the rules that your agent uses to decide where to go. The take-home message is, encode features that, once expressed, can be selected by fitness.
Now, to the neural network issue. One thing to remember is that NNs are filters. They receive an input, perform operations on it, and return an output. What is this output? Maybe you just need to discriminate a true/false condition; for example, once you feed a maze map to a NN, it could tell you whether you can get out of the maze or not. How would you do such a thing? You will need to encode the data properly.
This is the key point about NNs: your input data must be encoded properly. Usually people normalize it, maybe scale it; perhaps you can apply a sigmoid function to it to avoid values that are too large or too small. Those are details that deal with error measures and performance. What you need to understand now is what a NN is, and what you cannot use it for.
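As a trivial illustration of that encoding step, min-max normalization and a sigmoid squash might look like this (just a sketch):

class InputEncoding {
    // Min-max normalization: map raw values into [0, 1] given known bounds.
    static double[] normalize(double[] raw, double min, double max) {
        double[] out = new double[raw.length];
        for (int i = 0; i < raw.length; i++)
            out[i] = (raw[i] - min) / (max - min);
        return out;
    }

    // Logistic sigmoid: squashes any real value into (0, 1).
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }
}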
To your problem now. You mentioned you want to use NNs as well: what about,
using a neural network to guide the agent, and
using a genetic algorithm to evolve the neural network parameters?
Rephrased like so:
let's suppose you have a robot: your NN controls the left and right wheels, and as input it receives the distance to the next wall and how far it has traveled so far (it's just an example)
you start by generating a random genotype
turn the genotype into a phenotype: the first gene is the network sensitivity; the second gene encodes the learning rate; the third gene... so on and so forth
now that you have a neural network, run the simulation
see how it performs
generate a second random genotype, and build a second NN from it
see how this second individual performs
take the best individual, then either mutate its genotype or recombine it with the loser's
repeat
There is an excellent read on the matter here: Inman Harvey's Microbial GA.
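To make the loop above concrete, here is a bare-bones Java sketch in the spirit of Harvey's microbial GA; the genome is just an array of NN parameters, and evaluateFitness (left abstract) stands for "decode the genome into a network, run the maze simulation, score the agent":

import java.util.*;

abstract class MicrobialNeuroevolution {
    static final int GENOME_LENGTH = 20; // one gene per NN parameter
    static final Random RNG = new Random();

    // Left abstract: build the NN from the genome, run the simulation,
    // and return a score (e.g. how fast the agent got out of the maze).
    abstract double evaluateFitness(double[] genome);

    void evolve(int tournaments) {
        double[] a = randomGenome(), b = randomGenome();
        for (int t = 0; t < tournaments; t++) {
            // Tournament: the loser is overwritten with a mutated mix of
            // winner and loser genes (the "microbial" infection step).
            double[] winner = a, loser = b;
            if (evaluateFitness(b) > evaluateFitness(a)) { winner = b; loser = a; }
            for (int i = 0; i < GENOME_LENGTH; i++) {
                if (RNG.nextDouble() < 0.5) loser[i] = winner[i];                  // recombination
                if (RNG.nextDouble() < 0.05) loser[i] += RNG.nextGaussian() * 0.1; // mutation
            }
        }
    }

    private static double[] randomGenome() {
        double[] g = new double[GENOME_LENGTH];
        for (int i = 0; i < g.length; i++) g[i] = RNG.nextGaussian();
        return g;
    }
}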
I hope I gave you some insight into these issues. NNs and GAs are no silver bullet that solves all problems. For some problems they can do very much; for others they are just the wrong tool. It's (still!) up to us to pick the best one, and to do so we must understand them well.
Have fun with it! It's great to know such things; it makes everyday life a bit more entertaining :)
There is probably no 'maze gene' to find. Genetic algorithms try to set up a vector of properties and a 'filtering system' that decides, by some kind of 'survival of the fittest' algorithm, which set of properties does the best job.
The easiest way to find a way out of a maze is to always move left (or right) along a wall (note that this only works if the maze is simply connected).
The Q-algorithm seems to have a problem with local maxima; as I remember, this was worked around by "kicking" (adding random values to the matrix) if the results didn't improve.
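To make the R and Q matrices from the question above concrete: both are just state-by-action tables, and the update in the tutorial's inner loop is Q(s, a) = R(s, a) + γ · max over a' of Q(s', a'). A minimal Java sketch (the state/action encoding, e.g. maze cells and the four moves, is up to you):

import java.util.Random;

class QLearningSketch {
    // How the environment moves: from a state, taking an action, where do we land?
    interface StepFunction {
        int nextState(int state, int action);
    }

    // R holds immediate rewards (e.g. 100 for entering the goal, 0 otherwise);
    // Q starts as the zero matrix and is filled in by the episodes.
    // Assumes the goal is reachable from every state.
    static void train(double[][] R, double[][] Q, double gamma,
                      int goalState, int episodes, StepFunction step) {
        Random rng = new Random();
        int numStates = R.length, numActions = R[0].length;
        for (int e = 0; e < episodes; e++) {
            int state = rng.nextInt(numStates); // random initial state
            while (state != goalState) {
                int action = rng.nextInt(numActions);     // pick a possible action
                int next = step.nextState(state, action); // environment response
                double maxNext = 0;                       // max over a' of Q(next, a')
                for (double q : Q[next]) maxNext = Math.max(maxNext, q);
                Q[state][action] = R[state][action] + gamma * maxNext;
                state = next; // the next state becomes the current state
            }
        }
    }
}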
EDIT: As mentioned above, a backtracking algorithm suits this task better than a GA or NN.
How to combine both algorithms is described here: NeuroGen describes how a GA is used for training a NN.
Try using the free open-source NeuronDotNet C# library for your neural networks instead of implementing them yourself.
As for a reinforcement learning library, I am currently looking for one myself, especially for the .NET Framework.