Well, I need to make a simulator for a non-deterministic pushdown automaton.
Everything is okay; I know I need recursion or something similar, but I do not know how to write the function that would simulate the automaton.
I have everything else under control: the automaton generator, the stack ...
I am doing it in Java, so that is maybe the only issue one can bump into, and I did.
So if anyone has done something similar, I could use some advice.
This is my current organisation of code:
Classes:
class Transit - one transition: a state, an input sign, and a stack sign; a List<Transit> holds the non-deterministic transitions
class Generator - generates the automaton from a file
class NPA - has public boolean start(), the function I am having trouble with
Of course, there is the problem of separate stacks and separate input for every branch.
I tried to solve it with a collection of NPA objects, starting each one, but it doesn't work.
Okay, think about the definition of the automaton. You have states and a state transition function. You have the stack. What makes life exciting is the non-determinism.
However, it is a theorem (look it up) that every non-deterministic finite automaton has an equivalent deterministic FSA.
One approach you could try is to construct the equivalent DFA. That's exponential space in the worst case, though: each state in the DFA corresponds to a subset of the NFA's states, i.e. to an element of the powerset.
So you could try it "on line" instead. Now, instead of constructing the equivalent DFA, you simulate the NFA: at each state transition you construct all the next states you can reach and put them in some data structure; then go back and see what happens next for each such state.
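For a pushdown automaton the same idea works if the configuration includes the stack, and each branch gets its own copy of it, which also addresses the "separate stacks" problem from the question. A minimal sketch, using hypothetical Transition/Config types rather than the asker's actual classes:

```java
import java.util.*;

class NPDASimulator {
    // A transition: in state `from`, reading `input` (null = epsilon) with
    // `stackTop` on top of the stack, go to `to` and push `push`
    // (leftmost character ends up on top).
    record Transition(int from, Character input, char stackTop, int to, String push) {}

    // A configuration: current state, position in the input, and this
    // branch's own stack.
    record Config(int state, int pos, Deque<Character> stack) {}

    private final List<Transition> transitions;
    private final Set<Integer> accepting;

    NPDASimulator(List<Transition> transitions, Set<Integer> accepting) {
        this.transitions = transitions;
        this.accepting = accepting;
    }

    boolean accepts(String input, int startState, char startSymbol) {
        Deque<Character> startStack = new ArrayDeque<>();
        startStack.push(startSymbol);
        Deque<Config> frontier = new ArrayDeque<>();
        frontier.push(new Config(startState, 0, startStack));

        while (!frontier.isEmpty()) {
            Config c = frontier.pop();
            if (c.pos() == input.length() && accepting.contains(c.state()))
                return true;                    // one branch accepted
            if (c.stack().isEmpty()) continue;  // this branch is dead
            char top = c.stack().peek();
            for (Transition t : transitions) {
                if (t.from() != c.state() || t.stackTop() != top) continue;
                boolean eps = t.input() == null;
                if (!eps && (c.pos() >= input.length()
                             || input.charAt(c.pos()) != t.input())) continue;
                // Fork: every branch gets its OWN copy of the stack.
                Deque<Character> s = new ArrayDeque<>(c.stack());
                s.pop();
                for (int i = t.push().length() - 1; i >= 0; i--)
                    s.push(t.push().charAt(i));
                frontier.push(new Config(t.to(), c.pos() + (eps ? 0 : 1), s));
            }
        }
        return false; // every branch died without accepting
    }
}
```

One caveat: epsilon-transitions that never shrink the stack can cycle forever; keeping a set of already-seen configurations (or bounding the stack depth) guards against that.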
JFLAP is open source and does this (and much more!) - why not check it out?
After posting a somewhat ambiguous question, I believe I have nailed down what I am wondering about (I am a complete novice with FSMs).
I would like to simulate a state space using finite state machines (possibly non-deterministic automata, i.e. multiple next-state transitions allowed) in Clojure.
This is essentially my problem:
Suppose we have these states Q={flying,cycling,running,driving} and we have the state durations for each during an average day D={120,30,30,60} - where, for the sake of argument, those are in minutes. How can one then create a possibly non-deterministic FSM (multiple destination states allowed) using Clojure? I have looked at e.g. https://github.com/cdorrat/reduce-fsm and https://github.com/ztellman/automat, but I do not believe they are quite what I want.
My end goal is to get a simulation looking something like S = {flying,flying,flying,flying,flying,cycling,cycling,running,driving,driving,driving}.
Effectively, I want to induce a heavy self-transition bias in the state machine. The end and start states are not important.
The problem is not formulated completely enough to be answered unequivocally. Anyway:
If you just want to recognize a specific sequence of states, you can use a finite automaton, and you will have to write them in that order, like:
flying -> flying -> flying -> flying -> flying -> cycling -> cycling -> running -> driving -> driving -> driving
where I'm assuming that the transitions are triggered by the durations you refer to.
However, I suspect that you need something more elaborate, which cannot be worked out here. If this is for programming purposes, I suggest that you use state machine diagrams from UML. They are powerful enough for your problem.
I would recommend:
Draw the picture with states, transitions, and conditions for transitions.
Check the picture for consistency, dead loops, etc.
Implement the FSM yourself using maps and vectors (a sketch follows). Do not use FSM libraries until you need something heavyweight from an FSM.
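For the states in the question, the maps-and-vectors version could look like the sketch below. It is in Java rather than Clojure, and the way the stay-probability is derived from the durations (including the constant 30.0) is just one assumption that produces the heavy self-transition bias asked for; the same structure translates directly to Clojure maps.

```java
import java.util.*;

// Sketch of the "maps and vectors" idea: states, durations, and a random
// walk whose self-transition bias is derived from the durations. The
// states and durations come from the question; the probability formula
// is an illustrative assumption.
class DurationFsm {
    public static void main(String[] args) {
        List<String> states = List.of("flying", "cycling", "running", "driving");
        Map<String, Integer> duration = Map.of(
                "flying", 120, "cycling", 30, "running", 30, "driving", 60);

        Random rng = new Random();
        String current = "flying";
        List<String> trace = new ArrayList<>();
        for (int step = 0; step < 12; step++) {
            trace.add(current);
            // Probability of staying is proportional to the state's duration:
            // flying (120 min) self-transitions far more often than running.
            double stay = duration.get(current) / (duration.get(current) + 30.0);
            if (rng.nextDouble() >= stay) {
                // Otherwise jump uniformly to one of the other states.
                String next;
                do {
                    next = states.get(rng.nextInt(states.size()));
                } while (next.equals(current));
                current = next;
            }
        }
        System.out.println(trace); // e.g. [flying, flying, flying, cycling, ...]
    }
}
```

Tuning how the stay probability is derived from the durations is where you control how "sticky" each state is.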
Here is an example of such an approach for an ant simulation (Ant FSM):
When an ant is born, it goes looking for food.
If the ant sees a threat, it runs away from the threat.
If the threat disappears, the ant continues looking for food.
If food is found, the ant heads for home.
If the ant sees a threat, it runs away from the threat with the food.
If the threat disappears, the ant continues to return home.
When the ant is at home, it puts the food inside the anthill, then goes looking for food again.
I am working on a microflow engine (backend), which executes a process flow at runtime.
Consider the following diagram, where each process is a Java class. Variables flow out of one process and into another. Since the flow is dynamic in nature, very complicated flows are possible, with many gateways (GW) and processes.
Is DFS/BFS a good choice for implementing the runtime engine? Any ideas, guys?
As far as the given example is concerned, it is solved via Depth First Search (DFS), using the output node as the "root" of the tree.
This is because:
For the output to obtain a value, it needs the output of Process4
For Process4 to produce an output, it needs the outputs of Process2 and Process3
For Process2 / Process3 to produce an output, they need the output of GW
For GW to produce an output, it needs the output of Process1
So, the general idea would be to do a DFS from each output, all the way back to the inputs.
This will work almost as described for anything that looks like a Directed Acyclic Graph (DAG, or in fact a Tree), from the point of view of the output.
If a workflow ends up having "cycle edges" or "feedback loops", that is, if it now looks like a Graph, then additional consideration will need to be given to avoid infinite traversals and re-evaluation of a Process output.
Finally, if a workflow needs to be aware of the concept of "Time" (in general), then additional consideration will need to be given to ensure that, although the graph is evaluated progressively, node by node, in the end it has produced the right output for time instance (n). That is, you want to avoid some Processes producing output AHEAD of the current time instance just because they were called more frequently.
A trivial example of this is already present in the question. Due to DFS, GW will be evaluated for Process2 (or Process3) but it doesn't have to be re-evaluated (for the same time instance) for Process3 (or Process2). When dealing with DAGs, you can simply add an "Evaluated" flag on each Process which is cleared at the beginning of the traversal. Then, DFS would decide to descend down the branch of a node if it finds that it is not yet evaluated. Otherwise, it simply obtains the output of some Process that was evaluated during a previous traversal. (This is why I mention "almost as described" earlier). But, this trivial trick will not work with multiple feedback loops. In that case, you really need to make the nodes "aware" about the passage of time.
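A minimal sketch of that "Evaluated flag" trick (FlowNode and compute() are hypothetical stand-ins, not your engine's classes):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of DFS evaluation with the "Evaluated" flag described above.
class FlowNode {
    final String name;
    final List<FlowNode> inputs = new ArrayList<>();
    boolean evaluated;   // cleared at the beginning of each traversal
    Object output;

    FlowNode(String name) { this.name = name; }

    // Placeholder for the real work done by the process (a Java class).
    Object compute(List<Object> inputValues) { return name + inputValues; }

    Object evaluate() {
        if (evaluated) return output;   // GW is only evaluated once per pass
        List<Object> in = new ArrayList<>();
        for (FlowNode dep : inputs)
            in.add(dep.evaluate());     // DFS: descend into unevaluated deps
        output = compute(in);
        evaluated = true;
        return output;
    }
}
```

Calling evaluate() on the output node walks back to the inputs exactly as listed above; clearing the flags before each pass gives one evaluation per node per pass. As noted, this is only safe for DAGs: with feedback loops you need cycle detection or the time-aware scheme.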
For more information and for a really thorough exposition of related issues, I would strongly recommend that you go through Bruno Preiss' Y logic simulator. Although it is in C++ and is a logic simulator, it goes through exactly the same considerations that are faced by any similar system of interconnected "abstract nodes" that are supposed to be carrying out some form of "processing".
Hope this helps.
I'm writing a biological evolution simulator. Currently, all of my code is written in Python. For the most part, this is great and everything works sufficiently well. However, there are two steps in the process which take a long time and which I'd like to rewrite in Scala.
The first problem area is sequence evolution. Imagine you're given a phylogenetic tree which relates a large set of proteins. The length of each branch represents the evolutionary distance between parent and child. The root of the tree is seeded with a single sequence, and then an evolutionary model (e.g. http://en.wikipedia.org/wiki/Models_of_DNA_evolution) is used to evolve the sequence along the tree structure, taking the branch lengths into account. PyCogent takes a long time to perform this step, and I believe that a reasonable Java/Scala implementation would be significantly faster. Do you know of any libraries that implement this type of functionality? I want to write the application in Scala, so, thanks to interoperability, any Java library will suffice.
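For concreteness, a single branch-evolution step under the simplest such model (Jukes-Cantor) looks roughly like the sketch below; this is my own illustration, not PyCogent's or any library's API.

```java
import java.util.Random;

// Minimal sketch: evolve a DNA sequence along one branch under the
// Jukes-Cantor model (chosen here only as the simplest illustration;
// real pipelines use richer models). branchLength is the expected
// number of substitutions per site.
class JukesCantor {
    static final char[] BASES = {'A', 'C', 'G', 'T'};

    static String evolve(String parent, double branchLength, Random rng) {
        // Under Jukes-Cantor, P(site differs after branch of length d)
        // is 3/4 * (1 - e^(-4d/3)).
        double pChange = 0.75 * (1 - Math.exp(-4.0 * branchLength / 3.0));
        StringBuilder child = new StringBuilder(parent.length());
        for (char base : parent.toCharArray()) {
            if (rng.nextDouble() < pChange) {
                char next;
                do { next = BASES[rng.nextInt(4)]; } while (next == base);
                child.append(next);   // substitute with a different base
            } else {
                child.append(base);
            }
        }
        return child.toString();
    }
}
```

Recursing over the tree then just applies this step along every branch, starting from the root sequence.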
The second problem area is the comparison of the generated sequences. The problem is: given a set of sequences for the proteins in a number of different extant species, attempt to use the sequences to reconstruct the phylogenetic tree that relates the species. This problem is inherently computationally demanding, because one must basically do a pairwise comparison between all sequences in the extant species. Here again, however, I feel a Java/Scala implementation would be significantly faster than a Python one, if for no other reason than the unfortunately slow speed of looping in Python. This part I could write from scratch more easily than the sequence evolution part, but I'd be willing to use a library for it as well if a good one exists.
Thanks,
Rob
For the second problem, why not make use of an existing program for comparing sequences and inferring phylogenetic trees, like RAxML or MrBayes, and call that? Maximum likelihood and Bayesian inference are very sophisticated models for these problems, and using them seems a far better idea than implementing it yourself - something like a maximum parsimony or a neighbour-joining tree, which probably could be written from scratch for such a project, is not sufficient for evolutionary analysis. Unless you just want a very quick and dirty topology (and trees inferred via MP or NJ are really often quite wrong), in which case you can probably use something like this
I need help modelling a use case diagram from a topic; it will be a Java GUI.
Design a Calculator that
1. Allow the user to key in a legitimate arithmetic statement that involves numbers, the operators + and -, and the brackets '(' and ')';
2. When the user presses the "Calculate" button, display the result;
3. Some legitimate statements would be ((3+2)-4+2) (equals 3) and (-2+3)-(3-1) (equals -1);
4. You should NOT use a pre-existing function that just takes the statement as a parameter and returns the result; you should write the logic for parsing every character in your own code (see the sketch after this list);
5. Store the last statement and answer so they are displayed when the user presses the "Last calculation" button.
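For requirement 4, my plan is a small character-by-character recursive parser, roughly like the sketch below (the class and method names are my own, not part of the assignment):

```java
// Minimal sketch of hand-rolled parsing for requirement 4: integers,
// '+', '-', and parentheses only, including a leading minus as in (-2+3).
class ExprParser {
    private final String s;
    private int pos = 0;

    ExprParser(String input) { this.s = input.replaceAll("\\s+", ""); }

    int parse() {
        int v = expr();
        if (pos != s.length())
            throw new IllegalArgumentException("unexpected character at " + pos);
        return v;
    }

    // expr := term (('+' | '-') term)*   with an optional leading '-'
    private int expr() {
        int v = (peek() == '-') ? 0 : term();   // handles "(-2+3)"
        while (peek() == '+' || peek() == '-') {
            char op = s.charAt(pos++);
            int t = term();
            v = (op == '+') ? v + t : v - t;
        }
        return v;
    }

    // term := number | '(' expr ')'
    private int term() {
        if (peek() == '(') {
            pos++;                               // consume '('
            int v = expr();
            pos++;                               // consume ')'
            return v;
        }
        int start = pos;
        while (peek() >= '0' && peek() <= '9') pos++;
        return Integer.parseInt(s.substring(start, pos));
    }

    private char peek() { return pos < s.length() ? s.charAt(pos) : '\0'; }
}
```

For example, new ExprParser("((3+2)-4+2)").parse() gives 3 and new ExprParser("(-2+3)-(3-1)").parse() gives -1, matching requirement 3.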
I have designed two use case diagrams using UML in NetBeans 6.5.1. For one of them, I am not sure whether it contains too many use cases; the other is, I think, too vague for the topic. I hope to get some feedback on whether the use case diagrams are appropriate. Thanks. I have included what it would look like in the GUI.
The first thing you must know about use case diagrams is that they are supposed to describe the functionality of a system and for which actor. They should be at such a high level that anyone without programming knowledge can understand them. As a programmer, use cases might look very vague to you, but that's fine. They are not supposed to say anything about the inside of the system, just what it can do.
Some more specific comments:
As I mentioned, use cases should describe high-level functions. Press Calculate is not a function; Calculate is. Press Last Calculation should be Store Last Calculation, etc.
It's not clear what Press Backspace does. Backspace is just a key, not a use case.
The ParserSys package tries to describe the internals of a system. This does not belong in a use case diagram; other diagrams should be used for this.
The use case Store Result (first pic) should not be in this diagram. But if it's something the User can do, it should be associated with the User.
Edit:
..I believe the main problem is I am having trouble identifying use cases..
A good way of identifying use cases is as simple as asking yourself the question: "[Actor] should be able to [what]" (or something similar). [What] is then your use case. If it doesn't fit in this sentence, it's probably not a use case.
In the second use case diagram, you have the user's use cases based on the sequence of actions performed to implement the use cases in the first. These would be better represented as either an activity diagram or a state machine: the user cares about getting the results of a calculation, and it is incidental that to get those results, expressions need to be keyed in and buttons need to be pressed. When creating use cases, concentrate on the goals the originator of the use case has, rather than on how the system might help them achieve those goals.
On another point, the spec you give says nothing about simulating a keyboard using a Java GUI, or a backspace key as in your mock-up. Check with the stakeholders whether 'allow the user to key in' just means giving them somewhere to type, or providing an on-screen keypad.
As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble, I've got a genetic algorithm working that can evolve a set of rules (encoded as boolean values) to find a good path through a maze.
That being said, the GA alone is okay, but I'd like to beef it up with a neural network, even though I have no real working knowledge of neural networks (no formal theoretical CS education). After doing a bit of reading on the subject, I found that a neural network could be used to train a genome in order to improve results. Let's say I have a genome (a group of genes), such as
1 0 0 1 0 1 0 1 0 1 1 1 0 0...
How could I use a neural network (I'm assuming an MLP?) to train and improve my genome?
In addition, since I know nothing about neural networks, I've been looking into implementing some form of reinforcement learning using my maze matrix (a 2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me:
(from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm)
1. Set the parameter γ (the discount factor) and the environment reward matrix R
2. Initialize the matrix Q to the zero matrix
3. For each episode:
   * Select a random initial state
   * Do while the goal state has not been reached:
     o Select one among all possible actions for the current state
     o Using this possible action, consider going to the next state
     o Get the maximum Q value of this next state, based on all possible actions
     o Compute Q(state, action) = R(state, action) + γ * max[Q(next state, all actions)]
     o Set the next state as the current state
   End Do
End For
The big problem for me is implementing the reward matrix R, understanding what the Q matrix exactly is, and how to get the Q values. I use a multi-dimensional array for my maze and an enum of states for every move. How would these be used in a Q-learning algorithm?
If someone could explain what I would need to do to implement this, preferably in Java (although C# would be nice too), possibly with some source code examples, it would be appreciated.
As noted in some comments, your question involves a large body of background knowledge and topics that can hardly be covered eloquently on stackoverflow. However, what we can try here is to suggest approaches to get around your problem.
First of all: what does your GA do? I see a set of binary values; what are they? I see them as either:
bad: a sequence of 'turn right' and 'turn left' instructions. Why is this bad? Because you're basically doing a random, brute-force attempt at solving your problem. You're not evolving a genotype: you're refining random guesses.
better: every gene (location in the genome) represents a feature that will be expressed in the phenotype. There should not be a 1-to-1 mapping between genome and phenotype!
Let me give you an example: our brain has on the order of 10^11 neurons, but our genome has only around 2*10^4 protein-coding genes (yes, the exact values are debatable, bear with me for a second). What does this tell us? That our genotype does not encode every neuron. Our genome encodes the proteins that then go and make the components of our body.
Hence, evolution works on the genotype indirectly, by selecting features of the phenotype. If I had six fingers on each hand, and if that made me a better programmer, making me have more kids because I'm more successful in life, well, my genotype would then be selected by evolution because it gives me a fitter body (yes, there is a pun there, given the average geekiness-to-reproducibility ratio of most people around here).
Now, think about your GA: what are you trying to accomplish? Are you sure that evolving rules would help? In other words, how would you perform in a maze? What is the most successful thing that can help you: having a different body, or having a memory of the right path out? Perhaps you might want to reconsider your genotype and have it encode memorization abilities: maybe encode in the genotype how much data can be stored and how fast your agents can access it, then measure fitness in terms of how fast they get out of the maze.
Another (weaker) approach could be to encode the rules that your agent uses to decide where to go. The take-home message is, encode features that, once expressed, can be selected by fitness.
Now, to the neural network issue. One thing to remember is that NNs are filters. They receive an input, perform operations on it, and return an output. What is this output? Maybe you just need to discriminate a true/false condition; for example, once you feed a maze map to a NN, it could tell you whether you can get out of the maze or not. How would you do such a thing? You will need to encode the data properly.
This is the key point about NNs: your input data must be encoded properly. Usually people normalize it, maybe scale it; perhaps you can apply a sigmoid function to it to avoid values that are too large or too small. Those are details that deal with error measures and performance. What you need to understand now is what a NN is, and what you cannot use it for.
To your problem now. You mentioned you want to use NNs as well: what about,
using a neural network to guide the agent, and
using a genetic algorithm to evolve the neural network parameters?
Rephrased like so (a code sketch follows the list):
let's suppose you have a robot: your NN is controlling the left and right wheel, and as input it receives the distance of the next wall and how much it has traveled so far (it's just an example)
you start by generating a random genotype
make the genotype into a phenotype: the first gene is the network sensitivity; the second gene encodes the learning rate; the third gene... and so on and so forth
now that you have a neural network, run the simulation
see how it performs
generate a second random genotype, and develop it into a second NN
see how this second individual performs
get the best individual, then either mutate its genotype or recombine it with the loser's
repeat
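A minimal sketch of that loop, in the spirit of the Microbial GA linked just below (the genome layout, its decoding into a NN, and the fitness function are all placeholder assumptions):

```java
import java.util.Random;

// Sketch of the evolve-the-NN-parameters loop: two genotypes compete,
// and the loser is overwritten with mutated pieces of the winner
// (Microbial GA style). fitness() is a stand-in for "decode the genome
// into a NN, run the maze simulation, measure performance".
class MicrobialGa {
    static final int GENES = 20;          // e.g. NN weights + learning rate
    static final Random RNG = new Random();

    static double fitness(double[] genotype) {
        double sum = 0;
        for (double g : genotype) sum += g; // placeholder, NOT a real simulator
        return sum;
    }

    public static void main(String[] args) {
        double[][] pop = new double[10][GENES];
        for (double[] g : pop)
            for (int i = 0; i < GENES; i++) g[i] = RNG.nextGaussian();

        for (int round = 0; round < 1000; round++) {
            // pick two distinct individuals at random
            int a = RNG.nextInt(pop.length), b;
            do { b = RNG.nextInt(pop.length); } while (b == a);
            int winner = fitness(pop[a]) >= fitness(pop[b]) ? a : b;
            int loser = (winner == a) ? b : a;
            for (int i = 0; i < GENES; i++) {
                if (RNG.nextDouble() < 0.5)     // recombination: copy a gene
                    pop[loser][i] = pop[winner][i];
                if (RNG.nextDouble() < 0.05)    // mutation: small nudge
                    pop[loser][i] += 0.1 * RNG.nextGaussian();
            }
        }
    }
}
```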
There is an excellent read on the matter here: Inman Harvey's Microbial GA.
I hope I gave you some insight into these issues. NNs and GAs are no silver bullet that will solve all problems. For some they can do very much; for others they are just the wrong tool. It's (still!) up to us to pick the best one, and to do so we must understand them well.
Have fun with it! It's great to know such things; it makes everyday life a bit more entertaining :)
There is probably no 'maze gene' to find.
Genetic algorithms try to set up a vector of properties and a 'filtering system' that decides, by some kind of 'survival of the fittest' process, which set of properties does the best job.
The easiest way to find a way out of a maze is to always move left (or right) along a wall.
The Q-algorithm seems to have a problem with local maxima; as I remember, this was worked around by 'kicking' (adding random values to the matrix) when the results didn't improve.
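To make the Q-learning part of the question concrete, here is a minimal tabular sketch for a grid maze; the maze encoding, the reward of 100 at the goal, and γ = 0.8 are assumptions in the spirit of the tutorial linked in the question:

```java
import java.util.Random;

// Minimal tabular Q-learning sketch for a grid maze. The maze encoding
// (0 = free, 1 = wall), the reward scheme, and the hyper-parameters are
// illustrative assumptions.
class QLearningMaze {
    static final int[][] MAZE = {
        {0, 0, 0, 1},
        {1, 1, 0, 1},
        {0, 0, 0, 0},
    };
    static final int GOAL_R = 2, GOAL_C = 3;
    static final int[][] MOVES = {{-1,0},{1,0},{0,-1},{0,1}}; // up,down,left,right
    static final double GAMMA = 0.8;   // discount factor, as in the tutorial
    static final Random RNG = new Random();

    public static void main(String[] args) {
        int rows = MAZE.length, cols = MAZE[0].length;
        // Q is indexed by (row, col, action); the reward matrix R is
        // implicit in the `reward` expression below.
        double[][][] q = new double[rows][cols][MOVES.length];

        for (int episode = 0; episode < 5000; episode++) {
            int r = RNG.nextInt(rows), c = RNG.nextInt(cols);
            if (MAZE[r][c] == 1) continue;            // don't start inside a wall
            while (r != GOAL_R || c != GOAL_C) {
                int a = RNG.nextInt(MOVES.length);    // random exploration
                int nr = r + MOVES[a][0], nc = c + MOVES[a][1];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols
                        || MAZE[nr][nc] == 1)
                    continue;                         // illegal move, pick again
                double reward = (nr == GOAL_R && nc == GOAL_C) ? 100 : 0;
                double maxNext = 0;
                for (double v : q[nr][nc]) maxNext = Math.max(maxNext, v);
                q[r][c][a] = reward + GAMMA * maxNext; // the "Compute" step above
                r = nr; c = nc;
            }
        }
        // After training, repeatedly taking argmax_a Q(state, a) walks to the goal.
    }
}
```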
EDIT: As mentioned above, a backtracking algorithm suits this task better than a GA or NN.
How to combine both algorithms is described here: NeuroGen describes how a GA is used to train a NN.
Try using the free, open-source NeuronDotNet C# library for your neural networks instead of implementing them yourself.
As for a reinforcement learning library, I am currently looking for one myself, especially for the .NET Framework.