Cellular Automata Java (Beginner) - java

I'm creating a game along the lines of Minicraft. I posted a question here about how I should make terrain similar to the one in the game, and a user by the name of Quirliom posted an answer referring to what is called cellular automata.
I had absolutely no clue what it was, let alone how to implement it. I did look it up to see what it is, but I have yet to find out how to do it. Could somebody please explain how it works and how to implement it, perhaps with a link or two or even some source code/examples?

For the theory, check out http://en.wikipedia.org/wiki/Book:Cellular_Automata. Once you have a sense of what cellular automata are in general, the next step is finding sources on their application to landscape generation (a pretty non-standard but not unheard-of use); I suspect the initial theory read-through will give you a pretty good sense of implementation techniques.

Formally, cellular automata are a subclass of dynamical systems in which space and time are discrete.
Depending on the model considered, some properties may or may not apply:
The components of the model are connected by a regular graph that is invariant under translation, rotation, etc.
Given the state space S, the updating rule is a function F: S^n -> S, where S^n is given by the neighborhood of a cell.
The updating rule is the same for all components.
The updating rule applies to all cells simultaneously, building the states at t+1 from the states at t.
Generally, cellular automata are good models for simulating dynamical environments (sand, Brownian motion, wildfires) because their extreme simplicity allows large model sizes and fast computation.
If you want an entry point into the world of cellular automata, I recommend you look up Conway's Game of Life, find a tutorial, and implement it.
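To make that concrete, here is a minimal sketch of the Game of Life update step in plain Java (the grid size and the starting pattern are made up for the example; the birth/survival rule is the standard B3/S23 one):

// Minimal Game of Life step: every cell looks at its 8 neighbours, and the same
// rule is applied to all cells simultaneously (the new grid is built from the old one).
public class GameOfLife {
    static boolean[][] step(boolean[][] grid) {
        int h = grid.length, w = grid[0].length;
        boolean[][] next = new boolean[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int neighbours = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        if (dx == 0 && dy == 0) continue;
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w && grid[ny][nx]) {
                            neighbours++;
                        }
                    }
                }
                // B3/S23: a dead cell with exactly 3 live neighbours is born,
                // a live cell with 2 or 3 live neighbours survives.
                next[y][x] = grid[y][x] ? (neighbours == 2 || neighbours == 3)
                                        : (neighbours == 3);
            }
        }
        return next;
    }

    public static void main(String[] args) {
        boolean[][] grid = new boolean[10][10];
        grid[4][3] = grid[4][4] = grid[4][5] = true; // a "blinker" pattern
        for (int i = 0; i < 3; i++) {
            grid = step(grid);
        }
    }
}

The same structure is commonly reused for cave/terrain generation: start from random noise and swap the birth/survival rule for something like "a cell becomes wall if 5 or more of its neighbours are walls", iterating a few times to smooth the map.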

Related

Comparing between two skeleton tracking on processing 3

I am currently doing my dissertation, which involves two people: a professional athlete and an amateur. First, using image-processing skeleton tracking, I would like to record the professional athlete while performing the squat exercise; then, when the amateur performs the exercise, I want to be able to compare the professional's skeleton with that of the amateur to see if the movement is properly formed.
I'm open to any suggestions and opinions, and would gladly appreciate some help.
Here lies your question:
properly formed.
What does "properly formed" actually mean? How can this be quantified?
Bear in mind I'm not an athlete/experienced in this field.
If I were given the task I would counter-intuitively go in the opposite direction:
moving away from the Processing 3/Kinect/computer side. I would instead:
find a professional athlete
find a trainer skilled in functional mobility training.
find an amateur (probably easiest)
Item 2 will be trickier. For example, FMS seems to put a lot of emphasis on correct exercising and mobility (to enhance performance and reduce the risk of injury). I'm not sure if that's the only approach or the best one. You might want to check opinions on Physical Fitness, consult with people studying/teaching exercise science, etc. Do check credentials, as it feels like a field where everyone has an opinion/preference.
The idea is to understand how a professional, educated trainer assesses correct movement. Take note of how that works in the real world and try to systemise it.
What are the cues for a correct execution?
Is it the key poses?
The motion in between?
How the skeletal and muscular systems work together / the weights/forces applied / etc.?
Having a better understanding of how this works in the real world should lead you to things you can start quantifying/comparing numerically on a computer.
Try to make a checklist/score system manually using a pen and paper based on the information you gather. If this works you already have a system you can start programming.
The next step is acquiring the data.
This is probably where the Kinect comes in, but bear in mind:
the second version of the kinect is more precise than the first
there is a Kinect2 SDK wrapper for Processing 3: use that if you can (Windows only). There is a way to get libfreenect2 working with OpenNI on OSX/Linux, and therefore with SimpleOpenNI in Processing, but it's not straightforward and you won't have the same precision from the skeleton tracking algorithm
use data that is as precise as possible:
you can get the accuracy of a tracked skeleton joint
use an environment that doesn't contain a complex background (this makes it easy to segment users and detect/track skeletons with little chance of mistaking them for something else). Prefer artificial non-incandescent light (less of a problem with Kinect v2, but you still want as little IR interference as possible).
comparing orientation matrices or joints on single poses might not be enough to get the full picture: how do you capture/quantify motion, taking into account the things that the Kinect can't easily see: muscles flexing / forces applied / moving centre of gravity / etc.?
try to use a grid system that will make it simple to pair the digital values with real world measurements. Check out how people used to study motion in the past, for example Étienne-Jules Marey or Eadweard Muybridge
Motion capture by Étienne-Jules Marey
Motion study by Eadweard Muybridge (notice the grid)
It's a pretty full-on project to get right, involving bits of anatomy/physics/kinematics/etc.
Start with the research first:
how did people study this in the past ?
what are the current developments ?
how does it work in the real world (without computers) ?
Take your constraints into account:
what resources (people/gear/etc.) can you use ?
how much time do you have available ?
Given the above, decide what topic/section of the project can realistically be tackled to get useful results.
Overall probably something along these lines:
background research
real world studies
a comparison system whose features can be measured both with the Kinect and by a person
record data (real-world data + mobility comparison evaluation, and Kinect data + mobility comparison)
compare data
write an evaluation of the findings (how effective is the system? what are its limitations? what could be improved (future work)? etc.)
In short, be aware of the Kinect's limitations: skeleton tracking is probability based, so it's not 100% accurate. Use data that's as clean/correct as possible to begin with (it's easier to acquire good data if you can control the capture environment). Of what a real trainer would track, what could you track with a Kinect? Do a comparison of the intersecting measurements.
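As one hedged example of a measurement you could start with: the angle at a joint (say the knee, computed from the hip, knee and ankle positions your tracker gives you), compared between the two recordings. The Vec3/jointAngle names, the sample coordinates and the tolerance below are all invented for illustration; in practice the positions would come from the Kinect frames you recorded.

// Hypothetical sketch: compare a knee angle between the pro and the amateur.
class Vec3 {
    double x, y, z;
    Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
}

class PoseMath {
    // Angle (in degrees) at joint b, formed by the segments b->a and b->c.
    static double jointAngle(Vec3 a, Vec3 b, Vec3 c) {
        double[] u = { a.x - b.x, a.y - b.y, a.z - b.z };
        double[] v = { c.x - b.x, c.y - b.y, c.z - b.z };
        double dot = u[0] * v[0] + u[1] * v[1] + u[2] * v[2];
        double lu = Math.sqrt(u[0] * u[0] + u[1] * u[1] + u[2] * u[2]);
        double lv = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return Math.toDegrees(Math.acos(dot / (lu * lv)));
    }

    public static void main(String[] args) {
        // Made-up hip/knee/ankle positions at the bottom of a squat.
        double pro     = jointAngle(new Vec3(0, 1.0, 0), new Vec3(0, 0.5, 0.1), new Vec3(0, 0, 0));
        double amateur = jointAngle(new Vec3(0, 1.0, 0), new Vec3(0, 0.5, 0.3), new Vec3(0, 0, 0));
        double tolerance = 10.0; // degrees; purely illustrative
        double diff = Math.abs(pro - amateur);
        System.out.println("difference: " + diff
                + (diff <= tolerance ? " (within tolerance)" : " (outside tolerance)"));
    }
}

The point is not this particular angle: it's that whatever cues the trainer identifies need to be turned into numbers like this before you can compare two performances automatically.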

OpenGL Implementing multiple materials properly

I am working on a game scene with multiple objects that need multiple materials. I extensively searched online, but I could not find any satisfactory solution.
My scene will have a river flowing by, and the material there will require a separate shader anyway (it will combine many specular and normal maps into what would look like a river); then there is terrain that mixes two textures (grass and sand), requiring another shader. There is also a player with hands and armour and all.
EDIT: Essentially I wish to find out the most efficient way of making the most flexible multiple material/shader implementation.
Briefly, there are a lot of complex objects around requiring varied shaders. They are not many in number, but there is a lot of complexity.
So calling glUseProgram() many times doesn't seem like the brightest idea. Also, much of the shader code could be made universal, like the point-light calculation. Making a generic shader and using ifs and state uniforms could possibly work, though it would still require different shaders for the river and similarly divergent materials.
I basically don't understand the organization and implementation of such a generic system. I have used engines like Unreal, and tools like Blender, which use node-based materials and allow the customization of every single material without much lag. How would such a system translate into raw GPU code?
If you really face timing problems because of too many glUseProgram() calls, you might want to have a look at shader subroutines and use fewer but bigger programs. Before that, sort your data to change states only when needed (sort per shader, then per material, for example); a sketch of that sorting is below. I guess this is always a good practice anyway.
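A minimal sketch of that "sort per shader, then per material" idea in plain Java (RenderItem, its fields and the commented bind calls are placeholders for illustration, not a real engine API):

// Hypothetical render queue: group draw calls by shader, then by material,
// so glUseProgram()/material bindings only change when they have to.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class RenderItem {
    int shaderId;    // placeholder for the GL program object
    int materialId;  // placeholder for a texture/uniform set
    Runnable draw;   // the actual draw call
    RenderItem(int shaderId, int materialId, Runnable draw) {
        this.shaderId = shaderId; this.materialId = materialId; this.draw = draw;
    }
}

class RenderQueue {
    List<RenderItem> items = new ArrayList<>();

    void flush() {
        items.sort(Comparator.comparingInt((RenderItem r) -> r.shaderId)
                             .thenComparingInt(r -> r.materialId));
        int boundShader = -1, boundMaterial = -1;
        for (RenderItem item : items) {
            if (item.shaderId != boundShader) {
                // glUseProgram(item.shaderId) would go here
                boundShader = item.shaderId;
                boundMaterial = -1;
            }
            if (item.materialId != boundMaterial) {
                // bind textures / upload material uniforms here
                boundMaterial = item.materialId;
            }
            item.draw.run();
        }
        items.clear();
    }
}

With only a handful of distinct shaders (river, terrain blend, player), the per-frame cost of a few program switches after sorting is usually negligible.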
Honestly, I do not think your timing problems come from the use of too many programs. You might, for example, want to use frustum culling (to avoid sending geometry to the GPU that will be culled) and early z-culling (to avoid complex lighting computations for fragments that will be overdrawn). You can also use level of detail for complex geometries that are far away and thus do not need as much detail.

Detecting if multiple bent lines intersect

I'm in the process of making a swimlane diagram but can't come up with a good algorithm to automatically lay out the lines that connect the nodes in the diagram. What I essentially want is this.
However, I don't have any protection against lines overlapping or intersecting right now and it sometimes gets very messy.
Does anyone know a way to detect if a line will intersect ANY of the lines already drawn?
One idea that I've come up with is to store the paths in an array or table and check the entire table every time a new line is slated to be drawn, but that does not seem efficient.
I'm doing this in JavaScript and Java through the use of GWT, so maybe there is an easy way to solve this using one of the tools provided by these languages?
If your real issue is to minimize the line intersections, there are several algorithms that try to accomplish this in diagrams. Check this link for example; there are also more algorithms used in auto-routing for electronic design automation that apply to this kind of diagram, like the Lee algorithm or the A* algorithm.
I don't know if the tools you are using have enough flexibility to implement this kind of algorithm; usually you need to implement your own heuristic according to the specific type of diagram, but I hope these links are enough to give you good ideas.
Minimizing the number of line crossings in a graph drawing is a difficult NP-hard problem; check this link (about the crossing number) for more information.
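As for the detection part of your question (checking whether a new line crosses any of the lines already drawn), the standard pairwise orientation test is simple to write. A minimal sketch in Java, assuming each connector is stored as straight segments (a bent line is just checked segment by segment); the Segment/Intersections names are made up for illustration:

// Two segments AB and CD properly intersect when C and D lie on opposite sides
// of AB, and A and B lie on opposite sides of CD.
class Segment {
    double ax, ay, bx, by;
    Segment(double ax, double ay, double bx, double by) {
        this.ax = ax; this.ay = ay; this.bx = bx; this.by = by;
    }
}

class Intersections {
    // Cross product sign: >0 counter-clockwise, <0 clockwise, 0 collinear.
    static double cross(double ox, double oy, double px, double py, double qx, double qy) {
        return (px - ox) * (qy - oy) - (py - oy) * (qx - ox);
    }

    static boolean intersects(Segment s, Segment t) {
        double d1 = cross(s.ax, s.ay, s.bx, s.by, t.ax, t.ay);
        double d2 = cross(s.ax, s.ay, s.bx, s.by, t.bx, t.by);
        double d3 = cross(t.ax, t.ay, t.bx, t.by, s.ax, s.ay);
        double d4 = cross(t.ax, t.ay, t.bx, t.by, s.bx, s.by);
        return d1 * d2 < 0 && d3 * d4 < 0; // ignores collinear/touching edge cases
    }

    // The "check the whole table" idea: O(n) per new segment.
    static boolean crossesAny(Segment candidate, java.util.List<Segment> drawn) {
        for (Segment s : drawn) {
            if (intersects(candidate, s)) return true;
        }
        return false;
    }
}

For the sizes typical of a swimlane diagram, the linear scan over the table you mention is usually fast enough; a sweep-line or spatial index only becomes worth it with thousands of segments.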
Good luck.

Design for a Debate club assignment application

For my university's debate club, I was asked to create an application to assign debate sessions and I'm having some difficulties as to come up with a good design for it. I will do it in Java. Here's what's needed:
What you need to know about BP debates: There are four teams of 2 debaters each and a judge. The four groups are assigned a specific position: gov1, gov2, op1, op2. There is no significance to the order within a team.
The goal of the application is to get as input the debaters who are present (for example, if there are 20 people, we will hold 2 debates) and assign them to teams and roles with regards to the history of each debater so that:
Each debater should debate with (be on the same team) as many people as possible.
Each debater should uniformly debate in different positions.
The debate should be fair - debaters have different levels of experience and this should be as even as possible - i.e., there shouldn't be a team of two very experienced debaters and a team of junior debaters.
There should be an option for the user to restrict the assignment in various ways, such as:
Specifying that two people should debate together, in a specific position or not.
Specifying that a single debater should be in a specific position, regardless of the partner.
If anyone can try to give me some pointers for a design for this application, I'll be so thankful!
Also, I've never implemented a GUI before, so I'd appreciate some pointers on that as well, but it's not the major issue right now.
Also, there is the issue of keeping Debater information in file, which I also never implemented in Java, and would like some tips on that as well.
This seems like a textbook constraint problem. GUI notwithstanding, it'd be perfect for a technology like Prolog (ECLiPSe prolog has a couple of different Java integration libraries that ship with it).
But since you want this in Java, why not store the debaters' history in a SQL database and use SQL to structure the constraints? You can then wrap those SQL queries as Java methods.
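A minimal sketch of that wrapping idea using plain JDBC (the past_teams table and its columns are invented for illustration; any history schema would do):

// Hypothetical schema: past_teams(debate_id, debater_a, debater_b, position).
// One method answers one history question the assignment logic needs.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class DebateHistory {
    private final Connection conn;

    DebateHistory(Connection conn) {
        this.conn = conn;
    }

    // How many times have these two debaters already been on the same team?
    int timesPaired(int debaterA, int debaterB) throws SQLException {
        String sql = "SELECT COUNT(*) FROM past_teams "
                   + "WHERE (debater_a = ? AND debater_b = ?) "
                   + "   OR (debater_a = ? AND debater_b = ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, debaterA); ps.setInt(2, debaterB);
            ps.setInt(3, debaterB); ps.setInt(4, debaterA);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
}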
There are two parts (three if you count entering and/or saving the data), the underlying algorithm and the UI.
For the UI, I'm weird. I use this technique (there is a link to my SourceForge project). A Java version would have to be done, which would not be too hard. It's weird because very few people have ever used it, but it saves an order of magnitude of coding effort.
For the algorithm, the problem looks small enough that I would approach it with a simple tree search. I would have a scoring algorithm and just report the schedule with the best score.
That's a bird's-eye overview of how I would approach it.
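For a feel of what the scoring part of that tree search could look like, here is a hedged sketch; the Debater/Team shapes, the weights and the dummy timesPaired stub are all invented, and the only point is that each requirement from the question becomes one penalty term (lower score = better assignment):

// Hypothetical scoring: repeated partners, unbalanced positions and uneven
// team experience each add a penalty.
import java.util.List;

class Debater {
    int id;
    int experience;                    // some numeric skill level
    int[] positionCounts = new int[4]; // how often this debater held gov1/gov2/op1/op2
}

class Team {
    Debater first, second;
    int position; // 0=gov1, 1=gov2, 2=op1, 3=op2
}

class Scorer {
    int timesPaired(Debater a, Debater b) {
        return 0; // would come from the stored history (file or database)
    }

    double score(List<Team> debate) {
        double penalty = 0;
        for (Team t : debate) {
            penalty += 3.0 * timesPaired(t.first, t.second);        // prefer new partners
            penalty += 1.0 * t.first.positionCounts[t.position];     // rotate positions
            penalty += 1.0 * t.second.positionCounts[t.position];
            penalty += 2.0 * Math.abs(t.first.experience + t.second.experience
                                      - averageTeamExperience(debate)); // keep teams even
        }
        return penalty;
    }

    double averageTeamExperience(List<Team> debate) {
        double sum = 0;
        for (Team t : debate) sum += t.first.experience + t.second.experience;
        return sum / debate.size();
    }
}

The search then enumerates (or greedily builds) candidate assignments that satisfy the user's hard restrictions and reports the one with the lowest penalty.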

Boosting my GA with Neural Networks and/or Reinforcement Learning

As I have mentioned in previous questions, I am writing a maze-solving application to help me learn about more theoretical CS subjects. After some trouble, I've got a Genetic Algorithm working that can evolve a set of rules (represented by boolean values) in order to find a good solution through a maze.
That being said, the GA alone is okay, but I'd like to beef it up with a Neural Network, even though I have no real working knowledge of Neural Networks (no formal theoretical CS education). After doing a bit of reading on the subject I found that a Neural Network could be used to train a genome in order to improve results. Let's say I have a genome (group of genes), such as
1 0 0 1 0 1 0 1 0 1 1 1 0 0...
How could I use a Neural Network (I'm assuming MLP?) to train and improve my genome?
In addition to this, as I know nothing about Neural Networks, I've been looking into implementing some form of Reinforcement Learning using my maze matrix (a 2-dimensional array), although I'm a bit stuck on what the following algorithm wants from me:
(from http://people.revoledu.com/kardi/tutorial/ReinforcementLearning/Q-Learning-Algorithm.htm)
1. Set parameter gamma, and environment reward matrix R
2. Initialize matrix Q as zero matrix
3. For each episode:
* Select random initial state
* Do while goal state not reached
o Select one among all possible actions for the current state
o Using this possible action, consider going to the next state
o Get maximum Q value of this next state based on all possible actions
o Compute Q(state, action) = R(state, action) + gamma * max[Q(next state, all actions)]
o Set the next state as the current state
End Do
End For
The big problem for me is implementing the reward matrix R, understanding what the Q matrix exactly is, and getting the Q values. I use a multi-dimensional array for my maze and enum states for every move. How would this be used in a Q-Learning algorithm?
If someone could help out by explaining what I would need to do to implement this, preferably in Java (although C# would be nice too), possibly with some source code examples, it'd be appreciated.
As noted in some comments, your question indeed involves a large set of background knowledge and topics that hardly can be eloquently covered on stackoverflow. However, what we can try here is suggest approaches to get around your problem.
First of all: what does your GA do? I see a set of binary values; what are they? I see them as either:
bad: a sequence of 'turn right' and 'turn left' instructions. Why is this bad? Because you're basically doing a random, brute-force attempt at solving your problem. You're not evolving a genotype: you're refining random guesses.
better: every gene (location in the genome) represents a feature that will be expressed in the phenotype. There should not be a 1-to-1 mapping between genome and phenotype!
Let me give you an example: in our brain there are 10^13-ish neurons, but we have only around 2x10^4 protein-coding genes (yes, it's not an exact value, bear with me for a second). What does this tell us? That our genotype does not encode every neuron. Our genome encodes the proteins that will then go and make the components of our body.
Hence, evolution works on the genotype only indirectly, by selecting features of the phenotype. If I were to have 6 fingers on each hand, and if that made me a better programmer, making me have more kids because I'm more successful in life, well, my genotype would then be selected by evolution because it contains the capability to give me a fitter body (yes, there is a pun there, given the average geekiness-to-reproducibility ratio of most people around here).
Now, think about your GA: what is it that you are trying to accomplish? Are you sure that evolving rules would help? In other words -- how would you perform in a maze? What is the most successful thing that can help you: having a different body, or having a memory of the right path to get out? Perhaps you might want to reconsider your genotype and have it encode memorization abilities. Maybe encode in the genotype how much data can be stored and how fast your agents can access it -- then measure fitness in terms of how fast they get out of the maze.
Another (weaker) approach could be to encode the rules that your agent uses to decide where to go. The take-home message is, encode features that, once expressed, can be selected by fitness.
Now, to the neural network issue. One thing to remember is that NNs are filters. They receive an input, perform operations on it, and return an output. What is this output? Maybe you just need to discriminate a true/false condition; for example, once you feed a maze map to a NN, it could tell you whether you can get out of the maze or not. How would you do such a thing? You will need to encode the data properly.
This is the key point about NNs: your input data must be encoded properly. Usually people normalize it, maybe scale it, perhaps apply a sigmoid function to it to avoid values that are too large or too small; those are details that deal with error measures and performance. What you need to understand now is what a NN is, and what you cannot use it for.
To your problem now. You mentioned you want to use NNs as well: what about,
using a neural network to guide the agent, and
using a genetic algorithm to evolve the neural network parameters?
Rephrased like so:
let's suppose you have a robot: your NN controls the left and right wheels, and as input it receives the distance to the nearest wall and how far it has traveled so far (it's just an example)
you start by generating a random genotype
make the genotype into a phenotype: the first gene is the network sensitivity; the second gene encodes the learning rate; the third gene... and so on and so forth
now that you have a neural network, run the simulation
see how it performs
generate a second random genotype, evolve second NN
see how this second individual performs
get the best individual, then either mutate its genotype or recombine it with the loser
repeat
there is an excellent read on the matter here: Inman Harvey's Microbial GA.
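A compressed sketch of that loop in Java; the genome length, the mutation scheme and the fitness stub are placeholders (your real fitness would run the maze simulation with the decoded network), so this only illustrates "the GA evolves the NN parameters, the NN drives the agent":

// Hypothetical neuroevolution loop: the genome is the list of network weights,
// fitness is how well the network-driven agent does in the maze.
import java.util.Arrays;
import java.util.Random;

class Neuroevolution {
    static final int GENOME_LENGTH = 20;   // number of weights in a tiny network
    static final Random rng = new Random();

    static double[] randomGenome() {
        double[] g = new double[GENOME_LENGTH];
        for (int i = 0; i < g.length; i++) g[i] = rng.nextGaussian();
        return g;
    }

    // Placeholder: build a network from the weights, run the maze simulation,
    // and return e.g. progress toward the exit or 1 / (steps taken).
    static double fitness(double[] genome) {
        return -Arrays.stream(genome).map(w -> w * w).sum(); // dummy stand-in
    }

    static double[] mutate(double[] genome) {
        double[] child = genome.clone();
        int i = rng.nextInt(child.length);
        child[i] += rng.nextGaussian() * 0.1; // small weight perturbation
        return child;
    }

    public static void main(String[] args) {
        double[] best = randomGenome();
        for (int generation = 0; generation < 1000; generation++) {
            double[] challenger = mutate(best);        // or recombine best with a loser
            if (fitness(challenger) > fitness(best)) { // tournament of two, Microbial-GA style
                best = challenger;
            }
        }
        System.out.println("best fitness: " + fitness(best));
    }
}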
I hope I gave you some insight into such issues. NNs and GAs are no silver bullet that solves all problems. For some they can do very much; for others they are just the wrong tool. It's (still!) up to us to pick the best one, and to do so we must understand them well.
Have fun with it! It's great to know such things; it makes everyday life a bit more entertaining :)
There is probably no 'maze gene' to find;
genetic algorithms try to set up a vector of properties and a 'filtering system' that decides, by some kind of 'survival of the fittest' process, which set of properties does the best job.
The easiest way to find a way out of a maze is to always follow the left (or right) wall.
The Q-learning algorithm seems to have a problem with local maxima; as I remember, this was worked around by 'kicking' (adding random values to the matrix) if the results didn't improve.
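For the R/Q matrix confusion in the question: both are just state-by-action tables. A minimal hedged sketch for a tiny grid maze (the grid size, reward value and gamma are made up; the update line is the one from the tutorial linked in the question):

// Q-learning on a tiny grid maze: states are cells, actions are the four moves.
// R holds the immediate rewards (large for reaching the goal), Q is learned.
import java.util.Random;

class MazeQLearning {
    static final int SIZE = 4;               // 4x4 grid, state = y * SIZE + x
    static final int STATES = SIZE * SIZE;
    static final int[] DX = { 0, 0, -1, 1 }; // up, down, left, right
    static final int[] DY = { -1, 1, 0, 0 };
    static final double GAMMA = 0.8;
    static final int GOAL = STATES - 1;

    public static void main(String[] args) {
        double[][] R = new double[STATES][4];
        double[][] Q = new double[STATES][4];
        // Reward of 100 for any move that lands on the goal cell; 0 elsewhere.
        for (int s = 0; s < STATES; s++)
            for (int a = 0; a < 4; a++)
                if (next(s, a) == GOAL) R[s][a] = 100;

        Random rng = new Random();
        for (int episode = 0; episode < 1000; episode++) {
            int state = rng.nextInt(STATES);
            while (state != GOAL) {
                int action = rng.nextInt(4);               // explore randomly
                int nextState = next(state, action);
                double maxNext = 0;
                for (int a = 0; a < 4; a++) maxNext = Math.max(maxNext, Q[nextState][a]);
                Q[state][action] = R[state][action] + GAMMA * maxNext; // the update rule
                state = nextState;
            }
        }
        // After training, following the highest Q value in each state walks to the goal.
    }

    // State reached by taking the action; moving into the border leaves you in place.
    static int next(int state, int action) {
        int x = state % SIZE + DX[action], y = state / SIZE + DY[action];
        if (x < 0 || x >= SIZE || y < 0 || y >= SIZE) return state;
        return y * SIZE + x;
    }
}

Walls in your maze matrix would just make next() return the current state (or carry a negative reward), and your move enum maps directly onto the action index.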
EDIT: As mentioned above, a backtracking algorithm suits this task better than a GA or NN.
How to combine both algorithms is described here: NeuroGen describes how a GA is used to train a NN.
Try using the free, open-source NeuronDotNet C# library for your neural networks instead of implementing them yourself.
As for a Reinforcement Learning library, I am currently looking for one myself, especially for the .NET framework.
