Coin collector’s problem with two backtracking steps allowed - java

So I have the coin collector's problem, where coins are placed in the cells of a matrix with dimensions n x m, with a varying number of coins in each cell. A man starts in the upper-left cell of the matrix (meaning (0,0)) and has to collect as many coins as possible, but he can only move to the cell to his right (i,j+1) or down (i+1,j), and the task is to get to cell (n,m) with as many coins as possible.
I know that the solution to this problem can be found by using dynamic programming and setting up the recurrence (writing c(i,j) for the number of coins in cell (i,j)):
F(0,0) = c(0,0)
F(0,j) = c(0,j) + F(0,j-1) for 1<=j<=m
F(i,0) = c(i,0) + F(i-1,0) for 1<=i<=n
F(i,j) = c(i,j) + max(F(i-1,j), F(i,j-1)) for 1<=i<=n, 1<=j<=m
We iterate over all the cells, and the answer is simply the value at F(n,m).
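For reference, the recurrence above can be sketched in Python like this (the function and variable names are my own):

```python
def max_coins(coins):
    """Standard DP: F[i][j] = coins[i][j] + max(F[i-1][j], F[i][j-1])."""
    n, m = len(coins), len(coins[0])
    F = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            best_prev = 0
            if i > 0:
                best_prev = F[i - 1][j]
            if j > 0:
                best_prev = max(best_prev, F[i][j - 1])
            F[i][j] = coins[i][j] + best_prev
    # bottom-right cell holds the best total over all monotone paths
    return F[n - 1][m - 1]
```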
I forgot to mention this, but we know how many coins there are in each cell, so we don't need to discover this information by traveling.
However, the twist now is that the man is allowed to backtrack two steps: either separately, as one step each at two different points, or together, as one big two-step move at a single point. Meaning at some point along the road, he can go forward or down, collect the coins on that cell, and then go backwards again. How would I go about modeling this? What's the right approach to take? I'm not necessarily interested in the code required to make this work as much as in the proper theory or way to go about it; that said, I don't mind reading code in any language.
Thanks in advance

Related

TicTacToe AI - Computer Winning move java

Is there a way of getting the winning move for the AI in TicTacToe using a for loop? That is, checking if any two buttons next to each other have the value X or O, then making the third take the same value so the AI wins.
I'm using a for loop to go through the 3 by 3 array of buttons and checking if any two buttons next to each other have the same value.
I have tried this, but I'm not sure if it's correct because it isn't working: the computer doesn't make any winning move.
For easier understanding, you could make multiple loops for row/column/diagonal each:
Count X or O in one row/column/diagonal, if it equals 2, add the third one in the remaining field (if it is empty)
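A rough sketch of that row/column/diagonal counting idea (representing the board as a 3x3 grid of 'X', 'O' and ' '; the representation is my assumption):

```python
def winning_move(board, player):
    """Return the (row, col) that completes a line for `player`, or None.
    `board` is a 3x3 list of lists holding 'X', 'O' or ' '."""
    lines = []
    for i in range(3):
        lines.append([(i, 0), (i, 1), (i, 2)])  # rows
        lines.append([(0, i), (1, i), (2, i)])  # columns
    lines.append([(0, 0), (1, 1), (2, 2)])      # main diagonal
    lines.append([(0, 2), (1, 1), (2, 0)])      # anti-diagonal
    for line in lines:
        values = [board[r][c] for r, c in line]
        # two of ours plus one empty field -> fill the empty one
        if values.count(player) == 2 and values.count(' ') == 1:
            return line[values.index(' ')]
    return None
```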
There are MANY ways to accomplish what you want. You could even brute-force every possible move, count the winning results and choose the one with the most possibilities.
Another easy way would be to write a method which will check for every field if it blocks an opponent winning move and/or results in a winning move for oneself.
A common A.I. algorithm in gaming is the min/max algorithm. Basically, a player looks ahead to evaluate the state of the game resulting from every possible sequence of events, and chooses the move that maximizes their chances of winning.
For tic-tac-toe you may want to consider starting with a player and looking at all of the possible child states that could follow a given state. You could evaluate some score, such as whether the move leads you to a state where you have two X's lined up. You then propagate this score up your tree, so the current player has an informed decision to make.
Min-max assumes your opponent is playing perfectly, so you can sometimes encounter problems there.
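For illustration, a minimal min/max search for tic-tac-toe might look like this (the 9-cell list encoding and helper names are my own; no pruning or depth limit):

```python
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """board: list of 9 cells ('X', 'O' or ' '). X maximizes, O minimizes.
    Returns (score, move_index): +1 if X can force a win, -1 if O can, 0 draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    other = 'O' if player == 'X' else 'X'
    results = []
    for i in moves:
        board[i] = player               # try the move
        score, _ = minimax(board, other)
        board[i] = ' '                  # undo it
        results.append((score, i))
    return max(results) if player == 'X' else min(results)
```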
For a good description of A.I. and the tic-tac-toe problem, check out Artificial Intelligence: A Modern Approach. Chapter 5, adversarial search, covers gaming and specifically refers to tic-tac-toe.

how to find the best positions in a matrix?

I need to build a seating system for a Java course I am attending.
Given the amount of seats required, the system needs to give the best positions in the hall.
By best positions I mean that the seats have to be as close as possible to one another, and as close as possible to the mid row.
Now, some definitions:
Distance between seats - The minimal number of cells in the matrix that separates the two cells. For example, the distance between the cell [3,3] and [2,2] is 1.
I thought about writing a recursive backtracking function that will give me a list of all possible arrangements, which I'll then iterate through, grading each one by the distance between the seats and the distance of the seats from the mid row.
This solution would be extremely inefficient. Does anyone have a better idea?
I am not sure how Dijkstra applies here, unless I misunderstand the problem.
Is your preference first for adjacent seating, and then for distance to the middle row?
I.e., say you need to find 5 seats, and you have five consecutive seats available in the first row, and 4 available seats in the middle row; I am assuming the solution would be to choose the 5 seats in the first row.
So, given a required seat number N...
What I would do is start with the middle row, pick an empty seat, and "grow" a region by flagging all adjacent empty seats. If the region is of size N, then you are done. If it's not, then I would push this region on a stack (say, the location of the "start of growth" and the number of empty seats in that cluster). Then I would move on along the middle row, marking/growing such regions. After the middle row I would move one row up and then one below, then two up and two below, etc., until the entire matrix is covered. The trick is to keep finding empty clusters until you hit one that is of size N. If you process the entire matrix and no such cluster exists, you can come back to the stack and "smartly" pick empty clusters that add up to at least N.
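The "grow a region" step might be sketched like this (I'm assuming the hall is a matrix where 0 marks an empty seat, and that the start cell is empty):

```python
from collections import deque

def grow_region(hall, start, needed):
    """Flood-fill adjacent empty seats (hall[r][c] == 0) from `start`.
    Stops early once `needed` seats are found; returns the seat list
    (which may be smaller than `needed` if the cluster is too small)."""
    rows, cols = len(hall), len(hall[0])
    seen = {start}
    queue = deque([start])
    region = []
    while queue and len(region) < needed:
        r, c = queue.popleft()
        region.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and hall[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return region
```

A cluster smaller than N can then be pushed on the stack as described, recording its start cell and size.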
Hope that helps. Fun problem.

Minimax / Alpha Beta for Android Reversi Game

I have to implement a Reversi game for Android. I have managed to implement the whole game and it is functional, but the problem is that I don't have an AI. In fact, at every move the computer moves to the position that gains him the highest number of pieces.
I decided to implement an alpha-beta pruning algorithm. I did a lot of research on the internet about it, but I couldn't come to a final conclusion on how to do it. I tried to implement a few functions, but I couldn't achieve the desired behaviour.
My board is stored in the class Board (inside this class, the pieces occupied by each player are stored in a bi-dimensional int array). I have attached a small diagram (sorry about the way it looks).
DIAGRAM: https://docs.google.com/file/d/0Bzv8B0L32Z8lSUhKNjdXaWsza0E/edit
I need help to figure out how to use the minimax algorithm with my implementation.
What I understood so far, is that I have to make an evaluation function regarding the value of the board.
To calculate the value of the board I have to account the following elements:
-free corners (my question here: do I have to consider only the free corners, or also the ones that I can take with the current move? dilemma here).
-mobility of the board: to check the number of pieces that will be available to move, after the current move.
-stability of the board… I know it means the number of pieces that can't be flipped on the board.
-the number of pieces the move will offer me
I plan to implement a new class BoardAI that will take as arguments my Board object and the depth.
Can you please tell me a logical flow of ideas how I should implement this AI?
I need some help with the recursion while calculating in depth, and I don't understand how it calculates the best choice.
Thank you!
First you can check this piece of code for a checkers AI that I wrote years ago. The interesting part is the last function (alphabeta). (It's in python but I think you can look at that like pseudocode).
Obviously I cannot teach you all the alpha/beta theory cause it can be a little tricky, but maybe I can give you some practical tips.
Evaluation Function
This is one of the key points for a good min/max alpha/beta algorithm (and for any other informed search algorithm). Writing a good heuristic function is the artistic part of AI development. You have to know the game well, and talk with expert players, to understand which board features are important for answering the question: how good is this position for player X?
You have already indicated some good features like mobility, stability and free corners. However, note that the evaluation function has to be fast, because it will be called a lot of times.
A basic evaluation function is
H = f1 * w1 + f2 * w2 + ... + fn * wn
where f is a feature score (for example the number of free corners) and w is a corresponding weight that says how important the feature f is in the total score.
There is only one way to find weights value: experience and experiments. ;)
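As a trivial illustration (the feature list and weight values here are made up, not tuned):

```python
def board_score(features, weights):
    """H = f1*w1 + f2*w2 + ... + fn*wn"""
    return sum(f * w for f, w in zip(features, weights))

# hypothetical feature scores: free corners, mobility, stable pieces
features = [3, 12, 5]
weights = [10.0, 1.0, 5.0]  # invented weights; find real ones by experiment
score = board_score(features, weights)  # 3*10 + 12*1 + 5*5 = 67.0
```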
The Basic Algorithm
Now you can start with the algorithm. The first step is understanding game tree navigation. In my AI I've just used the main board like a blackboard where the AI can try the moves.
For example we start with board in a certain configuration B1.
Step 1: get all the available moves. You have to find all the applicable moves to B1 for a given player. In my code this is done by self.board.all_move(player). It returns a list of moves.
Step 2: apply the move and start the recursion. Assume that the function has returned three moves (M1, M2, M3).
Take the first move M1 and apply it to obtain a new board configuration B11.
Apply the algorithm recursively on the new configuration (find all the moves applicable in B11, apply them, recurse on the result, ...).
Undo the move to restore the B1 configuration.
Take the next move M2 and apply it to obtain a new board configuration B12.
And so on.
NOTE: The undo step can be done only if all the moves are reversible. Otherwise you have to find another solution, like allocating a new board for each move.
In code:
for mov in moves:
    self.board.apply_action(mov)
    v = max(v, self.alphabeta(alpha, beta, level - 1, self._switch_player(player), weights))
    self.board.undo_last()
Step 3: stop the recursion. This tree is very deep, so you have to put a search limit on the algorithm. A simple way is to stop the iteration after n levels. For example, I start with B1, max_level=2 and current_level=max_level.
From B1 (current_level 2) I apply, for example, the M1 move to obtain B11.
From B11 (current_level 1) I apply, for example, the M2 move to obtain B112.
B112 is a "current_level 0" board configuration, so I stop the recursion. I return the evaluation function value applied to B112 and I come back to level 1.
In code:
if level == 0:
    value = self.board.board_score(weights)
    return value
Now... the standard algorithm pseudocode returns the value of the best leaf. But I want to know which move brings me to the best leaf! To do this you have to find a way to map leaf values to moves. For example you can save move sequences: starting from B1, the sequence (M1 M2 M3) brings the player to the board B123 with value -1; the sequence (M1 M2 M2) brings the player to the board B122 with value 2; and so on... Then you can simply select the move that brings the AI to the best position.
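Another common option, instead of saving whole sequences, is to have the search return the best child move alongside the value. A sketch of alpha-beta doing this over an explicit game tree (the nested-list tree encoding is mine, just for illustration):

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Return (value, move) where `move` is the index of the child leading
    to the best reachable leaf. `node` is a number (leaf) or a list of
    child nodes."""
    if depth == 0 or not isinstance(node, list):
        return node, None
    best_move = None
    if maximizing:
        value = float('-inf')
        for i, child in enumerate(node):
            child_value, _ = alphabeta(child, depth - 1, alpha, beta, False)
            if child_value > value:
                value, best_move = child_value, i
            alpha = max(alpha, value)
            if beta <= alpha:
                break  # beta cut-off: MIN will never allow this branch
        return value, best_move
    else:
        value = float('inf')
        for i, child in enumerate(node):
            child_value, _ = alphabeta(child, depth - 1, alpha, beta, True)
            if child_value < value:
                value, best_move = child_value, i
            beta = min(beta, value)
            if beta <= alpha:
                break  # alpha cut-off: MAX will never allow this branch
        return value, best_move
```

On a real board you would replace the child list with apply/undo of the legal moves, as in the loop shown earlier.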
I hope this can be helpful.
EDIT: Some notes on alpha-beta. The alpha-beta algorithm is hard to explain without graphical examples. For this reason I want to link one of the most detailed alpha-beta pruning explanations I've ever found: this one. I think I cannot really do better than that. :)
The key point is: alpha-beta pruning adds two bounds to the MIN-MAX nodes. These bounds can be used to decide whether a sub-tree should be expanded or not.
These bounds are:
Alpha: the maximum lower bound of possible solutions.
Beta: the minimum upper bound of possible solutions.
If, during the computation, we find a situation in which Beta < Alpha we can stop computation for that sub-tree.
Obviously check the previous link to understand how it works. ;)

PacMan character AI suggestions for optimal next direction

Firstly, this is AI for PacMan and not the ghosts.
I am writing an Android live wallpaper which plays PacMan around your icons. While it supports user suggestions via screen touches, the majority of the game will be played by an AI. I am 99% done with all of the programming for the game but the AI for PacMan himself is still extremely weak. I'm looking for help in developing a good AI for determining PacMan's next direction of travel.
My initial plan was this:
Initialize a score counter for each direction with a value of zero.
Start at the current position and use a BFS to traverse outward in the four possible initial directions by adding them to the queue.
Pop an element off of the queue, ensure it hasn't already been "seen", ensure it is a valid board position, and add to the corresponding initial direction's score a value for the current cell based on:
Has a dot: plus 10
Has a power up: plus 50
Has a fruit: plus fruit value (varies by level)
Has a ghost travelling toward PacMan: subtract 200
Has a ghost travelling away from PacMan: do nothing
Has a ghost travelling perpendicular: subtract 50
Multiply the cell's value by a percentage based on the number of steps to the cell; the more steps from the initial direction, the closer the value of the cell gets to zero.
and enqueue the three possible directions from the current cell.
Once the queue is empty, find the highest score for each of the four possible initial directions and choose that.
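For reference, that decaying-BFS scoring can be sketched like this (the maze encoding, decay factor and per-cell scores are my own stand-ins, and the ghost terms are omitted for brevity):

```python
from collections import deque

CELL_SCORE = {'.': 10, 'o': 50}  # dot, power-up; other cells score 0
DECAY = 0.9                      # per-step multiplier: far cells count less

def best_direction(grid, start):
    """Score each of the four initial directions with a decaying BFS and
    return the name of the direction with the highest total."""
    rows, cols = len(grid), len(grid[0])
    directions = {'up': (-1, 0), 'down': (1, 0),
                  'left': (0, -1), 'right': (0, 1)}
    totals = {}
    for name, (dr, dc) in directions.items():
        r0, c0 = start[0] + dr, start[1] + dc
        if not (0 <= r0 < rows and 0 <= c0 < cols) or grid[r0][c0] == '#':
            continue  # wall or off-board: this direction is unavailable
        seen = {start, (r0, c0)}
        queue = deque([(r0, c0, 1)])
        total = 0.0
        while queue:
            r, c, steps = queue.popleft()
            total += CELL_SCORE.get(grid[r][c], 0) * DECAY ** steps
            for ddr, ddc in directions.values():
                nr, nc = r + ddr, c + ddc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#' and (nr, nc) not in seen):
                    seen.add((nr, nc))
                    queue.append((nr, nc, steps + 1))
        totals[name] = total
    return max(totals, key=totals.get)
```

(Unlike the single tagged BFS in the plan, this runs one BFS per initial direction, so a cell reachable several ways counts toward each; the scoring idea is the same.)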
It sounded good to me on paper but the ghosts surround PacMan extremely rapidly and he twitches back and forth in the same two or three cells until one reaches him. Adjusting the values for the ghost presence doesn't help either. My nearest dot BFS can at least get to level 2 or 3 before the game ends.
I'm looking for code, thoughts, and/or links to resources for developing a proper AI--preferably the former two. I'd like to release this on the Market sometime this weekend so I'm in a bit of a hurry. Any help is greatly appreciated.
FYI, this was manually cross-posted on GameDev.StackExchange
If PacMan gets stuck in a position and starts to twitch back and forth then it suggests that the different moves open to him have very similar scores after you run your metric. Then small changes in position by the ghosts will cause the best move to flip back and forth. You might want to consider adding some hysteresis to stop this happening.
Setup: Choose a random move and record it with score 0.
For each step:
Run the scoring function over the available moves.
If the highest score is x% greater than the record score then overwrite the record score and move with this one.
Apply the move.
This has the effect that PacMan will no longer pick the "best" move on each step, but it doesn't seem like a greedy local search would be optimal anyway. It will make PacMan more consistent and stop the twitches.
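The record-score idea above could be sketched as follows (the 10% threshold is just an example value, and `scores` is assumed to map each available move to its metric):

```python
def choose_with_hysteresis(scores, record, threshold=0.10):
    """Pick a new move only if its score beats the recorded score by more
    than `threshold` (e.g. 10%); otherwise keep the recorded move.
    `record` is a (move, score) pair; returns the new (move, score)."""
    best_move = max(scores, key=scores.get)
    best_score = scores[best_move]
    record_move, record_score = record
    if best_score > record_score * (1 + threshold):
        return best_move, best_score  # clear improvement: switch moves
    return record_move, record_score  # too close to call: don't twitch
```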
Have a way to change PacMan into a "path following" mode. The plan is that you detect certain circumstances, calculate a pre-drawn path for PacMan to follow, and then work out early exit conditions for that path. You can use this for several circumstances.
When PacMan is surrounded by ghosts in three of the four directions within a certain distance, then create an exit path that either leads PacMan away from the ghosts or towards a power up. The exit situation would be when he eats the power up or ceases to be surrounded.
When PacMan eats a power up, create a path to eat some nearby ghosts. The exit situation would be when there are no ghosts on the path, recalculate the path. Or if there are no ghosts nearby, exit the mode entirely.
When there are less than half the dots left, or no dots nearby, enter a path to go eat some dots, steering clear of the ghosts. Recalculate the path when a ghost comes nearby, or exit it entirely if several ghosts are nearby.
When there are no situations which warrant a path, then you can revert back to the default AI you programmed before.
You can use Ant Colony Optimisation techniques to find the shortest visible path that leads to many icons to eat or yields a high score.
I don't know a lot about AI or specific algorithms, but here are some things you could try that might just get you close enough for government work :)
For the problem with ghosts surrounding him quickly, maybe the ghost AI is too powerful? I know that there are supposedly specific behaviors for each ghost in classic Pac-Man, so if you haven't incorporated those, you may want to.
To eliminate backtracking, you could create a weight penalty for recently traversed nodes, so he's less inclined to go back to previous paths. If that's not enough to kick him in one direction or another, then you can logarithmically increase the attraction penalty, so one path will become significantly more attractive than the other at a very quick rate.
For the problem of him getting caught by ghosts, you might be able to change from a general goal-based algorithm to an evasive algorithm once the ghosts have reached a dangerous node proximity.
You might benefit from knowing how the bots "reason" (as explained in this excellent dossier). For example, knowing the chase/scatter pattern of the ghosts will allow you to get the dots in "dangerous" locations, and so on.
I am adding this answer knowing that it's not the best solution you were looking for (since you wanted to deliver next week..), but maybe it will be of use to somebody reading this in the future. Sort of a time capsule :)
You should check out this description of Antiobjects, which is the technique used by the Pacman ghosts to traverse the maze. In particular, note:
Each of these antiobjects or agents has an identical and simple algorithm which it runs at every turn of the game. Instead of making Ghosts smart enough to solve "shortest path" problems around the maze, a notion of "Pac-Man scent" is created instead and each tile is responsible for saying how much Pac-Man scent is on its tile.
So you could consider a similar scent-based technique to control Pac-Man, perhaps one where Pac-Man prefers traversing a path with a smaller amount of scent; this would reduce the chance of him going over old ground.

point in polygon on a sphere

My question is: will the code given in http://pietschsoft.com/post/2008/07/Virtual-Earth-Polygon-Search-Is-Point-Within-Polygon.aspx work to find whether a point lies in one of the areas mentioned in the file below (pages 7-9)?
http://www.weather.gov/directives/sym/pd01008006curr.pdf
Looking forward to your answers.
A point-in-polygon algorithm usually just counts the number of times a ray "drawn" out from the point in any particular direction crosses the polygon's edges. It then knows whether the point is in the polygon from that count: an even number means it is outside, an odd number means it is inside. The code on that site looks like it just flips a boolean instead of adding to a counter, but it's the same thing.
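That crossing-count test might be sketched like this for planar coordinates (this is the standard even-odd ray casting, not the linked code verbatim; points exactly on an edge are not handled specially):

```python
def point_in_polygon(point, polygon):
    """Even-odd ray casting: cast a ray to the right from `point` and flip
    `inside` on each edge crossing. `polygon` is a list of (x, y) vertices.
    Planar geometry only: this does NOT handle polygons on a sphere, e.g.
    ones that wrap across the 180th meridian."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # does edge (x1,y1)-(x2,y2) straddle the horizontal line at y?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:        # crossing lies to the right of the point
                inside = not inside
    return inside
```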
I must confess I have not read the PDF you linked to (too long!), but I've not come across an instance where the algorithm fails.
One tip might be to draw a rough square around the outermost extremities of the polygon first and check whether the point falls within that, to avoid having to run the full test.
I believe it will fail in some cases. The algorithm you linked to, which is correct for planar geometry, is incorrect for spherical geometry. Consider the rectangles which cross the 180th meridian, e.g. the rectangle labelled "M". The algorithm would consider that rectangle as covering the Americas, Africa, and Europe, but not Asia or the Pacific.
