Package | Description |
---|---|
s3games.ai | |
s3games.engine | |
s3games.engine.expr | |
s3games.gui | |
s3games.player | |
Modifier and Type | Method and Description |
---|---|
double | DistanceFromGoalHeuristic.heuristic(GameState gameState, int forPlayer) - compares the current state with the goal state element by element and sums the number of differences (not implemented yet) |
abstract double | Heuristic.heuristic(GameState gameState, int forPlayer) - a heuristic receives the current state and the number of the player from whose viewpoint the game situation is to be evaluated |
double | MoreStonesHeuristic.heuristic(GameState gameState, int forPlayer) - returns a value in -1..1 depending on the ratio of the player's and the opponent's elements on the relevant board locations |
double | Puzzle8Heuristic.heuristic(GameState gameState, int forPlayer) - returns the sum of the distances of all elements to their target locations |
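The Puzzle8Heuristic row above describes a "sum of distances to target locations" measure. A minimal, self-contained sketch of that idea is shown below; it operates on a plain int[9] board (row-major 3x3, 0 = blank, tile v has target index v) rather than the real GameState, and all names here are illustrative, not part of the s3games API.

```java
// Simplified sketch of the "sum of distances to target locations" heuristic.
// Assumption: the board is int[9], row-major 3x3, 0 is the blank, and tile v
// belongs at index v. The real Puzzle8Heuristic works on a GameState instead.
public class Puzzle8Sketch {
    /** Sum of Manhattan distances of every tile from its target cell. */
    public static int distanceSum(int[] board) {
        int sum = 0;
        for (int i = 0; i < board.length; i++) {
            int tile = board[i];
            if (tile == 0) continue;              // the blank does not count
            int curRow = i / 3, curCol = i % 3;   // where the tile is now
            int tgtRow = tile / 3, tgtCol = tile % 3; // where it belongs
            sum += Math.abs(curRow - tgtRow) + Math.abs(curCol - tgtCol);
        }
        return sum;
    }
}
```

A solved board yields 0; every misplaced tile raises the value, so lower is better, which is the usual convention for distance-to-goal heuristics.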
Modifier and Type | Field and Description |
---|---|
GameState | Game.state - the current state of the game |
Modifier and Type | Method and Description |
---|---|
GameState | GameState.getCopy() - returns a copy of this state |
Modifier and Type | Method and Description |
---|---|
boolean | GameState.equals(GameState other) - implementation of the equals() method that ignores irrelevant locations and compares only element types, not element names |
Move | GameState.findMove(GameState newState) - compares this state with newState and returns a move that leads from this state to the new state |
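The findMove() row above describes inferring a move by comparing two states. A simplified sketch of that comparison follows; here a state is just a String[] of element names (null = empty) and the returned int[]{from, to} stands in for a Move object, so every name is an illustrative stand-in for the real GameState API.

```java
import java.util.Objects;

// Simplified sketch of the GameState.findMove() idea: compare two states
// location by location and infer which single element moved. Assumption: a
// state is a String[] of element names with null meaning "empty".
public class FindMoveSketch {
    /** Returns {from, to}, or null if no single move explains the change. */
    public static int[] findMove(String[] oldState, String[] newState) {
        int from = -1, to = -1;
        for (int i = 0; i < oldState.length; i++) {
            if (Objects.equals(oldState[i], newState[i])) continue;
            if (oldState[i] != null && newState[i] == null) from = i; // vacated
            else if (newState[i] != null) to = i;                     // occupied
        }
        return (from >= 0 && to >= 0) ? new int[]{from, to} : null;
    }
}
```

This is the same element-by-element comparison that the equals() row relies on, reused to recover the source and destination of a move.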
Modifier and Type | Method and Description |
---|---|
GameState | Context.getState() - retrieves the game state in this context |
Modifier and Type | Method and Description |
---|---|
void | Context.setState(GameState state) - sets the game state for this context; the same context is reused when searching through the game tree, so a particular current state always needs to be set |
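The setState() row above describes a reuse pattern: one Context object is kept for the whole search and the current state is swapped in before each evaluation. The minimal sketch below shows that pattern with illustrative stand-in classes; the real Context also carries a GameSpecification and a Robot.

```java
// Sketch of the context-reuse pattern: allocate one Context, then call
// setState() before each node is evaluated. Context here is a stand-in;
// the real s3games Context is constructed with specs and a robot as well.
public class ContextReuseSketch {
    static class Context {
        private Object state;
        void setState(Object state) { this.state = state; }
        Object getState() { return state; }
    }

    /** Evaluate several states against one shared, reused context. */
    public static int evaluateAll(Object[] states) {
        Context ctx = new Context();   // created once, before the search
        int evaluated = 0;
        for (Object s : states) {
            ctx.setState(s);           // the current state must always be set
            evaluated++;               // ...expressions would be evaluated here
        }
        return evaluated;
    }
}
```

Reusing one mutable context avoids allocating a fresh context per node, at the cost of having to remember the setState() call before every evaluation.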
Constructor and Description |
---|
Context(GameState state, GameSpecification specs, Robot robot) - constructs a new empty context |
Modifier and Type | Method and Description |
---|---|
void | BoardCanvas.setState(GameState egs) - the controller sends the current game state here |
void | GameWindow.setState(GameState egs) - sets the current state of the game and visualizes it |
Modifier and Type | Method and Description |
---|---|
java.util.HashMap<Move,GameState> | MiniMaxPlayer.expand(GameState state, java.util.HashSet<Move> moves, double ratio) - the only difference from the standard minimax player is that some of the moves are ignored when a node is expanded |
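The expand() row above describes pruning: when a node is expanded, only a fraction of the available moves (governed by the ratio parameter) is kept. The sketch below shows that filtering step in isolation; moves are plain ints and every name is illustrative, since the real method maps each kept Move to its successor GameState.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Simplified sketch of the ratio-based pruning in MiniMaxPlayer.expand():
// each available move is kept with probability `ratio`, trading completeness
// for a deeper search. Assumption: moves are ints; the real method returns
// a HashMap<Move,GameState> of kept moves and their successor states.
public class ExpandSketch {
    /** Keep each move with probability `ratio`; never return an empty set. */
    public static List<Integer> expand(List<Integer> moves, double ratio, Random rnd) {
        List<Integer> kept = new ArrayList<>();
        for (int move : moves)
            if (rnd.nextDouble() < ratio) kept.add(move);
        if (kept.isEmpty() && !moves.isEmpty())          // always explore something
            kept.add(moves.get(rnd.nextInt(moves.size())));
        return kept;
    }
}
```

With ratio = 1.0 this degenerates to the standard full expansion; smaller ratios expand fewer branches per node.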
Modifier and Type | Method and Description |
---|---|
protected abstract void | AbstractMonteCarloPlayer.addScore(GameState gs, int i) - adds to the scores depending on the game result of this trial |
protected void | MonteCarloClassicPlayer.addScore(GameState gs, int playerNumber) - we add 1 if we won |
protected void | MonteCarloRatioPlayer.addScore(GameState gs, int i) - adds the score after each trial |
protected void | MonteCarloRatioPlayer2.addScore(GameState gs, int i) - adds the current trial to the overall scores |
java.util.HashMap<Move,GameState> | MiniMaxPlayer.expand(GameState state, java.util.HashSet<Move> moves, double ratio) - the only difference from the standard minimax player is that some of the moves are ignored when a node is expanded |
Move | AStarPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - the A* player makes a move: it searches all the way to the closest winning state and performs a move that leads towards it |
Move | AbstractMonteCarloPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - makes a move |
Move | BreadthFirstSearchPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - takes a move using the BFS algorithm |
Move | CameraPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - requests the user to make a move, asks the camera to detect the visible objects, and tries to figure out what has been moved |
Move | DepthFirstSearchPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - makes a move using the DFS strategy: searches the state space in depth and, on finding the first winning state, returns a move that leads towards it |
Move | MiniMaxPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - makes one minimax move: expands the game tree as far as it gets, evaluating the rest with the heuristic when the time is used up |
Move | MousePlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - waits for the human user to perform a move and returns it |
abstract Move | Player.move(GameState state, java.util.ArrayList<Move> allowedMoves) - makes a single move |
Move | RandomGeneralPlayer.move(GameState state, java.util.ArrayList<Move> allowedMoves) - performs a random move |
MiniMaxPlayer.Node | MiniMaxPlayer.newNode(MiniMaxPlayer.Node previous, GameState gs) - creates a new node that was reached by performing some move from the previous node |
void | Player.otherMoved(Move move, GameState newState) - information about the move made by another player; override if needed |
protected abstract void | AbstractMonteCarloPlayer.updateRatio(GameState gs, java.util.Set<Move> moves) - updates the ratio depending on the branching in the current state |
protected void | MonteCarloClassicPlayer.updateRatio(GameState gs, java.util.Set<Move> moves) - classic Monte Carlo does not update the ratio |
protected void | MonteCarloRatioPlayer.updateRatio(GameState gs, java.util.Set<Move> moves) - opponent moves tax the ratio depending on the size of the state neighborhood |
protected void | MonteCarloRatioPlayer2.updateRatio(GameState gs, java.util.Set<Move> moves) - updates the current trial ratio every time a move is made |
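The addScore() rows above describe the classic Monte Carlo scheme: run random trials from a state and, as MonteCarloClassicPlayer puts it, "add 1 if we won", then average over the trials. The sketch below isolates that scoring loop; the trial itself is replaced by a random win test with a given probability, so it is an illustrative stand-in for a real random playout, not the s3games implementation.

```java
import java.util.Random;

// Simplified sketch of the classic Monte Carlo scoring in addScore():
// play `trials` random playouts, add 1 to the score for each win, and
// return the average. Assumption: a playout is modeled as a coin flip
// that wins with probability `winProbability`.
public class MonteCarloSketch {
    /** Fraction of random playouts won. */
    public static double winRate(int trials, double winProbability, Random rnd) {
        double score = 0;
        for (int t = 0; t < trials; t++)
            if (rnd.nextDouble() < winProbability)
                score += 1;                      // "we add 1, if we won"
        return score / trials;
    }
}
```

A Monte Carlo player computes such a win rate per candidate move and picks the move with the highest average; the ratio players additionally discount trials by branching, as the updateRatio() rows describe.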
Constructor and Description |
---|
MiniMaxPlayer.Leaf(GameState gs, MiniMaxPlayer.Node previous) - constructs a new leaf |