Playing Tic-Tac-Toe Using Genetic Neural Network with Double Transfer Functions

Computational intelligence is a powerful tool for game development. In this paper, an algorithm for playing the game Tic-Tac-Toe with computational intelligence is developed. This algorithm is learned by a Neural Network with Double Transfer Functions (NNDTF), which is trained by a genetic algorithm (GA). In the NNDTF, each neuron has two transfer functions, and a node-to-node relationship in the hidden layer enhances the learning ability of the network. A Tic-Tac-Toe game is used to show that the NNDTF provides a better performance than the traditional neural network does.


Introduction
Games such as Backgammon, Chess, Checkers, Go, Othello and Tic-Tac-Toe are widely used platforms for studying the learning ability of machines and for developing machine-learning algorithms. By playing games, machine intelligence can be revealed. Some techniques of artificial intelligence, such as brute-force methods and knowledge-based methods [1], have been reported. Brute-force methods, e.g. retrograde analysis [2] and enhanced transposition-table methods, solve game problems by constructing databases for the games. For instance, the database is formed from a terminal position [2]. The best move is then determined by working backward on the constructed database. For knowledge-based methods, the best move is determined by searching a game tree. For games such as Checkers, the tree span is very large, and tree searching will be time-consuming even for a few plies. Hence, an efficient searching algorithm is an important issue. Some searching algorithms, which are classified as knowledge-based methods, are threat-space search, proof-number search [3], depth-first proof-number search and pattern search.
It can be seen that the above game-solving methods depend mainly on database construction and searching. The problems are solved by forming a possible set of solutions based on the endgame condition, or by searching for the set of solutions based on the current game condition. The machine cannot learn to play the games by itself. Unlike these methods, an evolutionary approach in [1] employed a neural network (NN) to evolve and to learn to play Tic-Tac-Toe without the need of a database. Evolutionary programming was used to design the NN and its link weights. A similar idea has been applied to the game of Checkers [4][5][6][7]. Other games to which NNs or computational intelligence techniques have been applied, such as Backgammon [8], Othello [9] and Checkers [10], can also be found.
In this paper, a neural network with double transfer functions (NNDTF) is proposed to learn the rules of Tic-Tac-Toe. Each possible move is evaluated by a proposed algorithm with a score. By maximizing the total score (evaluated values), the rules of Tic-Tac-Toe can be extracted by the NNDTF. Unlike the traditional feed-forward multilayer-perceptron NN, the proposed NN introduces modified transfer functions with a node-to-node relationship. The modified transfer functions are allowed to change their shapes during operation; hence, the working domain is larger than that of the traditional one. By introducing the node-to-node relationship between hidden nodes, information can be exchanged within the hidden layer. As a result, the learning ability is enhanced. A genetic algorithm (GA) [11] is employed to train the NNDTF. The trained NNDTF is then employed to play Tic-Tac-Toe with a human player as an example.

Algorithm for Playing Tic-Tac-Toe
The game Tic-Tac-Toe, also known as noughts and crosses, is a two-player game. Each player in turn places a marker, "X" for the first player and "O" for the second player, in a three-by-three grid. The first player takes the first move. The goal is to place three markers in a line in any direction on the grid.
An algorithm is proposed in this section to evaluate the move on each grid. An "X" and an "O" on a grid are denoted by 1 and -1 respectively. An empty grid is denoted by 0.5. The following procedure is used to evaluate each possible move.
1) Place an "X" on an empty grid.
2) Corresponding to step 1, sum up all the grid values for each line in any direction, e.g., for a grid in the corner, we have three evaluated values because there are three lines to win or lose.
3) Remove the "X" placed in step 1 and place an "X" on another empty grid. Evaluate this grid using the algorithm in step 2. Repeat this process until all empty grids are evaluated.
4) After evaluation, each grid will have been assigned at least 2 evaluated values for all possible lines; e.g., the centre grid will have 4 evaluated values, corner grids will have 3 evaluated values and the other grids will have 2 evaluated values. There are in total 6 possible evaluated values: 3 (1 + 1 + 1), 2.5 (1 + 1 + 0.5), 2 (1 + 0.5 + 0.5), 1 (-1 + 1 + 1), 0.5 (-1 + 1 + 0.5) and -1 (-1 - 1 + 1). The most important evaluated value of a grid is 3, which indicates a win of the game (3 "X"s in a line) if you put an "X" on that grid; the priority of taking that move is the highest. The second most important evaluated value of a grid is -1 (2 "O"s and 1 "X" in a line), which indicates that the opponent should be prevented from winning the game; the priority of taking that move is the second highest. Using this rationale, the list of priority in descending order is: 3, -1, 2.5, 2, 1, 0.5. Based on these assigned evaluated values, a score will be assigned to each possible move. First, each evaluated value is assigned a score: 7^7 for the evaluated value 3, 6^6 for -1, 5^5 for 2.5, 4^4 for 2, 3^3 for 1 and 2^2 for 0.5. The sum of the scores of a grid is its final score. The final scores will be used to determine the priorities of the possible moves: a higher final score of a grid indicates a higher priority for that move. The reasons for choosing the scores in this way, with the properties

7^7 > 4 × 6^6, (1)
6^6 > 4 × 5^5, (2)
5^5 > 4 × 4^4, (3)
4^4 > 4 × 3^3, (4)
3^3 > 4 × 2^2, (5)

are as follows. As the evaluated value of 3 indicates a win of the game (3 "X"s in a line), its score of 7^7 must be the highest. There are at most four evaluated values for a grid. Hence, 7^7 must be greater than 4 times the second-largest score, i.e. (1). Consequently, the priority of a grid having an evaluated score with a higher priority will not be affected by other lower evaluated scores. For instance, consider a grid having evaluated values of 3 and 0.5, and another grid having evaluated values of -1, 2.5, 2 and 0.5. The final score of the former grid (7^7 + 2^2) is bigger than that of the latter grid (6^6 + 5^5 + 4^4 + 2^2). Thus, the "X" should be placed on the grid having an evaluated value of 3 to win the game.
Take the game shown in Figure 1 as an example: there are 3 "X"s and 3 "O"s, and the next move is to place an "X". After assigning 0.5 to each empty grid, 1 to each "X" and -1 to each "O", Figure 1(b) is obtained. Following Step 1 to Step 3, we obtain the evaluated values shown in Figure 1(c). Based on Step 4, Figure 1(d) shows the final scores for the empty grids. As the highest score is 873324, the most appropriate move is to put an "X" on the bottom right corner. This move not only lines up 3 "X"s to win the game, but also prevents the opponent from lining up 3 "O"s. The second most appropriate move, indicated by the final score of 52906, would gain a chance to win by lining up 2 "X"s, and prevent the opponent from winning.
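The four evaluation steps above can be sketched in code. Since Figure 1 is not reproduced here, the board below is an illustrative assumption, constructed so that the best move scores 7^7 + 6^6 + 5^5 = 873324, as in the example; the score table follows the priority list above.

```python
# Score table from Section 2: evaluated line value -> score.
SCORE = {3: 7**7, -1: 6**6, 2.5: 5**5, 2: 4**4, 1: 3**3, 0.5: 2**2}

# All eight winning lines of the 3x3 grid: rows, columns, diagonals.
LINES = [[(r, c) for c in range(3)] for r in range(3)] \
      + [[(r, c) for r in range(3)] for c in range(3)] \
      + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]

def move_scores(board):
    """Steps 1-4: return {cell: final score} for every empty cell (0.5)."""
    scores = {}
    for r in range(3):
        for c in range(3):
            if board[r][c] != 0.5:
                continue
            board[r][c] = 1                 # step 1: tentatively place an "X"
            total = 0
            for line in LINES:
                if (r, c) in line:          # step 2: sum every line through the cell
                    total += SCORE[sum(board[x][y] for x, y in line)]
            board[r][c] = 0.5               # step 3: remove the tentative "X"
            scores[(r, c)] = total          # step 4: final score of this move
    return scores

# Hypothetical position (X = 1, O = -1, empty = 0.5); "X" to move.
board = [[1, 0.5, -1],
         [-1, 0.5, -1],
         [1, 1, 0.5]]
s = move_scores(board)
best = max(s, key=s.get)   # bottom-right corner: wins and blocks at once
```

Placing the "X" at the bottom-right cell completes a row of 3 "X"s (line sum 3), blocks a column of 2 "O"s (line sum -1), and opens a diagonal (line sum 2.5), giving exactly the score 7^7 + 6^6 + 5^5.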

Neural Network with Double Transfer Functions (NNDTF)
The NN has been proved to be a universal approximator [12]: a 3-layer feed-forward NN can approximate any nonlinear continuous function to arbitrary accuracy. NNs are widely applied in areas such as prediction, system modeling and control [12]. Owing to its particular structure, an NN is good at learning [2] with learning algorithms such as GA [1] and back-propagation [2]. In general, the processing of a traditional feed-forward NN is done in a layer-by-layer manner. In this paper, by introducing a node-to-node relationship in the hidden layer of the NN, a better performance can be obtained.
Figure 2 shows the proposed neuron. It has two activation transfer functions to govern the input-output relationship of the neuron: a static transfer function (STF) and a dynamic transfer function (DTF). For the STF, the parameters are fixed and its output depends only on the inputs of the neuron. For the DTF, the parameters of the activation transfer function depend on the outputs of other neurons and of its own STF. With this proposed neuron, the connection of the proposed NN is shown in Figure 3, which is a three-layer NN. A node-to-node relationship is introduced in the hidden layer. Compared with the traditional feed-forward NN [12], it was reported in [13] that the proposed NN can offer a better performance and needs fewer hidden nodes. The details of the NNDTF are presented as follows.

The Neuron Models
We consider the STF first. Let v_ij be the synaptic connection weight from the i-th input component x_i to the j-th neuron. The output α_j of the j-th neuron's STF is defined through (7), where m_j^s and σ_j^s are the static mean and static standard deviation of the j-th STF respectively. The parameters m_j^s and σ_j^s are fixed after the training process; thus, the activation transfer function is static, and the output of the STF depends only on the inputs of the neuron. From (7), the output value ranges from -1 to 1. The shape of the proposed activation transfer function is shown in Figure 4 and Figure 5, from which it can be observed how the shape varies with the parameters. Considering the DTF, the neuron output z_j of the j-th neuron is defined through (14). In the DTF, unlike the STF, the activation transfer function is dynamic, as its parameters depend on the outputs of the (j-1)-th and (j+1)-th neurons. Referring to (14), the input-output relationship of the proposed neuron follows.
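The exact forms of (7) and (14) do not survive in this copy, so the sketch below only illustrates the idea: a tanh-shaped STF whose output lies in [-1, 1], and a DTF whose mean and standard deviation are assumed here to be derived from the (j-1)-th and (j+1)-th STF outputs. The tanh form and the neighbour-to-parameter mapping are assumptions, not the paper's equations.

```python
import numpy as np

def stf(x, v, m_s, s_s):
    """Static transfer function (tanh form assumed): fixed mean and std."""
    net = v @ x                          # weighted sum of the inputs
    return np.tanh((net - m_s) / s_s)

def nndtf_hidden(x, v, m_s, s_s):
    """Hidden layer with node-to-node coupling (illustrative DTF)."""
    a = stf(x, v, m_s, s_s)              # STF outputs of all hidden nodes
    z = np.empty_like(a)
    n = len(a)
    for j in range(n):
        # DTF parameters depend on the (j-1)-th and (j+1)-th STF outputs
        m_d = a[(j - 1) % n]             # dynamic mean (assumed mapping)
        s_d = 1.0 + abs(a[(j + 1) % n])  # dynamic std, kept positive (assumed)
        z[j] = np.tanh((a[j] - m_d) / s_d)
    return z
```

Because the DTF parameters change with the neighbouring outputs, the effective shape of each neuron's transfer function varies from input to input, which is the mechanism the paper credits for the enlarged working domain.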

Connection of the NNDTF
As shown in Figure 3, the NNDTF has three layers, with n_in nodes in the input layer, n_h nodes in the hidden layer, and n_out nodes in the output layer. In the hidden layer, the neuron model presented in the previous section is employed; the output value of a hidden node depends on the neighbouring nodes and the input nodes. In the output layer, a static activation transfer function is employed. Considering an input-output pair (x, y), the output of the j-th node of the hidden layer is given accordingly, and the output of the NNDTF is then defined in terms of w_jl, which denotes the weight of the link between the j-th hidden node and the l-th output node.
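The output layer can be sketched in the same spirit; since the paper's equation is not recoverable here, a tanh-shaped static transfer function with a trainable mean m_out and standard deviation s_out is assumed.

```python
import numpy as np

def nndtf_output(z, w, m_out, s_out):
    """Output layer: static transfer function applied to the weighted
    hidden outputs z, with link weights w[j, l] (tanh form assumed)."""
    return np.tanh((w.T @ z - m_out) / s_out)
```

Stacking `nndtf_hidden` and `nndtf_output` gives a complete forward pass; all of these parameters (v, w, means, standard deviations) are what the GA later tunes.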

Genetic Algorithm
Genetic algorithms (GAs) are powerful searching algorithms. The traditional GA process [14][15][16] is shown in Figure 6. First, a population of chromosomes is created. Second, the chromosomes are evaluated by a defined fitness function. Third, some of the chromosomes are selected for genetic operations. Fourth, the genetic operations of crossover and mutation are performed. The produced offspring replace their parents in the initial population. This GA process repeats until a user-defined criterion is reached. In this paper, the traditional GA is modified and new genetic operators [11] are introduced to improve its performance. The modified GA process is shown in Figure 7. Its details are given as follows.

Initial Population
The initial population is a potential solution set P. The first population is usually generated randomly. It can be seen from (17) to (19) that the potential solution set P contains some candidate solutions p_i (chromosomes), and each chromosome p_i contains some variables (genes).

Evaluation
Each chromosome in the population will be evaluated by a defined fitness function. The better chromosomes will return higher values in this process. The fitness function used to evaluate a chromosome in the population can be written as f(p_i); the form of the fitness function depends on the application.

Selection
Two chromosomes in the population will be selected to undergo genetic operations for reproduction by the method of spinning the roulette wheel [16]. It is believed that high-potential parents will produce better offspring (survival of the best ones). The chromosome having a higher fitness value should therefore have a higher chance of being selected. The selection can be done by assigning to the chromosome p_i a probability q_i proportional to its fitness, q_i = f(p_i) / Σ_k f(p_k), together with the cumulative probability q̂_i = q_1 + q_2 + ... + q_i, with q̂_0 = 0. The selection process starts by randomly generating a nonzero floating-point number r in (0, 1]; the chromosome p_i is selected if q̂_{i-1} < r ≤ q̂_i, i = 1, 2, ..., pop_size. It can be observed from this selection process that a chromosome having a larger f(p_i) will have a higher chance of being selected. Consequently, the best chromosomes will get more offspring, the average will stay, and the worst will die off. In the selection process, only two chromosomes will be selected to undergo the genetic operations.
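The roulette-wheel selection described above can be sketched as follows (positive fitness values are assumed); running it twice yields the two parents.

```python
import random

def roulette_select(pop, fitness):
    """Roulette-wheel selection: pick one chromosome with probability
    proportional to its fitness value (all fitness values assumed > 0
    except possibly some zeros)."""
    total = sum(fitness)
    r = random.uniform(0, total)       # spin the wheel
    cum = 0.0
    for chrom, f in zip(pop, fitness):
        cum += f                        # cumulative probability boundary
        if r <= cum:
            return chrom
    return pop[-1]                      # guard against float round-off
```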

Genetic Operations
The genetic operations are to generate some new chromosomes (offspring) from their parents after the selection process.They include the crossover and the mutation operations.

Crossover
The crossover operation is mainly for exchanging information between the two parents, chromosomes p_1 and p_2, obtained in the selection process. The two parents will produce one offspring. First, four candidate chromosomes will be generated according to the mechanisms of (23) to (26), where w denotes a weight to be determined by the user, p_max denotes a vector with each element obtained by taking the maximum of the corresponding elements of p_1 and p_2, and p_min similarly denotes a vector obtained by taking the minimum values. Among the four candidates os_c1 to os_c4, the one with the largest fitness value is used as the offspring of the crossover operation; the offspring os is the candidate os_ci whose index i gives the maximum value of f(os_ci), i = 1, 2, 3, 4. If the crossover operation can provide a good offspring, a higher fitness value can be reached in fewer iterations. As seen from (23) to (26), the offspring spread over the domain: (23) and (26) move the offspring near the centre region of the concerned domain, while (24) and (25) move the offspring near the domain boundary (as w in (24) and (25) approaches 1, the offspring approach p_max and p_min respectively). The chance of getting a good offspring is thus enhanced.
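The exact forms of (23) to (26) are not fully recoverable from this copy; the sketch below assumes the average/bound combinations commonly used in the modified GA of [11], with lo and hi denoting the lower and upper parameter bound vectors. The candidate formulas are therefore assumptions consistent with the description above.

```python
import numpy as np

def crossover(p1, p2, lo, hi, w, fitness):
    """Generate four candidate offspring (assumed forms of (23)-(26))
    and return the one with the largest fitness value."""
    pmax, pmin = np.maximum(p1, p2), np.minimum(p1, p2)
    candidates = [
        (p1 + p2) / 2.0,                                  # (23): centre of parents
        hi * (1 - w) + pmax * w,                          # (24): toward upper bound
        lo * (1 - w) + pmin * w,                          # (25): toward lower bound
        ((hi + lo) * (1 - w) + (pmax + pmin) * w) / 2.0,  # (26): centre of domain
    ]
    return max(candidates, key=fitness)
```

Spreading the candidates over the centre and the boundaries of the search domain, then keeping only the fittest, is what gives the operation its improved chance of producing a good offspring.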

Mutation
The offspring will then undergo the mutation operation of (30). The mutation operation changes the genes of the chromosomes; consequently, the features of the chromosomes inherited from their parents can be changed. Three new offspring will be generated according to (30). The first new offspring is obtained with only one b_i (i being randomly generated within the range) allowed to be 1 while all the others are zero. The second new offspring is obtained with some randomly chosen b_i set to 1 and the others zero. The third new offspring is obtained with all b_i = 1. These three new offspring will then be evaluated using the fitness function of (21). A real number will be generated randomly and compared with a user-defined number p_a. If the real number is smaller than p_a, the offspring with the largest fitness value f_l among the three will replace the chromosome with the smallest fitness value f_s in the population (even when f_l < f_s). If the real number is larger than p_a, the first offspring will replace the chromosome with the smallest fitness value f_s in the population only if its fitness value is larger than f_s; the second and the third offspring will do the same. p_a is effectively the probability of accepting a bad offspring in order to reduce the chance of converging to a local optimum. Hence, the possibility of reaching the global optimum is kept.
We have three offspring generated in the mutation process. From (30), the first mutation is in fact a uniform mutation of a single gene. The second mutation allows some randomly selected genes to change simultaneously. The third mutation changes all genes simultaneously. The second and the third mutations allow multiple genes to be changed; hence, the domain to be searched is larger compared with a domain characterized by changing a single gene. As three offspring are produced in each generation, the genes have a larger space for improving the fitness value when the fitness value is small. When the fitness values are large and nearly steady, changing the value of a single gene (the first mutation) may be enough, as some genes may already have reached their optimal values.
After the operations of selection, crossover, and mutation, a new population is generated. This new population will repeat the same process. Such an iterative process can be terminated when the result reaches a defined condition, e.g. when a defined number of iterations has been reached.

Training of the NNDTF
In this section, the GA is employed to train the parameters of the NNDTF to play Tic-Tac-Toe based on the gaming algorithm in Section 2. An NNDTF with 9 inputs and 1 output is employed. The grids are numbered from 1 to 9, from right to left and from top to bottom. An "X" on a grid is denoted by 1, an "O" by -1, and an empty grid by 0.5. The grid pattern represented by these numerical values is used as the input of the NNDTF. The output of the NNDTF, y(t), a floating-point number ranging from 1 to 9, represents the position where the marker should be placed. In order to make a legal move (placing a marker on an empty grid), the marker is placed on the empty grid whose grid number is closest to the output of the network.
To perform the training, we have to determine the parameters to be trained and the fitness function describing the problem's objective. The parameters of the modified network to be tuned are the link weights and transfer-function parameters for all i, j, l, which are chosen as the genes of the chromosome for the GA. In the fitness function, the maximum final score among all the empty grids for the t-th training pattern x(t) is used for normalization. The GA maximizes the fitness value (which ranges from 0 to 1) so as to force the output of the NNDTF to the grid number having the largest final score, ensuring the best move.
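The exact fitness equation is lost in this copy. A sketch consistent with the description, a value in [0, 1] that is maximal when the network always picks the grid with the largest final score, might look like this, where net is the network, patterns are the training boards, and score_fn is the grid-scoring function of Section 2 (all names are illustrative).

```python
def fitness(net, patterns, score_fn):
    """Assumed form of the fitness: the average, over training patterns,
    of the score of the move chosen by the network normalised by the best
    achievable score, so the value lies in [0, 1]."""
    total = 0.0
    for x in patterns:
        y = net(x)                                   # floating-point grid number in [1, 9]
        empties = [g for g in range(1, 10) if x[g - 1] == 0.5]
        g = min(empties, key=lambda e: abs(e - y))   # nearest legal grid (legal-move rule)
        scores = {e: score_fn(x, e) for e in empties}
        total += scores[g] / max(scores.values())    # 1.0 iff the best grid was chosen
    return total / len(patterns)
```

Under this form, a trained fitness of 0.9605 would mean the network's moves score, on average, 96% of the best available move's score over the training patterns.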

Example
In this section, a 9-input-1-output NNDTF is used for training. The number of hidden nodes is chosen to be 8. 100 training patterns are used for training over 50000 iterations. The population size, the probability of acceptance p_a, and w are chosen to be 10, 0.5 and 0.1, respectively. The upper and lower bounds of each parameter are 1 and -1, respectively, and the initial values of the parameters are generated randomly. After training, the fitness value obtained is 0.9605.
For comparison, a traditional 3-layer feed-forward NN [17] trained by a GA with arithmetic crossover and non-uniform mutation [17] is also applied under the same conditions to learn the gaming strategy of Section 2. The probabilities of crossover and mutation are selected to be 0.8 and 0.1, respectively, and the shape parameter of the traditional GA for non-uniform mutation [17] is selected to be 5. These parameters are selected by trial and error for the best performance. After training for 50000 iterations, the fitness value obtained is 0.9456.
To test the performance of the proposed method, the trained NNDTF plays Tic-Tac-Toe against the trained traditional NN for 50 games. The first 25 grid patterns, generated randomly with 2 "O"s and 2 "X"s, are the same as the next 25 grid patterns. In the first 25 games, the proposed approach moves first; in the second 25 games, the traditional approach moves first. The results are tabulated in Table 1. It can be seen that the proposed approach performs better: the number of wins is 18 using the NNDTF but only 13 using the traditional NN.

Conclusions
A neural network with double transfer functions, trained by a genetic algorithm, has been proposed, together with an algorithm for playing Tic-Tac-Toe. A new transfer function for the neuron with a node-to-node relationship has been proposed. The proposed neural network is trained by the genetic algorithm to learn the algorithm of playing Tic-Tac-Toe. As a comparison, the trained network has played against a traditional NN trained by a traditional GA. The results have shown that the proposed approach performs better.

Figure 2. Model of the proposed neuron.


The transfer function of the output node is a static activation transfer function, parameterized by the mean and the standard deviation of the output-node activation transfer function. The parameters of the NNDTF can be trained by GA [11].

Figure 7. Procedure of the modified GA.
Equations (24) and (25) will move the offspring near the domain boundary. For training, 100 different training patterns (obtained based on the proposed gaming algorithm stated in Section 2) are fed into the NNDTF. In the fitness function, y(t) denotes the output of the NNDTF with the t-th training pattern x(t) as the input, together with the score of grid y(t) for the t-th training pattern.

τ ← 0
generate and evaluate the initial population P(τ)
while the termination condition is not reached
    τ ← τ + 1
    select 2 parents p1 and p2 from P(τ-1)
    perform the crossover operation according to (23) to (28)
    perform the mutation operation according to (30) to produce three offspring nos1, nos2 and nos3
    // reproduce a new P(τ)
    if random number < p_a
        the one among nos1, nos2 and nos3 with the largest fitness value replaces the chromosome with the smallest fitness value in the population
    else
        if f(nos1) > smallest fitness value in P(τ-1)
            nos1 replaces the chromosome with the smallest fitness value
        if f(nos2) > smallest fitness value in the updated P(τ-1)
            nos2 replaces the chromosome with the smallest fitness value
        if f(nos3) > smallest fitness value in the updated P(τ-1)
            nos3 replaces the chromosome with the smallest fitness value
    evaluate P(τ)
end