A Reinforcement Learning System for Dynamic Movement and Multi-Layer Environments

Many policy-improvement systems have been proposed for Reinforcement Learning (RL) agents that adapt quickly to environmental change by using statistical methods such as mixture models of Bayesian networks, mixture probability, and clustering distributions. However, such methods increase the computational complexity. Moreover, adaptation performance in more complex settings, such as multi-layer environments, is also required. In this study, we used the profit-sharing method for the agent to learn its policy, and added a mixture probability to the RL system so that it recognizes changes in the environment and appropriately improves the agent's policy to adjust to them. We also introduced a clustering method that enables a smaller, suitable selection of elements in order to reduce the computational complexity while maintaining the system's performance. The experimental results show that the agent successfully learned the policy and efficiently adjusted to changes in a multi-layer environment. Moreover, with the proposed system, both the computational complexity and the decline in the effectiveness of policy improvement were kept under control.


Introduction
Along with the increasing need for rescue robots in disasters such as earthquakes and tsunamis, there is an urgent need to develop robotic software that can learn and adapt to any environment. Reinforcement Learning (RL) is often used in developing such software. RL is an area of machine learning within the computer science domain, and many RL methods have recently been proposed and applied to a variety of problems [1]-[4], in which agents learn policies that maximize the total reward assigned according to specific rules. In the process whereby agents obtain rewards, data consisting of state-action pairs are generated. The agents' policies are effectively improved by a supervised learning mechanism that uses the sequential expression of the stored data series and rewards.
Normally, RL agents need to initialize their policies when they are placed in a new environment, and the learning process starts afresh each time. Effective adjustment to an unknown environment becomes possible by using statistical methods, such as a Bayesian network model [5] [6] or mixture probability and clustering distributions [7] [8], which draw on observational data from multiple environments that the agents have learned in the past [9] [10]. However, the use of a mixture model of Bayesian networks increases the system's calculation time, and when processing resources are limited, it becomes necessary to control the computational complexity. On the other hand, with mixture probability and clustering distributions, even though the computational complexity was controlled while the system's performance was maintained, the experiments were only conducted in fixed-obstacle 2D environments. Therefore, examination of the computational load and the adaptation performance in dynamic 3D environments is required.
In this paper, we describe modifications of the profit-sharing method, with new parameters, that make it possible to handle dynamic movement in multi-layer environments. We then describe a mixture probability, built by integrating observational data on environments that the agent learned in the past within the framework of RL, which provides initial knowledge to the agent and enables efficient adjustment to a changing environment. We also describe a novel clustering method that makes it possible to select fewer elements, significantly reducing the computational complexity while retaining the system's performance.
The paper is organized as follows. Section 2 briefly explains the profit-sharing method, the mixture probability, the clustering distributions, and the flow of the system. The experimental setup and procedure, as well as the results, are described in Section 3. Finally, Section 4 summarizes the key points and mentions our future work.

Profit-Sharing
Profit-sharing is the RL method used as the policy learning mechanism in our proposed system. RL agents learn their own policies through "rewards" received from the environment.

2D-Environments
The policy is given by the following function:

π : S → A

where S and A denote the sets of states and actions, respectively. The value w(s, a) is used as the weight of the rule (s, a) (w(s, a) is positive in this paper). When state s is observed, a rule is selected in proportion to its weight w(s, a); the agent selects a single rule for the given state s with the following probability:

P(a | s) = w(s, a) / Σ_{a'∈A} w(s, a')

The agent stores the sequence of all rules that were selected until the agent reaches the target as an episode:

e = (s_1, a_1), (s_2, a_2), ..., (s_L, a_L)

where L is the length of the episode. When the agent selects rule (s_L, a_L) and acquires reward r, the weight of each rule in the episode is reinforced by

w(s_i, a_i) ← w(s_i, a_i) + f(i),   f(i) = γ^{L−i} r

where f(i) is referred to as the reinforcement function and γ ∈ (0, 1] is the learning rate. In this paper, the following nonfixed reward is used:

r = r_0 (t − n) / t

where r_0 is the initial reward, t is the action-number limit in one trial, and n is the actual number of actions taken until the agent reaches the target. We expect that, by using this nonfixed reward, the agent can choose more suitable rules for reaching the target in a dynamic environment.
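The profit-sharing update described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names (`weights`, `select_action`, `reinforce`), the parameter values, and the assumption that the nonfixed reward takes the form r_0(t − n)/t (i.e. shorter trials earn a larger reward) are ours.

```python
import random

GAMMA = 0.8     # learning rate gamma in (0, 1]; value assumed
R0 = 100.0      # initial reward r0; value assumed
T_LIMIT = 200   # action-number limit t in one trial

weights = {}    # w(s, a), kept strictly positive; unseen rules default to 1.0

def w(s, a):
    return weights.get((s, a), 1.0)

def select_action(s, actions):
    # Roulette selection: P(a | s) = w(s, a) / sum_a' w(s, a')
    total = sum(w(s, a) for a in actions)
    r = random.uniform(0.0, total)
    for a in actions:
        r -= w(s, a)
        if r <= 0.0:
            return a
    return actions[-1]

def reinforce(episode, n):
    # episode = [(s_1, a_1), ..., (s_L, a_L)]; n = actions actually used.
    # Assumed nonfixed reward: r = r0 * (t - n) / t.
    reward = R0 * (T_LIMIT - n) / T_LIMIT
    L = len(episode)
    for i, (s, a) in enumerate(episode, start=1):
        # Reinforcement function f(i) = gamma^(L - i) * r
        weights[(s, a)] = w(s, a) + GAMMA ** (L - i) * reward
```

The rule selected last (closest to the target) receives the full reward, and earlier rules receive geometrically discounted credit.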

3D-Environments
In a multi-layer (3D) environment, the weight w(s, a) becomes w(z, s, a), where z = 1, ..., n and n is the number of layers in this paper. The selection probability of a rule becomes

P(a | z, s) = w(z, s, a) / Σ_{a'∈A} w(z, s, a')

and the episode becomes

e = (z_1, s_1, a_1), (z_2, s_2, a_2), ..., (z_L, s_L, a_L).

For movement along z, we set a pseudo-reward [11], which is acquired when the agent moves to the next layer, and the weights w(z_i, s_i, a_i) are updated with the reinforcement function in the same way as in the 2D case.
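A minimal sketch of this 3D extension, with the weight table indexed by layer z. The pseudo-reward value and the exact update rule are assumptions, since the paper's equations for this section did not survive extraction; we reuse the same geometric reinforcement as in the 2D case.

```python
GAMMA = 0.8        # learning rate; value assumed
PSEUDO_R = 10.0    # pseudo-reward for reaching the next layer; value assumed

weights3d = {}     # w(z, s, a); unseen rules default to 1.0

def w3(z, s, a):
    return weights3d.get((z, s, a), 1.0)

def prob(z, s, a, actions):
    # P(a | z, s) = w(z, s, a) / sum_a' w(z, s, a')
    return w3(z, s, a) / sum(w3(z, s, b) for b in actions)

def reinforce_layer(episode, z):
    # Reinforce the sub-episode on layer z with the pseudo-reward once the
    # agent moves to the next layer (e.g. reaches a sub-target).
    L = len(episode)
    for i, (s, a) in enumerate(episode, start=1):
        weights3d[(z, s, a)] = w3(z, s, a) + GAMMA ** (L - i) * PSEUDO_R
```

Each layer thus accumulates its own weights, so the policy learned for reaching a sub-target on one layer is not disturbed by learning on another.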

Ineffective Rule Suppression
As shown in Figure 1, the agent selects a rule on layer z_1 and then moves to layer z_2. When the agent selects rules on z_2 and finally moves back to state (z_1, s_L), the rules selected on z_2 become detour rules, which may not contribute to the acquisition of the reward; such detour rules are called ineffective rules [12] [13].
An ineffective rule has a further negative effect: such rules may continue to be selected repeatedly during movement along z, leaving the agent unable to escape from that situation, which can cause policy learning to stagnate. For these reasons, suppression of ineffective rules becomes necessary.
In this paper, we use the following method to suppress ineffective rules. Let L_i be the length of the sub-episode e_i and L_C be a fixed threshold for determining ineffective rules. When L_i exceeds L_C, all rules in e_i are judged to be ineffective. All rules in e_i, together with the final rule that led into the detour, are then excluded from the episode e, as shown in Figure 1.
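The detour-removal idea can be sketched as follows. This is our own simplified reading, since the paper's exact condition involving L_C did not survive extraction: here, whenever the agent returns to an already-visited (z, s) state, the rules selected in between are treated as detour (ineffective) rules and dropped from the episode before reinforcement.

```python
def remove_detours(episode):
    """episode: list of (z, s, a) rules; returns the episode with detour loops removed."""
    kept = []
    seen = {}  # (z, s) -> index of its occurrence in kept
    for (z, s, a) in episode:
        key = (z, s)
        if key in seen:
            # The agent returned to an already-visited state: the rules
            # selected in between (plus the rule that entered the loop)
            # are detour rules and are excluded.
            cut = seen[key]
            for k in [k for k, v in seen.items() if v >= cut]:
                del seen[k]
            kept = kept[:cut]
        seen[key] = len(kept)
        kept.append((z, s, a))
    return kept
```

Running this before the weight update prevents looping rules from being reinforced, which is the stagnation effect the text warns about.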

Mixture Probability
Mixture probability is a mechanism for recognizing changes in the environment and consequently improving the agent's policy to adjust to those changes.
The joint distribution [14] P(z, s, a), constructed from the episodes observed while learning an agent's policy, is probabilistic knowledge about the environment. Furthermore, the policy acquired by the agent is improved by using a mixture of the distributions P_i(z, s, a) obtained in multiple known environments. The mixture distribution is given by the following function:

P_mix(z, s, a) = Σ_{i=1}^{m} β_i P_i(z, s, a)

where m denotes the number of joint distributions and β_i is the mixing parameter (0 ≤ β_i ≤ 1, Σ_i β_i = 1). By adjusting the policy subject to this mixing parameter, we expect appropriate improvement of the policy in an unknown dynamic environment.
In this paper, we use the following Hellinger distance [15] to fix the mixing parameter:

D_H(P_i, Q) = sqrt( Σ_x ( sqrt(P_i(x)) − sqrt(Q(x)) )² )

where D_H is the distance between P_i and Q, and D_H is 0 when P_i and Q are the same. The P_i are the joint distributions obtained in m different environments that the agent has learned in the past, Q is the sample distribution obtained from τ successful trials in the unknown environment, and x ranges over the total number of rules. Given that D_H(P_i, Q) ≤ √2 holds, the mixing parameter can be fixed by the following function:

β_i = (√2 − D_H(P_i, Q)) / Σ_{j=1}^{m} (√2 − D_H(P_j, Q))

When all distances are equal, the mixing parameter is allotted evenly.
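The distance and mixing-parameter computation can be sketched as follows. We assume the common Hellinger form D_H(P, Q) = sqrt(Σ_x (√P(x) − √Q(x))²), whose maximum is √2 for probability distributions, and set β_i proportional to √2 − D_H(P_i, Q), normalized to sum to 1; the function names and the even fallback when every distance is maximal are our choices.

```python
import math

def hellinger(p, q):
    # p, q: dicts mapping rules (z, s, a) to probabilities.
    keys = set(p) | set(q)
    return math.sqrt(sum((math.sqrt(p.get(k, 0.0)) - math.sqrt(q.get(k, 0.0))) ** 2
                         for k in keys))

def mixing_parameters(ps, q):
    # beta_i proportional to sqrt(2) - D_H(P_i, Q), normalized.
    scores = [math.sqrt(2.0) - hellinger(p, q) for p in ps]
    total = sum(scores)
    if total == 0.0:                       # every distance maximal:
        return [1.0 / len(ps)] * len(ps)   # allot evenly
    return [s / total for s in scores]

def mixture(ps, betas):
    # P_mix(x) = sum_i beta_i * P_i(x)
    out = {}
    for p, b in zip(ps, betas):
        for k, v in p.items():
            out[k] = out.get(k, 0.0) + b * v
    return out
```

A known environment identical to the sample distribution thus receives the whole mixing weight, while one at maximal distance contributes nothing.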

Clustering Distributions
We expect that the computational complexity of the system can be controlled, while maintaining the effectiveness of policy learning, by selecting only suitable joint distributions as mixture probability elements on the basis of this clustering method.
In this study, we used the group average method as the clustering method. The distance between two clusters C_1 and C_2 is determined by the following function:

D(C_1, C_2) = (1 / (|C_1| |C_2|)) Σ_{P∈C_1} Σ_{P'∈C_2} D_H(P, P')

and from each cluster, the joint distribution with the smallest D_H(P, Q) is selected as a mixture probability element.
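A group-average agglomerative clustering over the joint distributions can be sketched as follows. Working from a precomputed pairwise Hellinger-distance matrix and merging down to a target number of clusters k are our assumptions; the representative-selection rule (smallest distance to the sample distribution Q) follows the text.

```python
def group_average_cluster(dist, n, k):
    """dist: n x n pairwise distance matrix; merge until k clusters remain."""
    clusters = [[i] for i in range(n)]

    def d(c1, c2):
        # Group average: mean of all pairwise distances between the clusters.
        return sum(dist[i][j] for i in c1 for j in c2) / (len(c1) * len(c2))

    while len(clusters) > k:
        # Merge the pair of clusters with the smallest group-average distance.
        a, b = min(((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
                   key=lambda ab: d(clusters[ab[0]], clusters[ab[1]]))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return clusters

def representatives(clusters, d_to_q):
    # From each cluster, pick the element closest to the sample distribution Q.
    return [min(c, key=lambda i: d_to_q[i]) for c in clusters]
```

Because only one representative per cluster enters the mixture, near-duplicate distributions are collapsed while the variety of elements is preserved.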

Flow System
The system framework is shown in Figure 2. The application of mixture probability and clustering distributions to improve the agent's policy proceeds as follows: in Step 1, the policy is learned in m environments by using the profit-sharing method to make the joint distributions P_i(z, s, a); the subsequent steps cluster those distributions and select the mixture probability elements, form the mixture according to the sample distribution obtained in the unknown environment, update the initial weights with it, and then continue learning the updated weights by using the profit-sharing method.

Experiments
We performed an experiment on the agent navigation problem to illustrate the improvement of the RL agent's policy through the modified parameters of the profit-sharing method and the mixture probability scheme. The purpose of this experiment was to evaluate the adjustment performance in an unknown dynamic 3D environment when the policy improvement is applied, and to evaluate the effectiveness of using the mixture probability.

Experimental Setup
The aim of the agent navigation problem is to arrive at the target from the default position in the environment where the agent is placed. In the experiment, the reward is obtained when the agent reaches the target while avoiding the obstacles in the environment, as shown in Figure 3.
The types of action and state are shown in Table 1 and Table 2, respectively. Table 1 shows the output actions of an agent in 8 directions, and Table 2 shows the 256 types of input state arising from the combinations of obstacles existing in those 8 directions. The 8 directions are top left, top, top right, left, right, bottom left, bottom, and bottom right. The agent has 2048 (8 actions × 256 states) rules in total, resulting from the combination of input states and output actions in a layer. The sizes of the agent, the target, and the environment are 1 × 1, 5 × 5, and 50 × 50, respectively.
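The 8-direction obstacle encoding above can be illustrated concretely: each of the 8 neighbouring cells contributes one bit, giving 2^8 = 256 input states, and with 8 output actions the agent has 8 × 256 = 2048 rules per layer. The bit ordering below is our assumption, chosen only for illustration.

```python
DIRS = ["top-left", "top", "top-right", "left",
        "right", "bottom-left", "bottom", "bottom-right"]

def encode_state(obstacles):
    """obstacles: set of direction names containing an obstacle. Returns 0..255."""
    state = 0
    for bit, name in enumerate(DIRS):
        if name in obstacles:
            state |= 1 << bit
    return state

# Example: obstacles above and to the right set bits 1 and 4.
state = encode_state({"top", "right"})   # -> 18
```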

Experimental Procedure
The agent learns the policy by using the profit-sharing method. A trial is considered successful if the agent reaches the target at least once within 200 action attempts. Actions are selected at random, and a selected action continues until the state changes. The purpose of the experiment is to learn the policy in the unknown dynamic environments E_A, E_B, and E_C in three cases (fixed-obstacle, periodic dynamic, and nonperiodic dynamic environments), employing either the profit-sharing method alone or the mixture probability scheme (the elements are m and n); the evaluation is based on the success rate over 2000 trials. The experimental parameters are shown in Table 3. Some of the known environments that became mixture probability elements, and the unknown dynamic environments (E_A, E_B, E_C) used to evaluate the policy improvement, are shown in Figure 4 and Figure 5, respectively.

Discussion
The success rates of policy improvement in E_A, E_B, and E_C, using only the profit-sharing method and using mixture probabilities with clustering, are shown in Figure 6; the processing times from Step 3 (system flow) to the end of the experiment, in the cases using all 50 elements and using only 35, 25, and 15 elements, are shown in Table 4. Figure 6 shows that the immediate success rate obtained by policy improvement is higher than that obtained by the profit-sharing method alone in all environments. This means that adaptation to the unknown environment is faster, and the higher success rate continues until the end of the experiments. The success rate with policy improvement exceeds that of the profit-sharing method alone by more than 20% in E_A and E_C, and by more than 30% in E_B, so we can say that the policy improvement is effective in all environments. Even the success rate using only 15 elements is higher than that using only the profit-sharing method, but it is still lower than the results using 25 and 35 elements; hence, reducing the number of elements too far has an apparent influence on policy improvement in all environments. However, although the success rate using all 50 elements was the highest, the rate obtained using 25 elements was almost the same, so the decline in effectiveness can still be controlled even when the number of mixture probability elements is halved. Furthermore, the results in Table 4 show that reducing the number of elements reduced the processing time considerably. Hence, by using 25 elements, we can reduce the processing time without a decline in policy-improvement performance.
Figure 7 shows typical trajectories of the agent following the policy acquired while selecting data in environment E_C, in the cases of trials 1 - 500, 501 - 1000, and 1001 - 2000. The intensity of color (from light red to dark red) shows the frequency of the agent's trajectories when it reached the target in each layer.
These results show that, in the first 500 trials, the agent reached all sub-targets in the top layer. However, because reaching the next sub-target was most difficult for an agent starting from sub-target 1, the number of times the agent reached sub-target 1 decreased in trials 501 - 1000, and in trials 1001 - 2000 the agent almost always reached sub-targets 2 and 3 instead. In the middle layer, the agent likewise reached all sub-targets in the first 500 trials; but because an agent starting from sub-target 5 could reach the final goal more easily, the frequency of trajectories from sub-target 5 to the final goal clearly increased as the number of trials grew.
From these typical trajectories, we can say that, by using the pseudo-reward, the agent chooses more suitable rules to reach the target in each layer: even if some layers sometimes became more difficult to traverse, the final goal became easier to reach.

Supplemental Experiments
These experiments were conducted to compare the performance of the policy improvement in the cases of a fixed obstacle, periodic dynamic movement, and nonperiodic dynamic movement in E_A and E_B, using 25 elements. The experiments for the periodic and nonperiodic cases were conducted five times with the same parameters.

Discussion
The results of policy improvement using 25 mixture probability elements in the three cases are shown in Figure 8, and the results of the five sets of experiments with periodic and nonperiodic dynamic movement are shown in Figure 9, respectively.
Figure 8 shows that, in the periodic dynamic movement case, the success rate showed almost no difference in the early period compared with the fixed-obstacle case in both E_A and E_B, and remained comparably high until the end of the experiments. In the nonperiodic dynamic movement case, the success rate in E_B likewise showed almost no difference, and was sometimes even higher, compared with the fixed-obstacle case. However, as shown in Figure 9, even though the experiments were conducted with the same parameters, the results of the nonperiodic case in E_A were quite low compared with the periodic case, and the results of the nonperiodic case were unstable in both E_A and E_B. From these results, we can deduce that the agent successfully learns the policy in a periodic dynamic movement environment and can reach the target more easily when the obstacle moves out of its trajectory, as in E_B. Conversely, when the obstacle moves into the trajectory, it becomes more difficult for the agent to reach the target.

Conclusions
In this research, we used the joint distributions P_i(z, s, a) as knowledge, together with the sample distribution Q, to find the degree of similarity between the unknown environment and each known environment. We then used this as the basis for updating the initial knowledge, which proves very useful for the agent when learning the policy in a changing environment.
Even if obtaining the sample distribution is time-consuming, it is still worthwhile if the agent can efficiently learn the policy in an unknown dynamic environment. Also, by using the clustering method to collect similar elements and then selecting just one suitable joint distribution from each cluster as a mixture probability element, we avoid using similar elements and maintain a variety of elements while reducing their number.
The results of the computer experiment, an example application to the agent navigation problem, confirm that policy improvement using the mixture probabilities is effective in dynamic movement environments. Furthermore, by using the pseudo-reward, the agent can select suitable rules to reach the target in a multi-layer environment, and the decline in the effectiveness of the policy improvement can be controlled by using the clustering method. We conclude that our proposed system effectively improves the stability and speed of policy learning while controlling the computational complexity.
In future work, improvement of the RL policy using a mixture probability with positive and negative weight values is required, to make the system adaptable to unknown environments that are not similar to any known environment. Finally, a new reward process, as well as a new mixing parameter, is needed for the agent to adjust to a changing environment more efficiently and to work well in any complicated environment.

Figure 3. Environment of the agent navigation problem.

Figure 4. Some of the known environments.

Figure 7. Typical agent trajectories in E_C.

Table 1. Types of action.

Table 2. Some types of state.