

Patent: Learning Apparatus And Learning Method

Publication Number: 20190244133

Publication Date: 20190808

Applicants: Sony

Abstract

There is provided a learning apparatus and a learning method that allow a reinforcement learning model to be easily corrected on the basis of user input. A display control section causes a display section to display reinforcement learning model information regarding a reinforcement learning model. A correcting section corrects the reinforcement learning model on the basis of user input to the reinforcement learning model information. The present disclosure can be applied to, for example, a personal computer (PC) and the like that correct a reinforcement learning model on the basis of input from a user and perform reinforcement learning of a movement policy of an agent using the corrected reinforcement learning model.

TECHNICAL FIELD

[0001] The present disclosure relates to a learning apparatus and a learning method, and particularly relates to a learning apparatus and a learning method that allow a reinforcement learning model to be easily corrected on the basis of user input.

BACKGROUND ART

[0002] There are reinforcement learning models that learn, when an agent, an environment, an action, and a reward are given, a policy for maximizing the reward (see NPL 1, for example).

CITATION LIST

Non Patent Literature

[NPL 1]

[0003] “Maximum Entropy Inverse Reinforcement Learning,” Brian D. Ziebart, Andrew Maas, J. Andrew Bagnell, and Anind K. Dey, Association for the Advancement of Artificial Intelligence (AAAI), Jul. 13, 2008.

SUMMARY

Technical Problem

[0004] However, no technique has been devised that allows a reinforcement learning model to be easily corrected on the basis of user input.

[0005] The present disclosure has been made in view of the foregoing situation and allows a reinforcement learning model to be easily corrected on the basis of user input.

Solution to Problem

[0006] A learning apparatus according to one aspect of the present disclosure includes: a display control section configured to cause a display section to display reinforcement learning model information regarding a reinforcement learning model; and a correcting section configured to correct the reinforcement learning model on a basis of user input to the reinforcement learning model information.

[0007] A learning method according to one aspect of the present disclosure corresponds to a learning apparatus according to one aspect of the present disclosure.

[0008] According to one aspect of the present disclosure, reinforcement learning model information regarding a reinforcement learning model is displayed on a display section, and the reinforcement learning model is corrected on the basis of user input to the reinforcement learning model information.

[0009] It is noted that the learning apparatus according to one aspect of the present disclosure can be implemented by causing a computer to execute a program.

[0010] Further, in order to implement the learning apparatus according to one aspect of the present disclosure, the program to be executed by the computer can be provided by transmitting the program via a transmission medium or by recording the program on a recording medium.

Advantageous Effect of Invention

[0011] According to one aspect of the present disclosure, a reinforcement learning model can be easily corrected on the basis of user input.

[0012] It is noted that the effects described herein are not necessarily limitative, and any of the effects described in the present disclosure may be provided.

BRIEF DESCRIPTION OF DRAWINGS

[0013] FIG. 1 is a block diagram depicting an example of a configuration of a first embodiment of a PC as a learning apparatus to which the present disclosure is applied.

[0014] FIG. 2 is a diagram for describing an environment map.

[0015] FIG. 3 is another diagram for describing the environment map.

[0016] FIG. 4 is a diagram depicting an example of the environment map on which policy information has been superimposed.

[0017] FIG. 5 is a diagram for describing a first method of teaching a movement policy.

[0018] FIG. 6 is another diagram for describing the first method of teaching the movement policy.

[0019] FIG. 7 is a diagram for describing a second method of teaching a movement policy.

[0020] FIG. 8 is a flowchart for describing a movement policy learning process of the PC in FIG. 1.

[0021] FIG. 9 is a flowchart for describing a correction process in FIG. 8.

[0022] FIG. 10 is a block diagram depicting an example of a configuration of a second embodiment of a PC as a learning apparatus to which the present disclosure is applied.

[0023] FIG. 11 is a diagram depicting an example of an environment map on which reward function information has been superimposed.

[0024] FIG. 12 is a diagram for describing a method of teaching a reward function.

[0025] FIG. 13 is a flowchart for describing a movement policy learning process of the PC in FIG. 10.

[0026] FIG. 14 is a flowchart for describing a correction process in FIG. 13.

[0027] FIG. 15 is a diagram depicting another example of an environment map on which policy information of a movement policy has been superimposed.

[0028] FIG. 16 is a block diagram depicting an example of a configuration of hardware of a computer.

DESCRIPTION OF EMBODIMENTS

[0029] Hereinafter, modes for carrying out the present disclosure (hereinafter referred to as embodiments) will be described. It is noted that description will be given in the following order.

[0030] 1. First Embodiment: Personal Computer (PC) (FIGS. 1 to 9)

[0031] 2. Second Embodiment: Personal Computer (PC) (FIGS. 10 to 14)

[0032] 3. Third Embodiment: VR (Virtual Reality) device (FIG. 15)

[0033] 4. Fourth Embodiment: Computer (FIG. 16)

First Embodiment

[0034] (Example of Configuration of First Embodiment of PC)

[0035] FIG. 1 is a block diagram depicting an example of a configuration according to the first embodiment of a PC as a learning apparatus to which the present disclosure is applied.

[0036] A PC 10 in FIG. 1 includes an environment setting section 11, an initialization section 12, a learning section 13, a display control section 14, a display section 15, a receiving section 16, and a correcting section 17. The PC 10 includes a computer, for example, and performs reinforcement learning of a movement policy of an agent.

[0037] Specifically, in a case where the agent exists in a virtual world such as a simulation, the environment setting section 11 of the PC 10 builds a surrounding environment of the agent in the virtual world on the basis of an operation environment file and the like of the agent. Then, the environment setting section 11 generates an environment map (environment information). The environment map is a GUI (Graphical User Interface) image depicting the surrounding environment.

[0038] By contrast, in a case where the agent is a robot or the like that exists in the real world, the environment setting section 11 generates an environment map of a surrounding environment of the agent on the basis of data observed by various sensors of the agent in the real world. The environment setting section 11 supplies the generated environment map to the display control section 14.

[0039] On the basis of an initial value of a value function or a movement policy supplied from the receiving section 16, the initialization section 12 initializes a reinforcement learning model that learns the movement policy of the agent. At this time, an initial value of a reward function used for the reinforcement learning model is also set. Here, although a reward function model is assumed to be a linear basis function model that performs a weighted addition on a predetermined reward basis function group selected from a reward basis function group registered in advance, the reward function model is not limited thereto. The initialization section 12 supplies the initialized reinforcement learning model to the learning section 13.
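As a non-limiting illustration of the linear basis function model mentioned above, the following sketch (written in Python, which the present disclosure does not prescribe) represents a reward function as a weighted sum of selected reward basis functions φ_i(s, a). The example basis functions, the goal coordinates, and the `add_basis` helper are illustrative assumptions rather than elements of the patent.

```python
import numpy as np

class LinearRewardModel:
    """Reward model r(s, a) = sum_i w_i * phi_i(s, a) over selected basis functions.

    Hypothetical illustration of the linear basis function model described above;
    the basis functions and weights are placeholders, not the registered group
    referred to in the text.
    """

    def __init__(self, basis_functions, weights=None):
        self.basis_functions = list(basis_functions)          # callables phi_i(s, a)
        self.weights = (np.zeros(len(self.basis_functions))
                        if weights is None else np.asarray(weights, dtype=float))

    def features(self, s, a):
        return np.array([phi(s, a) for phi in self.basis_functions])

    def reward(self, s, a):
        return float(self.weights @ self.features(s, a))

    def add_basis(self, phi, weight=0.0):
        """Add a reward basis function phi_{n+1}, as done later during correction."""
        self.basis_functions.append(phi)
        self.weights = np.append(self.weights, weight)


# Example: two toy basis functions over grid states s = (x, y); the action a is ignored.
goal = (4, 4)
phi_goal = lambda s, a: 1.0 if s == goal else 0.0                    # fires only at the goal grid
phi_dist = lambda s, a: -abs(s[0] - goal[0]) - abs(s[1] - goal[1])   # negative Manhattan distance

model = LinearRewardModel([phi_goal, phi_dist], weights=[1.0, 0.1])
print(model.reward((2, 3), None))
```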

[0040] The learning section 13 optimizes the reinforcement learning model supplied from the initialization section 12 or the correcting section 17, and learns the movement policy on the basis of the optimized reinforcement learning model. The learning section 13 supplies the optimized reinforcement learning model to the correcting section 17 and supplies the learned movement policy to the display control section 14. Further, the learning section 13 outputs a final learning result of the movement policy. In addition, the learning section 13 holds the learned movement policy if necessary.
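The optimization performed by the learning section 13 is not specified beyond the MDP-based methods mentioned later in step S39; as one hedged sketch, tabular value iteration on a grid environment can stand in for it. The grid layout, deterministic transitions, and greedy policy extraction here are simplifying assumptions.

```python
import numpy as np

def value_iteration(rewards, passable, gamma=0.95, iters=200):
    """Tabular value iteration on a grid: rewards[y, x] is the per-cell reward and
    passable[y, x] marks cells the agent may enter. Returns the value function and
    a greedy policy (one move (dy, dx) per cell). A simplified stand-in for the
    optimizer of the learning section, not the patent's prescribed method."""
    H, W = rewards.shape
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    V = np.zeros((H, W))
    for _ in range(iters):
        V_new = V.copy()
        for y in range(H):
            for x in range(W):
                if not passable[y, x]:
                    continue
                best = -np.inf
                for dy, dx in actions:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and passable[ny, nx]:
                        best = max(best, rewards[ny, nx] + gamma * V[ny, nx])
                if best > -np.inf:
                    V_new[y, x] = best
        V = V_new
    # Greedy policy: move toward the neighbouring cell with the highest backed-up value.
    policy = {}
    for y in range(H):
        for x in range(W):
            if not passable[y, x]:
                continue
            scored = [
                (rewards[y + dy, x + dx] + gamma * V[y + dy, x + dx], (dy, dx))
                for dy, dx in actions
                if 0 <= y + dy < H and 0 <= x + dx < W and passable[y + dy, x + dx]
            ]
            if scored:
                policy[(y, x)] = max(scored)[1]
    return V, policy

# Usage on a toy 5x5 grid with a single rewarded goal cell.
passable = np.ones((5, 5), dtype=bool)
rewards = np.zeros((5, 5))
rewards[4, 4] = 1.0
V, policy = value_iteration(rewards, passable)
```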

[0041] The display control section 14 supplies the environment map supplied from the environment setting section 11 to the display section 15 and causes the display section 15 to display the environment map. Further, the display control section 14 generates policy information and the like as reinforcement learning model information regarding the reinforcement learning model. The policy information is a GUI image depicting the movement policy supplied from the learning section 13 or the correcting section 17. The display control section 14 superimposes the policy information and the like on the environment map. The display control section 14 supplies the policy information and the like superimposed on the environment map to the display section 15 and causes the display section 15 to display the policy information and the like superimposed on the environment map. In addition, the display control section 14 generates, if necessary, a selection screen for selecting whether or not to add a reward basis function. The display control section 14 supplies the selection screen to the display section 15 and causes the display section 15 to display the selection screen.

[0042] The receiving section 16 receives input from the user. For example, the receiving section 16 receives the initial value of the value function or the movement policy input from the user, and supplies the initial value of the value function or the movement policy to the initialization section 12. Further, the receiving section 16 receives, from the user who has seen the policy information and the like displayed on the display section 15, input of a movement path as indirect teaching of the movement policy with respect to the policy information, and supplies the movement path to the correcting section 17.

[0043] The correcting section 17 corrects the reinforcement learning model supplied from the learning section 13 so as to optimize the movement policy on the basis of the movement path supplied from the receiving section 16 according to various inverse reinforcement learning methods. At this time, the correcting section 17 adds the reward basis function of the reinforcement learning model if necessary. The method described in NPL 1 can be used as an inverse reinforcement learning method, for example.

[0044] When the surrounding environment of the agent is assumed to be M and the movement path supplied from the receiving section 16 is assumed to be Z_E(s, a), for example, optimization of a movement policy π is defined by the following equation (1).

[Math. 1]

π* = argmax_π P(Z_E | π, M)   (1)

[0045] It is noted that s represents a state of the agent such as the position of the agent, a represents an action of the agent, and P represents a probability.

[0046] In general, there are many movement policies π* that satisfy the equation (1) described above, and there are various problem setting methods to constrain the movement policies π* to one. In any of the problem setting methods, the reward function is also indirectly corrected while the movement policy π is being optimized. The correcting section 17 supplies the corrected reinforcement learning model to the learning section 13 and supplies the optimized movement policy to the display control section 14.
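The correcting section 17 may use any inverse reinforcement learning method; under the maximum-entropy formulation of NPL 1, one common choice is to move the reward weights toward the feature expectations of the taught path and away from those of the current policy's path. The sketch below shows one such update step under the linear reward model; the single-rollout approximation of the policy's feature expectations and the fixed learning rate are assumptions for illustration, not the patent's prescribed procedure.

```python
import numpy as np

def feature_expectations(path, basis_functions, gamma=0.95):
    """Discounted feature counts of a state-action path [(s0, a0), (s1, a1), ...]."""
    mu = np.zeros(len(basis_functions))
    for t, (s, a) in enumerate(path):
        mu += (gamma ** t) * np.array([phi(s, a) for phi in basis_functions])
    return mu

def correction_step(taught_path, policy_path, weights, basis_functions,
                    learning_rate=0.1, gamma=0.95):
    """One inverse-RL-style update: pull the reward weights toward the features of
    the path taught by the user and away from those of the current policy's path.
    A hedged stand-in for the correcting section; here the policy's feature
    expectations are approximated by a single rollout `policy_path`."""
    mu_taught = feature_expectations(taught_path, basis_functions, gamma)
    mu_policy = feature_expectations(policy_path, basis_functions, gamma)
    return weights + learning_rate * (mu_taught - mu_policy)
```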

[0047] (Description of Environment Map)

[0048] FIGS. 2 and 3 are diagrams for describing the environment map.

[0049] In the examples in FIGS. 2 and 3, a region 32 and a region 33 exist around an agent 31. The agent 31 is movable in the region 32, while the agent 31 is not movable in the region 33. In the movable region 32, a goal 34 and an obstacle 35 exist. A positive reward value is set in the goal 34. The obstacle 35 is an obstacle to the movement.

[0050] First, in this case, as depicted in FIG. 2, the environment setting section 11 generates a GUI image 30 that depicts a surrounding environment in two dimensions. The surrounding environment includes the agent 31, the region 32, the region 33, the goal 34, and the obstacle 35. Next, the environment setting section 11 divides the GUI image 30 into grids (lattice points) on the basis of a rectangular coordinate system of the reinforcement learning model and generates an environment map 50. Each of these grids serves as a unit of the reward function or probability density distribution of the reinforcement learning model.
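A minimal sketch of how the environment setting section 11 might rasterize the two-dimensional surroundings into grids is shown below; the rectangle-based region description and all coordinates are illustrative assumptions.

```python
import numpy as np

FREE, BLOCKED, OBSTACLE, GOAL = 0, 1, 2, 3

def build_environment_map(height, width, movable, obstacles, goal):
    """Rasterize a simple 2D environment into a grid map.
    `movable` and `obstacles` are lists of rectangles (y0, x0, y1, x1); `goal` is (y, x).
    Each grid cell later serves as the unit of the reward function and of the
    movement prediction distribution, as described above."""
    grid = np.full((height, width), BLOCKED, dtype=int)
    for y0, x0, y1, x1 in movable:
        grid[y0:y1, x0:x1] = FREE
    for y0, x0, y1, x1 in obstacles:
        grid[y0:y1, x0:x1] = OBSTACLE
    gy, gx = goal
    grid[gy, gx] = GOAL
    return grid

# Illustrative layout: a movable band with one obstacle and a goal (coordinates are made up).
env_map = build_environment_map(
    height=10, width=10,
    movable=[(0, 0, 10, 6)],      # region in which the agent can move
    obstacles=[(4, 2, 6, 4)],     # obstacle inside the movable region
    goal=(0, 5),
)
```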

[0051] (Example of Environment Map on which Policy Information has been Superimposed)

[0052] FIG. 4 is a diagram depicting an example of the environment map on which policy information of a movement policy has been superimposed. The movement policy has been learned by the reinforcement learning model before correction by the correcting section 17.

[0053] As depicted in FIG. 4, in a case where the environment map 50 in FIG. 3 has been generated, the display control section 14 generates policy information 71. Using an arrow, the policy information 71 indicates a movement path based on the movement policy from the current position of the agent 31 to the goal 34. The movement policy has been learned by the reinforcement learning model before correction by the correcting section 17.

[0054] Further, the display control section 14 calculates, from the movement policy supplied from the learning section 13, probability density distribution (movement prediction distribution) of the agent 31 reaching the goal 34 in a case where the agent 31 exists in each grid. Then, the display control section 14 generates contour-line images 72 to 75. The contour-line images 72 to 75 are GUI images of contour lines of the probabilities of the movement prediction distribution. It is noted that the probabilities of the movement prediction distribution are high in order of the contour-line images 72, 73, 74, and 75.
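One hedged way to compute the movement prediction distribution described above is to propagate, under the current policy, the probability of reaching the goal from every grid cell; the contour-line images 72 to 75 can then be drawn as level sets of the result. Deterministic transitions and a finite horizon are simplifying assumptions in the sketch below.

```python
import numpy as np

def reach_probability(policy, passable, goal, horizon=100):
    """Probability that the agent starting in each grid cell reaches the goal
    within `horizon` steps under a stochastic policy.
    `policy[(y, x)]` maps an action (dy, dx) to its probability; transitions are
    assumed deterministic. A hedged sketch, not the patent's exact computation."""
    H, W = passable.shape
    P = np.zeros((H, W))
    P[goal] = 1.0
    for _ in range(horizon):
        P_new = P.copy()
        for (y, x), action_probs in policy.items():
            if (y, x) == goal:
                continue
            p = 0.0
            for (dy, dx), prob in action_probs.items():
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and passable[ny, nx]:
                    p += prob * P[ny, nx]
            P_new[y, x] = p
        P = P_new
    return P
```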

[0055] The display control section 14 superimposes the policy information 71 and the contour-line images 72 to 75 generated as described above on the environment map 50 and causes the display section 15 to display the policy information 71 and the contour-line images 72 to 75 superimposed on the environment map 50.

[0056] It is noted that, although the obstacle 35 is an obstacle to the movement, there is a possibility that as depicted in FIG. 4, the movement path based on the movement policy learned by the reinforcement learning model before correction is a path that passes through the obstacle 35 since the obstacle 35 exists in the movable region 32. Further, the contour-line images 72 to 75 do not need to be superimposed on the environment map 50.

[0057] (Description of First Method of Teaching Movement Policy)

[0058] FIGS. 5 and 6 are diagrams for describing a first method of teaching a movement policy with respect to the policy information 71 in FIG. 4.

[0059] In a case where the policy information 71 and the contour-line images 72 to 75 have been superimposed on the environment map 50 as depicted in FIG. 4, the user inputs a movement path 111. The movement path 111 extends from the current position of the agent 31 to the goal 34 without passing through the obstacle 35, as depicted in FIG. 5, for example. In this manner, the user teaches a movement policy corresponding to the movement path 111 as a desired movement policy.

[0060] In this case, the correcting section 17 corrects the reinforcement learning model so as to optimize the movement policy on the basis of the movement path 111, and supplies the optimized movement policy to the display control section 14. The display control section 14 generates policy information 121. Using an arrow, the policy information 121 indicates the movement path based on the movement policy supplied from the correcting section 17. Further, the display control section 14 calculates movement prediction distribution from the movement policy and generates contour-line images 122 to 125 of the probabilities of the movement prediction distribution. Then, as depicted in FIG. 6, the display control section 14 superimposes the policy information 121 and the contour-line images 122 to 125 on the environment map 50 and causes the display section 15 to display the policy information 121 and the contour-line images 122 to 125 superimposed on the environment map 50. It is noted that the probabilities of the movement prediction distribution are high in order of the contour-line images 122, 123, 124, and 125.

[0061] (Description of Second Method of Teaching Movement Policy)

[0062] FIG. 7 is a diagram for describing a second method of teaching a movement policy with respect to the policy information 71 in FIG. 4.

[0063] As depicted in FIG. 4, in a case where the policy information 71 and the contour-line images 72 to 75 have been superimposed on the environment map 50, the user inputs a movement path 131 as depicted in FIG. 7, for example. The movement path 131 is in the middle of the movement path extending from the current position of the agent 31 to the goal 34 without passing through the obstacle 35. In this manner, the user teaches a movement policy corresponding to the movement path 131 as a desired movement policy.

[0064] In this case, the correcting section 17 corrects the reinforcement learning model so as to optimize the movement policy corresponding to the movement path extending to the goal 34 through the movement path 131 on the basis of the movement path 131. The correcting section 17 supplies the optimized movement policy to the display control section 14. The display control section 14 generates policy information 141. Using an arrow, the policy information 141 indicates a path after the movement path 131, which is part of the movement path based on the movement policy supplied from the correcting section 17. Further, the display control section 14 calculates movement prediction distribution from the movement policy and generates contour-line images 142 to 145 of the probabilities of the movement prediction distribution.

[0065] Then, as depicted in FIG. 7, the display control section 14 superimposes the movement path 131, the policy information 141, and the contour-line images 142 to 145 on the environment map 50 and causes the display section 15 to display the movement path 131, the policy information 141, and the contour-line images 142 to 145 superimposed on the environment map 50. It is noted that the probabilities of the movement prediction distribution are high in order of the contour-line images 142, 143, 144, and 145.

[0066] Examples of a method of inputting the movement path 111 (131) include a method of inputting the locus of the movement path 111 (131) using a mouse, not depicted, a method of inputting the coordinates of a grid on the movement path, and the like.
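For the mouse-locus input method, a small hedged sketch of snapping the drawn locus to grid coordinates follows; the `cell_size` parameter and the duplicate-dropping behaviour are assumptions for illustration.

```python
def locus_to_grid_path(locus, cell_size):
    """Convert a continuous mouse locus [(px, py), ...] into the sequence of grid
    cells (y, x) it passes through, dropping consecutive duplicates.
    `cell_size` is the assumed pixel size of one grid cell."""
    path = []
    for px, py in locus:
        cell = (int(py // cell_size), int(px // cell_size))
        if not path or path[-1] != cell:
            path.append(cell)
    return path

# Example: a short drag across three cells of a 20-pixel grid.
print(locus_to_grid_path([(5, 5), (18, 6), (25, 6), (45, 8)], cell_size=20))
# [(0, 0), (0, 1), (0, 2)]
```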

[0067] (Description of Processes of PC)

[0068] FIG. 8 is a flowchart for describing a movement policy learning process of the PC 10 in FIG. 1.

[0069] In step S31 in FIG. 8, the environment setting section 11 of the PC 10 determines whether or not the agent exists in a virtual world. In a case where it has been determined in step S31 that the agent exists in the virtual world, the environment setting section 11 obtains an operation environment file and the like of the agent in step S32.

[0070] In step S33, the environment setting section 11 builds a surrounding environment of the agent in the virtual world on the basis of the operation environment file and the like of the agent that have been obtained in step S32, and generates an environment map of the surrounding environment. Then, the environment setting section 11 supplies the generated environment map to the display control section 14 and causes the process to proceed to step S36.

[0071] On the other hand, in a case where it has been determined in step S31 that the agent does not exist in the virtual world, that is, in a case where the agent exists in the real world, the process proceeds to step S34. In step S34, the environment setting section 11 obtains data observed by various sensors of the agent in the real world.

[0072] In step S35, the environment setting section 11 generates an environment map of the surrounding environment of the agent on the basis of the data obtained in step S34, supplies the environment map to the display control section 14, and causes the process to proceed to step S36.

[0073] In step S36, the display control section 14 supplies the environment map supplied from the environment setting section 11 to the display section 15 and causes the display section 15 to display the environment map.

[0074] In step S37, the receiving section 16 determines whether or not an initial value of a value function or a movement policy has been input. In a case where it has been determined in step S37 that the initial value of the value function or the movement policy has not been input yet, the receiving section 16 stands by until the initial value of the value function or the movement policy is input.

[0075] On the other hand, in a case where it has been determined in step S37 that the initial value of the value function or the movement policy has been input, the receiving section 16 receives the initial value of the value function or the policy input from the user and supplies the initial value of the value function or the policy to the initialization section 12. Then, in step S38, the initialization section 12 initializes the reinforcement learning model on the basis of the value function or the movement policy supplied from the receiving section 16. The initialization section 12 supplies the initialized reinforcement learning model to the learning section 13.

[0076] In step S39, the learning section 13 selects a method for optimizing the reinforcement learning model according to input from the user or the like. Examples of the optimization method include methods based on an MDP (Markov decision process) and the like.

[0077] In step S40, the learning section 13 optimizes (searches) the reinforcement learning model supplied from the initialization section 12 or the correcting section 17 according to the optimization method selected in step S39, and learns (improves) the movement policy on the basis of the optimized reinforcement learning model. The learning section 13 supplies the optimized reinforcement learning model to the correcting section 17. The learning section 13 supplies the learned movement policy to the display control section 14.

[0078] In step S41, the display control section 14 generates policy information and contour-line images on the basis of the movement policy supplied from the learning section 13, and superimposes the policy information and the contour-line images on the environment map.

[0079] In step S42, the display control section 14 supplies the environment map on which the policy information and the contour-line images have been superimposed to the display section 15, and causes the display section 15 to display the environment map.

[0080] In step S43, the receiving section 16 determines whether or not the user who has seen the policy information and the like displayed on the display section 15 has taught a movement policy with respect to the policy information. In a case where it has been determined in step S43 that the movement policy has been taught, the receiving section 16 receives input of a movement path as teaching of the movement policy, supplies the movement path to the correcting section 17, and causes the process to proceed to step S44.

[0081] In step S44, the correcting section 17 performs a correction process of correcting the reinforcement learning model supplied from the learning section 13 on the basis of the movement path supplied from the receiving section 16. The details of this correction process will be described with reference to FIG. 9 described later.

[0082] In step S45, the PC 10 determines whether or not to end the process. For example, in a case where the reinforcement learning model has converged or in a case where the user has given an end instruction, the PC 10 determines to end the process in step S45. Then, the learning section 13 outputs the current movement policy as a final learning result and ends the process.

[0083] On the other hand, in a case where the reinforcement learning model has not converged yet and the user has not given any end instruction, the PC 10 determines not to end the process in step S45 and returns the process to step S40.

[0084] Further, in a case where it has been determined in step S43 that the movement policy has not been taught, the process returns to step S40.

[0085] It is noted that the process in the first step S40 may be started in a case where the user has given an instruction to start optimization (search).

[0086] FIG. 9 is a flowchart for describing the correction process in step S44 in FIG. 8.

[0087] In step S51 in FIG. 9, the correcting section 17 corrects the reinforcement learning model supplied from the learning section 13 by solving a policy optimization problem of the reinforcement learning model on the basis of the movement path supplied from the receiving section 16 according to various inverse reinforcement learning methods. The correcting section 17 supplies the optimized movement policy to the display control section 14.

[0088] In step S52, the display control section 14 generates policy information and contour-line images on the basis of the movement policy supplied from the correcting section 17, and superimposes the policy information and the contour-line images on the environment map.

[0089] In step S53, the display control section 14 supplies the environment map on which the policy information and the contour-line images have been superimposed to the display section 15, and causes the display section 15 to display the environment map.

[0090] In step S54, the correcting section 17 determines whether or not to add, as a reward basis function φ_{n+1}(s, a), a reward basis function among the reward basis function group registered in advance. The reward basis function is not any of the n reward basis functions φ_1(s, a) to φ_n(s, a) used in the corrected reinforcement learning model.

[0091] For example, the correcting section 17 sequentially adds, as the reward basis function φ_{n+1}(s, a), each reward basis function other than the reward basis functions φ_1(s, a) to φ_n(s, a) among the reward basis function group registered in advance. It is noted that the reward basis function φ_i may be a reward basis function φ_i(s) that depends only on a state s. On the basis of the movement path supplied from the receiving section 16, the correcting section 17 solves the policy optimization problem of the reinforcement learning model to which the reward basis function φ_{n+1}(s, a) has been added.

[0092] As a result, in a case where there is at least one reward basis function φ_{n+1}(s, a) whose objective function corresponding to the problem setting has been improved compared to the reinforcement learning model before addition, the correcting section 17 determines, in step S54, to add the reward basis function φ_{n+1}(s, a) whose objective function has been improved most. On the other hand, in a case where there is no reward basis function φ_{n+1}(s, a) whose objective function has been improved, the correcting section 17 determines in step S54 not to add any reward basis function φ_{n+1}(s, a).
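The candidate-selection loop of paragraphs [0091] and [0092] can be sketched as follows. `solve_policy_optimization` is a hypothetical stand-in for whichever inverse reinforcement learning solver is used in step S51 and is assumed to return the corrected model together with a scalar objective value (higher taken as better); the model is assumed to expose an `add_basis()` method (as in the earlier sketch) and a `copy()` method.

```python
def propose_basis_addition(model, candidate_bases, taught_path, solve_policy_optimization):
    """Try each registered-but-unused reward basis function phi_{n+1}; keep the one
    whose objective improves most over the current model, or return (None, None)
    if no candidate improves it (step S54: No)."""
    _, baseline = solve_policy_optimization(model, taught_path)
    best_phi, best_model, best_objective = None, None, baseline
    for phi in candidate_bases:
        trial = model.copy()                                  # assumed copy of the model
        trial.add_basis(phi)                                  # add candidate phi_{n+1}(s, a)
        corrected, objective = solve_policy_optimization(trial, taught_path)
        if objective > best_objective:
            best_phi, best_model, best_objective = phi, corrected, objective
    return best_phi, best_model
```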

[0093] In a case where it has been determined in step S54 that the reward basis function φ_{n+1}(s, a) is added, the display control section 14 causes the display section 15 to display the selection screen for selecting whether or not to add the reward basis function in step S55.

[0094] In step S56, the receiving section 16 determines whether or not the user who has seen the selection screen has made input for selecting addition of the basis function. In a case where it has been determined in step S56 that the input for selecting the addition of the basis function has been made, the receiving section 16 receives the input.

[0095] In step S57, similarly to the process in step S51, the correcting section 17 corrects the reinforcement learning model by solving, on the basis of the movement path supplied from the receiving section 16, the policy optimization problem of the reinforcement learning model to which the reward basis function φ_{n+1}(s, a) has been added. The correcting section 17 supplies the corrected reinforcement learning model to the learning section 13 and supplies the optimized movement policy to the display control section 14.

[0096] Since processes in steps S58 and S59 are similar to the processes in steps S52 and S53, respectively, description will be omitted. After the process in step S59, the process returns to step S44 in FIG. 8 and proceeds to step S45.

[0097] On the other hand, in a case where it has been determined in step S54 that the reward basis function φ_{n+1}(s, a) is not added, or in a case where it has been determined in step S56 that no input for selecting addition of the reward basis function φ_{n+1}(s, a) has been made, the reward basis function φ_{n+1}(s, a) is not added. Then, the correcting section 17 supplies the reinforcement learning model corrected in step S51 to the learning section 13, returns the process to step S44 in FIG. 8, and causes the process to proceed to step S45.

[0098] It is noted that, before the process in step S54, the correcting section 17 may determine whether or not the difference (distance scale) between the movement policy optimized in step S51 and the movement policy taught by the user is greater than a threshold value. In a case where the difference (distance scale) is greater than the threshold value, the correcting section 17 may cause the process to proceed to step S54. In this case, when the distance scale is equal to or less than the threshold value, the reward basis function φ_{n+1}(s, a) is not added. The correcting section 17 supplies the reinforcement learning model corrected in step S51 to the learning section 13, returns the process to step S44 in FIG. 8, and causes the process to proceed to step S45.

[0099] As described above, the PC 10 causes the display section 15 to display the policy information. Therefore, the user can recognize the current policy by viewing the policy information displayed on the display section 15. Accordingly, while viewing the policy information, the user can intuitively teach a desired movement policy and directly and easily correct the reinforcement learning model through the GUI. That is, the user can directly and easily correct the reinforcement learning model by interacting with the PC 10. This, as a result, makes it possible to prevent the movement policy that is considered apparently inappropriate by the user from being learned. Thus, it is possible to improve the movement policy and optimize the reinforcement learning model efficiently.

Second Embodiment

[0100] (Example of Configuration of Second Embodiment of PC)

[0101] FIG. 10 is a block diagram depicting an example of a configuration of the second embodiment of a PC as a learning apparatus to which the present disclosure is applied.

[0102] Among components depicted in FIG. 10, the same components as the components in FIG. 1 are denoted by the same reference signs. Redundant description will be omitted as appropriate.

[0103] The configuration of a PC 200 in FIG. 10 differs from the configuration of the PC 10 in FIG. 1 in that the learning section 13, the display control section 14, the receiving section 16, and the correcting section 17 are replaced by a learning section 203, a display control section 204, a receiving section 206, and a correcting section 207, respectively. In the PC 200, the user does not directly correct a reinforcement learning model by teaching a movement policy, but indirectly corrects the reinforcement learning model by teaching a reward function.

[0104] Specifically, the learning section 203 of the PC 200 optimizes the reinforcement learning model supplied from the initialization section 12 or the correcting section 207, and learns the movement policy on the basis of the optimized reinforcement learning model. The learning section 203 supplies the optimized reinforcement learning model to the correcting section 207 and supplies the reward function (reward value distribution) in the optimized reinforcement learning model to the display control section 204. Further, the learning section 203 outputs a final learning result of the movement policy. In addition, the learning section 203 holds the learned movement policy if necessary.

[0105] The display control section 204 supplies an environment map supplied from the environment setting section 11 to the display section 15 and causes the display section 15 to display the environment map. Further, the display control section 204 generates reward function information as reinforcement learning model information. The reward function information is a GUI image depicting the reward function supplied from the learning section 203 or the correcting section 207. The display control section 204 superimposes the reward function information on the environment map. The display control section 204 supplies the reward function information superimposed on the environment map to the display section 15 and causes the display section 15 to display the reward function information superimposed on the environment map.

[0106] The receiving section 206 receives input from the user. For example, the receiving section 206 receives an initial value of a value function or the movement policy input from the user, and supplies the initial value of the value function or the movement policy to the initialization section 12. Further, the receiving section 206 receives, from the user who has seen the reward function information and the like displayed on the display section 15, input of grid-based reward values as teaching of the reward function with respect to the reward function information, and supplies the grid-based reward values to the correcting section 207.

[0107] The correcting section 207 corrects the reward function in the reinforcement learning model supplied from the learning section 203 such that the reward function approximates the grid-based reward values on the basis of the grid-based reward values supplied from the receiving section 206 according to various inverse reinforcement learning methods. At this time, the correcting section 207 adds a reward basis function of the reinforcement learning model if necessary. The method described in NPL 1 can be used as an inverse reinforcement learning method, for example.

[0108] When the n reward basis functions included in the reward function are assumed to be φ_i(s, a) (i = 1, 2, … , n) and the weight for a reward basis function φ_i is assumed to be w_i, the reward function is approximated by updating the weights w_i using the following equation (2) with the regularized least squares method.

[Math. 2]

w* = (λI + Φ^T Φ)^(-1) Φ^T R_E   (2)

[0109] It is noted that R_E(s, a) indicates the distribution of the grid-based reward values each taught in a state s and an action a, Φ represents a design matrix, I represents a unit matrix, and λ represents a regularization parameter.

[0110] The reward function approximation method is not limited to the method using the equation (2). Further, the reward basis function φ_i may be a reward basis function φ_i(s) that depends only on a state s. In this case, the distribution R_E is a distribution R_E(s) that depends only on the state s.
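Reading equation (2) as a standard regularized least-squares fit, the weight update can be sketched with NumPy as follows; the dictionary representation of the taught grid-based reward values and the default regularization parameter are assumptions for illustration.

```python
import numpy as np

def fit_reward_weights(taught_rewards, basis_functions, lam=1e-2):
    """Solve w* = (lambda*I + Phi^T Phi)^(-1) Phi^T R_E  (equation (2), read as a
    standard regularized least-squares fit).
    `taught_rewards` maps a (state, action) pair to the reward value taught by the
    user; `basis_functions` are the phi_i(s, a) of the current reward model."""
    pairs = list(taught_rewards.keys())
    Phi = np.array([[phi(s, a) for phi in basis_functions] for (s, a) in pairs])  # design matrix
    R_E = np.array([taught_rewards[sa] for sa in pairs])                           # taught reward distribution
    n = Phi.shape[1]
    return np.linalg.solve(lam * np.eye(n) + Phi.T @ Phi, Phi.T @ R_E)
```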

[0111] The correcting section 207 supplies, to the learning section 203, the reinforcement learning model in which the reward function has been corrected, and supplies the corrected reward function to the display control section 204.

[0112] (Example of Environment Map on which Reward Function Information has been Superimposed)

[0113] FIG. 11 is a diagram depicting an example of the environment map on which the reward function information of the reward function in the reinforcement learning model before correction by the correcting section 207 has been superimposed.

[0114] As depicted in FIG. 11, in a case where the environment map 50 in FIG. 3 has been generated, the display control section 204 generates reward function information 221 (a reward value map). Using a color, a pattern, or the like, the reward function information 221 depicts a reward value of each grid on the basis of the reward function in the reinforcement learning model before correction by the correcting section 207. Then, the display control section 204 superimposes the reward function information 221 on the environment map 50 and causes the display section 15 to display the reward function information 221 superimposed on the environment map 50.

[0115] In the example in FIG. 11, a reward value of a grid corresponding to the goal 34 is positive while reward values of the other grids are zero. Therefore, the reward function information 221 is a GUI image in which the color of the grid corresponding to the goal 34 (gray in the example in FIG. 11) is different from the color of the other grids (transparent color in the example in FIG. 11).

[0116] (Description of Method of Teaching Reward Function)

[0117] FIG. 12 is a diagram for describing a method of teaching the reward function with respect to the reward function information 221 in FIG. 11.

[0118] In a case where the reward function information 221 has been superimposed on the environment map 50 as depicted in FIG. 11, the user inputs a negative reward value -r_1 for each grid in a region 241 of the obstacle 35 as depicted in FIG. 12, for example. Further, the user inputs a negative reward value -r_2 for each grid in a region 242. The region 242 is located on the side opposite to the goal 34 in the vertical direction with respect to the agent 31.

[0119] As described above, the user teaches, as a desired reward function, the reward function in which the reward value of the grid corresponding to the goal 34 is positive, the reward value of each grid in the region 241 is the reward value -r_1, and the reward value of each grid in the region 242 is the reward value -r_2.

[0120] In this case, the correcting section 207 corrects the reward function in the reinforcement learning model so as to approximate the reward function taught by the user on the basis of the reward value -r_1 of each grid in the region 241 and the reward value -r_2 of each grid in the region 242. Then, the correcting section 207 supplies the corrected reward function to the display control section 204. The display control section 204 generates reward function information of the reward function supplied from the correcting section 207. The display control section 204 superimposes the reward function information on the environment map 50 and causes the display section 15 to display the reward function information superimposed on the environment map 50.
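Continuing the illustration, the region-based teaching of FIG. 12 can be turned into the grid-based reward distribution R_E before the fit of equation (2) is applied; the rectangle coordinates, the values r_1 and r_2, and the goal reward below are placeholders, not the values in the figure.

```python
def taught_rewards_from_regions(grid_shape, region_rewards, goal, goal_reward=1.0):
    """Build the grid-based taught reward distribution R_E(s) from region annotations.
    `region_rewards` is a list of ((y0, x0, y1, x1), reward_value) rectangles, e.g.
    the obstacle region with -r_1 and the region behind the agent with -r_2.
    For simplicity every grid cell gets a default reward of 0, and the action slot
    is left as None because the taught values depend only on the state."""
    H, W = grid_shape
    taught = {((y, x), None): 0.0 for y in range(H) for x in range(W)}
    for (y0, x0, y1, x1), value in region_rewards:
        for y in range(y0, y1):
            for x in range(x0, x1):
                taught[((y, x), None)] = value
    taught[(goal, None)] = goal_reward
    return taught

# Example: the obstacle region gets -r_1, the region behind the agent gets -r_2.
r1, r2 = 1.0, 0.5
R_E = taught_rewards_from_regions(
    (10, 10),
    region_rewards=[((4, 2, 6, 4), -r1), ((8, 0, 10, 6), -r2)],
    goal=(0, 5),
)
```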

[0121] (Description of Processes of PC)

[0122] FIG. 13 is a flowchart for describing a movement policy learning process of the PC 200 in FIG. 10.

[0123] Since processes in steps S131 to S139 in FIG. 13 are similar to the processes in steps S31 to S39 in FIG. 8, respectively, description will be omitted.

[0124] In step S140, the learning section 203 optimizes the reinforcement learning model supplied from the initialization section 12 or the correcting section 207 according to the optimization method selected in step S139, and learns the movement policy on the basis of the optimized reinforcement learning model. The learning section 203 supplies the optimized reinforcement learning model to the correcting section 207 and supplies the reward function in the optimized reinforcement learning model to the display control section 204.

[0125] In step S141, the display control section 204 generates reward function information on the basis of the reward function supplied from the learning section 203, and superimposes the reward function information on the environment map.

[0126] In step S142, the display control section 204 supplies the environment map on which the reward function information has been superimposed to the display section 15, and causes the display section 15 to display the environment map.

[0127] In step S143, the receiving section 206 determines whether or not the user who has seen the reward function information displayed on the display section 15 has taught a reward function with respect to the reward function information. In a case where it has been determined in step S143 that the reward function has been taught, the receiving section 206 receives grid-based reward values as teaching of the reward function, supplies the reward values to the correcting section 207, and causes the process to proceed to step S144.

[0128] In step S144, the correcting section 207 performs a correction process of correcting the reinforcement learning model supplied from the learning section 203 on the basis of the grid-based reward values supplied from the receiving section 206. The details of this correction process will be described with reference to FIG. 14 described later.

[0129] In step S145, the PC 200 determines whether or not to end the process, similarly to the process in step S45. In a case where it has been determined in step S145 that the process ends, the learning section 203 outputs the current movement policy as a final learning result and ends the process.

[0130] On the other hand, in a case where it has been determined in step S145 that the process does not end, the process returns to step S140. Further, in a case where it has been determined that the reward function has not been taught in step S143, the process returns to step S140.

[0131] It is noted that the process in the first step S140 may be started in a case where the user has given an instruction to start optimization.

[0132] FIG. 14 is a flowchart for describing the correction process in step S144 in FIG. 13.

[0133] In step S151 in FIG. 14, the correcting section 207 solves a regression problem for approximating the distribution of the current reward values by using a reward function model according to various inverse reinforcement learning methods. The current reward values have been updated with the reward values supplied from the receiving section 206. The reward function model includes n reward basis functions φ_1(s, a) to φ_n(s, a). In this manner, the reward function in the reinforcement learning model is corrected. The correcting section 207 supplies the corrected reward function to the display control section 204.

[0134] In step S152, the display control section 204 generates reward function information on the basis of the reward function supplied from the correcting section 207, and superimposes the reward function information on the environment map.

[0135] In step S153, the display control section 204 supplies the environment map on which the reward function information has been superimposed to the display section 15, and causes the display section 15 to display the environment map.

[0136] In step S154, the correcting section 207 determines whether or not to add, as a reward basis function φ_{n+1}(s, a), a reward basis function among a reward basis function group registered in advance. The reward basis function is not any of the n reward basis functions φ_1(s, a) to φ_n(s, a) used in the corrected reinforcement learning model.

[0137] For example, the correcting section 207 sequentially adds, as the reward basis function φ_{n+1}(s, a), each reward basis function other than the reward basis functions φ_1(s, a) to φ_n(s, a) among the reward basis function group registered in advance. Then, the correcting section 207 uses the equation (2) described above to approximate the reward function to which the reward basis function φ_{n+1}(s, a) has been added, and uses the following equation (3) to calculate an absolute value D (distance scale) of the residual between the approximated reward function and the reward distribution R_E.

[Math. 3]

D = ‖R_E - w^T Φ‖   (3)

[0138] In a case where there is at least one reward basis function φ_{n+1}(s, a) with which the absolute value D decreases (improves) compared to the absolute value D before addition, the correcting section 207 determines, in step S154, to add the reward basis function φ_{n+1}(s, a) with which the absolute value D is smallest. On the other hand, in a case where there is no reward basis function φ_{n+1}(s, a) with which the absolute value D decreases compared to the absolute value D before addition, the correcting section 207 determines, in step S154, not to add any reward basis function φ_{n+1}(s, a).
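The residual comparison of equation (3) can be sketched by refitting the weights for each candidate basis function and keeping the candidate with the smallest distance scale D. This reuses the hypothetical `fit_reward_weights` helper from the sketch following paragraph [0110] and is an illustration under the same assumptions.

```python
import numpy as np

def residual(taught_rewards, basis_functions, lam=1e-2):
    """Distance scale D = ||R_E - w^T Phi|| (equation (3)) after refitting the weights."""
    w = fit_reward_weights(taught_rewards, basis_functions, lam)   # ridge fit from the earlier sketch
    pairs = list(taught_rewards.keys())
    Phi = np.array([[phi(s, a) for phi in basis_functions] for (s, a) in pairs])
    R_E = np.array([taught_rewards[sa] for sa in pairs])
    return float(np.linalg.norm(R_E - Phi @ w))

def propose_reward_basis(taught_rewards, current_bases, candidate_bases, lam=1e-2):
    """Return the candidate phi_{n+1} that minimizes D, or None if no candidate
    improves on the current residual (step S154: No)."""
    best_phi, best_D = None, residual(taught_rewards, current_bases, lam)
    for phi in candidate_bases:
        D = residual(taught_rewards, current_bases + [phi], lam)
        if D < best_D:
            best_phi, best_D = phi, D
    return best_phi
```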

[0139] In a case where it has been determined in step S154 that the reward basis function .phi..sub.n+1(s, a) is added, the process proceeds to step S155. Since processes in steps S155 and S156 are similar to the processes in steps S55 and S56 in FIG. 9, respectively, description will be omitted.

[0140] In step S157, similarly to step S151, the correcting section 207 solves the regression problem for approximating the distribution of the current reward values, which have been updated with the reward values supplied from the receiving section 206, by using the reward function model to which the reward basis function φ_{n+1}(s, a) has been added. In this manner, the reward function in the reinforcement learning model is corrected. The correcting section 207 supplies, to the learning section 203, the reinforcement learning model in which the reward function has been corrected, and supplies the corrected reward function to the display control section 204.

[0141] Since processes in steps S158 and S159 are similar to the processes in steps S152 and S153, respectively, description will be omitted. After the process in step S159, the process returns to step S144 in FIG. 13 and proceeds to step S145.

[0142] On the other hand, in a case where it has been determined in step S154 that the reward basis function φ_{n+1}(s, a) is not added or in a case where it has been determined in step S156 that no input for selecting addition of the reward basis function φ_{n+1}(s, a) has been made, the reward basis function φ_{n+1}(s, a) is not added. Then, the correcting section 207 supplies the reinforcement learning model corrected in step S151 to the learning section 203, returns the process to step S144 in FIG. 13, and causes the process to proceed to step S145.

[0143] It is noted that, before the process in step S154, the correcting section 207 may determine whether or not the distance scale between the reward function corrected in step S151 and the distribution of the current reward values updated with the reward values taught by the user is greater than a threshold value. In a case where the distance scale is greater than the threshold value, the correcting section 207 may cause the process to proceed to step S154. In this case, when the distance scale is equal to or less than the threshold value, the reward basis function φ_{n+1}(s, a) is not added and the correcting section 207 supplies the reinforcement learning model corrected in step S151 to the learning section 203, returns the process to step S144 in FIG. 13, and causes the process to proceed to step S145.

[0144] As described above, the PC 200 causes the display section 15 to display the reward function information. Therefore, the user can recognize the reward function by viewing the reward function information displayed on the display section 15. Accordingly, while viewing the reward function information, the user can intuitively teach a reward function that causes the agent to take an action to be taken and indirectly and easily correct the reinforcement learning model through the GUI. That is, the user can indirectly and easily correct the reinforcement learning model by interacting with the PC 200. This, as a result, makes it possible to prevent learning with the reinforcement learning model using the reward function that is considered apparently inappropriate by the user. Thus, it is possible to improve the movement policy and optimize the reinforcement learning model efficiently.

[0145] It is noted that, in the first and second embodiments, the display section 15 and the receiving section 16 (receiving section 206) may be integrated with each other to form a touch panel. In this case, the receiving section 16 receives input of the user’s operation on the touch panel. For example, in the second embodiment, the user performs a pinch-in/pinch-out operation or the like on a region of the environment map on the touch panel to which a reward value is to be input, thereby correcting (increasing or decreasing) the reward value in the region and inputting the corrected reward value.

[0146] Further, while the environment map in the first and second embodiments is the GUI image that is a bird’s eye view of the surrounding environment of the agent, the environment map may be a GUI image viewed from the agent. In this case, the agent is not included in the environment map.

[0147] In addition, while the environment map in the first and second embodiments is the GUI image depicting the surrounding environment in two dimensions, the environment map may be a GUI image depicting the surrounding environment in one or three dimensions.

[0148] Further, in the above description, the policy information is superimposed on the environment map in the PC 10 to which the movement policy is taught, while the reward function information is superimposed on the environment map in the PC 200 to which the reward function is taught. However, the teaching contents and the superimposed contents do not need to correspond to each other. That is, the PC 10 may superimpose the reward function information on the environment map, while the PC 200 may superimpose the policy information on the environment map. In this case, the user of the PC 10 teaches the movement policy while viewing the environment map on which the reward function information has been superimposed. The user of the PC 200 teaches the reward function while viewing the environment map on which the policy information has been superimposed.

Third Embodiment

[0149] (Example of Environment Map on which Policy Information has been Superimposed)

[0150] The configuration of one embodiment of a VR device as a learning apparatus to which the present disclosure is applied is similar to the configuration of the PC 10 in FIG. 1, except that an agent always exists in a virtual world and the display section 15 is a head-mounted display mounted on the head of the user. Therefore, description of each section of the VR device will be made using each section of the PC 10 in FIG. 1. The VR device provides experience of the virtual world viewed from the agent.

[0151] FIG. 15 is a diagram depicting an example of an environment map on which policy information of a movement policy learned by the reinforcement learning model before correction by the correcting section 17 has been superimposed. The environment map is displayed on the display section 15 of such a VR device.

[0152] As depicted in FIG. 15, an environment map 260 displayed on the display section 15 of the VR device is a GUI image depicting a surrounding environment viewed from the agent in three dimensions. In the example in FIG. 15, walls 261 to 263 exist in front of, to the left, and to the right of the agent. A space closer to the agent than to the walls 261 to 263 is a movable region 264. Further, an obstacle 265 that is an obstacle to the movement of the agent exists in the movable region 264. A goal 266 exists on the side opposite to the agent across the obstacle 265 in the movable region 264. A positive reward value is set in the goal 266.

[0153] It is noted that, in the example in FIG. 15, the environment map 260 is viewed from the agent, and the agent itself does not exist in the environment map 260. Alternatively, the environment map 260 may be viewed from slightly behind the agent and may include the back or the like of the agent.

[0154] As depicted in FIG. 15, in a case where the environment map 260 has been generated, the display control section 14 generates policy information 281. Using an arrow, the policy information 281 indicates a movement path based on the movement policy from the current position of the agent to the goal 266. The movement policy has been learned by the reinforcement learning model before correction by the correcting section 17. Then, the display control section 14 superimposes the policy information 281 on the environment map 260 and causes the display section 15 to display the policy information 281 superimposed on the environment map 260. It is noted that contour-line images may also be superimposed on the environment map 260 in FIG. 15, as in the case of FIG. 4.

[0155] The obstacle 265 is an obstacle to the movement. However, since the obstacle 265 exists in the movable region 264, there is a possibility that the movement path based on the movement policy learned by the reinforcement learning model before correction is a path that passes through the obstacle 265, as depicted in FIG. 15.

[0156] In such a case, for example, the user inputs a movement path 282 by operating a controller, not depicted. The movement path 282 is a path extending from the current position of the agent to the goal 266 without passing through the obstacle 265, as depicted in FIG. 15. In this manner, the user teaches the movement policy corresponding to the movement path 282 as a desired movement policy.

[0157] It is noted that the configuration of the VR device as the learning apparatus to which the present disclosure is applied can also be similar to the configuration of the PC 200 in FIG. 10.

[0158] In the VR device, the receiving section 16 (receiving section 206) may include a gaze detecting section that continuously detects the gaze direction of the user wearing the display section 15 on the head. The gaze detecting section may receive input of a movement path for moving in the gaze direction of the user. Further, the receiving section 16 (receiving section 206) may include a motion detecting section that detects the motion of the user. The motion detecting section may receive input of a movement path according to the motion of the user.

[0159] Further, the receiving section 16 (receiving section 206) of the PC 10 (PC 200) or of the VR device may include a hand gesture detecting section that detects a hand gesture of the user. The hand gesture detecting section may receive input from the user on the basis of a specific hand gesture. In this case, for example, the user inputs a movement path for moving in the right direction by swinging an arm in the right direction while keeping a hand in a specific shape.

[0160] In addition, the receiving section 16 (receiving section 206) of the PC 10 (PC 200) or of the VR device may include a voice recognition section that recognizes the voice of the user. The voice recognition section may receive input from the user on the basis of the speech of the user.

[0161] Further, whether or not to add the reward basis function described above may be determined using a random sampling method inspired by Preference IRL. The details of Preference IRL are described in “APRIL: Active Preference-learning based Reinforcement Learning,” Riad Akrour, Marc Schoenauer, and Michèle Sebag, European Conference, ECML PKDD 2012, Bristol, UK, Sep. 24 to 28, 2012, Proceedings, Part II, for example.

[0162] In addition, in the above description, the reward basis function to be added to the reinforcement learning model is selected from the reward basis function group registered in advance. However, the reward basis function may be a new reward basis function other than the reward basis function group registered in advance.

[0163] Further, the contents of the processes performed in the PC 10 (PC 200) and the VR device may be stored in a database, not depicted, so as to make the processes reproducible.
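
As an assumed illustration of storing the contents of the processes so that they can be reproduced later, the sketch below logs correction events to a small SQLite table; the schema and the recorded fields are invented for the example and are not described in the patent.

```python
# A minimal sketch (assumed): record correction events so the process can be replayed.
import json
import sqlite3
import time

conn = sqlite3.connect(":memory:")             # a file path would be used in practice
conn.execute("""CREATE TABLE IF NOT EXISTS correction_log (
                    ts REAL, event TEXT, payload TEXT)""")

def log_event(event: str, payload: dict) -> None:
    """Append one reproducible record of what the user taught or how the model changed."""
    conn.execute("INSERT INTO correction_log VALUES (?, ?, ?)",
                 (time.time(), event, json.dumps(payload)))
    conn.commit()

log_event("taught_path", {"path": [[3, 0], [3, 1], [2, 1]]})
log_event("basis_added", {"basis_id": 7})

for row in conn.execute("SELECT event, payload FROM correction_log ORDER BY ts"):
    print(row)
```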

[0164] The PC 10 (PC 200) and the VR device correct the reinforcement learning model on the basis of input from the user in various surrounding environments. Thus, the PC 10 (PC 200) and the VR device are capable of learning a robust movement policy with the corrected reinforcement learning model.

Fourth Embodiment

[0165] (Description of Computer to which Present Disclosure is Applied)

[0166] The series of processes described above can be executed by hardware or software. In a case where the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware, a general-purpose personal computer capable of executing various functions by installing various programs, and the like.

[0167] FIG. 16 is a block diagram depicting an example of a hardware configuration of a computer that executes the series of processes described above according to a program.

[0168] In a computer 400, a CPU (Central Processing Unit) 401, a ROM (Read Only Memory) 402, and a RAM (Random Access Memory) 403 are connected to one another via a bus 404.

[0169] In addition, an input/output interface 405 is connected to the bus 404. An input section 406, an output section 407, a storage section 408, a communication section 409, and a drive 410 are connected to the input/output interface 405.

[0170] The input section 406 includes a keyboard, a mouse, a microphone, and the like. The output section 407 includes a display, a speaker, and the like. The storage section 408 includes a hard disk, a non-volatile memory, and the like. The communication section 409 includes a network interface and the like. The drive 410 drives a removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.

[0171] In the computer 400 configured as described above, for example, the CPU 401 loads the program stored in the storage section 408 into the RAM 403 via the input/output interface 405 and the bus 404 and executes the program, whereby the series of processes described above is performed.

[0172] The program to be executed by the computer 400 (CPU 401) can be recorded and provided on the removable medium 411 as a package medium or the like, for example. Further, the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.

[0173] In the computer 400, the program can be installed in the storage section 408 via the input/output interface 405 by attaching the removable medium 411 to the drive 410. Further, the program can be received by the communication section 409 via a wired or wireless transmission medium and installed in the storage section 408. Additionally, the program can be installed in the ROM 402 or the storage section 408 in advance.

[0174] It is noted that the program to be executed by the computer 400 may be a program that performs the processes in chronological order in the order described in the present specification, or may be a program that performs the processes in parallel or at necessary timing such as when the program is called.

[0175] Further, the effects described in the present specification are merely examples and not limitative, and other effects may be provided.

[0176] The embodiments of the present disclosure are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the present disclosure.

[0177] For example, the present disclosure can be configured as cloud computing in which one function is shared and processed in cooperation by a plurality of apparatuses through a network.

[0178] Further, each of the steps described in the flowcharts described above can not only be executed by one apparatus but also be shared and executed by a plurality of apparatuses.

[0179] In addition, in a case where a plurality of processes is included in one step, the plurality of processes included in the one step can not only be executed by one apparatus but also be shared and executed by a plurality of apparatuses.

[0180] Further, the present disclosure can also be applied to a learning apparatus that performs reinforcement learning of a policy of an action other than movement.

[0181] Examples of the action other than movement include a warning such as honking the horn of a vehicle serving as an agent, an indirect indication of intention to another agent such as a turn signal, a combination of these actions and movement, and the like.

[0182] In addition, the present disclosure can also be applied to a learning apparatus that performs reinforcement learning of policies of a plurality of agents (multiple agents) at a time. In this case, a movement policy and a reward function are taught for each agent after an agent is specified.

[0183] It is noted that the present disclosure can also be configured as follows.

[0184] (1)

[0185] A learning apparatus including:

[0186] a display control section configured to cause a display section to display reinforcement learning model information regarding a reinforcement learning model; and

[0187] a correcting section configured to correct the reinforcement learning model on a basis of user input to the reinforcement learning model information.

[0188] (2)

[0189] The learning apparatus according to (1), in which the reinforcement learning model information includes policy information indicating a policy learned by the reinforcement learning model.

[0190] (3)

[0191] The learning apparatus according to (1), in which the reinforcement learning model information includes reward function information indicating a reward function used in the reinforcement learning model.

[0192] (4)

[0193] The learning apparatus according to any one of (1) to (3), in which the user input includes teaching of a policy.

[0194] (5)

[0195] The learning apparatus according to (4), in which, in a case where an objective function is improved by adding a basis function of a reward function used in the reinforcement learning model, the correcting section adds the basis function of the reward function.

[0196] (6)

[0197] The learning apparatus according to any one of (1) to (3), in which the user input includes teaching of a reward function.

[0198] (7)

[0199] The learning apparatus according to (6), in which, in a case where a difference between the reward function taught as the user input and a reward function of the reinforcement learning model corrected on the basis of the user input is decreased by adding a basis function of the reward function used in the reinforcement learning model, the correcting section adds the basis function of the reward function.

[0200] (8)

[0201] The learning apparatus according to any one of (1) to (7), in which the display control section superimposes the reinforcement learning model information on environment information indicating an environment and causes the display section to display the reinforcement learning model information superimposed on the environment information.

[0202] (9) A learning method including:

[0203] a display control step of a learning apparatus causing a display section to display reinforcement learning model information regarding a reinforcement learning model; and

[0204] a correcting step of the learning apparatus correcting the reinforcement learning model on a basis of user input to the reinforcement learning model information.

REFERENCE SIGNS LIST

[0205] 10 PC, 14 Display control section, 15 Display section, 17 Correcting section, 71 Policy information, 50 Environment map, 200 PC, 204 Display control section, 207 Correcting section, 221 Reward function information, 260 Environment map, 281 Policy information
