On a Connection between Importance Sampling and the Likelihood Ratio Policy Gradient
Likelihood ratio policy gradient methods have been among the most successful reinforcement learning algorithms, especially for learning on physical systems. We describe how the likelihood ratio policy gradient can be derived from an importance sampling perspective. This derivation highlights how likelihood ratio methods under-use past experience by (a) using past experience only to estimate the gradient of the expected return at the current policy parameterization, rather than to obtain a more complete estimate, and (b) using only experience gathered under the current policy, rather than all past experience, to improve the estimates. We present a new policy search method that leverages both of these observations, as well as generalized baselines, a new technique that generalizes the baseline methods commonly used with policy gradient methods. Our algorithm outperforms standard likelihood ratio policy gradient algorithms on several testbeds.
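To make the connection concrete, here is a sketch of the standard derivation the abstract refers to, in common (not necessarily the authors') notation: tau denotes a trajectory, R(tau) its total return, and p_theta(tau) the trajectory distribution induced by policy parameters theta. The expected return under theta can be written as an importance-sampled expectation over trajectories drawn from a sampling policy theta', and differentiating at theta = theta' recovers the familiar likelihood ratio (REINFORCE) gradient:

    U(\theta) = \mathbb{E}_{\tau \sim p_{\theta'}}\left[ \frac{p_\theta(\tau)}{p_{\theta'}(\tau)} \, R(\tau) \right],
    \qquad
    \left. \nabla_\theta U(\theta) \right|_{\theta = \theta'}
      = \mathbb{E}_{\tau \sim p_{\theta'}}\left[ \left. \nabla_\theta \log p_\theta(\tau) \right|_{\theta = \theta'} \, R(\tau) \right].

Evaluating the importance-sampled estimator only at theta = theta', and only with samples from the current policy, is exactly the under-use of experience that points (a) and (b) describe: the same samples define an estimate of U(theta) for every theta, and samples from older policies can be reweighted in as well.

The following is a minimal, hypothetical sketch of both estimators on a toy K-armed bandit (the names, constants, and the mean-reward baseline are illustrative choices, not details from the talk): lr_gradient is the on-policy likelihood ratio gradient with a simple baseline, and is_gradient is the importance-weighted variant that could reuse samples drawn from an older behavior policy.

    import numpy as np

    rng = np.random.default_rng(0)

    K = 4                                        # number of arms (toy problem)
    true_means = np.array([0.2, 0.5, 0.1, 0.8])  # mean rewards, unknown to the agent

    def softmax(logits):
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def grad_log_pi(theta, a):
        # For a softmax policy, d/dtheta log pi_theta(a) = one_hot(a) - pi_theta.
        g = -softmax(theta)
        g[a] += 1.0
        return g

    def lr_gradient(theta, actions, rewards):
        # On-policy likelihood ratio (REINFORCE) gradient with a
        # mean-reward baseline subtracted to reduce variance.
        b = rewards.mean()
        g = np.zeros_like(theta)
        for a, r in zip(actions, rewards):
            g += grad_log_pi(theta, a) * (r - b)
        return g / len(actions)

    def is_gradient(theta, actions, rewards, behavior_probs, b=0.0):
        # Off-policy variant: differentiate the importance-sampled return
        # E_q[(pi_theta(a)/q(a)) R]. Since grad w = w * grad log pi_theta(a),
        # the estimate is the importance-weighted likelihood ratio gradient.
        p = softmax(theta)
        g = np.zeros_like(theta)
        for a, r, q in zip(actions, rewards, behavior_probs):
            w = p[a] / q
            g += w * grad_log_pi(theta, a) * (r - b)
        return g / len(actions)

    theta = np.zeros(K)
    alpha = 0.2
    for step in range(500):
        p = softmax(theta)
        actions = rng.choice(K, size=64, p=p)
        rewards = true_means[actions] + 0.1 * rng.standard_normal(64)
        theta += alpha * lr_gradient(theta, actions, rewards)

    print("learned policy:", np.round(softmax(theta), 3))  # should concentrate on arm 3

The demo loop trains on-policy with lr_gradient; is_gradient is included to show that, because the gradient of the weight w = pi_theta(a)/q(a) equals w * grad log pi_theta(a), reusing off-policy samples changes the estimator only by an importance weight.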
Date Found: March 26, 2011
Date Produced: March 25, 2011