In this course, you will learn how to solve problems with large, high-dimensional, and potentially infinite state spaces. You will see that estimating value functions can be cast as a supervised learning problem, called function approximation, allowing you to build agents that carefully balance generalization and discrimination in order to maximize reward. We will begin this journey by investigating how policy evaluation (prediction) methods like Monte Carlo and TD can be extended to the function approximation setting. You will learn about feature construction techniques for RL, and about representation learning via neural networks and backpropagation. We conclude this course with a deep dive into policy gradient methods: a way to learn policies directly, without learning a value function. In this course you will solve two continuous-state control tasks and investigate the benefits of policy gradient methods in a continuous-action environment.
This course is part of the Reinforcement Learning Specialization
About this Course
Prerequisites: probabilities and expectations, basic linear algebra, basic calculus, Python 3 (at least 1 year of experience), implementing algorithms from pseudocode.
Skills you will gain
- Artificial Intelligence (AI)
- Machine Learning
- Reinforcement Learning
- Function Approximation
- Intelligent Systems
Offered by

University of Alberta
UAlberta is considered among the world’s leading public research- and teaching-intensive universities. As one of Canada’s top universities, we’re known for excellence across the humanities, sciences, creative arts, business, engineering and health sciences.

Alberta Machine Intelligence Institute
The Alberta Machine Intelligence Institute (Amii) is home to some of the world’s top talent in machine intelligence.
Syllabus - What you will learn in this course
Welcome to the Course!
Welcome to the third course in the Reinforcement Learning Specialization: Prediction and Control with Function Approximation, brought to you by the University of Alberta, Onlea, and Coursera. In this pre-course module, you'll be introduced to your instructors, and get a flavour of what the course has in store for you. Make sure to introduce yourself to your classmates in the "Meet and Greet" section!
On-policy Prediction with Approximation
This week you will learn how to estimate a value function for a given policy when the number of states is much larger than the memory available to the agent. You will learn how to specify a parametric form of the value function, how to specify an objective function, and how gradient descent can be used to estimate values from interaction with the world.
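To make these ideas concrete, below is a minimal sketch of semi-gradient TD(0) with a linear value function, one instance of the gradient-based prediction methods covered this week. The feature map `phi` and the `env.reset()`/`env.step()` interface are illustrative assumptions, not part of the course materials.

```python
import numpy as np

def semi_gradient_td0(env, phi, num_features, policy,
                      alpha=0.01, gamma=0.99, num_episodes=100):
    """Sketch: estimate v_hat(s, w) = w . phi(s) for a fixed policy."""
    w = np.zeros(num_features)  # weights of the linear value function
    for _ in range(num_episodes):
        s = env.reset()          # assumed interface: returns a state
        done = False
        while not done:
            a = policy(s)
            s_next, r, done = env.step(a)  # assumed interface
            # TD target bootstraps from the current estimate of v(s').
            target = r + (0.0 if done else gamma * (phi(s_next) @ w))
            td_error = target - phi(s) @ w
            # For a linear v_hat, the gradient w.r.t. w is just phi(s).
            w += alpha * td_error * phi(s)
            s = s_next
    return w
```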
Constructing Features for Prediction
The features used to construct the agent’s value estimates are perhaps the most crucial part of a successful learning system. In this module we discuss two basic strategies for constructing features: (1) fixed basis functions that form an exhaustive partition of the input, and (2) adapting the features while the agent interacts with the world, via neural networks and backpropagation. In this week’s graded assessment you will solve a simple but infinite-state prediction task with a neural network and TD learning.
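As a concrete illustration of the first strategy, here is a minimal sketch of a fixed-basis feature map built by state aggregation: a one-dimensional continuous state range is partitioned into exhaustive, non-overlapping bins, each activating a single binary feature. The interval bounds and bin count are illustrative assumptions.

```python
import numpy as np

def make_aggregation_features(low, high, num_bins):
    """Sketch: one-hot features from an exhaustive partition of [low, high)."""
    width = (high - low) / num_bins
    def phi(s):
        x = np.zeros(num_bins)
        idx = min(int((s - low) / width), num_bins - 1)  # clip the right edge
        x[idx] = 1.0  # exactly one feature is active per state
        return x
    return phi

phi = make_aggregation_features(low=0.0, high=1.0, num_bins=10)
print(phi(0.37))  # one-hot vector with a 1 in bin 3
```

All states falling in the same bin share one weight, which is what produces generalization within a bin and discrimination across bins.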
Control with Approximation
This week, you will see that the concepts and tools introduced in modules two and three allow a straightforward extension of classic TD control methods to the function approximation setting. In particular, you will learn how to find the optimal policy in infinite-state MDPs by simply combining semi-gradient TD methods with generalized policy iteration, yielding classic control methods like Q-learning and Sarsa. We conclude with a discussion of average reward, a new problem formulation for RL, which will undoubtedly be used in many applications of RL in the future.
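For concreteness, here is a minimal sketch of episodic semi-gradient Sarsa with linear action values and an epsilon-greedy policy, one instance of the control methods described above. The feature map `phi`, the action count, and the environment interface are illustrative assumptions.

```python
import numpy as np

def semi_gradient_sarsa(env, phi, num_features, num_actions,
                        alpha=0.1, gamma=1.0, epsilon=0.1, num_episodes=500):
    """Sketch: q_hat(s, a, w) = w[a] . phi(s), one weight vector per action."""
    w = np.zeros((num_actions, num_features))

    def q(s, a):
        return w[a] @ phi(s)

    def eps_greedy(s):
        if np.random.rand() < epsilon:
            return np.random.randint(num_actions)
        return int(np.argmax([q(s, a) for a in range(num_actions)]))

    for _ in range(num_episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s_next, r, done = env.step(a)  # assumed interface
            if done:
                td_error = r - q(s, a)      # no bootstrap at the terminal state
            else:
                a_next = eps_greedy(s_next)
                td_error = r + gamma * q(s_next, a_next) - q(s, a)
            w[a] += alpha * td_error * phi(s)  # semi-gradient update
            if not done:
                s, a = s_next, a_next
    return w
```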
Policy Gradient
Every algorithm you have learned about so far estimates a value function as an intermediate step towards the goal of finding an optimal policy. An alternative strategy is to directly learn the parameters of the policy. This week you will learn about these policy gradient methods, and their advantages over value-function based methods. You will also learn how policy gradient methods can be used to find the optimal policy in tasks with both continuous state and action spaces.
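As a concrete example of learning policy parameters directly, here is a minimal sketch of REINFORCE, a Monte Carlo policy gradient method, with a softmax policy over linear action preferences. It drops the gamma^t weighting that appears in the strict derivation, as is common in practice; the feature map `phi` and the environment interface are illustrative assumptions.

```python
import numpy as np

def reinforce(env, phi, num_features, num_actions,
              alpha=1e-3, gamma=0.99, num_episodes=1000):
    """Sketch: softmax policy with preferences theta[a] . phi(s)."""
    theta = np.zeros((num_actions, num_features))

    def pi(s):
        prefs = theta @ phi(s)
        prefs -= prefs.max()            # subtract max for numerical stability
        probs = np.exp(prefs)
        return probs / probs.sum()

    for _ in range(num_episodes):
        # Generate one full episode under the current policy.
        trajectory, s, done = [], env.reset(), False
        while not done:
            a = np.random.choice(num_actions, p=pi(s))
            s_next, r, done = env.step(a)  # assumed interface
            trajectory.append((s, a, r))
            s = s_next
        # Update theta with the return-weighted score function.
        G = 0.0
        for s, a, r in reversed(trajectory):
            G = r + gamma * G
            probs = pi(s)
            # d log pi(a|s) / d theta[b] = (1{a=b} - pi(b|s)) * phi(s)
            grad_log = -np.outer(probs, phi(s))
            grad_log[a] += phi(s)
            theta += alpha * G * grad_log
    return theta
```

For continuous actions, the same score-function update applies with the softmax replaced by a parameterized density, such as a Gaussian whose mean and standard deviation are functions of the state.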
Reviews
- 5 stars: 84.13%
- 4 stars: 12.96%
- 3 stars: 2.06%
- 2 stars: 0.55%
- 1 star: 0.27%
Top reviews from PREDICTION AND CONTROL WITH FUNCTION APPROXIMATION
Good course with a lot of technical information. I would add another assignment or make current ones a little bit more extensive, as there are many concepts to learn.
Adam & Martha really make the walk through Sutton & Barto's book a real pleasure and easy to understand. The notebooks and the practice quizzes greatly help to consolidate the material.
A great and interactive course to learn about using function approximation for control. Great way to learn DRL and its alternatives.
Difficult but excellent and impressive. It is incredible that humans create such ideas. This course shows a way toward a time when all such ingenious ideas will be created by self-learning algorithms.
About the Reinforcement Learning Specialization
The Reinforcement Learning Specialization consists of 4 courses exploring the power of adaptive learning systems and artificial intelligence (AI).

Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.