Learner reviews and feedback for Sample-based Learning Methods by the University of Alberta

4.8
200 ratings
44 reviews

About the Course

In this course, you will learn about several algorithms that can learn near-optimal policies through trial-and-error interaction with the environment, that is, by learning from the agent's own experience. Learning from actual experience is striking because it requires no prior knowledge of the environment's dynamics, yet can still attain optimal behavior. We will cover intuitively simple but powerful Monte Carlo methods, and temporal-difference learning methods including Q-learning. We will wrap up this course by investigating how we can get the best of both worlds: algorithms that combine model-based planning (similar to dynamic programming) with temporal-difference updates to radically accelerate learning.

By the end of this course you will be able to:

- Understand Temporal-Difference learning and Monte Carlo as two strategies for estimating value functions from sampled experience
- Understand the importance of exploration when using sampled experience rather than dynamic-programming sweeps within a model
- Understand the connections between Monte Carlo, Dynamic Programming, and TD
- Implement and apply the TD algorithm for estimating value functions
- Implement and apply Expected Sarsa and Q-learning (two TD methods for control)
- Understand the difference between on-policy and off-policy control
- Understand planning with simulated experience (as opposed to classic planning strategies)
- Implement a model-based approach to RL, called Dyna, which uses simulated experience
- Conduct an empirical study to see the improvements in sample efficiency when using Dyna
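To make the kind of method covered here concrete, below is a minimal sketch of a tabular Q-learning agent, one of the TD control methods listed above. It is illustrative only: the environment interface (num_states, num_actions, reset, step) and the hyperparameter values are assumptions, not taken from the course notebooks.

    # Minimal tabular Q-learning sketch (illustrative; the environment
    # interface and hyperparameters are assumed, not from the course).
    import numpy as np

    def q_learning(env, num_episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
        # Q-table over discrete (state, action) pairs.
        Q = np.zeros((env.num_states, env.num_actions))
        for _ in range(num_episodes):
            state = env.reset()
            done = False
            while not done:
                # Epsilon-greedy exploration over current value estimates.
                if np.random.rand() < epsilon:
                    action = np.random.randint(env.num_actions)
                else:
                    action = int(np.argmax(Q[state]))
                next_state, reward, done = env.step(action)
                # Off-policy TD update toward the greedy bootstrap target.
                target = reward + (0.0 if done else gamma * np.max(Q[next_state]))
                Q[state, action] += alpha * (target - Q[state, action])
                state = next_state
        return Q

A Dyna-style agent, also covered in the course, would additionally learn a model of the environment and perform extra planning updates from simulated transitions after each real step.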

Top reviews

KN

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

UZ

Nov 23, 2019

Good balance of theory and programming assignments. I really like the weekly bonus videos with professors and developers. Recommend to everyone.

1 - 25 of 43 reviews for Sample-based Learning Methods

By Manuel V d S

Oct 04, 2019

Course was amazing until I reached the final assignment. What a terrible way to grade the notebook part. Also, nobody was around in the forums to help... I would still recommend this to anyone interested, unless you have no intention of doing the weekly readings.

By Kaiwen Y

Oct 02, 2019

I spent 1 hour learning the material and coding the assignment and 8 hours trying to debug it so that the grader would not complain. The grader sometimes insists on a particular ordering in the code which does not really matter in the real world. Also, the grader inconsistently gives 0 marks to one part of the problem while giving full marks on another part that uses the same function (like numpy.max). However, the forum is quite helpful and the staff is generally responsive.

By Stewart A

Sep 03, 2019

Great course! Lots of hands-on RL algorithms. I'm looking forward to the next course in the specialization.

By LuSheng Y

Sep 10, 2019

Very good.

By Luiz C

Sep 13, 2019

Great Course. Every aspect top notch

By Sodagreenmario

Sep 18, 2019

Great course, but there are still some little bugs that can be fixed in notebook assignments.

By Alejandro D

Sep 19, 2019

Excellent content and delivery.

By Mark J

Sep 23, 2019

In my opinion, this course strikes a comfortable balance between theory and practice. It is, essentially, a walk-through of the textbook by Sutton and Barto entitled, appropriately enough, 'Reinforcement Learning'. Sutton's appearances in some of the videos are an added treat.

By Majd W

Dec 06, 2019

One of the things that makes this specialization stand out is that it is based on a textbook. If you read it and watch the lectures, you will have a very good understanding of the material. Also, the programming assignments are very beneficial.

By Shashidhara K

Dec 12, 2019

This course required more work than the 1st in the series (maybe I took it lightly, as the first was not that difficult). Request: please include some worked examples (calculations), or add them to the graded/ungraded quizzes; that would be nice.

By David R

Dec 10, 2019

The course is not easy and the video presentation is a bit dull, but the material is cool and interesting, and the additional quizzes, videos, and especially the notebooks make it a great course - you learn a lot and see progress. Highly recommended.

By LUIS M G M

Nov 22, 2019

Great course!!! Even better than the 1st one. I tried to read the book before taking the course, and some algorithms were not clear to me until I saw the videos (Dyna-Q, Dyna-Q+). The same goes for some key concepts (on-policy vs. off-policy learning).

By Nikhil G

Nov 25, 2019

Excellent course companion to the textbook, clarifies many of the vague topics and gives good tests to ensure understanding

By Li W

Nov 27, 2019

A very good introduction to, and practice with, the classic RL algorithms.

By Manuel B

Nov 28, 2019

Great course! Really powerful but simple ideas for solving sequential optimization problems, based on learning how the environment works.

By Ivan S F

Sep 29, 2019

Great course. Clear, concise, practical. Right amount of programming. Right amount of tests of conceptual knowledge. Almost perfect course.

By Wang G

Oct 19, 2019

Very nice explanations and assignments! Looking forward to the next 2 courses in this specialization!

By Sriram R

Oct 21, 2019

Well done mix of theory and practice!

By Kyle N

Oct 03, 2019

Great course! The notebooks are a perfect level of difficulty for someone learning RL for the first time. Thanks Martha and Adam for all your work on this!! Great content!!

By Shi Y

Nov 10, 2019

One of my favorite Coursera courses: an RL course with just the right level of difficulty. Highly recommended. I learned a lot of material that is hard to fully understand through self-study. Thanks to the instructors and TAs!

By John H

Nov 10, 2019

It was good.

By Rashid P

Nov 12, 2019

Best RL course ever done

By Alex E

Nov 19, 2019

A fun and interesting course. Keep up the great work!

By Sohail

Oct 07, 2019

Fantastic!

By Damian K

Oct 05, 2019

Great balance between theory and demonstrations of how all the techniques work. The exercises are prepared so that it is possible to focus on the core parts of the concepts. And if you wish, you can take a deep dive into the exercises and how the experiments are designed. Highly recommended course.