Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for the state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many, many more. They are also a foundational tool in formulating many machine learning problems.
The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located on an 8,180-acre (3,310 ha) campus in Stanford, California, near Palo Alto.
Top reviews from PROBABILISTIC GRAPHICAL MODELS 1: REPRESENTATION
Overall very good quality content. PAs are useful but some questions/tests leave too much to interpretation and can be frustrating for students. Audio quality for the classes could also be improved.
The lectures were a bit too compact and unsystematic. However, if you also do a lot of reading of the textbook, you can learn a lot. Besides, the quizzes and programming assignments are of high quality.
Prof. Koller did a great job communicating difficult material in an accessible manner. Thanks to her for starting Coursera and offering this advanced course so that we can all learn...Kudos!!
The course was deep, and well-taught. This is not a spoon-feeding course like some others. The only downside were some "mechanical" problems (e.g. code submission didn't work for me).
Excellent course; the effort of the instructor is well reflected in the content and the exercises. A must for every serious student working on decision theory or Markov random field tasks.
Superb exposition. Makes me want to continue learning till the very end of this course. Very intuitive explanations. Plan to complete all courses offered in this specialization.
I have actually earned three years of my life (at least) and one possible patent because of this course. Thank you, Daphne Ma'am. God bless everybody associated with it.
Learned a lot. Lectures were easy to follow, and the textbook was able to more fully explain things when I needed it. Looking forward to the next course in the series.
About the Probabilistic Graphical Models Specialization
Learning Outcomes: By the end of this course, you will be able to
- Apply the basic process of representing a scenario as a Bayesian network or a Markov network
- Analyze the independence properties implied by a PGM, and determine whether they are a good match for your distribution
- Decide which family of PGMs is more appropriate for your task
- Utilize extra structure in the local distribution for a Bayesian network to allow for a more compact representation, including tree-structured CPDs, logistic CPDs, and linear Gaussian CPDs
- Represent a Markov network in terms of features, via a log-linear model
- Encode temporal models as a Hidden Markov Model (HMM) or as a Dynamic Bayesian Network (DBN)
- Encode domains with repeating structure via a plate model
- Represent a decision-making problem as an influence diagram, and be able to use that model to compute optimal decision strategies and information-gathering strategies

Honors track learners will be able to apply these ideas to complex, real-world problems.
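To make the first outcome concrete, here is a minimal sketch (not course material; the variable names and probability values are illustrative) of representing the classic Rain/Sprinkler/WetGrass scenario as a Bayesian network with plain Python dictionaries, and computing the joint distribution via the chain-rule factorization P(R) · P(S | R) · P(W | S, R):

```python
from itertools import product

# Conditional probability distributions (CPDs); values are illustrative.
P_RAIN = 0.2                                  # P(Rain = True)
P_SPRINKLER = {True: 0.01, False: 0.4}        # P(Sprinkler = True | Rain)
P_WET = {                                     # P(WetGrass = True | Sprinkler, Rain)
    (True, True): 0.99,
    (True, False): 0.9,
    (False, True): 0.8,
    (False, False): 0.0,
}

def joint(rain: bool, sprinkler: bool, wet: bool) -> float:
    """Joint probability via the network's chain-rule factorization."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER[rain] if sprinkler else 1 - P_SPRINKLER[rain]
    p_w = P_WET[(sprinkler, rain)]
    p *= p_w if wet else 1 - p_w
    return p

# Sanity check: the eight joint probabilities must sum to 1.
total = sum(joint(r, s, w) for r, s, w in product([True, False], repeat=3))
print(round(total, 10))  # -> 1.0
```

The point of the representation is compactness: three small CPDs (7 parameters) define all 8 joint probabilities, and the factorization scales far better than an explicit joint table as the number of variables grows.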