Hi, my name is Abel Rodriguez, and I am a Professor of Statistics at the Baskin School of Engineering at the University of California, Santa Cruz. Welcome to my course on mixture models. In this course, I will introduce you to an important class of statistical models. Mixture models provide a flexible approach to modeling data and are useful in a number of applications, from density estimation to clustering and classification problems. This is for a couple of reasons. One of them is that standard families of probability distributions, such as the Gaussian, the exponential, or the Poisson that you may be familiar with, are often too restrictive for modeling features that appear in real data, such as multimodality or zero inflation. Mixture models, which can be related to kernel density estimation procedures, address these issues in a way that allows for natural generalizations of other well-known procedures. But it is not only because of their application in density estimation. In addition to providing flexible probability distributions, finite mixture models have a very strong relationship with some classical procedures for clustering and classification, such as k-means clustering, linear discriminant analysis, and quadratic discriminant analysis. More generally, mixture models provide a tool to understand and generalize these approaches, as well as to quantify the uncertainty associated with the estimates and predictions they generate. To succeed in this course, you should be familiar with the basics of calculus-based probability and with the principles of maximum likelihood estimation and Bayesian estimation. This includes common computational tools such as expectation maximization and Markov chain Monte Carlo algorithms, particularly Gibbs sampling and random walk Metropolis-Hastings methods.
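The point about multimodality can be illustrated with a quick simulation. This is only a sketch, written in Python rather than the R used in the course, and the mixture weights and component parameters below are made up for illustration: drawing from a two-component Gaussian mixture produces a clearly bimodal sample whose mean falls in a low-density region between the modes, something no single Gaussian can reproduce.

```python
import random
import statistics

random.seed(1)

# Simulate from a two-component Gaussian mixture:
# with probability 0.6 draw from N(-2, 1), otherwise from N(3, 0.5^2).
# (These weights and parameters are illustrative, not from the course.)
def draw_from_mixture() -> float:
    if random.random() < 0.6:
        return random.gauss(-2.0, 1.0)
    return random.gauss(3.0, 0.5)

samples = [draw_from_mixture() for _ in range(10_000)]

# The overall mean (about 0.6 * (-2) + 0.4 * 3 = 0) sits between the
# two modes, where the mixture density is low: a single Gaussian
# centered there would badly misrepresent the data.
print(statistics.mean(samples))
```

Plotting a histogram of `samples` (for example with `matplotlib`) would show the two separate modes near -2 and 3 directly.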
If you feel that you need to strengthen your background in these areas, I recommend that you consider taking other Coursera courses before this one, such as Bayesian Statistics: From Concept to Data Analysis by Herbert Lee, and Bayesian Statistics: Techniques and Models by Matthew Heiner. You will also need some basic knowledge of the R language, as there are a number of coding assignments in this course. If you need to strengthen your knowledge of R, I recommend the course R Programming by Roger Peng and co-authors, which is also available on Coursera. Now, let me tell you a little bit about how the course is structured. There are five modules. We start with basic concepts, then we move on to maximum likelihood estimation for mixture models, then we consider Bayesian estimation, then some traditional applications of mixture models, and finally some advanced topics, such as selecting the number of components in the mixture. Each module has lecture videos with in-video questions to help you check your understanding, short quizzes, useful readings, and peer-review assignments that will allow you to apply what you have learned and to see how your fellow learners have approached the material. There are also regular discussions that I encourage you to participate in. I wish you the best of luck in this course, Bayesian Statistics: Mixture Models.