Slides for the Stanford Probabilistic Graphical Models Course

+2 votes

On March 19, 2012, Stanford University launched an online course on probabilistic graphical models, taught by Professor Daphne Koller, a leading figure in machine learning:

https://class.coursera.org/pgm/

The course is now in its fifth week. Below are links to the course slides, which should be useful for anyone who was unable to register.

asked April 10, 2012 in Graphical Models by 52opencourse (24,880 points)
edited May 1, 2012 by 52opencourse
How can I download the videos? Thanks!
Is it enough to just watch the videos online via Preview?
The course is about to run another session; once you register, you can download the videos.

3 Answers

+1 vote

Lecture Slides


Click on the lecture titles to download the annotated slides for each lecture, or click on the slides link next to each section label to download the combined slides for the whole section. For further reading, we have also provided relevant references to the class textbook next to each lecture.


Introduction and Overview (combined slides)

Welcome!

Overview and Motivation. Chapter 1.

Distributions. Chapters 2.1.1 to 2.1.3.

Factors. Chapter 4.2.1.


Bayesian Network Fundamentals (combined slides)

Semantics and Factorization. Chapters 3.2.1, 3.2.2. If you are unfamiliar with genetic inheritance, please watch this short Khan Academy video for some background.

Reasoning Patterns. Chapter 3.2.1.

Flow of Probabilistic Influence. Chapter 3.3.1.

Conditional Independence. Chapters 2.1.4, 3.1.

Independencies in Bayesian Networks. Chapter 3.2.2.

Naive Bayes. Chapter 3.1.3.

Application - Medical Diagnosis. Chapter 3.2: Box 3.D (p. 67).
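
The central idea of the Semantics and Factorization lecture — a Bayesian network represents the joint distribution as a product of per-variable CPDs, P(X1, ..., Xn) = ∏ P(Xi | Parents(Xi)) — fits in a few lines of Python. The network below is a cut-down difficulty/intelligence/grade example in the spirit of the course's student network; the CPT numbers are made up for illustration:

```python
# Chain rule for a tiny Bayesian network D -> G <- I:
# P(D, I, G) = P(D) * P(I) * P(G | D, I). CPT numbers are made up.

P_D = {0: 0.6, 1: 0.4}            # P(Difficulty)
P_I = {0: 0.7, 1: 0.3}            # P(Intelligence)
P_G_given_DI = {                  # P(Grade | Difficulty, Intelligence)
    (0, 0): {0: 0.3, 1: 0.7},
    (0, 1): {0: 0.1, 1: 0.9},
    (1, 0): {0: 0.7, 1: 0.3},
    (1, 1): {0: 0.4, 1: 0.6},
}

def joint(d, i, g):
    """Joint probability read off the factored form."""
    return P_D[d] * P_I[i] * P_G_given_DI[(d, i)][g]

# The factored joint still sums to 1 over all assignments:
total = sum(joint(d, i, g) for d in (0, 1) for i in (0, 1) for g in (0, 1))
```

The payoff of the factorization is the parameter count: this joint over three binary variables needs only 2 + 2 + 8 CPT entries instead of a full 8-entry joint table, and the gap grows exponentially with more variables.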


Template Models (combined slides)

Overview. Chapter 6.1.

Temporal Models - DBNs. Chapters 6.2, 6.3.

Temporal Models - HMMs. Chapters 6.2, 6.3.

Plate Models. Chapter 6.4.1.


Octave Tutorial

Octave Tutorial Code


Structured CPDs (combined slides)

Overview. Chapters 5.1, 5.2.

Tree-Structured CPDs. Chapter 5.3.

Independence of Causal Influence. Chapter 5.4.

Continuous Variables. Chapter 5.5.


Markov Network Fundamentals (combined slides)

Pairwise Markov Networks. Chapter 4.1.

General Gibbs Distribution. Chapter 4.2.2.

Conditional Random Fields. Chapter 4.6.1.

Independencies in Markov Networks. Chapter 4.3.1.

I-Maps and Perfect Maps. Chapter 3.3.4.

Log-Linear Models. Chapter 4.4, p. 125.

Shared Features in Log-Linear Models. Chapter 4: Box 4.B (p. 112), Box 4.C (p. 126), Box 4.D (p. 127).
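The General Gibbs Distribution lecture defines an unnormalized measure as a product of factors, normalized by the partition function Z. A minimal sketch for a three-variable pairwise chain, with illustrative factor values (not taken from the course):

```python
# Gibbs distribution for a pairwise Markov network A - B - C (a chain):
# P(a, b, c) = (1/Z) * phi1(a, b) * phi2(b, c). Factor values are made up.
import itertools

phi1 = {(0, 0): 30.0, (0, 1): 5.0, (1, 0): 1.0, (1, 1): 10.0}
phi2 = {(0, 0): 100.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 100.0}

def unnormalized(a, b, c):
    # Product of the pairwise factors along the chain.
    return phi1[(a, b)] * phi2[(b, c)]

# Partition function Z: sum of the unnormalized measure over all assignments.
Z = sum(unnormalized(a, b, c)
        for a, b, c in itertools.product((0, 1), repeat=3))

def prob(a, b, c):
    return unnormalized(a, b, c) / Z
```

Note that, unlike Bayesian network CPDs, the factors here are arbitrary non-negative weights with no local normalization — which is exactly why the global constant Z is needed.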


Representation Wrapup: Knowledge Engineering (combined slides)

Knowledge Engineering.


answered April 10, 2012 by 52opencourse (24,880 points)
0 votes

Variable Elimination (combined slides)

Conditional Probability Queries. Chapter 9.3.

MAP Queries. Chapter 13.2.1.

Variable Elimination Algorithm. Chapter 9.2.

Variable Elimination Complexity. Chapter 9.4 through 9.4.2.3.

VE - Graph Based Perspective. Chapter 9.4.

Finding Elimination Orderings. Chapter 9.4.3.
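The two operations at the heart of the variable elimination algorithm are the factor product and summing a variable out of a factor. A minimal Python sketch, assuming binary variables and representing a factor as a (variables, table) pair — illustrative code, not the course's reference implementation:

```python
from itertools import product

def multiply(f1, f2):
    """Factor product. Each factor is (vars, table): a tuple of variable
    names and a dict mapping value tuples (in that order) to floats."""
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + tuple(v for v in vars2 if v not in vars1)
    table = {}
    for assignment in product((0, 1), repeat=len(out_vars)):  # binary vars assumed
        a = dict(zip(out_vars, assignment))
        table[assignment] = (t1[tuple(a[v] for v in vars1)]
                             * t2[tuple(a[v] for v in vars2)])
    return out_vars, table

def sum_out(var, factor):
    """Sum `var` out of `factor`, producing a smaller factor."""
    vars_, table = factor
    i = vars_.index(var)
    out_vars = vars_[:i] + vars_[i + 1:]
    out = {}
    for assignment, value in table.items():
        key = assignment[:i] + assignment[i + 1:]
        out[key] = out.get(key, 0.0) + value
    return out_vars, out

# Eliminating B from P(A) * P(B | A) recovers the marginal P(A):
f_A = (('A',), {(0,): 0.6, (1,): 0.4})
f_B_given_A = (('A', 'B'), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
marginal = sum_out('B', multiply(f_A, f_B_given_A))
```

Full variable elimination just repeats this step: for each variable in the chosen ordering, multiply together all factors that mention it, then sum it out — which is why the elimination ordering (the last two lectures above) governs the size of the intermediate tables.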


Belief Propagation (combined slides)

Belief Propagation. Chapter 11.3.2

Properties of Cluster Graphs. Chapter 11.3.2


Belief Propagation, Part 2 (combined slides)

Properties of Belief Propagation. Chapter 11.3.3

Clique Tree Algorithm - Correctness. Chapter 10.2.1

Clique Tree Algorithm - Computation. Chapters 10.2.2, 10.3.3.1

Clique Trees and Independence. Chapter 10.1.2

Clique Trees and VE. Chapter 10.4.1

BP in Practice. Box 11.C

Loopy BP and Message Decoding. Box 11.A


MAP Estimation Part 1 (combined slides)

MAP Exact Inference. Chapter 13.2.1

Finding a MAP Assignment. Chapter 13.2.2

answered April 10, 2012 by 52opencourse (24,880 points)
0 votes

MAP Estimation Part 2 (combined slides)

Tractable MAP Problems. Chapter 13.6.

Dual Decomposition - Intuition. Dual decomposition is not in the textbook, but for further information you may refer to the original paper: "MRF Energy Minimization and Beyond via Dual Decomposition" by N. Komodakis, N. Paragios, and G. Tziritas.

Dual Decomposition - Algorithm.


Sampling Methods (combined slides)

Simple Sampling. Chapter 12.1.

Markov Chain Monte Carlo. Chapter 12.3 up to 12.3.2.2.

Using a Markov Chain. Chapter 12.3.5.

Gibbs Sampling. Review of Chapter 12.3.2 as applied to Gibbs Sampling.

Metropolis Hastings Algorithm. Chapter 12.3.4.2.
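Gibbs sampling, as covered above, repeatedly resamples each variable from its conditional distribution given the current values of all the others. A minimal sketch for two binary variables whose joint table is known (the table values are made up):

```python
# Gibbs sampler over two binary variables X0, X1 with a known joint table.
# Each sweep resamples X0 | X1 and then X1 | X0. Joint values are made up.
import random

joint = {(0, 0): 0.30, (0, 1): 0.20, (1, 0): 0.10, (1, 1): 0.40}

def sample_conditional(i, state, rng):
    """Resample variable i from P(Xi | X_other), read off the joint table."""
    fixed = state[1 - i]
    def p(v):
        key = (v, fixed) if i == 0 else (fixed, v)
        return joint[key]
    p1 = p(1) / (p(0) + p(1))
    return 1 if rng.random() < p1 else 0

def gibbs(n_samples, burn_in=1000, seed=0):
    rng = random.Random(seed)
    state = [0, 0]
    samples = []
    for t in range(burn_in + n_samples):
        for i in (0, 1):                 # one full sweep over the variables
            state[i] = sample_conditional(i, state, rng)
        if t >= burn_in:
            samples.append(tuple(state))
    return samples
```

Because every joint entry here is positive, the chain is ergodic and the empirical frequencies of the collected samples converge to the joint table — e.g. the fraction of (1, 1) samples approaches 0.40.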


Inference In Temporal Models, Summary (combined slides)

Inference in Temporal Models

Inference - Summary


Decision Making (combined slides)

Maximum Expected Utility. Chapters 22.1.1, 23.2.1-4, 23.4.1-2, 23.5.1.

Utility Functions. Chapters 22.2.1-3, 22.3.2, 22.4.2.

Value of Perfect Information. Chapters 23.7.1-2.


Learning: Parameter Estimation, Part 1 (combined slides)

Overview. Chapter 16.1 and Intro to Chapter 17


Maximum Likelihood Estimation. Chapter 17.1

Maximum Likelihood Estimation for Bayesian Networks. Chapter 17.2 through 17.2.1

Bayesian Estimation. Chapter 17.3.2

Bayesian Prediction. Chapter 17.4

Bayesian Estimation for Bayesian Networks
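For discrete Bayesian networks, maximum likelihood estimation decomposes per CPD into simple counting: the estimate of P(x | u) is the fraction of data cases with parent assignment u in which the child takes value x. A toy sketch with a made-up dataset for a single parent-child pair:

```python
# MLE for one CPD of a discrete Bayesian network:
# theta_{x|u} = M[u, x] / M[u], where M counts occurrences in the data.
from collections import Counter

# Each record is (parent_value, child_value); the dataset is made up.
data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 1), (1, 0), (0, 0)]

pair_counts = Counter(data)                    # M[u, x]
parent_counts = Counter(u for u, _ in data)    # M[u]

def mle(child, parent):
    """MLE of P(child | parent) from the sufficient-statistic counts."""
    return pair_counts[(parent, child)] / parent_counts[parent]
```

Bayesian estimation (the following lectures) differs only in adding pseudo-counts from a Dirichlet prior to these same statistics before normalizing.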


Learning: Parameter Estimation, Part 2 (combined slides)

Maximum Likelihood Estimation for Log-Linear Models. Chapter 20.1 - 20.2

Maximum Likelihood Estimation for Conditional Random Fields. Chapter 20.1 - 20.2

MAP Estimation for Markov Random Fields and Conditional Random Fields. Chapter 20.1 - 20.2

answered May 1, 2012 by 52opencourse (24,880 points)
This site is hosted on DigitalOcean. Content is published under a Creative Commons Attribution-NonCommercial-ShareAlike license; reposted content must follow the same terms.