Probabilistic Graphical Models 1: Representation

Provider rating: Coursera has an average rating of 6.3 (based on 4 reviews).

Description

When you enroll in courses through Coursera, you can choose between a paid plan and a free plan:

  • Free plan: audit only, with no certification. You will have access to all course materials except graded items.
  • Paid plan: commit to earning a Certificate, a trusted, shareable way to showcase your new skills.

Frequently asked questions

There are no frequently asked questions yet. Send an email to info@springest.com.

Didn't find what you were looking for? See also: Speech, Machine Learning, Algorithms, Statistics, and Computer Science.

About this course: Probabilistic graphical models (PGMs) are a rich framework for encoding probability distributions over complex domains: joint (multivariate) distributions over large numbers of random variables that interact with each other. These representations sit at the intersection of statistics and computer science, relying on concepts from probability theory, graph algorithms, machine learning, and more. They are the basis for state-of-the-art methods in a wide variety of applications, such as medical diagnosis, image understanding, speech recognition, natural language processing, and many more. They are also a foundational tool in formulating many machine learning problems.

This course is the first in a sequence of three. It describes the two basic PGM representations: Bayesian networks, which rely on a directed graph, and Markov networks, which use an undirected graph. The course discusses both the theoretical properties of these representations and their use in practice. The (highly recommended) honors track contains several hands-on assignments on how to represent some real-world problems. The course also presents some important extensions beyond the basic PGM representation, which allow more complex models to be encoded compactly.

Created by:  Stanford University
  • Taught by: Daphne Koller, Professor, School of Engineering
Basic Info

  • Course 1 of 3 in the Probabilistic Graphical Models Specialization
  • Level: Advanced
  • Language: English
  • How To Pass: Pass all graded assignments to complete the course.
  • Average User Rating: 4.7 stars

Coursework

Each course is like an interactive textbook, featuring pre-recorded videos, quizzes and projects.

Help from your peers

Connect with thousands of other learners and debate ideas, discuss course material, and get help mastering concepts.

Certificates

Earn official recognition for your work, and share your success with friends, colleagues, and employers.

Stanford University

The Leland Stanford Junior University, commonly referred to as Stanford University or Stanford, is an American private research university located in Stanford, California, on an 8,180-acre (3,310 ha) campus near Palo Alto.

Syllabus


WEEK 1


Introduction and Overview
This module provides an overall introduction to probabilistic graphical models, and defines a few of the key concepts that will be used later in the course.
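The "Distributions" and "Factors" videos introduce the data structure the rest of the course builds on. As a concrete illustration, here is a minimal sketch of a factor and the factor-product operation in Python; the course's own programming assignments use Octave/MATLAB, and the representation and numbers below are illustrative assumptions rather than assignment code.

    from itertools import product

    # A factor maps each assignment of its scope (a tuple of variables)
    # to a nonnegative number. Binary variables only, for brevity.
    phi_AB = {"scope": ("A", "B"),
              "table": {(0, 0): 0.5, (0, 1): 0.8, (1, 0): 0.1, (1, 1): 0.3}}
    phi_BC = {"scope": ("B", "C"),
              "table": {(0, 0): 0.5, (0, 1): 0.7, (1, 0): 0.1, (1, 1): 0.2}}

    def factor_product(f, g):
        """Multiply two factors: the result's scope is the union of the
        two scopes, and each entry multiplies the matching entries."""
        scope = f["scope"] + tuple(v for v in g["scope"] if v not in f["scope"])
        table = {}
        for assignment in product([0, 1], repeat=len(scope)):
            val = dict(zip(scope, assignment))
            fa = tuple(val[v] for v in f["scope"])
            ga = tuple(val[v] for v in g["scope"])
            table[assignment] = f["table"][fa] * g["table"][ga]
        return {"scope": scope, "table": table}

    phi_ABC = factor_product(phi_AB, phi_BC)
    print(phi_ABC["table"][(1, 1, 0)])  # 0.3 * 0.1 = 0.03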


4 videos


  1. Video: Welcome!
  2. Video: Overview and Motivation
  3. Video: Distributions
  4. Video: Factors

Graded: Basic Definitions

Bayesian Networks (Directed Models)



In this module, we define the Bayesian network representation and its semantics. We also analyze the relationship between the graph structure and the independence properties of a distribution represented over that graph. Finally, we give some practical tips on how to model a real-world situation as a Bayesian network.
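To make the independence analysis concrete, the sketch below builds a hypothetical three-variable network D -> G <- I (a v-structure), verifies that D and I are marginally independent, and then demonstrates the "explaining away" reasoning pattern: conditioning on the common child G makes them dependent. All probabilities are illustrative assumptions, not course data.

    from itertools import product

    # Hypothetical v-structure D -> G <- I over binary variables.
    P_D = {0: 0.6, 1: 0.4}                      # P(D)
    P_I = {0: 0.7, 1: 0.3}                      # P(I)
    P_G = {(0, 0): {0: 0.3, 1: 0.7},            # P(G | D, I)
           (0, 1): {0: 0.1, 1: 0.9},
           (1, 0): {0: 0.8, 1: 0.2},
           (1, 1): {0: 0.5, 1: 0.5}}

    # Bayesian network semantics: the joint factorizes over the graph,
    # P(D, I, G) = P(D) * P(I) * P(G | D, I).
    joint = {(d, i, g): P_D[d] * P_I[i] * P_G[(d, i)][g]
             for d, i, g in product([0, 1], repeat=3)}

    # With no evidence, influence cannot flow through the v-structure,
    # so D and I are independent:
    p_d1_i1 = sum(joint[(1, 1, g)] for g in (0, 1))
    assert abs(p_d1_i1 - P_D[1] * P_I[1]) < 1e-9

    # Observing the common child G activates the v-structure
    # ("explaining away"): given G = 1, D and I are no longer independent.
    p_g1 = sum(v for (d, i, g), v in joint.items() if g == 1)
    p_d1 = sum(v for (d, i, g), v in joint.items() if d == 1 and g == 1) / p_g1
    p_i1 = sum(v for (d, i, g), v in joint.items() if i == 1 and g == 1) / p_g1
    p_d1i1 = joint[(1, 1, 1)] / p_g1
    assert abs(p_d1i1 - p_d1 * p_i1) > 1e-3   # dependence given G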


15 videos, 6 readings


  1. Video: Semantics & Factorization
  2. Video: Reasoning Patterns
  3. Video: Flow of Probabilistic Influence
  4. Video: Conditional Independence
  5. Video: Independencies in Bayesian Networks
  6. Video: Naive Bayes
  7. Video: Application - Medical Diagnosis
  8. Video: Knowledge Engineering Example - SAMIAM
  9. Reading: Setting Up Your Programming Assignment Environment
  10. Reading: Installing Octave/MATLAB on Windows
  11. Reading: Installing Octave/MATLAB on Mac OS X (10.10 Yosemite and 10.9 Mavericks)
  12. Reading: Installing Octave/MATLAB on Mac OS X (10.8 Mountain Lion and Earlier)
  13. Reading: Installing Octave/MATLAB on GNU/Linux
  14. Reading: Octave/MATLAB resources
  15. Video: Basic Operations
  16. Video: Moving Data Around
  17. Video: Computing On Data
  18. Video: Plotting Data
  19. Video: Control Statements: for, while, if statements
  20. Video: Vectorization
  21. Video: Working on and Submitting Programming Exercises

Graded: Bayesian Network Fundamentals
Graded: Bayesian Network Independencies
Graded: Octave/Matlab installation
Graded: Simple BN Knowledge Engineering

WEEK 2


Template Models for Bayesian Networks



In many cases, we need to model distributions that have a recurring structure. In this module, we describe representations for two such situations. One is temporal scenarios, where we want to model a probabilistic structure that holds constant over time; here, we use Hidden Markov Models, or, more generally, Dynamic Bayesian Networks. The other is aimed at scenarios that involve multiple similar entities, each of whose properties is governed by a similar model; here, we use Plate Models.
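As a concrete instance of the temporal case, the sketch below runs forward filtering in a two-state hidden Markov model. The transition and emission tables are written once and reused at every time step, which is exactly the parameter sharing that template models formalize; all numbers are illustrative assumptions, not from the course.

    # Two-state HMM with binary observations (illustrative numbers).
    init = [0.5, 0.5]                       # P(S_0)
    trans = [[0.9, 0.1], [0.2, 0.8]]        # P(S_t | S_{t-1}), shared across t
    emit = [[0.8, 0.2], [0.3, 0.7]]         # P(O_t | S_t), shared across t

    def filter_forward(observations):
        """Track P(S_t | o_1..t) recursively (the HMM forward pass)."""
        belief = init[:]
        for o in observations:
            # Predict: push the belief through the shared transition model.
            predicted = [sum(belief[i] * trans[i][j] for i in range(2))
                         for j in range(2)]
            # Update: weight by the shared emission model, then normalize.
            unnorm = [predicted[j] * emit[j][o] for j in range(2)]
            z = sum(unnorm)
            belief = [u / z for u in unnorm]
        return belief

    print(filter_forward([0, 0, 1, 1]))  # belief ends up favoring state 1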


4 videos


  1. Video: Overview of Template Models
  2. Video: Temporal Models - DBNs
  3. Video: Temporal Models - HMMs
  4. Video: Plate Models

Graded: Template Models

Structured CPDs for Bayesian Networks



A table-based representation of a CPD in a Bayesian network has a size that grows exponentially in the number of parents. There are a variety of other forms of CPDs that exploit structure in the dependency model to allow for a much more compact representation. Here we describe a number of the ones most commonly used in practice.
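One widely used structured CPD of this kind is the noisy-OR model, an instance of independence of causal influence. The sketch below shows how it specifies P(Y = 1 | X_1, ..., X_k) with k + 1 parameters instead of the 2^k rows a full table would need; the leak and per-cause parameters are illustrative assumptions.

    from itertools import product

    leak = 0.01                 # probability Y fires with no cause active
    lambdas = [0.9, 0.6, 0.3]   # per-parent activation probabilities

    def noisy_or(parents):
        """P(Y = 1 | parent assignment): Y stays off only if the leak and
        every active cause all independently fail to turn it on."""
        p_off = 1 - leak
        for x, lam in zip(parents, lambdas):
            if x:
                p_off *= 1 - lam
        return 1 - p_off

    # The induced full table: 2**3 = 8 rows recovered from just 4 numbers.
    for assignment in product([0, 1], repeat=3):
        print(assignment, round(noisy_or(assignment), 4))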


4 videos


  1. Video: Overview: Structured CPDs
  2. Video: Tree-Structured CPDs
  3. Video: Independence of Causal Influence
  4. Video: Continuous Variables

Graded: Structured CPDs
Graded: BNs for Genetic Inheritance
Graded: BNs for Genetic Inheritance PA Quiz

WEEK 3


Markov Networks (Undirected Models)



In this module, we describe Markov networks (also called Markov random fields): probabilistic graphical models based on an undirected graph representation. We discuss the representation of these models and their semantics. We also analyze the independence properties of distributions encoded by these graphs, and their relationship to the graph structure. We compare these independencies to those encoded by a Bayesian network, giving us some insight into which type of model is more suitable for which scenarios.
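The sketch below makes the Gibbs distribution concrete for a tiny pairwise Markov network A - B - C: the unnormalized measure is the product of the pairwise factors, the partition function Z turns it into a distribution, and graph separation (B separates A from C) shows up as exact conditional independence. Factor values are illustrative assumptions.

    from itertools import product

    # Pairwise factors for the chain A - B - C (illustrative values).
    phi_AB = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}
    phi_BC = {(0, 0): 5.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 5.0}

    # Gibbs distribution: product of factors, normalized by Z.
    tilde_p = {(a, b, c): phi_AB[(a, b)] * phi_BC[(b, c)]
               for a, b, c in product([0, 1], repeat=3)}
    Z = sum(tilde_p.values())
    p = {x: v / Z for x, v in tilde_p.items()}
    assert abs(sum(p.values()) - 1.0) < 1e-9

    # Separation implies independence: A and C are independent given B.
    p_b0 = sum(v for (a, b, c), v in p.items() if b == 0)
    p_a0 = sum(v for (a, b, c), v in p.items() if a == 0 and b == 0) / p_b0
    p_c0 = sum(v for (a, b, c), v in p.items() if c == 0 and b == 0) / p_b0
    assert abs(p[(0, 0, 0)] / p_b0 - p_a0 * p_c0) < 1e-9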


7 videos


  1. Video: Pairwise Markov Networks
  2. Video: General Gibbs Distribution
  3. Video: Conditional Random Fields
  4. Video: Independencies in Markov Networks
  5. Video: I-maps and perfect maps
  6. Video: Log-Linear Models
  7. Video: Shared Features in Log-Linear Models

Graded: Markov Networks
Graded: Independencies Revisited
Graded: Markov Networks for OCR

WEEK 4


Decision Making



In this module, we discuss the task of decision making under uncertainty. We describe the framework of decision theory, including some aspects of utility functions. We then talk about how decision making scenarios can be encoded as a graphical model called an Influence Diagram, and how such models provide insight both into decision making and the value of information gathering.
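For a toy but concrete instance of these ideas, the sketch below computes a maximum-expected-utility decision for a single action under an uncertain state, then the value of perfect information: how much expected utility one would gain by observing the state before acting. The scenario and numbers are illustrative assumptions, not the course's influence-diagram examples.

    # Chance node: the uncertain state; utility node: U(action, state).
    P_state = {"good": 0.7, "bad": 0.3}
    utility = {("act", "good"): 100, ("act", "bad"): -40,
               ("skip", "good"): 0,  ("skip", "bad"): 0}
    actions = ["act", "skip"]

    def expected_utility(action):
        return sum(P_state[s] * utility[(action, s)] for s in P_state)

    # Maximum expected utility: pick the action with the highest EU.
    best = max(actions, key=expected_utility)
    meu = expected_utility(best)
    print(best, meu)  # "act": 0.7 * 100 + 0.3 * (-40) = 58, vs 0 for "skip"

    # Value of perfect information: deciding *after* observing the state.
    meu_informed = sum(P_state[s] * max(utility[(a, s)] for a in actions)
                       for s in P_state)
    print(meu_informed - meu)  # 70 - 58 = 12: the most one should pay to observe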


3 videos


  1. Video: Maximum Expected Utility
  2. Video: Utility Functions
  3. Video: Value of Perfect Information

Graded: Decision Theory
Graded: Decision Making
Graded: Decision Making PA Quiz

WEEK 5


Knowledge Engineering & Summary
This module provides an overview of graphical model representations and some of the real-world considerations when modeling a scenario as a graphical model. It also includes the course final exam.


1 video


  1. Video: Knowledge Engineering

Graded: Representation Final Exam

Reviews

There are no reviews yet.

Share your review

Do you have experience with this course? Submit your review and help other people make the right choice. As a thank-you for your effort, we will donate $1 to Stichting Edukans.
