What's inside
Representation
✦
How to specify a model.
General techniques for parameterizing multivariate probability distributions and representing them in the intuitive visual language of graphs.
Inference
✦
How to ask the model questions.
Inference algorithms for answering useful questions, such as: What is the most probable explanation for the evidence we observe, and how certain can we be in a given hypothesis?
Learning
✦
How to fit a model to real-world data.
Methods for learning directed, undirected, and latent variable models, with an emphasis on structure learning and parameter learning.
Download the full tutorial
This 200-page tutorial reviews the theory and methods of representation, learning, and inference in probabilistic graphical modeling. As an accompaniment to this tutorial, we provide links to exceptional external resources that provide additional depth.
Download tutorial chapters
Chapter 1: Introduction.
In this brief introduction, we provide a high-level overview of what to expect from this tutorial.
Chapter 2: Preliminaries.
This chapter covers the basics of probability theory and graph theory, which provide the mathematical foundations of probabilistic graphical modeling.
Chapter 3: Representation.
In this chapter, we focus on general techniques for parameterizing probability distributions with relatively few parameters. We explore how the resulting models can be elegantly described as directed and undirected graphs.
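To make the idea concrete, here is a minimal sketch of a directed model, using an assumed toy Rain → WetGrass example (not taken from the tutorial): the joint distribution factorizes as p(R, W) = p(R) p(W | R), so it is specified by fewer free parameters than a full joint table.

```python
# Toy Bayesian network Rain -> WetGrass (illustrative numbers, assumed).
p_rain = {True: 0.2, False: 0.8}                      # p(R)
p_wet_given_rain = {                                  # p(W | R)
    True:  {True: 0.9, False: 0.1},
    False: {True: 0.1, False: 0.9},
}

def joint(rain, wet):
    """Joint probability via the factorization p(R) * p(W | R)."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

# The full joint table has 4 entries, but the factorized model needs
# only 1 + 2 = 3 free parameters; the factors still define a valid
# distribution (all entries sum to 1).
total = sum(joint(r, w) for r in (True, False) for w in (True, False))
print(total)  # 1.0
```

The saving is small here, but the same factorization idea is what keeps large models with many variables tractable to specify.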
Chapter 4: Exact Inference.
In this chapter, we focus on exact probabilistic inference in graphical models. Though exact inference is NP-hard in the general case, tractable solutions can be obtained for certain kinds of problems. As illustrated throughout this chapter, the tractability of an inference problem depends heavily on the structure of the graph that describes the probability of interest.
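As a small illustration of how graph structure enables tractability, here is a sketch of variable elimination on a chain x1 → x2 → x3 with assumed toy conditional probability tables: eliminating variables one at a time reduces the computation to matrix-vector products, with cost linear in the chain length rather than exponential.

```python
import numpy as np

# Toy chain x1 -> x2 -> x3 (illustrative CPTs, assumed).
p1 = np.array([0.6, 0.4])                  # p(x1)
p2_given_1 = np.array([[0.7, 0.3],         # p(x2 | x1), rows index x1
                       [0.2, 0.8]])
p3_given_2 = np.array([[0.9, 0.1],         # p(x3 | x2), rows index x2
                       [0.5, 0.5]])

# Eliminate x1, then x2: each step collapses one variable into a "message".
m2 = p1 @ p2_given_1    # p(x2), after summing out x1
p3 = m2 @ p3_given_2    # p(x3), after summing out x2
print(p3)               # [0.7 0.3]
```

Summing over all joint configurations would cost O(2^n) on a chain of n binary variables; the elimination ordering above exploits the chain structure to do the same computation in O(n) small products.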
Chapter 5: Approximate Inference.
This chapter introduces the two main families of approximate inference algorithms: (1) sampling methods, which produce answers by repeatedly generating random numbers from a distribution of interest, and (2) variational methods, which formulate inference as an optimization problem.
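A minimal sketch of the first family, using an assumed toy query (not from the tutorial): estimate the probability of seeing at least 8 heads in 10 fair coin flips by repeatedly simulating from the model and counting how often the event occurs.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def estimate_tail_prob(n_trials=100_000):
    """Monte Carlo estimate of p(heads >= 8) for 10 fair coin flips."""
    hits = 0
    for _ in range(n_trials):
        heads = sum(random.random() < 0.5 for _ in range(10))
        hits += heads >= 8
    return hits / n_trials

est = estimate_tail_prob()
# Exact answer: (C(10,8) + C(10,9) + C(10,10)) / 2**10 = 56/1024 ≈ 0.0547;
# the estimate converges to this as n_trials grows.
```

Variational methods take the complementary route: rather than simulating, they search over a family of tractable distributions for the member closest to the target.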
Chapter 6: Learning.
This chapter introduces methods for fitting models to real-world data. We highlight two main tasks: (1) structure learning, where we want to infer variable dependencies in the graphical model, and (2) parameter learning, where the graph structure is known and we want to estimate the factors.
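Here is a sketch of the second task under a fixed, assumed structure Rain → WetGrass with toy data (not from the tutorial): with fully observed discrete data, maximum-likelihood parameter estimates reduce to normalized counts.

```python
from collections import Counter

# Toy fully observed dataset of (rain, wet) pairs (assumed).
data = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (0, 1)]
counts = Counter(data)

def mle_wet_given_rain(rain):
    """Maximum-likelihood estimate of p(W = 1 | R = rain) from counts."""
    n_rain = sum(c for (r, _), c in counts.items() if r == rain)
    return counts[(rain, 1)] / n_rain

print(mle_wet_given_rain(1))  # 2/3
print(mle_wet_given_rain(0))  # 1/3
```

With latent variables or missing data this closed form no longer applies, which is where iterative methods such as expectation-maximization come in.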
Chapter 7: Discussion – The Variational Autoencoder.
In this concluding discussion, we present a highly influential deep probabilistic model: the variational autoencoder (VAE). Using the VAE as a case study, we draw connections among ideas from throughout this tutorial and demonstrate how these ideas are useful in machine learning research.
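One ingredient that makes the VAE trainable is the reparameterization trick; the sketch below illustrates the Gaussian case in plain Python (dependency-free, with assumed toy values): sampling z ~ N(mu, sigma²) is rewritten as z = mu + sigma · eps with eps ~ N(0, 1), so the randomness is isolated in eps and gradients can flow through mu and sigma.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def reparameterize(mu, sigma):
    """Draw z ~ N(mu, sigma^2) as a deterministic function of (mu, sigma, eps)."""
    eps = random.gauss(0.0, 1.0)   # noise from a fixed base distribution
    return mu + sigma * eps

# Sanity check: samples should be centered at mu (here mu=2.0, sigma=0.5).
samples = [reparameterize(2.0, 0.5) for _ in range(50_000)]
mean = sum(samples) / len(samples)
# mean ≈ 2.0, as expected for N(2, 0.25)
```

In an actual VAE, mu and sigma are outputs of the encoder network, and this sampling step sits between the encoder and decoder inside the training objective.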
How to cite this work
Please cite our work using the following BibTeX.
@misc{maasch2025pgm,
title={Probabilistic Graphical Models: A Concise Tutorial},
author={Jacqueline Maasch and Willie Neiswanger and Stefano Ermon and Volodymyr Kuleshov},
year={2025},
eprint={2507.17116},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2507.17116},
}