Slides for Information Theory, Pattern Recognition, and Neural Networks Lectures by David MacKay


Note: I use the blackboard in lectures, and I give the audience problems to solve. These slides are therefore an incomplete record of the lectures. Often, the slides' main function is to provide a review of key points from the preceding lectures.

Lecture 1 Introduction to information theory
Lecture 2 Introduction to compression. Information content.
Lecture 3 Source coding theorem, bent coin lottery
Lecture 4 Symbol codes
Lecture 5 Symbol codes and Arithmetic coding
Lecture 6 Information measures for Noisy Channels
Lecture 7 Noisy Channels; Inference
Lecture 8 Coding theorem
Lecture 9
Lecture 10 Inference
Lecture 11 Monte Carlo methods and Variational methods
Lecture 12 Content-addressable memories
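As a small taste of the "information content" material covered in Lectures 2 and 3, here is a sketch (not from the lectures themselves) computing the Shannon information content of an outcome and the entropy of a bent coin; the function names are illustrative, not from the course materials:

```python
import math

def information_content(p):
    """Shannon information content h(x) = -log2 P(x) of an outcome
    with probability p, measured in bits."""
    return -math.log2(p)

def binary_entropy(p):
    """Binary entropy H2(p) of a bent coin with P(heads) = p,
    in bits per toss. H2(0) = H2(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A fair coin toss conveys exactly 1 bit:
print(information_content(0.5))   # 1.0
# A bent coin with P(heads) = 0.1 conveys less than 1 bit per toss
# on average, which is why its outcomes are compressible:
print(binary_entropy(0.1))        # ≈ 0.469
```

The gap between 1 bit per toss and H2(0.1) ≈ 0.47 bits per toss is exactly what the source coding theorem (Lecture 3) says a good compressor can exploit.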

These slides are copyright (c) 2005, 2006, 2007, 2008, 2009, 2012 David J.C. MacKay. They may be used for any educational purpose.

See also The Information Theory, Inference, and Learning Algorithms website, which includes all the figures from the book, for use in teaching.

Back to the Information Theory, Pattern Recognition, and Neural Networks course


Site last modified Sun Aug 31 18:51:05 BST 2014