Probabilistic Deep Learning with TensorFlow 2

Imperial College London
via Coursera

Welcome to this course on Probabilistic Deep Learning with TensorFlow!

You will learn how probability distributions can be represented and incorporated into deep learning models in TensorFlow, including Bayesian neural networks, normalising flows and variational autoencoders. You will learn how to develop models for uncertainty quantification, as well as generative models that can create new samples similar to those in the dataset, such as images of celebrity faces.
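The core idea of incorporating a distribution into a deep learning model can be sketched in a few lines of plain NumPy (an illustrative toy, not course material; the data and names here are hypothetical): instead of predicting a single value, the model outputs the parameters of a distribution and is trained by minimising the negative log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data whose noise level grows with x (heteroscedastic).
x = rng.uniform(0.0, 1.0, size=1000)
y = 2.0 * x + rng.normal(0.0, 0.1 + 0.5 * x)

def gaussian_nll(mean, std, y):
    """Negative log-likelihood of y under N(mean, std**2) -- the loss
    minimised when a model outputs a distribution, not a point estimate."""
    return np.mean(0.5 * np.log(2.0 * np.pi * std**2)
                   + (y - mean) ** 2 / (2.0 * std**2))

# A model predicting one fixed noise level fits worse than one whose
# predicted std tracks the true input-dependent noise.
nll_fixed = gaussian_nll(2.0 * x, 0.35, y)
nll_hetero = gaussian_nll(2.0 * x, 0.1 + 0.5 * x, y)
```

In TensorFlow this role is typically played by TensorFlow Probability's distribution objects attached to a network's outputs; the NumPy version above only illustrates the loss being minimised.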

You will put the concepts you learn into practice straight away in practical, hands-on coding tutorials, guided by a graduate teaching assistant. In addition, there is a series of automatically graded programming assignments for you to consolidate your skills.

At the end of the course, you will bring many of the concepts together in a Capstone Project, where you will develop a variational autoencoder algorithm to produce a generative model of a synthetic image dataset that you will create yourself.

This course follows on from the previous two courses in the specialisation, Getting Started with TensorFlow 2 and Customising Your Models with TensorFlow 2. The additional prerequisite knowledge required in order to be successful in this course is a solid foundation in probability and statistics. In particular, it is assumed that you are familiar with standard probability distributions, probability density functions, and concepts such as maximum likelihood estimation, change of variables formula for random variables, and the evidence lower bound (ELBO) used in variational inference.
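Of the prerequisites listed above, the change of variables formula is the one most directly reused in the course, since it underlies normalising flows. As an illustrative sanity check (not course code): if Y = exp(Z) with Z ~ N(0, 1), the formula gives p_Y(y) = p_Z(log y) / y, and the density of simulated samples matches it.

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.normal(size=200_000)
y = np.exp(z)                      # forward transform Y = exp(Z)

def p_z(t):
    # standard normal density
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def p_y(t):
    # density of Y obtained via the change of variables formula
    return p_z(np.log(t)) / t

# Monte Carlo check: the density of samples near y = 1.05 matches p_y(1.05).
in_bin = np.mean((y >= 1.0) & (y <= 1.1)) / 0.1
print(abs(in_bin - p_y(1.05)) < 0.02)  # True: formula matches the samples
```

Normalising flows apply exactly this identity layer by layer, with learned invertible transforms in place of the fixed exp used here.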

Instructor(s)

Dr Kevin Webster
Imperial College London
via Coursera
Free (audit)
English
Paid Certificate Available
Approx. 53 hours to complete
Self paced
Advanced Level
Subtitles: Arabic, French, Portuguese (European), Italian, Vietnamese, German, Russian, English, Spanish