Anomaly Detection and (variational) autoencoders
13th Nov. 2020

10:00-13:00

Registration is closed as the seminar has already taken place. You can find the material (slides and Jupyter notebooks) below or in the GitHub repository.

The seminars will be held on Zoom. All registered participants should have received an email with the Zoom link for the first seminar. If you have not received it, or you registered after Thursday 12th Nov. 11:00, please get in touch with the organisers.

Agenda
10:00-11:00 - Theory of autoencoders with introduction to variational autoencoders
11:00-12:00 - Hands-on session with guided exercises and examples (Python)
12:00-13:00 - Invited Talk (see below)

Material

The material can be accessed from the GitHub Repository. For convenience, here are the slides and the hands-on Google Colab Jupyter notebooks:

Slides

Slides on Autoencoders

Slides on an introduction to Variational Autoencoders

Slides on an introduction to Google Colab and Jupyter Notebooks

Jupyter Notebooks (Python; the links will open a Google Colab instance, so there is no need to install anything locally)

Your first autoencoder

Your first variational autoencoder

Invited Talk (speaker will present live)

Title: Unsupervised Learning for Thermophysical Analysis on the Lunar Surface
Speaker: Ben Moseley
Abstract: We investigate the use of unsupervised machine learning to understand and extract valuable information from thermal measurements of the lunar surface. We train a variational autoencoder (VAE) to reconstruct observed variations in lunar surface temperature from over 9 yr of Diviner Lunar Radiometer Experiment data and in doing so learn a fully data-driven thermophysical model of the lunar surface. The VAE defines a probabilistic latent model that assumes the observed surface temperature variations can be described by a small set of independent latent variables and uses a deep convolutional neural network to infer these latent variables and to reconstruct surface temperature variations from them. We find it is able to disentangle five different thermophysical processes from the data, including (1) the solar thermal onset delay caused by slope aspect, (2) effective albedo, (3) surface thermal conductivity, (4) topography and cumulative illumination, and (5) extreme thermal anomalies. Compared to traditional physics-based modeling and inversion, our method is extremely efficient, requiring orders of magnitude less computational power to invert for underlying model parameters. Furthermore our method is physics-agnostic and could therefore be applied to other space exploration data sets, immediately after the data is collected and without needing to wait for physical models to be developed. We compare our approach to traditional physics-based thermophysical inversion and generate new, VAE-derived global thermal anomaly maps. Our method demonstrates the potential of artificial intelligence-driven techniques to complement existing physical models as well as for accelerating lunar and space exploration in general.
Full Paper Link: https://iopscience.iop.org/article/10.3847/PSJ/ab9a52/meta

Technical Prerequisites

You will need access to a reasonably fast internet connection to be able to follow the live stream of the lecture. The exercises will all be carried out in Google Colab, therefore you only need Google Chrome installed on your computer. A Google account is also necessary. More resources can be found here: https://astroml-hackdays.org/prerequisites

Know-how Prerequisites

To follow this lecture, you will need an intermediate understanding of mathematics, linear algebra and statistics. No previous knowledge of neural networks is assumed. Intermediate Python experience is required to follow and work on the exercises.

ABSTRACT

In this workshop we will talk about dimensionality reduction and anomaly detection (the two techniques go hand in hand). We will first cover the theory and the most widely used methods for dimensionality reduction and anomaly detection on different kinds of input data. We will see how different techniques can be used to extract the relevant features that can then be used for clustering and anomaly detection. Examples are PCA (see for example Philos Trans A Math Phys Eng Sci. 2016 Apr 13; 374(2065): 20150202. doi: 10.1098/rsta.2015.0202) and t-SNE (Journal of Machine Learning Research 9 (2008) 2579-2605, PDF). Figure 1 shows, for example, how such methods can be used to visualise clusters of similar hand-written digits.

Figure 1: Different ways of visualising clusters of similar hand-written digits (MNIST dataset) with the first two components of PCA (left) and the first two components of t-SNE (right) (Source: towardsdatascience.com).

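As an illustration (not the workshop's own code), here is a minimal Python sketch that produces a comparable visualisation with scikit-learn, using its small built-in digits dataset as a stand-in for MNIST; the library and dataset choices are assumptions for this example.

```python
# Minimal sketch: visualising hand-written digits with PCA and t-SNE.
# Uses scikit-learn's small "digits" dataset as a stand-in for MNIST.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

digits = load_digits()
X, y = digits.data, digits.target  # X has shape (1797, 64)

# Linear projection onto the first two principal components.
X_pca = PCA(n_components=2).fit_transform(X)

# Non-linear two-dimensional embedding with t-SNE (slower than PCA).
X_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)

fig, axes = plt.subplots(1, 2, figsize=(12, 5))
for ax, emb, title in zip(axes, [X_pca, X_tsne], ["PCA", "t-SNE"]):
    scatter = ax.scatter(emb[:, 0], emb[:, 1], c=y, cmap="tab10", s=8)
    ax.set_title(title)
fig.colorbar(scatter, ax=axes, label="digit class")
plt.show()
```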

We will look at the limitations of classical methods such as PCA (in particular from a computational point of view) and we will then focus on and analyse in detail a special kind of neural network that is typically used for these tasks: autoencoders. We will see how we can use these architectures to reconstruct a set of input data and then, by measuring the reconstruction error, how we can detect anomalies (or outliers). Figure 2 shows the typical architecture of an autoencoder that we will use during the hands-on sessions.

Figure 2: a typical architecture of an autoencoder. This type of architecture learns to reconstruct the input data via an encoding-decoding approach.

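As a rough preview of the hands-on session, the following sketch shows how such an autoencoder could be set up in Keras/TensorFlow and how the per-sample reconstruction error can be thresholded to flag outliers; the layer sizes, training settings and the 99th-percentile threshold are illustrative assumptions, not the exact notebook code.

```python
# Minimal sketch of a dense autoencoder for anomaly detection (Keras).
# Layer sizes and the threshold choice are illustrative, not the workshop's settings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Flattened 28x28 images scaled to [0, 1] (MNIST as an example dataset).
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Encoder compresses the input to a small latent vector; decoder reconstructs it.
autoencoder = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(32, activation="relu"),      # latent representation
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),  # reconstruction
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256, validation_split=0.1)

# Reconstruction error per sample: samples the model reconstructs poorly
# are flagged as anomalies (outliers) relative to the training data.
reconstructions = autoencoder.predict(x_test)
errors = np.mean((x_test - reconstructions) ** 2, axis=1)
threshold = np.percentile(errors, 99)         # e.g. top 1% flagged as anomalies
anomalies = np.where(errors > threshold)[0]
print(f"Flagged {len(anomalies)} potential anomalies out of {len(x_test)} samples")
```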

We will also look at different autoencoder architectures that are able to deal with 2-dimensional data such as images. These architectures use more advanced neural network layers, such as convolutional ones.
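
For 2-dimensional inputs, the dense layers above are typically replaced by convolutional and pooling/upsampling layers. A minimal sketch of such a convolutional autoencoder (with illustrative filter counts and kernel sizes, again not the exact notebook architecture) could look like this:

```python
# Minimal sketch of a convolutional autoencoder for 28x28 grayscale images (Keras).
# Filter counts and kernel sizes are illustrative choices.
from tensorflow.keras import layers, models

conv_autoencoder = models.Sequential([
    # Encoder: convolutions + downsampling compress the image.
    layers.Conv2D(16, (3, 3), activation="relu", padding="same",
                  input_shape=(28, 28, 1)),
    layers.MaxPooling2D((2, 2)),   # 14x14
    layers.Conv2D(8, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),   # 7x7 latent feature maps
    # Decoder: upsampling + convolutions reconstruct the image.
    layers.Conv2D(8, (3, 3), activation="relu", padding="same"),
    layers.UpSampling2D((2, 2)),   # 14x14
    layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
    layers.UpSampling2D((2, 2)),   # 28x28
    layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same"),
])
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
conv_autoencoder.summary()
```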

Next

De-noising of images - 11th Dec. 2020