Denoising Auto-encoder


[Figure: Original Image → Corrupted Image → Denoised Image]

5/1/2020 Yichao Li

Introduction
• Building good predictors on complex domains means learning complicated functions.
• Learning deep architectures is difficult.
• The stacked auto-encoder is a successful approach.

Stacked Auto-encoders
• Greedy layer-wise initialization
• Global fine-tuning

Motivating Question
• While unsupervised learning of a mapping that produces “good” intermediate representations of the input pattern seems to be key, little is understood regarding what constitutes “good” representations for initializing deep architectures, or what explicit criteria may guide learning such representations.


Motivating Question
• What would make a good unsupervised criterion for finding good feature representations?


Motivation
• Our ability to “fill in the blanks” in the sensory input
  – Missing pixels, image from sound, …
• Associative memory
• Good fill-in-the-blanks performance → the distribution is well captured


Hypothesis

What the authors propose: unsupervised initialization by explicit fill-in-the-blanks training.


Denoising Auto-encoder
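In code, the idea is: corrupt the input (here, masking noise that zeroes random components), encode the corrupted version, decode it, and penalize reconstruction error against the clean input. Below is a minimal NumPy sketch with tied weights; the layer sizes, learning rate, squared-error loss, and toy data are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, p):
    # Masking noise: zero each component independently with probability p.
    return x * (rng.random(x.shape) >= p)

d, h = 20, 8                       # visible and hidden sizes (assumed)
W = rng.normal(0, 0.1, (h, d))     # tied weights: decoder uses W transposed
b, c = np.zeros(h), np.zeros(d)

def dae_step(X, p=0.3, lr=0.5):
    global W, b, c
    Xt = corrupt(X, p)
    Y = sigmoid(Xt @ W.T + b)      # encode the CORRUPTED input
    Z = sigmoid(Y @ W + c)         # decode
    err = Z - X                    # loss is measured against the CLEAN input
    dZ = err * Z * (1 - Z)
    dY = (dZ @ W.T) * Y * (1 - Y)
    W -= lr * (dY.T @ Xt + Y.T @ dZ) / len(X)
    b -= lr * dY.mean(axis=0)
    c -= lr * dZ.mean(axis=0)
    return float(np.mean(err ** 2))

# Toy structured data: four binary prototypes with 10% bit flips.
proto = (rng.random((4, d)) < 0.5).astype(float)
X = proto[rng.integers(0, 4, 100)]
X = np.abs(X - (rng.random(X.shape) < 0.1))

losses = [dae_step(X) for _ in range(200)]
```

Reconstruction error falls as the hidden layer learns features that let it restore masked components from the surviving ones.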

Stack Denoising Auto-encoder

[Diagram: First Layer → Second Layer]

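Greedy layer-wise pretraining can be sketched as follows: train the first denoising auto-encoder on the raw input, then train the second on the clean (uncorrupted) first-layer features; each layer applies corruption only to its own input during its own pretraining. Layer sizes, noise level, and the tied-weight parameterization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DAE:
    """One denoising auto-encoder layer with tied weights (illustrative)."""
    def __init__(self, d_in, d_hid):
        self.W = rng.normal(0, 0.1, (d_hid, d_in))
        self.b = np.zeros(d_hid)
        self.c = np.zeros(d_in)

    def encode(self, X):
        return sigmoid(X @ self.W.T + self.b)

    def fit(self, X, p=0.3, lr=0.5, steps=100):
        for _ in range(steps):
            Xt = X * (rng.random(X.shape) >= p)  # masking corruption
            Y = sigmoid(Xt @ self.W.T + self.b)
            Z = sigmoid(Y @ self.W + self.c)
            dZ = (Z - X) * Z * (1 - Z)           # clean input is the target
            dY = (dZ @ self.W.T) * Y * (1 - Y)
            self.W -= lr * (dY.T @ Xt + Y.T @ dZ) / len(X)
            self.b -= lr * dY.mean(axis=0)
            self.c -= lr * dZ.mean(axis=0)
        return self

X = (rng.random((200, 20)) < 0.5).astype(float)
layer1 = DAE(20, 12).fit(X)
H1 = layer1.encode(X)              # CLEAN first-layer features
layer2 = DAE(12, 6).fit(H1)        # second layer trained on those features
H2 = layer2.encode(H1)
```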

Supervised Learning
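After the unsupervised phase, a supervised output layer is placed on top and the whole stack is fine-tuned by backpropagation on labels. A minimal sketch with one encoder layer and a softmax classifier; the weights here are random stand-ins for pretrained ones so the snippet runs on its own, and the toy data, sizes, and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

d, h, k = 10, 6, 3                                 # input, hidden, classes (assumed)
W1, b1 = rng.normal(0, 0.1, (h, d)), np.zeros(h)   # stand-in for a pretrained layer
W2, b2 = rng.normal(0, 0.1, (k, h)), np.zeros(k)   # added supervised output layer

# Toy labelled data: class mean plus Gaussian noise.
means = rng.normal(0, 1, (k, d))
y = rng.integers(0, k, 300)
X = means[y] + rng.normal(0, 0.3, (300, d))
T = np.eye(k)[y]                                   # one-hot targets

lr = 0.5
for _ in range(300):
    H = sigmoid(X @ W1.T + b1)     # encoder, now being fine-tuned
    P = softmax(H @ W2.T + b2)
    dZ = (P - T) / len(X)          # softmax cross-entropy gradient
    dH = (dZ @ W2) * H * (1 - H)
    W2 -= lr * dZ.T @ H
    b2 -= lr * dZ.sum(axis=0)
    W1 -= lr * dH.T @ X            # gradients flow into the encoder too
    b1 -= lr * dH.sum(axis=0)

acc = float(np.mean(P.argmax(axis=1) == y))
```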

Experiments
• Original data: http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepVsShallowComparisonICML2007

Experiments
• How do they calculate the classification error with a 95% confidence interval?
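One standard way to attach a 95% confidence interval to a test-set error rate treats the number of errors as binomial and uses the normal approximation; the slide does not say this is exactly what the authors did, so take it as a plausible reconstruction.

```python
import math

def error_ci95(n_errors, n_test):
    """95% CI for a classification error rate via the normal approximation
    to a binomial proportion: p +/- 1.96 * sqrt(p * (1 - p) / n)."""
    p = n_errors / n_test
    half = 1.96 * math.sqrt(p * (1 - p) / n_test)
    return p, max(0.0, p - half), min(1.0, p + half)
```

For example, 280 errors on a 10,000-example test set gives an error of 2.8% with a half-width of about 0.32%.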


Observed phenomenon

Learned Features

Learned Features
• Did the authors compare SdA with a stacked sparse auto-encoder?
• Sparsity? Denoising?

Relationship to Other Approaches
• Image denoising algorithms
• Data augmentation (rotation, translation, scaling)
• Training with noise & regularization
• Robust coding over noisy channels
• Trying to learn missing values? No

Manifold Learning Perspective

Future Work
• Investigate other types of corruption processes for the representation itself. How?
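To make the question concrete, here are three corruption processes one could try: the masking noise used in the paper, plus salt-and-pepper and additive Gaussian noise as candidate alternatives. The parameter choices are illustrative.

```python
import numpy as np

def masking_noise(x, p, rng):
    # Zero each component independently with probability p
    # (the corruption process used in the paper).
    return x * (rng.random(x.shape) >= p)

def salt_and_pepper(x, p, rng):
    # Set a random fraction p of components to the minimum (0) or maximum (1).
    out = x.copy()
    mask = rng.random(x.shape) < p
    out[mask] = (rng.random(int(mask.sum())) < 0.5).astype(x.dtype)
    return out

def gaussian_noise(x, sigma, rng):
    # Additive isotropic Gaussian noise.
    return x + rng.normal(0.0, sigma, x.shape)
```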


Questions
• Can you think of other criteria regarding what constitutes “good” representations?


[Figure: 2-mer representations of related vs. random DNA sequences as input, and the intermediate features learned from them]
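The slide's 2-mer encoding is not spelled out in the transcript; a plausible version counts overlapping dinucleotides and normalizes them into a 16-dimensional frequency vector, which could then serve as the network input.

```python
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=2)]  # the 16 dinucleotides

def two_mer_counts(seq):
    """Count overlapping dinucleotides (2-mers) in a DNA string."""
    counts = dict.fromkeys(KMERS, 0)
    for i in range(len(seq) - 1):
        pair = seq[i:i + 2]
        if pair in counts:
            counts[pair] += 1
    return counts

def two_mer_vector(seq):
    """Normalized 16-dimensional 2-mer frequency vector."""
    counts = two_mer_counts(seq)
    total = max(sum(counts.values()), 1)
    return [counts[k] / total for k in KMERS]
```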

References
• http://www.cs.nyu.edu/~ranzato/research/projects.html
• 4th CIFAR Summer School on Learning and Vision in Biology and Engineering, Toronto, August 5–9, 2008.
• P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and Composing Robust Features with Denoising Autoencoders. In Proceedings of the Twenty-Fifth International Conference on Machine Learning (ICML ’08), pages 1096–1103. ACM, 2008.


• Questions?
