Embeddings ... and a practical case
Section outline
-
We concluded with a brief introduction to deep neural networks as another form of projection, usually called embedding. This method is used for large homogeneous data in which the variables represent the same measurement at different coordinates, times or similar. Typical examples are texts (variables are letters), sounds (variables are amplitudes) and, most notably, images (variables are pixel values). A trained neural network takes such data and computes a smaller number of features, for instance 2000, that describe an object (e.g. an image) in a way that is not interpretable by itself but characterizes the object well. For fun, we took photos from Moscow, computed an embedding for each, measured the distances between images and fed them into hierarchical clustering, which successfully distinguished between themes like parks, buildings (further split into churches and non-churches), statues and so on.
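A minimal sketch of this workflow, assuming a pretrained ResNet-50 from torchvision as the embedding network (the lecture does not name the model used) and a hypothetical folder moscow_photos/ of JPEG images: the classification head is replaced with an identity layer so the network outputs a 2048-dimensional feature vector per image, and SciPy then computes pairwise distances and the hierarchical clustering.

```python
# Sketch: image embeddings with a pretrained CNN, then hierarchical clustering.
# Assumptions: ResNet-50 as the embedding model, cosine distance, and the
# hypothetical folder "moscow_photos/" -- none of these are specified above.
import glob
import os

import torch
from torchvision import models, transforms
from PIL import Image
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
import matplotlib.pyplot as plt

# Pretrained network with the final classification layer removed, so it
# returns a 2048-dimensional embedding instead of class scores.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing expected by the pretrained weights.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

paths = sorted(glob.glob("moscow_photos/*.jpg"))  # hypothetical image folder
with torch.no_grad():
    embeddings = torch.stack([
        model(preprocess(Image.open(p).convert("RGB")).unsqueeze(0)).squeeze(0)
        for p in paths
    ]).numpy()

# Pairwise distances between embeddings, then average-linkage clustering.
distances = pdist(embeddings, metric="cosine")
tree = linkage(distances, method="average")

# The dendrogram is where themes (parks, churches, statues, ...) show up
# as branches, as described above.
dendrogram(tree, labels=[os.path.basename(p) for p in paths])
plt.tight_layout()
plt.show()
```

The choice of distance and linkage is free here; Euclidean distance or Ward linkage would work the same way, only the resulting tree may group the photos somewhat differently.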
For the second part of our final lecture, we showed a practical analysis of Human Development Index data, in which we applied various techniques covered in this course.
-