Topic Name Description
Page Much Further Reading

Every section below contains a few papers (or even Wikipedia pages) that are easy to read, without much math. Some students have asked more advanced questions (e.g. the relation between Bayesian modeling and logistic regression), so here is a list of some more advanced books.

Exam File Data for the third part
Visualizations (and Getting to Know Orange) Page Exercise (visualizations)
File Mushrooms
Page Exercise (insignificance of significance)
Hyperlink Orange and basic visualizations
Hyperlink Mosaic and Sieve diagram
(If the video doesn't play: it happens to me, too. There seems to be something wrong on YT's side. I hope it resolves itself; otherwise I'll reupload.)
Hyperlink Task solutions
Hyperlink Arguments against testing of null hypotheses
Hyperlink Of Carrots, Horses and the Fear of Heights
A blog post about how statistical tests work and why we have to be very careful when using them in data mining.
Hyperlink How to Abuse p-values in Correlations
A shorter and drier version of the same.
File Cohen (1994): The Earth is Round (p < 0.05)
A famous, juicy paper with general arguments against null-hypothesis testing.
Introduction to predictive modelling Page Surviving on mushrooms
Page Recognizing types of animals
File Animals
Page Exploring Human Development Index
File Human development index (+ religions + continents)
Hyperlink Classification trees
Hyperlink Decision tree learning (Wikipedia) [mandatory read, but see remark]

The page contains the crux of the lecture. Its title, and the fact that the first link on the page points to a completely unrelated kind of decision trees, demonstrate why classification tree is a better term than decision tree. [Mandatory reading, with a grain of salt; you most certainly don't need to know that "It is also possible for a tree to be sampled using MCMC." :) ]

Hyperlink Information Gain in Decision Trees (Wikipedia) [optional reading]

We have spent a lot of time explaining the concept of entropy. The Wikipedia page is rather mathematical and dry, but it may be a good antidote to the less formal and less organized exposition at the lecture. :)
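If it helps to see the definitions in action before reading the Wikipedia page, here is a minimal sketch (plain Python, with made-up toy data) of how entropy and information gain are computed for a candidate split:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(labels, groups):
    """Entropy of the parent minus the weighted entropy of the split groups."""
    n = len(labels)
    return entropy(labels) - sum(len(g) / n * entropy(g) for g in groups)

# Toy example: 4 edible (e) and 4 poisonous (p) mushrooms,
# split by some attribute into two groups.
parent = ["e", "e", "e", "e", "p", "p", "p", "p"]
split = [["e", "e", "e", "p"], ["e", "p", "p", "p"]]

print(entropy(parent))                  # 1.0 bit (perfectly mixed classes)
print(information_gain(parent, split))  # ≈ 0.19 bits gained by the split
```

A tree-induction algorithm simply picks, at each node, the attribute whose split yields the highest gain.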

Hyperlink Induction of Decision Trees (Quinlan, 1986) [optional reading]

Quinlan is the author of one of the first and most influential algorithms for induction of classification trees. The article is mostly of historical interest, but it shows the thinking of the pioneers of AI. After some philosophy in the first two sections, it explains the reasoning behind tree-induction algorithms.

Model Performance Page Scores for evaluation of models
File mushroom-predictions
File Sara's Hamsters
File Sara's Hamsters - solution
Page Cross validation
Hyperlink Scores for evaluation of model performance
Hyperlink List of performance scores (Wikipedia)
Lists all kinds of scores; useful as a reference.
Hyperlink Cross validation (Wikipedia) [optional]

Use this page as a list of different sampling techniques.

Hyperlink An introduction to ROC analysis (Fawcett, 2006) [mandatory: first seven sections]

A very accessible paper about ROC curves.
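If you want to play with the idea before reading the paper, here is a minimal sketch (plain Python, made-up scores) of how ROC points arise by lowering the decision threshold over classifier scores, one instance at a time. (This simple version assumes all scores are distinct; the paper explains how to handle ties.)

```python
def roc_points(labels, scores):
    """Return (FPR, TPR) points obtained by lowering the threshold
    over the sorted scores, one instance at a time."""
    pos = sum(labels)
    neg = len(labels) - pos
    # Rank instances by decreasing classifier score.
    ranked = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in ranked:
        if label:
            tp += 1   # a true positive moves the curve up
        else:
            fp += 1   # a false positive moves it to the right
        points.append((fp / neg, tp / pos))
    return points

# Made-up scores for 3 positive and 3 negative instances.
labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]
print(roc_points(labels, scores))
```

The curve always starts at (0, 0) (classify everything as negative) and ends at (1, 1) (classify everything as positive).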

Hyperlink A Unified View of Performance Metrics: Translating Threshold Choice into Expected Classification Loss [just the Introduction; optional]
Linear models for classification Page Recognizing mushrooms - again
File Mushrooms (numeric)
Page Decision boundaries
Hyperlink Linear models for classification
Hyperlink Logistic regression (Shalizi, 2012) [mandatory, see below which parts]

A more mathematical (compared to our lecture), but still friendly explanation of logistic regression. Read the first six pages, that is, Section 12.1 and the complete Section 12.2. You don't have to know the formulas, but you need to understand their meaning.

(This is Chapter 12 from Advanced Data Analysis from an Elementary Point of View. Download the draft from the author's site while it's free.)
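To see the meaning of the formulas in code, here is a minimal sketch (plain Python, with made-up coefficients) of how a fitted logistic regression model turns a linear combination of features into a class probability:

```python
from math import exp

def predict_proba(x, weights, bias):
    """P(y = 1 | x) = 1 / (1 + exp(-(b + w·x))): the logistic (sigmoid)
    function applied to a linear combination of the features."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1 / (1 + exp(-z))

# Made-up coefficients for a model with two features.
weights = [1.5, -2.0]
bias = 0.3

print(predict_proba([0.0, 0.0], weights, bias))  # sigmoid(0.3) ≈ 0.574
print(predict_proba([2.0, 1.0], weights, bias))  # higher z, higher probability
```

The decision boundary is where the probability equals 0.5, i.e. where the linear part z is zero; this is why logistic regression is a linear classifier.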

File Nomograms for Visualization of Naive Bayesian Classifier (Možina, 2004) [mandatory, you may skip Section 2]

A quick derivation of the Naive Bayesian classifier, and a derivation and explanation of nomograms.
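As a rough companion to the paper, here is a minimal sketch (plain Python, made-up probabilities) of the log-odds form of the Naive Bayesian classifier that a nomogram visualizes: each observed attribute value contributes an additive term to the total log odds, which is what lets the nomogram draw each attribute on its own axis.

```python
from math import log, exp

def naive_bayes_log_odds(prior_pos, likelihoods):
    """Total log odds of the positive class: log prior odds plus one
    additive contribution log P(value|+)/P(value|-) per attribute value."""
    log_odds = log(prior_pos / (1 - prior_pos))
    for p_pos, p_neg in likelihoods:
        log_odds += log(p_pos / p_neg)
    return log_odds

def to_probability(log_odds):
    """Convert log odds back to a probability."""
    return 1 / (1 + exp(-log_odds))

# Made-up example: prior P(+) = 0.5 and two observed attribute values,
# one speaking for the positive class and one against it.
lo = naive_bayes_log_odds(0.5, [(0.8, 0.4), (0.3, 0.6)])
print(to_probability(lo))  # 0.5: the two contributions exactly cancel
```

Summing point scores along a nomogram's axes is exactly this addition of log-odds terms.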

Page Nomograms for Linear Models
Other types of classifiers Page Exploration of Kernel Methods
Hyperlink Other models
Hyperlink A nice explanation of the kernel trick
Hyperlink Kernel Methods for Pattern Analysis (Shawe-Taylor, Cristianini, 2004) [optional, beyond this course]

The best-known book about kernel methods such as SVM. Warning: lots of mathematics. Not required reading for this class.

Hyperlink Random Forests (Breiman, 2001) [optional]

Unlike SVMs, random forests are so easy to explain and understand that they don't require additional literature. But if anybody is interested, here is the definitive paper about them.

File The Random Subspace Method for Constructing Decision Forests (Ho, 1998) [optional]

... and this is the paper by the lesser-known inventor of the method. It was Breiman (the paper above), though, who thoroughly examined the method and gave it its name. Neither this paper nor the one above is required reading.

Regularization Page Regularization Experiment
Hyperlink Regularization
Hyperlink Elements of Statistical Learning [optional, way beyond this course]

We are telling you about this book just because we must do it at some point. It is too difficult for this course, but it provides an overview of machine learning methods from a statistical point of view. Chapters 4 and 5 should not be too difficult, and you can read them to better understand linear models and regularization.

You can download the book for free.

Clustering Page Clustering versus Classification
Page Exploration of linkage functions
File Data sets for clustering
Page Exploration of Dendrograms
Page Exploration of Clusters
Hyperlink Clustering (part 1: k-means and hierarchical clustering)
Hyperlink Clustering (part 2: linkages, distances)
Hyperlink Introduction to Data Mining, Chapter 8: Cluster Analysis: Basic Concepts and Algorithms (Tan P-N, Kumar, 2006)

Obligatory reading: Sections 8.2 (you may skip 8.2.6) and 8.3 (skip 8.3.3), and The Silhouette Coefficient (p. 541). Everything else is also quite easy to read, so we recommend it.
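To accompany the k-means part of the chapter, here is a minimal sketch (plain Python, toy one-dimensional data) of the basic k-means loop: assign each point to its nearest centroid, then move each centroid to the mean of its points, and repeat.

```python
def kmeans(points, centroids, iterations=10):
    """Basic k-means on 1-D points: alternate assignment and update steps."""
    for _ in range(iterations):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster
        # (keep it in place if the cluster happens to be empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Toy data: two obvious groups, around 1 and around 10.
points = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
centroids, clusters = kmeans(points, centroids=[0.0, 5.0])
print(centroids)  # ≈ [1.0, 10.0]
```

Real implementations work in many dimensions and restart from several random initializations, since the result depends on the starting centroids; the chapter discusses both points.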

Text mining Page Fake news
Hyperlink Text Mining
Hyperlink Text Mining - In-class assignment
Hyperlink Allahyari et al. - A Brief Survey of Text Mining

A comprehensive overview of text mining techniques and algorithms. [obligatory]

Hyperlink Bird and Klein: Regular Expressions for Natural Language Processing
Why regular expressions can be very helpful. [optional read]
Hyperlink Ramos: Using TF-IDF to Determine Word Relevance in Document Queries

Why using TF-IDF is a good idea. [technical, interesting read]
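To make the idea concrete, here is a minimal sketch (plain Python, toy documents) of the standard TF-IDF weighting: a term's frequency within a document, multiplied by the logarithm of its inverse document frequency across the corpus, so that words common to all documents are downweighted.

```python
from math import log

def tf_idf(term, doc, corpus):
    """TF-IDF of a term in one document of a tokenized corpus.
    (Assumes the term occurs somewhere in the corpus.)"""
    tf = doc.count(term) / len(doc)               # term frequency
    df = sum(1 for d in corpus if term in d)      # document frequency
    idf = log(len(corpus) / df)                   # inverse document frequency
    return tf * idf

# Toy corpus of three tokenized "documents".
corpus = [
    ["fake", "news", "spreads", "fast"],
    ["news", "about", "mushrooms"],
    ["mushrooms", "are", "edible"],
]

# "fake" appears in one document only, so it scores higher
# than "news", which appears in two.
print(tf_idf("fake", corpus[0], corpus))  # ≈ 0.27
print(tf_idf("news", corpus[0], corpus))  # ≈ 0.10
```

Libraries add smoothing and normalization on top of this, but the core intuition is exactly this product.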

Hyperlink An opinion word lexicon and a training dataset for Russian sentiment analysis of social media
File Text Mining course notes

Notes from the text mining lecture.

Hyperlink Liu, Bing: Sentiment Analysis and Opinion Mining

A book on sentiment analysis and opinion mining. Freely available at the link.

Projections and embeddings Hyperlink Projections
Hyperlink Deep learning and images
File Distances
Hyperlink Analysis of Multivariate Social Science Data, Chapter 3: Multidimensional Scaling (Bartholomew, 2008) [recommended]

The chapter is particularly interesting because of some nice examples at the end.

Hyperlink FreeViz—An intelligent multivariate visualization approach to explorative analysis of biomedical data (Demšar, 2007) [optional]

See the example in the introduction. You can also read the Methods section, if you're curious.

Embeddings ... and a practical case File animals and fruits
Assignments Page Assignment: ROC Curve
Page Assignment: Regression
Page Assignment: Classifiers and their Decision Boundaries
Page Solution: Classification boundaries
Page Solution: ROC curve
Page Solution: Regression