The course will include selected advanced topics in motion perception using computer vision. Concrete topics will change each year according to trends in this fast-developing field in computer science and industry. Potential topics include:
1.    Overview of the field of motion estimation and its applications.
2.    Optical flow estimation using least-squares.
3.    Variational optical flow estimation.
4.    Parametric template tracking using Lucas-Kanade.
5.    Histogram-based tracking using Mean Shift.
6.    Tracking by detection and discriminative trackers.
7.    Recursive Bayes filter for online state estimation. 
8.    Tracking by Kalman filter.
9.    Tracking by particle filters.
10.    Tracking deformable objects by constellation models.
11.    Methodologies of tracker comparison.
12.    Long-term tracking by detection.
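The least-squares optical flow estimation of topics 2 and 4 can be illustrated compactly. The sketch below (function name and toy data are illustrative, not course material) assumes brightness constancy, so each pixel in a window contributes one linear constraint Ix·u + Iy·v + It = 0, and the window's displacement (u, v) is the least-squares solution:

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Estimate a single (u, v) displacement for one image window by
    least squares, from spatial gradients Ix, Iy and temporal gradient It."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2 gradient matrix
    b = -It.ravel()                                 # N right-hand sides
    # Solve the normal equations (A^T A) [u, v]^T = A^T b via lstsq
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # array [u, v]

# Toy window: a pure one-pixel horizontal shift gives It = -Ix to first
# order, so the recovered flow should be u = 1, v = 0.
rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -Ix
u, v = lucas_kanade_flow(Ix, Iy, It)
```

Aggregating such per-window solutions over the whole image gives a dense flow field; the variational methods of topic 3 instead add a smoothness term and solve a global optimization.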

Coursework consists of 4 mandatory two-week assignments (to be completed individually, with consultations with the teaching assistant and professor), starting within the first weeks of the course, and a larger assignment to be completed by the end of the semester. In addition, 4 non-mandatory homework assignments will be given (~1 h of work each). More information is available in the e-classroom.

The course is ranked in the upper quarter by difficulty (as estimated by students) among all courses at FRI. It requires a lot of work, but you will learn a lot as well.

The syllabus is based on a selection of modern statistical natural language processing techniques and their practical use. The lectures introduce the main tasks and techniques and explain their operation and theoretical background. During practical sessions and seminars, the gained knowledge is applied to practical language tasks using open-source tools. Students investigate and solve assignments based on real-world problems from the English and Slovene languages.

1.    Introduction to natural language processing: motivation, language understanding, Turing test, traditional and statistical approaches.
2.    Language resources: corpora, dictionaries, thesauri, networks and semantic databases, overview of tools.
3.    Linguistics: phonology and morphology, syntactical analysis, formal grammars.
4.    Using automata and grammars: automata and algorithms for searching strings, syntax parsing, dependency parsing.
5.    Part-of-speech tagging: types of tags, lemmatization, n-grams, hidden Markov models, rule-based tagging.
6.    Computational and lexical semantics: semantic representations, rule-to-rule approaches, semantic role labelling.
7.    Clustering words and text similarity measures: cosine distance, language networks and graphs, WordNet, vector representations, vector weighting, semantic correlation.
8.    Text mining: adaptation of classification methods to the specifics of text, support vector machines for language, feature selection.
9.    Deep networks for text: document representations for deep neural networks, autoencoders, recurrent neural networks.
10.    Text summarization: text representations, matrix factorization, multi-document summarization, extractive methods, query based methods.
11.    Machine translation: language model, translation model, alignment model, challenges in machine translation.
12.    Augmenting text with other data sources: heterogeneous networks, word2vec representation, heterogeneous ensembles of classifiers, link analysis.
13.    Methodology and evaluation in NLP.
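As an illustration of topic 7, the cosine similarity between two documents represented as bag-of-words term-frequency vectors fits in a few lines. This is only a sketch (the function name and example sentences are illustrative); real pipelines would add tokenization, stop-word removal, and weighting such as TF-IDF:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts as term-frequency vectors:
    dot(a, b) / (|a| * |b|), over whitespace-split lowercase tokens."""
    va = Counter(doc_a.lower().split())
    vb = Counter(doc_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)           # shared-term products
    na = math.sqrt(sum(c * c for c in va.values()))  # |a|
    nb = math.sqrt(sum(c * c for c in vb.values()))  # |b|
    return dot / (na * nb) if na and nb else 0.0

sim = cosine_similarity("the cat sat on the mat",
                        "the cat lay on the rug")
```

The same measure applies unchanged to the dense vector representations of topics 9 and 12; only the way the vectors are built differs.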

Students have to solve five web quizzes and are graded on three assignments (to be submitted in April, May, and June).

The course introduces techniques and procedures for the analysis of biomedical signals and images, such as cardiology signals (electrocardiogram, ECG), neurophysiology signals (electromyogram, EMG; electroencephalogram, EEG), and medical images (computed tomography, CT), with an emphasis on problems in biomedical research. We will see how to automatically, non-invasively, and in a timely manner detect heart beats in 24-hour electrocardiogram recordings, classify them, and detect transient ischaemic disease, one of the most dangerous heart conditions; if it is not discovered in time, it may lead to myocardial infarction. We will see how, using non-linear signal processing techniques, we can analyze electromyograms recorded from the abdomen of pregnant women early in pregnancy (23rd week) to estimate, or try to predict, the danger of pre-term birth. We will study techniques for analyzing electroencephalographic signals, recorded from a person's head, with the aim of human-computer interaction without classic input devices, as well as techniques for the analysis of three-dimensional tomographic images with the aim of extracting and visualizing the anatomic structures of human body organs.

The topics cover: international standardized databases of signal samples (MIT/BIH DB, LTST DB, TPEHG DB, EEGMMI DS, BCI DB, CTIMG DB); techniques for feature extraction from signals and images (band-pass filters, morphological algorithms, principal components, Karhunen-Loeve transform, sample entropy, contour extraction); noise removal; techniques for visualization of diagnostic and morphology feature-vector time series and of anatomic structures; analysis of feature-vector time series; spectral analysis; modelling; event detection; clustering; classification; and metrics, techniques, and protocols to evaluate the performance and robustness of biomedical computer systems.
Coursework consists of three seminars, quiz during lectures, and final exam.

The course relies mostly on computer vision, as most biometric technologies are based on it. It is intended for students interested in cutting-edge technology, much of which is still at the research stage. The main content (which will evolve with developments in the field):

1.    Basics of biometrics
2.    Biometric modalities
3.    Structure of a typical biometric system
4.    Recognition/verification/identification
5.    Metrics
6.    Conditions for correct comparisons of the systems (databases, frameworks)
7.    Performance and usefulness of the systems
8.    Computer vision as the foundation of biometric systems

9.    Fingerprint
a.    Acquisition
b.    Quality assessment and quality improvement
c.    Processing
d.    Singular points, minutiae, ridges
e.    Matching

10.    Iris
a.    Acquisition
b.    Quality improvement
c.    Processing (segmentation, normalization, coding)
d.    Feature points
e.    Matching

11.    Face
a.    Acquisition
b.    Sub-modalities
c.    Processing
d.    Feature points (appearance-/model-/texture-based approaches)
e.    Matching

12.    Gait
a.    Acquisition
b.    Influence of dynamics
c.    Processing (appearance-/model-based approaches)
d.    Dynamic feature points
e.    Matching

13.    Ear
a.    Acquisition
b.    Processing
c.    Feature points
d.    Matching

14.    Multi-biometric systems / multi-modality / fusions
15.    Key problems of modalities/systems (research challenges)
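The metrics and comparison conditions of items 5–7 revolve around the false accept rate (FAR), false reject rate (FRR), and their crossover, the equal error rate (EER), computed from genuine (same-identity) and impostor (different-identity) matching scores. A minimal sketch, with illustrative function names and toy scores, assuming higher scores mean a better match:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """Error rates at one decision threshold: FAR is the fraction of
    impostor scores wrongly accepted, FRR the fraction of genuine
    scores wrongly rejected."""
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.mean(impostor >= threshold)
    frr = np.mean(genuine < threshold)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep all observed scores as thresholds and
    take the point where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    rates = [far_frr(genuine, impostor, t) for t in thresholds]
    far, frr = min(rates, key=lambda p: abs(p[0] - p[1]))
    return (far + frr) / 2

genuine = [0.90, 0.85, 0.80, 0.70]   # same-identity comparison scores
impostor = [0.10, 0.20, 0.30, 0.40]  # different-identity scores
eer = equal_error_rate(genuine, impostor)
far, frr = far_frr(genuine, impostor, 0.5)
```

Here the two score distributions are perfectly separated, so the EER is zero; on real databases the distributions overlap, and plotting FAR against FRR over all thresholds yields the DET/ROC curves used for fair comparison of systems.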

The lectures introduce the approaches and explain their operation. During tutorials, the knowledge is applied to practical problems in Matlab and open-source tools.

Student work includes three assignments and a final exam. The deadlines for the three assignments fall approximately in early-to-mid November, early-to-mid December, and early-to-mid January.