The course covers selected advanced topics in motion perception using computer vision. Concrete topics will change each year according to trends in this fast-developing field in computer science and industry. Potential topics include:
1. Overview of the field of motion estimation and its applications.
2. Optical flow estimation using least-squares.
3. Variational optical flow estimation.
4. Parametric template tracking using Lucas-Kanade.
5. Histogram-based tracking using Mean Shift.
6. Tracking by detection and discriminative trackers.
7. Recursive Bayes filter for online state estimation.
8. Tracking by Kalman filter.
9. Tracking by particle filters.
10. Tracking deformable objects by constellation models.
11. Methodologies of tracker comparison.
12. Long-term tracking by detection.
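To give a flavour of topics 2 and 4, the core of Lucas-Kanade flow estimation can be sketched as a least-squares fit over one image window; this is an illustrative simplification (single window, gradients assumed given), and the function name is ours, not course material:

```python
import numpy as np

def lucas_kanade_flow(Ix, Iy, It):
    """Estimate a single flow vector (u, v) for one window by least
    squares on the brightness-constancy constraints
    Ix*u + Iy*v + It = 0 (one equation per pixel)."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # N x 2 gradient matrix
    b = -It.ravel()                                 # N temporal gradients
    v, *_ = np.linalg.lstsq(A, b, rcond=None)       # solves A^T A v = A^T b
    return v                                        # array [u, v]
```

In the full Lucas-Kanade method this fit is repeated per window (or iterated for template tracking); the sketch shows only the least-squares core.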
Coursework consists of four mandatory two-week assignments (completed individually, with consultations with the assistant and the professor), starting within the first weeks of the course, and a larger assignment to be completed by the end of the semester. In addition, four non-mandatory homework assignments will be given (~1 hour of work each). More information is available in the e-classroom.
The course is ranked in the upper quarter by difficulty (as estimated by students) among all courses at FRI. It is a lot of work, but you will learn a lot as well.
- Lecturer: Matej Kristan
1. Introduction to natural language processing: motivation, language understanding, Turing test, traditional and statistical approaches.
2. Language resources: corpora, dictionaries, thesauri, networks and semantic databases, overview of tools.
3. Linguistics: phonology and morphology, syntactical analysis, formal grammars.
4. Using automata and grammars: automata and algorithms for searching strings, syntax parsing, dependency parsing.
5. Part-of-speech tagging: types of tags, lemmatization, n-grams, Hidden Markov models, rule-based tagging.
6. Computational and lexical semantics: semantic representations, rule-to-rule approaches, semantic role labelling.
7. Clustering words and text similarity measures: cosine distance, language networks and graphs, WordNet, vector representation, vector weighting, semantic correlation.
8. Text mining: adaptation of classification methods to the specifics of text, support vector machines for language, feature selection.
9. Deep networks for text: document representations for deep neural networks, autoencoders, recurrent neural networks.
10. Text summarization: text representations, matrix factorization, multi-document summarization, extractive methods, query based methods.
11. Machine translation: language model, translation model, alignment model, challenges in machine translation.
12. Augmenting text with other data sources: heterogeneous networks, word2vec representation, heterogeneous ensembles of classifiers, link analysis.
13. Methodology and evaluation in natural language processing.
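The cosine similarity measure from topic 7 can be sketched over simple bag-of-words term-frequency vectors; the function name and whitespace tokenization are illustrative assumptions:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts represented as
    bag-of-words term-frequency vectors."""
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)              # shared-term products
    na = math.sqrt(sum(c * c for c in a.values())) # vector norms
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Real systems would add lemmatization and vector weighting (e.g. tf-idf), both listed among the course topics.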
- Lecturer: Marko Robnik Šikonja
The course introduces techniques and procedures for the analysis of biomedical signals and images, such as cardiology signals (electrocardiogram, ECG), neurophysiology signals (electromyogram, EMG; electroencephalogram, EEG), and medical images (computed tomography, CT), with an emphasis on problems of biomedical research. We will see how heart beats in 24-hour electrocardiogram recordings can be detected and classified automatically, non-invasively, and in a timely manner, and how transient ischaemic disease can be detected; this is one of the most dangerous heart diseases and, if not discovered in time, may lead to heart infarction. We will see how non-linear signal-processing techniques can be used to analyze electromyograms, recorded from the abdomen of a pregnant woman early in pregnancy (23rd week), to estimate, or try to predict, the danger of pre-term birth. We will study techniques for analyzing electroencephalographic signals, recorded from a person's head, with the aim of human-computer interaction without classic input devices. We will also study techniques for the analysis of three-dimensional tomographic images with the aim of extracting and visualizing anatomic structures of human body organs. The topics cover: international standardized databases of signal samples (MIT/BIH DB, LTST DB, TPEHG DB, EEGMMI DS, BCI DB, CTIMG DB); techniques of feature extraction from signals and images (band-pass filters, morphological algorithms, principal components, Karhunen-Loeve transform, sample entropy, contour extraction); noise removal; techniques for visualizing diagnostic and morphology feature-vector time series and anatomic structures; analysis of feature-vector time series; spectral analysis; modelling; event detection; clustering; classification; as well as metrics, techniques, and protocols for evaluating the performance and robustness of biomedical computer systems.
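Sample entropy, one of the feature-extraction techniques listed above, measures the irregularity of a signal; a minimal (and deliberately naive, O(N^2)) sketch, with our own function name and default parameters, is:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D signal: -ln(A/B), where
    B counts pairs of length-m templates within tolerance r (Chebyshev
    distance, r scaled by the signal's std) and A counts the same for
    length m+1; self-matches are excluded."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        n = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if np.max(np.abs(templates[i] - templates[j])) <= tol:
                    n += 1
        return n

    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A and B else float("inf")
```

A regular signal (e.g. a sine wave) yields a low value, while noise yields a high one, which is why the measure is useful for characterizing physiological recordings such as the uterine EMG mentioned above.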
Coursework consists of three seminars, quizzes during lectures, and a final exam.
- Lecturer: Franc Jager
The course relies mostly on computer vision, since most biometric technologies are based on it. It is intended for students interested in cutting-edge technology, much of which is still at the research stage. The main content (it will evolve with developments in the field):
1. Biometry basics
2. Biometrical modalities
3. Structure of a typical biometric system
6. Conditions for correct comparisons of the systems (databases, frameworks)
7. Performance and usefulness of the systems
8. Computer vision as the foundation of biometric systems
b. Quality assessment and quality improvement
d. Singular points, minutiae, ridges
b. Quality improvement
c. Processing (segmentation, normalization, coding)
d. Feature points
d. Feature points (appearance/
b. Influence of dynamics
c. Processing (appearance/
d. Dynamic feature points
c. Feature points
14. Multi-biometric systems / multi-modality / fusions
15. Key problems of modalities/systems (research challenges)
The lectures introduce the approaches and explain their operation. In the tutorials, this knowledge is applied to practical problems in Matlab and open-source tools.
Student work includes three assignments and a final exam. The deadlines for the three assignments fall approximately in early-to-mid November, early-to-mid December, and early-to-mid January.
- Lecturer: Peter Peer