Parallel and distributed computing: need for parallelization
Modern parallel architectures: shared-memory systems, distributed-memory systems, graphics processing units (GPUs), modern coprocessors, FPGA circuits, heterogeneous systems
Parallel languages and programming environments: OpenMP, MPI (e.g., the Open MPI implementation), OpenCL
Parallel algorithms, analysis and programming: data and functional parallelism, pipelining, programming strategies, patterns, concepts and examples, speedup analysis, scalability
Implementation of typical scientific algorithms on the architectures listed above; choosing the right hardware architecture for a given algorithm
Parallel performance: load balancing, scheduling, communication overhead, cache effects, spatial and temporal locality, energy efficiency
Using national high-performance computing infrastructure: access, computational power, working with data storage, environment setup, large-scale simulations
Advanced topics: exascale computing, FPGA programming, the impact of data representation on speedup