1. Parallel and distributed computing: need for parallelization
  2. Modern parallel architectures: shared-memory systems, distributed-memory systems, graphics processing units (GPUs), modern coprocessors, FPGAs, heterogeneous systems
  3. Parallel languages and programming environments: OpenMP, MPI, OpenCL (a minimal OpenMP sketch follows this list)
  4. Parallel algorithms, analysis, and programming: data and functional parallelism, pipelining, programming strategies, patterns, concepts and examples, speedup analysis, scalability (a worked speedup formula follows this list)
  5. Implementation of typical scientific algorithms on the architectures above; choosing the right hardware architecture for a given algorithm
  6. Parallel performance: load balancing, scheduling, communication overhead, cache effects, spatial and temporal locality, energy efficiency (a scheduling sketch follows this list)
  7. Using the national high-performance computing infrastructure: access, compute resources, working with data storage, environment setup, large-scale simulations
  8. Advanced topics: exascale computing, FPGA programming, the impact of data representation on speedup
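
For item 3, a minimal sketch of data parallelism with OpenMP in C: a dot product whose iteration space is split across threads and combined with a reduction. The problem size and the build command in the comment are illustrative assumptions (a GCC-style toolchain).

```c
/* Item 3, data parallelism with OpenMP: dot product with a reduction.
   Build (GCC-style toolchain assumed):  gcc -O2 -fopenmp dot.c -o dot */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void) {
    const int n = 1 << 20;                 /* ~1M elements, illustrative size */
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    if (!a || !b) return 1;
    for (int i = 0; i < n; i++) { a[i] = 1.0; b[i] = 2.0; }

    double sum = 0.0;
    double t0 = omp_get_wtime();
    /* The iteration space is split across threads; partial sums are
       combined by the reduction clause. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    double t1 = omp_get_wtime();

    printf("dot = %.1f  threads = %d  time = %.4f s\n",
           sum, omp_get_max_threads(), t1 - t0);
    free(a);
    free(b);
    return 0;
}
```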
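For the speedup analysis in item 4, Amdahl's law bounds the speedup of a program in which a fraction p of the work can be parallelized across N processors:

```latex
S(N) = \frac{1}{(1 - p) + \dfrac{p}{N}}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{1 - p}
```

With p = 0.95, for example, the speedup is capped at 20 no matter how many processors are added; Gustafson's law (scaled speedup) relaxes this bound by letting the problem size grow with the number of processors.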
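For the load-balancing and scheduling topics in item 6, a sketch comparing OpenMP loop schedules on a loop with deliberately uneven per-iteration cost: `schedule(static)` partitions iterations into fixed blocks, while `schedule(dynamic, 16)` hands out chunks at run time and typically balances the work better here. The loop body, chunk size, and problem size are illustrative assumptions.

```c
/* Item 6, load balancing via OpenMP loop scheduling.
   Build (GCC-style toolchain assumed):  gcc -O2 -fopenmp schedule.c -lm -o schedule */
#include <math.h>
#include <stdio.h>
#include <omp.h>

/* Work whose cost grows with i, so a static block partition is unbalanced. */
static double work(int i) {
    double x = 0.0;
    for (int k = 0; k < i; k++)
        x += sin((double)k);
    return x;
}

int main(void) {
    const int n = 4000;
    double sum, t;

    sum = 0.0;
    t = omp_get_wtime();
    /* Static schedule: fixed blocks, threads with large i do most of the work. */
    #pragma omp parallel for schedule(static) reduction(+:sum)
    for (int i = 0; i < n; i++) sum += work(i);
    printf("static : %.4f s (sum = %.1f)\n", omp_get_wtime() - t, sum);

    sum = 0.0;
    t = omp_get_wtime();
    /* Dynamic schedule: chunks of 16 iterations handed out as threads finish. */
    #pragma omp parallel for schedule(dynamic, 16) reduction(+:sum)
    for (int i = 0; i < n; i++) sum += work(i);
    printf("dynamic: %.4f s (sum = %.1f)\n", omp_get_wtime() - t, sum);
    return 0;
}
```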