Approximate Arithmetic for Media Processing and (C)NNs
The need to support various signal processing, media processing and recognition applications on energy-constrained mobile computing devices has steadily grown. In recent years, there has been growing interest in hardware neural networks, which offer many advantages over conventional software implementations, particularly in applications where speed, cost, reliability, or energy efficiency are of great importance.
The standard hardware implementations of these algorithms and (convolutional) neural networks require many resource-, power- and time-consuming arithmetic (mainly multiplication) operations; the goal is therefore to reduce the size and power consumption of arithmetic circuits. In particular, for large (C)NNs to run in real time on resource-constrained systems, it is crucial to simplify or approximate the MAC units, since they typically account for a significant share of the area, power and latency. One option to achieve this goal is to replace the complex exact multiplying circuits with simpler, approximate ones. Approximate computing is a design alternative that exploits the intrinsic error resilience of many applications to produce energy-efficient circuits with only a small loss of accuracy.
In the course, we will study the importance of low-power, low-memory solutions, evaluate the accuracy of media processing algorithms and CNNs built on approximate computing, quantify the power savings of approximate circuits, and investigate training-time methodologies that compensate for the loss of accuracy. Students will also implement various circuits on FPGAs and evaluate them in terms of speed, area and power consumption.
- Lecturer: Patricio Bulić