Abstract
Many of the applications we interact with daily are powered at their core by Deep Learning (DL) techniques. These include natural language processing and speech recognition in intelligent personal assistants, advanced driver assistance systems in cars, and user behaviour prediction and data security in smart homes. Such DL techniques are mostly based on Deep Neural Networks (DNNs), which are highly parallelisable and scalable, making general-purpose Graphics Processing Units (GPUs) the preferred platform for their execution. However, the limited memory bandwidth, low performance per watt, and area overheads of GPUs are driving a paradigm shift towards Domain-Specific Architectures (DSAs) that improve energy efficiency and performance. The thesis will address significant research challenges in design methodologies aimed at enhancing the energy efficiency and performance of hardware accelerators for DNNs. The specific research topics will be tailored to the student's expertise and interests, and will include compression techniques, design space exploration, multi-chip-module-based implementations, and mapping techniques. The required skills for this thesis include a deep understanding of computer architecture and proficiency in programming languages.
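To give a flavour of one of the listed topics, the minimal sketch below (not part of the proposal; all names and parameters are illustrative assumptions) shows post-training symmetric INT8 weight quantization in plain NumPy, a simple compression technique that reduces the memory footprint and bandwidth pressure motivating DSA-style accelerators.

```python
# Illustrative sketch only: toy post-training INT8 weight quantization.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to INT8 with a single per-tensor scale."""
    scale = max(np.max(np.abs(weights)) / 127.0, 1e-12)  # avoid a zero scale
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights, e.g. to check accuracy loss."""
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(256, 256).astype(np.float32)  # dummy layer weights
    q, s = quantize_int8(w)
    err = float(np.mean(np.abs(w - dequantize(q, s))))
    print(f"mean absolute quantization error: {err:.5f}")
    # INT8 storage cuts weight memory 4x versus FP32.
```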
Keywords
Deep Neural Networks
Hardware Accelerators
Design Methodologies
Embedded Systems
Microarchitectures
ERC sector(s)
PE Physical Sciences and Engineering
Fields of study
Name supervisor
Maurizio Palesi
E-mail
maurizio.palesi@unict.it
Name of Department/Faculty/School
Department of Electrical, Electronic and Computer Engineering
Name of the host University
University of Catania (UNICT)
EUNICE partner e-mail of destination researcher
leonardo.mirabella@studium.unict.it
Country
Italy
Thesis level
Master
Minimum language knowledge required
English C2
Italian C2
Thesis mode
Hybrid
Length of the research internship
6 months
Financial support available (other than E+)
No