Upcoming Events
Past Events
2021-09-27 Tutorial on "Wireless Federated Learning" at IEEE SPAWC 2021, online event.
2021-09-19 Special Session on "Neural Network Compression and Compact Deep Features: From Methods to Standards" at IEEE ICIP 2021, online event.
2020-12-11 Tutorial on "Distributed Deep Learning: Concepts, Methods & Applications in Wireless Networks" at IEEE GLOBECOM 2020 in Taipei, Taiwan.
2020-06-15 Workshop on "Efficient Deep Learning for Computer Vision" at IEEE CVPR 2020 in Seattle, USA.
2020-05-05 Special Session on "Distributed Machine Learning on Wireless Networks" at IEEE ICASSP 2020 in Barcelona, Spain.
2020-05-04 Tutorial on "Distributed and Efficient Deep Learning" at IEEE ICASSP 2020 in Barcelona, Spain.
This webpage gathers publications and software produced as part of a project at Fraunhofer HHI on developing new methods for federated and efficient deep learning.
Why Neural Network Compression?
State-of-the-art machine learning models such as deep neural networks achieve excellent performance in practice. However, since training and executing these models require extensive computational resources, they may not be deployable on devices with limited storage, computational power and energy, e.g., smartphones, embedded systems or IoT devices.
Our research addresses this problem and focuses on the development of techniques for reducing the complexity and increasing the execution efficiency of deep neural networks.
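To make this concrete, here is a minimal sketch of magnitude-based weight pruning, one common compression baseline; the function name and the 90% sparsity level are illustrative assumptions, not part of the project's software.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude entries of a weight tensor.

    Keeping only the largest (1 - sparsity) fraction of weights lets the
    model be stored and executed with a much smaller footprint, e.g. via
    sparse formats or subsequent entropy coding.
    """
    flat = np.abs(weights).ravel()
    k = int(flat.size * sparsity)  # number of entries to zero out
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

# Prune a random 256x256 weight matrix to roughly 90% sparsity.
w = np.random.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print("non-zero fraction:", np.count_nonzero(w_pruned) / w_pruned.size)
```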
Why Federated Learning?
Large deep neural networks are trained on huge data corpora. Therefore, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between the contributing nodes and, more generally, the prohibitive cost of communication.
In our research we investigate new methods for reducing the communication cost of distributed training. These include communication delay, gradient sparsification, and optimal encoding of the weight updates. Our results show that the upstream communication can be reduced by more than four orders of magnitude without significantly harming the convergence speed.
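As an illustration of the general idea rather than the exact implementation used in the project, the following sketch shows top-k gradient sparsification with a local error-feedback residual: only the few largest entries of each update would be transmitted, while the rest is accumulated locally for later rounds. Function and parameter names here are hypothetical.

```python
import numpy as np

def topk_sparsify(gradient, residual, k_fraction=0.001):
    """Transmit only the largest-magnitude entries of the accumulated update.

    Entries that are not transmitted are kept in a local residual
    (error feedback) and added to the gradient of the next round, so
    their information is delayed rather than lost.
    """
    accumulated = gradient + residual
    k = max(1, int(accumulated.size * k_fraction))
    threshold = np.partition(np.abs(accumulated).ravel(), -k)[-k]
    mask = np.abs(accumulated) >= threshold
    sparse_update = np.where(mask, accumulated, 0.0)
    new_residual = accumulated - sparse_update  # kept locally
    return sparse_update, new_residual

# One communication round: only ~0.1% of the entries are sent upstream.
grad = np.random.randn(1_000_000)
residual = np.zeros_like(grad)
update, residual = topk_sparsify(grad, residual, k_fraction=0.001)
print("entries transmitted:", np.count_nonzero(update), "of", grad.size)
```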
Software
- DeepCABAC: A universal tool for neural network compression (software)
- Robust and Communication-Efficient Federated Learning from Non-IID Data (software)
- Clustered Federated Learning (software)