
Commit 28733f4

Merge pull request #498 from yoshitomo-matsubara/dev
Update projects
2 parents 236c558 + a0af209 commit 28733f4

1 file changed: +41 −2 lines changed

1 file changed

+41
-2
lines changed

docs/source/projects.rst

@@ -27,14 +27,44 @@ It is pip-installable and published as a PyPI package i.e., you can install it b
Papers
*****

A Multi-task Supervised Compression Model for Split Computing
----
* Author(s): Yoshitomo Matsubara, Matteo Mendula, Marco Levorato
* Venue: WACV 2025
* PDF: `Paper <https://arxiv.org/abs/2501.01420>`_
* Code: `GitHub <https://github.com/yoshitomo-matsubara/ladon-multi-task-sc2>`_

**Abstract**: Split computing (≠ split learning) is a promising approach to deep learning models for resource-constrained
edge computing systems, where weak sensor (mobile) devices are wirelessly connected to stronger edge servers through
channels with limited communication capacity. State-of-the-art work on split computing presents methods for single tasks
such as image classification, object detection, or semantic segmentation. The application of existing methods to
multi-task problems degrades model accuracy and/or significantly increases runtime latency. In this study, we propose Ladon,
the first multi-task-head supervised compression model for multi-task split computing. Experimental results show that
the multi-task supervised compression model either outperformed or rivaled strong lightweight baseline models in terms
of predictive performance for ILSVRC 2012, COCO 2017, and PASCAL VOC 2012 datasets while learning compressed
representations at its early layers. Furthermore, our models reduced end-to-end latency (by up to 95.4%) and
energy consumption of mobile devices (by up to 88.2%) in multi-task split computing scenarios.
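
The abstract above describes a model that learns a compressed representation at its early layers and attaches one head per
task. As a rough, hypothetical PyTorch sketch of that general idea only (the module names, layer sizes, and task heads are
assumptions, not the actual Ladon architecture or the linked repository's API):

.. code-block:: python

    import torch
    from torch import nn


    class MultiTaskSupervisedCompressionNet(nn.Module):
        """Toy multi-task model with a small, early "compressed" representation."""

        def __init__(self, bottleneck_channels=24, num_classes=1000, num_seg_classes=21):
            super().__init__()
            # Early layers (run on the mobile device) emit a low-dimensional tensor
            # standing in for the compressed representation sent over the channel.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, bottleneck_channels, kernel_size=3, stride=2, padding=1),
            )
            # Server-side shared trunk and per-task heads.
            self.decoder = nn.Sequential(
                nn.Conv2d(bottleneck_channels, 128, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            self.classification_head = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes)
            )
            self.segmentation_head = nn.Conv2d(128, num_seg_classes, kernel_size=1)

        def forward(self, x):
            z = self.encoder(x)   # compressed representation (device -> server)
            h = self.decoder(z)   # shared server-side features
            return {
                "classification": self.classification_head(h),
                "segmentation": self.segmentation_head(h),
            }


    model = MultiTaskSupervisedCompressionNet()
    outputs = model(torch.randn(1, 3, 224, 224))
    print({name: tuple(out.shape) for name, out in outputs.items()})

In a real split computing deployment the encoder output would additionally be quantized and entropy-coded before
transmission; here it is left as a plain tensor for brevity.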

Understanding the Role of the Projector in Knowledge Distillation
----
* Author(s): Roy Miles, Krystian Mikolajczyk
* Venue: Proceedings of the 38th AAAI Conference on Artificial Intelligence (AAAI-24)
* PDF: `Paper <https://ojs.aaai.org/index.php/AAAI/article/view/28219/28433/>`_
* Code: `GitHub <https://github.com/roymiles/Simple-Recipe-Distillation>`_

**Abstract**: In this paper we revisit the efficacy of knowledge distillation as a function matching and metric learning
problem. In doing so we verify three important design decisions, namely the normalisation, soft maximum function, and
projection layers as key ingredients. We theoretically show that the projector implicitly encodes information on past
examples, enabling relational gradients for the student. We then show that the normalisation of representations is tightly
coupled with the training dynamics of this projector, which can have a large impact on the student's performance.
Finally, we show that a simple soft maximum function can be used to address any significant capacity gap problems.
Experimental results on various benchmark datasets demonstrate that using these insights can lead to superior or
comparable performance to state-of-the-art knowledge distillation techniques, despite being much more computationally
efficient. In particular, we obtain these results across image classification (CIFAR100 and ImageNet), object detection
(COCO2017), and on more difficult distillation objectives, such as training data efficient transformers, whereby
we attain a 77.2% top-1 accuracy with DeiT-Ti on ImageNet. Code and models are publicly available.
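
Since the abstract frames distillation as matching normalised student and teacher representations through a projector, a
small illustrative sketch may help. The loss form below (log-sum-exp as a smooth stand-in for a soft maximum) and all
class and variable names are assumptions for illustration, not the exact recipe from the paper or the linked repository.

.. code-block:: python

    import torch
    import torch.nn.functional as F
    from torch import nn


    class ProjectorDistillationLoss(nn.Module):
        """Linear projector + normalised feature matching (illustrative only)."""

        def __init__(self, student_dim=512, teacher_dim=2048):
            super().__init__()
            # The projector maps student features into the teacher's feature space;
            # per the abstract, it implicitly encodes information on past examples.
            self.projector = nn.Linear(student_dim, teacher_dim, bias=False)

        def forward(self, student_feat, teacher_feat):
            projected = F.normalize(self.projector(student_feat), dim=-1)
            target = F.normalize(teacher_feat, dim=-1)
            # Smooth aggregation of per-dimension errors; a plain mean or L1 would
            # also work here, this is just one possible "soft maximum" stand-in.
            per_dim_error = (projected - target).abs()
            return torch.logsumexp(per_dim_error, dim=-1).mean()


    criterion = ProjectorDistillationLoss()
    student_features = torch.randn(8, 512, requires_grad=True)
    teacher_features = torch.randn(8, 2048)  # teacher outputs are treated as constants
    loss = criterion(student_features, teacher_features.detach())
    loss.backward()
    print(loss.item())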

FrankenSplit: Efficient Neural Feature Compression With Shallow Variational Bottleneck Injection for Mobile Edge Computing
----
@@ -43,7 +73,16 @@ FrankenSplit: Efficient Neural Feature Compression With Shallow Variational Bott
* PDF: `Paper <https://ieeexplore.ieee.org/document/10480247/>`_
* Code: `GitHub <https://github.com/rezafuru/FrankenSplit>`_

**Abstract**: The rise of mobile AI accelerators allows latency-sensitive applications to execute lightweight Deep
Neural Networks (DNNs) on the client side. However, critical applications require powerful models that edge devices
cannot host and must therefore offload requests, where the high-dimensional data will compete for limited bandwidth.
Split Computing (SC) alleviates resource inefficiency by partitioning DNN layers across devices, but current methods
are overly specific and only marginally reduce bandwidth consumption. This work proposes shifting away from focusing on
executing shallow layers of partitioned DNNs. Instead, it advocates concentrating the local resources on variational
compression optimized for machine interpretability. We introduce a novel framework for resource-conscious compression
models and extensively evaluate our method in an environment reflecting the asymmetric resource distribution between
edge devices and servers. Our method achieves 60% lower bitrate than a state-of-the-art SC method without decreasing
accuracy and is up to 16x faster than offloading with existing codec standards.
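
As a very rough sketch of what a shallow variational bottleneck over intermediate features can look like in code: a small
encoder compresses early backbone features, additive uniform noise stands in for quantisation during training, and a
learned Gaussian prior gives an approximate bit cost that is traded off against a distortion term. Everything below
(names, prior, loss weight) is an assumption for illustration, not FrankenSplit's actual implementation.

.. code-block:: python

    import math

    import torch
    import torch.nn.functional as F
    from torch import nn


    class ShallowVariationalBottleneck(nn.Module):
        """Toy rate-distortion bottleneck over intermediate features (illustrative)."""

        def __init__(self, in_channels=64, latent_channels=16):
            super().__init__()
            self.encoder = nn.Conv2d(in_channels, latent_channels, kernel_size=3, stride=2, padding=1)
            self.decoder = nn.ConvTranspose2d(latent_channels, in_channels, kernel_size=4, stride=2, padding=1)
            # Per-channel log-scale of a zero-mean Gaussian prior over the latent.
            self.log_scale = nn.Parameter(torch.zeros(latent_channels))

        def forward(self, features):
            y = self.encoder(features)
            if self.training:
                # Additive uniform noise approximates quantisation during training.
                y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
            else:
                y_hat = torch.round(y)
            # Rate: negative log-likelihood under the Gaussian prior, converted to bits.
            scale = self.log_scale.exp().view(1, -1, 1, 1)
            nll = 0.5 * (y_hat / scale) ** 2 + self.log_scale.view(1, -1, 1, 1) + 0.5 * math.log(2 * math.pi)
            rate_bits = nll.sum() / math.log(2)
            distortion = F.mse_loss(self.decoder(y_hat), features)
            return y_hat, rate_bits, distortion


    bottleneck = ShallowVariationalBottleneck()
    features = torch.randn(2, 64, 56, 56)          # e.g., output of a backbone's first stage
    _, rate, distortion = bottleneck(features)
    loss = distortion + 1e-4 * rate                # assumed rate-distortion weight
    loss.backward()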


torchdistill Meets Hugging Face Libraries for Reproducible, Coding-Free Deep Learning Studies: A Case Study on NLP
