Distributing deep neural networks for maximising computing capabilities and power efficiency in swarm

Abstract: Deploying neural network models on embedded devices has attracted increasing interest, and much work is ongoing on this topic. Energy consumption, model size and inference time are critical issues, as explained in the literature. In the context of IoT and edge computing, tradeoffs have been studied in order to obtain a low-cost but fast answer that is robust to connection issues, exploiting early exiting or distributing deep neural networks. Those approaches exploit the cloud as an endpoint, balancing the load with respect to different computing capabilities. In this paper, we propose to extend those approaches to networks of embedded devices such as a swarm of drones, where every device has the same computing capabilities (in terms of energy and speed). Computing load may be balanced among the whole swarm in order to maximise either the lifetime of specific devices or the lifetime of the whole swarm. We develop criteria to best cut and distribute those networks, validate them through power measurements and express the different tradeoffs we have to address.
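For illustration only, the short PyTorch sketch below shows one way a sequential network could be cut at a chosen layer so that the first part runs on one swarm device and the remainder on a peer. The toy model, the cut index and the device roles are assumptions made for this example; they are not the criteria or the architecture proposed in the paper.

    # Illustrative sketch: partition a toy sequential network at a hypothetical
    # cut point. In a swarm, `head` would run on the capturing device and
    # `tail` on a peer that receives the intermediate activation.
    import torch
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(32, 10),
    )

    cut_index = 4  # hypothetical cut point, e.g. chosen by an energy/latency criterion

    layers = list(model.children())
    head = nn.Sequential(*layers[:cut_index])   # executed locally
    tail = nn.Sequential(*layers[cut_index:])   # executed on the receiving device

    x = torch.randn(1, 3, 32, 32)   # dummy input captured by the first device
    intermediate = head(x)          # partial inference, then transmit this tensor
    output = tail(intermediate)     # peer completes the inference
    print(output.shape)             # torch.Size([1, 10])

Moving the cut earlier or later trades local computation against the size of the intermediate tensor that must be transmitted, which is the kind of tradeoff the paper's cutting and distribution criteria are meant to balance.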
Document type: Conference papers

Cited literature: 12 references

https://hal.archives-ouvertes.fr/hal-02434837
Contributor: Chengfang Ren
Submitted on: Friday, January 10, 2020 - 1:18:26 PM
Last modification on: Monday, February 10, 2020 - 6:14:16 PM

File

08702672.pdf (files produced by the author(s))

Identifiers

  • HAL Id: hal-02434837, version 1

Citation

Victor Gacoin, Anthony Kolar, Chengfang Ren, Régis Guinvarc'h. Distributing deep neural networks for maximising computing capabilities and power efficiency in swarm. 2019 IEEE International Symposium on Circuits and Systems (ISCAS), May 2019, Sapporo, Japan. ⟨hal-02434837⟩
