Evaluating Edge-Cloud Computing Trade-Offs for Mobile Object Detection and Classification with Deep Learning

Authors

  • Whendell Magalhães, Federal University of Campina Grande
  • Mainara Farias
  • Leandro Balby Marinho, Federal University of Campina Grande
  • Herman Martins Gomes, Federal University of Campina Grande
  • Glaucimar Aguiar, Hewlett Packard Enterprise
  • Plínio Silveira, Hewlett Packard Enterprise

Abstract

Internet-of-Things (IoT) applications based on Artificial Intelligence, such as mobile object detection and recognition from images and videos, may greatly benefit from inferences made by state-of-the-art Deep Neural Network (DNN) models. However, adopting such models in IoT applications poses an important challenge, since DNNs usually require substantial computational resources (i.e., memory, disk, CPU/GPU, and power), which may prevent them from running on resource-limited edge devices. On the other hand, moving the heavy computation to the cloud may significantly increase the running costs and latency of IoT applications. Possible strategies to tackle this challenge include: (i) partitioning the DNN model between edge and cloud; and (ii) running simpler models on the edge and more complex ones in the cloud, with information exchanged between the models when needed. Variations of strategy (i) also include running the entire DNN on the edge device (sometimes not feasible) and running the entire DNN in the cloud. All these strategies involve trade-offs in terms of latency, communication, and financial costs. In this article, we investigate such trade-offs in real-world scenarios. We conduct several experiments with object detection and image classification models. Our experimental setup includes a Raspberry Pi 3 B+ and a cloud server equipped with a GPU, and experiments are performed under different network bandwidths. Our results provide useful insights into the aforementioned trade-offs.
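
As a rough illustration of strategy (ii), the sketch below shows a hypothetical edge-side dispatcher that runs a lightweight model locally and offloads an image to a cloud-hosted model only when the edge prediction's confidence falls below a threshold. The function names, threshold value, and stubbed results are assumptions made for illustration; they do not reproduce the experimental setup described in the article.

import time
from typing import Tuple

CONFIDENCE_THRESHOLD = 0.6  # assumed cut-off for trusting the edge prediction

def run_edge_model(image: bytes) -> Tuple[str, float]:
    """Placeholder for a small on-device classifier (stubbed result)."""
    return "dog", 0.55  # (label, confidence)

def run_cloud_model(image: bytes) -> Tuple[str, float]:
    """Placeholder for a remote call to a larger cloud-hosted model."""
    time.sleep(0.1)  # stands in for network transfer plus server-side inference
    return "dog", 0.97

def classify(image: bytes) -> Tuple[str, float, str]:
    """Return (label, confidence, where_it_ran) using edge-first dispatch."""
    label, conf = run_edge_model(image)
    if conf >= CONFIDENCE_THRESHOLD:
        return label, conf, "edge"
    # Low confidence: exchange information with the more complex cloud model.
    label, conf = run_cloud_model(image)
    return label, conf, "cloud"

if __name__ == "__main__":
    print(classify(b"\x00" * 100))  # dummy image payload

Under this kind of scheme, the offloading threshold directly controls the trade-off the article studies: a lower threshold reduces cloud traffic and cost but relies more heavily on the weaker edge model, while a higher threshold shifts latency and expense toward the network and the cloud server.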


Published

2020-06-30