TAO Meixia, HUANG Kaibin
With the proliferation of end devices, such as smartphones, wearable sensors, and drones, an enormous amount of data is generated at the network edge. This motivates the deployment of machine learning algorithms at the edge that exploit the data to train artificial intelligence (AI) models for making intelligent decisions. Traditional machine learning procedures, including both training and inference, are carried out in a centralized data center, thus requiring devices to upload their raw data to the center. This can cause severe network congestion and also expose users' private data to attacks by hackers. Thanks to the recent development of mobile edge computing (MEC), these issues can be addressed by pushing machine learning towards the network edge, resulting in the new paradigm of edge learning. The notion of edge learning is to allow end devices to participate in the learning process while keeping their data local, and to perform training and inference in a distributed manner under the coordination of an edge server. Edge learning can enable many emerging intelligent edge services, such as autonomous driving, unmanned aerial vehicles (UAVs), and extended reality (XR). For this reason, it is attracting growing interest from both academia and industry.
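As a concrete illustration of this local-training-with-server-coordination idea, the short Python sketch below shows one possible form of a federated-averaging round between an edge server and a few devices. It is a minimal, assumed example for readers new to the topic; the function names, the linear toy model, and the use of plain NumPy arrays as model weights are illustrative choices and are not drawn from any paper in this issue.

```python
# Minimal sketch (illustrative assumption, not any paper's method): in each
# round, every device trains on its own local data and sends only model
# updates, never raw data, to the coordinating edge server.
import numpy as np

def local_update(global_weights, local_data, lr=0.01):
    """Hypothetical on-device step: one gradient-descent pass on local data."""
    x, y = local_data                                   # raw data stays on the device
    grad = x.T @ (x @ global_weights - y) / len(y)      # least-squares gradient
    return global_weights - lr * grad

def server_aggregate(device_weights, device_sizes):
    """Edge server averages received models, weighted by local dataset size."""
    total = sum(device_sizes)
    return sum(w * (n / total) for w, n in zip(device_weights, device_sizes))

# Toy setup: three devices sharing a linear model with 5 parameters.
rng = np.random.default_rng(0)
global_weights = np.zeros(5)
devices = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]

for _ in range(10):                                     # a few coordination rounds
    updates = [local_update(global_weights, d) for d in devices]
    global_weights = server_aggregate(updates, [len(d[1]) for d in devices])
```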
Research and practice on edge learning are still in their infancy. In contrast to cloud-based learning, edge learning faces several fundamental challenges, including limited on-device computation capacities, energy constraints, and scarce radio resources. This special issue aims to provide a timely forum to introduce this exciting new area and the latest advances towards tackling these challenges in edge learning.
To begin with, the first paper, “Enabling Intelligence at Network Edge: An Overview of Federated Learning” by YANG et al., serves as a comprehensive overview of federated learning (FL), a popular edge learning framework, with a particular focus on the implementation of FL over the wireless infrastructure to realize the vision of network intelligence.
Due to the salient features of edge learning (notably, FL), such as non-independent and identically distributed (non-i.i.d.) datasets and dynamic communication environments, device scheduling and resource allocation should be accounted for in designing distributed model training algorithms. To this end, the second paper, “Scheduling Policies for Federated Learning in Wireless Networks: An Overview” by SHI et al., provides a comprehensive survey of existing scheduling policies for FL in wireless networks and also points out several promising future research directions. The third paper, “Joint User Selection and Resource Allocation for Fast Federated Edge Learning” by JIANG et al., presents a new policy for joint user selection and communication resource allocation to accelerate model training and improve learning efficiency.
Edge learning includes both edge training and edge inference. Due to stringent latency requirements, edge inference is particularly bottlenecked by the limited computation and communication resources at the network edge. The fourth paper, “Communication-Efficient Edge AI Inference over Wireless Networks” by YANG et al., identifies two communication-efficient architectures for edge inference, namely, on-device distributed inference and in-edge cooperative inference, thereby achieving low latency and high energy efficiency. The fifth paper, “Knowledge Distillation for Mobile Edge Computation Offloading” by CHEN et al., introduces a new computation offloading framework based on deep imitation learning and knowledge distillation that helps end devices quickly make fine-grained offloading decisions so as to minimize the end-to-end task inference latency in MEC networks. Considering edge inference in MEC-enabled UAV systems, the last paper, “Joint Placement and Resource Allocation for UAV-Assisted Mobile Edge Computing Networks with URLLC” by ZHANG et al., jointly optimizes the UAV's placement and transmit power to facilitate ultra-reliable and low-latency round-trip communication from sensors to UAV servers to actuators.
We hope that the six papers published in this special issue will stimulate new ideas and innovations from both academia and industry to advance the exciting area of edge learning.