Dynamic Scheduling for Vehicle-to-Vehicle Communications Enhanced Federated Learning
Abstract
Leveraging the computing and sensing capabilities of vehicles, vehicular federated learning (VFL) has been applied to edge training for connected vehicles. The dynamic and interconnected nature of vehicular networks presents unique opportunities to harness direct vehicle-to-vehicle (V2V) communications, enhancing VFL training efficiency. In this paper, we formulate a stochastic optimization problem to optimize the VFL training performance, considering the energy constraints and mobility of vehicles, and propose a V2V-enhanced dynamic scheduling (VEDS) algorithm to solve it. The model aggregation requirements of VFL and the limited transmission time due to mobility result in a stepwise objective function, which presents challenges in solving the problem. We thus propose a derivative-based drift-plus-penalty method to convert the long-term stochastic optimization problem to an online mixed integer nonlinear programming (MINLP) problem, and provide a theoretical analysis to bound the performance gap between the online solution and the offline optimal solution. Further analysis of the scheduling priority reduces the original problem into a set of convex optimization problems, which are efficiently solved using the interior-point method. Experimental results demonstrate that compared with the state-of-the-art benchmarks, the proposed algorithm enhances the image classification accuracy on the CIFAR-10 dataset by and reduces the average displacement errors on the Argoverse trajectory prediction dataset by .
I Introduction
The rapid advancement of vehicular networks has enabled various new applications, including vehicular cooperative perception, trajectory prediction, and route planning. These applications produce vast amounts of data and require timely training of machine learning (ML) models to adapt to changing road conditions [1]. In conventional ML frameworks, data is transmitted to a central server for model training, which poses privacy risks and incurs significant delays. As more and more vehicles are equipped with powerful computing capabilities and can collect data via on-board sensors, the ML training process can shift from centralized servers to the vehicles themselves. Therefore, vehicular federated learning (VFL) is a promising framework for timely training and privacy conservation [2].
VFL is a distributed ML framework, where an ML model is trained over multiple vehicles. Vehicles with local data and computing capabilities are called source vehicles (SOVs). Each SOV trains an ML model based on the local dataset and uploads the model parameters to the roadside unit (RSU). The RSU aggregates the received parameters to obtain a global model and then broadcasts the new models to vehicles to start a new round. Implemented in vehicular networks, VFL takes advantage of the distributed data and processing capabilities while maintaining data privacy [3, 4].
The distinguishing characteristic of VFL is the high mobility of vehicles [4], which brings both challenges and opportunities. On the one hand, mobility leads to many challenges. Firstly, the channel conditions of vehicular networks change rapidly due to the high mobility, which complicates channel estimation and leads to unreliable data transmissions [5]. Secondly, vehicle-to-infrastructure (V2I) connections are intermittent. A vehicle may leave the coverage of an RSU before uploading all of its local model parameters [6], which imposes stringent latency requirements for model aggregation in VFL. The current solution to this problem is to increase the processor frequency and transmission power to reduce the computation and communication latency [7]. However, this may greatly increase the energy consumption of SOVs.
On the other hand, mobility also brings about communication opportunities [8, 9]. Recent advancements in vehicle-to-vehicle (V2V) communications via sidelinks enable vehicles to communicate directly with each other, enhancing transmission rates and reliability in vehicular networks [10, 11]. Many vehicles that are not scheduled for training can still be involved in VFL by relaying model uploads; these are referred to as opportunistic vehicles (OPVs). Utilizing the sidelinks, SOVs can upload their model parameters to the RSUs with the help of OPVs. Mobility increases the likelihood of scheduled vehicles encountering OPVs at closer ranges, under better channel conditions, or with line-of-sight paths. Leveraging these OPVs may increase the success rate of model uploading and therefore enhance the learning performance.
Currently, many studies have leveraged V2V sidelinks to support various applications in vehicular networks, such as vehicular task offloading [12, 13, 14], vehicular edge caching [15, 16] and cooperative perception [17, 18, 19]. However, few works utilize V2V sidelinks to improve the performance of VFL. Different from other applications [12, 13, 14, 15, 16, 17, 18, 19], VFL operates on a longer time scale with model aggregation requirements. Therefore, a dynamic scheduling algorithm is needed to adapt to the changing environment throughout the VFL training.
In this work, we consider a VFL system that utilizes the V2V communication resources and employs the OPVs to assist SOVs in model uploading, enhancing the VFL performance. The main contributions are summarized as follows:
• We characterize the convergence bound of the VFL system, and formulate a stochastic optimization problem to minimize the global loss function, considering the energy constraints and the channel uncertainty caused by vehicle mobility. A V2V-enhanced dynamic scheduling (VEDS) algorithm is proposed to solve it.
• The model aggregation requirements and the limited transmission time in VFL result in a stepwise objective function, which is non-convex and hard to solve. We propose a derivative-based drift-plus-penalty method to convert the long-term stochastic optimization problem to an online mixed integer nonlinear programming (MINLP) problem. We provide a theoretical performance guarantee for the proposed transformation by bounding the performance gap between the online and offline solutions. Our analysis further shows the impact of approximation parameters on the performance bound.
• Through the analysis of the MINLP problem, we identify the priority in the OPV scheduling and reduce the original problem to a set of convex optimization problems, which are solved using the interior-point method.
• Experimental results show that, compared with the state-of-the-art benchmarks, the test accuracy is increased by for image classification on the CIFAR-10 dataset, and the average displacement error (ADE) is reduced by for trajectory prediction on the Argoverse dataset.
The rest of this paper is organized as follows. The related papers are reviewed in Section II. Section III introduces the system model, including the FL, computation, and communication models. The convergence analysis and problem formulation are provided in Section IV, and the VEDS algorithm is proposed in Section V. Experimental results are shown in Section VI, and conclusions are drawn in Section VII.
II Related Works
Many studies have explored the application of federated learning (FL) in wireless networks [20], addressing critical issues such as wireless resource management[21, 22, 23, 24, 25, 26, 27], compression and sparsification[28, 29, 30, 31, 32], and training algorithm design [33, 34, 35]. However, these studies rarely consider the unique characteristics of vehicular networks, such as high mobility and rapidly changing channel conditions.
More recent studies have begun to investigate FL in vehicular networks. These studies recognize the challenges posed by the high mobility of vehicles and the dynamic nature of vehicular environments [36, 5, 7, 37]. In [36], the impact of vehicle mobility on data quality, such as noise, motion blur, and distortion, is considered, and a resource optimization and vehicle selection scheme is proposed in the context of VFL. The proposed scheme dynamically schedules vehicles with higher image quality, increasing the convergence rate and reducing the time and energy consumption in FL training. In [5], the short-lived connections between vehicles and RSUs are considered, and a mobility-aware optimization algorithm is proposed. The proposed algorithm enhances the convergence performance of VFL by optimizing the duration of each training round and the number of local iterations. In [7, 37], the impact of rapidly time-varying channels resulting from vehicle mobility is considered. Specifically, a mobility and channel dynamic aware FL (MADCA-FL) scheme is proposed in [7], which optimizes the success probability of vehicle selection and model parameter updating based on the analysis of vehicle mobility and channel dynamics. In [37], a more realistic scenario is explored within a 5G new radio framework, and a joint VFL and radio access technology parameter optimization scheme is proposed under the constraints of delay, energy, and cost, aiming to maximize the successful transmission rate of locally trained models. However, most existing studies focus on V2I aggregation, overlooking the potential of harnessing V2V sidelinks to enhance the VFL training efficiency.
Enhancements in V2V communications through sidelinks, as introduced in the recent updates by the Third Generation Partnership Project (3GPP) [10, 11], enable vehicles to communicate with each other directly. This advancement supports a variety of vehicular applications, including vehicular task offloading [12, 13, 14], vehicular edge caching [15, 16] and cooperative perception [17, 18, 19]. In [12, 13], vehicular task offloading strategies are proposed based on V2V communications, where tasks from one vehicle are offloaded to another to reduce the computational load on the original vehicle and enhance the task execution performance. Further investigations [14] have explored the integration of V2I and V2V communications, utilizing vehicles within the network as relays to improve the efficiency of task offloading processes. In terms of vehicular edge caching, the V2V sidelinks are utilized to enhance the caching hit rate and reduce the content access latency [15, 16]. In [17, 18, 19], the scenario of vehicular cooperative perception is explored, where V2V assistance expands the sensing range and enhances the accuracy of vehicle perception.
In the context of VFL, V2V communication resources have great potential for optimizing training efficiency. By appropriately utilizing these resources, the convergence speed of FL can be significantly improved, and the energy consumption of vehicles can be balanced.
III System Model
III-A VFL Model
We consider a VFL system as shown in Fig. 1, where an RSU (indexed by in the following) orchestrates the training of a neural network model with the assistance of vehicles that enter its coverage area. During the training round, the vehicles that possess local datasets and are willing to participate in the collaborative training of the neural network model are referred to as SOVs, denoted by . The vehicles that do not participate in model training, but have communication capabilities and can help SOVs upload the models are referred to as OPVs, denoted by .
Each SOV holds a local dataset with an associated distribution over the space of samples . For each data sample , a loss function is used to measure the fitting performance of the model vector . The local loss function of vehicle is defined as the average loss over the distribution , i.e.,
Different from traditional FL, where the set of clients participating in model training is fixed, the set of vehicles participating in VFL training varies in each round due to mobility. We assume that the vehicles are drawn from a given distribution , and the global loss function is defined as the average local loss function over the distribution , i.e.,
(1)
The goal is to minimize the global loss function by optimizing the global parameter through rounds of training. denotes the index of training rounds.
The VFL training process in each round includes three stages: local updates, model uploading and model aggregation.
III-A1 Local Updates
At the start of round, the RSU broadcasts its model parameters to the SOVs. After receiving the global model , every SOV uses stochastic gradient descent (SGD) algorithm to update the local model:
(2)
where is the learning rate, is a subset randomly sampled from the sample space . We assume that the batch size of all SOVs is the same, and denote it by .
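To make the local update step concrete, the following is a minimal PyTorch sketch of one round of local SGD following the update rule (2); the cross-entropy loss, the single pass over the sampled mini-batches, and the variable names are illustrative assumptions rather than the paper's exact training configuration.

```python
import torch

def local_update(model, global_params, loader, lr, device="cpu"):
    """One round of local SGD on an SOV, following update rule (2).

    Assumes a single pass over the sampled mini-batches and a cross-entropy
    task loss; the paper's exact number of local steps is not reproduced here.
    """
    model.load_state_dict(global_params)      # start from the broadcast global model
    model.to(device).train()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()     # placeholder task loss
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()       # stochastic gradient on the batch
        optimizer.step()                      # w <- w - lr * grad
    return {k: v.detach().cpu() for k, v in model.state_dict().items()}
```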
III-A2 Model Uploading
After an SOV completes the local updates, it uploads its model parameters to the RSU for model aggregation. SOVs can upload their model either via a direct V2I link or with the help of the OPVs via a V2V sidelink. The set of SOVs that successfully upload their model to the RSU is denoted by . The detailed communication model for model uploading is described in Section III-C.
III-A3 Model Aggregation
At the end of the round, the RSU aggregates the received model parameters:
(3)
and then starts a new round.
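A minimal sketch of the aggregation rule (3) is given below; equal weighting over the successfully received models is assumed, since the exact weights are not reproduced in this copy.

```python
import torch

def aggregate(received_models):
    """Average the parameters uploaded by the SOVs that succeeded in this round,
    as in aggregation rule (3). `received_models` is a list of state_dicts;
    equal weights are assumed here.
    """
    if not received_models:
        raise ValueError("no model was successfully uploaded in this round")
    avg = {k: torch.zeros_like(v, dtype=torch.float32)
           for k, v in received_models[0].items()}
    for state in received_models:
        for k, v in state.items():
            avg[k] += v.float() / len(received_models)
    return avg
```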
III-B Computation Model
We adopt a standard computation model [38] [39] for local updates. The total workload for computing local updates for each vehicle is , where is the number of floating point operations (FLOPs) needed for processing each sample. Further, we define (in cycle/s) as the clock frequency of the vehicular processor in round . Hence, the computation latency for updating the local model is determined as follows:
and the computation energy usage is
where is the energy consumption coefficient that depends on the chip architecture of the processor.
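As a worked example of the computation model above, the sketch below evaluates the local-update latency (workload divided by clock frequency) and the energy (coefficient times workload times squared frequency); the FLOPs-per-cycle conversion factor and the example numbers are added assumptions, since the paper's exact constants are not shown in this copy.

```python
def computation_cost(batch_size, flops_per_sample, freq_hz, kappa, flops_per_cycle=1.0):
    """Latency and energy of one local update under the model of Section III-B.

    `flops_per_cycle` converts the FLOP workload into processor cycles and is
    an assumption; `kappa` is the chip-dependent energy coefficient.
    """
    workload_cycles = batch_size * flops_per_sample / flops_per_cycle
    latency_s = workload_cycles / freq_hz                 # t_cmp = workload / f
    energy_j = kappa * workload_cycles * freq_hz ** 2     # E_cmp = kappa * workload * f^2
    return latency_s, energy_j

# Example: 32 samples, 50 MFLOPs per sample, 1 GHz clock, kappa = 1e-28.
print(computation_cost(32, 50e6, 1e9, 1e-28))
```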
III-C Communication Model
We assume that the vehicular network operates in a discrete time-slotted manner. The slots in round are denoted by , where is the number of slots in round and the slot length is denoted by . The round duration is set to be the average sojourn time of vehicles in the RSU coverage. We assume that, based on historical information, the average sojourn time of vehicles within the RSU coverage area can be estimated, but the specific sojourn time of each vehicle cannot be known in advance. The timeline of the proposed system is shown in Fig. 2.
In every slot, one SOV is scheduled to upload its model parameters to the RSU either via a direct V2I link, called direct transmission (DT), or with the help of the OPVs, called cooperative transmission (COT). We use to denote the SOV scheduling decision. if the SOV is scheduled for model uploading in slot . Otherwise, . Note that since , the subscript of is omitted for simplicity, and the same applies in the following text. has the following constraints:
(4) |
(5) |
We use a binary variable to denote the transmission mode. if the SOV transmits its model to the RSU via DT. if the SOV transmits its model to the RSU via COT. has the binary constraint:
(6) |
For DT, the scheduled SOV uploads its model parameters to the RSU directly using the whole bandwidth . The transmission rate (bit/s) for the SOV is
where is the channel coefficient between vehicle and the RSU. Due to the high mobility of vehicles, the channel coefficient varies in different slots. If vehicle leaves the RSU coverage, . is the transmission power of vehicle , and is the noise power spectrum density.
For COT, the scheduled SOV uses the first half of the slot to transmit its model parameters to the OPVs, and the OPVs use distributed space-time code (DSTC) [40] to relay the model parameters to the RSU in the second half of the slot, as shown in Fig. 2. We use to denote the OPV scheduling decision in slot , where if the OPV is scheduled for COT, and otherwise. has the binary constraint:
(7) |
The transmission rate of SOV using COT is [40, 41, 42]
The V2V transmission rate between SOV and OPV is
where is the channel coefficient between SOV and OPV . To ensure that the scheduled OPVs can reliably decode the signal before it begins to transmit, we have the following constraint:
(8)
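For illustration, the sketch below computes the DT rate as the Shannon capacity over the full bandwidth and the COT rate with a half-slot factor and SNR combining across the scheduled OPVs; the combining form follows the usual DSTC relaying analysis in [40]-[42] and is an assumption here, since the exact expressions are not reproduced in this copy.

```python
import math

def dt_rate(bandwidth_hz, p_sov, gain_sov_rsu, n0):
    """Direct V2I (DT) rate: Shannon capacity over the whole bandwidth."""
    snr = p_sov * gain_sov_rsu / (n0 * bandwidth_hz)
    return bandwidth_hz * math.log2(1.0 + snr)

def cot_rate(bandwidth_hz, p_sov, gain_sov_rsu, relay_powers, relay_gains, n0):
    """Cooperative (COT) rate with DSTC relaying: the 1/2 factor accounts for
    the half slot used by the SOV-to-OPV broadcast, and the received SNRs of
    the SOV and the scheduled OPVs are combined (an assumed form).
    """
    noise = n0 * bandwidth_hz
    combined_snr = p_sov * gain_sov_rsu / noise
    combined_snr += sum(p * g / noise for p, g in zip(relay_powers, relay_gains))
    return 0.5 * bandwidth_hz * math.log2(1.0 + combined_snr)
```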
We use to denote the transmission power allocation in slot . There is a power constraint for SOVs:
(9) |
and for OPVs:
(10) |
In every slot, the communication energy consumption for each SOV is
and for each OPV , it is
The data transmitted for each SOV is
The SOV has successfully transmitted its model to the RSU if the amount of transmitted model parameters in all slots is greater than or equal to the model size, i.e., , where denotes the model size. We use an indicator function to denote whether the vehicle has successfully transmitted its model, where if condition is true, and otherwise. Using this notation, the aggregation rule (3) can be rewritten as
(11) |
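The success indicator used in the rewritten aggregation rule (11) can be checked per round as in the short sketch below, where the per-slot rates and the slot length are assumed inputs.

```python
def upload_succeeded(per_slot_rates_bps, slot_len_s, model_size_bits):
    """An SOV's upload succeeds if the bits delivered over all slots of the
    round reach the model size, i.e., the indicator in (11) equals one.
    """
    delivered_bits = sum(rate * slot_len_s for rate in per_slot_rates_bps)
    return delivered_bits >= model_size_bits
```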
IV Problem Formulation
IV-A Convergence Analysis
The goal of the VFL is to minimize the global loss function (1). However, this objective function is implicit due to the deep and diverse neural network architectures of ML. Therefore, convergence analysis is performed for an explicit objective function. Following the state-of-the-art literature [23, 24, 27, 28, 29], we make the following assumptions:
Assumption 1:
The local loss function is -smooth for each SOV in each round , i.e.,
Assumption 2:
The local loss function is -strongly convex for each SOV in each round , i.e.,
Assumption 3:
The stochastic gradient is unbiased and variance-bounded, i.e.,
Then, the following Lemma is derived:
Lemma 1. Based on the given assumptions and the aggregation rule (11), the expected decrease of the loss after one round is upper bounded by
(12)
where the expectation is taken over the randomness of SGD.
Based on Lemma 1, the convergence performance of the proposed VFL after rounds of training is given by:
IV-B Problem Formulation
Based on Theorem 1, we alternatively minimize the upper bound of in (13), which is equivalent to minimizing in each round . The optimization problem is formulated as
(14a)
s.t. (14b)
(14c)
(14d)
constraints (4)–(10),
where denotes the SOV scheduling, denotes the transmission mode, is the OPV scheduling, is the power allocation throughout round . The constraints (14b) and (14c) indicate that for each vehicle, the total energy consumption cannot exceed the given energy budget. The constraint (14d) ensures that the vehicles begin to transmit after they finish local updates. The constraints (4)–(10) limit the range of optimization variables.
V V2V-Enhanced Dynamic Scheduling Algorithm
In this section, we propose the VEDS algorithm that solves in an online fashion. Firstly, we propose a derivative-based drift-plus-penalty method to convert the long-term stochastic optimization problem into an online MINLP problem. The converted MINLP problem is then decoupled into a DT problem and a COT problem. The DT problem is convex and is directly solved using the Karush-Kuhn-Tucker (KKT) conditions. Analysis of the OPV scheduling priority reduces the COT problem to a set of convex problems, which are solved using the interior-point method.
V-A Transformation of the stochastic optimization problem
is a stochastic optimization problem. The greatest challenge to solving this problem lies in the uncertainty of channel state information. In vehicular networks, this results from the rapid changes in channels due to the high mobility of vehicles. In reality, future channel information is often difficult to predict, and even if we could acquire future channel information, addressing this problem remains highly complex due to the integer optimization variables and the non-convex objective function.
One effective way to tackle this kind of problem is the drift-plus-penalty method in Lyapunov optimization [43][44]. By constructing virtual queues, the long-term stochastic optimization problem is transformed into an online problem and online decision-making algorithms can be designed to solve it. However, the model aggregation requirements and the limited transmission time of VFL result in a stepwise objective function (15), which cannot be handled by the typical drift-plus-penalty method. Therefore, we propose a derivative-based drift-plus-penalty method to address this challenge. Firstly, we use the shifted sigmoid function to approximate it and transform into .
(16a)
s.t. (16b)
constraints (4)–(10), (14b)–(14d),
where is a shifted sigmoid function, defined as
and is an approximation parameter. As increases, the function converges towards the indicator function , becoming a more precise approximation. Constraint (16b) ensures that a vehicle will not be scheduled after it finishes transmitting its model.
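The effect of the approximation parameter can be seen from the small sketch below, which evaluates a shifted sigmoid surrogate for the indicator that the transmitted amount reaches the model size; the exact shift and normalization used in the paper are assumed here.

```python
import math

def shifted_sigmoid(transmitted, model_size, k):
    """Smooth surrogate for the indicator 1{transmitted >= model_size}.
    A larger approximation parameter k gives a sharper approximation.
    """
    return 1.0 / (1.0 + math.exp(-k * (transmitted - model_size)))

# The surrogate sharpens toward a step function as k grows (model size 100).
for k in (0.05, 0.5, 5.0):
    print(k, [round(shifted_sigmoid(d, 100.0, k), 3) for d in (60.0, 100.0, 140.0)])
```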
We define as the amount of model parameters that has been transmitted, where
(17)
The derivative of with respect to is
According to (17), . As increases from to , increases from to . Therefore, is an increasing function with respect to , reaching its minimum when , and reaching its maximum when . We define
is a decreasing function with respect to . Since , there is
(18)
We convert the long-term stochastic optimization problem into an online optimization problem as follows. For the SOVs, virtual queues are created to represent the difference between the cumulative energy consumption up to slot and the budget, evolving as follows:
(19)
Likewise, virtual queues are created for the OPVs, evolving as follows:
(20)
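A common form of this virtual-queue evolution is sketched below: the backlog grows when the per-slot energy consumption exceeds the per-slot share of the budget and is truncated at zero. Spreading the round budget uniformly over the slots is an assumption, as the exact expressions in (19) and (20) are not reproduced in this copy.

```python
def update_virtual_queue(backlog, energy_used, per_slot_budget):
    """One-slot update of a virtual energy queue (drift-plus-penalty style)."""
    return max(backlog + energy_used - per_slot_budget, 0.0)

# Example: a 2 J round budget spread over 100 slots (illustrative numbers).
q, num_slots, round_budget = 0.0, 100, 2.0
for t in range(num_slots):
    q = update_virtual_queue(q, energy_used=0.025,
                             per_slot_budget=round_budget / num_slots)
print(q)  # a positive backlog signals that the budget is being exceeded
```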
All virtual queues are initialized to 0, i.e., , and . Then, problem can be transformed to :
(21a)
s.t. (21b)
(21c)
(21d)
(21e)
(21f)
(21g)
(21h)
We derive the following theorem to guarantee the performance of the proposed transformation. Superscript † is used to denote the solution to , and ∗ is used to denote the optimal offline solution to .
Theorem 2. Suppose all queues are initialized to 0, the difference between the optimal value of solving and the counterpart of solving is bounded by:
(22)
The energy consumption of the SOV is bounded by
(23)
and that of the OPV is bounded by
(24)
where , , , , and .
Theorem 2 shows that, instead of solving the long-term stochastic optimization problem , we alternatively solve the online problem . The performance is bounded with respect to the optimal offline solution to , and the energy consumption for each vehicle is also bounded. The trade-off between the objective function (16a) and the energy consumption is balanced by the weight parameter . The worst-case performance can be improved by increasing the parameter , equivalent to reducing the approximation parameter . However, choosing overly small values compromises the precision of approximating the indicator function with the sigmoid function . Therefore, in practice, it is crucial to carefully choose the values of and to ensure optimal approximation performance under the energy constraints.
is an MINLP with binary variables and continuous variables , which exhibits high computational complexity for direct solution. However, due to the existence of constraint (21c), enumerating and only introduces a linear increase in computational complexity. Therefore, we fix the SOV scheduling decision and transmission mode , and focus on solving and . Specifically, when the SOV scheduling and transmission mode are decided, is reduced to the following sub-problems.
V-B Direct Transmission Problem
When SOV is scheduled for transmission and DT mode is selected (), is reduced to
(25a)
s.t. (25b)
is a convex problem. The optimal solution is derived using the KKT conditions and given in Proposition 1:
Proposition 1. Given the SOV scheduling decision, the optimal power allocation strategy for DT is given by
(26)
where is defined as .
V-C Cooperative Transmission Problem
When SOV is scheduled for transmission and COT mode is selected (), is reduced to
(27a)
s.t. (27b)
(27c)
is still an MINLP problem, and directly enumerating the binary variable introduces exponential complexity. We further analyze the OPV scheduling priority and prove the following proposition.
Proposition 2. Suppose is solvable; then there must exist an optimal set of that adheres to a specific structure: the variables are arranged in descending order of values, and the optimal solution selects the top based on this ordering.
Proof: This proposition is proved by contradiction. Assume that all optimal solutions do not adhere to the proposed structure, i.e., they do not select the top based on the highest values.
Consider one of the optimal solutions , which includes some with lower values set to 1, while at least one with a higher value (within the top ) is set to 0.
Consider another OPV scheduling strategy , where all within the top highest values are set to 1. For all , we consider the power allocation strategy:
and set . The solution set is a feasible solution, since all constraints of are satisfied, and the objective function is
Since the objective function of the solution set is equal to that of the optimal solution set , is also an optimal solution. This contradicts the assumption that none of the optimal solutions adhere to the proposed structure. Proposition 2 is proved.
Based on Proposition 2, we can sort the elements of in descending order based on the values of , and schedule the first OPVs for COT, i.e., set for them, and set for all other vehicles.
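This priority rule can be implemented with a single sort, as in the sketch below; ranking the candidate OPVs by their OPV-to-RSU channel gain is an assumption here, since the sorting key's symbol is elided in this copy.

```python
def schedule_opvs(candidate_gains, num_selected):
    """Proposition 2: rank candidate OPVs and schedule the top `num_selected`
    for COT (set their scheduling variable to 1, all others to 0).
    """
    ranking = sorted(candidate_gains, key=candidate_gains.get, reverse=True)
    selected = set(ranking[:num_selected])
    return {opv: 1 if opv in selected else 0 for opv in candidate_gains}

# Example with hypothetical OPV identifiers and channel gains.
print(schedule_opvs({"opv1": 0.8, "opv2": 0.3, "opv3": 0.9}, num_selected=2))
# {'opv1': 1, 'opv2': 0, 'opv3': 1}
```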
When is given, the constraint (27c) becomes
(28)
where . is reduced to
(29)
s.t.
is a convex optimization problem since the objective (29) is to maximize a concave function and all constraints (21e), (25b) and (28) are convex, which can be solved by optimization tools, such as CVX [45], based on the interior-point method. All transformations of are equivalent transformations, and the procedure of solving is summarized in Algorithm 1, where , and denotes the value of (21a), i.e., the objective function of .
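To illustrate how such a reduced convex problem can be handed to an interior-point solver, the schematic CVXPY formulation below maximizes a concave log-rate utility minus a queue-weighted energy penalty under per-vehicle power caps; all coefficients and the exact objective are placeholders, not the paper's expressions.

```python
import cvxpy as cp
import numpy as np

# Hypothetical per-link coefficients for one scheduled SOV and three OPVs.
gains = np.array([0.9, 0.6, 0.4, 0.3])    # channel gain over noise (assumed)
queues = np.array([0.2, 0.1, 0.1, 0.3])   # virtual energy queue backlogs (assumed)
p_max, slot_len = 0.3, 0.05               # power cap (W) and slot length (s)

p = cp.Variable(4, nonneg=True)           # transmit powers to optimize
utility = cp.log(1 + gains @ p)           # concave rate-like utility
penalty = (queues @ p) * slot_len         # queue-weighted energy cost
problem = cp.Problem(cp.Maximize(utility - penalty), [p <= p_max])
problem.solve()                           # handled by an interior-point-capable solver
print(p.value)
```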
V-D The Complete Algorithm
The whole procedure of the proposed VEDS algorithm is summarized in Algorithm 2. At the start of each round, the RSU broadcasts the global model to the SOVs, and the SOVs perform local updates based on their local dataset. In each slot, the RSU solves based on the current channel state and the amount of transmitted model parameters . Based on the solution to , the resources are allocated, and the virtual queues are updated. This process is iterated until the end of the round.
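The per-round control flow can be summarized by the sketch below; the injected callables (broadcast, local update, per-slot solver, bookkeeping, aggregation) are assumptions standing in for the RSU interface and the solver of the online problem, which are not spelled out in code in the paper.

```python
from typing import Callable, Dict, List

def veds_round(broadcast: Callable[[], dict],
               local_update: Callable[[int, dict], dict],
               solve_slot: Callable[[int], dict],
               apply_and_track: Callable[[dict], None],
               upload_succeeded: Callable[[int], bool],
               aggregate: Callable[[List[dict]], dict],
               sov_ids: List[int],
               num_slots: int) -> dict:
    """One VEDS training round (high-level flow of Algorithm 2)."""
    global_model = broadcast()                       # RSU broadcasts the global model
    local_models: Dict[int, dict] = {m: local_update(m, global_model) for m in sov_ids}
    for t in range(num_slots):
        decision = solve_slot(t)      # SOV/OPV scheduling, mode, and power for slot t
        apply_and_track(decision)     # transmit, accumulate bits, update virtual queues
    received = [local_models[m] for m in sov_ids if upload_succeeded(m)]
    return aggregate(received)        # new global model from the successful uploads
```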
V-E Complexity Analysis
The complexity of Algorithm 2 is where and denote the complexity of solving and , respectively. can be solved in constant time according to Proposition 1. is a convex optimization problem with a convex objective and up to linear constraints, involves an optimization variable of dimension . Utilizing the interior-point method, can be addressed with a complexity of for a given precision of the solution. Ignoring lower-order terms, the overall computation complexity of Algorithm 2 is
VI Experiments
In this section, we evaluate the performance of the proposed VEDS algorithm. Firstly, it is compared with the benchmarks under different vehicle speeds , approximation parameters , and weight . Then, the proposed VEDS algorithm is evaluated for the CIFAR-10 image classification task [46]. Finally, the VEDS algorithm is applied to a real-world trajectory prediction dataset Argoverse [47] to showcase the value in practical vehicular applications.
Simulation Parameters | Values
---|---
System Bandwidth | 20 MHz
Carrier Frequency | 5.9 GHz
Maximum Transmission Power | 0.3 W
Noise Power Spectrum Density | -174 dBm/Hz
Shadowing Fading Std. Dev. | 3 dB (LOS, NLOSv), 4 dB (NLOS)
Vehicle Blockage Loss | dB
Energy Consumption Coefficient | 
Energy Constraints | Randomly selected from J to J
VI-A Simulation setups
A road network is built based on SUMO [48], as shown in Fig. 3. An RSU is placed at the center of the road network. The vehicles move according to the Manhattan mobility model with a maximum speed of m/s, where is a variable for the experiments. For wireless communications, we adopt the V2X channel models in 3GPP TR 37.885 [11]. In the urban environment, the pathloss of the LOS and the NLOSv channels are given by where is the distance between two devices, is the carrier frequency. The pathloss of the NLOS channel is specified by The simulation parameters are summarized in Table I.
For comparison, the following benchmarks are considered:
VI-A1 Optimal Benchmark
All the SOVs within the RSU coverage can successfully upload their model parameters.
VI-A2 Dynamic algorithm with V2I-only communications (V2I-only)
This framework adjusts transmission strategies dynamically in every time slot, considering vehicle mobility. However, it solely uses V2I communications, meaning that the OPVs are not included. This is a special case of our proposed algorithm.
VI-A3 Mobility and channel dynamic-aware FL (MADCA-FL)
This is a state-of-the-art VFL framework that considers the rapidly changing channel and vehicle mobility [7].
VI-A4 Static resource allocation and device scheduling algorithm (SA)
This framework does not consider the rapidly time-varying channel and vehicle mobility. It schedules vehicles based on their initial channel states and positions, which is a modified version of the state-of-the-art device scheduling and resource allocation scheme [26].
VI-B Performance of VEDS under different parameters
VI-B1 Impact of vehicle speed
Firstly, we validate the performance of our algorithm under different vehicle speeds. We use the objective function of , i.e., the number of successful aggregations, as the performance metrics. As illustrated in Fig. 9, the number of successful aggregations of our framework initially increases and then decreases as the vehicle speed is adjusted from 0 (a stationary scenario) to m/s, achieving of the optimal benchmark performance when m/s. This performance increase at low speeds can be attributed to the mobility of vehicles allowing OPVs to enter the coverage of the RSU, while the SOVs largely remain within the RSU coverage area. If the vehicles move at high speed, the departure of some SOVs from RSU coverage results in deteriorated channel conditions. However, with the assistance of OPVs, these SOVs can still transmit the model back to the RSU. In comparison, the V2I-only framework and the MADCA-FL also consider vehicle mobility and exhibit certain robustness to changes in mobility. The SA framework, which employs static device scheduling, shows a significant performance decline in high-speed scenarios.
VI-B2 Impact of
We evaluate our proposed algorithm for different values of , as shown in Fig. 9. It is illustrated that as increases from to , the number of successful aggregations first increases and then decreases, reaching a maximum when is approximately equal to . This is because as Theorem 2 suggests, when the parameter is too small, the sigmoid function becomes overly smooth, leading to a suboptimal approximation of the indicator function (as shown in Fig. 9). On the other hand, when is too large, the term diminishes, resulting in a loose bound in (22). Both scenarios adversely affect the overall performance of the algorithm. We also explain this phenomenon from a more intuitive perspective. As illustrated in Fig. 9, when is small, the weight increases slowly with respect to , the amount of transmitted model parameters. In this case, the algorithm tends to schedule vehicles evenly to balance their energy consumption. Consequently, it is possible that many vehicles have transmitted most of their model parameters but have not completed the upload. In the FL context, such a scenario is considered a transmission failure. When is large, also increases slowly when is small, and thus, the aforementioned phenomenon persists, leading to suboptimal performance.
VI-B3 Impact of
Then, we evaluate our proposed VEDS algorithm for different weight parameters . The number of successful aggregations and the energy consumption of all vehicles are shown in Fig. 9 and Fig. 9, respectively. It is illustrated that as increases from to , vehicles tend to consume more energy, which results in higher energy usage and a greater number of successful aggregations. When exceeds a threshold (around ), most vehicles use their maximum transmission power to upload their model, and the energy constraints are violated. Therefore, in practical systems, it is crucial to carefully choose the value of to ensure optimal training performance under energy constraints.
VI-C Evaluation on the CIFAR-10 dataset
Then, we evaluate the proposed VEDS algorithm on the CIFAR-10 dataset[46], which comprises training images and test images across ten categories. We consider both the independent and identically distributed (i.i.d.) and non-independent and identically distributed (non-i.i.d) settings. For the i.i.d. setting, the dataset is evenly divided into subsets, each containing samples from all 10 categories. For the non-i.i.d. setting, data samples are organized by category, and each vehicle holds a disjoint subset of data with samples from categories. Using the dataset, we train a convolutional neural network (CNN) with six convolutional layers. The learning rate is , and the batch size is set to .
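The non-i.i.d. split described above can be reproduced with a standard label-sorted shard partition, sketched below; the number of vehicles and categories per vehicle in the example are illustrative, since the exact values are elided in this copy.

```python
import numpy as np

def non_iid_partition(labels, num_clients, classes_per_client, seed=0):
    """Sort samples by label, cut them into shards, and give each client
    `classes_per_client` shards so it sees only a few categories.
    """
    rng = np.random.default_rng(seed)
    order = np.argsort(labels)                                   # group indices by class
    shards = np.array_split(order, num_clients * classes_per_client)
    shard_ids = rng.permutation(len(shards))
    return [np.concatenate([shards[s] for s in
                            shard_ids[c * classes_per_client:(c + 1) * classes_per_client]])
            for c in range(num_clients)]

# Example: 40 vehicles, 2 categories each (illustrative numbers).
labels = np.repeat(np.arange(10), 5000)                          # CIFAR-10-sized label array
partitions = non_iid_partition(labels, num_clients=40, classes_per_client=2)
```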
The test accuracy of the VEDS algorithm compared with the benchmarks is illustrated in Fig. 12 (i.i.d.) and Fig. 12 (non-i.i.d.), where , and vehicle speed m/s. In the i.i.d. scenario, both VEDS and the benchmarks achieve high test accuracy. The VFL convergence speed of the VEDS algorithm closely approaches that of the optimal benchmark and is significantly higher than other benchmarks. Under the non-i.i.d. scenario, the convergence speed and the highest test accuracy of the VEDS algorithm are close to the optimal benchmark and exceed the other three benchmarks. After 1000 seconds of training, VEDS achieves a test accuracy of , exceeding V2I-only, MADCA-FL and SA over , and . After 10000 seconds of training, the highest achievable accuracies are for the optimal benchmark, for VEDS, for V2I-only, for MADCA-FL, and for SA.
VI-D Evaluation on Argoverse trajectory prediction dataset
Finally, we evaluate the proposed VEDS algorithm on the real-world trajectory prediction dataset Argoverse [47]. Argoverse encompasses more than sequences gathered from Pittsburgh and Miami. Each sequence is captured from a moving vehicle at a sampling frequency of Hz. The task is to predict the position of the vehicle for the next 3 seconds. The dataset is organized into training, validation, and test sets, containing , , and sequences, respectively. The sequences are uniformly partitioned into 40 subsets.
Based on the dataset, the VFL system collaboratively trains a lane graph convolutional neural network (LaneGCN) [49]. The LaneGCN includes three sub neural networks: an ActorNet, a MapNet, and a FusionNet. The ActorNet contains a 1D CNN and a Feature Pyramid Network (FPN) to extract features of vehicle trajectories. The MapNet is a graph convolutional neural network that represents and extracts the map features. The FusionNet is used to fuse the vehicle trajectory features and the map features to output the final trajectory prediction results. We employ ADE as the metric for trajectory prediction, which is the average distance between the actual and predicted vehicle positions on the trajectory.
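The ADE metric used here reduces to a mean Euclidean distance over the prediction horizon, as in the sketch below; the 30-step horizon in the example assumes the 10 Hz Argoverse sampling rate over the 3-second prediction window.

```python
import numpy as np

def average_displacement_error(pred, gt):
    """ADE: mean L2 distance between predicted and ground-truth positions,
    averaged over all timesteps and all sequences.

    pred, gt: arrays of shape (num_sequences, horizon, 2).
    """
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

# Example with a 30-step horizon (3 s at 10 Hz).
pred = np.zeros((5, 30, 2))
gt = np.ones((5, 30, 2))
print(average_displacement_error(pred, gt))   # sqrt(2), roughly 1.414
```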
The performance of the proposed framework compared with the benchmarks is illustrated in Fig. 12. It is shown that the proposed VEDS algorithm outperforms the benchmarks in terms of ADE. Specifically, VEDS achieves an ADE of after rounds of training, which is , , lower than V2I-only, MADCA-FL and SA, respectively. These results validate the strong performance of our proposed VEDS algorithm when applied to real-world autonomous driving datasets.
VII Conclusions
In this paper, we have considered a VFL system, where the SOVs and OPVs in a vehicular network collaborate to train an ML model under the orchestration of the RSU. A VEDS algorithm has been proposed to optimize the VFL training performance under energy constraints and channel uncertainty of vehicles. Convergence analysis has been performed to transform the implicit FL loss function into the number of successful aggregations. Then, a derivative-based drift-plus-penalty method has been proposed to convert the long-term stochastic optimization problem into an online MINLP problem, and a theoretical performance guarantee has been provided for the proposed transformation by bounding the performance gap between the online and offline solutions. Based on the analysis of the scheduling priority, the MINLP problem has been further reduced to a set of convex optimization problems, which can be efficiently solved using the interior-point method. Experimental results have illustrated that our proposed framework is robust under different vehicle speeds. The test accuracy is increased by for the CIFAR-10 dataset, and the ADE is reduced by for the Argoverse dataset.
Appendix A Proof of Lemma 1
For simplicity, we use to denote in the appendix. According to Assumption 1 and definition (1), the global loss function is also -smooth and -strongly convex. There is:
(30)
According to Assumption 3, there is
For the term , we have
(31)
Appendix B Proof of Theorem 1
Appendix C Proof of Theorem 2
We define a quadratic Lyapunov function as
We define , , , , and . Then, the Lyapunov drift of a single round is defined as
(34)
By adding on both sides of (34), the upper bound on the derivative-based drift-plus-penalty function is
We define the -round drift as
Then, the -round drift-plus-penalty function is bounded by:
(35)
where inequality holds because solving yields a minimum value of (21a).
Based on the definition of , we have , and therefore
and
(36)
Similarly, there is
(37)
Substituting (36) and (37) into (35), we have
Since , we have
Since the function is continuous and differentiable, there exists a point such that
Based on (18), we have
Finally, there is
For energy consumption, we have
Therefore, the energy consumption of is bounded by
Likewise, the energy consumption of is bounded by
Theorem 2 is proved.
Appendix D Proof of Proposition 1
The Lagrangian of is given by:
Then the KKT condition is given by:
If neither nor is zero, there is no solution to these equations. Therefore, three cases are considered:
1) If , , then
2) If , , then .
3) If , , then . We get:
where denotes .
References
- [1] Y. Sun, W. Shi, X. Huang, S. Zhou, and Z. Niu, “Edge learning with timeliness constraints: Challenges and solutions,” IEEE Commun. Mag., vol. 58, no. 12, pp. 27–33, Dec. 2020.
- [2] J. Yan, T. Chen, B. Xie, Y. Sun, S. Zhou, and Z. Niu, “Hierarchical federated learning: Architecture, challenges, and its implementation in vehicular networks,” ZTE Commun., vol. 21, no. 1, pp. 38–45, Mar. 2023.
- [3] A. M. Elbir, B. Soner, S. Çöleri, D. Gündüz, and M. Bennis, “Federated learning in vehicular networks,” in Proc. IEEE Int. Mediterranean Conf. Commun. Netw. (MeditCom), Athens, Greece, Sept. 2022, pp. 72–77.
- [4] J. Posner, L. Tseng, M. Aloqaily, and Y. Jararweh, “Federated learning in vehicular networks: Opportunities and solutions,” IEEE Netw., vol. 35, no. 2, pp. 152–159, Mar. 2021.
- [5] B. Xie, Y. Sun, S. Zhou, Z. Niu, Y. Xu, J. Chen, and D. Gunduz, “MOB-FL: mobility-aware federated learning for intelligent connected vehicles,” in Proc. IEEE Int. Conf. Commun. (ICC), Rome, Italy, May 2023, pp. 3951–3957.
- [6] C. Feng, H. H. Yang, D. Hu, Z. Zhao, T. Q. S. Quek, and G. Min, “Mobility-aware cluster federated learning in hierarchical wireless networks,” IEEE Trans. Wireless Commun., vol. 21, no. 10, pp. 8441–8458, Oct. 2022.
- [7] X. Zhang, Z. Chang, T. Hu, W. Chen, X. Zhang, and G. Min, “Vehicle selection and resource allocation for federated learning-assisted vehicular network,” IEEE Trans. Mobile Comput., vol. 23, no. 5, pp. 3817–3829, May 2024.
- [8] Y. Sun, B. Xie, S. Zhou, and Z. Niu, “MEET: Mobility-Enhanced Edge inTelligence for Smart and Green 6G Networks,” IEEE Commun. Mag., vol. 61, no. 1, pp. 64–70, Oct. 2023.
- [9] T. Chen, J. Yan, Y. Sun, S. Zhou, D. Gündüz, and Z. Niu, “Mobility accelerates learning: Convergence analysis on hierarchical federated learning in vehicular networks,” arXiv preprint arXiv:2401.09656, 2024.
- [10] 3GPP, “Study on evaluation methodology of new Vehicle-to-Everything use cases for LTE and NR,” 3rd Generation Partnership Project (3GPP), Technical Report 3GPP TR 37.885, Sept. 2018.
- [11] M. Harounabadi, D. M. Soleymani, S. Bhadauria, M. Leyh, and E. Roth-Mandutz, “V2X in 3GPP standardization: NR sidelink in release-16 and beyond,” IEEE Commun. Standards Mag., vol. 5, no. 1, pp. 12–21, Mar. 2021.
- [12] Y. Sun, X. Guo, J. Song, S. Zhou, Z. Jiang, X. Liu, and Z. Niu, “Adaptive learning-based task offloading for vehicular edge computing systems,” IEEE Trans. Veh. Technol., vol. 68, no. 4, pp. 3061–3074, Jan. 2019.
- [13] W. Fan, Y. Su, J. Liu, S. Li, W. Huang, F. Wu, and Y. Liu, “Joint task offloading and resource allocation for vehicular edge computing based on v2i and v2v modes,” IEEE Trans. Intell. Transp. Syst., vol. 24, no. 4, pp. 4277–4292, Jan. 2023.
- [14] L. Liu, M. Zhao, M. Yu, M. A. Jan, D. Lan, and A. Taherkordi, “Mobility-aware multi-hop task offloading for autonomous driving in vehicular edge computing and networks,” IEEE Trans. Intell. Transp. Syst., vol. 24, no. 2, pp. 2169–2182, Jan. 2023.
- [15] X. Zhou, M. Bilal, R. Dou, J. J. P. C. Rodrigues, Q. Zhao, J. Dai, and X. Xu, “Edge computation offloading with content caching in 6G-enabled IoV,” IEEE Trans. Intell. Transp. Syst., vol. 25, no. 3, pp. 2733–2747, Mar. 2024.
- [16] H. Wu, B. Wang, H. Ma, X. Zhang, and L. Xing, “Multi-agent federated deep reinforcement learning based collaborative caching strategy for vehicular edge networks,” IEEE Internet Things J., early access, Apr. 2024.
- [17] Y. Jia, R. Mao, Y. Sun, S. Zhou, and Z. Niu, “MASS: Mobility-aware sensor scheduling of cooperative perception for connected automated driving,” IEEE Trans. Veh. Technol., vol. 72, no. 11, pp. 14 962–14 977, Jun. 2023.
- [18] R. Mao, J. Guo, Y. Jia, J. Dong, Y. Sun, S. Zhou, and Z. Niu, “MoRFF: Multi-view object detection for connected autonomous driving under communication and localization limitations,” in Proc. IEEE Veh. Technol. Conf. (VTC), Hong Kong, China, Oct. 2023, pp. 1–7.
- [19] G. Luo, C. Shao, N. Cheng, H. Zhou, H. Zhang, Q. Yuan, and J. Li, “EdgeCooper: Network-aware cooperative lidar perception for enhanced vehicular awareness,” IEEE J. Sel. Areas Commun., vol. 42, no. 1, pp. 207–222, Jan. 2024.
- [20] M. Chen, D. Gündüz, K. Huang, W. Saad, M. Bennis, A. V. Feljan, and H. V. Poor, “Distributed learning in wireless networks: Recent progress and future challenges,” IEEE J. Sel. Areas Commun., vol. 39, no. 12, pp. 3579–3605, Oct. 2021.
- [21] H. H. Yang, Z. Liu, T. Q. Quek, and H. V. Poor, “Scheduling policies for federated learning in wireless networks,” IEEE Trans. Commun., vol. 68, no. 1, pp. 317–333, Sept. 2019.
- [22] J. Ren, Y. He, D. Wen, G. Yu, K. Huang, and D. Guo, “Scheduling for cellular federated edge learning with importance and channel awareness,” IEEE Trans. Wireless Commun., vol. 19, no. 11, pp. 7690–7703, Aug. 2020.
- [23] G. Zhu, Y. Wang, and K. Huang, “Broadband analog aggregation for low-latency federated edge learning,” IEEE Trans. Wireless Commun., vol. 19, no. 1, pp. 491–506, Jan. 2020.
- [24] M. M. Amiri and D. Gündüz, “Federated learning over wireless fading channels,” IEEE Trans. Wireless Commun., vol. 19, no. 5, pp. 3546–3557, May 2020.
- [25] M. Chen, Z. Yang, W. Saad, C. Yin, H. V. Poor, and S. Cui, “A joint learning and communications framework for federated learning over wireless networks,” IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 269–283, Oct. 2020.
- [26] W. Shi, S. Zhou, Z. Niu, M. Jiang, and L. Geng, “Joint device scheduling and resource allocation for latency constrained wireless federated learning,” IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 453–467, Sept. 2020.
- [27] Y. Sun, S. Zhou, Z. Niu, and D. Gündüz, “Dynamic scheduling for over-the-air federated edge learning with energy constraints,” IEEE J. Sel. Areas Commun., vol. 40, no. 1, pp. 227–242, Nov. 2021.
- [28] J. Wangni, J. Wang, J. Liu, and T. Zhang, “Gradient sparsification for communication-efficient distributed optimization,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), Montréal, Canada, Dec. 2018, pp. 1299–1309.
- [29] Y. Du, S. Yang, and K. Huang, “High-dimensional stochastic gradient quantization for communication-efficient edge learning,” IEEE Trans. Signal Process., vol. 68, pp. 2128–2142, Mar. 2020.
- [30] E. Ozfatura, K. Ozfatura, and D. Gündüz, “Time-correlated sparsification for communication-efficient federated learning,” in in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Melbourne, Vic, Australia, Jul. 2021, pp. 461–466.
- [31] N. Shlezinger, M. Chen, Y. C. Eldar, H. V. Poor, and S. Cui, “UVeQFed: universal vector quantization for federated learning,” IEEE Trans. Signal Process., vol. 69, pp. 500–514, Dec. 2021.
- [32] Y. Sun, S. Zhou, Z. Niu, and D. Gündüz, “Time-correlated sparsification for efficient over-the-air model aggregation in wireless federated learning,” in Proc. IEEE Int. Conf. Commun. (ICC), Seoul, South Korea, May 2022, pp. 3388–3393.
- [33] T. Chen, G. Giannakis, T. Sun, and W. Yin, “LAG: lazily aggregated gradient for communication-efficient distributed learning,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), vol. 31, Montréal, Canada, Dec 2018, pp. 5055–5065.
- [34] J. Sun, T. Chen, G. B. Giannakis, Q. Yang, and Z. Yang, “Lazily aggregated quantized gradient innovation for communication-efficient federated learning,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 4, pp. 2031–2044, Apr. 2022.
- [35] E. Ozfatura, S. Rini, and D. Gündüz, “Decentralized SGD with over-the-air computation,” in Proc. IEEE Global Commun. Conf. (GLOBECOM), Taipei, Taiwan, Dec. 2020, pp. 1–6.
- [36] H. Xiao, J. Zhao, Q. Pei, J. Feng, L. Liu, and W. Shi, “Vehicle selection and resource optimization for federated learning in vehicular edge computing,” IEEE Trans. Intell. Transp. Syst., vol. 23, no. 8, pp. 11 073–11 087, Aug. 2022.
- [37] M. F. Pervej, R. Jin, and H. Dai, “Resource constrained vehicular edge federated learning with highly mobile connected vehicles,” IEEE J. Sel. Areas Commun., vol. 41, no. 6, pp. 1825–1844, May 2023.
- [38] X. Zhang, X. Zhou, M. Lin, and J. Sun, “ShuffleNet: An extremely efficient convolutional neural network for mobile devices,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Salt Lake City, UT, USA, Jun. 2018, pp. 6848–6856.
- [39] Q. Zeng, Y. Du, K. Huang, and K. K. Leung, “Energy-efficient resource management for federated edge learning with CPU-GPU heterogeneous computing,” IEEE Trans. Wireless Commun., vol. 20, no. 12, pp. 7947–7962, Dec. 2021.
- [40] J. Laneman and G. Wornell, “Distributed space-time-coded protocols for exploiting cooperative diversity in wireless networks,” IEEE Trans. Inf. Theory, vol. 49, no. 10, pp. 2415–2425, Oct. 2003.
- [41] I. Maric and R. D. Yates, “Bandwidth and power allocation for cooperative strategies in gaussian relay networks,” IEEE Trans. Inf. Theory, vol. 56, no. 4, pp. 1880–1889, Mar. 2010.
- [42] R. Urgaonkar and M. J. Neely, “Delay-limited cooperative communication with reliability constraints in wireless networks,” IEEE Trans. Inf. Theory, vol. 60, no. 3, pp. 1869–1882, Jan. 2014.
- [43] M. J. Neely, Stochastic network optimization with application to communication and queueing systems. San Rafael, CA, USA: Morgan & Claypool, 2010.
- [44] M. J. Neely, “Stochastic network optimization with non-convex utilities and costs,” in Proc. Inf. Theory and Applicat. Workshop (ITA), San Diego, CA, USA, Feb. 2010, pp. 1–10.
- [45] M. Grant and S. Boyd, “CVX: MATLAB Software for Disciplined Convex Programming,” Sept. 2013, [Online]. Available: http://cvxr.com/cvx.
- [46] A. Krizhevsky, V. Nair, and G. Hinton, “Learning multiple layers of features from tiny images,” Tech. Rep., Apr. 2009.
- [47] M.-F. Chang, J. Lambert, P. Sangkloy, J. Singh, S. Bak, A. Hartnett, D. Wang, P. Carr, S. Lucey, D. Ramanan, and J. Hays, “Argoverse: 3d tracking and forecasting with rich maps,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. (CVPR), Long Beach, CA, USA, Jun. 2019, pp. 8740–8749.
- [48] P. A. Lopez, M. Behrisch, L. Bieker-Walz, J. Erdmann, Y.-P. Flötteröd, R. Hilbrich, L. Lücken, J. Rummel, P. Wagner, and E. Wießner, “Microscopic Traffic Simulation using SUMO,” in Proc. IEEE Int. Conf. Intell. Transp. Syst. (ITSC), Maui, HI, USA, Nov. 2018, pp. 2575–2582.
- [49] M. Liang, B. Yang, R. Hu, Y. Chen, R. Liao, S. Feng, and R. Urtasun, “Learning lane graph representations for motion forecasting,” in Proc. European Conf. Comput. Vis. (ECCV), Glasgow, UK, Aug. 2020, pp. 541–556.