In the era of artificial intelligence, marked by breakthroughs in deep learning, autonomous driving has attracted unprecedented attention. Research and experimentation in this field are in full swing both domestically and internationally, making autonomous driving a key direction for future technology. Because traffic scenarios are complex, single-vehicle intelligence places heavy demands on on-board perception and computation. Vehicle-road collaboration, by contrast, adds intelligent roadside perception devices that reduce the computational burden on vehicles, expand the range of target detection, and provide additional safety redundancy for autonomous driving, which is of great significance for the development of future intelligent transportation systems. Images contain rich color and texture information but lack accurate distance information, while point cloud data describe the spatial position of objects effectively but carry comparatively little texture detail. This thesis studies the fusion perception of LiDAR and cameras in roadside units and the implementation of vehicle-road interaction. The main contents are as follows:

The roadside unit adopts a target-level fusion perception scheme. In the target perception and acquisition stage, the camera and LiDAR are first calibrated. In the camera perception channel, GCC-YOLO is proposed on the basis of YOLOv5 to obtain image target information. GCC-YOLO retains more gradient-flow information during feature extraction and introduces attention mechanisms into the feature pyramid, making feature extraction more efficient; at the same time, the GhostNet concept is used to reduce the number of network parameters and improve computational efficiency. In the LiDAR three-dimensional perception channel, CPA-Pillar, an RGB and point cloud fusion encoding scheme, is proposed on the basis of the PointPillars algorithm. The RGB information obtained from the camera channel is used to enrich the point feature encoding of the point cloud data so that each pillar carries richer features, and a channel attention mechanism is introduced into the backbone network to extract data features more effectively.

In the information fusion and interaction stage, the KM (Kuhn-Munkres) algorithm is used to associate and match targets. Matching efficiency is improved by using position estimation to narrow the global search down to a local search, and the fused target attributes are then output through an extended Kalman filter. The Vehicle-to-Infrastructure (V2I) interaction message-layer dataset is encoded with the ASN.1 standard, and the fused perception results are written into the vehicle-road interaction message body. The message format follows the nested logic of "message frame, message body, data frame, data element." Based on the fusion perception results, this work uses vehicle-request and road-sharing messages and verifies experimentally that, after a vehicle sends a Msg_VIR request to the roadside node, the roadside device can share the target information fused from the LiDAR and camera dual-channel perception with the requesting vehicle through a Msg_SSM message, realizing vehicle-road information interaction.
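The GhostNet idea mentioned for GCC-YOLO replaces part of an ordinary convolution with cheap depthwise operations so that fewer parameters produce the same number of output channels. The PyTorch sketch below shows a Ghost-style module in this spirit; the module name and hyperparameters (`ratio`, `dw_size`) are illustrative assumptions, not the exact layers used in GCC-YOLO.

```python
import math
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost-style convolution: a small primary conv plus cheap depthwise
    convs whose outputs are concatenated (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, ratio=2, dw_size=3):
        super().__init__()
        init_ch = math.ceil(out_ch / ratio)        # channels from the primary conv
        cheap_ch = init_ch * (ratio - 1)           # channels from cheap depthwise convs
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, init_ch, 1, bias=False),
            nn.BatchNorm2d(init_ch), nn.ReLU(inplace=True))
        self.cheap = nn.Sequential(
            nn.Conv2d(init_ch, cheap_ch, dw_size, padding=dw_size // 2,
                      groups=init_ch, bias=False),  # depthwise: one group per channel
            nn.BatchNorm2d(cheap_ch), nn.ReLU(inplace=True))
        self.out_ch = out_ch

    def forward(self, x):
        y1 = self.primary(x)
        y2 = self.cheap(y1)
        return torch.cat([y1, y2], dim=1)[:, :self.out_ch]  # trim to requested width
```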
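The CPA-Pillar encoding augments each LiDAR point's features with the color sampled at its image projection before pillarization. A minimal NumPy sketch of that decoration step is given below, assuming a calibrated LiDAR-to-image projection matrix `P`; the function name and feature layout are illustrative, not the thesis's exact implementation.

```python
import numpy as np

def decorate_points_with_rgb(points, image, P):
    """Append RGB sampled from the camera image to each LiDAR point.

    points : (N, 4) array of x, y, z, reflectance in the LiDAR frame
    image  : (H, W, 3) uint8 camera frame
    P      : (3, 4) LiDAR-to-image projection from extrinsic/intrinsic calibration
    Returns an (M, 7) array of x, y, z, r, R, G, B for points inside the image.
    """
    xyz1 = np.concatenate([points[:, :3], np.ones((len(points), 1))], axis=1)
    uvw = xyz1 @ P.T                                  # project into the image plane
    in_front = uvw[:, 2] > 1e-6                       # keep points in front of the camera
    uv = uvw[in_front, :2] / uvw[in_front, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w = image.shape[:2]
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    kept = points[in_front][valid]
    rgb = image[v[valid], u[valid]].astype(np.float32) / 255.0  # normalized color feature
    return np.concatenate([kept, rgb], axis=1)        # richer per-point features before pillar encoding
```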
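Target association in the fusion stage uses the KM algorithm, with position estimation restricting the search to nearby candidates. The sketch below uses SciPy's Hungarian solver (equivalent to KM for this cost matrix) and a simple distance gate as the "local search"; the gate radius and the Euclidean cost are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(pred_xy, det_xy, gate=3.0):
    """Match predicted track positions to new detections.

    pred_xy : (T, 2) predicted target positions (e.g. from the EKF prediction step)
    det_xy  : (D, 2) detected positions in the current fused frame
    gate    : distance (m) beyond which a pair is never matched (local search)
    Returns a list of (track_index, detection_index) pairs.
    """
    if len(pred_xy) == 0 or len(det_xy) == 0:
        return []
    cost = np.linalg.norm(pred_xy[:, None, :] - det_xy[None, :, :], axis=2)
    cost = np.where(cost > gate, 1e6, cost)       # gating: exclude far-away candidates
    rows, cols = linear_sum_assignment(cost)      # Hungarian / KM optimal assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < 1e6]
```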
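The V2I message layer follows the nested "message frame, message body, data frame, data element" structure and is serialized with ASN.1. The toy schema below is not the standard's actual Msg_VIR/Msg_SSM definition; it only illustrates, using the asn1tools package with the UPER codec, how such a nested frame carrying fused targets could be encoded and decoded.

```python
import asn1tools

# Toy schema mirroring the frame -> body -> data-frame -> data-element nesting;
# the real message set (MessageFrame, Msg_SSM, Msg_VIR, ...) is defined by the standard.
SCHEMA = """
RSUShare DEFINITIONS AUTOMATIC TAGS ::= BEGIN
    MessageFrame ::= CHOICE {
        ssmFrame SensorSharingMsg           -- message body
    }
    SensorSharingMsg ::= SEQUENCE {
        msgCnt   INTEGER (0..127),
        targets  SEQUENCE OF DetectedTarget -- data frame
    }
    DetectedTarget ::= SEQUENCE {           -- data elements
        id    INTEGER (0..65535),
        x     INTEGER (-32767..32767),
        y     INTEGER (-32767..32767),
        speed INTEGER (0..8191)
    }
END
"""

spec = asn1tools.compile_string(SCHEMA, "uper")    # unaligned PER for a compact payload
payload = ("ssmFrame", {
    "msgCnt": 1,
    "targets": [{"id": 17, "x": 1250, "y": -340, "speed": 420}],
})
encoded = spec.encode("MessageFrame", payload)     # bytes placed in the shared message
decoded = spec.decode("MessageFrame", encoded)     # receiving vehicle recovers the fused targets
print(len(encoded), decoded)
```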