
BEVFusion: An open source autonomous driving BEV project

An efficient and general open source framework for multi-task multi-sensor fusion

 
Overview
Multi-sensor fusion is crucial for accurate and reliable autonomous driving systems. State-of-the-art approaches are based on point-level fusion: augmenting LiDAR point clouds with camera features.
However, the camera-to-LiDAR projection discards the semantic density of camera features, hindering the effectiveness of such approaches, especially for semantically oriented tasks such as 3D scene segmentation. In this paper, we propose BEVFusion, an efficient and general multi-task multi-sensor fusion framework. This approach unifies multimodal features in a shared bird's-eye view (BEV) representation space, effectively preserving both geometric and semantic information. To achieve this, we diagnose and eliminate key efficiency bottlenecks in view transformation through optimized BEV pooling, reducing latency by over 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports diverse 3D perception tasks with minimal architectural changes. It establishes a new state of the art on the nuScenes benchmark, achieving 1.3% higher mAP and NDS for 3D object detection and 13.6% higher mIoU for BEV map segmentation, all while reducing computational cost by 1.9x.
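At its core, BEV pooling scatters per-point features (lifted from camera frustums or LiDAR points) into a flat BEV grid and reduces all features that land in the same cell. The sketch below is a minimal, unoptimized NumPy illustration of that scatter-reduce step, not the paper's optimized kernel; the function name `bev_pool` and its inputs are hypothetical.

```python
import numpy as np

def bev_pool(feats, coords, grid_h, grid_w):
    """Aggregate per-point features into a BEV grid by summation.

    feats  : (N, C) float array of per-point features (hypothetical input)
    coords : (N, 2) integer array of (row, col) BEV cell indices per point
    Returns a (grid_h, grid_w, C) BEV feature map.
    """
    C = feats.shape[1]
    bev = np.zeros((grid_h * grid_w, C), dtype=feats.dtype)
    flat = coords[:, 0] * grid_w + coords[:, 1]  # linearize each cell index
    np.add.at(bev, flat, feats)  # unbuffered scatter-add: sum features per cell
    return bev.reshape(grid_h, grid_w, C)
```

The paper's speedup comes from replacing this kind of generic scatter with precomputed cell intervals and a specialized GPU kernel, but the input/output contract is the same as shown here.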





Updated: 2025-08-06 17:25:16
