Multi-sensor fusion is crucial for accurate and reliable autonomous driving systems. State-of-the-art approaches are based on point-level fusion: augmenting the LiDAR point cloud with camera features. However, the camera-to-LiDAR projection discards the semantic density of camera features, hindering the effectiveness of such approaches, especially for semantic-oriented tasks such as 3D scene segmentation. In this paper, we propose BEVFusion, an efficient and general multi-task multi-sensor fusion framework. It unifies multimodal features in a shared bird's-eye view (BEV) representation space, effectively preserving both geometric and semantic information. To achieve this, we diagnose and eliminate key efficiency bottlenecks in the view transformation through optimized BEV pooling, reducing latency by over 40x. BEVFusion is fundamentally task-agnostic and seamlessly supports diverse 3D perception tasks with minimal architectural changes. It establishes a new state of the art on the nuScenes benchmark, achieving 1.3% higher mAP and NDS for 3D object detection and 13.6% higher mIoU for BEV map segmentation, while reducing computation cost by 1.9x.
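As a rough illustration of the shared-BEV idea, here is a minimal sketch (not the authors' implementation): camera features already lifted into 3D are scattered into a BEV grid, then fused with a LiDAR BEV map by channel concatenation and convolution. The module names, feature shapes, and the naive per-point scatter pooling below are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class NaiveBEVPool(nn.Module):
    """Scatter lifted camera features into a flat BEV grid (mean per cell)."""
    def __init__(self, grid=(128, 128)):
        super().__init__()
        self.grid = grid

    def forward(self, feats, cells):
        # feats: (N, C) camera features lifted into 3D space.
        # cells: (N, 2) long tensor of (row, col) BEV cell indices.
        H, W = self.grid
        C = feats.shape[1]
        flat = cells[:, 0] * W + cells[:, 1]             # flattened cell ids
        summed = torch.zeros(H * W, C).index_add_(0, flat, feats)
        count = torch.zeros(H * W, 1).index_add_(0, flat, torch.ones(len(flat), 1))
        bev = summed / count.clamp(min=1)                # mean over each cell
        return bev.view(H, W, C).permute(2, 0, 1)        # (C, H, W)

class BEVFuse(nn.Module):
    """Concatenate camera-BEV and LiDAR-BEV maps, fuse with a conv block."""
    def __init__(self, c_cam=80, c_lidar=256, c_out=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_cam + c_lidar, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, cam_bev, lidar_bev):
        # Both inputs live on the same (H, W) BEV grid, so fusion reduces
        # to channel-wise concatenation followed by convolution.
        return self.block(torch.cat([cam_bev, lidar_bev], dim=1))

# Toy usage: 10k lifted camera features with random cell assignments.
feats = torch.randn(10_000, 80)
cells = torch.randint(0, 128, (10_000, 2))
cam_bev = NaiveBEVPool()(feats, cells).unsqueeze(0)   # (1, 80, 128, 128)
lidar_bev = torch.randn(1, 256, 128, 128)             # from a LiDAR encoder
fused = BEVFuse()(cam_bev, lidar_bev)                 # (1, 256, 128, 128)
```

The paper's optimized BEV pooling avoids this per-point scatter by precomputing the cell assignment of each camera feature (the camera geometry is fixed) and reducing over contiguous intervals in a dedicated kernel, which is where the reported 40x latency reduction in the view transformation comes from.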
JetCar is a miniature self-driving car based on the Jetson Nano. It can navigate streets on a map and follow painted road markings such as directional arrows and parking text.
A versatile LiDAR SLAM package. It supports various types of LiDAR sensors (mechanical, solid-state, etc.), 6-axis and 9-axis IMUs, loosely and tightly coupled fusion, mapping and localization, loop-closure detection, and more.
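To make the loose-versus-tight coupling distinction concrete, here is a minimal, hypothetical sketch (not code from the package): in loosely coupled fusion, IMU data drives the prediction of a small Kalman filter while a finished LiDAR-odometry pose drives the update; in a tightly coupled system, raw LiDAR residuals would enter the estimator directly instead. The 1-D state, noise values, and synthetic measurements below are all assumptions.

```python
import numpy as np

def predict(x, P, a, dt, q=0.05):
    """Propagate state [position, velocity] with an IMU acceleration sample."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    x = F @ x + np.array([0.5 * dt**2, dt]) * a
    P = F @ P @ F.T + q * np.eye(2)
    return x, P

def update(x, P, z, r=0.1):
    """Correct the position with a LiDAR-odometry position measurement z."""
    H = np.array([[1.0, 0.0]])
    S = (H @ P @ H.T + r).item()          # innovation covariance (scalar)
    K = (P @ H.T) / S                     # Kalman gain, shape (2, 1)
    x = x + K.ravel() * (z - (H @ x).item())
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)             # start at rest, uncertain
for t in range(10):
    x, P = predict(x, P, a=0.2, dt=0.1)   # IMU rate (e.g., 100 Hz)
    if t % 5 == 4:                        # LiDAR poses arrive more slowly
        z = 0.001 * (t + 1) ** 2          # synthetic position fix
        x, P = update(x, P, z)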
openpilot offers features such as adaptive cruise control, automatic lane centering, and driver monitoring. It is available for 275+ models from Toyota, Hyundai, Honda, and many other brands, and it complies with ISO 26262 guidelines.
The 1000W Class D Audio Amplifier Reference Design provides examples of audio amplifiers and push-pull power converters. It runs on the KV1x Tower® Series platform or the K64 Freedom board.