
New method enhances scene reconstruction to test autonomous driving models

Dynamic driving scene reconstruction methods, such as DriveDreamer4D [68] and Street Gaussians [58], encounter significant challenges when rendering larger maneuvers (e.g., multi-lane shifts). In contrast, the proposed ReconDreamer significantly improves rendering quality by incrementally integrating world model knowledge. Credit: Ni et al.

Developing vehicles that can operate safely without a human driver has been a key goal of many teams in the AI research community. Since testing autonomous vehicles only on real streets would be both unsafe and infeasible, their underlying algorithms are first trained and tested extensively in simulations.

While simulation platforms used to train models for autonomous driving have improved significantly in recent years, they often have various limitations. There are two types of simulation techniques: open-loop methods, in which outputs (i.e., actions or responses of simulated vehicles) do not affect future input data, and closed-loop methods, in which outputs influence subsequent inputs, producing an adaptive cycle.

Open-loop simulation techniques are generally easier to implement, yet they cannot adapt to changes or mistakes made by the models under test. In contrast, closed-loop methods better reflect dynamic real-world settings and can thus assess a system's performance with greater accuracy, but they are also more computationally demanding and often struggle to render complex maneuvers and novel vehicle trajectories.
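To make the distinction concrete, below is a minimal, self-contained Python sketch of the two evaluation regimes. The Model and Simulator classes are toy stand-ins invented purely for illustration, not the API of any real simulation platform.

```python
# Toy sketch contrasting open-loop and closed-loop evaluation.
# Model and Simulator are hypothetical stand-ins, not a real platform's API.

class Model:
    """A toy driving policy: maps an observation to an action."""
    def predict(self, obs: float) -> float:
        return 0.1 * obs


class Simulator:
    """A toy world whose state is changed by the model's actions."""
    def __init__(self) -> None:
        self.state = 1.0

    def reset(self) -> float:
        self.state = 1.0
        return self.state

    def step(self, action: float) -> float:
        self.state += action   # the action alters the world state...
        return self.state      # ...which becomes the next observation


def open_loop_eval(model: Model, logged_frames: list) -> list:
    # Open loop: replay pre-recorded frames; the model's outputs are
    # scored but never influence what it sees next.
    return [model.predict(frame) for frame in logged_frames]


def closed_loop_eval(model: Model, sim: Simulator, steps: int = 5) -> float:
    # Closed loop: each action is fed back through the simulator, so
    # every output shapes the next input -- an adaptive cycle.
    obs = sim.reset()
    for _ in range(steps):
        obs = sim.step(model.predict(obs))
    return obs


print(open_loop_eval(Model(), [1.0, 2.0, 3.0]))  # outputs never fed back
print(closed_loop_eval(Model(), Simulator()))    # state compounds each step
```

The key difference is the data flow: the open-loop evaluator consumes a fixed log, while the closed-loop evaluator's next observation depends on its previous action.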

Researchers at GigaAI, Peking University, Li Auto Inc. and CASIA recently developed a new method that could enhance driving scene reconstruction in simulations. This method, outlined in a paper posted to the arXiv preprint server, essentially works by incrementally integrating knowledge from autonomous driving world models into a scene.

“Closed-loop simulation is crucial for end-to-end autonomous driving,” wrote Chaojun Ni, Guosheng Zhao and their colleagues in their paper. “Existing sensor simulation methods (e.g., NeRF and 3DGS) reconstruct driving scenes based on conditions that closely mirror training data distributions. However, these methods struggle with rendering novel trajectories, such as lane changes.”

Recent studies on rendering driving scenes for model training and testing have found that integrating knowledge from world models can improve the rendering of new trajectories. While some approaches that integrate world models attained promising results, they often failed to produce accurate representations of particularly complex maneuvers, such as multi-lane shifts.

The main objective of the study by Ni, Zhao and their colleagues was to develop a new method that improves the rendering of these maneuvers. Their proposed solution, ReconDreamer, together with its DriveRestorer component, entails training world models to progressively mitigate artifacts in the rendering of complex driving maneuvers.

“We introduce ReconDreamer, which enhances driving scene reconstruction through incremental integration of world model knowledge,” wrote Ni, Zhao and their colleagues. “Specifically, DriveRestorer is proposed to mitigate artifacts via online restoration. This is complemented by a progressive data update strategy designed to ensure high-quality rendering for more complex maneuvers.”
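In broad strokes, the strategy described in the quote could be sketched as follows. This is a loose Python illustration under stated assumptions: the object and method names (reconstruction, drive_restorer, render_shifted and so on) are hypothetical placeholders, and the actual ReconDreamer pipeline is considerably more involved.

```python
# A loose illustration of online restoration plus progressive data
# updates, as described above. All objects and method names here are
# hypothetical placeholders, not the paper's actual code.

def progressive_reconstruction_update(reconstruction, drive_restorer,
                                      train_frames, shifts_m=(1.0, 2.0, 3.0)):
    for shift in shifts_m:
        # Render novel views along a trajectory shifted sideways by
        # `shift` meters from the recorded one; these renders tend to
        # degrade as the deviation grows.
        novel_views = reconstruction.render_shifted(offset_m=shift)

        # Online restoration: a world-model-based restorer repairs
        # artifacts (ghosting, blur) in the degraded renders.
        restored = [drive_restorer.restore(view) for view in novel_views]

        # Progressive data update: fold the restored frames back into
        # the training set, then fine-tune the reconstruction before
        # attempting an even larger maneuver.
        train_frames.extend(restored)
        reconstruction.finetune(train_frames)

    return reconstruction
```

The progressive schedule matters because renders degrade more the farther the novel trajectory deviates from the recorded one; restoring and retraining at small shifts first gives the reconstruction a better starting point for larger maneuvers.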

The researchers have already carried out various tests to evaluate their method's ability to produce improved renderings of driving scenes. The results were highly promising: ReconDreamer enhanced the quality of renderings of complex maneuvers while also improving the spatiotemporal coherence of elements in a scene.

“To the best of our knowledge, ReconDreamer is the first method to effectively render in large maneuvers,” wrote the researchers. “Experimental results demonstrate that ReconDreamer outperforms Street Gaussians in the NTA-IoU, NTL-IoU, and FID, with relative improvements of 24.87%, 6.72%, and 29.97%. Furthermore, ReconDreamer surpasses DriveDreamer4D with PVG during large maneuver rendering, as verified by a relative improvement of 195.87% in the NTA-IoU metric and a comprehensive user study.”
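For context on the quoted figures, a relative improvement is presumably computed in the standard way, as the change over the baseline score. The numbers below are made up purely to show the arithmetic and are not values from the paper.

```python
def relative_improvement(baseline: float, ours: float) -> float:
    """Percent change of `ours` over `baseline` (for metrics where higher
    is better; FID is lower-is-better, so the roles of the terms flip)."""
    return (ours - baseline) / baseline * 100.0

# Hypothetical scores for illustration: going from 0.20 to 0.59 yields
# roughly the quoted ~195% relative improvement in NTA-IoU.
print(f"{relative_improvement(0.20, 0.59):.2f}%")  # prints "195.00%"
```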

The new driving scene reconstruction method introduced by this team of researchers could soon be used to improve the training and evaluation of computational models for autonomous driving in simulations. In addition, it could inspire the development of similar techniques to enhance the rendering of complex scenes, including scenes that can be used to assess models for robotics and other applications.

More information:
Chaojun Ni et al, ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration, arXiv (2024). DOI: 10.48550/arXiv.2411.19548

Journal information:
arXiv

© 2024 Science X Network

Citation:
New method enhances scene reconstruction to test autonomous driving models (2024, December 11)
retrieved 12 December 2024
from https://techxplore.com/news/2024-12-method-scene-reconstruction-autonomous.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
