Helm.ai, a leading provider of advanced AI software for high-end ADAS, autonomous driving, and robotics automation, today announced the launch of VidGen-2, its next-generation generative AI model for producing highly realistic driving video sequences. VidGen-2 delivers twice the resolution of its predecessor, VidGen-1, improved realism at 30 frames per second, and multi-camera support with doubled resolution per camera, providing automakers with a scalable and cost-effective solution for autonomous driving development and validation.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20241001089771/en/
Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 leverages Helm.ai’s innovative generative deep neural network (DNN) architectures and Deep Teaching™, an efficient unsupervised training method. It generates highly realistic video sequences at up to 696 x 696 resolution, double that of VidGen-1, at frame rates ranging from 5 to 30 fps. The model also delivers improved video quality at 640 x 384 resolution and 30 fps, producing smoother, more detailed simulations. VidGen-2 can generate videos without an input prompt, or from a single image or input video used as a prompt.
VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 resolution each. The model maintains self-consistency across all camera perspectives, providing accurate simulation for a variety of sensor configurations.
VidGen-2 generates driving scene videos across multiple geographies, camera types, and vehicle perspectives. It not only produces highly realistic appearances and temporally consistent object motion, but also learns and reproduces human-like driving behaviors, simulating the motion of the ego vehicle and surrounding agents in accordance with traffic rules. Generated scenarios span highway and urban driving, multiple vehicle types, pedestrians, cyclists, intersections, turns, weather conditions, and lighting variations. In multi-camera mode, scenes are generated consistently across all perspectives.
VidGen-2 gives automakers a significant scalability advantage over traditional non-AI simulators by enabling rapid asset generation and imbuing agents in simulations with sophisticated, real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also closes the “sim-to-real” gap, offering a highly realistic and efficient solution that broadens the scope of simulation-based training and validation.
“The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies,” said Vladislav Voroninski, Helm.ai’s CEO and founder. “These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks. The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation.”
About Helm.ai
Helm.ai develops next-generation AI software for ADAS, autonomous driving, and robotics automation. Founded in 2016 and headquartered in Redwood City, CA, the company reimagines AI software development to make scalable autonomous driving a reality. Helm.ai offers full-stack real-time AI solutions, including deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching™ and generative AI. The company collaborates with global automakers on production-bound projects. For more information on Helm.ai, including products, SDK, and career opportunities, visit https://helm.ai or follow Helm.ai on LinkedIn.