Accurate and reliable 3D perception is the key remaining bottleneck to making self-driving vehicles safe and ubiquitous. Today, it's relatively easy to get an autonomous car to work 99% of the time, but it's the incredibly long tail of edge cases that prevents real-world deployment without a backup driver constantly watching over the system. All of this comes down to how well the autonomous car can see and understand the world around it, and achieving accurate, safer-than-human 3D perception starts with the LiDAR. That said, both legacy LiDAR solutions and newer upstarts, which largely leverage off-the-shelf components, have struggled to meet the stringent performance requirements needed to solve key edge cases encountered in everyday driving.
Luminar, founded in 2012 by Austin Russell, has taken an entirely new approach to LiDAR, building its system from the ground up at the component level over more than five years. The result is the first and only solution that meets and exceeds all of the key performance requirements demanded by car and truck OEMs and technology leaders to achieve safe autonomy, with unit economics that can enable widespread adoption even across mainstream consumer vehicle platforms. This culminated in last year's release of the company's first scalable product for autonomous test and development fleets, which has led to rapidly accelerating adoption in the market. During this talk, raw Luminar LiDAR data from autonomous test vehicles will be presented, demonstrating real-world examples of life-threatening edge cases and how they can now be avoided.