Extended Metrics for LiDAR?
In a new white paper, artificial perception pioneer AEye proposes extended metrics for evaluating advanced LiDAR system performance. Industry leaders recognize that the conventional metrics of frame rate, angular resolution, and detection range no longer, on their own, adequately measure how effectively a sensor solves the real-world use cases that underlie autonomous driving. In response, AEye has proposed three new extended metrics for LiDAR evaluation: intra-frame object revisit rate, instantaneous enhanced resolution, and object classification range. The white paper describes these capabilities and why they matter in the context of real-world automotive applications.
“Current metrics used for evaluating LiDAR systems designed for autonomous driving often fail to adequately address how a system will perform in real-world conditions,” said AEye co-founder and CEO, Luis Dussan. “These extended metrics are more apropos to measuring advanced LiDAR performance, and are key to evaluating systems that will solve the most challenging use cases.”
First-generation LiDAR sensors passively search a scene and detect objects using scan patterns that are fixed in both time and space, with no ability to enhance performance with a faster revisit or to apply extra resolution to high-interest areas like the road surface or intersections. A new class of advanced solid-state LiDAR sensors enables intelligent information capture that expands the capabilities of LiDAR, moving from passive "search" or detection of objects to active search and, in many cases, to real-time acquisition and classification of object attributes, making perception and path-planning software safer and more effective.
Extended Metric #1: From Frame Rate to Object Revisit Rate
It is universally accepted that a single interrogation point, or shot, does not deliver enough confidence to verify a hazard. Therefore, passive LiDAR systems need multiple interrogations/detects on the same object or position over multiple frames to validate an object. New, intelligent LiDAR systems, such as AEye’s iDAR™, can revisit an object within the same frame. These agile systems can accelerate the revisit rate by allowing for intelligent shot scheduling within a frame, with the ability to interrogate an object or position multiple times within a conventional frame.
In addition, existing LiDAR systems are limited by the physics of fixed laser pulse energy, fixed dwell time, and fixed scan patterns. Next-generation systems such as iDAR are software-definable by perception, path, and motion planning modules, so they can dynamically adjust their data collection approach to best fit their needs. Therefore, object revisit rate, or the time between two shots at the same point or set of points, is a more important and relevant metric than frame rate alone.
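The arithmetic behind this metric can be sketched in a few lines. This is our own illustrative toy model, not AEye's implementation; the frame rate and shot counts are assumed values.

```python
# Hypothetical sketch: revisit interval for a fixed-scan LiDAR (one visit per
# frame) versus an agile scanner that schedules several extra shots at a point
# of interest within the same frame. All numbers are illustrative assumptions,
# not AEye specifications.

FRAME_RATE_HZ = 10       # conventional frame rate (assumed)
SHOTS_PER_FRAME = 4      # intra-frame visits the agile scanner schedules (assumed)

def fixed_revisit_interval_s(frame_rate_hz: float) -> float:
    """A fixed scan pattern revisits a given point once per frame."""
    return 1.0 / frame_rate_hz

def agile_revisit_interval_s(frame_rate_hz: float, shots_per_frame: int) -> float:
    """An agile scanner spreads several shots at the same point across one frame."""
    return 1.0 / (frame_rate_hz * shots_per_frame)

print(fixed_revisit_interval_s(FRAME_RATE_HZ))                   # 0.1 s between visits
print(agile_revisit_interval_s(FRAME_RATE_HZ, SHOTS_PER_FRAME))  # 0.025 s between visits
```

Under these assumed numbers, the agile scanner confirms a hazard four times faster than the frame rate alone would suggest, which is exactly what object revisit rate captures and frame rate hides.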
Extended Metric #2: From Angular Resolution to Instantaneous (Angular) Resolution
The assumption behind the use of resolution as a conventional LiDAR metric is that the entire Field of View will be scanned with a constant pattern and uniform power. However, AEye’s iDAR technology, based on advanced robotic vision paradigms like those utilized in missile defense systems, was developed to break this assumption. Agile LiDAR systems enable a dynamic change in both temporal and spatial sampling density within a region of interest, creating instantaneous resolution. These regions of interest can be fixed at design time, triggered by specific conditions, or dynamically generated at run-time.
“Laser power is a valuable commodity. LiDAR systems need to be able to focus their defined laser power on objects that matter,” said Allan Steinhardt, Chief Scientist at AEye. “Therefore, it is beneficial to measure how much more resolution can be applied on demand to key objects in addition to merely measuring static angular resolution over a fixed pattern. If you are not intelligently scanning, you are either over sampling, or under sampling the majority of a scene, wasting precious power with no gain in information value.”
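The trade-off Steinhardt describes can be made concrete with a simple shot-budget calculation. This is a hypothetical sketch with assumed parameters, not a model of any particular sensor.

```python
# Hypothetical shot-budget sketch: with a fixed number of shots per frame,
# concentrating a share of them in a region of interest (ROI) improves angular
# sampling density there at the cost of the background. Parameters are
# illustrative assumptions, not taken from any specific sensor.

SHOT_BUDGET = 100_000   # shots per frame (assumed)
FOV_DEG = 120.0         # horizontal field of view (assumed)
ROI_DEG = 10.0          # ROI width, e.g. around an intersection (assumed)
ROI_SHARE = 0.5         # fraction of the budget spent inside the ROI (assumed)

def angular_resolution_deg(shots: float, span_deg: float) -> float:
    """Average angular spacing between shots across a span, in degrees."""
    return span_deg / shots

uniform = angular_resolution_deg(SHOT_BUDGET, FOV_DEG)
roi = angular_resolution_deg(SHOT_BUDGET * ROI_SHARE, ROI_DEG)
print(f"uniform scan: {uniform:.5f} deg/shot")   # 0.00120 deg/shot
print(f"inside ROI:   {roi:.5f} deg/shot")       # 0.00020 deg/shot
```

With these assumed numbers, the same shot budget yields six times finer sampling inside the region of interest, which is the "instantaneous resolution" the metric is meant to capture.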
Extended Metric #3: From Detection Range to Classification Range
The traditional metric of detection range may work for simple applications, but for autonomy the more critical performance measurement is classification range. It has generally been assumed that LiDAR manufacturers need not know or care how the domain controller classifies objects or how long that takes, but ignoring classification can ultimately add latency and leave the vehicle vulnerable to dangerous situations. The more classification attributes a sensor can provide, the faster the perception system can confirm and classify. Measuring classification range, in addition to detection range, provides a better assessment of an automotive LiDAR's capabilities, since it eliminates unknowns in the perception stack and pinpoints salient information faster.
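One way to see why classification range is shorter than detection range is a simple geometric argument: detection may need only a single return, while classification needs enough points on the target to resolve its shape. The model below is our own illustration under stated assumptions, not AEye's method; the target width, resolution, and point count are hypothetical.

```python
# Illustrative geometric sketch (our assumption, not AEye's model): with
# angular resolution theta, a target of width w at range R collects roughly
# w / (R * tan(theta)) horizontal points. The range at which N points still
# land on the target is therefore shorter than the single-point detection range.
import math

def points_on_target(width_m: float, range_m: float, res_deg: float) -> float:
    """Approximate number of horizontal shot returns on a flat target."""
    return width_m / (range_m * math.tan(math.radians(res_deg)))

def classification_range_m(width_m: float, res_deg: float, points_needed: int) -> float:
    """Largest range at which the target still returns `points_needed` points."""
    return width_m / (points_needed * math.tan(math.radians(res_deg)))

# Assumed example: a 1.8 m-wide car, 0.1 deg resolution, 10 points to classify.
print(classification_range_m(1.8, 0.1, 10))
```

Under these assumptions the classification range works out to roughly a tenth of the one-point detection range, which is why quoting detection range alone can flatter a sensor's real-world usefulness.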
Unlike first generation LiDAR sensors, AEye’s iDAR is an integrated, responsive perception system that mimics the way the human visual cortex focuses on and evaluates potential driving hazards. Using a distributed architecture and edge processing, iDAR dynamically tracks objects of interest, while always critically assessing general surroundings. Its software-configurable hardware enables vehicle control system software to selectively customize data collection in real-time, while edge processing reduces control loop latency. By combining software-definability, artificial intelligence, and feedback loops, with smart, agile sensors, iDAR is able to capture more intelligent information with less data, faster, for optimal performance and safety.
AEye’s iDAR system is uniquely architected to scale from modular ADAS solutions to fully integrated mobility/robo-taxi implementations. To deliver automotive-grade ADAS solutions at scale, AEye has partnered with top global Tier 1 automotive suppliers such as Hella, LG Electronics, and Aisin to design and manufacture best-in-class ADAS systems for global automakers. In addition, the company is engaged in pilots with more than a dozen undisclosed OEMs and mobility companies.
Conti Automated Platooning
Continental and Knorr-Bremse are taking a further step in their development partnership toward highly automated commercial vehicle driving. For automated platooning (driving in a column), the Hanover-based technology company and the world market leader in braking systems and leading supplier of subsystems for rail and commercial vehicles have joined forces to develop the Platooning Demonstrator, based on a platoon of three trucks of different makes. Initial test runs and customer demonstrations have already been conducted at testing grounds.
With the Platooning Demonstrator, the cooperation partners show which driving functions they can develop jointly with vehicle manufacturers for automated driving: forming a platoon, driving together, the emergency braking function, exiting by individual vehicles, and safely splitting up the entire platoon. During the development work, special attention is being paid to the process for transferring control from the driver to the vehicle. A key element of this is clear instructions on what to do, which the driver receives via the specially designed human/machine interface. It displays the information graphically and clearly, enabling the driver to track the status of the system transparently at all times. The transfer itself is initiated on request by the push of a button as soon as the partner vehicle is less than 50 meters away. The synchronous vehicle-to-vehicle (V2V) emergency braking function ensures greater traffic safety: because all vehicles brake simultaneously, without any delay due to reaction times, they come to a stop the same distance apart as they were while driving.
That now gives customers a test platform for platooning regardless of vehicle make, and a basis on which the technology can be further developed. “With the Platooning Demonstrator, we’ve reached the first milestone of our joint work. The focus now is on exchanging ideas with the vehicle manufacturers for further development of the system solution in line with the product strategies of the customers”, says Gilles Mabire, Head of the Commercial Vehicles & Aftermarket Business Unit at Continental.
Quanergy Partners with Chery
Quanergy Systems, Inc., a leading provider of LiDAR (Light Detection and Ranging) sensors and smart sensing solutions, announced a partnership with Chery Automobile, one of the largest automotive manufacturers in China.
On June 20th, 2019, Chery unveiled the logo of its new brand Chery Lion, and announced its strategic plan of working with selected partners to solve technological challenges across the ecosystem. Quanergy signed on to Chery Lion’s Smart Partner Program as the LiDAR partner, to focus efforts on advancing autonomous driving and smart cities in China.
Tesla AutoPilot Price Increase
Tesla will increase the price of the “full self-driving” version of its Autopilot by around $1,000 starting August 16th, according to CEO Elon Musk.
Read all autonomous vehicle news.
You are welcome to subscribe to receive emails with the latest Autonomous Self-Driving Driverless and Auto-Piloted Car News; you can also get weekly news summaries or midnight express daily news summaries.