Safe Kids Need Safe AVs
Self-driving cars, or automated vehicles (AVs), are currently being tested on U.S. roads, yet safety advocates say more needs to be done to address the unique safety needs of children. To help ensure those needs are prioritized, Safe Kids Worldwide today announced a new set of resources from its Children in Automated Vehicles Consortium and is calling on the safety community of advocates, manufacturers, policy makers, engineers, and regulators to collaborate to protect children in AVs.
The Consortium, composed of a diverse group of innovative safety professionals, developed a new toolkit at www.safekids.org/AVs, which includes a model law and advocacy resources to help safety leaders put the following recommendations into action:
- Consider children every time vehicle regulations are amended for automated technologies, including updating the National Highway Traffic Safety Administration’s position papers and rulemaking processes;
- Enact legislation that establishes whether children may travel alone in AVs, with a clear set of qualifying conditions, such as a minimum child age and effective monitoring/communication systems to alert parents or caregivers;
- Enact legislation that clarifies who is responsible for children when there is no human driver;
- Update crash data collection and reporting guidelines and tools to capture child data in AVs;
- Train and educate on-scene first responders and law enforcement on child passengers in AVs.
“This is an exciting time in the development of automated vehicles and those of us in child passenger safety have an unprecedented opportunity to come together to ensure the protection of our most precious cargo: children,” said Torine Creppy, president of Safe Kids Worldwide. “We need safety advocates working together with policy makers, regulators, and engineers now, during the development and testing phase of AVs – to make sure that child passenger safety is not only considered but is one of the top priorities when it comes to developing this new technology.”
Historically, many children have been injured or killed when new vehicle technologies were introduced but were not intentionally designed or regulated for child passengers, such as when front-seat air bags were introduced in the 1990s. Further tragedies were avoided only after a nationwide campaign urged families and drivers to properly restrain all children under 13 in a back seat.
“We can’t let what happened with airbags happen with automated vehicles,” said Joseph Colella, chair of the Consortium’s Education Working Group and Director of Child Passenger Safety for the Juvenile Products Manufacturers Association. “Based on lessons learned, we need to be certain that AV developers, vehicle and car seat manufacturers, regulators and safety advocates are prioritizing child safety while AVs are being developed.”
The past decade of ride-sharing is another example of how policies have lagged behind consumer technologies. In the U.S., many state laws do not clearly identify who is responsible for providing a child restraint or for their proper use in ride-sharing vehicles. Despite the emergence of ride-sharing and ride-sourcing apps, these ambiguities in state laws increase risk, and lead to confusion among the public and challenges to law enforcement.
“It is clear that children need to be considered when we’re designing AVs, when we’re testing AVs, and when we’re coming up with policies for AVs,” said Kristy Brinker Brouwer, chair of the Consortium’s Policy Working Group and Professor of the Practice of Mechanical Engineering at Kettering University. “As laws evolve to allow AVs to be tested on public roads, we have an historic opportunity to learn from our past successes and previous unintended consequences so we can keep our most vulnerable passengers safe when it comes to this emerging form of transportation.”
About the Children in Automated Vehicles Consortium
In 2019, Safe Kids Worldwide convened a Consortium of pioneers to lead the way in a joint effort to protect children under 13 in AVs. This group of innovative professionals includes researchers, vehicle- and child-restraint manufacturers, law enforcement officers, consumer advocacy groups, communications experts, EMS and fire safety professionals, an attorney, and public health organizations working in the U.S., Australia, and Europe.
Kodiak Develops AVs for Dover AFB
Dover Air Force Base (Dover AFB) and Kodiak Robotics, Inc. today announced that Kodiak has been awarded an AFWERX Phase II Small Business Innovation Research (SBIR) contract to develop autonomous vehicles for the Dover AFB flightline.
Flightline vehicles represent a natural fit for autonomous technology, given the structured driving environment and high demand for drivers. Through this SBIR Phase II, Kodiak and Dover AFB will also partner to identify the flightline vehicles best suited to automation. This SBIR contract demonstrates the Air Force’s growing interest in automation technology’s potential to increase efficiency and safety.
The Air Force’s SBIR program, which is administered by AFWERX, is designed to help startups and small businesses adapt their civilian technology for Air Force use. Through this SBIR Phase II contract, Kodiak will partner with Dover AFB to adapt Kodiak’s solution to the needs of flightline vehicles and validate the software in simulation. At the completion of the SBIR project, Kodiak will be eligible for an SBIR Phase III contract that will fund autonomous flightline vehicle deployment.
“Kodiak’s SBIR partnership with Dover AFB shows the flexibility of the Kodiak Driver to adapt to a wide range of deployment environments and customers,” said Don Burnette, Kodiak’s CEO. “We are excited to partner with the 436th Aerial Port Squadron to bring the benefits of autonomous technology to the flightline.”
Navya and REE Partner for Level 4 AVs
Navya, a leading company in autonomous driving systems, and REE Automotive (“REE”), a leader in e-Mobility that recently announced its merger with 10X Capital Venture Acquisition Corp., announced that they have signed an agreement to collaborate on the development of a Level 4 autonomous system combining REEcorner technology and Navya self-driving solutions.
REE is revolutionizing the e-Mobility industry through its highly modular and disruptive REEcorner technology which integrates critical vehicle components (steering, braking, suspension, powertrain and control) into the arch of the wheel. REE’s proprietary x-by-wire technology challenges century-old automotive concepts by being agnostic to vehicle size and design, power-source and driving mode (human or autonomous). Platforms utilizing REEcorners can present significant functional and operational advantages over conventional EV “skateboards” currently available in the market.
Navya is a leading player in Level 4 autonomous driving systems for passenger and goods transport. Since 2015, Navya autonomous mobility solutions have been first to market and first to on-road service in real conditions. The Autonom® Shuttle, the main development platform, is dedicated to passenger transport; more than 180 units had been sold in 23 countries as of December 31, 2020. The Autonom® Tract is designed for goods transport.
The next generation of level 4 autonomous mobility solutions:
Powered by REE and driven by Navya, the co-developed next-generation Level 4 autonomous system will be designed as a state-of-the-art autonomous mobility solution with key competitive advantages in quality, cost, and performance. Safety-first principles will be implemented based on stringent safety requirements aligned with ISO 26262:2018 and ISO/PAS 21448:2019.
Arbe 4D for AutoX
Arbe, a global leader in next-generation 4D Imaging Radar Solutions, today announced that AutoX has chosen its 4D Imaging Radar Platform for their Level 4 autonomous vehicles, RoboTaxis, as well as other autonomous driving projects. Arbe recently revealed plans to go public through a SPAC merger with Industrial Tech Acquisitions, Inc. at an equity value of approximately $723M.
Over the next five years, AutoX is expected to integrate 400,000 Arbe-based ultra-high resolution radar systems in their Level 4 fleet. Multiple radar units will be included as an integral component of the sensor suite for safety application development, AI-based perception algorithms, and sensor fusion.
Veoneer Supplies MBZ EQS
The automotive technology company Veoneer, Inc. is a proud supplier of critical building blocks in the Mercedes EQS, the electric sedan equipped to offer hands-off self-driving technology.
The Mercedes EQS Drive Pilot system is an example of collaborative driving: the car can take control under certain conditions, but the driver needs to be ready to retake the wheel when needed.
The Drive Pilot system contains Veoneer’s 4th-generation stereo vision camera system, comprising fully integrated hardware and perception software to master the challenges of highly automated driving. The system also contains Veoneer’s advanced 77 GHz radars, which use super-pulse modulation techniques for enhanced perception and operate at distances up to 150 meters with high range resolution and superior angular accuracy.
Veoneer’s 4th-generation stereo vision camera system uses Convolutional Neural Network technology for free-space and small-obstacle detection to maneuver safely. The stereo vision camera processes and classifies 3D objects (vehicles, motorbikes, pedestrians, lanes, landmarks, signals, posts, etc.) under a variety of weather conditions. Veoneer’s 77 GHz radar, generation 1.2, has 50% more range in the rear corners to detect motorcycles and over 100% more range in the front corners, compared to its predecessor.
“Veoneer is proud to deliver key active safety technology to the groundbreaking Mercedes EQS. To be a part of the most advanced vehicles in the world is a key part of our development as we continue to build Veoneer’s position as a world leader in active safety, ADAS and autonomous driving technologies,” says Matthias Bieler, Executive Vice President, Business Units Europe.
AImotive aiSim 3.0
AImotive, a global leader in automated driving (AD) technology, announced aiSim 3.0, the next generation of the world’s first ISO 26262-certified simulator for the development and validation of ADAS and AD systems. aiSim 3.0 brings multi-node and multi-client capability together with physics-based sensor simulation, enabling high and measurable correlation between virtual and real-world testing, from large-scale software-in-the-loop testing to real-time environments.
Multi-client support allows several ego vehicles or multi-ECU control systems to be placed in the same virtual environment to test interactions between numerous automated vehicles or system components. The multi-node feature enables the distribution of sensor simulation to multiple computers to simulate physics-based models efficiently, even for the most complex sensor setups during real-time, hardware-in-the-loop testing.
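The multi-node idea described above amounts to a scheduling problem: spreading per-sensor simulation workloads across machines so no single node becomes the bottleneck. The sketch below is a hypothetical illustration of that idea using a simple greedy heuristic; the sensor names and cost figures are invented for the example and are not aiSim’s actual API or configuration format.

```python
# Hypothetical sketch: balancing per-sensor simulation workloads across
# compute nodes, in the spirit of aiSim 3.0's multi-node feature.
# Sensor names and cost estimates are illustrative assumptions.

def assign_sensors(sensor_costs, num_nodes):
    """Greedy longest-processing-time assignment: place the heaviest
    sensors first, each on the currently least-loaded node."""
    loads = [0.0] * num_nodes
    assignment = {n: [] for n in range(num_nodes)}
    for name, cost in sorted(sensor_costs.items(), key=lambda kv: -kv[1]):
        node = min(range(num_nodes), key=lambda n: loads[n])
        assignment[node].append(name)
        loads[node] += cost
    return assignment, loads

# Example: a complex sensor setup spread over two simulation nodes.
sensors = {"lidar_roof": 8.0, "cam_front": 4.0, "cam_rear": 4.0,
           "radar_front": 2.0, "radar_rear": 2.0}
assignment, loads = assign_sensors(sensors, 2)
# total work of 20.0 ends up split evenly, 10.0 per node
```

A real scheduler would also account for GPU memory and inter-node bandwidth, but the greedy split captures why distributing physics-based sensor models keeps even complex setups real-time capable.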
The aiSim 3.0 sensor simulation framework builds on the Khronos Group’s Vulkan API to improve multi-GPU raytracing performance. Vulkan creates a transparent pipeline for the physically based simulation of various sensor modalities (cameras, radars, LiDARs) and efficiently shares tasks between available GPU resources. Since Vulkan is an open standard, aiSim runs on all compatible systems, from the cheapest notebook to the largest cloud cluster, regardless of manufacturer.
Test automation and manual test execution for forensic analysis are equally important during virtual validation. The latest release of aiSim provides an intuitive user experience, enhancing the usability of existing features such as creating scenarios and sensor setups, changing the weather and environment, and analyzing results, all from a single unified interface. Its flexible architecture and open APIs ease its integration into continuous-development and continuous-integration pipelines, with native support for both on-site and in-cloud deployments.
MINIEYE Intros Sensing
Recently at the Shanghai Auto Show, MINIEYE, a self-driving technology developer, launched a full-area sensing solution for passenger cars.
Out-of-cabin sensing covers ADAS functions from L0 to L2+. In-cabin sensing includes functions such as DMS, OMS, in-cabin interaction, and object monitoring.
MINIEYE provides ADAS solutions that meet OEM customers’ needs for autonomous driving functions from L1 to L2+, ranging from a single camera for AEB and ACC to multi-sensor fusion for HWP and TJP, allowing customers to choose sensor configurations based on function and cost.
For in-cabin sensing, MINIEYE exhibited DMS and OMS products. Beyond traditional monitoring, MINIEYE focuses on in-cabin interaction. By tracking the driver’s gaze direction, the system can not only determine whether the driver is fatigued but also interact with other systems in the cabin, for example, lighting up the center screen or reducing its brightness after the driver stares at it for a few seconds.
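The stare-to-activate behavior described above is essentially a dwell-time trigger: an action fires once the gaze has rested on a target region for long enough. The minimal sketch below illustrates the pattern; class name, frame rate, and thresholds are assumptions for illustration, not MINIEYE’s implementation.

```python
# Minimal sketch of a dwell-time gaze trigger (illustrative, not MINIEYE's
# actual algorithm): fire once when gaze stays on a target region for N
# consecutive frames, e.g. to toggle the center screen's brightness.

class GazeDwellTrigger:
    def __init__(self, dwell_frames=60):  # ~2 s at an assumed 30 fps
        self.dwell_frames = dwell_frames
        self.counter = 0

    def update(self, gaze_on_target: bool) -> bool:
        """Feed one frame; returns True on the frame the dwell completes."""
        if gaze_on_target:
            self.counter += 1
            if self.counter == self.dwell_frames:
                return True  # fire exactly once per uninterrupted dwell
        else:
            self.counter = 0  # gaze left the target: restart the dwell
        return False

trigger = GazeDwellTrigger(dwell_frames=3)
events = [trigger.update(f) for f in [True, True, False, True, True, True, True]]
# the glance interrupted by one off-target frame never fires;
# the later uninterrupted 3-frame dwell fires once
```

Resetting the counter on any off-target frame is what separates a deliberate stare from incidental glances; a production system would typically add hysteresis or a short grace period for blinks.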
MINIEYE products are in mass production with commercial-vehicle OEMs such as Dongfeng, Liuzhou Automobile, Heavy Truck, and Shaanxi Automobile, with shipments approaching 100,000 units in Q1 2021. For passenger cars, MINIEYE has been selected by a new-energy vehicle company for its L2+ project; other customers include BYD and JAC. It has won a total of 15 factory-installed projects.
Blickfeld Smart LiDAR
LiDAR manufacturer and perception software provider Blickfeld enables Smart LiDAR functionality on its sensors. This makes Blickfeld sensors the first LiDAR sensors that not only collect detailed 3D data but are also capable of computing and providing enriched information through on-device pre-processing. This pre-processing is an industry first and is accomplished by a high-performance computing chipset integrated into the LiDAR. The first feature introduced by Blickfeld is a pre-processing algorithm that enables motion detection. As a result, Smart LiDARs offer easier, faster, and more cost-effective integration into applications and solutions.
Integration of computing hardware enables on-device data processing
Smart LiDARs are characterized by intelligent pre-processing functions in addition to the 3D measurements. Thanks to these functions, the collected data is converted into insightful information early in the processing pipeline without the need for external computing hardware to perform these steps. Instead, Blickfeld’s sensors have a specialized system-on-chip (SoC) computing chipset that enables the execution of algorithms on the sensors themselves. This allows robust on-device data analysis, enabling fast and straightforward implementation of numerous use cases in areas such as smart city, security, or industry.
Motion detection as the first on-device pre-processing feature
Blickfeld is developing an algorithm library optimized for the chipset that includes a variety of pre-processing features. The first algorithm presented enables dynamic motion detection. Here, the static background within the point cloud is identified and removed so that the device only transmits motion information. Dynamic objects, such as people and vehicles, are detected and highlighted along with their movements. This feature is included with the current firmware on Blickfeld’s Cube 1 series devices and is already being used successfully in customer projects. The acquired data is made available at the open software interface. Besides immediate motion detection, another advantage is the enormous reduction of the data volume, greater than 95% in typical cases.
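The background-removal step described above can be pictured as a voxel filter: occupied voxels seen during an empty-scene calibration form the static background, and each live frame transmits only the points that fall outside it. The sketch below is an illustrative simplification (voxel size, data, and function names are assumptions, not Blickfeld’s on-device algorithm), but it shows why the data volume drops so sharply when the scene is mostly static.

```python
# Illustrative sketch of motion detection via static-background removal on
# a LiDAR point cloud. Voxel size and sample data are assumptions; this is
# not Blickfeld's actual on-device implementation.
VOXEL = 0.5  # voxel edge length in meters (assumed)

def voxel_key(point):
    """Map an (x, y, z) point to its integer voxel coordinates."""
    return tuple(int(c // VOXEL) for c in point)

def learn_background(frames):
    """Union of occupied voxels over empty-scene calibration frames."""
    return {voxel_key(p) for frame in frames for p in frame}

def moving_points(frame, background):
    """Keep only points whose voxel is not part of the static background."""
    return [p for p in frame if voxel_key(p) not in background]

# A static wall learned as background; a "pedestrian" closer to the sensor
# is the only thing that survives filtering.
wall = [(10.0, y / 10, 1.0) for y in range(100)]
background = learn_background([wall])
frame = wall + [(3.0, 0.0, 1.0), (3.1, 0.2, 1.2)]
dynamic = moving_points(frame, background)
reduction = 1 - len(dynamic) / len(frame)  # fraction of data not transmitted
```

With 100 static wall points and 2 dynamic ones, the reduction is about 98%, which is in line with the greater-than-95% figure quoted for typical scenes.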
The next generation of LiDAR sensor technology
Currently, Blickfeld is working on expanding the algorithm library. Additional pre-processing algorithms will be released in the near future to handle further steps of capturing and interpreting the environment. A large number of functions can be flexibly combined, making them adaptable to different applications. This flexibility of the Smart LiDAR, coupled with the ease of use of the sensor and interfaces, supports the goal of making LiDAR technology accessible to all users.
Dr. Florian Petit, Blickfeld founder, explains, “We designed our sensors as Smart LiDARs from the beginning and designed a corresponding chipset. We are very pleased to now be able to make these functions available for our customers. Smart LiDARs represent the next generation of LiDAR sensor technology, and we are excited to lead this evolution in the industry.”
RoboSense & Webasto
During the Shanghai Auto Show, RoboSense, a smart automotive LiDAR system provider, and Webasto, a global top-100 automotive supplier and market leader for roof systems, announced cooperation on an intelligent Roof Sensor Module integrated with RoboSense’s automotive-grade MEMS LiDAR, the RS-LiDAR-M1, for Level 3 to Level 5 autonomous vehicles. The module greatly simplifies the structure and mounting process of conventional self-driving sensor sets while ensuring advanced safety perception.
Through this strategic cooperation with RoboSense, Webasto is committed to seamlessly embedding the RoboSense M1, camera and other sensor systems to the panoramic roof of vehicles to provide safer and more reliable environment perception information at the smallest possible space in the roof. The solution allows different roof systems to be mounted on the vehicle body, creating greater flexibility for the development of autonomous vehicles.
As the world’s market leader, Webasto offers the broadest product range of high-quality roof systems. The rooftop design not only enables the integration of sensor systems but also uses a highly rigid transparent material and an openable-top function, creating a feeling of freedom for the passengers.
The RoboSense M1 boasts 125 ultra-high-resolution scanning beams, an ultra-wide FOV of 120°×25°, and a maximum detection range of 200 m (150 m at 10% reflectivity). At present, the M1 (thickness 45 mm, depth 108 mm, width 110 mm) is the thinnest solid-state LiDAR on the market. Mounted at the highest point of the vehicle, the M1’s unique position in the smart roof module gives it a vantage point that can greatly improve the detection ability of the perception system and ensure safe and stable operation of the autonomous driving system. Moreover, it imposes a smaller fuel-consumption/NEDC-range penalty than other LiDAR products.
RoboSense for Banma Network
RoboSense, the leading smart LiDAR sensor provider, announced a strategic partnership with Banma Network Technology, an intelligent-automobile operating system and solution provider created in cooperation with Alibaba Group and SAIC Motor, and with AutoX, China’s most advanced self-driving AI platform, to build a high-level autonomous driving platform for intelligent vehicles through in-depth cooperation on 3D LiDAR sensors, AI algorithms, and intelligent automobile operation.