In autonomous and self-driving vehicle news, besides the BMW 7 Series soon driving at Level 3, are Cruise, May Mobility, Polestar, Luminar, Mobileye, AEye, Ghost Autonomy and TIER IV.
May Mobility Raises $105 Million Series D
May Mobility, a leader in the development and deployment of autonomous vehicle (AV) technology, announced the closing of a $105 million Series D round as part of its ongoing fundraising efforts. The all-equity round was led by Japanese telecommunications powerhouse NTT Group and joined by new and existing financial and strategic investors, including Toyota Ventures, Aioi Nissay Dowa Insurance Company, State Farm Ventures®, BMW i Ventures, Cyrus Capital and Trucks Venture Capital. May Mobility’s Series B and C rounds were led by Toyota Motor Corporation and SPARX Group Co., respectively. This latest round brings May Mobility’s total funding to approximately $300 million to date.
May Mobility will use the proceeds to accelerate the advancement and commercialization of its AV technology and services in the United States, Canada and Japan. Additionally, the funding will assist in scaling operations and pave a path for the company to reach profitability.
With this investment, NTT Group has acquired the exclusive rights to distribute May Mobility’s proprietary autonomous vehicle technology throughout Japan. The companies will work with Toyota Motor Corporation to develop an autonomous driving ecosystem, collaborating with local stakeholders to deploy May Mobility-equipped autonomous vehicles across a variety of vehicle platforms, and will incorporate May Mobility’s technology to enhance Japanese transportation networks.
Polestar, Luminar & Mobileye
Polestar is working with Luminar, a leading automotive technology company, and Mobileye, a global leader in autonomous driving solutions, to enhance the safety and future autonomous driving capabilities of Polestar 4 by integrating Luminar’s next-generation LiDAR technology with Mobileye’s Chauffeur platform.
Announced in August, Polestar 4 is planned to be the first production car to feature Mobileye Chauffeur, now with Luminar LiDAR, which builds upon the full-surround camera-based SuperVision platform available in Polestar 4 from launch. Together, the three companies aim to offer eyes-off, point-to-point autonomous driving on highways, as well as eyes-on automated driving for other environments.
With Mobileye Chauffeur, Polestar 4 is set to feature three Mobileye EyeQ6 processors, a front-facing LiDAR from Luminar, and Mobileye’s front-facing imaging radar to provide the extra layer of sensing and artificial intelligence needed to enable eyes-off, hands-off driving.
LiDAR (Light Detection and Ranging) uses laser pulses to build a highly detailed 3D map of the surrounding environment. Luminar’s LiDAR is engineered from the chip level up and operates at a longer wavelength, enabling a high level of performance and safety for production cars. Coupled with Mobileye’s Chauffeur platform, the result is intended to be a turnkey, safer and high-performing automated driving system.
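The core ranging calculation behind time-of-flight LiDAR is simple enough to sketch (an illustration of the general principle only; Luminar’s actual chip-level signal processing is proprietary):

```python
C = 299_792_458  # speed of light in m/s

def tof_range_m(round_trip_s: float) -> float:
    """Estimate range from a lidar echo's round-trip time.

    The laser pulse travels to the target and back, so the
    one-way distance is half the round trip at light speed.
    """
    return C * round_trip_s / 2

# An echo returning after ~1.67 microseconds puts the target
# at roughly 250 meters.
print(round(tof_range_m(1.668e-6)))  # -> 250
```

Repeating this measurement across millions of pulses per second, each steered to a known angle, is what produces the 3D point cloud described above.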
Building on the existing relationship between Luminar and Mobileye, the integration of Luminar LiDAR into Polestar 4 also expands the partnership between Luminar and Polestar which was announced in January 2023.
Thomas Ingenlath, Polestar CEO, says: “Polestar 4 comes with the highly advanced Mobileye SuperVision ADAS from the start, and we look forward to expanding that with Mobileye Chauffeur in the future. Being able to add Luminar’s industry-leading LiDAR to the platform’s development increases the strong link between our companies and brings even more world-class technology to Polestar 4.”
Prof. Amnon Shashua, CEO of Mobileye, says: “Combining our base SuperVision with an independent second redundant perception system – consisting of Luminar LiDAR, radars and an imaging radar – enables true redundancy and a level of accuracy that lays the foundation for fully autonomous driving.”
Austin Russell, founder and CEO of Luminar, says: “After collaborating with Mobileye on a solution since 2019, the true fruits of our labor with them are being realised for the first time by transitioning out of R&D and into a production vehicle with Polestar. Together, we look forward to raising the benchmark in the industry for what a safe and autonomous future can look like.”
AEye 4Sight Flex
AEye, Inc. (NASDAQ: LIDR), a global leader in adaptive, high-performance lidar solutions, announced 4Sight™ Flex – its ultra-compact, high-performance reference design for automotive. 4Sight Flex delivers unparalleled performance in a small, energy-efficient, low-cost form factor, enabling the next wave of L2+, L3, and L4 autonomy and safety features that can be integrated in-cabin.
For more than 150 years, car brands have focused on the look, feel, and efficiency of their vehicles, with design as a key brand differentiator. AEye’s 4Sight Flex optimizes for various integration scenarios, including the windshield and roof, allowing OEMs to deliver maximum safety to customers while preserving the aesthetic appeal of their cars.
“We believe that performance and design both matter and that both are important to driving lidar adoption with OEMs,” said Matt Fisch, CEO of AEye. “Our customers and partners want options that go beyond yesterday’s large antennas and today’s sensors protruding outside the vehicle. With 4Sight Flex, AEye is delivering what the market demands – exceptional lidar performance together with the option for a more integrated design in the OEMs’ location of choice.”
Superior Behind-the-Windshield Performance
AEye’s next-generation 4Sight Flex reference design delivers superior behind-the-windshield performance and is believed to be the only 1550 nanometer (nm) high-performance lidar capable of in-cabin integration. It boasts a 120° horizontal (H) x 30° vertical (V) field of view, with ultra-high resolution of up to 0.05° x 0.05° and long-range detection of up to 275 meters at 10% reflectivity, all at approximately half the size and up to 40% lower power consumption compared to AEye’s first-generation design.
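As a rough back-of-the-envelope check (our arithmetic, not a figure AEye publishes), the quoted field of view and peak resolution imply the following per-frame point budget if the full field were scanned at peak resolution; in practice an adaptive lidar concentrates resolution where it is needed rather than scanning uniformly:

```python
# 4Sight Flex figures from AEye's announcement
h_fov_deg, v_fov_deg = 120.0, 30.0   # field of view (horizontal x vertical)
h_res_deg = v_res_deg = 0.05         # peak angular resolution

# Points per frame if the entire FOV were sampled at peak resolution
points = (h_fov_deg / h_res_deg) * (v_fov_deg / v_res_deg)
print(f"{int(points):,}")  # -> 1,440,000
```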
Low-Cost, Low-Risk Solution
4Sight Flex also offers what AEye believes to be the lowest technical risk solution at one of the lowest volume costs in the industry. As AEye scales production for automotive volumes, it anticipates an additional 10-20% component cost reduction compared to the current generation, making this design very cost-competitive in the industry.
4Sight Flex leverages proven IP and qualified vendors from AEye’s first-generation 4Sight automotive reference design. By maximizing the re-use of automotive qualified components, AEye reduces customer risk while producing significant cost savings as volumes ramp.
AEye’s 4Sight Flex takes advantage of the 4Sight Intelligent Sensing Platform, allowing for highly programmable lidar performance that meets the performance requirements of all driving environments – highway, urban, and suburban. 4Sight Flex can also be reconfigured through software, allowing OEMs to push new capabilities to the lidar sensors via over-the-air updates. 4Sight Flex will be available in 2024.
TIER IV L4 V&V Toolkit
TIER IV, a pioneer in open-source autonomous driving (AD) technology, announces the launch of L4 V&V, a comprehensive evaluation toolkit for verifying and validating Level 4 AD functions. This toolkit includes a scenario set, a dataset, testing tools, and a comprehensive V&V process, all based on TIER IV’s achievement of obtaining Level 4 certification on October 20, 2023.
By offering design, simulation, and on-site testing support for safety evaluations, this toolkit enables partners to support autonomous vehicle manufacturers, local governments, operators, and evaluation institutions. Amid the rapid advancement of AD technology, there is a pressing need for experts and companies to support safety evaluations. With this toolkit, TIER IV will contribute to accelerating the implementation of autonomous driving in society.
“For nearly 40 years, VeriServe has helped more than 1,100 companies improve the quality of products with embedded software and information systems,” said Yoshiyuki Shinbori, Chief Executive Officer of VeriServe Corporation. “By combining TIER IV’s L4 V&V and our verification experience, as well as testing technologies utilizing our know-how, we believe we can support our customers with their autonomous driving vehicle development and evaluation for Level 4 autonomous driving approval.”
“While the government and companies are working together to create standards for Level 4 autonomous driving, there are still issues regarding how to evaluate and verify whether autonomous driving systems meet these standards,” said Minoru Kamata, Executive Director of Japan Automobile Research Institute (JARI). “I believe that launching testing tools and datasets to solve these issues and making them available to third parties is an essential initiative for the implementation of Level 4 autonomous driving in society. Starting with TIER IV’s L4 V&V, I look forward to the improvement and advancement of evaluation and verification of autonomous driving in the future.”
New oToBrite Camera Mods
oToBrite, a prominent provider of Vision-AI ADAS/AD solutions, has unveiled its latest offering in response to surging demand for high-level Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) applications. Enhanced perception is especially important for heavy commercial vehicles: the heavier the vehicle, the longer it takes to stop. To meet this need, oToBrite has introduced automotive-grade 5MP/8MP camera modules. These modules improve the visibility and perception capabilities of ADAS/AD systems and have already been adopted by clients in North America.
oToBrite has been a leading tier-1 player in vision-AI ADAS/AD solutions in the automotive industry, leveraging its full-stack capabilities spanning camera module production technology, edge-computing system design, and vision-AI model development. The company offers a flexible business model and a comprehensive vision-AI technology stack, enabling it to provide system solutions, camera modules, or AI IP licensing to meet diverse customer requirements. Its automotive-grade camera modules have earned the trust of prominent clients and entered the supply chains of car OEMs such as Luxgen, SONY, Toyota and XPENG, with over 1 million automotive-grade camera modules deployed. To learn more about oToBrite’s offerings, visit https://www.otobrite.com/en.
The newly launched 5MP/8MP camera modules from oToBrite feature high-sensitivity CMOS sensors. oToBrite’s 5MP camera module series is equipped with Sony IMX490 Sensor and has multiple viewing angles, including 30°, 60°, 90°, and 120°. The 8MP camera module series employs Sony IMX728 sensors and also offers various viewing angles. Both 5MP and 8MP series are equipped with GMSL2 interfaces and tested with waterproof and dustproof standards of IP67/69K. They can operate within a temperature range of -40°C to +85°C, ensuring the utmost reliability and stability for customers.
oToBrite holds a distinct advantage in camera production technology, with a Class 1K cleanroom factory certified to IATF 16949 and endorsed by several leading car OEMs. Additionally, its in-house-developed 5/6-axis active alignment machines for high-end camera modules can manufacture over 60 SKU variants.
Ghost Autonomy Investment
Ghost Autonomy, a pioneer in scalable autonomy software for consumer cars, announced a $5 million investment from the OpenAI Startup Fund to bring large-scale, multi-modal large language models (MLLMs) to autonomous driving. The funds will be used to accelerate ongoing research and development of LLM-based complex scene understanding required for urban autonomy. The new investment brings the company’s total funding to $220 million to date.
“Multi-modal models have the potential to expand the applicability of LLMs to many new use cases including autonomy and automotive. With the ability to understand and draw conclusions by combining video, images, and sounds, multi-modal models may create a new way to understand scenes and navigate complex or unusual environments,” said Brad Lightcap, OpenAI’s COO and manager of the OpenAI Startup Fund.
MLLMs potentially represent a new architecture for self-driving software, capable of handling the long tail of rare and complex driving scenarios. Where existing single-task networks are limited to their narrow scope and training, LLMs allow autonomous driving systems to reason about driving scenes holistically, utilizing broad-based world knowledge to navigate complex and unusual situations, even those never seen before.
“Solving complex urban driving scenarios in a scalable way has long been the holy grail for this industry – LLMs provide a breakthrough that will finally enable everyday consumer vehicles to reason about and navigate through the toughest scenarios,” stated John Hayes, founder and CEO, Ghost Autonomy. “While LLMs have already proven valuable for offline tasks like data labeling and simulation, we are excited to apply these powerful models directly to the driving task to realize their full potential.”
Ghost’s platform allows leading automakers to bring artificial intelligence and advanced autonomous driving software into the next generation of vehicles, now expanding capabilities and use cases with MLLMs. Ghost is actively testing these capabilities via its development fleet today, and is partnering with automakers to jointly validate and integrate new large models into the autonomy stack.
Cruise to Update Software in Recall
Cruise LLC (Cruise) is recalling a subsystem within its Automated Driving System (ADS). The Collision Detection Subsystem may improperly cause the vehicle to attempt to move to the side of the road after a crash.
Cruise has deployed an over-the-air (OTA) ADS software update in all supervised test fleet vehicles, free of charge. All affected driverless fleet vehicles will also be repaired prior to returning to service. Cruise’s number for this recall is 23-02.
The Cruise ADS is designed to perform a maneuver to minimize safety risks and other disruption to the extent possible after a collision. Within the ADS, the Collision Detection Subsystem is responsible for detecting the collision and electing a post-collision response. In many cases, the AV will pull over out of traffic. In other cases, it will stop and remain stationary. The specific post-collision response depends on the characteristics of the collision, such as the other road actors involved in the incident, the location of impact (e.g., frontal or side), and the perceived severity.
In certain circumstances, a collision may occur, after which the Collision Detection Subsystem may cause the Cruise AV to attempt to pull over out of traffic instead of remaining stationary when a pullover is not the desired post-collision response. This issue could occur after a collision with a pedestrian positioned low on the ground in the path of the AV.
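A heavily simplified sketch of this kind of post-collision response selection (purely illustrative; the field names and rules below are our assumptions, and Cruise’s actual subsystem is far more complex and not public):

```python
from dataclasses import dataclass

@dataclass
class Collision:
    impact_location: str               # e.g. "frontal" or "side" (hypothetical field)
    severity: str                      # e.g. "minor" or "severe" (hypothetical field)
    vulnerable_road_user_nearby: bool  # pedestrian/cyclist possibly in the AV's path

def post_collision_response(c: Collision) -> str:
    """Pick between pulling over and remaining stationary.

    Hypothetical rule: never move the vehicle if a vulnerable road
    user may be in its path, or if the collision is severe.
    """
    if c.vulnerable_road_user_nearby or c.severity == "severe":
        return "remain_stationary"
    return "pull_over"

# A minor side impact with no pedestrian involved -> pull over
print(post_collision_response(Collision("side", "minor", False)))
# A pedestrian possibly in the AV's path -> remain stationary
print(post_collision_response(Collision("frontal", "severe", True)))
```

The failure described above corresponds to misclassifying the collision so that the decision falls into the wrong branch, commanding a pullover when the vehicle should have stayed put.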
This issue played a role in determining the Cruise AV’s response to a collision on October 2, 2023. In the incident, a human-driven vehicle traveling adjacent to a Cruise AV collided with a pedestrian, propelling the pedestrian across that vehicle and onto the ground in the immediate path of the AV. The AV biased rightward and braked aggressively but still made contact with the pedestrian. The Cruise ADS inaccurately characterized the collision as a lateral collision and commanded the AV to attempt to pull over out of traffic, pulling the individual forward, rather than remaining stationary.
This post-collision response could increase the risk of injury. On October 26, 2023, Cruise proactively paused operation of its driverless fleet, giving the company time to further assess and address the underlying risk.
On October 3, 2023, Cruise met with the California Department of Motor Vehicles (“DMV”), National Highway Traffic Safety Administration (“NHTSA”), and other San Francisco officials to discuss the incident, and it provided a briefing to the California Public Utilities Commission (“CPUC”). Cruise also reported the incident to NHTSA that same day, in accordance with NHTSA’s Standing General Order.
In the course of its investigation of the incident, Cruise determined that the ADS attempted to pull over after the collision, rather than remain stationary, as a result of the issue.
Cruise continued its evaluation of the AV’s post-collision response to ascertain whether and under what circumstances the issue could recur by conducting a broad review of historical driving data and running extensive simulation tests to analyze the behavior of the ADS in comparable circumstances. Cruise has developed a software update that remedies the issue described in this notice. With the new update, the Cruise AV would have remained stationary during the October 2 incident.
Cruise has deployed the remedy to its supervised test fleet, which remains in operation. Cruise will deploy the remedy to its driverless fleet prior to resuming driverless operations.