Autonomous and Self-Driving Vehicle News: Waymo, May Mobility & Emergency Lights Warning

In autonomous and self-driving vehicle news are Waymo, May Mobility, and a new study on the dangers of emergency vehicle flashing lights.

Waymo Drives LA Freeways & Goes Fully Autonomous for Employees in Atlanta

Freeways are an intrinsic part of the Los Angeles experience. To better serve its expansive 79-square-mile service area, Waymo is beginning to provide its employees with access to fully autonomous rides on LA freeways—a key step toward expanding this capability to all riders.

Auto mode has been activated as Waymo begins its first fully autonomous miles for employees in Atlanta. This milestone builds on years of experience and over 33 million autonomous miles driven across San Francisco, Phoenix, Los Angeles, and Austin. Later this year, public autonomous rides will be available to Atlantans exclusively through Uber.

Deloitte and May Mobility Partner to Enhance Autonomous Vehicle Safety with Data-Driven Insights

Deloitte and May Mobility have formed an alliance to leverage data and analytics to improve autonomous vehicle (AV) safety for municipal and business customers. Deloitte’s 2025 Global Automotive Consumer Study highlights the need for AV companies to prioritize safety. The partnership utilizes May Mobility’s deployment data and Deloitte’s insights platform to help optimize transportation planning and rider safety.

The collaboration was first implemented in Detroit’s Accessibili-D program in 2024, aiding older adults and people with disabilities in accessing essential services. By combining Deloitte’s AI and data expertise with May Mobility’s AV technology, including its Multi-Policy Decision Making (MPDM) system, the partnership aims to enhance AV operations worldwide. The initiative provides tailored insights for communities, fostering more reliable and accessible autonomous mobility solutions.

Flashing Lights and Automated Driving: A Hidden Vulnerability

Carmakers often tout automated driving systems as making driving safer and less stressful by using artificial intelligence (AI) to detect and avoid crashes. However, as Wired reported, new research suggests that under certain conditions these systems may fail at the worst possible moment: when exposed to the flashing lights of emergency vehicles.

A study by researchers from Ben-Gurion University of the Negev and Japanese technology firm Fujitsu Limited found that some camera-based automated driving systems struggle to recognize objects on the road when emergency lights are flashing. The researchers call this effect a “digital epileptic seizure” or “epilepticar.” The AI-based object detection fluctuates in effectiveness, especially in darkness, making it harder to identify vehicles or obstacles. This flaw could lead to crashes near emergency vehicles and even be exploited by bad actors to cause accidents.
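To make the reported effect concrete, here is a minimal sketch of how such a fluctuation could be measured: run an off-the-shelf open-source detector over dashcam footage and log per-frame confidence for the "car" class. The study does not name its detectors or footage; torchvision's Faster R-CNN and the clip filename below are stand-ins chosen purely for illustration.

```python
# Illustrative sketch (not the study's actual pipeline): log per-frame "car"
# detection confidence from an open-source detector over dashcam footage with
# flashing emergency lights, to look for the flicker the researchers describe.
import cv2
import torch
import torchvision
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=weights).eval()
car_label = weights.meta["categories"].index("car")

cap = cv2.VideoCapture("emergency_lights_clip.mp4")  # hypothetical clip name
confidences = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # OpenCV gives BGR uint8; the detector expects an RGB float tensor in [0, 1]
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        detections = model([tensor])[0]
    # Keep the strongest "car" score in this frame (0.0 if nothing detected)
    car_scores = detections["scores"][detections["labels"] == car_label]
    confidences.append(car_scores.max().item() if len(car_scores) else 0.0)
cap.release()

# Large swings between consecutive frames suggest the same vehicle is being
# detected while the lights are off and lost while they flash.
print(confidences)
```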

Despite the alarming findings, the study has limitations. The researchers did not test commercial self-driving systems like Tesla’s Autopilot. Instead, they examined five off-the-shelf dashcams with automated features and processed their footage through four open-source object detectors. The study does not confirm that car manufacturers use these object detection models, meaning commercial systems may already be protected against this issue.

The inspiration for the study came from reports of Teslas crashing into 16 stationary emergency vehicles between 2018 and 2021 while Autopilot was engaged. Ben Nassi, a cybersecurity and machine learning researcher at Ben-Gurion University, suspected that the flashing emergency lights might be a factor in these incidents. He pointed out that ambulances, fire trucks, and police cars come in different shapes and sizes, suggesting that the lighting, rather than the vehicle type, could be the root cause of the problem.

The U.S. National Highway Traffic Safety Administration (NHTSA) conducted a three-year investigation into these Tesla crashes, which ultimately led to a recall of Tesla’s Autopilot software. The agency determined that Autopilot did not adequately ensure that drivers remained attentive and in control of their vehicles. Other systems, like General Motors’ Super Cruise and Ford’s BlueCruise, require driver attention and function only in mapped areas.

In a written statement to WIRED, NHTSA spokesperson Lucia Sanchez acknowledged that emergency flashing lights can affect certain advanced driver assistance systems. However, Tesla, which dismantled its public relations team in 2021, did not respond to inquiries. The manufacturers of the tested camera systems—HP, Pelsee, Azdome, Imagebon, and Rexing—also did not comment.

Despite identifying a potential issue, the researchers emphasized that they do not definitively link the Tesla crashes to emergency lights. Nassi admitted that while their findings highlight a vulnerability, they do not prove it is responsible for the Autopilot-related crashes.

Additionally, the study focused only on image-based object detection. Many automakers incorporate other sensors, such as radar and lidar, which could mitigate the effect of flashing lights. Tesla, however, relies heavily on vision-based systems, believing that AI-powered image recognition will eventually lead to fully autonomous vehicles. Tesla CEO Elon Musk recently claimed that the company’s vision system would enable self-driving cars within the next year.

How an automated system reacts to flashing lights depends on how it is programmed. Some manufacturers may tune their systems to react cautiously, leading to false positives—such as braking unnecessarily when mistaking a harmless object for a hazard. Others might set their systems to react only when highly confident, which could result in failing to recognize a real obstacle, potentially leading to accidents.
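As a rough illustration of that trade-off (hypothetical numbers, not any manufacturer's actual logic), the toy sketch below applies a single confidence threshold to per-frame detection scores for a stopped fire truck whose flashers make the detector's confidence swing between frames.

```python
# Toy illustration of the tuning trade-off: one confidence threshold decides
# whether a detection triggers a braking response.
def should_brake(frame_detections, threshold):
    """Return True if any detection in the frame clears the confidence threshold."""
    return any(score >= threshold for _, score in frame_detections)

# Per-frame detections of a stopped fire truck; the flashing lights make the
# detector's confidence swing between consecutive frames (hypothetical values).
frames = [
    [("truck", 0.92)],
    [("truck", 0.41)],  # lights on: confidence drops
    [("truck", 0.88)],
    [("truck", 0.35)],  # lights on again
]

cautious = [should_brake(f, threshold=0.30) for f in frames]      # reacts every frame
conservative = [should_brake(f, threshold=0.80) for f in frames]  # misses the dim frames

print(cautious)      # [True, True, True, True]
print(conservative)  # [True, False, True, False]
```

The cautious setting would also trigger on harmless objects that happen to score above the low threshold, which is the false-positive side of the same trade-off.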

To address the issue, the researchers developed a software fix called “Caracetamol”—a play on the words “car” and “Paracetamol,” the painkiller. The software is trained to recognize vehicles with flashing emergency lights, improving object detection accuracy.

Earlence Fernandes, a computer science professor at the University of California, San Diego, who was not involved in the study, found the research credible. He compared the effect to how human drivers can be temporarily blinded by emergency flashers, arguing that cameras in advanced driver assistance systems might experience similar issues.

MIT AgeLab researcher Bryan Reimer sees the study as highlighting broader concerns about AI-driven automation in vehicles. He argues that automakers need “repeatable, robust validation” to uncover such vulnerabilities. Reimer worries that some companies are advancing their technology faster than they can properly test it, potentially leaving dangerous blind spots in automated driving systems.

This research underscores the need for automakers to rigorously test their AI systems in real-world scenarios. While automation has the potential to improve road safety, unexpected vulnerabilities—like the effect of emergency lights—must be addressed to ensure these systems can truly be relied upon.