Autonomous & Self-Driving Vehicle News: Mobileye, Velodyne, Aptiv, Ibeo & WiMi

In autonomous and self-driving news are Mobileye, Velodyne, Aptiv, Ibeo and WiMi.

Ibeo & SICK 3D Solid-State LiDAR

A technology partnership between Hamburg-based automotive LiDAR specialist Ibeo Automotive Systems GmbH and SICK AG has resulted in a 3D solid-state LiDAR sensor for industrial applications. The technology, developed by Ibeo to automotive standards, is based on a new photon laser measurement technique and is entirely free of moving parts. An additional, camera-like reference image adds a ‘fourth dimension’ to the measurement provided by the sensor.

The market for autonomous and semi-autonomous systems in an industrial context is predicted to grow at above-average rates. There is particular demand here for rugged, ever-smaller and, above all, cost-efficient sensor solutions. The new solid-state technology from Ibeo works entirely without moving parts and features a compact form factor, thereby offering decisive advantages for mobile applications.

Now Ibeo and SICK have announced a technology partnership to develop a 3D LiDAR sensor based on this innovative solid-state technology from the automotive sector, for industrial applications. In this partnership, Ibeo is providing its ibeoNEXT measurement core. SICK will develop the system design and the application software for a new industrial LiDAR sensor so that industrial applications can be created to meet customer requirements.

Aptiv Next Gen Level 1-3 Platform

Aptiv PLC, a global technology company enabling the future of mobility, today announced its next-generation Level 1-3 capable ADAS platform.

Aptiv has been leading the development of advanced driver-assistance systems (ADAS) for more than 20 years, from launching the industry’s first radar-based Adaptive Cruise Control system in 1999 to its autonomous driving joint venture, Motional, which will be among the first to put fully driverless vehicles on public roads.

Aptiv’s unique full-stack capabilities are helping customers realize their technology roadmaps and democratize advanced safety systems faster and at a lower cost. Its award-winning first-generation automated driving satellite compute platform has been a game-changer in the industry, leveraging the integration of its Satellite Architecture and active safety software, perception systems and compute. Aptiv’s Satellite Architecture is being deployed by multiple OEMs around the world on more than 10 million vehicles over the next few years.

Building on this trusted foundation, Aptiv’s next-gen ADAS platform will enable new levels of safety, comfort and convenience. Purpose-built for scalability, it cost-effectively spans all vehicle segments by managing the software complexity and supporting features that range from entry-level safety compliance to advanced highway pilot and parking assist. Aptiv’s ADAS platform has the ability to incorporate future technologies and features, including those developed in collaboration with Motional, providing further scalability to higher levels of automation.

“Our next-gen ADAS solution cost-effectively delivers safety features over the lifetime of the vehicle that exceed consumer expectations on a platform upon which OEMs can continue to innovate,” said Kevin Clark, CEO and president. “Our unique position as the only provider of both the brain and the nervous system of the vehicle makes Aptiv the partner of choice for developing software-defined safety solutions that can be democratized as they mature.”

As part of Smart Vehicle Architecture™, Aptiv’s next-gen ADAS platform is fully compatible with emerging zone control architectures, enabling new business models for OEMs through the creation of new features and services that can be updated over-the-air (OTA).

Aptiv’s next-gen ADAS platform also applies an Industry 5.0 approach to safety, ensuring that the driver and the vehicle work together flawlessly. Using the latest generation of up-integrated driver-state sensing and interior sensing solutions – augmented by scalable software – Aptiv’s platform not only verifies if the driver’s eyes are on the road, it also recognizes and responds to body positioning, gestures and eye movement to provide a higher level of safety.

The next-gen ADAS platform continues the acceleration of software-defined vehicles through Aptiv’s scalable full-stack features and offerings, including:

  • Proven Software Stack: Utilizes differentiated and modularized software at every level of the stack on an open, centralized compute platform that allows for the creation of new features and services.
  • Next-Generation Sensor Suite: The next-gen platform utilizes the industry’s best-in-class interior and exterior sensing capabilities, including radar, vision and LiDAR. Among these sensors are Aptiv’s sixth-generation corner/side and forward-facing radars, as well as Aptiv’s first 4D imaging radar, which provides twice the detection range of what is available on the market today. Aptiv’s Interior Sensing Platform includes radar, ultrasonic sensing and cabin cameras, enabling OEMs to develop brand-building user experiences.
  • Advanced Sensor Fusion: Supporting the most advanced features requires a comprehensive and reliable environmental model. The platform’s differentiation comes from Aptiv’s advanced AI and machine-learning algorithms, which fuse 360-degree sensor inputs into a detailed rendering of the environment around the vehicle.
  • Development Tool Chain: Gives OEMs the flexibility to build further innovation on top of Aptiv’s proven solutions, accelerating the development of the safe, green and connected features consumers want on proven automotive-grade systems they can trust.
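Aptiv’s actual fusion algorithms are proprietary, but the idea behind the sensor-fusion item above can be illustrated with a textbook building block: combining independent range estimates from two sensors (say, radar and camera) by inverse-variance weighting, so the more certain sensor dominates the fused estimate. The numbers below are purely hypothetical.

```python
def fuse(measurements):
    """Fuse independent estimates given as (value, variance) pairs by
    inverse-variance weighting -- a basic sensor-fusion building block.
    Returns the fused value and its (reduced) variance."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    fused_value = sum(v * w for (v, _), w in zip(measurements, weights)) / total
    fused_var = 1.0 / total  # fused variance is smaller than either input
    return fused_value, fused_var

# Hypothetical example: radar reports 50.0 m with low variance,
# camera reports 52.0 m with higher variance.
est, est_var = fuse([(50.0, 0.25), (52.0, 1.0)])
```

The fused estimate lands closer to the more reliable radar reading, and its variance is lower than either sensor’s alone, which is the core benefit of fusing redundant sensors.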

Velodyne & UNR White Paper Demonstrates LiDAR-Driven Transportation Efficiency

Velodyne Lidar, Inc. and University of Nevada, Reno’s Nevada Center for Applied Research published a white paper that demonstrates lidar sensors’ ability to make transportation infrastructure more efficient, sustainable and safe. The white paper reports results of research using Velodyne’s lidar sensors to improve traffic analytics, increase pedestrian safety, reduce accidents and work toward facilitated use of autonomous vehicles.

Mobileye Intros LiDAR & RADAR

Mobileye, an Intel Company, previewed the strategy and technology that will enable autonomous vehicles (AV) to fulfill their lifesaving promise globally. During two sessions at this week’s Consumer Electronics Show, Mobileye president and chief executive officer Amnon Shashua will explain how Mobileye is set up to win globally in the AV industry.

“The backing of Intel and the trinity of our approach means that Mobileye can scale in an unprecedented manner,” Shashua said. “From the beginning, every part of our plan aims for rapid geographic and economic scalability – and today’s news shows how our innovations are enabling us to execute on that strategy.”

The Mobileye Trinity

In describing the trinity of the Mobileye approach, Shashua will explain the importance of delivering a sensing solution that is orders of magnitude more capable than human drivers. He will describe how Mobileye’s technology – including Road Experience Management™ (REM™) mapping technology, rules-based Responsibility-Sensitive Safety (RSS) driving policy and two separate, truly redundant sensing subsystems based on world-leading camera, radar and lidar technology – combines to deliver such a solution.
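RSS is a published formal model, and its best-known rule is the minimum safe longitudinal following distance: the rear car is assumed to accelerate for a reaction time, then brake only gently, while the lead car brakes as hard as possible. A minimal sketch of that published formula follows; the parameter values are illustrative assumptions, not Mobileye’s production settings.

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_max_accel=3.0, b_min_brake=4.0,
                                   b_max_brake=8.0):
    """Minimum safe following distance per the published RSS model:
    during reaction time rho the rear car may accelerate at a_max_accel,
    then brakes at only b_min_brake, while the front car brakes at up to
    b_max_brake. Speeds in m/s, accelerations in m/s^2. Parameter values
    here are illustrative, not Mobileye's production settings."""
    v_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_after_rho ** 2 / (2 * b_min_brake)
         - v_front ** 2 / (2 * b_max_brake))
    return max(d, 0.0)  # distance can never be required to be negative

# Both cars traveling at 20 m/s (~72 km/h):
d_follow = rss_safe_longitudinal_distance(20.0, 20.0)
# Rear car stationary behind a fast-moving lead car: no gap required.
d_zero = rss_safe_longitudinal_distance(0.0, 30.0)
```

Because the rule is closed-form and deterministic, an RSS-style policy can be checked and audited independently of the learned perception stack, which is the point of pairing a rules-based policy with the sensing subsystems.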

Mobileye’s approach solves the scale challenge from both a technology and business perspective. Getting the technology down to an affordable cost in line with the market for future AVs is crucial to enabling global proliferation. Mobileye’s solution starts with the inexpensive camera as the primary sensor combined with a secondary, truly redundant sensing system enabling safety-critical performance that is at least three orders of magnitude safer than humans. Using True Redundancy™, Mobileye can validate this level of performance faster and at a lower cost than those who are doing so with a fused system.

New Radar and Lidar Technology

Shashua explained that the company envisions a future with AVs achieving enhanced radio- and light-based detection-and-ranging sensing, which is key to further raising the bar for road safety. Mobileye and Intel are introducing solutions that will innovatively deliver such advanced capabilities in radar and lidar for AVs while optimizing computing- and cost-efficiencies.

As described in Shashua’s “Under the Hood” session, Mobileye’s software-defined imaging radar technology features 2,304 channels, 100 dB dynamic range and a 40 dBc side-lobe level, which together enable the radar to build a sensing state good enough to support an autonomous driving policy. With fully digital, state-of-the-art signal processing, different scanning modes, rich raw detections and multi-frame tracking, Mobileye’s software-defined imaging radar represents a paradigm shift in architecture, enabling a significant leap in performance.
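For readers unfamiliar with the units: dynamic range and side-lobe level are power ratios expressed in decibels (dBc meaning decibels relative to the carrier, i.e. the main beam). A short illustrative conversion shows what the quoted figures mean as linear ratios:

```python
def db_to_power_ratio(db):
    # Decibels express a power ratio: ratio = 10 ** (dB / 10)
    return 10 ** (db / 10)

# 100 dB dynamic range: the strongest return can be 10 billion times
# more powerful than the weakest detectable one.
dynamic_range = db_to_power_ratio(100)

# 40 dBc side-lobe level: side lobes carry 1/10,000th the power of
# the main beam, reducing false detections from off-axis reflectors.
side_lobe = db_to_power_ratio(-40)
```

The larger the dynamic range, the better the radar can detect a faint target (such as a pedestrian) near a strong reflector (such as a truck), which is why these figures matter for the driving-policy claim above.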

Shashua also will explain how Intel’s specialized silicon photonics fab is able to put active and passive laser elements on a silicon chip. “This is really game-changing,” Shashua said of the lidar SoC expected in 2025. “And we call this a photonic integrated circuit, PIC. It has 184 vertical lines, and then those vertical lines are moved through optics. Having fabs that are able to do that, that’s very, very rare. So this gives Intel a significant advantage in building these lidars.”

Worldwide Maps Bring AVs Everywhere

In Monday’s session, Shashua will explain the thinking behind Mobileye’s crowdsourced mapping technology. Mobileye’s unique and unprecedented technology can now map the world automatically with nearly 8 million kilometers tracked daily and nearly 1 billion kilometers completed to date. This mapping process differs from other approaches in its attention to semantic details that are crucial to an AV’s ability to understand and contextualize its environment.

For AVs to realize their life-saving promise, they must proliferate widely and be able to drive almost everywhere. Mobileye’s automated map-making process uses technology deployed on nearly 1 million vehicles already equipped with Mobileye advanced driver-assistance technology.

To demonstrate the scalable benefits of these automatic AV maps, Mobileye will start driving its AVs in four new countries without sending specialized engineers to those new locations. The company will instead send vehicles to local teams that support Mobileye customers. After appropriate training for safety, those vehicles will be able to drive. This approach was used in 2020 to enable AVs to start driving in Munich and Detroit within a few days.

WiMi Patents 3D Holographics

WiMi Hologram Cloud Inc., a leading Hologram AR Technology provider in China, today announced that it has obtained a patent (the “Patent”) for a three-dimensional (“3D”) holographic pulse laser processing device for optical holography use (the “New Device”). The Patent is a result of WiMi’s independent research and development and will allow the Company to further improve its intellectual property protection system. By unleashing the strengths of its proprietary intellectual property resources and enhancing its innovation mechanisms, WiMi continues to enhance its core competitive advantages to fortify its leadership in the development of new technologies.

The Patent is related to the technical field of holographic pulse laser processing devices and, in particular, pulse laser processing devices for 3D optical holography use.

3D optical holography pulse laser processing devices will be widely used in the fields of autonomous driving, medical imaging, unmanned flight, holographic spectrometers, and more. A typical holographic pulse laser processing device currently on the market includes a pulse laser processor shell (the “Shell”) with an opening on one side, a laser device bonded to the bottom inner wall of the Shell, and a display device installed on top of the Shell. On the inner wall of one side of the Shell there is a through hole into which a transparent protective board is bonded, and a focal lens is also bonded to the bottom inner wall of the Shell. To present the holographic data, the laser passes through the focal lens and the transparent protective board to reach the object, and a command is then sent to the actuator through deep-neural-network control.

COMMENT: Let Us Know What You Think