Autonomous & Self-Driving Vehicle News: Waymo, Samsung, StradVision, KPIT, dSPACE, Luminar, Velodyne, MIPI Alliance, Seegrid, Valens & Sumitomo

In autonomous and self-driving vehicle news are Waymo, Samsung, StradVision, KPIT, dSPACE, Luminar, Velodyne, MIPI Alliance, Seegrid, Valens and Sumitomo.

Waymo Collision

A Waymo autonomous vehicle, reportedly operating in manual mode, struck a pedestrian in San Francisco.

A Waymo spokesperson stated, “We are aware of this incident involving a Waymo vehicle, which was being driven in manual mode, and are continuing to investigate it in partnership with local authorities. The pedestrian was treated for injuries at the scene and was transported to the hospital in an ambulance. The trust and safety of the communities in which we drive are paramount to us, and we will continue investigating this incident in partnership with local authorities.”

The safety driver has been placed on “administrative leave”.

Samsung Debuts Next Gen Memory Chips for Automotive

Samsung Electronics Co., Ltd., the world leader in advanced memory technology, unveiled an extensive lineup of cutting-edge automotive memory solutions designed for next-generation autonomous electric vehicles. The new lineup includes a 256-gigabyte (GB) PCIe Gen3 NVMe ball grid array (BGA) SSD, 2GB GDDR6 DRAM and 2GB DDR4 DRAM for high-performance infotainment systems, as well as 2GB GDDR6 DRAM and 128GB Universal Flash Storage (UFS) for autonomous driving systems.

Advanced features in infotainment systems such as high-definition maps, video streaming and 3D gaming, together with the growing use of autonomous driving systems, have been driving the demand for high-capacity, high-performance SSDs and graphics DRAM throughout the automotive industry.

In 2017, Samsung was the first in the industry to introduce UFS solutions for automotive applications. Today, the company is well-positioned to provide a total memory solution with the new automotive SSD and GDDR6 DRAM.

Samsung’s 256GB BGA SSD controller and firmware are developed in-house for optimized performance, offering a sequential read speed of 2,100 megabytes per second (MB/s) and a sequential write speed of 300MB/s, which are seven and two times faster than today’s eMMC, respectively. Furthermore, the 2GB GDDR6 DRAM features up to a 14 gigabit-per-second (Gbps) data rate per pin. Such exceptional speeds and bandwidth will support complex processing of various multimedia applications as well as large amounts of autonomous driving data, contributing to a safer, more dynamic and more convenient driving experience.
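
To put those figures in perspective, here is a quick illustrative calculation. The eMMC baselines are back-computed from the quoted “seven and two times faster” claim; the 4GB map size is an assumption for illustration, not a Samsung figure.

```python
# Illustrative arithmetic only: derive the eMMC baseline implied by the
# quoted speedups and estimate load time for a hypothetical 4 GB HD map.
ssd_read_mb_s = 2100    # Samsung 256GB BGA SSD, sequential read (quoted)
ssd_write_mb_s = 300    # sequential write (quoted)

emmc_read_mb_s = ssd_read_mb_s / 7    # "seven times faster" -> ~300 MB/s
emmc_write_mb_s = ssd_write_mb_s / 2  # "two times faster"   -> ~150 MB/s

map_size_mb = 4 * 1024  # assumed 4 GB high-definition map image
print(f"eMMC baseline: ~{emmc_read_mb_s:.0f} MB/s read, ~{emmc_write_mb_s:.0f} MB/s write")
print(f"4 GB map load: SSD {map_size_mb / ssd_read_mb_s:.1f} s vs eMMC {map_size_mb / emmc_read_mb_s:.1f} s")
```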

In addition, Samsung’s new automotive solutions meet the AEC-Q100 qualification — the global automotive reliability standard — allowing them to operate stably in extreme temperatures ranging from -40°C to +105°C, which is an especially crucial requirement for automotive semiconductors.

Autonomous vehicles are deploying a growing number of sensors to continuously monitor their immediate surroundings, and high-speed processing to interpret and predict from this data is becoming critically important for safer driving. By introducing automotive memory solutions previously championed in servers and AI accelerators, Samsung is helping to pave the way for safer autonomous driving.

Having already completed customer evaluations, the new automotive memory products are currently in mass production.

StradVision SVNet Camera Software for LG

StradVision, a pioneer in AI-based vision processing technology for autonomous vehicles and ADAS systems, has announced that it is providing its SVNet camera perception software for LG Electronics’ latest ADAS Front Camera System.

As a software solution provider, StradVision closely cooperated with LG Electronics to support its development of an algorithm implementing various ADAS functions. For the various safety functions delivered by LG Electronics’ ADAS Front Camera System, StradVision offered full customization of Object Detection and Free Space Detection.

StradVision is accelerating the advancement of autonomous vehicles through the SVNet software, which relies on deep learning-based perception algorithms. Compared to competing solutions, SVNet achieves much higher efficiency in memory usage and energy consumption, and can be customized and optimized for any system on a chip (SoC), thanks to its patented and cutting-edge Deep Neural Network. The software also works seamlessly with other sensors such as LiDAR and RADAR to achieve surround vision.
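
As a rough illustration of the kind of camera-LiDAR fusion described above, here is a minimal late-fusion sketch. This is illustrative only, not StradVision’s actual SVNet pipeline; all names and values are hypothetical.

```python
# Minimal late-fusion sketch: attach a LiDAR range estimate to each camera
# detection by taking the median depth of LiDAR points projecting inside
# its bounding box. Illustrative only; not the SVNet implementation.
from statistics import median

def fuse(detections, lidar_points_2d):
    """detections: list of dicts with 'box' = (x1, y1, x2, y2) pixel bounds.
    lidar_points_2d: list of (u, v, range_m) LiDAR points already projected
    into the image plane via the camera calibration."""
    for det in detections:
        x1, y1, x2, y2 = det["box"]
        ranges = [r for (u, v, r) in lidar_points_2d
                  if x1 <= u <= x2 and y1 <= v <= y2]
        det["range_m"] = median(ranges) if ranges else None
    return detections

cars = fuse([{"box": (100, 80, 220, 180)}],
            [(150, 120, 23.4), (160, 130, 24.1), (90, 60, 55.0)])
print(cars)  # -> [{'box': (100, 80, 220, 180), 'range_m': 23.75}]
```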

SVNet is currently used in mass production models of ADAS and autonomous driving vehicles that support safety function Levels 2 to 4.

KPIT, dSPACE & Microsoft Homologation

KPIT Technologies, dSPACE, and Microsoft have teamed up to offer a unique solution for OEMs and Tier-1s seeking homologation (self-certification in the USA) for advanced driver assistance systems and autonomous driving.

Certification for autonomous vehicles requires millions of miles of testing, which can only be achieved through data-driven simulation. This is a fairly new field that requires multiple competencies to manage petabytes of data: domain expertise, software development capabilities, unique tools, and infrastructure. A collaborative approach among experts in infrastructure, autonomous driving, and solution delivery brings efficiency and effectiveness for OEMs, thereby optimizing technology spend. KPIT, dSPACE, and Microsoft combine all of these competencies to provide a one-stop solution for the mobility industry.
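
A back-of-envelope sketch shows why validation data reaches petabyte scale. Every input here is an assumption for illustration, not a figure from the partners.

```python
# Back-of-envelope sketch (all inputs are assumptions, not figures from the
# KPIT/dSPACE/Microsoft announcement): why validation data reaches petabytes.
miles_required = 10_000_000   # assumed validation target, miles
avg_speed_mph = 30            # assumed average test speed
sensor_rate_gb_per_hr = 15    # assumed raw sensor logging rate, GB/hour

hours = miles_required / avg_speed_mph
total_pb = hours * sensor_rate_gb_per_hr / 1_000_000  # GB -> PB
print(f"{hours:,.0f} driving hours -> ~{total_pb:.1f} PB of raw sensor logs")
```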

  • KPIT will leverage decades of experience in working on the development, validation, and integration of applications for autonomous driving for next-generation technology roadmaps. The company will contribute its expertise in software development, integration, and validation to this collaboration. KPIT will also use a suite of virtual simulation and validation tools purpose-built for autonomous driving use cases.
  • dSPACE will contribute tools and solutions for data-driven development, simulation, and validation. The company has been helping its customers improve their validation methods for more than 30 years and offers OEMs and Tier-1s new solutions for the development of applications for autonomous driving. To this end, dSPACE recently launched SIMPHERA, a web-based, highly scalable, cloud solution that lets users perform the computation-intensive validation of functions for autonomous driving quickly and easily. SIMPHERA supports the collaboration of development teams that are distributed worldwide and lets customers seamlessly integrate their applications.
  • Microsoft Azure Core and Services help automakers accelerate their digital transformation by providing global cloud services and computing capabilities uniquely tailored to deliver virtualization of infrastructure and networking for ADAS feature development and validation in a cost-performant, scalable, and repeatable manner.

ESN Reveals IAC Info at CES 2022

Energy Systems Network (ESN), principal organizer of the Indy Autonomous Challenge (IAC), today announced details of the upcoming IAC events at CES 2022 in Las Vegas, Nev., January 3-8, 2022. Making history as the first high-speed, head-to-head autonomous racecar competition, the Autonomous Challenge @ CES will see 19 universities from 8 countries form 9 race teams seeking to compete.

Luminar Sponsors Autonomous Challenge

Luminar (NASDAQ: LAZR), the global leader in automotive lidar hardware and software technology and longtime IAC sponsor, will serve as a premier sponsor and prominent automotive technology partner of the Autonomous Challenge @ CES. The Dallara AV-21 is the most advanced race car ever built and features three Luminar Hydra LiDAR sensors to provide 360-degree long-range sensing, which enables safe autonomy at high speeds.

ROBORACE Selects Velodyne

Velodyne Lidar, Inc. (Nasdaq: VLDR, VLDRW) announced that ROBORACE, the world’s first autonomous car racing series, has selected Velodyne as the official lidar sensor provider for its next-generation vehicles. ROBORACE will use Velodyne’s solid-state Velarray H800 sensors in its electric-powered autonomous race cars for the Season One championship series, which is set to begin in 2022.

The Velodyne and ROBORACE engineering teams have been working collaboratively on the race car development project. The Velarray H800 was the clear choice of the ROBORACE engineering and design team due to Velodyne’s technical prowess, sensor performance and reliability, and trust across the industry.

“At ROBORACE, we are always searching for the best technology to build into our race cars and Velodyne has one of the best products available on the market,” said Chip Pankow, Chief Championship Officer, ROBORACE. “The Velarray will help our cars achieve safe navigation and collision avoidance in competitive autonomous racing.”

ROBORACE was created to accelerate development of autonomous driving systems by pushing the technology to its limits in a range of safe, controlled environments. ROBORACE provides the platform, organization and support while racing teams are responsible for their own code and strategy. Season One of the ROBORACE Championship will feature multi-agent racing and Metaverse elements with each competition designed to provide a variety of challenges. There are commercial and university racing teams in the competition.

“ROBORACE is a proving ground for the next generation of mobility by putting self-driving technologies to the test in performance racing,” said Sinclair Vass, Chief Commercial Officer, Velodyne Lidar. “With its long-range perception and broad field of view, the Velarray H800 is a great fit for ROBORACE’s autonomous race cars. We look forward to continuing to collaborate with ROBORACE in advancing innovation in the sport of racing.”

Velodyne’s Velarray H800 is a solid-state lidar sensor architected for automotive-grade performance. The sensor is built using Velodyne’s breakthrough proprietary micro-lidar array (MLA) architecture. The Velarray H800 allows for outstanding detection of peripheral, near-field and overhead objects while addressing corner cases on sloping and curving roads. It is designed for safe navigation and collision avoidance in advanced driver assistance systems (ADAS) and autonomous mobility applications.
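
For a sense of what such a sensor’s coverage check involves, here is a minimal geometric sketch. The field-of-view and range values are placeholders, not official Velarray H800 specifications.

```python
# Illustrative geometry check (FOV and range values are placeholders, not
# official Velarray H800 specs): is a point within the sensor's view?
import math

def in_fov(x, y, z, h_fov_deg=120.0, v_fov_deg=16.0, max_range_m=200.0):
    """x forward, y left, z up, in metres from the sensor origin."""
    dist = math.sqrt(x*x + y*y + z*z)
    azimuth = math.degrees(math.atan2(y, x))
    elevation = math.degrees(math.atan2(z, math.hypot(x, y)))
    return (dist <= max_range_m
            and abs(azimuth) <= h_fov_deg / 2
            and abs(elevation) <= v_fov_deg / 2)

print(in_fov(80.0, 20.0, 1.0))  # car ahead and slightly left -> True
print(in_fov(5.0, 0.0, 4.0))    # overhead object at close range -> False
```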

MIPI Alliance Updates MIPI M-PHY

The MIPI Alliance, an international organization that develops interface specifications for mobile and mobile-influenced industries, today announced a major update to its MIPI M-PHY physical-layer interface for connecting the latest generation of flash memory-based storage and other high data rate applications in advanced 5G smartphones, wearables, PCs, industrial IoT, and automobiles. Version 5.0 of the M-PHY interface adds a fifth gear, “High Speed Gear 5” (HS-G5), at 23.32 gigabits per second (Gbps), enabling engineers to double the potential data rate per lane compared with the previous specification. M-PHY v5.0 also responds to a range of other ecosystem requirements for connecting flash memory storage, such as ongoing innovation of the JEDEC Universal Flash Storage (UFS) standard.

MIPI M-PHY is a versatile physical layer targeting applications with a particular need for high data rates, low pin counts, lane scalability and power efficiency. Key applications include connecting flash memory storage, cameras and RF subsystems, as well as providing chip-to-chip inter-processor communications (IPC). For JEDEC UFS, M-PHY serves as the physical layer for MIPI UniPro, and together both specifications have been incorporated into multiple versions of UFS over the last decade.

MIPI M-PHY v5.0 is designed to support the forthcoming MIPI UniPro v2.0 and JEDEC UFS releases. In addition to doubling the data rate to a maximum of 23.32 Gbps per lane to satisfy the storage ecosystem’s growing data rate requirements, v5.0 introduces several new capabilities intended to optimize the M-PHY interface:

  • Data rates have been optimized for target applications, simplifying phase-locked loop (PLL) implementation and eliminating design complexity.

  • High-speed startup reduces latency, for example, when accessing flash memory on power up.

  • Eye monitoring visualizes signal health, enhancing debug functionality.

  • New attributes for equalization and other electrical updates to HS-G5 improve the suitability of M-PHY for ultra-high data rate applications.
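
For a sense of the gear progression, here is a small sketch. Only the HS-G5 figure of 23.32 Gbps comes from the announcement; the lower gear rates are back-computed by halving per gear, and the lane count is an assumption.

```python
# Per-lane rate doubling sketch. HS-G5 (23.32 Gbps) is the quoted figure;
# lower gears are derived by halving, and the lane count is assumed.
hs_g5_gbps = 23.32
gears = {f"HS-G{g}": hs_g5_gbps / 2 ** (5 - g) for g in range(1, 6)}
for name, rate in gears.items():
    print(f"{name}: {rate:.2f} Gbps per lane")

lanes = 2  # assumed lane configuration
print(f"Aggregate raw bandwidth at HS-G5 x{lanes} lanes: {hs_g5_gbps * lanes:.2f} Gbps")
```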

Seegrid Corp. Simulation Tools

Seegrid Corporation, the leader in autonomous mobile robots (AMRs) for material handling, and Applied Intuition, a best-in-class provider of simulation and software tools for autonomous vehicle development, today announced an agreement to collaborate on creating 3D virtual warehouses, factories, and distribution centers to accelerate advancements in autonomous technology for the material handling industry. The partnership involves a significant investment by Seegrid and will enable Seegrid to quickly validate product innovations in more environments and use cases across the supply chain than is feasible with manual testing.

Seegrid, recently named #1 for all mobile robots in the United States and #1 in market share worldwide for automated tow tractors, launched several new robot models in 2021, including the company’s newest AMR, Palion Lift, the only autonomous lift truck on the market with industry-leading 3D perception. Applied Intuition is generally considered best-in-class for simulation tools that deliver high-fidelity simulation modeling to comprehensively test and rapidly accelerate autonomous vehicle development and deployment.

The two companies, heavily focused on and recognized for creating safe and efficient autonomy solutions, will also co-develop advanced simulation features and environments to support use cases unique to supply chain mobile automation. The collaboration is a development effort from Seegrid’s Blue Labs, the company’s research and development team dedicated to quickly identifying new automation technologies for their leading global brand customers in logistics, ecommerce, and manufacturing.

Valens & Sumitomo Partner A-PHY

Valens Semiconductor and Sumitomo Electric Industries, Ltd. (TYO: 5802) announced today that they are collaborating in the field of A-PHY technology and deployments. The companies will work together to ensure that Sumitomo Electric’s wiring harness systems meet the channel requirements of the A-PHY specification, while Valens will add the Sumitomo Electric cable assembly and matching on-board connectors as an ordering option for its VA70XX customer evaluation kits. The cooperation will streamline the deployment of MIPI A-PHY technology across the automotive industry.

MIPI A-PHY is the first standardized, asymmetric, long-reach Serializer-Deserializer (SerDes) physical layer interface targeted at advanced driver-assistance systems (ADAS) and autonomous driving systems (ADS). It was released by the MIPI Alliance in September 2020. Soon after, in July 2021, the technology was adopted by the IEEE standardization body as IEEE 2977. A-PHY’s primary mission is to transfer high-speed data between cameras, radars, LiDARs and their related ECUs.
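
A quick back-of-envelope estimate gives a sense of the sensor traffic involved; the camera parameters here are assumptions for illustration, not figures from the announcement.

```python
# Back-of-envelope sketch (camera parameters are assumptions): the raw
# bandwidth of one uncompressed stream, the kind of sensor traffic
# A-PHY is designed to carry from a camera to its ECU.
width, height = 1920, 1080   # assumed camera resolution
bits_per_pixel = 24          # assumed pixel depth
fps = 60                     # assumed frame rate

gbps = width * height * bits_per_pixel * fps / 1e9
print(f"Uncompressed stream: ~{gbps:.2f} Gbps per camera")
```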

Sumitomo Electric will provide differential/coaxial cables that meet the A-PHY specification, and the two companies will work together to expand the variety of cabling options in the future. Valens will feature Sumitomo Electric cables in a live A-PHY demonstration at CES 2022 (Suite 29-139, Venetian Hotel).

Deepen AI Calibrations

Deepen AI, a world leader in computer vision tools for autonomous systems, today announced the launch of Radar and IMU sensor calibrations. Deepen Calibrate is an easy-to-use, web browser-based tool that supports both intrinsic and extrinsic calibrations. Deepen Calibrate cuts the time spent calibrating multi-sensor data from hours to minutes, enabling accurate localization, mapping, sensor fusion, perception, and control. Deepen Calibrate now supports 10 calibration pairs:
  • Radar to Camera (New)
  • IMU to Vehicle (New)
  • IMU Intrinsic (New)
  • Vehicle to Camera
  • LiDAR to Camera
  • LiDAR to Vehicle
  • Stereo Camera
  • Non-overlapping Camera
  • LiDAR to LiDAR
  • Camera Intrinsic
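
To illustrate what an extrinsic LiDAR-to-Camera calibration produces and how it is used downstream, here is a minimal sketch with placeholder matrices; it is not Deepen Calibrate’s implementation.

```python
# Minimal sketch of applying a LiDAR-to-Camera extrinsic calibration
# (placeholder values, not Deepen Calibrate's output): project a LiDAR
# point into image pixels.
import numpy as np

# Assumed calibration outputs: a 4x4 extrinsic (LiDAR -> camera frame)
# and a 3x3 camera intrinsic matrix K (focal lengths and principal
# point here are hypothetical).
T_cam_lidar = np.array([[0., -1.,  0.,  0.1],
                        [0.,  0., -1., -0.2],
                        [1.,  0.,  0.,  0.0],
                        [0.,  0.,  0.,  1.0]])
K = np.array([[1000.,    0., 960.],
              [   0., 1000., 540.],
              [   0.,    0.,   1.]])

p_lidar = np.array([10.0, 1.0, 0.5, 1.0])   # homogeneous LiDAR point (m)
p_cam = T_cam_lidar @ p_lidar                # transform into camera frame
u, v, w = K @ p_cam[:3]                      # pinhole projection
print(f"pixel: ({u / w:.1f}, {v / w:.1f})")  # image coordinates
```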

Deepen AI has also developed a proprietary Loop-based Calibration Optimiser to identify and fix small errors across multiple sensor calibrations, resulting in highly accurate sensor fusion. The Calibration Optimiser works for multiple sensors forming a closed loop. By adding the Calibration Optimiser on top of regular techniques, Deepen AI is able to reduce errors and significantly increase sensor fusion accuracy.
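
The general idea behind loop-based refinement can be sketched as follows (illustrative only, not Deepen AI’s proprietary Optimiser): composing the pairwise extrinsics around a closed loop of sensors should yield the identity transform, and any residual measures calibration error.

```python
# Loop-consistency sketch: the composition of extrinsics around a closed
# sensor loop should be the identity; the residual quantifies error.
import numpy as np

def loop_residual(transforms):
    """transforms: list of 4x4 extrinsics forming a closed loop,
    e.g. [T_cam_lidar, T_lidar_imu, T_imu_cam]."""
    loop = np.eye(4)
    for T in transforms:
        loop = loop @ T
    translation_err = np.linalg.norm(loop[:3, 3])       # metres
    rotation_err = np.linalg.norm(loop[:3, :3] - np.eye(3))
    return translation_err, rotation_err
```

An optimiser in this spirit would adjust the individual extrinsics to drive such residuals toward zero across all closed loops simultaneously.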

Deepen Calibrate makes the critical task of sensor data calibration simple and quick. It manages the complexities of the calibration process, ensuring accuracy and making autonomous systems safer, while turning a job that typically requires a Ph.D.-level engineer into something anyone can do.