Autonomous & Self-Driving Car News: Waymo, Lyft, SkyWater, MIT, Cruise, Foresight & StradVision

In AV and self-driving car news are GM, Cruise, Waymo, Lyft, Ouster, Postmates, Foresight, SkyWater, MIT, 51VR and StradVision.

 

Slow-Mo for Cruising

General Motors’ self-driving unit, Cruise, is delaying deployment of its self-driving cars past the 2019 target to allow for more testing. In a Medium post, Cruise CEO Dan Ammann stated,

“In order to reach the level of performance and safety validation required to deploy a fully driverless service in San Francisco, we will be significantly increasing our testing and validation miles over the balance of this year, which has the effect of carrying the timing of fully driverless deployment beyond the end of the year.”

“At Cruise, we are taking a different approach. We will deploy with our community, not at our community. Right now we are setting the stage by expanding and deepening partnerships with the city, first responders and other organizations that matter to San Franciscans, such as MADD and the Coalition for Clean Air.”

Lyft Releases Data

Lyft is sharing a large-scale dataset featuring the raw camera and LiDAR sensor inputs as perceived by a fleet of multiple high-end autonomous vehicles in a bounded geographic area. The dataset also includes high-quality, human-labelled 3D bounding boxes of traffic agents and an underlying HD spatial semantic map.

The company wrote in a blog post, “With this, we aim to empower the community, stimulate further development, and share our insights into future opportunities from the perspective of an advanced industrial Autonomous Vehicles program.”
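To give a sense of what such a dataset exposes, here is a minimal, hypothetical Python sketch of how one labelled sample (camera frames, a LiDAR sweep, and 3D bounding boxes referenced against an HD map) might be represented. The class and field names are illustrative assumptions, not Lyft’s actual SDK.

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Box3D:
    """A human-labelled 3D bounding box for a traffic agent."""
    center: np.ndarray   # (x, y, z) position in map coordinates, metres
    size: np.ndarray     # (width, length, height), metres
    yaw: float           # heading around the vertical axis, radians
    label: str           # e.g. "car", "pedestrian", "cyclist"

@dataclass
class Sample:
    """One synchronized snapshot from the autonomous vehicle's sensor suite."""
    camera_images: List[np.ndarray]  # one RGB image array per camera
    lidar_points: np.ndarray         # (N, 4) point cloud: x, y, z, intensity
    boxes: List[Box3D]               # labelled traffic agents in this sample
    map_tile_id: str                 # reference into the HD semantic map

def agents_within(sample: Sample, radius_m: float) -> List[Box3D]:
    """Return labelled agents whose centers lie within radius_m of the ego vehicle."""
    return [b for b in sample.boxes if np.linalg.norm(b.center[:2]) <= radius_m]
```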

Waymo’s PBT

A blog post by Waymo engineers about Population Based Training (PBT) reports:

PBT enabled dramatic improvements in model performance. Our PBT models were able to achieve higher precision by reducing false positives by 24% compared to its hand-tuned equivalent, while maintaining a high recall rate. A chief advantage of evolutionary methods such as PBT is that they can optimize arbitrarily complex metrics. Traditionally, neural nets can only be trained using simple and smooth loss functions, which act as a proxy for what we really care about. PBT enabled us to go beyond the update rule used for training neural nets, and towards the more complex metrics optimizing for features we care about, such as maximizing precision under high recall rates.
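To make the exploit/explore idea concrete, the sketch below shows the core loop of population based training in Python. It is a simplified illustration of the general technique under stated assumptions, not Waymo’s implementation; train_one_step and evaluate are hypothetical stand-ins for a real training and evaluation pipeline.

```python
import copy
import random

def pbt_round(population, train_one_step, evaluate, perturb=0.2, bottom_frac=0.25):
    """One exploit/explore round of population based training (PBT).

    population: list of dicts with keys "model" and "hyperparams" (numeric values).
    train_one_step(model, hyperparams): trains the model for a short interval.
    evaluate(model): returns the metric to maximize -- unlike a loss function,
    it can be an arbitrary, non-differentiable metric such as precision at a
    fixed recall rate.
    """
    # Train every member for a short interval, then score it.
    scored = []
    for member in population:
        train_one_step(member["model"], member["hyperparams"])
        scored.append((evaluate(member["model"]), member))
    scored.sort(key=lambda pair: pair[0], reverse=True)

    n = max(1, int(len(scored) * bottom_frac))
    top, bottom = scored[:n], scored[-n:]

    # Exploit: poorly performing members copy the weights and hyperparameters
    # of a well performing member. Explore: the copied hyperparameters are
    # randomly perturbed so the population keeps searching the space.
    for _, loser in bottom:
        _, winner = random.choice(top)
        loser["model"] = copy.deepcopy(winner["model"])
        loser["hyperparams"] = {
            name: value * random.choice([1 - perturb, 1 + perturb])
            for name, value in winner["hyperparams"].items()
        }
    return [member for _, member in scored]
```

Repeating this round many times lets the population jointly tune weights and hyperparameters against whatever evaluation metric is plugged in, which is the property the Waymo post highlights.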

Ouster LiDAR to Serve Postmates’ Autonomous Bots

Ouster, a leading provider of high-resolution lidar sensors for autonomous vehicles, robotics, and mapping, announced that Postmates, the only company that enables customers to get anything on-demand, has selected the Ouster OS1 lidar sensor for use in its Serve autonomous delivery rover.

The Ouster OS1 lidar delivers an industry-leading combination of performance, low weight, value and reliability that enables Serve to seamlessly and safely navigate sidewalks, detect pedestrians, and interact with the community. Combining Postmates’ patented Socially-Aware-Navigation system with Ouster’s multi-beam flash lidar architecture, Serve brings together thoughtful design and best-in-class technologies to enable a new platform for on-demand commerce.

By using the same complementary metal-oxide-semiconductor (CMOS) technology used in consumer digital cameras and smartphones, the Ouster OS1 provides Serve with both the manufacturing scale necessary for mass deployment and the advanced perception to safely and efficiently navigate urban environments.

Serve autonomous delivery rovers equipped with Ouster OS1 lidar sensors will be deployed at commercial scale first in Los Angeles.

Foresight Sells Additional Prototype

Foresight Autonomous Holdings Ltd., an innovator in automotive vision systems, announced an additional sale of a prototype of its QuadSight™ four-camera vision system targeted for the semi-autonomous and autonomous vehicle market. The prototype system was ordered by the American subsidiary of a leading global Tier One automotive supplier. Revenue from the prototype system sale is expected to total tens of thousands of dollars.

The American supplier participated in a technological roadshow that took place in the Silicon Valley area at the beginning of July, as reported by the Company on July 8, 2019. The roadshow consisted of live, real-time demonstrations of the QuadSight system to vehicle manufacturers and Tier One suppliers. Different scenarios were tested, simulating obstacle detection in challenging weather and lighting conditions. Customer satisfaction following initial installation may lead to additional orders of QuadSight systems by the American supplier, to be integrated into cars of leading vehicle manufacturers.

By selling additional prototypes, Foresight intends to increase awareness of its unique solutions, address potential customers, and expand its presence with vehicle manufacturers and Tier One automotive suppliers. Foresight believes that closer evaluation of the technology by potential customers may lead to future collaborations in research and development, integration, production and other areas.

SkyWater & MIT Update 3DSoC

SkyWater Technology Foundry, the trusted innovation partner for tomorrow’s most advanced technology solutions, and the Massachusetts Institute of Technology (MIT) announced an update at the 2019 ERI Summit on the DARPA-sponsored 3DSoC program, the largest of the ERI programs. The program is led by MIT and supported by Stanford University and SkyWater; after years of development work and a successful concept demonstration at MIT, the team has made progress transferring the carbon nanotube (CNT) field-effect transistor (FET)-based 3DSoC technology into SkyWater’s 200mm production facility. The technology is expected to set a new benchmark for compute performance and energy efficiency and represents a pivotal move toward bringing cutting-edge manufacturing back to the U.S.

The technology supports monolithic integration of stackable tiers of CNT-based logic and RRAM to realize a high-density SoC architecture. To fabricate the 3DSoC, SkyWater is using a 90nm process that is predicted to deliver power/compute performance exceeding that of conventional 2D architectures fabricated with 7nm process flows. Though this demonstration uses 90nm geometry, the technology is compatible with node scaling for further performance gains. While observers note the program’s ambitious objectives and potential challenges, the initiative is tracking to plan as the first year concludes.

To date, the new SkyTech Center has been commissioned within SkyWater; it includes all the capital equipment required to support the program. Unit process steps as well as a complete process flow from MIT have been transferred successfully to SkyWater’s 200mm processing facility, and the program is currently running a wide variety of test chips to continue to jointly develop the technology and demonstrate its potential.

China’s 51VR

China’s first self-driving simulation blue book, the “Annual Research Report on Autonomous Vehicle Simulation in China (2019)”, initiated by 51VR, was released at the 6th International Congress of Intelligent and Connected Vehicles Technology (CICV), the top event in China’s autonomous driving industry. Professor Cheng Bo, Dean of the Suzhou Automotive Research Institute of Tsinghua University, commented that the release of the Bluebook marks a milestone in the rise of original Chinese simulation software.

Compiled from the opinions of many industry experts, the Bluebook is the first comprehensive reference book on the state of automated driving simulation testing in China. It combines cutting-edge research results from academic institutions and leading Chinese companies and covers all areas of automated driving simulation testing, including the significance of simulation testing, method applications, technical solutions, the current state of simulation software, virtual scene databases, demonstration-area testing modes, an introduction to simulation testing standards, and challenges and trends.

StradVision’s Breakthroughs

Accurate vision processing software is critical for developing autonomous vehicles for the mass market, and at the core of this technology is the camera software’s strong deep-learning capability.

Sunny Lee, COO at StradVision, spoke at the Autotech Council Meeting: Sensor Innovation for Transportation and Mobility at the Sensors Expo & Conference in San Jose about the bold advances being made in camera technology for autonomous vehicles.

Lee spoke about the breakthroughs being made via StradVision’s lean SVNet software, which can run on automotive chipsets at significantly lower cost. StradVision aims to be the first deep learning-based software provider fully compliant with Automotive Safety Integrity Level B (ASIL B) for functional safety.

“We are optimizing our sensor fusion technology, utilizing cameras and LiDAR sensors. This will generate much richer data about objects on the road, another critical aspect of successfully bringing fully autonomous vehicles to the consumer,” Lee said. “Another area being pursued is skeleton detection, which will provide necessary data to predict pedestrian behaviors.”

Lee also spoke about StradVision’s growth in China, Japan and Germany, with plans to grow in the U.S.A. and India. StradVision, a pioneer in vision processing technology and deep-learning software, provides the underpinning that allows Advanced Driver Assistance Systems (ADAS) in self-driving vehicles to reach the next level of safety, accuracy and convenience.

StradVision’s SVNet software provides real-time feedback, detects obstacles in blind spots, and alerts drivers to potential accidents. SVNet also prevents collisions by detecting lanes, abrupt lane changes and vehicle speeds, even in poor lighting and weather conditions.

Read all autonomous vehicle news.

SUBSCRIBE

You are welcome to subscribe to receive emails with the latest Autonomous, Self-Driving, Driverless and Auto-Piloted Car News; you can also get weekly news summaries or midnight express daily news summaries.