Toyota Research Institute Demos Self-Driving with AI & Live Testing

Toyota Research Institute is showing its latest developments in self-driving to the investor community this week. Toyota is using both on-road testing and artificial intelligence to develop its Chauffeur and Guardian driving safety systems. The technology is being programmed to detect when the driver is drowsy, tired or not paying attention.

Platform 2.1

Since unveiling its Platform 2.0 research vehicle in March 2017, TRI has updated its automated driving technology to version 2.1. The software is being shown for the first time on a closed course.

TRI uses deep learning to enable the automated vehicle system to more accurately understand its surroundings, detect objects and roadways, and better predict a safe driving route. These new architectures are faster, more efficient and more accurate.

In addition to object detection, the models’ prediction capabilities can also provide data about road elements, such as road signs and lane markings, to support the development of maps, which are a key component of automated driving functionality.

Platform 2.1 also expands TRI’s portfolio of suppliers, incorporating a new high-fidelity LIDAR system provided by Luminar. This new LIDAR provides a longer sensing range, a much denser point cloud to better detect positions of three-dimensional objects, and a field of view that is the first to be dynamically configurable, which means that measurement points can be concentrated where sensing is needed most. The new LIDAR is married to the existing sensing system for 360-degree coverage. TRI expects to source additional suppliers as disruptive technology becomes available in the future.
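The idea of a dynamically configurable field of view can be pictured as budgeting a fixed number of measurement points per scan, with extra weight given to a region of interest. The following is a toy illustration only; the function name and weighting scheme are assumptions made for this sketch, not Luminar's actual configuration API.

```python
# Toy illustration of a dynamically configurable field of view: split a
# fixed budget of measurement points between a region of interest (ROI)
# and the remaining scene, so sensing concentrates where it is needed.

def allocate_points(total_points: int, roi_fraction: float, roi_weight: float = 4.0):
    """Return (points inside ROI, points elsewhere) for one scan.

    roi_fraction is the ROI's share of the scene; roi_weight says how
    many times denser the ROI should be sampled than the background.
    """
    roi_share = (roi_weight * roi_fraction) / (
        roi_weight * roi_fraction + (1.0 - roi_fraction)
    )
    roi_points = round(total_points * roi_share)
    return roi_points, total_points - roi_points
```

With a weight of 1.0 the allocation is uniform; raising the weight concentrates points on, say, a pedestrian crossing ahead while still keeping coverage of the rest of the scene.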

On Platform 2.1, TRI created a second vehicle control cockpit on the front passenger side with a fully operational drive-by-wire steering wheel and pedals for acceleration and braking. This setup allows the research team to probe effective methods of transferring vehicle control between the human driver and the autonomous system in a range of challenging scenarios. It also helps with development of machine learning algorithms that can learn from expert human drivers and provide coaching to novice drivers.

Guardian & Chauffeur

TRI has also designed a unified approach to showing the various states of autonomy in the vehicle, using a consistent UI across screens, colored lights and a tonal language that is tied into Guardian and Chauffeur. The institute is also experimenting with increasing a driver’s situational awareness by showing a point cloud representation of everything the car “sees” on the multi-media screen in the center stack.

With its broad-based advances in hardware and software, Platform 2.1 is a research tool for concurrent testing of TRI’s dual approaches to vehicle autonomy – Guardian and Chauffeur – using a single technology stack. Under Guardian, the human driver maintains vehicle control and the automated driving system operates in parallel, monitoring for potential crash situations and intervening to protect vehicle occupants when needed. Chauffeur is Toyota’s version of SAE Level 4/5 autonomy where all vehicle occupants are passengers. Both approaches use the same technology stack of sensors and cameras. This week marks the first time the Guardian and Chauffeur systems have been demonstrated on the same platform, which includes multiple test scenarios to demonstrate TRI’s advances in both applications.

These include the ability of the Guardian system to detect distracted or drowsy driving in certain situations, and to take action if the driver does not react to turns in the road. In such a situation, the system first warns and then will intervene with braking and steering to safely follow the road’s curvature. Chauffeur test scenarios demonstrate the vehicle’s ability to drive itself on a closed course, navigate around road obstacles, and make a safe lane change around an impediment in its path with another vehicle travelling at the same speed in the lane next to it.
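The warn-then-intervene escalation described above can be sketched as a simple two-stage policy. This is a minimal illustration under stated assumptions; the function name, inputs, and thresholds are hypothetical, not TRI's actual Guardian implementation.

```python
# Minimal sketch of a two-stage safety escalation: monitor while the
# driver is handling the road, warn first if not, then intervene with
# braking and steering only if the warning goes unheeded.

def guardian_action(attentive: bool, already_warned: bool, curve_ahead: bool) -> str:
    """Pick the safety system's response for one monitoring cycle."""
    if not curve_ahead or attentive:
        return "monitor"      # no hazard, or driver is reacting to the turn
    if not already_warned:
        return "warn"         # first stage: audible/visual alert
    return "intervene"        # second stage: brake and steer along the curve
```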

Artificial Testing Scenarios

In addition to real-world testing, TRI is using simulation to accurately and safely test engineering assumptions, and investors can experience automated driving test scenarios in a virtual simulator. TRI is also making advancements in robotics and artificial intelligence.

As part of its research into human support robots that can assist with tasks in the home, such as item retrieval, TRI has pioneered new tools to give future robots enhanced, human-like dexterity in order to  grasp and manipulate objects so that they are not dropped or damaged. TRI is also applying computer vision and artificial intelligence to robot development, allowing robots to detect the physical presence of humans and objects, note their locations and retrieve objects for humans when prompted. The robots can detect when objects have been relocated, updating the item’s location in the robot’s database, and even detect faces of known people and differentiate individuals.
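The object-tracking behavior described above, noticing that an item has moved and updating its recorded location, can be sketched as a small database update. All names here are illustrative assumptions for the example, not TRI's robot software.

```python
# Hypothetical sketch of keeping a robot's object database current:
# each sighting records the object's location and reports whether the
# object has been relocated since it was last seen.

locations: dict[str, str] = {}   # object_id -> last observed location

def observe(object_id: str, location: str) -> bool:
    """Record a sighting; return True if the object was relocated."""
    previous = locations.get(object_id)
    locations[object_id] = location
    return previous is not None and previous != location
```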

TRI’s progress in robotics has been made possible by its ability to increase the value and accuracy of simulation to augment physical testing. Since it is impossible to physically test the wide variety of situations robots may encounter in the real world, the institute uses simulated environments, constantly adapting them with data collected in real-world testing for greater precision.

Also, TRI is pursuing new concepts for applying artificial intelligence inside a vehicle cabin to keep occupants comfortable, safe and satisfied. The institute has created a simulator showing an in-car AI agent that can detect a driver’s skeletal pose, head and gaze position and emotion to anticipate needs or potential driving impairments. For example, when the system detects the driver taking a drink and facial expressions which might indicate discomfort, the agent hypothesizes that the driver might be feeling warm and can adjust the air conditioning or roll down the windows. If the agent detects drowsiness, it might provide a verbal prompt in the cabin suggesting that the driver pull over for coffee or navigate the car to a coffee shop.
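The agent's hypothesize-then-act behavior in the example above can be captured as a small set of rules mapping detected cues to actions. The observation labels and action names below are assumptions made for this sketch, not the agent's real interface.

```python
# Illustrative rule-based sketch of the in-cabin agent: map detected
# driver cues (pose, gaze, expression) to a comfort or safety action.

def cabin_agent(observations: set) -> str:
    """Return the agent's action for the current set of detected cues."""
    if {"taking_drink", "discomfort_expression"} <= observations:
        return "adjust_air_conditioning"   # hypothesis: driver feels warm
    if "drowsy" in observations:
        return "suggest_coffee_stop"       # verbal prompt to pull over
    return "no_action"
```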

In addition to its technology demonstrations, Toyota also released a comprehensive overview of its work on automated driving, including the philosophy that guides its approach to the technology, its ongoing research programs, and its near-term product plans.