In this Article
Hyundai Delivery Robot
Hyundai Motor Group (the Group) has started two pilot delivery service programs using autonomous robots based on its Plug & Drive (PnD) modular platform at a hotel and a residential-commercial complex on the outskirts of Seoul.
The delivery robot consists of a storage unit integrated on top of a PnD driving unit. Alongside the loading box used to deliver items, a connected screen displays information for customers.
First shown at CES 2022, the Group’s PnD modular platform is an all-in-one single wheel unit that combines intelligent steering, braking, in-wheel electric drive and suspension hardware, including a steering actuator for 360-degree, holonomic rotation. It moves autonomously with the aid of LiDAR and camera sensors. An integrated storage unit allows the robot to transport products to customers.
With autonomous driving capability added, the PnD-based robot can find the optimal route within an area to deliver packages to recipients. It can recognize and avoid both fixed and moving obstacles while driving smoothly, enabling fast delivery times.
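As a rough illustration of the route-finding task described above (a sketch under assumed conditions, not Hyundai's actual algorithm), a grid-based shortest-path search that routes around obstacles might look like:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Shortest obstacle-free path on a 2D grid via breadth-first search.

    grid: list of strings, '.' = free cell, '#' = obstacle.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}       # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk back through parents to reconstruct the path.
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == '.' and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

# A toy floor plan with a wall the robot must route around.
lobby = [
    "....",
    ".##.",
    "....",
]
route = plan_route(lobby, (0, 0), (2, 3))
```

A real delivery robot would plan over a continuous map fused from LiDAR and camera data, but the principle of searching for the shortest collision-free route is the same.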
“PnD-based delivery robots allow quicker delivery times with improved safety through the use of autonomous driving technology, including fast obstacle avoidance capabilities,” said Dong Jin Hyun, Head of Robotics LAB of Hyundai Motor Group. “We plan to keep upgrading mobility services, convenience, safety and affordability for customers through our pilot programs.”
Hyundai Motor Group also unveiled a video of the delivery robot put into service at Rolling Hills Hotel on its official YouTube channel (https://youtu.be/VDsmoGpnqP8).
MIPS Partners with Mobileye
MIPS, a leading developer of highly scalable RISC processor IP, announced it is continuing its partnership with Mobileye to accelerate innovation in autonomous driving technologies and advanced driver-assistance systems (ADAS).
As part of the companies’ long-term relationship, Mobileye has licensed MIPS’ new eVocore P8700 multiprocessors for its latest generation of system-on-chip (SoC) devices for ADAS and autonomous vehicles, the next-generation EyeQ SoCs.
“Mobileye’s highly efficient, scalable and proven EyeQ® SoCs are driving a revolution in driver assistance and autonomous vehicle technologies. The new MIPS eVocore CPUs provide not only the unrivaled combination of performance and efficiency that MIPS is known for, but also the differentiation of an open software development environment,” said Elchanan Rushinek, Executive Vice President of Engineering, Mobileye, whose technology is used by multiple car makers.
MIPS’ new eVocore P8700 multiprocessor IP cores, which offer best-in-class power efficiency for use in SoC applications, are the first MIPS products based on the RISC-V open instruction set architecture (ISA). The P8700 combines a deep pipeline with multi-issue out-of-order (OOO) execution and multi-threading to deliver outstanding computational throughput. Its single-threaded performance exceeds that of other currently available RISC-V CPU IP offerings. The high scalability of the cores makes them well suited for compute-intensive tasks across a broad range of markets and applications, including automotive.
“We believe that RISC-V will become a major solution for the automotive market, providing easy porting process for running 3rd party software on devices such as Mobileye’s next-generation EyeQ SoC”, said Calista Redmond, CEO of RISC-V International.
Mobileye has used MIPS processors in several EyeQ® generations, starting with the EyeQ2 and including the EyeQ6H, EyeQ6L and now the next-generation EyeQ. Support for coherency between CPUs and accelerators has helped meet the growing performance demands of these systems, which have scaled from a single camera to multiple cameras combined with multiple sensors. ADAS technology has become ubiquitous across all car trim levels, while new comfort features are driving ADAS toward L2+/L3/L4 capabilities. Autonomous vehicles are expected to roll out initially in segments such as public transportation and delivery services, where the next-generation EyeQ can help reduce the cost and complexity of such systems by providing an AV-on-a-chip solution. According to a Strategic Market Research report, the ADAS market is expected to grow from $23.4 billion in 2021 to $75.2 billion by 2030.
“MIPS’ unique real-time features, hardware virtualization, functional safety and security technologies can provide clear advantages for companies delivering innovative solutions in areas such as autonomous vehicles. Having autonomous cars based on the RISC-V ISA and Linux open software is a great achievement, as open-source software is more robust and hence provides a safer platform for the car,” said Desi Banatao, MIPS CEO. “Mobileye is a pioneer in advanced driver assistance systems and an innovator in autonomous vehicles, and we are thrilled to continue our long-time partnership in driving the future of next-generation chips for autonomous vehicles.”
Recogni Intros Scorpio 1000 TOPS
Recogni, Inc., the leader in AI-based perception for autonomous vehicles, announced Recogni Scorpio, the world’s first 1000 TOPS (Peta-Op) class inference solution for autonomous mobility. The company’s vision-inference chip enables superhuman object-detection accuracy at distances up to 300m in real time under various road and environmental conditions, and the ability to process multiple streams from ultra-high-resolution, very-high-frame-rate cameras. Microprocessor Report noted that Recogni’s solution “performs far better” than other inference engines in leading SoCs on the market.
“Vision is fundamental to accurate perception processing and essential to autonomous driving platforms,” said RK Anand, Founder and Chief Product Officer at Recogni. “From the beginning we took a unique approach of processing high resolution images at the edge to achieve near-perfect object detection and classification, and enable autonomous driving stacks to make better driving decisions. Scorpio can process multiple 8 megapixel streams at 30 frames per second in less than 10 milliseconds using only 25 watts. That’s performance an order of magnitude greater than anything else on the market and, we believe, will help to accelerate autonomous driving to become a reality.”
Currently being evaluated by several top tier automotive manufacturers and suppliers, Recogni’s solution can achieve 1000 TOPS with less than 10ms of processing delay and below 25 watts of power consumption. This is not only 10-20 times more power efficient than competing solutions, but it enables the flexible design of autonomous driving vehicle stacks and minimizes the impact on driving range. In addition, with such a short processing time of less than 10ms, the Electronic Control Unit (ECU) has more-than-ample time for taking the necessary driving decisions. High compute capacity, efficient processing, low latency, and low power consumption are the pillars of Recogni’s platform.
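As a rough sanity check on the figures quoted above, the sketch below works through the implied pixel rate and efficiency. The number of camera streams is an assumption for illustration; only the 8MP, 30fps, 1000 TOPS and 25W figures come from the article.

```python
# Back-of-envelope check of the quoted figures (stream count is assumed).
megapixels = 8e6          # pixels per frame (8 MP camera)
fps = 30                  # frames per second per stream
streams = 4               # illustrative number of cameras (assumption)
tops = 1000               # quoted peak compute, tera-operations/s
watts = 25                # quoted power budget

pixels_per_second = megapixels * fps * streams   # raw pixel rate
tops_per_watt = tops / watts                     # efficiency figure of merit
frame_budget_ms = 1000 / fps                     # time between frames

print(f"{pixels_per_second / 1e9:.2f} Gpixel/s across {streams} streams")
print(f"{tops_per_watt:.0f} TOPS/W")
print(f"{frame_budget_ms:.1f} ms frame budget vs <10 ms quoted latency")
```

The sub-10ms latency thus leaves most of each 33ms frame interval free, which is the slack the article says gives the ECU "more-than-ample time" for driving decisions.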
“Recogni’s purpose built architecture is a unique approach to AI perception, allowing customers to perceive the environment in high resolution with very low latency and low power – this is a gamechanger for OEMs and suppliers looking to add new, powerful ADAS and self-driving features to new vehicles,” said Marc Bolitho, Chief Executive Officer at Recogni. “Recogni’s unique approach in perception processing is truly first of its kind and enables customers to deploy safe autonomous driving functions as well as extending the range for electric vehicles.”
Microprocessor Report’s Bryon Moyer completed a thorough evaluation of Recogni’s solution and concluded, “As the industry moves to greater resolution, Recogni is well positioned, whereas other SoCs will need an upgrade.” The report also notes that while competing SoCs are likely to receive that upgrade, it is “two to three years behind Recogni’s first chip.”
The report details Recogni’s product superiority in terms of supporting high-resolution cameras at high frame rates, support for red/clear/clear/blue (RCCB) image sensors, the approach of minimizing preprocessing before inference, compression algorithms to reduce the required on-chip memory and the product’s self-managing capabilities.
Moyer recognized that “no other stand-alone automotive AI accelerator uses high-resolution cameras and processes each full frame (rather than targeting regions of interest). More common are SoCs that handle inference as one of those many functions.” This is one of the biggest differentiators and advantages of the Recogni solution.
Renesas & Fixstars Partner for AD Systems
Renesas Electronics Corporation (TSE:6723), a premier supplier of advanced semiconductor solutions, and Fixstars Corporation, a global leader in multi-core CPU/GPU/FPGA acceleration technology, announced the joint development of a suite of tools for optimizing and rapidly simulating software for autonomous driving (AD) systems and advanced driver-assistance systems (ADAS), designed specifically for Renesas’ R-Car system-on-chip (SoC) devices. These tools make it possible to rapidly develop network models with highly accurate object recognition from the initial stage of software development, taking full advantage of the performance of the R-Car. This reduces post-development rework and thereby helps shorten development cycles.
“Renesas continues to create integrated development environments that enable customers to adopt the ‘software-first’ approach,” said Hirofumi Kawaguchi, Vice President of the Automotive Software Development Division at Renesas. “By supporting the development of deep learning models tailored to R-Car, we help our customers build AD and ADAS solutions, while also reducing the time to market and development costs.”
“The GENESIS for R-Car, which is a cloud-based evaluation environment that we built jointly with Renesas, allows engineers to evaluate and select devices earlier in the development cycles and has already been used by many customers,” said Satoshi Miki, CEO of Fixstars. “We will continue to develop new technologies to accelerate machine learning operations (MLOps) that can be used to maintain the latest versions of software in automotive applications.”
Today’s AD and ADAS applications use deep learning to achieve highly accurate object recognition. Deep learning inference processing requires massive amounts of data calculations and memory capacity. The models and executable programs on automotive applications must be optimized for an automotive SoC, since real-time processing with limited arithmetic units and memory resources can be a challenging task. In addition, the process from software evaluation to verification must be accelerated and updates need to be applied repeatedly to improve the accuracy and performance. Renesas and Fixstars have developed the following tools designed to meet these needs.
1. R-Car Neural Architecture Search (NAS) tool for generating network models optimized for R-Car
This tool generates deep learning network models that efficiently utilize the CNN (convolutional neural network) accelerator, DSP, and memory on the R-Car device. This allows engineers to rapidly develop lightweight network models that achieve highly accurate object recognition and fast processing time even without a deep knowledge or experience with the R-Car architecture.
2. R-Car DNN Compiler for compiling network models for R-Car
This compiler converts optimized network models into programs that can make full use of the performance potential of R-Car. It converts network models into programs that can run quickly on the CNN IP and also performs memory optimization to enable high-speed, limited-capacity SRAM to maximize its performance.
3. R-Car DNN Simulator for fast simulation of compiled programs
This simulator can be used to rapidly verify the operation of programs on a PC, rather than on the actual R-Car chip. Using this tool, developers can generate the same operation results that would be produced by R-Car. If the recognition accuracy of inference processing is impacted during the process of making models more lightweight and optimizing programs, engineers can provide immediate feedback to model development, therefore shortening development cycles.
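As an illustration of the kind of regression check such a simulator enables, the sketch below compares a hypothetical "lightweight" (coarsely quantized) model against its floating-point reference on the same inputs. All names, the quantization scheme and the tolerance are assumptions for illustration, not part of the Renesas tools.

```python
def verify_optimized_model(reference_fn, optimized_fn, inputs, atol=1e-2):
    """Compare an optimized model's outputs against the reference model,
    a stand-in for the simulator-based accuracy checks described above.
    Returns (worst_absolute_error, all_within_tolerance)."""
    worst = 0.0
    for batch in inputs:
        err = max(abs(reference_fn(v) - optimized_fn(v)) for v in batch)
        worst = max(worst, err)
    return worst, worst <= atol

# Hypothetical stand-ins: a float "model" and a simulated 8-bit-scaled version.
reference = lambda v: v * 0.5
quantized = lambda v: round(v * 0.5 * 128) / 128  # coarse fixed-point grid

inputs = [[i / 50.0 - 1.0 for i in range(101)]]   # sweep over [-1, 1]
worst_err, ok = verify_optimized_model(reference, quantized, inputs)
```

If `ok` comes back false after an optimization pass, that is the immediate feedback to model development the text describes, caught on a PC rather than on the actual chip.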
Renesas and Fixstars will continue to develop software for deep learning with the joint “Automotive SW Platform Lab” and build operation environments that maintain and improve recognition accuracy and performance by continuously updating network models.
The first set of tools available today is designed for the R-Car V4H SoC for AD and ADAS applications that combines powerful deep-learning performance of up to 34 tera operations per second (TOPS) with superior energy efficiency.
AWP Intros AFADs
“The National Safety Council reported 857 deaths and 44,240 total injuries from work zone crashes in 2020,” said Josh Shipman, chief revenue officer of AWP. “AWP is proud to introduce next-generation, life-saving technology that can help customers significantly reduce these numbers. AFADs are proven to improve driver responsiveness, leading to fewer accidents.”
AFADs are automated work zone safety systems that use onboard Google/Waze technology to divert 25% of traffic around the work zone entirely. The smart, wireless systems allow AWP Protectors to safely control AFADs remotely using a roadside tablet. This provides greater visibility of the entire work zone and puts them closer to utility, broadband and infrastructure crews for better communication.
If a vehicle does breach a work zone, an intrusion alarm immediately alerts everyone to get out of the way.
Industry research shows motorists are more responsive to AFADs than human Protectors. One study by the Missouri Department of Transportation found that, on average, vehicles approached 4.2 mph slower and stopped 11.4 ft. further back when work zones utilized AFADs. The study also revealed that 78% of drivers prefer AFADs over human Protectors.
Together, AFADs and AWP Protectors deliver exceptional availability and work zone coverage. One Protector can operate up to four AFADs in a single work zone, freeing up other Protectors to cover more customer job sites. Greater coverage helps crews finish projects on or ahead of schedule, translating to cost savings realized through less overtime and higher operating efficiency.
AWP AFADs also help reduce customer liability. Built-in, 360-degree night vision surveillance cameras show what really happened in the event of an accident, preventing potential litigation.
“AFADs are one more way AWP can take care of our customers’ increasingly complex safety needs, from planning to execution,” said Shipman. “We will continue to invest in the next generation of safety innovation to protect the people who make our utilities and infrastructure possible, as well as the communities they serve.”
LG Innotek New Hybrid Lenses
LG Innotek (CEO Jeong Cheol-dong) announced on the 14th that it has successfully developed new types of high-performance hybrid lenses for autonomous driving that are smaller and thinner while offering better price competitiveness than existing products on the market.
Cameras are key components of autonomous driving solutions as they help detect the driver’s movements.
The company said it developed new lenses for its driver monitoring system (DMS) and advanced driver assistance system (ADAS). What distinguishes them is that the company combined plastic and glass elements inside the lens, whereas conventional lenses use only glass to prevent structural deformation from temperature changes or external force.
In particular, the company is the first in the industry to apply plastic materials to high-resolution (8MP) ADAS lenses. Using plastic allowed it to decrease both the size and the price of the lenses. Given that cameras are increasingly being used inside the car, the new lenses will give carmakers more flexibility in vehicle design, according to the company.
“The high-performance lenses are 20 percent to 30 percent thinner than all-glass products. As they get thinner, they have the advantage of increased freedom in interior and exterior designs,” a company spokesperson said. “The higher the level of self-driving, the more sensing devices will be used, so it is important to reduce the size of the parts.”
The company said it raised the performance of its new lenses to match all-glass lenses, thanks to its technology for maintaining consistent performance regardless of temperature.
LG Innotek expects to command an edge in the in-cabin camera lens market. The camera lenses employed in autonomous vehicles are mounted in camera modules and are key components for driver assistance and driver recognition. In Europe, all vehicles are recommended to be equipped with DMS from 2025.
According to data from Strategy Analytics, the global self-driving camera market is expected to grow by around 17 percent annually to 7.9 trillion won ($6 billion) in 2025, up from 4.2 trillion won in 2021.
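The quoted growth figures can be cross-checked with simple compound-growth arithmetic:

```python
# Cross-check: 4.2T won (2021) growing ~17%/yr should land near
# the quoted 7.9T won by 2025.
start_value = 4.2        # trillion won, 2021
growth_rate = 0.17       # ~17 percent per year
years = 2025 - 2021

projected = start_value * (1 + growth_rate) ** years       # ~7.87T won
implied_cagr = (7.9 / 4.2) ** (1 / years) - 1              # ~17.1%/yr
```

The projection and the implied compound annual growth rate both agree with the article's figures to within rounding, so the numbers are internally consistent.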
“We expect LG Innotek’s high-performance hybrid lens, which has overcome the limitations of plastic with innovative technologies, to create a huge wave in the market,” said LG Innotek CTO Kang Min-seok.
Waymo Decision Making
Making driving decisions is hard in situations ranging from a vehicle running a red light to a car suddenly changing lanes. To evaluate the Waymo Driver’s ability to avoid or mitigate crashes in situations like these, Waymo developed a comprehensive scenario-based testing methodology called Collision Avoidance Testing (CAT). To maintain transparency and give the public a deeper understanding of its safety approach, Waymo published a paper describing how it judges good collision avoidance performance, how it identifies the right set of scenarios to test, and the testing tools it developed.
A Waymo blog post states the scenarios are drawn from real-world data and test tracks. For scenarios that are either too dangerous (e.g., at high speeds) or impractical to collect on a test track (e.g., those requiring specific intersection geometry or highly specialized vehicle types), Waymo creates fully synthetic simulations.
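As a toy illustration of the kind of kinematic question a collision-avoidance scenario poses (a sketch with assumed parameters, not Waymo's methodology), the check below asks whether a vehicle can stop within a given gap, accounting for reaction time and braking:

```python
def can_stop_in_time(speed_mps, gap_m, reaction_s=0.5, decel_mps2=6.0):
    """Toy kinematic check: does reaction distance plus braking
    distance (v^2 / 2a) fit within the gap to a stopped obstacle?
    Reaction time and deceleration values are illustrative assumptions."""
    stopping_distance = speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)
    return stopping_distance <= gap_m

# A car ahead brakes to a stop 40 m away while we travel at 15 m/s (~54 km/h).
ok_city = can_stop_in_time(15.0, 40.0)      # stops with room to spare
# The same 40 m gap at highway speed (30 m/s) is not enough.
ok_highway = can_stop_in_time(30.0, 40.0)
```

A scenario-based methodology like CAT sweeps large families of such parameters (speeds, gaps, geometries, actor behaviors) rather than checking single points, which is why the paper's scenario-selection question matters.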