In autonomous and self-driving vehicle news are Qualcomm, GM, Lumotive, ZKW, Torc Robotics, RoboSense, Innoviz, Hesai, Asensing, Immervision, LeddarTech, Twinner, EasyMile, Ansys, Mando, Mobileye, INFINIQ, Quantum Corp., Pony.ai, AUTOCRYPT, Udelv & Ouster.
Qualcomm Tech for GM’s Ultra Cruise
GM announced this week that its next-generation hands-free driver assist system, Ultra Cruise, will be powered by a scalable compute architecture featuring system-on-chips developed by American semiconductor company Qualcomm Technologies, Inc. GM will be the first company to use Qualcomm Technologies’ Snapdragon Ride™ Platform for advanced driver assistance technology, which features an industry-leading 5-nanometer Snapdragon™ SA8540P SoC and SA9000P artificial intelligence accelerator.
Ultra Cruise’s compute, which is about the size of two laptops stacked together, will be available in 2023 on vehicles including the ultra-luxury, fully-electric Cadillac CELESTIQ. With high performance sensor interfaces and memory bandwidth, it will, in combination with GM’s homegrown Ultra Cruise software stack, be key to helping Ultra Cruise achieve an unmatched combination of capability, reliability, predictability and robust door-to-door hands-free driving in 95 percent of all driving scenarios.
“Despite its relatively small size, Ultra Cruise’s compute will have the processing capability of several hundred personal computers,” said Ken Morris, GM vice president of Electric, Autonomous and Fuel Cell Vehicle Programs. “It will take qualities that have distinguished GM’s advanced driver assist systems since 2017 to the next level with door-to-door hands-free driving.”
The Ultra Cruise compute will help power GM-developed ADAS software and features, including perception, planning, localization and mapping. These Ultra Cruise capabilities were developed in-house at GM engineering facilities in Israel, the United States, Ireland and Canada. To ensure a robust and predictable system with minimal latency, GM integrated Ultra Cruise’s software on an optimal hardware design, overlaying cameras, radar and lidar. This low-level sensor fusion, which provides excellent detection and classification of data, and Ultra Cruise’s software stack are proprietary to GM and not available on the automotive aftermarket.
The Ultra Cruise compute comprises two Snapdragon SA8540P SoCs and one SA9000P AI accelerator, delivering key low-latency control functions on 16-core CPUs and high-performance AI compute of more than 300 tera operations per second (TOPS) for camera, radar and lidar processing. The Snapdragon SoCs are designed with 5 nm process technology, enabling superior performance and power efficiency. The Snapdragon SA8540P SoCs will provide the necessary bandwidth for Ultra Cruise’s sensing, perception, planning, localization, mapping and driver monitoring.
“We are very proud of our collaboration with General Motors on one of the industry’s first uses of our Snapdragon SoCs in an automated driving system,” said Nakul Duggal, Qualcomm Technologies, Inc. senior vice president and GM, Automotive. “Ultra Cruise powered by Snapdragon Ride on Cadillac vehicles will be an experiential and technological leap forward for the industry.”
Along with Snapdragon Ride SoCs, which are designed to meet automotive system safety standards with multiple redundancies built in, the compute includes an Infineon Aurix TC397 processor for system safety integrity. The Aurix TC397 is categorized ASIL-D – the highest Automotive Safety Integrity Level.
GM minimized complexity within the compute, opting for an air-cooled instead of liquid-cooled system that avoids heavy and inefficient thermal cooling lines throughout the vehicle, made possible by Snapdragon Ride’s high-performance, high-efficiency design.
Ultra Cruise’s compute will also have the capability to evolve over time by leveraging Snapdragon Ride’s SoCs performance and high-speed interfaces for future expansion, as well as over-the-air software updates enabled through the Ultifi software platform and GM’s Vehicle Intelligence Platform electrical architecture.
Lumotive Puts LiDAR in ZKW Headlight
Lumotive, a leading developer of solid-state lidar systems, and the ZKW Group, a specialist in innovative premium lighting systems and electronics, unveiled a functional demonstration of Lumotive’s lidar technology integrated with a ZKW vehicle headlight. The demonstration couples Lumotive’s award-winning Meta-Lidar™ Platform — the industry’s most scalable and cost-effective 3D sensing solution — with ZKW’s advanced vehicle lighting technology to produce a headlight with superior road illumination capabilities, while also providing 3D sensing for advanced safety and autonomy features.
Compared with previous-generation lidar systems that use mechanical spinning assemblies and are known for being bulky and expensive, Lumotive’s solid-state solution is tiny and scalable for seamless integration into many essential mobility products such as vehicle lighting systems. The ZKW integration features a prototype of the Lumotive M30 module, the workhorse of the Meta-Lidar™ platform, which uses pulsed laser beams to measure distances between objects and the sensors around the vehicle. The Meta-Lidar platform generates extremely accurate and precise spatial data that can be used by a driver to avoid collisions.
Torc Robotics 3rd Location in Austin TX
Torc Robotics will open its third U.S. location — in Austin, Texas, in early 2022. The approximately 21,000-square-foot office will complement the self-driving truck firm’s Blacksburg, Virginia, headquarters and its Albuquerque, New Mexico, test center. The expansion coincides with the recent two-year anniversary of Torc joining the Daimler Truck family as an independent subsidiary.
Torc, a pioneer in commercializing self-driving vehicle technology, and Daimler Truck, the market share leader in commercial vehicles, have pledged to commercialize Level 4 autonomous trucks at scale within the decade. Level 4 vehicles can operate autonomously under certain conditions and parameters. “Trucking is the backbone of the U.S. economy, and we predict hauling freight will be the first successful commercial application of self-driving on-road vehicles,” said Michael Fleming, Torc Founder and CEO. “This is why we’re laser-focused on the freight industry, taking a pure-play approach and joining forces with the leading heavy-duty trucking manufacturer in North America.”
RoboSense Showed LiDAR @CES
RoboSense LiDAR exhibited its leading portfolio of smart LiDAR sensor solutions at CES 2022. Its star solution, the RS-LiDAR-M1 (M1), took the spotlight as the world’s first mass-produced automotive-grade MEMS solid-state LiDAR, while Ruby Plus, a new 128-beam mechanical LiDAR, made its international debut.
M1 maintained its popularity at this year’s CES, building on its recognition as a winner of the CES 2019 and 2020 Innovation Awards. In August 2021, M1 completed its first mass production and delivery in a designated project with a vehicle manufacturer. Since the manufacturer’s start of production (SOP) in June 2021, more than 10 deliveries have been completed, making M1 the first and only mass-produced automotive-grade MEMS solid-state LiDAR in the world. RoboSense has also been nominated as a Tier 1 LiDAR supplier by several OEMs, including BYD, GAC, WM Motor, Geely’s subsidiary Zeekr, Lotus Cars, and Inceptio Technology.
RoboSense also gave its 128-beam mechanical LiDAR, Ruby Plus, its world premiere at CES. Following advancements in performance and design, the diameter has been reduced from 166 mm to 125 mm and the height from 148.5 mm to 125 mm. Ruby Plus not only has a longer detection range and higher detection accuracy, but has also cut its overall weight and volume by more than 50% and its power consumption by 40%, from 45 W to 27 W, technical achievements that drew great interest from CES attendees.
Innoviz Intros Innoviz360
Innoviz Technologies Ltd. (Nasdaq: INVZ) (the “Company” or “Innoviz”), a leading provider of high-performance, reliable and affordable LiDAR sensors and perception software, introduces a next-generation sensor to its product line – the Innoviz360.
After delivering on its InnovizTwo LiDAR promises, Innoviz continues to demonstrate its ability to innovate and launch new product lines with the Innoviz360. This third-generation LiDAR product will be offered alongside Innoviz’s perception software, which converts the LiDAR’s raw point cloud data into high-quality outputs for outstanding object detection, classification and tracking.
The new Innoviz360 HD LiDAR sensor represents a breakthrough that leapfrogs traditional, standard-resolution spinner solutions, which are performance-limited, expensive, bulky and unreliable. Whereas traditional spinners top out at 128 scanning lines, the new, lightweight Innoviz360 supports multiple scanning software configurations with up to 1,280 scanning lines (10x) in a more cost-effective and durable package than traditional 360° LiDARs.
Innoviz360’s HD resolution, wide FoV (360° x 64°) and reduced cost will help overcome major challenges in achieving full automation at scale for L4-L5 automotive applications, such as robotaxis, shuttles and trucks, and will open new market opportunities for Innoviz beyond the automotive space. These markets include logistics, mapping, industrial and smart infrastructure, which are expected to provide meaningful new revenues for the Company starting in 2023. Innoviz expects samples of its Innoviz360 HD LiDAR to be available in Q4 2022.
Hesai Tech Showed AT128
Hesai Technology Co., Ltd., a global leader in lidar technology for autonomous driving and advanced driver-assistance systems (ADAS), showcased its new automotive-grade lidar sensor at CES 2022. AT128 is a long-range hybrid solid-state lidar for ADAS applications.
Hesai’s AT128 is a directional long-range hybrid solid-state lidar designed for ADAS applications in mass production passenger and commercial vehicles. It combines high performance, compact design, and high reliability. With its consistent resolution over the full field of view (FOV), AT128 is also algorithm-friendly. It has a small form factor, which enables seamless integration onto the vehicle. AT128 provides the essential perception capabilities that L3+ autonomous vehicles require.
Additional highlights about AT128 include:
- Image-level Resolution: AT128 features an ultra-high measurement frequency of over 1.53 million points per second (single return), resulting in image-level resolution. Each AT128 incorporates 128 high-power multi-junction VCSEL arrays, enabling genuine 128-channel e-scanning. This design avoids the reliability and limited-lifetime issues caused by high-speed two-dimensional mechanical scanning. It also provides an unstitched ultra-wide 120° horizontal FOV and image-like structured data, bringing more convenience to autonomous vehicle algorithms.
- Superior Ranging Capability: AT128 has a ranging capability of 200 meters at 10% reflectivity, with effective ground detection as far as 70 meters. It is one of the few hybrid solid-state lidars on the market that can detect objects at such long range, while also reaching such a high measurement frequency.
- Automotive Grade and High Reliability: Designed for mass production, AT128 is an automotive-grade lidar with high reliability. All key components meet AEC-Q and other relevant standards. AT128 has undergone more than 50 design validation (DV) tests, conducted according to internationally recognized OEM standards such as electrical, mechanical, environmental, sealing, material, and EMC tests.
- Low Cost Enabled by Proprietary ASICs: AT128 is developed based on Hesai’s new-generation proprietary lidar ASICs, which greatly simplify the traditional complex assembly process. This increases manufacturing efficiency and consistency for mass production needs. More importantly, it significantly reduces cost while maintaining high performance and reliability.
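For a rough sense of what these figures imply, the point rate and FOV quoted above can be combined into an angular-resolution estimate. The 10 Hz frame rate here is an assumption for illustration, not a figure from Hesai:

```python
# Back-of-envelope point-density estimate for a 128-channel lidar,
# using the figures quoted above. Frame rate is an assumed value.
POINTS_PER_SECOND = 1_530_000   # single-return measurement frequency
CHANNELS = 128                  # scanning lines (VCSEL arrays)
H_FOV_DEG = 120.0               # unstitched horizontal field of view
FRAME_RATE_HZ = 10.0            # assumption, not stated by Hesai

points_per_frame = POINTS_PER_SECOND / FRAME_RATE_HZ   # 153,000
points_per_line = points_per_frame / CHANNELS          # ~1,195
h_resolution_deg = H_FOV_DEG / points_per_line         # ~0.1 degree

print(f"{points_per_frame:,.0f} points per frame")
print(f"~{h_resolution_deg:.2f}° horizontal resolution")
```

Under these assumptions each of the 128 lines carries roughly 1,200 points across the 120° FOV, which is what makes the output "image-like" for perception algorithms.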
Hesai’s AT128 has already been nominated by multiple ADAS programs, totaling several million units over their lifetimes, from automakers including Li Auto, JiDU, HiPhi, and Lotus. The sensor will begin mass production in 2022.
At CES 2022, Hesai also unveiled a new sensor, QT128 – a short-range lidar with 105° ultra-wide vertical field of view (VFOV). QT128 is an ideal blind spot solution for L4 applications such as robotaxis and robotrucks. It features an industry-leading ultra-wide VFOV, allowing it to see more of its surroundings than other available lidar sensors. QT128 also has an automotive-grade design; its manufacturing process is guided by automotive product lifecycle standards, giving it ultra-high reliability and a long operating lifetime.
QT128 can output calibrated reflectivity values, delivering environmental details and enhancing the overall perception system. It has optimized horizontal and vertical resolution, which gives the perception system finer details in focused areas. QT128 will begin mass production in Q1 2023.
Hesai also demoed multiple lidars at CES, including models from its Pandar, QT, and XT series. Hesai’s complete sensor portfolio is already widely used for robotaxis, robotrucks, autonomous shuttles, delivery robots, smart city infrastructure, and other applications.
Asensing Tech Lane-Level Positions
Asensing Technology Co., Ltd. (“Asensing Technology”), a leader in high-precision positioning technology for intelligent transportation, demonstrated HD-MapBox, a mapping application that integrates high-precision map data with high-precision positioning, at CES 2022. The application achieves lane-level positioning and 1+ mile (2 km) Predictive Cruise Control (PCC), providing a decision basis for advanced assisted driving and better meeting the demanding positioning requirements of autonomous driving vehicles.
Asensing Technology CTO Situ Chunhui said, “As the premise for autonomous driving safety, high-precision positioning is of great importance for integrating positioning technology based on inertial measurement units (IMU), global navigation satellite system (GNSS) signals, visual perception system and HD map. High-precision positioning is becoming the preferred choice due to higher positioning accuracy and improved redundancy as well as an enhanced passing rate under all scenarios.”
Empowering a highly reliable autonomous driving system
Under any driving scenario, autonomous driving vehicles must accurately interpret their own lane-level location information to better predict and prevent risks and make safe driving decisions. As a result, positioning is not only part of the autonomous driving process but also the premise of autonomous driving.
However, any single positioning technology has its limitations, especially in scenarios such as tunnels and underground garages, where the perception system may be adversely affected by changes in lighting and weak GPS signals, thereby affecting driving safety.
Based on data fusion of GNSS, IMU, ADAS camera, vehicle dynamics and HD-map data, the new HD-MapBox launched by Asensing Technology is a superior solution for lane-level positioning. With this robust collection of data on hand, HD-MapBox can achieve a lateral error of less than 8 inches (0.2 meters) and a longitudinal error of less than 6.5 feet (2 meters) at a 95 percent confidence level, providing an accurate reference for highway pilot (HWP) and automated valet parking (AVP). Even when both GNSS and lane-line detection are unavailable, HD-MapBox can still keep the vehicle in its lane for at least a quarter mile (400 meters).
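A minimal sketch of the dead-reckoning fallback that bridges a GNSS outage of the kind described above, assuming a simple 2D kinematic model; the function name, speed, and update rate are illustrative assumptions, not Asensing’s implementation:

```python
import math

def dead_reckon(x, y, heading_rad, speed_mps, yaw_rate_rps, dt):
    """Propagate a 2D pose from wheel speed and IMU yaw rate.

    Used when GNSS and lane-line detection drop out (tunnels,
    underground garages); position error grows with distance traveled.
    """
    heading_rad += yaw_rate_rps * dt
    x += speed_mps * math.cos(heading_rad) * dt
    y += speed_mps * math.sin(heading_rad) * dt
    return x, y, heading_rad

# Example: a 400 m straight stretch at 25 m/s with a 100 Hz IMU,
# matching the quarter-mile lane-keeping distance quoted above.
x, y, heading = 0.0, 0.0, 0.0
dt, speed = 0.01, 25.0
for _ in range(1600):            # 16 s at 100 Hz covers 400 m
    x, y, heading = dead_reckon(x, y, heading, speed, 0.0, dt)
print(f"traveled {x:.0f} m without GNSS")
```

In practice the integration runs on fused IMU and wheel-odometry data, and the HD map bounds the accumulated drift; this sketch only shows the propagation step.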
Taking AVP, with its relatively high positioning requirements, as an example: driverless parking is most applicable in underground parking facilities, especially those that require navigating complex, twisting ramps or passing through poorly lit areas. These scenarios present real risks in practice, as ordinary ADAS cameras may not operate normally and the satellite positioning function may also fail.
After incorporating high-precision sensor and map data, these deficiencies are fully compensated for: during construction of the parking-space map, dead reckoning (DR) and camera-based parking-space detection accurately map the driving trajectory and parking-space locations. When automated valet parking is in progress, the vehicle is guided to its assigned parking space through the integration of several positioning technologies.
Asensing Technology founder and CEO Rongxi Li said, “With the further development of advanced assisted driving in recent years, high-precision integrated positioning technology will be more widely used, especially in terms of improving driving safety and optimizing the intelligent driving experience. High-precision integrated navigation has become a basic configuration in more and more autonomous driving systems.”
Immervision Partners with LeddarTech
Immervision, the world’s leading developer of advanced vision systems combining optics, image processing, and sensor fusion technology, announced a collaboration with LeddarTech®, a leader in environmental sensing solutions for autonomous vehicles and advanced driver assistance systems, to jointly develop intelligent sensing automotive solutions.
Innoviz Tech Partners with Twinner
Innoviz Technologies (Nasdaq: INVZ), a leading provider of high-performance, solid-state LiDAR sensors and perception software, announced its collaboration with Twinner, a German-based vehicle scanner provider for remarketing and inspection purposes within the automotive industry, to enhance the capabilities of its sophisticated car scanner. Twinner is testing InnovizOne LiDAR with its Digital Twinn® platform to provide a high-quality 360-degree view of the vehicle in order to better assess, inspect and evaluate a vehicle’s condition.
Twinner’s Digital Twinn® solution utilizes x-ray-like automotive scanning technology to generate a high-quality digital twin model of the vehicle – including its body, roof, underbody, interior and more – to identify defects that would otherwise go unnoticed by the human eye. The Digital Twinn can spot, for example, a scratch on the rim of a tire or a tiny dent on a bumper as well as identify whether a vehicle has previously been repainted so that auto manufacturers, fleet owners, service providers, car dealers and consumers can make better-informed vehicle investment decisions. Twinner has partnered with several large auto manufacturers, sellers, and logistics companies across Europe and Asia.
The integration of Innoviz’s automotive-grade, solid-state InnovizOne LiDAR into Twinner’s existing sensor suite, which includes multispectral sensors, would enable new features by providing 3D information about the vehicle. Furthermore, Twinner will evaluate Innoviz’s rich point cloud data and APIs to support its own perception software. The ability to disclose a vehicle’s condition offers value across the automotive ecosystem by providing transparency to car makers, fleet operators, car dealers, and service providers competing in a newly digitized space.
Bringing together the technologies from both companies will provide automotive manufacturers with critical solutions needed to increase vehicle perception required for ADAS and autonomous vehicles, particularly in harsh environmental and lighting conditions such as rain, snow, fog, darkness and high brightness.
EasyMile Partners with Ansys
EasyMile, an all-electric autonomous technology supplier headquartered in Toulouse, France looked to Ansys (NASDAQ: ANSS) software for a single-source, turnkey solution to demonstrate the safety of their driverless vehicles. With CADFEM France, Ansys enabled engineers to work from a single model for all safety activities across platforms—significantly shortening development cycles, speeding time to market and reducing operational costs for the company’s driverless shuttle and tow tractor solutions.
Both electric vehicles (EVs) operate at level 4 (L4)—meaning they are fully autonomous until a system failure is detected. EasyMile deployed Ansys software to conduct functional safety analysis of infrastructure and connected architecture for both vehicles.
Autonomous vehicle (AV) function depends on a high level of information to operate safely. The real-time data-processing supporting AV functionality is driven by input from a complex system of lidars, radars, cameras, internet of things (IoT) sensors, GPS and navigation software, all working together to give a 360-degree perspective of vehicle surroundings. To demonstrate safety at this level is difficult and requires clearly defined methods and tools for managing the complex architecture of these nonclassical systems.
Using Ansys software, EasyMile identified a single solution with all the tools needed to analyze their very complex AV system architecture. With Ansys’ help, EasyMile established clear guidelines for safety analysis, along with the unique templates and supporting documentation needed to successfully demonstrate the safety of their AV solutions for customers and various government regulatory bodies.
“It has been difficult in the past to demonstrate the safety of our products to clients,” says Romain Dupont, R&D for EasyMile. “Ansys medini analyze really helps us to streamline the process and bring it all together in a way that our clients can understand. Together with Ansys’ support, we’re helping shape future standards for autonomous vehicle safety.”
medini analyze is a software toolset supporting safety analysis for electronically controlled, safety-related functions. It allows for the consistent and efficient application of industry guidelines specific to autonomous vehicle applications, helping to eliminate inconsistencies during analysis and accelerate certification.
Mando Corp Wins CES Award for Dynamic Brake
Mando Corporation, a company specializing in EV and self-driving solutions, won a CES 2022 Innovation Award. Mando’s latest entry, the IDB2 HAD (integrated dynamic brake for highly autonomous driving), won in the VIT (vehicle intelligence & transportation) category for the second year in a row.
The Mando IDB2 HAD, the first commercially available integrated (1-box) electronic brake with a dual safety design, operates normally even when a single-point failure occurs while driving, thanks to its full-redundancy concept. IDB2 HAD is a perfect complement to an e-brake pedal, enabling ‘auto stow’ functionality that folds or unfolds the pedal as needed in highly autonomous driving conditions. In maximizing vehicle space and design flexibility, IDB2 HAD closely resembles Mando’s ‘steer-by-wire’ (SbW) technology, the company’s CES Innovation Award winner last year. Electrical hyper-connection is the key theme: IDB2 HAD is another product of Mando’s by-wire approach, which removes mechanical connections, and together with SbW, Mando’s complete by-wire solution will allow users of full self-driving vehicles to engage in many other activities while moving.
The IDB2 HAD is also an eco-friendly product. Most brake systems on the market are composed of ESC (electronic stability control), a master booster, a vacuum pump, and so on, all mechanically connected to the brake pedal: braking kicks in only on a driver’s input, and braking force is generated through the hydraulic line. This design requires a large space in the engine room and a longer assembly time. Mando achieved weight reduction and manufacturing optimization by integrating these individual components into a single-box design, yielding a significant reduction in carbon emissions. “I Do Believe” is the company’s alternate reading of IDB: like the firm belief of the engineers in the Mando Brake Business Unit’s development team, the IDB2 HAD combines freedom (full self-driving), safety (dual safety), and eco-friendliness (by-wire).
Mando’s EV and full self-driving solution technology was once again recognized globally, winning a CES 2022 Innovation Award. The company plans to introduce a new-concept ‘BbW (brake by wire)’ product equipped with the IDB2 HAD to the public in 2023.
Mobileye’s Treasure of Data
Mobileye is sitting on a virtual treasure trove of driving data – some 200 petabytes worth. When combined with Mobileye’s state-of-the-art computer vision technology and extremely capable natural language understanding (NLU) models, the dataset can deliver thousands of results within seconds, even for incidents that fall into the “long tail” of rare conditions and scenarios. This helps the AV and state-of-the-art computer vision system handle edge cases and thereby achieve the very high mean time between failure (MTBF) rate targeted for self-driving vehicles.
“Data and the infrastructure in place to harness it is the hidden complexity of autonomous driving. Mobileye has spent 25 years collecting and analyzing what we believe to be the industry’s leading database of real-world and simulated driving experience, setting Mobileye apart by enabling highly capable AV solutions that meet the high bar for mean time between failure.”
― Prof. Amnon Shashua, Mobileye president and chief executive officer
How It Works: Mobileye’s database – believed to be the world’s largest automotive dataset – comprises more than 200 petabytes of driving footage, equivalent to 16 million 1-minute driving clips from 25 years of real-world driving. Those 200 petabytes are stored across Amazon Web Services (AWS) and on-premises systems. The sheer size of Mobileye’s dataset makes the company one of AWS’s largest customers by volume stored globally.
Large-scale data labeling is at the heart of building the powerful computer vision engines needed for autonomous driving. Mobileye’s rich and relevant dataset is annotated both automatically and manually by a team of more than 2,500 specialized annotators. The compute engine relies on 500,000 peak CPU cores in the AWS cloud to crunch 50 million datasets monthly, the equivalent of 100 petabytes processed every month, relating to 500,000 hours of driving.
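Quick arithmetic puts the figures quoted above in perspective; the per-clip and per-hour sizes are derived here, not quoted by Mobileye:

```python
# Rough arithmetic on the Mobileye dataset figures quoted above,
# using decimal (SI) definitions of petabyte and gigabyte.
PB = 10**15
GB = 10**9

dataset_bytes = 200 * PB
clips = 16_000_000                      # 1-minute driving clips
bytes_per_clip = dataset_bytes / clips
print(f"~{bytes_per_clip / GB:.1f} GB per 1-minute clip")

monthly_bytes = 100 * PB                # processed per month
hours = 500_000                         # of driving per month
print(f"~{monthly_bytes / hours / GB:.0f} GB processed per driving hour")
```

Roughly 12.5 GB per minute of footage is consistent with multi-camera capture at automotive resolutions, which is why the storage footprint reaches hundreds of petabytes.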
Why It Matters: Data is only valuable if you can make sense of it and put it to use. This requires deep comprehension of natural language along with state-of-the-art computer vision, Mobileye’s long-standing strength.
Every AV player faces the “long tail” problem, in which a self-driving vehicle encounters something it has not seen or experienced before. This long tail contains large datasets, but many developers lack the tools to make sense of them effectively. Mobileye’s state-of-the-art computer vision technology, combined with extremely capable NLU models, enables it to query the dataset and return thousands of long-tail results within seconds. Mobileye can then use these results to train its computer vision system and make it even more capable. Mobileye’s approach dramatically accelerates the development cycle.
What Is Included: Mobileye’s team uses an in-house search engine database with millions of images, video clips and scenarios. They include anything from “tractor covered in snow” to “traffic light in low sun,” all collected by Mobileye and feeding its algorithms. (See sample images).
More Context: With access to the industry’s highest-quality data and the talent required to put it to use, Mobileye’s driving policy can make sound, informed decisions deterministically, an approach that removes the uncertainty of artificial intelligence-based decisions and yields a statistically high mean time between failure rate. At the same time, the dataset hastens the development cycle to bring the lifesaving promise of AV technology to reality more quickly.
INFINIQ Showed AI Data Tech @CES
INFINIQ, a leading artificial intelligence (AI) and autonomous driving data service company, unveiled its groundbreaking autonomous driving and retail innovation technology at CES 2022 (Jan. 5-7, 2022), to an incredible response from 180,000+ attendees.
A three-year CES participant and CES 2022 Innovation Awards Honoree, INFINIQ is the optimal AI accelerator for business innovation, with companies requesting meetings before the event even began.
INFINIQ’s innovations at CES 2022 include:
- A data collection vehicle equipped with multi-sensors, including vision cameras, lidar and infrared/thermal imaging to collect high-quality data for autonomous driving, using sensor fusion technology to process the data and increase accuracy.
- AI data service platform “MyCrowd,” which builds high-quality datasets for AI training. MyCrowd guarantees high speed and accuracy by applying AI technology to data processes, providing one-stop services for 2D/3D data mapping, personal information anonymization, 3D annotation, and data quality verification.
- Data anonymization service “Wellid,” which processes visual data so it cannot be recognized. Sensitive personal information such as faces and number plates can be anonymized, crucial to comply with global privacy regulations such as GDPR, the AI Act, CCPA, and CPRA. AI companies showed great interest in Wellid’s capabilities at CES 2022.
- Self-checkout solution and CES 2022 Innovation Award winner AI Counter, which scans products without barcodes or consistent shapes (such as bread, fruits and vegetables) and even completes calculations through its own app. AI Counter is optimized for retail services and is attracting attention from global retail companies for its possibilities, including 24-hour stores, unmanned stores and self-checkout counters. AI Counter was named a CES 2022 Innovation Awards Honoree in both the Software & Mobile Apps and Smart Cities categories.
- AI store “Mealy,” a retail concept that can be operated unmanned, 24/7, using vision AI technology. This interactive concept allowed CES 2022 attendees to walk through a real store with groceries while AI Counter tallied up their purchases and an abnormal behavior detection system monitored the behavior of shoppers.
“This is INFINIQ’s third consecutive CES and judging by the enthusiastic response and deep interest from international businesses, this year will serve as an opportunity to expand our North American business and grow as a global company,” INFINIQ CEO Park Jun-hyung said. “We have extensive experience in supplying data to all fields of the artificial intelligence industry, and we are excited to broaden our partnerships and continue our cutting-edge innovations.”
Quantum Corp. All-Terrain AV @ CAVS
Quantum Corporation announced its role in accelerating all-terrain autonomous vehicle research at the Center for Advanced Vehicular Systems (CAVS) at Mississippi State University (MSU), one of the premier university automotive research centers in the world. CAVS collects vast amounts of unstructured data using Quantum R-Series Edge Storage, a high-performance, ruggedized solution purpose-built for capturing massive data volumes in edge environments. The data is generated by vehicles and used for further analysis and machine learning (ML) model development in the CAVS data center.
Data storage and processing needs for autonomous vehicle (AV) development are growing. Mobility Foresights research estimates that 20% of new cars sold globally will have at least Level 3 autonomous driving capability by 2030. An estimated 90 million connected and autonomous vehicles will each generate up to 10 terabytes (TB) of data per day, or one zettabyte (ZB) per day across the industry. The automotive industry increasingly requires storage solutions that are flexible, scalable, easy to manage, and highly reliable to address the big data challenge.
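The industry-wide figure follows directly from the per-vehicle estimate, assuming the decimal definitions of terabyte and zettabyte:

```python
# Checking the industry-wide data volume quoted above.
TB = 10**12
ZB = 10**21

vehicles = 90_000_000
per_vehicle_daily = 10 * TB          # upper-end per-vehicle estimate

industry_daily = vehicles * per_vehicle_daily
print(f"~{industry_daily / ZB:.1f} ZB per day across the industry")
```

The product comes to 0.9 ZB per day, which rounds to the "one zettabyte per day" figure cited in the research.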
At the CAVS facility, featuring a 55-acre off-road proving ground, test vehicles equipped with a variety of sensors collect a wide array of data about the outdoor terrain. This data is then used to create a digital twin of the environment for running driving simulations. These simulations are leveraged to create navigation software that guides AVs through the outdoor terrain.
Creating a digital twin of the environment requires high-quality data collected in the field. The CAVS team needed vehicle onboard storage systems that could flawlessly collect field data and enable engineers to quickly transfer that data to the large-scale centralized data center storage for simulations.
“We needed storage that could reliably collect critical sensor data as vehicles traverse rough trails and other challenging terrain, which Quantum R-series Edge Storage provides for us,” said Daniel Carruth, associate director for advanced vehicle systems, CAVS. “With Quantum, we can move data from a vehicle to the data center quickly and easily. We have an end-to-end data management workflow that lets us stay focused on the insights that all of this data can deliver.”
Integrating the Quantum R-Series Edge Storage into a single, shareable storage platform enables CAVS engineers to make data readily available to multiple development organizations. To offload the collected vehicle data, technicians can simply remove the storage magazine from the in-car storage device and slide it into a data center chassis or use the 10-GbE network port for data offloading.
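To see why a swappable magazine is attractive alongside the network port, consider a rough transfer-time estimate. The 10 TB capture size and 80% sustained-throughput figure below are illustrative assumptions, not numbers from Quantum or CAVS:

```python
# Rough estimate (assumptions, not from the article) of how long it would
# take to offload one day of sensor capture over a 10-GbE port.
capture_tb = 10          # assumed TB captured per vehicle-day
link_gbps = 10           # 10-GbE line rate
efficiency = 0.8         # assumed sustained-throughput fraction

effective_gb_per_s = link_gbps / 8 * efficiency   # gigabytes per second
hours = capture_tb * 1000 / effective_gb_per_s / 3600
print(f"~{hours:.1f} hours to offload {capture_tb} TB over 10 GbE")
```

Under these assumptions the network offload takes close to three hours per vehicle, which is why physically sliding the magazine into a data center chassis can be the faster path.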
The autonomous systems developed at CAVS will be vital for the military and organizations in agriculture, energy, construction, forestry, and more. “Using the information collected in our test vehicles, we are building a comprehensive data set that will be valuable to several other teams at MSU and beyond,” said Clay Walden, executive director of CAVS. “We’re eager to see how this data will fuel breakthrough research and development in a wide variety of fields.”
“Data is a critical component in supporting the continued growth and success of the autonomous vehicle market,” said Plamen Minev, technical director, AI & Cloud, Quantum. “Working with the CAVS team is a wonderful opportunity for us to provide a data management solution that makes storing, moving, and analyzing this critical field data simpler and more streamlined for the CAVS engineering team.”
“The researchers at CAVS are capturing massive amounts of data in demanding off-road environments and then using that data to design, develop, and validate algorithms that can power self-driving military vehicles. The Quantum R-Series Edge Storage enables them to store, quickly offload, and analyze the data for simulations and further research,” said Graham Cousens, ADAS/Autonomous Vehicle Solutions practice lead, Quantum. “We are thrilled to partner with such a cutting-edge research facility to power the future of autonomous vehicles.”
Pony.ai Partners with Sinotrans
Pony.ai, a leading global autonomous driving technology company, announced on December 27th that PonyTron, its autonomous trucking business unit, formed a joint venture with Sinotrans, part of China Merchants Group and one of China’s leading logistics and freight forwarding companies. The two companies will work together to build a smart logistics network featuring autonomous trucking technologies. The joint venture is expected to commence operations in early 2022 with an intelligent logistics fleet of over 100 trucks. Over time, the fleet is expected to grow substantially, incorporating cutting-edge autonomous driving technologies along with sophisticated logistics technology.
AUTOCRYPT SCMS for V2X
Known for its autonomous driving security solutions, AUTOCRYPT recently announced the launch of AutoCrypt SCMS Version 5.0, a Security Credential Management System (SCMS) for Vehicle-to-Everything (V2X) communications, and a crucial component of its AutoCrypt V2X security solution. An SCMS is essential for autonomous driving as it signs and verifies the messages transmitted via V2X to ensure security and safety.
Utilizing a public key infrastructure (PKI) to encrypt, validate, and manage certificates for V2X communications, the newest version of AutoCrypt SCMS, Version 5.0, comes with newly added Certificate Revocation List (CRL) and Misbehavior Detection (MBD) functionalities. Based on the IEEE 1609.2 standard, the CRL feature supports both hash-based CRLs, which list the hash values of revoked certificates, and full linkage-ID-based CRLs, which allow for more efficient mass revocation.
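The contrast between the two CRL styles can be sketched in code. This is a toy illustration and not AUTOCRYPT's implementation: the `cert_hash` identifier follows the HashedId8 convention from IEEE 1609.2 (low-order eight bytes of SHA-256), but the `linkage_value` derivation here is a SHA-256 stand-in, whereas the real standard uses an AES-based expansion split across two linkage authorities:

```python
import hashlib

def cert_hash(cert_der: bytes) -> bytes:
    """HashedId8-style identifier: low-order 8 bytes of SHA-256 (per IEEE 1609.2)."""
    return hashlib.sha256(cert_der).digest()[-8:]

def linkage_value(seed: bytes, period: int) -> bytes:
    """Toy stand-in for the 1609.2 linkage-value derivation (real scheme is AES-based)."""
    return hashlib.sha256(seed + period.to_bytes(4, "big")).digest()[:9]

def is_revoked(cert_der: bytes, lv: bytes, hash_crl: set,
               linkage_crl_seeds: list, periods: int = 52) -> bool:
    # Hash-based CRL: direct membership test on the certificate's hash identifier.
    if cert_hash(cert_der) in hash_crl:
        return True
    # Linkage-ID CRL: one revoked seed covers every pseudonym certificate
    # derived from it, enabling mass revocation with a single CRL entry.
    for seed in linkage_crl_seeds:
        if any(linkage_value(seed, p) == lv for p in range(periods)):
            return True
    return False

# Revoking one seed invalidates all pseudonym certificates derived from it.
seed = b"compromised-device-seed"
lv = linkage_value(seed, 17)   # a pseudonym certificate's current linkage value
print(is_revoked(b"some-cert-der", lv, set(), [seed]))   # True
```

The design trade-off the article alludes to: a hash-based CRL needs one entry per revoked certificate, while a linkage-ID CRL revokes an entire batch of short-lived pseudonym certificates with one entry.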
AutoCrypt SCMS securely manages the entire lifecycle of a certificate and is updated regularly to comply with stringent regulations from various regions. While many security providers only offer compliance in one or two regions, AUTOCRYPT’s research and development team has secured compliance with all existing standards regarding certificate management, including the US SCMS, the European-based C-ITS CMS (CCMS), and the Chinese-based C-SCMS.
The company most recently participated in the OmniAir Consortium’s “OmniAir Plugfest” alongside companies like Blackberry, ESCRYPT, and Green Hills Software. AUTOCRYPT showcased AutoCrypt SCMS Version 5.0 by completing a demonstration of cross-certificate revocation in an actual driving environment and was able to demonstrate international compatibility across the entire certificate lifecycle, including issuance, management, and revocation.
“V2X technology will need to be prioritized if the industry wants to move past Level 3 Driving Automation into Level 4 and 5. And as autonomous driving technology continues to become more prevalent, security for V2X communications will be more important than ever,” said CEO and co-founder Daniel ES Kim. “We are very pleased to be one of the few companies to be able to provide an authentication system that supports all regional standards and look forward to continuously updating our technologies to stay above all regulatory changes.”
AUTOCRYPT currently oversees security for all smart highway/expressway projects in Korea and has focused on expanding its projects to other C-ITS endeavors worldwide. With its wide international compliance and customizable integrations, AUTOCRYPT is ideal for OEMs, public institutions, and governments looking to prioritize secure mobility for all.
Udelv Cabless AEV Transporter Driven by Mobileye
Udelv, a Silicon Valley venture-backed company, unveiled the first cab-less autonomous electric delivery vehicle for multi-stop delivery, the Transporter, driven by Mobileye, at the Consumer Electronics Show (CES).
The company released a new video showing the first look at the Transporter.
Udelv unveiled the Transporter virtually at CES 2022 and aims to have 50,000 units, driven by Mobileye, on public roads by 2028, with the first Transporters commercially deployed in 2023.
The Udelv Transporter is driven by the Mobileye Drive™ self-driving system with a robust suite of cameras, LiDARs, radars and the fifth generation of EyeQ®, Mobileye’s System-on-Chip for automotive applications.
The multi-stop electric delivery vehicle features a proprietary, self-contained, hot swappable modular cargo pod called the uPod. It can carry up to 2,000 pounds of goods, make up to 80 stops per cycle at highway speeds, cover ranges between 160 and 300 miles per run depending on the battery pack option and be operated by Udelv’s mobile apps to seamlessly schedule, deliver, track and retrieve packages.
“This is a historic day for the transportation and logistics industries,” said Daniel Laury, Udelv CEO and co-founder. “The Transporter is transformative for two of the world’s largest industries: automotive and logistics. It was created to solve two great challenges of commercial fleets: the shortage of drivers and the electrification of fleets.”
Udelv’s third-generation vehicle is the result of years of experimentation, client testing and hardcore mechanical, electrical and software engineering. In 2018, Udelv made its debut in California with the first-ever autonomous delivery on public roads. Since then, Udelv has completed over 20,000 deliveries for multiple merchants in California, Arizona and Texas.
Laury highlighted the Transporter’s main characteristics:
- The Transporter features a patented secure, automated, hot-swappable and modular cargo space specifically designed for autonomous delivery, the uPod, with adaptive shelving and an IRIS aperture mechanism.
- The uPod can carry up to 2,000 lbs. of cargo and make up to 80 stops per run.
- The uPod is connected to a proprietary cloud-based software with intelligent loading and unloading, as well as a function to return items.
- It can deliver nearly anything from convenience goods, e-commerce packages and groceries to auto parts, electronics, and medical supplies for B2B and B2C applications.
- To rapidly deploy at scale, the Transporter will integrate Mobileye’s AV maps based on Road Experience Management (REM™), a crowdsourced, continuously updated map of the world that digitizes what autonomous vehicles need to navigate.
- The vehicle features Udelv’s 24/7 proprietary ultra-low latency camera-based teleoperation system for remote maneuvers and assistance and Udelv’s proprietary uECU (Electronic Control Unit) acting as the vehicle’s central compute unit to integrate and optimize all functions.
- Battery capacity is between 90 and 160 kWh with a 160-300 mile range.
- DC fast charging will take 45 minutes to add up to 220 miles of range.
- Top speed is 70 mph.
- The fleet of Transporters is operated by a proprietary Fleet Intelligence and Management System for route optimization and fleet planning algorithms.
- The Transporter is designed to maximize delivery efficiency and customer satisfaction while minimizing the total cost of operation.
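The battery, range, and charging figures above can be reconciled with quick arithmetic. This sketch assumes the largest pack delivers the quoted maximum range and that charging power is constant over the 45-minute session, neither of which the announcement states:

```python
# Cross-check of the Transporter's published figures (assumption-laden):
# does "45 minutes adds up to 220 miles" imply a plausible DC charge rate?
battery_kwh_max = 160     # largest battery pack option (from the article)
range_miles_max = 300     # quoted range with that pack (from the article)

kwh_per_mile = battery_kwh_max / range_miles_max      # implied efficiency
energy_for_220mi = 220 * kwh_per_mile                 # energy for 220 miles
avg_charge_kw = energy_for_220mi / 0.75               # 45 min = 0.75 h
print(f"~{kwh_per_mile:.2f} kWh/mi -> ~{avg_charge_kw:.0f} kW average charge power")
```

The implied average of roughly 156 kW is within the range of current DC fast-charging hardware, so the quoted figures are at least internally consistent.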
The company has already garnered more than 1,000 reservations, including from US-based Donlen and Europe-based Planzer and Ziegler Group. The company was also awarded a prestigious contract from the US Air Force for a pilot program on Edwards Air Force Base in California.
“The Transporters will dramatically improve the efficiency and safety of last- and middle-mile delivery services and make deliveries affordable for everyone and everything from electronics and auto parts to groceries and medical supplies,” added Laury.
Ouster’s Key Use Cases
Ouster, Inc. (NYSE: OUST) (“Ouster” or the “Company”), a leading provider of high-resolution digital lidar sensors for the automotive, industrial, smart infrastructure, and robotics industries, demoed key use cases and new sensors on the floor of CES 2022 in Las Vegas.
Industry observers have named lidar adoption as a trend to watch at CES, citing rapidly growing demand due to its numerous applications. Ouster and its partners are showcasing a strong roster of digital lidar applications that bring increased safety, efficiency, and sustainability to their end-markets.
Ouster is exhibiting its recently unveiled DF series solid-state lidar sensors for high-volume automotive production programs, including Ouster’s breakthrough long-range sensor for automated driving and collision avoidance. Ouster is also featuring its OS series scanning lidar powered by its new L2X chip in simulated fog, rain, and vibration test conditions to demonstrate performance, reliability, and durability in inclement weather and challenging operational environments.