L3 Chips from Ouster
Ouster, Inc. (NYSE: OUST) (“Ouster” or “the Company”), a leading provider of high-resolution digital lidar sensors for the automotive, industrial, robotics, and smart infrastructure industries, announced the launch of its newest OS series scanning sensors, REV7, powered by its next-generation L3 chip. REV7 features the all-new OSDome sensor, as well as upgraded OS0, OS1, and OS2 sensors that deliver double the range, enhanced object detection, increased precision and accuracy, and greater reliability.
“The promise of digital lidar is that year after year, with new chips like L3, our customers benefit from an ever-improving lineup of sensors that follows the exponential performance path of Moore’s Law,” said Ouster CEO Angus Pacala. “Digital lidar never stops improving – and doubling the range of our existing sensors while adding the OSDome is truly unprecedented, and is only possible with a digital architecture. REV7 is our biggest leap forward in performance and features yet, and positions us to serve a wider set of use-cases and win new customers in all of our target verticals.”
The Next-Generation Digital Lidar L3 Chip
REV7 is powered by Ouster’s next-generation L3 chip, a fully custom and proprietary system-on-chip that brings back-side-illumination technology, the same imaging technology that revolutionized the digital camera industry, to the high-performance lidar industry for the very first time. The L3 chip boasts 125 million transistors and a maximum computational power of 21.47 GMACS, bringing even better digital signal processing and more features to customers than ever before. With the improved on-chip processing, the L3 is capable of counting approximately 10 trillion photons per second and produces up to 5.2 million points per second.
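The quoted point rate follows directly from a spinning sensor’s channel count, horizontal resolution, and scan rate. As a rough sanity check (assuming a 128-channel, 2048-column, 20 Hz configuration, figures not stated in this article):

```python
# Rough sanity check of the claimed ~5.2 million points per second.
# Assumed configuration (not stated above): 128 vertical channels,
# 2048 horizontal columns per revolution, 20 revolutions per second.
channels = 128
columns_per_rev = 2048
revs_per_second = 20

points_per_second = channels * columns_per_rev * revs_per_second
print(points_per_second)  # 5242880, i.e. ~5.2 million points/s
```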
Introducing the REV7 OS Series
Ouster’s REV7 sensors see more than ever before, over longer ranges, and with greater precision for improved mapping, more accurate obstacle detection, and safer autonomous operations both indoors and outdoors. Upgrades to the REV7 sensors increase their maximum operating temperature, reduce their power draw, and double their resistance to shock and vibration while maintaining the same small, lightweight, and power-efficient form factor of previous generations. With approximately 95% automotive-grade components, Ouster’s REV7 sensors are purpose-built for production-scale fleets.
REV7 OS Series Highlights:
- Powered by new backside-illuminated L3 chip with 10x increase in photon sensitivity
- 2x range increase across all sensors, up to 200m at 10% reflectivity
- High resolution for short, mid, and long-range applications
- 10 °C increase in maximum operating temperature
- 1000BASE-T1 automotive Ethernet now available
With the extended range of REV7, we are unlocking an all-new category of long-range and higher-speed use-cases, essential for many robotaxi, shuttle, bus, and truck operators. Not only does the REV7 OS2 achieve over 200 meters range on 10% reflective objects, but it now has a maximum range of over 400 meters, opening up the unique ability to track vehicles and objects beyond a quarter mile in all directions. With the 10x signal improvement of the L3 chip, REV7 can better detect objects closely surrounding the vehicle as well as at a distance. Automotive and industrial customers alike can expect incredible detection performance on challenging objects such as tires, black cars, cables, fencing, or the forks on a forklift. These same benefits make REV7 an excellent fit for mapping.
The All-New OSDome Sensor
The OSDome introduces a new hemispheric field-of-view for comprehensive coverage and detection in industrial and smart infrastructure applications. Packaged in a uniquely small form-factor, the OSDome can be installed discreetly in the body of a vehicle or on the ceiling of a building. By removing the blind spot on the top of the sensor, customers can now monitor wide zones with a single sensor, simplifying installations and reducing system complexity.
OSDome Sensor Highlights:
- Hemispherical 180° FOV for floor-to-ceiling coverage
- 128 channels of vertical resolution
- 20m range (10% reflectivity) for wide area coverage
- High-precision dual returns for enhanced object detection
- Privacy-safe, accurate people tracking
Within the industrial vertical, the OSDome delivers floor-to-ceiling vertical visibility to autonomous mobile robots (AMRs) operating in warehouses, which have historically struggled to detect objects directly below or overhead. In the smart infrastructure market, the OSDome is designed to augment or replace CCTV and thermal cameras. Whether it’s monitoring a retail store for occupancy and dwell time or a secure data center for intruders in a keep-out zone, the OSDome offers the ability to cover a wide area with accurate 3D object classification and tracking.
Ouster’s REV7 sensors powered by the L3 chip are available to order today, with the first units expected to ship to customers in Q4 2022.
Baraja Spectrum-Scan LiDAR Partners with OEM
Baraja, creator of the breakthrough Spectrum-Scan™ LiDAR technology for autonomous vehicles, has entered into an advanced development agreement with a major automotive original equipment manufacturer (OEM) to develop its next-generation Spectrum HD25 LiDAR product specifically for automotive integration. Baraja has entered the agreement alongside its partner, Tier 1 automotive supplier Veoneer, to accelerate the scale required for automotive integration.
The advanced development agreement is a major step forward for automotive integration of the Spectrum HD25 product, bringing together world leaders in automotive safety and demonstrating that Spectrum HD25 has Tier 1, OEM, and partner support. Under the agreement, Baraja will enter the next stage of development for Spectrum HD25.
Spectrum HD25 marks a generational leap for automotive LiDAR. It is built on Baraja’s proprietary Spectrum-Scan™ solid-state scanning platform, designed from the ground up to rethink how cars see the world around them and enable true autonomy. The system delivers the range, resolution, and performance required for true autonomy without the trade-offs faced by legacy LiDAR technologies. Spectrum HD25 is the world’s first LiDAR system to combine per-point Doppler capability at the hardware level with Spectrum-Scan™ and the Random Modulation Continuous Wave (RMCW) ranging method, delivering unparalleled performance and accuracy at range and speed.
RoboSense HQ in Plymouth, Michigan
RoboSense, a world-leading provider of Smart LiDAR Sensor Systems, today announced the establishment of its North American headquarters in Plymouth, Michigan, USA. Peipei Zhao, Vice President of Strategy, representing RoboSense headquarters; Ken English, RoboSense Sr. Director of Automotive Business; Kurt Heise, Plymouth Township Supervisor; and Luz Meza, Wayne County Director of Economic Development attended the opening ceremony and delivered speeches. Representatives from the Charter Township of Plymouth, the Plymouth Community Chamber of Commerce, the Wayne County Economic Development Department, the Michigan Economic Development Corporation, the Michigan Department of Transportation, and North American car companies that RoboSense is working with to deliver autonomous solutions jointly witnessed the opening.
TIER IV Sells Camera Starter Kit
TIER IV, an open-source autonomous driving startup, announced that it will launch sales of the C1 Camera Image Recognition Starter Kit (hereinafter “the Kit”) through C1 Camera distribution partners. The Kit combines the Robotics/Automotive HDR Camera C1 (hereinafter “the C1 Camera”), for which mass production and shipment are slated to begin in November 2022, and the NVIDIA Jetson AGX Orin developer kit (hereinafter “the Jetson AGX Orin”). TIER IV will present a live demonstration of the Kit at ROSCon 2022 from October 19 to 21, 2022, at the Kyoto International Conference Center.
TIER IV’s C1 camera offers high dynamic range (HDR) of up to 120 dB, automotive-grade hardware, and excellent connectivity with a wide variety of applications. It is ideal not only for autonomous driving but also for fields such as autonomous mobile robots, security, monitoring, and more. The NVIDIA Jetson AGX Orin boasts processing power of up to 275 trillion operations per second, up to eight times the speed of its predecessor, the Jetson AGX Xavier, yet offers the same palm-sized form factor and pin compatibility at similar pricing. It is expected to energize the development of advanced robotics, autonomously operating machines, next-generation embedded systems, and edge-computing products.
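A dynamic-range figure in decibels maps to a linear intensity ratio via the standard 20·log10 convention for image sensors; a quick illustration of what 120 dB means (plain dB arithmetic, not TIER IV code):

```python
import math

# 120 dB of dynamic range expressed as the linear ratio between the
# brightest and darkest intensities the sensor can capture in one frame.
dynamic_range_db = 120.0
linear_ratio = 10 ** (dynamic_range_db / 20)  # 20*log10 convention for sensors
print(f"{linear_ratio:,.0f}:1")  # 1,000,000:1
```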
Through previous collaboration between TIER IV and NVIDIA, C1 camera drivers supporting the Jetson AGX Orin are already provided as open-source software. Since the two products are expected to be used in combination in a wide range of applications, TIER IV will start offering the products in a single-package kit including accessories such as cables (selectable from among multiple choices). This will allow users to start developing image-recognition applications as soon as they purchase the Kit.
At ROSCon 2022, a live demonstration of the Kit will showcase its recognition performance, functionality, and scalability. HDR images acquired from multiple C1 cameras using the Kit will be fed into a deep-learning image-recognition application, and the recognition results will be displayed on a screen.
VectorNav Tech GNSS for IAC
VectorNav Technologies, a world leader in inertial navigation solutions, announced that it has been chosen to be the exclusive supplier of GNSS/INS systems for the Indy Autonomous Challenge (IAC). VectorNav will supply IAC with its VN-310 Dual Antenna GNSS/INS with Real-Time Kinematic (RTK) positioning for integration in the Dallara AV-21 racecars, enabling precision navigation and attitude estimation.
Conceived in 2019, the Indy Autonomous Challenge aims to advance technology that can speed the commercialization of fully autonomous vehicles and deployments of advanced driver-assistance systems (ADAS). By tapping into collegiate engineering programs, the challenge seeks to explore the feasibility of large-scale unmanned vehicle racing, while offering a platform for students to excel in Science, Technology, Engineering and Math (STEM) and inspire the next generation of innovators.
VectorNav is providing each of the nine IAC teams with its VN-310 Dual Antenna GNSS/INS. Comprising 3-axis gyroscopes, accelerometers, and magnetometers, as well as two GNSS receivers that enable GNSS-Compassing, the VN-310 provides milliradian-level heading, pitch, and roll estimates at rates of up to 400 Hz. The VN-310’s GNSS receivers are capable of RTK positioning, enabling position estimation down to centimeter-level accuracy.
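GNSS-compassing derives heading from the baseline vector between the two antennas rather than from magnetometers or vehicle motion. A minimal sketch of the idea, using hypothetical east-north-up (ENU) antenna positions (not VectorNav’s implementation):

```python
import math

def gnss_compass_heading(ant_a_enu, ant_b_enu):
    """Heading in degrees clockwise from true north of the baseline
    from antenna A to antenna B, given (east, north, up) positions."""
    d_east = ant_b_enu[0] - ant_a_enu[0]
    d_north = ant_b_enu[1] - ant_a_enu[1]
    return math.degrees(math.atan2(d_east, d_north)) % 360.0

# A baseline pointing due east yields a 90-degree heading.
print(gnss_compass_heading((0.0, 0.0, 0.0), (1.0, 0.0, 0.0)))  # 90.0
```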
“We are thrilled to be a part of the IAC competition,” said Jeremy Davis, VectorNav Director of Engineering. “IAC’s work is pushing the boundaries of autonomous vehicle capability and we are confident our inertial navigation systems will contribute to that mission while providing education and entertainment to fans worldwide.”
“The IAC is extremely excited to work with VectorNav as the GNSS/INS provider for the AV-21,” said Paul Mitchell, President, Indy Autonomous Challenge. “Localization and state estimation are the foundation of each team’s autonomy software. With VectorNav’s tactical-grade GNSS/INS systems, teams can operate at high speeds while performing complex maneuvers. Additionally, VectorNav’s technical support has been top-notch from initial integration to on-track usage of their systems.”
The IAC will host its next round of competition on November 11, 2022 at the Texas Motor Speedway in Fort Worth, Texas. VectorNav Technologies will be onsite to provide technical support to each of the IAC teams.
Intempora & Aeva Integration
Intempora, a dSPACE company and pioneer in advanced software solutions for autonomous driving, and Aeva® (NYSE: AEVA), a leader in next-generation sensing and perception systems, announced that Aeva’s Aeries™ 4D LiDAR™ sensors, including Aeries II, have been integrated into Intempora’s RTMaps (Real-Time Multisensor applications) software platform. RTMaps allows engineers to accelerate the development and deployment of next generation automated driving solutions including Advanced Driver Assistance Systems (ADAS) and fully autonomous vehicles.
“Bringing Aeva’s next generation 4D LiDAR to the RTMaps platform is a significant step forward for developers working on the forefront of automated vehicle technology,” said James Reuther, Vice President of Technology at Aeva. “With this integration, they are now able to take advantage of Aeva’s unique capabilities and 4D data in the integration of ADAS and autonomous vehicle platforms.”
Aeva 4D LiDAR sensors use Frequency Modulated Continuous Wave (FMCW) technology to deliver unique sensing and perception capabilities not possible with legacy time-of-flight 3D LiDAR sensors to enable the next wave of driver assistance and autonomous vehicle capabilities, including:
- Instant Velocity Detection: Directly detect velocity for each point in addition to 3D position to perceive where things are and precisely how fast they are moving.
- Ultra Long Range Performance: Detect and track dynamic objects such as oncoming vehicles and other moving objects at distances up to 500 meters.
- Ultra Resolution™: A real-time camera-level image providing up to 20 times the resolution of 3D LiDAR sensors.
- Semantic Segmentation: Real-time segmentation enables the detection of roadway markings, drivable regions, vegetation, road barriers, as well as detecting road hazards like tire fragments at up to twice the distance of 3D LiDAR sensors.
- 4D Localization™: Per-point velocity data enables real-time vehicle motion estimation with six degrees of freedom to enable accurate vehicle positioning and navigation without the need for additional sensors, like IMU or GPS.
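The 4D Localization™ bullet above rests on a well-known property of Doppler sensing: for stationary scene points, each measured radial velocity equals the negative projection of the ego-velocity onto that point’s ray, so the ego-velocity can be recovered by least squares. A sketch of that generic technique (not Aeva’s actual pipeline):

```python
import numpy as np

def estimate_ego_velocity(directions, radial_velocities):
    """Least-squares ego-velocity from per-point Doppler measurements.

    directions: (N, 3) unit vectors from the sensor to static points.
    radial_velocities: (N,) measured radial velocities (positive = receding).
    For a static world, v_radial_i = -directions_i . v_ego.
    """
    v_ego, *_ = np.linalg.lstsq(
        directions, -np.asarray(radial_velocities), rcond=None
    )
    return v_ego

# Simulate a sensor moving at 10 m/s along +x observing static points.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
true_v = np.array([10.0, 0.0, 0.0])
measured = -dirs @ true_v
print(estimate_ego_velocity(dirs, measured))  # ~[10, 0, 0]
```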
“Aeva’s 4D LiDAR technology with instant velocity detection allows automated vehicles to detect and classify objects with higher confidence across longer ranges,” said Nicolas du Lac, CEO at Intempora. “We are pleased to enable Aeva on the RTMaps platform and provide our users and customers with access to the next generation of sensing and perception technology.”
RTMaps is a reliable middleware with a complete software stack to develop, test, and deploy algorithms and advanced software functions for mobility. Trusted by OEMs and Tier 1 suppliers for over 20 years, RTMaps allows developers to easily develop, test, benchmark, and validate multi-sensor applications for ADAS and autonomous vehicles. The software efficiently integrates and manages the large volumes of data generated by sensors such as cameras, radars, lidars, GNSS, and IMUs, particularly in complex real-time systems like autonomous driving, where large amounts of data are collected on the road.
Cognata & Xylon Partner
Cognata Ltd today announced the launch of a new collaboration with Xylon to provide a comprehensive Advanced Driver Assistance System (ADAS) and autonomous vehicle (AV) Hardware-in-the-Loop (HIL) solution.
The all-in-one logiRECORDER Automotive HIL Video Logger by Xylon will use Cognata’s real-time photorealistic simulation platform to provide a cost-effective solution for ADAS and AV validation and verification tests.
Cognata provides a real-time, photorealistic sensor-simulation hardware-in-the-loop (HIL) platform. The platform offers 3D digital-twin environments with real-life traffic agents as moving, interactive objects. The simulation connects directly to Xylon’s logiRECORDER, removing the hassle of complex hardware boxes and improving signal quality compared to past generations.
Xylon’s logiRECORDER improves and accelerates the design and testing of cutting-edge Autonomous Driving (AD) and ADAS Systems with an all-in-one automotive data logger for raw multi-channel video and network data recording, data analysis, and playback of the logged data in HIL simulations.
In the HIL operation mode, the logiRECORDER enables real-time conversion of synthetic data from multiple heterogeneous vehicle sensors modeled within Cognata’s simulation platform, including video cameras, LiDARs, RADARs, thermal cameras, and more, into physical automotive sensory inputs to hardware Electronic Control Units (ECUs). In closed-loop HIL, physical ECU outputs can be converted into simulation inputs that interactively influence the simulated scenarios.
“We bring our cutting-edge simulation platform to the hardware validation space, introducing new capabilities for OEMs and Tier 1s to test their final hardware designs with fast iteration and a new level of quality,” says Danny Atsmon, Cognata’s CEO & Founder.
“Our compact and highly configurable hardware platform quickly adjusts to emerging high-bandwidth vehicle sensors. In synergy with Cognata’s simulation platform, it enables flexible and truly immersive HIL, with system performance typically expected at a much higher price point,” says Davor Kovacec, Xylon’s CEO and Founder.
Technologies such as advanced driver assistance systems, autonomous driving, and collision avoidance systems are being adopted in automobiles to improve safety and ride comfort. HIL testing is done for ECUs, algorithms, and software used in autonomous technology. Camera, RADAR, LiDAR, and other sensors are also tested to validate the sensor data using hardware in the loop test benches. The hardware in the loop market is projected to grow from USD 817 million in 2022 to USD 1,291 million by 2027, at a CAGR of 9.6%.
The major factors driving the growth of the market include technological advancements in electric and autonomous vehicles, faster product development with early stages of testing using hardware in the loop, and growing demand in developing countries.
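The stated 9.6% CAGR is consistent with the 2022 and 2027 endpoints; the standard compound-growth check (plain arithmetic, not from the source):

```python
# Verify the stated CAGR from the 2022 and 2027 market-size endpoints.
start, end, years = 817.0, 1291.0, 5  # USD millions, 2022 -> 2027
cagr = (end / start) ** (1 / years) - 1
print(round(cagr * 100, 1))  # 9.6
```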
AEye Selected by Trucking Partner
AEye, Inc. (NASDAQ: LIDR), a global leader in adaptive, high-performance lidar solutions, announced its selection by an undisclosed trucking platform partner that complements and builds upon AEye’s existing pre-development programs with global trucking OEMs. The trucking platform provider will be revealed at the upcoming F3 Future of Freight Conference, taking place November 1-3 in Chattanooga, Tennessee.