MiSci Partners with Ford
To educate the community on the future of transportation, the Michigan Science Center (MiSci) partnered with Ford Motor Company to create MiSci’s first autonomous vehicle exhibit. Built by Ford engineers and developers working closely with MiSci, the one-of-a-kind exhibit resembles a vehicle of the future that MiSci guests can engage with and explore inside to learn about autonomous vehicles through interactive displays and a simulation.
“A key part of Ford’s efforts to build a self-driving business across this country is educating people on how the vehicles operate autonomously, as well as the benefits they provide,” said John Rich, Director of Autonomous Vehicle Technology, Ford Autonomous Vehicles LLC. “Through this interactive experience at the Michigan Science Center, our goal is to build trust in self-driving technology and inspire the next generation of engineers, scientists and mathematicians to pursue a career in transportation.”
The exhibit will allow MiSci guests to learn how autonomous vehicles operate and make decisions without a human driver, using data collected from 3D maps, cameras, sensors and LiDAR. It also includes an interactive LiDAR simulation to show how an autonomous vehicle sees the environment around it, such as pedestrians and other cars, as it drives. MiSci guests will even be able to take a photo of their experience to keep and share digitally.
“The Michigan Science Center is honored to work with Ford to bring this unique learning experience to our guests,” said Christian Greer, president and CEO of the Michigan Science Center. “The Ford Autonomous Vehicle exhibit demonstrates how science and technology are being used in Michigan to transform the automotive industry and the future of driving.”
Ford is working closely with its self-driving technology partner Argo AI to test self-driving vehicles on public roads in Metro Detroit, including the Corktown neighborhood, among other U.S. cities. Ford also plans to launch a self-driving commercial service in Austin, Miami and Washington D.C. to provide a mobility solution that seeks to make people’s lives better.
The Ford Autonomous Vehicle exhibit joins the Michigan Science Center’s existing 220+ hands-on exhibits, live shows, Spark!Lab from the Smithsonian, Kids Town and more. Visit mi-sci.org to learn about MiSci’s Summer of Science, ECHO Distance Learning, and science experiments you can try at home.
StradVision Launches ALT
StradVision, a leading innovator in Advanced Driver Assistance Systems (ADAS) and Autonomous Vehicles with its AI-based camera perception software, has launched a cloud-based Auto Labeling Tool (ALT) that works with the company’s pioneering SVNet solution to quickly and accurately identify potentially hazardous objects and road conditions.
Designed to be used with SVNet, ALT promises to usher in a new era for data labeling, leaving behind many of the drawbacks and risks of conventional manual data labeling solutions.
ALT takes advantage of the software’s deep learning-based embedded perception algorithms to allow vehicles to detect and recognize objects on the roads, such as other vehicles, lanes, pedestrians, animals, free space, traffic signs, and lights, even in harsh weather conditions or poor lighting.
Compared with competitors, SVNet is compact, requires dramatically less memory capacity to run, and consumes less electricity. It can also be customized for any hardware system thanks to StradVision’s patented, cutting-edge Deep Neural Network-enabled software.
To achieve surround vision, SVNet’s camera and deep learning-based capabilities work seamlessly with other sensors such as LiDAR and RADAR to process collected road data with high speed and accuracy.
Data crunching for ADAS and autonomous vehicles
For Artificial Intelligence (AI) to be effective in object detection and recognition, the task of gathering and processing data is as important as developing the software.
To bring top-level perception AI to market requires significant investment – but so does data acquisition and labeling. Detecting and tagging data samples involves a detailed training process for all machine learning software, often requiring expensive and time-consuming manual input from human data labelers.
A high level of human contribution also raises the risk of human error, particularly as ADAS data labeling is a repetitive but detail-oriented task.
StradVision, however, solves this problem for ADAS and autonomous vehicles with ALT and the patented deep-learning SVNet.
StradVision’s ALT system connects the dots between data recorded by a vehicle and its AI software.
For each frame of image or video recorded by the vehicle’s ADAS, ALT labels every object in the frame and sorts it into one of three categories: objects (e.g., vehicles, pedestrians, traffic lights, road signs and other static objects), lanes (e.g., lanes of traffic in the immediate surroundings) and segmentation (e.g., road surface or free road space).
These labeled data points are in turn compiled into a knowledge bank of information as the vehicle’s AI system continues to learn.
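The three-way split described above can be sketched as a simple data model. This is illustrative only; the class and field names are assumptions for the sketch, not StradVision’s actual schema:

```python
from dataclasses import dataclass
from enum import Enum


class LabelCategory(Enum):
    """The three ALT label categories described above."""
    OBJECT = "object"              # vehicles, pedestrians, traffic lights, signs
    LANE = "lane"                  # lanes of traffic in the immediate surroundings
    SEGMENTATION = "segmentation"  # road surface or free road space


@dataclass
class FrameLabel:
    """One labeled region in a single recorded frame (hypothetical structure)."""
    frame_id: int
    category: LabelCategory
    class_name: str  # e.g. "pedestrian", "ego_lane", "free_space"


def group_by_category(labels):
    """Compile per-frame labels into a per-category bank,
    mirroring how labeled data points accumulate as the AI learns."""
    bank = {category: [] for category in LabelCategory}
    for label in labels:
        bank[label.category].append(label)
    return bank
```

A frame with one pedestrian and one lane marking would yield one entry in the object bank and one in the lane bank.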
Using SVNet as a template, ALT can drastically scale up data labeling and AI optimization through 24/7 data processing on Graphics Processing Units (GPUs).
Once ALT is deployed, it automatically annotates and labels 97% of objects at eight times the speed of a human being and at a fraction of the cost – removing the need for a large team to spend hundreds of hours correcting objects identified by a vehicle’s AI system.
Where human intervention is required, a small team of experts can make adjustments to guarantee data quality and reduce any potential incidents.
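That division of labor — automatic annotation for most objects, a small expert team for the rest — is commonly implemented by routing low-confidence detections to human review. A minimal sketch follows; the threshold value and function names are assumptions for illustration, not StradVision’s published interface:

```python
def triage_detections(detections, auto_accept_threshold=0.9):
    """Split model detections into auto-accepted labels and a small
    human-review queue, mirroring the roughly 97%/3% split described above.

    Each detection is a dict with at least a "confidence" score in [0, 1].
    """
    accepted, review_queue = [], []
    for detection in detections:
        if detection["confidence"] >= auto_accept_threshold:
            accepted.append(detection)      # labeled automatically
        else:
            review_queue.append(detection)  # adjusted by human experts
    return accepted, review_queue
```

In practice the threshold would be tuned per object class so that the review queue stays small without letting labeling errors through.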
Augmenting the AI capabilities of Tier 1 and OEM partners’ offerings
StradVision is pleased to offer ALT to its automotive Tier 1 and OEM partners, so they can fully utilize SVNet in-house with their own data.
Automakers can use ALT with SVNet to expedite the development and deployment of their ADAS and autonomous vehicles economically, swiftly, and securely.
StradVision’s software has obtained China’s Guobiao certification and the coveted ASPICE CL2 (Automotive Software Process Improvement and Capability Determination, Capability Level 2) certification. It is being deployed in 9 million vehicles – such as SUVs, sedans, trucks and self-driving buses – worldwide in partnership with five of the world’s top auto OEMs. StradVision’s global partners also include NVIDIA, Hyundai, LG Electronics, Texas Instruments, Renesas, and Aisin Group.
DENSO’s Pittsburgh Innovation Lab
DENSO Corporation announced it has established the Pittsburgh Innovation Lab, a new U.S. R&D center designed to strengthen open innovation and enhance technology development that enables automated driving. DENSO began operations in Pittsburgh in July 2020.
DENSO’s development of automated driving technologies stems from its broader mission to offer safe and secure means of transportation for all people globally. At the Pittsburgh Innovation Lab, the company will conduct research to achieve Level-4 automated driving and develop the elemental technologies, including AI, to make it possible. The lab will work in collaboration with local universities and companies in Pittsburgh, which is a growing tech hub.
Ibeo Supplies Solid-State LiDAR for GWM
Ibeo Automotive Systems GmbH has become the world’s first series supplier of solid-state LiDAR for Great Wall Motor (GWM), China’s largest SUV and pick-up truck manufacturer. The newly developed ibeoNEXT Solid State LiDAR will be used in GWM’s WEY SUV models. Ibeo has commissioned ZF Friedrichshafen AG to produce the sensors and the control unit, and GWM has commissioned Haomo Technology Co., Ltd., a GWM subsidiary, to develop the Level 3 autonomous driving system. China is the world’s largest car market, with the fastest-growing segment for automated driving.
Following the signing of a letter of intent in 2019, Ibeo spent a year in pre-development with Great Wall Motor. The official project started when both parties signed the contract on July 13, 2020. Ibeo will supply a system that enables a Level 3 highway pilot, allowing conditionally automated driving over longer distances on the highway. The system comprises the new ibeoNEXT Solid State LiDAR, a control unit and perception software that recognizes objects, enabling safe driving in interaction with other systems. GWM will use Ibeo technology in future series-production models of its premium WEY SUV, with the LiDAR system entering series production in 2022. LiangDao Intelligence is responsible for the testing and validation of the full set of LiDAR systems.
JLR Reduces Car Sickness
Jaguar Land Rover is pioneering software that will reduce motion sickness by adapting the driving style of future autonomous vehicles, helping to provide customers with the most refined and comfortable ride possible.
Advanced machine learning ensures the car can optimise its driving style based on data gathered from every mile driven by the autonomous fleet.
This technology can then be used to teach each Jaguar and Land Rover vehicle how to drive autonomously, while maintaining the individual characteristics of each model, whether that’s the thoroughbred performance of a Jaguar or the legendary capability of a Land Rover. All of this supports Jaguar Land Rover’s continued development of the ultimate cabin experience in an autonomous, electric and connected future.
Motion sickness, which affects more than 70 per cent of people, is often caused when the eyes receive information that differs from what is sensed by the inner ear, skin or body – commonly when reading on long journeys in a vehicle. Using the new system, acceleration, braking and lane positioning – all contributory factors to motion sickness – can be optimised to avoid inducing nausea in passengers.
As a result of the project, engineers are now able to develop more refined advanced driver-assistance systems (ADAS) features on future Jaguar and Land Rover models, such as adaptive cruise control and lane monitoring systems. The in-depth knowledge is helping Jaguar Land Rover design and manufacture capable and advanced vehicles, both now and in the future.
This is another step for Jaguar Land Rover on its journey to Destination Zero: an ambition to make our societies safer and healthier and our environments cleaner through relentless innovation. With the mission of raising the quality of future urban living, Jaguar Land Rover has also revealed Project Vector, an advanced autonomy-ready concept for future mobility.
In a post-COVID-19 world, where a ‘new normal’ is emerging, customer expectations of private transport are changing, and the focus will be on safe, clean mobility in which personal space and hygiene carry a premium. Jaguar Land Rover is developing new technologies and materials to meet these expectations, and today’s vehicles are already designed to help improve passenger wellbeing, including a Driver Condition Monitor and antimicrobial wireless device charging. In addition, features such as cooling seats, ambient lighting and multiple seat configurations can help reduce the likelihood of motion sickness.
Jaguar and Land Rover models offer adaptive dynamics across their ranges, which helps filter out the low-frequency motion from the road surface that can lead to nausea. By altering ride settings every 10 milliseconds, the system ensures passengers always experience high levels of comfort while maintaining the dynamic performance DNA of every Jaguar and Land Rover model.
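A 10-millisecond update cycle amounts to a 100 Hz control loop. The sketch below shows the general shape of such a loop – a low-pass estimate of body motion driving a damping command – but the filter constant and damping mapping are illustrative assumptions, not Jaguar Land Rover’s implementation:

```python
def update_damping(prev_estimate, accel_sample, dt=0.01, tau=0.5):
    """One 10 ms (100 Hz) step of an adaptive ride-settings loop.

    A first-order low-pass filter isolates the slow body motion
    associated with nausea; the filtered magnitude is then mapped
    to a normalized damping command in [0, 1].
    """
    alpha = dt / (tau + dt)  # low-pass coefficient for a 10 ms step
    estimate = prev_estimate + alpha * (accel_sample - prev_estimate)
    damping_command = min(1.0, abs(estimate))  # clamp to the valid range
    return estimate, damping_command
```

Running this at 100 Hz means sustained low-frequency motion gradually raises the damping command, while brief high-frequency bumps barely move the estimate.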