NVIDIA GTC Automotive News: DeepMap, StradVision and More

At the NVIDIA GTC Conference today, there were announcements from DeepMap and StradVision, along with new products from NVIDIA.

DeepMap HDR

DeepMap, a global leader in autonomous driving technology, today announced DeepMap HDR™ (High-Definition Reference), a service for companies that are building hands-free Level 2+ driving systems using crowd-sourced maps.

Complementing existing perception-based Level 2+ autonomy platforms, DeepMap HDR registers and aligns myriad crowd-sourced perception outputs to generate and update live, high-fidelity maps with absolute accuracy and better relative accuracy.
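The registration idea can be illustrated with a toy sketch. The numbers and the simple-averaging approach below are illustrative assumptions, not DeepMap's actual pipeline: if many crowd-sourced vehicles each report a noisy position for the same landmark, fusing the registered observations shrinks the error roughly with the square root of the number of passes.

```python
import random
import statistics

# Hypothetical illustration (not DeepMap's algorithm): each pass of a
# crowd-sourced vehicle reports a noisy position for the same lane marker.
# Registering many passes against a common reference and averaging them
# reduces the error roughly as 1/sqrt(number of passes).

random.seed(42)
TRUE_POSITION = 100.0          # assumed ground-truth position, metres
SENSOR_NOISE_SIGMA = 0.5       # assumed per-observation noise, metres

def observe() -> float:
    """One noisy crowd-sourced observation of the landmark."""
    return TRUE_POSITION + random.gauss(0.0, SENSOR_NOISE_SIGMA)

def fused_estimate(n_passes: int) -> float:
    """Fuse n_passes observations by simple averaging."""
    return statistics.fmean(observe() for _ in range(n_passes))

single = abs(observe() - TRUE_POSITION)
fused = abs(fused_estimate(400) - TRUE_POSITION)
print(f"single-pass error: {single:.3f} m, fused error: {fused:.3f} m")
```

In practice a production system would solve a joint alignment problem rather than average independently, but the statistical intuition is the same: more registered passes, tighter relative accuracy.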

James Wu, DeepMap Co-Founder and CEO, said, “DeepMap HDR solves a critical piece of the puzzle for companies seeking to validate and improve crowd-sourced mapping data. We developed this service to enable our customers to offer safe, reliable, and high-performance hands-free driving, while expanding the Operational Design Domain (ODD) of next-generation consumer vehicles.”

The announcement was made on the opening day of the NVIDIA GPU Technology Conference (GTC), where DeepMap Co-Founder and CTO Mark Wheeler is presenting on “Future-Proof Mapping for Level 2+ Autonomy and Beyond.” The on-demand session (A21158) will be available on the GTC website on October 5 at 9am PDT, following NVIDIA CEO Jensen Huang’s keynote.

StradVision ADAS

StradVision revealed its new Advanced Driver-Assistance Systems (ADAS) solution for automotive surround view monitoring at NVIDIA’s 11th GPU Technology Conference (GTC) 2020.

StradVision Platform Engineer Kukhyun Cho’s session will explain how the company’s flagship product SVNet works with Surround View Monitors (SVMs) to form an accurate, 360-degree visualization of a vehicle’s environment. Through a process called Edge Blending, the image edges from front, rear, left, and right cameras are seamlessly fused into one combined image.
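As a rough illustration of the cross-fade idea behind Edge Blending, a minimal sketch with made-up pixel values (not StradVision's implementation): where two adjacent cameras overlap, their pixel intensities can be fused with a linear alpha ramp so the seam disappears in the stitched surround view.

```python
# Hypothetical edge-blending sketch: cross-fade the overlap region between
# two adjacent surround-view cameras with a linear alpha ramp.

def blend_overlap(left_strip, right_strip):
    """Blend two same-length rows of pixel intensities from adjacent cameras.

    Alpha ramps from 1.0 (fully the first camera) to 0.0 (fully the
    second camera) across the overlap region.
    """
    n = len(left_strip)
    blended = []
    for i, (l, r) in enumerate(zip(left_strip, right_strip)):
        alpha = 1.0 - i / (n - 1)          # 1.0 -> 0.0 across the overlap
        blended.append(alpha * l + (1.0 - alpha) * r)
    return blended

# Overlapping strip as seen by the front camera and the left camera:
front = [200, 200, 200, 200, 200]
left = [100, 100, 100, 100, 100]
print(blend_overlap(front, left))  # -> [200.0, 175.0, 150.0, 125.0, 100.0]
```

A real stitcher also warps each camera's fisheye image into a common top-down view before blending; this sketch only shows the fade that hides the seam.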

This vision solution enables ADAS functions such as Automated Valet Parking (AVP) or Advanced Parking Assist (APA), using object detection, distance estimation, free space detection, and parking space detection.

Cho will also expand on how StradVision integrates six SVNet networks with SVM on NVIDIA’s Jetson Xavier system-on-chip (SoC) using TensorRT, a software development kit for deep learning inference. Known for computing capabilities well suited to deep learning networks, the Xavier AI platform enables SVNet to run the most advanced automotive Level 2 features while maintaining a footprint small enough not to overwhelm a vehicle’s ADAS hardware.

SVNet is lightweight software that allows vehicles to accurately detect and identify objects such as other vehicles, lanes, pedestrians, animals, free space, traffic signs, and traffic lights, even in harsh weather or poor lighting.

The software relies on deep learning-based embedded perception algorithms that, compared with competing solutions, are more compact and require dramatically less memory and power to run. SVNet supports more than 14 hardware platforms and can be customized and optimized for other hardware systems thanks to StradVision’s patented Deep Neural Network-enabled technology.

StradVision’s software is currently deployed in 8.8 million vehicles worldwide, including SUVs, sedans, trucks, and self-driving buses. The company maintains partnerships with leading global automotive Tier 1 suppliers and five of the world’s top auto OEMs; its global partners include Aisin Group, Hyundai Motor Group, LG Electronics, Texas Instruments, Renesas, Qualcomm, Xilinx, Socionext, Ambarella, and BlackBerry QNX.

StradVision has obtained certifications including China’s Guobiao, the coveted ASPICE CL2 (Automotive Software Process Improvement and Capability Determination, Capability Level 2) certification, and most recently the internationally recognized ISO 9001:2015. The company also won the Grand Prize in the Electric/Electronic Category at the 14th Korea Patent Excellence Awards.

Cloud for VR

NVIDIA and AWS are bringing the future of XR streaming to the cloud.

Announced today, the NVIDIA CloudXR platform will be available on Amazon EC2 P3 and G4 instances, which support NVIDIA V100 and T4 GPUs, allowing cloud users to stream high-quality immersive experiences to remote VR and AR devices.

The CloudXR platform includes the NVIDIA CloudXR software development kit, NVIDIA Virtual Workstation software and NVIDIA AI SDKs to deliver photorealistic graphics, with the mobile convenience of all-in-one XR headsets. XR is a collective term for VR, AR and mixed reality.

With the ability to stream from the cloud, professionals can now easily set up, scale and access immersive experiences from anywhere — they no longer need to be tethered to expensive workstations or external VR tracking systems.

Lucid Motors recently announced the new Lucid Air, a powerful and efficient electric vehicle, and is developing a virtual design showroom for it using the CloudXR platform and a custom implementation of the ZeroLight platform. By streaming the experience from AWS, shoppers can enter the virtual environment and explore the Air’s advanced features.

NVIDIA Maxine for Streaming Video

NVIDIA announced the NVIDIA Maxine platform, which provides developers with a cloud-based suite of GPU-accelerated AI video conferencing software to enhance streaming video — the internet’s No. 1 source of traffic.

NVIDIA Maxine is a cloud-native streaming video AI platform that makes it possible for service providers to bring new AI-powered capabilities to the more than 30 million web meetings estimated to take place every day. Video conference service providers running the platform on NVIDIA GPUs in the cloud can offer users new AI effects — including gaze correction, super-resolution, noise cancellation, face relighting and more.

Because the data is processed in the cloud rather than on local devices, end users can enjoy the new features without any specialized hardware.

“Video conferencing is now a part of everyday life, helping millions of people work, learn and play, and even see the doctor,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA Maxine integrates our most advanced video, audio and conversational AI capabilities to bring breakthrough efficiency and new capabilities to the platforms that are keeping us all connected.”

Breakthrough AI Efficiency Slashes Bandwidth to Boost Call Quality

The NVIDIA Maxine platform dramatically reduces the bandwidth required for video calls. Instead of streaming every pixel of the frame, the AI software analyzes the key facial points of each person on a call and then intelligently re-animates the face in the video on the receiving side. This makes it possible to stream video with far less data flowing back and forth across the internet.

Using this new AI-based video compression technology running on NVIDIA GPUs, developers can reduce video bandwidth consumption to one-tenth of what the H.264 streaming video compression standard requires. This cuts costs for providers and delivers a smoother video conferencing experience for end users, who can enjoy more AI-powered services while streaming less data on their computers, tablets and phones.
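A back-of-the-envelope calculation shows why sending facial keypoints instead of pixels saves so much bandwidth. All figures below are illustrative assumptions, not NVIDIA's published numbers:

```python
# Toy bandwidth comparison: a typical H.264 video-call stream vs. a stream
# of facial keypoints. Every constant here is an assumption for illustration.

H264_BITRATE_BPS = 1_500_000   # assumed typical 720p video-call bitrate

N_KEYPOINTS = 68               # a common facial-landmark count (assumed)
BYTES_PER_KEYPOINT = 2 * 4     # x, y coordinates as 32-bit floats
FPS = 30                       # assumed frame rate

keypoint_bps = N_KEYPOINTS * BYTES_PER_KEYPOINT * 8 * FPS
ratio = H264_BITRATE_BPS / keypoint_bps
print(f"keypoint stream: {keypoint_bps} bit/s, "
      f"~{ratio:.0f}x less than H.264 at {H264_BITRATE_BPS} bit/s")
```

Even this naive uncompressed keypoint stream comes in around an order of magnitude below the assumed H.264 bitrate, which is consistent with the one-tenth figure cited above; the heavy lifting is then the GPU-side re-animation of the face at the receiver.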

AI Features Improve Video Conferencing Experiences from NVIDIA

New breakthroughs by NVIDIA researchers that will be included in Maxine make video conferencing feel more like face-to-face conversation. Video conference service providers will be able to take advantage of NVIDIA research in GANs, or generative adversarial networks, to offer a variety of new features.

For example, face alignment enables faces to be automatically adjusted so that people appear to be facing each other during a call, while gaze correction helps simulate eye contact, even if the camera isn’t aligned with the user’s screen. With video conferencing growing by 10x since the beginning of the year, these features help people stay engaged in the conversation rather than looking at their camera.

Developers can also add features that allow call participants to choose their own animated avatars with realistic animation automatically driven by their voice and emotional tone in real time. An auto frame option allows the video feed to follow the speaker even if they move away from the screen.

Using conversational AI features powered by the NVIDIA Jarvis SDK, developers can integrate virtual assistants that use state-of-the-art AI language models for speech recognition, language understanding and speech generation. The virtual assistants can take notes, set action items and answer questions in human-like voices. Additional conversational AI services such as translations, closed captioning and transcriptions help ensure participants can understand what is being discussed on the call.

NVIDIA EGX Edge AI Platform

NVIDIA today announced widespread adoption of the NVIDIA EGX™ edge AI platform by the world’s leading tech companies, bringing a new wave of secure, GPU-accelerated software, services and servers to enterprise and edge data centers.

Hundreds of vision AI, 5G, CloudRAN, security and networking companies are teaming with major server manufacturers, including Dell Technologies, Inspur, Lenovo and Supermicro, as well as leading software infrastructure providers, including Canonical, Cloudera, Red Hat, SUSE and VMware, to leverage the NVIDIA EGX platform to help businesses bring AI to the edge.

The world’s largest industries — manufacturing, healthcare, retail, logistics, agriculture, telco, public safety and broadcast media — are able to benefit from the EGX platform to quickly and efficiently deploy AI at scale.