Best of the NVIDIA GPU Technology Conference: Verizon, Constellation, Marvell & More

NVIDIA news at the GPU Technology Conference showed safety improvements and more computing power for self-driving car testing, but also a halt to self-driving tests on public roads for now.

NVIDIA has stopped all tests of self-driving vehicles on public roads after the fatal crash in Arizona.

“The reason we suspended was actually very simple — obviously there’s a new data point as a result of the accident last week,” Chief Executive Jensen Huang said in a Q&A session with the media. “As engineers, we should wait to see if we learn something from that experience.” After the announcement, NVIDIA’s stock price fell 7.8%.

DRIVE Pegasus is already being used in self-driving cars. NVIDIA’s next step is called Orin: the plan is to take the eight chips of two Pegasus systems and consolidate them into two Orin chips. That is the company’s roadmap.


NVIDIA introduced a cloud-based system for testing autonomous vehicles using photorealistic simulation – creating a safer, more scalable method for bringing self-driving cars to the roads.

Speaking at the opening keynote of GTC 2018, NVIDIA founder and CEO Jensen Huang announced NVIDIA DRIVE™ Constellation, a computing platform based on two different servers.

The first server runs NVIDIA DRIVE Sim software to simulate a self-driving vehicle’s sensors, such as cameras, lidar and radar. The second contains a powerful NVIDIA DRIVE Pegasus™ AI car computer that runs the complete autonomous vehicle software stack and processes the simulated data as if it were coming from the sensors of a car driving on the road.

“Deploying production self-driving cars requires a solution for testing and validating on billions of driving miles to achieve the safety and reliability needed for customers,” said Rob Csongor, vice president and general manager of Automotive at NVIDIA. “With DRIVE Constellation, we’ve accomplished that by combining our expertise in visual computing and datacenters. With virtual simulation, we can increase the robustness of our algorithms by testing on billions of miles of custom scenarios and rare corner cases, all in a fraction of the time and cost it would take to do so on physical roads.”

The simulation server is powered by NVIDIA GPUs, each generating a stream of simulated sensor data, which feed into the DRIVE Pegasus for processing.

Driving commands from DRIVE Pegasus are fed back to the simulator, completing the digital feedback loop. This “hardware-in-the-loop” cycle, which occurs 30 times a second, is used to validate that algorithms and software running on Pegasus are operating the simulated vehicle correctly.
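
To make that loop concrete, here is a minimal sketch of a 30-times-per-second hardware-in-the-loop cycle. The class names and payloads (SimServer, PegasusStack) are illustrative stand-ins, not NVIDIA software.

```python
import time

class SimServer:
    """Stand-in for DRIVE Sim: would render photoreal camera/lidar/radar
    frames on GPUs; here it just returns placeholder payloads."""
    def sensor_frames(self, vehicle_state):
        return {"camera": b"...", "lidar": b"...", "radar": b"...",
                "state": dict(vehicle_state)}

class PegasusStack:
    """Stand-in for the DRIVE Pegasus computer running the AV stack."""
    def process(self, frames):
        # A real stack runs perception/planning/control on the simulated
        # sensor data and emits driving commands.
        return {"steering": 0.0, "throttle": 0.1, "brake": 0.0}

def hil_loop(sim, stack, hz=30, steps=300):
    """Closed loop: simulator -> sensors -> AV stack -> commands -> simulator."""
    period = 1.0 / hz
    vehicle_state = {"x": 0.0, "speed": 0.0}
    for _ in range(steps):
        start = time.monotonic()
        frames = sim.sensor_frames(vehicle_state)   # simulated sensors
        commands = stack.process(frames)            # software under test
        vehicle_state["speed"] += commands["throttle"] - commands["brake"]
        vehicle_state["x"] += vehicle_state["speed"] * period
        # Sleep out the remainder of the 1/30 s cycle.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

hil_loop(SimServer(), PegasusStack())
```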

DRIVE Sim software generates photoreal data streams to create a vast range of different testing environments. It can simulate different weather such as rainstorms and snowstorms; blinding glare at different times of the day, or limited vision at night; and all different types of road surfaces and terrain. Dangerous situations can be scripted in simulation to test the autonomous car’s ability to react, without ever putting anyone in harm’s way.
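
A scripted scenario might be parameterized along exactly these axes. The schema below is invented for illustration; DRIVE Sim’s actual scenario format is not described in this article.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical scenario schema, invented for illustration -- not the
# actual DRIVE Sim format.
@dataclass
class Scenario:
    weather: str            # e.g. "rainstorm", "snowstorm", "clear"
    time_of_day: float      # hour of day; low sun angles create glare
    road_surface: str       # e.g. "asphalt_wet", "gravel", "ice"
    hazard: Optional[str]   # scripted event, e.g. "pedestrian_crossing"

# A small test matrix covering weather, glare and scripted hazards.
test_matrix = [
    Scenario("rainstorm", 17.5, "asphalt_wet", "pedestrian_crossing"),
    Scenario("clear", 6.0, "asphalt_dry", None),  # sunrise glare, no hazard
    Scenario("snowstorm", 22.0, "ice", "stalled_vehicle"),
]
```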

“Autonomous vehicles need to be developed with a system that covers training to testing to driving,” said Luca De Ambroggi, research and analyst director at IHS Markit. “NVIDIA’s end-to-end platform is the right approach. DRIVE Constellation for virtually testing and validating will bring us a step closer to the production of self-driving cars.”

DRIVE Constellation will be available to early access partners in the third quarter of 2018.

Smart Cities with NVIDIA & Verizon

Verizon joined 100 other companies already using NVIDIA Metropolis, NVIDIA’s edge-to-cloud video platform for building smarter, faster deep learning-powered applications.

Verizon is a leading technology company with the nation’s most reliable network service. Its Smart Communities group has been busy working with cities to connect communities and set them up for the future, including attaching NVIDIA Jetson-powered smart camera arrays to street lights and other urban vantage points.

The arrays — which Verizon calls video nodes — use Jetson’s deep learning prowess to analyze multiple streams of video data to look for ways to improve traffic flow, enhance pedestrian safety, optimize parking in urban areas, and more.
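
As a rough sketch of the kind of processing such a node performs, the loop below pulls frames from several streams and hands each to a detector stub. The stream URLs and the count_pedestrians function are placeholders; Verizon’s actual models are proprietary.

```python
import cv2  # OpenCV; on a Jetson node the frames come from attached cameras

STREAMS = ["rtsp://node/cam0", "rtsp://node/cam1"]  # placeholder URLs

def count_pedestrians(frame):
    """Stub for a deep-learning detector (on Jetson this would typically
    be a TensorRT-optimized network). Returns a pedestrian count."""
    return 0

captures = [cv2.VideoCapture(url) for url in STREAMS]
for _ in range(300):  # bounded here; a real node runs continuously
    for i, cap in enumerate(captures):
        ok, frame = cap.read()
        if not ok:
            continue
        n = count_pedestrians(frame)
        # A real node aggregates results to tune signal timing, flag
        # unsafe crossings, or report open parking spaces.
        print(f"stream {i}: {n} pedestrians")
```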

Beta tests using proprietary datasets and models generated from neural network training are wrapping up on both coasts. Details of its commercial release are expected soon from Verizon.

Released last year, the NVIDIA Metropolis platform includes tools, technologies and support to build deep learning applications for everything from traffic and parking management to law enforcement and city services.

Marvell Secure Automotive Ethernet

Marvell announced that its industry-first 88Q5050 secure automotive Ethernet switch is integrated into the NVIDIA DRIVE Pegasus platform for autonomous vehicles, making it the first commercially available solution with embedded security built into the core. Marvell’s secure switch handles multi-gigabit traffic, enabling OEM car manufacturers to deliver an in-car network that supports sensor fusion, cameras, safety and diagnostics. Marvell’s embedded security technology helps protect vehicles from cyberattacks that could compromise a safe and seamless driving experience.

The industry-leading Marvell 88Q5050 solution employs a deep packet inspection (DPI) engine and trusted boot functionality to ensure a robust level of security. The switch also supports both blacklisting and whitelisting of addresses on all its Ethernet ports, further hardening it against denial-of-service attacks.
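
The 88Q5050 implements this filtering in silicon; purely to illustrate the whitelist/blacklist logic, here is a toy per-port admission check (the tables and MAC addresses are made up).

```python
# Toy illustration of per-port whitelist/blacklist filtering.
# The real switch does this in hardware; this only shows the decision logic.

whitelist = {1: {"aa:bb:cc:00:00:01"}}   # port -> allowed source MACs
blacklist = {1: {"de:ad:be:ef:00:00"}}   # port -> blocked source MACs

def admit(port: int, src_mac: str) -> bool:
    """Drop blacklisted senders outright; if a whitelist exists for the
    port, admit only listed senders, which limits denial-of-service
    floods from unknown nodes."""
    if src_mac in blacklist.get(port, set()):
        return False
    allowed = whitelist.get(port)
    return allowed is None or src_mac in allowed

assert admit(1, "aa:bb:cc:00:00:01") is True   # whitelisted sender
assert admit(1, "de:ad:be:ef:00:00") is False  # blacklisted sender
```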

The Marvell Ethernet switch solution is AEC-Q100 qualified and can meet the rigorous standards of the industry and withstand harsh automotive environments. It supports multiple integrated 100BASE-T1 PHYs as well as 1000BASE-T1 interfaces, and can connect with Marvell’s previously announced 88Q2112 1000BASE-T1 PHY.

Audi VR Experience

Audi AG demonstrated its consumer virtual reality (VR) experience alongside strategic visualization partner ZeroLight at the GPU Technology Conference.

The Audi VR Experience, which is at the forefront of experiential retail in the automotive market, uses ZeroLight’s advanced visualization solution to render multi-million-polygon models of Audi vehicles. NVIDIA’s VR SLI 2-way multi-GPU technology is used to assign a specific GPU to each eye, scaling performance and reducing latency to deliver a smooth, consistent framerate.
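
Conceptually, VR SLI splits the stereo workload so each eye’s view is rendered by its own GPU in parallel. The sketch below mimics that split with threads and stub render functions; none of this is the actual NvAPI interface.

```python
from concurrent.futures import ThreadPoolExecutor

def render_eye(eye: str, gpu_id: int) -> str:
    """Stub standing in for a per-GPU render of one eye's viewpoint.
    In VR SLI, each eye's draw calls are routed to a dedicated GPU."""
    return f"{eye} frame rendered on GPU {gpu_id}"

# Both eyes render concurrently instead of back-to-back on one GPU,
# which is where the latency reduction comes from.
with ThreadPoolExecutor(max_workers=2) as pool:
    left = pool.submit(render_eye, "left", 0)
    right = pool.submit(render_eye, "right", 1)
    stereo_frame = (left.result(), right.result())
```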

Abaco ImageFlex

Abaco Systems has announced Release 2.0 of its powerful, flexible ImageFlex image processing and visualization toolkit at the GPU Technology Conference. Leveraging the enormous power of GPU technology, ImageFlex provides an easy-to-use API framework to considerably speed and simplify the development, optimization and maintenance of advanced AI applications – especially those targeted at autonomous vehicles.

ImageFlex enables developers of image/video processing and visualization applications on GPUs to be substantially more productive by hiding the complexity of the underlying software layers while maintaining high performance. By providing an OpenGL® abstraction layer (no OpenGL experience is required), it can reduce the number of lines of code required by a factor of five, radically reducing the effort and time needed to create, test and maintain an application. This means faster time-to-market as well as lower development cost.
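
The ImageFlex API itself is not shown in this article; purely as a hypothetical sketch of what an abstraction layer buys, the pipeline wrapper below lets one chained call stand in for the shader, texture and framebuffer boilerplate that raw OpenGL would require (SciPy does the work here in place of a GPU).

```python
import numpy as np
from scipy.ndimage import rotate  # CPU stand-in for GPU work in this sketch

class Pipeline:
    """Hypothetical abstraction-layer style API: operations are chained
    and applied in order, hiding all graphics-API setup from the caller."""
    def __init__(self):
        self.ops = []
    def rotate(self, degrees: float):
        self.ops.append(lambda img: rotate(img, degrees, reshape=False))
        return self
    def run(self, image):
        for op in self.ops:
            image = op(image)
        return image

frame = np.zeros((480, 640), dtype=np.float32)   # placeholder image
out = Pipeline().rotate(15).run(frame)           # one call vs. dozens of GL calls
```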

New features for ImageFlex Release 2.0 include:

  • Tools and reference examples enabling AI-based applications to be deployed on Abaco’s NVIDIA®-based GPU products.
  • Provision of a reference target tracking example – a core building block for tracking applications.
  • High quality, GPU-optimized image stabilization.

ImageFlex is highly complementary to Abaco’s NVIDIA GPU-based GVC1000 and GVC2000 hardware platforms, which use the NVIDIA Jetson supercomputer-on-a-module for AI computing at the edge. This allows the creation of complete solutions for Degraded Visual Environment (DVE), 360° situational awareness, helmet-mounted sight processing, target identification and tracking, and other EO/IR processing applications. It is portable across a range of graphics processing architectures and operating systems, and is potentially safety certifiable.

“ImageFlex significantly reduces our customers’ software engineering effort in the development and deployment of applications for EO/IR platforms and autonomy, and is unique in its ability to do so,” said John Muller, Chief Growth Officer at Abaco Systems. “Combined with our powerful, flexible hardware platforms, ImageFlex is evidence not only of our experience and expertise in AI-based graphics, video and visualization applications, but also of our commitment to providing our customers with more complete solutions.”

The ImageFlex API provides functions for a range of image processing operations from simple image transformations through to more complex lens distortion correction and image morphing. It includes optimized, high quality image fusion, stabilization, tracking and distortion correction algorithms, as well as a comprehensive set of reference application examples that provide core software building blocks. ImageFlex also provides tools and reference examples demonstrating how to integrate with sensors and deploy artificial intelligence-based applications such as object detection and recognition.
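
As a generic reference point for one of those operations (not ImageFlex’s own API), lens distortion correction looks like this in OpenCV; the camera matrix and distortion coefficients are made-up values that would normally come from calibrating the real lens.

```python
import numpy as np
import cv2

# Made-up pinhole intrinsics and radial/tangential distortion terms --
# real values come from calibrating the actual camera and lens.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.30, 0.10, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder image
undistorted = cv2.undistort(frame, K, dist)      # corrected output
```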

In addition, ImageFlex provides an innovative, high-performance image fusion function that can fuse image data from multiple sources of different resolutions. The algorithm adaptively adjusts to pull through the regions of highest contrast in each source to produce a fused result, enabling an observer or processing stage to act on the combined information of the sources.
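
That adaptive rule can be sketched with a per-pixel contrast measure. The version below approximates local contrast with a smoothed Laplacian response, which is our assumption; Abaco’s actual algorithm is not published here, and inputs are assumed to be resampled to a common resolution upstream.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse(sources):
    """Per-pixel selection of the source with the highest local contrast,
    approximated by a smoothed absolute Laplacian. Inputs are same-shape
    float32 grayscale images (resolution-matched upstream)."""
    stack = np.stack(sources)                        # (n, H, W)
    contrast = np.stack([uniform_filter(np.abs(laplace(s)), size=9)
                         for s in sources])          # local contrast maps
    pick = np.argmax(contrast, axis=0)               # winning source per pixel
    return np.take_along_axis(stack, pick[None], axis=0)[0]

# Example: fuse a visible-band and a thermal frame (placeholder data).
visible = np.random.rand(240, 320).astype(np.float32)
thermal = np.random.rand(240, 320).astype(np.float32)
fused = fuse([visible, thermal])
```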