Toyota to Self-Drive Commercially First
James Kuffner, chief of Toyota Research Institute-Advanced Development, says that Toyota will focus on deploying autonomous technology in commercial vehicles first.
Arbe Closes $32M Round B
Arbe, a provider of next-generation 4D imaging radar chipset solutions enabling high-resolution sensing for ADAS and autonomous vehicles, announced the closing of $32 million in Round B funding. The round included new and corporate venture investors Catalyst CEL, BAIC Capital, AI Alliance (Hyundai, Hanwha, SKT) and MissionBlue Capital, alongside earlier investors Canaan Partners Israel, iAngels, 360 Capital Partners, O.G. Tech Ventures and OurCrowd. Arbe will use the funding to move its breakthrough radar chipset, which generates an image 100 times more detailed than any other solution on the market today, into full production.
With the new funding, Arbe will focus on expanding its team to support global Tier-1 customers moving into full production of radar systems based on Arbe’s radar development platform. The delivery of radars based on Arbe’s proprietary chipset is a game changer for the automotive industry, as Arbe’s technology is the first to enable highly precise sensing in all environmental conditions. The radar produces detailed images, separating, identifying and tracking hundreds of objects at high horizontal and vertical resolution, at long range and across a wide field of view. This enables OEMs to deliver all-conditions, uncompromised safety in next-generation cars with a sensor affordable enough for mass-market implementation.
Helm.ai Wins Two Awards at Tech.AD Detroit
On November 21, 2019, Helm.ai took home two of the three awards offered at Tech.AD Detroit, a premier autonomous driving conference attended by the top OEMs, Tier 1s, and automotive suppliers. Based on finalist nominations from Tech.AD’s expert jury and a live vote from a technical audience of decision-makers in autonomous driving, Helm.ai won the Most Innovative Use of Artificial Intelligence & Machine Learning in the Development of Autonomous Vehicles & Respective Technologies, and the Overall Community Choice Award.
Finalists for the live vote were evaluated on innovation, simplicity, achievement, maturity and cost-effectiveness. Helm.ai won the audience vote for its cutting-edge AI software that powers autonomous vehicles. Using patent-pending unsupervised deep learning technology, Helm.ai has achieved breakthrough performance on industry benchmarks, which it exhibited at the conference. It was the first time a company had won two awards at Tech.AD.
Runner-up in the first category was NVIDIA’s “NVIDIA DRIVE Constellation” platform. The second category was open to all attending companies, with the audience voting for its favorite, most innovative technology.
AImotive Shipping aiWare3
AImotive, one of the world’s leading suppliers of modular automated driving technologies, announced that it has begun shipment of the latest release of its acclaimed aiWare3 NN (Neural Network) hardware inference engine IP. The aiWare3P IP core incorporates new features that result in significantly improved performance, lower power consumption, greater host CPU offload and simpler layout for larger chip designs.
“Our production-ready aiWare3P release brings together everything we know about accelerating neural networks for vision-based automotive AI inference applications,” said Marton Feher, senior vice president of hardware engineering for AImotive. “We now have one of the automotive industry’s most efficient and compelling NN acceleration solutions for volume production L2/L2+/L3 AI.”
Each aiWare3P hardware IP core offers up to 16 TMAC/s (>32 TOPS) at 2GHz, with multi-core and multi-chip implementations capable of delivering up to 50+ TMAC/s (>100 INT8 TOPS). The core is designed for AEC-Q100 extended temperature operation and includes a range of features to enable users to achieve ASIL-B and above certification. Key upgrades include:
- Enhanced on-chip data reuse and movement, scheduling algorithms and external memory bandwidth management
- Improvements ensuring that most NNs execute 100% within the aiWare3P core, without host CPU intervention
- Range of upgrades reducing external memory bandwidth requirements
- Advanced cross-coupling between C-LAM convolution engines and F-LAM function engines
- Physical tile-based microarchitecture, enabling easier physical implementation of large aiWare cores
- Logical tile-based data management, enabling efficient workload scalability up to the maximum 16 TMAC/s per core
- Significantly upgraded SDK, including improved compiler and new performance analysis tools
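The headline throughput figures can be sanity-checked with simple arithmetic: a multiply-accumulate (MAC) counts as two operations, so TMAC/s doubles to TOPS, and 16 TMAC/s at a 2 GHz clock implies on the order of 8,000 parallel MAC units per core. A minimal sketch (the MAC-unit count is inferred from the published figures, not stated by AImotive):

```python
def tops_from_tmacs(tmacs):
    """One MAC = 1 multiply + 1 add, so TOPS is 2x the TMAC/s rate."""
    return 2 * tmacs

def macs_per_cycle(tmacs, clock_hz):
    """Parallel MAC units implied by a throughput figure and clock rate."""
    return tmacs * 1e12 / clock_hz

print(tops_from_tmacs(16))        # 32 TOPS per core, matching ">32 TOPS"
print(macs_per_cycle(16, 2e9))    # ~8000 MAC units per core (inferred)
```

The same arithmetic links the multi-chip figure: 50 TMAC/s doubles to the quoted 100 INT8 TOPS.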
The aiWare3P hardware IP is being deployed in L2/L2+ production solutions, as well as in studies of advanced heterogeneous sensor applications. Customers include Nextchip, for its forthcoming Apache5 Imaging Edge Processor, and ON Semiconductor, for its collaborative project with AImotive to demonstrate advanced heterogeneous sensor fusion capabilities.
As part of its commitment to open benchmarking using well-controlled benchmarks reflecting real applications, AImotive will release a full update to its public benchmark results in Q1 2020 based on the aiWare3P IP core.
The aiWare3P RTL will ship from January 2020.
Xilinx for Baidu Parking
Xilinx, Inc., the leader in adaptive and intelligent computing, announced that the Xilinx® Automotive (XA) Zynq® UltraScale+™ MPSoC (XAZU5EV) is powering Baidu’s production-ready Apollo Computing Unit (ACU)-Advanced platform for Automated Valet Parking (AVP), the industry’s first dedicated computing solution for AVP. The ACU is Baidu’s advanced in-vehicle computing platform for autonomous driving. ACU-Advanced is the first production-ready AVP controller enabled by an XA Zynq UltraScale+ MPSoC.
Designed for the specific scenarios and functions of valet parking, which require sophisticated and powerful deep learning inference to handle the complex driving environment, the production-ready ACU-Advanced for AVP features the XA Zynq UltraScale+ MPSoC for sensor fusion and AI processing, replacing the GPU used in the proof-of-concept. In addition, the platform is fully compatible with the Baidu PaddlePaddle framework and includes five cameras, 12 ultrasonic radars and a -40°C to +85°C operating temperature range, meeting vehicle-level production requirements.
Xilinx is a leading automotive solution provider with over 13 years of auto industry experience. The company’s automotive solutions offer the ultimate in hardware and software partitioning flexibility combined with a variety of networking connectivity options, unique functional safety architecture configurations and security features for current and future autonomous drive modules. Cumulatively, Xilinx has shipped more than 170 million devices globally for automotive use, with 70 million used for production ADAS systems. The company works with over 200 automotive companies, comprised of major Tier 1s, OEMs, and startups globally.
IEEE Standard for AV Decision-Making
The Institute of Electrical and Electronics Engineers (IEEE) has approved a proposal to develop a standard for safety considerations in automated vehicle (AV) decision-making and named Intel Senior Principal Engineer Jack Weast to lead the workgroup. Participation in the workgroup is open to companies across the AV industry, and Weast hopes for broad industry representation. Group members will hold their first meeting in the first quarter of 2020.
The new standard – IEEE 2846 – will establish a formal rules-based mathematical model for automated vehicle decision-making that will be formally verifiable (with math), technology neutral (meaning anybody can apply it) and adjustable to allow for regional customization by local governments. It will also include a test methodology and tools necessary to perform verification of an AV to assess conformance with the standard.
Who Is Involved: Two IEEE committees co-sponsored the proposal: The IEEE Computer Society and the Vehicle Technology Society. Weast will chair the workgroup, which Weast says is open to “anyone with an interest in crafting this essential AV standard.”
Intel’s Role: Intel will bring its Responsibility-Sensitive Safety (RSS) framework as a starting point for the industry to align on what it means for an AV to drive safely. Open and technology-neutral, RSS defines what it means for a machine to drive safely with a set of logically provable rules and prescribed proper responses to dangerous situations. It formalizes human notions of safe driving in mathematical formulas that are transparent and verifiable.
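RSS’s best-known rule, the minimum safe longitudinal following distance, gives a flavor of what “logically provable” means here: the bound assumes the worst case in which the rear car accelerates throughout its response time and then brakes only gently, while the car ahead brakes as hard as physically possible. A minimal sketch of the published formula (the response-time and acceleration defaults below are illustrative, not values prescribed by RSS or the IEEE standard):

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_accel_max=3.0, b_brake_min=4.0,
                                   b_brake_max=8.0):
    """Minimum safe following distance (meters) per the RSS model.

    Worst case assumed: during response time rho (s) the rear car
    accelerates at a_accel_max (m/s^2), then brakes at only
    b_brake_min, while the front car brakes at full b_brake_max.
    """
    v_rear_after = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rear_after ** 2 / (2 * b_brake_min)
         - v_front ** 2 / (2 * b_brake_max))
    return max(0.0, d)

# Both cars traveling at 20 m/s (72 km/h): the ego vehicle must keep
# roughly 63 m of headway under these illustrative parameters.
print(rss_safe_longitudinal_distance(20.0, 20.0))
```

Because the bound is a closed-form inequality rather than a learned model, a vehicle’s compliance with it can be checked mathematically, which is the property the IEEE workgroup wants in a standard.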
The IEEE standard is needed because the decision-making capability of an AV’s computer is mostly hidden from observation. This capability is largely driven by a collection of artificial intelligence algorithms – a “black box” of sorts – that is at the heart of important intellectual property from the leading companies in the AV industry. The black-box nature of an AV’s driving policy makes it nearly impossible to comparatively judge the safety of the different vehicles. As some industry experts have said, statistical evidence – such as number of miles driven, frequency of human intervention or hours in simulation – can only go so far before the car gets into a scenario it’s never seen before.
StradVision Raises $27M Series B
StradVision, an innovator in vision processing technology for Autonomous Vehicles, has announced it raised $27 million in its Series B funding round, led by Posco Capital. This round brings StradVision’s total funding to $40 million.
Other Series B investors include: IDG Capital; Industrial Bank of Korea; Lighthouse Combined Investment; LSS Private Equity; Mirae Asset Venture Investment; Neoplux; and Timefolio Asset Management.
“StradVision’s software solutions for Autonomous Vehicles and ADAS systems are proving successful and attractive to leading automakers and suppliers, as our latest round of funding strongly confirms,” said Junhwan Kim, CEO of StradVision. “We appreciate all of our new investors coming on board, and StradVision will use this funding to take our groundbreaking products to the next level as we lead the advancement of camera technology in Autonomous Vehicles.”
An industry leader in camera perception software, StradVision plays a critical role in ADAS capabilities such as Automatic Emergency Braking and Blind-Spot Detection.
StradVision’s efforts are based on its SVNet Deep Learning-based software, which enables high-level perception abilities including: Lane Detection, Traffic Light & Sign Detection/Recognition, Object Detection and Free Space Detection.
With multiple mass production projects ongoing in China and Europe, in partnership with leading global OEMs and Tier 1 suppliers, StradVision will have millions of vehicles on the roadways using its software for Autonomous Vehicles and ADAS systems by 2021 — including SUVs, sedans and buses. StradVision recently earned the coveted Automotive SPICE CL2 certification, as well as China’s Guobiao (GB) certificate — and StradVision is already deploying ADAS vehicles on Chinese roads.