In autonomous, self-driving and unmanned vehicle news this week: Apple, Starsky Robotics, Argo AI, Waymo and Guident.
Apple Buys Drive.ai
Apple has bought Drive.ai. Drive.ai had notified the state that it planned to shut down by the end of June 2019 and lay off about 90 employees, according to notices filed with California’s Employment Development Department. Apple remains secretive about why it bought the company and about its self-driving car plans. According to its website, Drive.ai uses artificial intelligence to create self-driving transportation solutions that improve the state of mobility today. We noticed last week that Drive.ai’s blog posts on Medium are missing.
Apple now owns the rights to the Drive.ai technology that was being tested in Arlington and Frisco, Texas.
Starsky Completes First Heavy-Duty Unmanned Truck Run with Remote Driver
Starsky Robotics completed its first test driving a heavy-duty commercial truck for 9.4 miles along Florida’s Turnpike with no one in it: successfully navigating a rest area, merging onto the highway, changing lanes, and keeping a speed of 55 mph, all without a human on board. It is not a driverless truck; it is an unmanned truck. Onboard computers make some of the driving decisions, while a remote human operator uses a steering wheel to drive the truck through the rough parts of the route.
Argo AI Works with Carnegie Mellon to Form Center
Argo AI announced that it is forming the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research. To fund the Center, Argo has pledged $15 million over five years to support a team of five world-renowned faculty leaders and the graduate students conducting research in pursuit of their doctorates, pushing the envelope on the next generation of self-driving technology.
What is unique about this center is that it will look at the task of autonomous driving end to end, including perception, decision-making and actuation, while pushing beyond Argo’s first-generation system to serve more cities, operate more efficiently and interact safely in the most complex environments. This is the very essence of the role academic research plays in advancing science.
Argo will provide access to data, infrastructure, and platforms to CMU students engaged in autonomous vehicle research. Recently, Argo AI launched Argoverse™, a collection of sensor data and HD maps for computer vision and machine learning research to advance self-driving technology. The researchers and faculty working in this center will not only have access to Argoverse, they will have access to far more data and knowledge through their direct collaboration with Argo.
Argo Shares Data
Argoverse is a research collection with three distinct types of data. The first is a dataset with sensor data from 113 scenes observed by our fleet, with 3D tracking annotations on all objects. The second is a dataset of 300,000-plus scenarios observed by our fleet, wherein each scenario contains motion trajectories of all observed objects. The third is a set of HD maps of several neighborhoods in Pittsburgh and Miami, to add rich context for all of the data mentioned above.
Each of these elements is explained in detail below:
- Argoverse 3D tracking dataset. A core challenge for self-driving vehicles is knowing and understanding how other objects are moving in a surrounding scene. We call this task “3D tracking.” Our 3D tracking dataset contains several types of sensor data: 30 frames per second (fps) video from seven cameras with a combined 360-degree field of view, forward-facing stereo imagery, 3D point clouds from long-range LiDAR, and a 6-degree-of-freedom pose for the autonomous vehicle. We collect this sensor data for 113 scenes that vary in length from 15 to 30 seconds. For each scene, we annotate objects with 3D bounding cuboids. In total, the dataset contains more than 10,000 tracked objects.
- Argoverse motion forecasting dataset. For self-driving cars, it’s important to understand not just where objects have moved, which is the task of 3D tracking, but also where objects will move in the future. Like human drivers, self-driving cars need to assess, “Will that car merge into my lane?” and “Is this driver trying to turn left in front of me?” To build our motion forecasting dataset, we mined for interesting scenarios from more than 1,000 hours of fleet driving logs. “Interesting” means a vehicle is managing an intersection, slowing for a merging vehicle, accelerating after a turn, stopping for a pedestrian on the road, and more scenarios along these lines. We found more than 300,000 such scenarios. Each scenario contains the 2D, bird’s-eye-view centroid of each tracked object sampled at 10 Hz for five seconds. Each sequence has one interesting trajectory that is the focus of our forecasting benchmark. The challenge for an algorithm is to observe the first two seconds of the scenario and then predict the trajectory of a particular vehicle of interest for the next three seconds.
- Argoverse high-definition maps. Perhaps the most compelling aspect of Argoverse is our high-definition mapset containing 290 kilometers of mapped lanes. The maps contain not only the location of lanes, but also how they are connected for traffic flow. So when a lane enters an intersection, the map tells you which three successor lanes a driver might follow out of that intersection. The map has two other components as well: ground height and driveable area segmentation at 1m^2 resolution. Taken together, these maps make many perception tasks easier, including discarding uninteresting LiDAR returns using the ground height and driveable area features. It’s easier to forecast future driving trajectories by first inferring the lane that a driver is following. Vehicle orientation and velocity estimates can be refined by considering lane attributes like direction. No doubt countless other clever ways exist to incorporate these rich maps into self-driving perception tasks, but the academic community has not yet been able to explore this combination since no previous dataset has offered high-definition maps.
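The 3D tracking annotations described in the first bullet can be pictured as per-frame cuboids tied to a stable track ID, so an object can be followed across frames. Here is a minimal data-model sketch; the field and class names are invented for illustration and are not the real Argoverse schema:

```python
from dataclasses import dataclass

@dataclass
class Cuboid:
    """One 3D bounding cuboid annotation for a tracked object in one frame."""
    track_id: str   # stable across frames, so the object can be followed
    center: tuple   # (x, y, z) in the vehicle frame, meters
    size: tuple     # (length, width, height), meters
    yaw: float      # heading angle, radians

def count_tracks(annotations):
    """Number of distinct tracked objects across all frames of a scene."""
    return len({c.track_id for c in annotations})

# Two frames of a toy scene: one vehicle seen twice, one pedestrian once.
frame_annotations = [
    Cuboid("veh_1", (10.0, 2.0, 0.8), (4.5, 1.8, 1.5), 0.0),
    Cuboid("ped_7", (5.0, -1.5, 0.9), (0.6, 0.6, 1.7), 1.2),
    Cuboid("veh_1", (10.5, 2.0, 0.8), (4.5, 1.8, 1.5), 0.0),  # next frame
]
```

Counting distinct `track_id` values over a whole scene is how a figure like “more than 10,000 tracked objects” across 113 scenes would be tallied.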
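The forecasting benchmark described above has simple arithmetic behind it: centroids sampled at 10 Hz for five seconds give 50 points per scenario, split into 20 observed and 30 to be predicted. A minimal sketch of that split, with hypothetical names rather than the actual Argoverse API:

```python
# Hypothetical sketch of the forecasting split described above;
# constants come from the text, function names are illustrative.

SAMPLE_HZ = 10        # centroids sampled at 10 Hz
SCENARIO_SECONDS = 5  # each scenario lasts five seconds
OBSERVED_SECONDS = 2  # the algorithm sees only the first two seconds

def split_scenario(centroids):
    """Split (x, y) centroids into an observed part and a part to predict."""
    expected = SAMPLE_HZ * SCENARIO_SECONDS        # 50 samples per scenario
    if len(centroids) != expected:
        raise ValueError(f"expected {expected} samples, got {len(centroids)}")
    cut = SAMPLE_HZ * OBSERVED_SECONDS             # first 20 samples observed
    return centroids[:cut], centroids[cut:]        # 20 observed, 30 to predict

# Example: a straight-line trajectory at constant speed.
trajectory = [(0.1 * i, 0.0) for i in range(50)]
observed, future = split_scenario(trajectory)
```

An algorithm is scored on how closely its prediction matches the 30 held-out future points, given only the 20 observed ones.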
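The lane-connectivity idea in the HD-map bullet, where a lane entering an intersection has several successor lanes a driver might follow, amounts to a small directed graph. A toy sketch, with lane IDs and topology made up for illustration rather than taken from real map data:

```python
# Toy lane-connectivity graph in the spirit of the HD maps described above.
lane_successors = {
    "lane_A": ["lane_B", "lane_C", "lane_D"],  # lane entering an intersection
    "lane_B": ["lane_E"],                      # straight through
    "lane_C": ["lane_F"],                      # left turn
    "lane_D": ["lane_G"],                      # right turn
}

def possible_paths(start, depth):
    """Enumerate lane sequences a driver might follow from `start`."""
    if depth == 0 or start not in lane_successors:
        return [[start]]
    paths = []
    for nxt in lane_successors[start]:
        for tail in possible_paths(nxt, depth - 1):
            paths.append([start] + tail)
    return paths
```

Enumerating successor lanes like this is what makes trajectory forecasting easier: instead of predicting motion freely, a forecaster can score a handful of lane-following hypotheses.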
Argo AI Shows Third-Generation Ford Fusion Hybrid
Argo AI showed its new Ford Fusion Hybrid, the third-generation test vehicle that Argo AI is now deploying in collaboration with Ford in all five major cities where it operates: Pittsburgh, Palo Alto, Miami, Washington, D.C., and now Detroit, where the company is expanding its Michigan testing footprint beyond Dearborn.
The new cars are equipped with a significantly upgraded sensor suite, including new sets of radar and cameras with higher resolution and higher dynamic range. When trying to see an object that’s very far away, a lower-resolution camera may only be able to represent it as a pixel or two. But with higher resolution, you may be able to get a dozen pixels out of the same faraway object. In concert with upgraded software, this means the vehicles are getting better at seeing what’s farther ahead and classifying what it is.
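The resolution effect described above is simple pinhole-camera geometry: the pixels an object spans is roughly its angular size divided by the per-pixel angle. A back-of-envelope sketch, with all camera parameters invented for illustration rather than taken from Argo’s actual sensors:

```python
import math

def pixels_spanned(object_width_m, distance_m, horizontal_pixels, hfov_deg):
    """Approximate horizontal pixels an object spans in a pinhole camera."""
    angular_size = 2 * math.atan(object_width_m / (2 * distance_m))  # radians
    rad_per_pixel = math.radians(hfov_deg) / horizontal_pixels
    return angular_size / rad_per_pixel

# A 2 m-wide car seen from 300 m through a hypothetical 60-degree lens:
low_res = pixels_spanned(2.0, 300.0, 1280, 60)   # roughly 8 pixels
high_res = pixels_spanned(2.0, 300.0, 4096, 60)  # roughly 26 pixels
```

Tripling the horizontal pixel count triples the pixels on target, which is the difference between “a pixel or two” and enough pixels to classify the object.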
The fleet features a brand-new computing system, one that offers far more processing power than in the previous cars, with improved thermal management that reduces the heat and noise generated inside the vehicle. That means a smarter vehicle, but also a quieter, more comfortable ride for anyone inside.
Waymo & Argo AI Share Data
At CVPR, Waymo and Argo AI published their datasets. Waymo claims 3,000 scenes and better synchronization between the camera and lidar information. Waymo is also providing data from five lidar sensors.
The data has to be labeled in order to be useful, notes Sam Abuelsamid.
Guident Gets More AV IP
Guident Ltd., the developer of software apps for autonomous vehicles and drones, announced it has acquired the exclusive intellectual property (IP) license to U.S. Patent No. 9,964,948 B2, entitled “Remote Control and Concierge Service for an Autonomous Transit Vehicle Fleet,” from Florida International University.
This important patent describes methods for assisting autonomous vehicles (AVs) by using their sensor inputs in coordination with a remote control center able to operate a vehicle or drone from anywhere in the world.
The autonomous vehicle can send its sensor data to the control center in real time, and the control center operator can take over operation of the vehicle, enabling it to navigate a variety of difficult situations such as heavy weather, crowded and dangerous traffic, accident prevention and remediation, and off-grid and last-mile package delivery.
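The takeover flow described here, where the vehicle streams sensor data and a remote operator assumes control when conditions get difficult, can be pictured as a simple two-state handoff. The sketch below is entirely hypothetical; class, method and threshold names are invented and do not reflect the patented system:

```python
# Hypothetical teleoperation handoff sketch, not Guident's implementation.

class RemoteAssistSession:
    AUTONOMOUS, REMOTE = "autonomous", "remote"

    def __init__(self, confidence_threshold=0.5):
        self.mode = self.AUTONOMOUS
        self.confidence_threshold = confidence_threshold

    def report_sensors(self, confidence):
        """AV streams a sensor-confidence summary; low confidence
        triggers a remote takeover by the control center operator."""
        if confidence < self.confidence_threshold and self.mode == self.AUTONOMOUS:
            self.mode = self.REMOTE
        return self.mode

    def release_control(self):
        """Operator hands control back once the situation is resolved."""
        self.mode = self.AUTONOMOUS

# Heavy weather degrades perception confidence, triggering a takeover:
session = RemoteAssistSession()
session.report_sensors(0.9)  # normal driving stays autonomous
session.report_sensors(0.2)  # degraded confidence -> remote operator
```

The design choice worth noting is that the handoff is driven by the vehicle’s own confidence signal, so the operator only intervenes in the difficult situations the patent enumerates.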
Guident Ltd. also announced it has acquired the exclusive license to patent application PCT US19 14 547, entitled “Visual sensor fusion and data sharing across connected vehicles for active safety,” from Michigan State University.
If you are interested in self-driving autonomous vehicles only, you are welcome to subscribe to our newsletter.
Read all autonomous vehicle news.
You are welcome to subscribe to receive emails with the latest Autonomous Self-Driving Driverless and Auto-Piloted Car News; you can also get weekly news summaries or midnight express daily news summaries.