Self-driving cars are now learning more about objects. Carnegie Mellon University recently developed technology to help self-driving cars detect empty space. Since its integration, system performance has improved by between 5.3% and 18.4%.
Prior to CMU’s latest research, these cars were using 3D technology to identify objects around them. Sensors would map the basic shape of whatever was around the car and then compare it to a library of 3D images, matching each object as best they could to identify it. But there was one major problem: the cars didn’t know much about empty space.
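To get a feel for what that library-matching idea can look like, here is a minimal Python sketch. It is purely illustrative and not CMU’s method or any production self-driving system: the labels, the approximate object sizes in the library, and the bounding-box descriptor are all assumptions made up for this example.

```python
import numpy as np

# Illustrative sketch only: a toy "shape library" matcher.
# Each cluster of sensor points is summarized by a crude descriptor
# (its bounding-box dimensions) and compared against a small library
# of known object sizes. All sizes below are assumed values.
SHAPE_LIBRARY = {
    # label: approximate (length, width, height) in meters
    "pedestrian": np.array([0.5, 0.5, 1.7]),
    "car":        np.array([4.5, 1.8, 1.5]),
    "tree":       np.array([1.0, 1.0, 6.0]),
}

def describe(points: np.ndarray) -> np.ndarray:
    """Reduce an (N, 3) point cluster to its axis-aligned bounding-box size."""
    return points.max(axis=0) - points.min(axis=0)

def classify(points: np.ndarray) -> str:
    """Return the library label whose stored size best matches the cluster."""
    size = describe(points)
    return min(SHAPE_LIBRARY,
               key=lambda label: np.linalg.norm(SHAPE_LIBRARY[label] - size))

# Example: a roughly person-sized blob of points gets labeled "pedestrian".
cluster = np.random.rand(200, 3) * np.array([0.4, 0.4, 1.6])
print(classify(cluster))
```

The weakness the article describes shows up immediately in a toy like this: if the person and a nearby tree are scanned as one blob of points, the matcher has no choice but to assign the combined blob a single label.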
Self-Driving Cars Learn Spatial Awareness
Imagine this: there’s a person in front of you. But that person is partially blocked by a tree. You understand that there are two layers in this image: a human and a tree. But you also understand a third, less obvious element: the empty space around and between the two.
Self-driving cars are still learning these layers. When mapping the person and the tree, a car’s tech might have lumped those two objects together as one. Teaching cars to detect open air and empty space allows them to better separate and identify objects. Cars will now be able to reason that their visibility might be obstructed and factor that into their algorithms.
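Here is a rough idea of what “knowing about empty space” can look like in code. This is a generic occupancy-grid sketch, not the CMU technique: the grid size, cell size, sensor position, and lidar returns are all invented for illustration. The point is that any cell a sensor ray has passed through is known to be free, while cells behind an obstacle stay unknown, which is the visibility reasoning described above.

```python
import numpy as np

# Illustrative sketch: a 2D occupancy grid with three states.
# FREE     -- a sensor ray passed through the cell, so it is empty space.
# OCCUPIED -- a ray ended in the cell, so something is there.
# UNKNOWN  -- never observed, e.g. space hidden behind an obstacle.
UNKNOWN, FREE, OCCUPIED = 0, 1, 2

def update_grid(grid: np.ndarray, sensor_xy, hits, cell_size=0.5):
    """Trace each lidar return: cells along the ray are free, the endpoint is occupied."""
    sx, sy = sensor_xy
    for hx, hy in hits:
        dist = np.hypot(hx - sx, hy - sy)
        # Sample along the ray at sub-cell spacing and mark those cells free.
        for t in np.arange(0.0, dist, cell_size / 2):
            x = sx + (hx - sx) * t / dist
            y = sy + (hy - sy) * t / dist
            grid[int(y / cell_size), int(x / cell_size)] = FREE
        grid[int(hy / cell_size), int(hx / cell_size)] = OCCUPIED

# Toy scene: a 20 m x 20 m area at 0.5 m resolution, two lidar returns.
grid = np.full((40, 40), UNKNOWN, dtype=np.uint8)
update_grid(grid, sensor_xy=(10.0, 10.0), hits=[(14.0, 10.0), (10.0, 15.0)])
# Cells still UNKNOWN behind the hits are potentially occluded space that a
# planner should treat cautiously rather than assume is empty.
```

In a sketch like this, the separation problem from the person-and-tree example becomes tractable: a band of FREE cells between two OCCUPIED regions is evidence that they are distinct objects, and UNKNOWN cells tell the car where its view is blocked.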
Think of self-driving cars as children: as infants and toddlers, we learn about objects and space constantly. Experts recommend certain kinds of toys (such as rattles or colorful blocks) because they help children process information from their five senses. Each new leap in technology gets these self-guided cars one step closer to understanding the world on a more adult level.
This comes as especially good news for Washington, D.C., given that Uber recently began testing self-driving cars in the area. The ultimate goal is safer, friendlier roads.