However, in this case the computer has to recognize things that were designed for a human being.
Lidar is great at estimating the precise distance to something, but has no idea what it is. For example, you know that there is something 15.231451 m away. But you don't know whether it's a concrete wall or a fog bank.
The camera sees what we see. And AI can then tell you that there is a fog bank, and that it is somewhere between 14 and 16 m away from you.
You combine this information. Then you know that there is a fog bank 15.231451 m away. But does that actually add value to your self-driving car? And if your answer is "yes", is it enough value to make the car 3x more expensive?
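To make that combination step concrete, here is a toy sketch of the idea. All the names, numbers, and the fusion rule itself are made up for illustration; real sensor fusion is far more involved.

```python
# Toy illustration: keep the lidar's precise distance, and take the
# object's identity from the camera-based AI. Everything here is
# hypothetical, for illustration only.

def fuse(lidar_range_m, camera_label, camera_range_interval_m):
    """Trust the precise lidar range only if it falls inside the
    coarse range interval the camera-based AI reported."""
    lo, hi = camera_range_interval_m
    if lo <= lidar_range_m <= hi:
        # Agreement: precise distance plus an identity.
        return camera_label, lidar_range_m
    # Disagreement: fall back to the middle of the camera's estimate.
    return camera_label, (lo + hi) / 2

label, dist = fuse(15.231451, "fog bank", (14.0, 16.0))
print(label, dist)  # fog bank 15.231451
```

The point of the sketch is how little the lidar adds here: the camera already knew *what* it was and roughly *where*; the lidar only refines the distance.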
In the real world, lidar is used for something completely different. First, you scan the world by car, as if you were working for Google Street View. With the cameras + lidar you can make a very detailed map of the world. Back at the office, hundreds of people put labels on all the static objects your camera has seen. Now the AI can use this information to drive on that street. As long as nothing changes on the road. All computing power can then be devoted to dynamic objects such as cars and people.
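The workflow above can be sketched in a few lines. The map structure and the split into static/dynamic are hypothetical simplifications of what such a system might do:

```python
# Hypothetical sketch of the HD-map approach: static objects were labeled
# offline by humans, so real-time perception only has to handle whatever
# is NOT already in the map.

prelabeled_map = {
    # position -> label, produced offline from lidar scans + human labeling
    (10, 0): "traffic light",
    (25, 3): "concrete wall",
}

def split_detections(detections):
    """Separate detections into known static objects (cheap map lookup)
    and unknown ones that still need expensive real-time processing."""
    static, dynamic = [], []
    for pos in detections:
        (static if pos in prelabeled_map else dynamic).append(pos)
    return static, dynamic

static, dynamic = split_detections([(10, 0), (7, 2)])
# (10, 0) is in the map; (7, 2) is new, so it must be a car, person, etc.
```

This is also exactly where the weakness sits: the lookup only works while `prelabeled_map` still matches reality.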
But this can never really be practical, because you constantly have to update the maps. The cars that drive around on this principle are therefore limited to one district or one city. Or only to the major highways. Simply because it is unaffordable to scale something like this up. Especially if every car manufacturer is going to maintain its own system. So all these investments go into a system that can never really be finished.