A couple of years ago, I had this crazy thought: what really makes a car intelligent? Its ability to predict and avoid accidents, or the decisions it can make when an accident occurs? Whom would a truly intelligent car save: a pregnant mother of two children, or a drunk father of three?
One thing we have never done is consider the level at which we take an AI and say that it understands. The Turing test is all but a farce: it tests for the ABCs and hands out PhDs from the other side.
You see, one simple idea that we have all had over the years is this: if an AI-driven car could access the driving record of nearby drivers simply by looking up a vehicle's number plate in a central repository, it could then adjust its safety margins around those specific vehicles and drivers. If every car could also contribute to that repository, safety levels would improve dramatically. We treat safety as the gold standard, and an AI as good if it can keep us safe.
For instance, if a Tesla Model X notices a rogue driver speeding over the limit on the H5 Highway, it can instantly push that observation to the shared repository for H5. Two minutes later, as a car from General Motors merges onto the same route, it can pull the latest data from the repository, learn that there is a rogue car on that route, and prepare for it accordingly.
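The report-then-query flow above can be sketched in a few lines. This is a hypothetical, in-memory stand-in for the central repository; the names (`RouteRepo`, `report`, `recent`) and the five-minute freshness window are my own illustrative assumptions, not any real vehicle API.

```python
from dataclasses import dataclass, field
from time import time

@dataclass
class Sighting:
    """One reported observation of a vehicle on a route."""
    plate: str
    behavior: str      # e.g. "over-speed-limit"
    timestamp: float   # seconds (epoch or any shared clock)

@dataclass
class RouteRepo:
    """In-memory stand-in for a shared, per-route repository."""
    sightings: dict = field(default_factory=dict)  # route -> list[Sighting]

    def report(self, route, plate, behavior, ts=None):
        # A car pushes what it just saw on this route.
        self.sightings.setdefault(route, []).append(
            Sighting(plate, behavior, ts if ts is not None else time()))

    def recent(self, route, max_age_s=300.0, now=None):
        # Another car pulls only the sightings still fresh enough to matter.
        now = now if now is not None else time()
        return [s for s in self.sightings.get(route, [])
                if now - s.timestamp <= max_age_s]

# The Tesla reports a speeding car on H5 at t=0; the GM car queries 120 s later.
repo = RouteRepo()
repo.report("H5", "KA-01-AB-1234", "over-speed-limit", ts=0.0)
alerts = repo.recent("H5", max_age_s=300.0, now=120.0)
```

In practice the repository would be a distributed, authenticated service rather than a dict, but the shape of the exchange (push a sighting, pull fresh sightings for your route) stays the same.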
But have you asked yourself what really happens when that rogue driver actually hits my car, and the AI's only option is either to absorb the collision and take the damage, or to pass it off to another car that might be more vulnerable?
In another scenario, imagine that cars continuously scan nearby vehicles and their number plates, cross-reference that data to identify the specific car or driver in question, and accordingly both report and prepare for the individualized driving patterns of humans. The idea is to enable clustered knowledge bases: a real-time Wikipedia for cars to interact around, so that they know what is happening and can tell each other, in their quest to predict what has not happened yet. In a perfect world, a single central repository through which all cars talk to each other would be amazing, mainly because of a question worth asking: what is the biggest challenge in AI for cars, the ability to react instantly to the environment, or the ability to predict its environmental variables?
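The plate-keyed knowledge base described above can also be sketched minimally. Everything here is an illustrative assumption: the class name, the tiny set of "risky" behavior tags, and the crude frequency-based score stand in for whatever richer model a real system would use.

```python
from collections import defaultdict

class DriverKnowledgeBase:
    """Hypothetical clustered knowledge base: observations keyed by number plate."""

    RISKY = {"hard-braking", "over-speed-limit", "lane-weaving"}  # illustrative tags

    def __init__(self):
        self.observations = defaultdict(list)  # plate -> list of behavior tags

    def observe(self, plate, behavior):
        # Any passing car logs what it saw this plate do.
        self.observations[plate].append(behavior)

    def risk_score(self, plate):
        """Fraction of logged behaviors for this plate that look risky."""
        obs = self.observations.get(plate, [])
        if not obs:
            return 0.0  # unknown driver: no evidence either way
        return sum(1 for b in obs if b in self.RISKY) / len(obs)

# Three cars report the same plate; a fourth asks how cautious to be around it.
kb = DriverKnowledgeBase()
kb.observe("DL-3C-9988", "lane-weaving")
kb.observe("DL-3C-9988", "steady-speed")
kb.observe("DL-3C-9988", "hard-braking")
score = kb.risk_score("DL-3C-9988")  # 2 of the 3 logged behaviors are risky
```

A car scanning a plate would look up this score and widen its following distance accordingly; that lookup is exactly the "interpolation" step in the scenario above.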
But really, is that it? Isn't that mostly statistics and very little "thought"?
What happens when I find a rogue car, or a car that is carrying a murderer, or a bus in Delhi where a woman is being raped? Does my AI just report it and stay safe in its own quarters, or does it get brave and try to stop the rogue one?
At what level do we decide that an AI is actually an AI, and not just a really good statistical algorithm?