You could, for instance, ask a car fitted with a reasoning engine why it had hit the brakes, and it would be able to tell you that it thought a bicycle hidden by a van was about to enter the intersection ahead.
A machine-learning program cannot do that.
Besides helping improve program design, such information will, Dr Bhatt reckons, help regulators and insurance companies.
It may thus speed up public acceptance of autonomous vehicles.
Dr Bhatt's work is part of a long-standing debate in the field of artificial intelligence.
Early AI researchers, working in the 1950s, chalked up some successes using this sort of preprogrammed symbolic reasoning.
But, beginning in the 1990s, machine learning improved dramatically, thanks to better programming techniques combined with more powerful computers and the availability of more data.
Today almost all AI is based on it.
Dr Bhatt is not, though, alone in his scepticism.
Gary Marcus, who studies psychology and neural science at New York University and is also the boss of an AI and robotics company called Robust.AI, agrees.
To support his point of view, Dr Marcus cites a much-publicised result, albeit from eight years ago.
This was when engineers at DeepMind (then an independent company, now part of Google) wrote a program that could learn, without being given any hints about the rules, how to play Breakout, a video game which involves hitting a moving virtual ball with a virtual paddle.
DeepMind's program was a great player.
But when another group of researchers tinkered with Breakout's code—shifting the location of the paddle by just a few pixels—its abilities plummeted.
It was not able to generalise what it had learned from a specific situation even to a situation that was only slightly different.
For Dr Marcus, this example highlights the fragility of machine learning.
But others think it is symbolic reasoning which is brittle, and that machine learning still has a lot of mileage left in it.
Among them is Jeff Hawke, vice-president of technology at Wayve, a self-driving-car firm in London.
Wayve's approach is to train the software elements running a car's various components simultaneously, rather than separately.
In demonstrations, Wayve's cars make good decisions while navigating narrow, heavily trafficked London streets—a task that challenges many humans.