Let me tell you a story. It’s a story about my brother-in-law, whom, for the sake of decency, we’ll call Steve.
Steve has a new car. He loves that new car. It’s great. Does everything he wants it to. Well, except for one thing: it keeps slowing down. He’ll drive in traffic, and the car suddenly slows down, annoyingly, and apparently without good reason. Fuel pump? Electrical issue? What could be the problem?
Well, as Steve discovered during a (potentially) rather heated conversation with the dealership, the car was slowing down because it felt Steve was driving too close to the car in front of him. Steve, never the most patient of drivers, was not impressed. Worse, he was told, there was no way to disable this particular safety feature. Steve went from being unimpressed to unhappy very quickly.
Yet Steve’s experience may be something we all have to get used to, because making cars smarter is an area of investment that is moving forward fast – really fast – and with good reason: there’s a lot of money at stake in this industry.
But the question is: how much will we want the cars to take over? For plenty of people, simply turning the morning commute over to the car, sitting back, and snoozing sounds like a great idea. Still, it could be some time before we’re really comfortable letting cars drive us, and our families, around without our hands on the wheel, because the problem quickly becomes one of trust. What happens when we want the car (which, let’s not forget, we bought) to perform one way, and the car decides not to?
We don’t even have to reach the extremes of the Trolley Problem to start questioning whether we want a car we’re sitting in to make decisions for us that we may or may not agree with. I want to get to my appointment on time. My car, however, disagrees. It wants to drive that much slower, to ease congestion up ahead and to keep sufficient traction on the rain-slicked roads. Herein lies the problem: while taking away some of our driving choices may make us objectively safer, accepting those choices may not make us happy.
And that’s assuming the car makes the right choice. If there’s any doubt about the safety of the car’s decisions (decisions rooted, after all, in software) then the backlash could be dramatic. Take the widely reported story in which two self-driving cars came very close to a collision. Luckily, it seems they were simply changing lanes, not swooping around like someone playing Grand Theft Auto on a bad day, and no collision occurred. But the point is that any perceived weakness in the decision-making capabilities of self-driving cars, especially if poor decisions could be attributed to outside, malicious influence (like hacking), will likely cause real problems for the autonomous vehicle industry in general.
What I suspect we’ll see, then, is what we often see with new technologies: initial, breathless enthusiasm for the idea quickly turning into that all-too-familiar trough of disillusionment when the full reality of what we’ve signed up for finally hits home. Thesis, antithesis, and finally synthesis.
Let’s not forget, trains have been around for a long time – the better part of 200 years. They run on rails, which makes them a lot more predictable than cars, yet they still require drivers. Why? Because knowing there’s a person in charge is very reassuring.
I suspect the smart car of the future will work in conjunction with the driver, not replace them. When I need to take that phone call (or heck, do some online shopping) I can let the car take over for a while. But at the end of the day, I’m going to want to see that steering wheel waiting for me when I sit down, because my life is riding in this metal box, and having software take that over completely (and take away all the fun of driving) is something we probably won’t be comfortable with for a long, long time.