Can we trust driverless cars?

“We’re sorry, but the program has encountered a problem and needs to close.”

The question is: what happens if the computer program stops running? That can play out in several ways, depending on how it fails.

  1. If the underlying operating system is still running, the car could pull to the side and turn off.

  2. If the program counter runs off the rails and starts executing data or the wrong code, the car will continue to do exactly what it was doing before the program stopped responding. It will usually leave the controls in the same place and keep going until it crashes into something.

  3. If the computer completely stops working, the car could do anything.

I’m still struggling with trusting cars that have drivers!
And my doubts are justified every time I go out on the road. :smiley:

I don’t trust driverless cars. I heard somewhere that pilotless planes are also in our future. I wouldn’t trust them either.

Here’s a video of a crash caused by a car WITH a driver! When interviewed after the crash, he said, “I didn’t really nod off, I just closed my eyes for a moment. I work hard. I was tired.” Interviewed later, he added, “It was no big deal, it was just an accident, nobody was injured.”
http://www.msn.com/en-us/video/watch/saugus-man-cited-after-school-bus-crash-in-tunnel/vp-BBsGGQw

Troubleshooter: the software and hardware would have to be designed to fail-safe.

For example, you would use three processors and require agreement from 2 out of the 3 for any action. A failure of any of these would cause a shift to a hard-wired sequence that pulls the car over and stops it. It would take a significant design effort to make it fail-safe…
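
A toy sketch of what that 2-out-of-3 vote might look like (the structure, names, and tolerance here are invented for illustration, not taken from any real vehicle controller):

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical 2-out-of-3 voter for a steering command. */
typedef struct {
    int    valid;      /* did this processor produce an answer in time? */
    double steer_deg;  /* its commanded steering angle */
} channel_output_t;

/* Returns 1 and writes the agreed command if at least two channels agree
 * within a small tolerance; returns 0 to signal the hard-wired
 * pull-over-and-stop sequence. */
int vote_2oo3(const channel_output_t ch[3], double tol, double *out)
{
    for (int i = 0; i < 3; i++) {
        for (int j = i + 1; j < 3; j++) {
            if (ch[i].valid && ch[j].valid &&
                fabs(ch[i].steer_deg - ch[j].steer_deg) <= tol) {
                *out = (ch[i].steer_deg + ch[j].steer_deg) / 2.0;
                return 1;  /* two channels agree: accept the command */
            }
        }
    }
    return 0;              /* no majority: fail safe */
}

int main(void)
{
    channel_output_t ch[3] = { {1, 2.0}, {1, 2.1}, {1, 9.7} }; /* one channel off */
    double cmd;
    if (vote_2oo3(ch, 0.5, &cmd))
        printf("apply steering command: %.2f deg\n", cmd);
    else
        printf("no agreement: engage hard-wired pull-over sequence\n");
    return 0;
}
```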

The only flaw in a fail-safe scenario is that you can’t design a fail-safe driver.

I’m sure that driverless cars will have some crashes, but I think there will be far fewer of them than with today’s cars.

My primary background happens to be embedded software for mission and safety critical applications.

Bill has appropriately pointed out one potential mitigation for controlling behavior: voting.
There are many others, but I will address the original items cited:

    1. If the underlying operating system is still running, the car could pull to the side and turn off.

No one I know would use any kind of operating system on such an application, much less one that is commercially available and not designed to handle real-time multitasking with fail-safe measures built in. Regardless, you don’t need an operating system for something like this, and it’s actually a detriment here.

   2. If the program counter runs off the rails and starts executing data or the wrong code, the car will continue to do exactly what it was doing before the program stopped responding. It will usually leave the controls in the same place and keep going until it crashes into something.

No, it won’t. First, there would be both software and hardware watchdogs monitoring the execution of the mainline program. If it ceased to execute in the designed fashion, they would take over, interrupt the main execution, and enter either a recovery mode or a fail-safe condition.
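
Here is a minimal sketch of the software side of that idea, assuming a periodic timer and a main loop that must “kick” the watchdog each cycle (the names and limits are made up for illustration; a real system would pair this with an independent hardware watchdog timer):

```c
#include <stdio.h>

#define WATCHDOG_LIMIT 3   /* missed kicks tolerated before tripping */

static int missed_kicks = 0;

void watchdog_kick(void)    { missed_kicks = 0; }  /* called by the healthy main loop */
void watchdog_tick(void)    { missed_kicks++;  }   /* called from an independent timer */
int  watchdog_tripped(void) { return missed_kicks > WATCHDOG_LIMIT; }

void enter_fail_safe(void)
{
    /* interrupt the main line, hold a safe state, pull over and stop */
    printf("watchdog tripped: entering fail-safe\n");
}

int main(void)
{
    for (int cycle = 0; cycle < 10; cycle++) {
        watchdog_tick();            /* independent timer fires every cycle */
        if (cycle < 5)
            watchdog_kick();        /* main loop is healthy for a while... */
        /* ...then it "hangs" and stops kicking */
        if (watchdog_tripped()) {
            enter_fail_safe();
            break;
        }
    }
    return 0;
}
```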

 3. If the computer completely stops working, the car could do anything.

If it stops working, the hardware in the car will not get any updated commands. If anything, it would just continue on at the prior settings. In that case, see above regarding watchdogs.

In addition to controller voting, in the systems I have developed we also use hardware voting, and it must match up in lock step with the software. The systems range from space-borne platforms to autonomous systems designed as offensive weaponry. If we can develop systems that launch from the earth and autonomously dock with an orbiting platform, or fly on their own to remote locations using the stars to guide them and then decide from a list of targets which to address, we can make a car that drives around without significant risk of it going berserk because the computer hardware or software misbehaved.
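
To make the lock-step idea concrete, here is a hypothetical illustration (names and types invented): each control cycle, the command computed in software is compared against the value an independent hardware voting channel produced for the same cycle, and any mismatch drops the system into its fail-safe path.

```c
#include <stdio.h>

/* Returns 1 if the software and hardware channels agree for this cycle. */
int lockstep_ok(int cycle, unsigned sw_cmd, unsigned hw_cmd)
{
    if (sw_cmd != hw_cmd) {
        fprintf(stderr, "cycle %d: lock-step mismatch (sw=%u, hw=%u)\n",
                cycle, sw_cmd, hw_cmd);
        return 0;   /* disagree: fall back to the fail-safe sequence */
    }
    return 1;       /* agree: the command may be issued */
}

int main(void)
{
    /* one healthy cycle, then an injected mismatch */
    printf("cycle 0 ok: %d\n", lockstep_ok(0, 0x2A, 0x2A));
    printf("cycle 1 ok: %d\n", lockstep_ok(1, 0x2A, 0x2B));
    return 0;
}
```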

I’m extremely leery of the whole concept. Automatic emergency braking makes sense, and a warning if you drift into an oncoming lane is useful as well.
There are just too many variables at play here. Sure, planes can be landed by computer, although they seldom are, but that is a controlled environment, something highway traffic definitely is not.

Well, when computer-controlled planes are landing, they have to contend with all the bug smashers flying VFR as well. Heck, my brother’s relatively simple electronics stack in a Piper Seneca can navigate to a destination and enter the landing pattern all on its own. It cannot avoid other VFR traffic, however.

Having one autonomous car in a sea of defective human pilots is a daunting proposition. Ultimately, when autonomous cars are the vast majority, they will be in a controlled environment, as they will be able to communicate with each other and interact with a hive mentality. Eliminating emotion from the equation would go a long way toward improving safety, IMO. Forming trains on the expressway, seamless merging (imagine that!), and other maneuvers humans cannot orchestrate reliably will be possible then…

I think the question, when properly rephrased, answers itself: “Can we trust driverless cars MORE than cars with human drivers?” Based on my experiences on the road every day, I can say with much conviction that computer-driven cars will be a HUGE improvement over cars with human drivers. That being said, no system is perfect. Airbags have saved many, many lives but, as we have seen recently, they are not without problems. Can you honestly say you would prefer a car without airbags? I know several people close to me who are alive today thanks to airbags. How many people would still be alive if computers had been driving their cars instead of them?

Fault tolerant computers have been designed and built for decades. I can show you some critical systems that have been running 24/7 for years.

I seriously doubt it has an operating system. More like a dedicated program that does what it needs. No need for the overhead of an operating system.

Well, in the small sample size we have to work with, it appears Google’s cars are involved in accidents at a rate triple the national average, so the data’s already not promising.

That said, the problem I’ve always had with driverless cars is “Captaincy.” Simply put, if you want to “take over” for the driver, you have to take over ALL driving aspects, which includes more than manipulating the controls. You need sufficient Situational Awareness to make valid “go/no-go” and “continue/no-continue” decisions. I don’t see a driverless car doing that well, or at all.

For example, I once had to drive from Pittsburgh, PA to Albuquerque, NM just after the Christmas holiday. Unfortunately, a horrible winter storm was scheduled to blow through. It was big enough that the precipitation would essentially span the US from the northern to the southern border, and it was forecast to drop over a foot of snow.

Knowing this (SA), I made the decision NOT to take I-70 to I-44 through St. Louis and cut down to I-40 there; instead, I cut south through the Appalachians, ahead of the storm, and picked up I-40 in Nashville before the weather made it that far east. That way, I’d (hopefully) be above the freezing point and have rain, not snow or ice, to contend with.

As I travelled, I kept an eye on my ETA and kept the radio on to track the progress of the storm. I had a bit of a scare when I began to encounter freezing rain; fortunately, a two-hour dinner ensured that the temperature was back above freezing when I continued. Good thing I detoured: St. Louis got something like two feet of snow!

I don’t see a “driverless” car doing that well, or at all.

MeanJoe, I don’t think decisions like the ones you made are something the car could make; that’s up to the “captain” of the car. Just like the decision of when and where to take a rest break: the car can’t decide that.

@meanjoe75fan - I take it you didn’t actually read the article, just the headline. While, yes, they’ve been involved in more accidents… only ONE was the Google car’s fault. All the rest were charged to the other driver.

The problem is that the Google car obeys all traffic laws at all times. Most drivers don’t.

I disagree; the car could make those decisions more easily by being tied into weather and road conditions that are constantly updated while driving. You can already get that kind of information on your smartphone, but the car could do it hands-free! :smiley:
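
Purely as a hypothetical sketch of that kind of decision logic (the data feed, thresholds, and route names are invented here): given continuously updated forecasts along two candidate routes, prefer the one that stays above freezing and has no storm warning.

```c
#include <stdio.h>

typedef struct {
    const char *name;
    double min_temp_f;     /* forecast minimum temperature along the route */
    int    storm_warning;  /* 1 if a winter-storm warning is in effect */
} route_forecast_t;

/* Pick the route that avoids freezing precipitation; if both (or neither)
 * qualify, fall back to the warmer one. */
const char *pick_route(const route_forecast_t *a, const route_forecast_t *b)
{
    int a_ok = a->min_temp_f > 32.0 && !a->storm_warning;
    int b_ok = b->min_temp_f > 32.0 && !b->storm_warning;
    if (a_ok && !b_ok) return a->name;
    if (b_ok && !a_ok) return b->name;
    return (a->min_temp_f >= b->min_temp_f) ? a->name : b->name;
}

int main(void)
{
    route_forecast_t i70 = { "I-70 via St. Louis", 18.0, 1 };
    route_forecast_t i40 = { "I-40 via Nashville", 38.0, 0 };
    printf("suggested route: %s\n", pick_route(&i70, &i40));
    return 0;
}
```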

@BillRussell Well, in that case, it isn’t really a “driverless car,” is it? More of a “car with autopilot.” That may be splitting hairs, but I think it’s an important distinction, because if you’re obligated to make the occasional “Captaincy” decision during travel, then all those pie-in-the-sky stories about how you can watch a movie, or have the car drive you home when you’re totally drunk, wouldn’t hold up; you’d need a clear head and would have to keep on top of the big picture.

There are some driverless cars being researched with no human controls of any kind.

@MikeInNH
Oh, I’m well aware of the full story; I just don’t think “fault” is a valid counter, for three reasons:

1. If I’m driving somewhere, getting involved in an accident is A Bad Day, no matter who is at fault. If you tell me I can drive… or somebody/something can drive for me, but that driver is 3X as likely to be involved in an accident as I am… I’ll probably decline the offer.

2. As we all learned in “defensive driving,” part of your job as an astute and defensive driver is to realize when the OTHER guy is a threat (drunk, tired, angry, whatever) and avoid letting that guy get you involved in an accident. Usually, you do that by using a human’s understanding of human behavior to recognize a high-risk driver and taking steps to get away from him. It appears that’s what Google cars do so badly, and that’s understandable: we’re talking SA here, and it appears the driverless cars have none. “Not causing crashes” is mere competency; mastery is “not allowing yourself to get wrapped up in somebody else’s crash.”

3. As of right now, Google cars are tested in the “best possible conditions”: modern roads in a mild climate. You can’t really extrapolate that across the USA. I want to know how Google cars will fare with crumbling Detroit infrastructure, or in a Buffalo snowstorm. To compare accident rates now is “apples to oranges,” and even so, Google cars don’t look so hot.

“There are some driverless cars being researched with no human controls of any kind.”

…which is hubris of the WORST kind. When I flew the Beech 1900, there was a manual override pump to lower the gear in case the hydraulic pumps failed. I never used it; nobody I knew ever used it, either. But for damn sure we were told where it was, and we were tested in the sim to see if we could use it! Having manual overrides (even if they’re never touched and have an inch of dust on them) is simply proper engineering and a healthy level of respect for Murphy.

Which leads to Murphy’s Law: “Anything that can fail, will.” And as the number of samples approaches infinity, Murphy’s Law is absolute: no matter how well designed, everything made WILL fail, eventually. (Any nonzero probability of failure, given enough trials, becomes a near-certainty.) Not having backups of safety-critical controls is unconscionable, and it appears the motivation here is a mere peeing contest: to show that we “can” do it.
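
To put that last point in rough probability terms: if each trip carries some fixed failure probability p > 0, then over n independent trips

P(at least one failure) = 1 − (1 − p)^n → 1 as n → ∞

so with enough miles driven, a failure somewhere is effectively certain; the only question is whether the system fails safe when it happens.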