'Robot Cars Can't Count on Us in an Emergency'


I thought this was the reason they are inventing self-driving cars


Isn’t not having to pay attention the whole point of self-driving cars? Sign me up. I would love to be able to read a book on a trip. I might even buy an RV so I could get up and walk around. At my age, what’s to lose? A few years in a nursing home? I didn’t live my life being afraid, and I am not going to start now.

By the way, there are already self-driving tractor trailers operating in some Western states. That is where the huge economic push for self-driving vehicles is going to come from. After all, there are not many people looking to save money by getting rid of their chauffeur.


No it isn’t. The push is from city drivers, or commuters who drive to cities.

Self-driving tractors (using GPS) are a completely different animal.


Multiple sensors that form a “sensor suite,” each adding information to the system. That’s how it’s done now on non-self-driving cars and how it will be done on autonomous cars.

Radar, sonar, and image sensors. I don’t know of any production cars using laser sensing, although many of the self-driving test cars use 3D scanning laser systems. Each sensor is best in certain situations, but combined they cover the required range. It also means the system does not depend on a single point of failure.
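The idea above can be sketched in a few lines: combine the distance estimates from several sensors, weighted by how much each one can be trusted at the moment, and ignore any sensor that has dropped out. This is a toy illustration, not any real vehicle’s fusion algorithm; the sensor names and confidence weights are invented.

```python
# Hypothetical sensor-fusion sketch; sensor names and confidences are
# illustrative only, not from a real vehicle platform.

def fuse_range(readings):
    """Fuse distance estimates from several sensors.

    `readings` maps sensor name -> (distance_m, confidence 0..1).
    A failed sensor reports confidence 0 and is ignored, so no single
    sensor is a single point of failure.
    """
    live = {k: v for k, v in readings.items() if v[1] > 0}
    if not live:
        raise RuntimeError("all sensors failed - trigger safe stop")
    total = sum(conf for _, conf in live.values())
    return sum(dist * conf for dist, conf in live.values()) / total

# Radar is strong in rain, the camera in good light, sonar at close range.
est = fuse_range({
    "radar":  (41.8, 0.9),
    "camera": (40.2, 0.6),
    "sonar":  (0.0,  0.0),   # out of range -> zero confidence, ignored
})
```

With the radar weighted more heavily than the camera, the fused estimate lands closer to the radar reading, and the dead sonar contributes nothing.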


Are we thinking backups for the control systems as well? Two collision sensors is one thing; two sensor command/control computers would add a lot of expense, and we all know how automakers feel about adding lots of expense. :wink:


I sincerely hope and expect that automakers are doing parallel processing!

Parallel processors running simultaneously and comparing results would add little expense to the ECUs doing these tasks. This is a requirement of the German Machinery Directive for things like forklifts and AGVs running “by-wire” control systems, as is dual sensing. US specs are a bit more lax; they specify redundancy but not exactly how it is accomplished. Failures must be recognized, and a safe shutdown path must be available.
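The cross-check-and-shut-down pattern described above can be sketched like this. This is a simplified illustration of two-channel comparison, assuming a tolerance value and channel functions I made up for the example, not a real ECU design or the Directive’s actual wording.

```python
# Illustrative two-channel comparison sketch; the tolerance and the
# channel callables are assumptions made for this example.

def command_with_cross_check(channel_a, channel_b, tolerance=0.02):
    """Run the same control computation on two channels and compare.

    If the results disagree beyond `tolerance`, a fault is declared and
    the safe shutdown path is taken instead of acting on either value.
    """
    a = channel_a()
    b = channel_b()
    if abs(a - b) > tolerance:
        return ("SAFE_SHUTDOWN", None)   # recognized failure, safe path
    return ("OK", (a + b) / 2)           # channels agree; act on average

# Two channels computing the same steering torque, agreeing closely:
state, torque = command_with_cross_check(lambda: 0.501, lambda: 0.499)
```

The point is that the comparison itself is cheap; the cost is in running the computation twice, which modern multi-core ECUs can absorb.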

I cannot imagine automakers are not comparing and contrasting other industries’ requirements while designing these systems. It goes back to 99.999 percent (100% is not possible), because it’s that 0.001 percent failure that will cost you in court. As long as the incidence is very low, the company can survive financially. Lawsuits are a cost of doing business.


See, that’s where I’m slightly more pessimistic than you, because we are talking about an industry that continued to produce killer Pintos after Ford calculated that the wrongful death lawsuits would in the end be cheaper than a redesign.

The business climate in general in our society is fairly short-sighted. Chase this quarter’s profits and maximize them - don’t worry about what’s gonna happen 5 years from now because the only thing that matters is stock price and dividends.

I mean, the 2008 financial crisis happened because banks intentionally removed and ignored failsafes that were meant to keep them from investing in things that would bankrupt them.

And even our most respected institutions that don’t chase profits do stupid things in the name of cost and expediency. Challenger blew up because of a design flaw that was known about from the earliest days of the shuttle program, but the prevailing attitude was “well since nothing bad has happened yet, nothing bad will ever happen.” And then NASA did it again with Columbia, which was another design flaw that had been identified in the first shuttle launch.

I strongly suspect that if an automaker can save money by eliminating a redundancy, they will, because as a society we are really bad at risk management.


I think Airbus should bear more responsibility than the Air France pilots in that accident. Put anyone in a cockpit at night with no outside visual reference and they have to rely on their instruments, and as far as I know, their instruments sucked in that situation. When the stall alarm resumed as the pilot tried pointing the nose down, that was a major piece of conflicting info that confused a pilot who was trying to understand the situation. I sincerely cannot believe that the pilot holding the stick that night could have been hired by such a major airline without demonstrating basic stall recovery.

I know Boeing’s automation is not exactly flawless. But Boeing’s automation provides both tactile and visual feedback, and the pilots should know when the robots are doing something they shouldn’t. In both the 777 crashes, in San Francisco and Dubai, the pilots should have FELT the autothrottle backing off at the wrong time without looking.

Speaking of airline pilots, they are required to complete periodic simulator checks, in which their emergency handling knowledge is tested, in order for their licenses to remain current. Perhaps we’ll need the same policy for land-based vehicle operators, to keep everyone’s basic driving skills up to date and ensure all licensed drivers can deal with automation malfunctions.


Google, which is the leader in this technology, wasn’t even around when the Pinto was designed and manufactured.

I HOPE that any self-driving vehicle will have to meet a very stringent set of requirements and standards before it is released to the general public. As I stated earlier, the earliest self-driving cars for the general public are at least 15 years away.


Because the pitot tubes had iced over. That’s not really Airbus’ fault.

He pointed the nose up. He yanked back on the stick and they entered a 7,000 foot-per-minute climb while decelerating to 52 knots. The stall warning was doing exactly what it was supposed to do, but the pilot had decided they were going too fast rather than too slow and kept applying nose-up to try and slow down. When the plane hit, the pilot had the stick all the way back and had adjusted the elevator trim to 13 degrees nose up.

All the pilot had to do was to look at the artificial horizon to see that they were oscillating between 35 and 40 degrees nose-up to figure out that whether the stall warning was sounding or not, they must be in a stall, but for reasons unknown, he didn’t.


Angle of attack was 35 to 40 degrees, but the attitude was only 10 degrees up. At one point the plane slowed to the point where the computer decided the stall warning was invalid and stopped blasting that alarm. That was when the pilot briefly tried to reduce the attitude and was hit with the stall warning again. Putting the nose down and getting a stall warning as a result was conflicting info.


A pretty broad topic, but to the point the OP was making, I heard some folks who are at MIT working on vehicle autonomy bring this subject up last month. The term they use is “Drivers are predictably unpredictable.” To combat much of the bad behavior described in that paragraph, most automakers now require the driver’s hands to be in contact with the wheel for the limited self-driving they offer (Honda has the best, by the way). A recent post at a Tesla forum I participate in discussed how to override that safety feature on AP2 (Autopilot version 2). The poster was shouted down for the most part by fellow owners. The most interesting thing I learned at MIT on that subject was that the more people learn about vehicle autonomy, the less interested they become.


Here’s the next step in the evolution of self-driving vehicles…


On the other hand, it’ll never run out of gas.


Show me a plane with no pilot and I’ll show you a plane that people will not want to board. All day long I use technology. Computers that need to be rebooted, phones that drop calls and need to be reset, cars that lose their electronic minds and need new ECUs or airbag controllers, and on and on. I love technology and embrace it in all parts of my life. I even use SnapChat and Instagram. But I am totally not thrilled about getting into any transport that has no human driver/pilot element. It’s not because I’m a better driver than a robot; I’m not. It’s not that my reflexes are faster than a robot’s; they aren’t. To put it bluntly, if today is my day to die on the road, I would rather it be while I am piloting my vehicle and not while a robot is making decisions. Robots are great at repeatable tasks, and much of driving is NOT a repeatable task. Driving is a series of complex decision trees that often don’t have a clear right or wrong answer. Humans are much better at that.


Don’t equate your home computer to industrial computers. My company has installed systems that have been up and running for 2 years without an interruption.


I’m certainly no computer pro, but my guys told me they had redundant systems so that one could be shut down for maintenance and repairs while the other one held the load. It seemed to happen about every month for regular updating, or more often when there were issues. We only had problems when both failed at the same time or when a device common to both failed, which did happen on occasion. Two years without a shutdown or failure, and my hat is off to you.


Redundant systems will fail over to one of the other systems seamlessly.

What you’re describing is a backup system. Usually you’ll have one system that needs to be manually shut down and the backup manually started.
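The failover-versus-backup distinction can be shown with a toy sketch: in a failover pair, requests automatically flow to the standby when the primary dies, with no operator stepping in. The class and function names here are invented for illustration.

```python
# Toy contrast illustrating seamless failover; names are invented.

class FailoverPair:
    """Redundant pair: requests go to the primary; on failure the
    standby takes over automatically, with no operator action."""

    def __init__(self, primary, standby):
        self.units = [primary, standby]

    def serve(self):
        for unit in self.units:
            try:
                return unit()
            except RuntimeError:
                continue  # fail over to the next unit seamlessly
        raise RuntimeError("both units down")

def dead_primary():
    raise RuntimeError("primary offline")

pair = FailoverPair(dead_primary, lambda: "served by standby")
result = pair.serve()   # no manual shutdown or startup needed
```

A manual backup, by contrast, would sit idle until someone noticed the failure and started it by hand, which is the gap being described above.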


As a follow-on to Mike’s post, consider communications satellites. They are in geosynchronous orbit about 22,300 miles above the equator and operate for 20 years without stopping by using redundant systems. At that altitude there is no way they can be repaired.


Redundancy was a major factor in Rolls-Royce’s reputation for reliability. Of course, highly trained drivers were necessary to operate the backup ignition and fuel systems.