The growing support for autopiloted automobiles is largely based on statistics involving irresponsible and incompetent drivers. Responsible, competent drivers who daily avoid catastrophes in the making by those less-than-adequate drivers won’t be easily convinced that AI modules can outdrive them.
Wow! That rang a bell with me.
I remember hearing about an automotive innovation. It was called ABS, a new safer braking system that could stop a car faster than the most highly skilled professional race car driver.
Now that it’s on my cars, whether I want it or not, I’m still not convinced it lives up to the promise.
Speaking of incompetent drivers… I will concede this point: when each of my children was first on their own with a car (and without much driving experience), I was satisfied knowing they had ABS, especially considering that our roads are covered with snow/ice much of the year.
Still, as someone who learned to drive and stop with conventional brakes, I’m one who wishes I had them back.
CSA
There are many competent drivers who get in catastrophic accidents because of ONE silly mistake, or because of ONE other not-so-good driver. I’ve yet to know the driver who can keep their eyes on the road 100% of the time just driving to work - let alone taking a 300-mile road trip. NOT GOING TO HAPPEN. Fortunately, most of the time those lapses don’t coincide with something that could cause an accident.
Autonomous vehicles will be paying attention 100% of the time, in 360 degrees, for as long as they’re driving, without any lapses. Just because you can’t convince those drivers doesn’t mean it isn’t true.
I too learned to drive long before ABS…but I do like ABS, especially driving on snowy/icy roads. Along with ABS comes traction control. Traction control saved me at least twice when I came upon a road that was all black ice. Cars and trucks were swerving all around me. I was lucky I wasn’t hit. But traction control kept me on the road and safe. On one stretch of 495 there were at least 10 cars off the road with numerous accidents. I was one of 4 vehicles that came through it unscathed. The road was extremely slick…but didn’t look it. I pulled over because the road was completely blocked, and I had to wait a few hours for the tow trucks and sander/salter to come through. Only one person was injured, but not seriously.
I trust the safety of an autonomous vehicle on a par with the housekeeping ability of a robot vacuum.
I recall leaving LA heading east with a GPS that instructed me to take an exit to a side street in an industrial area of Ontario. In a few miles I got back on the freeway, but the GPS immediately instructed me to exit again, which I did at the insistence of my passenger, who had more faith in the device than me. After a few more miles of heavy traffic I got back on the freeway and ignored the demands of the GPS, driving hell-bent to get out of the city while the GPS kept demanding that I exit, then recalculating when I didn’t.
Of course, then there was the trip across Texas when a hurricane was in the Gulf and the GPS went totally blank as I drove from Houston to Biloxi using dead reckoning.
Where would I have been on those trips and dozens of others if No. 5 had been driving?
You keep equating autonomous vehicles to what is available NOW as opposed to what they WILL be like when they become available in 15+ years. The technology is still evolving (very rapidly), so it’s too early to make any judgments on what it will be like in 15 years. From what I’m seeing today, I don’t see a problem with autonomous vehicles reaching their goal in 15 years. They’ve come a long way in a very short time, and there are now a lot of players in the field.
People who don’t understand technology are the ones most scared of it.
True, but we will be easily convinced that AI modules can outdrive the idiots from whom we daily avoid catastrophes.
So, you’re saying none of the cars that had problems had traction control?
;-]
“100%” implies machines that never break down, which is implausible, and not borne out by my real-world experience with machinery, ESPECIALLY electronic machinery.
As to whether 99% reliability would be better or worse than the status quo, that depends on how hard it is for humans to quickly intercede in the event of a failure, and the extent to which human skills degrade as a consequence of not being regularly exercised. (It should be noted that in aviation…which is arguably “ahead” of driving WRT automation…automation has contributed to several accidents due to both of the aforementioned tendencies.)
The proof of the pudding is in the eating, after all, and we don’t yet have sufficient real-world experience on which to base such an assertion. “I’ll believe it when I see it!”
Not really. If a system is 99% reliable and humans are, let’s be really generous here and say 90% reliable, then even if humans could not intercede, the system would still be better than when humans are controlling things.
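To put rough numbers on that (the 90%/99% reliability figures are the illustrative assumptions from this thread, not measured data), here’s a back-of-the-envelope sketch:

```python
# Toy comparison of failure counts. Both reliability figures are
# assumptions for the sake of argument, not real-world statistics.
human_reliability = 0.90    # assumed: humans handle 90% of situations correctly
machine_reliability = 0.99  # assumed: automation handles 99% correctly

situations = 10_000  # hypothetical number of driving situations

print(f"Human failures:   {situations * (1 - human_reliability):.0f}")    # 1000
print(f"Machine failures: {situations * (1 - machine_reliability):.0f}")  # 100
```

In this toy model, the 99% system fails one-tenth as often as the humans, even with no human backup at all.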
I would argue that a good portion of the automation-related crashes in aviation are attributable to human factors precisely because aviation automation is not sufficiently advanced. The 777 crash at SFO is a great example. The ILS was down, and the pilots were poorly trained because Asiana Air is lax in its training standards and essentially encourages pilots to assume the machines will do it for them. This is a problem in aviation because when the machine (ILS) isn’t working, humans can still make the bad decision to continue beyond their abilities. Had the pilots acknowledged that they weren’t current/competent enough in visual landings, they would have diverted to an alternate with a working ILS and none of this ever would have happened. Instead, because humans screwed up, the plane crashed and people got killed.
By contrast, there are a couple of factors in the current thinking for vehicle automation that will mitigate this problem. First, cars will not rely solely on external equipment to be working in order to function properly.
The plane cannot land itself unless equipment installed at the airport is working right. There isn’t a camera in the nose that the autopilot can use to shoot a visual landing.
Self-driving cars already have a full complement of cameras, radar, etc. They do not need external equipment to evaluate the driving environment. This, btw, is evidence, I think, that aviation is not anywhere close to “ahead” of ground automation.
And the second factor is, the car will doubtless run a self-diagnosis before, and probably during, each trip. If anything necessary for safe operation isn’t working properly, the car will not continue the trip.
Had the Asiana airplane had an autopilot which was programmed in the way that car autopilots will be, it would have on its own made the call to divert to another airport with working equipment, and the bad decisions made by the humans would not have gotten people killed.
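A minimal sketch of what that pre-trip gating logic might look like, purely for illustration (the system names and the health-check interface are made up; no actual vehicle software is implied):

```python
# Hypothetical pre-trip self-diagnosis gate. The list of systems and
# the status dictionary are illustrative assumptions, not a real API.
REQUIRED_SYSTEMS = ["cameras", "radar", "brakes", "steering"]

def self_check(status: dict) -> bool:
    """Return True only if every safety-critical system reports healthy."""
    failed = [name for name in REQUIRED_SYSTEMS if not status.get(name, False)]
    if failed:
        print(f"Trip refused; failed checks: {failed}")
        return False
    return True

# Run before the trip; the same gate could run periodically mid-trip,
# with a failure triggering a safe pull-over instead of a refusal
# (the ground-vehicle analogue of diverting to an alternate airport).
if self_check({"cameras": True, "radar": True, "brakes": True, "steering": True}):
    print("All systems healthy; trip may proceed.")
```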
Um, why?
Speaking as a pilot, an ILS is NOT a necessary element for a visual approach and landing. Heck, a 50-hour Cessna pilot is expected to land without one. In my flying years, we were told to tune in the ILS IF THE AIRPORT HAD ONE. There are VASIs and other lighted visual aids for visual reference. I flew to many airports without ILS approaches…or at least I would have had to accept a wicked crosswind to use the ILS runway.
Any pilot incapable of landing without a working ILS is incompetent. Knowing this, the pilots surely felt unable to concede it: had they diverted, costing the company six figures because of their own incompetence, they’d have been filling out their resumes!
(And ILSs are often down for maintenance. Airplanes do NOT divert to alternates simply because of this; that’s laughable! The problem was one of training: in-house training over-emphasizing “flying the computer” and under-emphasizing “flying the airplane.” Had training emphasized “stick and rudder” flying and de-emphasized “computer operator,” this crash doesn’t happen.)
You wreck your car every tenth time you pull out of the driveway? Dang, I’d hate to see your insurance premiums!
Never said that…never implied that. All I said was that traction control helped me. I do know that some didn’t have it because they were built before traction control was available.
Yes. I have already said those pilots were incompetent. They were not capable of landing without an ILS. Since the ILS was not working, the machine was also not capable of landing. Had the machine been programmed to make safety decisions for the flight, it would have recognized that it was not capable of making the landing, and flown somewhere that it was capable of landing.
Instead, the incompetent humans in charge of the flight who were not capable of making a visual landing decided to make a visual landing anyway, and people died.
Please understand that I am not suggesting that standard procedure for an inop ILS should be a diversion. But I am suggesting that when a pilot gets in over his head in any phase of flight, rather than continue doing the thing that he is not able to do, he should instead call for help and, as quickly as possible, get back into a situation that he is capable of handling.
AOPA has a very interesting set of videos on YouTube which analyze plane crashes (from GA to commercial). In nearly all of the crashes, the failure chain included the pilot getting into a situation that he was not proficient in (a common one is VFR-into-IFR for a non-instrument pilot), and then making the decision to continue doing the thing he wasn’t good at doing rather than turning around and getting back into a flight environment he was competent in. It’s amazing how many crashes involved a pilot seeing the IFR conditions ahead, acknowledging to ATC that he saw them, listening to ATC recommend that he turn away, and then choosing to continue toward the IFR conditions, reasoning that if he descended under the clouds he’d probably be fine – and then of course he flew into clouds and shortly thereafter into the ground.
The two main failures in the Asiana flight were 1) humans running the company decided to allow a training program which trained their pilots poorly so that they were not capable of making a visual landing and 2) the humans running the flight decided to do something they weren’t good at doing even though they knew they weren’t good at doing it.
Your arguments are basically echoing what I have already said, yet you seem to think that we are in disagreement.
A bad driving decision is a failure whether it results in a wreck or not. No, I do not make bad driving decisions 10% of the time, but others make up for my low statistics by making bad driving decisions much more frequently than that. Ever see someone weaving around while playing with a smartphone? That’s a driving failure, and it counts toward the human failure rate.
The idea that a failure is only a failure if it results in an actual catastrophe is exactly the kind of thinking that led to both space shuttle disasters. It’s the kind of thinking that needs to be avoided in any sort of safety culture.
True…but the odds of them breaking down are probably even lower than those of today’s vehicles breaking down. I know that some of these vehicles have extensive redundant/backup systems: if one breaks down, there are one (or more) to take over. You can make total system failure extremely remote. Backup systems to backup systems to backup systems.
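The arithmetic behind redundancy is worth spelling out (the 1% per-system failure rate below is a made-up figure for illustration, and it assumes the backups fail independently):

```python
# Probability that ALL redundant systems fail at once, assuming
# independent failures. The 1% per-system rate is illustrative only.
p_fail = 0.01  # assumed failure probability of a single system

for n in range(1, 4):
    print(f"{n} redundant system(s): total failure chance = {p_fail**n:.6%}")
# 1 -> 1.000000%, 2 -> 0.010000%, 3 -> 0.000100%
```

Each added backup multiplies the odds of total failure by another factor of 100, which is why multiply-redundant designs can make complete failure extremely remote.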
The only issue that may arise when an autonomous vehicle fails is being surrounded by NON-AUTONOMOUS vehicles.
As I’ve stated many times in this forum…this is not going to be pushed upon us all of a sudden. The technology is being rolled out in incremental steps. We are already seeing some of it today (lane departure warning, automatic braking).
I recommend that anyone who relies on ABS/traction control to keep them safe on a slippery road take a defensive driving course with emphasis on driving on slick surfaces.
I recommend that people who don’t understand the technology stay out of the conversation.
Sure you don’t want to edit that? I suspect the real reason for developing the technology is not to reduce accidents but for the technology companies to carve off another market in their ever-expanding quest for profits and control over us humans, in their utopian view of how the world should be.
I thought the technology was worthless until it happened to me. Black ice at 8pm is a rare encounter…and there is not much you can do about it when you encounter it. The fact that there were at least a dozen cars off the road attests to that. The traction control helped me a lot…especially when the vehicle that was passing me started spinning and then bouncing from one guard rail to the other. Traction control allowed me to slow down safely and keep the vehicle under control. I really didn’t know I was on black ice until I started to maneuver around the car that was spinning and felt the traction control kick in. Then I saw many other cars in front of me and behind me spinning or just driving off the road.