Tesla's "Full Self-Driving" technology

True… but why does the argument have to be about “all or nothing”?

Current driver-assist technology does far more to bridge the gap TODAY than waiting for the uncertain day when “fully autonomous” becomes a reality.

And since assist technologies are becoming standard in modern vehicles, we can reasonably argue that the baseline for “fully autonomous” will be a moving target, because assist features are already helping the situation here and now.

1 Like

The only one forcing that argument is Musk. No one has a problem with Honda or Toyota or Cadillac’s driver assist systems, because they call them various synonyms of “driver assist.” Musk, however, calls it “full self driving” when it’s not. And when people object that no one in authority has signed off on the idea that the cars can drive themselves, Musk winks and makes it out as though his cars can perform fully autonomous driving, and it’s just that those sticks-in-the-mud in the government won’t admit it. I remember one Tesla rep who was hawking Teslas at an SCCA race. He told me, “Yeah, I drove all the way here from Minneapolis on Autopilot. I just read a book.” These are Tesla reps, telling people that it’s OK to let the car do the driving while they do something more entertaining.

And people buy Teslas with “full self driving” and then let the cars drive themselves, and cause crashes.

The bottom line is that consumers will take advantage of advertised conveniences even if those conveniences don’t really exist. So it behooves the automakers not to give them the idea that the cars can drive themselves without the driver having to pay attention. No one’s trying to stop responsible automakers from implementing driver assist systems. But one irresponsible blowhard pretending his car can do something it can’t is getting people killed.

4 Likes

Yet the guy always manages to come out of it dry…

2 Likes

Not surprising. Rich people/corporations tend to get away with, in some cases, murder in this country. Musk should have been penalized for violating California’s Covid restrictions. He should have been penalized for violating the FAA’s no-fly order for his Starship test. And he should be penalized for the autopilot malarkey.

But then Ford still exists after literally choosing to murder people with a known-dangerous Pinto rather than fix it because they calculated the wrongful death lawsuits would cost them less than the fix.

We tend to be rather blasé about holding high-powered individuals and companies to task for their actions in this society.

4 Likes

And that includes very wealthy people of all stripes. Wealth tends to make people think that they are “above” the strictures that are supposed to apply to everyone. And that leads to very wealthy people engaging in dangerous driving behaviors, falsely promoting their proprietary “full self driving technology”, violating public health protocols, and engaging in other practices that would result in arrest for people of more moderate means.

3 Likes

No disagreement. But the question is, given an imperfect world, does it still make sense to do it? Just b/c something can be done doesn’t necessarily imply it should be. And if it should be, under what circumstances? IMHO it makes more sense to start w/self-flying cargo-carrying jumbo jets than road vehicles. Jumbo jets already have a lot of autonomy experience, what w/their autopilots. And there could be ground-based remote aviators (actual people, sitting in a control room) on hand to assist when something unexpected came up. If starting w/road vehicles, it makes more sense to begin by using them only on special-purpose roads.

Obvious study conclusions here, FSD breeds inattention:

1 Like

I do not recall the exact details, but years ago, after a metro train accident, there was an article trying to get into the specifics of why the crash could still happen despite all the “train driver assistive technologies” that had been introduced there.

They raised the point that the metro operator in question employed “active assist” technology, where the system performs most of the operations and the human operator is present only to override or intervene if something goes wrong.

The opposite example was from an EU country where they also employ assistive technologies, but from the other side: the human operator drives, and the system catches the common/stupid mistakes and warns or overrides if needed.

Unsurprisingly, incident rates were much, much lower under the second modus operandi, and operators there were never caught reading books, playing on their smartphones, or sleeping :slight_smile:
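If it helps to see the distinction concretely, here is a tiny, purely illustrative Python sketch of the two control loops described above. Every name in it (Automation, Operator, the actions) is my own assumption for the example, not anything from the article or any real metro system:

```python
# Purely illustrative sketch of the two assist philosophies described above.
# All class/method names and actions are hypothetical, not from any real system.

class Automation:
    def plan(self) -> str:
        return "proceed"                      # routine control decision
    def looks_wrong(self, action: str) -> bool:
        return action == "overspeed"          # a common, detectable human slip
    def safe_action(self) -> str:
        return "brake"

class Operator:
    def plan(self) -> str:
        return "overspeed"                    # the occasional human mistake
    def wants_to_intervene(self) -> bool:
        return False                          # attention drifts when rarely needed

def automation_primary_step(system: Automation, human: Operator) -> str:
    """'Active assist': the system drives; the human is only there to override."""
    action = system.plan()
    if human.wants_to_intervene():            # the whole safety margin hangs on this
        action = human.plan()
    return action

def human_primary_step(system: Automation, human: Operator) -> str:
    """The inverse: the human drives; the system warns/overrides on common mistakes."""
    action = human.plan()
    if system.looks_wrong(action):            # automation acts as the safety net
        action = system.safe_action()
    return action

print(automation_primary_step(Automation(), Operator()))  # "proceed"
print(human_primary_step(Automation(), Operator()))       # "brake"
```

The point of the sketch is just that in the first loop the safety margin depends on a human staying attentive while doing almost nothing, whereas in the second the automation is the part that never gets bored.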

1 Like