Why trees have wreaked havoc on Uber's self-driving program

Just another problem to be solved. As I’ve stated many times in this forum…self-driving vehicles are still evolving. IMHO we’re a decade away from having truly autonomous vehicles. In 10 years the software will have evolved through 30-50 generations.

The problem is that they are on the roads right now, sharing them with soccer moms taking their kids to practice.


You’re assuming that the driverless cars are actually driverless. They aren’t. There’s a driver to take over at any moment. And those are the ones complaining about the trees. The problem is the trees are causing the cars to stop or slow down significantly. That’s actually a good thing.

Why is it good? To work out the programming problem?

It’s good because the software is being overprotective…I’d rather have it THINK there’s a person when there isn’t…than have there BE a person when it thinks there isn’t. The latter could kill someone. The former is just an annoyance from being overprotective.
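To make that tradeoff concrete, here’s a minimal sketch (my own illustration with made-up numbers, not anything from Uber’s actual software) of a detection threshold: set it low and the car brakes for shadows too (false positives); set it high and it risks missing a real pedestrian (false negatives).

```python
# Purely hypothetical detector scores and braking thresholds.
# A lower threshold = more phantom stops, fewer missed pedestrians.

def should_brake(pedestrian_score: float, threshold: float) -> bool:
    """Brake whenever the detector's confidence exceeds the threshold."""
    return pedestrian_score >= threshold

# Made-up confidence scores: a shadow vs. a real pedestrian.
shadow, person = 0.35, 0.80

overprotective = 0.3   # brakes for the shadow too (annoying but safe)
aggressive = 0.6       # no phantom stop, but less margin on the person

print(should_brake(shadow, overprotective))  # True  -> unnecessary stop
print(should_brake(person, overprotective))  # True  -> pedestrian protected
print(should_brake(shadow, aggressive))      # False -> no phantom braking
print(should_brake(person, aggressive))      # True  -> still detected
```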


If a self driving car stands on the brakes because of shadows, that’ll cause accidents. Not a good thing.

#1 - I didn’t read in the report that braking too fast was a problem.
#2 - If there is an accident it’s because the person behind is driving too close.
#3 - There’s someone in the car to take over at any moment.

As I’ve stated many times in this forum over the past couple of years…this technology is new…it’s NOT ready for primetime. It’s still being tested. I’m not ready to pass judgment until it’s released. Read the IEEE standards on when they consider autonomous vehicles ready. By my estimate we’re at least 10 years away.

Doesn’t careless or reckless driving include braking too fast, or stopping suddenly, for shadows?

Not where I’ve ever lived. The driver in the rear is responsible for their car. If they run into the car in front of them because it braked too hard, the driver in the rear is charged and responsible for any damage. It happens all the time here in MA and NH. People drive just inches from the car in front of them doing 70+ mph.

That sounds limited.

That is clearly an oversimplification and so NOT the rule everywhere.

It also seems to be changing the subject, which is whether braking too fast, or stopping suddenly, for shadows can be a traffic violation.

Sonar can distinguish between a dense object and its shadow. It may be a necessary part of the inputs into a self-driving car.
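Roughly what that could look like (my own sketch, not any vendor’s actual fusion logic): only treat a camera detection as a solid obstacle if an active range sensor also gets a physical return, so a shadow with nothing to echo off doesn’t trigger braking.

```python
# Hypothetical fusion rule: a camera "obstacle" counts only if an active
# range sensor (sonar/radar/lidar) also reports a physical echo.
from typing import Optional

def confirmed_obstacle(camera_sees_object: bool,
                       range_return_m: Optional[float],
                       max_range_m: float = 30.0) -> bool:
    """Brake-worthy only when something dense actually echoes back."""
    if not camera_sees_object:
        return False
    if range_return_m is None:        # no echo -> likely a shadow or marking
        return False
    return range_return_m <= max_range_m

print(confirmed_obstacle(True, None))    # False -> shadow, keep driving
print(confirmed_obstacle(True, 12.5))    # True  -> real object, slow down
```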

That didn’t prevent the death of an Arizona pedestrian hit by an autonomous Uber vehicle in March.

Expecting the system to be 100% safe at all times is naive. I agree the accident was tragic. I’m not sure why she wasn’t recognized. Personally I think it needs more time testing in a dedicated facility.

Again - as I’ve stated many times already…Autonomous vehicles are NOT ready for prime time. There are many things that need to be worked out. But I firmly believe the problems are solvable. And when they are ready (10+ years from now) and the majority of people are driving them - almost every expert on the subject agrees that accidents/injuries/deaths will go down…WAY DOWN. An estimated 90% of all accidents are due to human error. Gee - just getting the drunks off the roads makes me feel better.
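Back-of-the-envelope math on that 90% figure (the crash total and the assumed machine error rate below are made up purely to illustrate the arithmetic): even if the autonomous systems reintroduced a tenth of the crashes they eliminate, the overall count would still fall by roughly 80%.

```python
# Toy calculation using the ~90% human-error share cited above.
# The crash total and machine error assumption are hypothetical.

total_crashes = 100_000            # hypothetical annual crash count
human_error_share = 0.90           # share attributed to human error
machine_reintroduced = 0.10        # assume AI causes 10% of those instead

remaining = (total_crashes * (1 - human_error_share)
             + total_crashes * human_error_share * machine_reintroduced)

print(remaining)                      # 19000.0
print(1 - remaining / total_crashes)  # ~0.81 -> roughly 81% fewer crashes
```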

Really? So if a guy is driving down the road and a dog runs in front of him and he hits his brakes, then whose fault is it when the guy behind slams into him? As the driver in the rear - you should be driving defensively. Who knows when a situation like that will happen. The second car probably would never see the dog or the kid running into the street.

If you mean “system” to be the car alone, I agree.
If you mean “system” to include the human “driver to take over at any moment” that you brought up, then the “system” failed to help the Arizona woman.

You seem to misunderstand – I was referring to your comment being based on “where I’ve ever lived”

And the part of the system that failed this woman was HUMAN. It was HUMAN ERROR that…

#1 - Didn’t monitor the drivers more closely.
#2 - Allowed this type of testing before the system had fully demonstrated it could handle it.

Don’t dis the technology; the reason the woman was killed was human policy.

I think we have two things here:

  1. Humans remain the primary control and may be augmented by the AI => this will definitely reduce the “human error rate” substantially. This is what we see from all the major makers right now, and I perceive it as a very positive development that is ready for prime time.

  2. Humans are eliminated from the primary role and (in the current generation of AI control systems) play a “fail-safe” role, at least in the legal fine print nobody tends to read to the end.

I perceive situation #2 as much more explosive:

  • unlike AI, humans cannot consistently concentrate when they are not actively in control, which makes them a very unreliable “backup system”; in contrast to #1, the AI can tirelessly watch and wait for a human mistake to correct
  • as you mentioned, AI systems are totally not ready for prime time, and I agree completely. What concerns me is that we (humans) are only beginning to grasp how we perform the tasks we do subconsciously, driving included. There is an “unknown unknown” component involved: we do not yet understand how to solve the problem, so it is not yet a well-understood engineering-to-production type of problem. I’m not at all convinced when humanity will get the AI error rate below the average “human error rate”
  • this leads us to substitute an unknown (AI) error rate for the known (human) error rate. We wish it to be lower, but that wish cannot necessarily be fulfilled on a time-boxed schedule; this is the “unknown unknown” actively at work here
  • competition pushes makers to rush products out just to put a stake in the ground; Tesla is probably the worst offender in this regard
  • all of this leads to half-baked products being pushed onto the streets with the public as beta testers, because there is really no other economical way to test them thoroughly

I don’t think progress will stop, but that Arizona pedestrian fatality will not be the last in this context.

That’s quantifiable. Look at the IEEE autonomous-vehicle documents. There is a very specific set of tests/tasks/skills autonomous vehicles need to demonstrate before they are given the green light.

The technology is changing fast…very fast.

The transition stage is the most dangerous. Personally I think more testing in a lab environment is needed before they get put onto public streets. What I mean by a lab is basically a small town built for testing, with real-world, very lifelike buildings/people/cars/trees and anything else the driverless car will encounter. The reason companies aren’t doing this is cost.

Maybe not just ONE driver, but a driver and human monitors…