Last night a woman was struck by an autonomous Uber vehicle in Tempe, Arizona.

Currently about 5,400 pedestrian deaths occur every year, and in about half of those cases the pedestrian was drunk:

https://www.cdc.gov/motorvehiclesafety/pedestrian_safety/index.html

I’d guess only a fraction of those could even be prevented by autonomous cars. Again, the physics of the situation is in force: without enough stopping distance, a pedestrian strike results, and no machine or person can prevent that.
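To put rough numbers on that stopping-distance point, here’s a minimal sketch in Python; the 1.5-second reaction time and 0.7 friction coefficient are typical dry-pavement assumptions of mine, not figures from this crash:

```python
# Rough stopping-distance math: reaction distance plus braking distance.
# The reaction time and friction coefficient below are assumed values
# (typical for an alert driver on dry pavement), not data from this incident.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance_m(speed_mph, reaction_s=1.5, friction=0.7):
    """Distance covered during the reaction time plus the braking distance, in meters."""
    v = speed_mph * 0.44704                     # mph -> m/s
    reaction_dist = v * reaction_s              # ground covered before the brakes engage
    braking_dist = v ** 2 / (2 * friction * G)  # distance to scrub off speed via friction
    return reaction_dist + braking_dist

print(f"{stopping_distance_m(40):.0f} m")       # ~50 m (about 165 ft) at 40 mph
```

At 40 mph that’s roughly 50 meters; a pedestrian who appears inside that envelope gets hit no matter who or what is driving.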


It’s just kind of interesting how these things are spun. We have all been in driving situations where we spot kids playing with a ball down the street, or a kid on a wobbly bike in a driveway, etc., and taken defensive measures. I just had one yesterday where a small kid wobbled down the driveway on his bike and into the street, not under control. As humans we see these things ahead of time and slow down or move over just in case. Computers do not do this, so why can’t we at least accept the fact that computer drivers sometimes aren’t as good as human drivers? Instead we make excuses for the computer, saying it’s just physics and can’t be avoided. Just sayin’ is all. No need to check bank statements because computers are never wrong.

They put the electric choo-choo trains between Minneapolis and St. Paul recently with much fanfare, and pretty much disrupted what used to be a good east-west non-freeway street. Every couple of months someone else is mowed down for various reasons: you can’t hear the things, they’re stuck on tracks, etc. In Minnesota you usually have your windows up in the winter and the radio on, and it’s hard to hear the ding-ding-ding before these things come out of nowhere. I just think we should make honest assessments of safety. If one life saved by air bags is well worth it, maybe we should also properly assess the risks from some of these other seeming innovations.

I just hope we don’t find out that the robot went nuts and targeted the pedestrian and ran her down, “get off my lawn” style.


One thing is for sure: this will be one of the most photographed accidents, and there won’t be any lack of evidence, given the number of cameras on the Uber vehicle.

One depiction on the morning news showed her walking her bicycle across the street, not in a crosswalk. It showed her in a continuous walk, so the vehicle’s computer should have been able to plot her course against the vehicle’s course and take evasive action. But that depiction was an artist’s rendition. If she stopped in the center divider, the computer in the vehicle and the safety driver onboard may have assumed she was waiting for the vehicle to pass.

If she misjudged the vehicle’s distance and/or speed, she may have erroneously thought she could beat it if she moved fast enough. That is something neither humans nor computers can predict. Sometimes, though, humans see some minor muscle movement in the hands, legs, or even face that gives us an indication of intent, and I don’t think any computer can detect that yet. But it was dark, and even for a human those small movements might have gone undetected when they would have been noticed in daylight.

Who says they don’t? In fact they do. There’s software doing exactly that: analyzing what it sees ahead and making predictions about what might happen. If it thinks a kid might run in front of the car, it’ll slow down.
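As a toy illustration of the concept only (not any vendor’s actual software, which is vastly more sophisticated), the basic idea is to project each tracked object forward in time and react if its predicted path crosses the car’s:

```python
# Toy "predict, then react" sketch. It ignores the car's own motion and a
# hundred other things a real perception stack handles; illustration only.

from dataclasses import dataclass

@dataclass
class Track:
    x: float   # meters ahead of the car
    y: float   # meters to the side (0 = center of our lane)
    vx: float  # estimated velocity along the road, m/s
    vy: float  # estimated velocity across the road, m/s

def predicted_conflict(t: Track, horizon_s=3.0, lane_half_width=2.0) -> bool:
    """Step forward 0.1 s at a time; flag if the object enters our lane ahead of us."""
    for i in range(int(horizon_s / 0.1) + 1):
        dt = i * 0.1
        x, y = t.x + t.vx * dt, t.y + t.vy * dt
        if x > 0 and abs(y) < lane_half_width:
            return True
    return False

# A kid 15 m ahead and 4 m off the road, drifting toward it at 2 m/s:
kid = Track(x=15, y=4, vx=0, vy=-2)
if predicted_conflict(kid):
    print("possible path conflict -> slow down")  # the kid reaches our lane in ~1.1 s
```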

I keep saying this… autonomous vehicles with no driver are still years away. Phase 3 testing is being done now. The software and hardware are changing at extreme speed.

Your point relies on the hypothetical situation where a vehicle cannot stop in time. Yes, that is possible. Yet was that the case in AZ? I don’t know that the final word is in, which means you are relying on your hypothetical to excuse/explain the incident. I’m not going along with that.

Your allegation is noted; I understood how statistics work well before this exchange with you. What’s interesting is how your comments could be taken to mean that a hypothetical 5 fatalities in the first 1,000 miles could still be an “average” of 1 in 100 million, because the next 499,999,000 miles could be fatality-free.
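For what it’s worth, the arithmetic in that hypothetical does check out, which is exactly the problem with leaning on averages:

```python
# 5 fatalities in the first 1,000 miles, then 499,999,000 fatality-free miles:
fatalities = 5
total_miles = 1_000 + 499_999_000   # = 500,000,000
print(total_miles / fatalities)     # 100000000.0 -> one fatality per 100 million miles
```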

And so, even though it might be coincidence that this Uber vehicle and system happened to meet an unavoidable fatality in AZ (like your hypothetical above), I’m asking how many miles they HAVE been tested before this incident, because if they haven’t been tested a lot, maybe it is something other than your hypothetical.
(I also brought it up because you raised a comparison to vehicular fatalities with human drivers.)

Yes, you’re relying on your hypothetical that AI-driven vehicles are deadly. I don’t see how my hypothetical is any worse than yours.

Assuming your math is right (I have no reason to doubt it - I’m just not going to bother checking) then yep… That’s… Pretty much how statistics work. :wink:

Waymo has done 4 million miles alone. I’m not sure the total aggregate mileage of all AI-driving systems has been ferreted out at this point, but even assuming Waymo has done the most (they probably have), it’s very safe to say “many millions,” although I’m sure you are correct that they haven’t gotten to 100 million miles yet.

And that gets around to the point that others have been trying to impress upon you. You evidently expect them to test these vehicles for 100 million miles before they test these vehicles in public. How exactly would you suggest they go about doing that?

Incorrect allegation against me. There is news that an Uber vehicle was involved in a fatality. That is not a hypothetical.

It’s my understanding that Waymo is a different company from Uber. So what Waymo has done may be irrelevant to this case.

Incorrect allegation again. I only brought up the numbers relative to human-driven cars because you brought it up with “when a human-piloted car runs someone over. It happens. It’s going to happen, no matter who or what is driving.”
I did bring up the question of why this type of testing is different from other areas, such as clinical trials for new pharmaceuticals.
A new pharmaceutical that kills someone, even as a coincidence, isn’t simply compared to ‘when a human being eats or takes something, they might die. It happens. It’s going to happen, no matter who is eating/taking or what is being eaten/taken.’

Were the car’s brakes applied when the bike rider was struck? Surely there is a BLACK BOX on the Uber car that will indicate if/when the “obstacle” was recognized and what evasive actions were taken. And surely the vehicle didn’t interpret “GREEN = GO” until too late.

I don’t seem to have nearly the trust in AI that many do. I recognize and consider the likelihood of drivers texting, playing make-believe guitars, calming and feeding children, etc., from half a block away. And children rolling down driveways and out from between parked cars into the roadway is for sure a life-or-death situation, one I have safely handled from a car length away, stopping in time to smile at the startled kid. I might one day ride in a car capable of driving itself, but only with a driver I trust sitting at the wheel, ready to take charge.

I think I see where the confusion lies. Let me try to clear it up using your pharmaceutical example.

If someone takes a new pill, and then gets shot and dies, you don’t accuse the pharmaceutical company of killing him with their new pill.

If someone gets run over by a self-driving car, you don’t immediately say “I expect the vehicle and system to have been tested enough that such an accidental death would be a great unexpected shock” because you do not yet know why the car was unable to avoid killing the woman.

When reports start coming out that there is video of the woman darting out in front of the car from behind a parked vehicle, then it becomes fairly clear that the most likely scenario is that the AI, which cannot see through solid metal things, was unable to see the woman until she was already too close for it to stop in time – just like what would have happened had a human been driving.

Your pharmaceutical example assumed no external influences. A bunch of people took pills in the past and lived. One person took a new pill and died.

The parallel to this wreck would be a bunch of people driving cars in the past and not running over anyone, and then one computer driving a car in the same conditions and killing someone. But that’s not the case. The AI computer was, according to current reports, faced with a problem that could not have been solved by anyone or anything; had those past drivers been faced with the same problem, they too would have run someone over.


Are you telling me what I can and cannot “immediately say”?
I have no problem asserting that such a fatality should be “a great unexpected shock” to Uber, because if it isn’t, that might indicate something deeply disturbing about Uber and the vehicle/system they were testing.

There are also reports indicating other things. And so (unlike you) I’m reserving my conclusion until details are more definitive.

My guess is that you aren’t familiar with how clinical trials for new pharmaceuticals work. So I’m going to stick with my simplified point, which is that the standards and methodology for testing a new pharmaceutical are quite different from those apparently used by Uber in this case.
An example closer to the clinical-trial situation would be finding a town or city whose residents sign informed-consent waivers to be part of the Uber vehicle test, then testing there. And even in such a case, a fatality from the vehicle isn’t simply taken for granted as a possible outcome; instead, it is taken as a potential reason to reconsider what’s going on in the test.

You can say whatever you want. I’m telling you what you can’t say while still expecting to be taken seriously.

Well, I’m married to a cancer biologist who has prosecuted pharmaceutical patents and who participated in a few trials back in her PhD candidate days, so while I’m no expert I’ve probably picked up more than most from listening to her stories.

I think it should go without saying that Uber is not a drug. This is what I mean by what you can’t say and still be taken seriously.

Additionally, you get informed consent from the person actually taking the pill. You don’t get informed consent from everyone in the building where the pill is administered, even though there may be a small chance that the pill will make the test subject freak out and attack the janitor.

At this stage of autonomous vehicle testing, getting “informed consent” from everyone the vehicle might get near would be impossible without constructing an entire artificial city and hiring several thousand actors to pretend to be citizens of it. As it’s not possible to predict who might be where in a given test area, your call for informed consent is unreasonable.

I’ll also note that I find it interesting that you were willing - eager even - to post up the original news article, but now are unwilling to consider news reports talking about what the video camera on the car has revealed to investigators. How do you know the car was under auto-drive control, and the babysitter human wasn’t messing with the steering wheel?

And you keep ignoring what I’ve been saying. Who knows? Maybe you have a personal vested interest in this and no amount of fatalities would change your mind.
So here’s a question: what evidence, in the form of fatalities, would it take for you to consider different testing for these vehicles and systems?

Yes, yet the underlying point is that both are being tested for safety with human morbidity and mortality at stake, and with very different standards and methodologies.

Correct, because the pharmaceutical presents little risk of causing a fatality to another person. In this case, the vehicle presents exactly that risk.

Impossible or expensive? And apparently, difficulty or expediency determines the testing standards and methodologies. Maybe you see them as valid excuses/explanations regarding the current situation.
And maybe you don’t see that I was providing one example without excluding other examples for how testing could be done differently.

More allegations? Attributing eagerness and unwillingness to me? I’ve posted multiple news links about this case while you’ve posted none.
As for your question, the first link I provided includes “in Autonomous Mode” in the title and the statement that “According to Tempe PD, the car was in autonomous mode at the time of the incident.”

And unlike you, I’ve NOT been hypothesizing or concluding anything even with the news reports I’ve seen.
And probably also unlike you, I care about the fact that a woman has died, even if it turns out that she was more responsible for her death than the Uber vehicle. That’s partly why I started this thread.

I would not accept “evidence in the form of fatalities,” because that leaves it open to fatalities caused by things external to the AI. Get back to me when a self-driving car actually kills someone because its programming/design is bad. And please don’t mention the idiot in the Tesla, because he knew it was only a semi-autonomous system and he decided to treat it as a fully-autonomous one, which is not the car’s fault.

I would like to see your evidence of that. I’ll give you a pass on the methodology bit because, again, a car is not a drug and testing them the same would not work, but I’d love to see where you’ve discovered that the standards for testing autonomous vehicles prior to testing them in public were somehow inferior to the standards of testing drugs prior to testing them on humans.

Assumption. Another equally valid assumption is that the vehicle poses no more risk than a vehicle driven by a human, in which case unless you are arguing that we should remove human-piloted vehicles from the road as well, you are contradicting yourself.

Impossible. Logistically impossible.

Another assumption.

Yes, because:

Yes, I know. I did read the article. My point was that you are willing to accept news reports that it was in autonomous mode according to the police, but are not willing to accept news reports that the woman darted out in front of the car, also according to the police. The double standard you are applying to further your obvious agenda is what I objected to, not lack of information in your article.

There’s a great, and frankly insulting, assumption on your part. I’m kind of amused that you think any assumption I make is a logical misstep on my part, yet you pepper your posts with assumptions about the car, the situation, the selective validity of the news reports, and now you assume I’m a sociopath.

This is why people are not going to take you seriously. Because you write nonsense.


So first, you would not accept any fatality as evidence unless it was caused by the AI itself rather than by “things external to the AI.” That certainly seems narrow to me.
And what does “bad” programming/design mean?

Talking about “prior to testing them in public” is irrelevant; the instant situation is testing amid actual human beings who can be injured (morbidity) or killed (mortality).

And these are the real risks (NOT assumptions) posed by the Uber vehicle system.

Back to comparing to human-piloted vehicles? Are you stating that the standard is, or should be, “no more morbidity or mortality than a human-piloted vehicle”?

OK, that’s your affirmative assertion; prove it.

Bologna on the double standard. I have seen no contradicting reports regarding autonomous mode. I have seen contradicting reports regarding the scene, the woman’s actions, and other evidence, so I have reserved making any conclusions about them. If you see an “obvious agenda,” you see what you want to see.

Is that supposed to mean that you care about the woman who died? Your first reply in this thread:
“Much as we have a similar lack of surprise when a human-piloted car runs someone over. It happens. It’s going to happen, no matter who or what is driving.

Now that the detail has come to light that the woman darted right in front of the car…”

Sounds a lot like ‘people die, no surprise, and it’s her fault’ (even though the final word on that isn’t in yet).

So ignore me.

Just call it a learning experience. AI will learn; regular peeps will still keep killing more people than AI (artificial intelligence) cars, I assume!


Just two nights ago I was driving home about 11:30 pm, on a rainy night, on a 35 mph neighborhood street. A homeless guy riding a bicycle (no lights, no helmet) and wearing a black overcoat and black hat was pedaling along the right side of the road, with no concern on his part about being visible. I slowed to pass him on the left. Meanwhile I noticed something small and gray further up ahead of the cyclist, darting on the sidewalk, barely visible b/c it was pitch black out and raining, darting in front of a parked car. Oh, oh: a small cat, somebody’s family pet no doubt, ran right out in front of me as I was passing the cyclist. I braked while simultaneously veering left into the oncoming lane. I was anticipating all this b/c of what I had been seeing, and had already ensured there was no oncoming traffic, so veering into the oncoming lane was safe. Avoided the cat and the cyclist. I very much doubt a driverless car could have done that. It may well have seen and avoided the cyclist, but I doubt it would have even seen the cat.


That’s a very good question. They may have been distracted by other tasks involved with monitoring the car’s performance, or they may have just been bored, thinking their services weren’t necessary. Or – I think this less likely – it may have been physically impossible to stop or steer the car away from the pedestrian in that situation. Fact is, the driverless car hit the pedestrian. The question as I see it isn’t whether the pedestrian was at fault or not; it is whether a driver-operated car would have hit and killed her. Until we mere Car Talk posters are allowed to see the actual camera evidence, it’s all speculation at this point.


Even if it had seen the cat, would it have swerved into the oncoming lane? That would violate a basic bit of its programming; could it override that? Dunno, but I doubt it.

Then we get into the weighing of ethical alternatives, which today’s computers are not up to. Should I swerve into oncoming traffic to miss a pedestrian, with a high risk of killing myself? What if there are only a few oncoming cars; could they swerve out of the way in time? What are the odds of that, and how do they compare to the odds of killing a pedestrian, or myself, or someone in the oncoming car?
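Here’s a toy sketch of what that kind of weighing might look like if someone tried to code it; every probability and weight is invented for illustration, and I’m not claiming any real car does this:

```python
# Crude expected-harm comparison between maneuvers. All numbers invented.
# expected harm = probability of a collision * severity weight of the outcome

maneuvers = {
    "brake straight":       {"p_collision": 0.6, "severity": 1.0},  # likely hits the pedestrian
    "swerve into oncoming": {"p_collision": 0.2, "severity": 1.5},  # head-on crash risk
}

def expected_harm(m):
    return m["p_collision"] * m["severity"]

choice = min(maneuvers, key=lambda k: expected_harm(maneuvers[k]))
print(choice)  # "swerve into oncoming" here (0.3 vs 0.6), with these made-up numbers
```

The hard part, of course, isn’t the multiplication; it’s coming up with defensible probabilities and severity weights in a fraction of a second.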

If the autonomous car has infrared sensors, both the bicycle rider and the cat would glow brightly enough to be easily seen.

Exactly my point in an earlier post, George. It is that “hunch” that stuff is about to happen, which experienced drivers feel, that is quite difficult to code into an algorithm. Note I did say difficult, not impossible.

I think a well-designed learning algorithm could be created (and hopefully has been created) and tested with the data being collected right now to make those decisions better.

We can’t repeal the laws of physics but we can skirt the edges to our advantage!