It’s been debated since the autonomous vehicle concept was first put on paper.
Of course, I don’t know if I could (or would) make the correct choice in a similar situation. I’m not even sure what the correct choice is.
The point is, my choice would not make headlines. There are thousands just like it every month. But an autonomous car? It makes headlines no matter which way it chooses.
Actually, it’s your assertion that it isn’t, and I’ve already asked you to explain how you would go about soliciting informed consent from people when you don’t even know who will be near the vehicle.
Fixed that for you. Yes, that’s pretty much how I feel. That doesn’t mean it’s not sad that the woman died, but having unpleasant feelings about something shouldn’t get in the way of rational understandings of fact. If more people understood that, we might not be in such a political mess these days.
One advantage AI cars have over human drivers is that humans only have two optical sensors, and while they have pan/tilt capabilities, they both face the same direction. If someone jumps out in front of your car, you have to check the lane next to you to make sure something isn’t there before you can start swerving. But the AI’s sensors cover 360 degrees around the car simultaneously and are constantly monitoring, which means it already knows that it can start swerving while the human is still turning their head.
It’s also significant that a human’s reaction time is limited by the maximum speed of a signal through a nerve pathway, which tops out around 270 mph, roughly 670 million mph slower than the upper limit for a signal through a wire.
In short, it wouldn’t terribly surprise me if the AI in this case reacted better than a human could.
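To put rough numbers on the reaction-time point, here’s a minimal sketch of how far a car travels before anyone (or anything) even touches the controls. The reaction times below are illustrative assumptions, not measured figures for either human drivers or Uber’s system:

```python
def reaction_distance_m(speed_mph: float, reaction_s: float) -> float:
    """Distance covered while the driver (human or AI) is still reacting."""
    return speed_mph * 0.44704 * reaction_s  # 0.44704 converts mph to m/s

# Assumed reaction times: ~1.5 s for an attentive human,
# ~0.1 s for a hypothetical sense-to-brake loop.
print(f"human:    {reaction_distance_m(40, 1.5):.1f} m")
print(f"computer: {reaction_distance_m(40, 0.1):.1f} m")
```

At 40 mph, the assumed human covers roughly 27 meters before doing anything at all, while the assumed machine covers under 2; the gap is where the 360-degree, always-on sensing argument gets its force.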
I’m not so sure people wouldn’t, or don’t, sacrifice themselves when faced with the ethical choice. I’ll use pilots again as an example. There are many stories of a pilot desperately staying with the ship to avoid a populated area during a crash.
Still, the purpose of a car, or of driving, is not to be safe, so I’m not sure why there’s such a fixation on the safety of robotic cars. After all, if you want to be safe, stay home. Personally, I’m more interested in who is making money off of them and who stands to gain in the future.
Chasing easy money trumps all concern for public safety, present and future, it seems. Damn the bike riders and full speed ahead.
Me (above): “An example that is closer to the clinical trial situation is finding a town or city where its residents sign informed consent waivers to be part of the Uber vehicle test, then test there.”
You (above): “At this stage of autonomous vehicle testing, getting “informed consent” from everyone the vehicle might get near would be impossible without constructing an entire artificial city and hiring several thousand actors to pretend to be citizens of it. As it’s not possible to predict who might be where in a given test area, your call for informed consent is unreasonable.”
Me: “Impossible or expensive?”
You: “Impossible. Logistically impossible.”
Me: “Ok, your affirmative assertion, prove it.”
I offered an example (that extends concepts from Phase II and Phase III Clinical trial standards and methodology), not an assertion. You affirmatively claimed the example as “impossible”. Your burden to show it.
Btw, an “artificial city” does exist:
And hiring actors to play roles in such a city doesn’t seem to be categorically “impossible” or even “logistically impossible”.
Inserting the word “probably” doesn’t change the fact that you made an early conclusion that it was “probably her fault”.
And your continuing comments seem to confirm my previous view that “probably also unlike you, I care about the fact that a woman has died, even if it turns out that she was more responsible for her death than the Uber vehicle. That’s partly why I started this thread.”
Regarding your comment about things getting “in the way of rational understandings of fact”, it rationally and logically should include not jumping to an early conclusion with one’s first reply to this thread.
Last, I’m not going along with your shift to politics with “If more people understood that, we might not be in such a political mess these days.”
For those who might be interested in the continuing developments regarding this accident,
The tragedy of the first person killed by an autonomous vehicle points to a potential vulnerability with the nascent technology now being tested on the open roads: While robo-cars, powered by sophisticated sensors and cameras, can reliably see their surroundings, the software doesn’t always understand what it detects.
New details about the Uber Technologies Inc. autonomous vehicle that struck and killed a woman in Tempe, Arizona, indicate that neither the self-driving system nor the human safety driver behind the wheel hit the brakes when she apparently stepped off a median and onto the roadway at around 10 p.m., according to an account the Tempe police chief gave to the San Francisco Chronicle. The human driver told police he didn’t see the pedestrian coming, and the autonomous system behaved as if it hadn’t either.
Experts say that the sophisticated sensors on the autonomous vehicle almost certainly detected the woman pushing her bicycle laden with bags along the median, close to the road. But it’s possible the car’s lidar and radar sensors, which scan the surroundings for objects, may not have realized it was detecting a person. (Uber declined to comment.)
Sounds like Uber (via Otto) basically stole the technology from Google (Waymo).
According to the lawsuit, Waymo became aware of the issue when it was inadvertently copied in an email from a supplier that showed an Uber LIDAR circuit board, which bore a “striking resemblance” to one of Waymo’s designs. The complaint accuses Levandowski of downloading the 14,000 files in question in December 2015. That allegedly included the circuit board, part of a sensor that helps autonomous cars “see” their environment.
Levandowski — who invoked the Fifth Amendment to avoid self-incrimination in connection with the case — left Waymo in January 2016 and formed Otto that May. The lawsuit alleges that, prior to his departure, he created a domain name for his new company, and told other Waymo employees that he planned to “replicate” the company’s technology for a competitor. Creating Otto was a clever way to hide his agreement with Uber from Google executives, according to Waymo’s lawyers; Uber planned on buying the startup before it was even founded, they added.
If you can manage to dial back the vitriol and stop intimating that I approve of people dying, perhaps we can come to a consensus:
Yes, you’re right. And it’s new, and bright, and shiny, and doesn’t have any faded paint on the roads, and has the equivalent of half a block of fake, 2-dimensional buildings.
Ever watch someone from a very rural area try to drive somewhere like Chicago? That fake city is neat, but it doesn’t demonstrate that an AI is ready to take on real cities. Only real cities can do that.
And? It probably was. Roads are for cars. Pedestrians are supposed to stay out of them except in crosswalks. She, according to reports, did not.
I’ll try one more time to reach you here. Look at aviation. It’s one of the safest things you can do. You are far more likely to die in a car on a road than you are sitting in a metal tube going 600 miles an hour 5 miles above the ground. That’s pretty amazing on its surface.
But aviation only got that safe because there were a hell of a lot of crashes that killed a hell of a lot of people.
Driving is going to be the same. Currently driving kills somewhere in the neighborhood of 37,000 people per year in the US alone. Last year, it was 40,000. That’s more people than get killed by guns. Or terrorism.
AI driving promises to reduce those numbers by orders of magnitude. Yes, there will be mishaps on the road to full AI. They will happen to people who did not consent to risk dying at the hands of a computer, much as the aviation accidents that led to safety advances happened to, and killed, non-consenting passengers. I guarantee that no one on the Eastern L-1011 that crashed into the Everglades, because the pilots couldn’t stop fretting over a burned-out bulb long enough to notice they were descending right into the terrain, consented to having the pilots become distracted. No one on UA585, USAir 427, or Eastwind 517 consented to the bad rudder hydraulic module design that caused their planes to crash, either.
And interestingly in each of those aviation examples, and all of the other aviation crashes that have happened since the Wright Brothers, no one has suggested that we stop flying, or that we pull all planes from the skies until we can 100% guarantee that no one will ever die in a plane crash again.
So, yes, whichever party is to blame in this Uber crash, the fact is that people are going to die as a result of AI mistakes. But the long view is that fewer people are going to die at the hands of AI than are dying now.
I’ve said it many times on here. AI does not need to be 100% safe. It only needs to be safer than humans in order to be worth pursuing.
Considering AI will never get drunk, get distracted by cat videos or texting behind the wheel, eat, or do any of the other things dumb humans do when driving, the odds are heavily in favor of the idea that AI is not only going to be safer, but is going to revolutionize ground vehicle safety the same way aviation safety was revolutionized by the lessons learned from its tragedies.
One final thought:
So the AI performed exactly as well as the human did in the exact same situation, which is not by any stretch an indictment of the AI.
Yes, that is indeed possible, and now that this has happened it’s a foregone conclusion that engineers will build better person detection into the system, and once that’s done AI cars will always see pedestrians pushing bicycles along the median. The same cannot be said for drunk, tired, distracted human drivers.
Sorry guys but this is getting a little long winded. I’ve got to work on taxes.
On the same subject, though: my DIL’s mother was ecstatic a year or so ago that they would likely get a federal grant to install robot cars in their downtown. I don’t remember the figure, but something in the millions, I think. I just said I wonder where the feds are going to get the money; they have none left and are broke. Borrow it, I suppose. I think she was hoping this would put them on the map, maybe even get their city’s temperatures included in the daily national temperature listings. I really don’t have an update on the project, but I suspect there are some career bureaucrats in DC rubbing their hands with glee at the prospect of handing out grants. So that’s one group, anyway, counting the potential cash.
Correct. And so one part of your “impossible” assertion is gone.
As for the other, Uber and others are free to find a town or city willing to “sign up,” through informed consent, to letting the vehicles be tested. Maybe expensive or difficult, but not impossible.
Still seem to be trying to justify/explain your early conclusion.
Yes, she should not have crossed outside a crosswalk. And yet, Arizona law:
28-794. Drivers to exercise due care
Notwithstanding the provisions of this chapter every driver of a vehicle shall:
- Exercise due care to avoid colliding with any pedestrian on any roadway.
- Give warning by sounding the horn when necessary.
- Exercise proper precaution on observing a child or a confused or incapacitated person on a roadway.
IOW, the verdict isn’t in yet on whether it was “probably her fault”.
I’m going to be blunt here (with a rhetorical question): why are you trying “to reach” me? And with a shift to aviation, which comes with other details and factors different and distinct from cars and driving; IOW, it muddies the waters.
I too would prefer to avoid a shift to pharmaceutical testing, yet I had no easier comparison to another testing regimen, like autonomous vehicles, where morbidity and mortality are at stake AND where a fatality is NOT taken lightly.
It seems clear to me that where I’ve been trying to raise questions about how the testing occurs and how things might be different/better, you seem to have been dismissive of this accident with a ‘that’s how it goes’ attitude. This is borne out even with your shift to aviation, where it got "safe because there were a hell of a lot of crashes that killed a hell of a lot of people."
Well you are certainly entitled to your (pragmatic?) view, as I am to my (seeking improvement?) view, which is why I’m repeating my previous suggestion to “ignore me”.
And you can take that suggestion as including my separate reply regarding the additional Bloomberg article.
Seems this might finally be an answer to my question in comment 34 above: "Are you stating that the standard is, or should be, “no more morbidity or mortality than a human-piloted vehicle”?"
Better late than never.
IF it turns out to be a failure to detect a person, it would seem to be a case where “a self-driving car actually kills someone because its programming/design is bad”, like the hypothetical evidence requirement in comment 34 above.
Yes I hear that, got my taxes back from the accountant today, $285, I am going to sit down before I look at it!
Aren’t all cities artificial?
1. made or produced by human beings rather than occurring naturally, typically as a copy of something natural.
I saw the driver’s-view video of the collision, and it makes the autonomous car seem quite unable to recognize and react to the lady crossing the street. On my worst night, I would have had my brakes on the floor several feet away from her and, depending on speed, possibly avoided hitting her. But will we get the true data from the black box? And again, surely there is a black box.
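For a rough sense of what “depending on speed” means here, a back-of-the-envelope stopping-distance sketch. The reaction time and tire-road friction coefficient are illustrative assumptions, not measured values from this crash:

```python
def stopping_distance_m(speed_mph: float, reaction_s: float = 1.5,
                        mu: float = 0.7) -> float:
    """Reaction distance plus braking distance v^2 / (2 * mu * g).

    reaction_s and mu are illustrative assumptions (attentive human,
    dry pavement), not figures from the Tempe crash.
    """
    g = 9.81                 # gravitational acceleration, m/s^2
    v = speed_mph * 0.44704  # convert mph to m/s
    return v * reaction_s + v ** 2 / (2 * mu * g)

# At roughly 40 mph, total stopping distance for these assumptions:
print(f"{stopping_distance_m(40):.1f} m")
```

Under these assumptions, a human at ~40 mph needs on the order of 50 meters to stop, and more than half of that is reaction distance, which is exactly the part the black-box data should settle.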
Hmmm…looks like the Uber “driver” looked away at just the wrong time:
If no brakes were applied (I don’t see the driver thrown forward until maybe the last second), looks bad for Uber.
I hope Uber releases the computer vision video.
The poor street lighting right before the collision would make it more difficult for a person to see her, but it should have no effect on lidar/radar.
Look at the driver’s eyes. The Uber driver didn’t seem to be watching the road; instead, he was looking down at the console until just before it happened. It appears to me, upon viewing the video, that Uber driver or not, a computer-driven car should have been able to see that something was about to cross its path and been able to veer away and/or stop in time. I see this as confirming there’s a self-driving car design problem. If it couldn’t see what was about to occur, it was going too fast for the conditions.
I dunno, but I think most people would have cranked the wheel way to the left in an attempt to avoid her. I’ve never hit anyone, but that’s what I’ve done with deer to avoid a head-on collision.
Unless you have the total number of autonomous cars in use, and the miles they drive, to compare with the total number of regular cars in use, these numbers mean very little. You might as well say “BLE’s motorcycle has gone 40,000 miles with zero fatalities” to make a point.
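The usual way to normalize for exposure is fatalities per 100 million vehicle-miles. A quick sketch with illustrative numbers (the human-fleet figures roughly approximate US annual totals; the test-fleet mileage is purely hypothetical) shows why a single death in a tiny fleet says little either way:

```python
def rate_per_100m_miles(fatalities: int, miles: float) -> float:
    """Fatalities per 100 million vehicle-miles traveled."""
    return fatalities / miles * 100_000_000

# Illustrative assumptions, not official statistics:
human_rate = rate_per_100m_miles(37_000, 3.2e12)  # ~US fleet, annual
av_rate = rate_per_100m_miles(1, 3_000_000)       # hypothetical small AV test fleet
print(f"human fleet: {human_rate:.2f} per 100M miles")
print(f"test fleet:  {av_rate:.2f} per 100M miles")
```

With these assumed numbers, one fatality in three million test miles yields a per-mile rate many times the human fleet’s, yet a sample that small proves almost nothing statistically, which is exactly the denominator problem the comparison above is pointing at.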