Very serious accident.
https://www.sfgate.com/tech/article/cruise-driverless-permits-suspended-dmv-18445296.php
an unrelated car struck a pedestrian, knocking them into the path of a driverless Cruise car,
This could have easily happened to a human driver as well. I’d guess the Cruise had a sensor blind spot and the pedestrian was in it.
Humans would not be able to see the pedestrian lying in front of the car either… but they could stop the car, get out, and look. An autonomous car cannot.
Update: Newspaper article today (November 9, 2023) said traffic safety officials apparently believe this problem can be resolved with a software update. hmmm …
They’ve got plenty of problems:
Cruise Knew Its Robotaxis Struggled To Detect Children But Still Kept Them On The Road (jalopnik.com)
Cruise’s Robotaxis Require Remote Human Assistance Every 4 to 5 Miles (gizmodo.com)
It’s hard to understand how they got licensed for public road use. My Corolla got a complaint when the license-plate light bulb burned out.
My 2005 Honda did too. We just don’t have friends in high places.
In all likelihood it can be resolved. I’ve said this many times… autonomous vehicles are not there yet, but probably will be. More than 1,000,000 lines of code change almost yearly as this develops. Judging autonomous vehicles NOW, when they’re not ready for primetime, is a little foolish. There are 5 levels of autonomous vehicles, 5 being the highest, where they can safely drive in all conditions. At BEST some companies have reached level 3. Let’s wait a few years.
lol … yeah, I guess. In Bernie Taupin’s (Elton John/Taupin) autobiography, he tells a funny story. One evening he wanted to drive home from a restaurant, but his car wouldn’t start. He spies a bevy of school buses parked across the street, one with keys dangling from the ignition switch, so in an apparently somewhat fuzzy state of mind he drives it home and parks it in his driveway. The next AM he wakes to a loud knocking on his front door: police, asking about the school bus in his driveway. He says he avoided serious consequences by apologizing & calling in some favors from friends in high places.
Even if the tech glitches get sorted out, I’m still uncomfortable with “self-driving” or “autonomous” concept. We’re obviously getting closer and closer to truly “autonomous” technology. But for now it’s mostly still human programming that runs them. So they’re not direct human commanded remote control (as in someone standing there with a remote), but they’re still being driven by software engineers. Which really means they will end up being driven by some nexus of lawyers, car companies, government, etc. Various “interests” will be built into the programming. In other words, they’re not really self-driving or autonomous.
Personally, I know that as a human I’m slower and not as well informed (perception-wise) as a computer. But I still like to make my own decisions.
Agree 100%
And it will be in the future.
I’m not sure you understand what a software engineer does. I work with government agencies all over the world. We design software/hardware solutions for the telecom industry. In most countries the telephone system is owned by the government. Designing and writing software first starts with a set of requirements. Some of these requirements are technical; some come from other sources (like the government, or a management decision). My first job in software was working on radar systems for the Air Force. The specs were mind-boggling. And that’s NOTHING compared to the insurance industry. It’s up to the engineers to push back on the design-spec team if there are conflicts among requirements. I’ve seen many requirements over the years that made other requirements moot. Those issues have to be resolved (and are resolved), especially for mission-critical systems where someone can get injured or killed.
IEEE and SAE have already set standards/requirements that define how autonomous vehicles should work (safely, in the real world). There are 5 levels. At best a couple of companies are at level 3. We have a ways to go.
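For reference, the levels being discussed here come from SAE J3016. A quick sketch, paraphrasing the standard in my own words (not official SAE wording):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """SAE J3016 driving-automation levels, loosely paraphrased."""
    NO_AUTOMATION = 0      # human driver does everything
    DRIVER_ASSISTANCE = 1  # one assist feature, e.g. adaptive cruise
    PARTIAL = 2            # steering + speed assist; human must supervise constantly
    CONDITIONAL = 3        # car drives itself in limited conditions; human takes over on request
    HIGH = 4               # no human fallback needed, but only within a defined domain
    FULL = 5               # can drive anywhere, in all conditions a human could

# Per the post above, today's most advanced systems sit at roughly level 3.
print(SAELevel.CONDITIONAL)
```

The jump from 3 to 4 is the big one: at level 3 a human must still be available as the fallback, which is exactly the gap the Cruise incident exposed.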
I’m not sure why you think I don’t understand what a software engineer does. What you said is exactly what I meant. E.g. “Designing and writing software first starts with a set of requirements. Some of these requirements are technical, some come from other sources (like the government, or a management decision.” Then the addition of insurance companies. And the engineering aspects are in the mix. They’re not going to be “self-driving” or “autonomous.” There are always those human things back there.
The reason that I wrote “for now” it’s mostly human programming is that ML is changing that, and more rapidly than ever. The humans build the AI, but once ML is part of it, the software goes off and does its own “learning,” and before long the creators of the software no longer really understand what it is doing or why. Don’t take that as an absolute statement applying to all things now. But it already does apply to some systems and will more and more to others.
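To make the “does its own learning” point concrete, here’s a toy sketch (pure illustration, nothing to do with Cruise’s actual stack): the slope below is learned from data by gradient descent rather than written in by a programmer.

```python
# Data follows the rule y = 2x, but nothing below hard-codes that rule.
data = [(1, 2), (2, 4), (3, 6), (4, 8)]

w = 0.0  # the learned parameter; starts knowing nothing
for _ in range(200):
    # gradient of the mean squared error of the prediction w*x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.01 * grad  # nudge w to reduce the error

# w ends up very close to 2, even though no engineer wrote "multiply by 2".
print(round(w, 3))
```

With one parameter you can still read off what was learned; the comprehensibility problem appears when a real system has millions of such parameters interacting.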
I’ve been on the web for as long as it’s been a “thing” and am not confused by navigating forum platforms…until this one…
I was using those examples of what kind of requirement specs there are…many of which are NOT technical.
Disagree with that. Machine learning isn’t “just let the system do what it wants and see what happens.” It’s carefully controlled and monitored. It’s still in its infancy, but machine learning has been around for a while.
I know that you were using the examples as what kinds of technical specs there are. They come from management, state, and the insurance industry. I don’t disagree there even in the slightest. Of course, there are those hard limits from engineering too.
As far as ML goes, don’t put a straw man in my words like “let the system do what it wants and see what happens.” It’s a straw man because that is absurd, and it’s not at all what I meant. Control and monitoring have been part of automation all along, and they always will be. (E.g., back to Frederick Taylor and “scientific management,” and on through things like the Deming cycle and Six Sigma and all of that: “Plan, Do, Check, Act.”)
But the fact is that, more and more, engineers do cease to understand what their algorithms do and why. That doesn’t mean that it isn’t “monitored” with attempts to “control.” That’s just part of the protocol. But the ability to keep it under knowing control is fading into the sunset.
Humans are capable of building socio-technical systems complex enough that they cease to be able to predict or control what they do. Our reach can exceed our grasp.
Heh heh. I noticed mine was out too, but before anyone could cite me. Turned a ten-minute job into an hour when the fixture slid down behind the bumper cover. Got it fished out and had a new bulb in stock. Open the cover from the top, then use a wire or forceps to fish it out again. There should be a warning to tape a handhold on it before unfastening it.
Oh yeah, another paid-for Mother Nature study. Most of the 30 mph streets I drive on, though, have no lane markings, and I stay well away from kids and parked cars.
Most software development these days is Agile. I’ve had Six Sigma training, but no company outside of manufacturing has ever really used it much.
Let me know when that is reached. I don’t see it. I’ve worked in this field for over 4 decades.
Agile, the Deming Cycle (PDCA), Six Sigma, …they’re all reiterations of the exact same theme. Agile isn’t a “new” thing except from a marketing / sales perspective. And for what it’s worth, obviously…that’s what should be done: What do you want to do? Plan / develop for that. Test it. Deploy it. Review the results. Revise it. Heck, the logics are all just variations on the “scientific method” from our old (even 40 years old) high school textbooks. Hypothesis - test - revise.
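That plan–test–revise theme can be boiled down to a few lines. A toy sketch of the loop (my own illustration, not any methodology’s official process):

```python
def pdca(initial_plan, execute, check, adjust, max_iterations=10):
    """Minimal Plan-Do-Check-Act loop: run, measure, revise, repeat."""
    plan = initial_plan
    for _ in range(max_iterations):
        result = execute(plan)            # Do
        ok, feedback = check(result)      # Check
        if ok:
            break
        plan = adjust(plan, feedback)     # Act: revise the plan
    return plan

# Toy example: iterate a guess toward a target value.
target = 10
final = pdca(
    initial_plan=0,
    execute=lambda p: p,
    check=lambda r: (abs(r - target) <= 1, target - r),
    adjust=lambda p, fb: p + fb // 2,
)
```

Agile, PDCA, Six Sigma, and the scientific method all fill in those four callbacks differently; the skeleton is the same.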
As for when we’ve reached the point of designing/producing systems that we don’t fully understand and/or can’t control: it’s not a new thing, and the instances are too numerous to mention. IDK, look up the “normal accidents” literature. ML is pushing it deeper. In many tech fields there are very high degrees of incomprehensibility in what software does. Basic telecom probably isn’t one of them.
P.S. And btw, @MikeInNH - you’ve been on this CarTalk board way longer than I. And I’ve always appreciated you and your expertise in all sorts of areas, even in your somewhat argumentative style. So I enjoy the exchange. And I don’t mean in an “I like sparring” sort of a way. I mean that I’ve learned a lot of things from people here over the years, including you. I learn from these conversations.
No they aren’t. They have the same goals, but they are completely different mechanisms.
I know… been doing Agile development for decades. For many years prior, Waterfall was the preferred method. Six Sigma can, and in some cases does, use Waterfall, the opposite of Agile.
Basic telecom probably doesn’t. But that’s not what our company does. One telecom company I worked for built solutions for government agencies to monitor ALL conversations and look for patterns in crime and terrorism. Very heavy AI.