The Nut Behind The Wheel

“The Nut Behind The Wheel.” For nearly a century it has been almost impossible to reach, difficult to tighten up, every single one different, and mostly just a mystery. Henceforth referred to as “NBW”.

For a moment I sat here pondering where car manufacturing is headed. So much tech has been added lately, but why? For whom, or what? We all know where they say it is headed, or could be: electric and autonomous. Hardly a mention of awesome and efficient public transit (yet another topic).

But it made me wonder: if we can, right now, build fully autonomous vehicles damn near good enough to interact with our world, which includes millions of wayward NBWs, why then are our current vehicles so heavily laden with tech to prevent said NBW from doing stupid things? Especially when the NBW can be eliminated. Is it just capitalism as usual? One last chance to bilk the car-buying public at large, like they have been doing forever?

It doesn’t seem to be there to help autonomous vehicles navigate better, as the tech seems very much NBW-centric at the moment. Autonomous vehicles would have a hell of a lot easier time once all the NBWs are removed… right? How does that occur? By force?

I’m confused…(don’t worry, this condition is common for me)

I guess I’m just thinking out loud again (and having trouble keeping up with current vehicle tech AS a tech). But it does bring up more than one interesting topic and all sorts of issues, now doesn’t it?

1 Like

Capitalism certainly plays a part in it. IIHS-HLDI, the auto insurers’ safety association, does most of the safety testing for automobiles. They do it to save lives, but also to save money. Money saved is money earned for them. Cars are safer than ever, yet our annual premiums still go up.

Adaptive cruise control and all the other road monitoring systems currently are meant to assist the driver in accident avoidance. They also form a basis for autonomous vehicles. Sensor reliability needs to improve before they can be deployed autonomously. The big missing link is interautomobile communications. Autonomous vehicles won’t be ready for prime time until our cars can interact to maintain safe conditions for everyone in cars, on motorcycles and bicycles, and walking near the roads.

3 Likes

Having been involved in the development of some of these systems, I can say many were developed as convenience features to add value for the driver. Cruise control eased fatigue, as did automatic transmissions, automatic headlights, and automatic wipers. These all appeared first on expensive luxury cars, usually as extra-cost options.

Then we get ABS, designed to help a driver avoid accidents he would normally skid into by locking the brakes. ABS can do things the driver cannot, like keep one wheel from locking without losing braking from the other three by reducing brake pressure to that one wheel. It was also sold as an option.
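Just to illustrate the concept (this is a toy sketch with made-up function names and thresholds, not any manufacturer’s actual control law), the per-wheel idea looks roughly like this:

```python
# Toy ABS concept: modulate brake pressure at each wheel independently so
# a locking wheel doesn't cost you braking at the other three.

def slip_ratio(vehicle_speed, wheel_speed):
    """Fraction by which the wheel lags the vehicle (1.0 = fully locked)."""
    if vehicle_speed <= 0:
        return 0.0
    return (vehicle_speed - wheel_speed) / vehicle_speed

def abs_pressure(requested, vehicle_speed, wheel_speeds, max_slip=0.2):
    """Per-wheel brake pressure: back off only the wheel that is slipping."""
    commands = []
    for ws in wheel_speeds:
        if slip_ratio(vehicle_speed, ws) > max_slip:
            commands.append(requested * 0.5)  # reduce pressure at the locking wheel
        else:
            commands.append(requested)        # the other wheels keep full braking
    return commands

# One wheel nearly locked at 60 km/h; only that wheel gets its pressure cut:
print(abs_pressure(100.0, 60.0, [10.0, 58.0, 59.0, 57.0]))
```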

As computing power and sensors improved, traction control and stability control were added so the brakes could be applied to just that one wheel. That opens up new possibilities: safety opportunities and performance opportunities.

Studies showed most NBWs will NOT mash the brake pedal hard enough to just let ABS take over. When ABS kicks back, the NBW pulls their foot off the brake. NBWs also think 0.4 g of deceleration is the limit and don’t press harder, so automatic overrides were developed to deliver a full 1 g stop if the computer identifies a panic stop. That development is all about safety; it was mandated in Europe and migrated to the US even without regulation.
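A hedged sketch of that brake-assist logic, using the 0.4 g and 1 g figures above purely as illustrative thresholds (real systems also watch pedal speed, and the numbers vary by manufacturer):

```python
# Illustrative brake-assist logic: if the pedal was stabbed quickly but the
# driver plateaus around the "typical" 0.4 g, treat it as a panic stop and
# command full deceleration on their behalf.

PANIC_PEDAL_RATE = 200.0     # pedal travel in %/s; hypothetical threshold
TYPICAL_DRIVER_LIMIT_G = 0.4
FULL_STOP_G = 1.0

def brake_command(pedal_rate, driver_decel_g):
    """Return the deceleration (in g) the system should actually deliver."""
    panic = pedal_rate > PANIC_PEDAL_RATE and driver_decel_g >= 0.8 * TYPICAL_DRIVER_LIMIT_G
    return FULL_STOP_G if panic else driver_decel_g

print(brake_command(pedal_rate=350.0, driver_decel_g=0.42))  # 1.0: assist takes over
print(brake_command(pedal_rate=50.0, driver_decel_g=0.25))   # 0.25: normal braking
```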

As cars add cool stuff like electronic shocks that can adjust ride but also help mitigate skids, electric steering that can “nudge” you back into the lane or park your car for you, and smart cruise control, the suite of shared sensors and controls gets pretty close to the complete set required for autonomous driving. Close… but no cigar!

Autonomous driving is coming, but it isn’t here yet. Humans, even the NBW, are still pretty good at processing abstract situations and road driving is loaded with those.

And I agree with this statement:

Note: ALL of these systems were developed without government regulation. Capitalism created the technology as a feature to make money. Later when the products were proven in use, they were mandated into ALL cars to save lives (and also money).

2 Likes

That’s a hard no. The tech simply isn’t there. Yet.

The visual systems aren’t good enough (which is why that Uber self-driving test ran over the bicyclist). The programming isn’t good enough because there’s no way the programmers can anticipate every possible situation. And even if you eliminate the nuts behind the wheel, you can’t do much about that moose that decides to charge out at the last second, so you need a vehicle that can recognize and adapt to such changing situations.

A few years back a pilot of a small plane on his way to the EAA Airventure air show in Wisconsin had engine trouble and had to land on the freeway. Other cars saw him descending and gave him room. He landed safely with only a bruised ego and one heck of an unusual tow ahead of him.

What would the autonomous car do? Because I’d be willing to bet that “watch for landing airplanes and avoid as necessary” has not been thought of by the programmers yet.

Plus, the legalities aren’t good enough yet because no one really knows who is gonna be at fault if a self-driving car causes a wreck. Speaking personally, I want that law figured out before I get an autonomous car because I don’t want to be held responsible when the car screws up.

Amusingly, there was an episode of PHC (or it might have been Live from Here at that point; I think Thile was the host by then) that featured a mandated Minnesota programming setup for self-driving cars in the state: things like having to slow way down when entering a highway, having to be programmed to cut across three lanes of traffic at the last second, etc.

Yes, but everyone here will be long dead before that happens. There are lots of people who aren’t going to give up their DIY-driving cars, either because they want to drive (me) or they can’t afford a new car. That latter category is going to be in for some harsh times, because as soon as auto-cars get good enough to be safer than humans, insurance companies are going to jack rates on people who drive themselves.

As to that former category, I think we can take some tips from the Amish. Cars have been common in America since the ’20s, yet the Amish still get around with horses and buggies, and they do it on the road. In another 100 years, people like me who want to drive manually will be like the Amish: some might be annoyed by us, but we’ll still be out there in our ancient modes of transportation.

Also, just as an aside, note that above I said “safer than humans,” not “100% safe.” A lot of people are running around saying we can’t implement this technology until it’s 100% perfected and safe. I disagree. I think if it’s even 5% more safe than human drivers, lives will be saved and it should be implemented. After all, the computer isn’t going to eat, or fall asleep, or play with its phone, or any of the other idiotic things humans do to distract themselves from the job of driving.

6 Likes

I’m supporting @shadowfax on this.

I work on technologies similar to the ones used in automotive, but in the physical security field, and it is plainly too immature yet. Although progress is being made quite fast, my estimate is also quite pessimistic: probably not earlier than 5-10 years from now will something get close to maturity.

2 Likes

That’d be my guess too. BTW, on the moose example I used: I couldn’t possibly find the article again now, but I remember reading one that talked about how the auto-car industry was working on implementing moose avoidance in places that have moose. Another article (or it might have been the same one) talked about how they have to teach autonomous vehicles in Australia to recognize kangaroos.

That struck me as incredibly dumb. At the end of the day, I don’t care what the animal is. Whether I slam into a moose or a kangaroo, my car is messed up, and likely so am I, and so the car needs to prevent a collision with the animal. Even if the animal is something the programmers never anticipated. Even if it’s a chupacabra!

Are you telling me that if a kangaroo escapes from the Minnesota Zoo, I’m out of luck because my car doesn’t have the region-locked kangaroo recognition protocol? That’s absurd.

The car needs to recognize that “big shape on an intercept course must be avoided, even if I don’t know what it is.” Any other approach is both overly complex and more prone to error.
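To put that in concrete (if grossly simplified) terms, here is a toy sketch of the class-agnostic approach; the thresholds and field names are made up, and the point is only that the object’s label never enters the decision:

```python
# Toy class-agnostic avoidance check: moose, kangaroo, or chupacabra,
# if it's big and on an intercept course the answer is the same.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    size_m: float               # rough longest dimension, metres
    time_to_intercept_s: float  # projected time until paths cross
    label: str = "unknown"      # classifier output; deliberately unused below

def must_avoid(obj: TrackedObject, min_size_m=0.5, horizon_s=3.0) -> bool:
    """Avoid anything big enough to matter that will cross our path soon."""
    return obj.size_m >= min_size_m and obj.time_to_intercept_s <= horizon_s

print(must_avoid(TrackedObject(2.1, 1.8, "kangaroo?")))  # True: big and closing fast
print(must_avoid(TrackedObject(0.2, 1.8, "squirrel")))   # False: too small to matter
```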

3 Likes

So far I find the “driver assist” features on the new Honda work quite well, but YES, they do not tackle the [very complex] problem of avoidance; they pretty much address the much simpler/more mundane scenario of “my [stupid] human is about to hit something big, let me wake him up, and if I don’t succeed at that, let me stop him.”

That pretty much emphasizes where technology is vs. where we want/dream it to be.

Stupid-simple scenarios covering 80% of TYPICAL human mistakes → easy; we have that working reliably now.

Much more complex scenarios of moose avoidance and/or chupacabra recognition… well, that last 20% will take substantially longer…

More on this: the current “best practice” is to take neural-network-based technology and go at it with a massive training effort, but that pretty much makes the “artificial intellect” capable of working only with the abstractions you happen to include in your training sets.

Where do we see any novel idea?
I do not; what I see is “we will optimize the way we train and the way this technology performs its processing.”

As an example, Tesla made a marvelous piece of hardware in their new car control platform: it is extremely optimized for speed and for processing big models on a very modest power budget. Do not forget, your “autonomous” costs you something in the ballpark of 1 MPGe!
Then they made all their vehicles send training data to the “mothership” for situations they need more training data on, or when a vehicle encounters some real-life situation that raises a low-confidence flag.
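A rough sketch of that “flag it and phone home” idea, with made-up names and thresholds (I have no inside knowledge of how Tesla actually implements it):

```python
# Rough sketch of fleet data collection: when the on-board model is not
# confident about what it is seeing, queue the frame for upload so it can
# be added to the training set later.

CONFIDENCE_FLOOR = 0.7   # hypothetical threshold

upload_queue = []

def maybe_collect(frame_id, detections):
    """Queue a frame if any detection falls below the confidence floor."""
    worst = min((d["confidence"] for d in detections), default=1.0)
    if worst < CONFIDENCE_FLOOR:
        upload_queue.append({"frame": frame_id, "worst_confidence": worst})

maybe_collect("frame_0421", [{"label": "car", "confidence": 0.95},
                             {"label": "unknown", "confidence": 0.35}])
print(upload_queue)   # the ambiguous frame gets flagged for the "mothership"
```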

Is there anything novel here, concept-wise?
NO!
It is just very good work by engineers, bolting good/proven technologies together in a way that was not done before.
How close are we to “fully autonomous” in this sense?
I’m not holding my breath; it is not close to getting there yet.

3 Likes

What about the NBC (the nut behind the computer)? Computer programs are only as good as the humans who write the programs the autonomous cars use.

2 Likes

I want to slow for a kangaroo, moose, or deer, since they are large enough to severely damage my car and possibly hurt me. Do we do the same thing for small animals? I don’t like the idea of running over dogs or cats, or even groundhogs, but at least they cause minimal damage to the car, and hitting them doesn’t injure me. Where do we draw the line on size?

What about activity? Does something have to be in the road or headed that way, or is a hot body on the side of the road enough? There could be a lot of false positives looking for hot bodies, moving or not.

1 Like

Most of that is mainly because the cost of everything has gone up. The cost of a vehicle is more… the cost of medical care for liability is more.

2 Likes

It’s more intricate than that. If a dog runs out in front of your car, and there’s no one behind you, it’s fine to brake for the dog. If there’s someone on your tail who won’t stop in time, the dog’s out of luck. The autonomous vehicle computer needs to be programmed that way.

Incidentally, once more of them are on the road and talking to each other, we’ll save more dogs that way, because you and the car behind you will both brake simultaneously for the dog.
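To make the first scenario concrete, here is a minimal sketch (with hypothetical numbers) of the check the computer would have to run: brake hard for the dog only if whatever is behind has room to stop too, which is exactly the check that car-to-car coordination would make moot:

```python
# Minimal sketch: brake hard for a small animal only if the following
# vehicle can plausibly stop in time; otherwise the dog is out of luck.

def stopping_distance_m(speed_mps, decel_mps2=7.0, reaction_s=1.0):
    """Rough stopping distance: reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def safe_to_brake_for_animal(follower_gap_m, follower_speed_mps):
    """True if the car behind should be able to stop before hitting us."""
    return follower_gap_m > stopping_distance_m(follower_speed_mps)

print(safe_to_brake_for_animal(follower_gap_m=80.0, follower_speed_mps=25.0))  # True
print(safe_to_brake_for_animal(follower_gap_m=10.0, follower_speed_mps=25.0))  # False
```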

1 Like

99% of drivers who are even aware of someone tailgating them won’t think about it… instinct will take over, and if they are a dog lover, they’ll brake hard. If the driver behind hits the car that braked hard… it’s their fault. They shouldn’t be driving that close.

This situation will only occur during the period of transitioning over to autonomous vehicles. If all vehicles on the road are autonomous, then the second car would be keeping a proper distance. And if the car in front detects an object it needs to brake for, it’ll broadcast that to all other vehicles nearby instantaneously.
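A toy illustration of what that broadcast could look like conceptually; this is not DSRC or C-V2X, just the idea that one car’s braking event becomes every nearby car’s input at the same instant:

```python
# Toy vehicle-to-vehicle sketch: the lead car announces "braking hard" and
# every subscribed car nearby reacts immediately, instead of waiting to see
# brake lights through a windshield.

import json

subscribers = []   # stand-in for radios within range, not a real network

def subscribe(callback):
    subscribers.append(callback)

def broadcast_brake_event(vehicle_id, position_m, decel_g):
    msg = json.dumps({"type": "HARD_BRAKE", "vehicle": vehicle_id,
                      "position_m": position_m, "decel_g": decel_g})
    for deliver in subscribers:
        deliver(msg)

# The following car reacts to the message rather than to brake lights:
subscribe(lambda msg: print("follower received:", msg))
broadcast_brake_event("lead_car", position_m=1234.5, decel_g=0.9)
```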

3 Likes

See my second paragraph. :wink:

1 Like

That’s tough, though. What if a human runs out in front of you and you’re getting tailgated? I’d want to brake hard and get rear ended in that scenario. Lots of variables, and that’s a bit scary. Computers can’t make judgement calls.

I disagree on the 5% safer and let’s go autonomous idea. 5% safer than the average driver might be less safe for me. Of course, I assume you’ll still be able to switch off the autopilot…I hope…

I haven’t looked into the technology really, but I’m curious how it identifies traffic lanes on rural roads, roads without striping, or roads so narrow you need to slow down, pull halfway off the road, and let the other guy pass?

1 Like

Sure they can. You just have to tell them how to judge.
Think about how you arrived at that conclusion. You place human life at a higher level than other animals, and therefore when the thing you’re about to hit is a human, you modify your reaction to preserve that life where you wouldn’t do so to preserve the life of a dog.

Well, the good news is that humans are pretty much the only thing crossing roads that walks upright, so teaching a computer the general look of a human is pretty easy. Then you program the computer to avoid injuring a human (aka Asimov’s First Law of Robotics).

Where it gets trickier is if swerving to avoid the collision puts the car’s occupant in danger. That’s actually a problem they’ve been wrestling with. What if you need to swerve to avoid running a bus full of people off a cliff, but by swerving you yourself go off the cliff? The logical choice is to do it because it’s better to save 50 lives and kill 1 than it is to save 1 life by killing 50. But that’s understandably not the choice that the car’s owner is going to want the car to make.

In that situation the computer will do whatever we program it to do, and we have to figure out for ourselves which choice to have it make.

I’m talking average drivers. A computer that’s 5% safer than the average driver won’t be as safe as me. But it will be safer than all those idiots I’m constantly avoiding who can’t peel their eyes away from their phones. It kinda goes back to the bus problem. Yes, a computer that’s only 5% safer will cause wrecks that I would probably not cause. But it will avoid causing wrecks that its driver might have caused.

After all, at least at first, it’s not the hardcore driving enthusiasts who have put actual thought and practice into their driving abilities who are going to get these things. The early adopters will be the people who are irritated that they have to glance away from their phone once every 10 seconds to see if they’re still on the road.

It uses various sensors to determine where the road is and where it isn’t, and to determine that an oncoming car presents a collision risk and that it should respond by moving over to let that car get past.

You might find this article of interest:

1 Like

They do it all the time.

It can detect the edges of the road and do a simple calculation to determine where the center should be.
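For what it’s worth, in the simplest case that calculation really can be trivial; a sketch, assuming the perception stack already reports edge offsets (which is the hard part):

```python
# Sketch: given detected left and right road-edge offsets in metres from
# the car's centerline, the target position is just their midpoint.

def lane_center_offset(left_edge_m, right_edge_m):
    """Positive result means drift right, negative means drift left."""
    return (left_edge_m + right_edge_m) / 2.0

# Edges detected 1.5 m to the left (-1.5) and 2.5 m to the right (+2.5):
print(lane_center_offset(-1.5, 2.5))   # 0.5 -> steer half a metre to the right
```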

Autonomous vehicles are still a work in progress. Yes, they’ve come a long way, but they’re still a long way off from passing the IEEE standards to be considered truly autonomous. The sensors used just a few years ago are considered obsolete today. There are something like 500,000 lines of new software being written every year. Faster processors are replacing older processors. Autonomous vehicles are not road-worthy TODAY. But that doesn’t mean they won’t be 10 years from now. The earliest (realistic) estimate I’ve seen is 2028. That’s a long way off.

1 Like

Ok, I’ll rephrase. Computers can’t make moral choices. Squash the object (human) or hit the ditch. I understand you can program a computer to do basically whatever you tell it to do in a situation given variables. I also understand there are a LOT of variables.

1 Like

Yes, computers can make moral choices.

You make moral choices based on your instilled morality. In other words, your parents taught you that stealing is wrong. You don’t steal because you have been steeped in the idea that stealing is wrong. To get a computer to make a moral choice about stealing, we just have to teach it that stealing is wrong. For a non-true-AI (i.e., a computer that runs on pre-set programs) it’s pretty easy. “Don’t steal.” Done. For a real AI, we will have to instill hard and fast, unbreakable laws – i.e., do not intentionally harm humans or allow them to be harmed through your inaction. If harm is unavoidable, minimize it as much as possible. If it’s a smart enough AI, that would not only cover driving, but stealing as well. Stealing harms people, and it’s not supposed to harm people, therefore, stealing is forbidden.

Computers may not be able to make independent moral choices, but that’s good because different humans have different ideas of what is and is not moral. After all, the slave owners of the 1800s did not consider themselves morally bankrupt simply because they owned people. The racists of the 1950s forcing black people to sit at the back of the bus and go to separate, lesser schools did not consider themselves immoral. There are some people today who still don’t consider those things immoral, while the rest of us emphatically do. I would not want to leave “how to treat black people” up to a computer to decide. I would want it to have a hard and fast rule that black people aren’t to be treated any worse than any other race. That’s a moral guideline, but it’s one that comes from (hopefully morally-upstanding) humans.

Put another way, what you call “morality,” I call “pre-considered modes of behavior.” There’s probably nothing that would induce you to intentionally kill a child because you are not morally bankrupt. If you consider it a “moral” decision not to murder children, that’s fine. But we can tell a robot “do not murder children,” and there you go – it’s gonna be morally upstanding because we told it to be.

I get the sense that what you’re really angling at is that you don’t want a computer to come up with its own moral code, as was the case with HAL 9000 in the 2001 movies. Yeah, neither do I. I want to be able to predict the “moral” decisions the cars will be making. In the bus problem, whatever side of the fence the car is going to come down on, I want to know what that decision is gonna be before it happens in my car.

And that’s entirely possible. To grossly oversimplify, we just tell it to minimize harm to life and property, in that order (assuming that’s what we want it to do in all cases). Obviously, getting it to understand how to do that is complex, but it’s not insurmountable.
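To grossly oversimplify once more, a sketch of that “life first, property second” rule as a chooser among candidate maneuvers; the scenario and scores are entirely made up:

```python
# Oversimplified "minimize harm to life, then property" chooser: each
# candidate maneuver carries a harm estimate, and tuples sort so that
# human harm dominates animal harm, which dominates property damage.

maneuvers = [
    {"name": "stay in lane",     "humans_at_risk": 2, "animals_at_risk": 0, "property_damage": 5},
    {"name": "swerve to ditch",  "humans_at_risk": 1, "animals_at_risk": 0, "property_damage": 8},
    {"name": "brake and swerve", "humans_at_risk": 0, "animals_at_risk": 1, "property_damage": 3},
]

def choose(options):
    """Pick the maneuver with the lowest (human, animal, property) harm tuple."""
    return min(options, key=lambda m: (m["humans_at_risk"],
                                       m["animals_at_risk"],
                                       m["property_damage"]))

print(choose(maneuvers)["name"])   # -> "brake and swerve"
```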

Part of the current driving problem is unpredictability. If I’m 5 inches from your bumper in the next lane over, you may or may not dart in front of me. I can’t always predict that. If you were a computer and I knew what your pre-programmed conditions were for changing lanes, I could.

4 Likes

That is the ultimate goal. With the concept of Artificial Intelligence also comes Artificial Morality. Sci-fi has been dealing with this for decades. To some degree, robots today have a certain morality programmed into them. There are robots on automotive factory floors that can detect when a human working on the assembly line is in danger of getting hurt and will automatically shut down the line. That’s one simple way of programming morality into machines.

1 Like

You cannot program morality into a machine. You can program predetermined behavior into a machine. Morality is not a construct in a binary world of choices. Hit the school bus headed for you in your lane (a large target), where you will probably be injured or killed and put many others at risk, or hit a person with a baby in a stroller (a small target), where you will probably suffer no physical harm? Is it a deer or a human? Too many what-ifs!

2 Likes