Artificial intelligence: not there yet

When I first got my 2011 Outback, I decided to test its onboard GPS by using it to navigate to a hospital a few towns away from my house. I knew how to get there, but because I had to go to the hospital that day for an outpatient test, I figured that it was a good opportunity to see how the system navigated to a place that was well-known to me.

Imagine my surprise when it actually guided me to the office of Associates in Psychology, rather than the hospital. Ironically, I used to have professional dealings with that psych group when I was a Paralegal for the state’s child welfare agency. Anyway… that office was in the same town as the hospital, and one could say that it was health-related, but it was definitely not the regional medical center. Clearly, somebody screwed up when inputting the info into the database.

I have never had a problem when I put in a complete street address. Different providers might give you different routes.
I had my GPS set to either quickest or shortest route one time, and it actually guided me down a forest service road with water crossings at creeks. Fortunately, I was driving my 4X4 truck that day. Now I know to check the route and the settings, especially if I'm driving my car.
Yes, I used to use paper maps, usually successfully. But one time, using the strip maps (whatever AAA called them), I had to call a relative to get out of her town and back to the interstate; the strip map was outdated.

The misdirection that I noted in my previous post happened when I selected the hospital from a list of facilities in that town. If I had entered the hospital’s address, more than likely it would have directed me properly.

I had a similar problem last year when using my GPS to navigate to an arboretum in a rural area. While it did get me to the arboretum, it took me to its “back” side, where there is no entry gate or parking lot. Technically, it took me where I wanted to go, but…

I am not always happy with the GPS route; I sometimes know better ways for me. I rarely look at the map, and the voice guidance works fine, except when you pick the shortest route and end up towing a boat down a one-lane dirt road.

With “active” GPS systems like WAZE, you are usually directed so that you can avoid accident scenes, construction blockages, and other sources of traffic tie-ups. What might seem to be somewhat roundabout could wind up saving you a lot of time because you won’t be sitting in a traffic jam.
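The reason a seemingly roundabout route can win is that these apps solve a shortest-path problem over travel *time*, not distance, with live traffic updating the edge weights. A minimal sketch of that idea (the road graph, place names, and minute values below are all invented for illustration):

```python
import heapq

# Toy road graph: edge weights are travel times in minutes.
roads = {
    "home":    {"main_st": 5, "bypass": 9},
    "main_st": {"office": 4},
    "bypass":  {"office": 3},
    "office":  {},
}

def fastest_time(graph, start, goal):
    """Dijkstra's algorithm over travel-time edge weights."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        t, node = heapq.heappop(heap)
        if node == goal:
            return t
        if t > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, cost in graph[node].items():
            nt = t + cost
            if nt < dist.get(nbr, float("inf")):
                dist[nbr] = nt
                heapq.heappush(heap, (nt, nbr))
    return float("inf")

print(fastest_time(roads, "home", "office"))  # 9, via main_st

# A crash on main_st: live data bumps that edge to 25 minutes,
# and the "roundabout" bypass (9 + 3 = 12) becomes the faster choice.
roads["home"]["main_st"] = 25
print(fastest_time(roads, "home", "office"))  # 12, via bypass
```

Real systems work on graphs with millions of edges and continuously refreshed weights, but the principle is the same: the detour looks longer on the map while being shorter on the clock.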

I have not tried Waze, but my old Garmin with lifetime map updates does show traffic issues and tells me how much time I can save by choosing an alternate route. It has some kind of receiver, but some areas do not seem to be covered.

If I’m driving to an out of town location I am unfamiliar with then, yes, checking at least one GPS based mapping program is helpful, including with alternate routes and travel times noted. When possible, I always ask family and friends familiar with those areas for routing suggestions which I give primary credence to.

Locally, if I need to get to a place new to me I plug in the location to be certain where I’m headed and get a street view of the building I’ll be looking for. But I don’t necessarily use the app suggested route. Traffic patterns vary depending on time of day and what may look best on a map may not be the best route through the urban traffic, especially at rush hour. And there are some places where the easiest route back home may differ a bit from the route to get there.

When working dispatch for the Red Cross during the flood of 1993, I discovered the street map guide books for this area. To this day I have the ones for five counties that encompass most of the metro area, three on the MO side and two on the IL side. I don’t use them as much as I used to before having Google and Map Quest handy on my phone but they still come in handy at times.

This is nothing new. It's the same as with every system I've designed or been part of designing and writing.

And as I’ve said several times so far…it’s a simple bug. You keep judging driverless systems like they are ready for public use. They are NOT. So wait until they meet IEEE and SAE standards and manufacturers are putting them in the hands of the general public. This technology is changing extremely fast. New hardware and millions of lines of code are being written every year. It’s like someone judging the Moon landing technology based on the Mercury rocket tests. “Look, the Mercury rocket can’t even get out of orbit, so how are we going to get to the Moon?” Not the most intelligent way to judge a technology.

I did…but that was on early GPS systems (like 15+ years ago). But I surely wouldn’t judge the new GPS systems based on that technology.

The way Tesla “fleet-sources” its AI training datasets gives some hope, yet I think the base math is still not there for the transition from “assist” to “autonomous”.

What sign can be erected faster than a verbal command to the police car’s computer saying “reduce the speed limit to 30mph within 1/4 mile of this position,” and having that automatically uploaded to the network for auto-drive cars to obey immediately, and non-auto-drive cars to have the information flashed on a screen for the driver to read?

We’re not talking about what’s possible today, but what’s going to be possible in the future. And frankly, we have the technology for digital speed limit updates now (and in fact use it in a rudimentary way in some cities which auto-update road signs with changing limits). The only real thing holding us back is that there isn’t a solid standard that every car maker installs to receive road condition/limits updates.
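The geofenced update described above is easy to sketch. A minimal, hypothetical illustration, assuming nothing about any real V2X standard (the message fields, names, and coordinates below are all invented for the example):

```python
from dataclasses import dataclass
import math

@dataclass
class SpeedLimitUpdate:
    """A broadcast 'limit X mph within R meters of here' message."""
    lat: float          # center of the zone, degrees
    lon: float
    radius_m: float     # e.g. 1/4 mile is roughly 402 m
    limit_mph: int

def distance_m(lat1, lon1, lat2, lon2):
    """Equirectangular approximation; fine for sub-mile distances."""
    r = 6371000.0  # Earth radius, meters
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def effective_limit(car_lat, car_lon, default_mph, updates):
    """Lowest broadcast limit whose zone contains the car, else the default."""
    limits = [u.limit_mph for u in updates
              if distance_m(car_lat, car_lon, u.lat, u.lon) <= u.radius_m]
    return min(limits, default=default_mph)

# Police car issues: "reduce the limit to 30 mph within 1/4 mile of here".
update = SpeedLimitUpdate(lat=40.7128, lon=-74.0060, radius_m=402, limit_mph=30)
print(effective_limit(40.7130, -74.0062, 55, [update]))  # 30 (inside the zone)
print(effective_limit(40.8000, -74.0060, 55, [update]))  # 55 (outside the zone)
```

An auto-drive car would obey the returned value directly; a conventional car would just flash it on the dash. The hard part isn't this logic, it's the missing industry-wide standard for the message format and the receiver.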

OTOH, it has everything to do with “common sense.” My wife misread the map. Here’s what it looks like on my phone:

I knew nobody would put a restaurant in an industrial park based on past experience. In the case I originally cited, a human driver would know the speed limit couldn’t be 100mph even if the sign appeared to indicate otherwise, and he/she wouldn’t need GPS data to reach that conclusion.
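That kind of "common sense" check is easy to express in code: a sign reader should sanity-check its output against limits that can actually exist. A minimal sketch (the set of plausible limits and the function name are illustrative, not from any real system):

```python
# US-style speed limits that a real sign could plausibly show.
PLAUSIBLE_LIMITS_MPH = {15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85}

def accept_sign_reading(read_mph, current_mph):
    """Keep a reading only if it's a limit that can exist;
    otherwise fall back to the previously known limit."""
    if read_mph in PLAUSIBLE_LIMITS_MPH:
        return read_mph
    return current_mph  # e.g. a misread "100" is rejected

print(accept_sign_reading(55, 45))   # 55: plausible, accepted
print(accept_sign_reading(100, 45))  # 45: implausible, rejected
```

A human applies this filter without thinking; a vision system has to be told explicitly that 100 mph is not a value any sign in this country will ever display.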

I think AI may have advanced too far with the self-scan checkouts at many retail stores. When I have used these machines, it seems to sense that I am a senior citizen and also knows that, as a machine, it can’t be prosecuted for old age discrimination. If I don’t scan my items fast enough for the machine, it says “Move it, you old geezer. There are people waiting to check out”. When I have had to look up the code to weigh a bunch of bananas, the machine yells at me, “Can’t you remember the code for bananas is 4011?” When I try to pay in cash, the machine scolds me: “Hurry up and pay”.
I got even with the machine. I went to a self service car wash. The car wash was $10, but all I had was a $20 bill. It returned $10 to me in quarters, 40 quarters total. At first, I was annoyed, but then remembered I had to pick up a few items at the grocery store. After I picked up the items, I decided to get rid of the quarters at the self scan machine. The machine growled at me “Don’t you have any paper currency or a credit card? I can’t process quarters this quickly”. I then got out my change purse and fed the machine pennies. The machine got very quiet and I went back to quarters. When I finished paying for the items, instead of saying “Thank you for shopping at our store”, it said “Next time go to a checkout line with a real person”. I haven’t used a self scan checkout since that time.
I have used MapQuest on my smartphone. I found that it doesn’t know everything. One of the bands I play with is in a small town. We have had to play in a different venue due to COVID. There is a shortcut through the parking lot of a bar that is permanently closed. Everyone in town knows the shortcut, but MapQuest doesn’t. My experience with AI is that it is insulting, but not all that smart.


Cute story. I’m in the geezer category but can check out faster at self-serve than the cashiers can. At the store I go to, the self-serve is great. The only annoyance is that the clerk has to verify I am old enough to buy certain items.
I use plastic for everything except tipping waitstaff, they get cash.
The medical clinic I go to has the option of checking in via a kiosk or stand in line for a clerk.
Though I very rarely go to fast food places I like the ability to customize my order and pay for it at the kiosk.
To keep it car related, one fast food place requires you to use their drive through and hope the human gets your order right.

I must say though I am concerned there will come the day when Technology passes me by.


I must say though I am concerned there will come the day when Technology passes me by

I too am in the geezer category and a lot of technology has already passed me by. :confounded:


The math isn’t there? Are you kidding me? That part is there and working fine. Right now the problem with autonomous vehicles is the edge-case scenarios, and unfortunately there are many. My specialized field for my MS in applied mathematics was networks and communication. If I did my MS today, I’d specialize in autonomous vehicles.

That applies to so many things here in the USA :slight_smile:

Take signs with 3 lines of “fine print”, or vehicle license plates where the pictures have more contrast than the actual plate number, etc., etc…

Compare that to the EU level of standards.

Don’t you see a contradiction here? :slight_smile:

Is it not still an “unknown unknown” type of problem, unlike one where you can make a predictable plan for how much effort is required before you are “done”?

Here is one good sample of an “edge case scenario” highlighting that the math is NOT ready:

That has nothing to do with math. Your example is stupid. Yes, math is involved, but not all programming is math.

Mike, I believe you know very well how adversarial attacks work on neural networks.
The noise introduced to the pig picture exploits the fact that the neurons in the network provide a very brittle connection between the input and the output classifiers, so the “right” type of noise overloads that and highlights the underlying issue: it has nothing to do with “programming” and everything to do with the “training dataset”.
Now, at what point do you think this “math” will become “ready”, if colored noise can make it give the wrong output, while the human brain easily “gets it”?
My humble opinion is that modern neural networks have been improved to perfection… within the limits they can reach, yet that is clearly not enough.
The math is not ready yet, in my view, for “autonomous”, yet it’s clearly ready for “assistive”.
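The pig-picture attack mentioned above can be demonstrated on even the simplest model. A minimal sketch in the spirit of the fast-gradient-sign method, using a toy linear classifier rather than a real image network (all values and names here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10_000  # stand-in for the pixel count of an image

# Toy "trained" model: class score = w.x + b, predict "pig" if score > 0.
w = rng.normal(size=dim)
b = 0.0

# An input the model classifies correctly and confidently.
x = w / np.linalg.norm(w)          # deliberately aligned with w

# For a linear model the gradient of the score w.r.t. the input is just w.
# Step *against* the sign of the gradient: each coordinate moves by only
# +/- eps, but thousands of tiny nudges conspire to flip the score.
eps = 0.02
x_adv = x - eps * np.sign(w)

print(w @ x + b > 0)      # True:  original classified "pig"
print(w @ x_adv + b > 0)  # False: perturbed input misclassified
```

The per-coordinate change is tiny, yet the decision flips, which is the brittleness the post describes: the attack targets the trained weights, not a bug in the program, so no amount of conventional debugging fixes it.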