Since computers have been able to optically read and understand text for well over 10 years…I firmly believe that it’s easily solvable.
Programming is applying knowledge and logic together. Of course you need to understand the rules you’re trying to program to; that’s a given. But that’s the easy part. Incorporating that knowledge into a program is the difficult part. Years ago I had to write a method that did Macro Appreciation for an accounting app. I knew NOTHING about it. Took me about 1/2 to understand how it worked…applying those rules in an app took a lot longer.
Your definition of what true AI is is very limited. If you’re talking about creating an AI system that can think like a human, then yes…that’s not there and may never be there. But if you’re talking about building computer systems that can perform complex problem solving and adaptive learning, then AI is here. At our company we have several AI programs incorporated into our applications. These systems are constantly learning and adapting on their own.
I was just going to mention Highway 101. There was an interesting story in Road & Track quite a few years ago. A CHP officer pulled over a car that was doing, that’s correct, 101 mph. What made the story more interesting to me, if my recollection is correct, is that there were 4 people in the car, each of a different nationality, and each speaking 2 languages, but no 2 of them speaking the same 2 languages. This resulted in the conversation being translated 4 ways, if memory serves, among English, French, Japanese, and German. And yes, the driver had mistaken the highway sign for the speed limit sign.
Limited definition or not, what currently passes for “artificial intelligence” is really pretty dumb. In this case, I don’t know whether the problem lies in the quality of the OCR or in “understanding” what’s being “read,” but it’s pretty disappointing in a brand-new $49k automobile.
And I’d bet they don’t always get it right, just based on the public-facing AI that we’re exposed to.
Look at Google’s AI which determines what kind of news items you want to see in your phone’s news app. When I was shopping for my current truck I was looking up towing capacities of various trucks. Google’s AI then determined that I was very interested in stories about “towing.” I suddenly started getting articles from papers and TV stations around the country about local towing ordinances, and complaints against towing companies.
It got to the point where I kind of rolled my eyes any time my wife asked me to look something up, because I just knew the AI was likely to decide I wanted to see stuff about whatever topic it was all the time.
AI systems have made breakthroughs in Medical Research, Nuclear Research, Chemistry, and Biology. They’re doing it far faster than we ever thought possible. Pretty dumb? I suggest you do some research. I have.
I’ve gotten to the point where I look up random stuff like that in “hidden mode” or “secret mode” on my browser, so I don’t keep seeing it in my targeted advertisements, er, I mean web results.
The safest way to surf is to get a VM (Virtual Machine). A VM is basically another system that resides inside your system. You can install whatever operating system you want. I have one that runs Windows 10, and I keep a copy of it in case that one gets a virus. All I do then is delete it and make a copy of the copy. It’s pretty much IMPOSSIBLE for any hacker to get to your real computer system. There are many ways to get a VM, or you can even create your own. This can also be in the cloud if you want.
You mean the same AI that can’t tell the difference between a route number and a speed limit?
TL;DR: The unusual effectiveness of adversarial attacks | by Kirthi Shankar Sivamani | Medium
that pretty much illustrates how limited “modern-day AI” is…
although in narrow areas it can be, and is, quite useful.
or, if somebody wants to see more of the “action”:
As I said…it’s a simple bug. You can’t judge an entire discipline by one bug.
The Buffalo-to-Lockport expressway is the I-990. (It actually starts and ends in Amherst.)
As processor speeds increased, artificial intelligence could be applied to more areas. Back in the late 1980s, I did research in an area of artificial intelligence called neural networks, and more specifically, back propagation neural networks. I set up a problem on the computer in my office, which had the Intel 286 processor. The program ran day and night for a little more than four days. A couple of years later, I bought a computer with the Intel 486 chip. I set up the problem again, started the program, and went from my den to the kitchen to make myself a cup of coffee. Half an hour later, I went back to my den to check on things and the program had finished executing. The 486 processor did in 40 minutes what it took the 286 four days to complete. Now, the speed of the Intel 486 compared with today’s processors is like comparing the speed of a turtle with that of a Ferrari.
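For anyone curious what “back propagation” actually means, here’s a minimal sketch in Python with numpy. The XOR problem, layer sizes, and learning rate are arbitrary illustrations, not the research problem I ran on the 286.

```python
import numpy as np

# A tiny back propagation network: 2 inputs, 4 hidden units, 1 output,
# trained on XOR purely as an illustration.
rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
lr = 1.0  # learning rate, chosen arbitrarily

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through the sigmoid
    # derivatives -- this is the "back propagation" step.
    d_out = (y - out) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent weight updates.
    W2 += lr * h.T @ d_out
    b2 += lr * d_out.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ d_h
    b1 += lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```

A loop like that, run over thousands of passes, is exactly the sort of workload that took days on a 286 and minutes on a 486.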
I had a colleague who was doing research in computer vision. It was amazing how fast the computer could carry out the necessary Fourier transforms. Both my colleague and I retired 10 years ago. I am certain much progress has been made in AI since my retirement. Although I still think I have all my marbles, they roll rather slowly and I haven’t been able to keep up. I do believe that artificial intelligence will be there soon.
Fifty years ago, when I was working with statistical models as a graduate student, my advisor, who was in his mid-sixties, said that the future was in artificial intelligence. Twenty years later, when I started studying back propagation neural net models, I found these models had better predictability than the statistical models.
If I were to begin my career today, it would be in artificial intelligence.
“Simple bug” or not, the fact that it made it into production is concerning, to say the least.
I can just imagine what the TLX would make of that. Mach 1.3 anyone?
I learned COBOL on an IBM mainframe in the early ’70s. Today my home PC has an i5 CPU and it’s nowhere near state of the art. But faster processing won’t help bad code.
@davepsinbox_157004 I agree that faster processor speed won’t help bad code. I have experienced bad code in commercially available statistics software. I was running a seldom-used statistic on a popular commercial statistics package and the results didn’t seem correct. I made up some data, calculated the results by hand, then ran the data through the program. The results from the program were wrong. I contacted the company, and a couple of weeks later we received a patch for the statistics package.
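That kind of hand check is easy to script today. Here’s a minimal sketch in Python; the data and the statistic (sample standard deviation) are made up for illustration, not what I was running back then.

```python
import math
import statistics

# Made-up data, small enough to verify by hand.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

# Hand calculation of the sample standard deviation (n - 1 denominator).
n = len(data)
mean = sum(data) / n
var = sum((x - mean) ** 2 for x in data) / (n - 1)
by_hand = math.sqrt(var)

# The library's answer should agree to within rounding error.
assert math.isclose(by_hand, statistics.stdev(data))
print(by_hand)  # about 2.138
```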
I also made a mistake in a program I wrote to analyze some data and I was going to present the results at a conference. I was under a deadline to get the paper to the chair of my session at the conference. The analysis was rather involved and I was running my program on a time sharing machine. As I was writing up the results, I realized that something was wrong. My university was between terms, and the computer center operator was printing schedules for the next term and said that there was processing time for my job. She started my job at 4:00 p.m. She called me at 1:00 a.m. and said my job had run successfully. At 4:00 a.m., my job printed and she called me again. I drove to campus and picked up the printout. I made the corrections to the results and got the paper submitted by the deadline with express mail.
Coding errors do happen and the software must be exhaustively tested with different inputs for errors.
Here’s the problem, which I’ve stated in this forum on numerous occasions: autonomous vehicles are NOT ready for primetime yet. There are some segments right now…but the vast majority are not ready yet. It’ll be a few years before car manufacturers reach SAE Level 4 for autonomous vehicles, and another few years after that for Level 5. You are judging this technology on tomorrow’s standards TODAY. I’ll wait till vehicles meet the SAE Level 4 standard and then see what bugs (if any) there are.
SAE International Releases Updated Visual Chart for Its “Levels of Driving Automation” Standard for Self-Driving Vehicles
It could make it worse. One of the problems with coding today is efficiency. I started programming when processor speeds were less than one-millionth of what my laptop has today, and CPU memory was expensive and again over 1,000,000 times smaller than even a small, cheap laptop’s. So we really had to learn how to take advantage of what we had. We really taxed the limits of what systems could do. And throughout my career I’ve seen many people just add CPU power to overcome the inefficiency of the program/programmer. Back in the ’70s, computers were expensive and programmers were cheap. Now it’s the complete opposite. Sometimes it’s cheaper to just add more compute power to solve the problem (not always).
Coding errors happen all the time. You just want to mitigate them for the final release of the product. Testing is NOT the best way to eliminate them. Sound programming standards and practices are the best and easiest way. You follow a “Best Practices” method: you write extensive unit tests and have code reviews. Development environments have a lot of stuff built into them to help. LSEs (Language Sensitive Editors), analysis tools…built-in design patterns like MVC and built-in data structures and sorts. If you’re depending on the test team to find your bugs, it’s too late.
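To make that concrete, here’s a minimal sketch of a unit test using Python’s built-in unittest module. The depreciation function is a made-up stand-in, not code from any real product.

```python
import unittest

def straight_line_depreciation(cost, salvage, years):
    """Hypothetical function under test: annual straight-line depreciation."""
    if years <= 0:
        raise ValueError("years must be positive")
    return (cost - salvage) / years

class DepreciationTest(unittest.TestCase):
    def test_typical_asset(self):
        # $10,000 asset, $1,000 salvage value, 5-year life -> $1,800/year.
        self.assertAlmostEqual(straight_line_depreciation(10000, 1000, 5), 1800.0)

    def test_zero_life_rejected(self):
        # Bad input should fail loudly, not return garbage.
        with self.assertRaises(ValueError):
            straight_line_depreciation(10000, 1000, 0)

if __name__ == "__main__":
    unittest.main()
```

The point is that the developer who wrote the function ships these checks with it; the test team is the backstop, not the first line of defense.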
@MikeInNH I think a good analogy can be made between computers and automobiles. Automobile manufacturers kept increasing engine displacement, which allowed for inefficiency in the automatic transmissions. The power of the engines increased at the expense of gasoline mileage. When gasoline mileage had to improve, manufacturers figured out ways to get more power out of smaller engines with fewer cylinders, and transmissions with lockup torque converters and more speeds. The industry went from two-speed automatic transmissions to transmissions with six or more speeds.
I agree that good programming practices are important, and so is the selection of algorithms. An algorithm that runs in exponential time may still do the job on today’s faster processors, but an algorithm that does the same thing in logarithmic time is far more efficient. It’s like using a more efficient transmission to get more out of an engine.
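A small illustration of that point in Python: searching a sorted million-element list one item at a time (linear) versus by repeated halving (logarithmic). The data and target are arbitrary.

```python
import bisect

# A sorted list of a million values: same question, two algorithms.
data = list(range(1_000_000))
target = 987_654

def linear_search(items, x):
    # O(n): examines items one at a time -- up to a million comparisons here.
    for i, v in enumerate(items):
        if v == x:
            return i
    return -1

def binary_search(items, x):
    # O(log n): halves the search space each step (items must be sorted),
    # so about 20 comparisons for a million items.
    i = bisect.bisect_left(items, x)
    return i if i < len(items) and items[i] == x else -1

assert linear_search(data, target) == binary_search(data, target) == target
```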
I had an engineer who worked for me who came from writing compilers. He was really big on writing recursive methods. Recursion is cool and makes for nice, neat code, but it’s NOT very efficient. Recursion makes the problem simpler, but you sacrifice performance.
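The textbook illustration of that trade-off is Fibonacci. Here’s a minimal sketch in Python; the naive recursive version is the neat, math-like code, and the iterative version is the workhorse.

```python
def fib_recursive(n):
    # Neat and close to the math, but O(2^n): it recomputes the same
    # subproblems over and over, plus a stack frame per call.
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Same answer in O(n) time and O(1) space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_recursive(20) == fib_iterative(20) == 6765
```

To be fair to the compiler folks, recursion isn’t always exponential; here the cost comes from recomputing overlapping subproblems, but the per-call stack overhead is real either way.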