Autonomous Vehicles Tech

As a software engineer, I find this very good for the advancement of autonomous vehicles.


Darn good idea! Industry takes the lead in creating a common architecture. It should allow universal vehicle-to-vehicle communication as well as help close the gap between detection and decision. Those not in the group should strongly consider joining.

I saw it right off, but then there it was, right in the article:

Among other things, it could turn out to be a way for companies like NVIDIA and NXP to own a large share of the future market for self-driving-vehicle system components – and, potentially, to shut out smaller or less-nimble rivals.

Therein lies the problem…

Having worked on computers for the last 25 years, I am concerned! Are there backup and restore systems in place? How is the system defended against viruses or hijacking? We have dealt with many computer issues over the years, replacing poorly manufactured capacitors and so on. It is not that I do not own a newer car (a 2017 RAV4), but computers are less reliable in the long run.


Having worked as a software engineer and software manager for over 45 years: anti-virus software is really only for systems with an open architecture (e.g., Windows or Linux). Closed OSes are a lot easier to protect. The systems we design for the telecom industry run a very closed OS. Sure, we’re on the network, but there are very, very few ways in. Windows and Linux have hundreds of ways in, so they’re a lot harder to protect.

As for faulty electronics… this isn’t anything new compared to what we’re dealing with today, just more of it. The more complex the system, the more likely a failure. But electronics are several times more reliable than mechanical devices.

I’d be extremely surprised if they have a disk or SSD that would require a backup or restore. The program should be all on EPROM or EEPROM.
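
For what it’s worth, firmware held in EPROM/EEPROM or flash is normally protected by verifying the image in place at boot, not by backing it up. Here’s a minimal sketch of that idea in C; the image base address, length, and expected CRC value are all made-up placeholders for illustration, not anything from the article:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical location and size of the firmware image in flash/EEPROM,
 * plus a CRC value the build process would have stored alongside it.
 * All three are placeholders for illustration only. */
#define FW_IMAGE_BASE   ((const uint8_t *)0x08000000u)
#define FW_IMAGE_LEN    (128u * 1024u)
#define FW_EXPECTED_CRC (0x12345678u)

/* Plain bytewise CRC-32 (reflected, polynomial 0xEDB88320). */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* At reset the bootloader checks the image where it sits; there is nothing
 * to "restore" because the image in flash is the only copy it needs. */
int firmware_image_ok(void)
{
    return crc32(FW_IMAGE_BASE, FW_IMAGE_LEN) == FW_EXPECTED_CRC;
}
```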

Maybe/Maybe not. And I’m not sure it’s a problem.

Monopolizing any technology is a problem. Doing it by casting it into a standard is even more concerning.

The Intel stack (and its clones) has been monopolizing computer CPUs for decades. Technology in this arena is NOT stagnant.

If anyone is interested, next week’s Nova show on PBS is about autonomous vehicles. Looks interesting.

For those who have never watched Nova before… it’s a one-hour, in-depth science and technology show, usually focusing on one topic at a time. This week’s show was on “Why Bridges Collapse.” That was a scary show to watch.


See the problem? It’s not one company with a proprietary architecture. In this case they are not exactly “clones”; they are different architectures that accomplish the same goal. This also only pertains to PCs. In the real hardware world, there are many microprocessor and microcontroller options to choose from. I would be very concerned if the hardware controlling critical safety features were PC based. In reality, the hardware being used here is derived from PC video controller chips (DSPs), not the main CPU. And there are literally a dozen or more options for DSP-type chips that can do this kind of work. There is no need to standardize the hardware, or even the chip-embedded firmware, to benefit one company and limit others from innovating at this stage of development.

A lot of it is still cloned. AMD second-sourced Intel’s designs through the 386 and into the 486 generation. Then they started going their own way, and now they are very different. The overall architecture is the same… the implementations are different. And that is actually a good thing.

DSP chips are used for processing digital signals… they’re not very good as general-purpose chips like the Intel x86 line. Intel actually has a DSP product based on the i7… but it does the signal processing in software.
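
To make that distinction concrete, here’s a minimal sketch in C of the multiply-accumulate loop at the heart of most DSP workloads (filtering, convolution, correlation). The five filter coefficients are invented for illustration; the point is that DSP chips have dedicated MAC units and addressing modes for exactly this pattern, while a general-purpose x86 core has to grind through it with ordinary instructions:

```c
#include <stddef.h>

/* Made-up 5-tap low-pass FIR coefficients, for illustration only. */
static const float coeffs[5] = { 0.1f, 0.2f, 0.4f, 0.2f, 0.1f };

/* y[n] = sum over k of coeffs[k] * x[n-k]: one multiply-accumulate per tap
 * per sample, which is the bread-and-butter operation of a DSP. */
void fir_filter(const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float acc = 0.0f;
        for (size_t k = 0; k < 5 && k <= i; k++)
            acc += coeffs[k] * x[i - k];
        y[i] = acc;
    }
}
```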

At my company in the telecom field - we use a slew of DSP chips from several different vendors. The other thing DSP chips lack is a good set of programming tools.

I’m sure autonomous vehicles have a slew of DSP chips as well… but the main core isn’t a DSP.

Here’s a look at the chipset Tesla designed and Samsung then built.

Well, we use quite a few DSP based microprocessors in designs here. There are very advanced tool sets for them, just like the other architectures out there. What tool set(s) are you using?

The NVIDIA design was based on their core DSP architecture used in graphics cards for PCs. This is their main controller…

BTW- your last link kinda proves my point. Tesla is using something different than what is being proposed as the be-all-end-all hardware platform that would be “standardized” and benefit one hardware manufacturer. That stifles competition at a very early stage…

Considering that nVidia just arbitrarily decided that instead of paying $500 for a front-line graphics card, we’re gonna pay $1000 and up for no particularly good reason, I’m not really on board with them getting into a position to set vehicle rental fees.

“The self-driving Yugo? That’ll be $500 a day plus $50 per mile.”

The problem with everyone doing it differently is that everyone makes the same mistakes over and over again. This is one technology I’d like to see a group effort on, because of the inherent dangers if it’s done wrong.

There are numerous safety- and mission-critical applications that are developed independently by multiple companies. Take aviation, for example: the various plane manufacturers do not share a common hardware platform, nor do they share a common firmware architecture. At some point, whether there is one common platform or dozens of them, a comprehensive V&V (verification and validation) protocol will be developed that each system must pass. As long as each submitted system passes all of the regulatory testing, it should be assumed to be as safe as the rest of them.

I’m not just saying it’s not bad… I know it’s not bad. I know from experience that standardizing has many technical advantages and far fewer problems. And it doesn’t stifle technology.

Well, I certainly agree that standardizing lowers the risk; just look at the PC vs. Macintosh as a prime example. But that also highlights the problem with innovation: only one company is allowed to innovate on the latter platform. I see the same thing happening here, except we are all now a captive audience without any choice in the matter. “Use our version of hardware or nothing.” And then they reap all the rewards from licensing that intellectual property. No thanks, too sweet a deal for those guys. BTW, if I owned that IP, I’d be all for it. :stuck_out_tongue_winking_eye:

I know a guy through some friends who developed a piece of hardware for trucks and then immediately campaigned to have that solution written into law in his home state. He was successful and made millions, because everyone operating a truck had to install a version of his device, which he licensed to various manufacturers.

Explain how it cannot. They are casting one hardware platform into the standard where the technology rights are owned by a single company. How is anyone going to innovate under those conditions? I suspect you are thinking about the software aspect while I have been arguing about the hardware constraints…correct me if I’m wrong in that assumption.

In certain fields I agree… but that hasn’t been the case with the computer industry. The main reason is that the market is always looking for faster and cheaper. Chip designers are always looking for ways to improve their designs.

You mean to tell me that the Intel chips sold today are NOT faster and cheaper than Intel chips from just 5 years ago? Moore’s Law is still in effect. Maybe in 10 years or so we’ll reach that limit.

Oh, man :slight_smile:

If AMD were not kicking Intel’s… back… we would be seeing what happened a few years back, when AMD happened to take a wrong direction. I’m happy they have really kicked Intel in multiple areas with the latest generation. I’m all for vibrant competition, not the “next generation is barely 1% faster” we’ve seen from Intel lately.

Moore’s Law has been dead in the water for quite some time now. It was taking longer and longer to double the computation speed of individual CPUs, so now chips are routinely packed with 2, 4, 8, or more “cores”, which do not get faster anymore, because we did indeed reach the limits. So we scale horizontally, where previously we would shrink the feature sizes (we hit limits there) and jack up the clock speed (we hit limits there too).

Now we are finally getting around to fixing the actual software, as dumping more speed on inefficient code has reached its limits. I’m talking purely single-execution-thread; sure, you can still do more with parallel processing, but there you need better processing methods (a.k.a. “programs”).
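
As a rough illustration of what “scaling horizontally” looks like in code, here’s a minimal sketch in C with POSIX threads and a dummy workload (the array of ones and the four-thread split are both arbitrary choices of mine): the throughput comes from dividing the work across cores rather than from a faster single core.

```c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define N (1u << 24)

/* Each worker sums its own slice of the array and stores a partial result. */
struct chunk {
    const double *data;
    size_t begin, end;
    double partial;
};

static void *sum_chunk(void *arg)
{
    struct chunk *c = arg;
    double s = 0.0;
    for (size_t i = c->begin; i < c->end; i++)
        s += c->data[i];
    c->partial = s;
    return NULL;
}

int main(void)
{
    double *data = malloc(N * sizeof *data);
    if (!data)
        return 1;
    for (size_t i = 0; i < N; i++)
        data[i] = 1.0;                          /* dummy workload: just ones */

    pthread_t threads[NTHREADS];
    struct chunk chunks[NTHREADS];
    size_t step = N / NTHREADS;

    /* Fan the work out across the cores... */
    for (int t = 0; t < NTHREADS; t++) {
        chunks[t].data = data;
        chunks[t].begin = (size_t)t * step;
        chunks[t].end = (t == NTHREADS - 1) ? N : (size_t)(t + 1) * step;
        chunks[t].partial = 0.0;
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    /* ...then gather the partial results. */
    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(threads[t], NULL);
        total += chunks[t].partial;
    }

    printf("sum = %.0f\n", total);              /* expect 16777216 */
    free(data);
    return 0;
}
```

Compile with -pthread; how much it actually helps depends entirely on how well the real workload parallelizes, which is the “better programs” part.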