
Lexus and Porsche are the most reliable 2010 cars?


Any flaws in the method of analysis?
Do the top cars have the fewest issues because they are driven the least?

Lexus, Toyota’s luxury division, led the nameplate rankings, with just 71 problems per 100 vehicles. It was followed by Porsche (94), Lincoln (112), Toyota (112) and Mercedes-Benz (115).


On the other end, the lowest-scoring brands were Land Rover, with 220 problems per 100 vehicles, Dodge (190), Mitsubishi (178), Jeep (178) and Volkswagen (174).

Since the cost of Lexus and Porsche is so exorbitant, I would expect the vehicles to be very reliable. That's the criterion for my analysis. It's not always the case, however…friends of mine own Land Rovers and Jaguars that have been nothing but money pits since new.

My niece also had continuous problems with her new BMW X5. She finally managed to get BMW to buy it back through an arbitrator. They claimed that the problems stemmed from the factory in South Carolina, but it was a real lemon from day one regardless.

Here’s Another View on This Survey.

There’s not a lot of difference in the top selling brands any more. What’s more important to me is not the brand, but the individual model/model-year I’m interested in owning.

Example: Chevrolet Impala (as opposed to GM in general) and Impala compared with cars by other makers in the particular category, in this case “large cars”.

Oh, and it’s probably good advice to stay away from first-year models.

CSA

That JD Power survey covers the first 3 years only! In today’s car market almost any car can be reasonably good for those 3 years.

American cars, even the good ones, used to be almost do-it-yourself kits, with the dealer correcting numerous little things under warranty. But they lasted a long time because of the basic overdesign. My 1965 Dodge Dart had 13 things wrong with it (mostly assembly defects) that had to be corrected by the dealer, and the driveshaft universal joints had to be replaced under warranty at 49,900 miles!

By contrast, my 2007 Toyota, which is 5 years old this month, had only one failure: the button on the seat belt that holds the buckle in place, a 50-cent item. Our 1994 Sentra only needed a new radiator under warranty at 59,000 miles. Nothing else failed.

JD Power makes its money from the manufacturers and sellers. An owner has to look at long term reliability and resale value. The first 3 year reliability is therefore not a terribly useful figure. Consumer Reports has more useful data.

Is the Consumer Reports data easily available? I am interested in some longer-term reliability studies.

JD Power makes its money from the manufacturers and sellers. An owner has to look at long term reliability and resale value. The first 3 year reliability is therefore not a terribly useful figure. Consumer Reports has more useful data.

The 3 year figure is probably because most of those high-end luxo cars are leased, rather than bought. Since most leases are for 3 years, the original buyer doesn’t need to worry about long-term reliability; they just bring it back and rent another car.

@usedEconobox2UsedBMW Consumer Reports normally goes out 6 years in their reliability postings. Next month’s April issue is the annual car survey edition. It will show reliability going back to 2007. If you go to the library you can look at back issues if you were interested in buying an older car.

In their last report, they show a Porsche 911 to have a Much Worse than Average reliability record, while a 2006 Boxster is Better than Average.

For 2006, Lexus has the ES Much Better than Average, the GS Better than Average, the GX Better than Average, and all the rest either Much Better than Average or Better than Average.

Another no-BS site is TrueDelta, which has owners reporting in on a quarterly basis. Their stats go back about 10 years and track the Consumer Reports figures quite accurately.

If you really want to dig in there is a book published by Dundurn, called “Lemon Aid”, a guide to used cars & minivans. It gives 9 years of ratings as well as digs up all the problems and recalls these cars have had. It sells for about $20.

The editor is Phil Edmondston, a US Marine who moved to Canada in the 60s (reason unknown) and now lives somewhere in Central America. He initiated the “Rusty Ford” class action law suit which resulted in mandatory corrosion standards for cars in 1976.

Since the cost of Lexus and Porsche is so exorbitant I would expect the vehicles to be very reliable.

If that were the case, then the Land Rover should be MORE reliable than the Lexus, since it’s MORE expensive. But instead - it’s DEAD LAST.

What does “reliable” mean? That it never craps out and sends you walking or waiting for a tow truck or that the heated leather seats have very few warranty claims?

Half the time, when I see these statistics, I think somebody just pulls them out of their rear end…How do you compare a car that sells 6,000 units a year to one that sells 120,000 a year? Are they comparing new cars that are covered under warranty?

Cost has no relationship to reliability. But a number of Lexus cars are very reliable. The LS400/430 is a great buy as a used car.

Some comparisons in quality are obvious. Compare my wife’s 2 Accords to the 3 Tauruses her sister owned. No matter what measure you used (repairs per mile, cost of repairs, etc.), the Accords won hands down.

Others aren’t as obvious…that’s why you need long-term comparative analysis to make a valid argument. If you have ONE study showing one thing but another study showing something different, then it becomes murky. HOWEVER, when you have hundreds (if not THOUSANDS) of studies from different groups in different parts of the world, and there’s a definitive pattern, then you have something. This is NOT the only study that has shown Lexus to be one of the most reliable vehicles on the road…it’s one of hundreds over the years.

@Caddyman I earn a good deal of my living with Reliability Analysis and Improvement. Mostly on large systems like refineries, power plants or fleets of vehicles or ships.

The technical definition is the continuous fail-free operation between mandatory shutdowns for maintenance, or the function availability (mission readiness in military terms) of the equipment.

For instance, we monitored our 1984 Chevy Impala for “mission availability” over 12 years and 210,000 miles. Maintenance and repairs were done proactively; we did not run the car until something broke.

Over 10,000 trips were made in those 12 years, and the car had 3 unexpected breakdowns, none of them serious. A rad hose blew and I had to pull into a nearby shop to get it fixed, the wiper motor quit, and the dash light blew a fuse, darkening the instrument panel. I also had 3 flat tires, since I worked in construction, but those were not the car’s fault.

So the failure rate of the car was 3/10,000 or 0.03%, and the reliability or mission availability was 99.97%. In other words, the odds are 99.97% that the car would not let you down on any one trip. If this had been an Italian Fiat, that figure would have been very much lower.
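For anyone who wants to check the arithmetic, the mission-availability figure above can be sketched in a few lines (the trip and breakdown counts are the ones reported in this post):

```python
# Mission availability: the fraction of trips completed without an
# unexpected breakdown. Figures are those reported for the 1984 Impala.
trips = 10_000
breakdowns = 3          # rad hose, wiper motor, dash-light fuse
                        # (the 3 flat tires excluded: not the car's fault)

failure_rate = breakdowns / trips      # 3/10,000 = 0.03%
availability = 1 - failure_rate        # 99.97%

print(f"Failure rate: {failure_rate:.2%}")
print(f"Availability: {availability:.2%}")
```

The same two lines of arithmetic work for any fleet: total missions attempted, divided into missions interrupted by an unplanned failure.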

Another way to measure reliability is to count how many times the car has to be in the shop for repairs (not regular maintenance), and use that as a measure. The problem with this is that a smart owner like you would repair something before it completely fails, while others would run things to failure and then claim they had an “unreliable” car.

In our case, for the first 100,000 miles, our most-repaired car was a 1976 Ford Granada 351 V8 with 56 repair trips to the shop, followed by the 1965 Dodge Dart at 39, the 1988 Caprice at 33, the 1984 Impala (a really good car) at 21, then the 1994 Nissan Sentra (an econobox) at 18. Note the sharp drop when you switch to a Japanese (though made in the USA) car. My 2007 Corolla, now 5 years old, has yet to be garaged for any repairs! In case you wondered, Consumer Reports rated these as Average (Granada), Better than Average (Dart), Average (Caprice), Better than Average (Impala), Much Better than Average (Sentra) and Much Better than Average (Corolla). A Crown Victoria would rate at Better than Average. Those measures were against other cars of that year reporting in; a Better than Average of 1965 would likely be a Much Worse than Average today.

Fleet managers have to work with the reliability/availability as well as the overall cost per ton-mile for maintenance and repairs. Taxis use cost/mile and time in the shop.

TrueDelta, a new website, measures these trips to the garage as well as the cost per repair as their yardstick. By that measure the Toyota Corolla and the Lexus come out on top.

In industry we use Reliability as well as Availability to measure how much of the overall production capacity is available for use. We call it “uptime”. This figure includes maintenance downtime, which in a chemical plant or refinery might be 3 weeks every 3-4 years. One of the most notoriously unavailable planes was the Blackbird SR-71 spyplane, which spent 3 weeks on the ground for maintenance between each spy mission flight.

It is important to take the personal bias out of reliability reporting; when people love their cars they report fewer problems, and if they hate the car, every little thing becomes a “reliability” problem.

@MikeInNH Good point! You will find that the much-maligned Yaris is even more reliable than the Accord. We lived 5 years in Asia in a country brutalized by the Japanese Army in WWII. They worship Japanese cars for their reliability, but only the status-seeking rich will buy British luxury cars, since they are so unreliable as to be a joke. The wife of the Finance Minister had a Jaguar that had to be towed from a 5-star hotel parking lot in full public view. Lexus and Mercedes are neck and neck there in terms of perceived value.

Taxi fleets also have to consider acceptability to their users. In many places, people consider anything but a full-sized car unacceptable, since they’re paying a metered rate and big cabs is what they’re used to. San Francisco started requiring cabs be hybrids a few years ago and there was some consternation at first, with companies worried people would feel cheated by the smaller cars. Yellow grumbled a lot because their fleet was all Crown Vics and they made a point of it in their ads. People got used to the smaller cabs, and the foreign tourists never even noticed. Now the cabs are Fusions, Camrys, some Priuses, Altimas and Escapes.

New York ran an experiment with hybrid taxis a few years ago and the companies found they were cheaper to run, to their surprise. The drivers buy gas, so that wasn’t it. The owners saved on brakes and tires, especially, but also the hybrids were in fewer crashes. They speculated that less power promoted less aggressive driving. That definitely makes sense in New York. San Francisco cabbies are so much more laid back.

I wonder how accurate the data is, the raw numbers, that Consumer Reports gathers to make their reliability pronouncements? How do they gather their data? Since consumers seem to place great importance on these reliability conclusions, car makers can be expected to do everything possible to tilt the “data” in a favorable direction…The conclusions Consumer Reports comes to are only as good as the raw data they are provided with…Do car makers volunteer information about warranty repairs performed at their dealerships without “adjusting” the numbers a little? Why would they tell Consumer Reports ANYTHING? Especially anything negative…

Carmakers have nothing to do with the CR data. It’s collected annually from CR subscribers in a written survey.

Here we go down the CR trail, once again…

@Caddyman You seem to be very suspicious of Consumer Reports data. This forum has discussed this in great detail, and an outfit like this has to be scientific in its data gathering and processing. They randomly buy each item they test, including cars. The dealer does not know it is CR that is buying the car, so no special preparation is done. They also do not accept any advertising or donations from companies that produce the items they test. If you buy one of their magazines there will be a description of how they operate. J.D. Power gets their money from the manufacturers.

The raw input is the completed forms from 250,000 or so respondents, more than a typical political survey gets. If the sample received is statistically too small to be accurate, they will not compile a figure and will instead put in an asterisk stating “insufficient data”.

The only argument you might have is that small items are reported as well as major ones, but the categories include “engine major”, “engine minor”, and 7 others. So if you live in Arizona, “body rust” may not be of interest to you. Items fixed under warranty are excluded, but there is a monthly article on recalls and defects for all products, including cars.

I have been using this source since 1964 to buy anything from cars to refrigerators and TVs, etc. As a result, we have Panasonic TVs, Whirlpool washer/dryer, LG microwave, Kitchen Aid and Braun small appliances, and so on. All reliable and long lived!

Others will chime in that you should not use CR alone when buying a car. I agree with that since cars are a very subjective purchase, and lack of seat comfort may not be an issue with everyone. They still rate the Yaris as a poor buy, but the reliability is outstanding. For years they rated the Elantra as unacceptable, although it was very reliable. The reason was poor crash resistance from a certain angle.

The proof of the pudding is the results arrived at by other impartial surveys. TrueDelta has been in business a few years now, and their feedback from users, including myself, is very similar.
In a prior post here, I recommended a book called “Lemon Aid”, a Dundurn publication by Phil Edmondston, which goes into great detail on recalls, defects, premature failures, etc., and comes to similar conclusions on car reliability.

My only issue with Consumer Reports is they do not give annual actual out of pocket upkeep costs. A Lexus 400 is very reliable, but would likely cost more to keep running than a stripped Ford Focus, for instance. Years ago the AAA gave repair costs for various models. The other issue is very long term ownership. A VW Passat is a good car for a certain length of time, but the second half of its life tends to be expensive and troublesome. The CU reports do not reflect that.

Since the makeup of the readership is mostly middle-class people who read, and few who would drive very old cars, they do not publish data older than 8 years. Every 5 years or so they have an old car survey, usually called their “drive it forever” article. The results of that strongly favor Japanese cars, but include such US favorites as the Crown Victoria and US pickup trucks.

My recommendation to you is to spend the measly $45 or so and get a year’s subscription; it comes with a free 300 page Annual Buying Guide listing all sorts of products. Your wife will be delighted reading it for the recommendations of household products.

In summary, CR is the most bias-free source available for a number of products, including cars and trucks.

Others will chime in that you should not use CR alone when buying a car.

I’m one of the people who say that…but not for the reason you give.

The problem I have with CR is this: I’ve seen two identical vehicles sold under different nameplates, like the GMC S-15 and the Chevy S-10, yet these vehicles have different ratings. If true scientific data-gathering methods are being used, then that’s statistically impossible. But it happens all the time with CR. I’ve been able to find examples in every issue I’ve ever looked at.

But in general, CR has been sound at the brand level…I just question specific vehicles.

Mike, it’s statistics that make those conflicting results possible. It means the sample sizes weren’t big enough. CR should increase their sample size cutoff, but they probably don’t want that many “*” symbols on their ratings.

That’s why I, like you, use CR for brand trends, not specific trouble areas on specific models for specific years.

Mike, it's statistics that make those conflicting results possible. It means the sample sizes weren't big enough.

But they don’t indicate that the sample size IS different. And if the sample size is too small…then they shouldn’t report any data…good or bad.
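To see why a small sample alone can make two badge-engineered twins score differently, here is a rough sketch using the standard binomial margin-of-error formula. The problem rate and response counts below are made up for illustration; they are not CR’s actual numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed problem
    rate p computed from n survey responses."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical twins with the same true problem rate, but one
# nameplate draws far fewer survey responses than the other.
rate = 0.15                  # observed fraction of owners reporting a problem
for n in (2000, 80):         # many responses vs. very few
    moe = margin_of_error(rate, n)
    print(f"n={n:4d}: {rate:.0%} ± {moe:.1%}")
```

With 2,000 responses the uncertainty is under two percentage points; with 80 it is several times larger, easily enough to push two identical trucks into different rating bands by chance.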