Diving Deeper on Tesla and Consumer Protections

After Tesla's first fatal 'autopilot' crash, doing the math on the company's claims reveals some holes in the PR. Besides, the math excuse is a total dodge.


Tesla v Fortune

Just a few hours after I posted yesterday, a couple of different folks directed me to Tesla’s blog post regarding Joshua Brown’s fatal crash on ‘autopilot’ and an article by Fortune that is critical of Tesla and CEO Elon Musk.

It took me some time to digest both pieces, as well as some additional information surrounding them, but the timelines they present are clearly different from those more broadly reported in the media.

The most contentious part is the gap in time between the initial accident, Tesla’s investigation, and the eventual start of the NHTSA evaluation. It is hard to say, though, whether this is standard practice for accident investigations or whether something about the process is abnormal.

All things considered, it is a story that is still developing, and it would be speculative to comment on why the timeline has unfolded the way it has until all parties have published their findings and reports.

The remainder of the discussion over the SEC filing, sale of shares in Tesla, and other fiscal matters is not really relevant to the piece I wrote.

New tech, old fogey

I also got a few emails yesterday after publishing that asserted something along the lines of how I was being “anti-innovation” or was simply “resisting change as [I] get older.”

While it is true that I am indeed getting older, it is far from the case that I am somehow resisting change. My larger point was not about the fantastic new tech that comes out every day, but about two things:

  1. We, as consumers, and as human beings, need and deserve oversight of companies that are pushing at the forefront of technological innovation (which I’ll write more about below).
  2. We all need to be aware of the human costs (fatal or otherwise) associated with every advance in technology, and we should be sure that companies acknowledge those risks, provide adequate education about their technology, and are not utilizing the general public as a laboratory for validating hypotheses.

When I wrote about how Tesla’s confident attitude toward its technology can lead drivers to invite greater risk through reckless behavior, I was less interested in chastising Tesla and more concerned with how we provide better education for drivers and greater oversight of emerging technology, and how we generally work to improve safety by not abusing technology or becoming overly reliant on it.

There is a wonderfully written, well-considered article that covers much ground, from Mr. Brown’s fatal crash to the first recorded steam railway fatality. In short, author Karl Stephan artfully establishes a pattern of learning from tragedy as new technologies are introduced to the world.

Mr. Stephan illuminates this notion of trust from a different angle, describing how the very success of a technology can be one of its greatest dangers:

Even if I had a self-driving car (which I don’t), and after driving it for a while and learning what it typically can and can’t do, I wouldn’t feel very comfortable just sitting there and waiting for something awful to happen, and then having to spring into action once I decided that the car wasn’t doing the right thing. 

That’s a big change of operating modes to ask a person to do, especially if you’ve been lulled into a total trust of the software by many miles of watching it perform well. Who wouldn’t be tempted to watch a movie, or read the paper, or even sleep?

Indeed, when companies provide new technology that can save lives by removing some portion of human decision-making and/or responsibility for certain actions, the temptation to exploit that technology is always strong.

When airbags were first introduced, there were inevitably cavalier folks who believed the airbag was a stand-in for seatbelts, when in fact some studies showed that fatalities rose when airbags were used without a seatbelt. As a result, a great deal of education focused on how the two technologies were intended to be used in conjunction for the greatest benefit. Reporting after the first major study results were in shows just how successful airbags were (when used together with a seatbelt).

As with anti-lock brakes, the seatbelt, and countless other automobile innovations before Tesla’s ‘autopilot,’ the benefits were clear, despite some fatalities linked to the technology itself. The NYT article summed it up thusly:

“It is like the introduction of a life-saving vaccine,” said Mr. O’Neill. “There may be a few predictable fatalities, but on the balance there is a huge public health saving.”

Our rights: consumers, citizens, humans

I wanted to take a second to elaborate on the subject of oversight.

When I write “oversight,” I do not mean to suggest that it must be government-led oversight. Indeed, as consumers, we already have some very strong protections built into the Bureau of Consumer Protection, various state and local offices, and countless independent organizations that review products on behalf of consumers.

That said, I firmly believe we cannot rely on government as our sole advocate in the emerging tech landscape. Increasingly, the burden falls on consumers / users / citizens, as companies push hard in ever more directions. Indeed, the fact that several market segments for new products didn’t even exist one, two, or five years ago makes it improbable for any traditional regulatory body to keep up with the pace of innovation.

The problem is not just in hardware, or tangible goods, either. As more of our data moves online, new aspects of our lives are quantified and tracked, and more intimate details of our lives are bought and sold for research or marketing, we risk losing control of aspects of our humanity that we never thought of as ‘for sale.’

While the vast majority of that data is used for the collective and individual good (health care, social welfare, economic equality, and so on), we cannot discount the influence of bad actors working within the system, or of corporate entities motivated by returning profits to hungry investors and shareholders.

Our rights in a more digital world are a complex and tricky thing. New issues of data ownership seem to emerge on a daily basis, from sources as varied as Facebook to the smart fridge that knows what you eat. Can an insurance company charge you higher premiums if the fridge cam shows you eating like crap every day? Some might say yes. But what if your (hypothetical future) FitBit shows that you are building muscle and burning thousands of extra calories a day? Does the added consumption make sense now?

Regardless of the answers to the above questions, a larger, more fundamental question looms: who owns that data, and what are our rights regarding its use?

Almost universally at this point, when we sign up for a new service, buy a new piece of tech, or otherwise participate in any activity that is connected to the internet, we have agreed to terms and conditions that vastly favor the corporation. Knowingly or not, we surrender our rights and our ownership to hundreds of services and partner businesses on a regular basis. In return we get the benefits (real or perceived) of the service in question, but very little of the traditional protections afforded to us in more established markets.

Moving forward, there is likely very little headway to be made with the corporations themselves. We will have to invest in better education for consumers, greater community resources for when things go wrong, and broader protections for individual citizens that balance the value of human life against the volume of consolidated corporate wealth.

A bit of fact checking

Finally, I wanted to dig into one thing Tesla asserts in some of its literature, and that the Fortune article quotes Elon Musk as writing in an email thread:

“Indeed, if anyone bothered to do the math (obviously, you did not) they would realize that of the over 1M auto deaths per year worldwide, approximately half a million people would have been saved if the Tesla autopilot was universally available. Please, take 5 mins and do the bloody math before you write an article that misleads the public.”

I cannot find accurate data on the total number of worldwide traffic fatalities (cited above as 1M), but the claim that ‘autopilot’ would cut fatalities in half is interesting, and it has been brought up before. I know lots of other media outlets (even MIT) are covering this right now, but I thought I’d take a look, based on Tesla’s own numbers and some basic math.

First, some data points:

  - roughly 80K Model S sold* worldwide with ‘autopilot’ tech available, through Q2 2016 (source)
  - roughly 63K of those in the U.S. (source, source)
  - the above based on ‘autopilot’-capable cars having started shipping in Oct 2014 (source)
  - 253M cars on the road in the U.S. (2014) (source)
  - 1.2B cars on the road worldwide (2014) (source)

(* I denote sold here because they may not all have been delivered yet. For the sake of argument, I will assume they have.)

Tesla’s post after Mr. Brown’s death claims:

  - “Among all vehicles in the US, there is a fatality every 94 million miles”
  - “Worldwide, there is a fatality approximately every 60 million miles”
  - 130M ‘autopilot’ miles through June 2016

So, let’s start the math!

If there have been 130M miles on ‘autopilot,’ driven by a fleet of 80K Model S vehicles, then we get an average of 1,625 miles on ‘autopilot’ for each vehicle:

130,000,000 / 80,000 = 1,625
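
The same arithmetic in a couple of lines of Python, as a quick sanity check (the 130M-mile and 80K-car figures are Tesla’s, as cited above):

    # average 'autopilot' miles per vehicle, using Tesla's figures
    autopilot_miles = 130_000_000    # total 'autopilot' miles through June 2016
    model_s_worldwide = 80_000       # approx. Model S sold worldwide
    print(autopilot_miles / model_s_worldwide)  # 1625.0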

That means that, since this is the first known fatality, the per-vehicle fatality rates on ‘autopilot’ look like:

1 / 80,000 = 0.0000125 'autopilot' fatality rate worldwide
1 / 63,000 = 0.0000159 'autopilot' fatality rate U.S.
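
Note that these are per-vehicle rates (one fatality spread across each fleet), not per-mile rates. The same check in Python:

    # one fatality divided across each 'autopilot'-capable fleet
    print(1 / 80_000)  # 1.25e-05  -> 0.0000125 worldwide
    print(1 / 63_000)  # ~1.59e-05 -> 0.0000159 U.S.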

By using the same number of miles (1,625) as an average for all of the cars worldwide and in the U.S., we’d get some total miles driven like so:

1,625 x 1,200,000,000 = 1,950,000,000,000 miles worldwide
1,625 x 253,000,000   =   411,125,000,000 miles U.S.
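
Again as a quick Python check (assuming, generously, that every car on the road averages the same 1,625 miles):

    # hypothetical total miles if every car drove 1,625 miles
    print(1_625 * 1_200_000_000)  # 1,950,000,000,000 miles worldwide
    print(1_625 * 253_000_000)    # 411,125,000,000 miles U.S.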

Plugging in the fatality rates of one per 60M and one per 94M miles, respectively, we get:

1,950,000,000,000 / 60,000,000 = 32,500 fatalities worldwide
32,500 / 1,200,000,000         = 0.0000271 fatality rate worldwide

411,125,000,000 / 94,000,000 = 4,373 fatalities U.S.
4,373 / 253,000,000          = 0.0000173 fatality rate U.S.
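
And in Python (note that the U.S. numerator is the 411B total-miles figure from above, not the 253M car count):

    # expected fatalities at Tesla's quoted per-mile rates,
    # then converted to per-vehicle rates
    print(1_950_000_000_000 / 60_000_000)  # 32500.0 fatalities worldwide
    print(32_500 / 1_200_000_000)          # ~2.71e-05 -> 0.0000271
    print(411_125_000_000 / 94_000_000)    # ~4373.7 fatalities U.S.
    print(4_373 / 253_000_000)             # ~1.73e-05 -> 0.0000173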

Thus, if we were to compare the rates of ‘autopilot’ fatalities, we’d get:

              Tesla AP      All Cars
Worldwide     0.0000125     0.0000271
     U.S.     0.0000159     0.0000173
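
Putting it all together, here is a minimal Python sketch that reproduces the table and the percentage reductions discussed below (all inputs are Tesla’s published figures plus the fleet sizes cited earlier):

    # compare per-vehicle fatality rates: Tesla 'autopilot' vs. all cars
    ap_rate_world  = 1 / 80_000   # one fatality across the worldwide AP fleet
    ap_rate_us     = 1 / 63_000   # one fatality across the U.S. AP fleet
    all_rate_world = (1_625 * 1_200_000_000 / 60_000_000) / 1_200_000_000
    all_rate_us    = (1_625 * 253_000_000 / 94_000_000) / 253_000_000
    print(1 - ap_rate_world / all_rate_world)  # ~0.54 -> ~54% reduction
    print(1 - ap_rate_us / all_rate_us)        # ~0.08 -> ~8% reduction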

Indeed, worldwide the rate of reduction is closer to 54%, but here in the U.S. the difference is closer to 8%, despite the vast majority of Model S vehicles being in the U.S., and thus potentially representing a far greater share of the total miles driven (roughly 79% of all ‘autopilot’ miles).

The difference may be attributable to a great many things. Certainly the ratio of Tesla’s cars to total cars worldwide is far smaller than the corresponding ratio in the U.S.

But it may also be that U.S. safety laws, traffic laws and requirements on automobiles manufactured for the U.S. market do a better job of providing safety for drivers.

It may be that more readily available trauma care reduces the incidence of fatalities relative to the total number of accidents.

The bigger point is that statistics are very easily manipulated and repurposed. We should always be careful when taking a statistic at face value, and ideally we should do our best to form our own opinions, given the available information.

In this case, it is clear why Tesla cites the worldwide figures for PR purposes, but that doesn’t make that reading any more, or less, valid than other readings of the numbers.

v0.2