Why not? It's how everything else works. If you buy a house and then install your own electrical wiring which causes your house to burn down, you have no claim against the construction company and nobody blames them for it because it was your fault, not theirs.
This recent attitude of "people won't understand, therefore corporations have to be paternalistic" is patronizing and factually incorrect. People do actually understand. The worst outcome is that technology-ignorant bureaucrats require companies to clamp down on modding. But having companies do that to begin with is no improvement.
The basis of this seems to be the conceit that manufacturers can actually control their products after they've been sold. It can be true for a time, in the sense that it takes people that long to figure out how to break back into the things they own, but it happens. If you can jailbreak an iPhone then you can jailbreak your car.
Which means that we have a choice. The first option is to accept the inevitable and support it: expect people to make changes and to take responsibility for their own changes, and facilitate that. The second is to try to lock everything down, so that the people making modifications have no access to documentation and have to run out-of-date software, because the old version with the jailbreak vulnerability is the same old version with the anti-lock brake glitch, which makes it much more likely that people will die. What does that do for your brand?
I think you're really overestimating both the level of technical comprehension generally, and people's attitude towards "due diligence". People understand what a house is and what wires are. They are not, as of yet, that clear on how neural nets, sensor systems, and dynamically updated software come together to make a self-driving vehicle. If modders cause problems, especially with something as mobile and potentially destructive as a car, the general reaction could easily be "this is mostly the fault of the self-driving Tesla, and the fact that they didn't do due diligence in preventing misuse." It is not a very reasonable reaction, but it is a likely one - and one that could have serious consequences for Tesla as a company.
>The worst outcome is that technology-ignorant bureaucrats require companies to clamp down on modding. But having companies do that to begin with is no improvement.
It's not the same. A bureaucrat's rules are mandatory for everybody. Tesla has competitors who may choose a different route.
> The basis of this seems to be the conceit that manufacturers can actually control their products after they've been sold. It can be true for a time, in the sense that it takes people that long to figure out how to break back into the things they own, but it happens. If you can jailbreak an iPhone then you can jailbreak your car.
A self-driving Tesla is not an iPhone, because an iPhone is unlikely to run somebody over or block a freeway. Risks are relevant. Self-driving systems are also dynamically updated and dependent on networked information. It already is a service, not a fixed product. You don't own a self-driving system (at least Tesla's idea of it) any more than you own an Amazon Web Services server rack by having something hosted on it.
> Which means that we have a choice. The first option [....] which makes it much more likely that people will die. What does that do for your brand?
Maybe in the future, when such things are more familiar, Tesla will feel confident enough to open their products up to modders. But right now? When even the idea of self-driving cars is a challenging sell, and they're being criticized for it not being secure and consistent enough? No, not now.
They don't have to be, for the same reason that you don't have to understand Maxwell's equations to understand that it's possible to cause a fire with bad wiring. The concept of "the user modifications caused the problem" is not beyond the public understanding. Anyone who cares about more details than that can take the time to understand them but those are not the people you have to worry about.
> It's not the same. A bureaucrat's rules are mandatory for everybody. Tesla has competitors who may choose a different route.
Your arguments either apply or they don't. Their competitors are not in a different situation and there are not very many of them. They'll likely all make the same choice.
> A self-driving Tesla is not an iPhone, because an iPhone is unlikely to run somebody over or block a freeway. Risks are relevant.
That's the point. You don't want a black box in control of life-or-death situations. You want something the user can understand when they're capable of it, so that the 2% of users who are capable can identify and fix problems that will affect the other 98% before they turn into deaths and liability.
Opening up the car could reduce their liability, because it gives them somewhere else to point the finger sometimes, and a large population of people willing to find and fix problems. If they control the car entirely then all the liability is always on them and no one can help them when they make a mistake.
> Maybe in the future, when such things are more familiar, Tesla will feel confident enough to open their products up to modders. But right now? When even the idea of self-driving cars is a challenging sell, and they're being criticized for it not being secure and consistent enough? No, not now.
Right now is when the precedent and expectations are being set.
>They don't have to be, for the same reason that you don't have to understand Maxwell's equations to understand that it's possible to cause a fire with bad wiring. The concept of "the user modifications caused the problem" is not beyond the public understanding. Anyone who cares about more details than that can take the time to understand them but those are not the people you have to worry about.
You try convincing the public of that. Every time a Tesla has caught on fire, it's been reported on in numerous news outlets. Despite the fact that thousands of gasoline cars catch on fire every year, people are worried about much rarer, milder lithium-ion battery fires. It's a new thing, so it gets far more scrutiny. People aren't always reasonable, and regulations are often made based on perceptions of danger, not actual risks. I think Tesla is not being unreasonable in expecting that "well, it's the owner's fault for messing with it!" will come off as mealy-mouthed blame deflection. Even if they're in the right.
>Your arguments either apply or they don't. Their competitors are not in a different situation. They'll all make the same choice.
So whatever choice Tesla makes, all their competitors will also make? Or, in the case of Uber's in-development car, probably already have made? I think you're according them more influence than they really have. The point stands that whatever Tesla's choice on this matter, it does not impose a legal obligation on their competitors to do the same. So it is not "no improvement".
>That's the point. You don't want a black box in control of life-or-death situations. You want something the user can understand when they're capable of it, so that the 2% of users who are capable can identify and fix problems that will affect the other 98% before they turn into deaths and liability. Opening up the car could reduce their liability, because it gives them somewhere else to point the finger sometimes, and a large population of people willing to find and fix problems. If they control the car entirely then all the liability is always on them and no one can help them when they make a mistake.
But that's the thing - they don't want to be pointing the finger anywhere because that's almost always bad press for them. Throwing blame just looks bad. While helpful users are certainly an asset, it's mostly as a resource for identifying problems, not attempting to solve them on their own cars. Trying to aggregate, understand, vet, and integrate users' solutions is a huge and difficult undertaking, and not something that I think they want to attempt while trying to roll out a reasonably consistent product.
The reply link doesn't appear right away on new posts. If you click the post time next to the poster's name, it will open the post by itself, which has the reply link.
> Every time a Tesla has caught on fire, it's been reported on in numerous news outlets.
And in every one of those stories there are ten people asking why this is a story because battery fires are less common than gasoline fires. System working as intended.
> So whatever choice Tesla makes, all their competitors will also make?
They don't have 10,000 competitors. It isn't a stretch to imagine that the small handful all make the same choice.
But suppose they don't. You can't have it both ways. Either someone allows modifications and whatever hypothetical bad reactionary laws that causes will come to pass, or nobody does and the laws are never passed, but the result (locked-down modding) is the same.
> Or, in the case of Uber's in-development car, probably already have made?
Uber is going to make self-driving cars and then sell them to the public so that members of the public can lease them back to Uber and Lyft?
> But that's the thing - they don't want to be pointing the finger anywhere because that's almost always bad press for them. Throwing blame just looks bad.
You're assuming that openness results in more net problems rather than fewer, which is wrong. People fix more problems than they create, because people don't like having problems. You open things up and you get a lot of people fixing trouble and a smaller number of people causing trouble. The net result is less overall trouble and the ability to blame the troublemakers for the trouble they cause. None of that hurts you.
> While helpful users are certainly an asset, it's mostly as a resource for identifying problems, not attempting to solve them on their own cars.
Well, first of all, that's false: having them fix the problem saves you the trouble of doing it. But even identifying problems doesn't work without understanding. 99% of the fix is thoroughly understanding the problem.
If the system is a black box then the user doesn't know how it's supposed to work. So you simultaneously get spammed with false positive bug reports from users who think there could be a problem but aren't allowed to understand whether there really is, and lose true positive reports because weird behavior gets written off as "it's a black box" and the last three things they reported were false positives.
It's still FUD that they could rightfully perceive as hurting them. They're taking enough risks as it is, I can imagine that they don't need more criticism of their safety.
> But suppose they don't. You can't have it both ways. Either someone allows modifications and whatever hypothetical bad reactionary laws that causes will come to pass, or nobody does and the laws are never passed, but the result (locked-down modding) is the same.
I'm not just saying it's about the laws (Tesla has had a hard enough time tangling with these, and has been specifically singled out by regulators on occasion, such as in Germany right now), but also about their brand. They're sensitive to criticism, and, therefore, to the possible sources of criticism - whether justified or not.
> Uber is going to make self-driving cars and then sell them to the public so that members of the public can lease them back to Uber and Lyft?
I'm saying that Uber is developing self-driving cars (or rather, the self-driving systems) which they appear intent on using for their own fleet, rather than selling as a kit or a whole car, though they could have done that. Tesla has been selling cars already, but it's going to be selling a new kind of car with certain Uber-like properties. There are a range of ownership options, and this is not an inherently illegitimate one.
> Well, first of all, that's false: having them fix the problem saves you the trouble of doing it. But even identifying problems doesn't work without understanding. 99% of the fix is thoroughly understanding the problem. If the system is a black box then the user doesn't know how it's supposed to work. So you simultaneously get spammed with false positive bug reports from users who think there could be a problem but aren't allowed to understand whether there really is, and lose true positive reports because weird behavior gets written off as "it's a black box" and the last three things they reported were false positives.
Sure, understanding the problem often helps more than simply reporting "there is a problem". But they could allow people to inspect the code and diagnostics without allowing any changes, the first does not necessarily imply the second. And if you really allow people to get into the code deeply enough to make the kinds of changes that fix problems in the system itself (rather than some sort of API for strictly limited input) then you've potentially given a large amount of your system to your competitors.