I'd argue it >is< a corporate problem, and the article we are looking at shows exactly why. There should be consequences for running a company in this manner, and there are not. The people who made this decision did it because they were protected from the damage they did.
No, that assumes people are rational actors, and they are not. Preying on human psychology doesn't absolve you of guilt; the companies are the problem, not their victims for not leaving.
It's similar to a company selling defective products or contaminating a city's water supply. The market response is too late to deal with those types of problems, and undervalues individual lives.
Yup, and it's purely reactive to problems that can be easily avoided by regulation, food safety for example. If it were up to the market, people would be dropping like flies, because safety doesn't tend to increase short-term profits as well as corner-cutting does.
I don't think you even need to give up the idea that users are rational actors--there are plenty of reasons why a rational actor would prioritize another factor over security. For example, many people got Yahoo email addresses a long time ago and built up a contact list of people who only know their Yahoo email. A rational actor might value keeping in contact with those people over their privacy. That doesn't mean it's okay to expose that person's data.
The consequences should be that the company loses its ability to run a business. You've arbitrarily decided that the only acceptable mechanism for this happening is users choosing a different company. There are a whole host of reasons why that doesn't work, and simply shifting the blame onto users for not making it work doesn't solve the problem.
> The consequences should be that the company loses its ability to run a business.
Or gains the ability to run it properly.
> the only acceptable mechanism for this happening is users choosing a different company.
I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
> shifting the blame onto users for not making it work
Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design, or simple UI, both among the users we live with and the companies we work in.
And one fundamental problem of security for the masses remains unsolved: how a user can tell whether a product they use is secure without being a security expert.
> I didn't state it should be the only mechanism. There could be others. Those class action lawsuits mentioned in the article prove there are some. But the primary mechanism is users' responsible choice.
That's simply not realistic on technical issues. Users can't take responsibility for choices they can't be reasonably expected to understand.
> Actually I think the blame is on us, techies. We should create a culture where security matters as much as performance, pleasant design, or simple UI, both among the users we live with and the companies we work in
If you believe that, in your own words, users' responsible choice should be the primary mechanism for enforcing this, you've rejected any effective means of achieving the above trite and obvious truisms.
In fact, security should matter to us a lot more than performance, pleasant design, or simple UI, because unlike those, security can be a matter of life and death. Which is why I don't want to leave it up to users.
> And one fundamental problem of security for the masses remains unsolved: how a user can tell whether a product they use is secure without being a security expert.
Which raises the question of why you want to leave security regulation up to users moving away from the product.